\section{The Lie group topology on the gauge group} \label{sect:topologisationOfTheGaugeGroup} We shall mostly identify the gauge group with the space of \mbox{$K$}-equivariant smooth mappings \mbox{$C^{\infty}(P,K)^{K}$}, where \mbox{$K$} acts on itself by conjugation from the right. This identification allows us to topologise the gauge group very similarly to mapping groups \mbox{$C^{\infty}(M,K)$} for compact \mbox{$M$}. Since the compactness of \mbox{$M$} is the crucial point in the topologisation of mapping groups, we cannot take this approach directly, because our structure groups \mbox{$K$} need not be compact and may even be infinite-dimensional. \begin{definition}\label{def:bundleAutomorphismsAndGaugeTransformations} If \mbox{$K$} is a topological group and \mbox{$\ensuremath{\mathcal{P}}=\pfb$} is a continuous principal \mbox{$K$}-bundle, then we denote by \[ \ensuremath{\operatorname{Aut}_{c}}(\ensuremath{\mathcal{P}}):=\{f\in\ensuremath{\operatorname{Homeo}} (P):\rho_{k}\op{\circ}f=f\op{\circ}\rho_{k}\ensuremath{\;\text{\textup{ for all }}\;} k\in K\} \] the group of continuous \emph{bundle automorphisms} and by \[ \ensuremath{\operatorname{Gau}_{c}}(\ensuremath{\mathcal{P}}):=\{f\in \ensuremath{\operatorname{Aut}_{c}}(\ensuremath{\mathcal{P}}):\pi \op{\circ}f=\pi \} \] the group of continuous \emph{vertical} bundle automorphisms or \emph{continuous gauge group}. If, in addition, \mbox{$K$} is a Lie group, \mbox{$M$} is a manifold with corners and \mbox{$\ensuremath{\mathcal{P}}$} is a smooth principal bundle, then we denote by \[ \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}):=\{f\in\ensuremath{\operatorname{Diff}} (P):\rho_{k}\op{\circ}f=f\op{\circ}\rho_{k}\ensuremath{\;\text{\textup{ for all }}\;} k\in K\} \] the group of \emph{smooth bundle automorphisms} (or briefly \emph{bundle automorphisms}). 
Then each \mbox{$F\in \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} induces an element \mbox{$F_{M}\in \ensuremath{\operatorname{Diff}} (M)$}, given by \mbox{$F_{M}(p\cdot K):=F(p)\cdot K$} if we identify \mbox{$M$} with \mbox{$P/K$}. This yields a homomorphism \mbox{$Q\ensuremath{\nobreak:\nobreak} \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\to \ensuremath{\operatorname{Diff}}(M)$}, \mbox{$F\mapsto F_{M}$}, and we denote by \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} the kernel of \mbox{$Q$} and by \mbox{$\ensuremath{\operatorname{Diff}} (M)_{\ensuremath{\mathcal{P}}}$} the image of \mbox{$Q$}. Thus \[ \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})=\{f\in \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}): \pi \op{\circ}f=\pi\}, \] which we call the group of (smooth) vertical bundle automorphisms or briefly the \emph{gauge group} of \mbox{$\ensuremath{\mathcal{P}}$}. \end{definition} \begin{remark}\label{rem:gaugeGroupIsIsomorphicToEquivariantMappings} If \mbox{$\ensuremath{\mathcal{P}}$} is a smooth principal \mbox{$K$}-bundle and if we denote by \[ C^{\infty}(P,K)^{K}:=\{\gamma \in C^{\infty}(P,K): \gamma (p\cdot k)= k^{-1}\cdot \gamma (p)\cdot k\ensuremath{\;\text{\textup{ for all }}\;} p\in P,k\in K\} \] the group of \mbox{$K$}-equivariant smooth maps from \mbox{$P$} to \mbox{$K$}, then the map \[ C^{\infty}(P,K)^{K}\ni f\mapsto \big(p\mapsto p\cdot f(p)\big)\in\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}}) \] is an isomorphism of groups, and we will mostly identify \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} with \mbox{$C^{\infty}(P,K)^{K}$} via this map. \end{remark} The algebraic counterpart of the gauge group is the gauge algebra. This will serve as the modelling space for the gauge group later on. 
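For completeness, we record why the map from Remark \ref{rem:gaugeGroupIsIsomorphicToEquivariantMappings} is multiplicative. Writing \mbox{$F_{\gamma}(p):=p\cdot \gamma (p)$} for the vertical automorphism associated with \mbox{$\gamma \in C^{\infty}(P,K)^{K}$}, the equivariance of \mbox{$\gamma$} yields for \mbox{$\gamma ,\gamma '\in C^{\infty}(P,K)^{K}$}
\[
(F_{\gamma}\op{\circ}F_{\gamma '})(p)=F_{\gamma}\big(p\cdot \gamma '(p)\big)
=p\cdot \gamma '(p)\cdot \gamma \big(p\cdot \gamma '(p)\big)
=p\cdot \gamma '(p)\cdot \gamma '(p)^{-1}\cdot \gamma (p)\cdot \gamma '(p)
=p\cdot \gamma (p)\cdot \gamma '(p)
=F_{\gamma \cdot \gamma '}(p),
\]
so \mbox{$\gamma \mapsto F_{\gamma}$} is a homomorphism with respect to the pointwise product on \mbox{$C^{\infty}(P,K)^{K}$}.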
\begin{definition} If \mbox{$\ensuremath{\mathcal{P}}$} is a smooth principal \mbox{$K$}-bundle, then the space \[ \ensuremath{\operatorname{\mathfrak{gau}}}(\ensuremath{\mathcal{P}}):=C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}:=\{\eta \in C^{\infty}(P,\ensuremath{\mathfrak{k}} ): \eta (p\cdot k)=\ensuremath{\operatorname{Ad}}(k^{-1}).\eta (p)\ensuremath{\;\text{\textup{ for all }}\;} p\in P,k\in K \} \] is called the \emph{gauge algebra} of \mbox{$\ensuremath{\mathcal{P}}$}. We endow it with the subspace topology from \mbox{$C^{\infty}(P,\ensuremath{\mathfrak{k}})$} and with the pointwise Lie bracket. \end{definition} \begin{proposition}\label{prop:isomorphismOfTheGaugeAlgebra} Let \mbox{$\ensuremath{\mathcal{P}}=\pfb$} be a smooth principal \mbox{$K$}-bundle over the finite-di\-men\-sion\-al manifold with corners \mbox{$M$}. If \mbox{$\cl{\ensuremath{\mathcal{V}}} :=(\cl{V}_{i},\sigma_{i})_{i\in I}$} is a smooth closed trivialising system of \mbox{$\ensuremath{\mathcal{P}}$} with transition functions \mbox{$k_{ij}\ensuremath{\nobreak:\nobreak} \cl{V}_{i}\cap \cl{V}_{j}\to K$}, then we denote \[ \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}):=\left\{(\eta_{i})_{i\in I}\in\prod_{i\in I} C^{\infty}(\cl{V}_{i},\ensuremath{\mathfrak{k}} ): \eta_{i}(m)=\ensuremath{\operatorname{Ad}} (k_{ij}(m)).\eta_{j}(m)\ensuremath{\;\text{\textup{ for all }}\;} m\in \cl{V}_{i}\cap \cl{V}_{j} \right\}. 
\] If \mbox{$\ensuremath{\mathcal{V}}$} denotes the smooth open trivialising system underlying \mbox{$\cl{\ensuremath{\mathcal{V}}}$}, then we set \[ \ensuremath{\mathfrak{g}}_{\ensuremath{\mathcal{V}}}(\ensuremath{\mathcal{P}}):=\left\{(\eta_{i})_{i\in I}\in\prod_{i\in I} C^{\infty}(V_{i},\ensuremath{\mathfrak{k}} ): \eta_{i}(m)=\ensuremath{\operatorname{Ad}}(k_{ij}(m)).\eta_{j}(m)\ensuremath{\;\text{\textup{ for all }}\;} m\in V_{i}\cap V_{j}\right\}, \] and we have isomorphisms of topological vector spaces \[ \ensuremath{\operatorname{\mathfrak{gau}}}(\ensuremath{\mathcal{P}})= C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K} \cong S(\ensuremath{\operatorname{Ad}}(\ensuremath{\mathcal{P}}))\cong \ensuremath{\mathfrak{g}}_{\ensuremath{\mathcal{V}}}(\ensuremath{\mathcal{P}})\cong\ensuremath{\mathfrak{g}}_{\ol{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}). \] Furthermore, each of these spaces is a locally convex Lie algebra in a natural way and the isomorphisms are isomorphisms of topological Lie algebras. \end{proposition} \begin{prf} The last two isomorphisms are provided by Proposition \ref{prop:gluingForVectorBundles_CLOSED_Version} and Corollary \ref{cor:gluingForVectorBundles_OPEN_Version}, so we show \mbox{$C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}\cong \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}. For each \mbox{$\eta\in C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}$} the element \mbox{$(\eta_{i})_{i\in I}$} with \mbox{$\eta_{i}=\eta \circ \sigma_{i}$} defines an element of \mbox{$\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} and the map \[ \psi \ensuremath{\nobreak:\nobreak} C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}\to \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}),\quad \eta\mapsto(\eta_{i})_{i\in I} \] is continuous. 
In fact, \mbox{$\sigma_{i}(m)=\sigma_{j}(m)\cdot k_{ji}(m)$} for \mbox{$m\in \cl{V}_{i}\cap \cl{V}_{j}$} implies \[ \eta_{i}(m)=\eta (\sigma_{i}(m))=\eta (\sigma_{j}(m)\cdot k_{ji}(m)) =\ensuremath{\operatorname{Ad}} (k_{ji}(m))^{-1}.\eta(\sigma_{j}(m))=\ensuremath{\operatorname{Ad}}(k_{ij}(m)).\eta_{j}(m) \] and thus \mbox{$(\eta_{i})_{i\in I}\in\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}. Recall that if \mbox{$X$} is a topological space, then a map \mbox{$f\ensuremath{\nobreak:\nobreak} X\to C^{\infty}(\cl{V}_{i},\ensuremath{\mathfrak{k}})$} is continuous if and only if \mbox{$x\mapsto d^{n}f(x)$} is continuous for each \mbox{$n\in \ensuremath{\mathds{N}}_{0}$} (Remark \ref{rem:alternativeDescriptionOfTopologyOnSpaceOfSmoothMappings}). This implies that \mbox{$\psi$} is continuous, because \mbox{$d^{n}\eta_{i}=d^{n}\eta\op{\circ}T^{n}\sigma_{i}$} and pull-backs along continuous maps are continuous. On the other hand, if \mbox{$k_{i}\ensuremath{\nobreak:\nobreak} \pi^{-1}(\cl{V}_{i})\to K$} is given by \mbox{$p=\sigma_{i}(\pi (p))\cdot k_{i}(p)$} and if \mbox{$(\eta_{i})_{i\in I}\in \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}, then the map \[ \eta:P\to \ensuremath{\mathfrak{k}},\quad p\mapsto \ensuremath{\operatorname{Ad}}\left(k_{i}(p)\right)^{-1}.\eta_{i}\left(\pi (p)\right) \;\text{ if }\; \pi (p)\in \cl{V}_{i} \] is well-defined, smooth and \mbox{$K$}-equivariant. Furthermore, \mbox{$(\eta_{i})_{i\in I}\mapsto \eta$} is an inverse of \mbox{$\psi$}, and it thus remains to check that it is continuous, i.e., that \[ \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\ni(\eta_{i})_{i\in I}\mapsto d^{n}\eta \in C(T^{n}P,\ensuremath{\mathfrak{k}}) \] is continuous for all \mbox{$n\in \ensuremath{\mathds{N}}_{0}$}. 
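For completeness, we note that the \mbox{$K$}-equivariance of \mbox{$\eta$} is a direct consequence of the relation \mbox{$k_{i}(p\cdot k)=k_{i}(p)\cdot k$}:
\[
\eta (p\cdot k)=\ensuremath{\operatorname{Ad}}\big(k_{i}(p)\cdot k\big)^{-1}.\eta_{i}\big(\pi (p)\big)
=\ensuremath{\operatorname{Ad}}(k^{-1}).\ensuremath{\operatorname{Ad}}\big(k_{i}(p)\big)^{-1}.\eta_{i}\big(\pi (p)\big)
=\ensuremath{\operatorname{Ad}}(k^{-1}).\eta (p).
\]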
If \mbox{$C\ensuremath{\nobreak\subseteq\nobreak} T^{n}P$} is compact, then \mbox{$(T^{n}\pi) (C)\ensuremath{\nobreak\subseteq\nobreak} T^{n}M$} is compact and hence it is covered by finitely many \mbox{$T^{n}V_{i_{1}},\dots ,T^{n}V_{i_{m}}$} and thus \mbox{$\left(T^{n}\left(\pi^{-1}(\ol{V_{i}})\right)\right)_{i=i_{1},\dots ,i_{m}}$} is a finite closed cover of \mbox{$C\ensuremath{\nobreak\subseteq\nobreak} T^{n}P$}. Hence it suffices to show that the map \[ \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\ni(\eta_{i})_{i\in I}\mapsto T^{n}(\left.\eta \right|_{\pi^{-1}(\cl{V}_{i})}) \in C(T^{n}\pi^{-1}(\cl{V}_{i}),\ensuremath{\mathfrak{k}} ) \] is continuous for \mbox{$n\in \ensuremath{\mathds{N}}_{0}$} and \mbox{$i\in I$}, and we may thus assume w.l.o.g.\ that \mbox{$\ensuremath{\mathcal{P}}$} is trivial. In the trivial case the inverse of \mbox{$\psi$} is given by \mbox{$\varphi \ensuremath{\nobreak:\nobreak} C^{\infty}(M,\ensuremath{\mathfrak{k}})\to C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}$}, \mbox{$\varphi (\eta ')=\ensuremath{\operatorname{Ad}}(k^{-1}).(\eta '\op{\circ}\pi )$}, if \mbox{$p\mapsto (\pi (p),k(p))$} defines a global trivialisation. We shall make the case \mbox{$n=1$} explicit. The other cases can be treated similarly; since the formulae get quite long, we omit them here. Given any open zero neighbourhood in \mbox{$C(TP,\ensuremath{\mathfrak{k}} )$}, which we may assume to be \mbox{$\lfloor C,V\rfloor$} with \mbox{$C\ensuremath{\nobreak\subseteq\nobreak} TP$} compact and \mbox{$0\in V\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\mathfrak{k}}$} open, we have to construct an open zero neighbourhood \mbox{$O$} in \mbox{$C^{\infty}(M,\ensuremath{\mathfrak{k}})$} such that \mbox{$\varphi(O)\ensuremath{\nobreak\subseteq\nobreak} \lfloor C,V\rfloor$}. For \mbox{$\eta '\in C^{\infty}(M,\ensuremath{\mathfrak{k}})$} and \mbox{$X_{p}\in C$} we get with Lemma \ref{lem:productrule} \[ d(\varphi(\eta ' ))(X_{p})= \ensuremath{\operatorname{Ad}}(k^{-1}(p)).d\eta'(T\pi(X_{p}))- [\delta^{l}(k)(X_{p}),\ensuremath{\operatorname{Ad}}(k^{-1}(p)).\eta' (\pi(p)) ]. 
\] Since \mbox{$\delta^{l}(k)(C)\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\mathfrak{k}}$} is compact, there exists an open zero neighbourhood \mbox{$V'\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\mathfrak{k}}$} such that \[ \ensuremath{\operatorname{Ad}}(k^{-1}(p)).V'+[\delta^{l}(k)(X_{p}),\ensuremath{\operatorname{Ad}}(k^{-1}(p)).V']\ensuremath{\nobreak\subseteq\nobreak} V \] for each \mbox{$X_{p}\in C$}. Since \mbox{$T\pi :TP\to TM$} is continuous, \mbox{$T\pi (C)$} is compact and we may set \mbox{$O=\lfloor T\pi (C),V'\rfloor$}. That \mbox{$\ensuremath{\mathfrak{g}}_{\ensuremath{\mathcal{V}}}(\ensuremath{\mathcal{P}})$} and \mbox{$\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} are locally convex Lie algebras follows because they are closed subalgebras of \mbox{$\prod_{i\in I}C^{\infty}(V_{i},\ensuremath{\mathfrak{k}})$} and \mbox{$\prod_{i\in I}C^{\infty}(\cl{V}_{i},\ensuremath{\mathfrak{k}} )$}. Since the isomorphisms \[ C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K} \cong S(\ensuremath{\operatorname{Ad}}(\ensuremath{\mathcal{P}}))\cong \ensuremath{\mathfrak{g}}_{\ensuremath{\mathcal{V}}}(\ensuremath{\mathcal{P}})\cong\ensuremath{\mathfrak{g}}_{\ol{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}) \] are all isomorphisms of abstract Lie algebras and isomorphisms of locally convex vector spaces, it follows that they are isomorphisms of topological Lie algebras. 
\end{prf} \begin{definition}\label{def:localGaugeGroup} If \mbox{$\ensuremath{\mathcal{P}}$} is a smooth principal \mbox{$K$}-bundle with compact base and \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\cl{V}_{i},\sigma_{i})_{i=1,\ldots,n}$} is a smooth closed trivialising system with corresponding transition functions \mbox{$k_{ij}\ensuremath{\nobreak:\nobreak}\cl{V}_{i}\cap \cl{V}_{j}\to K$}, then we denote \[ G_{\ol{\ensuremath{\mathcal{V}} }}(\ensuremath{\mathcal{P}}):=\left\{(\gamma_{i})_{i=1,\ldots,n}\in \prod_{i=1}^{n}C^{\infty}(\ol{V_{i}},K): \gamma_{i}(m)=k_{ij}(m)\gamma_{j}(m)k_{ji}(m) \ensuremath{\;\text{\textup{ for all }}\;} m\in \cl{V}_{i}\cap \cl{V}_{j}\right\} \] and turn it into a group with respect to pointwise group operations. \end{definition} \begin{remark}\label{rem:isoToTheGaugeGroupInLocalCoordinates} In the situation of Definition \ref{def:localGaugeGroup}, the map \begin{align} \label{eqn:localGaugeGroupIsAbstractlyIsomorphisToEquivariantMappings} \psi \ensuremath{\nobreak:\nobreak} G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\to C^{\infty}(P,K)^{K},\quad \psi ((\gamma_{i})_{i=1,\ldots,n})(p)= k^{-1}_{\sigma_{i}}(p)\cdot \gamma_{i}(\pi(p))\cdot k_{\sigma_{i}}(p)\;\text{ if }\; \pi (p)\in \cl{V}_{i} \end{align} is an isomorphism of abstract groups, where the map on the right hand side is well-defined because \mbox{$k_{\sigma_{i}}(p)=k_{ij}(\pi (p))\cdot k_{\sigma_{j}}(p)$} and thus \begin{multline*} k_{\sigma_{i}}^{-1}(p)\cdot \gamma_{i}(\pi (p))\cdot k_{\sigma_{i}}(p)= k_{\sigma_{j}}(p)^{-1}\cdot \underbrace{k_{ji}(\pi (p))\cdot \gamma_{i}(\pi (p))\cdot k_{ij}(\pi (p))}% _{\gamma_{j}(\pi (p))}\cdot k_{\sigma_{j}}(p)\\ =k_{\sigma_{j}}(p)^{-1}\cdot \gamma_{j}(\pi (p))\cdot k_{\sigma_{j}}(p). \end{multline*} In particular, this implies that \mbox{$\psi ((\gamma_{i})_{i=1,\ldots,n})$} is smooth. 
Since for \mbox{$m\in \ol{V}_{i}$} the evaluation map \mbox{$\ensuremath{\operatorname{ev}}_{m}:C^{\infty}(\ol{V}_{i},K)\to K$} is continuous, \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} is a closed subgroup of \mbox{$\prod_{i=1}^{n}C^{\infty}(\ol{V}_{i},K)$}. \end{remark} Since an infinite-dimensional Lie group may possess closed subgroups which are not Lie groups (cf.\ \cite[Exercise III.8.2]{bourbakiLie}), the preceding remark does not automatically yield a Lie group structure on \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}. However, in many situations it will turn out that \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} has a natural Lie group structure. The following definition encodes the requirement ensuring a Lie group structure on \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} that is induced by the natural Lie group structure on \mbox{$\prod_{i=1}^{n}C^{\infty}(\cl{V}_{i},K)$}. Since quite different properties of \mbox{$\ensuremath{\mathcal{P}}$} will ensure this requirement, it seems worthwhile to extract it as a condition on \mbox{$\ensuremath{\mathcal{P}}$}. The name for this requirement will be justified in Corollary \ref{cor:propertySUB}. 
\begin{definition}\label{def:propertySUB} If \mbox{$\ensuremath{\mathcal{P}}$} is a smooth principal \mbox{$K$}-bundle with compact base and \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\cl{V}_{i},\sigma_{i})_{i=1,\ldots,n}$} is a smooth closed trivialising system, then we say that \mbox{$\ensuremath{\mathcal{P}}$} has the \emph{property SUB} with respect to \mbox{$\cl{\ensuremath{\mathcal{V}}}$} if there exists a convex centred chart \mbox{$\varphi\ensuremath{\nobreak:\nobreak} W\to W'$} of \mbox{$K$} such that \[ \varphi_{*}\ensuremath{\nobreak:\nobreak} G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty}(\cl{V}_{i},W) \to \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty }(\cl{V}_{i},W'), \quad (\gamma_{i})_{i=1,\ldots,n}\mapsto (\varphi \op{\circ}\gamma_{i})_{i=1,\ldots,n} \] is bijective. We say that \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB if \mbox{$\ensuremath{\mathcal{P}}$} has this property with respect to some trivialising system. \end{definition} It should be emphasised that in all relevant cases known to the author the bundles have the property SUB, and it is still unclear whether there are bundles which do not have this property (cf.\ Lemma \ref{lem:propertySUB} and Remark \ref{rem:propertySUB}). This property now ensures the existence of a natural Lie group structure on \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}. \begin{proposition}\label{prop:gaugeGroupInLocalCoordinatesIsLieGroup} \textup{\textbf{a)}} Let \mbox{$\ensuremath{\mathcal{P}}$} be a smooth principal \mbox{$K$}-bundle with compact base \mbox{$M$}, which has the property SUB with respect to the smooth closed trivialising system \mbox{$\cl{\ensuremath{\mathcal{V}}}$}. Then \mbox{$\varphi_{*}$} induces a smooth manifold structure on \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty}(\cl{V}_{i},W)$}. 
Furthermore, the conditions \mbox{$i)-iii)$} of Proposition \ref{prop:localDescriptionsOfLieGroups} are satisfied, so that \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} can be turned into a Lie group modelled on \mbox{$\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}. \textup{\textbf{b)}} In the setting of \textup{\textbf{a)}}, the map \mbox{$\psi \ensuremath{\nobreak:\nobreak} G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\to C^{\infty}(P,K)^{K}$} is an isomorphism of topological groups if \mbox{$C^{\infty}(P,K)^{K}$} is endowed with the subspace topology from \mbox{$C^{\infty}(P,K)$}. \textup{\textbf{c)}} In the setting of \textup{\textbf{a)}}, we have \mbox{$\op{L}(G_{\cl{\ensuremath{\mathcal{V}} }}(\ensuremath{\mathcal{P}}))\cong\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}} }}(\ensuremath{\mathcal{P}})$}. \end{proposition} \begin{prf} \textbf{a)} Set \mbox{$U:=G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty}(\cl{V}_{i},W)$}. Since \mbox{$\varphi_{*}$} is bijective by assumption and \mbox{$\varphi_{*}(U)$} is open in \mbox{$\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}, it induces a smooth manifold structure on \mbox{$U$}. Let \mbox{$W_{0}\ensuremath{\nobreak\subseteq\nobreak} W$} be an open unit neighbourhood with \mbox{$W_{0}\cdot W_{0}\ensuremath{\nobreak\subseteq\nobreak} W$} and \mbox{$W_{0}^{-1}=W_{0}$}. Then \mbox{$U_{0}:=G_{\cl{\ensuremath{\mathcal{V}} }}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty}(\ol{V_{i}},W_{0})$} is an open unit neighbourhood in \mbox{$U$} with \mbox{$U_{0}\cdot U_{0}\ensuremath{\nobreak\subseteq\nobreak} U$} and \mbox{$U_{0}=U_{0}^{-1}$}. 
Since each \mbox{$C^{\infty}(\ol{V_{i}},K)$} is a topological group, there exist for each \mbox{$(\gamma_{i})_{i=1,\ldots,n}$} open unit neighbourhoods \mbox{$U_{i}\ensuremath{\nobreak\subseteq\nobreak} C^{\infty}(\ol{V_{i}},K)$} with \mbox{$\gamma_{i}\cdot U_{i}\cdot \gamma_{i}^{-1}\ensuremath{\nobreak\subseteq\nobreak} C^{\infty}(\ol{V_{i}},W)$}. Since \mbox{$C^{\infty}(\ol{V_{i}},W_{0})$} is open in \mbox{$C^{\infty}(\ol{V_{i}},K)$}, so is \mbox{$U'_{i}:=U_{i}\cap C^{\infty}(\ol{V_{i}},W_{0})$}. Hence \[ (\gamma_{i})_{i=1,\ldots,n} \cdot \left(G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap(U'_{1}\times \dots \times U'_{n}) \right)\cdot (\gamma_{i}^{-1})_{i=1,\ldots,n}\ensuremath{\nobreak\subseteq\nobreak} U \] and conditions \mbox{$i)-iii)$} of Proposition \ref{prop:localDescriptionsOfLieGroups} are satisfied, where the required smoothness properties are consequences of the smoothness of push forwards of mappings between function spaces (cf.\ \cite[Proposition 28 and Corollary 29]{smoothExt} and \cite[3.2]{gloeckner02a}). \textbf{b)} We show that the map \mbox{$\psi \ensuremath{\nobreak:\nobreak} G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\to C^{\infty}(P,K)^{K}$} from \eqref{eqn:localGaugeGroupIsAbstractlyIsomorphisToEquivariantMappings} is a homeomorphism. Let \mbox{$\left.\ensuremath{\mathcal{P}}\right|_{\cl{V}_{i}}=:\ensuremath{\mathcal{P}}_{i}$} be the restricted bundle. Since \mbox{$T^{n}\cl{V}_{i}$} is closed in \mbox{$T^{n}M$}, we have that \mbox{$C^{\infty}(P,K)^{K}$} is homeomorphic to \[ \wt{G}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}):=\{(\wt{\gamma}_{i})_{i=1,\ldots,n}\in \prod_{i=1}^{n}C^{\infty}(P_{i},K)^{K}:\wt{\gamma}_{i}(p)=\wt{\gamma}_{j}(p) \ensuremath{\;\text{\textup{ for all }}\;} p\in \pi^{-1}(\cl{V}_{i}\cap \cl{V}_{j})\} \] as in Proposition \ref{prop:gluingForVectorBundles_CLOSED_Version}. 
With respect to this identification, \mbox{$\psi$} is given by \[ (\gamma_{i})_{i=1,\ldots,n}\mapsto (k_{\sigma_{i}}^{-1}\cdot (\gamma_{i}\op{\circ}\pi) \cdot k_{\sigma_{i}})_{i=1,\ldots,n} \] and it thus suffices to show the assertion for trivial bundles. So let \mbox{$\sigma \ensuremath{\nobreak:\nobreak} M\to P$} be a global section. The map \mbox{$C^{\infty}(M,K)\ni f\mapsto f\circ \pi \in C^{\infty}(P,K)$} is continuous since \[ C^{\infty}(M,K)\ni f\mapsto T^{k}(f\circ \pi)=T^{k}f\circ T^{k}\pi = (T^{k}\pi)_{*}(T^{k}f)\in C(T^{k}P,T^{k}K) \] is continuous as a composition of a pullback and the map \mbox{$f\mapsto T^{k}f$}, which defines the topology on \mbox{$C^{\infty}(M,K)$}. Since conjugation in \mbox{$C^{\infty}(P,K)$} is continuous, it follows that \mbox{$\psi$} is continuous. Since the map \mbox{$f\mapsto f\circ \sigma$} is also continuous (by the same argument), the assertion follows. \textbf{c)} This follows immediately from \mbox{$\op{L}(C^{\infty}(\ol{V_{i}},K))\cong C^{\infty}(\ol{V_{i}},\ensuremath{\mathfrak{k}})$} (cf.\ \cite[Section 3.2]{gloeckner02a}). \end{prf} The next corollary is a mere observation. Since it justifies the name ``property SUB'', it is made explicit here. \begin{corollary}\label{cor:propertySUB} If \mbox{$\ensuremath{\mathcal{P}}$} is a smooth principal \mbox{$K$}-bundle with compact base \mbox{$M$}, having the property SUB with respect to the smooth closed trivialising system \mbox{$\cl{\ensuremath{\mathcal{V}}}$}, then \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} is a closed subgroup of \mbox{$\prod_{i=1}^{n}C^{\infty}(\cl{V}_{i},K)$} and a Lie group modelled on \mbox{$\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}. \end{corollary} That different choices of charts lead to isomorphic Lie group structures follows directly from Proposition \ref{prop:localDescriptionsOfLieGroups}. 
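As a simple illustration, every trivial bundle has the property SUB: with respect to the trivialising system consisting of the single trivialising set \mbox{$\cl{V}_{1}=M$} (with a global section as trivialising section) there are no compatibility conditions, and for any convex centred chart \mbox{$\varphi \ensuremath{\nobreak:\nobreak} W\to W'$} of \mbox{$K$} the map \mbox{$\varphi_{*}$} from Definition \ref{def:propertySUB} reduces to
\[
C^{\infty}(M,W)\to C^{\infty}(M,W'),\quad \gamma \mapsto \varphi \op{\circ}\gamma ,
\]
which is bijective with inverse \mbox{$\eta \mapsto \varphi^{-1}\op{\circ}\eta$}.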
We show next that in fact, different choices of trivialising systems also lead to isomorphic Lie group structures on \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$}. \begin{proposition} \label{prop:isomorhicLieGroupsStructuresOnGau(P)ForDifferentTrivialisations} Let \mbox{$\ensuremath{\mathcal{P}}$} be a smooth principal \mbox{$K$}-bundle with compact base. If we have two trivialising systems \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\cl{V}_{i},\sigma_{i})_{i=1,\ldots,n}$} and \mbox{$\cl{\mathcal{U}}=(\cl{U}_{j},\tau_{j})_{j=1,\ldots,m}$} and \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to \mbox{$\cl{\ensuremath{\mathcal{V}}}$} and \mbox{$\cl{\mathcal{U}}$}, then \mbox{$G_{\ol{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} is isomorphic to \mbox{$G_{\ol{\mathcal{U}}}(\ensuremath{\mathcal{P}})$} as a Lie group. \end{proposition} \begin{prf} First, we note that if the covers underlying \mbox{$\cl{\ensuremath{\mathcal{V}}}$} and \mbox{$\cl{\mathcal{U}}$} are the same, but the sections differ by smooth functions \mbox{$k_{i}\in C^{\infty}(\cl{V}_{i},K)$}, i.e., \mbox{$\sigma_{i}=\tau_{i}\cdot k_{i}$}, then this induces an automorphism of Lie groups \[ G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\to G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}),\quad (\gamma_{i})_{i=1,\ldots,n}\mapsto (k_{i}^{-1}\cdot\gamma_{i}\cdot k_{i})_{i=1,\ldots,n}, \] because conjugation with \mbox{$k_{i}^{-1}$} is an automorphism of \mbox{$C^{\infty}(\cl{V}_{i},K)$}. Since any two open covers have a common refinement it suffices to show the assertion if one cover is a refinement of the other. So let \mbox{$V_{1},\dots ,V_{n}$} be a refinement of \mbox{$U_{1},\dots ,U_{m}$} and let \mbox{$\{1,\dots ,n\} \ni i\mapsto j(i)\in \{1,\dots ,m\}$} be a function with \mbox{$V_{i}\ensuremath{\nobreak\subseteq\nobreak} U_{j(i)}$}. 
Since different choices of sections lead to automorphisms we may assume that \mbox{$\sigma_{i}=\left.\tau_{j(i)}\right|_{\cl{V}_{i}}$}, implying in particular \mbox{$k_{ii'}(m)=k_{j(i)j(i')}(m)$}. Then the restriction map from Lemma \ref{lem:restrictionMapForCurrentGroupIsSmooth} yields a smooth homomorphism \[ \psi :G_{\cl{\mathcal{U}}}(\ensuremath{\mathcal{P}})\to G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}),\quad (\gamma_{j})_{j\in J}\mapsto (\left.\gamma_{j(i)}\right|_{\cl{V}_{i}})_{i\in I}. \] For \mbox{$\psi^{-1}$} we construct each component \mbox{$\psi_{j}^{-1}\ensuremath{\nobreak:\nobreak} G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\to C^{\infty}(\cl{U}_{j},K)$} separately. The condition that \mbox{$(\psi_{j}^{-1})_{j\in J}$} is inverse to \mbox{$\psi$} is then \begin{align} \label{eqn:isomorhicLieGroupsStructuresOnGau(P)ForDifferentTrivialisations1} \left.\psi^{-1}_{j}((\gamma_i)_{i\in I})\right|_{\cl{V}_{i}}=\gamma_{i} \ensuremath{\;\text{\textup{ for all }}\;} i\;\text{ with }\; j=j(i). \end{align} Set \mbox{$I_{j}:=\{i\in I: \cl{V}_{i}\ensuremath{\nobreak\subseteq\nobreak} \cl{U}_{j}\}$} and note that \mbox{$j(i)=j$} implies \mbox{$i\in I_{j}$}. Since a change of the sections \mbox{$\sigma_{i}$} induces an automorphism on \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} we may assume that \mbox{$\sigma_{i}=\left.\tau_{j}\right|_{\cl{V}_{i}}$} for each \mbox{$i\in I_{j}$}. Let \mbox{$x \in \cl{U}_{j}\backslash \cup_{i\in I_{j}}V_{i}$}. Then \mbox{$x\in V_{i_{x}}$} for some \mbox{$i_{x}\in I$} and thus there exists an open neighbourhood \mbox{$U_{x}$} of \mbox{$x$} such that \mbox{$\cl{U}_{x}$} is a manifold with corners, contained in \mbox{$\cl{U}_{j}\cap \cl{V}_{i_{x}}$}. 
Now finitely many \mbox{$U_{x_{1}},\dots ,U_{x_{l}}$} cover \mbox{$\cl{U}_{j}\backslash \cup_{i\in I_{j}}V_{i}$} and we set \[ \psi^{-1}_{j}((\gamma_{i})_{i\in I})=\ensuremath{\operatorname{glue}} \left((\gamma_{i})_{i\in I_{j}}, \left(\left.(k_{ji_{x_{k}}}\cdot\gamma_{i_{x_{k}}}\cdot k_{i_{x_{k}}j})\right|_{U_{x_{k}}} \right)_{k=1,\dots ,l} \right). \] Then this defines a smooth map by Proposition \ref{prop:gluingLemmaForCurrentGroup} and \eqref{eqn:isomorhicLieGroupsStructuresOnGau(P)ForDifferentTrivialisations1} is satisfied because \mbox{$j(i)=j$} implies \mbox{$i\in I_{j}$}. \end{prf} We now come to the main result of this section. \begin{theorem}[Lie group structure on \mbox{$\boldsymbol{\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})}$}] \label{thm:gaugeGroupIsLieGroup} Let \mbox{$\ensuremath{\mathcal{P}}$} be a smooth principal \mbox{$K$}-bundle over the compact manifold \mbox{$M$} (possibly with corners). If \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB, then the gauge group \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\cong C^{\infty}(P,K)^{K}$} carries a Lie group structure, modelled on \mbox{$C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}$}. If, moreover, \mbox{$K$} is locally exponential, then \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} is so as well. \end{theorem} \begin{prf} We endow \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} with the Lie group structure induced from the isomorphisms of groups \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\cong C^{\infty}(P,K)^{K}\cong G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} for some smooth closed trivialising system \mbox{$\cl{\ensuremath{\mathcal{V}}}$}. 
To show that \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} is locally exponential if \mbox{$K$} is so, we first show that if \mbox{$M$} is a compact manifold with corners and \mbox{$K$} has an exponential function, then \[ (\exp_{K})_{*}:C^{\infty}(M,\ensuremath{\mathfrak{k}} )\to C^{\infty}(M,K),\quad \eta \mapsto \exp_{K}\op{\circ }\eta \] is an exponential function for \mbox{$C^{\infty}(M,K)$}. For \mbox{$x\in \ensuremath{\mathfrak{k}}$} let \mbox{$\gamma_{x}\in C^{\infty}(\ensuremath{\mathds{R}},K)$} be the solution of the initial value problem \mbox{$\gamma (0)=e$}, \mbox{$\gamma' (t)=\gamma (t).x$}. Take \mbox{$\eta \in C^{\infty}(M,\ensuremath{\mathfrak{k}})$}. Then \[ \Gamma_{\eta}\ensuremath{\nobreak:\nobreak} \ensuremath{\mathds{R}}\to C^{\infty}(M,K),\quad \Gamma_{\eta}(t)(m):=\gamma_{\eta (m)}(t) =\exp_{K}(t\cdot \eta (m)) \] is a homomorphism of abstract groups. Furthermore, \mbox{$\Gamma_{\eta}$} is smooth, because it is smooth on a zero neighbourhood of \mbox{$\ensuremath{\mathds{R}}$}, for the push-forward of the local inverse of \mbox{$\exp_{K}$} provides charts on a unit neighbourhood in \mbox{$C^{\infty}(M,K)$}. Then \[ \delta^{l}(\Gamma_{\eta})(t)=\Gamma_{\eta}(t)^{-1}\cdot \Gamma_{\eta} '(t)=\Gamma_{\eta}(t)^{-1}\cdot \Gamma_{\eta}(t)\cdot \eta=\eta, \] thought of as an equation in the Lie group \mbox{$T\big(C^{\infty}(M,K)\big)\cong C^{\infty}(M,\ensuremath{\mathfrak{k}})\rtimes C^{\infty}(M,K)$}, shows that \mbox{$\eta \mapsto \Gamma_{\eta}(1)=\exp_{K}\circ \eta$} is an exponential function for \mbox{$C^{\infty}(M,K)$}. 
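For completeness, the homomorphism property of \mbox{$\Gamma_{\eta}$} can also be checked pointwise: since each \mbox{$\gamma_{x}$} is a one-parameter group, we have
\[
\Gamma_{\eta}(t+s)(m)=\gamma_{\eta (m)}(t+s)=\gamma_{\eta (m)}(t)\cdot \gamma_{\eta (m)}(s)
=\big(\Gamma_{\eta}(t)\cdot \Gamma_{\eta}(s)\big)(m)
\ensuremath{\;\text{\textup{ for all }}\;} t,s\in \ensuremath{\mathds{R}},\; m\in M.
\]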
The preceding argument immediately yields that \[ \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty}(\ol{V_{i}},W')\to G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}),\quad (\eta_{i})_{i=1,\ldots,n}\mapsto (\exp_{K}\circ \eta_{i} )_{i=1,\ldots,n} \] is a diffeomorphism onto an open subset and thus \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} is locally exponential. \end{prf} It remains to elaborate on the arcane property SUB. First we shall see that this property behaves well with respect to refinements of trivialising systems. \begin{lemma}\label{lem:propertySUBIsCompatibleWithRefinements} Let \mbox{$\ensuremath{\mathcal{P}}$} be a smooth principal \mbox{$K$}-bundle with compact base and \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\cl{V}_{i},\sigma_{i})_{i=1,\ldots,n}$} be a smooth closed trivialising system of \mbox{$\ensuremath{\mathcal{P}}$}. If \mbox{$\cl{\mathcal{U}}=(\cl{U}_{j},\tau_{j})_{j=1,\ldots,m }$} is a refinement of \mbox{$\cl{\ensuremath{\mathcal{V}}}$}, then \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to \mbox{$\cl{\ensuremath{\mathcal{V}}}$} if and only if \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to \mbox{$\cl{\mathcal{U}}$}. \end{lemma} \begin{prf} Let \mbox{$\{1,\dots ,m\}\ni j\mapsto i(j)\in\{1,\dots ,n\}$} be a map such that \mbox{$U_{j}\ensuremath{\nobreak\subseteq\nobreak} V_{i(j)}$} and \mbox{$\tau_{j}=\left.\sigma_{i(j)}\right|_{\cl{U}_{j}}$}. 
Then we have bijective mappings \begin{alignat*}{2} \psi_{G}&\ensuremath{\nobreak:\nobreak} G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\to G_{\cl{\mathcal{U}}}(\ensuremath{\mathcal{P}}),\quad &&(\gamma_{i})_{i=1,\ldots,n}\mapsto (\left.\gamma_{i(j)}\right|_{\cl{U}_{j}})_{j=1,\ldots,m}\\ \psi_{\ensuremath{\mathfrak{g}}}&\ensuremath{\nobreak:\nobreak} \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\to \ensuremath{\mathfrak{g}}_{\cl{\mathcal{U}}}(\ensuremath{\mathcal{P}}),\quad &&(\eta_{i})_{i=1,\ldots,n}\mapsto (\left.\eta_{i(j)}\right|_{\cl{U}_{j}})_{j=1,\ldots,m} \end{alignat*} (cf.\ Proposition \ref{prop:isomorhicLieGroupsStructuresOnGau(P)ForDifferentTrivialisations}). Now let \mbox{$\varphi \ensuremath{\nobreak:\nobreak} W\to W'$} be an arbitrary convex centred chart of \mbox{$K$} and set \begin{gather*} Q:=G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty}(\cl{V}_{i},W)\quad\quad \wt{Q}:=G_{\cl{\mathcal{U}}}(\ensuremath{\mathcal{P}})\cap \prod_{j=1}^{m}C^{\infty}(\cl{U}_{j},W)\\ Q':=\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty}(\cl{V}_{i},W')\quad\quad \wt{Q}':=\ensuremath{\mathfrak{g}}_{\cl{\mathcal{U}}}(\ensuremath{\mathcal{P}})\cap \prod_{j=1}^{m}C^{\infty}(\cl{U}_{j},W'). \end{gather*} Then we have \mbox{$\psi_{G}(Q)=\wt{Q}$} and \mbox{$\psi_{\ensuremath{\mathfrak{g}}}(Q')=\wt{Q}'$}, and the assertion follows from the commutative diagram \[ \begin{CD} Q@>\varphi_{*}>>Q'\\ @V\psi_{G}VV @V\psi_{\ensuremath{\mathfrak{g}}}VV\\ \wt{Q}@>\varphi_{*}>>\wt{Q}'. \end{CD} \] \end{prf} The following lemma will be needed later in the proof that bundles with structure group a direct limit Lie group have the property SUB. 
\begin{lemma}\label{lem:equivariantExtensionOfCharts} Let \mbox{$C$} be a compact Lie group, \mbox{$M,N$} be smooth finite-dimensional manifolds with \mbox{$M\ensuremath{\nobreak\subseteq\nobreak} N$} and assume that the inclusion \mbox{$i\ensuremath{\nobreak:\nobreak} M\hookrightarrow N$} is a smooth immersion. Furthermore, let \mbox{$C$} act on \mbox{$N$} from the right such that \mbox{$M\cdot C=M$} and that \mbox{$x\in M$} is a fixed point. Then \mbox{$C$} acts on \mbox{$T_{x}N$}, leaving \mbox{$T_{x}M$} invariant. If there exists a \mbox{$C$}-invariant smoothly and \mbox{$C$}-equivariantly contractible relatively compact open neighbourhood \mbox{$U$} of \mbox{$x$} and a \mbox{$C$}-equivariant chart \mbox{$\varphi \ensuremath{\nobreak:\nobreak} U\to \wt{U}\ensuremath{\nobreak\subseteq\nobreak} T_{x}M$} with \mbox{$\varphi (x)=0$}, then there exists a \mbox{$C$}-equivariant chart \mbox{$\psi \ensuremath{\nobreak:\nobreak} V\to \wt{V}\ensuremath{\nobreak\subseteq\nobreak} T_{x}N$} such that \mbox{$V$} is a \mbox{$C$}-invariant smoothly and \mbox{$C$}-equivariantly contractible relatively compact open neighbourhood of \mbox{$x$} in \mbox{$N$}, satisfying \mbox{$V\cap M=U$}, \mbox{$\psi (V)\cap T_{x}M=\varphi (U)$} and \mbox{$\left.\psi\right|_{U}=\varphi$}. \end{lemma} \begin{prf}(cf. \cite[Lemma 2.1]{gloeckner05} for the non-equivariant case) Fix a \mbox{$C$}-invariant metric on \mbox{$TN$}, inducing a \mbox{$C$}-invariant metric on \mbox{$N$} (cf.\ \cite[Section VI.2]{bredon72transformationGroups}). Then \mbox{$C$} acts on \mbox{$N$} by isometries. As in \cite[Lemma 2.1]{gloeckner05} we find a \mbox{$\sigma$}-compact, relatively compact open submanifold \mbox{$W'$} of \mbox{$N$} such that \mbox{$W'\cap \ol{U}=U$}, whence \mbox{$U$} is a closed submanifold of \mbox{$W'$}. We now set \mbox{$W:=\cup_{c\in C}W'\cdot c$}.
Since \mbox{$C$} acts by isometries, this is an open \mbox{$C$}-invariant subset of \mbox{$N$} and we deduce that we still have \[ W\cap \ol{U}=\left(\cup_{c\in C}W'\cdot c\right)\cap \left(\cup_{c\in C}\ol{U}\cdot c \right)=\cup_{c\in C}\left((W'\cap \ol{U})\cdot c\right)=U. \] By shrinking \mbox{$W$} if necessary we thus get an open \mbox{$C$}-invariant relatively compact submanifold of \mbox{$N$} with \mbox{$U$} as closed submanifold. By \cite[Theorem VI.2.2]{bredon72transformationGroups}, \mbox{$U$} has an open \mbox{$C$}-invariant tubular neighbourhood in \mbox{$W$}, i.e., there exists a \mbox{$C$}-vector bundle \mbox{$\xi \ensuremath{\nobreak:\nobreak} E\to U$} and a \mbox{$C$}-equivariant diffeomorphism \mbox{$\Phi \ensuremath{\nobreak:\nobreak} E\to W$} onto some \mbox{$C$}-invariant open neighbourhood \mbox{$\Phi (E)$} of \mbox{$U$} in \mbox{$W$} such that the restriction of \mbox{$\Phi$} to the zero section \mbox{$U$} is the inclusion of \mbox{$U$} in \mbox{$W$}. The proof of \cite[Theorem VI.2.2]{bredon72transformationGroups} shows that \mbox{$E$} can be taken to be the normal bundle of \mbox{$U$} with the canonical \mbox{$C$}-action, which is canonically isomorphic to the \mbox{$C$}-invariant subbundle \mbox{$TU^{\perp}$}. We will thus identify \mbox{$E$} with \mbox{$TU^{\perp}$} from now on. That \mbox{$U$} is smoothly and \mbox{$C$}-equivariantly contractible means that there exists a homotopy \mbox{$F\ensuremath{\nobreak:\nobreak} [0,1]\times U\to U$} such that each \mbox{$F(t,\cdot )\ensuremath{\nobreak:\nobreak} U\to U$} is smooth and \mbox{$C$}-equivariant and that \mbox{$F(1,\cdot)$} is the map which is constantly \mbox{$x$} and \mbox{$F(0,\cdot )=\ensuremath{\operatorname{id}}_{U}$}.
Pulling back \mbox{$E$} along the smooth and equivariant map \mbox{$F(1,\cdot)$} gives the \mbox{$C$}-vector bundle \mbox{$\ensuremath{\operatorname{pr}}_{1}\ensuremath{\nobreak:\nobreak} U\times T_{x}U^{\perp}\to U$}, where the action of \mbox{$C$} on \mbox{$U$} is the one given by assumption and the action of \mbox{$C$} on \mbox{$T_{x}U^{\perp}$} is the one induced from the canonical action of \mbox{$C$} on \mbox{$T_{x}N$}. By \cite[Corollary 2.5]{wasserman69EquivariantDifferentialTopology}, \mbox{$F(1,\cdot )^{*}(TU^{\perp})$} and \mbox{$F(0,\cdot)^{*}(TU^{\perp})$} are equivalent \mbox{$C$}-vector bundles and thus there exists a smooth \mbox{$C$}-equivariant bundle equivalence \mbox{$\Psi \ensuremath{\nobreak:\nobreak} TU^{\perp}=F(0,\cdot )^{*}(TU^{\perp})\to U\times T_{x}U^{\perp}=F(1,\cdot)^{*}(TU^{\perp})$}. We now define \[ \psi\ensuremath{\nobreak:\nobreak} V:=\Phi (TU^{\perp})\to T_{x}N,\quad y\mapsto \varphi \left(\Psi_{1}(\Phi^{-1}(y))\right)+\Psi_{2}\left(\Phi^{-1}(y)\right), \] where \mbox{$\Psi_{1}$} and \mbox{$\Psi_{2}$} are the components of \mbox{$\Psi$}. Since \mbox{$\Phi$}, \mbox{$\Psi$} and \mbox{$\varphi$} are \mbox{$C$}-equivariant so is \mbox{$\psi$} and it is a diffeomorphism onto the open subset \mbox{$\varphi (U)\times T_{x}U^{\perp}$} because \mbox{$T_{x}N=T_{x}M\oplus T_{x}U^{\perp}$}. This yields a \mbox{$C$}-equivariant chart. Moreover, \mbox{$V$} is relatively compact as a subset of \mbox{$W$}. Furthermore, if we denote by \mbox{$\psi_{1}$} the \mbox{$T_{x}M$}-component and by \mbox{$\psi_{2}$} the \mbox{$T_{x}U^{\perp}$}-component of \mbox{$\psi$}, then \[ [0,1]\times V\to V,\quad (t,y)\mapsto \psi^{-1}\big(\psi_{1}(y)+t\cdot \psi_{2}(y)\big) \] defines a smooth \mbox{$C$}-equivariant homotopy from \mbox{$\ensuremath{\operatorname{id}}_{V}$} to the projection \mbox{$V\to U$}.
Composing this homotopy with the smooth \mbox{$C$}-equivariant homotopy from \mbox{$\ensuremath{\operatorname{id}}_{U}$} to the map which is constantly \mbox{$x$} yields the asserted homotopy. Since \mbox{$V\cap M$} is the zero section in \mbox{$\left.TU^{\perp}\right|_{U}$}, we have \mbox{$V\cap M=U$}. Furthermore, we have \mbox{$\psi (V)\cap T_{x}M=\varphi (U)$}, because \mbox{$\Phi$} and \mbox{$\Psi$} restrict to the identity on the zero section. This also implies \mbox{$\left.\psi\right|_{U}=\varphi $}. We have thus checked all requirements of the assertion. \end{prf} Although it is presently unclear which bundles have the property SUB and which do not, we shall now see that \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB in many interesting cases, covering large classes of presently known locally convex Lie groups. \begin{lemma}\label{lem:propertySUB} Let \mbox{$\ensuremath{\mathcal{P}}$} be a smooth principal \mbox{$K$}-bundle over the compact manifold with corners \mbox{$M$}. \begin{itemize} \item [\textup{\textbf{a)}}] If \mbox{$\ensuremath{\mathcal{P}}$} is trivial, then there exists a global smooth trivialising system and \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to each such system. \item [\textup{\textbf{b)}}] If \mbox{$K$} is abelian, then \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to each smooth closed trivialising system. \item [\textup{\textbf{c)}}] If \mbox{$K$} is a Banach--Lie group, then \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to each smooth closed trivialising system. \item [\textup{\textbf{d)}}] If \mbox{$K$} is locally exponential, then \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to each smooth closed trivialising system.
\item [\textup{\textbf{e)}}] If \mbox{$K$} is a countable direct limit of finite-dimensional Lie groups in the sense of \cite{gloeckner05}, then there exists a smooth closed trivialising system such that the corresponding transition functions take values in a compact subgroup \mbox{$C$} of some \mbox{$K_{i}$} and \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to each such system. \end{itemize} \end{lemma} \begin{prf} \textup{\textbf{a)}} If \mbox{$\ensuremath{\mathcal{P}}$} is trivial, then there exists a global section \mbox{$\sigma \ensuremath{\nobreak:\nobreak} M\to P$} and thus \mbox{$\cl{\ensuremath{\mathcal{V}}}=(M,\sigma)$} is a trivialising system of \mbox{$\ensuremath{\mathcal{P}}$}. Then \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})=C^{\infty}(M,K)$} and \mbox{$\varphi_{*}$} is bijective for any convex centred chart \mbox{$\varphi \ensuremath{\nobreak:\nobreak} W\to W'$}. \textup{\textbf{b)}} If \mbox{$K$} is abelian, then the conjugation action of \mbox{$K$} on itself and the adjoint action of \mbox{$K$} on \mbox{$\ensuremath{\mathfrak{k}}$} are trivial. Then a direct verification shows that \mbox{$\varphi_{*}$} is bijective for any trivialising system \mbox{$\cl{\ensuremath{\mathcal{V}}}$} and any convex centred chart \mbox{$\varphi \ensuremath{\nobreak:\nobreak} W\to W'$}. \textup{\textbf{c)}} If \mbox{$K$} is a Banach--Lie group, then it is in particular locally exponential (cf.\ Remark \ref{rem:banachLieGroupsAreLocallyExponential}) and it thus suffices to show \textup{\textbf{d)}}. \textup{\textbf{d)}} Let \mbox{$K$} be locally exponential and \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\cl{V}_{i},\sigma_{i})_{i=1,\ldots,n}$} be a trivialising system. 
Furthermore, let \mbox{$W'\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\mathfrak{k}}$} be an open zero neighbourhood such that \mbox{$\exp_{K}$} restricts to a diffeomorphism on \mbox{$W'$} and set \mbox{$W=\exp_{K}(W')$} and \mbox{$\varphi:=\exp_{K}^{-1} \ensuremath{\nobreak:\nobreak} W\to W'$}. Then we have \[ (\gamma_{i})_{i=1,\ldots,n}\in G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C(\cl{V}_{i},W) \;\;\Leftrightarrow\;\; \varphi_{*}((\gamma_{i})_{i=1,\ldots,n}) \in \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C(\cl{V}_{i},W'), \] because \mbox{$\exp_{K}(\ensuremath{\operatorname{Ad}}(k).x)=k\cdot \exp_{K}(x)\cdot k^{-1}$} holds for all \mbox{$k\in K$} and \mbox{$x\in W'$} (cf.\ Lemma \ref{lem:interchangeOfActionsOnGroupAndAlgebra}). Furthermore, \mbox{$(\eta_{i})_{i=1,\ldots,n}\mapsto (\exp_{K}\op{\circ}\eta_{i})_{i=1,\ldots,n}$} provides an inverse to \mbox{$\varphi_{*}$}. \textup{\textbf{e)}} Let \mbox{$K$} be a direct limit of the countable direct system \mbox{$\mathcal{S}=\left((K_{i})_{i\in I},(\lambda_{i,j})_{i\geq j} \right)$} of finite-dimensional Lie groups \mbox{$K_{i}$} and Lie group morphisms \mbox{$\lambda_{i,j}\ensuremath{\nobreak:\nobreak} K_{j}\to K_{i}$} with \mbox{$\lambda_{i,j}\op{\circ}\lambda_{j,\ell}=\lambda_{i,\ell}$} if \mbox{$i\geq j\geq \ell$}. Then there exists an associated injective quotient system \mbox{$\left((\ol{K}_{i})_{i\in I},(\ol{\lambda}_{i,j})_{i\geq j}\right)$} with \mbox{$\ol{K}_{i}=K_{i}/N_{i}$}, where \mbox{$N_{i}=\ol{\bigcup_{j\geq i}\ker \lambda_{j,i}}$} and \mbox{$\ol{\lambda}_{i,j}\ensuremath{\nobreak:\nobreak} \ol{K}_{j}\to \ol{K}_{i}$} is determined by \mbox{$q_{i}\op{\circ}\lambda_{i,j}=\ol{\lambda}_{i,j}\op{\circ}q_{j}$} for the quotient map \mbox{$q_{i}\ensuremath{\nobreak:\nobreak} K_{i}\to \ol{K}_{i}$}. In particular, each \mbox{$\ol{\lambda}_{i,j}$} is an injective immersion.
After passing to a cofinal subsequence of an equivalent direct system (cf.\ \cite[\S{}1.6]{gloeckner05}), we may without loss of generality assume that \mbox{$I=\ensuremath{\mathds{N}}$}, that \mbox{$K_{1}\ensuremath{\nobreak\subseteq\nobreak} K_{2}\ensuremath{\nobreak\subseteq\nobreak} \dots$} and that the immersions are the inclusion maps. Then a chart \mbox{$\varphi\ensuremath{\nobreak:\nobreak} W\to W'$} of \mbox{$K$} around \mbox{$e$} is the direct limit of a sequence of charts \mbox{$(\varphi_{i}\ensuremath{\nobreak:\nobreak} W_{i}\to W'_{i})_{i\in \ensuremath{\mathds{N}}}$} such that \mbox{$W=\bigcup_{i\in \ensuremath{\mathds{N}}}W_{i}$}, \mbox{$W'=\bigcup_{i\in \ensuremath{\mathds{N}}}W_{i}'$} and that \mbox{$\left.\varphi_{i}\right|_{W_{j}}=\varphi_{j}$} if \mbox{$i\geq j$} (cf.\ \cite[Theorem 3.1]{gloeckner05}). Now, let \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\ol{V}_{i},\ol{\sigma}_{i})_{i=1,\ldots,n}$} be a smooth closed trivialising system of \mbox{$\ensuremath{\mathcal{P}}$}. Then the corresponding transition functions are defined on the compact subset \mbox{$\ol{V}_{i}\cap \ol{V}_{j}$} and thus take values in a compact subset of \mbox{$K$}. Since each compact subset of \mbox{$K$} is entirely contained in one of the \mbox{$K_{i}$} (cf.\ \cite[Lemma 1.7]{gloeckner05}), the transition functions take values in some \mbox{$K_{a}$}. Since each finite-dimensional principal \mbox{$K_{a}$}-bundle can be reduced to a \mbox{$C$}-bundle, where \mbox{$C\ensuremath{\nobreak\subseteq\nobreak} K_{a}$} is the maximal compact subgroup of \mbox{$K_{a}$} (cf.\ \cite[p.\ 59]{steenrod51}), we find smooth mappings \mbox{$f_{i}\ensuremath{\nobreak:\nobreak} \ol{V}_{i}\to K_{a}$} such that \mbox{$(\ol{V}_{i},\ol{\tau}_{i})_{i=1,\ldots,n}$} with \mbox{$\ol{\tau}_{i}:=\ol{\sigma}_{i}\cdot f_{i}$} is a smooth closed trivialising system of \mbox{$\ensuremath{\mathcal{P}}$} whose transition functions take values in the compact Lie group \mbox{$C$}.
We now define a chart \mbox{$\varphi \ensuremath{\nobreak:\nobreak} W\to W'$} satisfying the requirements of Definition \ref{def:propertySUB}. For \mbox{$i<a$} let \mbox{$W_{i}$} and \mbox{$W_{i}'$} be empty. For \mbox{$i=a$} denote \mbox{$\ensuremath{\mathfrak{k}}_{a}:=\op{L}(K_{a})$} and let \mbox{$\exp_{a}\ensuremath{\nobreak:\nobreak} \ensuremath{\mathfrak{k}}_{a}\to K_{a}$} be the exponential function of \mbox{$K_{a}$}. Now \mbox{$C$} acts on \mbox{$K_{a}$} from the right by conjugation and on \mbox{$\ensuremath{\mathfrak{k}}_{a}$} by the adjoint representation, which is simply the induced action on \mbox{$T_{e}K_{a}$} for the fixed point \mbox{$e\in K_{a}$}. By Lemma \ref{lem:interchangeOfActionsOnGroupAndAlgebra} we have \mbox{$\exp_{a}(\ensuremath{\operatorname{Ad}} (c).x)=c\cdot \exp_{a}(x)\cdot c^{-1}$} for each \mbox{$c\in C$}. We choose a \mbox{$C$}-invariant metric on \mbox{$\ensuremath{\mathfrak{k}}_{a}$}. Then there exists an \mbox{$\varepsilon >0$} such that \mbox{$\exp_{a}$} restricts to a diffeomorphism on the open \mbox{$\varepsilon$}-ball \mbox{$W_{a}'$} around \mbox{$0\in\ensuremath{\mathfrak{k}}_{a}$} in the chosen invariant metric. Then \[ \varphi_{a}:=(\left.\exp_{a}\right|_{W'_{a}})^{-1}\ensuremath{\nobreak:\nobreak} W_{a}:=\exp_{a}(W_{a}')\to W'_{a},\quad \exp_{a}(x)\mapsto x \] defines an equivariant chart of \mbox{$K_{a}$} for the corresponding \mbox{$C$}-actions on \mbox{$K_{a}$} and \mbox{$\ensuremath{\mathfrak{k}}_{a}$}, and, moreover, we may choose \mbox{$W_{a}'$} so that \mbox{$\exp_{a}(W'_{a})$} is relatively compact in \mbox{$K_{a}$}. In addition, \[ [0,1]\times W_{a}\ni(t,k)\mapsto \exp_{a}\left(t\cdot \varphi_{a}(k) \right)\in W_{a} \] defines a smooth \mbox{$C$}-equivariant contraction of \mbox{$W_{a}$}.
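The \mbox{$C$}-equivariance of this contraction can be verified directly: for \mbox{$c\in C$}, \mbox{$k\in W_{a}$} and \mbox{$t\in [0,1]$} we have \[ \exp_{a}\big(t\cdot \varphi_{a}(c^{-1}\cdot k\cdot c)\big) =\exp_{a}\big(t\cdot \ensuremath{\operatorname{Ad}} (c^{-1}).\varphi_{a}(k)\big) =c^{-1}\cdot \exp_{a}\big(t\cdot \varphi_{a}(k)\big)\cdot c, \] where the first equality uses the equivariance of \mbox{$\varphi_{a}$} and the linearity of \mbox{$\ensuremath{\operatorname{Ad}} (c^{-1})$}, and the second one uses the interchange formula \mbox{$\exp_{a}(\ensuremath{\operatorname{Ad}} (c).x)=c\cdot \exp_{a}(x)\cdot c^{-1}$} from above.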
By Lemma \ref{lem:equivariantExtensionOfCharts} we may extend \mbox{$\varphi_{a}$} to an equivariant chart \mbox{$\varphi_{a+1}\ensuremath{\nobreak:\nobreak} W_{a+1}\to W_{a+1}'$} with \mbox{$W_{a+1}\cap K_{a}=W_{a}$} and \mbox{$\left.\varphi_{a+1}\right|_{W_{a}}=\varphi_{a}$} such that \mbox{$W_{a+1}$} is relatively compact in \mbox{$K_{a+1}$} and smoothly and \mbox{$C$}-equivariantly contractible. Proceeding in this way we define \mbox{$C$}-equivariant charts \mbox{$\varphi_{i}$} for \mbox{$i\geq a$}. This yields a direct limit chart \mbox{$\varphi :=\lim_{\to}\varphi_{i}\ensuremath{\nobreak:\nobreak} W\to W'$} of \mbox{$K$} for which we have \mbox{$W=\bigcup_{i\in I}W_{i}$} and \mbox{$W'=\bigcup_{i\in I}W'_{i}$}. Since the action of \mbox{$C$} on \mbox{$T_{e}K_{i}=\ensuremath{\mathfrak{k}}_{i}$} is the induced action in each step and the construction yields \mbox{$\varphi_{i}(c^{-1}\cdot k\cdot c)=\ensuremath{\operatorname{Ad}} (c^{-1}).\varphi_{i}(k)$}, we conclude that we have \[ \varphi^{-1}(\ensuremath{\operatorname{Ad}} (c^{-1}).x)=c^{-1}\cdot \varphi^{-1}(x)\cdot c\ensuremath{\;\text{\textup{ for all }}\;} x\in W'\text{ and } c\in C \] (note that \mbox{$\exp$} is \emph{not} an inverse to \mbox{$\varphi$} any more). Since the transition functions of the trivialising system \mbox{$\cl{\ensuremath{\mathcal{V}}}':=(\ol{V}_{i},\ol{\tau}_{i})_{i=1,\ldots,n}$} take values in \mbox{$C$}, we may proceed as in \textup{\textbf{d)}} to see that \mbox{$\varphi_{*}$} is bijective. \end{prf} \begin{remark}\label{rem:propertySUB} The preceding lemma shows that there are different kinds of properties of \mbox{$\ensuremath{\mathcal{P}}$} that can ensure the property SUB, i.e., topological in case \textbf{a)}, algebraic in case \textbf{b)} and geometric in case \textbf{d)}. Case \textbf{e)} is even more remarkable, since it provides examples of principal bundles with the property SUB whose structure groups are not locally exponential in general (cf.\ \cite[Remark 4.7]{gloeckner05}).
It thus seems to be hard to find a bundle which does \emph{not} have this property. However, a more systematic answer to the question which bundles have this property is not available at the moment. \end{remark} \begin{problem} Is there a smooth principal \mbox{$K$}-bundle \mbox{$\ensuremath{\mathcal{P}}$} over a compact base space \mbox{$M$} which does not have the property SUB? \end{problem} Lie group structures on the gauge group have already been considered by other authors in similar settings. \begin{remark} If the structure group \mbox{$K$} is the group of diffeomorphisms \mbox{$\ensuremath{\operatorname{Diff}}(N)$} of some closed compact manifold \mbox{$N$}, then it does not follow from Lemma \ref{lem:propertySUB} that \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB, because \mbox{$\ensuremath{\operatorname{Diff}}(N)$} fails to be locally exponential or abelian. However, in this case, \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} is a split submanifold of the Lie group \mbox{$\ensuremath{\operatorname{Diff}}(P)$}, which provides a smooth structure on \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} \cite[Theorem 14.4]{michor91GaugeTheoryForFiberBundles}. Identifying \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} with the space of sections of the associated bundle \mbox{$\op{AD}(\ensuremath{\mathcal{P}})$} for the conjugation action \mbox{$\op{AD}:K\times K\to K$}, \cite[Proposition 6.6]{omoriMaedaYoshiokaKobayashi83OnRegularFrechetLieGroups} also provides a Lie group structure on \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$}. The advantage of Theorem \ref{thm:gaugeGroupIsLieGroup} is that it provides charts for \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$}, which allow us to reduce questions on gauge groups to similar questions on mapping groups. This correspondence is crucial for all the following considerations.
\end{remark} At the end of this section we provide an approximation result that makes homotopy groups of \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} accessible in terms of continuous data from the bundle \mbox{$\ensuremath{\mathcal{P}}$} (cf.\ \cite{connHom}). \begin{remark}\label{rem:topologyOnContinuousGaugeGroup} Let \mbox{$\ensuremath{\mathcal{P}}$} be one of the bundles that occur in Lemma \ref{lem:propertySUB} with the corresponding closed trivialising system \mbox{$\cl{\ensuremath{\mathcal{V}}}$} and let \mbox{$\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})_{c}$} (resp.\ \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}} )_{c}$}) be the continuous counterparts of \mbox{$\ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} (resp.\ \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$}), which are isomorphic to the space of \mbox{$K$}-equivariant continuous maps \mbox{$C(P,\ensuremath{\mathfrak{k}})^{K}$} (resp.\ \mbox{$C(P,K)^{K}$}). We endow all spaces of continuous mappings with the compact-open topology. Then the map \[ \varphi_{*}\ensuremath{\nobreak:\nobreak} G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})_{c}\cap \prod_{i=1}^{n}C(\cl{V}_{i},W) \to \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})_{c}\cap \prod_{i=1}^{n}C(\cl{V}_{i},W'), \quad (\gamma_{i})_{i=1,\ldots,n}\mapsto (\varphi \op{\circ}\gamma_{i})_{i=1,\ldots,n} \] is also bijective, inducing a smooth manifold structure on the left-hand side, because the right-hand side is an open subset of a locally convex space. Furthermore, it can be shown exactly as in the smooth case that this manifold structure endows \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})_{c}$} with a Lie group structure by Proposition \ref{prop:localDescriptionsOfLieGroups}.
One could go on and call the requirement that \mbox{$\varphi_{*}$} is bijective the ``\textit{continuous property SUB}'', but since the next proposition is its sole application, this seems exaggerated. \begin{lemma}\label{lem:approximationLemma2} Let \mbox{$\ensuremath{\mathcal{P}}$} be a smooth principal \mbox{$K$}-bundle over the compact base \mbox{$M$}, having the property SUB with respect to the smooth closed trivialising system \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\cl{V}_{i},\sigma_{i})_{i=1,\ldots,n}$} and let \mbox{$\varphi \ensuremath{\nobreak:\nobreak} W\to W'$} be the corresponding chart of \mbox{$K$} (cf.\ Definition \ref{def:propertySUB}). If \mbox{$(\gamma_{i})_{i=1,\ldots,n}\in G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} represents an element of \mbox{$C^{\infty}(P,K)^{K}$} (cf.\ Remark \ref{rem:isoToTheGaugeGroupInLocalCoordinates}) which is close to the identity, in the sense that \mbox{$\gamma_{i} (\ol{V_{i}})\ensuremath{\nobreak\subseteq\nobreak} W$}, then \mbox{$(\gamma_{i})_{i=1,\ldots,n}$} is homotopic to the constant map \mbox{$(x\mapsto e)_{i=1,\ldots,n}$}. \end{lemma} \begin{prf} Since the map \[ \varphi_{*}:U:=G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})\cap \prod_{i=1}^{n}C^{\infty}(\ol{V_{i}},W)\to \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}),\;\; (\gamma'_{i})_{i=1,\ldots,n} \mapsto (\varphi \op{\circ}\gamma'_{i})_{i=1,\ldots,n}, \] is a chart of \mbox{$G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} (cf.\ Proposition \ref{prop:gaugeGroupInLocalCoordinatesIsLieGroup}) and \mbox{$\varphi_{*}(U)\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\mathfrak{g}}_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})$} is convex, the map \[ [0,1]\ni t \mapsto \varphi_{*}^{-1}\big(t\cdot \varphi_{*}((\gamma_{i})_{i=1,\ldots,n})\big)\in G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}}) \] defines the desired homotopy.
\end{prf} \begin{proposition}\label{thm:weakHomotopyEquivalence} If \mbox{$\ensuremath{\mathcal{P}}$} is one of the bundles that occur in Lemma \ref{lem:propertySUB}, the natural inclusion \mbox{$\iota \ensuremath{\nobreak:\nobreak}\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}}) \hookrightarrow \ensuremath{\operatorname{Gau}_{c}}(\ensuremath{\mathcal{P}})$} of smooth into continuous gauge transformations is a weak homotopy equivalence, i.e., the induced mappings \mbox{$\pi_{n}(\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}}))\to\pi_{n}\left(\ensuremath{\operatorname{Gau}_{c}}(\ensuremath{\mathcal{P}})\right)$} are isomorphisms of groups for \mbox{$n\in \ensuremath{\mathds{N}}_{0}$}. \end{proposition} \begin{prf} We identify \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} with \mbox{$C^{\infty}(P,K)^{K}$} and \mbox{$\ensuremath{\operatorname{Gau}_{c}}(\ensuremath{\mathcal{P}})$} with \mbox{$C(P,K)^{K}$}. To see that \mbox{$\pi_{n}(\iota)$} is surjective, consider the continuous principal \mbox{$K$}-bundle \mbox{$\ensuremath{\operatorname{pr}}^{*}(\ensuremath{\mathcal{P}})$} obtained from \mbox{$\ensuremath{\mathcal{P}}$} by pulling it back along the projection \mbox{$\ensuremath{\operatorname{pr}}:\ensuremath{\mathds{S}}^{n}\times M\to M$}. Then \mbox{$\ensuremath{\operatorname{pr}}^{*}(\ensuremath{\mathcal{P}})\cong (K,\ensuremath{\operatorname{id}}\times \pi ,\ensuremath{\mathds{S}}^{n}\times P,\ensuremath{\mathds{S}}^{n}\times M)$}, where \mbox{$K$} acts trivially on the first factor of \mbox{$\ensuremath{\mathds{S}}^{n}\times P$}. With respect to this action we have \mbox{$C(\ensuremath{\operatorname{pr}}^{*}(P),K)^{K}\cong C (\ensuremath{\mathds{S}}^n\times P,K)^{K}$} and \mbox{$C^{\infty}(\ensuremath{\operatorname{pr}}^{*}(P),K)^{K}\cong C^{\infty} (\ensuremath{\mathds{S}}^n\times P,K)^{K}$}.
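Here the identification of the total spaces is given by the \mbox{$K$}-equivariant homeomorphism \[ \ensuremath{\operatorname{pr}}^{*}(P)=\{((z,m),p)\in (\ensuremath{\mathds{S}}^{n}\times M)\times P: \pi (p)=m\}\to \ensuremath{\mathds{S}}^{n}\times P,\quad ((z,m),p)\mapsto (z,p), \] which covers the identity of \mbox{$\ensuremath{\mathds{S}}^{n}\times M$} if \mbox{$K$} acts on \mbox{$\ensuremath{\mathds{S}}^{n}\times P$} by \mbox{$(z,p)\cdot k=(z,p\cdot k)$}. In particular, an equivariant map \mbox{$\ensuremath{\mathds{S}}^{n}\times P\to K$} is precisely a map which is equivariant in the second argument.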
The isomorphisms \mbox{$C(\ensuremath{\mathds{S}}^{n},G_{0})\cong C_{*}(\ensuremath{\mathds{S}}^{n},G_{0})\rtimes G_{0}=C_{*}(\ensuremath{\mathds{S}}^{n},G)\rtimes G_{0}$}, where \mbox{$C_{*}(\ensuremath{\mathds{S}}^{n},G)$} denotes the space of base-point-preserving maps from \mbox{$\ensuremath{\mathds{S}}^{n}$} to \mbox{$G$}, yield \mbox{$\pi_{n}(G)=\pi_{0}(C_{*}(\ensuremath{\mathds{S}}^{n},G))= \pi_{0}(C(\ensuremath{\mathds{S}}^{n},G_{0}))$} for any topological group \mbox{$G$}. We thus get a map \begin{multline*} \pi_{n}(C^{\infty}(P,K)^{K})=\pi_{0}(C_{*}(\ensuremath{\mathds{S}}^{n},C^{\infty}(P,K)^{K}))=\\ \pi_{0}(C(\ensuremath{\mathds{S}}^{n},C^{\infty}(P,K)^{K}_{\;\;0})) \stackrel{\eta}{\to}\pi_{0}(C(\ensuremath{\mathds{S}}^{n},C(P,K)^{K}_{\;\; 0})), \end{multline*} where \mbox{$\eta$} is induced by the inclusion \mbox{$C^{\infty}(P,K)^{K}\hookrightarrow C(P,K)^{K}$}. If \mbox{$f\in C(\ensuremath{\mathds{S}}^{n}\times P,K)^{K}$} represents an element \mbox{$[F]\in\pi_{0}(C(\ensuremath{\mathds{S}}^{n},C(P,K)^{K}_{\;\; 0}))$} (recall that we have \mbox{$C(P,K)^{K}\cong G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}})_{c}\ensuremath{\nobreak\subseteq\nobreak} \prod_{i=1}^{n}C(V_{i},K)$} and \mbox{$C(\ensuremath{\mathds{S}}^{n},C(V_{i},K))\cong C(\ensuremath{\mathds{S}}^{n}\times V_{i},K)$}), then there exists \mbox{$\wt{f}\in C^{\infty}(\ensuremath{\mathds{S}}^{n}\times P,K)^{K}$} which is contained in the same connected component of \mbox{$C(\ensuremath{\mathds{S}}^{n}\times P,K)^{K}$} as \mbox{$f$} (cf.\ \cite[Theorem 11]{approx}). Since \mbox{$\wt{f}$} is in particular smooth in the second argument, it follows that \mbox{$\wt{f}$} represents an element \mbox{$\wt{F}\in C(\ensuremath{\mathds{S}}^{n},C^{\infty}(P,K)^{K})$}. Since the connected components and the arc components of \mbox{$C(\ensuremath{\mathds{S}}^{n}\times P,K)^{K}$} coincide (since it is a Lie group, cf.
Remark \ref{rem:topologyOnContinuousGaugeGroup}), there exists a path \[ \tau :[0,1]\to C(\ensuremath{\mathds{S}}^{n}\times P,K)^{K}_{\;\;0} \] such that \mbox{$t\mapsto \tau (t)\cdot f$} is a path connecting \mbox{$f$} and \mbox{$\wt{f}$}. Since \mbox{$\ensuremath{\mathds{S}}^{n}$} is connected it follows that \mbox{$C(\ensuremath{\mathds{S}}^{n}\times P,K)^{K}_{\;\;0}\cong C(\ensuremath{\mathds{S}}^{n},C(P,K)^{K})_{0}\ensuremath{\nobreak\subseteq\nobreak} C(\ensuremath{\mathds{S}}^{n},C(P,K)^{K}_{\;\;0})$}. Thus \mbox{$t\mapsto \tau (t)\cdot f$} represents a continuous path in \mbox{$C(\ensuremath{\mathds{S}}^{n},C(P,K)^{K}_{\;\;0})$} connecting \mbox{$F$} and \mbox{$\wt{F}$}, whence \mbox{$[F]=[\wt{F}]\in \pi_{0}(C(\ensuremath{\mathds{S}}^{n},C(P,K)^{K}_{\;\;0}))$}. That \mbox{$\pi_{n} (\iota )$} is injective follows with Lemma \ref{lem:approximationLemma2} as in \cite[Theorem A.3.7]{neeb03}. \end{prf} \section{The automorphism group as an infinite-dimensional Lie group} \label{sect:theFullAutomorphismGroupAsInfiniteDimensionalLieGroup} In this section we describe the Lie group structure on \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} for a principal \mbox{$K$}-bundle over a compact manifold \mbox{$M$} \emph{without} boundary, i.e., a \emph{closed} compact manifold. We will do this using the extension of abstract groups \begin{align}\label{eqn:extensionOfGauByDiff} \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}}) \hookrightarrow \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\xrightarrow{Q} \ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}, \end{align} where \mbox{$\ensuremath{\operatorname{Diff}} (M)_{\ensuremath{\mathcal{P}}}$} is the image of the homomorphism \mbox{$Q\ensuremath{\nobreak:\nobreak} \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\to\ensuremath{\operatorname{Diff}}(M)$}, \mbox{$F\mapsto F_{M}$} from Definition \ref{def:bundleAutomorphismsAndGaugeTransformations}.
More precisely, we will construct a Lie group structure on \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} that turns \eqref{eqn:extensionOfGauByDiff} into an extension of Lie groups, i.e., into a locally trivial bundle. We point out in advance that we shall not need regularity assumptions on \mbox{$K$}, since we do not lift diffeomorphisms of \mbox{$M$} to bundle automorphisms by lifting vector fields. However, we briefly comment on the regularity of \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} and \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} at the end of the section. \vskip\baselineskip We shall consider bundles over bases without boundary, i.e., our base manifolds will always be closed compact manifolds. Throughout this section we fix one particular given principal \mbox{$K$}-bundle \mbox{$\ensuremath{\mathcal{P}}$} over a closed compact manifold \mbox{$M$} and we furthermore assume that \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB. \begin{definition}(cf. \cite{neeb06nonAbelianExtensions}) If \mbox{$N$}, \mbox{$\wh{G}$} and \mbox{$G$} are Lie groups, then an extension of groups \[ N\hookrightarrow \wh{G}\twoheadrightarrow G \] is called an \emph{extension of Lie groups} if \mbox{$N$} is a split Lie subgroup of \mbox{$\wh{G}$}. This means that \mbox{$(N,q:\wh{G}\to G)$} is a smooth principal \mbox{$N$}-bundle, where \mbox{$q:\wh{G}\to G\cong \wh{G}/N$} is the induced quotient map. We call two extensions \mbox{$N\hookrightarrow \wh{G}_{1}\twoheadrightarrow G$} and \mbox{$N\hookrightarrow \wh{G}_{2}\twoheadrightarrow G$} \emph{equivalent} if there exists a morphism of Lie groups \mbox{$\psi \ensuremath{\nobreak:\nobreak} \wh{G}_{1}\to \wh{G}_{2}$} such that the diagram \[ \begin{CD} N@>>> \wh{G}_{1} @>>> G\\ @V\ensuremath{\operatorname{id}}_{N}VV @V{\psi}VV @V\ensuremath{\operatorname{id}}_{G}VV\\ N@>>> \wh{G}_{2} @>>> G \end{CD} \] commutes.
\end{definition} \begin{remark}\label{rem:choiceOfLocalTrivialisations} Unless stated otherwise, for the rest of this section we choose and fix one particular smooth closed trivialising system \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\cl{V}_{i},\sigma_{i})_{i=1,\ldots,n}$} of \mbox{$\ensuremath{\mathcal{P}}$} such that \begin{itemize} \item each \mbox{$\cl{V}_{i}$} is a compact manifold with corners diffeomorphic to \mbox{$[0,1]^{\dim(M)}$}, \item \mbox{$\cl{\ensuremath{\mathcal{V}}}$} is a refinement of a smooth open trivialising system \mbox{${\mathcal{U}}=({U}_{i},\tau_{i})_{i=1,\ldots,n}$} and we have \mbox{$\cl{V}_{i}\ensuremath{\nobreak\subseteq\nobreak} U_{i}$} and \mbox{$\sigma_{i}=\left.\tau_{i}\right|_{\cl{V}_{i}}$}, \item each \mbox{$\cl{U}_{i}$} is a compact manifold with corners diffeomorphic to \mbox{$[0,1]^{\dim(M)}$} and \mbox{$\tau_{i}$} extends to a smooth section \mbox{$\tau_{i}\ensuremath{\nobreak:\nobreak} \cl{U}_{i}\to P$}, \item \mbox{$\cl{\mathcal{U}}\!=\!(\cl{U}_{i},\tau_{i})_{i=1,\ldots,n}$} is a refinement of a smooth open trivialising system \mbox{$\mathcal{U}'\!=\!(U'_{j},\tau_{j})_{j=1,\ldots,m}$}, \item the values of the transition functions \mbox{$k_{ij}\ensuremath{\nobreak:\nobreak} U'_{i}\cap U'_{j}\to K$} of \mbox{$\mathcal{U}'$} are contained in open subsets \mbox{$W_{ij}$} of \mbox{$K$}, which are diffeomorphic to open zero neighbourhoods of \mbox{$\ensuremath{\mathfrak{k}}$}, \item \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to \mbox{$\cl{\ensuremath{\mathcal{V}}}$} (and thus with respect to \mbox{$\cl{\mathcal{U}}$} by Lemma \ref{lem:propertySUBIsCompatibleWithRefinements}). \end{itemize} We choose \mbox{$\cl{\ensuremath{\mathcal{V}}}$} by starting with an arbitrary smooth closed trivialising system such that \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to this system.
Note that this exists because we assume throughout this section that \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB. Then Lemma \ref{lem:forcingTransitionFunctionsIntoOpenCovers} implies that there exists a refinement \mbox{$\mathcal{U}'=(U'_{j},\tau_{j})_{j=1,\ldots,m}$} such that the transition functions \mbox{$k_{ij}\ensuremath{\nobreak:\nobreak} U'_{i}\cap U'_{j}\to K$} take values in open subsets \mbox{$W_{ij}$} of \mbox{$K$}, which are diffeomorphic to open convex zero neighbourhoods of \mbox{$\ensuremath{\mathfrak{k}}$}. Now each \mbox{$x\in M$} has neighbourhoods \mbox{$V_{x}$} and \mbox{$U_{x}$} such that \mbox{$\cl{V}_{x}\ensuremath{\nobreak\subseteq\nobreak} U_x$}, \mbox{$\cl{V}_{x}$} and \mbox{$\cl{U}_{x}$} are diffeomorphic to \mbox{$[0,1]^{\dim(M)}$} and \mbox{$\cl{U}_{x}\ensuremath{\nobreak\subseteq\nobreak} U'_{j(x)}$} for some \mbox{$j(x)\in\{1,\dots ,m\}$}. Then finitely many \mbox{$V_{x_{1}},\dots ,V_{x_{n}}$} cover \mbox{$M$} and so do \mbox{$U_{x_{1}},\dots ,U_{x_{n}}$}. Furthermore, the sections \mbox{$\tau_{j}$} restrict to smooth sections on \mbox{$V_{i}$}, \mbox{$\cl{V}_{i}$}, \mbox{$U_{i}$} and \mbox{$\cl{U}_{i}$}. This choice of \mbox{$\cl{\mathcal{U}}$} in turn implies that \mbox{$\left.k_{ij}\right|_{\cl{U}_{i}\cap \cl{U}_{j}}$} arises as the restriction of some smooth function on \mbox{$M$}.
In fact, if \mbox{$\varphi_{ij}\ensuremath{\nobreak:\nobreak} W_{ij}\to W'_{ij}\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\mathfrak{k}}$} is a diffeomorphism onto a convex zero neighbourhood and \mbox{$f_{ij}\in C^{\infty}(M,\ensuremath{\mathds{R}})$} is a smooth function with \mbox{$\left.f_{ij}\right|_{\cl{U}_{i}\cap \cl{U}_{j}}\equiv 1$} and \mbox{$\ensuremath{\operatorname{supp}} (f_{ij})\ensuremath{\nobreak\subseteq\nobreak} U'_{i}\cap U'_{j}$}, then \[ m\mapsto \left\{\begin{array}{ll} \varphi_{ij}^{-1}(f_{ij}(m)\cdot \varphi_{ij}(k_{ij}(m))) & \text{ if }m\in U'_{i}\cap U'_{j}\\ \varphi_{ij}^{-1}(0) & \text{ if }m\notin U'_{i}\cap U'_{j} \end{array}\right. \] is a smooth function, because each \mbox{$m\in \partial (U'_{i}\cap U'_{j})$} has a neighbourhood on which \mbox{$f_{ij}$} vanishes, and this function coincides with \mbox{$k_{ij}$} on \mbox{$\cl{U}_{i}\cap \cl{U}_{j}$}. Similarly, let \mbox{$(\gamma_{1},\dots ,\gamma_{n})\in G_{\cl{\mathcal{U}}}(\ensuremath{\mathcal{P}})\ensuremath{\nobreak\subseteq\nobreak} \prod_{i=1}^{n}C^{\infty}(\cl{U}_{i},K)$} be the local description of some element \mbox{$\gamma\in C^{\infty}(P,K)^{K}$}. We will show that each \mbox{$\left.\gamma_{i}\right|_{\cl{V}_{i}}$} arises as the restriction of a smooth map on \mbox{$M$}. In fact, take a diffeomorphism \mbox{$\varphi_{i}\ensuremath{\nobreak:\nobreak} \cl{U}_{i}\to [0,1]^{\dim(M)}$}. Then \mbox{$\cl{V}_{i}\ensuremath{\nobreak\subseteq\nobreak} U_{i}$} implies that we have \mbox{$\varphi_{i}(\cl{V}_{i})\ensuremath{\nobreak\subseteq\nobreak} (0,1)^{\dim(M)}$} and thus there exists an \mbox{$\varepsilon >0$} such that \mbox{$\varphi_{i} (\cl{V}_{i})\ensuremath{\nobreak\subseteq\nobreak} (\varepsilon ,1-\varepsilon)^{\dim (M)}$} for all \mbox{$i=1,\dots ,n$}.
Now let \[ f\ensuremath{\nobreak:\nobreak} [0,1]^{\dim(M)}\backslash (\varepsilon ,1-\varepsilon)^{\dim(M)} \to [\varepsilon ,1-\varepsilon]^{\dim(M)} \] be a map that restricts to the identity on \mbox{$\partial [\varepsilon ,1-\varepsilon]^{\dim(M)}$} and collapses \mbox{$\partial [0,1]^{\dim(M)}$} to a single point \mbox{$x_{0}$}. We then set \[ \gamma'_{i}\ensuremath{\nobreak:\nobreak} M\to K\quad m\mapsto \left\{\begin{array}{ll} \gamma_{i}(m) & \text{ if }m\in \cl{U}_{i}\text{ and } \varphi_{i} (m)\in [\varepsilon ,1-\varepsilon]^{\dim(M)}\\ \gamma_{i}(\varphi_{i}^{-1}(f(\varphi_{i} (m)))) & \text{ if }m\in \cl{U}_{i}\text{ and } \varphi_{i} (m)\notin (\varepsilon ,1-\varepsilon)^{\dim(M)}\\ \gamma_{i}(\varphi_{i}^{-1}(x_{0})) & \text{ if }m\notin U_{i}, \end{array} \right. \] and \mbox{$\gamma '_{i}$} is well-defined and continuous, because \mbox{$f(\varphi_{i} (m))=\varphi_{i} (m)$} if \mbox{$\varphi_{i} (m)\in \partial [\varepsilon ,1-\varepsilon]^{\dim(M)}$} and \mbox{$f(\varphi_{i} (m))=x_{0}$} if \mbox{$\varphi_{i} (m)\in\partial [0,1]^{\dim(M)}$}. Since \mbox{$\gamma '_{i}$} coincides with \mbox{$\gamma_{i}$} on the neighbourhood \mbox{$\varphi_{i}^{-1}((\varepsilon ,1-\varepsilon)^{\dim(M)})$}, it is thus smooth on this neighbourhood. Now \cite[Corollary 12]{approx} yields a smooth map \mbox{$\wt{\gamma}_{i}$} on \mbox{$M$} with \mbox{$\left.\gamma_{i}\right|_{\cl{V}_{i}}= \left.\wt{\gamma}_{i}\right|_{\cl{V}_{i}}$}. \end{remark} We now describe a strategy for lifting special diffeomorphisms to bundle automorphisms. This should motivate the procedure of this section. \begin{remark}\label{rem:liftingDiffeomorphismsToBundleAutomorphisms} Let \mbox{$U\ensuremath{\nobreak\subseteq\nobreak} M$} be open and trivialising with section \mbox{$\sigma :U\to P$} and corresponding \mbox{$k_{\sigma}:\pi^{-1}(U)\to K$}, given by \mbox{$\sigma (\pi (p))\cdot k_{\sigma}(p)=p$}.
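Note that \mbox{$k_{\sigma}$} is automatically equivariant in the sense that \mbox{$k_{\sigma}(p\cdot k)=k_{\sigma}(p)\cdot k$}. As a quick check, using only the defining equation of \mbox{$k_{\sigma}$}, we have \[ \sigma (\pi (p\cdot k))\cdot k_{\sigma}(p\cdot k)=p\cdot k =\sigma (\pi (p))\cdot k_{\sigma}(p)\cdot k, \] and since \mbox{$\pi (p\cdot k)=\pi (p)$} and \mbox{$K$} acts freely on the fibres, this yields \mbox{$k_{\sigma}(p\cdot k)=k_{\sigma}(p)\cdot k$}.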
If \mbox{$g \in\ensuremath{\operatorname{Diff}}(M)$} is such that \mbox{$\ensuremath{\operatorname{supp}}(g)\ensuremath{\nobreak\subseteq\nobreak} U$}, then we may define a smooth bundle automorphism \mbox{$\wt{g}$} by \[ \wt{g }(p)=\left\{ \begin{array}{ll} \sigma \left(g\left(\pi (p) \right) \right)\cdot k_{\sigma}(p) &\text{ if }p\in\pi^{-1}(U)\\ p&\text{else,} \end{array} \right. \] because each \mbox{$x\in \partial U$} has a neighbourhood on which \mbox{$g$} is the identity. Furthermore, one easily verifies \mbox{$Q(\wt{g})=\wt{g}_{M}=g$} and \mbox{$\wt{g^{-1}}=\wt{g}^{-1}$}, where \mbox{$Q\ensuremath{\nobreak:\nobreak}\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\to \ensuremath{\operatorname{Diff}}(M)$} is the homomorphism from Definition \ref{def:bundleAutomorphismsAndGaugeTransformations}. \end{remark} \begin{remark}\label{rem:chartsForDiffeomorphismGroups} Let \mbox{$M$} be a closed compact manifold with a fixed Riemannian metric \mbox{$g$} and let \mbox{$\pi \ensuremath{\nobreak:\nobreak} TM\to M$} be its tangent bundle and \mbox{$\ensuremath{\operatorname{Exp}} \ensuremath{\nobreak:\nobreak} TM\to M$} be the exponential mapping of \mbox{$g$}. Then \mbox{$\pi \times \ensuremath{\operatorname{Exp}}\ensuremath{\nobreak:\nobreak} TM\to M\times M$}, \mbox{$X_{m}\mapsto (m,\ensuremath{\operatorname{Exp}} (X_{m}))$} restricts to a diffeomorphism on an open neighbourhood \mbox{$U$} of the zero section in \mbox{$TM$}. We set \mbox{$O':=\{X\in\ensuremath{\mathcal{V}}(M):X(M)\ensuremath{\nobreak\subseteq\nobreak} U\}$} and define \[ \varphi^{-1}:O'\to C^{\infty}(M,M),\quad \varphi^{-1} (X)(m)=\ensuremath{\operatorname{Exp}}(X(m)). \] For the following, observe that \mbox{$\varphi^{-1}(X)(m)=m$} if and only if \mbox{$X(m)=0_{m}$}. After shrinking \mbox{$O'$} to a convex open neighbourhood in the \mbox{$C^{1}$}-topology, one can also ensure that \mbox{$\varphi^{-1}(X)\in \ensuremath{\operatorname{Diff}}(M)$} for all \mbox{$X\in O'$}.
Since \mbox{$\pi \times \ensuremath{\operatorname{Exp}}$} is bijective on \mbox{$U$}, \mbox{$\varphi^{-1}$} maps \mbox{$O'$} bijectively to \mbox{$O:=\varphi^{-1}(O')\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\operatorname{Diff}}(M)$} and thus endows \mbox{$O$} with a smooth manifold structure. Furthermore, it can be shown that, in view of Proposition \ref{prop:localDescriptionsOfLieGroups}, this chart actually defines a Lie group structure on \mbox{$\ensuremath{\operatorname{Diff}}(M)$} (cf.\ \cite{leslie67}, \cite[Theorem 43.1]{krieglmichor97} or \cite{gloeckner02patched}). It is even possible to put Lie group structures on \mbox{$\ensuremath{\operatorname{Diff}}(M)$} in the case of non-compact manifolds, possibly with corners \cite[Theorem 11.11]{michor80}, but we will not go into this generality here. \end{remark} \begin{lemma}\label{lem:decompositionOfDiffeomorphisms} For the open cover \mbox{$V_{1},\dots ,V_{n}$} of the closed compact manifold \mbox{$M$} and the open identity neighbourhood \mbox{$O\ensuremath{\nobreak\subseteq\nobreak}\ensuremath{\operatorname{Diff}}(M)$} from Remark \ref{rem:chartsForDiffeomorphismGroups}, there exist smooth maps \begin{align}\label{eqn:decompositionOfDiffeomorphisms} s_{i}:O\to O\circ O^{-1} \end{align} for \mbox{$1\leq i\leq n$} such that \mbox{$\ensuremath{\operatorname{supp}}(s_{i}(g))\ensuremath{\nobreak\subseteq\nobreak} V_{i}$} and \mbox{$s_{n}(g)\op{\circ }\dots \op{\circ } s_{1}(g)=g$}. \end{lemma} \begin{prf}(cf.\ \cite[Proposition 1]{hallerTeichmann04}) Let \mbox{$f_{1},\ldots,f_{n}$} be a partition of unity subordinate to the open cover \mbox{$V_{1},\ldots,V_{n}$} and let \mbox{$\varphi\ensuremath{\nobreak:\nobreak} O\to \varphi (O)\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\mathcal{V}} (M)$} be the chart of \mbox{$\ensuremath{\operatorname{Diff}}(M)$} from Remark \ref{rem:chartsForDiffeomorphismGroups}. In particular, \mbox{$\varphi^{-1}(X)(m)=m$} if \mbox{$X(m)=0_{m}$}.
Since \mbox{$\varphi (O)$} is convex, we may define \mbox{$s_{i}\ensuremath{\nobreak:\nobreak} O\to O\op{\circ}O^{-1}$}, \[ s_{i}(g)= \varphi^{-1}\big((f_{n}+\ldots+f_{i})\cdot\varphi (g) \big) \op{\circ} \big(\varphi^{-1}\big((f_{n}+\ldots+f_{i+1})\cdot \varphi(g) \big)\big)^{-1} \] if \mbox{$i<n$} and \mbox{$s_{n}(g)=\varphi^{-1} (f_{n}\cdot \varphi (g ))$}, which are smooth since they are given by a push-forward of the smooth map \mbox{$\ensuremath{\mathds{R}} \times TM\to TM$}, \mbox{$(\lambda ,X_{m})\mapsto \lambda \cdot X_{m}$}. Furthermore, if \mbox{$f_{i}(x)=0$}, then the left and the right factor cancel each other and thus \mbox{$\ensuremath{\operatorname{supp}}(s_{i}(g ))\ensuremath{\nobreak\subseteq\nobreak} V_{i}$}. \end{prf} The preceding lemma now enables us to lift elements of \mbox{$O\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\operatorname{Diff}}(M)$} to elements of \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$}. \begin{definition}\label{def:sectionFromDiffeomorphismsToBundleAutomorphisms} If \mbox{$O\ensuremath{\nobreak\subseteq\nobreak}\ensuremath{\operatorname{Diff}}(M)$} is the open identity neighbourhood from Remark \ref{rem:chartsForDiffeomorphismGroups} and \mbox{$s_{i}:O\to O\op{\circ }O^{-1}$} are the smooth mappings from Lemma \ref{lem:decompositionOfDiffeomorphisms}, then we define \begin{align}\label{eqn:sectionFromDiffeomorphismIntoBundleAutoporphisms} S\ensuremath{\nobreak:\nobreak} O\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}),\quad g \mapsto S(g ):=\wt{g_{n}}\op{\circ}\dots \op{\circ}\wt{g_{1}}, \end{align} where \mbox{$g_{i}:=s_{i}(g)$} and \mbox{$\wt{g_{i}}$} is the bundle automorphism of \mbox{$\ensuremath{\mathcal{P}}$} from Remark \ref{rem:liftingDiffeomorphismsToBundleAutomorphisms}.
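As a consistency check we note, writing \mbox{$g_{i}:=s_{i}(g)$}, that \[ Q(S(g))=Q(\wt{g_{n}})\op{\circ}\dots \op{\circ}Q(\wt{g_{1}}) =g_{n}\op{\circ}\dots \op{\circ}g_{1}=g, \] where the first equality holds because \mbox{$Q$} is a homomorphism, the second is \mbox{$Q(\wt{g_{i}})=g_{i}$} from Remark \ref{rem:liftingDiffeomorphismsToBundleAutomorphisms} and the last one is Lemma \ref{lem:decompositionOfDiffeomorphisms}.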
This defines a local section for the homomorphism \mbox{$Q\ensuremath{\nobreak:\nobreak} \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\to \ensuremath{\operatorname{Diff}}(M)$}, \mbox{$F\mapsto F_{M}$} from Definition \ref{def:bundleAutomorphismsAndGaugeTransformations}. \end{definition} We shall frequently need an explicit description of \mbox{$S(g)$} in terms of local trivialisations, i.e., how \mbox{$S(g)(\sigma_{i}(x))$} can be expressed in terms of \mbox{$g_{j}$}, \mbox{$\sigma_{j}$} and \mbox{$k_{jj'}$}. \begin{remark}\label{rem:valuesOfTheLiftedDiffeomorphismInTermsOfLocalData} Let \mbox{$x\in V_{i}\ensuremath{\nobreak\subseteq\nobreak} M$} be such that \mbox{$x\notin V_{j}$} for \mbox{$j<i$} and \mbox{$g_{i}(x)\notin V_{j}$} for \mbox{$j> i$}. Then \mbox{$g_{j}(x)=x$} for all \mbox{$j<i$}, \mbox{$g_{j} (g_{i}(x))=g_{i}(x)$} for all \mbox{$j>i$} and thus \mbox{$S(g )(\sigma_{i}(x))=\sigma_{i}(g_{i}(x))=\sigma_{i}(g (x))$}. In general, things are more complicated. The first \mbox{$\wt{g_{j_{1}}}$} in \eqref{eqn:sectionFromDiffeomorphismIntoBundleAutoporphisms} that could move \mbox{$\sigma_{i}(x)$} is the one for the minimal \mbox{$j_{1}$} such that \mbox{$x\in \cl{V}_{j_{1}}$}. We then have \[ \wt{g_{j_{1}}}(\sigma_{i}(x)) =\wt{g_{j_{1}}}(\sigma_{j_{1}}(x))\cdot k_{j_{1}i}(x) =\sigma_{j_{1}}({g_{j_{1}}}(x))\cdot k_{j_{1}i}(x). \] The next \mbox{$\wt{g_{j_{2}}}$} in \eqref{eqn:sectionFromDiffeomorphismIntoBundleAutoporphisms} that could move \mbox{$\wt{g_{j_{1}}}(\sigma_{i}(x))$} in turn is the one for the minimal \mbox{$j_{2}>j_{1}$} such that \mbox{$g_{j_{1}}(x)\in \cl{V}_{j_{2}}$}, and we then have \[ \wt{g_{j_{2}}}(\wt{g_{j_{1}}}(\sigma_{i}(x))) =\sigma_{j_{2}}(g_{j_{2}}\op{\circ }{g_{j_{1}}}(x)) \cdot k_{j_{2}j_{1}}(g_{j_{1}}(x))\cdot k_{j_{1}i}(x). 
\] We eventually get \begin{align}\label{eqn:valuesOfTheLiftedDiffeomorphismsInTermsOflocalData} S(g )(\sigma_{i}(x)) =\sigma_{j_{\ell}}(g (x)) \cdot k_{j_{\ell}j_{\ell-1}}(g_{j_{\ell-1}}\op{\circ}\dots \op{\circ} g_{j_{1}}(x))\cdot\ldots \cdot k_{j_{1}i}(x), \end{align} where \mbox{$\{j_{1},\dots ,j_{\ell}\}\ensuremath{\nobreak\subseteq\nobreak}\{1,\dots ,n\}$} is maximal such that \[ g_{j_{p-1}}\op{\circ}\ldots\op{\circ}g_{j_{1}}(x)\in U_{j_{p}}\cap U_{j_{p-1}} \;\text{ for }\; 2\leq p\leq \ell\;\text{ and }\;j_{1}<\ldots <j_{p}. \] Note that we cannot write down such a formula using all \mbox{$j\in\{1,\dots ,n\}$}, because the corresponding \mbox{$k_{jj'}$} and \mbox{$\sigma_{j}$} would not be defined properly. Of course, \mbox{$g$} and \mbox{$x$} influence the choice of \mbox{$j_{1},\dots ,j_{\ell}$}, but there exist open neighbourhoods \mbox{$O_{g}$} of \mbox{$g$} and \mbox{$U_{x}$} of \mbox{$x$} such that we may use \eqref{eqn:valuesOfTheLiftedDiffeomorphismsInTermsOflocalData} as a formula for all \mbox{$g '\in O_{g}$} and \mbox{$x'\in U_{x}$}. In fact, the action \mbox{$\ensuremath{\operatorname{Diff}}(M)\times M\to M$}, \mbox{$g.m=g (m)$} is smooth by \cite[Proposition 7.2]{gloeckner02patched}, and thus in particular continuous. If \begin{align} g_{j_{p}}\op{\circ}\ldots\op{\circ}g_{j_{1}}(x)\notin \cl{V}_{j} \;&\text{ for }\; 2\leq p\leq \ell\;\text{ and }\;j\notin \{j_{1},\dots ,j_{p}\} \label{eqn:valuesOfTheLiftedDiffeomorphismsInTermsOflocalData1}\\ g_{j_{p}}\op{\circ}\ldots\op{\circ}g_{j_{1}}(x)\in U_{j_{p}}\cap U_{j_{p-1}} \;&\text{ for }\; 2\leq p\leq \ell\;\text{ and }\;j_{1}<\ldots <j_{p} \label{eqn:valuesOfTheLiftedDiffeomorphismsInTermsOflocalData2} \end{align} then this is also true for \mbox{$g '$} and \mbox{$x'$} in some open neighbourhood of \mbox{$g$} and \mbox{$x$}. This yields finitely many open neighbourhoods of \mbox{$g$} and \mbox{$x$} and we define their intersections to be \mbox{$O_{g }$} and \mbox{$U_{x}$}.
Then \eqref{eqn:valuesOfTheLiftedDiffeomorphismsInTermsOflocalData} still holds for \mbox{$g '\in O_{g }$} and \mbox{$x'\in U_{x}$}, because \eqref{eqn:valuesOfTheLiftedDiffeomorphismsInTermsOflocalData1} implies \mbox{$g_{j}(g_{j_{p}}\op{\circ}\ldots\op{\circ}g_{j_{1}}(x)) =g_{j_{p}}\op{\circ}\ldots\op{\circ}g_{j_{1}}(x)$} and \eqref{eqn:valuesOfTheLiftedDiffeomorphismsInTermsOflocalData2} implies that \mbox{$k_{j_{p}j_{p-1}}$} is defined and satisfies the cocycle condition. \end{remark} In order to determine a Lie group structure on \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$}, the map \mbox{$S\ensuremath{\nobreak:\nobreak} O\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} has to satisfy certain smoothness properties, which will be ensured by the subsequent lemmas. \begin{remark}\label{rem:actionsOfTheAutomorphismOnMappingsInterchange} If we identify the normal subgroup \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\unlhd \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} with \mbox{$C^{\infty }(P,K)^{K}$} via \[ C^{\infty}(P,K)^{K}\to \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}}),\quad \gamma \mapsto F_{\gamma } \] with \mbox{$F_{\gamma}(p)=p\cdot \gamma (p)$}, then the conjugation action \mbox{$c:\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\times \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\to\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$}, given by \mbox{$c_{F}(F_{\gamma })=F\op{\circ}F_{\gamma}\op{\circ}F^{-1}$} changes into \[ c:\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\times C^{\infty}(P,K)^{K}\to C^{\infty}(P,K)^{K},\quad (F,\gamma)\mapsto \gamma \op{\circ}F^{-1}. \] In fact, this follows from \[ (F\circ F_{\gamma}\circ F^{-1})(p) =F\big(F^{-1}(p)\cdot \gamma (F^{-1}(p))\big) =p\cdot \gamma (F^{-1}(p))=F_{(\gamma \circ F^{-1})}(p). 
\] \end{remark} In the following remarks and lemmas we show the smoothness of the maps \mbox{$T$}, \mbox{$\omega$} and \mbox{$\omega_{g}$}, mentioned before. \begin{lemma}\label{lem:localActionOftheDiffeomorphismGroupOnTheGaugeAlgebra} Let \mbox{$O\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\operatorname{Diff}}(M)$} be the open identity neighbourhood from Remark \ref{rem:chartsForDiffeomorphismGroups} and let \mbox{$S:O\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} be the map from Definition \ref{def:sectionFromDiffeomorphismsToBundleAutomorphisms}. Then we have that for each \mbox{$F\in \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} the map \mbox{$C^{\infty}(P,\ensuremath{\mathfrak{k}} )^{K}\to C^{\infty}(P,\ensuremath{\mathfrak{k}} )^{K}$}, \mbox{$\eta \mapsto \eta \op{\circ}F^{-1}$} is an automorphism of \mbox{$C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}$} and the map \[ t\ensuremath{\nobreak:\nobreak} C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}\times O\to C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K},\quad (\eta ,g )\mapsto \eta \op{\circ}S(g )^{-1} \] is smooth. \end{lemma} \begin{prf} That \mbox{$\eta \mapsto \eta \op{\circ}F^{-1}$} is an element of \mbox{$\ensuremath{\operatorname{Aut}}(C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K})$} follows immediately from the (pointwise) definition of the bracket on \mbox{$C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}$}. 
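Spelled out, for \mbox{$\eta ,\mu \in C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}$} and \mbox{$p\in P$} we have \[ [\eta \op{\circ}F^{-1},\mu \op{\circ}F^{-1}](p) =[\eta (F^{-1}(p)),\mu (F^{-1}(p))] =[\eta ,\mu ](F^{-1}(p)), \] so that \mbox{$\eta \mapsto \eta \op{\circ}F^{-1}$} preserves the bracket; that \mbox{$\eta \op{\circ}F^{-1}$} is again equivariant follows in the same way from \mbox{$F^{-1}\op{\circ}\rho_{k}=\rho_{k}\op{\circ}F^{-1}$}.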
We shall use the previously established isomorphisms \mbox{$C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}\cong \ensuremath{\mathfrak{g}}_{\mathcal{U}'}(\ensuremath{\mathcal{P}})\cong\ensuremath{\mathfrak{g}}_{\mathcal{U}}(\ensuremath{\mathcal{P}})\cong\ensuremath{\mathfrak{g}}_{\ensuremath{\mathcal{V}}}(\ensuremath{\mathcal{P}})$} from Proposition \ref{prop:isomorphismOfTheGaugeAlgebra} and reduce the smoothness of \mbox{$t$} to the smoothness of \[ C^{\infty}(M,\ensuremath{\mathfrak{k}})\times \ensuremath{\operatorname{Diff}}(M)\to C^{\infty}(M,\ensuremath{\mathfrak{k}}),\quad (\eta ,g)\mapsto \eta \op{\circ }g^{-1} \] from \cite[Proposition 6]{gloeckner02patched} and to the action of \mbox{$g_{i}^{-1}$} on \mbox{$C^{\infty}(\cl{V}_{i},\ensuremath{\mathfrak{k}})$}, because we have no description of what \mbox{$g_{i}^{-1}$} does with \mbox{$U_{j}$} for \mbox{$j\neq i$}. It clearly suffices to show that the map \[ t_{i}:C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}\times \ensuremath{\operatorname{Diff}}(M)\to C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}\times \ensuremath{\operatorname{Diff}} (M),\quad (\eta ,g )\mapsto (\eta\op{\circ}\wt{g_{i}}^{-1},g ) \] is smooth for each \mbox{$1\leq i\leq n$}, because then \mbox{$t=\ensuremath{\operatorname{pr}}_{1}\op{\circ }t_{n}\op{\circ}\ldots\op{\circ}t_{1}$} is smooth. This in turn follows from the smoothness of \begin{align}\label{eqn:localActionOnTheCurrentAlgebra} C^{\infty}(U'_{i},\ensuremath{\mathfrak{k}})\times \ensuremath{\operatorname{Diff}}(M)\to C^{\infty}({U}_{i},\ensuremath{\mathfrak{k}}),\quad (\eta ,g)\mapsto \eta \op{\circ } \left.g_{i}^{-1}\right|_{U_{i}}, \end{align} because this is the local description of \mbox{$t_{i}$}.
In fact, for each \mbox{$j\neq i$} there exists an open subset \mbox{$V'_{j}$} with \mbox{$U_{j}\backslash U_{i}\ensuremath{\nobreak\subseteq\nobreak} V'_{j}\ensuremath{\nobreak\subseteq\nobreak} U_{j}\backslash V_{i}$}, because \mbox{$\cl{V}_{i}\ensuremath{\nobreak\subseteq\nobreak} U_{i}$} and \mbox{$U_{j}$} is diffeomorphic to \mbox{$(0,1)^{\dim(M)}$}. Furthermore, we set \mbox{$V'_{i}:=U_{i}$}. Then \mbox{$(V'_{1},\dots ,V'_{n})$} is an open cover of \mbox{$M$}, leading to a refinement \mbox{$\ensuremath{\mathcal{V}}'$} of the trivialising system \mbox{$\mathcal{U}'$} and we have \[ t_{i}:\ensuremath{\mathfrak{g}}_{\mathcal{U}'}(\ensuremath{\mathcal{P}})\times O\to \ensuremath{\mathfrak{g}}_{\ensuremath{\mathcal{V}} '}(\ensuremath{\mathcal{P}}),\quad ((\eta_{1},\dots ,\eta_{n}),g )\mapsto (\left.\eta_{1}\right|_{{V'_{1}}},\dots , \left.\eta_{i}\op{\circ}g_{i}^{-1}\right|_{{V'_{i}}},\dots , \left.\eta_{n}\right|_{{V'_{n}}}) \] because \mbox{$\ensuremath{\operatorname{supp}} (g_{i})\ensuremath{\nobreak\subseteq\nobreak} V_{i}$} and \mbox{$V'_{j}\cap V_{i}=\emptyset$} if \mbox{$j\neq i$}. To show that \eqref{eqn:localActionOnTheCurrentAlgebra} is smooth, choose some \mbox{$f_{i}\in C^{\infty}(M,\ensuremath{\mathds{R}})$} with \mbox{$\left.f_{i}\right|_{U_{i}}\equiv 1$} and \mbox{$\ensuremath{\operatorname{supp}} (f_{i})\ensuremath{\nobreak\subseteq\nobreak} U'_{i}$}. Then \[ h_{i}:C^{\infty}(U'_{i},\ensuremath{\mathfrak{k}})\to C^{\infty}(M,\ensuremath{\mathfrak{k}} ),\quad \eta \mapsto \left( m\mapsto \left\{\begin{array}{ll} f_{i}(m)\cdot \eta (m) & \text{ if } m\in U'_{i}\\ 0 & \text{ if }m\notin U'_{i} \end{array} \right.\right) \] is smooth by Corollary \ref{cor:gluingForVectorBundles_OPEN_Version}, because \mbox{$\eta \mapsto \left.f_{i}\right|_{U'_{i}}\cdot\eta $} is linear, continuous and thus smooth. 
Now we have \mbox{$\ensuremath{\operatorname{supp}}(g_{i})\ensuremath{\nobreak\subseteq\nobreak} V_{i}\ensuremath{\nobreak\subseteq\nobreak} U_{i}$} and thus \mbox{$\left.h_{i}(\eta)\op{\circ}g_{i}^{-1}\right|_{U_{i}} =\left.\eta\op{\circ}g_{i}^{-1}\right|_{U_{i}}$} depends smoothly on \mbox{$g$} and \mbox{$\eta$} by Corollary \ref{cor:restrictionMapIsSmooth}. \end{prf} The following proofs share a common idea. We will always have to show that certain mappings with values in \mbox{$C^{\infty}(P,K)^{K}$} are smooth. This can be established by showing that their compositions with the pull-back \mbox{$(\sigma_{i})^{*}$} of a section \mbox{$\sigma_{i}\ensuremath{\nobreak:\nobreak} \cl{V}_{i}\to P$} (then with values in \mbox{$C^{\infty}(\cl{V}_{i},K)$}) are smooth for all \mbox{$1\leq i \leq n$}. As described in Remark \ref{rem:valuesOfTheLiftedDiffeomorphismInTermsOfLocalData}, it will not be possible to write down explicit formulas for these mappings in terms of the transition functions \mbox{$k_{ij}$} for all \mbox{$x\in\cl{V}_{i}$} simultaneously, but we will be able to do so on some open neighbourhood \mbox{$U_{x}$} of \mbox{$x$}. For different \mbox{$x_{1}$} and \mbox{$x_{2}$} these formulas will define the same mapping on \mbox{${U}_{x_{1}}\cap {U}_{x_{2}}$}, because there they define \mbox{$(\sigma_{i}^{*}(S(g)))=S(g)\op{\circ }\sigma_{i}$}. By restriction and gluing we will thus be able to reconstruct the original mappings and then see that they depend smoothly on their arguments. 
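The reduction to the pull-backs rests on the equivariance of the mappings in question: if \mbox{$p\in \pi^{-1}(\cl{V}_{i})$}, then \mbox{$p=\sigma_{i}(\pi (p))\cdot k_{\sigma_{i}}(p)$} and hence, for \mbox{$\gamma \in C^{\infty}(P,K)^{K}$}, \[ \gamma (p)=\gamma \big(\sigma_{i}(\pi (p))\cdot k_{\sigma_{i}}(p)\big) =k_{\sigma_{i}}(p)^{-1}\cdot \big(\gamma \op{\circ}\sigma_{i}\big)(\pi (p))\cdot k_{\sigma_{i}}(p), \] so \mbox{$\gamma$} is uniquely determined by the pull-backs \mbox{$\sigma_{i}^{*}(\gamma )=\gamma \op{\circ}\sigma_{i}$} for \mbox{$i=1,\dots ,n$}, since the \mbox{$\cl{V}_{i}$} cover \mbox{$M$}.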
\begin{lemma} \label{lem:orbitMapFromDiffeomorhismsToBundleAutomorphismsIsSmooth} If \mbox{$O\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\operatorname{Diff}}(M)$} is the open identity neighbourhood from Remark \ref{rem:chartsForDiffeomorphismGroups} and if \mbox{$S:O\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} is the map from Definition \ref{def:sectionFromDiffeomorphismsToBundleAutomorphisms}, then for each \mbox{$\gamma \in C^{\infty}(P,K)^{K}$} the map \[ O\ni g \mapsto \gamma \op{\circ}S(g)^{-1}\in C^{\infty}(P,K)^{K} \] is smooth. \end{lemma} \begin{prf} It suffices to show that \mbox{$\gamma \op{\circ}S(g)^{-1}\op{\circ }\left.\sigma_{i}\right|_{\cl{V}_{i}}$} depends smoothly on \mbox{$g$} for \mbox{$1\leq i\leq n$}. Let \mbox{$(\gamma_{1},\dots ,\gamma_{n})\in G_{\mathcal{U}}(\ensuremath{\mathcal{P}})\ensuremath{\nobreak\subseteq\nobreak} \prod_{i=1}^{n}C^{\infty}(\cl{U}_{i},K)$} be the local description of \mbox{$\gamma$}. Fix \mbox{$g \in O$} and \mbox{$x\in \cl{V}_{i}$}. Then Remark \ref{rem:valuesOfTheLiftedDiffeomorphismInTermsOfLocalData} yields open neighbourhoods \mbox{$O_{g}$} of \mbox{$g$} and \mbox{$U_{x}$} of \mbox{$x$} (w.l.o.g. such that \mbox{$\cl{U}_{x}\ensuremath{\nobreak\subseteq\nobreak} \cl{V}_{i}$} is a manifold with corners) such that \begin{multline*} \gamma (S(g')^{-1}(\sigma_{i}(x'))) = \gamma \big(\sigma_{j_{\ell}}(g' (x')) \underbrace{\cdot k_{j_{\ell}j_{\ell-1}}(g'_{j_{\ell-1}}\op{\circ}\dots \op{\circ} g'_{j_{1}}(x'))\cdot\ldots \cdot k_{j_{1}i}(x')}_{:=\kappa_{x,g '}(x')} \big)\\ = \kappa_{x,g'}(x')^{-1}\cdot\gamma\big(\sigma_{j_{\ell}}(g'(x'))\big)\cdot \kappa_{x,g'}(x') = \underbrace{ \kappa_{x,g'}(x')^{-1}\cdot\gamma_{j_{\ell}}(g'(x'))\cdot \kappa_{x,g'}(x')}_{:=\theta_{x,g'}(x')} \end{multline*} for all \mbox{$g '\in O_{g}$} and \mbox{$x'\in \cl{U}_{x}$}.
Since we will not vary \mbox{$i$} and \mbox{$g$} in the sequel, we suppressed the dependence of \mbox{$\kappa_{x,g '}(x')$} and \mbox{$\theta_{x,g '}(x')$} on \mbox{$i$} and \mbox{$g$}. Note that each \mbox{$k_{jj'}$} and \mbox{$\gamma_{i}$} can be assumed to be defined on \mbox{$M$} (cf.\ Remark \ref{rem:choiceOfLocalTrivialisations}). Thus, for fixed \mbox{$x$}, the formula for \mbox{$\theta _{x,g '}$} defines a smooth function on \mbox{$M$} that depends smoothly on \mbox{$g'$}, because the action of \mbox{$\ensuremath{\operatorname{Diff}}(M)$} on \mbox{$C^{\infty}(M,K)$} is smooth (cf.\ \cite[Proposition 10.3]{gloeckner02patched}). Furthermore, \mbox{$\theta_{x_{1},g'}$} and \mbox{$\theta_{x_{2},g '}$} coincide on \mbox{$\cl{U}_{x_{1}}\cap \cl{U}_{x_{2}}$}, because both define \mbox{$\gamma \op{\circ}S(g ')^{-1}\op{\circ }\sigma_{i}$} there. Now finitely many \mbox{$U_{x_{1}},\dots ,U_{x_{m}}$} cover \mbox{$\cl{V}_{i}$}, and since the gluing and restriction maps from Lemma \ref{lem:restrictionMapForCurrentGroupIsSmooth} and Proposition \ref{prop:gluingLemmaForCurrentGroup} are smooth we have that \[ \gamma \op{\circ}S(g ')^{-1}\op{\circ}\sigma_{i}=\ensuremath{\operatorname{glue}} (\left.\theta_{x_{1},g '}\right|_{\cl{U}_{x_{1}}} ,\dots , \left.\theta_{x_{m},g '}\right|_{\cl{U}_{x_{m}}}) \] depends smoothly on \mbox{$g '$}. \end{prf} \begin{lemma}\label{lem:localActionOfTheDiffeomorphismGroupOnTheGaugeGroup} Let \mbox{$O\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\operatorname{Diff}}(M)$} be the open identity neighbourhood from Remark \ref{rem:chartsForDiffeomorphismGroups} and let \mbox{$S:O\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} be the map from Definition \ref{def:sectionFromDiffeomorphismsToBundleAutomorphisms}. 
Then we have that for each \mbox{$F\in \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} the map \mbox{$c_{F}\ensuremath{\nobreak:\nobreak} C^{\infty}(P,K)^{K}\to C^{\infty}(P,K)^{K}$}, \mbox{$\gamma \mapsto \gamma \op{\circ}F^{-1}$} is an automorphism of \mbox{$C^{\infty}(P,K)^{K}$} and the map \begin{align} \label{eqn:definitionOfTheOuterActionOfTheDiffeomorphismsOnTheGaugeGroup} T:C^{\infty}(P,K)^{K}\times O\to C^{\infty }(P,K)^{K},\quad (\gamma ,g)\mapsto \gamma \op{\circ}S(g)^{-1} \end{align} is smooth. \end{lemma} \begin{prf} Since \mbox{$\gamma \mapsto \gamma \op{\circ}F^{-1}$} is a group homomorphism, it suffices to show that it is smooth on a unit neighbourhood. Because the charts on \mbox{$C^{\infty}(P,K)^{K}$} are constructed by push-forwards (cf.\ Proposition \ref{prop:gaugeGroupInLocalCoordinatesIsLieGroup}) this follows immediately from the fact that the corresponding automorphism of \mbox{$C^{\infty}(P,\ensuremath{\mathfrak{k}})^{K}$}, given by \mbox{$\eta \mapsto \eta \op{\circ}F^{-1}$}, is continuous and thus smooth. For the same reason, Lemma \ref{lem:localActionOftheDiffeomorphismGroupOnTheGaugeAlgebra} implies that there exists a unit neighbourhood \mbox{$U\ensuremath{\nobreak\subseteq\nobreak} C^{\infty}(P,K)^{K}$} such that \[ U\times O\to C^{\infty}(P,K)^{K},\quad (\gamma ,g)\mapsto \gamma \op{\circ}S(g )^{-1} \] is smooth. Now for each \mbox{$\gamma_{0} \in C^{\infty}(P,K)^{K}$} there exists an open neighbourhood \mbox{$U_{\gamma_{0} }$} with \mbox{$\gamma_{0}^{-1}\cdot U_{\gamma_{0}}\ensuremath{\nobreak\subseteq\nobreak} U$}. 
Hence \[ \gamma \op{\circ}S(g )^{-1}=(\gamma_{0}\cdot \gamma_{0}^{-1}\cdot \gamma )\op{\circ}S(g)^{-1}= \big(\gamma_{0}\op{\circ}S(g)^{-1}\big)\cdot\big( (\gamma_{0}^{-1}\cdot \gamma)\op{\circ}S(g)^{-1}\big), \] and the first factor depends smoothly on \mbox{$g$} due to Lemma \ref{lem:orbitMapFromDiffeomorhismsToBundleAutomorphismsIsSmooth}, and the second factor depends smoothly on \mbox{$\gamma$} and \mbox{$g$}, because \mbox{$\gamma_{0}^{-1}\cdot \gamma \in U$}. \end{prf} \begin{lemma}\label{lem:smoothCocycleOnAutomorhismGroup} If \mbox{$O\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\operatorname{Diff}}(M)$} is the open identity neighbourhood from Remark \ref{rem:chartsForDiffeomorphismGroups} and if \mbox{$S:O\to\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} is the map from Definition \ref{def:sectionFromDiffeomorphismsToBundleAutomorphisms}, then \begin{align} \label{eqn:definitionOfTheCocycleFromDiffeomorphismsToTheGaugeGroup} \omega \ensuremath{\nobreak:\nobreak} O\times O\to C^{\infty}(P,K)^{K},\quad (g ,g ')\mapsto S (g )\op{\circ } S (g ')\op{\circ } S(g \op{\circ } g')^{-1} \end{align} is smooth. Furthermore, if \mbox{$Q\ensuremath{\nobreak:\nobreak} \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\to \ensuremath{\operatorname{Diff}}(M)$}, \mbox{$F\mapsto F_{M}$} is the homomorphism from Definition \ref{def:bundleAutomorphismsAndGaugeTransformations}, then for each \mbox{$g \in Q(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}))$} there exists an open identity neighbourhood \mbox{$O_{g}\ensuremath{\nobreak\subseteq\nobreak} O$} such that \begin{align}\label{eqn:conjugationOfTheCocycle} \omega_{g}:O_{g }\to C^{\infty}(P,K)^{K},\quad g '\mapsto F\op{\circ } S(g ')\op{\circ } F^{-1} \op{\circ } S(g \op{\circ } g' \op{\circ }g^{-1})^{-1} \end{align} is smooth for any \mbox{$F\in \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} with \mbox{$F_{M}=g$}.
\end{lemma} \begin{prf} First observe that \mbox{$\omega (g ,g ')$} actually is an element of \mbox{$C^{\infty}(P,K)^{K}\cong \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})=\ker(Q)$}, because \mbox{$Q$} is a homomorphism of groups, \mbox{$S$} is a section of \mbox{$Q$} and thus \[ Q(\omega (g ,g '))=Q(S(g))\op{\circ } Q(S(g '))\op{\circ } Q(S(g \op{\circ}g '))^{-1}=\ensuremath{\operatorname{id}}_{M}. \] To show that \mbox{$\omega$} is smooth, we derive an explicit formula for \mbox{$\omega(g, g ')\op{\circ}\sigma_{i}\in C^{\infty}(\cl{V}_{i},K)$} that depends smoothly on \mbox{$g$} and \mbox{$g '$}. Denote \mbox{$\wh{g }:=g \op{\circ}g '$} for \mbox{$g ,g '\in O$} and fix \mbox{$g ,g '\in O$}, \mbox{$x\in \cl{V}_{i}$}. Proceeding as in Remark \ref{rem:valuesOfTheLiftedDiffeomorphismInTermsOfLocalData}, we find \mbox{$i_{1},\dots ,i_{\ell}$} such that \[ S(\wh{g})^{-1}(\sigma_{i}(x))=\sigma_{i_{\ell}}(\wh{g}^{-1}(x))\cdot k_{i_{\ell}i_{\ell-1}}((\wh{g}_{i_{\ell-1}})^{-1}\op{\circ}\ldots \op{\circ}\left(\wh{g}_{i_{1}}\right)^{-1}(x))\cdot \ldots\cdot k_{i_{1}i}(x). \] Accordingly we find \mbox{$i'_{\ell'},\dots ,i'_{1}$} for \mbox{$S(g')$} and \mbox{$i''_{\ell''},\dots ,i''_{1}$} for \mbox{$S(g)$}. We get as in Remark \ref{rem:valuesOfTheLiftedDiffeomorphismInTermsOfLocalData} open neighbourhoods \mbox{$O_{g },O_{g '}$} of \mbox{$g ,g '$} and \mbox{$U_{x}$} of \mbox{$x$} (w.l.o.g.
such that \mbox{$\cl{U}_{x}\ensuremath{\nobreak\subseteq\nobreak} \cl{V}_{i}$} is a manifold with corners) such that for \mbox{$h\in O_{g}$}, \mbox{$h'\in O_{g '}$} and \mbox{$x'\in \cl{U}_{x}$} we have \mbox{$ S(h )\op{\circ} S(h ')\op{\circ} S(h \op{\circ} h')^{-1}(\sigma_{i}(x')) = $} \begin{alignat*}{3} \sigma_{i}(x') \cdot \Big[ & k_{i\,i''_{\ell''}}(x') &&&&\\ \cdot & k_{i''_{\ell''}i''_{\ell''-1}}\big(h_{i''_{\ell''-1}}\op{\circ}\ldots \op{\circ }h_{i''_{1}}\op{\circ } h^{-1}(x')\big) \cdot\ldots & \,\cdot & k_{i''_{1}i'_{\ell'}}(h^{-1}(x')) \\ \cdot & k_{i'_{\ell'}i'_{\ell'-1}}\big(h'_{i'_{\ell'-1}}\op{\circ }\ldots \op{\circ} h'_{i'_{1}}\op{\circ } \wh{h}^{-1}(x')\big) \cdot \ldots & \cdot & k_{i'_{1}i_{\ell}}(\wh{h}^{-1}(x')) \\ \cdot & k_{i_{\ell}i_{\ell-1}}\big((\wh{h}_{i_{\ell-1}})^{-1}\op{\circ}\ldots \op{\circ }(\wh{h}_{i_{1}})^{-1}(x')\big) \cdot\ldots & \cdot & k_{i_{1}i}(x')\Big]. \end{alignat*} Denote by \mbox{$\kappa_{x,h,h'}(x')\in K$} the element in brackets on the right hand side, and note that it defines \mbox{$\omega (h,h')\op{\circ}\sigma_{i}(x')$} by Remark \ref{rem:gaugeGroupIsIsomorphicToEquivariantMappings}. Since we will not vary \mbox{$g$} and \mbox{$g '$} in the sequel, we suppressed the dependence of \mbox{$\kappa_{x,h,h'}(x')$} on them. Now each \mbox{$k_{ij}$} can be assumed to be defined on \mbox{$M$} (cf.\ Remark \ref{rem:choiceOfLocalTrivialisations}). Thus, for fixed \mbox{$x$}, the formula for \mbox{$\kappa_{x,h,h'}$} defines a smooth function on \mbox{$M$} that depends smoothly on \mbox{$h$} and \mbox{$h'$}, because the action of \mbox{$\ensuremath{\operatorname{Diff}}(M)$} on \mbox{$C^{\infty}(M,K)$} is smooth (cf.\ \cite[Proposition 10.3]{gloeckner02patched}).
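As a plausibility check, suppose that \mbox{$x'$} is fixed by \mbox{$h$}, \mbox{$\wh{h}$} and all the \mbox{$h_{j}$}, \mbox{$h'_{j}$} and \mbox{$\wh{h}_{j}$} occurring above. Then every transition function in the bracket is evaluated at \mbox{$x'$}, so that the cocycle condition \mbox{$k_{ab}(y)\cdot k_{bc}(y)=k_{ac}(y)$} lets the product telescope, \[ \kappa_{x,h,h'}(x')=k_{i\,i''_{\ell''}}(x')\cdot k_{i''_{\ell''}i''_{\ell''-1}}(x')\cdot \ldots \cdot k_{i_{1}i}(x')=k_{ii}(x'), \] which is the identity of \mbox{$K$}, in accordance with the fact that the gauge transformation \mbox{$\omega (h,h')$} fixes \mbox{$\sigma_{i}(x')$} wherever nothing is moved.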
Furthermore, \mbox{$\kappa_{x_{1},h,h'}$} coincides with \mbox{$\kappa_{x_{2},h,h'}$} on \mbox{$\cl{U}_{x_{1}}\cap \cl{U}_{x_{2}}$}, because \[ \sigma_{i}(x')\cdot \kappa_{x_{1},h,h'}(x')=S(h )\op{\circ } S(h ')\op{\circ } S(h \op{\circ } h')^{-1}(\sigma_{i}(x')) =\sigma_{i}(x')\cdot \kappa_{x_{2},h,h'}(x') \] for \mbox{$x'\in \cl{U}_{x_{1}}\cap \cl{U}_{x_{2}}$}. Now finitely many \mbox{$U_{x_{1}},\dots ,U_{x_{m}}$} cover \mbox{$\cl{V}_{i}$} and we thus see that \[ \omega (h,h')\op{\circ }\sigma_{i}= \ensuremath{\operatorname{glue}}( \left.\kappa_{x_{1},h,h'}\right|_{\cl{U}_{x_{1}}} ,\dots , \left.\kappa_{x_{m},h,h'}\right|_{\cl{U}_{x_{m}}} ) \] depends smoothly on \mbox{$h$} and \mbox{$h'$}. We derive an explicit formula for \mbox{$\omega_{g}(g ')\op{\circ}\sigma_{i}\in C^{\infty}(\cl{V}_{i},K)$} to show the smoothness of \mbox{$\omega_{g}$}. Let \mbox{$O_{g }\ensuremath{\nobreak\subseteq\nobreak} O$} be an open identity neighbourhood with \mbox{$g \op{\circ}O_{g}\op{\circ }g^{-1}\ensuremath{\nobreak\subseteq\nobreak} O$} and denote \mbox{$\ol{g '}=g \op{\circ } g '\op{\circ } g ^{-1}$} for \mbox{$g' \in O_{g }$}. Fix \mbox{$g '$} and \mbox{$x\in\cl{V}_{i}$}. Proceeding as in Remark \ref{rem:valuesOfTheLiftedDiffeomorphismInTermsOfLocalData} we find \mbox{$j_{\ell},\dots ,j_{1}$} such that \[ S(\ol{g'})^{-1}(\sigma_{i}(x))=\sigma_{j_{\ell}}(\ol{g'}^{-1}(x))\cdot k_{j_{\ell}j_{\ell-1}}((\ol{g'}_{j_{\ell-1}})^{-1}\op{\circ}\ldots \op{\circ}\left(\ol{g'}_{j_{1}}\right)^{-1}(x))\cdot \ldots\cdot k_{j_{1}i}(x). \] Furthermore, let \mbox{$j'_{1}$} be minimal such that \[ \big(F_{M}^{-1}\op{\circ}S(\ol{g '})^{-1}_{M}\big)(x)=g'^{-1}\op{\circ}g^{-1}(x)\in V_{j'_{1}} \] and let \mbox{$U_{x}$} be an open neighbourhood of \mbox{$x$} (w.l.o.g.
such that \mbox{$\cl{U}_{x}\ensuremath{\nobreak\subseteq\nobreak} \cl{V}_{i}$} is a manifold with corners) such that \mbox{$\ol{g'}^{-1}(\cl{U}_{x})\ensuremath{\nobreak\subseteq\nobreak} V_{j_{\ell}}$} and \mbox{$g^{-1}\op{\circ}g'^{-1}(\cl{U}_{x})\ensuremath{\nobreak\subseteq\nobreak} V_{j'_{1}}$}. Since \mbox{$F_{M}=g$} and \[ F^{-1}(\sigma_{j_{\ell}}(\ol{g '}^{-1}(x')))\in \sigma_{j'_{1}}({g}^{-1} \op{\circ}g'^{-1}(x'))\text{ for } x'\in U_{x} \] we have \[ F^{-1}\big(\sigma_{j_{\ell}}(\ol{g '}^{-1}(x'))\big)= \sigma_{j'_{1}}(g^{-1}\op{\circ }g'^{-1}(x'))\cdot k_{F,x,g '} ( x') \text{ for } x'\in U_{x}, \] for some smooth function \mbox{$k_{F,x,g '}:U_{x}\to K$}. In fact, we have \[ k_{F,x,g '}(x)=k_{\sigma_{j'_{1}}}(F^{-1}(\sigma_{j_{\ell}}(\ol{g '}^{-1}(x)))). \] After possibly shrinking \mbox{$U_{x}$}, a construction as in Remark \ref{rem:choiceOfLocalTrivialisations} shows that \mbox{$\left.k_{\sigma_{j'_{1}}}\op{\circ}F^{-1}\op{\circ} \sigma_{j_{\ell}}\right|_{\cl{U}_{x}}$} extends to a smooth function on \mbox{$M$}. Thus \mbox{$\left.k_{F,x,g' }\right|_{\cl{U}_{x}}\in C^{\infty}(\cl{U}_{x},K)$} depends smoothly on \mbox{$g '$} for fixed \mbox{$x$}. Accordingly, we find \mbox{$j'_{2},\dots ,j'_{\ell'}$} and a smooth function \mbox{$k'_{F,x,g '}:\cl{U}_{x}\to K$} (possibly after shrinking \mbox{$U_{x}$}), depending smoothly on \mbox{$g$} such that \begin{align}\label{eqn:valuesOfTheLiftedDiffeomorphismInSmoothCocycle} \omega_{g}(g ')(\sigma_{i}(x))= \sigma_{i}(x)\cdot \big[ k'_{F,x,g '}(x) &\cdot k_{j'_{\ell'}j'_{\ell'-1}}(g (x)) \cdot \ldots\cdot k_{j'_{2}j'_{1}}(g '^{-1}\op{\circ}g^{-1}(x))\cdot k_{F,x,g '}(x)\notag\\ &\cdot k_{j_{\ell}j_{\ell-1}}(g '(x)) \cdot \ldots\cdot k_{j_{1}i}(x) \big]. \end{align} Denote the element in brackets on the right hand side by \mbox{$\kappa_{x,g '}$}. Since we will not vary \mbox{$F$} and \mbox{$g$} in the sequel, we suppressed the dependence of \mbox{$\kappa_{x,g '}$} on them. 
By continuity (cf.\ Remark \ref{rem:valuesOfTheLiftedDiffeomorphismInTermsOfLocalData}), we find open neighbourhoods \mbox{$O_{g' }$} and \mbox{$U'_{x}$} of \mbox{$g '$} and \mbox{$x$} (w.l.o.g. such that \mbox{$\cl{U'}_{x}\ensuremath{\nobreak\subseteq\nobreak}\cl{V}_{i}$} is a manifold with corners) such that \eqref{eqn:valuesOfTheLiftedDiffeomorphismInSmoothCocycle} defines \mbox{$\omega_{g}(h')(\sigma_{i}(x'))$} for all \mbox{$h'\in O_{g '}$} and \mbox{$x'\in \cl{U}_{x}$}. Then \mbox{$\kappa_{x_{1},g '}=\kappa_{x_{2},g '}$} on \mbox{$\cl{U}_{x_{1}}\cap \cl{U}_{x_{2}}$}, finitely many \mbox{$U_{x_{1}},\dots ,U_{x_{m}}$} cover \mbox{$\cl{V}_{i}$} and since the gluing and restriction maps from Lemma \ref{lem:restrictionMapForCurrentGroupIsSmooth} and Proposition \ref{prop:gluingLemmaForCurrentGroup} are smooth, \[ \omega_{g}(g ')\op{\circ }\sigma_{i}= \ensuremath{\operatorname{glue}}( \left.\kappa_{x_{1},g '}\right|_{\cl{U}_{x_{1}}} ,\dots , \left.\kappa_{x_{m},g '}\right|_{\cl{U}_{x_{m}}} ) \] shows that \mbox{$\omega_{g}(g ')\op{\circ }\sigma_{i}$} depends smoothly on \mbox{$g '$}. \end{prf} Before coming to the main result of this section we give a description of the image of \mbox{$\ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}:=Q(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}))$} in terms of \mbox{$\ensuremath{\mathcal{P}}$}, without referring to \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$}. \begin{remark}\label{rem:alternativeDescriptionOfDIFF_P} Let \mbox{$Q\ensuremath{\nobreak:\nobreak} \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\to \ensuremath{\operatorname{Diff}}(M)$}, \mbox{$F\mapsto F_{M}$} be the homomorphism from Definition \ref{def:bundleAutomorphismsAndGaugeTransformations}. If \mbox{$g \in \ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}$}, then there exists an \mbox{$F\in \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} that covers \mbox{$g$}. 
Hence the commutative diagram \[ \begin{CD} g^{*}(P)@>g_{\ensuremath{\mathcal{P}}}>> P @>F^{-1}>> P\\ @Vg^{*}(\pi)VV @V\pi VV @V\pi VV \\ M @>g >> M @>g^{-1}>> M \end{CD} \] shows that \mbox{$g^{*}(\ensuremath{\mathcal{P}})$} is equivalent to \mbox{$\ensuremath{\mathcal{P}}$}. On the other hand, if \mbox{$\ensuremath{\mathcal{P}}\sim g^{*}(\ensuremath{\mathcal{P}})$}, then the commutative diagram \[ \begin{CD} P@>\sim >> g^{*}(P)@>g_{\ensuremath{\mathcal{P}}}>> P\\ @V\pi VV @Vg^{*}(\pi)VV @V\pi VV \\ M@= M @>g >> M \end{CD} \] shows that there is an \mbox{$F\in \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} covering \mbox{$g$}. Thus \mbox{$\ensuremath{\operatorname{Diff}} (M)_{\ensuremath{\mathcal{P}}}$} consists of those diffeomorphisms preserving the equivalence class of \mbox{$\ensuremath{\mathcal{P}}$} under pull-backs. This also shows that \mbox{$\ensuremath{\operatorname{Diff}} (M)_{\ensuremath{\mathcal{P}}}$} is open, because homotopic maps yield equivalent bundles, and being an open subgroup it thus contains \mbox{$\ensuremath{\operatorname{Diff}}(M)_{0}$}. Note that it is not possible to say what \mbox{$\ensuremath{\operatorname{Diff}} (M)_{\ensuremath{\mathcal{P}}}$} is in general, even in the case of bundles over \mbox{$M=\ensuremath{\mathds{S}}^{1}$}. In fact, we then have \mbox{$\pi_{0}(\ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1}))\cong \ensuremath{\mathds{Z}}_{2}$} (cf.\ \cite{milnor84}), and the component of \mbox{$\ensuremath{\operatorname{Diff}}(\ensuremath{\mathds{S}}^{1})$} which does not contain the identity consists precisely of the orientation-reversing diffeomorphisms of \mbox{$\ensuremath{\mathds{S}}^{1}$}. It follows from the description of equivalence classes of principal bundles over \mbox{$\ensuremath{\mathds{S}}^{1}$} by \mbox{$\pi_{0}(K)$} that pulling back the bundle along an orientation-reversing diffeomorphism inverts a representing element for the bundle in \mbox{$K$}.
Thus we have \mbox{$g^{*}(\ensuremath{\mathcal{P}}_{k})\cong \ensuremath{\mathcal{P}}_{k^{-1}}$} for \mbox{$g \notin \ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})_{0}$}. If \mbox{$\pi_{0}(K)\cong \ensuremath{\mathds{Z}}_{2}$}, then \mbox{$\ensuremath{\mathcal{P}}_{k^{-1}}$} and \mbox{$\ensuremath{\mathcal{P}}_{k}$} are equivalent because \mbox{$[k]=[k^{-1}]$} in \mbox{$\pi_{0}(K)$} and thus \mbox{$g \in \ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})_{\ensuremath{\mathcal{P}}_{k}}$} and \mbox{$\ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})_{\ensuremath{\mathcal{P}}_{k}}=\ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})$}. If \mbox{$\pi_{0}(K)\cong \ensuremath{\mathds{Z}}_{3}$}, then \mbox{$\ensuremath{\mathcal{P}}_{k}$} and \mbox{$\ensuremath{\mathcal{P}}_{k^{-1}}$} are \emph{not} equivalent because \mbox{$[k]\neq [k^{-1}]$} in \mbox{$\pi_{0}(K)$} and thus \mbox{$g \notin \ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})_{\ensuremath{\mathcal{P}}_{k}}$} and \mbox{$\ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})_{\ensuremath{\mathcal{P}}_{k}}=\ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})_{0}$}. \end{remark} \begin{theorem}[\mbox{$\boldsymbol{\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})}$} as an extension of \mbox{$\boldsymbol{\ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}}$} by \mbox{$\boldsymbol{\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})}$}] \label{thm:automorphismGroupIsLieGroup} Let \mbox{$\ensuremath{\mathcal{P}}$} be a smooth principal \mbox{$K$}-bundle over the closed compact manifold \mbox{$M$}. 
If \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB, then \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} carries a Lie group structure such that we have an extension of smooth Lie groups \begin{align}\label{eqn:extensionOfGauByDiff2} \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\hookrightarrow \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\xrightarrow{Q} \ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}, \end{align} where \mbox{$Q:\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\to \ensuremath{\operatorname{Diff}}(M)$} is the homomorphism from Definition \ref{def:bundleAutomorphismsAndGaugeTransformations} and \mbox{$\ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}$} is the open subgroup of \mbox{$\ensuremath{\operatorname{Diff}}(M)$} preserving the equivalence class of \mbox{$\ensuremath{\mathcal{P}}$} under pull-backs. \end{theorem} \begin{prf} We identify \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} with \mbox{$C^{\infty}(P,K)^{K}$} and extend \mbox{$S$} to a (possibly non-continuous) section \mbox{$S\ensuremath{\nobreak:\nobreak}\ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} of \mbox{$Q$}. Now the preceding lemmas show that \mbox{$(T,\omega)$} is a smooth factor system \cite[Proposition II.8]{neeb06nonAbelianExtensions}, which yields the assertion. \end{prf} \begin{proposition}\label{prop:actionOfAutomorphismGroupOnBundleIsSmooth} In the setting of the previous theorem, the natural action \[ \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\times P\to P,\quad (F,p)\mapsto F(p) \] is smooth. \end{proposition} \begin{prf} First we note that \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\cong C^{\infty}(P,K)^{K}$} acts smoothly on \mbox{$P$} by \mbox{$(\gamma ,p)\mapsto p\cdot \gamma (p)$}.
Let \mbox{$O\ensuremath{\nobreak\subseteq\nobreak} \ensuremath{\operatorname{Diff}}(M)$} be the neighbourhood from Remark \ref{rem:chartsForDiffeomorphismGroups} and \mbox{$S\ensuremath{\nobreak:\nobreak} O\to \ensuremath{\operatorname{Aut}}(P)$}, \mbox{$g\mapsto \wt{g_{n}}\op{\circ}\dots \op{\circ}\wt{g_{1\,}}$} be the map from Definition \ref{def:sectionFromDiffeomorphismsToBundleAutomorphisms}. Then \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\op{\circ} S(O)$} is an open neighbourhood in \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} and it suffices to show that the restriction of the action to this neighbourhood is smooth. Since \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} acts smoothly on \mbox{$P$}, this in turn follows from the smoothness of the map \[ R\ensuremath{\nobreak:\nobreak} O\times P\to P,\quad (g ,p)\mapsto S(g )(p)=\wt{g_{n}}\op{\circ}\dots \op{\circ}\wt{g_{1\,}}(p). \] To check the smoothness of \mbox{$R$} it suffices to check that \mbox{$r_{i}\ensuremath{\nobreak:\nobreak} O\times P\to O\times P$}, \mbox{$(g ,p)\mapsto (g,\wt{g_{i}}(p))$} is smooth, because then \mbox{$R=\ensuremath{\operatorname{pr}}_{2}\op{\circ}r_{n}\op{\circ}\dots \op{\circ}r_{1}$} is smooth. Now the explicit formula \[ \wt{g_{i}}(p)=\left\{\begin{array}{ll} \sigma_{i}(g_{i}(\pi (p)))\cdot k_{i}(p) & \text{ if } p\in \pi^{-1}(U_{i})\\ p & \text{ if } p\in \pi^{-1}(\cl{V}_{i})^{c} \end{array}\right. \] shows that \mbox{$r_{i}$} is smooth on \mbox{$\big(O\times \pi^{-1}(U_{i})\big)\cup \big(O\times \pi^{-1}(\cl{V_{i}})^{c}\big)=O\times P$}.
\end{prf} \begin{proposition}\label{lem:automorphismGroupActingOnConnections} If \mbox{\mbox{$\ensuremath{\mathcal{P}}$}} is a finite-dimensional smooth principal \mbox{\mbox{$K$}}-bundle over the closed compact manifold \mbox{\mbox{$M$}}, then the action \[ \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\times \Omega^{1}(P,\ensuremath{\mathfrak{k}}) \to \Omega^{1}(P,\ensuremath{\mathfrak{k}}),\quad (F,A)\mapsto (F^{-1})^{*}A, \] is smooth. Since this action preserves the closed subspace \mbox{$\ensuremath{\operatorname{Conn}}(\ensuremath{\mathcal{P}})$} of connection \mbox{$1$}-forms in \mbox{$\Omega^{1}(P,\ensuremath{\mathfrak{k}})$}, the restricted action \[ \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\times \ensuremath{\operatorname{Conn}}(\ensuremath{\mathcal{P}}) \to \ensuremath{\operatorname{Conn}}(\ensuremath{\mathcal{P}}),\quad (F,A)\mapsto (F^{-1})^{*}A \] is also smooth. \end{proposition} \begin{prf} As in Proposition \ref{prop:actionOfAutomorphismGroupOnBundleIsSmooth} it can be seen that the canonical action \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(P)\times TP\to TP$}}, \mbox{\mbox{$F.X_{p}=TF(X_{p})$}} is smooth. Since \mbox{\mbox{$P$}} is assumed to be finite-dimensional and the topology on \mbox{\mbox{$\Omega^{1}(P,\ensuremath{\mathfrak{k}})$}} is the induced topology from \mbox{\mbox{$C^{\infty}(TP,\ensuremath{\mathfrak{k}})$}}, the assertion now follows from \cite[Proposition 6.4]{gloeckner02patched}.
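Written out pointwise (this merely restates the definition of the pull-back), the action on $1$-forms is
\[
\big((F^{-1})^{*}A\big)_{p}(X_{p})=A_{F^{-1}(p)}\big(T_{p}F^{-1}(X_{p})\big)\quad \text{ for } X_{p}\in T_{p}P,
\]
which makes explicit that the action is composed of the smooth action of \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} on \mbox{$TP$} and the evaluation of \mbox{$A$}.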
\end{prf} \begin{remark} Of course, the Lie group structure on \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} from Theorem \ref{thm:automorphismGroupIsLieGroup} depends on the choice of \mbox{$S$} and thus on the choice of the chart \mbox{$\varphi :O\to \ensuremath{\mathcal{V}}(M)$} from Remark \ref{rem:chartsForDiffeomorphismGroups}, the choice of the trivialising system from Remark \ref{rem:choiceOfLocalTrivialisations} and the partition of unity chosen in the proof of Lemma \ref{lem:decompositionOfDiffeomorphisms}. However, different choices lead to isomorphic Lie group structures on \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} and, moreover, to equivalent extensions. To see this, we show that \mbox{$\ensuremath{\operatorname{id}}_{\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})}$} is smooth when choosing two different trivialising systems \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\cl{V}_{i},\sigma_{i})_{i=1,\ldots,n}$} and \mbox{$\cl{\ensuremath{\mathcal{V}}}'=(\cl{V}'_{j},\tau_{j})_{j=1,\ldots,m}$}. Denote by \mbox{$S\ensuremath{\nobreak:\nobreak} O\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} and \mbox{$S'\ensuremath{\nobreak:\nobreak} O\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} the corresponding sections of \mbox{$Q$}. Since \[ \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\op{\circ}S(O)=Q^{-1}(O)=\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\op{\circ}S'(O) \] is an open identity neighbourhood and \mbox{$\ensuremath{\operatorname{id}}_{\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})}$} is an isomorphism of abstract groups, it suffices to show that the restriction of \mbox{$\ensuremath{\operatorname{id}}_{\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})}$} to \mbox{$Q^{-1}(O)$} is smooth.
Now the smooth structure on \mbox{$Q^{-1}(O)$} induced from \mbox{$S$} and \mbox{$S'$} is given by requiring \begin{align*} Q^{-1}(O)\ni &F\mapsto (F\op{\circ }S(F_{M})^{-1},F_{M})\in\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\times \ensuremath{\operatorname{Diff}}(M)\\ Q^{-1}(O)\ni &F\mapsto (F\op{\circ }S'(F_{M})^{-1},F_{M})\in\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\times \ensuremath{\operatorname{Diff}}(M) \end{align*} to be diffeomorphisms and we thus have to show that \[ O\ni g \mapsto S(g)\op{\circ}S'(g)^{-1}\in \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}}) \] is smooth. By deriving explicit formulae for \mbox{$S(g)\op{\circ}S'(g)^{-1}(\sigma_{i}(x))$} on a neighbourhood \mbox{$U_{x}$} of \mbox{$x\in \cl{V}_{i}$}, and \mbox{$O_{g}$} of \mbox{$g \in O$} this follows exactly as in Lemma \ref{lem:smoothCocycleOnAutomorhismGroup}. \end{remark} We have not mentioned regularity so far since it is not needed to obtain the preceding results. However, it is an important concept and we shall elaborate on it now. \begin{proposition} Let \mbox{$\ensuremath{\mathcal{P}}$} be a smooth principal \mbox{$K$}-bundle over the compact manifold with corners \mbox{$M$}. If \mbox{$K$} is regular, then so is \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} and, furthermore, if \mbox{$M$} is closed, then \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} is also regular. \end{proposition} \begin{prf} The second assertion follows from the first, because extensions of regular Lie groups by regular ones are themselves regular \cite[Theorem 5.4]{omoriMaedaYoshiokaKobayashi83OnRegularFrechetLieGroups} (cf.\ \cite[Theorem V.1.8]{neeb06}). Let \mbox{$\cl{\ensuremath{\mathcal{V}}}=(\ol{V}_{i},\ol{\sigma_{i}})$} be a smooth closed trivialising system such that \mbox{$\ensuremath{\mathcal{P}}$} has the property SUB with respect to \mbox{$\cl{\ensuremath{\mathcal{V}}}$}. 
We shall use the regularity of \mbox{$C^{\infty}(\ol{V}_{i},K)$} to obtain the regularity of \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$}. If \mbox{$\xi \ensuremath{\nobreak:\nobreak} [0,1]\to \ensuremath{\operatorname{\mathfrak{gau}}}(\ensuremath{\mathcal{P}})$} is smooth, then this determines smooth maps \mbox{$\xi_{i}\ensuremath{\nobreak:\nobreak} [0,1]\to C^{\infty}(\ol{V}_{i},\ensuremath{\mathfrak{k}})$}, satisfying \mbox{$\xi_{i}(t)(m)=\ensuremath{\operatorname{Ad}}(k_{ij}(m)).\xi_{j}(t)(m)$} for \mbox{$m\in \ol{V}_{i}\cap \ol{V}_{j}$}. By the regularity of \mbox{$C^{\infty}(\ol{V}_{i},K)$}, each \mbox{$\xi_{i}$} determines a smooth evolution \mbox{$\gamma_{\xi,i}\ensuremath{\nobreak:\nobreak} [0,1]\to C^{\infty}(\ol{V}_{i},K)$}. By uniqueness of solutions of differential equations we see that the mappings \mbox{\mbox{$t\mapsto \gamma_{\xi ,i}(t)$}} and \mbox{\mbox{$t\mapsto k_{ij}\cdot \gamma_{\xi ,j}(t)\cdot k_{ji}$}} have to coincide, ensuring \mbox{\mbox{$\gamma_{\xi ,i}(t)(m)=k_{ij}(m)\cdot \gamma_{\xi ,j}(t)(m)\cdot k_{ji}(m)$}} for all \mbox{$t\in [0,1]$} and \mbox{$m\in \ol{V}_{i}\cap \ol{V}_{j}$}. Thus \mbox{$[0,1]\ni t\mapsto (\gamma_{\xi ,i}(t))_{i=1,\ldots,n}\in G_{\cl{\ensuremath{\mathcal{V}}}}(\ensuremath{\mathcal{P}} )$} is a solution of the corresponding initial value problem and the desired property follows from the regularity of \mbox{$C^{\infty}(\ol{V}_{i},K)$}.
Moreover, it is also shown that \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} is a split Lie subgroup of \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$}, that \[ \ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\hookrightarrow \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\twoheadrightarrow \ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}} \] is an exact sequence of Lie groups and that the action \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\times P\to P$} is smooth. However, that Lie group structure is obtained from quite general arguments, which allow one to give the space \mbox{$\ensuremath{\operatorname{Hom}}(\ensuremath{\mathcal{P}},\ensuremath{\mathcal{P}})$} of bundle morphisms a smooth structure and then to consider \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} as an open subset of \mbox{$\ensuremath{\operatorname{Hom}}(\ensuremath{\mathcal{P}},\ensuremath{\mathcal{P}})$}. The approach taken in this section is somewhat different, since the Lie group structure on \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$} is constructed by hand and the construction provides explicit charts, given by charts of \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})$} and \mbox{$\ensuremath{\operatorname{Diff}}(M)$}. \end{remark} \begin{remark} The approach to the Lie group structure in this section used detailed knowledge of the chart \mbox{$\varphi \ensuremath{\nobreak:\nobreak} O\to \ensuremath{\mathcal{V}}(M)$} of the Lie group \mbox{$\ensuremath{\operatorname{Diff}}(M)$} from Remark \ref{rem:chartsForDiffeomorphismGroups}. We used this when decomposing a diffeomorphism into a product of diffeomorphisms with support in some trivialising subset of \mbox{$M$}. The fact that we needed was that for a diffeomorphism \mbox{$g \in O$} we have \mbox{$g (m)=m$} if the vector field \mbox{$\varphi (g )$} vanishes at \mbox{$m$}.
This should also be true for the charts on \mbox{$\ensuremath{\operatorname{Diff}} (M)$} for compact manifolds with corners, and thus the procedure of this section should carry over to bundles over manifolds with corners. \end{remark} \begin{example}[\mbox{\mbox{$\boldsymbol{\ensuremath{\operatorname{Aut}}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}}))}$}}] \label{exmp:automorphismGroupOfTwistedLoopAlgebra}Let \mbox{\mbox{$K$}} be a simple finite-dimensional Lie group with \mbox{\mbox{$K_{0}$}} compact and simply connected, and let \mbox{\mbox{$\ensuremath{\mathcal{P}}_{k}$}} be a smooth principal \mbox{\mbox{$K$}}-bundle over \mbox{\mbox{$\ensuremath{\mathds{S}}^{1}$}}, uniquely determined up to equivalence by \mbox{$[k]\in \pi_{0}(K)$}. Identifying the twisted loop algebra \[ C^{\infty }_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}}):=\{\eta \in C^{\infty}(\ensuremath{\mathds{R}},\ensuremath{\mathfrak{k}}):\eta (x+n)=\ensuremath{\operatorname{Ad}}(k)^{-n}.\eta (x)\ensuremath{\;\text{\textup{ for all }}\;} x\in\ensuremath{\mathds{R}},n\in\ensuremath{\mathds{Z}}\} \] with the gauge algebra of the flat principal bundle \mbox{$\ensuremath{\mathcal{P}}_{k}$}, we get a smooth action of \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k})$}} on \mbox{\mbox{$C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}})$}}, which can also be lifted to the twisted loop group \mbox{$C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K)$}, the affine Kac--Moody algebra \mbox{$\wh{C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}})}$} and the affine Kac--Moody group \mbox{$\wh{C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K)}$} \cite{diss}.
Various results (cf.\ \cite[Theorem 16]{lecomte80}) assert that each automorphism of \mbox{\mbox{$C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}})$}} arises in this way and we thus have a geometric description of \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}}))\cong \ensuremath{\operatorname{Aut}}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K)_{0})\cong \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k})$}} for \mbox{$C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K)_{0}$} is simply connected. Furthermore, this also leads to topological information on \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}}))$}}, since we get a long exact homotopy sequence \begin{multline}\label{eqn:longExactHomotopySequenceForAutomorphismGroup} \dots \to \pi_{n+1}(\ensuremath{\operatorname{Diff}}(\ensuremath{\mathds{S}}^{1})) \xrightarrow{\delta_{n+1}} \pi_{n}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K)) \to \pi_{n}(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k}))\\ \to \pi_{n}(\ensuremath{\operatorname{Diff}}(\ensuremath{\mathds{S}}^{1})) \xrightarrow{\delta_{n}} \pi_{n-1}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K)) \to \dots \end{multline} induced by the locally trivial bundle \mbox{\mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}}_{k})\hookrightarrow \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k})\xrightarrow{q} \ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})_{\ensuremath{\mathcal{P}}_{k}}$}} and the isomorphisms \mbox{\mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}}_{k})\cong C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K)$}} and \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k})\cong \ensuremath{\operatorname{Aut}}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}}))$}}. 
E.g., in combination with \begin{align}\label{eqn:homotopyGroupsForDiffeomorphismGroup} \pi_{n}(\ensuremath{\operatorname{Diff}}(\ensuremath{\mathds{S}}^{1}))\cong\left\{\begin{array}{ll} \ensuremath{\mathds{Z}}_{2} & \text{ if }n=0\\ \ensuremath{\mathds{Z}} & \text{ if }n=1\\ 0 & \text{ if }n\geq 2 \end{array} \right. \end{align} (cf.\ \cite{milnor84}), one obtains information on \mbox{\mbox{$\pi_{n}(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k}))$}}. In fact, consider the exact sequence \begin{multline*} 0 \to \pi_{1}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K)) \to \pi_{1}(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k})) \to\underbrace{\pi_{1}(\ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1}))}_{\cong \ensuremath{\mathds{Z}}} \xrightarrow{\delta_{1}} \pi_{0}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K))\\ \to \pi_{0}(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k}))\xrightarrow{\pi_{0}(q)} \pi_{0}(\ensuremath{\operatorname{Diff}}(\ensuremath{\mathds{S}}^{1})_{\ensuremath{\mathcal{P}}_{k}}) \end{multline*} induced by \eqref{eqn:longExactHomotopySequenceForAutomorphismGroup} and \eqref{eqn:homotopyGroupsForDiffeomorphismGroup}. Since \mbox{\mbox{$\pi_{1}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K))$}} vanishes and, as we argue next, the connecting homomorphism \mbox{$\delta_{1}$} vanishes, this implies \mbox{\mbox{$\pi_{1}(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k}))\cong \ensuremath{\mathds{Z}}$}}. A generator of \mbox{$\pi_{1}(\ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1}))$} is represented by the loop of rigid rotations based at \mbox{$\ensuremath{\operatorname{id}}_{\ensuremath{\mathds{S}}^{1}}$}, and this loop lifts to a loop in \mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k})$}. Thus the connecting homomorphism \mbox{$\delta_{1}$} indeed vanishes.
The argument from Remark \ref{rem:alternativeDescriptionOfDIFF_P} shows precisely that \mbox{\mbox{$\pi_{0}(\ensuremath{\operatorname{Diff}} (\ensuremath{\mathds{S}}^{1})_{\ensuremath{\mathcal{P}}_{k}})\cong \ensuremath{\mathds{Z}}_{2}$}} if and only if \mbox{\mbox{$k^{2}\in K_{0}$}} and that \mbox{$\pi_{0}(q)$} is surjective. We thus end up with an exact sequence \[ \operatorname{Fix}_{\pi_{0}(K)} ([k])\to \pi_{0}(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k}))\twoheadrightarrow \left\{\begin{array}{ll} \ensuremath{\mathds{Z}}_{2} & \text{ if }k^{2}\in K_{0}\\ \ensuremath{\mathds{1}} & \text{ else.} \end{array} \right. \] Since \eqref{eqn:homotopyGroupsForDiffeomorphismGroup} implies that \mbox{\mbox{$\ensuremath{\operatorname{Diff}}(\ensuremath{\mathds{S}}^{1})_{0}$}} is a \mbox{\mbox{$K(\ensuremath{\mathds{Z}},1)$}}, we also have \mbox{\mbox{$\pi_{n}(\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{k}))\cong \pi_{n}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},K))$}} for \mbox{\mbox{$n\geq 2$}}. \end{example} \begin{remark} The description of \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(C^{\infty}_{k}(\ensuremath{\mathds{S}}^{1},\ensuremath{\mathfrak{k}}))$}} in Example \ref{exmp:automorphismGroupOfTwistedLoopAlgebra} should arise from a general principle describing the automorphism group for gauge algebras of flat bundles, i.e., of bundles of the form \[ \ensuremath{\mathcal{P}}_{\varphi}=\wt{M}\times K/\sim \;\text{ where }\; (m,k)\sim (m\cdot d,\varphi^{-1}(d)\cdot k). \] Here \mbox{\mbox{$\varphi \ensuremath{\nobreak:\nobreak} \pi_{1}(M)\to K$}} is a homomorphism and \mbox{\mbox{$\wt{M}$}} is the simply connected cover of \mbox{\mbox{$M$}}, on which \mbox{\mbox{$\pi_{1}(M)$}} acts canonically.
Then \[ \ensuremath{\operatorname{\mathfrak{gau}}}(\ensuremath{\mathcal{P}}_{\varphi})\cong C^{\infty}_{\varphi}(M,\ensuremath{\mathfrak{k}}):=\{\eta \in C^{\infty}(\wt{M},\ensuremath{\mathfrak{k}}): \eta (m\cdot d)=\ensuremath{\operatorname{Ad}} (\varphi (d))^{-1}.\eta (m)\ensuremath{\;\text{\textup{ for all }}\;} m\in \wt{M},d\in \pi_{1}(M)\} \] and this description should allow one to reconstruct gauge transformations and diffeomorphisms out of the ideals of \mbox{\mbox{$C^{\infty}_{\varphi}(M,\ensuremath{\mathfrak{k}})$}} (cf.\ \cite{lecomte80}). \end{remark} \begin{problem}(cf.\ \cite[Problem IX.5]{neeb06}) Let \mbox{\mbox{$\ensuremath{\mathcal{P}}_{\varphi}$}} be a (flat) principal \mbox{\mbox{$K$}}-bundle over the closed compact manifold \mbox{\mbox{$M$}}. Determine the automorphism group \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\operatorname{\mathfrak{gau}}} (\ensuremath{\mathcal{P}}_{\varphi}))$}}. In which cases does it coincide with \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{\varphi})$}}? (The main point here is the surjectivity of the canonical map \mbox{\mbox{$\ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}}_{\varphi})\to \ensuremath{\operatorname{Aut}}(\ensuremath{\operatorname{\mathfrak{gau}}}(\ensuremath{\mathcal{P}}_{\varphi}))$}}.) \end{problem} \begin{remark} In some special cases, the extension \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\hookrightarrow \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\twoheadrightarrow \ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}$} from Theorem \ref{thm:automorphismGroupIsLieGroup} splits. This is the case for trivial bundles and for bundles with abelian structure group \mbox{$K$}, but also for frame bundles, since we then have a natural homomorphism \mbox{$\ensuremath{\operatorname{Diff}}(M)\to \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})$}, \mbox{$g \mapsto dg$}. However, it would be desirable to have a characterisation of the bundles for which this extension splits.
\end{remark} \begin{problem}(cf.\ \cite[Problem V.5]{neeb06}) Find a characterisation of those principal \mbox{$K$}-bundles \mbox{$\ensuremath{\mathcal{P}}$} for which the extension \mbox{$\ensuremath{\operatorname{Gau}}(\ensuremath{\mathcal{P}})\hookrightarrow \ensuremath{\operatorname{Aut}}(\ensuremath{\mathcal{P}})\twoheadrightarrow \ensuremath{\operatorname{Diff}}(M)_{\ensuremath{\mathcal{P}}}$} splits on the group level. \end{problem}
\section{Introduction} The second layer of $^3$He on graphite is an ideal system for studying two-dimensional (2D) fermions. By controlling the number of adsorbed atoms, we can vary the areal density $\rho$ over a wide range, from a nearly ideal Fermi gas to a highly compressed 2D solid \cite{Greywall90, Fukuyama08}. In the commensurate phase of the second layer, the so-called ``4/7 phase'', a gapless quantum spin-liquid state has been suggested. The transition between a Fermi fluid and the 4/7 phase is of the Mott--Hubbard type, with a divergence of the effective mass $m^*$ \cite{Casey03}. Moreover, a recent heat-capacity study shows the existence of a low-density liquid state ($\rho_{liq} \leq 1$ nm$^{-2}$) in the third layer \cite{Sato10}. However, this self-condensed state has not yet been confirmed in the second layer. Pulsed NMR is a useful probe for studying the spin correlations in the quantum spin liquid and the self-condensed liquid, as well as the relaxation processes in the liquid or dilute Fermi gas. In this work, we report a preliminary pulsed-NMR study of the spin-spin relaxation time ($T_2$) and magnetic susceptibility ($\chi$) of the Fermi fluid phase in the second layer. To remove the large Curie susceptibility from the first layer, we replaced the first-layer $^3$He with $^4$He. This also helps to reduce substrate heterogeneity effects, because the heterogeneous regions with deep adsorption potentials are preferentially filled with $^4$He atoms. \section{Experimental} The substrate used in this work was Grafoil, an exfoliated graphite, with a total surface area of $A_{tot} = 53.6$ m$^2$. Grafoil consists of 10--20 nm microcrystallites with a mosaic angle spread of $\pm 30^{\circ}$ \cite{Takayoshi10}. Before adding the $^3$He sample at 2.7 K, 11.78 nm$^{-2}$ of $^4$He was adsorbed at 4.2 K as the first layer.
This density is slightly higher than the highest density of first-layer $^3$He determined from neutron scattering (11.06 nm$^{-2}$) \cite{Lauter90} and heat-capacity measurements (11.4 nm$^{-2}$) \cite{Greywall90}. Therefore, we believe that the $^3$He atoms do not sink into the first layer. The conventional spin-echo technique with the pulse sequence $90^{\circ}$-$\tau$-$180^{\circ}$ was used to measure the spin-spin relaxation time $T_2$. In the presence of the mosaic angle spread of Grafoil, the decay of the spin-echo signal shows a long-tail component of about 5\% due to the angular dependence of $T_2$ \cite{Takayoshi09}. In this work, however, spin-echo measurements were carried out within $0 \leq t \leq 2T_2$, where the contribution of the long tail is not important, and $T_2$ was obtained by a single-exponential fit. Most of the experiments, except for the Larmor frequency dependence of $T_2$, were carried out at a Larmor frequency of 5.5 MHz ($B = 172$ mT). The static magnetic field is parallel to the substrate. The details of this set-up are described in an earlier paper, in which Grafoil is replaced with ZYX \cite{Murakawa03}. \begin{figure}[b] \begin{center} \includegraphics[width=35pc]{Fig1.eps}\hspace{1pc} \caption{\label{fig1} {\bf a.} Temperature dependence of $\chi(T)/\chi_0(0)$. The solid curves are fits by eq.(\ref{eqn1}). {\bf b.} Density dependence of the susceptibility at $T = 0$. The closed circles are the data taken in this work, while the open circles are those in ref.\cite{Lauter90}. {\bf c.} Density dependence of the effective temperature $T_F^{**}$.} \end{center} \end{figure} \section{Results and discussions} Figure 1(a) shows the temperature dependence of the magnetic susceptibility $\chi(T)$ in the $^3$He density range $0.68 \leq \rho \leq 5.28$ nm$^{-2}$.
The signals are well described by the phenomenological expression for the susceptibility of a Fermi liquid \cite{Dyugaev90}: \begin{equation}\label{eqn1} \chi(T) = C/\sqrt{T^2 + T_F^{**2}} \end{equation} down to $\rho = 0.68$ nm$^{-2}$, which is about half of the lowest density in the earlier study \cite{Lusher91}. Here, $C$ is the Curie constant and $T_F^{**}$ is the effective Fermi temperature. At $\rho = 5.28$ nm$^{-2}$, $\chi$ increases slightly at the lowest temperature $T \approx 10$ mK. If this increase is attributed to a small amount of paramagnetic solid $^3$He, the corresponding fraction is within 2\% of the total $^3$He atoms within the experimental accuracy. We attribute this to $^3$He atoms trapped in substrate heterogeneities; it can be removed by introducing a few percent of excess $^4$He atoms \cite{Lusher91, Collin01}. In Landau Fermi liquid theory, the $T = 0$ susceptibility of a 2D Fermi liquid is given by $\chi(0) = C/T_F^{**} = (m^*/m)(1+F_0^a)^{-1}C/T_F$, where $F_0^a$ is a Landau parameter and $T_F$ is the Fermi temperature. Since $C$ is proportional to $N$, the number of $^3$He atoms, and $T_F$ is proportional to $N/A$, where $A$ is the surface area that the $^3$He liquid occupies, we obtain $\chi(0) \propto A(m^*/m)(1+F_0^a)^{-1}$. If self-condensation occurs, $A$ would be smaller than the total surface area ($A \leq A_{tot}$). Since the $T = 0$ susceptibility of an ideal 2D Fermi gas is $\chi_0(0) = C/T_F \propto A_{tot}$, we obtain $\chi(0)/\chi_0(0) = (A/A_{tot})(m^*/m)(1+F_0^a)^{-1}$. If the $^3$He atoms are spread uniformly over the surface ($A = A_{tot}$), this ratio approaches a constant value (unity in the ideal gas case) in the dilute limit. On the other hand, when self-condensation occurs, $A$ grows linearly with $N$, resulting in $\chi(0)/\chi_0(0) \propto N$, similarly to the density dependence of $\gamma$ in the heat capacity measurement of the third layer \cite{Sato10}. Figure 1(b) shows the obtained $\chi(0)/\chi_0(0)$ plotted against $\rho$.
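As an illustration of how eq.(\ref{eqn1}) determines $C$ and $T_F^{**}$, here is a minimal numerical sketch using synthetic data; the parameter values below are illustrative assumptions, not the measured ones. Eq.(\ref{eqn1}) linearizes as $1/\chi^2 = (T^2 + T_F^{**2})/C^2$, a straight line in $T^2$, so an ordinary linear fit recovers both parameters.

```python
import numpy as np

# Synthetic susceptibility data generated from eq.(1) with assumed
# parameters C = 1.0 (arbitrary units) and T_F** = 150 mK; these are
# illustrative values, not the measured ones.
C_true, TF_true = 1.0, 0.150      # Curie constant, effective Fermi temperature [K]
T = np.linspace(0.01, 0.5, 50)    # temperature grid [K]
chi = C_true / np.sqrt(T**2 + TF_true**2)

# eq.(1) linearizes as 1/chi^2 = (T^2 + T_F**^2)/C^2, i.e. a straight
# line in T^2, so a first-order polynomial fit recovers both parameters.
slope, intercept = np.polyfit(T**2, 1.0 / chi**2, 1)
C_fit = 1.0 / np.sqrt(slope)
TF_fit = np.sqrt(intercept / slope)
```

In practice the measured $\chi(T)$ would replace the synthetic array, and the fit quality indicates how well the Fermi-liquid form describes the data.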
The value approaches unity with decreasing $\rho$, and there is no anomaly down to $\rho = 0.68$ nm$^{-2}$. Therefore, there is no evidence of self-condensation in the second layer, at least at densities above 0.68 nm$^{-2}$. The density dependence of $T_2$ at $T = 100$ mK and $f = 5.5$ MHz is shown in Figure 2. It shows a broad maximum of 5.7 ms around $\rho = 3$ nm$^{-2}$. On the high-density side, the particle motion is suppressed by the divergence of $m^*$ toward the Mott localization \cite{Casey03}. Therefore, $T_2$ decreases with increasing $\rho$ due to the suppression of motional narrowing. At the lowest density $\rho = 0.68$ nm$^{-2}$, the scattering radius $d$ is much shorter than the mean free path $\ell \approx 1/(2d\rho)$, and a dilute 2D $^3$He gas model is applicable. From an approach analogous to that for a dilute bulk $^3$He gas \cite{Chapman74}, we obtain: \begin{equation}\label{eqn2} 1/T_2 \sim vd\rho(\mu^2/d^6)(d/v)^2 \end{equation} where $\mu$ is the magnetic moment of $^3$He, and the averaged velocity $v \approx 2v_F/3 \propto \rho^{1/2}$ for a degenerate Fermi gas ($T < T_F^{**}$ from Figure 1(c)). As a consequence, $T_2$ increases with decreasing $\rho$ as $T_2 \propto \rho^{-1/2}$ in the dilute limit. On the other hand, the observed $T_2$ decreases at $\rho = 0.68$ nm$^{-2}$. This indicates that another relaxation process becomes important in the dilute limit. Cowan and Kent measured $T_2$ of very low density $^3$He films on bare graphite \cite{Cowan84}, and showed that a small amount of $^3$He ($\sim$ a few percent of a monolayer) forms a high-density solid with $T_2 = 0.3$ ms in the heterogeneous regions of the substrate. Through atomic exchange between this solid and the 2D gas, relaxation in the heterogeneous regions dominates the relaxation process of the 2D gas. The linear $\rho$-dependence of $T_2$ in the ``first layer'' is also shown in the same graph (Fig.\ref{fig2}) for reference.
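The density scaling quoted above follows from a one-line simplification of eq.(\ref{eqn2}):

```latex
\begin{align*}
\frac{1}{T_2} \sim v d\rho\,\frac{\mu^2}{d^6}\Bigl(\frac{d}{v}\Bigr)^2
             = \frac{\mu^2\rho}{d^3 v}\,,
\qquad v \propto \rho^{1/2}
\quad\Longrightarrow\quad
\frac{1}{T_2} \propto \frac{\rho}{\rho^{1/2}} = \rho^{1/2}\,,
\qquad T_2 \propto \rho^{-1/2}\,.
\end{align*}
```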
Because the adsorption potential in the second layer is about an order of magnitude smaller than that of the first layer, it is unrealistic to consider such a high-density solid. However, the heterogeneous regions can still act as relaxation spots, and a very small amount of $^3$He could exist there as a solid. This is consistent with the increase of $\chi(T)$ at the lowest temperature. Magnetic impurities in the substrate may be another cause of the relaxation. \begin{figure}[h] \begin{minipage}{18pc} \includegraphics[width=17pc]{Fig2.eps} \caption{\label{fig2} The density dependence of $T_2$ of the second layer $^3$He at 100 mK and 5.5 MHz. The closed circles are the data in this work. The open circles are those in the previous study \cite{Takayoshi09}. The vertical line corresponds to the density of the commensurate solid (4/7 phase). The dashed-dotted line is a reference in the first layer at 1.2 K and 5 MHz \cite{Cowan84}.} \end{minipage}\hspace{2pc} \begin{minipage}{18pc} \includegraphics[width=18pc]{Fig3.eps} \caption{\label{fig3} Larmor frequency dependence of $T_2$. The closed circles are $T_2$ of the second layer Fermi fluid at $\rho = 5.28$ nm$^{-2}$ and $T = 100$ mK. The open circles are those of the first layer solid at a coverage of 0.76 monolayer and $T = 1.2$ K \cite{Cowan87}. The solid lines are linear fits to the data.} \end{minipage} \end{figure} In Figure 3, we show the Larmor frequency dependence of $T_2$ in the high-density Fermi fluid ($\rho = 5.28$ nm$^{-2}$) at $T = 100$ mK (closed circles). We observed a linear relation between $1/T_2$ and $f$. This characteristic linear dependence is also observed in the first layer solid $^3$He \cite{Cowan87} (open circles). This is curious, since $T_2$ should be independent of $f$ as long as $f \ll 1/\tau$, where $\tau$ ($\sim d/v \sim 10^{-11}$ sec) is the correlation time.
Although the explanation for this linearity is not satisfactorily established, it could be a microscopic magnetic field inhomogeneity effect caused by the mosaic angle spread and the diamagnetism of the graphite substrate. Assuming that the frequency dependence is an extrinsic effect related to the diffusion process in a microscopic magnetic field gradient, the intrinsic $T_2$ is obtained by extrapolating to $f = 0$, where the diamagnetic field of graphite disappears. Namely, $1/T_2 = 1/T_2^{int} + c f$, where $T_2^{int}$ is the intrinsic $T_2$ of the system. From the extrapolation, $T_2^{int}$ at $\rho = 5.28$ nm$^{-2}$ is very long ($T_2^{int} > 0.1$ s), although a precise value is not available because the $y$-intercept is too small. This is much longer than that of the first layer solid ($T_2^{int} \approx 10$ ms). Nevertheless, the value of $c$, the slope of $1/T_2$, is almost the same for these two samples. Therefore, it is clear that the $f$-dependence of $T_2$ is an extrinsic one. To obtain a precise value of $T_2^{int}$, experiments at lower frequency or a substrate with a larger platelet size and a smaller mosaic angle spread will be required. In conclusion, we measured $\chi(T)$ and $T_2$ of the second layer $^3$He in the density region $0.68 \leq \rho \leq 5.28$ nm$^{-2}$ on graphite preplated with a monolayer of $^4$He. The obtained $\chi(T)$ shows Fermi fluid behaviour at all densities, and there is no self-condensed state down to $\rho = 0.68$ nm$^{-2}$. The density dependence of $T_2$ at $f = 5.5$ MHz shows a broad maximum of 5.7 ms. The decrease of $T_2$ at $\rho = 0.68$ nm$^{-2}$ could be related to the $^3$He solid at the heterogeneities in the substrate. We also observed an $f$-linear dependence of $1/T_2$, similar to that in the earlier study of the first layer solid \cite{Cowan87}. This $f$-dependence of $T_2$ is an extrinsic effect, and the intrinsic $T_2$ in the Fermi fluid is much longer than the measured value. \medskip
\section{Introduction} To understand M-theory, it is important to study the M2-branes and the M5-branes. Recently, it was proposed that a stack of $N$ M2-branes on the orbifold $\mathbb{C}^4/\mathbb{Z}_k$ $(k=1,2,\cdots )$ can be studied by a three dimensional $\text{U}(N)\times \text{U}(N)$ superconformal Chern-Simons matter theory \cite{ABJM}, which is called the ABJM theory. Similarly, the M5-branes are believed to be described by some non-trivial six dimensional field theory. The field theory on the M5-branes would also be interesting in itself and should play an important role in understanding various non-perturbative properties of supersymmetric gauge theories. So far, however, there is no explicit field theoretical description of multiple M5-branes. On the other hand, the M5-branes can be realized as solitons in the ABJM theory. Indeed we can construct BPS solitons which are the analogues of the Nahm data for the non-abelian monopoles in $(3+1)$ dimensional gauge theory \cite{T,NT,ST}.\footnote{ This structure is an extension of the work \cite{BH} which motivated the early proposals of the field theory on multiple M2-branes \cite{BL1,G,BL2,BL3} (see also \cite{GRVV,HL}). } In these solutions the M5-branes are realized as M2-branes blowing up into a fuzzy sphere. There is also a noncommutative-plane-like construction of the M5-branes \cite{TY1,TY2}. These solutions enable us to investigate the M5-branes via the ABJM theory. A similar fuzzy sphere structure also appears as non-trivial vacua in the mass deformed ABJM theory \cite{GRVV} (see also \cite{NPR}). This configuration does not carry a net M5-brane charge, but carries a dipole M5-brane charge. This is an analogue of the Myers effect for the D-branes \cite{M}, which is, of course, related to the Nahm construction of the monopoles. The mass deformed ABJM theory should thus also be a useful tool to study the M5-branes. A relation to the M5-branes is also observed on the gravity side.
The UV limit of the mass deformed ABJM theory is the same as the ABJM theory itself. On the other hand, the mass deformation breaks the conformal invariance and hence results in different IR behavior. Indeed, analyses of the holographic RG flow \cite{BW,PTW} suggest that the theory describes a particular configuration of the M5-branes in the IR limit. Hence it is interesting to study the mass deformed ABJM theory at large $N$. In this paper, as a simple example, we study the partition function (free energy) of the mass deformed ABJM theory on $S^3$ in the large $N$ limit. With the help of the localization technique \cite{P,W,N}, the partition function can be computed exactly by a $2N$ dimensional matrix model \cite{KWY2,J,HHL}. Though it is still difficult to perform these integrals for a general large $N$, in the large $N$ limit we can evaluate the partition function by the saddle point approximation. We succeed in solving the saddle point equations for finite values of the Chern-Simons level and the mass deformation parameter, i.e. in the M-theoretical regime. With the solutions (eigenvalue distributions), we compute the exact large $N$ partition function. Interestingly, we find two different solutions, which cause a first order phase transition in the large $N$ limit. Since the theory we consider coincides with the ABJM theory in the UV limit, the dual geometry will be characterized by the same asymptotics as in the ABJM case: it will be asymptotically $\text{AdS}_4\times S^7/\mathbb{Z}_k$, where the boundary of $\mathrm{AdS}_4$ is $S^3$. Our result may predict the existence of new solutions of this class and a phase transition on the gravity side. Among the various mass deformations of the ABJM theory, the theory we consider has the maximal ${\cal N}=6$ supersymmetry; the same amount as the undeformed ABJM theory. In this sense our setup is the ``simplest'' example of the non-conformally deformed field theories.
Hopefully, the mass deformed ABJM theory we consider will serve as a toy model to reveal fundamental structures in theories with non-trivial RG flow. This paper is organized as follows. In the next section we briefly review the mass deformation of the ABJM theory. In section \ref{localization} we display the matrix model expression of the partition function resulting from the localization, and the saddle point equations for this matrix model. If we formally take the mass parameter to be purely imaginary, the partition function coincides with that of the ABJM theory with non-canonical $R$-charge assignments, which has been studied intensively \cite{J,JKPS} in the context of the F-theorem \cite{J,CDFKS}. To compare with our main results, we also provide the solution to the saddle point equations for the imaginary mass. In section \ref{realmass} we solve the saddle point equations for a real mass parameter. We find two distinct solutions and evaluate the partition function for each of them. In section \ref{discuss} we summarize our results with a discussion and future directions. \section{Brief review of mass deformed ABJM theory} In this section we review the ABJM theory and the mass deformation \cite{GRVV} considered in this paper. The ABJM theory is the $(2+1)$ dimensional ${\cal N}=6$ $\text{U}(N)\times \text{U}(N)$ superconformal Chern-Simons theory with Chern-Simons levels $\pm k$, coupled to four bifundamental matter fields.
In terms of the ${\cal N}=2$ superfields \cite{BKKS}, the field content of the ABJM theory consists of the two vector multiplets \begin{align} {\cal V}=(A_\mu,\sigma,\chi,D),\quad {\widetilde{\cal V}}=({\widetilde A}_\mu,{\widetilde\sigma},{\widetilde\chi},{\widetilde D}) \end{align} and four chiral multiplets $(\alpha=1,2,\dot{\alpha}=1,2)$ \begin{align} {\cal Z}_\alpha=(A_\alpha,\varphi_\alpha,F_\alpha),\quad {\cal W}_{\dot\alpha}=(B_{\dot\alpha},\psi_{\dot\alpha},G_{\dot\alpha}) \end{align} which are in the bifundamental representations $(N, \bar{N})$ and $(\bar{N},N)$ of the gauge group $\text{U}(N)\times \text{U}(N)$, respectively. The action of the ABJM theory is written as \begin{align} S_\text{ABJM}=S_\text{CS}+S_\text{mat}+S_\text{pot} \end{align} with \begin{align} S_\text{CS}&=-\frac{ik}{8\pi}\int dx^3d\theta^4\int_{0}^{1}dt\Bigl[\Tr{\cal V}\bar{D}^{a}(e^{t{\cal V}}D_{a}e^{-t{\cal V}})-\Tr{\widetilde{\cal V}}\bar{D}^{a}(e^{t{\widetilde{\cal V}}}D_{a}e^{-t{\widetilde{\cal V}}})\Bigr],\nonumber \\ S_\text{mat}&=-\int dx^3d\theta^4\Bigl(\Tr\bar{\cal Z}^\alpha e^{-{\cal V}}{\cal Z}_\alpha e^{\widetilde{\cal V}}+\Tr\bar{\cal W}^{\dot{\alpha}}e^{-{\widetilde{\cal V}}}{\cal W}_{\dot{\alpha}}e^{\cal V}\Bigr),\nonumber \\ S_\text{pot}&=\frac{2\pi}{k}\int dx^3 d\theta^2\Bigl(\Tr\epsilon^{\alpha\beta}\epsilon^{\dot{\alpha}\dot{\beta}}{\cal Z}_\alpha {\cal W}_{\dot{\alpha}}{\cal Z}_\beta {\cal W}_{\dot{\beta}} +\Tr\epsilon_{\alpha\beta}\epsilon_{\dot{\alpha}\dot{\beta}}\bar{\cal Z}^\alpha \bar{\cal W}^{\dot{\alpha}}\bar{\cal Z}^\beta \bar{\cal W}^{\dot{\beta}} \Bigr). \end{align} The chiral multiplets ${\cal Z}_\alpha$ and ${\cal W}_{\dot\alpha}$ transform as doublets under the $\text{SU}(2)\times \text{SU}(2)$ $R$-symmetry, respectively. The symmetry actually enhances to $SO(6)_R$, hence the theory has ${\cal N}=6$ supersymmetry.
Integrating out the auxiliary fields $D$ and ${\widetilde D}$ in the vector multiplets from the Chern-Simons term $S_\text{CS}$ and the matter kinetic term $S_\text{mat}$, we obtain the following $D$-term potential for the scalars in the matter multiplets \begin{align} V_D=\Tr|\sigma A_\alpha-A_\alpha {\widetilde \sigma}|^2+\Tr|B_{\dot{\alpha}}\sigma-{\widetilde\sigma}B_{\dot{\alpha}}|^2 \label{Dpot} \end{align} together with the constraints \begin{align} \sigma=\frac{2\pi}{k}(A_\alpha A^{\dagger\alpha}-B^{\dagger\dot{\alpha}}B_{\dot{\alpha}}),\quad {\widetilde \sigma}=\frac{2\pi}{k}(A^{\dagger\alpha} A_\alpha-B_{\dot{\alpha}}B^{\dagger\dot{\alpha}}). \label{sigmaAB} \end{align} Eliminating $\sigma$ and ${\widetilde\sigma}$, we indeed obtain the sextic scalar potential which is essential for describing the M5-branes as a fuzzy funnel, as discussed in \cite{BH}. In this paper we shall introduce the mass deformation through the following Fayet-Iliopoulos $D$-term \cite{GRVV} \begin{align} S_{\text{FI}}=\frac{\zeta}{2\pi} \int dx^3 d\theta^4(\Tr{\cal V}+\Tr{\widetilde{\cal V}})=\frac{\zeta}{2\pi} \int d^3x(\Tr D+\Tr {\widetilde D}) \end{align} with the FI parameter $\zeta\in\mathbb{R}$. Though this deformation breaks the $SO(6)_R$ symmetry down to $\text{SU}(2)\times \text{SU}(2)\times \text{U}(1)\times \mathbb{Z}_2$, the deformed theory still has ${\cal N}=6$ supersymmetry. In this case, the constraints \eqref{sigmaAB} are shifted by the FI parameter \begin{align} \sigma=\frac{2\pi}{k}(A_\alpha A^{\dagger\alpha}-B^{\dagger\dot{\alpha}}B_{\dot{\alpha}})+\frac{\zeta}{k},\quad {\widetilde \sigma}=\frac{2\pi}{k}(A^{\dagger\alpha} A_\alpha-B_{\dot{\alpha}}B^{\dagger\dot{\alpha}})-\frac{\zeta}{k}. \end{align} Thus the potential \eqref{Dpot} gives mass terms with the same mass $m=\zeta/k$ to all four scalars.
There are also terms involving the fermions, and one can confirm that the theory indeed has ${\mathcal N}=6$ supersymmetry.\footnote{ The superpotential mass term \cite{GRVV}, on the other hand, breaks some of the supersymmetries. We will concentrate on the maximally supersymmetric theory in this paper for simplicity. } The classical vacua of the mass deformed theory were studied in \cite{GRVV,NPR}. These vacua are also given by matrices representing the fuzzy three sphere \cite{T,HL}. This is an analogue of the Myers effect in the D-brane system, and the mass deformed ABJM theory represents the M2-M5 brane system in which the M2-branes blow up into spherical M5-branes. \section{Large $N$ saddle point equations} \label{localization} In this section we analyze the partition function of the mass deformed ABJM theory on $S^3$ with unit radius.\footnote{ We can recover the radius by the replacement $\zeta\rightarrow\zeta\cdot r_{S^3}$ since there are no other dimensionful parameters in the theory. } We take the limit $N\rightarrow \infty$ while keeping the level $k$ and the mass deformation $\zeta$ finite.\footnote{ As we will see later, the large $N$ behaviors may be different for $\zeta/k < 1/4$ and $\zeta/k > 1/4$. In this paper we shall concentrate on the former case $\zeta/k < 1/4$, which is different from the situation considered in \cite{AZ,AR}. } Thus we are considering the M2-branes in eleven dimensional spacetime, with a finite background flux depending on $\zeta$. The supersymmetric gauge theories on $S^3$ were studied in \cite{KWY2,J,HHL}.
With the help of the localization technique, it was shown there that the partition function of our theory is given by the following matrix integral \begin{align} Z(N)=\prod_{i=1}^N\int d\lambda_id{\widetilde\lambda}_i e^{-f(\lambda,{\widetilde\lambda})}, \label{ZN} \end{align} where \begin{align} f(\lambda,{\widetilde\lambda})&= \pi ik\biggl( \sum_{i\ge 1}\lambda_i^2 -\sum_{i\ge 1}{\widetilde\lambda}_i^2 \biggr) -2\pi i \zeta\biggl(\sum_{i\ge 1}\lambda_i+\sum_{i\ge 1}{\widetilde\lambda}_i\biggr)\nonumber \\ &\quad -\sum_{i>j}\log\sinh^2\pi(\lambda_i-\lambda_j) -\sum_{i>j}\log\sinh^2\pi({\widetilde\lambda}_i-{\widetilde\lambda}_j) +\sum_{i,j\ge 1}\log\cosh^2\pi(\lambda_i-{\widetilde\lambda_j}). \label{iff} \end{align} Here $\lambda_{i}$ and ${\widetilde\lambda}_i$ $(i=1,\ldots,N)$ denote the eigenvalues of $\sigma$ and ${\widetilde \sigma}$, which are constant at the saddle points of the localization computation. In the large $N$ limit, the integrals can be evaluated by the saddle point approximation. The saddle point configurations are the solutions of the following saddle point equations \begin{align} 0&=\frac{\partial f(\lambda,{\widetilde\lambda})}{\partial \lambda_i}=2\pi ik\lambda_i-2\pi i \zeta-2\pi\sum_{j\neq i}\coth\pi(\lambda_i-\lambda_j)+2\pi\sum_j\tanh\pi(\lambda_i-{\widetilde\lambda}_j),\nonumber \\ 0&=\frac{\partial f(\lambda,{\widetilde\lambda})}{\partial {\widetilde\lambda}_i}=-2\pi ik{\widetilde\lambda}_i-2\pi i \zeta-2\pi\sum_{j\neq i}\coth\pi({\widetilde\lambda}_i-{\widetilde\lambda}_j)-2\pi\sum_j\tanh\pi(\lambda_j-{\widetilde\lambda}_i). \label{isaddle} \end{align} The free energy $F=-\log Z(N)$ can be evaluated at the saddle point configuration \begin{align} F\approx f(\lambda,{\widetilde \lambda}) |_{{\rm saddle}}. \label{iF} \end{align} Note that the saddle point configurations may be complex although $\lambda_i$ and ${\widetilde \lambda}_i$ are real on the original integration contours in \eqref{ZN}.
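As a sanity check, one can verify numerically that \eqref{isaddle} is indeed the gradient of \eqref{iff}. The following sketch does this for $N=2$ by comparing a central finite difference of $f$ with the right-hand side of the first equation in \eqref{isaddle}; the values of $k$, $\zeta$ and the eigenvalues below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative parameters and a generic real configuration for N = 2;
# these numbers are arbitrary, chosen only to test the identity.
k, zeta = 2.0, 0.5
lam = np.array([0.1, 0.7])
lamt = np.array([0.3, -0.4])

def f(lam, lamt):
    """The function f(lambda, lambda-tilde) of the matrix model action."""
    s = 1j * np.pi * k * (np.sum(lam**2) - np.sum(lamt**2))
    s -= 2j * np.pi * zeta * (np.sum(lam) + np.sum(lamt))
    n = len(lam)
    for i in range(n):
        for j in range(i):
            s -= np.log(np.sinh(np.pi * (lam[i] - lam[j]))**2 + 0j)
            s -= np.log(np.sinh(np.pi * (lamt[i] - lamt[j]))**2 + 0j)
    for i in range(n):
        for j in range(n):
            s += np.log(np.cosh(np.pi * (lam[i] - lamt[j]))**2 + 0j)
    return s

def saddle_rhs(i):
    """Right-hand side of the saddle point equation for lambda_i."""
    g = 2j * np.pi * k * lam[i] - 2j * np.pi * zeta
    g -= 2 * np.pi * sum(1 / np.tanh(np.pi * (lam[i] - lam[j]))
                         for j in range(len(lam)) if j != i)
    g += 2 * np.pi * sum(np.tanh(np.pi * (lam[i] - lamt[j]))
                         for j in range(len(lamt)))
    return g

# Central finite difference of f in each lambda_i vs the analytic gradient.
h = 1e-6
max_err = 0.0
for i in range(2):
    lp, lm = lam.copy(), lam.copy()
    lp[i] += h
    lm[i] -= h
    num_grad = (f(lp, lamt) - f(lm, lamt)) / (2 * h)
    max_err = max(max_err, abs(num_grad - saddle_rhs(i)))
```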
Below we will find solutions of the saddle point equations \eqref{isaddle}. For this purpose the symmetries of the equations are helpful, as argued in \cite{HKPT}. In the case of the ABJM theory, i.e. $\zeta=0$, the saddle point equations \eqref{isaddle} are invariant under both of the transformations $\lambda_i\rightarrow \pm{\widetilde\lambda}^*_i$. Under these transformations the free energy transforms as $f(\lambda,{\widetilde \lambda}) \rightarrow (f(\lambda,{\widetilde \lambda}))^*$. The solutions therefore always come in pairs, except for the case $\lambda_i=\pm {\widetilde\lambda}^*_i$. It is then natural to assume that the lowest free energy will be realized by such self-conjugate configurations.\footnote{ If we assume a dual gravity description, paired solutions will correspond to semi-classical solutions which are not allowed as the lowest free energy configuration. } This was indeed confirmed in the ABJM theory. For a general complex deformation, these exchange symmetries are broken. There are, however, two special choices of $\zeta$ for which one of the $\mathbb{Z}_2$ symmetries remains. For a real $\zeta$, which corresponds to the mass deformation, the saddle point equations are invariant under $\lambda_i \rightarrow -{\widetilde\lambda}^*_i$. Hence we will impose the ansatz $\lambda_i =-{\widetilde\lambda}^*_i$ to solve the saddle point equations. The other choice is a purely imaginary $\zeta$, where the remaining symmetry is $\lambda_i \rightarrow {\widetilde\lambda}^*_i$ and we should assume $\lambda_i={\widetilde\lambda}^*_i$. \subsection{Imaginary FI-parameter (known case)} \label{ifi} Before going on to the real mass deformation, we shall investigate the case of a purely imaginary FI-parameter\footnote{ The reader interested only in the results for the mass deformed ABJM theory may skip this subsection. } \begin{align} \zeta=-i\xi,\quad \xi\in\mathbb{R}.
\end{align} Though the matrix model is equivalent to that for the $R$-charge deformation of the ABJM theory studied in \cite{J,JKPS} (see also \cite{CDFKS}), it is useful to consider this model as a demonstration of the general ideas in the evaluation of the free energy in the large $N$ limit. Interestingly, however, the results for the mass deformed ABJM theory and the ``analytically continued'' version we consider here are substantially different in various ways, contrary to the naive expectation. \subsubsection{Analytical solution in large $N$ limit} \label{anal} In the case of a purely imaginary FI parameter, we can find the solution to the saddle point equations \eqref{isaddle} in the large $N$ limit by evaluating the equations up to ${\cal O}(N^0)$. As discussed above, we set \begin{align} \lambda_{i}={\widetilde\lambda}_i^*. \end{align} Furthermore, we shall assume \begin{align} \lambda_{i}=N^{\alpha}x_{i}+iy_{i},\quad {\widetilde\lambda}_i=N^{\alpha}x_i-iy_i, \end{align} with $x_{i}$ and $y_{i}$ being of order $\mathcal{O}(N^0)$. We have introduced a factor $N^{\alpha}$ to represent the growth of the real part of the eigenvalues, where the scaling exponent $\alpha$ will be determined later.\footnote{Although it is difficult to show that there are no solutions without this assumption, we believe the assumption is valid for the lowest free energy configuration, partially based on some numerical calculations.} In the large $N$ limit, we can define continuous functions $x(s),y(s): [0,1]\rightarrow \mathbb{R}$ to replace $x_i$ and $y_i$ as \begin{align} \label{iassump} x_{i}=x\Bigl(\frac{i}{N}\Bigr),\quad y_{i}=y\Bigl(\frac{i}{N}\Bigr). \end{align} Here we have ordered the eigenvalues so that $x(s)$ is a strictly increasing function.
It is more reasonable to take the real part of the eigenvalues $x$, rather than $s$, as the fundamental variable and to introduce the eigenvalue density $\rho(x)$ in the $x$-direction \begin{align} \rho(x)=\frac{ds}{dx} \end{align} which is normalized to unity \begin{align} \int_Idx\rho(x)=1\label{unit} \end{align} so that \begin{align} \sum_i(\cdots)_i \rightarrow N\int_Idx\rho(x)(\cdots )(x). \end{align} Here $I$ is the support of the eigenvalues, which we shall assume to be a single finite interval $I=[a,b]$. In the continuum notation the saddle point equations \eqref{isaddle} become \begin{align} \label{icsaddle} 0=-ik(N^{\alpha}x+iy(x))+\xi+N\int_{I}dx'\rho(x')\coth\pi\bigl[(x-x')N^{\alpha}+i(y(x)-y(x'))\bigr] \nonumber \\ -N\int_{I}dx'\rho(x')\tanh\pi\bigl[(x-x')N^{\alpha}+i(y(x)+y(x'))\bigr]. \end{align} We regard an integral whose integrand is singular at $x=x^{\prime}$ as a principal value integral. Solving the saddle point equation \eqref{icsaddle} now means finding functions $y(x)$ and $\rho(x)$ which satisfy \eqref{icsaddle} and the normalization \eqref{unit}. We shall now consider the large $N$ expansion of the last two terms in \eqref{icsaddle}, including the integration over $x^\prime$. Since the arguments of $\coth$ and $\tanh$ are scaled by $N^\alpha$, this is achieved by approximating them by the sign of the real part of the arguments and evaluating the deviations by integration by parts. First, we note the following expansion formulas \begin{align} &\tanh(z)=\begin{cases} \displaystyle 1-2\sum_{n=1}^{\infty}(-1)^{n-1}e^{-2nz} & ({\rm Re}(z) \geq0) \\ \displaystyle -1+2\sum_{n=1}^{\infty}(-1)^{n-1}e^{2nz} & ({\rm Re}(z)<0) \end{cases},\nonumber \\ &\coth(z)= \begin{cases} \displaystyle 1+2\sum_{n=1}^{\infty}e^{-2nz} & ({\rm Re}(z) \geq0) \\ \displaystyle -1-2\sum_{n=1}^{\infty}e^{2nz} & ({\rm Re}(z)<0) \end{cases}. \label{iseries2} \end{align} The leading terms in \eqref{iseries2} come from the sign function approximation.
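The expansions \eqref{iseries2} are easy to check numerically for complex arguments with ${\rm Re}(z)>0$; a quick sketch (the sample points are arbitrary illustrative choices):

```python
import numpy as np

n = np.arange(1, 60)  # truncation order; terms decay like e^{-2n Re(z)}
max_err = 0.0
for z in [0.5 + 0.3j, 1.2 - 0.7j, 2.0 + 0.0j]:  # arbitrary points with Re(z) > 0
    tanh_series = 1 - 2 * np.sum((-1.0)**(n - 1) * np.exp(-2 * n * z))
    coth_series = 1 + 2 * np.sum(np.exp(-2 * n * z))
    max_err = max(max_err,
                  abs(np.tanh(z) - tanh_series),
                  abs(1 / np.tanh(z) - coth_series))
```

With the exponential decay of the terms, the truncated series already agrees with $\tanh$ and $\coth$ to machine precision.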
In the two integrals these leading terms precisely cancel: \begin{align} &N\int_{I}dx'\rho(x')\coth\pi\bigl[(x-x')N^{\alpha}+i(y(x)-y(x'))\bigr]\nonumber \\ &\quad\quad\quad\quad \quad\quad\quad\quad -N\int_{I}dx'\rho(x')\tanh\pi\bigl[(x-x')N^{\alpha}+i(y(x)+y(x'))\bigr]\nonumber \\ &\sim N\int_Idx^\prime\rho(x^\prime)\sgn(x-x^\prime)-N\int_Idx^\prime\rho(x^\prime)\sgn(x-x^\prime)=0. \label{cancel} \end{align} Since the real part of the arguments grows with $N^{\alpha}$, the contributions from the remaining terms $e^{-2 n z}$ in \eqref{iseries2} appear to be exponentially suppressed in the large $N$ limit and not to contribute to the $1/N$ expansion. However, the contributions from the integration near $z \sim 0$ give $1/N^{\alpha}$ corrections. We can evaluate these terms in \eqref{iseries2} by separating the integration interval into $x>x^{\prime}$ and $x<x^{\prime}$ and integrating by parts \begin{align} &N\int_{I}dx'\rho(x')\coth\pi\bigl[(x-x')N^{\alpha}+i(y(x)-y(x'))\bigr]\nonumber \\ &\quad\quad\quad\quad \quad\quad\quad\quad -N\int_{I}dx'\rho(x')\tanh\pi\bigl[(x-x')N^{\alpha}+i(y(x)+y(x'))\bigr]\nonumber \\ &=-2iN^{1-\alpha}\rho(x)\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{\pi n}\sin(4n\pi y(x))-N^{1-2\alpha}\dot{\rho}(x)\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{\pi^2n^2}\cos(4n\pi y(x))\nonumber \\ &\quad\quad -N^{1-2\alpha}\dot{\rho}(x)\sum_{n=1}^{\infty}\frac{1}{\pi^2n^2}+2N^{1-2\alpha}\rho(x)\dot{y}(x)\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{\pi n}\sin(4n\pi y(x))+\mathcal{O}(N^{1-3\alpha}), \label{evalu} \end{align} where we have used the following formula \begin{align} \label{intformula} \int_{a}^{b}g(x)e^{Ax+iy(x)}dx=\sum_{\ell=0}^{\infty}\frac{(-1)^{\ell}}{A^{\ell+1}}\biggl[\frac{d^{\ell}}{dx^{\ell}}\bigl(g(x)e^{iy(x)}\bigr)e^{Ax}\biggr]^{b}_{a} \end{align} with $A$ an arbitrary constant. In our case $A$ is proportional to $N^{\alpha}$, so this formula gives the $1/N^{\alpha}$ expansion.
Here we have kept the terms up to ${\cal O}(N^{1-2\alpha})$ since these terms will be the leading contributions. Plugging \eqref{evalu} into the saddle point equation \eqref{icsaddle}, we finally obtain two equations from the real part and the imaginary part \begin{align} \begin{cases} ({\rm imaginary \ part})=0&\rightarrow\quad -kN^{\alpha}x-4N^{1-\alpha}\rho(x)y(x)=0\\ ({\rm real \ part})=0&\rightarrow\quad \displaystyle ky(x)+\xi-N^{1-2\alpha}\dot{\rho}(x)\Bigl[\frac{1}{4}-4y^{2}(x)\Bigr]+4N^{1-2\alpha}\rho(x)y(x)\dot{y}(x)=0 \end{cases}, \label{sadi} \end{align} where the dot ``$\cdot$'' denotes the derivative with respect to $x$. We have used the following Fourier series expansion formulas, assuming $-\frac{1}{4}\leq y(x) \leq \frac{1}{4}$: \begin{align} \label{ifourier} \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^2}\cos(4\pi ny)=&\frac{\pi^2}{12}-4\pi^2y^2, \quad \quad -\frac{1}{4}\leq y\leq \frac{1}{4}, \\ \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}\sin(4\pi ny)=&2\pi y, \ \quad \quad \quad \quad \quad -\frac{1}{4}\leq y\leq \frac{1}{4}. \end{align} Outside the range $-\frac{1}{4}\leq y(x) \leq \frac{1}{4}$, \eqref{sadi} is no longer correct. Although we will not consider this possibility here, the formulas can be generalized by taking into account the periodicity of the trigonometric functions. In order to obtain a non-trivial solution we have to balance the scalings of the two terms in the imaginary part of \eqref{sadi}, hence we shall choose \begin{align} \alpha=\frac{1}{2}. \label{cons} \end{align} This choice also balances the scalings of all the terms in the real part of \eqref{sadi}. Note that the non-local saddle point equations \eqref{icsaddle} have reduced to the local differential equations \eqref{sadi}. This is because the non-local part of the equation vanishes under the assumption $\lambda_{i}={\widetilde \lambda}^*_i$, as we have seen in \eqref{cancel}.
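The Fourier series \eqref{ifourier} can likewise be checked numerically on the interior of the interval (at the endpoints $y=\pm\frac{1}{4}$ the second series converges to the midpoint of its jump, so they are avoided here); a quick sketch with arbitrary sample points:

```python
import numpy as np

n = np.arange(1, 200001)  # partial sum; the second series converges only like 1/n

def cos_series(y):
    return np.sum((-1.0)**(n - 1) / n**2 * np.cos(4 * np.pi * n * y))

def sin_series(y):
    return np.sum((-1.0)**(n - 1) / n * np.sin(4 * np.pi * n * y))

ys = [-0.2, -0.1, 0.0, 0.1, 0.2]  # interior sample points with |y| < 1/4
err_cos = max(abs(cos_series(y) - (np.pi**2 / 12 - 4 * np.pi**2 * y**2))
              for y in ys)
err_sin = max(abs(sin_series(y) - 2 * np.pi * y) for y in ys)
```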
The saddle point equations can be solved by \begin{align} y(x)=-\frac{kx}{4(4\xi x+C)}, \quad \rho(x)=4\xi x+C, \label{isol} \end{align} where $C$ is an integration constant which is determined from the normalization condition \eqref{unit} as \begin{align} C=\frac{1}{b-a}-2\xi(b+a). \end{align} Formally the solution \eqref{isol} implies that $y(x)$ diverges at $x=-\frac{C}{4\xi}$, and that the density $\rho(x)$ is negative for $x<-\frac{C}{4\xi}$, which obviously contradicts the notion of an eigenvalue density. We assume that these points are excluded from the support $I$. Note that although we have found the solution $(y(x),\rho(x))$, the support $I$ is still completely undetermined. The support is determined by the extremization of the free energy, as we will see in the next subsection. \subsubsection{Leading behavior of free energy} \label{imfree} From the solution \eqref{isol} obtained in the last subsection, we can compute the free energy in the large $N$ limit \eqref{iff}. We will obtain the free energy as a function of the edges of the support $(a,b)$. As in the case of the ABJM theory, we can determine the support by choosing the local minimum of the free energy under the variation of $a$ and $b$. We shall start with the continuum limit of the free energy \eqref{iff} \begin{align} \label{icfree} \nonumber f(\lambda,{\widetilde\lambda})=&-4N^{\frac{3}{2}}\pi k\int_{I}dx\,x\rho(x)y(x)-4N^{\frac{3}{2}}\pi \xi\int_{I}dx\rho(x)x\\ \nonumber &-2N^{2}{\rm Re}\int_{I}dx\int_{I}dx^{\prime}\rho(x)\rho(x^{\prime})\log\sinh N^{\frac{1}{2}}\pi\bigl[(x-x^{\prime})+i(y(x)-y(x^{\prime}))\bigr]\\ &+2N^{2}\int_{I}dx\int_{I}dx^{\prime}\rho(x)\rho(x^{\prime})\log\cosh N^{\frac{1}{2}}\pi\bigl[(x-x^{\prime})+i(y(x)+y(x^{\prime}))\bigr].
\end{align} The last two double integrations can be evaluated in a parallel way to the last subsection, with the help of the formulas obtained by integrating \eqref{iseries2} \begin{align} &\log\cosh(z)=\begin{cases} \displaystyle z+\sum_{n=1}^{\infty}\frac{(-1)^{n-1}e^{-2nz}}{n}-\log2 & ({\rm Re}(z) \geq0) \\ \displaystyle -z+\sum_{n=1}^{\infty}\frac{(-1)^{n-1}e^{2nz}}{n}-\log2 & ({\rm Re}(z)<0) \end{cases},\nonumber \\ &\log\sinh(z)= \begin{cases} \displaystyle z-\sum_{n=1}^{\infty}\frac{e^{-2nz}}{n}-\log2 & ({\rm Re}(z) \geq0) \\ \displaystyle -z-\sum_{n=1}^{\infty}\frac{e^{2nz}}{n}-\log2 \pm i\pi & ({\rm Re}(z)<0) \end{cases}. \label{iseries} \end{align} The contributions to the free energy from the first terms in \eqref{iseries} again cancel, hence no double-integration terms remain in the free energy. The contribution from the second terms in \eqref{iseries} is evaluated by integration by parts with the formula \eqref{intformula}, and we obtain \begin{align} \label{evalu2} 2\pi N^{\frac{3}{2}}\int_{I}dx\Bigl[\frac{1}{4}-4y^{2}(x)\Bigr]\rho^{2}(x), \end{align} where we have used the formula \eqref{ifourier} to reorganize the sum over $n$. It is enough to keep the terms up to $N^{\frac{3}{2}}$ since the Chern-Simons and FI terms are already of ${\cal O}(N^{\frac{3}{2}})$ for $\alpha=\frac{1}{2}$. Plugging \eqref{evalu2} into \eqref{icfree} and performing the single integrations for the solution \eqref{isol}, we finally obtain \begin{align} \frac{f}{\pi N^{\frac{3}{2}}}=\frac{k^2(b^3-a^3)}{6}\Bigl(1-\frac{16\xi^2}{k^2}\Bigr)+\frac{1}{2(b-a)}-2\xi(b+a)+2\xi^2(b+a)^2(b-a). \end{align} In order for the free energy to have a local minimum with respect to $(a,b)$, the deformation $\xi$ is required to satisfy the inequality \begin{align} 1-\frac{16\xi^2}{k^2}>0.
\end{align} Inside this region, the values of $a$ and $b$ can be uniquely determined as\footnote{ So far we have not considered the other non-local constraint, which follows by integrating the real part of the saddle point equation \eqref{icsaddle}, \begin{align} k\int_a^b dx\rho(x)y(x)+\xi=0, \end{align} and which should have been considered before the variation of the free energy. The solutions $(a,b)$ \eqref{isol2} indeed satisfy this condition. } \begin{align} a=-\frac{1}{\sqrt{2k}}\Bigl(1-\frac{4\xi}{k}\Bigr),\quad b=\frac{1}{\sqrt{2k}}\Bigl(1+\frac{4\xi}{k}\Bigr), \label{isol2} \end{align} on which the free energy is \begin{align} F=\frac{\sqrt{2}}{3}\pi N^{\frac{3}{2}}k^{\frac{1}{2}}\biggl(1-\frac{16\xi^2}{k^2}\biggr). \label{resultif} \end{align} Substituting these values of $(a,b)$ into the solution \eqref{isol}, we finally obtain \begin{align} y(x)=-\frac{kx}{4\Bigl[4\xi x+\sqrt{\frac{k}{2}}\Bigl(1-\frac{16\xi^2}{k^2}\Bigr)\Bigr]}, \quad \rho(x)=4\xi x+\sqrt{\frac{k}{2}}\biggl(1-\frac{16\xi^2}{k^2}\biggr). \end{align} Note that the solution indeed satisfies the bound $-\frac{1}{4}\le y(x)\le \frac{1}{4}$ we have assumed, as well as the positivity of the density $\rho(x)$ on the support. Before closing this section we shall comment on the relation to the results obtained in \cite{JKPS}, where the ABJM theory was deformed by assigning non-canonical $R$-charges $\Delta$ to the bifundamental matter fields $A_i$ and $B_i$. Our solution \eqref{isol} and free energy \eqref{resultif} correspond to a special case of their results (see section 5 in \cite{JKPS}) with the parameters related as $\Delta_{A_{1}}=\Delta_{A_{2}}=\frac{1}{2}+\frac{2\xi}{k}$, $\Delta_{B_{1}}=\Delta_{B_{2}}=\frac{1}{2}-\frac{2\xi}{k}$ and $\Delta_{m}=0$. The dual gravity solution was also constructed in \cite{FP}, which is consistent with the field theory result. In the next section, we will consider the case of real mass with a method similar to the one used here.
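The closed-form results above admit a simple numerical cross-check: with $(a,b)$ from \eqref{isol2}, the density \eqref{isol} should integrate to one over $I=(a,b)$, and the non-local constraint $k\int_a^b dx\,\rho(x)y(x)+\xi=0$ should hold. The following plain-Python sketch (midpoint quadrature; the parameter values $k=2$, $\xi=0.1$ are purely illustrative) performs this check:

```python
import math

def check_imaginary_fi_solution(k, xi, n=50_000):
    """Numerically verify the normalization and the non-local constraint
    for the imaginary-FI saddle point solution (illustrative check)."""
    # Edges of the support from eq. (isol2)
    a = -(1.0 / math.sqrt(2 * k)) * (1 - 4 * xi / k)
    b = (1.0 / math.sqrt(2 * k)) * (1 + 4 * xi / k)
    C = 1.0 / (b - a) - 2 * xi * (b + a)           # integration constant
    rho = lambda x: 4 * xi * x + C                 # eigenvalue density
    y = lambda x: -k * x / (4 * (4 * xi * x + C))  # imaginary part
    # Midpoint-rule quadrature over the support I = (a, b)
    dx = (b - a) / n
    xs = [a + (i + 0.5) * dx for i in range(n)]
    norm = sum(rho(x) for x in xs) * dx                        # should be 1
    constraint = k * sum(rho(x) * y(x) for x in xs) * dx + xi  # should be 0
    return norm, constraint
```

Both quantities agree with their analytic values up to quadrature error.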
The naive guess is that the free energy and the eigenvalue distribution in the mass deformed ABJM theory would be obtained by simply replacing $\xi \rightarrow i\zeta$ and assuming $\zeta\in\mathbb{R}$. Such an ``analytic continuation'' of the parameter, however, is not allowed in general.\footnote{ Our calculation in the large $N$ limit breaks holomorphy in the sense that the eigenvalue distribution is separated into its real part and its imaginary part, which are real functions. However, it is expected that the partition function \eqref{ZN} is holomorphic at least around $\zeta=0$ from the general argument in \cite{CDFK}. In fact, we will confirm that the partition function is holomorphic in the large $N$ limit when the parameter is sufficiently small in section \ref{realmass}. } Indeed, the behavior of the matrix model \eqref{iff} greatly depends on whether the FI parameter $\zeta$ is real or imaginary. \section{Large $N$ behavior of mass deformed ABJM theory} \label{realmass} In this section, we investigate the leading behavior of the mass deformed ABJM theory by the saddle point approximation. Though the method we will use is parallel to that in the last section, the results are substantially different. We will see that there is a solution for real $\zeta$ for which the free energy is the ``analytic continuation'' of \eqref{resultif} while the eigenvalue distribution is completely different from \eqref{isol}. Moreover, there is another solution which gives a smaller free energy for $(3-2\sqrt{2})/4\leq \zeta/k \leq 1/4$, so that a phase transition occurs as we increase $\zeta$. These results may reflect the nontrivial structure of the vacua of the mass deformed ABJM theory. Below we first provide the general solutions to the saddle point equations \eqref{isaddle} with real mass $\zeta\in\mathbb{R}$ in section \ref{gensol}. We shall assume $k>0$ and $\zeta>0$ without loss of generality.
As in the case of the imaginary FI parameter, the solutions contain integration constants to be determined by the non-local constraints. In section \ref{fereal} we determine these constants and evaluate the free energy for each solution. \subsection{General solutions as continuous distribution} \label{gensol} To solve the saddle point equations, we need to impose several ansatzes on the eigenvalue distributions. First, we impose the following reality condition on the eigenvalues, \begin{align} {\widetilde\lambda}_i=-\lambda_i^*, \label{real} \end{align} as discussed in the previous section. Second, we switch to continuous distributions on the $x$-support $I$ as in the previous section: \begin{align} \label{continu} \lambda_i\rightarrow \lambda(x)=N^{\frac{1}{2}}(x+iy(x)),\quad {\widetilde\lambda}_i\rightarrow {\widetilde\lambda}(x)=N^{\frac{1}{2}}(-x+iy(x)),\quad x\in I. \end{align} The overall scaling $N^{\frac{1}{2}}$ is observed in the numerical analysis of the saddle point equations \eqref{isaddle} (see figure \ref{numerical}). \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{101400001.eps} \end{center} \caption{ The numerical solutions $\{\lambda_i\}_{i=1}^N$ of the saddle point equation \eqref{isaddle} with the reality condition \eqref{real}. The blue circles are the eigenvalue distribution for $N=20,k=5,\zeta=0.05$, while the red triangles are that for $N=80,k=5,\zeta=0.05$. The solutions are obtained by introducing a fictitious time $t$ and analyzing the late-time solution $\lambda_i(t)$ of the heat equation $d\lambda_i/dt=\partial F/\partial \lambda_i$ \cite{HKPT}. The graph indicates that the maximum of $\mathrm{Re}[\lambda_i]$ doubles as $N$ is quadrupled. } \label{numerical} \end{figure} The eigenvalue distribution in the $x$-direction is encoded in the density $\rho(x)$ normalized as \eqref{unit}.
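The fictitious-time relaxation used for figure \ref{numerical} integrates $d\lambda_i/dt=\partial F/\partial \lambda_i$ until the configuration becomes stationary. The scheme itself can be sketched in a few lines of plain Python; the quadratic $F$ in the example below is a toy stand-in, not the actual matrix-model free energy:

```python
def relax(grad_F, lam0, dt=1e-2, steps=2_000):
    """Fictitious-time relaxation d(lam_i)/dt = dF/d(lam_i):
    forward-Euler integration until the configuration is stationary."""
    lam = list(lam0)
    for _ in range(steps):
        g = grad_F(lam)
        lam = [l + dt * gi for l, gi in zip(lam, g)]
    return lam

# Toy example: F = -sum_i (lam_i - i)^2 / 2, i.e. dF/dlam_i = -(lam_i - i),
# whose stationary configuration is lam_i = i.
grad_F = lambda lam: [-(l - i) for i, l in enumerate(lam)]
lam_star = relax(grad_F, [5.0, 5.0, 5.0])
# lam_star relaxes towards [0.0, 1.0, 2.0]
```

In the actual computation the gradient of the matrix-model free energy with respect to the (complex) eigenvalues replaces the toy \texttt{grad\_F}.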
Taking the continuum limit, the saddle point equation \eqref{isaddle} becomes \begin{align} 0=-iN^{\frac{1}{2}}k(x+iy(x))+i\zeta+N\int_{I}dx'\rho(x')\coth\pi N^{\frac{1}{2}}\bigl[(x-x')+i(y(x)-y(x'))\bigr]\nonumber \\ -N\int_{I}dx'\rho(x')\tanh\pi N^{\frac{1}{2}}\bigl[(x+x')+i(y(x)-y(x'))\bigr]. \label{csaddle} \end{align} Now let us evaluate the last two integrations with the expansion formulas \eqref{iseries2}. In the current case the non-local contributions from the first terms in the formula \eqref{iseries2} do not cancel: \begin{align} N\int_{I}dx^{\prime}\rho(x^{\prime})\bigl[\sgn(x-x^{\prime})-\sgn(x+x^{\prime})\bigr]. \label{nonloc2} \end{align} These terms in the saddle point equation would be of ${\cal O}(N)$. If there were no cancellation in this non-local contribution, it would be impossible to solve the saddle point equations \eqref{isaddle}, since there are no other terms of comparable order. However, we can decrease the order in $N$ of the non-local contribution by assuming the support $I$ to be symmetric under the reflection $x\rightarrow -x$. With this choice of $I$ it is reasonable to define the odd- and even-function parts of $\rho(x)$ as \begin{align} \rho(x)=\rho_{ev}(x)+N^{-\frac{1}{2}}\rho_{od}(x), \label{rhoevrhood} \end{align} where the scalings of the even/odd parts are indeed required from the normalization condition \eqref{unit} and the condition \begin{align} N^{\frac{1}{2}}\int_Idx\rho(x)x=\frac{\zeta}{k} \label{imaginary} \end{align} which is the continuum version of the summation of the imaginary part of the saddle point equation \eqref{isaddle} over the index $i$. The odd part of the distribution is of ${\cal O}(N^{-\frac{1}{2}})$ because the $\text{r.h.s.}$ of \eqref{imaginary} is $N$-independent. The non-local part in the saddle point equation \eqref{csaddle} becomes \begin{align} N\int_{I}dx^{\prime}\rho(x^{\prime})\bigl[\sgn(x-x^{\prime})-\sgn(x+x^{\prime})\bigr]=2N^{\frac{1}{2}}\int_{I}dx^{\prime}\rho_{od}(x^{\prime})\sgn(x-x^{\prime}).
\end{align} Though the contribution is still non-vanishing, we have managed to reduce the order in $N$. To solve the saddle point equations \eqref{csaddle} it is necessary to postulate different scalings in $N$ also for the even/odd-function parts of $y(x)$ as \begin{align} \label{continu2} y(x)=y_{ev}(x)+N^{-\frac{1}{2}}y_{od}(x). \end{align} The scaling of each part is required for the consistency of the saddle point equations. See appendix \ref{derive} for details. Let us continue to evaluate the last two terms in \eqref{csaddle} with the formulas \eqref{iseries2}. After substituting the above ansatz, the second terms in \eqref{iseries} are evaluated by dividing the integration interval of $x^{\prime}$ into the two intervals $x>x^{\prime}$ and $x<x^{\prime}$. Then we can integrate them by parts as we have done in section \ref{anal}. The leading part of the saddle point equations \eqref{csaddle} can also be divided into four parts, the real/imaginary and the even/odd-function parts. Massaging the resulting equations, we finally obtain the following four equations: \begin{align} 0=&k\frac{d}{dx}\bigl[xy_{ev}(x)\bigr]+2\int_Idx^\prime \rho_{od}(x^\prime)\sgn(x-x^\prime),\label{saddle1}\\ 0=&kx+\frac{4\rho_{ev}(y_{od}+h\sgn(x))}{1+\dot{y}_{ev}^2},\label{saddle2}\\ 0=&-2k\dot{y}_{ev}(x)h{\rm sgn}(x)+\zeta(1-\dot{y}^2_{ev}(x))+\frac{1}{4}\frac{\rho_{ev}(x)\ddot{y}_{ev}(x)}{1+\dot{y}^2_{ev}(x)},\label{saddle3}\\ 0=&-kh{\rm sgn}(x)-\zeta\dot{y}_{ev}(x)-\frac{1}{4}\frac{d}{dx}\biggl[\frac{\rho_{ev}(x)}{1+\dot{y}^2_{ev}(x)}\biggr]\label{saddle4}, \end{align} where we abbreviated the derivative $\frac{d}{dx}$ by dots ``$\cdot$'' as in the last section. Here $h\in\mathbb{Z}/2$ is determined such that \begin{align} \label{bd} -\frac{1}{4}\le y_{od}(x)+h\sgn(x)< \frac{1}{4}. \end{align} For the details of the derivation, see appendix \ref{calculations}.
Differentiating the first equation \eqref{saddle1}, we obtain a set of four differential equations for the four unknown functions $(y_{ev}(x),y_{od}(x),\rho_{ev}(x),\rho_{od}(x))$, whose general solutions are the following two: \begin{align} \mathrm{I}\quad&\begin{cases} y_{ev}(x)&=-\omega|x|-\sqrt{(1+\omega^2)(x^2+a)}+b\\ y_{od}(x)&=-\frac{kx}{16\zeta\sqrt{(1+\omega^2)(x^2+a)}}-h\sgn(x)\\ \rho_{ev}(x)&=4\zeta(1+\omega^2)\Bigl[(2x^2+a)\sqrt{\frac{1+\omega^2}{x^2+a}}+2\omega|x|\Bigr]\\ \rho_{od}(x)&=\frac{kx\sqrt{1+\omega^2}(2x^2+3a)}{4(x^2+a)^{\frac{3}{2}}}+\frac{k\omega\sgn(x)}{2} \end{cases},\nonumber \\ \nonumber \\ \mathrm{II}\quad&\begin{cases} y_{ev}(x)&=-(\omega+\sqrt{1+\omega^2})|x|+a\\ y_{od}(x)&=-\frac{kx}{2}\frac{\sqrt{1+\omega^2}(\sqrt{1+\omega^2}+\omega)}{8\zeta|x|(1+\omega^2)(\sqrt{1+\omega^2}+\omega)+b}-h\sgn(x)\\ \rho_{ev}(x)&=8\zeta(1+\omega^2)(\sqrt{1+\omega^2}+\omega)|x|+b\\ \rho_{od}(x)&=\frac{k(\sqrt{1+\omega^2}+\omega)\sgn(x)}{2} \end{cases}, \label{solns} \end{align} where \begin{align} \omega=\frac{hk}{\zeta}. \end{align} Here $a,b\in\mathbb{R}$ are integration constants. These constants, together with the choice of the support $I$, should be determined by the non-local constraints \eqref{unit}, \eqref{imaginary} and \eqref{saddle1}. We stress that there are two independent solutions. As we shall see below, only one of these solutions is connected to that of the ABJM theory in the undeformed limit.
\subsection{Free energy} \label{fereal} The contribution to the free energy in the continuum limit can be written as \begin{align} \label{cfree} \nonumber f(\lambda,{\widetilde\lambda})=&-4\pi N^{2}k\int_{I}dxx\rho(x)y(x)+4N^{\frac{3}{2}}\pi\zeta\int_{I}dx\rho(x)y(x)\\ \nonumber &-2N^{2}{\rm Re}\int_{I}dx\int_{I}dx^{\prime}\rho(x)\rho(x^{\prime})\log\sinh N^{\frac{1}{2}}\pi\bigl[(x-x^{\prime})+i(y(x)-y(x^{\prime}))\bigr]\\ &+2N^{2}\int_{I}dx\int_{I}dx^{\prime}\rho(x)\rho(x^{\prime})\log\cosh N^{\frac{1}{2}}\pi\bigl[(x+x^{\prime})+i(y(x)-y(x^{\prime}))\bigr]. \end{align} After substituting the ansatz \eqref{continu}, \eqref{rhoevrhood} and \eqref{continu2} from section \ref{gensol}, we can evaluate the last two terms in \eqref{cfree} with the formulas \eqref{iseries} as in section \ref{imfree} (see appendix \ref{derive} for the details): \begin{align} f(\lambda,{\widetilde\lambda})&=N^{\frac{3}{2}}\biggl[-4\pi k\int_{I}dx\Bigl(x\rho_{ev}(x)y_{od}(x)+x\rho_{od}(x)y_{ev}(x)\Bigr)+4\pi\zeta\int_{I}dxy_{ev}(x)\rho_{ev}(x)\nonumber \\ &\quad\quad\quad-4\pi\int_{I}dx\int_{I}dx'\rho_{od}(x)\rho_{od}(x^\prime)|x-x^\prime|\nonumber \\ &\quad\quad\quad +2\pi\int_{I}dx\frac{\rho^2_{ev}(x)}{1+\dot{y}^2_{ev}}\Bigl(\frac{1}{4}-4(y_{od}(x)+h\sgn(x))^2\Bigr)\biggr]. \label{F2} \end{align} In this section we compute this quantity for each of the solutions \eqref{solns} to the saddle point equations. For simplicity we shall assume the support $I$ to be a single segment containing the origin: \begin{align} I=(-L,L). \label{singlesupport} \end{align} \subsubsection{Free energy for solution I} To compute the free energy of solution I in \eqref{solns}, we need to determine the integration constants $(a,b)$ and the support $I$. Under the assumption of a single support \eqref{singlesupport}, we have three parameters $(a,b,L)$ to be determined.
Using the three non-local constraints \eqref{unit}, \eqref{imaginary} and \eqref{saddle1}, we will completely determine these parameters.\footnote{ This is in contrast to the ABJM theory and the $R$-charge deformation considered in section \ref{imfree}, where the parameters were not completely determined from the saddle point equations but were chosen so that the free energy is minimized. } We first note that $a$ must be non-negative so that the solution is well defined on the support. From the constraints \eqref{unit} and \eqref{imaginary} it follows that \begin{align} \sqrt{1+\frac{a}{L^2}}=\frac{-(X-1)\omega+\sqrt{(X-1)^2\omega^2+4X(1+\omega^2)}}{2X\sqrt{1+\omega^2}} \end{align} with $X=16(h^2+\zeta^2/k^2)$. We can show that the $\text{r.h.s.}$, regarded as a function of the two variables $(X,\omega)$, is always smaller than 1 for $X>1$, hence we conclude that solution I is valid only when \begin{align} h=0,\quad \frac{\zeta}{k}\le \frac{1}{4}. \label{hzeta} \end{align} Under these conditions the three parameters are determined as \begin{align} a=\frac{k}{32\zeta^2}\biggl(1-\frac{16\zeta^2}{k^2}\biggr),\quad b=\frac{\sqrt{k}}{4\sqrt{2}\zeta}\biggl(1+\frac{16\zeta^2}{k^2}\biggr),\quad L=\frac{1}{\sqrt{2k}}.
\end{align} After substituting these values, solution I is written as \begin{align} \label{sol1} y_{ev}(x)&=\frac{\sqrt{k}}{4\sqrt{2}\zeta}\biggl(1+\frac{16\zeta^2}{k^2}\biggr)-\sqrt{x^2+\frac{k}{32\zeta^2}\Bigl(1-\frac{16\zeta^2}{k^2}\Bigr)},\nonumber \\ y_{od}(x)&=- \frac{k}{16\zeta}\frac{x}{\sqrt{x^2+\frac{k}{32\zeta^2}\Bigl(1-\frac{16\zeta^2}{k^2}\Bigr)}},\nonumber \\ \rho_{ev}(x)&= 4\zeta\frac{d}{dx}\bigl(x\sqrt{x^2 + a}\bigr)=4\zeta\cdot \frac{2x^2+\frac{k}{32\zeta^2}\Bigl(1-\frac{16\zeta^2}{k^2}\Bigr)}{\sqrt{x^2+\frac{k}{32\zeta^2}\Bigl(1-\frac{16\zeta^2}{k^2}\Bigr)}},\nonumber \\ \rho_{od}(x)&=-\frac{k}{4}\frac{d^2}{dx^2}\bigl(xy_{ev}(x)\bigr)=\frac{kx}{4}\cdot \frac{2x^2+3\frac{k}{32\zeta^2}\Bigl(1-\frac{16\zeta^2}{k^2}\Bigr)}{\Bigl[x^2+\frac{k}{32\zeta^2}\Bigl(1-\frac{16\zeta^2}{k^2}\Bigr)\Bigr]^{\frac{3}{2}}}. \end{align} In figure \ref{fit} we compare the solution with the numerically obtained eigenvalue distribution. We can see that the numerical one coincides with the analytical one with good accuracy. Now that the solution is completely determined, we can compute the free energy \eqref{F2} and obtain \begin{align} \label{result} f_{\text{I}}=\frac{\pi\sqrt{2k}N^{\frac{3}{2}}}{3}\biggl(1+\frac{16\zeta^2}{k^2}\biggr). \end{align} Note that this free energy is obtained from \eqref{resultif} just by changing the parameter $\xi \rightarrow i\zeta$, while the solutions $y(x)$ and $\rho(x)$ are greatly different from \eqref{isol} and \eqref{isol2}.\footnote{We can check that these solutions are related by changing the parameter $\xi \rightarrow i\zeta$ when we rewrite them in terms of $s$ instead of $x$ (see the general solutions in \cite{NST2}). } This solution can be regarded as the solution connected to that of the ABJM theory in the sense that the saddle point configuration and the free energy are equal to those obtained in \cite{HKPT} when we take the undeformed limit $\zeta \rightarrow 0$.
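As a consistency check, the fully determined solution I should satisfy the normalization \eqref{unit}, $\int_I \rho_{ev}(x)dx=1$ (the odd part drops out of the norm), and the moment condition \eqref{imaginary}, which reduces to $\int_I x\rho_{od}(x)dx=\zeta/k$. A plain-Python sketch of this check (midpoint quadrature; $k=2$, $\zeta=0.1$ are illustrative values within the bound \eqref{hzeta}):

```python
import math

def check_solution_I(k, zeta, n=50_000):
    """Check the normalization and the moment condition for solution I."""
    a = (k / (32 * zeta**2)) * (1 - 16 * zeta**2 / k**2)
    L = 1.0 / math.sqrt(2 * k)                  # support I = (-L, L)
    rho_ev = lambda x: 4 * zeta * (2 * x**2 + a) / math.sqrt(x**2 + a)
    rho_od = lambda x: (k * x / 4) * (2 * x**2 + 3 * a) / (x**2 + a) ** 1.5
    # Midpoint-rule quadrature over the support
    dx = 2 * L / n
    xs = [-L + (i + 0.5) * dx for i in range(n)]
    norm = sum(rho_ev(x) for x in xs) * dx        # should be 1
    moment = sum(x * rho_od(x) for x in xs) * dx  # should be zeta / k
    return norm, moment
```

Both constraints are reproduced up to quadrature error.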
As $\zeta$ increases, the free energy monotonically increases until $\zeta=\frac{k}{4}$, in contrast with \eqref{resultif}. However, as we will see later, the free energy corresponding to solution II becomes smaller than that of solution I as $\zeta$ crosses a certain threshold in $0<\zeta<\frac{k}{4}$. \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{1014000012.eps} \end{center} \caption{ The blue line is $\lambda(x)=N^{\frac{1}{2}}(x+iy(x))$ for solution I \eqref{sol1}, while the red dots are the eigenvalue distribution obtained by a numerical analysis. } \label{fit} \end{figure} \subsubsection{Free energy for solution II} The free energy for solution II\footnote{ This solution does not satisfy the boundary condition, which is considered as a part of the saddle point equation \cite{NST2}. The boundary condition has been ignored in previous studies. } in \eqref{solns} with the assumption of a single support \eqref{singlesupport} can be evaluated in a similar way. Interestingly, the same condition for $h$ and $\zeta$ \eqref{hzeta} follows from \eqref{unit}, \eqref{imaginary} and the constraint $b>0$, which is required for the positivity of the eigenvalue density $\rho_{ev}(x)$. Together with the remaining equation \eqref{saddle1}, we can determine the three parameters $(a,b,L)$ as \begin{align} \label{result2} a=\frac{2\sqrt{2\zeta}}{k},\quad b=\frac{k}{2\sqrt{2\zeta}}\biggl(1-\frac{16\zeta^2}{k^2}\biggr),\quad L=\frac{\sqrt{2\zeta}}{k}. \end{align} With these relations the complete expression of solution II is \begin{align} y_{ev}(x)&=-|x|+\frac{2\sqrt{2\zeta}}{k},\nonumber \\ y_{od}(x)&=-\frac{kx}{16\zeta|x|+\frac{k}{\sqrt{2\zeta}}\Bigl(1-\frac{16\zeta^2}{k^2}\Bigr)},\nonumber \\ \rho_{ev}(x)&=8\zeta |x|+\frac{k}{2\sqrt{2\zeta}}\biggl(1-\frac{16\zeta^2}{k^2}\biggr),\nonumber \\ \rho_{od}(x)&=\frac{k}{2}\sgn(x).
\end{align} The free energy is computed as \begin{align} f_{\text{II}}=\frac{\pi\sqrt{2k}N^{\frac{3}{2}}}{3}\sqrt{\frac{k}{\zeta}}\biggl(\frac{3}{16}+\frac{14\zeta^2}{k^2}-\frac{16\zeta^4}{k^4}\biggr). \end{align} Solution II is not connected to that of the ABJM theory, since the free energy becomes infinite as $\zeta \rightarrow 0$. The free energies for the two solutions are plotted in figure \ref{fcompare}. \begin{figure}[ht!] \centering \includegraphics[width=10cm]{1013_fcomparekey.eps} \caption{The solid blue line is the free energy for solution I, while the dashed red line is that for solution II. $f_\text{ABJM}=\pi\sqrt{2k}/3$ is the value for the ABJM theory. The intersection points are at $\zeta/k=(3-2\sqrt{2})/4$ and at $\zeta/k=1/4$. } \label{fcompare} \end{figure} From the point of view of the saddle point approximation, the smallest free energy is dominant in the large $N$ limit. In this sense $f_{\text{I}}$ is preferred when $0\le \zeta/k \leq (3-2\sqrt{2})/4$ while $f_{\text{II}}$ is preferred for $(3-2\sqrt{2})/4\leq \zeta/k \leq 1/4$. Therefore, we conclude that there is a first order phase transition in the large $N$ limit of the mass deformed ABJM theory on $S^3$, with respect to the mass parameter $\zeta/k$ in this region.\footnote{ Note that the phase transition occurs only in the large $N$ limit. For finite $N$ and finite volume, the free energy is expected to be an analytic function of $\zeta$. } It is interesting to consider the gravity dual of this theory. For the imaginary FI parameter $\xi$ or the $R$-charge deformation, the corresponding gravity solution in four dimensional supergravity was obtained in \cite{FP} and the free energy \eqref{resultif} was reproduced. We can see that the parameter of the dual geometry corresponding to the $R$-charge can be consistently continued to purely imaginary values, which realizes the same free energy as solution I.
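The location of the crossing in figure \ref{fcompare} can be reproduced directly from the two expressions for the free energy. In units of $\pi\sqrt{2k}N^{\frac{3}{2}}/3$ and in terms of $t=\zeta/k$, we have $f_{\text{I}}=1+16t^2$ and $f_{\text{II}}=t^{-1/2}(3/16+14t^2-16t^4)$; a short root-finding sketch locating the lower intersection:

```python
import math

# Free energies in units of pi * sqrt(2k) * N^(3/2) / 3, with t = zeta / k
f_I = lambda t: 1 + 16 * t**2
f_II = lambda t: (3 / 16 + 14 * t**2 - 16 * t**4) / math.sqrt(t)

def bisect(g, lo, hi, tol=1e-12):
    """Locate a root of g on [lo, hi] by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Lower intersection point of the two branches, t* = (3 - 2*sqrt(2)) / 4
t_star = bisect(lambda t: f_I(t) - f_II(t), 0.01, 0.1)
```

The second crossing at $t=1/4$ can be verified in the same way.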
Our result indicates that there exists another gravity solution corresponding to solution II and that the phase transition also occurs on the gravity side. We hope to investigate these points in the future. \section{Discussion} \label{discuss} In this paper we have calculated the large $N$ behavior of the free energy of the mass deformed ABJM theory. With the localization method in \cite{KWY2,J,HHL}, the theory on $S^{3}$ reduces to a matrix model. To investigate the large $N$ behavior of the free energy we have used the saddle point approximation to the matrix model and have solved the saddle point equations. The crucial point in the analysis is that we cannot impose the reality condition $\lambda={\widetilde \lambda}^{*}$, in contrast to the ABJM case and the case of the $R$-charge deformation in \cite{JKPS}. As a result, we cannot eliminate the non-local terms in the saddle point equations coming from the one-loop determinant, whatever ansatz we choose for the eigenvalue density $\rho(x)$. We have to take the support of $\rho(x)$ to be symmetric, $I=[-b,b]$, to solve the saddle point equations. Once we take $I=[-b,b]$, one can guess that it is necessary to require the even and the odd parts of $\rho(x)$ and $y(x)$ to have different scalings in $N$. It is also important that there are two solutions of the saddle point equations: one is connected to that of the ABJM theory while the other is not. Which of the two solutions is dominant depends on the value of $\zeta/k$. This is a novel phenomenon, which was absent in the ABJM theory and in the theory deformed by an imaginary FI parameter, where the saddle point configuration was uniquely determined. Thus it would be related to the non-trivial vacuum structure of the mass deformed ABJM theory. We also stress that the free energy we obtained in this paper scales as $N^{\frac{3}{2}}$ in the large $N$ limit even though the non-local contributions survive in the free energy.
There remain several problems to be addressed in future work. In this paper we have assumed that the support $I$ of the eigenvalues is a single segment for simplicity. It would be interesting to determine solutions supported on multiple segments, and to see whether such solutions can be more preferable than the solutions obtained here. It would also be important to reveal what occurs in the regime $\zeta>\frac{k}{4}$ outside the bound \eqref{hzeta}. For example, the decompactification limit of the three sphere, studied in \cite{LM}, corresponds to the limit $\zeta\rightarrow\infty$. Since the bound automatically follows from our ansatz \eqref{continu2}, it is necessary to seek another ansatz to solve the saddle point equations. Furthermore, there are many interesting extensions of our analysis.\footnote{ See \cite{MPP} for a recent analysis on the gravity side. } For example, the 't Hooft limit of the mass deformed ABJM theory and the theory with a boundary defined in \cite{SuTe} would be interesting to study. Also, though we have only considered the strict large $N$ limit, it is interesting to study the large $N$ expansion including the sub-leading corrections in $1/N$ (see the recent works \cite{DF} and \cite{No} for $R$-charge deformations). We hope to investigate these issues in the near future. \section*{Acknowledgement} We thank Masazumi Honda for valuable discussions. $\text{T.N.}$ is grateful to Takaya Miyamoto for helpful comments on the numerical analysis, and also to Louise Anderson and Sanefumi Moriyama for many pieces of advice. The work of $\text{T.N.}$ is partly supported by the JSPS Research Fellowships for Young Scientists. Lastly, the authors would like to thank the Yukawa Institute for Theoretical Physics at Kyoto University. Discussions during the YITP workshop YITP-W-15-12 on ``Development in String theory and Quantum Field Theory'' were useful to complete this work.
{ "attr-fineweb-edu": 1.974609, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfr3xK0zjCxN_vkBQ
\section*{Acknowledgements} \input{sections/appendix} \section*{Data Availability Statement} JAX-FLUIDS is available under the MIT license at \url{https://github.com/tumaer/JAXFLUIDS}. \newpage \section{Numerical and Case Setup Files for Sod Shock Tube Test Case} \label{sec:Appendix} \begin{figure} \inputminted[numbers=left, frame=lines, fontsize=\scriptsize, framesep=2mm]{json}{./figures/code_snippets/numerical_setup.json} \caption{Numerical setup \textit{json} file for the Sod shock tube test case.} \label{fig:numerical_setup} \end{figure} \begin{figure} \inputminted[numbers=left, frame=lines, fontsize=\scriptsize, framesep=2mm]{json}{./figures/code_snippets/case_setup.json} \caption{Case setup \textit{json} file for the Sod shock tube test case.} \label{fig:case_setup} \end{figure} \section{Machine Learning in JAX-FLUIDS} \label{sec:BackwardPass} Having showcased that the JAX-FLUIDS solver functions very well as a modern and easy-to-use fluid dynamics simulator, we now discuss its capabilities for ML research, in particular its automatic differentiation capabilities for end-to-end optimization. In this section, we demonstrate that we can successfully differentiate through the entire JAX-FLUIDS solver. We validate the AD gradients for single- and two-phase flows. We then showcase the potential of JAX-FLUIDS by training a data-driven Riemann solver leveraging end-to-end optimization. 
\subsection{Deep Learning Fundamentals} \label{subsec:Deeplearning} Given a data set of input-output pairs $\mathcal{D} = \left\{(\mathbf{x}_1, \mathbf{y}_1), ..., (\mathbf{x}_N, \mathbf{y}_N)\right\}$ with $\mathbf{x} \in \mathcal{X}$ and $\mathbf{y} \in \mathcal{Y}$, supervised learning tries to find a function $f: \mathcal{X} \rightarrow \mathcal{Y}$ which (approximately) minimizes an average loss \begin{align} \mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} L(\mathbf{y}_i, f(\mathbf{x}_i)), \end{align} where $L: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}$ is a suitable loss function. $\mathcal{X}$ and $\mathcal{Y}$ are input and output spaces, and $f \in \mathcal{F}$, where $\mathcal{F}$ is the hypothesis space. We use $\mathbf{\hat{y}}_i$ to denote the output of the function $f$ for input $\mathbf{x}_i$, $\mathbf{\hat{y}}_i = f(\mathbf{x}_i)$. A popular loss in regression tasks is the mean-squared error (MSE) \begin{align} \mathcal{L} = MSE(\mathbf{x}, \mathbf{y}) = \frac{1}{N} \sum_{i=1}^{N} (\mathbf{y}_i - \mathbf{\hat{y}}_i)^2. \end{align} One possible and highly-expressive class of functions is given by deep neural networks (DNN) \cite{Lecun2015,Goodfellow2016}. DNNs are parameterizable nonlinear compound functions, $f = f_{\mathbf{\theta}}$, where the network parameters consist of weights and biases, $\mathbf{\theta} = \left\{ \mathbf{W}, \mathbf{b} \right\}$. DNNs consist of multiple hidden layers of units between the input and the output layer. The values in each layer are called activations. Multilayer perceptrons (MLPs) are one particular kind of DNN in which adjacent layers are densely connected \cite{Goodfellow2016}. We compute the activations $\mathbf{a}^l$ in layer $l$ from the activations of the previous layer $\mathbf{a}^{l-1}$, \begin{align} \mathbf{a}^l = \sigma (\mathbf{W}^{l-1} \mathbf{a}^{l-1} + \mathbf{b}^{l-1}).
\end{align} Here, $\mathbf{W}^{l-1}$ is the weight matrix linking layers $l-1$ and $l$, $\mathbf{b}^{l-1}$ is the bias vector, and $\sigma(\cdot)$ is the element-wise nonlinearity. Typically, DNNs are trained by minimizing $\mathcal{L}$ via mini-batch gradient descent or more advanced optimization routines like AdaGrad \cite{Duchi2011} or Adam \cite{Kingma2015}. \subsection{Optimization through PDE Trajectories} Machine learning has the potential to discover and learn novel data-driven numerical algorithms for computational fluid dynamics. For supervised learning, we need input-output pairs. In fluid dynamics, exact solutions rarely exist for complex flows. We usually take highly-resolved numerical simulations as exact. In the context of differentiable physics, end-to-end optimization usually refers to supervised learning of ML models which receive gradients that are backpropagated through a differentiable physics simulator. Here, the ML model is placed inside a differentiable PDE solver. A trajectory obtained from a forward simulation of the PDE solver is compared to a ground truth trajectory, and the derivatives of the loss are propagated across the temporal sequence, i.e., the trajectory. We denote a trajectory of states as \begin{align} \mathbf{\tau} = \left\{ \mathbf{U}^1,...,\mathbf{U}^{N_T} \right\}. \end{align} The differentiable solver, e.g., JAX-FLUIDS, can be interpreted as a parameterizable generator $\mathcal{G}_{\mathbf{\theta}}$ of such a trajectory starting from the initial condition $\mathbf{U}_0$. \begin{align} \mathbf{\tau}^{PDE}_{\mathbf{\theta}} = \left\{ \mathbf{U}^1,...,\mathbf{U}^{N_T} \right\} = \mathcal{G}_{\mathbf{\theta}}(\mathbf{U}_0) \end{align} Our loss objective is the difference between the trajectory $\mathbf{\tau}^{PDE}_{\mathbf{\theta}}$ and a ground truth trajectory $\mathbf{\hat{\tau}} = \left\{\mathbf{\hat{U}}^1,...,\mathbf{\hat{U}}^{N_T}\right\}$.
For example, using the MSE in state space \begin{align} \mathcal{L}^{\tau} = \frac{1}{N_T} \sum_{i=1}^{N_T} MSE(\mathbf{U}^i, \mathbf{\hat{U}}^i). \end{align} The derivatives of the loss with respect to the tuneable parameters $\partial \mathcal{L}/\partial \mathbf{\theta}$ are backpropagated across the simulation trajectory and through the entire differentiable PDE solver. The ML model is then optimized by using $\partial \mathcal{L}/\partial \mathbf{\theta}$ in a gradient-based optimization routine. In particular, multiple steps through the simulator can be chained together. Thereby, the ML model observes the full dynamics of the underlying PDE and learns how its actions influence the entire simulation trajectory. Naturally, the trained model is equation-specific and physics-informed. This training procedure alleviates the problem of distribution mismatch between training and test data as the model sees its own outputs during the training phase. Additionally, the ML model could potentially account for approximation errors of other parts of the solver. JAX-FLUIDS allows us to take gradients through an entire CFD simulation trajectory by applying \mintinline{python}{jax.grad} to any scalar observable of the state trajectory. We want to stress that JAX-FLUIDS thereby differentiates for each time step through complex subfunctions such as the spatial reconstruction, Riemann solvers, or two-phase interactions. \subsection{Validation of Automatic Differentiation Gradients} \label{subsec:gradientvalid} Before we showcase the full potential of JAX-FLUIDS for learning data-driven numerical schemes by end-to-end optimization, we first validate the gradients obtained from automatic differentiation by comparing them with gradients obtained from finite differences. We consider a single shock wave with shock Mach number $M_S$ propagating into a fluid at rest. We fix the state ahead of the shock with $\rho_R = p_R = 1, u_R = 0$.
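For an ideal gas, the pre-shock state above together with $M_S$ fixes the post-shock state through the standard normal-shock (Rankine-Hugoniot) relations; a small plain-Python helper illustrating this (not the JAX-FLUIDS implementation):

```python
import math

def post_shock_state(M_s, rho_r=1.0, p_r=1.0, u_r=0.0, gamma=1.4):
    """Post-shock state behind a shock of Mach number M_s running into a
    fluid at rest, from the ideal-gas normal-shock relations."""
    c_r = math.sqrt(gamma * p_r / rho_r)  # pre-shock speed of sound
    p_l = p_r * (1.0 + 2.0 * gamma / (gamma + 1.0) * (M_s**2 - 1.0))
    rho_l = rho_r * (gamma + 1.0) * M_s**2 / ((gamma - 1.0) * M_s**2 + 2.0)
    u_l = u_r + 2.0 / (gamma + 1.0) * (M_s - 1.0 / M_s) * c_r
    return rho_l, u_l, p_l
```

For $M_S=2$ and $\gamma=1.4$, for example, this gives $p_L=4.5$ and $\rho_L=8/3$.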
The Rankine-Hugoniot relations \cite{Toro2009a} determine the state behind the shock wave as a function of the shock Mach number $M_S$. As the shock wave crosses the computational domain, the integral entropy increases. The described setup is depicted in Figure \ref{fig:gradients} on the left. The left and right states are separated by an initial shock discontinuity. We consider a single-phase and a two-phase setup. In both setups all fluids are modeled by the ideal gas law. In the former, the same fluid is on both sides of the initial shock wave and $\gamma_L = \gamma_R$. In the latter, two immiscible fluids are separated by the shock wave, i.e., $\gamma_L \neq \gamma_R$. For the second setup, we make use of the entire level-set algorithm as described earlier. As the shock wave propagates into the domain, the integral entropy in the domain increases by $\Delta S$, see the schematic in the middle of Figure \ref{fig:gradients}. The integral entropy at time $t$ is defined by \begin{align} S(t) = \int_\Omega \rho(x,t) s(x,t) dx, \end{align} and the increase in integral entropy is \begin{align} \Delta S(t) = S(t) - S(t_0) = \int_\Omega \left(\rho(x,t) s(x,t) - \rho(x,t=0) s(x,t=0) \right) dx. \end{align} In the simplified setting under investigation, the increase in integral entropy at any fixed point in time, say $t^n$, is solely determined by the shock Mach number $M_S$, i.e., $\Delta S^n = \Delta S(t=t^n) = \Delta S(M_S)$. We let the shock wave propagate for $n$ steps with fixed $\Delta t$, i.e., $t^n = n \Delta t$, and compute the gradient of the total entropy increase $\Delta S^n$ with respect to the shock Mach number $M_S$. \begin{align} g = \frac{\partial \Delta S^n}{\partial M_S} \end{align} We compute the gradient with automatic differentiation $g_{AD}$ and with second-order central finite differences according to \begin{align} g_{FD}^{\varepsilon} = \frac{\Delta S^n (M_S + \varepsilon) - \Delta S^n (M_S - \varepsilon)}{2 \varepsilon}.
\end{align} Here, we set $M_S = 2$. We use the \textit{HLLC} setup described in the previous section. We set $\Delta t = 1 \times 10^{-2}$ and $n = 5$. In the single-phase case $\gamma_L = \gamma_R = 1.4$, in the two-phase case $\gamma_L = 1.4, \gamma_R = 1.667$. On the right of Figure \ref{fig:gradients}, we visualize the error between the gradients $g_{AD}$ and $g_{FD}^{\varepsilon}$ for the single-phase and the two-phase setup. We choose $\varepsilon \in \left\{ 10^{-1}, 3 \times 10^{-2}, 10^{-2}, 3 \times 10^{-3}, 10^{-3}, 3 \times 10^{-4}, 10^{-4} \right\}$. We observe that the finite-difference approximations converge at second order to the respective automatic differentiation gradients. We conclude that automatic differentiation through the entire JAX-FLUIDS code works and gives correct gradients. In passing, we note that although the described setting is a simplified one-dimensional setup and we only differentiate with respect to a single parameter, the AD call has to backpropagate through multiple integration steps with several Runge-Kutta substeps, each accompanied by calls to spatial reconstruction and Riemann solver. Especially in the two-phase setup, gradients are obtained through the entire level-set computational routines including level-set advection, level-set reinitialization, and the extension procedure. \begin{figure}[t] \centering \input{figures/fig_twophase_schematic/schematic.tex} \caption{Left: Schematic of the computational setup. The constant initial states are separated by a single right-running shock discontinuity. Middle: Schematic of the total entropy increase.
Right: Error convergence of the gradient obtained by second-order central finite-difference approximation with respect to the gradient obtained by automatic differentiation, i.e., $\vert g_{AD} - g_{FD}^{\varepsilon} \vert $.} \label{fig:gradients} \end{figure} \subsection{End-to-end Optimization of a Riemann Solver} \label{subsec:rusanovnn} Automatic differentiation opens up the opportunity to optimize and learn numerical schemes from data by end-to-end optimization through a numerical simulator \cite{Bar-Sinai2019,Bezgin2021a,Kochkove2101784118}. In this section, we showcase how JAX-FLUIDS can learn a numerical flux function (i.e., an approximate Riemann solver) by minimizing a loss between the predicted trajectory and a ground truth trajectory. We optimize the popular Rusanov flux function (also known as the local Lax-Friedrichs flux function). The Rusanov flux at the cell face $x_{i+1/2}$ is \begin{align} \mathbf{F}_{i+1/2}^\text{Rusanov} = \frac{1}{2} (\mathbf{F}_L + \mathbf{F}_R) - \frac{1}{2} \alpha (\mathbf{U}_R - \mathbf{U}_L). \label{eq:Rusanov} \end{align} Here, $\mathbf{U}_{L/R}$ and $\mathbf{F}_{L/R}$ are the left and right sided cell face reconstructions of the conservative variables and the flux. $\alpha$ is the scalar numerical viscosity. For the classical Rusanov method, the numerical viscosity at each cell face is defined as $\alpha_\text{Rusanov} = \max\left\{|u_L - c_L |, |u_L + c_L|, |u_R - c_R|, |u_R + c_R|\right\}$. $u$ is the cell face normal velocity and $c$ is the local speed of sound. It is well known that although the Rusanov method yields a stable solution, the excess numerical diffusion smears out the solution. As a simple demonstration of the AD capabilities of our CFD solver, we introduce the Rusanov-NN flux, whose dissipation $\alpha^\text{NN}_\text{Rusanov} = NN \left( \vert \Delta u \vert, u_M, c_M, \vert \Delta s \vert \right)$ is to be optimized.
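For illustration, the classical Rusanov flux of Equation \eqref{eq:Rusanov} can be written compactly in JAX NumPy. The snippet below is a simplified sketch with hypothetical argument names, not the JAX-FLUIDS implementation:

```python
import jax.numpy as jnp

def rusanov_flux(u_L, u_R, f_L, f_R, vel_L, vel_R, c_L, c_R):
    # u_*: reconstructed conservative variables, f_*: physical fluxes,
    # vel_*: cell face normal velocities, c_*: local speeds of sound.
    alpha = jnp.maximum(
        jnp.maximum(jnp.abs(vel_L - c_L), jnp.abs(vel_L + c_L)),
        jnp.maximum(jnp.abs(vel_R - c_R), jnp.abs(vel_R + c_R)),
    )
    # F = 0.5 * (F_L + F_R) - 0.5 * alpha * (U_R - U_L)
    return 0.5 * (f_L + f_R) - 0.5 * alpha * (u_R - u_L)
```

Replacing $\alpha$ by the output of a neural network turns this into the Rusanov-NN flux introduced above.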
The dissipation is the output of a multi-layer perceptron which takes as inputs the jump in normal velocity $\Delta u = \vert u_R - u_L \vert$, the mean normal velocity $u_M = \frac{1}{2} (u_L + u_R)$, the mean speed of sound $c_M = \frac{1}{2} (c_L + c_R)$, and the entropy jump $\Delta s = \vert s_R - s_L \vert$. The network is composed of three hidden layers with 32 nodes each. We use ReLU activations for the hidden layers. An exponential activation function in the output layer guarantees $\alpha_\text{Rusanov}^\text{NN} \geq 0$. \begin{figure} \centering \input{figures/fig_rusanovnn/trajectories.tex} \caption{Trajectories of the absolute velocity. From top to bottom: Ground truth (Exact) on $128 \times 128$, Coarse-grained (CG) on $32 \times 32$, Rusanov-NN on $32 \times 32$, and Rusanov on $32 \times 32$. For each time step, values are normalized with the minimum and maximum value of the exact solution.} \label{fig:Trajectory} \end{figure} We set up a highly-resolved simulation of a two-dimensional implosion test case to generate the ground truth trajectory. The initial conditions are a diagonally placed jump in pressure and density, \begin{align} \left( \rho, u, v, p \right) = \begin{cases} \left(0.14, 0, 0, 0.125\right) & \text{if}\ x + y \leq 0.15,\\ \left(1, 0, 0, 1\right) & \text{if}\ x + y > 0.15, \end{cases} \label{eq:FullImplosion} \end{align} on a domain with extent $x \times y \in [0, 1] \times [0, 1]$. A shock, a contact discontinuity, and a rarefaction wave emanate from the initial discontinuity and travel along the diagonal. The shock propagates towards the lower left corner and is reflected by the walls, resulting in a double Mach reflection, while the rarefaction wave travels into the open domain. The high-resolution simulation is run on a mesh with $128 \times 128$ cells with a WENO3-JS cell face reconstruction, TVD-RK2 integration scheme, and the HLLC Riemann solver.
We use a fixed time step $\Delta t = 2.5 \times 10^{-4}$ and sample the trajectory every $\Delta t_\text{CG} = 10^{-3}$, which is the time step for the coarse-grained trajectories. A trajectory of 2501 time steps is generated, i.e., $t \in [0, 2500 \Delta t_\text{CG}]$. Exemplary time snapshots of the absolute velocity are visualized in the top row of Figure \ref{fig:Trajectory}. We obtain the ground truth data by coarse-graining the high-resolution trajectory onto $32 \times 32$ points, see the second row of Figure \ref{fig:Trajectory}. The dissipation network is trained in a supervised fashion by minimizing the loss between the coarse-grained (CG) trajectory and the simulation produced by the Rusanov-NN model. The loss function is defined as the mean-squared error between the predicted and coarse-grained primitive state vectors, $\mathbf{W}^\text{NN}$ and $\mathbf{W}^\text{CG}$, over a trajectory of length $N_T$, \begin{align} L = \frac{1}{N_T}\sum_{i = 1}^{N_T} MSE(\mathbf{W}^\text{NN}_i, \mathbf{W}^\text{CG}_i). \label{eq:Loss} \end{align} The training data set consists of the first $1000$ time steps of the coarse-grained reference solution. During training, the model is unrolled for $N_T = 15$ time steps. We use the Adam optimizer with a constant learning rate of $5 \times 10^{-4}$ and a batch size of $20$. The Rusanov-NN model is trained for $200$ epochs. Although the model is trained on trajectories of length $15$, it is evaluated on the entire trajectory of $2501$ time steps. Figure \ref{fig:Trajectory} compares the results of the Rusanov and the final Rusanov-NN flux functions. The Rusanov-NN flux is less dissipative than the classical Rusanov scheme and recovers small scale flow structures very well, e.g., see time step 100 in Figure \ref{fig:Trajectory}. The Rusanov-NN flux stays stable over the course of the simulation and consistently outperforms the Rusanov flux, see the relative errors in pressure and density in Figure \ref{fig:RelError}.
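The mechanics of this unrolled training can be sketched in a toy example. Here, a single scalar parameter of a hypothetical stand-in solver step is optimized with plain gradient descent (in place of the Adam optimizer and the Rusanov-NN model); the structure of the loss mirrors Equation \eqref{eq:Loss}:

```python
import jax
import jax.numpy as jnp

def step(theta, w):
    # Toy stand-in for one differentiable solver step with a tuneable
    # upwind-type parameter theta (hypothetical, not a JAX-FLUIDS routine).
    return w - theta * (w - jnp.roll(w, 1))

def unrolled_loss(theta, w0, targets):
    # Unroll the solver for N_T steps and average the MSE between the
    # predicted states and the reference trajectory (cf. the loss above).
    def body(w, target):
        w = step(theta, w)
        return w, jnp.mean((w - target) ** 2)
    _, mses = jax.lax.scan(body, w0, targets)
    return jnp.mean(mses)

# Reference trajectory generated with a "true" parameter value.
w0 = jnp.sin(2.0 * jnp.pi * jnp.linspace(0.0, 1.0, 32, endpoint=False))
theta_true, n_t = 0.8, 15
_, targets = jax.lax.scan(
    lambda w, _: (step(theta_true, w),) * 2, w0, None, length=n_t)

theta, lr = 0.2, 0.1
grad_fn = jax.jit(jax.grad(unrolled_loss))
for _ in range(200):
    theta = theta - lr * grad_fn(theta, w0, targets)  # gradient descent
```

Gradient descent recovers a parameter close to the value used to generate the reference data, because the gradient is backpropagated through all unrolled steps.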
The ML model even performs very well outside the training set (time steps larger than $1000$). \begin{figure}[t!] \centering \input{figures/fig_rusanovnn/rel_error.tex} \caption{Relative $L_1$ error for density (blue) and pressure (orange). The gray line indicates the training horizon.} \label{fig:RelError} \end{figure} \section{Software Implementation Details} \label{sec:Implementation} In the past, CFD solvers have been written predominantly in low-level programming languages like Fortran and C/C++. These languages offer computational performance and CPU parallelization capabilities. However, the integration of ML models, which are typically coded in Python, is not straightforward, and automatic differentiation capabilities are nonexistent. With the present work, we want to provide a CFD framework that allows for seamless integration and end-to-end optimization of ML models, without major performance losses. The Python package JAX \cite{jax2018github} satisfies these requirements. Therefore, we use JAX as the fundamental building block of JAX-FLUIDS. In this section, we give implementation details and describe the algorithmic structure of JAX-FLUIDS. \subsection{Array Programming in JAX} JAX is a Python library for high-performance numerical computations, which uses XLA to compile and run code on accelerators like CPUs, GPUs, and TPUs. Designed as a machine learning library, JAX supports automatic differentiation \cite{Baydin2018}. In particular, JAX comes with a fully-differentiable version of the popular NumPy package \cite{Harris2020} called JAX NumPy. Both Python and NumPy are widely popular and easy to use. NumPy adds high-level functionality to handle large arrays and matrices. The backbone of (JAX) NumPy is the multidimensional array, the so-called \mintinline{python}{jax.numpy.DeviceArray}. We use arrays to store all of our field data.
In particular, the vectors of conservative and primitive variables, $\mathbf{U}$ and $\mathbf{W}$, are stored in arrays of shape $(5, N_x + 2 N_h, N_y + 2 N_h, N_z + 2 N_h)$. $(N_x, N_y, N_z)$ denote the resolution in the three spatial directions, and $N_h$ specifies the number of halo cells. Our implementation naturally reduces to one- and two-dimensional settings by using only a single cell in the excess dimensions. In the \textit{array programming paradigm} (see \cite{Walt2011} for details on numerical operations on arrays in NumPy), every operation is performed on the entire array. That is, instead of writing loops over the array, as is common in Fortran or C/C++, we use appropriate indexing/slicing operations. As a result, many routines known from classical CFD solvers have to be rewritten according to the \textit{array programming paradigm}. As an example, we include the source code of our implementation of the second-order central cell face reconstruction scheme in Figure \ref{fig:reconstruction}. The \mintinline{python}{class CentralSecondOrderReconstruction} inherits from \mintinline{python}{class SpatialReconstruction}. This parent class has an abstract member function \mintinline[escapeinside=||,mathescape=true]{python}{reconstruct_xi}, which must be implemented in every child class. \mintinline[escapeinside=||,mathescape=true]{python}{reconstruct_xi} receives the entire \mintinline{python}{buffer} and the reconstruction direction \mintinline{python}{axis} as arguments. The data buffer has to be indexed/sliced differently depending on the spatial direction of the reconstruction and the dimensionality of the problem. Note that the buffer array has shape $(5, N_x + 2 N_h, N_y + 2 N_h, N_z + 2 N_h)$ if the problem is three-dimensional, $(5, N_x + 2 N_h, N_y + 2 N_h, 1)$ if the problem is two-dimensional, and $(5, N_x + 2 N_h, 1, 1)$ if the problem is one-dimensional. The slice indices have to be adapted accordingly.
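The shape-dependent slicing can be illustrated on a small one-dimensional buffer. The following sketch (with simplified variable names) mimics how the second-order reconstruction indexes the buffer:

```python
import numpy as np
import jax.numpy as jnp

nh = 2                                    # number of halo cells
# One-dimensional problem: shape (5, N_x + 2*N_h, 1, 1) with N_x = 4.
buffer = jnp.arange(5.0 * 8.0).reshape(5, 8, 1, 1)

# Inactive y- and z-dimensions are sliced with the full axis (None:None).
nhy, nhz = np.s_[None:None], np.s_[None:None]
s0 = np.s_[..., nh - 1:-nh, nhy, nhz]     # selects U_i
s1 = np.s_[..., nh:-nh + 1, nhy, nhz]     # selects U_{i+1}

# Second-order central reconstruction at the cell faces, purely via slicing.
cell_faces = 0.5 * (buffer[s0] + buffer[s1])
```

For $N_x = 4$ cells the slices produce the $N_x + 1 = 5$ cell faces in one vectorized operation, without any explicit loop.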
To prevent boilerplate code in the reconstruction routine, we abstract the slicing operations. The member variable \mintinline{python}{self.slices} is a list that holds the correct slice objects for each spatial direction. Consider reconstructing in $x$-direction (\mintinline{python}{axis=0}) using the present second-order cell face reconstruction scheme, \begin{equation} \mathbf{U}_{i+\frac{1}{2},j,k}= \frac{1}{2}( \mathbf{U}_{i,j,k} + \mathbf{U}_{i+1,j,k} ). \end{equation} Here, we require two slice objects: \mintinline{python}{jnp.s_[...,self.nh-1:-self.nh,self.nhy,self.nhz]} for $\mathbf{U}_{i,j,k}$ and \mintinline{python}{jnp.s_[...,self.nh:-self.nh+1,self.nhy,self.nhz]} for $\mathbf{U}_{i+1,j,k}$. The variable \mintinline{python}{self.nh} denotes the number of halo cells. The slices in $x$-direction, i.e., \linebreak \mintinline{python}{self.nh-1:-self.nh} and \mintinline{python}{self.nh:-self.nh+1}, are determined by the reconstruction scheme itself. \mintinline{python}{self.nhy} and \mintinline{python}{self.nhz} denote the slice objects for the dimensions in which we do not reconstruct. These are either \mintinline{python}{self.nh:-self.nh} if the dimension is active or \mintinline{python}{None:None} if the dimension is inactive. \mintinline{python}{self.nhx}, \mintinline{python}{self.nhy}, and \mintinline{python}{self.nhz} are defined in the parent class. \begin{figure} \centering \inputminted[numbers=left, frame=lines, fontsize=\scriptsize, framesep=2mm]{python}{./figures/code_snippets/reconstruction_stencil.py} \caption{Code snippet for the second-order central cell face reconstruction stencil.} \label{fig:reconstruction} \end{figure} \subsection{Object-oriented Programming in the Functional Programming World of JAX} As already alluded to in the previous subsection, we use object-oriented programming (OOP) throughout the entire JAX-FLUIDS solver. 
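The parent/child pattern from the previous subsection can be sketched as follows. This is a minimal, simplified example; the actual classes carry additional state and precomputed slice objects:

```python
from abc import ABC, abstractmethod

import jax.numpy as jnp

class SpatialReconstruction(ABC):
    """Parent class for cell face reconstruction stencils."""

    def __init__(self, nh: int):
        self.nh = nh  # number of halo cells

    @abstractmethod
    def reconstruct_xi(self, buffer: jnp.ndarray, axis: int) -> jnp.ndarray:
        """Reconstructs cell face values from buffer along direction axis."""

class CentralSecondOrderReconstruction(SpatialReconstruction):
    # Child class implementing the abstract member function.
    def reconstruct_xi(self, buffer, axis):
        s0 = tuple(slice(self.nh - 1, -self.nh) if i == axis else slice(None)
                   for i in range(buffer.ndim))
        s1 = tuple(slice(self.nh, -self.nh + 1) if i == axis else slice(None)
                   for i in range(buffer.ndim))
        return 0.5 * (buffer[s0] + buffer[s1])

stencil = CentralSecondOrderReconstruction(nh=2)
```

The abstract base class cannot be instantiated itself, which enforces that every stencil provides \mintinline{python}{reconstruct_xi}.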
Although JAX leans inherently more towards a functional programming style, we have opted to implement JAX-FLUIDS in a modular object-oriented fashion, as this has several benefits. Firstly, comprehensive CFD solvers typically offer a plethora of interchangeable numerical algorithms. Naturally, OOP allows the implementation of many derived classes, e.g., different spatial reconstructors (see \mintinline{python}{class SpatialReconstruction} in the previous subsection), and saves boilerplate code. Secondly, the modular OOP approach allows users to customize a solver specific to their problem. Via a numerical setup file, the user can specify the combination of numerical methods prior to every simulation. Thirdly, we want to stress that the modularity of our solver allows for straightforward integration of custom modules and implementations. For example, avid ML-CFD researchers can easily implement their own submodule into the JAX-FLUIDS framework, either simply for forward simulations or for learning new routines from data. \subsection{Just-in-time (jit) Compilation and Pure Functions} JAX offers the possibility to just-in-time (jit) compile functions, which significantly increases performance. However, jit-compilation imposes two constraints: \begin{enumerate} \item \textbf{The function must be a pure function}. A function is pure if its return values are identical for identical input arguments and the function has no side effects. \item \textbf{Control flow statements in the function must not depend on input argument values}. During jit-compilation, an abstract version of the function is cached that works for arbitrary argument values. That is, the function is not compiled for concrete argument values but rather for the set of all possible argument values, for which only the array shape and type are fixed. Therefore, control flow statements that depend on the input argument values cannot be jit-compiled, unless the argument is a \textit{static argument}.
In this case, the function is recompiled for all values that the \textit{static argument} takes during runtime. \end{enumerate} The aforementioned requirements to jit-compile a function have the following impact on our implementation: \begin{itemize} \item The \mintinline{python}{self} argument in jit-compiled member functions must be a \textit{static argument}. This implies that class member variables that are used in the function are static and hence must not be modified. In other words, class member variables behave similarly to the C++ specifier \mintinline{C++}{constexpr}. \item Control flow statements in jit-compiled functions can only be evaluated on \textit{static arguments}. In our code, there are three distinct types of control flow statements where this has an impact: \begin{enumerate} \item \textbf{Conditional exit of a for/while loop}. The top-level compute loop over the physical simulation time (compare Algorithm \ref{alg:main_loop}) is not jit-compiled, since it consists of a while loop that is exited when the final simulation time is reached, i.e., $t \geq t_{\text{end}}$. However, $t$ is not a static variable. We therefore only jit-compile the functions \mintinline{python}{compute_timestep()}, \\ \mintinline{python}{do_integration_step()}, and \mintinline{python}{compute_forcings()}. These are the functions that constitute the heavy compute within the main loop. \item \textbf{Conditional slicing of an array}. In multiple parts of the JAX-FLUIDS code, arrays are sliced depending on the present spatial direction. The present spatial direction is indicated by the input argument \mintinline{python}{axis} (compare the cell face reconstruction in Figure \ref{fig:reconstruction}). The code that is actually executed depends on the value of \mintinline{python}{axis}. Therefore, \mintinline{python}{axis} must be a static argument.
Functions that receive \mintinline{python}{axis} as an input argument are compiled for all values that \mintinline{python}{axis} might take during runtime. Each compiled version of those functions is cached. \item \textbf{Conditional execution of code sections}. We explained above that class member variables are always static. In practice, we often use them to conditionally compile code sections, much like \mintinline{C++}{if constexpr} in C++. An example is the cell face reconstruction of the variables, which can be done on primitive or conservative variables in either physical or characteristic space. \end{enumerate} \item Element-wise conditional array operations are implemented using masks, as jit-compilation requires the array shapes to be fixed at compile time. Figure \ref{fig:masks} illustrates the use of masks for the evaluation of the level-set advection equation. We make frequent use of \mintinline{python}{jnp.where} to implement element-wise conditional array operations. \end{itemize} \begin{figure} \centering \inputminted[numbers=left, frame=lines, fontsize=\scriptsize, framesep=2mm]{python}{./figures/code_snippets/conditional_operations.py} \caption{Code snippet showing the right-hand-side computation of the level-set advection equation. } \label{fig:masks} \end{figure} \subsection{Main Objects and Compute Loops} \label{subsec:ComputeLoops} We put emphasis on making JAX-FLUIDS an easy-to-use Python package for ML-CFD research. Figure \ref{fig:run_solver} shows the lines of code required to run a simulation. The user must provide a numerical setup and a case setup file in the \textit{json} format. The numerical setup specifies the combination of numerical methods that will be used for the simulation. The case setup details the physical properties of the simulation including the spatial domain and its resolution, initial and boundary conditions, and material properties.
As an example, we include the numerical setup and case setup for the Sod shock tube test case in Figures \ref{fig:numerical_setup} and \ref{fig:case_setup} in the appendix, respectively. Using the \textit{json} files, we create an \mintinline{python}{class InputReader} instance, which we denote as \mintinline{python}{input_reader}. The \mintinline{python}{input_reader} performs necessary data type transformations and a thorough sanity check ensuring that the provided numerical and case setups are consistent. Then, an \mintinline{python}{class Initializer} and a \mintinline{python}{class SimulationManager} object are created using the \mintinline{python}{input_reader} object. The \mintinline{python}{class Initializer} implements functionality to generate the initial buffers either from the initial condition specified in the case setup file or from a restart file. We receive these buffers by calling the \mintinline{python}{initialization} method of the \mintinline{python}{initializer} object. The \mintinline{python}{class SimulationManager} is the main class in JAX-FLUIDS, implementing the algorithm to advance the initial buffers in time using the specified numerical setup. The initial buffers must be passed to the \mintinline{python}{simulate} method of the \mintinline{python}{simulation_manager} object. The simulation starts upon execution of this function. \begin{figure} \inputminted[numbers=left, frame=lines, fontsize=\scriptsize, framesep=2mm]{python}{./figures/code_snippets/run_nlfvs.py} \caption{Code snippet illustrating how to run a simulation with JAX-FLUIDS.} \label{fig:run_solver} \end{figure} The code contains three major loops: \begin{enumerate} \item \textbf{Loop over physical simulation time}, see Algorithm \ref{alg:main_loop}. \item \textbf{Loop over Runge-Kutta stages}, see Algorithm \ref{alg:do_integration_step}. \item \textbf{Loop over active spatial dimensions}, see Algorithm \ref{alg:compute_rhs}.
\end{enumerate} \begin{algorithm} \caption{Loop over physical simulation time in the \mintinline[escapeinside=||,mathescape=true]{python}{simulate} function. Red text color indicates functions that are only executed for simulations with active forcings.} \label{alg:main_loop} \While{time < end time}{ \mintinline[escapeinside=||,mathescape=true]{python}{compute_timestep()}\\ \textcolor{red}{\mintinline[escapeinside=||,mathescape=true]{python}{forcings_handler.compute_forcings()}}\\ \mintinline[escapeinside=||,mathescape=true]{python}{do_integration_step()}\\ \mintinline[escapeinside=||,mathescape=true]{python}{output_writer.write_output()} } \end{algorithm} \begin{algorithm} \caption{Loop over Runge-Kutta stages in the \mintinline[escapeinside=||,mathescape=true]{python}{do_integration_step} function. Blue text color indicates functions that are only executed for two-phase simulations.} \label{alg:do_integration_step} \For{RK stages}{ \mintinline[escapeinside=||,mathescape=true]{python}{space_solver.compute_rhs()}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.transform_volume_averages_to_conservatives()}\\} \mintinline[escapeinside=||,mathescape=true]{python}{time_integrator.prepare_buffers_for_integration()}\\ \mintinline[escapeinside=||,mathescape=true]{python}{time_integrator.integrate_conservatives()}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{time_integrator.integrate_levelset()}}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.reinitialize_levelset()}}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{boundary_condition.fill_boundaries_levelset()}}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.compute_geometrical_quantities()}}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.mix_conservatives()}}\\ 
\textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.transform_conservatives_to_volume_averages()}}\\ \mintinline[escapeinside=||,mathescape=true]{python}{get_primitives_from_conservatives()}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.extend_primitives_into_ghost_cells()}}\\ \mintinline[escapeinside=||,mathescape=true]{python}{boundary_condition.fill_material_boundaries()}\\ } \end{algorithm} \begin{algorithm} \caption{Loop over spatial dimensions in the \mintinline[escapeinside=||,mathescape=true]{python}{compute_rhs} function. Blue text color indicates functions that are only executed for two-phase simulations.} \label{alg:compute_rhs} \For{active axis}{ \mintinline[escapeinside=||,mathescape=true]{python}{flux_computer.compute_inviscid_flux_xi()}\\ \mintinline[escapeinside=||,mathescape=true]{python}{flux_computer.compute_viscous_flux_xi()}\\ \mintinline[escapeinside=||,mathescape=true]{python}{flux_computer.compute_heat_flux_xi()}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.weight_cell_face_flux_xi()}}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.compute_interface_flux_xi()}}\\ \textcolor{blue}{\mintinline[escapeinside=||,mathescape=true]{python}{levelset_handler.compute_levelset_advection_rhs()}}\\ } \end{algorithm} \subsection{Gradient Computation in JAX-FLUIDS} \label{subsec:gradients} JAX-FLUIDS offers the \mintinline{python}{simulate} method to perform a standard forward CFD simulation. In this regard, JAX-FLUIDS serves as a physics simulator for data generation, development of numerical methods, and exploration of fluid dynamics. The \mintinline{python}{simulate} method does not have a return value, therefore, it is not meant to be used for end-to-end optimization. To use the automatic differentiation capabilities of JAX-FLUIDS, we offer the \mintinline{python}{feed_forward} method. 
This method takes in a batch of initial buffers and propagates them in time for a fixed number of time steps. The user provides the number of integration steps and a fixed time step. The output of the \mintinline{python}{feed_forward} method is the solution trajectory. In particular, the shape of the initial buffer is $(N_b, 5, N_x, N_y, N_z)$ where $N_b$ is the batch size. The shape of the solution trajectory is $(N_b, N_T + 1, 5, N_x, N_y, N_z)$ where $N_T$ is the number of integration steps. Internally, the \mintinline{python}{feed_forward} method uses the \mintinline{python}{jax.vmap} routine to vectorize over the batch dimension. \mintinline{python}{feed_forward} is jit-compilable and can be differentiated with \mintinline{python}{jax.grad} or \mintinline{python}{jax.value_and_grad}. The \mintinline{python}{feed_forward} method of JAX-FLUIDS can therefore be used for end-to-end optimization of ML models. \subsection{Integration of ML models into JAX-FLUIDS} \label{subsec:MLJAXFLUIDS} JAX-FLUIDS works with Haiku \cite{haiku2020github} and Optax \cite{optax2020github}. The Haiku package is a neural network library for JAX. In Haiku, neural networks are of type \mintinline{python}{haiku.Module}. To use them in combination with JAX, the feedforward method of the network has to be embedded in a function that is transformed into a forward wrapper object of type \mintinline{python}{haiku.Transformed}. This forward wrapper object provides two pure methods, \mintinline{python}{init} and \mintinline{python}{apply}. The \mintinline{python}{init} method initializes the network parameters, and the \mintinline{python}{apply} method executes the feedforward of the network. Network parameters have to be explicitly passed to the \mintinline{python}{apply} method. We refer to the Haiku documentation for more details. Optax provides optimization routines, e.g., the popular Adam optimizer \cite{Kingma2015}. 
In JAX-FLUIDS, we provide functionality to include preexisting ML models and optimize new ones. Neural networks can be passed to the \mintinline{python}{simulate} and \mintinline{python}{feed_forward} methods. Note that only the \mintinline{python}{feed_forward} method can be differentiated (see Subsection \ref{subsec:gradients}), i.e., it must be used for the optimization of deep learning models. A typical use case in ML-CFD research is substituting a conventional numerical subroutine with a data-driven alternative. We provide a number of interfaces inside the JAX-FLUIDS solver to which network modules can be passed from \mintinline{python}{feed_forward}, e.g., to the cell face reconstruction, the Riemann solver, or the forcing module. On the top level, the user gives a dictionary of transformed network modules and a dictionary with corresponding network parameters to the \mintinline{python}{feed_forward} method. The keys of these dictionaries specify the JAX-FLUIDS subroutine to which the corresponding values are passed. \section{Conclusion} \label{sec:Conclusion} We have presented JAX-FLUIDS, a comprehensive state-of-the-art fully-differentiable Python package for compressible three-dimensional computational fluid dynamics. Machine learning is becoming increasingly important in the physical and engineering sciences. Fluid dynamics in particular is a field in which ML techniques and data-driven methods show very promising results and have high potential. Despite the recent surge of ML-CFD research, a comprehensive state-of-the-art differentiable CFD solver has not been published. JAX-FLUIDS provides researchers at the intersection of fluid dynamics and deep learning the opportunity to explore new data-driven numerical models for fluid dynamics. JAX-FLUIDS offers powerful high-order numerical methods for a wide variety of fluid dynamics problems, e.g., turbulent flows, flows with arbitrary solid boundaries, compressible flows, and two-phase flows.
The modular architecture of JAX-FLUIDS makes the integration of custom submodules straightforward. Although JAX-FLUIDS covers a wide range of flow physics, some intriguing and complex phenomena like combustion, fluid-structure interaction, or cavitation cannot yet be modeled with JAX-FLUIDS. We plan to implement appropriate numerical methods in the future. Currently, by far the largest limitation of JAX-FLUIDS is the available memory of the GPU. Although JAX-FLUIDS scales out of the box to problem sizes with roughly 400 million degrees of freedom (DOFs) on a single modern GPU, many problems in fluid dynamics require more DOFs. Generally, there are two ways of tackling this problem: adaptive multiresolution \cite{Harten1994,Harten1995} and parallelization to multiple GPUs, e.g., \cite{Romero2020,Hafner2021}. Adaptive multiresolution increases the mesh resolution in areas of interest (e.g., where small scale structures are present) while using a much coarser resolution in other parts of the flow field. Compared to a uniform mesh, adaptive multiresolution increases the efficiency of the simulation in terms of computational resources. To the knowledge of the authors, multiresolution strategies seem to be problematic with a jit-compiled code framework such as JAX, as the compute graph has to be static. A second approach for increasing the computable problem size is parallelization. The latest version of JAX \cite{jax2018github} as well as other works \cite{Hafner2021b} propose different parallelization strategies for JAX algorithms and provide promising avenues for future work. \section{Validation of JAX-FLUIDS as a Classical CFD Simulator} \label{sec:ForwardPass} We show the capabilities of the JAX-FLUIDS solver as a classical fluid dynamics simulator. We validate our implementation on established one- and two-dimensional test cases from gas dynamics and several canonical turbulent flows.
In Subsection \ref{subsec:SinglePhase} we first validate the single-phase implementation in JAX-FLUIDS. In Subsection \ref{subsec:Twophase} we then validate the two-phase level-set implementation for fluid-solid and fluid-fluid interactions. We define two numerical setups which we will use predominantly throughout this section. Firstly, we use the \textit{High-Order Godunov} formulation. This setup consists of a WENO5-JS reconstruction of the primitive variables with an approximate HLLC Riemann solver. We will refer to this setup as \textit{HLLC}. Secondly, we employ the \textit{Flux-Splitting} formulation. In particular, we choose the Roe approximate Riemann solver with a WENO5-JS flux reconstruction in characteristic space. This setup will be denoted as \textit{ROE}. The dissipative fluxes are discretized with a fourth-order central finite difference stencil as described in Subsection \ref{subsec:Dissipative}. If not specified otherwise, we will use the stiffened gas equation of state with $\gamma = 1.4$ and $B = 0$, i.e., the ideal gas law. We will use the classical TVD-RK3 scheme with $CFL_\text{conservatives} = 0.9$ for time integration. For the two-phase test cases, we additionally apply the following methods. The level-set advection equation is discretized with a HOUC5 stencil. We solve the extension equation using a first-order upwind spatial derivative stencil combined with an Euler integration scheme. Here, we apply a fixed number of $15$ steps with $CFL_\text{extension}=0.7$. The reinitialization equation is solved with a WENO3-HJ stencil combined with a TVD-RK2 scheme. The level-set field is reinitialized each physical time step by integrating the reinitialization equation for one step with $CFL_\text{reinitialization}=0.7$. We refer to Section \ref{sec:numericalmodel} and therein to Table \ref{tab:NumericalMethods} for details on numerical models.
\subsection{Single Phase Simulations} \label{subsec:SinglePhase} \subsubsection{Convergence Study} \label{subsubsec:Convergence} We analyze the convergence behavior of our solver. We simulate the advection of a density profile in a one-dimensional periodic domain of extent $x \in [0, 1]$ with constant velocity $u=1$ and pressure $p=1$. We use a sinusoidal initial condition for the density \begin{equation} \rho(x, t=0) = 1.5 + \sin(2\pi x). \label{eq:SinusDensity} \end{equation} Note that we initialize the cell-averaged values of Equation \eqref{eq:SinusDensity}. That is, for cell $i$ with cell center $x_i$ we use $\bar{\rho}_i = 1.5 - \frac{1}{2 \pi \Delta x} \left( \cos(2 \pi x_{i+1/2}) - \cos(2 \pi x_{i-1/2}) \right)$. We conduct simulations with WENO1-JS, WENO3-JS, and WENO5-JS spatial discretizations and evaluate the rate of convergence upon mesh refinement. We use TVD-RK2 for WENO1- and WENO3-JS. For WENO5-JS we use TVD-RK3. We use a fixed time step $\Delta t = 1 \times 10^{-4}$ which is chosen small enough to exclude any influence of the time integration scheme. The simulation is propagated until $t_{\text{end}} = 1.0$. Figure \ref{fig:convergence} shows the convergence rates in the $l_1$, $l_2$, and $l_\infty$ norms. We increase the resolution from $10$ to $1000$ points. The expected convergence rates of $\mathcal{O}(\Delta x^1)$, $\mathcal{O}(\Delta x^2)$, and $\mathcal{O}(\Delta x^5)$ are reached. Note that it is well known that the convergence order of WENO3-JS drops to second order in the presence of extreme points. \begin{figure} \centering \input{figures/fig_convergence/convergence.tex} \caption{Error convergence $\vert \rho - \hat{\rho} \vert_p $ for the linear advection of Equation \eqref{eq:SinusDensity} with WENO1, WENO3, and WENO5 spatial discretization. $\hat{\rho}$ is the analytical solution at $t_{\text{end}} = 1.0$.
From left to right: $l_1$, $l_2$, and $l_{\infty}$ norms.} \label{fig:convergence} \end{figure} \subsubsection{Shock Tube Tests} \label{subsubsec:Shocktube} The shock tube tests of Sod \cite{Sod1978a} and Lax \cite{Lax1954} are standard one-dimensional test cases for validating fluid solvers for compressible flows. Specifically, we use these test cases to validate the implementation of the convective fluxes and the shock-capturing schemes. In both shock tube test cases three waves emanate from the initial discontinuities: a left running rarefaction, a right running contact discontinuity, and a right running shock. A detailed description of the tests and their setups is provided in the cited references. We discretize the domain $x \in [0, 1]$ with $N = 100$ points. The analytical reference solution is taken from an exact Riemann solver (e.g., \cite{Toro2009a}). We run both shock tube tests with the \textit{HLLC} and \textit{ROE} setups. Figures \ref{fig:shocktube_1} and \ref{fig:shocktube_2} show the density and velocity distributions for the Sod and Lax shock tube tests at $t=0.2$ and $t=0.14$, respectively. The \textit{HLLC} and \textit{ROE} solutions agree very well with the analytical reference. The \textit{HLLC} solutions show slightly oscillatory behavior, which is due to the cell face reconstruction of the primitive variables, see \cite{Qiu2002}. \begin{figure} \centering \input{figures/fig_shocktube_1/shocktube_1.tex} \caption{Sod shock tube: density $\rho$ and velocity $u$ at $t=0.2$.} \label{fig:shocktube_1} \end{figure} \begin{figure} \centering \input{figures/fig_shocktube_2/shocktube_2.tex} \caption{Lax shock tube: density $\rho$ and velocity $u$ at $t=0.14$.} \label{fig:shocktube_2} \end{figure} \subsubsection{Lid-driven Cavity} \label{subsubsec:Cavity} The lid-driven cavity test case describes the flow in a square cavity that is driven by a moving wall.
Viscous forces lead to the generation of one primary vortex and, depending on the Reynolds number, one or more secondary vortices. We use this test case to validate the implementation of the viscous fluxes. The computational domain $x\times y\in[0,1]\times[0,1]$ is discretized with a grid consisting of $500\times500$ cells. All boundary conditions are no-slip walls. The north wall is moving with a constant velocity $u_W$, resulting in a Reynolds number $Re=\frac{u_W L}{\nu} = 5000$. The test case is simulated until a steady state is reached with the \textit{HLLC} setup. Figure \ref{fig:driven_cavity} depicts the distribution of $\frac{u}{u_W}$ over $y$ and $\frac{v}{u_W}$ over $x$ across the domain center. The present result agrees very well with the reference \cite{Ghia1982}. \begin{figure} \centering \input{figures/fig_drivencavity/drivencavity_velocity_5000.tex} \caption{Lid driven cavity at $Re = \sfrac{u_W L}{\nu} = 5000$. (Left) Instantaneous streamlines with colors ranging from the largest (red) to the smallest (blue) value of the absolute velocity. (Middle) Normalized $u$ velocity over $y$ through the domain center. (Right) Normalized $v$ velocity over $x$ through the domain center. Reference is taken from \cite{Ghia1982}.} \label{fig:driven_cavity} \end{figure} \subsubsection{Compressible Decaying Isotropic Turbulence} \label{subsubsec:HIT} Many flows in nature and engineering applications are turbulent. The direct numerical simulation of turbulent flows requires the resolution of the smallest scales present (the Kolmogorov scales), which is prohibitively expensive \cite{Pope2000}. In JAX-FLUIDS we have implemented the implicit large eddy simulation (ILES) model \textit{ALDM} \cite{Hickel2014b}, which enables us to simulate complex compressible turbulent flows up to very high Reynolds numbers. To validate the implementation of \textit{ALDM} and the viscous forces, we simulate compressible decaying isotropic turbulence at various turbulent Mach numbers.
Specifically, we investigate the performance of the \textit{ALDM} ILES implementation on the basis of cases 1-3 from Spyropoulos et al. \cite{Spyropoulos1996}. The turbulent Mach numbers $M_t = \sqrt{\langle u^2 + v^2 + w^2 \rangle} / \langle c \rangle$ for the three cases are $0.2$, $0.4$, and $0.6$, respectively. The turbulent Reynolds number is $Re_T = \frac{\rho u'^4}{\epsilon \mu} = 2742$, where $u'$ is the root-mean-square (rms) velocity and $\epsilon$ is the dissipation rate of turbulent kinetic energy \cite{Pope2000}. The spatial domain has extent $x \times y \times z \in \left[0, 2\pi\right] \times \left[0, 2\pi\right] \times \left[0, 2\pi\right]$. We use the DNS data from \cite{Spyropoulos1996} and an \textit{HLLC} simulation with a resolution of $128^3$ cells as reference. The LES simulations are performed on a coarse grid with $32^3$ cells, using both \textit{ALDM} and \textit{HLLC}. The initial velocity field is divergence free and has the energy spectrum \begin{align} E(k) = A k^4 \exp(-2 k^2 / k_0^2), \end{align} where $k$ is the wave number, $k_0$ is the wave number at which the spectrum is maximal, and $A$ is a constant chosen to match a specified initial kinetic energy. The initial density and temperature fields are uniform. Figure \ref{fig:densityrmsdecay} shows the temporal evolution of the rms density fluctuations $\rho_{rms} = \sqrt{\langle \rho' \rho' \rangle}$. We normalize the time axis with the initial eddy turnover time $\tau = \lambda_f / u' \approx 0.85$. $\lambda_f$ is the lateral Taylor microscale \cite{Pope2000}. The \textit{HLLC} simulation at $128^3$ recovers the DNS reference very well. On the coarse mesh, the advantage of the \textit{ALDM} ILES becomes apparent in comparison with \textit{HLLC} simulations at the same resolution. \textit{ALDM} gives good results consistent with the DNS data, indicating that \textit{ALDM} recovers the correct sub-grid scale terms for compressible isotropic turbulence.
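The spectrum $E(k) = A k^4 \exp(-2 k^2 / k_0^2)$ attains its maximum at $k = k_0$, which a short numerical check confirms (the values for $A$ and $k_0$ below are illustrative, not those used in the simulations):

```python
import numpy as np

def energy_spectrum(k, k0, A=1.0):
    # E(k) = A k^4 exp(-2 k^2 / k0^2), the initial spectrum of the velocity field
    return A * k**4 * np.exp(-2.0 * k**2 / k0**2)

k = np.linspace(0.0, 32.0, 3201)   # fine sampling of wave numbers
E = energy_spectrum(k, k0=4.0)
k_peak = k[np.argmax(E)]           # dE/dk = 0 yields k = k0 analytically
```

Setting $dE/dk = 0$ gives $4k^3 = 4k^5/k_0^2$, i.e., the maximum sits exactly at $k = k_0$.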
\begin{figure} \centering \input{figures/fig_decaying_turbulence/density_rms_decay.tex} \caption{Temporal evolution of the rms density fluctuations for decaying isotropic turbulence. The time axis is normalized with the initial eddy turnover time. The DNS results are cases 1-3 from Spyropoulos et al. \cite{Spyropoulos1996}.} \label{fig:densityrmsdecay} \end{figure} \subsection{Two-phase Simulations} \label{subsec:Twophase} \subsubsection{Two-phase Sod Shock Tube} \label{subsubsec:TwophaseSod} We consider a two-phase variant of the Sod shock tube problem \cite{Sod1978a}. This test case validates the inviscid fluid-fluid interface interactions. In particular, we investigate an air-helium shock tube problem in which the materials left and right of the initial discontinuity are air ($\gamma_{\text{air}} = 1.4$) and helium ($\gamma_{\text{helium}} = 1.667$), respectively. We use the previously described \textit{HLLC} setup. The domain $x \in [0, 1]$ is resolved with 200 cells. Figure \ref{fig:twophaseshocktube} shows the results at $t = 0.15$. The numerical approximations are in good agreement with the analytical solution. The interface position as well as the shock speed and strength are captured correctly. We observe slight density oscillations around the interface. This is in agreement with previous literature \cite{Hu2006}, as no isobaric fix is employed. \begin{figure} \centering \input{figures/fig_twophase_shocktube/twophase_shocktube.tex} \caption{Air-helium shock tube problem. Density $\rho$ and velocity $u$ at $t = 0.15$.} \label{fig:twophaseshocktube} \end{figure} \subsubsection{Bow Shock} \label{subsubsec:Bowshock} Bow shocks occur in supersonic flows around blunt bodies \cite{PEERY1988}. Here, we simulate the flow around a stationary cylinder at high Mach numbers $Ma = \sfrac{\sqrt{\mathbf{u} \cdot \mathbf{u}}}{\sqrt{\gamma\frac{p}{\rho}}}=\{3,20\}$. This test case validates the implementation of the inviscid fluid-solid interface fluxes.
The computational domain $x\times y\in[-0.3,0.0]\times[-0.4,0.4]$ is discretized with a grid consisting of $480\times1280$ cells. A cylinder with diameter $0.2$ is placed at the center of the east boundary. The north, east, and south boundaries are zero-gradient. The west boundary is of Dirichlet type, imposing the post shock fluid state. The fluid is initialized with the post shock state, i.e., $(\rho,u,v,p)=\left(1,\sqrt{1.4} Ma,0,1\right)$. We simulate the test case with the \textit{HLLC} setup until a steady state solution is reached. Figure \ref{fig:bowshock} illustrates the steady state density and pressure distributions. The results compare well to results from literature, e.g., \cite{Fleischmann2020}. \begin{figure} \centering \begin{tikzpicture} \node (A) at (0,0) {\includegraphics[scale=0.4]{figures/fig_bowshock/bowshock_density_3.png}}; \node [right = 0cm of A] (B) {\includegraphics[scale=0.4]{figures/fig_bowshock/bowshock_pressure_3.png}}; \node at (A.north) {$\rho$}; \node at (B.north) {$p$}; \draw[->] ([xshift=-.1cm, yshift=-.2cm]A.south west) -- ([yshift=-.2cm, xshift=.4cm]A.south west) node[at end, below] {$x$}; \draw[->] ([xshift=-.1cm, yshift=-.2cm]A.south west) -- ([yshift=.3cm, xshift=-.1cm]A.south west) node[at end, left] {$y$}; \node [right = 2cm of B] (C) {\includegraphics[scale=0.4]{figures/fig_bowshock/bowshock_density_20.png}}; \node [right = 0cm of C] (D) {\includegraphics[scale=0.4]{figures/fig_bowshock/bowshock_pressure_20.png}}; \node at (C.north) {$\rho$}; \node at (D.north) {$p$}; \end{tikzpicture} \caption{Density and pressure for the bow shock at $Ma=3$ (left) and $Ma=20$ (right). The colormap ranges from minimum (blue) to maximum (red) value; $\rho\in[0.7, 4.5]$, $p\in[1.0, 15.0]$ for $Ma=3$ and $\rho\in[0.7, 7.2]$, $p\in[1.0, 550.0]$ for $Ma=20$. 
The black lines represent Mach isocontours from 0.1 to 2.5 in steps of 0.2.} \label{fig:bowshock} \end{figure} \subsubsection{Oscillating Drop} \label{subsubsec:OscillatingDrop} We consider a drop oscillating due to the interplay of surface tension and inertia. Starting from an ellipsoidal shape, surface tension forces drive the drop to a circular shape. This process is associated with a transfer of potential to kinetic energy. The oscillating drop test case validates the implementation of the surface tension forces. The drop oscillates with a distinct frequency. The oscillation period $T$ is given by \cite{Rayleigh1879} \begin{equation} \omega^2 = \frac{6\sigma}{(\rho_b+\rho_d)R^3}, \qquad T = \frac{2\pi}{\omega}. \end{equation} We discretize the computational domain $x\times y\in[0,1]\times[0,1]$ with a grid consisting of $200\times200$ cells. We place an ellipse with semi-major and semi-minor axes of $0.2$ and $0.15$ at the center of the domain. The effective circle radius is therefore $R=0.17321$. The bulk and drop densities are $\rho_b=\rho_d=1.0$ and the surface tension coefficient is $\sigma=0.05$. All boundaries are zero-gradient. We use the \textit{HLLC} setup for this simulation. Figure \ref{fig:oscillatingdrop} displays instantaneous pressure fields and the kinetic energy of the drop over time. The present result for the oscillation period is $T=1.16336$, which is in very good agreement with the analytical reference $T_{ref} = 1.16943$. \begin{figure}[H] \centering \input{figures/fig_oscillatingdrop/energy.tex} \caption{Oscillating drop. (Left) Temporal evolution of the pressure distribution within the drop. The colors range from maximum (red) to minimum (blue) value within the shown time period. (Right) Kinetic energy $E_\text{kin}=\int_V \rho \mathbf{u}\cdot\mathbf{u}\text{d}V$ of the drop over time. 
The orange dots indicate the times that correspond to the pressure distributions on the left.} \label{fig:oscillatingdrop} \end{figure} \subsubsection{Shear Drop Deformation} The shear drop deformation test case describes the deformation of an initially circular drop due to homogeneous shear. Viscous forces lead to the deformation into an ellipsoidal shape. For stable parameters, surface tension forces will eventually balance the viscous forces, which results in a steady state solution. The shear flow is generated with moving no-slip wall boundaries. We use this test case to validate the viscous fluid-fluid interface fluxes. The steady state solution is characterized by the viscosity ratio $\mu_b/\mu_d$ and the capillary number $Ca$ \begin{equation} Ca = \frac{\mu_b R \dot{s}}{\sigma}, \end{equation} where $R$ denotes the initial drop radius, $\sigma$ the surface tension coefficient, $\dot{s}$ the shear rate, and $\mu_b$ and $\mu_d$ the viscosities of the bulk and drop fluid, respectively. The following relation holds for small deformations \cite{TaylorG.1934}: \begin{equation} D = \frac{B_1 - B_2}{B_1 + B_2} = Ca \frac{19\mu_b/\mu_d+16}{16\mu_b/\mu_d+16}. \end{equation} Herein, $B_1$ and $B_2$ indicate the semi-major and semi-minor axes of the steady state ellipse. To simulate this test case, we discretize the domain $x\times y\in[0,1]\times[0,1]$ with a grid that consists of $250\times250$ cells. A drop with radius $R=0.2$ is placed at the center of the domain. We move the north and south wall boundaries with an absolute velocity of $u_W = 0.1$ in positive and negative direction, respectively. This results in a shear rate of $0.2$. At the east and west boundaries we enforce periodicity. The viscosities are $\mu_b=\mu_d=0.1$. We simulate multiple capillary numbers by varying the surface tension coefficient with the \textit{HLLC} setup until a steady state solution is reached.
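For the equal-viscosity case above ($\mu_b = \mu_d$), the small-deformation relation reduces to $D = \frac{35}{32}\,Ca$. The expected steady-state deformation can be evaluated with a short sketch:

```python
def taylor_deformation(Ca, visc_ratio=1.0):
    """Small-deformation parameter D of a sheared drop (Taylor, 1934).

    visc_ratio denotes the viscosity ratio mu_b/mu_d as written in the text;
    for equal viscosities it drops out and D = (35/32) * Ca.
    """
    return Ca * (19.0 * visc_ratio + 16.0) / (16.0 * visc_ratio + 16.0)

# Equal viscosities (mu_b = mu_d = 0.1), as in the setup above:
D = taylor_deformation(0.2)   # D = 0.2 * 35/32 = 0.21875
```

At $Ca = 0.2$ this predicts $D \approx 0.219$, the value against which the simulated steady-state ellipse can be compared.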
\label{subsubsec:ShearDrop} \begin{figure}[H] \centering \input{figures/fig_sheardrop/sheardrop.tex} \caption{Shear drop deformation. (Left) Steady state pressure field for $Ca=0.2$. (Right) Deformation parameter $D$ over $Ca$.} \label{fig:sheardrop} \end{figure} Figure \ref{fig:sheardrop} illustrates the pressure distribution of the steady state ellipse at $Ca=0.2$. Furthermore, it shows the deformation parameter $D$ over the capillary number $Ca$. The present result agrees well with the analytical reference. \subsubsection{Shock-bubble Interaction} \label{subsubsec:ShockBubble} We simulate the interaction of a shock with Mach number 1.22 with a helium bubble immersed in air. This is an established test case to assess the robustness and validity of compressible two-phase numerical methods. Reports are well documented in the literature \cite{Fedkiw1999a,Terashima2009,Hoppe2022a} and experimental data are available \cite{Haas1987}. A helium bubble with diameter $D = 50\, \text{mm}$ is placed at the origin of the computational domain $x \times y \in [-90\, \text{mm}, 266\, \text{mm}] \times [-44.5\, \text{mm}, 44.5\, \text{mm}]$. The initial shock wave is located $5\, \text{mm}$ to the left of the helium bubble. The shock wave travels right and impacts the helium bubble. Figure \ref{fig:shockbubbleflowfield} shows the flow field at two later time instances. The interaction of the initial shock with the helium bubble generates a reflected shock which travels to the left and a second shock which is transmitted through the bubble, see Figure \ref{fig:shockbubbleflowfield} on the left. The incident shock wave is visible as a vertical line. The transmitted wave travels faster than the incident shock. The helium bubble deforms strongly and a re-entrant jet forms. Figure \ref{fig:shockbubbleflowfield} on the right shows the instance in time at which the jet impinges on the interface of the bubble.
The numerical schlieren images and the flow fields in Figure \ref{fig:shockbubbleflowfield} are in good qualitative agreement with results from the literature, compare with Figure 7 from \cite{Haas1987} or Figure 10 from \cite{Hoppe2022a}. Figure \ref{fig:shockbubblespacetime} shows the temporal evolution of characteristic interface points. The results are in very good quantitative agreement with \cite{Terashima2009}. \begin{figure} \centering \begin{tikzpicture} \node (A) at (0,0) {\includegraphics[scale=0.6]{figures/fig_shockbubble/flowfield_500e-05.png}}; \node at ([yshift=-.2cm]A.south) {$t = 55\,\mu\text{s}$}; \node [right = 0.0cm of A] (B) {\includegraphics[scale=0.6]{figures/fig_shockbubble/flowfield_550e-04.png}}; \node at ([yshift=-.2cm]B.south) {$t = 555\,\mu\text{s}$}; \end{tikzpicture} \caption{Visualizations of the flow field at two different instances in time. The upper half of each image shows numerical schlieren, the lower half shows the pressure field from the smallest (blue) to the largest (red) pressure value. The black line indicates the location of the interface. } \label{fig:shockbubbleflowfield} \end{figure} \begin{figure} \centering \input{figures/fig_shockbubble/points.tex} \caption{Space-time diagram of three characteristic interface points. Positions of the upstream point (left-most point of the interface), the downstream point (right-most point of the interface), and the jet (left-most point of the interface on the center-line) are tracked. Reference values are taken from Terashima et al. \cite{Terashima2009}.} \label{fig:shockbubblespacetime} \end{figure} \section{Introduction} \label{sec:Intro} The evolution of most known physical systems can be described by partial differential equations (PDEs). The Navier-Stokes equations (NSE) are partial differential equations that describe the continuum-scale flow of fluids.
Fluid flows are omnipresent in engineering applications and in nature, and the accurate numerical simulation of complex flows is crucial for the prediction of global weather phenomena \cite{Bauer2015,Hafner2021}, for applications in biomedical engineering such as air flow through the lungs or blood circulation \cite{Nowak2003,Johnston2004}, and for the efficient design of turbomachinery \cite{Denton1998}, wind turbines \cite{Hansen2006,Sanderse2011}, and airplane wings \cite{Lyu2014}. Computational fluid dynamics (CFD) aims to solve these problems with numerical algorithms. While classical CFD has a rich history, in recent years the symbiosis of machine learning (ML) and CFD has sparked great interest amongst researchers \cite{Duraisamy2019,Brenner2019,Brunton2020a}. The amalgamation of classical CFD and ML requires powerful novel algorithms which allow seamless integration of data-driven models and, more importantly, end-to-end automatic differentiation \cite{Baydin2018} through the entire algorithm. Here, we provide JAX-FLUIDS (\url{https://github.com/tumaer/JAXFLUIDS}): the first state-of-the-art fully-differentiable CFD framework for the computation of three-dimensional compressible two-phase flows with high-order numerical methods. Over the course of this paper, we discuss the challenges of hybrid ML-accelerated CFD solvers and highlight how novel architectures like JAX-FLUIDS have the potential to facilitate ML-supported fluid dynamics research. The quest for powerful numerical fluid dynamics algorithms has been a long-lasting challenge. In the middle of the last century, the development of performant central processing units (CPUs) laid the foundation for the development of computational fluid dynamics (CFD). For the first time, computers were used to simulate fluid flows \cite{Harlow2004}. Computational fluid dynamics, i.e., the numerical investigation of fluid flows, became a scientific field of its own.
With the rapid development of computational hardware, the CFD community witnessed new and powerful algorithms for the computation of more and more complex flows. Among others, robust time integration schemes, high-order spatial discretizations, and accurate flux functions were thoroughly investigated. In the 1980s and 1990s, many advancements in numerical methods for compressible fluid flows followed, e.g., \cite{VanLeer1979,Roe1981,Woodward1984,Toro1994,Liou1996}. In recent years, machine learning (ML) has invigorated the physical sciences by providing novel tools for predicting the evolution of physical systems. Breakthrough inventions in ML \cite{Lecun1989,Hochreiter1997} and rapid technical developments of graphics processing units (GPUs) have led to an ever-growing interest in machine learning. Starting from applications in computer vision \cite{LeCun2010,Lecun2015}, cognitive sciences \cite{Lake2015}, and genomics \cite{Alipanahi2015}, machine learning and data-driven methods have also become more and more popular in physical and engineering sciences. This development has partially been fuelled by the emergence of powerful general-purpose automatic differentiation frameworks, such as Tensorflow \cite{Abadi}, Pytorch \cite{Paszke2019}, and JAX \cite{jax2018github}. Natural and engineering sciences can profit from ML methods as they can learn complex relations from data and enable novel strategies for modelling physics as well as new avenues for post-processing. For example, machine learning has been successfully used to identify PDEs from data \cite{Brunton2016}, and physics-informed neural networks provide new ways for solving inverse problems \cite{Raissi2019,Buhendwa2021c} by combining data and PDE knowledge. It is well known that fluid dynamics is a data-rich and compute-intensive discipline \cite{Brunton2020a}.
The nonlinearity of the underlying governing equations, the Navier-Stokes equations for viscous flows and the Euler equations for inviscid flows, is responsible for very complex spatio-temporal features of fluid flows. For example, turbulent flows exhibit chaotic behavior with strong intermittent flow features and non-Gaussian statistics. At the same time, in the inviscid Euler equations strong discontinuities can form over time due to the hyperbolic character of the underlying PDEs. Machine learning offers novel data-driven methods to tackle long-standing problems in fluid dynamics \cite{Duraisamy2019,Brunton2020a,Brunton2020b}. A large body of research has put forward different ways of incorporating ML in CFD applications. Applications range from fully data-driven surrogate models to less invasive hybrid data-driven numerical methods. Scientific ML methods can be categorized according to different criteria. One important distinction is the level of physical prior knowledge that is included in model and training \cite{Karniadakis2021}. Entirely data-driven ML models have the advantage of being quickly implemented and efficient during inference. However, they typically do not offer guarantees on performance (e.g., convergence, stability, and generalization), convergence in training is challenging, and it is often difficult to enforce physical constraints such as symmetries or conservation of energy. In contrast, established numerical methods are consistent and enforce physical constraints. Recently, intense research has investigated the hybridization of ML and classical numerical methods. A second major distinction of ML models can be made according to on- and offline training. Up until now, ML models have typically been optimized offline, i.e., outside of physics simulators. Upon proper training, they are then plugged into an existing CFD solver for the evaluation of downstream tasks.
Examples include the training of explicit subgrid scale models in large eddy simulations \cite{Beck2019}, interface reconstruction in multiphase flows \cite{Patel2019a,Buhendwa2021b}, and cell face reconstruction in shock-capturing schemes \cite{Stevens2020c,Bezgin2021b}. Although the offline training of ML models is relatively easy, there are several drawbacks to this approach. Firstly, these models suffer from a data-distribution mismatch between the data seen at training and test time. Secondly, they generally do not directly profit from a priori knowledge about the dynamics of the underlying PDE. Additionally, fluid mechanics solvers are often very complex, written in low-level programming languages like Fortran or C++, and heavily optimized for CPU computations. This is in contrast with practices in ML research: ML models are typically trained in Python and optimized for GPU usage. Inserting these models into preexisting CFD software frameworks can be a tedious task. To tackle this problem, researchers have come up with differentiable CFD frameworks written entirely in Python which allow end-to-end optimization of ML models. The end-to-end (online) training approach utilizes automatic differentiation \cite{Baydin2018} through entire simulation trajectories. ML models trained in such a fashion experience the PDE dynamics and also see their own outputs during training. Among others, Bar-Sinai and co-workers \cite{Bar-Sinai2019,Zhuang2021} have proposed a differentiable framework for finding optimal discretizations for simple non-linear one-dimensional problems and turbulent mixing in two dimensions. In \cite{Um2020}, a differentiable solver was placed in the training loop to reduce the error of iterative solvers. Bezgin et al. \cite{Bezgin2021a} have put forward a subgrid scale model for nonclassical shocks. Kochkov et al. \cite{Kochkove2101784118} have optimized a subgrid scale model for two-dimensional turbulence.
The aforementioned works focus on simpler flow configurations. The problems are often one- and two-dimensional, incompressible, and lack complex physics such as two-phase flows, three-dimensional turbulence, or compressibility effects. Additionally, the resulting code packages are often highly problem-specific and cannot be used as general differentiable CFD software packages. To the knowledge of the authors, despite the high interest in ML-accelerated CFD, to this day there does not exist a comprehensive mesh-based software package for \textit{differentiable fluid dynamics} in the Eulerian reference frame. At the same time, for Lagrangian frameworks, the differentiable software package JAX-MD \cite{Schoenholz} has been successfully used for molecular dynamics and provides a general foundation for other particle-based discretization methods. We reiterate that the steady rise and success of machine learning in computational fluid dynamics, but also more broadly in computational physics, calls for a new generation of algorithms which allow \begin{enumerate} \item rapid prototyping in high-level programming languages, \item execution on CPUs, GPUs, and TPUs, \item the seamless integration of machine learning models into solver frameworks, \item end-to-end optimization of data-driven models via full differentiability. \end{enumerate} Realizing the advent of \textit{differentiable fluid dynamics} and the increasing need for a differentiable general high-order CFD solver, here we introduce JAX-FLUIDS as a fully-differentiable general-purpose 3D finite-volume CFD solver for compressible two-phase flows. JAX-FLUIDS is written entirely in JAX \cite{jax2018github}, a numerical computing library with automatic differentiation capabilities which has seen increased popularity in the physical science community over the past several years. JAX-FLUIDS provides a wide variety of state-of-the-art high-order methods for compressible turbulent flows.
A powerful level-set implementation allows the simulation of arbitrary solid boundaries and two-phase flows. High-order shock-capturing methods enable the accurate computation of shock-dominated compressible flow problems and shock-turbulence interactions. Performance and usability were key design goals during the development of JAX-FLUIDS. The solver can conveniently be installed as a Python package. The source code builds on the JAX NumPy API. Therefore, JAX-FLUIDS is accessible, performant, and runs on CPUs as well as on accelerators like GPUs and TPUs. Additionally, an object-oriented programming style and a modular design philosophy allow users to easily integrate new modules. We believe that frameworks like JAX-FLUIDS are crucial to facilitate research at the intersection of ML and CFD, and may pave the way for an era of \textit{differentiable fluid dynamics}. The remainder of this paper is organized as follows. In Sections \ref{sec:PhysicalModel} and \ref{sec:numericalmodel}, we describe the physical and numerical model. Section \ref{sec:Implementation} describes the challenges of writing a high-performance CFD code in a high-level programming language such as JAX. We additionally detail the general structure of our research code. In Section \ref{sec:ForwardPass}, we evaluate our code as a classical physics simulator. We validate the numerical framework on canonical test cases from compressible fluid mechanics, including strong shocks and turbulent flows. In Section \ref{sec:Performance}, we assess the computational performance of JAX-FLUIDS. Section \ref{sec:BackwardPass} showcases how the proposed framework can be used in machine learning tasks. Specifically, we demonstrate the full differentiability of the framework by optimizing a numerical flux function. Finally, in Section \ref{sec:Conclusion} we conclude and summarize the main results of our work. \section{Numerical Model} \label{sec:numericalmodel} In this section we detail the numerical methods of JAX-FLUIDS.
Table \ref{tab:NumericalMethods} provides an overview of the implemented numerical methods. \subsection{Finite-Volume Discretization} \label{subsec:FVD} The differential form of the conservation law in Equations \eqref{eq:DiffConsLaw1} and \eqref{eq:DiffConsLaw2} assumes smooth solutions for which partial derivatives exist. In practice, we solve the integral form of the partial differential equations using the finite-volume method. We use cuboid cells on a Cartesian grid. In general, cell $(i,j,k)$ has spatial extensions $\Delta x$, $\Delta y$, and $\Delta z$ in the spatial dimensions $x$, $y$, and $z$, respectively. We denote the corresponding cell volume as $V=\Delta x\Delta y\Delta z$. Often, we use cubic cells for which $\Delta x = \Delta y = \Delta z$. In the finite-volume formulation we are interested in the spatio-temporal distribution of cell-averaged values which are defined by \begin{equation} \bar{\mathbf{U}}_{i,j,k} = \frac{1}{V} \int_{x_{i-\frac{1}{2},j,k}}^{x_{i+\frac{1}{2},j,k}} \int_{y_{i,j-\frac{1}{2},k}}^{y_{i,j+\frac{1}{2},k}} \int_{z_{i,j,k-\frac{1}{2}}}^{z_{i,j,k+\frac{1}{2}}} \mathbf{U} \,\text{d}x\,\text{d}y\,\text{d}z.
\end{equation} After application of volume integration to Equation \eqref{eq:DiffConsLaw2}, the temporal evolution of the cell-averaged value in cell $(i,j,k)$ is given by \begin{align} \begin{split} \frac{\text{d}}{\text{d}t} \bar{\mathbf{U}}_{i,j,k} = &- \frac{1}{\Delta x} \left(\mathbf{F}_{i+\frac{1}{2},j,k} - \mathbf{F}_{i-\frac{1}{2},j,k} \right) \\ & - \frac{1}{\Delta y} \left( \mathbf{G}_{i,j+\frac{1}{2},k} - \mathbf{G}_{i,j-\frac{1}{2},k} \right) \\ & - \frac{1}{\Delta z} \left( \mathbf{H}_{i,j,k+\frac{1}{2}} - \mathbf{H}_{i,j,k-\frac{1}{2}}\right) \\ & + \frac{1}{\Delta x} \left(\mathbf{F}^d_{i+\frac{1}{2},j,k} - \mathbf{F}^d_{i-\frac{1}{2},j,k} \right) \\ & + \frac{1}{\Delta y} \left( \mathbf{G}^d_{i,j+\frac{1}{2},k} - \mathbf{G}^d_{i,j-\frac{1}{2},k} \right) \\ & + \frac{1}{\Delta z} \left( \mathbf{H}^d_{i,j,k+\frac{1}{2}} - \mathbf{H}^d_{i,j,k-\frac{1}{2}}\right) \\ & + \bar{\mathbf{S}}_{i,j,k}. \label{eq:FVD} \end{split} \end{align} $\mathbf{F}$, $\mathbf{G}$, and $\mathbf{H}$ are the convective intercell numerical fluxes across the cell faces in $x$-, $y$-, and $z$-direction, and $\mathbf{F}^d$, $\mathbf{G}^d$, and $\mathbf{H}^d$ are the numerical approximations to the dissipative fluxes in $x$-, $y$-, and $z$-direction. $\bar{\mathbf{S}}_{i,j,k}$ is the cell-averaged source term. Note that the convective and dissipative fluxes in Equation \eqref{eq:FVD} are cell-face-averaged quantities. We use a simple one-point Gaussian quadrature to evaluate the necessary cell face integrals; however, approximations of higher order are also possible, see, e.g., \cite{Coralic2014e}. For the calculation of the convective intercell numerical fluxes, we use WENO-type high-order discretization schemes \cite{Jiang1996} in combination with approximate Riemann solvers. Subsection \ref{subsec:InviscidFlux} provides more details. Dissipative fluxes are calculated by central finite differences, see Subsection \ref{subsec:Dissipative} for details.
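Restricted to one spatial dimension and neglecting source terms, the semi-discrete finite-volume update can be sketched with the JAX NumPy API as follows; this is a schematic illustration of the flux-difference structure, not the actual JAX-FLUIDS implementation:

```python
import jax.numpy as jnp

def fvd_rhs_1d(F, Fd, dx):
    """Semi-discrete finite-volume right-hand side in one dimension.

    F and Fd hold the convective and dissipative fluxes at the N+1 cell
    faces, with F[i] approximating F_{i-1/2}. Returns dU/dt for N cells.
    """
    dF = F[1:] - F[:-1]      # F_{i+1/2} - F_{i-1/2}
    dFd = Fd[1:] - Fd[:-1]   # dissipative counterpart
    return (-dF + dFd) / dx
```

For multidimensional problems, the same flux-difference operation is applied along each axis in turn.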
Multidimensional settings are treated via the dimension-by-dimension technique in a straightforward manner, i.e., the aforementioned steps are repeated for each spatial dimension separately. \subsection{Time Integration} \label{subsec:timeint} The finite-volume discretization yields a set of ordinary differential equations (ODEs) \eqref{eq:FVD} that can be integrated in time by an ODE solver of choice. We typically use explicit total-variation diminishing (TVD) Runge-Kutta methods \cite{Gottlieb1998a}. The time step size is given by the CFL criterion as described in \cite{Hoppe2022a}. \subsection{Convective Flux Calculation} \label{subsec:InviscidFlux} For the calculation of the convective fluxes an approximate solution to a Riemann problem has to be found at each cell face. As we make use of the dimension-by-dimension technique, without loss of generality, we restrict ourselves to a one-dimensional setting in this subsection. We are interested in finding the cell face flux $\mathbf{F}_{i+\frac{1}{2}}$ at the cell face $x_{i+\frac{1}{2}}$. We distinguish between two different methods for the calculation of the convective intercell fluxes: the \textit{High-order Godunov} approach and the \textit{Flux-splitting} approach. In the \textit{High-order Godunov} approach, we first reconstruct flow states left and right of a cell face, and subsequently evaluate an approximate Riemann solver for the calculation of the convective intercell flux. In the \textit{Flux-splitting} approach, we first perform cell-wise flux-splitting, reconstruct left- and right-going fluxes for each cell face, and subsequently assemble the final flux. In the following, we briefly sketch both methods.\\ \noindent\textit{High-order Godunov approach} \begin{enumerate} \item Apply WENO reconstruction on the primitive variable $\mathbf{W}_i$/the conservative variable $\mathbf{U}_i$ to obtain the cell face quantities $\mathbf{W}^{\pm}_{i+\frac{1}{2}}$/$\mathbf{U}^{\pm}_{i+\frac{1}{2}}$.
Note that the reconstruction can be done directly in physical space or via transformation in characteristic space. High-order reconstruction in physical space can lead to spurious oscillations due to the interaction of discontinuities in different fields \cite{Qiu2002}. \item Transform the reconstructed primitive/conservative variables at the cell face into conservative/primitive cell face quantities: $\mathbf{W}^{\pm}_{i+\frac{1}{2}} \rightarrow \mathbf{U}^{\pm}_{i+\frac{1}{2}}$ / $\mathbf{U}^{\pm}_{i+\frac{1}{2}} \rightarrow \mathbf{W}^{\pm}_{i+\frac{1}{2}}$. \item Compute the final flux with an appropriate flux function/approximate Riemann solver, e.g., HLL \cite{Harten1983a} or HLLC \cite{Toro1994}: \begin{equation} \mathbf{F}_{i+\frac{1}{2}} = \mathbf{F}_{i+\frac{1}{2}} \left( \mathbf{U}_{i+\frac{1}{2}}^{-}, \mathbf{U}_{i+\frac{1}{2}}^{+}, \mathbf{W}_{i+\frac{1}{2}}^{-}, \mathbf{W}_{i+\frac{1}{2}}^{+} \right) \label{eq:ApproxRiemannSolver} \end{equation} \end{enumerate} \noindent\textit{Flux-Splitting approach} \begin{enumerate} \item At the cell face $x_{i+\frac{1}{2}}$ compute an appropriate average state from neighboring cells $\mathbf{U}_{i+\frac{1}{2}} = \mathbf{U}_{i+\frac{1}{2}} \left( \mathbf{U}_{i}, \mathbf{U}_{i+1} \right)$ (e.g., by arithmetic mean or Roe average \cite{Toro2009a}) and the corresponding Jacobian $\mathbf{A}_{i+\frac{1}{2}} = \mathbf{A}_{i+\frac{1}{2}} \left( \mathbf{U}_{i+\frac{1}{2}} \right)$. \item Eigenvalue decomposition of the Jacobian: $\mathbf{A}_{i+\frac{1}{2}} = \mathbf{R}_{i+\frac{1}{2}} \mathbf{\Lambda}_{i+\frac{1}{2}} \mathbf{R}_{i+\frac{1}{2}}^{-1}$, with the matrix of right eigenvectors $\mathbf{R}_{i+\frac{1}{2}}$, the matrix of left eigenvectors $\mathbf{R}_{i+\frac{1}{2}}^{-1}$, and the eigenvalues $\mathbf{\Lambda}_{i+\frac{1}{2}}$. 
\item Transform the cell state $\mathbf{U}_i$ and the flux $\mathbf{F}_i$ to characteristic space: $\mathbf{V}_i = \mathbf{R}_{i+\frac{1}{2}}^{-1} \mathbf{U}_i, \quad \mathbf{G}_i = \mathbf{R}_{i+\frac{1}{2}}^{-1} \mathbf{F}_i$. \item Perform the user-specified flux splitting: $\hat{\mathbf{G}}^{\pm}_i = \frac{1}{2} \left( \mathbf{G}_i \pm \bar{\mathbf{\Lambda}}_{i+\frac{1}{2}} \mathbf{V}_i \right)$, where $\bar{\mathbf{\Lambda}}_{i+\frac{1}{2}}$ is the eigenvalue matrix of the respective flux-splitting scheme. \item Apply WENO reconstruction on $\hat{\mathbf{G}}^{\pm}_i$ to obtain $\hat{\mathbf{G}}^{\pm}_{i+\frac{1}{2}}$ at the cell face $x_{i+\frac{1}{2}}$. \item Assemble the final flux in characteristic space: $\hat{\mathbf{G}}_{i+\frac{1}{2}} = \hat{\mathbf{G}}^{+}_{i+\frac{1}{2}} + \hat{\mathbf{G}}^{-}_{i+\frac{1}{2}}$. \item Transform the final flux back to physical space: $\mathbf{F}_{i+\frac{1}{2}} = \mathbf{R}_{i+\frac{1}{2}} \hat{\mathbf{G}}_{i+\frac{1}{2}}$. \end{enumerate} \subsection{Dissipative Flux Calculation} \label{subsec:Dissipative} For the calculation of the dissipative fluxes, we have to evaluate derivatives at the cell faces. We do this using central finite-differences. As we do all computations in a Cartesian framework, central finite-differences can be evaluated directly at a cell face if the direction of the derivative is parallel to the cell face normal. For example, at the cell face $x_{i+1/2,j,k}$ the $x$-derivative of any quantity $\psi$ is directly approximated with second-order or fourth-order central finite-differences, \begin{equation} \left.\frac{\partial \psi}{\partial x}\right\vert_{x_{i+1/2,j,k}}^{C2} = \frac{-\psi_{i,j,k} + \psi_{i+1,j,k}}{\Delta x}, \quad \left.\frac{\partial \psi}{\partial x}\right\vert_{x_{i+1/2,j,k}}^{C4} = \frac{\psi_{i-1,j,k} - 27 \psi_{i,j,k} + 27 \psi_{i+1,j,k} - \psi_{i+2,j,k}}{24 \Delta x}.
\end{equation} If the cell face normal is perpendicular to the direction of the derivative, we use a two-step process to approximate the derivative. We first evaluate the derivative at the cell centers and then use a central interpolation to obtain the value at the cell face of interest. Again, we use second-order or fourth-order central finite-differences for the derivatives. Let us consider the $y$-derivative of the quantity $\psi$ at the cell face $x_{i+\frac{1}{2},j,k}$ (the $z$-derivative is completely analogous). We first calculate the derivative at the cell centers \begin{equation} \left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i,j,k}}^{C2} = \frac{-\psi_{i,j-1,k} + \psi_{i,j+1,k}}{2 \Delta y}, \quad \left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i,j,k}}^{C4} = \frac{\psi_{i,j-2,k} - 8 \psi_{i,j-1,k} + 8 \psi_{i,j+1,k} - \psi_{i,j+2,k}}{12 \Delta y}. \end{equation} We subsequently interpolate these values to the cell face, \begin{align} \begin{split} &\left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i+1/2,j,k}} = \frac{1}{2} \left( \left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i,j,k}} + \left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i+1,j,k}} \right),\\ &\left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i+1/2,j,k}} = \frac{1}{16} \left( - \left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i-1,j,k}} + 9 \left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i,j,k}} + 9 \left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i+1,j,k}} - \left.\frac{\partial \psi}{\partial y}\right\vert_{x_{i+2,j,k}} \right). \end{split} \end{align} \subsection{Source Terms and Forcings} \label{subsec:forcing} The source terms $S(\mathbf{U})$ represent body forces and heat sources. We use them to impose physical constraints, e.g., fixed mass flow rates or temperature profiles. These forcings are required to simulate a variety of test cases. Examples include channel flows or driven homogeneous isotropic turbulence.
Fixed mass flow rates are enforced with a PID controller minimizing the error between target and current mass flow rate $e(t) = \frac{\dot{m}_{\text{target}} - \dot{m}(t)}{\dot{m}_{\text{target}}}$. Here, the control variable is an acceleration $a_{\dot{m}}$ that drives the fluid in the prescribed direction. We denote the unit vector pointing towards the prescribed direction as $\mathbf{N}$. The control variable and the resulting source terms read \begin{equation} a_{\dot{m}} = K_p e(t) + K_I \int_0^t e(\tau)\text{d}\tau + K_d \frac{\text{d}e(t)}{\text{d}t}, \qquad S(\mathbf{U}) = \begin{pmatrix} 0 \\ \rho a_{\dot{m}} \mathbf{N} \\ \rho a_{\dot{m}} \mathbf{u} \cdot \mathbf{N} \end{pmatrix}, \label{eq:PID_massflow} \end{equation} where $K_p$, $K_I$, and $K_d$ are the controller parameters. The integral and derivative in Equation \eqref{eq:PID_massflow} are approximated with first-order schemes. Fixed temperature profiles are enforced with a heat source $\dot{\omega}_T$. The heat source and the resulting source term are given by \begin{equation} \dot{\omega}_T = \rho R \frac{\gamma}{\gamma - 1} \frac{T_{\text{target}} - T}{\Delta t}, \qquad S(\mathbf{U}) = \dot{\omega}_T [0,0,0,0,1]^T. \end{equation} \subsection{Level-set Method for Two-phase Flows} \label{subsec:levelset} We use the level-set method \cite{Osher1988} to model two-phase flows with fluid-fluid and fluid-solid interfaces. In particular, we implement the sharp-interface level-set method proposed by Hu et al. \cite{Hu2006}, which is also used in the solver ALPACA \cite{Hoppe2020a,Hoppe2022a}. The interface is tracked by a scalar field $\phi$ whose values represent the signed distance of each cell center from the interface within the mesh of the finite-volume discretization. This implies that there is a positive phase ($\phi > 0$) and a negative phase ($\phi < 0$) with the interface being located at the zero level-set of $\phi$. A cell that is intersected by the interface is referred to as a cut cell.
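Given the cell-center level-set values, cut cells in 1D can be flagged by a sign change between neighboring cells, which indicates that the zero level-set crosses the face between them. A minimal sketch (illustrative approximation, not the solver's multi-dimensional implementation):

```python
import numpy as np

def flag_cut_cells_1d(phi):
    """Flag cells adjacent to a sign change of the level-set, i.e.,
    cells next to a face crossed by the interface (both neighbors of a
    crossing are flagged, a conservative approximation)."""
    sign = np.sign(phi)
    cut = np.zeros(phi.shape, dtype=bool)
    cut[:-1] |= sign[:-1] != sign[1:]
    cut[1:] |= sign[1:] != sign[:-1]
    return cut

# Interface located between cells 1 and 2.
cut = flag_cut_cells_1d(np.array([-1.5, -0.5, 0.5, 1.5]))
```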
Figure \ref{fig:cut_cell} shows a schematic of a cut cell in the finite-volume discretization. The apertures $A_{i\pm\frac{1}{2},j,k}$, $A_{i,j\pm\frac{1}{2},k}$, and $A_{i,j,k\pm\frac{1}{2}}$ represent the portion of the cell face area that is covered by the respective fluid. The volume fraction $\alpha_{i,j,k}$ denotes the portion of the cell volume covered by the respective fluid. Hereinafter, we will refer to the positive phase with the subscript 1 and to the negative phase with the subscript 2. The following relations between the geometrical quantities for the positive and negative phase apply: \begin{equation} \alpha_1 = 1 - \alpha_2,\quad A_1 = 1 - A_2. \label{eq:posnegphase} \end{equation} \begin{figure}[t] \centering \input{figures/subcell_reconstruction.tex} \caption{Schematic finite-volume discretization for cut cell $(i,j,k)$ on a Cartesian grid. The red dots represent the cell centers. The red line indicates the interface, and the blue line gives the linear approximation of the interface. The fluid with positive level-set values is colored in gray, and the fluid with negative level-set values is colored in white. Volume fraction and apertures are computed for the positive fluid. Note that the figure illustrates a two-dimensional slice in the $(x,y)$-plane. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)} \label{fig:cut_cell} \end{figure} We solve Equation \eqref{eq:FVD} for both phases separately. However, in a cut cell $(i,j,k)$, the equation is modified as follows. 
\begin{align} \begin{split} \frac{\text{d}}{\text{d}t} \alpha_{i,j,k} \bar{\mathbf{U}}_{i,j,k} = & - \frac{1}{\Delta x} \left[A_{i+\frac{1}{2},j,k} \left(\mathbf{F}_{i+\frac{1}{2},j,k} + \mathbf{F}^d_{i+\frac{1}{2},j,k}\right) - A_{i-\frac{1}{2},j,k} \left(\mathbf{F}_{i-\frac{1}{2},j,k} + \mathbf{F}^d_{i-\frac{1}{2},j,k} \right) \right] \\ & - \frac{1}{\Delta y} \left[A_{i,j+\frac{1}{2},k} \left(\mathbf{G}_{i,j+\frac{1}{2},k} + \mathbf{G}^d_{i,j+\frac{1}{2},k}\right) - A_{i,j-\frac{1}{2},k} \left(\mathbf{G}_{i,j-\frac{1}{2},k} + \mathbf{G}^d_{i,j-\frac{1}{2},k} \right) \right] \\ & - \frac{1}{\Delta z} \left[A_{i,j,k+\frac{1}{2}} \left(\mathbf{H}_{i,j,k+\frac{1}{2}} + \mathbf{H}^d_{i,j,k+\frac{1}{2}}\right) - A_{i,j,k-\frac{1}{2}} \left(\mathbf{H}_{i,j,k-\frac{1}{2}} + \mathbf{H}^d_{i,j,k-\frac{1}{2}} \right) \right] \\ & + \alpha_{i,j,k} \bar{\mathbf{S}}_{i,j,k}\\ & - \frac{1}{\Delta x \Delta y \Delta z} \left[\mathbf{X}_{i,j,k}(\Delta \Gamma) + \mathbf{X}^d_{i,j,k}(\Delta \Gamma) \right] \label{eq:FVD_levelset} \end{split} \end{align} The cell-averaged state and the intercell fluxes must be weighted with the volume fraction and the cell face apertures, respectively. The terms $\mathbf{X}(\Delta \Gamma)$ and $\mathbf{X}^d(\Delta \Gamma)$ denote the convective and dissipative interface flux, with $\Delta \Gamma$ being the interface segment length. We define the projections of the interface segment length on the $x$, $y$, and $z$ direction as the vector \begin{equation} \Delta \mathbf{\Gamma}_p = \begin{pmatrix} \Delta \Gamma (\mathbf{i} \cdot \mathbf{n}_I) \\ \Delta \Gamma (\mathbf{j} \cdot \mathbf{n}_I) \\ \Delta \Gamma (\mathbf{k} \cdot \mathbf{n}_I) \\ \end{pmatrix}, \qquad \label{eq:interface_segment_projection} \end{equation} where $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ represent the unit vectors in $x$, $y$, and $z$ direction, respectively. The interface normal is given by $\mathbf{n}_I = \nabla \phi / |\nabla \phi|$. 
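The interface normal $\mathbf{n}_I = \nabla \phi / |\nabla \phi|$ can be approximated from the level-set field with central differences; a 2D sketch (illustrative, not the solver's stencil):

```python
import numpy as np

def interface_normal_2d(phi, dx, dy):
    """n_I = grad(phi) / |grad(phi)| on interior cells,
    using second-order central differences."""
    nx = (phi[2:, 1:-1] - phi[:-2, 1:-1]) / (2.0 * dx)
    ny = (phi[1:-1, 2:] - phi[1:-1, :-2]) / (2.0 * dy)
    norm = np.sqrt(nx**2 + ny**2)
    return nx / norm, ny / norm

# Signed-distance field of a vertical interface at x = 0: n_I = (1, 0).
x = np.linspace(-1.0, 1.0, 5)
phi = np.tile(x[:, None], (1, 5))
nx, ny = interface_normal_2d(phi, dx=0.5, dy=0.5)
```

Since $\mathbf{i}\cdot\mathbf{n}_I$, $\mathbf{j}\cdot\mathbf{n}_I$, and $\mathbf{k}\cdot\mathbf{n}_I$ are simply the Cartesian components of $\mathbf{n}_I$, the projection in Equation \eqref{eq:interface_segment_projection} amounts to scaling $\mathbf{n}_I$ by $\Delta \Gamma$.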
The interface fluxes read \begin{equation} \mathbf{X} = \begin{pmatrix} 0 \\ p_I \Delta \mathbf{\Gamma}_p \\ p_I \Delta \mathbf{\Gamma}_p \cdot \mathbf{u}_I \end{pmatrix}, \qquad \mathbf{X}^d = \begin{pmatrix} 0 \\ \mathbf{\tau}_I^T \Delta \mathbf{\Gamma}_p \\ (\mathbf{\tau}_I^T \Delta \mathbf{\Gamma}_p) \cdot \mathbf{u}_I - \mathbf{q}_I \cdot \Delta \mathbf{\Gamma}_p \end{pmatrix}. \label{eq:interface_flux} \end{equation} Here, $p_I$ and $\mathbf{u}_I$ denote the interface pressure and interface velocity. The viscous interface stress tensor $\mathbf{\tau}_I$ is given by \begin{equation} \mathbf{\tau}_I = \begin{pmatrix} \tau_I^{11} & \tau_I^{12} & \tau_I^{13} \\ \tau_I^{21} & \tau_I^{22} & \tau_I^{23} \\ \tau_I^{31} & \tau_I^{32} & \tau_I^{33} \\ \end{pmatrix}, \qquad \tau_I^{ij} = \mu_I \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) - \frac{2}{3} \mu_I \delta_{ij} \frac{\partial u_k}{\partial x_k} , \qquad \mu_I = \frac{\mu_1\mu_2}{\alpha_1\mu_2 + \alpha_2\mu_1}. \end{equation} The interface heat flux $\mathbf{q}_I$ reads \begin{equation} \mathbf{q}_I = -\lambda_I \nabla T, \qquad \lambda_I = \frac{\lambda_1\lambda_2}{\alpha_1\lambda_2 + \alpha_2\lambda_1}. \end{equation} The evaluation of $\mathbf{\tau}_I$ and $\mathbf{q}_I$ requires the computation of velocity and temperature gradients at the interface. The gradient at the interface is approximated with the gradient at the cell center. We use the real fluid state to evaluate these gradients. The real fluid state in a cut cell is approximated by $\mathbf{W} = \alpha_1\mathbf{W}_1 + \alpha_2\mathbf{W}_2$. As we solve Equations \eqref{eq:FVD_levelset} and \eqref{eq:interface_segment_projection} for each phase separately, we emphasize that $\mathbf{n}_I$, $\alpha$, and $A$ must be computed with respect to the present phase. 
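The volume-fraction-weighted interface coefficients $\mu_I$ and $\lambda_I$ share the same functional form; a minimal sketch:

```python
def interface_coefficient(c1, c2, alpha1):
    """Interface transport coefficient, e.g.,
    mu_I = mu1*mu2 / (alpha1*mu2 + alpha2*mu1);
    lambda_I follows the same form."""
    alpha2 = 1.0 - alpha1
    return c1 * c2 / (alpha1 * c2 + alpha2 * c1)

# Identical phase coefficients are recovered exactly.
mu_I = interface_coefficient(2.0, 2.0, alpha1=0.3)
```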
Conservation is satisfied since $\mathbf{n}_{I1} = -\mathbf{n}_{I2}$, i.e., the interface flux terms for the two fluids at the interface have the same magnitude but opposite sign. The computation of the interface velocity $\mathbf{u}_I$ and interface pressure $p_I$ depends on the type of interface interaction: \begin{itemize} \item For \textbf{fluid-solid} interactions, the interface velocity is prescribed as either a constant or a space- and time-dependent function. The interface pressure $p_I$ is approximated with the cell pressure. \item For \textbf{fluid-fluid} interactions, the two-material Riemann problem at the interface is solved. The solution reads \cite{Hu2004} \begin{align} \mathbf{u}_I &= \frac{\rho_1c_1 \mathbf{u}_1\cdot \mathbf{n}_{I1} + \rho_2c_2 \mathbf{u}_2\cdot \mathbf{n}_{I1} + p_2 - p_1 - \sigma \kappa}{\rho_1c_1 + \rho_2c_2} \mathbf{n}_{I1}, \notag \\ p_{I1} &= \frac{\rho_1c_1(p_2 + \sigma \kappa) + \rho_2c_2p_1 + \rho_1c_1\rho_2c_2(\mathbf{u}_2\cdot \mathbf{n}_{I1} - \mathbf{u}_1\cdot \mathbf{n}_{I1})}{\rho_1c_1 + \rho_2c_2}, \\ p_{I2} &= \frac{\rho_1c_1p_2 + \rho_2c_2(p_1 - \sigma \kappa) + \rho_1c_1\rho_2c_2(\mathbf{u}_2\cdot \mathbf{n}_{I1} - \mathbf{u}_1\cdot \mathbf{n}_{I1})}{\rho_1c_1 + \rho_2c_2}, \notag \end{align} where $c$ denotes the speed of sound, $\sigma$ is the surface tension coefficient, and $\kappa = \nabla \cdot \mathbf{n}_{I1}$ denotes the curvature. For $\sigma \neq 0$ and $\kappa \neq 0$, the interface pressure exhibits a jump across the interface, as required by mechanical equilibrium. The interface pressure in Equation \eqref{eq:interface_flux} must be chosen with respect to the present phase. \end{itemize} Assuming a linear interface within each cut cell, the cell face apertures are computed analytically as follows. The level-set values at the corners of a computational cell are computed using trilinear interpolation.
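Projected onto the interface normal, the two-material Riemann solution can be sketched as follows (illustrative; inputs are the normal velocity components $u_{1n}$, $u_{2n}$ and the product $\sigma\kappa$):

```python
def interface_riemann(rho1, c1, u1n, p1, rho2, c2, u2n, p2, sigma_kappa=0.0):
    """Two-material Riemann solution at the interface, projected onto the
    interface normal n_I1; u1n and u2n are the normal velocity components,
    sigma_kappa the product of surface tension coefficient and curvature."""
    z1, z2 = rho1 * c1, rho2 * c2  # acoustic impedances
    denom = z1 + z2
    u_I = (z1 * u1n + z2 * u2n + p2 - p1 - sigma_kappa) / denom
    p_I1 = (z1 * (p2 + sigma_kappa) + z2 * p1 + z1 * z2 * (u2n - u1n)) / denom
    p_I2 = (z1 * p2 + z2 * (p1 - sigma_kappa) + z1 * z2 * (u2n - u1n)) / denom
    return u_I, p_I1, p_I2

# Symmetric states without surface tension: interface at rest, equal pressures.
u_I, p_I1, p_I2 = interface_riemann(1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0)
```

Note that for $\sigma\kappa \neq 0$ the solution reproduces the pressure jump $p_{I1} - p_{I2} = \sigma\kappa$.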
The signs of the level-set values at the four corners of a cell face determine the cut cell face configuration. Figure \ref{fig:cut_cell_face} illustrates three typical cases of a cell face that is intersected by the interface. In total, there are $2^4$ different sign combinations of the level-set values at the corners. Hence, there are $2^4$ different cut cell face configurations. For each of these, the cell face aperture is evaluated as the (sum of) areas of the basic geometric shapes, i.e., triangle or trapezoid. \begin{figure}[!h] \centering \begin{tikzpicture} \draw[thick, black] (0,0) -- (0,2) node[at end, circle, draw=blue] {} -- (2,2) node[at end, circle, draw=blue] {} -- (2,0) node[at end, circle, draw=blue] {} -- (0,0) node[at end, circle, draw=blue, fill=blue] {}; \draw[thick, dashed] (1.0,0.0) -- (0.0,1.0); \draw[thick, black] (3,0) -- (3,2) node[at end, circle, draw=blue, fill=blue] {} -- (5,2) node[at end, circle, draw=blue] {} -- (5,0) node[at end, circle, draw=blue] {} -- (3,0) node[at end, circle, draw=blue, fill=blue] {}; \draw[thick, dashed] (3.8,0.0) -- (4.2,2.0); \draw[thick, black] (6,0) -- (6,2) node[at end, circle, draw=blue, fill=blue] {} -- (8,2) node[at end, circle, draw=blue] {} -- (8,0) node[at end, circle, draw=blue, fill=blue] {} -- (6,0) node[at end, circle, draw=blue] {}; \draw[thick, dashed] (6.0,0.5) -- (7.0,2.0); \draw[thick, dashed] (7.2,0.0) -- (8.0,0.8); \end{tikzpicture} \caption{Three typical cut cell face configurations. Solid and hollow blue circles indicate positive and negative level-set corner values, respectively.} \label{fig:cut_cell_face} \end{figure} The interface segment length $\Delta \Gamma$ is computed from the apertures as follows.
\begin{equation} \Delta \Gamma_{i,j,k} = \left[(A_{i+\frac{1}{2},j,k} - A_{i-\frac{1}{2},j,k})^2\Delta y\Delta z + (A_{i,j+\frac{1}{2},k} - A_{i,j-\frac{1}{2},k})^2\Delta x\Delta z + (A_{i,j,k+\frac{1}{2}} - A_{i,j,k-\frac{1}{2}})^2\Delta x\Delta y \right]^\frac{1}{2} \end{equation} Geometrical reconstruction with seven pyramids yields the volume fraction $\alpha$. \begin{align} \alpha_{i,j,k} &= \frac{1}{3} \frac{1}{\Delta x \Delta y \Delta z} \left[ A_{i+\frac{1}{2},j,k} \Delta y \Delta z \frac{1}{2} \Delta x + A_{i-\frac{1}{2},j,k} \Delta y \Delta z \frac{1}{2} \Delta x + A_{i,j+\frac{1}{2},k} \Delta x \Delta z \frac{1}{2} \Delta y \right. \notag \\ &+ \left. A_{i,j-\frac{1}{2},k} \Delta x \Delta z \frac{1}{2} \Delta y + A_{i,j,k-\frac{1}{2}} \Delta x \Delta y \frac{1}{2} \Delta z + A_{i,j,k+\frac{1}{2}} \Delta x \Delta y \frac{1}{2} \Delta z + \Delta \Gamma_{i,j,k} \phi_{i,j,k} \vphantom{\frac12}\right] \end{align} Note that the described approach yields the volume fraction and apertures with respect to the positive phase. The values of the negative phase can be obtained from relations \eqref{eq:posnegphase}. The level-set field is advected by the interface velocity $\mathbf{u}_I= \mathbf{n}_I u_I$ by solving the level-set advection equation. \begin{equation} \frac{\partial \phi}{\partial t} + \mathbf{u}_I\cdot\nabla\phi = 0 \label{eq:levelset_advection} \end{equation} The spatial term in Equation \eqref{eq:levelset_advection} is discretized using high-order upstream central (HOUC) \cite{Nourgaliev2007} stencils. For the temporal integration, we apply the same scheme that is used to integrate the conservative variables, which typically is a Runge-Kutta method, see Subsection \ref{subsec:timeint}. The level-set field is only advected within a narrowband around the interface. In order to apply the reconstruction stencils used in the finite-volume discretization near the interface, we extrapolate the real fluid state to the other side of the interface. 
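For a single cut cell, the relations above for $\Delta \Gamma$ and $\alpha$ can be sketched as follows (illustrative, scalar arguments only):

```python
import math

def cut_cell_geometry(Axm, Axp, Aym, Ayp, Azm, Azp, phi, dx, dy, dz):
    """Interface segment length dGamma and volume fraction alpha of a single
    cut cell from its six face apertures and the cell-center level-set phi."""
    dGamma = math.sqrt((Axp - Axm)**2 * dy * dz
                       + (Ayp - Aym)**2 * dx * dz
                       + (Azp - Azm)**2 * dx * dy)
    V = dx * dy * dz
    alpha = (1.0 / (3.0 * V)) * ((Axp + Axm) * dy * dz * 0.5 * dx
                                 + (Ayp + Aym) * dx * dz * 0.5 * dy
                                 + (Azp + Azm) * dx * dy * 0.5 * dz
                                 + dGamma * phi)
    return dGamma, alpha

# A fully covered cell (all apertures one) gives dGamma = 0 and alpha = 1.
dGamma, alpha = cut_cell_geometry(1, 1, 1, 1, 1, 1, phi=0.5, dx=1, dy=1, dz=1)
```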
We denote the cells on the other side of the interface as ghost cells. An arbitrary quantity $\psi$ is extrapolated from the real cells to the ghost cells by advancing it in a fictitious time $\tau$ to steady state according to \begin{equation} \frac{\partial \psi}{\partial \tau} \pm \mathbf{n}_I\cdot\nabla \psi = 0. \label{eq:extension} \end{equation} The sign of the interface normal $\pm \mathbf{n}_I$ depends on the sign of the present phase: To extrapolate the real fluid state of the positive phase into its ghost cells, we must extrapolate in the direction of the negative level-set, i.e., $-\mathbf{n}_I$, and vice versa. The spatial term is discretized using a first-order upwind stencil. Temporal integration is performed with the Euler method. Note that we also use this extension procedure to extend the interface velocity from the cut cells into the narrowband around the interface, where we advect the level-set field. The computation of the geometrical quantities requires the level-set field to be a signed distance function. During a simulation, the level-set field loses its signed distance property due to numerical errors and/or a shearing flow field. Additionally, since we only advect the level-set field within a narrowband around the interface, the level-set field develops a kink at the boundary between the narrowband and the remainder of the computational domain. The signed distance property is maintained via reinitialization of the level-set field. The reinitialization equation reads \begin{equation} \frac{\partial \phi}{\partial \tau} + \operatorname{sgn}(\phi^0) (|\nabla\phi| - 1) = 0. \label{eq:reinitialization} \end{equation} Here, $\phi^0$ represents the level-set field at the fictitious time $\tau=0$. We apply first-order \cite{Russo2000} or higher-order WENO-HJ \cite{jiang2000weighted} schemes to solve this equation.
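A single explicit pseudo-time step of the reinitialization equation can be sketched in 1D as follows (illustrative; central differences are used here for brevity, whereas the solver employs upwind or WENO-HJ discretizations):

```python
import numpy as np

def reinit_step_1d(phi, phi0, dx, dtau):
    """One explicit pseudo-time step of
    d(phi)/d(tau) = -sgn(phi0) * (|phi_x| - 1),
    with central differences on interior cells (illustrative only)."""
    phi = phi.copy()
    phi_x = (phi[2:] - phi[:-2]) / (2.0 * dx)
    phi[1:-1] -= dtau * np.sign(phi0[1:-1]) * (np.abs(phi_x) - 1.0)
    return phi

# An exact signed distance function (|phi_x| = 1) is a fixed point.
x = np.linspace(-1.0, 1.0, 9)
phi_new = reinit_step_1d(x, x, dx=0.25, dtau=0.1)
```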
We reinitialize the level-set each physical time step for a fixed number of fictitious time steps, resulting in a sufficiently small residual of Equation \eqref{eq:reinitialization}. The presented level-set method is not consistent when the interface crosses a cell face within a single time step, i.e., when new fluid cells are created or fluid cells have vanished. Figure \ref{fig:mixing} displays this scenario for a 1D discretization. The interface is moving from cell $i-1$ to cell $i$. At $t^n$, cell $i$ is a newly created cell w.r.t. phase 1 and cell $i-1$ is a vanished cell w.r.t. phase 2. To maintain conservation, we must do the following: For phase 1, conservative variables must be taken from cell $i-1$ and put into the newly created cell $i$. For phase 2, conservative variables must be taken from cell $i$ and put into the vanished cell $i-1$. In addition to the scenario where an interface crosses a cell face, small cut cells may generally lead to an unstable integration when the time step restriction is based on a full cell. We apply a mixing procedure \cite{Hu2006} that deals with these problems. The procedure is applied to two types of cells. \begin{enumerate} \item Cells where $\alpha = 0$ after integration but $\alpha \neq 0$ before (vanished cells). \item Cells with $\alpha < \alpha_{\text{mix}}$ after integration (newly created cells and small cells). \end{enumerate} We use a mixing threshold of $\alpha_{\text{mix}} = 0.6$. For each cell that requires mixing, we identify target (trg) cells from the interface normal. Consider cell $i$ in Figure \ref{fig:mixing}, which is a small/newly created cell for phase 1. Here, the target cell in $x$ direction is cell $i-1$, as $\mathbf{n}_{I1}\cdot \mathbf{i} < 0$. Analogously, cell $i-1$ is a vanished cell for phase 2. The corresponding target is cell $i$, since $\mathbf{n}_{I2}\cdot \mathbf{i} > 0$.
In 3D, there are 7 target cells in total: One in each spatial direction $x$, $y$, and $z$, one in each plane $xy$, $xz$, and $yz$, and one in $xyz$. Seven mixing weights are computed as \begin{align} \beta_x &= |\mathbf{n}_I\cdot \mathbf{i}|^2 \alpha_{trg}, \notag \\ \beta_y &= |\mathbf{n}_I\cdot \mathbf{j}|^2 \alpha_{trg}, \notag \\ \beta_z &= |\mathbf{n}_I\cdot \mathbf{k}|^2 \alpha_{trg}, \notag \\ \beta_{xy} &= |\left(\mathbf{n}_I\cdot \mathbf{i}\right) \left(\mathbf{n}_I\cdot \mathbf{j}\right) | \alpha_{trg}, \\ \beta_{xz} &= |\left(\mathbf{n}_I\cdot \mathbf{i}\right) \left(\mathbf{n}_I\cdot \mathbf{k}\right) | \alpha_{trg}, \notag \\ \beta_{yz} &= |\left(\mathbf{n}_I\cdot \mathbf{j}\right) \left(\mathbf{n}_I\cdot \mathbf{k}\right) | \alpha_{trg}, \notag \\ \beta_{xyz} &= |\left(\mathbf{n}_I\cdot \mathbf{i}\right) \left(\mathbf{n}_I\cdot \mathbf{j}\right) \left(\mathbf{n}_I\cdot \mathbf{k}\right) |^{2/3} \alpha_{trg}. \notag \end{align} Here, $\alpha_{trg}$ denotes the volume fraction of the target cell in the corresponding direction. We normalize the mixing weights so that $\sum_{trg} \beta_{trg} =1$, where $trg\in\{x,y,z,xy,xz,yz,xyz\}$. Subsequently, the mixing flux $\mathbf{M}_{trg}$ is computed as \begin{equation} \mathbf{M}_{trg} = \frac{\beta_{trg}}{\alpha \beta_{trg} + \alpha_{trg}} \left[(\alpha_{trg}\bar{\mathbf{U}}_{trg})\alpha - (\alpha \bar{\mathbf{U}})\alpha_{trg} \right]. \end{equation} The conservative variables are then updated according to \begin{align} \alpha \bar{\mathbf{U}} &= \left( \alpha \bar{\mathbf{U}} \right)^* + \sum_{trg}\mathbf{M}_{trg}, \\ \alpha_{trg} \bar{\mathbf{U}}_{trg} &= \left( \alpha_{trg} \bar{\mathbf{U}}_{trg} \right)^* - \mathbf{M}_{trg}. \end{align} Here, $\alpha \bar{\mathbf{U}}$ and $\alpha_{trg} \bar{\mathbf{U}}_{trg}$ denote the conservative variables of the cells that require mixing and the conservative variables of the corresponding target cells, respectively.
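The normalized mixing weights can be sketched as follows (illustrative; for brevity, all target volume fractions $\alpha_{trg}$ are set to one):

```python
import numpy as np

def mixing_weights(n):
    """Normalized mixing weights from the interface normal n = (nx, ny, nz);
    the target volume fractions alpha_trg are set to one here."""
    nx, ny, nz = n
    beta = np.array([
        nx**2, ny**2, nz**2,                       # targets in x, y, z
        abs(nx * ny), abs(nx * nz), abs(ny * nz),  # targets in xy, xz, yz
        abs(nx * ny * nz)**(2.0 / 3.0),            # target in xyz
    ])
    return beta / beta.sum()

# An axis-aligned normal puts all weight on the x target cell.
beta = mixing_weights((1.0, 0.0, 0.0))
```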
Star-quantities denote conservative variables before mixing. \begin{figure} \centering \begin{tikzpicture} \draw[line width=1pt] (0,0) -- (14,0); \node[circle, draw=black, fill=black, inner sep=1pt] (A) at (4,0) {}; \node[circle, draw=black, fill=black, inner sep=1pt] (B) at (10,0) {}; \node[below,yshift=-0.5cm] at (A) {$i-1$}; \node[below,yshift=-0.5cm] at (7,0) {$i-\frac{1}{2}$}; \node[below,yshift=-0.5cm] at (B) {$i$}; \node[color=blue] at ([yshift=2.2cm]A) {vanished cell}; \node[color=blue] at ([yshift=2.2cm]B) {target cell}; \node[color=red] at ([yshift=-2.2cm]A) {target cell}; \node[color=red] at ([yshift=-2.2cm]B) {newly created cell}; \node[color=red] at (1,-2.2) {$\phi > 0$}; \node[color=blue] at (13,2.2) {$\phi < 0$}; \draw (1,0.3) -- (1,-0.3); \draw (7,0.3) -- (7,-0.3); \draw (13,0.3) -- (13,-0.3); \draw[line width=0.8pt, dashed, color=green!50!black] (6.2,1.5) -- (6.2,-1.5) node[at start, above] {$\Gamma_0(t^{n-1})$}; \draw[line width=0.8pt, dashed, color=green!50!black] (8.5,1.5) -- (8.5,-1.5) node[at start, above] {$\Gamma_0(t^n)$}; \draw[<->] (7,-1.5) -- (8.5,-1.5) node[above, midway] {$\alpha_1$}; \draw[<->] (8.5,1.5) -- (13,1.5) node[below, midway] {$\alpha_2$}; \draw[->, line width=1pt] (5,-3) -- (9,-3) node[below, yshift=-0.2cm, midway] {Mixing flux $\mathbf{M}_1$}; \draw[<-, line width=1pt] (5,3) -- (9,3) node[above, yshift=0.2cm, midway] {Mixing flux $\mathbf{M}_2$}; \end{tikzpicture} \caption{Schematic illustrating the mixing procedure in a 1D discretization at $t^n$. Red and blue color indicate the positive and negative phases. Green indicates interface positions $\Gamma_0$ at $t^{n-1}$ and $t^n$.} \label{fig:mixing} \end{figure} \subsection{Computational Domain and Boundary Conditions} The computational domain is a cuboid. Figure \ref{fig:computational_domain} depicts an exemplary computational domain including the nomenclature for the boundary locations. 
The solver provides symmetry, periodic, no-slip wall, Dirichlet, and Neumann boundary conditions. The no-slip wall boundary condition allows the user to specify either a constant value or time-dependent function for the wall velocity. The Dirichlet and Neumann boundary conditions allow the user to specify either a constant value or a space and time-dependent function. Furthermore, (in 2D only) multiple different types of boundary conditions may be imposed along a single boundary location. Here, the user must specify the starting and end point of each of the different boundary types along the specific boundary location. The level-set implementation allows for arbitrary immersed solid boundaries. \begin{figure} \centering \begin{tikzpicture} \node (-0.1, -0.1) {}; \draw[fill=blue, opacity=0.1] (0,0) -- (0,3) -- (5,3) -- (5,0) -- cycle; \draw[fill=blue, opacity=0.1] (1.5,1.5) -- (1.5,4.5) -- (6.5,4.5) -- (6.5,1.5) -- cycle; \draw[fill=red, opacity=0.1] (5,0) -- (5,3) -- (6.5,4.5) -- (6.5,1.5) -- cycle; \draw[fill=red, opacity=0.1] (0,0) -- (0.0,3.0) -- (1.5,4.5) -- (1.5,1.5) -- cycle; \draw[fill=green, opacity=0.1] (0,0) -- (1.5,1.5) -- (6.5,1.5) -- (5,0) -- cycle; \draw[fill=green, opacity=0.1] (0,3) -- (1.5,4.5) -- (6.5,4.5) -- (5,3) -- cycle; \draw[thick] (0,0) -- (0,3) -- (5,3) -- (5,0) -- cycle; \draw[thick] (0,3) -- (1.5,4.5) -- (6.5,4.5) -- (5,3); \draw[thick] (5,0) -- (6.5,1.5) -- (6.5,4.5); \node[opacity=0.5] at (3.25,0.75) {\textcolor{green!70!black}{south}}; \node[opacity=0.5] at (4,2.8) {\textcolor{blue!70!black}{top}}; \draw[draw=none] (0.0,1.5) -- (1.5,3.0) node[midway, sloped] {\textcolor{red!70!black}{west}}; \draw[draw=none] (5.0,1.5) -- (6.5,3.0) node[midway, sloped] {\textcolor{red!70!black}{east}}; \draw[line width=1pt, ->] (1.5,1.5) -- (2.2,1.5) node[at end, above] {$x$}; \draw[line width=1pt, ->] (1.5,1.5) -- (1.5,2.2) node[at end, left] {$y$}; \draw[line width=1pt, ->] (1.5,1.5) -- (1.1,1.1) node[at end, above left] {$z$}; \node at (3.25,3.75) 
{\textcolor{green!70!black}{north}}; \node at (2.5,1.3) {\textcolor{blue!70!black}{bottom}}; \end{tikzpicture} \caption{Computational domain with boundary locations.} \label{fig:computational_domain} \end{figure} \begin{table}[t!] \begin{center} \footnotesize \begin{tabular}{c c c} \hline Time Integration & Euler & \\ & TVD-RK2 \cite{Gottlieb1998a} & \\ & TVD-RK3 \cite{Gottlieb1998a} & \\ \hline Flux Function/Riemann Solver & Lax-Friedrichs (LxF) & According to \\ & Local Lax-Friedrichs (LLxF, Rusanov) & According to \\ & HLL/HLLC/HLLC-LM \cite{Harten1983a,Toro1994,Toro2009a,Toro2019,Fleischmann2020} & Signal speed estimates see below \\ & AUSM+ \cite{Liou1996} & \\ & Componentwise LLxF & Flux-splitting formulation \\ & Roe \cite{Roe1981} & Flux-splitting formulation \\ \hline Signal Speed Estimates & Arithmetic & \\ & Davis \cite{Davis1988} & \\ & Einfeldt \cite{Einfeldt1988a} & \\ & Toro \cite{Toro1994} & \\ \hline Spatial Reconstruction & WENO1 \cite{Jiang1996} & \\ & WENO3-JS/Z/N/F3+/NN \cite{Jiang1996,Acker2016a,Gande2020,Bezgin2021b} & \\ & WENO5-JS/Z \cite{Jiang1996,Borges2008a} & \\ & WENO6-CU/CUM \cite{Hu2010,Hu2011} & \\ & WENO7-JS \cite{Balsara2000}& \\ & WENO9-JS \cite{Balsara2000}& \\ & TENO5 \cite{Fu2016} & \\ & Second-order central & For dissipative terms only \\ & Fourth-order central & For dissipative terms only \\ \hline Spatial Derivatives & Second-order central & \\ & Fourth-order central & \\ & HOUC-3/5/7 \cite{Nourgaliev2007}& \\ \hline Levelset reinitialization & First-order \cite{Russo2000} & \\ & HJ-WENO \cite{jiang2000weighted} & \\ \hline Ghost fluid extension & First-order upwind \cite{Hu2006} & \\ \hline LES Modules & ALDM \cite{Hickel2014b} & \\ \hline Equation of State & Ideal Gas & \\ & Stiffened Gas \cite{Menikoff1989} & \\ & Tait \cite{Fedkiw1999a}& \\ \hline Boundary Conditions & Periodic & \\ & Zero Gradient & E.g., used for outflow boundaries \\ & Neumann & E.g., for a prescribed heat flux \\ & Dirichlet & \\ & No-slip Wall & \\ 
& Immersed Solid Boundaries & Arbitrary geometries via level-set \\ \hline \end{tabular} \caption{Overview on numerical methods available in JAX-FLUIDS.} \label{tab:NumericalMethods} \end{center} \end{table} \section{Single Node Performance} \label{sec:Performance} We assess the single node performance of the JAX-FLUIDS solver on an NVIDIA RTX A6000 GPU. The NVIDIA RTX A6000 provides 48GB of GPU memory and a bandwidth of 768 GB/s. We conduct simulations of the three-dimensional compressible Taylor-Green vortex (TGV) \cite{Brachet1984} at $Ma = 0.1$ on a series of grids with increasing resolution. Specifically, we simulate TGVs on $64^3$, $128^3$, $256^3$, and $384^3$ cells. We use the two numerical setups described in the previous section. We use JAX version 0.2.26. As JAX-FLUIDS can handle single- and double-precision computations, we assess the performance for both data types. Table \ref{tab:Performance} summarizes the results. At $384^3$ cells, only the simulation setup \textit{HLLC-float32} did not exceed the memory resources of the A6000 GPU, see Table \ref{tab:Memory}. All results reported here are averaged over 5 independent runs. For the \textit{HLLC-float32} setup, JAX-FLUIDS achieves a performance of around 25~ns per cell per time step. This corresponds to three evaluations of the right-hand side in Equation \eqref{eq:FVD} as we use TVD-RK3 time integration. JAX-FLUIDS therefore provides strong performance, considering that the code is written entirely in the high-level language of Python/JAX. For the \textit{ROE-float32} setup, the computation of the eigenvectors and eigenvalues increases the wall clock time roughly by an order of magnitude. For \textit{HLLC} and \textit{ROE} schemes, we observe that the single-precision calculations are between 2.5 and 3 times faster than the double-precision calculations. As GPU memory is a critical resource when working with JAX, we investigate the memory consumption of JAX-FLUIDS.
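JAX's GPU allocator behavior is controlled through XLA environment flags, which must be set before JAX is first imported. A minimal sketch of configuring them from Python — the flag values shown are the ones used for the memory measurements below; the \mintinline{python}{import jax} itself is elided since it must come only after the flags are set:

```python
import os

# XLA reads these flags when JAX is first imported, so they must be set
# before `import jax` is executed anywhere in the process.
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"   # allocate on demand
os.environ["XLA_PYTHON_CLIENT_ALLOCATOR"] = "platform"  # allow deallocation

# import jax  # only after the flags are set
```

The platform allocator incurs a performance penalty and is only suitable for profiling memory consumption, not for production runs.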
By default, JAX preallocates 90\% of GPU memory in order to avoid memory fragmentation. Therefore, to monitor the actual memory consumption, we set \mintinline{python}{XLA_PYTHON_CLIENT_PREALLOCATE="false"} to disable memory preallocation and force JAX to allocate GPU memory as needed. Additionally, we set \mintinline{python}{XLA_PYTHON_CLIENT_ALLOCATOR="platform"}, which allows JAX to deallocate unused memory. Note that allowing JAX to deallocate unused memory incurs a performance penalty, and we only use this setting to profile the memory consumption. Table \ref{tab:Memory} summarizes the GPU memory requirements for the aforementioned simulation setups. We refer to the documentation of JAX \cite{jax2018github} for more details on GPU memory utilization. \begin{table}[t!] \begin{center} \begin{tabular}{ c c c c c c } \hline & \multicolumn{5}{c}{Mean wall clock time per cell per time step in $10^{-9} s$}\\ \hline & $32^3$ & $64^3$ & $128^3$ & $256^3$ & $384^3$ \\ [0.5ex] \hline HLLC - float32 & 64.41 (11.87) & 24.39 (0.06) & 25.09 (0.04) & 28.36 (0.03) & 28.11 (0.01) \\ HLLC - float64 & 98.20 (0.49) & 93.94 (0.06) & 92.55 (0.09) & 92.76 (0.04) & - \\ ROE - float32 & 241.71 (0.70) & 302.20 (0.32) & 304.55 (0.16) & 301.78 (0.36) & - \\ ROE - float64 & 703.40 (1.42) & 746.78 (6.17) & 759.36 (5.23) & 760.09 (6.37) & - \\ \hline \end{tabular} \caption{Mean wall clock time per cell per time step. All computations are run on an NVIDIA RTX A6000 GPU. The wall clock times are averaged over five runs. Numbers in brackets denote the standard deviation over the five runs.} \label{tab:Performance} \end{center} \end{table} \begin{table}[t!]
\begin{center} \begin{tabular}{ c c c c c c } \hline & \multicolumn{5}{c}{Memory pressure}\\ \hline & $32^3$ & $64^3$ & $128^3$ & $256^3$ & $384^3$ \\ [0.5ex] \hline HLLC - float32 & 295.6 (1.50) & 434.4 (1.50) & 1424.4 (1.50) & 9141.6 (1.50) & 29849.2 (2.04) \\ HLLC - float64 & 353.6 (2.33) & 623.6 (2.33) & 2631.6 (2.33) & 18275.6 (2.33) & - \\ ROE - float32 & 626.0 (1.79) & 818.8 (2.04) & 2255.2 (2.04) & 13546.0 (1.79) & - \\ ROE - float64 & 688.0 (2.83) & 1068.4 (2.33) & 3938.0 (2.83) & 26504.4 (2.33) & - \\ \hline \end{tabular} \caption{GPU Memory Pressure in megabytes (MB).} \label{tab:Memory} \end{center} \end{table} \section{Physical Model} \label{sec:PhysicalModel} We are interested in the compressible Euler equations for inviscid flows and in the compressible Navier-Stokes equations (NSE) which govern viscous flows. The state of a fluid at any position in the flow field $\mathbf{x} = \left[ x, y, z \right]^T = \left[ x_1, x_2, x_3 \right]^T$ at time $t$ can be described by the vector of primitive variables $\mathbf{W} = \left[ \rho, u, v, w, p \right]^T$. Here, $\rho$ is the density, $\mathbf{u} = \left[ u, v, w \right]^T = \left[ u_1, u_2, u_3 \right]^T$ is the velocity vector, and $p$ is the pressure. An alternative description is given by the vector of conservative variables $\mathbf{U} = \left[ \rho, \rho u, \rho v, \rho w, E \right]^T$. Here, $\rho \mathbf{u} = \left[ \rho u, \rho v, \rho w \right]^T$ are the momenta in the three spatial dimensions, and $E = \rho e + \frac{1}{2} \rho \mathbf{u} \cdot \mathbf{u} $ is the total energy per unit volume. $e$ is the internal energy per unit mass. In differential formulation, the compressible Euler equations can be written in terms of $\mathbf{U}$ \begin{equation} \frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathcal{F}(\mathbf{U})}{\partial x} + \frac{\partial \mathcal{G}(\mathbf{U})}{\partial y} + \frac{\partial \mathcal{H}(\mathbf{U})}{\partial z} = 0. 
\label{eq:DiffConsLaw1} \end{equation} The convective physical fluxes $\mathcal{F}$, $\mathcal{G}$, and $\mathcal{H}$ are defined as \begin{align} \mathcal{F}(\mathbf{U}) = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ \rho u v \\ \rho u w \\ u (E + p) \end{pmatrix}, \quad \mathcal{G}(\mathbf{U}) = \begin{pmatrix} \rho v \\ \rho v u\\ \rho v^2 + p \\ \rho v w \\ v (E + p) \end{pmatrix}, \quad \mathcal{H}(\mathbf{U}) = \begin{pmatrix} \rho w\\ \rho w u\\ \rho w v \\ \rho w^2 + p \\ w (E + p) \end{pmatrix}. \end{align} This set of equations must be closed by an equation of state (EOS) which relates pressure with density and internal energy, i.e., $p = p(\rho, e)$. Unless specified otherwise, we use the stiffened gas equation \begin{equation} p(\rho, e) = (\gamma - 1) \rho e - \gamma B, \label{eq:StiffenedGas} \end{equation} where $\gamma$ represents the ratio of specific heats and $B$ is the background pressure. The compressible Navier-Stokes equations can be seen as the viscous extension of the Euler equations. As before, we write them in terms of the conservative state vector $\mathbf{U}$, \begin{align} \frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathcal{F}(\mathbf{U})}{\partial x} + \frac{\partial \mathcal{G}(\mathbf{U})}{\partial y} + \frac{\partial \mathcal{H}(\mathbf{U})}{\partial z} = \frac{\partial \mathcal{F}^{d}(\mathbf{U})}{\partial x} + \frac{\partial \mathcal{G}^{d}(\mathbf{U})}{\partial y} + \frac{\partial \mathcal{H}^{d}(\mathbf{U})}{\partial z} + S(\mathbf{U}). \label{eq:DiffConsLaw2} \end{align} Here, we have additionally introduced the dissipative fluxes $\mathcal{F}^d$, $\mathcal{G}^d$, and $\mathcal{H}^d$ and the source term vector $S(\mathbf{U})$ on the right-hand side. 
The dissipative fluxes describe viscous effects and heat conduction, and are given by \begin{align} \mathcal{F}^d(\mathbf{U}) = \begin{pmatrix} 0 \\ \tau^{11} \\ \tau^{12} \\ \tau^{13} \\ \sum_i u_i \tau^{1i} - q_1 \end{pmatrix}, \quad \mathcal{G}^d(\mathbf{U}) = \begin{pmatrix} 0 \\ \tau^{21} \\ \tau^{22} \\ \tau^{23} \\ \sum_i u_i \tau^{2i} - q_2 \end{pmatrix}, \quad \mathcal{H}^d(\mathbf{U}) = \begin{pmatrix} 0 \\ \tau^{31} \\ \tau^{32} \\ \tau^{33} \\ \sum_i u_i \tau^{3i} - q_3 \end{pmatrix}. \end{align} The stresses $\tau^{ij}$ are given by \begin{align} \tau^{ij} = \mu \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) - \frac{2}{3} \mu \delta_{ij} \frac{\partial u_k}{\partial x_k}. \end{align} $\mu$ is the dynamic viscosity. The energy flux vector $\mathbf{q}$ can be expressed via Fourier's heat conduction law, $\mathbf{q} = [q_1, q_2, q_3]^T = - \lambda \nabla T$. $\lambda$ is the heat conductivity. The source terms $S(\mathbf{U})$ represent body forces or heat sources. The body force resulting from gravitational acceleration $\mathbf{g}=[g_1, g_2, g_3]^T$ is given by \begin{equation} S(\mathbf{U}) = \begin{pmatrix} 0 \\ \rho\mathbf{g} \\ \rho \mathbf{u} \cdot \mathbf{g} \end{pmatrix}. \end{equation} In Table \ref{tab:Nondimensionalization}, we summarize the nomenclature and the reference values with which we non-dimensionalize aforementioned equations. 
\begin{table}[t] \begin{center} \small \begin{tabular}{ l l l } \hline Quantity & Nomenclature & Reference quantity \\ \hline Density & $\rho$ & $\rho_{ref}$ \\ Length & $(x, y, z)$ or $(x_1, x_2, x_3)$ & $l_{ref}$ \\ Velocity & $(u, v, w)$ or $(u_1, u_2, u_3)$ & $u_{ref}$ \\ Temperature & $T$ & $T_{ref}$ \\ Time & $t$ & $t_{ref} = l_{ref} / u_{ref}$ \\ Pressure & $p$ & $p_{ref} = \rho_{ref} u_{ref}^2$ \\ Viscosity & $\mu$ & $\mu_{ref} = \rho_{ref} u_{ref} l_{ref}$ \\ Surface tension coefficient & $\sigma$ & $\sigma_{ref} = \rho_{ref} u_{ref}^2 l_{ref}$ \\ Thermal conductivity & $\lambda$ & $\lambda_{ref} = \rho_{ref} u_{ref}^3 l_{ref} / T_{ref}$ \\ Gravitation & $g$ & $g_{ref} = u_{ref}^2 / l_{ref}$ \\ Specific gas constant & $\mathcal{R}$ & $\mathcal{R}_{ref} = u_{ref}^2 / T_{ref}$ \\ Mass & $m$ & $m_{ref} = \rho_{ref} l_{ref}^3$ \\ Mass flow & $\dot{m}$ & $\dot{m}_{ref} = m_{ref} / t_{ref} = \rho_{ref} u_{ref} l_{ref}^2$ \\ \hline \end{tabular} \caption{Overview on nomenclature and nondimensionalization.} \label{tab:Nondimensionalization} \end{center} \end{table}
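To make the variable definitions concrete, the following sketch recovers the conservative state from the primitives via the stiffened gas EOS of Equation~\eqref{eq:StiffenedGas} and evaluates the convective flux $\mathcal{F}$. The helper names are ours for illustration, not part of the JAX-FLUIDS API:

```python
def primitives_to_conservatives(W, gamma=1.4, B=0.0):
    """W = [rho, u, v, w, p] -> U = [rho, rho*u, rho*v, rho*w, E], using the
    stiffened gas EOS p = (gamma - 1) * rho * e - gamma * B to recover e."""
    rho, u, v, w, p = W
    e = (p + gamma * B) / ((gamma - 1.0) * rho)        # internal energy per unit mass
    E = rho * e + 0.5 * rho * (u * u + v * v + w * w)  # total energy per unit volume
    return [rho, rho * u, rho * v, rho * w, E]

def convective_flux_x(W, gamma=1.4, B=0.0):
    """Convective flux F(U) in the x-direction, evaluated from primitives."""
    rho, u, v, w, p = W
    E = primitives_to_conservatives(W, gamma, B)[4]
    return [rho * u, rho * u * u + p, rho * u * v, rho * u * w, u * (E + p)]
```

The fluxes $\mathcal{G}$ and $\mathcal{H}$ follow by permuting the roles of $u$, $v$, and $w$.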
\chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} In most modern systems, the memory subsystem is managed and accessed at multiple different granularities at various resources. The software stack typically accesses data at a \emph{word} granularity (typically 4 or 8 bytes). The on-chip caches store data at a \emph{cache line} granularity (typically 64 bytes). The commodity off-chip memory interface is optimized to fetch data from main memory at a cache line granularity. The main memory capacity itself is managed at a \emph{page} granularity using virtual memory (typically 4KB pages with support for larger super pages). The off-chip commodity DRAM architecture internally operates at a \emph{row} granularity (typically 8KB). In this thesis, we observe that this \emph{curse of multiple granularities} results in significant inefficiency in the memory subsystem. We identify three specific problems. First, page-granularity virtual memory unnecessarily triggers large memory operations. For instance, with the widely-used copy-on-write technique, even a single byte update to a virtual page results in a full 4KB copy operation. Second, with existing off-chip memory interfaces, to perform any operation, the processor must first read the source data into the on-chip caches and write the result back to main memory. For bulk data operations, this model results in a large amount of data transfer back and forth on the main memory channel. Existing systems are particularly inefficient for bulk operations that do not require any computation (e.g., data copy or initialization). Third, for operations that do not exhibit good spatial locality, e.g., non-unit strided access patterns, existing cache-line-optimized memory subsystems unnecessarily fetch values that are not required by the application over the memory channel and store them in the on-chip cache. All these problems result in high latency, and high (and often unnecessary) memory bandwidth and energy consumption. 
To address these problems, we present a series of techniques in this thesis. First, to address the inefficiency of existing page-granularity virtual memory systems, we propose a new framework called \emph{page overlays}. At a high level, our framework augments the existing virtual memory framework with the ability to track a new version of a subset of cache lines within each virtual page. We show that this simple extension is very powerful by demonstrating its benefits on a number of different applications. Second, we show that the analog operation of DRAM can perform more complex operations than just store data. When combined with the row granularity operation of commodity DRAM, we can perform these complex operations efficiently in bulk. Specifically, we propose \emph{RowClone}, a mechanism to perform bulk data copy and initialization operations completely inside DRAM, and \emph{Buddy RAM}, a mechanism to perform bulk bitwise logical operations using DRAM. Both these techniques achieve an order-of-magnitude improvement in performance and energy-efficiency of the respective operations. Third, to improve the performance of non-unit strided access patterns, we propose \emph{Gather-Scatter DRAM} ({\sffamily{GS-DRAM}}\xspace), a technique that exploits the module organization of commodity DRAM to effectively gather or scatter values with any power-of-2 strided access pattern. For these access patterns, {\sffamily{GS-DRAM}}\xspace achieves near-ideal bandwidth and cache utilization, without increasing the latency of fetching data from memory. Finally, to improve the performance of the protocol to maintain the coherence of dirty cache blocks, we propose the \emph{Dirty-Block Index} (DBI), a new way of tracking dirty blocks in the on-chip caches. In addition to improving the efficiency of bulk data coherence, DBI has several applications, including high-performance memory scheduling, efficient cache lookup bypassing, and enabling heterogeneous ECC for on-chip caches. 
\chapter*{Acknowledgments} \addcontentsline{toc}{chapter}{Acknowledgments} \vspace{-5mm} \newcolumntype{L}[1]{>{\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}\em}m{#1}} \begin{figure}[b!] \hrule\vspace{1mm} \begin{footnotesize} The last chapter of this dissertation acknowledges the collaborations with my fellow graduate students. These collaborations have been critical in keeping my morale high throughout the course of my Ph.D. \end{footnotesize} \end{figure} Many people have directly or indirectly played a significant role in the success of my Ph.D. Words alone may not be sufficient to express my gratitude to these people. Yet, I would like to thank \vspace{-2mm} \begin{center} \setlength{\extrarowheight}{6pt} \begin{tabular}{R{1.75in}L{4in}} Todd \& Onur, & for trusting me and giving me freedom, despite some tough initial years, for constantly encouraging me to work hard, and for keeping me focused\\ Mike \& Phil, & for being more than just collaborators\\ Dave \& Rajeev, & for making the ending as painless as possible\\ Deb, & for ensuring that I did not have to worry about anything but research\\ Donghyuk \& Yoongu, & for teaching me everything I know about DRAM\\ Lavanya, & for being more than just a lab mate\\ Michelle, Tunji, \& Evangelos, & for sharing the secrets of the trade\\ Gena, Chris, \& Justin, & for giving me a different perspective in research and life\\ other members of SAFARI, & for all the valuable and enjoyable discussions\\ people @ PDL \& CALCM, & for all the feedback and comments\\ \end{tabular} \end{center} \begin{center} \setlength{\extrarowheight}{6pt} \begin{tabular}{R{1.75in}L{4in}} Frank, Rick, Vyas, \& other members of AB league, & for making squash an integral part of my life\\ Stan \& Patty, & for their warming smile every time I visit UC\\ Gopu, & for being my best
roommate for the better part of my Ph.D.\\ Shane, & for being a part of an important decade in my life\\ G, RawT, KL, Coolie, \& BAP, & for making my life @ Pittsburgh enjoyable\\ Vyas, & for teaching me (mostly necessary) life skills\\ PJ, & for giving me hope\\ Veda, Tomy, Murali, Hetu, \& Captain, & for all the gyan\\ Ashwati, Usha, \& Shapadi, & for all the chai, and random life discussions\\ James \& DSha, & for all the discussions, rants, and squash\\ Ranjan, SKB, Sudha, \& Joy, & for the complete TeleTaaz experience\\ Sharads, Sumon, Kaushik, \& KJ et al., & for food and company\\ IGSA folks, & for the much-needed distraction\\ Jyo, Pave, Sail, Vimal, Karthi, and Jayavel, & for always being the friends in need\\ MahimaSri, NiMeg, Wassim, Yogi, \& Pro Club, & for making Redmond my second home in the US\\ PK \& Kaavy, & for my go-to whatsapp group\\ DHari, & for putting up with me\\ Vikram, & for making my path in life smooth\\ Appa \& Amma, & for freedom and support. \end{tabular} \end{center} \section{Effect on Real-world Applications} \label{sec:applications} To demonstrate the benefits of Buddy on real-world applications, we implement Buddy in the Gem5~\cite{gem5} simulator. We implement the new Buddy instructions using the pseudo instruction framework in Gem5. We simulate an out-of-order, 4 GHz processor with 32 KB L1 D-cache and I-cache, with a shared 2 MB L2 cache. All caches use a 64B cache line size. We model a 1-channel, 1-rank, DDR4-2400 main memory. In Section~\ref{sec:bitmap-indices}, we first show that Buddy can significantly improve the performance of an in-memory bitmap index. In Section~\ref{sec:bitset}, we show that Buddy generally makes bitmaps more attractive for various set operations compared to traditional red-black trees. In Section~\ref{sec:other-apps}, we discuss other potential applications that can benefit from Buddy. 
\subsection{Bitmap Indices} \label{sec:bitmap-indices} Bitmap indices are an alternative to traditional B-tree indices for databases. Compared to B-trees, bitmap indices can 1)~consume less space, and 2)~improve performance of certain queries. There are several real-world implementations of bitmap indices for databases (e.g., Oracle~\cite{oracle}, Redis~\cite{redis}, Fastbit~\cite{fastbit}, rlite~\cite{rlite}). Several real applications (e.g., Spool~\cite{spool}, Belly~\cite{belly}, bitmapist~\cite{bitmapist}, Audience Insights~\cite{ai}) use bitmap indices for fast analytics. Bitmap indices rely on fast bitwise operations on large bit vectors to achieve high performance. Therefore, Buddy can accelerate operations on bitmap indices, thereby improving overall application performance. To demonstrate this benefit, we use the following workload representative of many applications. The application uses bitmap indices to track users' characteristics (e.g., gender, premium) and activities (e.g., did the user log in to the website on day 'X'?) for $m$ users. The application then uses bitwise operations on these bitmaps to answer several different queries. Our workload runs the following query: ``How many unique users were active every week for the past $n$ weeks, and how many premium users were active in each of the past $n$ weeks?'' Executing this query requires 6$n$ bitwise \textrm{\texttt{or}}\xspace, 2$n$-1 bitwise \textrm{\texttt{and}}\xspace, and $n$+1 bitcount operations. The size of each bitmap (and hence each bitwise operation) depends on the number of users. For instance, a reasonably large application that has 8 million users will require each bitmap to be around 1 MB. Hence, these operations can easily be accelerated using Buddy (the bitcount operations are performed by the CPU). Figure~\ref{fig:rlite} shows the execution time of the baseline and Buddy for the above experiment for various values of $m$ (number of users) and $n$ (number of weeks).
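As a sanity check of the operation counts, the query can be sketched with Python integers standing in for the $m$-bit bitmaps, one bitmap per day and seven days per week. The helper names are ours, not part of any of the cited libraries:

```python
def popcount(x):
    """Bitcount of an arbitrarily wide bit vector (performed by the CPU)."""
    return bin(x).count("1")

def weekly_activity(daily, week):
    """OR together the 7 daily bitmaps of one week (6 bitwise ors)."""
    days = daily[7 * week : 7 * (week + 1)]
    acc = days[0]
    for day in days[1:]:
        acc |= day
    return acc

def run_query(daily, premium, n):
    """Users active in every one of the past n weeks, and premium users
    active in each individual week: 6n ors, 2n-1 ands, n+1 bitcounts."""
    weeks = [weekly_activity(daily, w) for w in range(n)]        # 6n ors
    every_week = weeks[0]
    for wk in weeks[1:]:
        every_week &= wk                                         # n-1 ands
    per_week_premium = [popcount(premium & wk) for wk in weeks]  # n ands, n bitcounts
    return popcount(every_week), per_week_premium                # +1 bitcount
```

With Buddy, each of the \texttt{or}/\texttt{and} lines above becomes a bulk in-DRAM operation on a 1 MB bitmap instead of a loop over words on the CPU.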
\begin{figure}[h] \centering \includegraphics[scale=1.5]{buddy/plots/rlite-small} \caption{Performance of Buddy for bitmap indices} \label{fig:rlite} \end{figure} We draw two conclusions. First, as each query has $O(n)$ bitwise operations and each bitwise operation takes $O(m)$ time, the execution time of the query increases with increasing $mn$. Second, Buddy significantly reduces the query execution time by 6X (on average) compared to the baseline. While we demonstrate the benefits of Buddy using one query, as all bitmap index queries involve several bitwise operations, Buddy will provide similar performance benefits for any application using bitmap indices. \subsection{Bit Vectors vs. Red-Black Trees} \label{sec:bitset} A \emph{set} data structure is widely used in many algorithms. Many libraries (e.g., C++ Standard Template Library~\cite{stl}) use red-black trees~\cite{red-black-tree} (RB-trees) to implement a set. While RB-trees are efficient when the domain of elements is very large, when the domain is limited, a set can be implemented using a bit vector. Bit vectors offer constant time insert and lookup as opposed to RB-trees, which consume $O(\log n)$ time for both operations. However, with bit vectors, set operations like union, intersection, and difference have to operate on the entire bit vector, regardless of whether the elements are actually present in the set. As a result, for these operations, depending on the number of elements actually present in each set, bit vectors may outperform or perform worse than RB-trees. With support for fast bulk bitwise operations, we show that Buddy significantly shifts the trade-off spectrum in favor of bit vectors. To demonstrate this, we compare the performance of union, intersection, and difference operations using three implementations: RB-tree, bit vectors with SSE optimization (Bitset), and bit vectors with Buddy.
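The bit-vector representation and the three set operations can be illustrated with a short sketch, again using a Python integer as the bit vector (in the actual evaluation, each bit vector is a 64~KB array processed with SSE instructions or, for Buddy, in bulk inside DRAM):

```python
def make_bitset(elems):
    """One bit per element of the domain; element e occupies bit e - 1."""
    bv = 0
    for e in elems:
        bv |= 1 << (e - 1)
    return bv

# Each set operation is a single bulk bitwise op over the whole vector,
# independent of how many elements are actually present in either set:
def union(a, b):        return a | b
def intersection(a, b): return a & b
def difference(a, b):   return a & ~b  # bits in a that are not in b
```

The cost of each operation is proportional to the domain size, not the set cardinality, which is exactly why fast bulk bitwise operations shift the trade-off toward bit vectors.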
We run a microbenchmark that performs each operation on 15 sets and stores the result in an output set. Each set can contain elements between 1 and 524288 ($2^{19}$). Therefore, the bit vector approaches require 64~KB to represent each set. For each operation, we vary the number of elements present in the input sets. Figure~\ref{plot:set-results} shows the results of this experiment. The figure plots the execution time for each implementation normalized to RB-tree. \begin{figure}[h] \centering \includegraphics[scale=1.5]{buddy/plots/set-results} \caption{Comparison between RB-Tree, Bitset, and Buddy} \label{plot:set-results} \end{figure} We draw three conclusions. First, by enabling much higher throughput for bitwise operations, Buddy outperforms the baseline bitset on all the experiments. Second, as expected, when the number of elements in each set is very small (16 out of 524288), RB-Tree performs better than the bit vector based implementations. Third, even when each set contains only 1024 (out of 524288) elements, Buddy significantly outperforms RB-Tree. In summary, by performing bulk bitwise operations efficiently and with much higher throughput compared to existing systems, Buddy makes a bit-vector-based implementation of a set more attractive in scenarios where red-black trees previously outperformed bit vectors. \subsection{Other Applications} \label{sec:other-apps} \subsubsection{Cryptography.} Many encryption algorithms in cryptography heavily use bitwise operations (e.g., XOR)~\cite{xor1,xor2,enc1}. Buddy's support for fast and efficient bitwise operations can i)~boost the performance of existing encryption algorithms, and ii)~enable new encryption algorithms with high throughput and efficiency. \subsubsection{DNA Sequence Mapping.} DNA sequence mapping has become an important problem, with applications in personalized medicine.
Most algorithms~\cite{dna-overview} rely on identifying the locations where a small DNA sub-string occurs in the reference genome. As the reference genome is large, a number of pre-processing algorithms~\cite{dna-algo1,dna-algo2,dna-algo3,dna-algo4} have been proposed to speed up this operation. Based on prior work~\cite{dna-our-algo}, we believe bit vectors with support for fast bitwise operations using Buddy can enable an efficient filtering mechanism. \subsubsection{Approximate Statistics.} Certain large systems employ probabilistic data structures to improve the efficiency of maintaining statistics~\cite{summingbird}. Many such structures (e.g., Bloom filters) rely on bitwise operations to achieve high efficiency. By improving the throughput of bitwise operations, Buddy can further improve the efficiency of such data structures, and potentially enable the design of new data structures in this space. \section{Implementation and Hardware Cost} \label{sec:buddy-implementation} Besides the addition of a dual-contact cell on either side of each sense amplifier, Buddy primarily relies on different variants and sequences of row activations to perform bulk bitwise operations. As a result, the main changes to the DRAM chip introduced by Buddy are to the row decoding logic. While the operation of Buddy-NOT\xspace is similar to a regular cell operation, Buddy-AND/OR\xspace requires three rows to be activated simultaneously. Activating three arbitrary rows within a subarray 1)~requires the memory controller to first communicate the three addresses to DRAM, and 2)~requires DRAM to simultaneously decode the three addresses. Both of these requirements incur huge cost, i.e., wide address buses and three full row decoders to decode the three addresses simultaneously. In this work, we propose an implementation with much lower cost. At a high level, we reserve a small fraction of row addresses within each subarray for triple-row activation.
Our mechanism maps each reserved address to a pre-defined set of three wordlines instead of one. With this approach, the memory controller can perform a triple-row activation by issuing an \texttt{ACTIVATE}\xspace with a \emph{single row address}. We now describe how we exploit certain properties of Buddy to realize this implementation. \subsection{Row Address Grouping} \label{sec:address-grouping} Before performing the triple-row activation, our mechanism copies the source data and the control data to three temporary rows (Section~\ref{sec:and-or-mechanism}). If we choose these temporary rows at \emph{design time}, then these rows can be controlled using a separate small row decoder. To exploit this idea, we divide the space of row addresses within each subarray into three distinct groups (B, C, and D), as shown in Figure~\ref{fig:row-address-grouping}. \begin{figure}[h] \centering \includegraphics{buddy/figures/row-address-grouping} \caption[Logical subarray row address grouping]{Logical subarray address grouping. As an example, the figure shows how the B-group row decoder simultaneously activates rows \taddr{0}, \taddr{1}, and \taddr{2} (highlighted in thick lines), with a single address \baddr{12}.} \label{fig:row-address-grouping} \end{figure} The \emph{B-group} (or the \emph{bitwise} group) corresponds to rows that are used to perform the bitwise operations. This group contains $16$ addresses that map to $8$ physical wordlines. Four of the eight wordlines are the \emph{d-} and \emph{n-wordlines} that control the two rows of dual-contact cells. We will refer to the {\emph{d-wordline}\xspace}s of the two rows as \texttt{DCC0}\xspace and \texttt{DCC1}\xspace, and the corresponding {\emph{n-wordline}\xspace}s as $\overline{\textrm{\texttt{DCC0}}}$\xspace and $\overline{\textrm{\texttt{DCC1}}}$\xspace. The remaining four wordlines control four temporary rows of DRAM cells that will be used by various bitwise operations.
We refer to these rows as \taddr{0}---\taddr{3}. While some B-group addresses activate individual wordlines, others activate multiple wordlines simultaneously. Table~\ref{table:b-group-mapping} lists the mapping between the 16 addresses and the wordlines. Addresses \baddr{0}---\baddr{7} individually activate one of the $8$ physical wordlines. Addresses \baddr{12}---\baddr{15} activate three wordlines simultaneously. These addresses will be used by the memory controller to trigger bitwise AND or OR operations. Finally, addresses \baddr{8}---\baddr{11} activate two wordlines. As we will show in the next section, these addresses will be used to copy the result of an operation simultaneously to two rows (e.g., zero out two rows simultaneously). \begin{table}[h] \centering \input{buddy/tables/b-group-mapping} \caption[Buddy: B-group address mapping]{Mapping of B-group addresses} \label{table:b-group-mapping} \end{table} The \emph{C-group} (or the \emph{control} group) contains the rows that store the pre-initialized values for controlling the bitwise AND/OR operations. Specifically, this group contains only two addresses: \texttt{C0}\xspace and \texttt{C1}\xspace. The rows corresponding to \texttt{C0}\xspace and \texttt{C1}\xspace are initialized to all-zeros and all-ones, respectively. The \emph{D-group} (or the \emph{data} group) corresponds to the rows that store regular user data. This group contains all the addresses in the space of row addresses that are neither in the \emph{B-group} nor in the \emph{C-group}. Specifically, if each subarray contains $1024$ rows, then the \emph{D-group} contains $1006$ addresses, labeled \daddr{0}---\daddr{1005}. With these different address groups, the memory controller can simply use the existing command interface and use the \texttt{ACTIVATE}\xspace commands to communicate all variants of the command to the DRAM chips. 
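The decoder behavior can be mimicked with a small lookup table. Only the address classes (one wordline for \baddr{0}--\baddr{7}, two for \baddr{8}--\baddr{11}, three for \baddr{12}--\baddr{15}) and the \baddr{12} $\rightarrow$ \{\taddr{0}, \taddr{1}, \taddr{2}\} example are specified in the text; the remaining multi-wordline pairings in the sketch below are illustrative placeholders, not the actual Table~\ref{table:b-group-mapping}:

```python
# Illustrative B-group map: B0-B7 raise one of the 8 wordlines, B8-B11 raise
# two, and B12-B15 raise three (triggering triple-row activation). Except
# for B12, whose mapping {T0, T1, T2} the text states explicitly, the
# multi-wordline entries are made-up placeholders.
B_GROUP = {
    "B0": {"T0"}, "B1": {"T1"}, "B2": {"T2"}, "B3": {"T3"},
    "B4": {"DCC0"}, "B5": {"nDCC0"}, "B6": {"DCC1"}, "B7": {"nDCC1"},
    "B8": {"DCC0", "T0"}, "B9": {"nDCC0", "T1"},
    "B10": {"DCC1", "T2"}, "B11": {"nDCC1", "T3"},
    "B12": {"T0", "T1", "T2"}, "B13": {"T1", "T2", "T3"},
    "B14": {"DCC0", "T1", "T2"}, "B15": {"DCC1", "T0", "T3"},
}

def activate(addr):
    """Set of wordlines raised by a single ACTIVATE to a B-group address."""
    return B_GROUP[addr]
```

A single 4-bit-in, 8-wordline-out decoder suffices to implement this table in hardware, which is the point of the split-decoder design described next.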
Depending on the address group, the DRAM chips can internally process the activate command appropriately, e.g., perform a triple-row activation. For instance, by just issuing an \texttt{ACTIVATE}\xspace to address \baddr{12}, the memory controller can simultaneously activate rows \taddr{0}, \taddr{1}, and \taddr{2}, as illustrated in Figure~\ref{fig:row-address-grouping}. \subsection{Split Row Decoder} \label{sec:split-row-decoder} Our idea of \emph{split row decoder} splits the row decoder into two parts. The first part controls addresses from only the \emph{B-group}, and the second part controls addresses from the \emph{C-group} and the \emph{D-group} (as shown in Figure~\ref{fig:row-address-grouping}). There are two benefits to this approach. First, the complexity of activating multiple wordlines is restricted to the small decoder that controls only the \emph{B-group}. In fact, this decoder takes only a 4-bit input (16 addresses) and generates an 8-bit output (8 wordlines). In contrast, as described in the beginning of this section, a naive mechanism to simultaneously activate three arbitrary rows incurs high cost. Second, as we will describe in Section~\ref{sec:command-sequence}, the memory controller must perform several back-to-back {\texttt{ACTIVATE}\xspace}s to execute various bitwise operations. In a majority of cases, the two rows involved in each pair of back-to-back {\texttt{ACTIVATE}\xspace}s are controlled by different decoders. This enables an opportunity to overlap the two {\texttt{ACTIVATE}\xspace}s, thereby significantly reducing their latency. We describe this optimization in detail in Section~\ref{sec:accelerating-aap}. Although the groups of addresses and the corresponding row decoders are logically split, the physical implementation can use a single large decoder with the wordlines from different groups interleaved, if necessary.
\subsection{Executing Bitwise Ops: The AAP Primitive} \label{sec:command-sequence} To execute each bitwise operation, the memory controller must send a sequence of commands. For example, to perform the bitwise NOT operation, \daddr{k} \texttt{=} \textrm{\texttt{not}}\xspace \daddr{i}, the memory controller sends the following sequence of commands. \begin{enumerate}\itemsep0pt\parskip0pt \item \texttt{ACTIVATE}\xspace \daddr{i} \comment{Activate the source row} \item \texttt{ACTIVATE}\xspace \baddr{5} \comment{Activate the \emph{n-wordline}\xspace of \texttt{DCC0}\xspace} \item \texttt{PRECHARGE}\xspace \item \texttt{ACTIVATE}\xspace \baddr{4} \comment{Activate the \emph{d-wordline}\xspace of \texttt{DCC0}\xspace} \item \texttt{ACTIVATE}\xspace \daddr{k} \comment{Activate the destination row} \item \texttt{PRECHARGE}\xspace \end{enumerate} Step 1 transfers the data from the source row to the array of sense amplifiers. Step 2 activates the \emph{n-wordline}\xspace of one of the DCCs, which connects the dual-contact cells to the corresponding {$\overline{\textrm{bitline}}$\xspace}s. As a result, this step stores the negation of the source cells into the corresponding DCC row (as described in Figure~\ref{fig:bitwise-not}). After the precharge operation in Step 3, Step 4 activates the \emph{d-wordline}\xspace of the DCC row, transferring the negation of the source data onto the bitlines. Finally, Step 5 activates the destination row. Since the sense amplifiers are already activated, this step copies the data on the bitlines, i.e., the negation of the source data, to the destination row. Step 6 completes the negation operation by precharging the bank. Observe that the negation operation consists of two \texttt{ACTIVATE}\xspace-\texttt{ACTIVATE}\xspace-\texttt{PRECHARGE}\xspace sequences. We refer to this sequence of operations as the \texttt{AAP}\xspace primitive. Each \texttt{AAP}\xspace takes two addresses as input.
\texttt{\texttt{AAP}\xspace(row1, row2)} corresponds to the following sequence of commands:\\ \centerline{\texttt{\texttt{ACTIVATE}\xspace row1; \texttt{ACTIVATE}\xspace row2; \texttt{PRECHARGE}\xspace;}} With the \texttt{AAP}\xspace primitive, the \textrm{\texttt{not}}\xspace operation, \daddr{k} \texttt{=} \textrm{\texttt{not}}\xspace \daddr{i}, can be rewritten as \begin{enumerate}\itemsep0pt\parskip0pt \item \callaap{\daddr{i}}{\baddr{5}} \tcomment{\texttt{DCC}\xspace = \textrm{\texttt{not}}\xspace \daddr{i}} \item \callaap{\baddr{4}}{\daddr{k}} \tcomment{\daddr{k} = \texttt{DCC}\xspace} \end{enumerate} In fact, we observe that all the bitwise operations mainly involve a sequence of \texttt{AAP}\xspace operations. Sometimes, they require a regular \texttt{ACTIVATE}\xspace followed by a \texttt{PRECHARGE}\xspace operation. We will use \texttt{AP}\xspace to refer to such operations. Figure~\ref{fig:command-sequences} shows the sequence of steps taken by the memory controller to execute seven bitwise operations: \textrm{\texttt{not}}\xspace, \textrm{\texttt{and}}\xspace, \textrm{\texttt{or}}\xspace, \textrm{\texttt{nand}}\xspace, \textrm{\texttt{nor}}\xspace, \textrm{\texttt{xor}}\xspace, and \textrm{\texttt{xnor}}\xspace. Each step is annotated with the logical result of performing the step. \begin{figure}[h] \centering \includegraphics{buddy/figures/command-sequences} \caption{Command sequences for different bitwise operations} \label{fig:command-sequences} \end{figure} As an illustration, let us consider the \textrm{\texttt{and}}\xspace operation, \daddr{k} = \daddr{i} \textrm{\texttt{and}}\xspace \daddr{j}. The first step (\callaap{\daddr{i}}{\baddr{0}}) activates the source row \daddr{i}, followed by the temporary row \taddr{0} (which corresponds to address \baddr{0}). As a result, this operation copies the data of \daddr{i} to the temporary row \taddr{0}.
Similarly, the second step (\callaap{\daddr{j}}{\baddr{1}}) copies the data of the source row \daddr{j} to the temporary row \taddr{1}, and the third step (\callaap{\texttt{C0}\xspace}{\baddr{2}}) copies the data of the control row \texttt{C0}\xspace (all zeros) to the temporary row \taddr{2}. Finally, the last step (\callaap{\baddr{12}}{\daddr{k}}) first issues an \texttt{ACTIVATE}\xspace to address \baddr{12}. As described in Table~\ref{table:b-group-mapping} and illustrated in Figure~\ref{fig:row-address-grouping}, this command simultaneously activates the rows \taddr{0}, \taddr{1}, and \taddr{2}, resulting in an \textrm{\texttt{and}}\xspace of the values of rows \taddr{0} and \taddr{1}. As this command is immediately followed by an \texttt{ACTIVATE}\xspace to \daddr{k}, the result of the \textrm{\texttt{and}}\xspace operation is copied to the destination row \daddr{k}. \subsection{Accelerating the AAP Primitive} \label{sec:accelerating-aap} It is clear from Figure~\ref{fig:command-sequences} that the latency of executing any bitwise operation using Buddy depends on the latency of executing the \texttt{AAP}\xspace primitive. The latency of the \texttt{AAP}\xspace primitive in turn depends on the latency of the \texttt{ACTIVATE}\xspace and the \texttt{PRECHARGE}\xspace operations. In the following discussion, we assume DDR3-1600 (8-8-8) timing parameters~\cite{ddr3-1600}. For these parameters, the latency of an \texttt{ACTIVATE}\xspace operation is t$_{\textrm{\footnotesize{RAS}}}$\xspace = 35~ns, and the latency of a \texttt{PRECHARGE}\xspace operation is t$_{\textrm{\footnotesize{RP}}}$\xspace = 10~ns. \subsubsection{Naive Execution of AAP.} The naive approach is to perform the three operations involved in \texttt{AAP}\xspace serially one after the other. Using this simple approach, the latency of the \texttt{AAP}\xspace operation is 2t$_{\textrm{\footnotesize{RAS}}}$\xspace + t$_{\textrm{\footnotesize{RP}}}$\xspace = \textbf{80~ns}.
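The \texttt{AAP}\xspace and \texttt{AP}\xspace primitives and the naive latency arithmetic can be sketched in a few lines of Python. The row names (\texttt{Di}, \texttt{B0}, etc.) are shorthand for the addresses used in the \textrm{\texttt{and}}\xspace walkthrough above; this is a behavioral sketch, not a controller implementation.

```python
# DDR3-1600 (8-8-8) timing parameters from the text, in nanoseconds.
tRAS, tRP = 35, 10

def AAP(row1, row2):
    """ACTIVATE-ACTIVATE-PRECHARGE: the basic Buddy primitive."""
    return [("ACTIVATE", row1), ("ACTIVATE", row2), ("PRECHARGE",)]

def AP(row):
    """A regular ACTIVATE followed by a PRECHARGE."""
    return [("ACTIVATE", row), ("PRECHARGE",)]

# Naive AAP latency: the three commands executed serially.
naive_aap_latency = 2 * tRAS + tRP   # = 80 ns

# Dk = Di and Dj: copy both sources and the control row into the
# temporary rows, then let B12 trigger the triple-row activation.
and_sequence = (AAP("Di", "B0")      # T0 = Di
                + AAP("Dj", "B1")    # T1 = Dj
                + AAP("C0", "B2")    # T2 = 0
                + AAP("B12", "Dk"))  # Dk = Di and Dj
```

Note that in three of the four {\texttt{AAP}\xspace}s above, exactly one address belongs to the \emph{B-group}, which is what makes the overlapping optimization described next widely applicable.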
While Buddy outperforms existing systems even with this naive approach, we exploit some properties of \texttt{AAP}\xspace to further reduce its latency. \subsubsection{Shortening the Second \texttt{ACTIVATE}\xspace.} We observe that the second \texttt{ACTIVATE}\xspace operation is issued when the bank is already activated. As a result, this \texttt{ACTIVATE}\xspace does not require the full sense-amplification process, which is the dominant source of the latency of an \texttt{ACTIVATE}\xspace. In fact, the second \texttt{ACTIVATE}\xspace of an \texttt{AAP}\xspace only requires the corresponding wordline to be raised, and the bitline data to overwrite the cell data. We introduce a new timing parameter called t$_{\textrm{\footnotesize{WL}}}$\xspace, to capture the latency of these steps. With this optimization, the latency of \texttt{AAP}\xspace is t$_{\textrm{\footnotesize{RAS}}}$\xspace + t$_{\textrm{\footnotesize{WL}}}$\xspace + t$_{\textrm{\footnotesize{RP}}}$\xspace. \subsubsection{Overlapping the Two {\texttt{ACTIVATE}\xspace}s.} For all the bitwise operations (Figure~\ref{fig:command-sequences}), with the exception of one \texttt{AAP}\xspace each in \textrm{\texttt{nand}}\xspace and \textrm{\texttt{nor}}\xspace, \emph{exactly one} of the two \texttt{ACTIVATE}s\xspace in each \texttt{AAP}\xspace is to a \emph{B-group} address. Since the wordlines in \emph{B-group} are controlled by a different row decoder (Section~\ref{sec:split-row-decoder}), we can overlap the two \texttt{ACTIVATE}s\xspace of the \texttt{AAP}\xspace primitive. More precisely, if the second \texttt{ACTIVATE}\xspace is issued after the first activation has sufficiently progressed, the sense amplifiers will force the data of the second row to the result of the first activation. This operation is similar to the inter-segment copy operation in Tiered-Latency DRAM~\cite{tl-dram} (Section~4.4). 
Based on SPICE simulations, the latency of executing both the \texttt{ACTIVATE}s\xspace is 4~ns larger than t$_{\textrm{\footnotesize{RAS}}}$\xspace. Therefore, with this optimization, the latency of \texttt{AAP}\xspace is t$_{\textrm{\footnotesize{RAS}}}$\xspace + 4~ns + t$_{\textrm{\footnotesize{RP}}}$\xspace = \textbf{49~ns}. \subsection{DRAM Chip and Controller Cost} \label{sec:hardware-cost} Buddy has three main sources of cost to the DRAM chip. First, Buddy requires changes to the row decoding logic. Specifically, the row decoding logic must distinguish between the \emph{B-group} addresses and the remaining addresses. Within the \emph{B-group}, it must implement the mapping between the addresses and the wordlines described in Table~\ref{table:b-group-mapping}. As described in Section~\ref{sec:address-grouping}, the \emph{B-group} contains only 16 addresses that are mapped to 8 wordlines. As a result, we expect the complexity of the changes to the row decoding logic to be low. The second source of cost is the design and implementation of the dual-contact cells (DCCs). In our design, each sense amplifier has only one DCC on each side, and each DCC has two wordlines associated with it. Consequently, there is enough space to implement the second transistor that connects the DCC to the corresponding $\overline{\textrm{bitline}}$\xspace. In terms of area, the cost of each DCC is roughly equivalent to two regular DRAM cells. As a result, we can view each row of DCCs as two rows of regular DRAM cells. The third source of cost is the capacity lost as a result of reserving the rows in the \emph{B-group} and \emph{C-group}. The rows in these groups are reserved for the memory controller to perform bitwise operations and cannot be used to store application data (Section~\ref{sec:address-grouping}). Our proposed implementation of Buddy reserves 18 addresses in each subarray for the two groups.
For a typical subarray size of 1024 rows, the loss in memory capacity is $\approx$1\%. On the controller side, Buddy requires the memory controller to 1)~store information about different address groups, 2)~track the timing for different variants of the \texttt{ACTIVATE}\xspace (with or without the optimizations), and 3)~track the status of different ongoing bitwise operations. While scheduling different requests, the controller 1)~adheres to power constraints such as t$_{\textrm{\footnotesize{FAW}}}$\xspace, which limits the number of full row activations during a given time window, and 2)~can interleave the multiple \texttt{AAP}\xspace commands to perform a bitwise operation with other requests from different applications. We believe this modest increase in the DRAM chip/controller complexity and capacity cost is negligible compared to the improvement in throughput and performance enabled by Buddy. \section{Analysis of Throughput \& Energy} \label{sec:lte-analysis} In this section, we compare the raw throughput and energy of performing bulk bitwise operations using Buddy to those of an Intel Skylake Core i7 processor using Advanced Vector eXtensions~\cite{intel-avx}. The system contains a per-core 32~KB L1 cache and 256~KB L2 cache, and a shared 8~MB L3 cache. The off-chip memory consists of two DDR3-2133 memory channels with 8~GB of memory on each channel. We run a simple microbenchmark that performs each bitwise operation on one or two vectors and stores the result in a result vector. We vary the size of the vector, and for each size we measure the throughput of performing each operation with 1, 2, and 4 cores. Figure~\ref{plot:buddy-throughput} plots the results of this experiment for bitwise AND/OR operations. The x-axis shows the size of the vector, and the y-axis plots the corresponding throughput (in terms of GB/s of computed result) for the Skylake system with 1, 2, and 4 cores, and Buddy with 1 or 2 DRAM banks.
\begin{figure}[h] \centering \includegraphics{buddy/plots/throughput} \caption{Comparison of throughput of AND/OR operations} \label{plot:buddy-throughput} \end{figure} First, for each core count, the throughput of the Skylake system drops with increasing size of the vector. This is expected as the working set will stop fitting in different levels of the on-chip cache as we move to the right on the x-axis. Second, as long as the working set fits in some level of on-chip cache, running the operation with more cores provides higher throughput. Third, when the working set stops fitting in the on-chip caches ($>$8~MB per vector), the throughput with different core counts is roughly the same (5.8~GB/s of computed result). This is because, at this point, the throughput of the operation is strictly limited by the available memory bandwidth. Finally, for working sets larger than the cache size, Buddy significantly outperforms the baseline system. Even using only one bank, Buddy achieves a throughput of 38.92~GB/s, 6.7X better than the baseline. As modern DRAM chips have abundant bank-level parallelism, Buddy can achieve much higher throughput by using more banks (e.g., 26.8X better throughput with 4 banks, compared to the baseline). In fact, while the throughput of bitwise operations in existing systems is limited by the memory bandwidth, the throughput enabled by Buddy scales with the number of banks in the system. Table~\ref{table:buddy-throughput-energy} shows the throughput and energy results for different bitwise operations for the 32~MB input. We estimate energy for DDR3-1333 using the Rambus model~\cite{rambus-power}. Our energy numbers only include the DRAM and channel energy, and do not include the energy spent at the CPU and caches. For Buddy, some \texttt{ACTIVATE}\xspace operations have to raise multiple wordlines and hence will consume higher energy. To account for this, we increase the energy of the \texttt{ACTIVATE}\xspace operation by 22\% for each additional wordline raised.
As a result, a triple-row activation will consume 44\% more energy than a regular activation. \begin{table}[h] \centering \input{buddy/tables/throughput-energy} \caption[Buddy: Throughput/energy comparison]{Comparison of throughput and energy for various groups of bitwise operations. ($\uparrow$) and ($\downarrow$) respectively indicate the factor improvement and reduction in throughput and energy of Buddy (1 bank) over the baseline (Base).} \label{table:buddy-throughput-energy} \end{table} In summary, across all bitwise operations, Buddy reduces energy consumption by at least 25.1X and up to 59.5X compared to the baseline, and with just one bank, Buddy improves the throughput by at least 3.8X and up to 10.1X compared to the baseline. In the following section, we demonstrate the benefits of Buddy in some real-world applications. \chapter{Buddy RAM} \label{chap:buddy} \begin{figure}[b!] \hrule\vspace{2mm} \begin{footnotesize} A part of this chapter was originally published as ``Fast Bulk Bitwise AND and OR in DRAM'' in IEEE Computer Architecture Letters, 2015~\cite{buddy-cal}. \end{footnotesize} \end{figure} In the line of research aiming to identify primitives that can be efficiently performed inside DRAM, the second mechanism we explore in this thesis is one that can perform bitwise logical operations completely inside DRAM. Our mechanism \emph{uses} the internal analog operation of DRAM to efficiently perform bitwise operations. For this reason, we call our mechanism \emph{Buddy RAM} or {Bitwise-ops Using DRAM} (BU-D-RAM). Bitwise operations are an important component of modern-day programming. They have a wide variety of applications, and can often replace arithmetic operations with more efficient algorithms~\cite{btt-knuth,hacker-delight}. In fact, many modern processors provide support for accelerating a variety of bitwise operations (e.g., Intel Advanced Vector eXtensions~\cite{intel-avx}).
We focus our attention on bitwise operations on large amounts of input data. We refer to such operations as \emph{bulk bitwise operations}. Many applications trigger such bulk bitwise operations. For example, in databases, bitmap indices~\cite{bmide,bmidc} can be more efficient than commonly-used B-trees for performing range queries and joins~\cite{bmide,fastbit,bicompression}. In fact, bitmap indices are supported by many real-world implementations (e.g., Redis~\cite{redis}, Fastbit~\cite{fastbit}). Improving the throughput of bitwise operations can boost the performance of such bitmap indices and many other primitives (e.g., string matching, bulk hashing). As bitwise operations are computationally inexpensive, in existing systems, the throughput of bulk bitwise operations is limited by the available memory bandwidth. This is because, to perform a bulk bitwise operation, existing systems must first read the source data from main memory into the processor caches. After performing the operation at the processor, they may have to write the result back to main memory. As a result, this approach requires a large amount of data to be transferred back and forth on the memory channel, resulting in high latency and significant bandwidth and energy consumption. Our mechanism, Buddy RAM, consists of two components: one to perform bitwise AND/OR operations (Buddy-AND/OR\xspace) and the other to perform bitwise NOT operations (Buddy-NOT\xspace). Both components heavily exploit the operation of the sense amplifier and the DRAM cells (described in Section~\ref{sec:cell-operation}). In the following sections, we first provide an overview of both these mechanisms, followed by a detailed implementation of Buddy that requires minimal changes to the internal design and the external interface of commodity DRAM.
\input{buddy/mechanism} \input{buddy/implementation} \input{buddy/support} \input{buddy/lte-analysis} \input{buddy/applications} \input{buddy/related} \input{buddy/summary} \section{Buddy-AND/OR\xspace} \label{sec:bitwise-and-or} As described in Section~\ref{sec:cell-operation}, when a DRAM cell is connected to a bitline precharged to $\frac{1}{2}$V$_{DD}$\xspace, the cell induces a deviation on the bitline, and the deviation is amplified by the sense amplifier. \emph{Buddy-AND/OR\xspace} exploits the following fact about the cell operation. \begin{quote} The final state of the bitline after amplification is determined solely by the deviation on the bitline after the charge sharing phase (after state \ding{204} in Figure~\ref{fig:cell-operation}). If the deviation is positive (i.e., towards V$_{DD}$\xspace), the bitline is amplified to V$_{DD}$\xspace. Otherwise, if the deviation is negative (i.e., towards $0$), the bitline is amplified to $0$. \end{quote} \subsection{Triple-Row Activation} \label{sec:triple-row-activation} Buddy-AND/OR\xspace simultaneously connects three cells to a sense amplifier. When three cells are connected to the bitline, the deviation of the bitline after charge sharing is determined by the \emph{majority value} of the three cells. Specifically, if at least two cells are initially in the charged state, the effective voltage level of the three cells is at least $\frac{2}{3}$V$_{DD}$\xspace. This results in a positive deviation on the bitline. On the other hand, if at most one cell is initially in the charged state, the effective voltage level of the three cells is at most $\frac{1}{3}$V$_{DD}$\xspace. This results in a negative deviation on the bitline voltage. As a result, the final state of the bitline is determined by the logical majority value of the three cells. Figure~\ref{fig:triple-row-activation} shows an example of activating three cells simultaneously. 
In the figure, we assume that two of the three cells are initially in the charged state and the third cell is in the empty state \ding{202}. When the wordlines of all the three cells are raised simultaneously \ding{203}, charge sharing results in a positive deviation on the bitline. \begin{figure}[t] \centering \includegraphics{buddy/figures/triple-row-activation} \caption[Triple-row activation in DRAM]{Triple-row activation} \label{fig:triple-row-activation} \end{figure} More generally, if the cell's capacitance is $C_c$, the bitline's is $C_b$, and $k$ of the three cells are initially in the charged state, then based on charge-sharing principles~\cite{dram-cd}, the deviation $\delta$ on the bitline voltage level is given by \begin{eqnarray} \delta &=& \frac{k.C_c.V_{DD} + C_b.\frac{1}{2}V_{DD}}{3C_c + C_b} - \frac{1}{2}V_{DD}\nonumber\\ &=& \frac{(2k - 3)C_c}{6C_c + 2C_b}V_{DD}\label{eqn:delta} \end{eqnarray} From the above equation, it is clear that $\delta$ is positive for $k = 2,3$, and $\delta$ is negative for $k = 0,1$. Therefore, after amplification, the final voltage level on the bitline is V$_{DD}$\xspace for $k = 2,3$ and $0$ for $k = 0,1$. If $A$, $B$, and $C$ represent the logical values of the three cells, then the final state of the bitline is $AB + BC + CA$ (i.e., at least two of the values should be $1$ for the final state to be $1$). Importantly, using simple boolean algebra, this expression can be rewritten as $C(A + B) + \overline{C}(AB)$. In other words, if the initial state of $C$ is $1$, then the final state of the bitline is a bitwise OR of $A$ and $B$. Otherwise, if the initial state of $C$ is $0$, then the final state of the bitline is a bitwise AND of $A$ and $B$. Therefore, by controlling the value of the cell $C$, we can execute a bitwise AND or bitwise OR operation of the remaining two cells using the sense amplifier.
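The sign behavior of Equation~\ref{eqn:delta} and the selector role of cell $C$ can be checked directly in a few lines of Python. The capacitance values below are arbitrary placeholders, since only the sign of $\delta$ matters for the final bitline state.

```python
def bitline_deviation(k, Cc, Cb, Vdd=1.0):
    """Deviation after charge sharing with k of the 3 cells charged
    (the simplified form of Equation eqn:delta)."""
    return (2 * k - 3) * Cc / (6 * Cc + 2 * Cb) * Vdd

# With any positive capacitances, delta < 0 for k = 0, 1 and
# delta > 0 for k = 2, 3 -- the bitline amplifies to 0 or Vdd.
Cc, Cb = 1.0, 4.0   # placeholder capacitances
signs = [bitline_deviation(k, Cc, Cb) > 0 for k in range(4)]

def majority(a, b, c):
    """Final bitline state: AB + BC + CA."""
    return (a & b) | (b & c) | (c & a)

# C acts as a selector: C = 1 yields OR, C = 0 yields AND,
# for all input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert majority(a, b, 1) == (a | b)
        assert majority(a, b, 0) == (a & b)
```

This confirms the algebraic rewriting $AB + BC + CA = C(A + B) + \overline{C}(AB)$ exhaustively over all inputs.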
Since DRAM activates an entire row of cells simultaneously, this approach naturally extends to the full row of DRAM cells and sense amplifiers, enabling a multi-kilobyte-wide bitwise AND/OR operation.\footnote{Note that the triple-row activation by itself can be useful to implement a \emph{bitwise majority} primitive. However, we do not explore this path in this thesis.} \subsection{Challenges} \label{sec:and-or-challenges} There are two challenges in our approach. First, Equation~\ref{eqn:delta} assumes that the cells involved in the triple-row activation are either fully charged or fully empty. However, DRAM cells leak charge over time. Therefore, the triple-row activation may not operate as expected. This problem may be exacerbated by process variation in DRAM cells. Second, as shown in Figure~\ref{fig:triple-row-activation} (state~\ding{204}), at the end of the triple-row activation, the data in all the three cells are overwritten with the final state of the bitline. In other words, our approach overwrites the source data with the final value. In the following sections, we propose a simple implementation that addresses these challenges. \subsection{Overview of Implementation of Buddy-AND/OR\xspace} \label{sec:and-or-mechanism} To ensure that the source data does not get modified, our mechanism first \emph{copies} the data from the two source rows to two reserved temporary rows ($T1$ and $T2$). Depending on the operation to be performed (AND or OR), our mechanism initializes a third reserved temporary row $T3$ to $0$ or $1$, respectively. It then simultaneously activates the three rows $T1$, $T2$, and $T3$. It finally copies the result to the destination row. For example, to perform a bitwise AND of two rows $A$ and $B$ and store the result in row $R$, our mechanism performs the following steps.
\begin{enumerate}\itemsep0pt\parsep0pt\parskip0pt \item \emph{Copy} data of row $A$ to row $T1$ \item \emph{Copy} data of row $B$ to row $T2$ \item \emph{Initialize} row $T3$ to $0$ \item \emph{Activate} rows $T1$, $T2$, and $T3$ simultaneously \item \emph{Copy} data of row $T1$ to row $R$ \end{enumerate} While the above mechanism is simple, the copy operations, if performed naively, will nullify the benefits of our mechanism. Fortunately, we can use RowClone (described in Chapter~\ref{chap:rowclone}) to perform row-to-row copy operations quickly and efficiently within DRAM. To recap, RowClone consists of two techniques. The first technique, RowClone-FPM (Fast Parallel Mode), which is the fastest and the most efficient, copies data within a subarray by issuing two back-to-back {\texttt{ACTIVATE}\xspace}s to the source row and the destination row, without an intervening \texttt{PRECHARGE}\xspace. The second technique, RowClone-PSM (Pipelined Serial Mode), efficiently copies data between two banks by using the shared internal bus to overlap the read to the source bank with the write to the destination bank. With RowClone, all three copy operations (Steps 1, 2, and 5) and the initialization operation (Step 3) can be performed efficiently within DRAM. To use RowClone for the initialization operation, we reserve two additional rows, $C0$ and $C1$. $C0$ is pre-initialized to $0$ and $C1$ is pre-initialized to $1$. Depending on the operation to be performed, our mechanism uses RowClone to copy either $C0$ or $C1$ to $T3$. Furthermore, to maximize the use of RowClone-FPM, we reserve five rows in each subarray to serve as the temporary rows ($T1$, $T2$, and $T3$) and the control rows ($C0$ and $C1$). In the best case, when all the three rows involved in the operation ($A$, $B$, and $R$) are in the same subarray, our mechanism can use RowClone-FPM for all copy and initialization operations.
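In this best-case (all RowClone-FPM) setting, the five steps above can be modeled functionally: rows are lists of bits, copies are assignments, and the triple-row activation overwrites all three temporary rows with the bitwise majority. This is a behavioral sketch of the data flow, not a timing or circuit model.

```python
def rowclone(mem, src, dst):
    """Model an in-subarray RowClone-FPM copy."""
    mem[dst] = list(mem[src])

def triple_activate(mem, r1, r2, r3):
    """All three cells (and the bitline) end up at the majority value."""
    result = [(a & b) | (b & c) | (c & a)
              for a, b, c in zip(mem[r1], mem[r2], mem[r3])]
    mem[r1], mem[r2], mem[r3] = result, list(result), list(result)

def buddy_and(mem, A, B, R):
    rowclone(mem, A, "T1")                   # step 1
    rowclone(mem, B, "T2")                   # step 2
    rowclone(mem, "C0", "T3")                # step 3: initialize T3 to 0
    triple_activate(mem, "T1", "T2", "T3")   # step 4
    rowclone(mem, "T1", R)                   # step 5

mem = {"A": [1, 1, 0, 0], "B": [1, 0, 1, 0], "C0": [0, 0, 0, 0]}
buddy_and(mem, "A", "B", "R")
# mem["R"] is now [1, 0, 0, 0]; the source rows "A" and "B" are untouched.
```

Replacing \texttt{"C0"} with a control row of all ones in step 3 yields the bitwise OR instead, mirroring the selector role of $T3$ described above.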
However, if the three rows are in different banks/subarrays, some of the three copy operations have to use RowClone-PSM. In the worst case, when all three copy operations have to use RowClone-PSM, our approach will incur higher latency than the baseline. However, when only one or two RowClone-PSM operations are required, our mechanism will be faster and more energy-efficient than existing systems. As our goal in this thesis is to demonstrate the power of our approach, in the rest of this chapter, we will focus our attention on the case when all rows involved in the bitwise operation are in the same subarray. \subsection{Reliability of Our Mechanism} While our mechanism trivially addresses the second challenge (modification of the source data), it also addresses the first challenge (DRAM cell leakage). This is because, in our approach, the source (and the control) data are copied to the rows $T1$, $T2$, and $T3$ \emph{just} before the triple-row activation. Each copy operation takes much less than $1~{\mu}s$, which is five \emph{orders} of magnitude less than the typical refresh interval ($64~ms$). Consequently, the cells involved in the triple-row activation are very close to the fully refreshed state before the operation, thereby ensuring reliable operation of the triple-row activation. Having said that, an important aspect of our mechanism is that a chip that fails the tests for triple-row activation (e.g., due to process variation) \emph{can still be used as a regular DRAM chip}. As a result, our approach is likely to have little impact on the overall yield of DRAM chips, which is a major concern for manufacturers. \section{Buddy-NOT\xspace} \label{sec:bitwise-not} Buddy-NOT\xspace exploits the fact that the sense amplifier itself consists of two inverters and the following observation about the sense amplifier operation.
\begin{quote} At the end of the sense amplification process, while the bitline voltage reflects the logical value of the cell, the voltage level of the $\overline{\textrm{bitline}}$\xspace corresponds to the negation of the logical value of the cell. \end{quote} \subsection{Dual-Contact Cell} \label{sec:dcc} Our high-level idea is to transfer the data on the $\overline{\textrm{bitline}}$\xspace to a cell that can be connected to the bitline. For this purpose, we introduce a special DRAM cell called \emph{dual-contact cell}. A dual-contact cell (DCC) is a DRAM cell with two transistors and one capacitor. For each DCC, one transistor connects the DCC to the bitline and the other transistor connects the DCC to the $\overline{\textrm{bitline}}$\xspace. Each of the two transistors is controlled by a different wordline. We refer to the wordline that controls the connection between the DCC and the bitline as the \emph{d-wordline}\xspace (or data wordline). We refer to the wordline that controls the connection between the DCC and the $\overline{\textrm{bitline}}$\xspace as the \emph{n-wordline}\xspace (or negation wordline). Figure~\ref{fig:dcc-not} shows one DCC connected to a sense amplifier. In our mechanism, we use two DCCs for each sense amplifier, one on each side of the sense amplifier. \begin{figure}[h] \centering \includegraphics{buddy/figures/dcc-not} \caption[Dual-contact cell]{A dual-contact cell connected to both ends of a sense amplifier} \label{fig:dcc-not} \end{figure} \subsection{Exploiting the Dual-Contact Cell} \label{sec:dcc-exploit} Since the DCC is connected to both the bitline and the $\overline{\textrm{bitline}}$\xspace, we can use a RowClone-like mechanism to transfer the negation of some source data on to the DCC using the \emph{n-wordline}\xspace. The negated data can be transferred to the bitline by activating the \emph{d-wordline}\xspace of the DCC, and can then be copied to the destination cells using RowClone. 
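The data flow of this negation can be modeled in a few lines: activating the source drives the bitline to the cell value and the $\overline{\textrm{bitline}}$\xspace to its complement, and raising the \emph{n-wordline}\xspace latches the latter into the DCC. This is a behavioral sketch of the idea, not a circuit-level model.

```python
def activate_source(cell):
    """After sense amplification, the bitline holds the cell value
    and bitline-bar holds its complement."""
    bitline = cell
    bitline_bar = 1 - cell
    return bitline, bitline_bar

def buddy_not(source_cell):
    """Store the negation of the source cell into the DCC by raising
    the n-wordline while bitline-bar is at a stable voltage."""
    _, bitline_bar = activate_source(source_cell)
    dcc = bitline_bar   # DCC now holds NOT(source)
    return dcc
```

Activating the \emph{d-wordline}\xspace afterwards places this negated value on the bitline, from where RowClone can copy it to any destination row in the subarray.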
Figure~\ref{fig:bitwise-not} shows the sequence of steps involved in transferring the negation of a source cell on to the DCC. The figure shows a \emph{source} cell and a DCC connected to the same sense amplifier \ding{202}. Our mechanism first activates the source cell \ding{203}. At the end of the activation process, the bitline is driven to the data corresponding to the source cell, V$_{DD}$\xspace in this case \ding{204}. More importantly, for the purpose of our mechanism, the $\overline{\textrm{bitline}}$\xspace is driven to $0$. In this state, our mechanism activates the \emph{n-wordline}, enabling the transistor that connects the DCC to the $\overline{\textrm{bitline}}$\xspace~\ding{205}. Since the $\overline{\textrm{bitline}}$\xspace is already at a stable voltage level of $0$, it overwrites the value in the cell with $0$, essentially copying the negation of the source data into the DCC. After this step, the negated data can be efficiently copied into the destination cell using RowClone. Section~\ref{sec:command-sequence} describes the sequence of commands required to perform a bitwise NOT. \begin{figure}[h] \centering \includegraphics{buddy/figures/bitwise-not} \caption{Bitwise NOT using a dual-contact cell} \label{fig:bitwise-not} \end{figure} \section{Related Work} \label{sec:buddy-related} There are several prior works that aim to enable efficient computation near memory. In this section, we qualitatively compare Buddy to these prior works. Some recent patents~\cite{mikamonu,mikamonu2} from Mikamonu describe an architecture that employs a DRAM organization with 3T-1C cells and additional logic to perform NAND/NOR operations on the data inside DRAM. While this architecture can perform bitwise operations inside DRAM, it incurs significant additional cost to the DRAM array due to the extra transistors, and hence reduces overall memory density/capacity. 
In contrast, Buddy exploits existing DRAM operation to perform bitwise operations efficiently inside DRAM. As a result, it incurs much lower cost compared to the Mikamonu architecture. One main source of memory inefficiency in existing systems is data movement. Data has to travel over off-chip buses and multiple levels of caches before reaching the CPU. To avoid this data movement, many works (e.g., NON-VON Database Machine~\cite{non-von-machine}, DIVA~\cite{diva}, Terasys~\cite{pim-terasys}, Computational RAM~\cite{cram}, FlexRAM~\cite{flexram,programming-flexram}, EXECUBE~\cite{execube}, Active Pages~\cite{active-pages}, Intelligent RAM~\cite{iram}, Logic-in-Memory Computer~\cite{lim-computer}) have proposed mechanisms and models to add processing logic close to memory. The idea is to integrate memory and CPU on the same chip by designing the CPU using the memory process technology. While the reduced data movement allows these approaches to enable low-latency, high-bandwidth, and low-energy data communication, they suffer from two key shortcomings. First, this approach of integrating a processor on the same chip as memory significantly deviates from existing designs, and as a result, increases the overall cost of the system. Second, DRAM vendors use a high-density process to minimize cost-per-bit. Unfortunately, the high-density DRAM process is not suitable for building high-speed logic~\cite{iram}. As a result, this approach is not suitable for building a general-purpose processor near memory. In contrast, we restrict our focus to bitwise operations, and propose a mechanism to perform them efficiently inside DRAM with low cost. Some recent DRAM architectures~\cite{3d-stacking,hmc,hbm} use 3D-stacking technology to stack multiple DRAM chips on top of the processor chip or a separate logic layer. These architectures offer much higher bandwidth to the logic layer compared to traditional off-chip interfaces.
This enables an opportunity to offload some computation to the logic layer, thereby improving performance. In fact, many recent works have proposed mechanisms to improve and exploit such architectures (e.g.,~\cite{pim-enabled-insts,pim-graph,top-pim,nda,msa3d,spmm-mul-lim,data-access-opt-pim,tom,hrl,gp-simd,ndp-architecture,pim-analytics,nda-arch,jafar,data-reorg-3d-stack,smla}). Unfortunately, despite enabling higher bandwidth compared to off-chip memory, such 3D-stacked architectures still require data to be transferred outside the DRAM chip, and hence can be bandwidth-limited. However, since Buddy can be integrated easily with such architectures, we believe the logic layer in such 3D architectures should be used to implement \emph{more complex operations}, while Buddy can be used to efficiently implement bitwise logical operations at low cost. \section{Summary} \label{sec:buddy-summary} In this chapter, we introduced Buddy, a new DRAM substrate that performs row-wide bitwise operations using DRAM technology. Specifically, we proposed two component mechanisms. First, we showed that simultaneous activation of three DRAM rows that are connected to the same set of sense amplifiers can be used to efficiently perform AND/OR operations. Second, we showed that the inverters present in each sense amplifier can be used to efficiently implement NOT operations. With these two mechanisms, Buddy can perform any bulk bitwise logical operation quickly and efficiently within DRAM. Our evaluations show that Buddy enables an order-of-magnitude improvement in the throughput of bitwise operations. This improvement directly translates to significant performance improvement in the evaluated real-world applications. Buddy is generally applicable to any memory architecture that uses DRAM technology, and we believe that the support for fast and efficient bulk bitwise operations can enable new application designs that achieve large improvements in performance and efficiency.
\section{End-to-end System Support} \label{sec:support} We envision two distinct ways of integrating Buddy with the rest of the system. The first way is a loose integration, where Buddy is treated as an accelerator (similar to a GPU). The second way is a much tighter integration, where Buddy is supported by the main memory. In this section, we discuss these two ways along with their pros and cons. \subsection{Buddy as an Accelerator} Treating Buddy as an accelerator is probably the simplest way of integrating Buddy into a system. In this approach, the manufacturer of Buddy RAM designs the accelerator so that it can be plugged into the system as a separate device (e.g., PCIe). While this mechanism requires communication between the CPU and the Buddy accelerator, there are benefits to this approach that lower the cost of integration. First, a \emph{single} manufacturer can design both the DRAM and the memory controller (which is not true of commodity DRAM). Second, the details of the data mapping to suit Buddy can be hidden behind the device driver, which can expose a simple-to-use API to the applications. \subsection{Integrating Buddy with System Main Memory} A tighter integration of Buddy with the system main memory requires support from different layers of the system stack, which we discuss below. \subsubsection{ISA Support} For the processor to exploit Buddy, it must be able to identify and export instances of bulk bitwise operations to the memory controller. To enable this, we introduce new instructions that allow software to directly communicate instances of bulk bitwise operations to the processor. Each new instruction takes the following form:\\ \centerline{\texttt{bop dst, src1, [src2], size}} where \texttt{bop} is the bitwise operation to be performed, \texttt{dst} is the address of the destination, \texttt{src1} and \texttt{src2} correspond to the addresses of the source operands, and \texttt{size} denotes the length of the vectors on which the bitwise operations have to be performed.
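As a reference for the intended semantics of these instructions, the following sketch interprets a \texttt{bop} instance over a flat byte-addressable memory. The operation table and function names are illustrative only; they are not an actual ISA encoding, and the set of supported operations is an assumption.

```python
# Illustrative reference semantics for the proposed bop instructions,
# interpreted over a flat byte-addressable memory (a bytearray).
OPS = {
    "and": lambda x, y: x & y,
    "or":  lambda x, y: x | y,
    "xor": lambda x, y: x ^ y,
    "not": lambda x, y: ~x & 0xFF,  # unary: src2 is unused
}

def bop(mem, op, dst, src1, src2, size):
    """bop dst, src1, [src2], size -- apply `op` bytewise over `size`
    bytes, reading from src1 (and src2, if present), writing to dst."""
    f = OPS[op]
    for i in range(size):
        y = mem[src2 + i] if src2 is not None else 0
        mem[dst + i] = f(mem[src1 + i], y)
```

A usage example: after `bop(mem, "and", dst, src1, src2, size)`, each destination byte holds the AND of the corresponding source bytes, which is the behavior the microarchitecture must preserve whether it executes the instance on the CPU or offloads it to Buddy.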
\subsubsection{Implementing the New Buddy Instructions.} The microarchitectural implementation of the new instructions will determine whether each instance can be accelerated using Buddy. Buddy imposes two constraints on the data it operates on. First, both the source data and the destination data should be within the same subarray. Second, all Buddy operations are performed on an entire row of data. As a result, the source and destination data should be row-aligned and the operation should span at least an entire row. The microarchitecture ensures that these constraints are satisfied before performing the Buddy operations. Specifically, if the source and destination rows are not in the same subarray, the processor can either 1)~use RowClone-PSM~\cite{rowclone} to copy the data into the same subarray, or 2)~execute the operation using the CPU. This choice can be made dynamically depending on the number of RowClone-PSM operations required and the memory bandwidth contention. If the processor cannot ensure data alignment, or if the size of the operation is smaller than the DRAM row size, it can execute the operations using the CPU. However, with careful application design and operating system support, the system can maximize the use of Buddy to extract its performance and efficiency benefits. \subsubsection{Maintaining On-chip Cache Coherence.} Buddy directly reads/modifies data in main memory. As a result, we need a mechanism to ensure the coherence of data present in the on-chip caches. Specifically, before performing any Buddy operation, the memory controller must first flush any dirty cache lines from the source rows and invalidate any cache lines from the destination rows.
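A minimal sketch of these two coherence steps, using toy cache and memory objects (all names are illustrative, not the actual coherence protocol implementation):

```python
class ToyCache:
    """Stand-in for the on-chip cache, tracking data and dirty lines."""
    def __init__(self, data=None, dirty=None):
        self.data = dict(data or {})
        self.dirty = set(dirty or ())
    def is_dirty(self, line):
        return line in self.dirty
    def read(self, line):
        return self.data[line]
    def clear_dirty(self, line):
        self.dirty.discard(line)
    def invalidate(self, line):
        self.data.pop(line, None)
        self.dirty.discard(line)

def prepare_buddy_operation(cache, src_lines, dst_lines, memory):
    """Coherence steps before an in-DRAM operation (illustrative)."""
    # Step 1: flush dirty source lines so DRAM holds up-to-date data.
    for line in src_lines:
        if cache.is_dirty(line):
            memory[line] = cache.read(line)
            cache.clear_dirty(line)
    # Step 2: invalidate destination lines, since DRAM will modify them.
    for line in dst_lines:
        cache.invalidate(line)
```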
While flushing the dirty cache lines of the source rows is on the critical path of any Buddy operation, we can speed it up using the Dirty-Block Index (described in Chapter~\ref{chap:dbi}). In contrast, the cache lines of the destination rows can be invalidated in parallel with the Buddy operation. The mechanism required to maintain cache coherence introduces a small performance overhead. However, for applications with large amounts of data, since the cache contains only a small fraction of the data, the performance benefits of our mechanism significantly outweigh the overhead of maintaining coherence, resulting in a net gain in both performance and efficiency. \subsubsection{Software Support} The minimum support that Buddy requires from software is for the application to use the new Buddy instructions to communicate the occurrences of bulk bitwise operations to the processor. However, with careful memory allocation support from the operating system, the application can maximize the benefits it can extract from Buddy. Specifically, the OS must allocate pages that are likely to be involved in a bitwise operation such that 1)~they are row-aligned, and 2)~they belong to the same subarray. Note that the OS can still interleave the pages of a single data structure across multiple subarrays. Implementing this support requires the OS to be aware of the subarray mapping, i.e., to determine whether two physical pages belong to the same subarray or not. The OS must extract this information from the DRAM modules with the help of the memory controller. \chapter{Conclusions \& Future Work} \label{chap:conclusion} In modern systems, different resources of the memory subsystem store and access data at different granularities.
Specifically, virtual memory manages main memory capacity at a (large) page granularity; the off-chip memory interface accesses data from memory at a cache line granularity; on-chip caches typically store and access data at a cache line granularity; applications typically access data at a (small) word granularity. We observe that this mismatch in granularities results in significant inefficiency for many memory operations. In Chapter~\ref{chap:motivation}, we demonstrate this inefficiency using two example operations: copy-on-write and non-unit strided access. \section{Contributions of This Dissertation} In this dissertation, we propose five distinct mechanisms to address the inefficiency problem at various structures. First, we observe that page-granularity management of main memory capacity can result in significant inefficiency in implementing many memory management techniques. In Chapter~\ref{chap:page-overlays}, we describe \emph{page overlays}, a new framework which augments the existing virtual memory framework with a structure called \emph{overlays}. In short, an overlay of a virtual page tracks a newer version of a subset of segments from within the page. We show that this simple framework is very powerful and enables many applications. We quantitatively evaluate page overlays with two mechanisms: \emph{overlay-on-write}, a more efficient version of the widely-used copy-on-write technique, and an efficient hardware-based representation for sparse data structures. Second, we show that the internal organization and operation of a DRAM chip can be used to transfer data quickly and efficiently from one location to another. Exploiting this observation, in Chapter~\ref{chap:rowclone}, we describe \emph{RowClone}, a mechanism to perform bulk copy and initialization operations completely inside DRAM. RowClone reduces the latency and energy of performing a bulk copy operation by 11X and 74X, respectively, compared to the commodity DRAM interface.
Our evaluations show that this reduction significantly improves the performance of copy and initialization intensive applications. Third, we show that the analog operation of DRAM and the inverters present in the sense amplifier can be used to perform bitwise logical operations completely inside DRAM. In Chapter~\ref{chap:buddy}, we describe \emph{Buddy RAM}, a mechanism to perform bulk bitwise logical operations using DRAM. Buddy improves the throughput of various bitwise logical operations by between 3.8X (for bitwise XOR) and 10.1X (for bitwise NOT) compared to a multi-core CPU, and reduces the energy consumption of the respective operations by 25.1X and 59.5X. We demonstrate the benefits of Buddy by using it to improve the performance of set operations and in-memory bitmap indices. Fourth, we show that the multi-chip module organization of off-chip memory can be used to efficiently gather or scatter data with strided access patterns. In Chapter~\ref{chap:gsdram}, we describe \emph{Gather-Scatter DRAM} ({\sffamily{GS-DRAM}}\xspace), a mechanism that achieves near-ideal bandwidth utilization for any power-of-2 strided access pattern. We implement an in-memory database on top of {\sffamily{GS-DRAM}}\xspace and show that {\sffamily{GS-DRAM}}\xspace gets the best of both a row store layout and column store layout on a transactions workload, analytics workload, and a hybrid transactions and analytics workload. Finally, in Chapter~\ref{chap:dbi}, we introduce the Dirty-Block Index, a mechanism to improve the efficiency of the coherence protocol that ensures the coherence of data across the on-chip caches and main memory. DBI efficiently tracks spatially-collocated dirty blocks, and has several applications in addition to more efficient data coherence, e.g., efficient memory writeback, efficient cache lookup bypass, and reducing cache ECC overhead. \section{Future Research Directions} \label{sec:future} This dissertation opens up several avenues for research. 
In this section, we describe six specific directions in which the ideas and approaches proposed in this thesis can be extended to other problems to improve the performance and efficiency of various systems. \subsection{Extending Overlays to Superpages} In Chapter~\ref{chap:page-overlays}, we described and evaluated the benefits of our page overlays mechanism for regular 4KB pages. We believe that our mechanism can be easily extended to superpages, which are predominantly used by many modern operating systems. The primary benefit of using superpages is to increase TLB reach and thereby reduce overall TLB misses. However, we observe that superpages are ineffective for reducing memory redundancy. To understand this, let us consider the scenario when multiple virtual superpages are mapped to the same physical page in the copy-on-write mode (as a result of cloning a process or a virtual machine). In this state, if any of the virtual superpages receives a write, the operating system has two options: 1)~allocate a new superpage and copy all the data from the old superpage to the newly allocated superpage, or 2)~break the virtual superpage that received the write and copy only the 4KB page that was modified. While the former approach enables low TLB misses, it results in significant redundancy in memory. On the other hand, while the latter approach reduces memory redundancy, it sacrifices the TLB miss reduction benefits of using a superpage. Extending overlays to superpages will allow operating systems to track small modifications to a superpage using an overlay. As a result, our approach can potentially get both the TLB miss reduction benefits of having a superpage mapping and the memory redundancy benefits by using overlays to track modifications. \subsection{Using Overlays to Store and Operate on Metadata} In the page overlays mechanism described in Chapter~\ref{chap:page-overlays}, we used overlays to track a newer version of data for each virtual page.
Alternatively, each overlay can be used to store metadata for the corresponding virtual page instead of a newer version of the data. Since the hardware is aware of overlays, it can provide an efficient abstraction to the software for maintaining metadata for various applications, e.g., memory error checking, security. We believe using overlays to maintain metadata can also enable new and efficient computation models, e.g., update-triggered computation. \subsection{Efficiently Performing Reduction and Shift Operations in DRAM} In Chapters~\ref{chap:rowclone} and \ref{chap:buddy}, we described mechanisms to perform bulk copy, initialization, and bitwise logical operations completely inside DRAM. This set of operations will enable the memory controller to perform some primitive level of bitwise computation completely inside DRAM. However, these mechanisms lack support for two operations that are required by many applications: 1)~data reduction, and 2)~bit shifting. First, RowClone and Buddy operate at a row-buffer granularity. As a result, to perform any kind of reduction within the row buffer (e.g., bit counting, accumulation), the data must be read into the processor. Providing support for such operations in DRAM will further reduce the amount of bandwidth consumed by many queries. While GS-DRAM (Chapter~\ref{chap:gsdram}) enables a simple form of reduction (i.e., selection), further research is required to enable support for more complex reduction operations. Second, many applications, such as encryption algorithms in cryptography, heavily rely on bit shifting operations. Enabling support for bit shifting in DRAM can greatly improve the performance of such algorithms. However, there are two main challenges in enabling support for bit shifting in DRAM. First, we need to design a low-cost DRAM substrate that can support moving data between multiple bitlines.
Given the rigid design of modern DRAM chips, this can be difficult and tricky, especially when we need support for multiple shifts. Second, the physical address space is heavily interleaved on the DRAM hierarchy~\cite{data-retention,parbor}. As a result, bits that are adjacent in the physical address space may not map to adjacent bitlines in DRAM, adding complexity to the data mapping mechanism. \subsection{Designing Efficient ECC Mechanisms for Buddy/GS-DRAM} Most server memory modules use error correction codes (ECC) to protect their data. While RowClone works with such ECC schemes without any changes, Buddy (Chapter~\ref{chap:buddy}) and GS-DRAM (Chapter~\ref{chap:gsdram}) can break existing ECC mechanisms. Designing low-cost ECC mechanisms that work with Buddy and GS-DRAM will be critical to their adoption. While Section~\ref{sec:gsdram-extensions} already describes a simple way of extending GS-DRAM to support ECC, such a mechanism will work only with a simple SECDED ECC mechanism. Further research is required to design mechanisms that will 1)~enable GS-DRAM with stronger ECC schemes and 2)~provide low-cost ECC support for Buddy. \subsection{Extending In-Memory Computation to Non-Volatile Memories} Recently, many new non-volatile memory technologies (e.g., phase change memory~\cite{pcm1,pcm2,pcm3,pcm4}, STT MRAM~\cite{stt1,stt2,stt3,stt4}) have emerged as a scalable alternative to DRAM. In fact, Intel has recently announced a real product~\cite{3dcrosspoint} based on a non-volatile memory technology. These new technologies are expected to have better scalability properties than DRAM. In future systems, while we expect DRAM to still play some role, the bulk of the data may actually be stored in non-volatile memory. This raises a natural question: can we extend our in-memory data movement and computation techniques to the new non-volatile memory technologies?
Answering this question requires a thorough understanding of these new technologies (i.e., how they store data at the lowest level, what is the architecture used to package them, etc.). \subsection{Extending DBI to Other Caches and Metadata} In Chapter~\ref{chap:dbi}, we described the Dirty-Block Index (DBI), which reorganizes the way dirty blocks are tracked (i.e., the dirty bits are stored) to enable the cache to efficiently respond to queries related to dirty blocks. In addition to the dirty bit, the on-chip cache stores several other pieces of metadata, e.g., valid bit, coherence state, and ECC, for each cache block. In existing caches, all this information is stored in the tag entry of the corresponding block. As a result, a query for any metadata requires a full tag store lookup. Similar to the dirty bits, these other pieces of metadata can potentially be organized to suit the queries for each one, rather than organizing them the way the tag stores do it today. \section{Summary} In this dissertation, we highlighted the inefficiency problem that results from the different granularities at which different memory resources (e.g., caches, DRAM) are managed and accessed. We presented techniques that bridge this granularity mismatch for several important memory operations: a new virtual memory framework that enables memory capacity management at sub-page granularity (Page Overlays), techniques to use DRAM to do more than just store data (RowClone, Buddy RAM, and Gather-Scatter DRAM), and a simple hardware structure for more efficient management of dirty blocks (Dirty-Block Index). As we discussed in Section~\ref{sec:future}, these works open up many avenues for new research that can result in techniques to enable even higher efficiency. \section{Bulk Data Coherence with DBI\xspace} As mentioned in Section~\ref{sec:dbi}, we conceived of DBI\xspace to primarily improve the performance of the DRAM-Aware Writeback optimization. 
However, we identified several other potential use cases for DBI\xspace, and quantitatively evaluated three such optimizations (including DRAM-aware Writeback). Since these optimizations are not directly related to this thesis, we only provide a brief summary of the evaluated optimizations and the results in Section~\ref{sec:dbi-soe}. We direct the reader to our paper published in ISCA 2014~\cite{dbi} for more details. In this section, we describe how DBI\xspace can be used to improve the performance of certain cache coherence protocol operations, specifically flushing bulk data. \subsection{Checking If a Block is Dirty} As described in Section~\ref{sec:dbi}, in the cache organization used in existing systems, the cache needs to perform a full tag store lookup to check if a block is dirty. Due to the large size and associativity of the last-level cache, the latency of this lookup is typically several processor cycles. In contrast, in a cache augmented with DBI\xspace, the cache only needs to perform a single DBI\xspace lookup to check if a block is dirty. Based on our evaluations, even with just 64 DBI\xspace entries (in comparison to 16384 entries in the main tag store), the cache with DBI\xspace outperforms the conventional cache organization. As a result, checking if a block is dirty is significantly cheaper with the DBI\xspace organization. This fast lookup for the dirty bit information can already make many coherence protocol operations in multi-socket systems more efficient. Specifically, when a processor wants to read a cache line, it has to first check if any of the other processors contain a dirty version of the cache line. With the DBI\xspace, this operation is more efficient than in existing systems. However, the major benefit of DBI\xspace is in accelerating bulk data flushing. \subsection{Accelerating Bulk Data Flushing} In many scenarios, the memory controller must check if a set of blocks belonging to a region is dirty in the on-chip cache. 
For instance, in Direct Memory Access, the memory controller may send data to an I/O device by directly reading the data from memory. Since these operations typically happen in bulk, the memory controller may have to get all cache lines in an entire page (for example) that are dirty in the on-chip cache. In fact, all the DRAM-related mechanisms described in the previous three chapters of this thesis require an efficient implementation of this primitive. With a cache augmented with a DBI\xspace, this primitive can be implemented with just a single DBI\xspace lookup. When the controller wants to extract all the dirty cache lines of a region, it looks up the DBI\xspace with the DBI\xspace tag for the corresponding region, and extracts the dirty bit vector, which indicates which blocks within the region are dirty in the cache. If the size of the region is larger than the segment tracked by each DBI\xspace entry, the memory controller must perform multiple lookups. However, with the DBI\xspace, 1)~the memory controller has to perform 64X or 128X (depending on the DBI\xspace granularity) fewer lookups compared to the conventional cache organization, 2)~each DBI\xspace lookup is cheaper than a tag store lookup, and 3)~the cache can look up and flush \emph{only} blocks that are actually dirty. With these benefits, DBI\xspace can flush data in bulk faster than existing organizations. \section{DBI\xspace Design Choices} \label{sec:design-choices} The DBI\xspace design space can be defined using three key parameters: 1)~\emph{DBI\xspace size}, 2)~\emph{DBI\xspace granularity} and 3)~\emph{DBI\xspace replacement policy}.\footnote{DBI\xspace is also a set-associative structure and has a fixed associativity. However, we do not discuss the DBI\xspace associativity in detail as its trade-offs are similar to any other set-associative structure.} These parameters determine the effectiveness of the three optimizations discussed in the previous section.
We now discuss these parameters and their trade-offs in detail. \subsection{DBI\xspace Size} \label{sec:dbi-size} The DBI\xspace size refers to the cumulative number of blocks tracked by all the entries in the DBI\xspace. For ease of analysis across systems with different cache sizes, we represent the DBI\xspace size as the ratio of the cumulative number of blocks tracked by the DBI\xspace and the number of blocks tracked by the cache tag store. We denote this ratio using $\alpha$. For example, for a 1MB cache with a 64B block size (16k blocks), a DBI\xspace of size $\alpha = \sfrac{1}{2}$ enables the DBI\xspace to track 8k blocks. The DBI\xspace size presents a trade-off between the size of the write working set (the set of frequently written blocks) that can be captured by the DBI\xspace, and the area, latency, and power cost of the DBI\xspace. A large DBI\xspace has two benefits: 1)~it can track a larger write working set, thereby reducing the writeback bandwidth demand, and 2)~it gives more time for a DBI\xspace entry to accumulate writebacks to a DRAM row, thereby better exploiting the AWB optimization. However, a large DBI\xspace comes at a higher area, latency and power cost. On the other hand, a smaller DBI\xspace incurs lower area, latency and power cost. This has two benefits: 1)~lower latency in the critical path for the CLB optimization and 2)~ECC storage for fewer dirty blocks. However, a small DBI\xspace limits the number of dirty blocks in the cache and can thus result in premature DBI\xspace evictions, reducing the potential to generate aggressive writebacks. It can also potentially lead to thrashing if the write working set is significantly larger than the number of blocks tracked by the small DBI\xspace. \subsection{DBI\xspace Granularity} \label{sec:dbi-granularity} The DBI\xspace granularity refers to the number of blocks tracked by a single DBI\xspace entry.
Although our discussion in Section~\ref{sec:dbi} suggests that this is the same as the number of blocks in each DRAM row, we can design the DBI\xspace to track fewer blocks in each entry. For example, for a system with a DRAM row of size 8KB and a cache block of size 64B, a natural choice for the DBI\xspace granularity is 8KB/64B = 128. Instead, we can design a DBI\xspace entry to track only 64 blocks, i.e., one half of a DRAM row. The DBI\xspace granularity presents another trade-off between the amount of locality that can be extracted during the writeback phase (using the AWB optimization) and the size of the write working set that can be captured using the DBI\xspace. A large granularity leads to better potential for exploiting the AWB optimization. However, if writes have low spatial locality, a large granularity will result in inefficient use of the DBI\xspace space, potentially leading to write working set thrashing. \subsection{DBI\xspace Replacement Policy} \label{sec:dbi-replacement-policy} The DBI\xspace replacement policy determines which entry is evicted on a DBI\xspace eviction (Section~\ref{sec:dbi-operation-dbi-eviction}). A DBI\xspace eviction \emph{only writes back} the dirty blocks of the corresponding DRAM row to main memory, and does not evict the blocks themselves from the cache. Therefore, a DBI\xspace eviction does not affect the latency of future read requests for the corresponding blocks. However, if the previous cache level generates a writeback request for a block written back due to a DBI\xspace eviction, the block will have to be written back again, leading to an additional write to main memory. Therefore, the goal of the DBI\xspace replacement policy is to ensure that blocks are not prematurely written back to main memory. The ideal policy is to evict the entry that has a writeback request farthest into the future.
However, similar to Belady's optimal replacement policy~\cite{beladyopt}, this ideal policy is impractical to implement in real systems. We evaluated five practical replacement policies for DBI\xspace: 1)~Least Recently Written (LRW)---similar to the LRU policy for caches, 2)~LRW with Bimodal Insertion Policy~\cite{dip}, 3)~Rewrite-interval prediction policy---similar to the RRIP policy for caches~\cite{rrip}, 4)~Max-Dirty---entry with the maximum number of dirty blocks, 5)~Min-Dirty---entry with the minimum number of dirty blocks. We find that the LRW policy works comparably to or better than the other policies. \section{Summary of Optimizations and Evaluations} \label{sec:dbi-soe} In this section, we provide a brief summary of the optimizations enabled by DBI\xspace and the corresponding evaluations. We refer the reader to our ISCA paper~\cite{dbi} for more details. We specifically evaluate the following three optimizations in detail: 1)~DRAM-aware writeback, 2)~cache lookup bypass, and 3)~heterogeneous ECC. \subsection{Optimizations} \subsubsection{Efficient DRAM-aware Writeback} This is the optimization we described briefly in Section~\ref{sec:dbi}. The key idea is to cluster writebacks to dirty blocks of individual DRAM rows with the goal of maximizing the write row buffer hit rate. Implementing this optimization with DBI is straightforward. When a block is evicted from the cache, the DBI tells the cache whether the block is dirty or not. In addition, the bit vector in the corresponding DBI entry also tells the cache the list of all other dirty blocks in the corresponding DRAM row (assuming that each region in the DBI corresponds to a DRAM row). The cache can then selectively look up only those dirty blocks and write them back to main memory. This is significantly more efficient than the implementation with the block-oriented organization. The best case for DBI is when no other block from the corresponding DRAM row is dirty.
In this case, current implementations will have to look up every block in the DRAM row unnecessarily, whereas DBI will perform zero additional lookups. \subsubsection{Bypassing Cache Lookups} The idea behind this optimization is simple: \emph{If an access is likely to miss in the cache, then we can avoid the tag lookup for the access, reducing both the latency and energy consumption of the access.} In this optimization, the cache is augmented with a miss predictor which can efficiently predict if an access will hit or miss in the cache. If an access is predicted to miss, then the request is directly sent to main memory. The main challenge with this optimization is that an access to a dirty block should not be bypassed. This restricts the range of miss predictors, as the predictor cannot falsely predict that an access will miss in the cache~\cite{jsn}. Fortunately, with the DBI, when an access is predicted to miss, the cache can first consult the DBI to ensure that the block is not dirty before bypassing the tag lookup. As a result, DBI enables very aggressive miss predictors (e.g., bypass all accesses of an application~\cite{skipcache}). \subsubsection{Reducing ECC Overhead} The final optimization is again based on a simple idea: \emph{clean blocks need only error detection; only dirty blocks need strong error correction.} Several prior works have proposed mechanisms to exploit this observation to reduce the ECC overhead. However, they require complex mechanisms to handle the case when an error is detected on a dirty block (e.g.,~\cite{ecc-fifo}). In our proposed organization, since DBI tracks dirty blocks, it is sufficient to store ECC only for the blocks tracked by DBI. With the previously discussed optimizations, we find that the DBI can get away with tracking far fewer blocks than the main cache. As a result, DBI can seamlessly reduce the ECC area overhead (8\% reduction in overall cache area).
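The cache lookup bypass optimization above hinges on one guard: a tag lookup may be skipped only when the predictor says "miss" \emph{and} the DBI confirms the block is clean. The toy objects below (a set-based DBI, dictionaries for the cache and memory) are stand-ins for illustration, not the hardware structures.

```python
class ToyDBI:
    """Stand-in for the DBI: records which blocks are dirty."""
    def __init__(self):
        self.dirty = set()
    def is_dirty(self, block):
        return block in self.dirty

def access(cache, dbi, predicts_miss, memory, block):
    # Bypass the tag lookup only when the predictor says "miss" AND the
    # DBI confirms the block is clean; otherwise a wrong prediction on a
    # dirty block would return stale data from memory.
    if predicts_miss(block) and not dbi.is_dirty(block):
        return memory[block]          # skip the tag store entirely
    return cache.get(block, memory[block])  # normal lookup path
```

Because the DBI check catches the only dangerous misprediction (a dirty block), the predictor itself can be arbitrarily aggressive without risking correctness.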
\subsection{Summary of Results} \begin{figure}[b] \centering \includegraphics{dbi/plots/perf.pdf} \caption[DBI: Summary of performance results]{Summary of Performance Results. The first three metrics are normalized to the Baseline.} \label{plot:dbi-perf} \end{figure} We refer the reader to our paper for full details on our methodology. Figure~\ref{plot:dbi-perf} briefly summarizes the comparison between DBI (with the first two optimizations, DRAM-aware writeback and cache lookup bypass) and the best previous mechanism, DRAM-aware writeback~\cite{dram-aware-wb} (DAWB). As a result of proactively writing back blocks to main memory, both mechanisms increase the number of memory writes. However, for a small increase in the number of writes, both mechanisms significantly improve the write row hit rate, and hence also performance compared to the baseline. However, the key difference between DAWB and DBI is that DAWB almost doubles the number of tag lookups, whereas with both optimizations, DBI actually reduces the number of tag lookups by 14\% compared to the baseline. As a result, DBI improves performance by 6\% compared to DAWB (31\% over baseline) across 120 workloads in an 8-core system with 16MB shared cache. \chapter{The Dirty-Block Index} \label{chap:dbi} \begin{figure}[b!] \hrule\vspace{2mm} \begin{footnotesize} Originally published as ``The Dirty-Block Index'' in the International Symposium on Computer Architecture, 2014~\cite{dbi} \end{footnotesize} \end{figure} In the previous three chapters, we described three mechanisms that offload some key application-level primitives to DRAM. As described in the respective chapters, these mechanisms directly read/modify data in DRAM. As a result, they require the cache coherence protocol to maintain the coherence of the data stored in the on-chip caches. Specifically, for an in-DRAM operation, the protocol must carry out two steps. 
First, any dirty cache line that is directly read in DRAM should be flushed to DRAM \emph{before} the operation. Second, any cache line that is modified in DRAM should be invalidated from the caches. While the second step can be performed in \emph{parallel} with the in-DRAM operation, the first step, i.e., flushing the dirty cache lines of the source data, is on the critical path of performing the in-DRAM operation. In this chapter, we describe the Dirty-Block Index, a new way of tracking dirty blocks that can speed up flushing the dirty blocks of a DRAM row. \input{dbi/opts} \input{dbi/mechanism} \input{dbi/design-choices} \input{dbi/bulk-data-coherence} \input{dbi/evaluation-summary} \input{dbi/summary} \section{The Dirty-Block Index} \label{sec:dbi} In our proposed system, we remove the dirty bits from the tag store and organize them differently in a separate structure called the Dirty-Block Index (DBI). At a high level, DBI organizes the dirty bit information such that the dirty bits of all the blocks of a DRAM row are stored together. \subsection{DBI\xspace Structure and Semantics} \begin{figure} \centering \begin{subfigure}{0.55\textwidth} \centering \includegraphics[scale=0.8]{dbi/figures/conventional} \caption{Conventional cache tag store} \label{fig:conventional} \end{subfigure}\vspace{2mm} \begin{subfigure}{0.55\textwidth} \centering \includegraphics[scale=0.8]{dbi/figures/dbi} \caption{Cache tag store augmented with a DBI} \label{fig:dbi} \end{subfigure} \caption[DBI vs. conventional cache]{Comparison between conventional cache and a cache with DBI.} \label{fig:dbi-compare} \end{figure} Figure~\ref{fig:dbi-compare} compares the conventional tag store with a tag store augmented with a DBI\xspace. In the conventional organization (shown in Figure~\ref{fig:conventional}), each tag entry contains a dirty bit that indicates whether the corresponding block is dirty or not.
For example, to indicate that a block B is dirty, the dirty bit of the corresponding tag entry is set. In contrast, in a cache augmented with a DBI\xspace (Figure~\ref{fig:dbi}), the dirty bits are removed from the main tag store and organized differently in the DBI\xspace. The organization of DBI\xspace is simple. It consists of multiple entries. Each entry corresponds to some row in DRAM---identified using a \emph{row tag} present in each entry. Each DBI\xspace entry contains a \emph{dirty bit vector} that indicates if each block in the corresponding DRAM row is dirty or not. \textbf{DBI\xspace Semantics.} A block in the cache is dirty \emph{if and only if} the DBI\xspace contains a valid entry for the DRAM row that contains the block and the bit corresponding to the block in the bit vector of that DBI\xspace entry is set. For example, assuming that block B is the second block of DRAM row R, to indicate that block B is dirty, the DBI\xspace contains a valid entry for DRAM row R, with the second bit of the corresponding bit vector set. Note that the key difference between the DBI and the conventional tag store is the \emph{logical organization} of the dirty bit information. While some processors store the dirty bit information in a separate physical structure, the logical organization of the dirty bit information is the same as that of the main tag store. \subsection{DBI\xspace Operation} \label{sec:dbi-operation} Figure~\ref{fig:dbi-operation} pictorially describes the operation of a cache augmented with a DBI\xspace. The focus of this work is on the on-chip last-level cache (LLC). Therefore, for ease of explanation, we assume that the cache does not receive any sub-block writes and any dirty block in the cache is a result of a writeback generated by the previous level of cache.\footnote{Sub-block writes typically occur in the primary L1 cache where writes are at a word-granularity, or at a cache which uses a larger block size than the previous level of cache.
The DBI\xspace operation described in this paper can be easily extended to caches with sub-block writes.} There are four possible operations, which we describe in detail below. \begin{figure}[h] \centering \includegraphics{dbi/figures/dbi-operation} \caption{Operation of a cache with DBI\xspace} \label{fig:dbi-operation} \end{figure} \subsubsection{Read Access to the Cache} \label{sec:dbi-operation-read} The addition of DBI\xspace \emph{does not change} the path of a read access in any way. On a read access, the cache simply looks up the block in the tag store and returns the data on a cache hit. Otherwise, it forwards the access to the memory controller. \subsubsection{Writeback Request to the Cache} \label{sec:dbi-operation-writeback} In a system with multiple levels of on-chip cache, the LLC will receive a writeback request when a dirty block is evicted from the previous level of cache. Upon receiving such a writeback request, the cache performs two actions (as shown in Figure~\ref{fig:dbi-operation}). First, it inserts the block into the cache if it is not already present. This may result in a cache block eviction (discussed in Section~\ref{sec:dbi-operation-cache-eviction}). If the block is already present in the cache, the cache just updates the data store (not shown in the figure) with the new data. Second, the cache updates the DBI\xspace to indicate that the written-back block is dirty. If the DBI\xspace already has an entry for the DRAM row that contains the block, the cache simply sets the bit corresponding to the block in that DBI\xspace entry. Otherwise, the cache inserts a new entry into the DBI\xspace for the DRAM row containing the block, with the bit corresponding to the block set. Inserting a new entry into the DBI\xspace may require an existing DBI\xspace entry to be evicted. Section~\ref{sec:dbi-operation-dbi-eviction} discusses how the cache handles such a DBI\xspace eviction.
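The writeback-request handling just described can be sketched as a small software model. This is only an illustrative sketch, not the hardware design: the block and row geometry, the entry count, and the placeholder eviction in `mark_dirty` are all assumptions.

```python
# Illustrative software model of the DBI writeback path (not the hardware
# design). Assumed geometry: 64B blocks, 8KB DRAM rows (128 blocks/row).

BLOCK_SIZE = 64
BLOCKS_PER_ROW = 128

class DBI:
    def __init__(self, num_entries=64):
        self.num_entries = num_entries          # capacity of the DBI
        self.entries = {}                       # row tag -> dirty bit vector

    @staticmethod
    def _locate(block_addr):
        # Split a block address into (DRAM row tag, block offset in row).
        return divmod(block_addr // BLOCK_SIZE, BLOCKS_PER_ROW)

    def is_dirty(self, block_addr):
        # DBI semantics: dirty iff a valid entry exists for the row AND
        # the bit corresponding to the block is set.
        row, offset = self._locate(block_addr)
        vec = self.entries.get(row)
        return vec is not None and vec[offset]

    def mark_dirty(self, block_addr):
        # Called when the LLC receives a writeback from the previous level.
        row, offset = self._locate(block_addr)
        if row not in self.entries:
            if len(self.entries) == self.num_entries:
                # Placeholder: a real design picks a victim via the DBI
                # replacement policy and writes back its dirty blocks.
                self.entries.pop(next(iter(self.entries)))
            self.entries[row] = [False] * BLOCKS_PER_ROW
        self.entries[row][offset] = True
```

For instance, `mark_dirty(0x1000)` sets bit 64 of the entry for row 0, while `is_dirty` on the neighboring block of the same row still returns `False`.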
\subsubsection{Cache Eviction} \label{sec:dbi-operation-cache-eviction} When a block is evicted from the cache, it has to be written back to main memory if it is dirty. Upon a cache block eviction, the cache consults the DBI\xspace to determine if the block is dirty. If so, it first generates a writeback request for the block and sends it to the memory controller. It then updates the DBI\xspace to indicate that the block is no longer dirty---done by simply resetting the bit corresponding to the block in the bit vector of the DBI\xspace entry. If the evicted block is the last dirty block in the corresponding DBI\xspace entry, the cache invalidates the DBI\xspace entry so that the entry can be used to store the dirty block information of some other DRAM row. \subsubsection{DBI Eviction} \label{sec:dbi-operation-dbi-eviction} The last operation in a cache augmented with a DBI\xspace is a \emph{DBI\xspace eviction}. Similar to the cache, since the DBI\xspace has limited space, it can only track the dirty block information for a limited number of DRAM rows. As a result, inserting a new DBI\xspace entry (on a writeback request, discussed in Section~\ref{sec:dbi-operation-writeback}) may require evicting an existing DBI\xspace entry. We call this event a \emph{DBI\xspace eviction}. The DBI\xspace entry to be evicted is decided by the DBI\xspace replacement policy (discussed in Section~\ref{sec:dbi-replacement-policy}). When an entry is evicted from the DBI\xspace, \emph{all} the blocks indicated as dirty by the entry should be written back to main memory. This is because, once the entry is evicted, the DBI\xspace can no longer maintain the dirty status of those blocks. Therefore, not writing them back to memory will likely lead to incorrect execution, as the version of those blocks in memory is stale. 
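The two eviction flows above can be summarized in a short software model. The function names and the `entries` dictionary are illustrative assumptions; `writeback_to_memory` stands in for sending a writeback request to the memory controller.

```python
# Illustrative model of the two eviction flows (not the hardware design).
# `entries` maps a DRAM row tag to that row's dirty bit vector.

def on_cache_eviction(entries, row, offset, writeback_to_memory):
    # Consult the DBI: write the block back only if it is dirty.
    vec = entries.get(row)
    if vec is not None and vec[offset]:
        writeback_to_memory(row, offset)
        vec[offset] = False            # block is no longer dirty
        if not any(vec):               # last dirty block of the row:
            del entries[row]           # invalidate the DBI entry

def on_dbi_eviction(entries, victim_row, writeback_to_memory):
    # Every block the victim entry marks dirty must be written back,
    # since the DBI can no longer track its dirty status. The cached
    # copies stay resident; they merely become clean.
    vec = entries.pop(victim_row)
    for offset, dirty in enumerate(vec):
        if dirty:
            writeback_to_memory(victim_row, offset)
```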
Although a DBI\xspace eviction may require evicting many dirty blocks, with a small buffer to keep track of the evicted DBI\xspace entry (until all of its blocks are written back to memory), the DBI\xspace eviction can be interleaved with other demand requests. Note that on a DBI\xspace eviction, the corresponding cache blocks need not be evicted---they only need to be transitioned from the dirty state to the clean state. \subsection{Cache Coherence Protocols} \label{sec:dbi-ccp} Many cache coherence protocols implicitly store the dirty status of cache blocks in the cache coherence states. For example, in the MESI protocol~\cite{mesi}, the M (modified) state indicates that the block is dirty. In the improved MOESI protocol~\cite{moesi}, both M (modified) and O (Owner) states indicate that the block is dirty. To adapt such protocols to work with DBI\xspace, we propose to split the cache coherence states into multiple pairs---each pair containing a state that indicates the block is dirty and the non-dirty version of the same state. For example, we split the MOESI protocol into three parts: (M, E), (O, S) and (I). We can then use a single bit to distinguish between the two states in each pair. This bit will be stored in the DBI\xspace. \section{DRAM-Aware Writeback} \label{sec:dbi-dawb} We conceived of the Dirty-Block Index based on a previously proposed optimization called \emph{DRAM-Aware Writeback}. In this section, we first provide a brief background on dirty blocks and the interaction between the last-level cache and the memory controller. We then describe the optimization. \subsection{Background on Dirty Block Management} \label{sec:dbi-background} Most modern high-performance systems use a writeback last-level cache. When a cache block is modified by the CPU, it is marked \emph{dirty} in the cache. For this purpose, each tag entry is associated with a \emph{dirty} bit, which indicates if the block corresponding to the tag entry is dirty.
When a block is evicted, if its dirty bit is set, then the cache sends the block to the memory controller to be written back to the main memory. The memory controller periodically writes back dirty blocks to their locations in main memory. While such writes can be interleaved with read requests at a fine granularity, there is a penalty for switching the memory channel between the read and write modes. As a result, most memory controllers buffer the writebacks from the last-level cache in a write buffer. During this period, which we refer to as the \emph{read phase}, the memory controller only serves read requests. When the write buffer is close to becoming full, the memory controller stops serving read requests and starts flushing out the write buffer to main memory until the write buffer is close to empty. We refer to this phase as the \emph{writeback phase}. The memory controller then switches back to serving read requests. This policy is referred to as \emph{drain-when-full}~\cite{dram-aware-wb}. \subsection{DRAM-Aware Writeback: Improving Write Locality} In existing systems, the sequence with which dirty blocks are evicted from the cache depends primarily on the cache replacement policy. As observed by two prior works~\cite{dram-aware-wb,vwq}, this approach can fill the write buffer with dirty blocks from many different rows in DRAM. As a result, the writeback phase exhibits poor DRAM row buffer locality. However, there could be other dirty blocks in the cache which belong to the same DRAM row as those in the write buffer. \emph{DRAM-Aware Writeback} (DAWB) is a simple solution to counter this problem. The idea is to write back dirty blocks of the same DRAM row together so as to improve the row buffer locality of the writeback phase. This could reduce the time consumed by the writeback phase, thereby allowing the memory controller to switch to the read phase sooner.
To implement this idea, whenever a dirty block is evicted from the cache, DAWB checks if there are any other dirty blocks in the cache that belong to the same DRAM row. If such blocks exist, DAWB simply writes the contents of those blocks to main memory and marks them as clean (i.e., clears their dirty bits). Evaluations show that this simple optimization can significantly improve the performance of many applications. \subsection{Inefficiency in Implementing DAWB} Implementing DAWB with existing cache organizations requires the cache to look up each block of a DRAM row and determine if the block is dirty. In a modern system with a typical DRAM row buffer size of 8KB and a cache block size of 64B, this operation requires 128 cache lookups. With caches getting larger and more cores sharing the cache, these lookups incur high latency and also add to the contention for the tag store. To add to the problem, many of these cache blocks may not be dirty to begin with, making the lookups for those blocks unnecessary. Ideally, as part of the DAWB optimization, the tag store should be looked up only for those blocks from the DRAM row that are actually dirty. In other words, the cache must be able to efficiently identify the list of all cache blocks of a given DRAM row (or region) that are dirty. This will not only enable a more efficient implementation of the DAWB optimization, but also address the cache coherence problem described at the beginning of this chapter. \section{Summary} \label{sec:dbi-summary} In this chapter, we introduced the Dirty-Block Index (DBI), a structure that aids the cache in efficiently responding to queries regarding dirty blocks. Specifically, DBI can quickly list all dirty blocks that belong to a contiguous region and, in general, check if a block is dirty more efficiently than existing cache organizations.
To achieve this, our mechanism removes the dirty bits from the on-chip tag store and organizes them at a large region (e.g., DRAM row) granularity in the DBI. DBI can be used to accelerate the coherence protocol that ensures the coherence of data between the on-chip caches and main memory in our in-DRAM mechanisms. We described three other concrete use cases for the DBI that can improve the performance and energy-efficiency of the memory subsystem in general, and reduce the ECC overhead for on-chip caches. This approach is an effective way of enabling several other optimizations at different levels of caches by organizing the DBI to cater to the write patterns of each cache level. We believe this approach can be extended to more efficiently organize other metadata in caches (e.g., cache coherence states), enabling more optimizations to improve performance and power-efficiency. \section{DRAM Chip} \label{sec:dram-chip} A modern DRAM chip consists of a hierarchy of structures: DRAM \emph{cells}, \emph{tiles/MATs}, \emph{subarrays}, and \emph{banks}. In this section, we will describe the design of a modern DRAM chip in a bottom-up fashion, starting from a single DRAM cell and its operation. \subsection{DRAM Cell and Sense Amplifier} At the lowest level, DRAM technology uses capacitors to store information. Specifically, it uses the two extreme states of a capacitor, namely, the \emph{empty} and the \emph{fully charged} states to store a single bit of information. For instance, an empty capacitor can denote a logical value of 0, and a fully charged capacitor can denote a logical value of 1. Figure~\ref{fig:cell-states} shows the two extreme states of a capacitor. \begin{figure}[h] \centering \includegraphics{dram-background/figures/cell-states} \caption[Capacitor]{Two states of a DRAM cell} \label{fig:cell-states} \end{figure} Unfortunately, the capacitors used for DRAM chips are small, and will get smaller with each new generation. 
As a result, the amount of charge that can be stored in the capacitor, and hence the difference between the two states, is also very small. In addition, the capacitor can potentially lose its state after it is accessed. Therefore, to extract the state of the capacitor, DRAM manufacturers use a component called a \emph{sense amplifier}. Figure~\ref{fig:sense-amp} shows a sense amplifier. A sense amplifier contains two inverters which are connected together such that the output of one inverter is connected to the input of the other and vice versa. The sense amplifier also has an enable signal that determines if the inverters are active. When enabled, the sense amplifier has two stable states, as shown in Figure~\ref{fig:sense-amp-states}. In both these stable states, each inverter takes a logical value and feeds the other inverter with the negated input. \begin{figure}[h] \centering \begin{minipage}{5cm} \centering \includegraphics{dram-background/figures/sense-amp} \caption{Sense amplifier} \label{fig:sense-amp} \end{minipage}\quad \begin{minipage}{9cm} \centering \includegraphics{dram-background/figures/sense-amp-states} \caption{Stable states of a sense amplifier} \label{fig:sense-amp-states} \end{minipage} \end{figure} Figure~\ref{fig:sense-amp-operation} shows the operation of the sense amplifier from a disabled state. In the initial disabled state, we assume that the voltage level of the top terminal (V$_a$) is higher than that of the bottom terminal (V$_b$). When the sense amplifier is enabled in this state, it \emph{senses} the difference between the two terminals and \emph{amplifies} the difference until it reaches one of the stable states (hence the name ``sense amplifier'').
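The sense-and-amplify behavior can be captured in a few lines. This is a behavioral sketch only; the voltage constants and the capacitance ratio in `charge_share` (which previews the cell operation described next) are illustrative assumptions, not device parameters.

```python
VDD = 1.0  # normalized supply voltage (illustrative)

def sense_amplify(v_a, v_b):
    # Behavioral model of an enabled sense amplifier: whichever terminal
    # starts higher is driven to VDD, the other to 0 (a stable state).
    if v_a > v_b:
        return VDD, 0.0
    if v_b > v_a:
        return 0.0, VDD
    return v_a, v_b  # no deviation: nothing to sense

def charge_share(v_cell, c_cell=1.0, c_bitline=8.0):
    # Bitline voltage after connecting a cell at v_cell to a bitline
    # precharged to VDD/2 (the capacitance ratio is an assumption).
    return (c_cell * v_cell + c_bitline * VDD / 2) / (c_cell + c_bitline)
```

Under these assumed capacitances, a fully charged cell perturbs the bitline to about `0.56 * VDD`, so `sense_amplify(charge_share(VDD), VDD / 2)` settles at `(VDD, 0.0)`; an empty cell perturbs the bitline downward and the amplifier settles the other way.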
\begin{figure}[h] \centering \includegraphics{dram-background/figures/sense-amp-operation} \caption{Operation of the sense amplifier} \label{fig:sense-amp-operation} \end{figure} \subsection{DRAM Cell Operation: The \texttt{ACTIVATE-PRECHARGE} cycle} \label{sec:cell-operation} DRAM technology uses a simple mechanism that converts the logical state of a capacitor into a logical state of the sense amplifier. Data can then be accessed from the sense amplifier (since it is in a stable state). Figure~\ref{fig:cell-operation} shows the connection between a DRAM cell and the sense amplifier and the sequence of states involved in converting the cell state to the sense amplifier state. \begin{figure}[h] \centering \includegraphics{dram-background/figures/cell-operation} \caption{Operation of a DRAM cell and sense amplifier} \label{fig:cell-operation} \end{figure} As shown in the figure (state \ding{202}), the capacitor is connected to an access transistor that acts as a switch between the capacitor and the sense amplifier. The transistor is controlled by a wire called the \emph{wordline}. The wire that connects the transistor to the top end of the sense amplifier is called the \emph{bitline}. In the initial state \ding{202}, the wordline is lowered, the sense amplifier is disabled and both ends of the sense amplifier are maintained at a voltage level of $\frac{1}{2}$V$_{DD}$\xspace. We assume that the capacitor is initially fully charged (the operation is similar if the capacitor was empty). This state is referred to as the \emph{precharged} state. An access to the cell is triggered by a command called \texttt{ACTIVATE}\xspace. Upon receiving an \texttt{ACTIVATE}\xspace, the corresponding wordline is first raised (state \ding{203}). This connects the capacitor to the bitline.
In the ensuing phase called \emph{charge sharing} (state \ding{204}), charge flows from the capacitor to the bitline, raising the voltage level on the bitline (top end of the sense amplifier) to $\frac{1}{2}$V$_{DD}+\delta$\xspace. After charge sharing, the sense amplifier is enabled (state \ding{205}). The sense amplifier detects the difference in voltage levels between its two ends and amplifies the deviation, until it reaches the stable state where the top end is at V$_{DD}$\xspace (state \ding{206}). Since the capacitor is still connected to the bitline, the charge on the capacitor is also fully restored. We will shortly describe how the data can be accessed from the sense amplifier. However, once the access to the cell is complete, it is taken back to the original precharged state using the command called \texttt{PRECHARGE}\xspace. Upon receiving a \texttt{PRECHARGE}\xspace, the wordline is first lowered, thereby disconnecting the cell from the sense amplifier. Then, the two ends of the sense amplifier are driven to $\frac{1}{2}$V$_{DD}$\xspace using a precharge unit (not shown in the figure for brevity). \subsection{DRAM MAT/Tile: The Open Bitline Architecture} \label{sec:dram-mat} The goal of DRAM manufacturers is to maximize the density of the DRAM chips while adhering to certain latency constraints (described in Section~\ref{sec:dram-timing-constraints}). There are two costly components in the setup described in the previous section. The first component is the sense amplifier itself. Each sense amplifier is around two orders of magnitude larger than a single DRAM cell~\cite{rambus-power}. Second, the state of the wordline is a function of the address that is currently being accessed. The logic that is necessary to implement this function (for each cell) is expensive. In order to reduce the overall cost of these two components, they are shared by many DRAM cells. Specifically, each sense amplifier is shared by a column of DRAM cells.
In other words, all the cells in a single column are connected to the same bitline. Similarly, each wordline is shared by a row of DRAM cells. Together, this organization consists of a 2-D array of DRAM cells connected to a row of sense amplifiers and a column of wordline drivers. Figure~\ref{fig:dram-mat} shows this organization with a $4 \times 4$ 2-D array. \begin{figure}[h] \centering \includegraphics{dram-background/figures/dram-mat} \caption{A 2-D array of DRAM cells} \label{fig:dram-mat} \end{figure} To further reduce the overall cost of the sense amplifiers and the wordline driver, modern DRAM chips use an architecture called the \emph{open bitline architecture}. This architecture exploits two observations. First, the sense amplifier is wider than the DRAM cells. This difference in width results in a white space near each column of cells. Second, the sense amplifier is symmetric. Therefore, cells can also be connected to the bottom part of the sense amplifier. Putting together these two observations, we can pack twice as many cells in the same area using the open bitline architecture, as shown in Figure~\ref{fig:dram-mat-oba}. \begin{figure}[h] \centering \includegraphics{dram-background/figures/dram-mat-oba} \caption{A DRAM MAT/Tile: The open bitline architecture} \label{fig:dram-mat-oba} \end{figure} As shown in the figure, a 2-D array of DRAM cells is connected to two rows of sense amplifiers: one on the top and one on the bottom of the array. While all the cells in a given row share a common wordline, half the cells in each row are connected to the top row of sense amplifiers and the remaining half of the cells are connected to the bottom row of sense amplifiers. This tightly packed structure is called a DRAM MAT/Tile~\cite{rethinking-dram,half-dram,salp}. In a modern DRAM chip, each MAT is typically a $512 \times 512$ or $1024 \times 1024$ array.
Multiple MATs are grouped together to form a larger structure called a \emph{DRAM bank}, which we describe next. \subsection{DRAM Bank} In most commodity DRAM interfaces~\cite{ddr3,ddr4}, a DRAM bank is the smallest structure visible to the memory controller. All commands related to data access are directed to a specific bank. Logically, each DRAM bank is a large monolithic structure with a 2-D array of DRAM cells connected to a single set of sense amplifiers (also referred to as a row buffer). For example, in a 2Gb DRAM chip with 8 banks, each bank has $2^{15}$ rows and each logical row has 8192 DRAM cells. Figure~\ref{fig:dram-bank-logical} shows this logical view of a bank. In addition to the MAT, the array of sense amplifiers, and the wordline driver, each bank also consists of some peripheral structures to decode DRAM commands and addresses, and manage the input/output to the DRAM bank. Specifically, each bank has a \emph{row decoder} to decode the row address of row-level commands (e.g., \texttt{ACTIVATE}\xspace). Each data access command (\texttt{READ}\xspace and \texttt{WRITE}\xspace) accesses only a part of a DRAM row. Such individual parts are referred to as \emph{columns}. With each data access command, the address of the column to be accessed is provided. This address is decoded by the \emph{column selection logic}. Depending on which column is selected, the corresponding piece of data is communicated between the sense amplifiers and the bank I/O logic. The bank I/O logic in turn acts as an interface between the DRAM bank and the chip-level I/O logic. \begin{figure}[h] \centering \includegraphics{dram-background/figures/dram-bank-logical} \caption{DRAM Bank: Logical view} \label{fig:dram-bank-logical} \end{figure} Although the bank can logically be viewed as a single MAT, building a single MAT of a very large dimension is practically not feasible as it will require very long bitlines and wordlines.
Therefore, each bank is physically implemented as a 2-D array of DRAM MATs. Figure~\ref{fig:dram-bank-physical} shows a physical implementation of the DRAM bank with 4 MATs arranged in a $2 \times 2$ array. As shown in the figure, the output of the global row decoder is sent to each row of MATs. The bank I/O logic, also known as the \emph{global sense amplifiers}, is connected to all the MATs through a set of \emph{global bitlines}. As shown in the figure, each vertical collection of MATs consists of its own column selection logic and global bitlines. One implication of this division is that the data accessed by any command is split equally across all the MATs in a single row of MATs. \begin{figure}[h] \centering \includegraphics{dram-background/figures/dram-bank} \caption{DRAM Bank: Physical implementation} \label{fig:dram-bank-physical} \end{figure} Figure~\ref{fig:dram-mat-zoomed} shows the zoomed-in version of a DRAM MAT with the surrounding peripheral logic. Specifically, the figure shows how each column selection line selects specific sense amplifiers from a MAT and connects them to the global bitlines. It should be noted that the width of the global bitlines for each MAT (typically 8/16) is much smaller than the width of the MAT (typically 512/1024). \begin{figure}[h] \centering \includegraphics{dram-background/figures/dram-bank-zoomed} \caption{Detailed view of MAT} \label{fig:dram-mat-zoomed} \end{figure} Each DRAM chip consists of multiple banks as shown in Figure~\ref{fig:dram-chip}. All the banks share the chip's internal command, address, and data buses. As mentioned before, each bank operates mostly independently (except for operations that involve the shared buses). The chip I/O manages the transfer of data to and from the chip's internal bus to the channel. The width of the chip output (typically 8 bits) is much smaller than the output width of each bank (typically 64 bits).
Any piece of data accessed from a DRAM bank is first buffered at the chip I/O and sent out on the memory bus 8 bits at a time. With the DDR (double data rate) technology, 8 bits are sent out each half cycle. Therefore, it takes 4 cycles to transfer 64 bits of data from a DRAM chip I/O on to the memory channel. \begin{figure}[h] \centering \includegraphics{dram-background/figures/dram-chip} \caption{DRAM Chip} \label{fig:dram-chip} \end{figure} \subsection{DRAM Commands: Accessing Data from a DRAM Chip} To access a piece of data from a DRAM chip, the memory controller must first identify the location of the data: the bank ID ($B$), the row address ($R$) within the bank, and the column address ($C$) within the row. After identifying these pieces of information, accessing the data involves three steps. The first step is to issue a \texttt{PRECHARGE}\xspace to the bank $B$. This step prepares the bank for a data access by ensuring that all the sense amplifiers are in the \emph{precharged} state (Figure~\ref{fig:cell-operation}, state~\ding{202}). No wordline within the bank is raised in this state. The second step is to activate the row $R$ that contains the data. This step is triggered by issuing an \texttt{ACTIVATE}\xspace to bank $B$ with row address $R$. Upon receiving this command, the corresponding bank feeds its global row decoder with the input $R$. The global row decoder logic then raises the wordline of the DRAM row corresponding to the address $R$ and enables the sense amplifiers connected to that row. This triggers the DRAM cell operation described in Section~\ref{sec:cell-operation}. At the end of the activate operation, the data from the entire row of DRAM cells is copied to the corresponding array of sense amplifiers. Finally, the third step is to access the data from the required column. This is done by issuing a \texttt{READ}\xspace or \texttt{WRITE}\xspace command to the bank with the column address $C$.
Upon receiving a \texttt{READ}\xspace or \texttt{WRITE}\xspace command, the corresponding address is fed to the column selection logic. The column selection logic then raises the column selection lines (Figure~\ref{fig:dram-mat-zoomed}) corresponding to the address $C$, thereby connecting those sense amplifiers to the global sense amplifiers through the global bitlines. For a read access, the global sense amplifiers sense the data from the MAT's local sense amplifiers and transfer that data to the chip's internal bus. For a write access, the global sense amplifiers read the data from the chip's internal bus and force the MAT's local sense amplifiers to the appropriate state. Not all data accesses require all three steps. Specifically, if the row to be accessed is already activated in the corresponding bank, then the first two steps can be skipped and the data can be directly accessed by issuing a \texttt{READ}\xspace or \texttt{WRITE}\xspace to the bank. For this reason, the array of sense amplifiers is also referred to as a \emph{row buffer}, and such an access that skips the first two steps is called a \emph{row buffer hit}. Similarly, if the bank is already in the precharged state, then the first step can be skipped. Such an access is referred to as a \emph{row buffer miss}. Finally, if a different row is activated within the bank, then all three steps have to be performed. Such a situation is referred to as a \emph{row buffer conflict}. \subsection{DRAM Timing Constraints} \label{sec:dram-timing-constraints} Different operations within DRAM consume different amounts of time. Therefore, after issuing a command, the memory controller must wait for a sufficient amount of time before it can issue the next command. Such wait times are managed by what are called the \emph{timing constraints}. Timing constraints essentially dictate the minimum amount of time between two commands issued to the same bank/rank/channel.
Table~\ref{table:timing-constraints} describes some key timing constraints along with their values for the DDR3-1600 interface. \begin{table}[h]\small \centering \input{dram-background/tables/timing-constraints} \caption[DDR3-1600 DRAM timing constraints]{Key DRAM timing constraints with their values for DDR3-1600} \label{table:timing-constraints} \end{table} \chapter{Understanding DRAM} \label{chap:dram-background} In the second component of this dissertation, we propose a series of techniques to improve the efficiency of certain key primitives by exploiting the DRAM architecture. In this chapter, we will describe the modern DRAM architecture and its implementation in full detail. While we focus our attention primarily on commodity DRAM design (i.e., the DDRx interface), most DRAM architectures use very similar design approaches and vary only in higher-level design choices. As a result, our mechanisms, which we describe in the subsequent chapters, can be easily extended to any DRAM architecture. We now describe the high-level organization of the memory system. \input{dram-background/org} \input{dram-background/chip} \input{dram-background/module} \input{dram-background/summary} \section{DRAM Module} \label{sec:dram-module} \begin{figure} \centering \input{dram-background/figures/dram-rank} \caption{Organization of a DRAM rank} \label{fig:dram-rank} \end{figure} As mentioned before, each \texttt{READ}\xspace or \texttt{WRITE}\xspace command for a single DRAM chip typically involves only 64 bits. In order to achieve high memory bandwidth, commodity DRAM modules group several DRAM chips (typically 4 or 8) together to form a \emph{rank} of DRAM chips. The idea is to connect all chips of a single rank to the same command and address buses, while providing each chip with an independent data bus. In effect, all the chips within a rank receive the same commands with the same addresses, making the rank a logically wide DRAM chip.
Figure~\ref{fig:dram-rank} shows the logical organization of a DRAM rank. Most commodity DRAM ranks consist of 8 chips. Therefore, each \texttt{READ}\xspace or \texttt{WRITE}\xspace command accesses 64 bytes of data, the typical cache line size in most processors. \section{High-level Organization of the Memory System} \begin{figure}[b] \centering \includegraphics{dram-background/figures/high-level-mem-org} \caption{High-level organization of the memory subsystem} \label{fig:high-level-mem-org} \end{figure} Figure~\ref{fig:high-level-mem-org} shows the organization of the memory subsystem in a modern system. At a high level, each processor chip consists of one or more off-chip memory \emph{channels}. Each memory channel consists of its own set of \emph{command}, \emph{address}, and \emph{data} buses. Depending on the design of the processor, there can be either an independent memory controller for each memory channel or a single memory controller for all memory channels. All modules connected to a channel share the buses of the channel. Each module consists of many DRAM devices (or chips). Most of this chapter (Section~\ref{sec:dram-chip}) is dedicated to describing the design of a modern DRAM chip. In Section~\ref{sec:dram-module}, we present more details of the module organization of commodity DRAM. \section{Summary} In this section, we summarize the key takeaways of the DRAM design and operation. \begin{enumerate}\itemsep0pt \item To access data from a DRAM cell, DRAM converts the state of the cell into one of the stable states of the sense amplifier. The \emph{precharged} state of the sense amplifier, wherein both the bitline and the $\overline{\textrm{bitline}}$\xspace are charged to a voltage level of $\frac{1}{2}$V$_{DD}$\xspace, is key to this state transfer, as the DRAM cell is large enough to perturb the voltage level on the bitline. \item The DRAM cell is not strong enough to switch the sense amplifier from one stable state to another.
If a cell is connected to a stable sense amplifier, the charge on the cell gets overwritten to reflect the state of the sense amplifier. \item In the DRAM cell operation, the final state of the sense amplifier after the amplification phase depends solely on the deviation on the bitline after \emph{charge sharing}. If the deviation is positive, the sense amplifier drives the bitline to V$_{DD}$\xspace. Otherwise, if the deviation is negative, the sense amplifier drives the bitline to 0. \item In commodity DRAM, each \texttt{ACTIVATE}\xspace command simultaneously activates an entire row of DRAM cells. In a single chip, this typically corresponds to 8 Kbits of cells. Across a rank with 8 chips, each \texttt{ACTIVATE}\xspace activates 8 KB of data. \item In a commodity DRAM module, the data corresponding to each \texttt{READ}\xspace or \texttt{WRITE}\xspace is equally distributed across all the chips in a rank. All the chips share the same command and address bus, while each chip has an independent data bus. \end{enumerate} All our mechanisms are built on top of these observations. We will recap these observations in the respective chapters. \section{Applications and Evaluations} \label{sec:applications} To quantitatively evaluate the benefits of {\sffamily{GS-DRAM}}\xspace, we implement our framework in the Gem5 simulator~\cite{gem5}, on top of the x86 architecture. We implement the \texttt{pattload} instruction by modifying the behavior of the \texttt{prefetch} instruction to gather with a specific pattern into either the \texttt{rax} register (8 bytes) or the \texttt{xmm0} register (16 bytes). None of our evaluated applications required the \texttt{pattstore} instruction. Table~\ref{table:gsdram-parameters} lists the main parameters of the simulated system. All caches uniformly use 64-byte cache lines. 
While we envision several applications to benefit from our framework, in this section, we primarily discuss and evaluate two applications: 1)~an in-memory database workload, and 2)~a general matrix-matrix multiplication workload. \begin{table}[h]\footnotesize \centering \input{gsdram/tables/parameters} \caption[GS-DRAM: Simulation parameters]{Main parameters of the simulated system.} \label{table:gsdram-parameters} \end{table} \subsection{In-Memory Databases} \label{sec:in-memory-db} In-memory databases (IMDB) (e.g.,~\cite{memsql,hstore,hyrise}) provide significantly higher performance than traditional disk-oriented databases. Similar to any other database, an IMDB may support two kinds of queries: \emph{transactions}, which access many fields from a few tuples, and \emph{analytics}, which access one or a few fields from many tuples. As a result, the storage model used for the database tables heavily impacts the performance of transactions and analytical queries. While a row-oriented organization (\emph{row store}) is better for transactions, a column-oriented organization~\cite{c-store} (\emph{column store}) is better for analytics. The increasing need for both fast transactions and fast real-time analytics has given rise to a new workload referred to as \emph{Hybrid Transaction/Analytical Processing} (HTAP)~\cite{htap}. In an HTAP workload, both transactions and analytical queries are run on the \emph{same version} of the database. Unfortunately, neither the row store nor the column store provides the best performance for both transactions and analytics. With our {\sffamily{GS-DRAM}}\xspace framework, each database table can be stored as a row store in memory, but can be accessed at high performance \emph{both} in the row-oriented access pattern \emph{and} the field-oriented access pattern.\footnote{{\sffamily{GS-DRAM}}\xspace requires the database to be structured (i.e., not have any variable length fields).
This is fine for most high-performance IMDBs as they handle variable length fields using fixed size pointers for fast data retrieval~\cite{vlf1,vlf2}. {\sffamily{GS-DRAM}}\xspace will perform at least as well as the baseline for unstructured databases.} Therefore, we expect {\sffamily{GS-DRAM}}\xspace to provide the \emph{best of both row and column layouts} for both kinds of queries. We demonstrate this potential benefit by comparing the performance of {\sffamily{GS-DRAM}}\xspace with both a row store layout (\mbox{\sffamily Row Store}\xspace) and a column store layout (\mbox{\sffamily Column Store}\xspace) on three workloads: 1)~a transaction-only workload, 2)~an analytics-only workload, and 3)~an HTAP workload. For our experiments, we assume an IMDB with a single table with one million tuples and no use of compression. Each tuple contains eight 8-byte fields, and fits exactly in a 64B cache line. (Our mechanism naturally extends to any table with power-of-2 tuple size.) \textbf{Transaction workload.} For this workload, each transaction operates on a randomly-chosen tuple. All transactions access $i$, $j$, and $k$ fields of the tuple in the read-only, write-only, and read-write mode, respectively. Figure~\ref{fig:transactions} compares the performance (execution time) of {\sffamily{GS-DRAM}}\xspace, \mbox{\sffamily Row Store}\xspace, and \mbox{\sffamily Column Store}\xspace on the transaction workload for various values of $i$, $j$, and $k$ (x-axis). The workloads are sorted based on the total number of fields accessed by each transaction. For each mechanism, the figure plots the execution time for running 10000 transactions. \begin{figure}[h] \centering \includegraphics{gsdram/plots/transactions} \caption[{\sffamily{GS-DRAM}}\xspace: Transaction workload performance]{Transaction Workload Performance: Execution time for 10000 transactions. 
The x-axis indicates the number of \emph{read-only}, \emph{write-only}, and \emph{read-write} fields for each workload.} \label{fig:transactions} \end{figure} We draw three conclusions. First, as each transaction accesses only one tuple, it accesses only one cache line. Therefore, the performance of \mbox{\sffamily Row Store}\xspace is almost the same regardless of the number of fields read/written by each transaction. Second, the performance of \mbox{\sffamily Column Store}\xspace is worse than that of \mbox{\sffamily Row Store}\xspace, and decreases with increasing number of fields. This is because \mbox{\sffamily Column Store}\xspace accesses a different cache line for each field of a tuple accessed by a transaction, thereby causing a large number of memory accesses. Finally, as expected, {\sffamily{GS-DRAM}}\xspace performs as well as \mbox{\sffamily Row Store}\xspace and 3X (on average) better than \mbox{\sffamily Column Store}\xspace for the transactions workload. \textbf{Analytics workload.} For this workload, we measure the time taken to run a query that computes the sum of $k$ columns from the table. Figure~\ref{fig:analytics} compares the performance of the three mechanisms on the analytics workload for $k = 1$ and $k = 2$. The figure shows the performance of each mechanism without and with prefetching. We use a PC-based stride prefetcher~\cite{stride-prefetching} (with prefetching degree of 4~\cite{fdp}) that prefetches data into the L2 cache. We draw several conclusions from the results. \begin{figure}[h] \centering \includegraphics{gsdram/plots/analytics} \caption[{\sffamily{GS-DRAM}}\xspace: Analytics workload performance]{Analytics Workload Performance: Execution time for running an analytics query on 1 or 2 columns (without and with prefetching).} \label{fig:analytics} \end{figure} First, prefetching significantly improves the performance of all three mechanisms for both queries. 
This is expected as the analytics query has a uniform stride for all mechanisms, which can be easily detected by the prefetcher. Second, the performance of \mbox{\sffamily Row Store}\xspace is roughly the same for both queries. This is because each tuple of the table fits in a single cache line and hence, the number of memory accesses for \mbox{\sffamily Row Store}\xspace is the same for both queries (with and without prefetching). Third, the execution time of \mbox{\sffamily Column Store}\xspace increases with more fields. This is expected as \mbox{\sffamily Column Store}\xspace needs to fetch more cache lines when accessing more fields from the table. Regardless, \mbox{\sffamily Column Store}\xspace significantly outperforms \mbox{\sffamily Row Store}\xspace for both queries, as it causes far fewer cache line fetches compared to \mbox{\sffamily Row Store}\xspace. Finally, {\sffamily{GS-DRAM}}\xspace, by gathering the columns from the table as efficiently as \mbox{\sffamily Column Store}\xspace, performs similarly to \mbox{\sffamily Column Store}\xspace and significantly better than \mbox{\sffamily Row Store}\xspace both without and with prefetching (2X on average). \textbf{HTAP workload.} For this workload, we run one analytics thread and one transactions thread concurrently on the same system operating on the \emph{same} table. The analytics thread computes the sum of a single column, whereas the transactions thread runs transactions (on randomly chosen tuples with one read-only and one write-only field). The transaction thread runs until the analytics thread completes. We measure 1)~the time taken to complete the analytics query, and 2)~the throughput of the transactions thread. Figures~\ref{plot:htap-anal} and \ref{plot:htap-trans} plot these results, without and with prefetching. 
\begin{figure}[h] \centering \hspace{11mm}\includegraphics{gsdram/plots/htap-legend}\vspace{1mm}\\ \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics{gsdram/plots/htap-anal-full} \caption{\footnotesize{Analytics Performance}} \label{plot:htap-anal} \end{subfigure}\quad \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics{gsdram/plots/htap-trans-full} \caption{\footnotesize{Transaction Throughput}} \label{plot:htap-trans} \end{subfigure} \caption[{\sffamily{GS-DRAM}}\xspace: HTAP performance]{HTAP (without and with prefetching) (transactions: 1 read-only, 1 write-only field; analytics: 1 column)} \label{plot:htap} \end{figure} First, for analytics, prefetching significantly improves performance for all three mechanisms. {\sffamily{GS-DRAM}}\xspace performs as well as \mbox{\sffamily Column Store}\xspace. Second, for transactions, we find that {\sffamily{GS-DRAM}}\xspace not only outperforms \mbox{\sffamily Column Store}\xspace in terms of transaction throughput, but also performs better than \mbox{\sffamily Row Store}\xspace. We traced this effect back to inter-thread contention for main memory bandwidth, a well-studied problem (e.g.,~\cite{bliss,critical-scheduler,tcm,stfm,parbs,atlas}). The FR-FCFS~\cite{frfcfs,frfcfs-patent} memory scheduler prioritizes requests that hit in the row buffer. With \mbox{\sffamily Row Store}\xspace, the analytics thread accesses all the cache lines in a DRAM row, thereby starving requests of the transaction thread to the same bank (similar to a memory performance hog program described in~\cite{mpa}). In contrast, by fetching just the required field, {\sffamily{GS-DRAM}}\xspace accesses \emph{8 times fewer} cache lines per row. As a result, it stalls the transaction thread for a much smaller amount of time, leading to higher transaction throughput than \mbox{\sffamily Row Store}\xspace.
The problem becomes worse for \mbox{\sffamily Row Store}\xspace with prefetching, since the prefetcher makes the analytics thread run even faster, thereby consuming a larger fraction of the memory bandwidth. \textbf{\sffamily Energy.} We use McPAT~\cite{mcpat} and DRAMPower~\cite{drampower,drampower-paper} (integrated with Gem5~\cite{gem5}) to estimate the processor and DRAM energy consumption of the three mechanisms. Our evaluations show that, for transactions, {\sffamily{GS-DRAM}}\xspace consumes similar energy to \mbox{\sffamily Row Store}\xspace and 2.1X lower than \mbox{\sffamily Column Store}\xspace. For analytics (with prefetching enabled), {\sffamily{GS-DRAM}}\xspace consumes similar energy to \mbox{\sffamily Column Store}\xspace and 2.4X lower energy (4X without prefetching) than \mbox{\sffamily Row Store}\xspace. (As different mechanisms perform different amounts of work for the HTAP workload, we do not compare energy for this workload.) The energy benefits of {\sffamily{GS-DRAM}}\xspace over prior approaches come from 1)~lower overall processor energy consumption due to reduced execution time, and 2)~lower DRAM energy consumption due to significantly fewer memory accesses. Figure~\ref{plot:summary} summarizes the performance and energy benefits of {\sffamily{GS-DRAM}}\xspace compared to \mbox{\sffamily Row Store}\xspace and \mbox{\sffamily Column Store}\xspace for the transactions and the analytics workloads. We conclude that {\sffamily{GS-DRAM}}\xspace provides the best of both the layouts. 
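The layout comparisons above reduce to simple cache-line arithmetic. The sketch below assumes the table configuration described earlier (one million tuples, eight 8-byte fields, 64-byte cache lines); the counts are illustrative back-of-the-envelope numbers, not simulator output.

```python
TUPLES = 1_000_000   # tuples in the example table
FIELD_BYTES = 8      # eight 8-byte fields per tuple
LINE = 64            # cache line size; one tuple fills one line

def lines_per_transaction(layout, fields_accessed):
    """Cache lines touched by one transaction on a single tuple."""
    if layout in ('row_store', 'gs_dram'):
        return 1                # the whole tuple sits in one cache line
    return fields_accessed      # column store: one line per field

def lines_per_column_scan(layout):
    """Cache lines fetched to scan one field of every tuple."""
    if layout == 'row_store':
        return TUPLES           # every tuple's full line must be fetched
    # column store and GS-DRAM both deliver densely packed field values
    return TUPLES * FIELD_BYTES // LINE
```

With these numbers, a transaction touching three fields needs three cache lines in a column store but only one with {\sffamily{GS-DRAM}}\xspace, while a full-column scan needs 8 times fewer lines than a row store, consistent with the bandwidth-contention effect discussed for the HTAP workload.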
\begin{figure}[h] \centering \hspace{11mm}\includegraphics{gsdram/plots/htap-legend}\vspace{1mm}\\ \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics{gsdram/plots/time-summary} \caption{\footnotesize{Average Performance}} \label{plot:time-summary} \end{subfigure}\quad \begin{subfigure}[b]{0.45\linewidth} \centering \includegraphics{gsdram/plots/energy-summary} \caption{\footnotesize{Average Energy}} \label{plot:energy-summary} \end{subfigure} \caption[{\sffamily{GS-DRAM}}\xspace: Performance and energy summary]{Summary of performance and energy consumption for the transactions and analytics workloads} \label{plot:summary} \end{figure} \subsection{Scientific Computation: GEMM} \label{sec:dgemm} General Matrix-Matrix (GEMM) multiplication is an important kernel in many scientific computations. When two $n \times n$ matrices $A$ and $B$ are multiplied, the matrix $A$ is accessed in the row-major order, whereas the matrix $B$ is accessed in the column-major order. If both matrices are stored in row-major order, a naive algorithm will result in poor spatial locality for accesses to $B$. To mitigate this problem, matrix libraries use two techniques. First, they split each matrix into smaller tiles, converting the reuses of matrix values into L1 cache hits. Second, they use SIMD instructions to speed up each vector dot product involved in the operation. Unfortunately, even after tiling, values of a column of matrix $B$ are stored in different cache lines. As a result, to exploit SIMD, the software must gather the values of a column into a SIMD register. In contrast, {\sffamily{GS-DRAM}}\xspace can read each tile of the matrix in the column-major order into the L1 cache such that each cache line contains values gathered from one column. As a result, {\sffamily{GS-DRAM}}\xspace naturally enables SIMD operations, without requiring the software to gather data into SIMD registers. 
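The gather that {\sffamily{GS-DRAM}}\xspace eliminates can be sketched as follows; the $8 \times 8$ tile of 8-byte values and the cache-lines-touched counts are illustrative assumptions.

```python
TILE = 8   # 8x8 tile of 8-byte values; each row occupies one 64B cache line

def software_column_gather(tile, col):
    """Baseline: assemble column `col` for a SIMD register by touching
    one cache line per row of the tile."""
    values = [row[col] for row in tile]
    lines_touched = len(tile)        # one distinct cache line per row
    return values, lines_touched

def gsdram_column_gather(tile, col):
    """GS-DRAM (sketch): the same column arrives pre-gathered in a
    single cache line, so only one line is touched."""
    values = [row[col] for row in tile]
    return values, 1
```

Both paths yield the same column values; the difference is that the baseline pays for eight cache-line accesses per column, which is the latency the strided gather in {\sffamily{GS-DRAM}}\xspace removes.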
Figure~\ref{fig:dgemm} plots the performance of GEMM with {\sffamily{GS-DRAM}}\xspace and with the best tiled version normalized to a non-tiled version for different sizes ($n$) of the input matrices. We draw two conclusions. First, as the size of the matrices increases, tiling provides significant performance improvement by eliminating many memory references. Second, by seamlessly enabling SIMD operations, {\sffamily{GS-DRAM}}\xspace improves the performance of GEMM multiplication by 10\% on average compared to the best tiled baseline. Note that {\sffamily{GS-DRAM}}\xspace achieves 10\% improvement over a heavily-optimized tiled baseline that spends most of its time in the L1 cache. \begin{figure} \centering \includegraphics{gsdram/plots/matmul} \caption[{\sffamily{GS-DRAM}}\xspace: GEMM multiplication performance]{GEMM Multiplication: Performance of {\sffamily{GS-DRAM}}\xspace and the best tiled-version (normalized to a non-tiled baseline). Values on top indicate percentage reduction in execution time of {\sffamily{GS-DRAM}}\xspace compared to tiling.} \label{fig:dgemm} \end{figure} \subsection{Other Applications} \label{sec:other-apps} We envision {\sffamily{GS-DRAM}}\xspace to benefit many other applications like key-value stores, graph processing, and graphics. Key-value stores have two main operations: \emph{insert} and \emph{lookup}. The \emph{insert} operation benefits from both the key and value being in the same cache line. On the other hand, the \emph{lookup} operation benefits from accessing a cache line that contains only keys. Similarly, in graph processing, operations that update individual nodes in the graph have different access patterns than those that traverse the graph. In graphics, multiple pieces of information (e.g., RGB values of pixels) may be packed into small objects. Different operations may access multiple values within an object or a single value across a large number of objects. 
The different access patterns exhibited by these applications have a regular stride and can benefit significantly from {\sffamily{GS-DRAM}}\xspace. \section{End-to-end System Design} \label{sec:end-to-end} In this section, we discuss the support required from the rest of the system stack to exploit the {\sffamily{GS-DRAM}}\xspace substrate. Our mechanism leverages support from three layers of the system stack: 1)~on-chip caches, 2)~the instruction set architecture, and 3)~software. It is also possible for the processor to dynamically identify different access patterns present in an application and exploit {\sffamily{GS-DRAM}}\xspace to accelerate such patterns transparently to the application. As our goal in this work is to demonstrate the benefits of {\sffamily{GS-DRAM}}\xspace, we leave the design of such an automatic mechanism for future work. The following sections assume a \gsdramp{*}{*}{p}, i.e., a $p$-bit pattern ID. \subsection{On-Chip Cache Support} \label{sec:on-chip-cache} Our mechanism introduces two problems with respect to on-chip cache management. First, when the memory controller gathers a cache line with a non-zero pattern ID, the values in the cache line are \emph{not} contiguously stored in physical memory. For instance, in our example (Figure~\ref{fig:matrix}), although the controller can fetch the first field of the first four tuples of the table with a single \texttt{READ}\xspace, the first field of the table is not stored contiguously in physical memory. Second, two cache lines belonging to different patterns may have a partial overlap.
In our example (Figure~\ref{fig:matrix}), if the memory controller reads the first tuple (pattern ID = 0, column ID = 0) and the first field of the first four tuples (pattern ID = 3, column ID = 0), the two resulting cache lines have a common value (the first field of the first tuple, i.e., \tikz[baseline={([yshift=-8pt]current bounding box.north)}]{ \tikzset{col1/.style={draw,minimum height=2mm, minimum width=2mm,scale=0.9,inner sep=2pt, anchor=west, fill=black!30, rounded corners=2pt,outer sep=-0pt,xshift=1mm}}; \node (v1) [col1] {\sffamily 00};}). One simple way to avoid these problems is to store the individual values of the gathered data in \emph{different} physical cache lines by employing a sectored cache~\cite{sectored-cache} (for example). However, with the off-chip interface to DRAM operating at a wider-than-sector (i.e., a full cache line) granularity, such a design will increase the complexity of the cache-DRAM interface. For example, writebacks may require read-modify-writes as the processor may not have the entire cache line. More importantly, a mechanism that does not store the gathered values in the same cache line cannot extract the full benefits of SIMD optimizations because values that are required by a single SIMD operation would now be stored in \emph{multiple} physical cache lines. Therefore, we propose a simple mechanism that stores each gathered cache line from DRAM in a single physical cache line in the on-chip cache. Our mechanism has two aspects.\vspace{2mm} \noindent\textbf{\sffamily 1. Identifying non-contiguous cache lines.} When a non-contiguous cache line is stored in the cache, the cache controller needs a mechanism to identify the cache line. We observe that, in our proposed system, each cache line can be uniquely identified using the cache line address and the pattern ID with which it was fetched from DRAM. 
Therefore, we extend each cache line tag in the cache tag store with $p$ additional bits to store the pattern ID of the corresponding cache line.\vspace{2mm} \noindent\textbf{\sffamily 2. Maintaining cache coherence.} The presence of overlapping cache lines has two implications for coherence. First, before fetching a cache line from DRAM, the controller must check if there are any dirty cache lines in the cache which have a partial overlap with the cache line being fetched. Second, when a value is modified by the processor, in addition to invalidating the modified cache line from the other caches, the processor must also invalidate all other cache lines that contain the value that is being modified. With a number of different available patterns, this operation can be complex and costly. Fortunately, we observe that many applications that use strided accesses require only two pattern IDs per data structure, the default pattern and one other pattern ID. Thus, as a trade-off to simplify cache coherence, we restrict each data structure to use only the zero pattern and one other pattern ID. To implement this constraint, we associate each virtual page with an additional $p$-bit pattern ID. Any access to a cache line within the page can use either the zero pattern or the page's pattern ID. If multiple virtual pages are mapped to the same physical page, the OS must ensure that the same alternate pattern ID is used for all mappings. Before fetching a cache line from DRAM with a pattern, the memory controller need only look for dirty cache lines from the other pattern. Since all these cache lines belong to the same DRAM row, this operation is fast and can be accelerated using simple structures like the Dirty-Block Index (described in Chapter~\ref{chap:dbi} of this thesis). Similarly, when the processor needs to modify a shared cache line, our mechanism piggybacks the other pattern ID of the page along with the \emph{read-exclusive} coherence request.
Each cache controller then locally invalidates the cache lines from the other pattern ID that overlap with the cache line being modified. For \gsdramp{c}{*}{*}, our mechanism requires $c$ additional invalidations for each read-exclusive request. \subsection{Instruction Set Architecture Support} \label{sec:isa} To enable software to communicate strided access patterns to the processor, we introduce new variants of the \texttt{load} and \texttt{store} instructions, called \texttt{pattload} and \texttt{pattstore}, that allow the code to specify the pattern ID. These instructions take the following form:\\ \centerline{\texttt{pattload reg, addr, patt}} \centerline{\texttt{pattstore reg, addr, patt}} where \texttt{reg} is the source or destination register (depending on the instruction type), \texttt{addr} is the address of the data, and \texttt{patt} is the pattern ID. To execute a \texttt{pattload} or \texttt{pattstore}, the processor first splits the \texttt{addr} field into two parts: the cache line address (\texttt{caddr}), and the offset within the cache line (\texttt{offset}). Then the processor sends out a request for the cache line with address-pattern combination (\texttt{caddr}, \texttt{patt}). If the cache line is present in the on-chip cache, it is sent to the processor. Otherwise, the request reaches the memory controller. The memory controller identifies the row address and the column address from \texttt{caddr} and issues a \texttt{READ}\xspace command for a cache line with pattern ID \texttt{patt}. If the memory controller interleaves cache lines across multiple channels (or ranks), then it must access the corresponding cache line within each channel (or rank) and interleave the data from different channels appropriately before obtaining the required cache line. The cache line is then stored in the on-chip cache and is also sent to the processor.
After receiving the cache line, the processor reads or updates the data at the \texttt{offset} to or from the destination or source register (\texttt{reg}). Note that architectures like x86 allow instructions to directly operate on memory by using different addressing modes to specify memory operands~\cite{x86-addressing}. For such architectures, common addressing modes may be augmented with a pattern ID field, or instruction prefixes may be employed to specify the pattern. \subsection{System and Application Software Support} \label{sec:gsdram-software} Our mechanism requires two pieces of information from the software for each data structure: 1)~whether the data structure requires the memory controller to use the shuffling mechanism (Section~\ref{sec:shuffling}) (we refer to this as the \emph{shuffle flag}), and 2)~the alternate pattern ID (Section~\ref{sec:pattern-id}) with which the application will access the data structure. To enable the application to specify this information, we propose a new variant of the \texttt{malloc} system call, called \texttt{pattmalloc}, which includes two additional parameters: the shuffle flag, and the pattern ID. When the OS allocates virtual pages for a \texttt{pattmalloc}, it also updates the page tables with the shuffle flag and the alternate pattern ID for those pages. Once the data structure is allocated with \texttt{pattmalloc}, the application can use the \texttt{pattload} or \texttt{pattstore} instruction to access the data structure efficiently with both the zero pattern and the alternate access pattern. While we can envision automating this process using a compiler optimization, we do not explore that path in this thesis. Figure~\ref{fig:example-code} shows an example piece of code before and after our optimization. The original code (line 5) allocates an array of 512 objects (each object with eight 8-byte fields) and computes the sum of the first field of all the objects (lines 8 and 9). 
The figure highlights the key benefit of our approach. \begin{figure}[h] \centering \includegraphics[scale=1]{gsdram/figures/example-code} \caption[Optimizing programs for {\sffamily{GS-DRAM}}\xspace]{Example code without and with our optimization.} \label{fig:example-code} \end{figure} In the program without our optimization (Figure~\ref{fig:example-code}, left), each iteration of the loop (line 9) fetches a different cache line. As a result, the entire loop accesses 512 different cache lines. On the other hand, with our optimization (Figure~\ref{fig:example-code}, right), the program first allocates memory for the array using \texttt{pattmalloc} (line 5), with the shuffle flag enabled and an alternate pattern ID = 7 (i.e., stride of 8). The program then breaks the loop into two parts. Each iteration of the outer loop (line 8) fetches a single strided cache line that contains only values from the first field. The loop skips the other fields (\texttt{i += 8}). The inner loop (lines 9-11) iterates over values within each strided cache line. In the first iteration of the inner loop, the \texttt{pattload} instruction with pattern ID 7 fetches a cache line with a stride of 8. As a result, the remaining seven iterations of the inner loop result in cache hits. Consequently, with our optimization, the entire loop accesses only 64 cache lines. As we will show in our evaluations, this reduction in the number of accessed cache lines directly translates to reduction in latency, bandwidth consumption, and cache capacity consumption, thereby improving overall performance. \subsection{Hardware Cost} \label{sec:cost} In this section, we quantify the changes required by {\sffamily{GS-DRAM}}\xspace, specifically \gsdramp{8}{3}{3} (Section~\ref{sec:gsdramp}), to various hardware components. On the DRAM side, first, our mechanism requires the addition of the column translation logic (CTL) for each DRAM chip. 
Each CTL requires a 3-bit register for the \texttt{Chip ID}, a 3-bit bitwise AND gate, a 3-bit bitwise XOR gate and a 3-bit bitwise multiplexer. Even for a commodity DRAM module with 8 chips, the overall cost is roughly 72 logic gates and 24 bits of register storage, which is negligible compared to the logic already present in a DRAM module. Second, our mechanism requires a few additional pins on the DRAM interface to communicate the pattern ID. However, existing DRAM interfaces already have some spare address bits, which can be used to communicate part of the pattern ID. Using this approach, a 3-bit pattern ID requires only one additional pin for DDR4~\cite{ddr4}. On the processor side, first, our mechanism requires the controller to implement the shuffling logic. Second, our mechanism augments each cache tag entry with the pattern ID. Each page table entry and TLB entry stores the shuffle flag and the alternate pattern ID for the corresponding page (Section~\ref{sec:on-chip-cache}). For a 3-bit pattern ID, the cost of this addition is less than 0.6\% of the cache size. Finally, the processor must implement the \texttt{pattload} and \texttt{pattstore} instructions, and the state machine for invalidating additional cache lines on read-exclusive coherence requests. The operation of \texttt{pattload}/\texttt{pattstore} is not very different from that of a regular \texttt{load}/\texttt{store} instruction. Therefore, we expect the implementation of these new instructions to be simple. Similarly, on a write, our mechanism has to check only eight cache lines (for {\sffamily{GS-DRAM}}\xspace with 8 chips) for possible overlap with the modified cache line. Therefore, we expect the invalidation state machine to be relatively simple. Note that a similar state machine has been used to keep data coherent in a virtually-indexed physically-tagged cache in the presence of synonyms~\cite{alpha-21264}. 
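The column translation just described can be modeled in a few lines. The sketch below assumes the \gsdramp{8}{3}{3} configuration (3-bit chip ID and 3-bit pattern ID) and the AND/XOR structure described above; it is an illustrative model of the logic, not an exact RTL description.

```python
BITS = 3                  # chip ID, pattern ID, and column modifier width
MASK = (1 << BITS) - 1

def ctl_column(chip_id, pattern_id, col_id):
    """Column accessed by one chip: the pattern ID is ANDed with the
    chip ID to form a modifier, which is XORed into the column ID."""
    modifier = (chip_id & pattern_id) & MASK
    return col_id ^ modifier
```

With pattern ID 0, every chip reads the same column (the default contiguous cache line); with pattern ID 7, chip $i$ reads column $\mathit{col} \oplus i$, so the eight chips together gather one value from each of eight different columns, which, combined with the controller-side shuffling, realizes the stride-8 gather used in the earlier example.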
\section{Extensions to {\sffamily{GS-DRAM}}\xspace} \label{sec:gsdram-extensions} In this section, we describe three simple extensions to {\sffamily{GS-DRAM}}\xspace: 1)~programmable shuffling, 2)~wider pattern IDs, and 3)~intra-chip column translation. These extensions (together or individually) allow {\sffamily{GS-DRAM}}\xspace to 1)~express more patterns (e.g., larger strides), 2)~gather or scatter data at a granularity smaller than 8 bytes, and 3)~enable ECC support. \subsection{Programmable Shuffling} \label{sec:programmable-shuffling} Although our shuffling mechanism uses the least significant bits of the column ID to control the shuffling stages, there are two simple ways of explicitly controlling which shuffling stages are active. First, we can use a \emph{shuffle mask} to disable some stages. For example, the shuffle mask \texttt{10} disables swapping of adjacent values (Figure~\ref{fig:shuffling}, Stage 1). Second, instead of using the least significant bits to control the shuffling stages, we can choose different combinations of bits (e.g., XOR of multiple sets of bits~\cite{power-of-2,xor-schemes}). To enable programmable shuffling, we add another parameter to {\sffamily{GS-DRAM}}\xspace called the \emph{shuffling function}, $f$. For \gsdrampf{c}{s}{p}{f}, the function $f$ takes a column ID as input and generates an $n$-bit value that is used as the control input to the $n$ shuffling stages. The function $f$ can be application-specific, thereby optimizing {\sffamily{GS-DRAM}}\xspace for each application. \subsection{Wider Pattern IDs} \label{sec:wider-pattern} A wider pattern ID allows the memory controller to express more access patterns, albeit at additional cost. However, the column translation logic (CTL) performs a bitwise AND of the chip ID and the pattern ID to create a modifier for the column address. As a result, even if we use a wide pattern ID, a small chip ID disables the higher order bits of the pattern ID.
Specifically, for \gsdramp{c}{*}{p}, if $p > \log c$, the CTL uses only the least significant $\log c$ bits of the pattern ID. To enable wider pattern IDs, we propose to simply widen the chip ID used by the CTL by repeating the physical chip ID multiple times. For instance, with 8 chips and a 6-bit pattern ID, the chip ID used by CTL for chip $3$ will be \texttt{011-011} (i.e., \texttt{011} repeated twice). With this simple extension, {\sffamily{GS-DRAM}}\xspace can enable more access patterns (e.g., larger strides). \subsection{Intra-Chip Column Translation} \label{sec:intra-chip-gather} Although we have assumed that each DRAM bank has a single wide row-buffer, in reality, each DRAM bank is a 2-D collection of multiple small \emph{tiles} or MATs~\cite{rethinking-dram,half-dram,salp}. Similar to how each chip within a rank contributes 64 bits to each cache line, each tile contributes equally to the 64 bits of data supplied by each chip. We can use the column translation logic within each DRAM chip to select different columns from different tiles for a single \texttt{READ}\xspace or \texttt{WRITE}\xspace. This mechanism has two benefits. First, with the support for intra-chip column translation, we can gather access patterns at a granularity smaller than 8 bytes. Second, with DIMMs that support ECC, {\sffamily{GS-DRAM}}\xspace may incur additional bandwidth to read all the required ECC values for non-zero patterns. However, if we use a chip that supports intra-chip column selection for ECC, accesses with non-zero patterns can gather the data from the eight data chips and gather the ECC from the eight tiles within the ECC chip, thereby seamlessly supporting ECC for all access patterns. \chapter{Gather-Scatter DRAM} \label{chap:gsdram} \begin{figure}[b!] 
\hrule\vspace{2mm} \begin{footnotesize} Originally published as ``Gather-Scatter DRAM: In-DRAM Address Translation to Improve the Spatial Locality of Non-unit Strided Accesses'' in the International Symposium on Microarchitecture, 2015~\cite{gsdram} \end{footnotesize} \end{figure} In this chapter, we shift our focus to the problem of non-unit strided access patterns. As described in Section~\ref{sec:non-unit-stride-problem}, such access patterns present themselves in many important applications such as in-memory databases, scientific computation, etc. As illustrated in that section, non-unit strided access patterns exhibit low spatial locality. In existing memory systems that are optimized to access and store wide cache lines, such access patterns result in high inefficiency. The problem presents itself at two levels. First, commodity DRAM modules are designed to supply wide contiguous cache lines. As a result, the cache lines fetched by the memory controller are only partially useful---i.e., they contain many values that do not belong to the strided access pattern. This results in both high latency and wasted memory bandwidth. Second, modern caches are optimized to store cache lines. Consequently, even the caches have to store values that do not belong to the strided access. Besides resulting in inefficient use of the on-chip cache space, this also negatively affects SIMD optimizations on strided data. The application must first gather (using software or hardware) the values of the strided access into a single vector register before it can perform any SIMD operation. Unfortunately, this gather involves multiple physical cache lines, and hence is a long-latency operation. Given the importance of strided access patterns, several prior works (e.g., Impulse~\cite{impulse,impulse-journal}, Adaptive/Dynamic Granularity Memory Systems~\cite{agms,dgms}) have proposed solutions to improve the performance of strided accesses.
Unfortunately, prior works~\cite{impulse,agms,dgms} require the off-chip memory interface to support fine-grained memory accesses~\cite{mini-rank,mc-dimm,mc-dimm-cal,threaded-module,sg-dimm} and, in some cases, a sectored cache~\cite{sectored-cache,dscache}. These approaches significantly increase the cost of the memory interface and the cache tag store, and potentially lower the utilization of off-chip memory bandwidth and on-chip cache space. We discuss these prior works in more detail in Section~\ref{sec:gsdram-prior-work}. Our goal is to design a mechanism that 1)~improves the performance (cache hit rate and memory bandwidth consumption) of strided accesses, and 2)~works with commodity DRAM modules and traditional non-sectored caches with very few changes. To this end, we first restrict our focus to power-of-2 strided access patterns and propose the Gather-Scatter DRAM ({\sffamily{GS-DRAM}}\xspace), a substrate that allows the memory controller to gather or scatter data with any power-of-2 stride efficiently with very few changes to the DRAM module. In the following sections, we describe {\sffamily{GS-DRAM}}\xspace in detail. \input{gsdram/mechanism} \input{gsdram/end-to-end} \input{gsdram/applications} \input{gsdram/extensions} \input{gsdram/prior-work} \input{gsdram/summary} \section{The Gather-Scatter DRAM} \label{sec:gsdram-mechanism} For the purpose of understanding the problems in existing systems and understanding the challenges in designing {\sffamily{GS-DRAM}}\xspace, we use the following database example. The database consists of a single table with four fields. We assume that each tuple of the database fits in a cache line. Figure~\ref{fig:matrix} illustrates the two problems of accessing just the first field of the table. As shown in the figure, this access results in high latency, wasted bandwidth, and wasted cache space.
\begin{figure}[h] \centering \includegraphics{gsdram/figures/matrix} \caption[Shortcomings of strided access pattern]{Problems in accessing the first field (shaded boxes) from a table in a cache-line-optimized memory system. The box ``ij'' corresponds to the j$^{th}$ field of the i$^{th}$ tuple.} \label{fig:matrix} \end{figure} Our \textbf{goal} is to design a DRAM substrate that will enable the processor to access a field of the table (stored in tuple-major order) across all tuples, without incurring the penalties of existing interfaces. More specifically, if the memory controller wants to read the first field of the first four tuples of the table, it must be able to issue a \emph{single command} that fetches the following gathered cache line: \tikz[baseline={([yshift=-8pt]current bounding box.north)}]{ \tikzset{col1/.style={draw,minimum height=2mm, minimum width=2mm,scale=0.8,inner sep=2pt, anchor=west, fill=black!30, rounded corners=2pt,outer sep=-0pt,xshift=1mm}}; \node (v1) [col1] {\sffamily 00}; \node (v2) at (v1.east) [col1] {\sffamily 10}; \node (v3) at (v2.east) [col1] {\sffamily 20}; \node (v4) at (v3.east) [col1] {\sffamily 30};}. At the same time, the controller must be able to read a tuple from memory (e.g., \tikz[baseline={([yshift=-8pt]current bounding box.north)}]{ \tikzset{col1/.style={draw,minimum height=2mm, minimum width=2mm,inner sep=2pt, anchor=west, fill=black!30, rounded corners=2pt,outer sep=0pt,scale=0.8}}; \tikzset{rest/.style={draw,minimum height=2mm, minimum width=5mm,inner sep=2pt, anchor=west, fill=white,rounded corners=2pt,outer sep=0pt,xshift=1mm, scale=0.8}}; \node (v1) [col1] {\sffamily 00}; \node (v2) at (v1.east) [rest] {\sffamily 01}; \node (v3) at (v2.east) [rest] {\sffamily 02}; \node (v4) at (v3.east) [rest] {\sffamily 03}; }) with a single command. Our mechanism is based on the fact that modern DRAM modules consist of many DRAM chips. 
As described in Section~\ref{sec:dram-module}, to achieve high bandwidth, multiple chips are grouped together to form a rank, and all chips within a rank operate in unison. Our \textbf{idea} is to enable the controller to access multiple values of a strided access from \emph{different} chips within the rank with a single command. However, there are two challenges in implementing this idea. For the purpose of describing the challenges and our mechanism, we assume that each rank consists of four DRAM chips. However, {\sffamily{GS-DRAM}}\xspace is general and can be extended to any rank with a power-of-2 number of DRAM chips. \subsection{Challenges in Designing {\sffamily{GS-DRAM}}\xspace} \label{sec:gsdram-challenges} Figure~\ref{fig:gsdram-challenges} shows the two challenges. We assume that the first four tuples of the table are stored from the beginning of a DRAM row. Since each tuple maps to a single cache line, the data of each tuple is split across all four chips. The mapping between different segments of the cache line and the chips is controlled by the memory controller. Based on the mapping scheme described in Section~\ref{sec:dram-module}, the $i^{th}$ 8 bytes of each cache line (i.e., the $i^{th}$ field of each tuple) is mapped to the $i^{th}$ chip. \begin{figure}[h] \centering \includegraphics{gsdram/figures/challenges} \caption[{\sffamily{GS-DRAM}}\xspace: Challenges]{The two challenges in designing {\sffamily{GS-DRAM}}\xspace.} \label{fig:gsdram-challenges} \end{figure} \noindent\textbf{\sffamily Challenge 1:} \emph{Reducing chip conflicts}. The simple mapping mechanism maps the first field of \emph{all} the tuples to Chip 0. Since each chip can send out only one field (8 bytes) per \texttt{READ}\xspace operation, gathering the first field of the four tuples will necessarily require four {\texttt{READ}\xspace}s. In a general scenario, different pieces of data that are required by a strided access pattern will be mapped to different chips.
When two such pieces of data are mapped to the same chip, it results in what we call a \emph{chip conflict}. Chip conflicts increase the number of {\texttt{READ}\xspace}s required to gather all the values of a strided access pattern. Therefore, we have to map the data structure to the chips in a manner that minimizes the number of chip conflicts for target access patterns. \noindent\textbf{\sffamily Challenge 2:} \emph{Communicating the access pattern to the module}. As shown in Figure~\ref{fig:gsdram-challenges}, when a column command is sent to a rank, all the chips select the \emph{same} column from the activated row and send out the data. If the memory controller needs to access the first tuple of the table and the first field of the four tuples each with a \emph{single} \texttt{READ}\xspace operation, we need to break this constraint and allow the memory controller to potentially read \emph{different} columns from different chips using a single \texttt{READ}\xspace command. One naive way of achieving this flexibility is to use multiple address buses, one for each chip. Unfortunately, this approach is very costly as it significantly increases the pin count of the memory channel. Therefore, we need a simple and low cost mechanism to allow the memory controller to efficiently communicate different access patterns to the DRAM module. In the following sections, we propose a simple mechanism to address the above challenges with specific focus on power-of-2 strided access patterns. While non-power-of-2 strides (e.g., odd strides) pose some additional challenges (e.g., alignment), a similar approach can be used to support non-power-of-2 strides as well. \subsection{Column ID-based Data Shuffling} \label{sec:shuffling} To address challenge 1, i.e., to minimize chip conflicts, the memory controller must employ a mapping scheme that distributes data of each cache line to different DRAM chips with the following three goals. 
First, the mapping scheme should be able to minimize chip conflicts for a number of access patterns. Second, the memory controller must be able to succinctly communicate an access pattern along with a column command to the DRAM module. Third, once the different parts of the cache line are read from different chips, the memory controller must be able to quickly assemble the cache line. Unfortunately, these goals are conflicting. While a simple mapping scheme enables the controller to assemble a cache line by concatenating the data received from different chips, this scheme incurs a large number of chip conflicts for many frequently occurring access patterns (e.g., any power-of-2 stride $>$ 1). On the other hand, pseudo-random mapping schemes~\cite{pr-interleaving} potentially incur a small number of conflicts for almost any access pattern. Unfortunately, such pseudo-random mapping schemes have two shortcomings. First, for any cache line access, the memory controller must compute which column of data to access from each chip and communicate this information to the chips along with the column command. With a pseudo-random interleaving, this communication may require a separate address bus for each chip, which would significantly increase the cost of the memory channel. Second, after reading the data, the memory controller must spend more time assembling the cache line, increasing the overall latency of the \texttt{READ}\xspace operation. We propose a simple \emph{column ID-based data shuffling} mechanism that achieves a sweet spot by restricting our focus to power-of-2 strided access patterns. Our shuffling mechanism is similar to a butterfly network~\cite{butterfly}, and is implemented in the memory controller. To map the data of the cache line with column address $C$ to different chips, the memory controller inspects the $n$ \emph{least significant bits} (LSB) of $C$. Based on these $n$ bits, the controller uses $n$ stages of shuffling.
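This stage-wise shuffle can be modeled with a short Python sketch (an illustrative model of the controller-side logic, not the hardware implementation; stage $i$ is active when bit $i$ of the column ID is set and swaps adjacent blocks of $2^i$ values):

```python
def shuffle(values, col_id):
    """Map the values of one cache line to chip slots, controlled by the
    n least significant bits of the column ID."""
    n = (len(values) - 1).bit_length()  # number of shuffling stages
    vals = list(values)
    for stage in range(n):
        if (col_id >> stage) & 1:       # stage i is controlled by bit i
            blk = 1 << stage            # block size swapped at this stage
            for g in range(0, len(vals), 2 * blk):
                vals[g:g + blk], vals[g + blk:g + 2 * blk] = \
                    vals[g + blk:g + 2 * blk], vals[g:g + blk]
    return vals

# Column ID 3 activates both stages for a 4-value cache line:
print(shuffle(["00", "01", "02", "03"], 3))  # ['03', '02', '01', '00']
```

Note that the net effect is an XOR permutation: value $j$ of the cache line with column ID $c$ lands on chip $j \oplus (c \bmod 2^{n})$, which is what makes the later per-chip column translation so simple.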
Figure~\ref{fig:shuffling} shows an example of a 2-stage shuffling mechanism. In Stage 1 (Figure~\ref{fig:shuffling}), if the LSB is set, our mechanism groups adjacent 8-byte values in the cache line into pairs and swaps the values within each pair. In Stage 2 (Figure~\ref{fig:shuffling}), if the second LSB is set, our mechanism groups the 8-byte values in the cache line into quadruplets, and swaps the adjacent \emph{pairs} of values. The mechanism proceeds similarly at the higher stages, doubling the size of the group of values swapped in each higher stage. The shuffling mechanism can be enabled only for those data structures that require it. Section~\ref{sec:gsdram-software} discusses this in more detail. \begin{figure}[h] \centering \includegraphics[scale=0.95]{gsdram/figures/shuffling} \caption[Column-ID based data shuffling]{2-stage shuffling mechanism that maps different 8-byte values within a cache line to a DRAM chip. For each mux, \texttt{0} selects the vertical input, and \texttt{1} selects the cross input.} \label{fig:shuffling} \end{figure} With this simple multi-stage shuffling mechanism, the memory controller can map data to DRAM chips such that \emph{any} power-of-2 strided access pattern incurs minimal chip conflicts for values within a single DRAM row. \subsection{Pattern ID: Low-cost Column Translation} \label{sec:pattern-id} The second challenge is to enable the memory controller to flexibly access different column addresses from different DRAM chips using a single \texttt{READ}\xspace command. To this end, we propose a mechanism wherein the controller associates a \emph{pattern ID} with each access pattern. It provides this pattern ID with each column command. Each DRAM chip then independently computes a new column address based on 1)~the issued column address, 2)~the chip ID, and 3)~the pattern ID. We refer to this mechanism as \emph{column translation}. Figure~\ref{fig:ctl} shows the column translation logic for a single chip.
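As a sketch, the per-chip translation, which computes \mbox{(\texttt{Chip ID} \& \texttt{Pattern ID}) $\oplus$ \texttt{Column ID}}, can be written as a one-line Python function (in hardware this is just an AND and an XOR per chip):

```python
def translate(chip_id, pattern_id, col_id):
    # Per-chip column translation: (Chip ID AND Pattern ID) XOR Column ID.
    return (chip_id & pattern_id) ^ col_id

# Pattern 0 (the default pattern): every chip reads the issued column.
print([translate(chip, 0, 2) for chip in range(4)])  # [2, 2, 2, 2]

# Pattern 3 with column 0: chip i reads column i (a stride-4 gather).
print([translate(chip, 3, 0) for chip in range(4)])  # [0, 1, 2, 3]
```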
As shown in the figure, our mechanism requires only two bitwise operations per chip to compute the new column address. More specifically, the output column address for each chip is given by \mbox{(\texttt{Chip ID} \& \texttt{Pattern ID}) $\oplus$ \texttt{Column ID}}, where \texttt{Column ID} is the column address provided by the memory controller. In addition to the logic to perform these simple bitwise operations, our mechanism requires 1)~a register per chip to store the chip ID, and 2)~a multiplexer to enable the address translation only for column commands. While our column translation logic can be combined with the column selection logic already present within each chip, our mechanism can also be implemented within the DRAM module with \emph{no} changes to the DRAM chips. \begin{figure}[h] \centering \includegraphics[scale=1]{gsdram/figures/ctl} \caption[In-DRAM column translation logic]{Column Translation Logic (CTL). Each chip has its own CTL. The CTL can be implemented in the DRAM module (as shown in Figure~\ref{fig:overview}). Each logic gate performs a bitwise operation of the input values.} \label{fig:ctl} \end{figure} Combining this pattern-ID-based column translation mechanism with the column-ID-based data shuffling mechanism, the memory controller can gather or scatter any power-of-2 strided access pattern with no waste in memory bandwidth. \subsection{{\sffamily{GS-DRAM}}\xspace: Putting It All Together} \label{sec:overview} Figure~\ref{fig:overview} shows the full overview of our {\sffamily{GS-DRAM}}\xspace substrate. The figure shows how the first four tuples of our example table are mapped to the DRAM chips using our data shuffling mechanism. The first tuple (column ID $=$ 0) undergoes no shuffling as the two LSBs of the column ID are both 0 (see Figure~\ref{fig:shuffling}). For the second tuple (column ID $=$ 1), the adjacent values within each pair are swapped (Figure~\ref{fig:shuffling}, Stage 1).
Similarly, for the third tuple (column ID $=$ 2), adjacent pairs of values are swapped (Figure~\ref{fig:shuffling}, Stage 2). For the fourth tuple (column ID $=$ 3), since the two LSBs of the column ID are both 1, both stages of the shuffling scheme are enabled (Figure~\ref{fig:shuffling}, Stages 1 and 2). As shown in the shaded boxes in Figure~\ref{fig:overview}, the first field of the four tuples (i.e., \tikz[baseline={([yshift=-8pt]current bounding box.north)}]{ \tikzset{col1/.style={draw,minimum height=2mm, minimum width=2mm,inner sep=2pt, anchor=west, fill=black!30, rounded corners=2pt,outer sep=0pt,scale=0.8,xshift=1mm}}; \tikzset{rest/.style={draw,minimum height=2mm, minimum width=5mm,inner sep=2pt, anchor=west, fill=white,rounded corners=2pt,outer sep=0pt,xshift=1mm, scale=0.8}}; \node (v1) [col1] {\sffamily 00}; \node (v2) at (v1.east) [col1] {\sffamily 10}; \node (v3) at (v2.east) [col1] {\sffamily 20}; \node (v4) at (v3.east) [col1] {\sffamily 30}; }) are mapped to \emph{different} chips, allowing the memory controller to read them with a single \texttt{READ}\xspace command. The same is true for the other fields of the table as well (e.g., \tikz[baseline={([yshift=-8pt]current bounding box.north)}]{ \tikzset{col1/.style={draw,minimum height=2mm, minimum width=2mm,inner sep=2pt, anchor=west, fill=black!30, rounded corners=2pt,outer sep=0pt,scale=0.8}}; \tikzset{rest/.style={draw,minimum height=2mm, minimum width=5mm,inner sep=2pt, anchor=west, fill=white,rounded corners=2pt,outer sep=0pt,xshift=1mm, scale=0.8}}; \node (v1) [rest] {\sffamily 01}; \node (v2) at (v1.east) [rest] {\sffamily 11}; \node (v3) at (v2.east) [rest] {\sffamily 21}; \node (v4) at (v3.east) [rest] {\sffamily 31}; }). \begin{figure}[h] \centering \includegraphics[scale=1]{gsdram/figures/overview} \caption[{\sffamily{GS-DRAM}}\xspace Overview]{{\sffamily{GS-DRAM}}\xspace Overview.
CTL-i is the column translation logic with Chip ID $=$ i (as shown in Figure~\ref{fig:ctl}).} \label{fig:overview} \end{figure} The figure also shows the per-chip column translation logic. To read a specific tuple from the table, the memory controller simply issues a \texttt{READ}\xspace command with pattern ID $=$ 0 and an appropriate column address. For example, when the memory controller issues the \texttt{READ}\xspace for column ID 2 and pattern 0, the four chips return the data corresponding to the columns (2 2 2 2), which is the data in the third tuple of the table (i.e., \tikz[baseline={([yshift=-8pt]current bounding box.north)}]{ \tikzset{col1/.style={draw,minimum height=2mm, minimum width=2mm,inner sep=2pt, anchor=west, fill=black!30, rounded corners=2pt,outer sep=0pt,scale=0.8}}; \tikzset{rest/.style={draw,minimum height=2mm, minimum width=5mm,inner sep=2pt, anchor=west, fill=white,rounded corners=2pt,outer sep=0pt,xshift=1mm, scale=0.8}}; \node (v1) [rest] {\sffamily 22}; \node (v2) at (v1.east) [rest] {\sffamily 23}; \node (v3) at (v2.east) [rest] {\sffamily 20}; \node (v4) at (v3.east) [rest] {\sffamily 21}; }). In other words, pattern ID 0 allows the memory controller to perform the default read operation. Hence, we refer to pattern ID 0 as the \emph{default pattern}. On the other hand, if the memory controller issues a \texttt{READ}\xspace for column ID 0 and pattern 3, the four chips return the data corresponding to columns (0 1 2 3), which precisely maps to the first field of the table. Similarly, the other fields of the first four tuples can be read from the database by varying the column ID with pattern 3. \subsection{{\sffamily{GS-DRAM}}\xspace Parameters} \label{sec:gsdramp} {\sffamily{GS-DRAM}}\xspace has three main parameters: 1)~the number of chips in each module, 2)~the number of shuffling stages in the data shuffling mechanism, and 3)~the number of bits of pattern ID. 
While the number of chips determines the size of each cache line, the other two parameters determine the set of access patterns that can be efficiently gathered by {\sffamily{GS-DRAM}}\xspace. We use the term \gsdramp{c}{s}{p} to denote a {\sffamily{GS-DRAM}}\xspace with $c$ chips, $s$ stages of shuffling, and $p$ bits of pattern ID. Figure~\ref{fig:patterns} shows all cache lines that can be gathered by \gsdramp{4}{2}{2}, with the four possible patterns for column IDs 0 through 3. For each pattern ID and column ID combination, the figure shows the index of the four values within the row buffer that are retrieved from the DRAM module. As shown, pattern 0 retrieves contiguous values. Pattern 1 retrieves every other value (stride = 2). Pattern 2 has a dual stride of (1,7). Pattern 3 retrieves every 4th value (stride = 4). In general, pattern $2^k - 1$ gathers data with a stride of $2^k$. \begin{figure}[h] \centering \includegraphics[scale=0.9]{gsdram/figures/patterns} \caption[Patterns gathered by \gsdramp{4}{2}{2}]{Cache lines gathered by \gsdramp{4}{2}{2} for all patterns for column IDs 0--3. Each circle contains the index of the 8-byte value inside the logical row buffer.} \label{fig:patterns} \end{figure} While we showed a use case for pattern 3 (in our example), we envision use cases for other patterns as well. Pattern 1, for instance, can be useful for data structures like key-value stores. Assuming an 8-byte key and an 8-byte value, the cache line (\texttt{Patt 0, Col 0}) corresponds to the first two key-value pairs. However, the cache line (\texttt{Patt 1, Col 0}) corresponds to the first four keys, and (\texttt{Patt 1, Col 1}) corresponds to the first four values. Similarly, pattern 2 can be used to fetch odd-even pairs of fields from a data structure where each object has 8 fields. Our mechanism is general.
For instance, with \gsdramp{8}{3}{3} (i.e., 8 chips, 3 shuffling stages, and 3-bit pattern ID), the memory controller can access data with seven different patterns. Section~\ref{sec:gsdram-extensions} discusses other simple extensions to our approach to enable more fine-grained gather access patterns, and larger strides. \subsection{Ease of Implementing {\sffamily{GS-DRAM}}\xspace} \label{sec:benefits} In Section~\ref{sec:applications}, we will show that {\sffamily{GS-DRAM}}\xspace has compelling performance and energy benefits compared to existing DRAM interfaces. These benefits are augmented by the fact that {\sffamily{GS-DRAM}}\xspace is simple to implement. First, our data shuffling mechanism is simple and has low latency. Each stage involves only data swapping and takes at most one processor cycle. Our evaluations use \gsdramp{8}{3}{3}, thereby incurring 3 cycles of additional latency to shuffle/unshuffle data for each DRAM write/read. Second, for \gsdramp{*}{*}{p}, the column translation logic requires only two $p$-bit bitwise operations, a $p$-bit register to store the chip ID, and a $p$-bit multiplexer. In fact, this mechanism can be implemented as part of the DRAM module \emph{without} any changes to the DRAM chips themselves. Third, {\sffamily{GS-DRAM}}\xspace requires the memory controller to communicate only $p$ bits of pattern ID to the DRAM module, adding only a few pins to each channel. In fact, the column command in existing DDR DRAM interfaces already has a few spare address pins that can potentially be used by the memory controller to communicate the pattern ID (e.g., DDR4 has two spare address pins for column commands~\cite{ddr4}). \section{Prior Work} \label{sec:gsdram-prior-work} Carter et al.~\cite{impulse} propose Impulse, a mechanism to export gather operations to the memory controller. In their system, applications specify a \emph{gather mapping} to the memory controller (with the help of the OS).
To perform a gather access, the controller assembles a cache line with only the values required by the access pattern and sends the cache line to the processor, thereby reducing the bandwidth between the memory controller and the processor. Impulse has two shortcomings. First, with commodity DRAM modules, which are optimized for accessing cache lines, Impulse cannot mitigate the wasted memory bandwidth between the memory controller and DRAM. Impulse requires a memory interface that supports fine-grained accesses (e.g.,~\cite{mini-rank,mc-dimm,mc-dimm-cal,threaded-module,sg-dimm}), which significantly increases the system cost. Second, Impulse punts the problem of maintaining cache coherence to software. In contrast, {\sffamily{GS-DRAM}}\xspace 1)~works with commodity modules with very few changes, and 2)~provides coherence of gathered cache lines transparently to software. Yoon et al.~\cite{dgms,agms} propose the Dynamic Granularity Memory System (DGMS), a memory interface that allows the memory controller to dynamically change the granularity of memory accesses in order to avoid unnecessary data transfers for accesses with low spatial locality. Similar to Impulse, DGMS requires a memory interface that supports fine-grained memory accesses (e.g.,~\cite{mini-rank,mc-dimm,mc-dimm-cal,threaded-module,sg-dimm}) and a sectored cache~\cite{sectored-cache,dscache}. In contrast, {\sffamily{GS-DRAM}}\xspace works with commodity DRAM modules and conventionally-used non-sectored caches with very few changes. Prior works (e.g.,~\cite{stride-prefetching,stride-prefetching-2,stride-stream-buffer,stride-pref-fp,fdp,ghb}) propose prefetching for strided accesses. While prefetching reduces the latency of such accesses, it does not avoid the waste in memory bandwidth and cache space. He et al.~\cite{gpu-gather-scatter} propose a model to analyze the performance of gather-scatter accesses on a GPU.
To improve cache locality, their model splits gather-scatter loops into multiple passes such that each pass accesses only a small group of values that fits in the cache. This mechanism works only when multiple values are \emph{actually} reused by the application. In contrast, {\sffamily{GS-DRAM}}\xspace fetches \emph{only} useful values from DRAM, thereby achieving better memory bandwidth and cache utilization. \section{Summary} \label{sec:gsdram-summary} In this chapter, we introduced \emph{Gather-Scatter DRAM}, a low-cost substrate that enables the memory controller to efficiently gather or scatter data with different non-unit strided access patterns. Our mechanism exploits the fact that multiple DRAM chips contribute to each cache line access. {\sffamily{GS-DRAM}}\xspace maps values accessed by different strided patterns to different chips, and uses a per-chip column translation logic to access data with different patterns using significantly fewer memory accesses than existing DRAM interfaces. Our framework requires no changes to commodity DRAM chips, and very few changes to the DRAM module, the memory interface, and the processor architecture. Our evaluations show that {\sffamily{GS-DRAM}}\xspace provides the best of both the row store and the column store layouts for a number of in-memory database workloads, and outperforms the best tiled layout on a well-optimized matrix-matrix multiplication workload. Our framework can benefit many other modern data-intensive applications like key-value stores and graph processing. We conclude that the {\sffamily{GS-DRAM}}\xspace framework is a simple and effective way to improve the performance of non-unit strided and gather/scatter memory accesses. \chapter{Introduction} \label{chap:introduction} In recent years, energy-efficiency has become a major design factor in systems.
This trend is fueled by the ever-growing use of battery-powered hand-held devices at one end, and large-scale data centers at the other. To ensure high energy-efficiency, all the resources in a system (e.g., processor, caches, memory) must be used efficiently. To simplify system design, each resource typically exposes an interface/abstraction to other resources in the system. Such abstractions allow system designers to adopt newer technologies to implement a resource \emph{without} modifying other resources. However, a \emph{poor abstraction} of a resource that does not expose all its capabilities can significantly limit the overall efficiency of the system. \section{Focus of This Dissertation: The Memory Subsystem} This dissertation focuses on the efficiency of the memory subsystem. Main memory management in a modern system has two components: 1)~memory mapping (affects capacity management, protection, etc.), and 2)~memory access (reads, writes, etc.). We observe that in existing systems, there is a mismatch in the granularity at which memory is mapped and accessed at different resources, resulting in significant inefficiency. \Cref{fig:system-stack} shows the different layers of the system stack and their interaction with different memory resources. \begin{figure} \centering \includegraphics{introduction/figures/system-stack} \caption[Interaction with memory resources]{Layers of the system stack and their interaction with memory resources.} \label{fig:system-stack} \end{figure} \subsection{Different Granularities of Data Storage and Access} First, most modern operating systems (OS) ubiquitously use virtual memory~\cite{virtual-memory} to manage main memory capacity. To map virtual memory to physical memory, virtual memory systems use a set of mapping tables called \emph{page tables}. In order to keep the overhead of the page tables low, most virtual memory systems manage memory at a large granularity (\emph{4KB pages} or larger super pages).
Second, to access memory, the instruction set architecture (ISA) exposes a set of load and store instructions to the software stack. To allow efficient representation of various data types, such instructions typically allow software to access memory at a small granularity (e.g., \emph{4B or 8B words}). Third, any memory request generated by load/store instructions goes through a hierarchy of \emph{on-chip caches} all the way to the \emph{off-chip main memory}. In order to lower the cost of the cache tag stores and the memory interface, the on-chip caches and the off-chip memory interface are typically optimized to store and communicate data at a granularity wider than a single word (e.g., \emph{64B cache lines}). Finally, to reduce cost-per-bit, commodity DRAM architectures internally operate at a row granularity (typically \emph{8KB}). It is clear that data are stored and accessed at different granularities in different memory resources. We identify three problems that result from this mismatch in granularity. \subsection{The Curse of Multiple Granularities} First, we observe that page-granularity virtual memory management can result in unnecessary work. For instance, when using the copy-on-write technique, even a write to a single byte can trigger a full page copy operation. Second, existing off-chip memory interfaces only expose a read-write abstraction to main memory. As a result, to perform any operation, the processor must read all the source data from main memory and write back the results to main memory. For operations that involve a large amount of data, i.e., bulk data operations, this approach results in a large number of data transfers on the memory channel. Third, many access patterns trigger accesses with poor spatial locality (e.g., non-unit strided accesses).
With existing caches and off-chip interfaces optimized for cache line granularity, such access patterns fetch a large amount of data not required by the application over the memory channel and store them in the on-chip cache. All these problems result in high latency, high (and often unnecessary) memory bandwidth consumption, and inefficient cache utilization. As a result, they affect the performance of not only the application performing the operations, but also that of other co-running applications. Moreover, as data movement on the memory channel consumes high energy~\cite{bill-dally}, these operations also lower the overall energy-efficiency of the system. \Cref{chap:motivation} motivates these problems in more detail using case studies. \section{Related Work} Several prior works have proposed mechanisms to improve memory efficiency. In this section, we discuss some closely related prior approaches. We group prior works based on their high-level approach and describe their shortcomings. \subsection{New Virtual Memory Frameworks} Page-granularity virtual memory hinders efficient implementation of many techniques that require tracking memory at a fine granularity (e.g., fine-grained memory deduplication, fine-grained metadata management). Prior works have proposed new frameworks to implement such techniques (e.g., HiCAMP~\cite{hicamp}, Mondrian Memory Protection~\cite{mmp}, architectural support for shadow memory~\cite{shadow-memory,ems,shadow-mem-check,umbra,iwatcher}). Unfortunately, these mechanisms either significantly change the existing virtual memory structure, thereby resulting in high cost, or introduce significant changes solely for a specific functionality, thereby reducing overall value. \subsection{Adding Processing Logic Near Memory (DRAM)} One of the primary sources of memory inefficiency in existing systems is data movement. Data has to travel over off-chip buses and through multiple levels of caches before reaching the CPU.
To avoid this data movement, many works (e.g., Logic-in-Memory Computer~\cite{lim-computer}, NON-VON Database Machine~\cite{non-von-machine}, EXECUBE~\cite{execube}, Terasys~\cite{pim-terasys}, Intelligent RAM~\cite{iram}, Active Pages~\cite{active-pages}, FlexRAM~\cite{flexram,programming-flexram}, Computational RAM~\cite{cram}, DIVA~\cite{diva}) have proposed mechanisms and models to add processing logic close to memory. The idea is to integrate memory and CPU on the same chip by designing the CPU using the memory process technology. While the reduced data movement allows these approaches to enable low-latency, high-bandwidth, and low-energy data communication, they suffer from two key shortcomings. First, this approach of integrating the processor on the same chip as memory greatly increases the overall cost of the system. Second, DRAM vendors use a high-density process to minimize cost-per-bit. Unfortunately, the high-density DRAM process is not suitable for building high-speed logic~\cite{iram}. As a result, this approach is not suitable for building a general-purpose processor near memory, at least with modern logic and high-density DRAM technologies. \subsection{3D-Stacked DRAM Architectures} Some recent architectures~\cite{3d-stacking,hmc,hbm} use 3D-stacking technology to stack multiple DRAM chips on top of the processor chip or a separate logic layer. These architectures offer much higher bandwidth to the logic layer compared to traditional off-chip interfaces. This enables an opportunity to offload some computation to the logic layer, thereby improving performance. In fact, many recent works have proposed mechanisms to improve and exploit such architectures (e.g.,~\cite{pim-enabled-insts,pim-graph,top-pim,nda,msa3d,spmm-mul-lim,data-access-opt-pim,tom,hrl,gp-simd,ndp-architecture,pim-analytics,nda-arch,jafar,data-reorg-3d-stack,smla}).
Unfortunately, despite enabling higher bandwidth compared to off-chip memory, such 3D-stacked architectures still require data to be transferred outside the DRAM chip, and hence can be bandwidth-limited. In addition, thermal constraints limit the number of chips that can be stacked, thereby limiting the memory capacity. As a result, multiple 3D-stacked DRAMs are required to scale to large workloads. \subsection{Adding Logic to the Memory Controller} Many prior works have proposed mechanisms to export certain memory operations to the memory controller with the goal of improving the efficiency of the operation (e.g., Copy Engine~\cite{copy-engine} to perform bulk copy or initialization, Impulse~\cite{impulse} to perform gather/scatter operations, Enhanced Memory Controller~\cite{emc} to accelerate dependent cache misses). Recent memory technologies that stack DRAM chips on top of a logic layer containing the memory controller~\cite{hmc,hbm} will likely make this approach attractive. Although these mechanisms definitely reduce the pressure on the CPU and on-chip caches, they still have to go through the cache-line-granularity main memory interface, which is inefficient for performing these operations. \subsection{Supporting Fine-grained Memory Accesses in DRAM} A number of works exploit the module-level organization of DRAM to enable efficient fine-grained memory accesses (e.g., Mini-rank~\cite{mini-rank}, Multi-Core DIMM~\cite{mc-dimm}, Threaded memory modules~\cite{threaded-module}, Scatter/Gather DIMMs~\cite{sg-dimm}). These works add logic to the DRAM module that enables the memory controller to access data from individual chips rather than the entire module. Unfortunately, such interfaces 1)~are much costlier compared to existing memory interfaces, 2)~potentially lower the DRAM bandwidth utilization, and 3)~do not alleviate the inefficiency for bulk data operations.
\subsection{Improving DRAM Latency and Parallelism} A number of prior works have proposed new DRAM microarchitectures to lower the latency of DRAM access or enable more parallelism within DRAM. Approaches employed by these works include 1)~introducing heterogeneity in access latency inside DRAM for a low cost (e.g., Tiered-Latency DRAM~\cite{tl-dram}, Asymmetric Banks~\cite{charm}, Dynamic Asymmetric Subarrays~\cite{da-subarray}, Low-cost Interlinked Subarrays~\cite{lisa}), 2)~improving parallelism within DRAM (e.g., Subarray-level Parallelism~\cite{salp}, parallelizing refreshes~\cite{dsarp}, Dual-Port DRAM~\cite{ddma}), 3)~exploiting charge characteristics of cells to reduce DRAM latency (e.g., Multi-clone-row DRAM~\cite{mcr-dram}, Charge Cache~\cite{chargecache}), 4)~reducing the granularity of internal DRAM operation through microarchitectural changes (e.g., Half-DRAM~\cite{half-dram}, Sub-row activation~\cite{rethinking-dram}), 5)~adding SRAM cache to DRAM chips~\cite{cache-dram}, 6)~exploiting variation in DRAM (e.g., Adaptive Latency DRAM~\cite{al-dram}, FLY-DRAM~\cite{fly-dram}), and 7)~better refresh scheduling and refresh reduction (e.g.,~\cite{raidr,smart-refresh,elastic-refresh,refresh-nt,eskimo,opt-dram-refresh,dynamic-memory-design,avatar,efficacy-error-techniques}). While many of these approaches will improve the performance of various memory operations, they are still far from mitigating the unnecessary bandwidth consumed by certain memory operations (e.g., bulk data copy, non-unit strided access). \subsection{Reducing Memory Bandwidth Requirements} Many prior works have proposed techniques to reduce memory bandwidth consumption of applications.
Approaches used by these works include 1)~data compression (e.g.,~\cite{bdi,lcp,camp,toggle-compression,rmc,mxt}), 2)~value prediction (e.g.,~\cite{value-prediction,value-locality}), 3)~load approximation (e.g.,~\cite{rollback-vp,load-approx}), 4)~adaptive granularity memory systems (e.g.,~\cite{agms,dgms}), and 5)~better caching to reduce the number of memory requests (e.g.,~\cite{eaf,rrip,dip}). Some of these techniques require significant changes to the hardware (e.g., compression, adaptive granularity memory systems). Having said that, all these approaches are orthogonal to the techniques proposed in this dissertation. \subsection{Mitigating Contention for Memory Bandwidth} One of the problems that result from bulk data operations is the contention for memory bandwidth, which can negatively affect the performance of applications co-running in the system. A plethora of prior works have proposed mechanisms to mitigate this performance degradation using better memory request scheduling (e.g.,~\cite{stfm,mpa,parbs,tcm,atlas,bliss,bliss-tpds,critical-scheduler,dram-aware-wb,prefetch-dram,dash,sms,somc,clams,medic,firm}). While these works improve overall system performance and fairness, they do not fundamentally reduce the bandwidth consumption of the applications performing the bulk operations. \section{Thesis Statement and Overview} Our goal in this thesis is to improve the overall efficiency of the memory subsystem without significantly modifying existing abstractions and without degrading the performance/efficiency of applications that do not use our proposed techniques. Towards this end, our thesis is that, \begin{quote} \emph{we can exploit the diversity in the granularity at which different hardware resources manage memory to mitigate the inefficiency that arises from that very diversity. 
To this end, we propose to augment existing processor and main memory architectures with some simple, low-cost features that bridge the gap resulting from the granularity mismatch.} \end{quote} Our proposed techniques are based on two observations. First, modern processors are capable of tracking data at a cache-line granularity. Therefore, even though memory capacity is managed at a larger page granularity, using some simple features, it should be possible to enable more efficient implementations of fine-grained memory operations. Second, although off-chip memory interfaces are optimized to access cache lines, we observe that the commodity memory architecture has the ability to internally operate at both a bulk row granularity and at a fine word granularity. We exploit these observations to propose a new virtual memory framework that enables efficient fine-grained memory management, and a series of techniques to exploit the commodity DRAM architecture to efficiently perform bulk data operations and accelerate memory operations with low spatial locality. \section{Contributions} This dissertation makes the following contributions. \begin{enumerate} \item We propose a new virtual memory framework called \emph{page overlays} that allows memory to be managed at a sub-page (cache line) granularity. The page overlays framework significantly improves the efficiency of several memory management techniques, e.g., copy-on-write and super pages. \Cref{chap:page-overlays} describes our framework, its implementation, and applications in detail. \item We observe that DRAM internally operates at a large, row granularity. Through simple changes to the DRAM architecture, we propose \emph{RowClone}, a mechanism that enables fast and efficient bulk copy operations completely within DRAM. We exploit RowClone to accelerate copy-on-write and bulk zeroing, two important primitives in modern systems. \Cref{chap:rowclone} describes RowClone and its applications in detail.
\item We observe that the analog operation of DRAM has the potential to efficiently perform bitwise logical operations. We propose \emph{Buddy RAM}, a mechanism that exploits this potential to enable efficient bulk bitwise operations completely within DRAM. We demonstrate the performance benefits of this mechanism using 1)~a database bitmap index library, and 2)~an efficient implementation of a set data structure. \Cref{chap:buddy} describes Buddy RAM in detail. \item We observe that commodity DRAM architectures heavily interleave data of a single cache line across many DRAM devices and multiple arrays within each device. We propose \emph{Gather-Scatter DRAM} (GS-DRAM), which exploits this fact to enable the memory controller to gather or scatter data of common access patterns with near ideal efficiency. We propose mechanisms that use GS-DRAM to accelerate non-unit strided access patterns in many important applications, e.g., databases. \Cref{chap:gsdram} describes GS-DRAM and its applications in detail. \item Our mechanisms to perform operations completely in DRAM require appropriate dirty cache lines from the on-chip cache to be flushed. We propose the \emph{Dirty-Block Index} (DBI) that significantly improves the efficiency of this flushing operation. \Cref{chap:dbi} describes DBI and several of its other potential applications in detail. \end{enumerate} \chapter{The Curse of Multiple Granularities} \label{chap:motivation} As mentioned in \Cref{chap:introduction}, different memory resources are managed and accessed at different granularities --- main memory capacity is managed at a page (typically 4KB) granularity, on-chip caches and off-chip memory interfaces store and access data at a cache line (typically 64B) granularity, DRAM internally performs operations at a row granularity (typically 8KB), and the applications (and CPU) access data at a small word (typically 4B or 8B) granularity.
This mismatch results in a significant inefficiency in the execution of two important classes of operations: 1)~bulk data operations, and 2)~operations with low spatial locality. In this chapter, we discuss the sources of this inefficiency for each of these operations using one example operation in each class. \section{Bulk Data Operations} \label{sec:bulk-data-problem} A bulk data operation is one that involves a large amount of data. In existing systems, to perform any operation, the corresponding data must first be brought to the CPU L1 cache. Unfortunately, this model results in high inefficiency for a bulk data operation, especially if the operation does not involve any computation on the part of the processor (e.g., data movement). To understand the sources of inefficiency in existing systems, let us consider the widely-used copy-on-write~\cite{fork} technique. \subsection{The Copy-on-Write Technique} \Cref{fig:cow} shows how the copy-on-write technique works. When the system wants to copy the data from the virtual page V1 to the virtual page V2, it simply maps the page V2 to the same physical page (P1) to which V1 is mapped. Based on the semantics of virtual memory, any read access to either virtual page is directed to the same page, ensuring correct execution. In fact, if neither of the two virtual pages is modified after the \emph{remap}, the system would have avoided an unnecessary copy operation. However, if one of the virtual pages, say V2, does receive a \emph{write}, the system must perform three steps. First, the operating system must \emph{identify a new physical page} (P2) from the free page list. Second, it must \emph{copy} the data from the original physical page (P1) to the newly identified page (P2). Third, it must \emph{remap} the virtual page that received the write (V2) to the new physical page (P2). After these steps are completed, the system can execute the write operation.
\begin{figure} \centering \includegraphics{motivation/figures/cow} \caption[Copy-on-write: Mechanism and shortcomings]{The Copy-on-Write technique and shortcomings of existing systems} \label{fig:cow} \end{figure} \subsection{Sources of Inefficiency in Executing Copy-on-Write} \label{sec:cow-problems} Existing interfaces to manage and access the memory subsystem result in several sources of inefficiency in completing a copy-on-write operation. First, existing virtual memory systems manage main memory at a large page granularity. Therefore, even if only a single byte or word is modified in the virtual page V2, the system must allocate and copy a full physical page. This results in \emph{high memory redundancy}. Second, the CPU accesses data at a word or at best a vector register granularity. Existing systems must therefore perform these copy operations one word or a vector register at a time. This results in \emph{high latency} and \emph{inefficient use of the CPU}. Third, all the cache lines involved in the copy operation must be transferred from main memory to the processor caches. These cache line transfers result in \emph{high memory bandwidth} consumption and can potentially cause \emph{cache pollution}. Finally, all the data movement between the CPU, caches, and main memory consumes significant amounts of energy. Ideally, instead of copying an entire page of data, the system should eliminate all the redundancy by remapping only the data that is actually modified. In a case where the entire page needs to be copied, the system should avoid all the unnecessary data movement by performing the copy operation completely in main memory. \section{Fine-grained Operations with Low Spatial Locality} As we mentioned in \Cref{chap:introduction}, the on-chip caches and the off-chip memory interface are both optimized to store and communicate wide cache lines (e.g., 64B). However, the data types typically used by applications are much smaller than a cache line.
While access patterns with good spatial locality benefit from the cache-line-granularity management, existing systems incur high inefficiency when performing operations with low spatial locality. Specifically, non-unit strided access patterns are common in several applications, e.g., databases, scientific computing, and graphics. To illustrate the shortcomings of existing memory interfaces, we use an example of an in-memory database table. \subsection{Accessing a Column from a Row-Oriented Database Table} \label{sec:non-unit-stride-problem} In-memory databases are becoming popular among many applications. A table in such a database consists of many records (or rows). Each record in turn consists of many fields (or columns). Typically, a table is stored either in the row-oriented format or the column-oriented format. In the row-oriented format or \emph{row store}, all fields of a record are stored together. On the other hand, in the column-oriented format or \emph{column store}, the values of each field from all records are stored together. Depending on the nature of the query being performed on the table, one format may be better suited than the other. For example, the row store is better suited for inserting new records or executing \emph{transactions} on existing records. On the other hand, the column store is better suited for executing \emph{analytical queries} that aim to extract aggregate information from one or few fields of many records. Unfortunately, neither organization is well suited for \emph{both} transactions and analytical queries. With the recently growing need for real-time analytics, workloads that run both transactions and analytics on the same system, referred to as Hybrid Transaction/Analytics Processing or HTAP~\cite{htap}, are becoming important. Accessing a column of data from a row store results in a strided access pattern.
\begin{figure} \centering \includegraphics{motivation/figures/strided} \caption[Strided access pattern in an in-memory table]{Accessing a column from a row store} \label{fig:col-row-store} \end{figure} \subsection{Shortcomings of Strided Access Patterns} \Cref{fig:col-row-store} shows the shortcomings of accessing a column of data from a database table that is stored as a row store. For ease of explanation, we assume that each record fits exactly into a cache line. As shown in the figure, each cache line contains only one useful value. However, since the caches and memory interface in existing systems are heavily optimized to store and access cache lines, existing systems have to fetch more data than necessary to complete the strided access operation. In the example shown in the figure, the system has to bring eight times more data than necessary to access a single column from a row store. This amplification in the amount of data fetched results in several problems. First, it significantly \emph{increases the latency} to complete the operation, thereby degrading the performance of the application. Second, it results in inefficient use of memory bandwidth and on-chip cache space. Finally, since different values of the strided access pattern are stored in different cache lines, it is difficult to enable SIMD (single instruction multiple data) optimizations for the computation performing the strided access pattern. Ideally, the memory system should be able to identify the strided access pattern (either automatically or with the help of the application), and fetch cache lines from memory that contain only values from the access pattern. This will eliminate all the inefficiency that results from the data overfetch and also seamlessly enable SIMD optimizations. 
\section{Goal of This Dissertation} In this dissertation, our goal is to develop efficient solutions to address the problems that result from the mismatch in the granularity of memory management at different resources. To this end, our approach is to exploit the untapped potential in various hardware structures by introducing \emph{new virtual memory and DRAM abstractions} that mitigate the negative impact of multiple granularities. Specifically, first, we observe that modern processors can efficiently track data at a cache line granularity using the on-chip caches. We exploit this to propose a new virtual memory abstraction called \emph{Page Overlays} to improve the efficiency of many fine-grained memory operations. Second, we observe that DRAM technology can be used to perform a variety of operations rather than just store data. We exploit this potential to design two mechanisms: \emph{RowClone} to perform bulk copy and initialization operations inside DRAM, and \emph{Buddy RAM} to perform bulk bitwise logical operations using DRAM. Third, we observe that commodity DRAM modules interleave data across multiple DRAM chips. We exploit this architecture to design \emph{Gather-Scatter DRAM}, which efficiently gathers or scatters data with access patterns that normally exhibit poor spatial locality. Finally, we propose the \emph{Dirty-Block Index}, which accelerates the protocol that ensures the coherence of data between the caches and the main memory. The rest of the dissertation is organized as follows. Chapter~\ref{chap:page-overlays} describes our new page overlay framework. Chapter~\ref{chap:dram-background} provides a detailed background on modern DRAM design and architecture. Chapters~\ref{chap:rowclone}, \ref{chap:buddy}, and \ref{chap:gsdram} describe RowClone, Buddy RAM, and Gather-Scatter DRAM, respectively. Chapter~\ref{chap:dbi} describes the Dirty-Block Index.
Finally, we conclude the dissertation and present some relevant future work in Chapter~\ref{chap:conclusion}. \chapter*{Other Works of the Author} \label{chap:other-works} \addcontentsline{toc}{chapter}{Other Works of the Author} During the course of my Ph.D., I had the opportunity to collaborate with many of my fellow graduate students. These projects were not only helpful in keeping my morale up, especially during the initial years of my Ph.D., but also helped me in learning about DRAM (an important aspect of this dissertation). In this chapter, I would like to acknowledge these projects and also my early works on caching, which kick-started my Ph.D. My interest in DRAM was triggered by my work on subarray-level parallelism (SALP)~\cite{salp} in collaboration with Yoongu Kim. Since then, I have contributed to a number of projects on low-latency DRAM architectures with Donghyuk Lee (Tiered-Latency DRAM~\cite{tl-dram} and Adaptive-Latency DRAM~\cite{al-dram}), and Hasan Hassan (Charge Cache~\cite{chargecache}). These works focus on improving DRAM performance by either increasing parallelism or lowering latency through low-cost modifications to the DRAM interface and/or microarchitecture. In collaboration with Gennady Pekhimenko, I have worked on designing techniques to support data compression in a modern memory hierarchy. Two contributions have resulted from this work: 1)~Base-Delta Immediate Compression~\cite{bdi}, an effective data compression algorithm for on-chip caches, and 2)~Linearly Compressed Pages~\cite{lcp}, a low-cost framework for storing compressed data in main memory. In collaboration with Lavanya Subramanian, I have worked on techniques to quantify and mitigate slowdown in a multi-core system running multi-programmed workloads. This line of work started with MISE~\cite{mise}, a mechanism to estimate slowdown induced by contention for memory bandwidth.
Later, we extended this with the Application Slowdown Model~\cite{asm}, a mechanism that also accounts for contention for on-chip cache capacity. Finally, we proposed the Blacklisting Memory Scheduler~\cite{bliss,bliss-tpds}, a simple memory scheduling algorithm to achieve high performance and fairness with low complexity. Lastly, in the early years of my Ph.D., I worked on techniques to improve on-chip cache utilization using 1)~the Evicted-Address Filter~\cite{eaf}, an improved cache insertion policy to address pollution and thrashing, and 2)~ICP~\cite{icp}, a mechanism that better integrates the caching policy for prefetched blocks. We have released \texttt{memsim}~\cite{memsim}, the simulator that we developed as part of the EAF work. The simulator code can be found on GitHub (\texttt{github.com/CMU-SAFARI/memsim}). \texttt{memsim} has since been used for many works~\cite{icp,bdi,dbi,lcp}. \section{Applications and Evaluations} \label{sec:overlays-applications} We describe seven techniques enabled by our framework, and quantitatively evaluate two of them. For our evaluations, we use memsim~\cite{memsim}, an event-driven multi-core simulator that models out-of-order cores coupled with a DDR3-1066~\cite{ddr3} DRAM simulator. All the simulated systems use a three-level cache hierarchy with a uniform 64B cache line size. We do not enforce inclusion in any level of the hierarchy. We use the state-of-the-art DRRIP cache replacement policy~\cite{rrip} for the last-level cache. All our evaluated systems use an aggressive multi-stream prefetcher~\cite{fdp} similar to the one implemented in IBM Power~6~\cite{power6-prefetcher}. Table~\ref{table:parameters} lists the main configuration parameters in detail.
\begin{table} \centering \input{page-overlays/tables/parameters} \caption[Page Overlays: Simulation parameters]{Main parameters of our simulated system} \label{table:parameters} \end{table} \subsection{Overlay-on-write} \label{sec:oow-evaluation} As discussed in Section~\ref{sec:overlay-applications-oow}, overlay-on-write is a more efficient version of copy-on-write~\cite{fork}: when multiple virtual pages share the same physical page in the copy-on-write mode and one of them receives a write, overlay-on-write simply moves the corresponding cache line to the overlay and updates it there. We compare the performance of overlay-on-write with that of copy-on-write using the \texttt{fork}\xspace~\cite{fork} system call. \texttt{fork}\xspace is a widely-used system call with a number of different applications including creating new processes, creating stateful threads in multi-threaded applications, process testing/debugging~\cite{flashback,self-test,hardware-bug}, and OS speculation~\cite{os-speculation-1,os-speculation-2,os-speculation-3}. Despite its wide applicability, \texttt{fork}\xspace is one of the most expensive system calls~\cite{fork-exp}. When invoked, \texttt{fork}\xspace creates a child process with an identical virtual address space as the calling process. \texttt{fork}\xspace marks all the pages of both processes as copy-on-write. As a result, when any such page receives a write, the copy-on-write mechanism must copy the whole page and remap the virtual page before it can proceed with the write. Our evaluation models a scenario where a process is checkpointed at regular intervals using the \texttt{fork}\xspace system call. While we can test the performance of fork with any application, we use a subset of benchmarks from the SPEC CPU2006 benchmark suite~\cite{spec2006}.
Because the number of pages copied depends on the \emph{write working set} of the application, we pick benchmarks with three different types of write working sets: 1)~benchmarks with low write working set size, 2)~benchmarks for which almost all cache lines within each modified page are updated, and 3)~benchmarks for which only a few cache lines within each modified page are updated. We pick five benchmarks for each type. For each benchmark, we fast forward the execution to its representative portion (determined using Simpoint~\cite{simpoints}), run the benchmark for 200 million instructions (to warm up the caches), and execute a \texttt{fork}. After the \texttt{fork}, we run the parent process for another 300 million instructions, while the child process idles.\footnote{While 300 million instructions might seem low, several prior works (e.g.,~\cite{self-test,hardware-bug}) argue for even shorter checkpoint intervals (10-100 million instructions).} Figure~\ref{plot:memory} plots the amount of additional memory consumed by the parent process using copy-on-write and overlay-on-write for the 300 million instructions after the \texttt{fork}\xspace. Figure~\ref{plot:oow-perf} plots the performance (cycles per instruction) of the two mechanisms during the same period. We group benchmarks based on their type. We draw three conclusions. \begin{figure}[h] \centering \begin{minipage}{\linewidth} \centering \includegraphics[scale=1.2]{page-overlays/plots/memory.pdf} \caption[CoW vs. OoW: Additional memory consumption]{Additional memory consumed after a \texttt{fork}\xspace} \label{plot:memory} \end{minipage}\vspace{2mm} \begin{minipage}{\linewidth} \centering \includegraphics[scale=1.2]{page-overlays/plots/oow-perf.pdf} \caption[CoW vs.
OoW: Performance]{Performance after a \texttt{fork}\xspace (lower is better)} \label{plot:oow-perf} \end{minipage} \end{figure} First, benchmarks with low write working set (Type 1) consume very little additional memory after forking (Figure~\ref{plot:memory}). As a result, there is not much difference in the performance of copy-on-write and that of overlay-on-write (Figure~\ref{plot:oow-perf}). Second, for benchmarks of Type 2, both mechanisms consume almost the same amount of additional memory. This is because for these benchmarks, almost all cache lines within every modified page are updated. However, with the exception of \bench{cactus}, overlay-on-write significantly improves performance for this type of application. Our analysis shows that the performance trends can be explained by the distance in time when cache lines of each page are updated by the application. When writes to different cache lines within a page are close in time, copy-on-write performs better than overlay-on-write. This is because copy-on-write fetches \emph{all} the blocks of a page with high memory-level parallelism. On the other hand, when writes to different cache lines within a page are well separated in time, copy-on-write may 1)~unnecessarily pollute the L1 cache with all the cache lines of the copied page, and 2)~increase write bandwidth by generating two writes for each updated cache line (once when it is copied and again when the application updates the cache line). Overlay-on-write has neither of these drawbacks, and hence significantly improves performance over copy-on-write. Third, for benchmarks of Type 3, overlay-on-write significantly reduces the amount of additional memory consumed compared to copy-on-write. This is because the write working sets of these applications are spread out in the virtual address space, and copy-on-write unnecessarily copies cache lines that are actually \emph{not} updated by the application.
Consequently, overlay-on-write significantly improves performance compared to copy-on-write for this type of application. In summary, overlay-on-write reduces additional memory capacity requirements by 53\% and improves performance by 15\% compared to copy-on-write. Given the wide applicability of the \texttt{fork}\xspace system call, and the copy-on-write technique in general, we believe overlay-on-write can significantly benefit a variety of such applications. \subsection{Representing Sparse Data Structures} \label{sec:applications-sparse-data-structures} A \emph{sparse} data structure is one with a significant fraction of zero values, e.g., a sparse matrix. Since only non-zero values typically contribute to computation, prior work developed many software representations for sparse data structures (e.g.,~\cite{yale-sm,bcsr-format}). One popular representation of a sparse matrix is the Compressed Sparse Row (CSR) format~\cite{bcsr-format}. To represent a sparse matrix, CSR stores only the non-zero values in an array, and uses two arrays of index pointers to identify the location of each non-zero value within the matrix. While CSR efficiently stores sparse matrices, the additional index pointers maintained by CSR can result in inefficiency. First, the index pointers lead to significant additional memory capacity overhead (roughly 1.5 times the number of non-zero values in our evaluation---each value is 8 bytes, and each index pointer is 4 bytes). Second, any computation on the sparse matrix requires additional memory accesses to fetch the index pointers, which degrades performance. Our framework enables a very efficient hardware-based representation for a sparse data structure: all virtual pages of the data structure map to a zero physical page and each virtual page is mapped to an overlay that contains only the \emph{non-zero cache lines} from that page.
To avoid computation over zero cache lines, we propose a new computation model that enables the software to \emph{perform computation only on overlays}. When overlays are used to represent sparse data structures, this model enables the hardware to efficiently perform a computation only on non-zero cache lines. Because the hardware is aware of the overlay organization, it can efficiently prefetch the overlay cache lines and hide the latency of memory accesses significantly. Our representation stores non-zero data at a cache line granularity. Hence, the performance and memory capacity benefits of our representation over CSR depend on the spatial locality of non-zero values within a cache line. To aid our analysis, we define a metric called \emph{non-zero value locality} ($\mathcal{L}$\xspace), as the average number of non-zero values in each non-zero cache line. On the one hand, when non-zero values have poor locality ($\mathcal{L}$\xspace $\approx$ 1), our representation will have to store a significant number of zero values and perform redundant computation over such values, degrading both memory capacity and performance over CSR, which stores and performs computation on only non-zero values. On the other hand, when non-zero values have high locality ($\mathcal{L}$\xspace $\approx$ 8---e.g., each cache line stores 8 double-precision floating point values), our representation is significantly more efficient than CSR as it stores significantly less metadata about non-zero values than CSR. As a result, it outperforms CSR both in terms of memory capacity and performance. We analyzed this trade-off using real-world sparse matrices of double-precision floating point values obtained from the UF Sparse Matrix Collection~\cite{ufspm}. We considered all matrices with at least 1.5 million non-zero values (87 in total).
Figure~\ref{plot:perf-mem} plots the memory capacity and performance of one iteration of Sparse-Matrix Vector (SpMV) multiplication of our mechanism normalized to CSR for each of these matrices. The x-axis is sorted in increasing order of the $\mathcal{L}$\xspace-value of the matrices. \begin{figure} \centering \includegraphics[scale=1.2]{page-overlays/plots/perf-mem} \caption[Overlays vs. CSR: SpMV performance and memory overhead]{SpMV multiplication: Performance of page overlays vs. CSR. $\mathcal{L}$\xspace (non-zero value locality): Average \# non-zero values in each non-zero cache line.}\vspace{-1mm} \label{plot:perf-mem} \end{figure} The trends can be explained by looking at the extreme points. On the left extreme, we have a matrix with $\mathcal{L}$\xspace = 1.09 (\bench{poisson3Db}), i.e., most non-zero cache lines have only one non-zero value. As a result, our representation consumes 4.83 times more memory capacity and degrades performance by 70\% compared to CSR. On the other extreme is a matrix with $\mathcal{L}$\xspace = 8 (\bench{raefsky4}), i.e., \emph{none} of the non-zero cache lines have any zero value. As a result, our representation is more efficient, reducing memory capacity by 34\%, and improving performance by 92\% compared to CSR. Our results indicate that even when a little more than half of the values in each non-zero cache line are non-zero ($\mathcal{L}$\xspace $>$ 4.5), overlays outperform CSR. For 34 of the 87 real-world matrices, overlays reduce memory capacity by 8\% and improve performance by 27\% on average compared to CSR. In addition to the performance and memory capacity benefits, our representation has several \textbf{other major advantages over CSR} (or any other software format). First, CSR is typically helpful \emph{only} when the data structure is very sparse. In contrast, our representation exploits a wider degree of sparsity in the data structure.
In fact, our simulations using randomly-generated sparse matrices with varying levels of sparsity (0\% to 100\%) show that our representation outperforms the dense-matrix representation for all sparsity levels---the performance gap increases linearly with the fraction of zero cache lines in the matrix. Second, in our framework, dynamically inserting non-zero values into a sparse matrix is as simple as moving a cache line to the overlay. In contrast, CSR incurs a high cost to insert non-zero values. Finally, our computation model enables the system to seamlessly use optimized dense matrix codes on top of our representation. CSR, on the other hand, requires programmers to rewrite algorithms to suit CSR. \textbf{Sensitivity to Cache Line Size.} So far, we have described the benefits of using overlays using 64B cache lines. However, one can imagine employing our approach at a 4KB page granularity (i.e., storing only non-zero pages as opposed to non-zero cache lines). To illustrate the benefits of fine-grained management, we compare the memory overhead of storing the sparse matrices using different cache line sizes (from 16B to 4KB). Figure~\ref{plot:block-sizes} shows the results. The memory overhead for each cache line size is normalized to the ideal mechanism which stores only the non-zero values. The matrices are sorted in the same order as in Figure~\ref{plot:perf-mem}. We draw two conclusions from the figure. First, while storing only non-zero (4KB) pages may be a practical system to implement using today's hardware, it increases the memory overhead by 53X on average. It would also increase the amount of computation, resulting in significant performance degradation. Hence, there is significant benefit to the fine-grained memory management enabled by overlays. Second, the results show that a mechanism using a finer granularity than 64B can outperform CSR on more matrices, indicating a direction for future research on sub-block management (e.g.,~\cite{amoeba}).
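The granularity trade-off can be illustrated with a small sketch that counts, for a given block size, how many blocks an overlay-style scheme must store (a simplification of the evaluation that ignores metadata; names are ours):

```python
def blocks_needed(nonzero_offsets, block_size):
    """Number of `block_size`-byte blocks that contain at least one
    non-zero value, i.e., what an overlay-style scheme must store."""
    return len({off // block_size for off in nonzero_offsets})

# Two non-zero 8-byte values, 4KB apart in the matrix.
offsets = [0, 4096]
assert blocks_needed(offsets, 64) == 2    # two 64B cache lines stored
assert blocks_needed(offsets, 4096) == 2  # two whole 4KB pages stored
# With 4KB blocks, 8KB is stored for 16 bytes of data; with 64B
# blocks, only 128 bytes -- a 64x difference in this extreme case.
```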
\begin{figure}[h] \centering \includegraphics[scale=1.3]{page-overlays/plots/block-sizes} \caption[Overlays: Effect of cache line size]{Memory overhead of different cache line sizes over ``Ideal'' that stores only non-zero values. Circles indicate points where fine-grained management begins to outperform CSR.} \label{plot:block-sizes} \end{figure} In summary, our overlay-based sparse matrix representation outperforms the state-of-the-art software representation on many real-world matrices, and is consistently better than page-granularity management. We believe our approach has much wider applicability than existing representations. \subsection{Other Applications of Our Framework} \label{sec:other-applications} We now describe five other applications that can be efficiently implemented on top of our framework. While prior works have already proposed mechanisms for some of these applications, our framework either enables a simpler mechanism or enables efficient hardware support for mechanisms proposed by prior work. We describe these mechanisms only at a high level, and defer more detailed explanations to future work. \subsubsection{Fine-grained Deduplication.} \label{sec:applications-deduplication} Gupta et al.~\cite{de} observe that in a system running multiple virtual machines with the same guest operating system, there are a number of pages that contain \emph{mostly same} data. Their analysis shows that exploiting this redundancy can reduce memory capacity requirements by 50\%. They propose the \emph{Difference Engine}, which stores such similar pages using small patches over a common page. However, accessing such patched pages incurs significant overhead because the OS must apply the patch before retrieving the required data.
Our framework enables a more efficient implementation of the Difference Engine wherein cache lines that are different from the base page can be stored in overlays, thereby enabling seamless access to patched pages, while also reducing the overall memory consumption. Compared to HICAMP~\cite{hicamp}, a cache-line-level deduplication mechanism that locates cache lines based on their content, our framework avoids significant changes to both the existing virtual memory framework and programming model. \subsubsection{Efficient Checkpointing.} \label{sec:applications-checkpointing} Checkpointing is an important primitive in high performance computing applications where data structures are checkpointed at regular intervals to avoid restarting long-running applications from the beginning~\cite{hpc-survey,plfs,checkpointing}. However, the frequency and latency of checkpoints are often limited by the amount of memory data that needs to be written to the backing store. With our framework, overlays could be used to capture all the updates between two checkpoints. Only these overlays need to be written to the backing store to take a new checkpoint, reducing the latency and bandwidth cost of checkpointing. The overlays are then committed (Section~\ref{sec:converting}), so that each checkpoint captures precisely the delta since the last checkpoint. In contrast to prior works on efficient checkpointing such as INDRA~\cite{indra}, ReVive~\cite{revive}, and Sheaved Memory~\cite{sheaved-memory}, our framework is more flexible than INDRA and ReVive (which are tied to recovery from remote attacks) and avoids the considerable write amplification of Sheaved Memory (which can significantly degrade overall system performance). \subsubsection{Virtualizing Speculation.} \label{sec:applications-speculation} Several hardware-based speculative techniques (e.g., thread-level speculation~\cite{tls,multiscalar}, transactional memory~\cite{intel-htm,tm}) have been proposed to improve system performance.
Such techniques maintain speculative updates to memory in the cache. As a result, when a speculatively-updated cache line is evicted from the cache, these techniques must necessarily declare the speculation as unsuccessful, resulting in a potentially wasted opportunity. In our framework, these techniques can store speculative updates to a virtual page in the corresponding overlay. The overlay can be \emph{committed} or \emph{discarded} based on whether the speculation succeeds or fails. This approach is not limited by cache capacity and enables potentially unbounded speculation~\cite{utm}. \subsubsection{Fine-grained Metadata Management.} \label{sec:applications-metadata} Storing fine-grained (e.g., word granularity) metadata about data has several applications (e.g., memcheck, taintcheck~\cite{flexitaint}, fine-grained protection~\cite{mmp}, detecting lock violations~\cite{eraser}). Prior works (e.g.,~\cite{ems,shadow-memory,mmp,flexitaint}) have proposed frameworks to efficiently store and manipulate such metadata. However, these mechanisms require hardware support \emph{specific} to storing and maintaining metadata. In contrast, with our framework, the system can potentially use overlays for each virtual page to store metadata for the virtual page instead of an alternate version of the data. In other words, \emph{the Overlay Address Space serves as shadow memory} for the virtual address space. To access some piece of data, the application uses the regular load and store instructions. The system would need new \emph{metadata load} and \emph{metadata store} instructions to enable the application to access the metadata from the overlays. \subsubsection{Flexible Super-pages.} \label{sec:applications-flexible-super-pages} Many modern architectures support super-pages to reduce the number of TLB misses. 
In fact, a recent prior work~\cite{direct-segment} suggests that a single arbitrarily large super-page (direct segment) can significantly reduce TLB misses for large servers. Unfortunately, using super-pages reduces the flexibility for the operating system to manage memory and implement techniques like copy-on-write. For example, to our knowledge, there is no system that shares a super-page across two processes in the copy-on-write mode. This lack of flexibility introduces a trade-off between the benefit of using super-pages to reduce TLB misses and the benefit of using copy-on-write to reduce memory capacity requirements. Fortunately, with our framework, we can apply overlays at higher-level page table entries to enable the OS to manage super-pages at a finer granularity. In short, we envision a mechanism that divides a super-page into smaller segments (based on the number of bits available in the \texttt{OBitVector}\xspace), and allows the system to potentially remap a segment of the super-page to the overlays. For example, when a super-page shared between two processes receives a write, only the corresponding segment is copied and the corresponding bit in the \texttt{OBitVector}\xspace is set. This approach can similarly be used to have multiple protection domains within a super-page. Assuming only a few segments within a super-page will require overlays, this approach can still ensure low TLB misses while enabling more flexibility for the OS. \section{Detailed Design and Implementation} \label{sec:overlays-design-implementation} To recap our high-level design (Figure~\ref{fig:implementation-overview}), each virtual page in the system is mapped to two entities: 1)~a regular physical page, which in turn directly maps to a page in main memory, and 2)~an overlay page in the Overlay Address{} space (which is not directly backed by main memory). Each page in this space is in turn mapped to a region in the Overlay Memory Store{}, where the overlay is stored compactly. 
Because our implementation does not modify the way virtual pages are mapped to regular physical pages, we now focus our attention on how virtual pages are mapped to overlays. \subsection{Virtual-to-Overlay Mapping} \label{sec:v2omap} The virtual-to-overlay mapping maps a virtual page to a page in the Overlay Address{} space. One simple approach to maintain this mapping information is to store it in the page table and allow the OS to manage the mappings (similar to regular physical pages). However, this increases the overhead of the mapping table and complicates the OS. We make a simple observation and impose a constraint that makes the virtual-to-overlay mapping a direct 1-1 mapping. Our \textbf{observation} is that since the Overlay Address{} space is part of the \emph{unused} physical address space, it can be \emph{significantly larger} than the amount of main memory. To enable a 1-1 mapping between virtual pages and overlay pages, we impose a simple \textbf{constraint} wherein no two virtual pages can be mapped to the same overlay page. Figure~\ref{fig:virtual-to-overlay-mapping} shows how our design maps a virtual address to the corresponding overlay address. Our scheme widens the physical address space such that the overlay address corresponding to the virtual address \texttt{vaddr} of a process with ID \texttt{PID} is obtained by simply concatenating an overlay bit (set to 1), \texttt{PID}, and \texttt{vaddr}. Since two virtual pages cannot share an overlay, when data of a virtual page is copied to another virtual page, the overlay cache lines of the source page must be copied into the appropriate locations in the destination page. While this approach requires a slightly wider physical address space than in existing systems, this is a more practical mechanism compared to storing this mapping explicitly in a separate table, which can lead to much higher storage and management overheads than our approach. 
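A sketch of this concatenation, assuming a 64-bit physical address, a 48-bit virtual address, and hence 15 bits for the process ID (the function and constant names are ours):

```python
PID_BITS = 15    # 64 - 1 overlay bit - 48 vaddr bits
VADDR_BITS = 48

def overlay_address(pid, vaddr):
    """Overlay address = overlay bit (1) | PID | vaddr, concatenated."""
    assert pid < (1 << PID_BITS) and vaddr < (1 << VADDR_BITS)
    return (1 << (PID_BITS + VADDR_BITS)) | (pid << VADDR_BITS) | vaddr

# Distinct processes or distinct virtual pages never share an overlay.
assert overlay_address(1, 0x1000) != overlay_address(2, 0x1000)
assert overlay_address(1, 0x1000) != overlay_address(1, 0x2000)
# The most significant bit marks the address as part of the Overlay
# Address space.
assert overlay_address(0, 0) >> 63 == 1
```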
With a 64-bit physical address space and a 48-bit virtual address space per process, this approach can support $2^{15}$ different processes. \begin{figure}[h] \centering \input{page-overlays/figures/virtual-to-overlay} \caption[Virtual address space to overlay address space mapping]{Virtual-to-Overlay Mapping. The MSB indicates if the physical address is part of the Overlay Address{} space.} \label{fig:virtual-to-overlay-mapping} \end{figure} Note that a similar approach \emph{cannot} be used to map virtual pages to physical pages due to the \emph{synonym} problem~\cite{virtual-caches}, which results from multiple virtual pages being mapped to the same physical page. However, this problem does not occur with the virtual-to-overlay mapping because of the constraint we impose: no two virtual pages can map to the same overlay page. Even with this constraint, our framework enables many applications that can improve performance and reduce memory capacity requirements (Section~\ref{sec:overlays-applications}). \subsection{Overlay Address{} Mapping} \label{sec:o2mmap} Overlay cache lines tagged in the Overlay Address{} space must be mapped into an Overlay Memory Store{} location upon eviction. In our design, since there is a 1-1 mapping between a virtual page and an overlay page, we could potentially store this mapping in the page table along with the physical page mapping. However, since many pages may not have an overlay, we store this mapping information in a separate mapping table similar to the page table. This \emph{Overlay Mapping Table{}} (OMT{}) is maintained and controlled fully by the memory controller with minimal interaction with the OS. Section~\ref{sec:overlay-store} describes Overlay Memory Store{} management in detail. \subsection{Microarchitecture and Memory Access Operations} \label{sec:mem-ops} Figure~\ref{fig:end-to-end-operation} depicts the details of our design. There are three main changes over the microarchitecture of current systems. 
First (\ding{202} in the figure), main memory is split into two regions that store 1)~regular physical pages and 2)~the Overlay Memory Store{} (OMS{}). The OMS{} stores both a compact representation of the overlays and the \emph{Overlay Mapping Table{}} (OMT{}), which maps each page from the Overlay Address{} Space to a location in the Overlay Memory Store{}. At a high level, each OMT{} entry contains 1) the \texttt{OBitVector}\xspace, indicating if each cache line within the corresponding page is present in the overlay, and 2)~\texttt{OMSaddr}, the location of the overlay in the OMS{}. Second \ding{203}, we augment the memory controller with a cache called the \emph{\OMTshort{} Cache{}}, which caches recently accessed entries from the OMT{}. Third \ding{204}, because the TLB must determine if an access to a virtual address should be directed to the corresponding overlay, we extend each TLB entry to store the \texttt{OBitVector}\xspace. While this potentially increases the cost of each TLB miss (as it requires the \texttt{OBitVector}\xspace to be fetched from the OMT), our evaluations (Section~\ref{sec:overlays-applications}) show that the performance benefit of using overlays more than offsets this additional TLB fill latency. \begin{figure} \centering \includegraphics[scale=0.9]{page-overlays/figures/end-to-end-10pt} \caption[Page overlays: Microarchitectural implementation]{Microarchitectural details of our implementation. The main changes (\ding{202}, \ding{203} and \ding{204}) are described in Section~\ref{sec:mem-ops}.} \label{fig:end-to-end-operation} \end{figure} To describe the operation of different memory accesses, we use overlay-on-write (Section~\ref{sec:overlay-applications-oow}) as an example. Let us assume that two virtual pages (V1 and V2) are mapped to the same physical page in the copy-on-write mode, with a few cache lines of V2 already mapped to the overlay. 
There are three possible operations on V2: 1)~a read, 2)~a write to a cache line already in the overlay (\emph{simple write}), and 3)~a write to a cache line not present in the overlay (\emph{overlaying write}). We now describe each of these operations in detail. \subsubsection{Memory Read Operation.} \label{sec:memory-read-op} When the page V2 receives a read request, the processor first accesses the TLB with the corresponding page number (\texttt{VPN}) to retrieve the physical mapping (\texttt{PPN}) and the \texttt{OBitVector}\xspace. It generates the overlay page number (\texttt{OPN}) by concatenating the address space ID (\texttt{ASID}) of the process and the \texttt{VPN} (as described in Section~\ref{sec:v2omap}). Depending on whether the accessed cache line is present in the overlay (as indicated by the corresponding bit in the \texttt{OBitVector}\xspace), the processor uses either the \texttt{PPN} or the \texttt{OPN} to generate the L1 cache \texttt{tag}. If the access misses in the entire cache hierarchy (L1 through last-level cache), the request is sent to the memory controller. The controller checks if the requested address is part of the overlay address space by checking the overlay bit in the physical address. If so, it looks up the overlay store address (\texttt{\OMSshort{}addr{}}) of the corresponding overlay page from the \OMTshort{} Cache{}, and computes the exact location of the requested cache line within main memory (as described later in Section~\ref{sec:overlay-store}). It then accesses the cache line from the main memory and returns the data to the cache hierarchy. \subsubsection{Simple Write Operation.} \label{sec:memory-write-op} When the processor receives a write to a cache line already present in the overlay, it simply has to update the cache line in the overlay. This operation is similar to a read operation, except the cache line is updated after it is read into the L1 cache. 
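The tag-generation step of the read path can be sketched as follows, assuming 4KB pages with 64-byte cache lines (i.e., 64 lines per page; the names are ours):

```python
LINES_PER_PAGE = 64  # 4KB page / 64B cache lines

def page_for_access(ppn, opn, obitvector, line_index):
    """Pick the page number used to form the L1 cache tag: the overlay
    page if the accessed cache line is present in the overlay (its bit
    is set in the OBitVector), else the regular physical page."""
    assert 0 <= line_index < LINES_PER_PAGE
    in_overlay = (obitvector >> line_index) & 1
    return opn if in_overlay else ppn

# Lines 1 and 3 live in the overlay (their bits are set).
obv = (1 << 1) | (1 << 3)
assert page_for_access(0x100, 0x80000, obv, 3) == 0x80000  # from overlay
assert page_for_access(0x100, 0x80000, obv, 0) == 0x100    # from physical page
```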
\subsubsection{Overlaying Write Operation.} \label{sec:overlaying-write-op} An \emph{overlaying write} operation is a write to a cache line that is \emph{not} already present in the overlay. Since the virtual page is mapped to the regular physical page in the copy-on-write mode, the corresponding cache line must be remapped to the overlay (based on our semantics described in Section~\ref{sec:overlay-applications-oow}). We complete the overlaying write in three steps: 1)~copy the data of the cache line in the regular physical page (\texttt{PPN}) to the corresponding cache line in the Overlay Address Space page (\texttt{OPN}), 2)~update all the TLBs and the OMT{} to indicate that the cache line is mapped to the overlay, and 3)~process the write operation. The first step can be completed in hardware by reading the cache line from the regular physical page and simply updating the cache tag to correspond to the overlay page number (or by making an explicit copy of the cache line). Na\"{i}vely implementing the second step will involve a TLB shootdown for the corresponding virtual page. However, we exploit three simple facts to use the cache coherence network to keep the TLBs and the OMT{} coherent: i)~the mapping is modified \emph{only} for a single cache line, and not an entire page, ii)~the overlay page address can be used to uniquely identify the virtual page since no overlay is shared between virtual pages, and iii)~the overlay address is part of the physical address space and hence, part of the cache coherence network. Based on these facts, we propose a new cache coherence message called \emph{overlaying read exclusive}. When a core receives this request, it checks if its TLB has cached the mapping for the virtual page. If so, the core simply sets the bit for the corresponding cache line in the \texttt{OBitVector}\xspace. 
The \emph{overlaying read exclusive} request is also sent to the memory controller so that it can update the \texttt{OBitVector}\xspace of the corresponding overlay page in the OMT{} (via the \OMTshort{} Cache{}). Once the remapping operation is complete, the write operation (the third step) is processed similarly to the simple write operation. Note that after an \emph{overlaying write}, the corresponding cache line (which we will refer to as the \emph{overlay cache line}) is marked dirty. However, unlike copy-on-write, which must allocate memory before the write operation, our mechanism allocates memory space \emph{lazily} upon the eviction of the dirty overlay cache line---significantly improving performance. \subsubsection{Converting an Overlay to a Regular Physical Page.} \label{sec:converting} Depending on the technique for which overlays are used, maintaining an overlay for a virtual page may be unnecessary after a point. For example, when using overlay-on-write, if most of the cache lines within a virtual page are modified, maintaining them in an overlay does not provide any advantage. The system may take one of three actions to promote an overlay to a physical page: The \emph{copy-and-commit} action is one where the OS copies the data from the regular physical page to a new physical page and updates the data of the new physical page with the corresponding data from the overlay. The \emph{commit} action updates the data of the regular physical page with the corresponding data from the overlay. The \emph{discard} action simply discards the overlay. While the \emph{copy-and-commit} action is used with overlay-on-write, the \emph{commit} and \emph{discard} actions are used, for example, in the context of speculation, where our mechanism stores speculative updates in the overlays (Section~\ref{sec:applications-speculation}).
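The remapping step of an overlaying write and the \emph{commit}/\emph{discard} actions can be modeled in a few lines of Python (pages are modeled as dictionaries from line index to data; all names are ours):

```python
def overlaying_write(phys_page, overlay, obitvector, line, new_data):
    """Overlaying write: (1) copy the line from the regular physical
    page into the overlay, (2) set its OBitVector bit, (3) do the write."""
    overlay[line] = phys_page[line]   # step 1: remap the cache line
    obitvector |= 1 << line           # step 2: update TLBs/OMT mapping
    overlay[line] = new_data          # step 3: process the write
    return obitvector

def commit(phys_page, overlay):
    """Commit action: fold overlay lines into the regular physical page."""
    phys_page.update(overlay)
    overlay.clear()

def discard(overlay):
    """Discard action: drop the overlay contents (e.g., failed speculation)."""
    overlay.clear()

phys, ovl = {0: "A", 1: "B"}, {}
bits = overlaying_write(phys, ovl, 0, 1, "B'")
assert ovl == {1: "B'"} and phys == {0: "A", 1: "B"} and bits == 0b10
commit(phys, ovl)
assert phys == {0: "A", 1: "B'"} and ovl == {}
```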
After any of these actions, the system clears the \texttt{OBitVector}\xspace of the corresponding virtual page, and frees the overlay memory store space allocated for the overlay (discussed next in Section~\ref{sec:overlay-store}). \input{page-overlays/overlay-store} \chapter{Page Overlays} \label{chap:page-overlays} \begin{figure}[b!] \hrule\vspace{2mm} \begin{footnotesize} Originally published as ``Page Overlays: An Enhanced Virtual Memory Framework to Enable Fine-grained Memory Management'' in the International Symposium on Computer Architecture, 2015~\cite{page-overlays} \end{footnotesize} \end{figure} \input{page-overlays/acronyms} As described in \Cref{sec:cow-problems}, the large page granularity organization of virtual memory results in significant inefficiency for many operations (e.g., copy-on-write). The source of this inefficiency is the fact that the large page granularity (e.g., 4KB) amplifies the amount of work that needs to be done for simple fine-granularity operations (e.g., on a few bytes). \Cref{sec:cow-problems} explains this problem with the example of the copy-on-write technique, wherein modification of a small amount of data can trigger a full page copy operation. In addition to copy-on-write, which is widely used in many applications, we observe that the large page granularity management hinders efficient implementation of several techniques like fine-grained deduplication~\cite{de,hicamp}, fine-grained data protection~\cite{mmp,legba}, cache-line-level compression~\cite{lcp,rmc,pcm-compression}, and fine-grained metadata management~\cite{ems,shadow-memory}. While managing memory at a finer granularity than pages enables several techniques that can significantly boost system performance and efficiency, simply reducing the page size results in an unacceptable increase in virtual-to-physical mapping table overhead and TLB pressure.
Prior works to address this problem either rely on software techniques~\cite{de} (high performance overhead), propose hardware support specific to a particular application~\cite{indra,shadow-memory,ems} (low value for cost), or significantly modify the structure of existing virtual memory~\cite{mmp,hicamp} (high cost for adoption). In this chapter, we describe our new virtual memory framework, called \emph{page overlays}, that enables efficient implementation of several fine-grained memory management techniques. \input{page-overlays/motivation} \input{page-overlays/overview} \input{page-overlays/design-implementation} \input{page-overlays/applications} \input{page-overlays/summary} \section{Page Overlays: Semantics and Benefits} \label{sec:overlays-semantics} We first present a detailed overview of the semantics of our proposed virtual memory framework, page overlays. Then we describe the benefits of overlays using the example of copy-on-write. \Cref{sec:overlays-applications} describes several applications of overlays. \subsection{Overview of Semantics of Our Framework} \label{sec:semantics-overview} \Cref{fig:overlay-detailed} shows our proposed framework. The figure shows a virtual page mapped to a physical page as in existing frameworks. For ease of explanation, we assume each page has only four cache lines. As shown in the figure, the virtual page is also mapped to another structure referred to as \emph{overlay}. There are two aspects to a page overlay. First, unlike the physical page, which has the same size as the virtual page, the overlay of a virtual page contains only a \emph{subset of cache lines} from the page, and hence is smaller in size than the virtual page. In the example in the figure, only two cache lines (C1 and C3) are present in the overlay. Second, when a virtual page has both a physical page and an overlay mapping, we define the access semantics such that any cache line that is present in the overlay is accessed from there. 
Only cache lines that are \emph{not} present in the overlay are accessed from the physical page. In our example, accesses to C1 and C3 are mapped to the overlay, and the remaining cache lines are mapped to the physical page. \begin{figure}[h] \centering \input{page-overlays/figures/overlay-detailed} \caption[Page overlays: Semantics]{Semantics of our proposed framework} \label{fig:overlay-detailed} \end{figure} \subsection{Overlay-on-write: A More Efficient Copy-on-write} \label{sec:overlay-applications-oow} We described the copy-on-write technique and its shortcomings in detail in \Cref{sec:bulk-data-problem}. Briefly, the copy-on-write technique maps multiple virtual pages that contain the same data to a single physical page in a read-only mode. When one of the pages receives a write, the system creates a full copy of the physical page and remaps the virtual page that received the write to the new physical page in a read-write mode. Our page overlay framework enables a more efficient version of the copy-on-write technique, which does not require a full page copy and hence avoids all associated shortcomings. We refer to this mechanism as \emph{overlay-on-write}. \Cref{fig:overlays-oow} shows how overlay-on-write works. When multiple virtual pages share the same physical page, the OS explicitly indicates to the hardware, through the page tables, that the cache lines of the pages should be copied-on-write. When one of the pages receives a write, our framework first creates an overlay that contains \emph{only the modified cache line}. It then maps the overlay to the virtual page that received the write. \begin{figure} \centering \input{page-overlays/figures/new-oow} \caption[Overlay-on-write technique]{Overlay-on-Write: A more efficient version of copy-on-write} \label{fig:overlays-oow} \end{figure} Overlay-on-write has many benefits over copy-on-write.
First, it avoids the need to copy the entire physical page before the write operation, thereby significantly reducing the latency on the critical path of execution (as well as the associated increase in memory bandwidth and energy). Second, it allows the system to eliminate significant redundancy in the data stored in main memory because only the overlay lines need to be stored, compared to a full page with copy-on-write. Finally, as we describe in \Cref{sec:overlaying-write-op}, our design exploits the fact that only a single cache line is remapped from the source physical page to the overlay to significantly reduce the latency of the remapping operation. Copy-on-write has a wide variety of applications (e.g., process forking~\cite{fork}, virtual machine cloning~\cite{snowflock}, operating system speculation~\cite{os-speculation-1,os-speculation-2,os-speculation-3}, deduplication~\cite{esx-server}, software debugging~\cite{flashback}, checkpointing~\cite{checkpointing,hpc-survey}). Overlay-on-write, being a faster and more efficient alternative to copy-on-write, can significantly benefit all these applications. \subsection{Benefits of the Overlay Semantics} \label{sec:overlay-benefits} Our framework offers two distinct benefits over the existing virtual memory frameworks. First, our framework \textbf{reduces the amount of work that the system has to do}, thereby improving system performance. For instance, in the overlay-on-write and sparse data structure (\Cref{sec:applications-sparse-data-structures}) techniques, our framework reduces the amount of data that needs to be copied/accessed. Second, our framework \textbf{enables significant reduction in memory capacity requirements}. Each overlay contains only a subset of cache lines from the virtual page, so the system can reduce overall memory consumption by compactly storing the overlays in main memory---i.e., for each overlay, store only the cache lines that are actually present in the overlay. 
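The contrast between the two techniques can be sketched as follows (a functional model, not the hardware path; names are ours):

```python
def copy_on_write(shared_page, line, data):
    """Copy-on-write: copy the whole page, then update one line."""
    new_page = list(shared_page)      # full-page copy on the critical path
    new_page[line] = data
    return new_page

def overlay_on_write(overlay, line, data):
    """Overlay-on-write: store only the modified cache line."""
    overlay[line] = data
    return overlay

shared = ["A", "B", "C", "D"]                                # 4 lines per page
assert copy_on_write(shared, 2, "X") == ["A", "B", "X", "D"]  # 4 lines stored
assert overlay_on_write({}, 2, "X") == {2: "X"}               # 1 line stored
assert shared == ["A", "B", "C", "D"]                         # shared page intact
```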
We quantitatively evaluate these benefits in \Cref{sec:overlays-applications} using two techniques and show that our framework is effective. \subsection{Managing the Overlay Memory Store{}} \label{sec:overlay-store} The \emph{Overlay Memory Store{}} (OMS{}) is the region in main memory where all the overlays are stored. As described in Section~\ref{sec:memory-read-op}, the OMS{} is accessed \emph{only} when an overlay access completely misses in the cache hierarchy. As a result, there are many simple ways to manage the OMS{}. One way is to have a small embedded core on the memory controller that can run a software routine that manages the OMS{} (similar mechanisms are supported in existing systems, e.g., Intel Active Management Technology~\cite{intel-amt}). Another approach is to let the memory controller manage the OMS{} by using a full physical page to store each overlay. While this approach will forgo the memory capacity benefit of our framework, it will still obtain the benefit of reducing overall work (Section~\ref{sec:overlay-benefits}). In this section, we describe a hardware mechanism that obtains both the work reduction and the memory capacity reduction benefits of using overlays. In our mechanism, the controller fully manages the OMS{} with minimal interaction with the OS. Managing the OMS{} has two aspects. First, because each overlay contains only a subset of cache lines from the virtual page, we need a \emph{compact representation for the overlay}, such that the OMS{} contains only cache lines that are actually present in the overlay. Second, the memory controller must manage multiple \emph{overlays of different sizes}. We need a mechanism to handle such different sizes and the associated free space fragmentation issues. Although operations that allocate new overlays or relocate existing overlays are slightly complex, they are triggered only when a dirty overlay cache line is written back to main memory. 
Therefore, these operations are rare and are not on the critical path of execution. \subsubsection{Compact Overlay Representation.} \label{sec:overlay-representation} One approach to compactly maintain the overlays is to store the cache lines in an overlay in the order in which they appear in the virtual page. While this representation is simple, if a new cache line is inserted into the overlay before other overlay cache lines, then the memory controller must \emph{move} such cache lines to create a slot for the inserted line. This is a read-modify-write operation, which results in significant performance overhead. We propose an alternative mechanism, in which each overlay is assigned a \emph{segment} in the OMS{}. The overlay is associated with an array of pointers---one pointer for each cache line in the virtual page. Each pointer either points to the slot within the overlay segment that contains the cache line or is invalid if the cache line is not present in the overlay. We store this metadata in a single cache line at the head of the segment. For segments smaller than 4KB, we use 64 5-bit slot pointers and a 32-bit vector indicating the free slots within a segment---a total of 352 bits. For a 4KB segment, we do not store any metadata and simply store each overlay cache line at an offset that is the same as the offset of the cache line within the virtual page. Figure~\ref{fig:segment} shows an overlay segment of size 256B, with only the first and the fourth cache lines of the virtual page mapped to the overlay. \begin{figure}[h] \centering \input{page-overlays/figures/segment} \caption[An example overlay segment in memory]{A 256B overlay segment (can store up to three overlay cache lines from the virtual page).
The first line stores the metadata (array of pointers and the free bit vector).} \label{fig:segment} \end{figure} \subsubsection{Managing Multiple Overlay Sizes.} \label{sec:multi-size-overlays} Different virtual pages may contain overlays of different sizes. The memory controller must store them efficiently in the available space. To simplify this management, our mechanism splits the available overlay space into segments of 5 fixed sizes: 256B, 512B, 1KB, 2KB, and 4KB. Each overlay is stored in the smallest segment that is large enough to store the overlay cache lines. When the memory controller requires a segment for a new overlay or when it wants to migrate an existing overlay to a larger segment, the controller identifies a free segment of the required size and updates the \texttt{\OMSshort{}addr{}} of the corresponding overlay page with the base address of the new segment. Individual cache lines are allocated their slots within the segment as and when they are written back to main memory. \subsubsection{Free Space Management.} To manage the free segments within the Overlay Memory Store{}, we use a simple linked-list based approach. For each segment size, the memory controller maintains a memory location or register that points to a free segment of that size. Each free segment in turn stores a pointer to another free segment of the same size or an invalid pointer denoting the end of the list. If the controller runs out of free segments of a particular size, it obtains a free segment of the next higher size and splits it into two. If the controller runs out of free 4KB segments, it requests the OS for an additional set of 4KB pages. During system startup, the OS proactively allocates a chunk of free pages to the memory controller. To reduce the number of memory operations needed to manage free segments, we use a \emph{grouped-linked-list} mechanism, similar to the one used by some file systems~\cite{fs-free-space}. 
\subsubsection{The Overlay Mapping Table{} (OMT{}) and the \OMTshort{} Cache{}.} \label{sec:omt-cache} The OMT{} maps pages from the Overlay Address{} Space to a specific segment in the Overlay Memory Store{}. For each page in the Overlay Address{} Space (i.e., for each \texttt{OPN}), the OMT{} contains an entry with the following pieces of information: 1)~the \texttt{OBitVector}\xspace, indicating which cache lines are present in the overlay, and 2)~the \OMS{} Address{} (\texttt{OMSaddr}), pointing to the segment that stores the overlay. To reduce the storage cost of the OMT{}, we store it hierarchically, similar to the virtual-to-physical mapping tables. The memory controller maintains the root address of the hierarchical table in a register. The \OMTshort{} Cache{} stores the following details regarding recently-accessed overlays: the \texttt{OBitVector}\xspace, the \texttt{\OMSshort{}addr{}}, and the overlay segment metadata (stored at the beginning of the segment). To access a cache line from an overlay, the memory controller consults the \OMTshort{} Cache{} with the overlay page number (\texttt{OPN}). In case of a hit, the controller acquires the necessary information to locate the cache line in the overlay memory store using the overlay segment metadata. In case of a miss, the controller performs an OMT{} walk (similar to a page table walk) to look up the corresponding OMT{} entry, and inserts it in the \OMTshort{} Cache{}. It also reads the overlay segment metadata and caches it in the OMT{} cache entry. The controller may modify entries of the OMT{}, as and when overlays are updated. When such a modified entry is evicted from the \OMTshort{} Cache{}, the memory controller updates the corresponding OMT{} entry in memory. \section{Overview of Design} \label{sec:page-overlays-overview} While our framework imposes simple access semantics, there are several key challenges to efficiently implement the proposed semantics. 
In this section, we first discuss these challenges with an overview of how we address them. We then provide a full overview of our proposed mechanism that addresses these challenges, thereby enabling a simple, efficient, and low-overhead design of our framework. \subsection{Challenges in Implementing Page Overlays} \label{sec:challenges} \textbf{Challenge 1:} \emph{Checking if a cache line is part of the overlay}. When the processor needs to access a virtual address, it must first check if the accessed cache line is part of the overlay. Since most modern processors use a physically-tagged L1 cache, this check is on the critical path of the L1 access. To address this challenge, we associate each virtual page with a bit vector that represents which cache lines from the virtual page are part of the overlay. We call this bit vector the \emph{overlay bit vector} (\texttt{OBitVector}\xspace). We cache the \texttt{OBitVector}\xspace in the processor TLB, thereby enabling the processor to quickly check if the accessed cache line is part of the overlay. \textbf{Challenge 2:} \emph{Identifying the physical address of an overlay cache line}. If the accessed cache line is part of the overlay (i.e., it is an \emph{overlay cache line}), the processor must quickly determine the physical address of the overlay cache line, as this address is required to access the L1 cache. The simple approach to address this challenge is to store in the TLB the base address of the region where the overlay is stored in main memory (we refer to this region as the \emph{overlay store}). While this may enable the processor to identify each overlay cache line with a unique physical address, this approach has three shortcomings when overlays are stored compactly in main memory. First, the overlay store (in main memory) does \emph{not} contain all the cache lines from the virtual page. Therefore, the processor must explicitly compute the address of the accessed overlay cache line. 
This will delay the L1 access. Second, most modern processors use a virtually-indexed physically-tagged L1 cache to partially overlap the L1 cache access with the TLB access. This technique requires the virtual index and the physical index of the cache line to be the same. However, since the overlay is smaller than the virtual page, the overlay physical index of a cache line will likely not be the same as the cache line's virtual index. As a result, the cache access will have to be delayed until the TLB access is complete. Finally, inserting a new cache line into an overlay is a relatively complex operation. Depending on how the overlay is represented in main memory, inserting a new cache line into an overlay can potentially change the addresses of other cache lines in the overlay. Handling this scenario requires a likely complex mechanism to ensure that the tags of these other cache lines are appropriately modified. In our design, we address this challenge by using two different addresses for each overlay---one to address the processor caches, called the {\em Overlay Address{}}, and another to address main memory, called the {\em \OMS{} Address{}}. As we will describe shortly, this \emph{dual-address design} enables the system to manage the overlay in main memory independently of how overlay cache lines are addressed in the processor caches, thereby overcoming the above three shortcomings. \textbf{Challenge 3:} \emph{Ensuring the consistency of the TLBs}. In our design, since the TLBs cache the \texttt{OBitVector}\xspace, when a cache line is moved from the physical page to the overlay or vice versa, any TLB that has cached the mapping for the corresponding virtual page should update its mapping to reflect the cache line remapping. The na\"{i}ve{} approach to addressing this challenge is to use a TLB shootdown~\cite{tlb-consistency-1,tlb-consistency-2}, which is expensive~\cite{didi,unitd}. 
Fortunately, in the above scenario, the TLB mapping is updated only for a \emph{single cache line} (rather than an entire virtual page). We propose a simple mechanism that exploits this fact and uses the cache coherence protocol to keep the TLBs coherent (Section~\ref{sec:overlaying-write-op}). \subsection{Overview of Our Design} \label{sec:final-implementation} A key aspect of our dual-address design, mentioned above, is that the address to access the cache (the {\em Overlay Address{}}) is taken from an \emph{address space} where the size of each overlay is the \emph{same} as that of a regular physical page. This enables our design to seamlessly address Challenge 2 (overlay cache line address computation), without incurring the drawbacks of the na\"{i}ve{} approach to address the challenge (described in Section~\ref{sec:challenges}). The question is, \emph{from what address space is the Overlay Address{} taken?} Towards answering this question, we observe that only a small fraction of the physical address space is backed by main memory (DRAM) and a large portion of the physical address space is \emph{unused}, even after a portion is consumed for memory-mapped I/O~\cite{mmio} and other system constructs. We propose to use this unused physical address space for the overlay cache address and refer to this space as the \emph{Overlay Address{} Space}.\footnote{A prior work, the Impulse Memory Controller~\cite{impulse}, uses the unused physical address space to communicate gather/scatter access patterns to the memory controller. The goal of Impulse~\cite{impulse} is different from ours, and it is difficult to use the design proposed by Impulse to enable fine-granularity memory management.} Figure~\ref{fig:implementation-overview} shows the overview of our design. There are three address spaces: the virtual address space, the physical address space, and the main memory address space. 
The main memory address space is split between regular physical pages and the \emph{Overlay Memory Store{}} (OMS), a region where the overlays are stored compactly. In our design, to associate a virtual page with an overlay, the virtual page is first mapped to a full-size page in the overlay address space using a direct mapping without any translation or indirection (Section~\ref{sec:v2omap}). The overlay page is in turn mapped to a location in the OMS using a mapping table stored in the memory controller (Section~\ref{sec:o2mmap}). We will describe the figure in more detail in Section~\ref{sec:overlays-design-implementation}. \begin{figure} \centering \input{page-overlays/figures/implementation-overview} \caption[Page overlays: Design overview]{Overview of our design. ``Direct mapping'' indicates that the corresponding mapping is implicit in the source address. OMT = Overlay Mapping Table (Section~\ref{sec:o2mmap}).} \label{fig:implementation-overview} \end{figure} \subsection{Benefits of Our Design} \label{sec:implementation-benefits} There are three main benefits of our high-level design. First, our approach makes no changes to the way the existing VM framework maps virtual pages to physical pages. This is very important, as the system can treat overlays as an inexpensive feature that can be turned on only when the application benefits from it. Second, as mentioned before, by using two distinct addresses for each overlay, our implementation decouples the way the caches are addressed from the way overlays are stored in main memory. This enables the system to treat overlay cache accesses very similarly to regular cache accesses, and consequently requires very few changes to the existing hardware structures (e.g., it works seamlessly with virtually-indexed physically-tagged caches). Third, as we will describe in the next section, in our design, the Overlay Memory Store (in main memory) is accessed only when an access completely misses in the cache hierarchy.
This 1)~greatly reduces the number of operations related to managing the OMS, 2)~reduces the amount of information that needs to be cached in the processor TLBs, and 3)~more importantly, enables the memory controller to completely manage the OMS with \emph{minimal} interaction with the OS. \section{Summary} \label{sec:overlays-summary} In this chapter, we introduced a new, simple framework that enables fine-grained memory management. Our framework augments virtual memory with a concept called \emph{overlays}. Each virtual page can be mapped to both a physical page and an overlay. The overlay contains only a subset of cache lines from the virtual page, and cache lines that are present in the overlay are accessed from there. We show that our proposed framework, with its simple access semantics, enables several fine-grained memory management techniques, without significantly altering the existing VM framework. We quantitatively demonstrate the benefits of our framework with two applications: 1)~\emph{overlay-on-write}, an efficient alternative to copy-on-write, and 2)~an efficient hardware representation of sparse data structures. Our evaluations show that our framework significantly improves performance and reduces memory capacity requirements for both applications (e.g., 15\% performance improvement and 53\% memory capacity reduction, on average, for \texttt{fork}\xspace over traditional copy-on-write). Finally, we discuss five other potential applications for page overlays. \section{Applications} \label{sec:applications} RowClone can be used to accelerate any bulk copy and initialization operation to improve both system performance and energy efficiency. In this chapter, we quantitatively evaluate the efficacy of RowClone by using it to accelerate two primitives widely used by modern system software: 1)~Copy-on-Write and 2)~Bulk Zeroing. We now describe these primitives followed by several applications that frequently trigger them.
\subsection{Primitives Accelerated by RowClone} \label{sec:apps-primitives} \emph{Copy-on-Write} (CoW) is a technique used by most modern operating systems (OS) to postpone an expensive copy operation until it is actually needed. When data of one virtual page needs to be copied to another, instead of creating a copy, the OS points both virtual pages to the same physical page (source) and marks the page as read-only. In the future, when one of the sharers attempts to write to the page, the OS allocates a new physical page (destination) for the writer and copies the contents of the source page to the newly allocated page. Fortunately, prior to allocating the destination page, the OS already knows the location of the source physical page. Therefore, it can ensure that the destination is allocated in the same subarray as the source, thereby enabling the processor to use FPM to perform the copy. \emph{Bulk Zeroing} (BuZ) is an operation where a large block of memory is zeroed out. As mentioned in Section~\ref{sec:bulk-initialization}, our mechanism maintains a reserved row that is fully initialized to zero in each subarray. For each row in the destination region to be zeroed out, the processor uses FPM to copy the data from the reserved zero-row of the corresponding subarray to the destination row. \subsection{Applications that Use CoW/BuZ} \label{sec:apps-cow-zeroing} We now describe seven example applications or use-cases that extensively use the CoW or BuZ operations. Note that these are just a small number of example scenarios that incur a large number of copy and initialization operations. \textit{Process Forking.} \texttt{fork}\xspace is a frequently-used system call in modern operating systems (OS). When a process (parent) calls \texttt{fork}\xspace, it creates a new process (child) with the exact same memory image and execution state as the parent. This semantics of \texttt{fork}\xspace makes it useful for different scenarios. 
Common uses of the \texttt{fork}\xspace system call are to 1)~create new processes, and 2)~create stateful threads from a single parent thread in multi-threaded programs. One main limitation of \texttt{fork}\xspace is that it results in a CoW operation whenever the child/parent updates a shared page. Hence, despite its wide usage, as a result of the large number of copy operations triggered by \texttt{fork}\xspace, it remains one of the most expensive system calls in terms of memory performance~\cite{fork-exp}. \textit{Initializing Large Data Structures.} Initializing large data structures often triggers Bulk Zeroing. In fact, many managed languages (e.g., C\#, Java, PHP) require zero initialization of variables to ensure memory safety~\cite{why-nothing-matters}. In such cases, to reduce the overhead of zeroing, memory is zeroed-out in bulk. \textit{Secure Deallocation.} Most operating systems (e.g., Linux~\cite{linux-security}, Windows~\cite{windows-security}, Mac OS X~\cite{macos-security}) zero out pages newly allocated to a process. This is done to prevent malicious processes from gaining access to the data that previously belonged to other processes or the kernel itself. Not doing so can potentially lead to security vulnerabilities, as shown by prior works~\cite{shredding,sunshine,coldboot,disclosure}. \textit{Process Checkpointing.} Checkpointing is an operation during which a consistent version of a process state is backed-up, so that the process can be restored from that state in the future. This checkpoint-restore primitive is useful in many cases including high-performance computing servers~\cite{plfs}, software debugging with reduced overhead~\cite{flashback}, hardware-level fault and bug tolerance mechanisms~\cite{self-test,hardware-bug}, and speculative OS optimizations to improve performance~\cite{os-speculation-2,os-speculation-1}. 
However, to ensure that the checkpoint is consistent (i.e., the original process does not update data while the checkpointing is in progress), the pages of the process are marked with copy-on-write. As a result, checkpointing often results in a large number of CoW operations. \textit{Virtual Machine Cloning/Deduplication.} Virtual machine (VM) cloning~\cite{snowflock} is a technique to significantly reduce the startup cost of VMs in a cloud computing server. Similarly, deduplication is a technique employed by modern hypervisors~\cite{esx-server} to reduce the overall memory capacity requirements of VMs. With this technique, different VMs share physical pages that contain the same data. Similar to forking, both these operations likely result in a large number of CoW operations for pages shared across VMs. \textit{Page Migration.} Bank conflicts, i.e., concurrent requests to different rows within the same bank, typically result in reduced row buffer hit rate and hence degrade both system performance and energy efficiency. Prior work~\cite{micropages} proposed techniques to mitigate bank conflicts using page migration. The PSM mode of RowClone can be used in conjunction with such techniques to 1)~significantly reduce the migration latency and 2)~make the migrations more energy-efficient. \textit{CPU-GPU Communication.} In many current and future processors, the GPU is or is expected to be integrated on the same chip with the CPU. Even in such systems where the CPU and GPU share the same off-chip memory, the off-chip memory is partitioned between the two devices. As a consequence, whenever a CPU program wants to offload some computation to the GPU, it has to copy all the necessary data from the CPU address space to the GPU address space~\cite{cpu-gpu}. When the GPU computation is finished, all the data needs to be copied back to the CPU address space. This copying involves a significant overhead. 
By spreading out the GPU address space over all subarrays and mapping the application data appropriately, RowClone can significantly speed up these copy operations. Note that communication between different processors and accelerators in a heterogeneous System-on-a-chip (SoC) is done similarly to the CPU-GPU communication and can also be accelerated by RowClone. We now quantitatively compare RowClone to existing systems and show that RowClone significantly improves both system performance and energy efficiency. \subsection{Mechanism for Bulk Data Copy} \label{sec:bulk-copy} When the data from a source row (\texttt{src}\xspace) needs to be copied to a destination row (\texttt{dst}\xspace), there are three possible cases depending on the location of \texttt{src}\xspace and \texttt{dst}\xspace: 1)~\texttt{src}\xspace and \texttt{dst}\xspace are within the same subarray, 2)~\texttt{src}\xspace and \texttt{dst}\xspace are in different banks, 3)~\texttt{src}\xspace and \texttt{dst}\xspace are in different subarrays within the same bank. For case 1 and case 2, RowClone uses FPM and PSM, respectively, to complete the operation (as described in Sections~\ref{sec:rowclone-fpm} and \ref{sec:rowclone-psm}). For the third case, when \texttt{src}\xspace and \texttt{dst}\xspace are in different subarrays within the same bank, one can imagine a mechanism that uses the global bitlines (shared across all subarrays within a bank -- described in \cite{salp}) to copy data across the two rows in different subarrays. However, we do not employ such a mechanism for two reasons. First, it is not possible in today's DRAM chips to activate multiple subarrays within the same bank simultaneously. Second, even if we enable simultaneous activation of multiple subarrays, as in~\cite{salp}, transferring data from one row buffer to another using the global bitlines requires the bank I/O circuitry to switch between read and write modes for each cache line transfer. 
This switching incurs significant latency overhead. To keep our design simple, for such an intra-bank copy operation, our mechanism uses PSM to first copy the data from \texttt{src}\xspace to a temporary row (\texttt{tmp}\xspace) in a different bank. It then uses PSM again to copy the data back from \texttt{tmp}\xspace to \texttt{dst}\xspace. The capacity lost due to reserving one row within each bank is negligible (0.0015\% for a bank with 64k rows). \subsection{Mechanism for Bulk Data Initialization} \label{sec:bulk-initialization} Bulk data initialization sets a large block of memory to a specific value. To perform this operation efficiently, our mechanism first initializes a single DRAM row with the corresponding value. It then uses the appropriate copy mechanism (from Section~\ref{sec:bulk-copy}) to copy the data to the other rows to be initialized. Bulk Zeroing (or BuZ), a special case of bulk initialization, is a frequently occurring operation in today's systems~\cite{bulk-copy-initialize,why-nothing-matters}. To accelerate BuZ, one can reserve one row in each subarray that is always initialized to zero. By doing so, our mechanism can use FPM to efficiently zero out any row in DRAM by copying data from the reserved zero row of the corresponding subarray into the destination row. The capacity loss of reserving one row out of 512 rows in each subarray is very modest (0.2\%). While the reserved rows can potentially lead to gaps in the physical address space, we can use an appropriate memory interleaving technique that maps consecutive rows to different subarrays. Such a technique ensures that the reserved zero rows are contiguously located in the physical address space. Note that interleaving techniques commonly used in today's systems (e.g., row or cache line interleaving) have this property. \subsection{Fast Parallel Mode} \label{sec:rowclone-fpm} The Fast Parallel Mode (FPM) is based on the following three observations about DRAM.
\begin{enumerate} \item In a commodity DRAM module, each \texttt{ACTIVATE}\xspace command transfers data from a large number of DRAM cells (multiple kilobytes) to the corresponding array of sense amplifiers (Section~\ref{sec:dram-module}). \item Several rows of DRAM cells share the same set of sense amplifiers (Section~\ref{sec:dram-mat}). \item A DRAM cell is not strong enough to flip the state of the sense amplifier from one stable state to another stable state. In other words, if a cell is connected to an already activated sense amplifier (or bitline), then the data of the cell gets overwritten with the data on the sense amplifier. \end{enumerate} While the first two observations are direct implications of the design of commodity DRAM, the third observation exploits the fact that DRAM cells are small enough to cause only a small perturbation on the bitline voltage. Figure~\ref{fig:cell-fpm} pictorially shows how this observation can be used to copy data between two cells that share a sense amplifier. \begin{figure}[h] \centering \includegraphics{rowclone/figures/cell-fpm} \caption{RowClone: Fast Parallel Mode} \label{fig:cell-fpm} \end{figure} The figure shows two cells (\texttt{src}\xspace and \texttt{dst}\xspace) connected to a single sense amplifier. In the initial state, we assume that \texttt{src}\xspace is fully charged and \texttt{dst}\xspace is fully empty, and the sense amplifier is in the precharged state (\ding{202}). In this state, FPM issues an \texttt{ACTIVATE}\xspace to \texttt{src}\xspace. At the end of the activation operation, the sense amplifier moves to a stable state where the bitline is at a voltage level of V$_{DD}$\xspace and the charge in \texttt{src}\xspace is fully restored (\ding{203}). FPM follows this operation with an \texttt{ACTIVATE}\xspace to \texttt{dst}\xspace, without an intervening \texttt{PRECHARGE}\xspace.
This operation lowers the wordline corresponding to \texttt{src}\xspace and raises the wordline of \texttt{dst}\xspace, connecting \texttt{dst}\xspace to the bitline. Since the bitline is already fully activated, even though \texttt{dst}\xspace is initially empty, the perturbation caused by the cell is not sufficient to flip the state of the bitline. As a result, the sense amplifier continues to drive the bitline to V$_{DD}$\xspace, thereby pushing \texttt{dst}\xspace to a fully charged state (\ding{204}). It can be shown that regardless of the initial state of \texttt{src}\xspace and \texttt{dst}\xspace, the above operation copies the data from \texttt{src}\xspace to \texttt{dst}\xspace. Given that each \texttt{ACTIVATE}\xspace operates on an entire row of DRAM cells, the above operation can copy multiple kilobytes of data with just two back-to-back \texttt{ACTIVATE}\xspace operations. Unfortunately, modern DRAM chips do not allow another \texttt{ACTIVATE}\xspace to an already activated bank -- the expected result of such an action is undefined. This is because a modern DRAM chip allows at most one row (subarray) within each bank to be activated. If a bank that already has a row (subarray) activated receives an \texttt{ACTIVATE}\xspace to a different subarray, the currently activated subarray must first be precharged~\cite{salp}.\footnote{Some DRAM manufacturers design their chips to drop back-to-back {\texttt{ACTIVATE}\xspace}s to the same bank.} To support FPM, we propose the following change to the DRAM chip in the way it handles back-to-back {\texttt{ACTIVATE}\xspace}s. When an already activated bank receives an \texttt{ACTIVATE}\xspace to a row, the chip processes the command similar to any other \texttt{ACTIVATE}\xspace if and only if the command is to a row that belongs to the currently activated subarray.
If the row does not belong to the currently activated subarray, then the chip takes the action it normally does with back-to-back {\texttt{ACTIVATE}\xspace}s---e.g., drop it. Since the logic to determine the subarray corresponding to a row address is already present in today's chips, implementing FPM only requires a comparison to check if the row address of an \texttt{ACTIVATE}\xspace belongs to the currently activated subarray, the cost of which is almost negligible. \textbf{Summary.} To copy data from \texttt{src}\xspace to \texttt{dst}\xspace within the same subarray, FPM first issues an \texttt{ACTIVATE}\xspace to \texttt{src}\xspace. This copies the data from \texttt{src}\xspace to the subarray row buffer. FPM then issues an \texttt{ACTIVATE}\xspace to \texttt{dst}\xspace. This modifies the input to the subarray row-decoder from \texttt{src}\xspace to \texttt{dst}\xspace and connects the cells of \texttt{dst}\xspace row to the row buffer. This, in effect, copies the data from the sense amplifiers to the destination row. As we show in Section~\ref{sec:rowclone-analysis}, with these two steps, FPM copies a 4KB page of data 11.6x faster and with 74.4x less energy than an existing system. \textbf{Limitations.} FPM has two constraints that limit its general applicability. First, it requires the source and destination rows to be within the same subarray (i.e., share the same set of sense amplifiers). Second, it cannot partially copy data from one row to another. Despite these limitations, we show that FPM can be immediately applied to today's systems to accelerate two commonly used primitives in modern systems -- Copy-on-Write and Bulk Zeroing (Section~\ref{sec:applications}). In the following section, we describe the second mode of RowClone -- the Pipelined Serial Mode (PSM). Although not as fast or energy-efficient as FPM, PSM addresses these two limitations of FPM. \chapter{RowClone} \label{chap:rowclone} \begin{figure}[b!] 
\hrule\vspace{2mm} \begin{footnotesize} Originally published as ``RowClone: Fast and Energy-efficient In-DRAM Bulk Data Copy and Initialization'' in the International Symposium on Microarchitecture, 2013~\cite{rowclone} \end{footnotesize} \end{figure} In Section~\ref{sec:cow-problems}, we described the source of inefficiency in performing a page copy operation in existing systems. Briefly, in existing systems, a page copy operation (or any bulk copy operation) is at best performed one cache line at a time. The operation requires a large number of cache lines to be transferred back and forth on the main memory channel. As a result, a bulk copy operation incurs high latency, high bandwidth consumption, and high energy consumption. In this chapter, we present RowClone, a mechanism that can perform bulk copy and initialization operations completely inside DRAM. We show that this approach obviates the need to transfer large quantities of data on the memory channel, thereby significantly improving the efficiency of a bulk copy operation. As bulk data initialization (specifically bulk zeroing) can be viewed as a special case of a bulk copy operation, RowClone can be easily extended to perform such bulk initialization operations with high efficiency. \input{rowclone/overview} \input{rowclone/fpm} \input{rowclone/psm} \input{rowclone/copy-init-mechanism} \input{rowclone/system-design} \input{rowclone/applications} \input{rowclone/methodology} \input{rowclone/results} \input{rowclone/summary} \section{Methodology} \label{sec:methodology} \noindent\textbf{Simulation.} Our evaluations use an in-house cycle-level multi-core simulator along with a cycle-accurate command-level DDR3 DRAM simulator. 
The multi-core simulator models out-of-order cores, each with a private last-level cache.\footnote{Since our mechanism primarily affects off-chip memory traffic, we expect our results and conclusions to be similar with shared caches as well.} We integrate RowClone into the simulator at the command-level. We use DDR3 DRAM timing constraints~\cite{ddr3} to calculate the latency of different operations. Since \texttt{TRANSFER}\xspace operates similarly to \texttt{READ}\xspace/\texttt{WRITE}\xspace, we assume \texttt{TRANSFER}\xspace to have the same latency as \texttt{READ}\xspace/\texttt{WRITE}\xspace. For our energy evaluations, we use DRAM energy/power models from Rambus~\cite{rambus-power} and Micron~\cite{micron-power}. Although, in DDR3 DRAM, a row corresponds to 8KB across a rank, we assume a minimum in-DRAM copy granularity (Section~\ref{sec:memory-interleaving}) of 4KB -- same as the page size used by the operating system (Debian Linux) in our evaluations. For this purpose, we model a DRAM module with 512-byte rows per chip (4KB across a rank). Table~\ref{tab:parameters} specifies the major parameters used for our simulations. \begin{table}[h] \centering \input{rowclone/tables/parameters} \caption[RowClone: Simulation parameters]{Configuration of the simulated system} \label{tab:parameters} \end{table} \noindent\textbf{Workloads.} We evaluate the benefits of RowClone using 1)~a case study of the \texttt{fork}\xspace system call, an important operation used by modern operating systems, 2)~six copy and initialization intensive benchmarks: \emph{bootup}, \emph{compile}, \emph{forkbench}, \emph{memcached}~\cite{memcached}, \emph{mysql}~\cite{mysql}, and \emph{shell} (Section~\ref{res:rowclone-ii-apps} describes these benchmarks), and 3)~a wide variety of multi-core workloads comprising the copy/initialization intensive applications running alongside memory-intensive applications from the SPEC CPU2006 benchmark suite~\cite{spec2006}. 
Note that benchmarks such as SPEC CPU2006, which predominantly stress the CPU, typically use a small number of page copy and initialization operations and therefore would serve as poor individual evaluation benchmarks for RowClone. We collect instruction traces for our workloads using Bochs~\cite{bochs}, a full-system x86-64 emulator, running a GNU/Linux system. We modify the kernel's implementation of page copy/initialization to use the \texttt{memcopy}\xspace and \texttt{meminit}\xspace instructions and mark these instructions in our traces.\footnote{For our \texttt{fork}\xspace benchmark (described in Section~\ref{sec:res-fork}), we used the Wind River Simics full system simulator~\cite{simics} to collect the traces.} We collect 1-billion instruction traces of the representative portions of these workloads. We use the instruction throughput (IPC) metric to measure single-core performance. We evaluate multi-core runs using the weighted speedup metric, a widely-used measure of system throughput for multi-programmed workloads~\cite{weighted-speedup}, as well as five other performance/fairness/bandwidth/energy metrics, as shown in Table~\ref{tab:multi-core-ws}. \section{The RowClone DRAM Substrate} \label{sec:rowclone-overview} RowClone consists of two independent mechanisms that exploit several observations about DRAM organization and operation. Our first mechanism efficiently copies data between two rows of DRAM cells that share the same set of sense amplifiers (i.e., two rows within the same subarray). We call this mechanism the \emph{Fast Parallel Mode} (FPM). Our second mechanism efficiently copies cache lines between two banks within a module in a pipelined manner. We call this mechanism the \emph{Pipelined Serial Mode} (PSM). Although not as fast as FPM, PSM has fewer constraints and hence is more generally applicable. We now describe these two mechanisms in detail.
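Before turning to the two modes in detail, the gist of FPM's intra-subarray copy can be sketched as a command sequence. The \texttt{issue} callback below is a hypothetical stand-in for the memory controller's command interface, not part of any real DRAM specification:

```python
def fpm_copy(issue, src_row, dst_row):
    """Fast Parallel Mode: intra-subarray row copy (sketch).

    `issue` is a hypothetical callback that sends one DRAM command;
    src_row and dst_row must lie in the same subarray.
    """
    issue("ACTIVATE", src_row)   # sense amplifiers latch src's data
    issue("ACTIVATE", dst_row)   # dst's cells connect to the row buffer
                                 # and are overwritten with the latched data
    issue("PRECHARGE")           # close the bank for subsequent accesses

# Record the command trace for illustration:
trace = []
fpm_copy(lambda *cmd: trace.append(cmd), src_row=12, dst_row=13)
```

The entire row is copied with three commands, independent of how many cache lines it holds, which is the source of FPM's latency and energy advantage.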
\subsection{Pipelined Serial Mode} \label{sec:rowclone-psm} The Pipelined Serial Mode efficiently copies data from a source row in one bank to a destination row in a \emph{different} bank. PSM exploits the fact that a single internal bus that is shared across all the banks is used for both read and write operations. This creates the opportunity to copy an arbitrary quantity of data one cache line at a time from one bank to another in a pipelined manner. To copy data from a source row in one bank to a destination row in a different bank, PSM first activates the corresponding rows in both banks. It then puts the source bank in the read mode, the destination bank in the write mode, and transfers data one cache line (corresponding to a column of data---64 bytes) at a time. For this purpose, we propose a new DRAM command called \texttt{TRANSFER}\xspace. The \texttt{TRANSFER}\xspace command takes four parameters: 1)~source bank index, 2)~source column index, 3)~destination bank index, and 4)~destination column index. It copies the cache line corresponding to the source column index in the activated row of the source bank to the cache line corresponding to the destination column index in the activated row of the destination bank. Unlike \texttt{READ}\xspace/\texttt{WRITE}\xspace, which interact with the memory channel connecting the processor and main memory, \texttt{TRANSFER}\xspace does not transfer data outside the chip. Figure~\ref{fig:psm} pictorially compares the operation of the \texttt{TRANSFER}\xspace command with that of \texttt{READ}\xspace and \texttt{WRITE}\xspace. The dashed lines indicate the data flow corresponding to the three commands. As shown in the figure, in contrast to the \texttt{READ}\xspace or \texttt{WRITE}\xspace commands, \texttt{TRANSFER}\xspace does not transfer data from or to the memory channel.
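The PSM copy loop just described can be sketched in the same style. The \texttt{issue} callback and parameter names are hypothetical, and the loop assumes a straight column-to-column copy of one full 4KB row (64 cache lines):

```python
def psm_copy(issue, src_bank, src_row, dst_bank, dst_row, lines=64):
    """Pipelined Serial Mode: inter-bank copy, one cache line at a time
    (sketch; `issue` is a hypothetical command callback)."""
    issue("ACTIVATE", src_bank, src_row)  # open the source row
    issue("ACTIVATE", dst_bank, dst_row)  # open the destination row
    for col in range(lines):
        # TRANSFER moves one 64B column over the shared internal bus;
        # the data never leaves the chip.
        issue("TRANSFER", src_bank, col, dst_bank, col)
```

Because each \texttt{TRANSFER}\xspace moves only one cache line, the controller can interleave the loop with other memory commands, which is harder with the all-or-nothing FPM sequence.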
\begin{figure}[h] \centering \includegraphics[angle=90]{rowclone/figures/chip-read} \includegraphics[angle=90]{rowclone/figures/chip-write} \includegraphics[angle=90]{rowclone/figures/chip-transfer} \caption{RowClone: Pipelined Serial Mode} \label{fig:psm} \end{figure} \section{Evaluations} \label{sec:results} In this section, we quantitatively evaluate the benefits of RowClone. We first analyze the raw latency and energy improvement enabled by the DRAM substrate to accelerate a single 4KB copy and 4KB zeroing operation (Section~\ref{sec:rowclone-analysis}). We then discuss the results of our evaluation of RowClone using \texttt{fork}\xspace (Section~\ref{sec:res-fork}) and six copy/initialization intensive applications (Section~\ref{res:rowclone-ii-apps}). Section~\ref{sec:multi-core} presents our analysis of RowClone on multi-core systems and Section~\ref{sec:mc-dma} provides quantitative comparisons to memory controller based DMA engines. \subsection{Latency and Energy Analysis} \label{sec:rowclone-analysis} Figure~\ref{fig:timing} shows the sequence of commands issued by the baseline, FPM and PSM (inter-bank) to perform a 4KB copy operation. The figure also shows the overall latency incurred by each of these mechanisms, assuming DDR3-1066 timing constraints. Note that a 4KB copy involves copying 64 64B cache lines. For ease of analysis, only for this section, we assume that no cache line from the source or the destination region is cached in the on-chip caches. While the baseline serially reads each cache line individually from the source page and writes it back individually to the destination page, FPM parallelizes the copy operation of all the cache lines by using the large internal bandwidth available within a subarray. PSM, on the other hand, uses the new \texttt{TRANSFER}\xspace command to overlap the latency of the read and write operations involved in the page copy.
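A back-of-the-envelope latency model shows why the three command sequences differ so sharply. The timing values below are rough DDR3-class numbers chosen for illustration (not the exact DDR3-1066 parameters used in the evaluation), and the baseline model ignores bank-level parallelism, so the ratios it produces are coarser than the reported ones:

```python
# Illustrative DDR3-class timing parameters, in nanoseconds (assumed).
tRAS, tRP, tCL, tBURST = 37.5, 15.0, 15.0, 7.5
LINES = 64  # a 4KB page is 64 x 64B cache lines

def baseline_ns():
    # Serially read each line from src and write it back to dst
    # (very coarse: no bank-level parallelism, no row-buffer hits).
    return 2 * LINES * (tCL + tBURST)

def fpm_ns():
    # ACTIVATE-ACTIVATE-PRECHARGE, independent of the page size.
    return 2 * tRAS + tRP

def psm_ns():
    # Open both rows once, then pipeline 64 TRANSFERs.
    return tRAS + LINES * tBURST + tRP
```

Even this crude model reproduces the qualitative ordering: FPM is fastest, PSM is in between, and the line-by-line baseline is slowest.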
\begin{figure} \centering \includegraphics[scale=1.3]{rowclone/figures/timing} \caption[RowClone: Command sequence and latency comparison]{Command sequence and latency for Baseline, FPM, and Inter-bank PSM for a 4KB copy operation. Intra-bank PSM simply repeats the operations for Inter-bank PSM twice (source row to temporary row and temporary row to destination row). The figure is not drawn to scale.} \label{fig:timing} \end{figure} Table~\ref{tab:latency-energy} shows the reduction in latency and energy consumption due to our mechanisms for different cases of 4KB copy and zeroing operations. To be fair to the baseline, the results include only the energy consumed by the DRAM and the DRAM channel. We draw two conclusions from our results. \begin{table}[h!] \centering \input{rowclone/tables/latency-energy} \caption[RowClone: Latency/energy reductions]{DRAM latency and memory energy reductions due to RowClone} \label{tab:latency-energy} \end{table} First, FPM significantly improves both the latency and the energy consumed by bulk operations --- 11.6x and 6x reduction in latency of 4KB copy and zeroing, and 74.4x and 41.5x reduction in memory energy of 4KB copy and zeroing. Second, although PSM does not provide as much benefit as FPM, it still reduces the latency and energy of a 4KB inter-bank copy by 1.9x and 3.2x, while providing a more generally applicable mechanism. When an on-chip cache is employed, any line cached from the source or destination page can be served at a lower latency than accessing main memory. As a result, in such systems, the baseline will incur a lower latency to perform a bulk copy or initialization compared to a system without on-chip caches. However, as we show in the following sections (\ref{sec:res-fork}--\ref{sec:multi-core}), \emph{even in the presence of on-chip caching}, the raw latency/energy improvement due to RowClone translates to significant improvements in both overall system performance and energy efficiency. 
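The energy side can be sketched with a similarly coarse model. The per-operation energies below are placeholders chosen for illustration, not values from the Rambus or Micron models used in the evaluation:

```python
# Placeholder per-operation DRAM energies, in picojoules (assumed).
E_ACT, E_PRE, E_RDWR, E_IO = 300.0, 150.0, 250.0, 500.0
LINES = 64  # 4KB page = 64 x 64B cache lines

def baseline_copy_pj():
    # Each line is read and written, crossing the channel both ways.
    return LINES * (2 * E_RDWR + 2 * E_IO)

def fpm_copy_pj():
    # Two activations plus a precharge; nothing crosses the channel.
    return 2 * E_ACT + E_PRE
```

The gap is dominated by the per-line I/O energy the baseline pays 128 times and FPM pays not at all, which is why the energy reduction is even larger than the latency reduction.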
\subsection{The {\large{\texttt{fork}\xspace}} System Call} \label{sec:res-fork} As mentioned in Section~\ref{sec:apps-cow-zeroing}, \texttt{fork}\xspace is one of the most expensive yet frequently-used system calls in modern systems~\cite{fork-exp}. Since \texttt{fork}\xspace triggers a large number of CoW operations (as a result of updates to shared pages from the parent or child process), RowClone can significantly improve the performance of \texttt{fork}\xspace. The performance of \texttt{fork}\xspace depends on two parameters: 1)~the size of the address space used by the parent---which determines how much data may potentially have to be copied, and 2)~the number of pages updated after the \texttt{fork}\xspace operation by either the parent or the child---which determines how much data is actually copied. To exercise these two parameters, we create a microbenchmark, \texttt{forkbench}\xspace, which first creates an array of size $\mathcal{S}$\xspace and initializes the array with random values. It then forks itself. The child process updates $N$ random pages (by updating a cache line within each page) and exits; the parent process waits for the child process to complete before exiting itself. As such, we expect the number of copy operations to depend on $N$---the number of pages copied. Therefore, one may expect RowClone's performance benefits to be proportional to $N$. However, an application's performance typically depends on the {\em overall memory access rate}~\cite{mise}, and RowClone can only improve performance by reducing the {\em memory access rate due to copy operations}. As a result, we expect the performance improvement due to RowClone to primarily depend on the \emph{fraction} of memory traffic (total bytes transferred over the memory channel) generated by copy operations. We refer to this fraction as FMTC---Fraction of Memory Traffic due to Copies. 
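The structure of \texttt{forkbench}\xspace can be sketched directly with \texttt{os.fork} (POSIX only). A Python process touches far more memory than the array itself, so this conveys the shape of the benchmark rather than a faithful reproduction:

```python
import os, random

PAGE = 4096  # assumed OS page size

def forkbench(size_bytes, n_pages):
    """Sketch of forkbench: initialize an array of size S, fork, have
    the child update one cache line in each of N random pages and exit,
    and have the parent wait. Returns the child's exit status."""
    buf = bytearray(os.urandom(size_bytes))  # parent fills S bytes
    pid = os.fork()
    if pid == 0:
        # Child: each page touched triggers a Copy-on-Write fault.
        for _ in range(n_pages):
            page = random.randrange(size_bytes // PAGE)
            buf[page * PAGE] ^= 0xFF  # update one cache line in the page
        os._exit(0)
    _, status = os.waitpid(pid, 0)  # parent waits for the child
    return os.WEXITSTATUS(status)
```

Varying \texttt{size\_bytes} and \texttt{n\_pages} corresponds to varying $\mathcal{S}$\xspace and $N$ in the experiments above.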
Figure~\ref{plot:fork-copy} plots FMTC of \texttt{forkbench}\xspace for different values of $\mathcal{S}$\xspace (64MB and 128MB) and $N$ (2 to 16k) in the baseline system. As the figure shows, for both values of $\mathcal{S}$\xspace, FMTC increases with increasing $N$. This is expected as a higher $N$ (more pages updated by the child) leads to more CoW operations. However, because of the presence of other read/write operations (e.g., during the initialization phase of the parent), for a given value of $N$, FMTC is larger for $\mathcal{S}$\xspace= 64MB compared to $\mathcal{S}$\xspace= 128MB. Depending on the value of $\mathcal{S}$\xspace and $N$, anywhere between 14\% and 66\% of the memory traffic arises from copy operations. This shows that accelerating copy operations using RowClone has the potential to significantly improve the performance of the \texttt{fork}\xspace operation. \begin{figure}[h] \centering \includegraphics{rowclone/plots/fork-copy} \caption[Memory traffic due to copy in \texttt{forkbench}\xspace]{FMTC of \texttt{forkbench}\xspace for varying $\mathcal{S}$\xspace and $N$} \label{plot:fork-copy} \end{figure} \begin{figure}[h] \centering \includegraphics{rowclone/plots/fork-perf} \caption[RowClone: \texttt{forkbench}\xspace performance]{Performance improvement due to RowClone for \texttt{forkbench}\xspace with different values of $\mathcal{S}$\xspace and $N$} \label{plot:fork-perf} \end{figure} Figure~\ref{plot:fork-perf} plots the performance (IPC) of FPM and PSM for \texttt{forkbench}\xspace, normalized to that of the baseline system. We draw two conclusions from the figure. First, FPM improves the performance of \texttt{forkbench}\xspace for both values of $\mathcal{S}$\xspace and most values of $N$. The peak performance improvement is 2.2x for $N$ = 16k (30\% on average across all data points). As expected, the improvement of FPM increases as the number of pages updated increases.
The trend in performance improvement of FPM is similar to that of FMTC (Figure~\ref{plot:fork-copy}), confirming our hypothesis that FPM's performance improvement primarily depends on FMTC. Second, PSM does not provide considerable performance improvement over the baseline. This is because the large on-chip cache in the baseline system buffers the writebacks generated by the copy operations. These writebacks are flushed to memory at a later point without further delaying the copy operation. As a result, PSM, which just overlaps the read and write operations involved in the copy, does not improve latency significantly in the presence of a large on-chip cache. On the other hand, FPM, by copying all cache lines from the source row to the destination row in parallel, significantly reduces the latency compared to the baseline (which still needs to read the source blocks from main memory), resulting in high performance improvement. Figure~\ref{plot:fork-energy} shows the reduction in DRAM energy consumption (considering both the DRAM and the memory channel) of FPM and PSM modes of RowClone compared to that of the baseline for \texttt{forkbench}\xspace with $\mathcal{S}$\xspace $=64$MB. Similar to performance, the overall DRAM energy consumption also depends on the total memory access rate. As a result, RowClone's potential to reduce DRAM energy depends on the fraction of memory traffic generated by copy operations. In fact, our results also show that the DRAM energy reduction due to FPM and PSM correlates well with FMTC (Figure~\ref{plot:fork-copy}). By efficiently performing the copy operations, FPM reduces DRAM energy consumption by up to 80\% (average 50\%, across all data points). Similar to FPM, the energy reduction of PSM also increases with increasing $N$ with a maximum reduction of 9\% for $N$=16k.
\begin{figure}[h] \centering \includegraphics[scale=0.9]{rowclone/plots/fork-energy} \caption[RowClone: Energy consumption for \texttt{forkbench}\xspace]{Comparison of DRAM energy consumption of different mechanisms for \texttt{forkbench}\xspace ($\mathcal{S}$\xspace = 64MB)} \label{plot:fork-energy} \end{figure} In a system that is agnostic to RowClone, we expect the performance improvement and energy reduction of RowClone to be in between that of FPM and PSM. By making the system software aware of RowClone (Section~\ref{sec:os-changes}), we can approximate the maximum performance and energy benefits by increasing the likelihood of the use of FPM. \subsection{Copy/Initialization Intensive Applications} \label{res:rowclone-ii-apps} In this section, we analyze the benefits of RowClone on six copy/initialization intensive applications, including one instance of the \texttt{forkbench}\xspace described in the previous section. Table~\ref{tab:iias} describes these applications. \begin{table}[h]\small \centering \input{rowclone/tables/iias} \caption[RowClone: Copy/initialization-intensive benchmarks]{Copy/Initialization-intensive benchmarks} \label{tab:iias} \end{table} Figure~\ref{plot:memfrac-apps} plots the fraction of memory traffic due to copy, initialization, and regular read/write operations for the six applications. For these applications, between 10\% and 80\% of the memory traffic is generated by copy and initialization operations. \begin{figure}[h] \centering \includegraphics{rowclone/plots/memfrac-apps} \caption[Copy/initialization intensive benchmark: Memory traffic breakdown]{Fraction of memory traffic due to read, write, copy and initialization} \label{plot:memfrac-apps} \end{figure} Figure~\ref{plot:perf-apps} compares the IPC of the baseline with that of RowClone and a variant of RowClone, RowClone-ZI (described shortly). 
The RowClone-based initialization mechanism slightly degrades performance for the applications that have a negligible number of copy operations (\emph{mcached}, \emph{compile}, and \emph{mysql}). Further analysis indicated that, for these applications, although the operating system zeroes out any newly allocated page, the application typically accesses almost all cache lines of a page immediately after the page is zeroed out. There are two phases: 1)~the phase when the OS zeroes out the page, and 2)~the phase when the application accesses the cache lines of the page. While the baseline incurs cache misses during phase 1, RowClone, as a result of performing the zeroing operation completely in memory, incurs cache misses in phase 2. However, the baseline zeroing operation is heavily optimized for memory-level parallelism (MLP)~\cite{effra,runahead}. In contrast, the cache misses in phase 2 have low MLP. As a result, incurring the same misses in phase 2 (as with RowClone) causes higher overall stall time for the application (because the latencies for the misses are serialized) than incurring them in phase 1 (as in the baseline), resulting in RowClone's performance degradation compared to the baseline. To address this problem, we introduce a variant of RowClone, RowClone-Zero-Insert (RowClone-ZI). RowClone-ZI not only zeroes out a page in DRAM but also inserts a zero cache line into the processor cache corresponding to each cache line in the page that is zeroed out. By doing so, RowClone-ZI avoids the cache misses during both phase 1 (zeroing operation) and phase 2 (when the application accesses the cache lines of the zeroed page). As a result, it improves performance for all benchmarks, notably \texttt{forkbench}\xspace (by 66\%) and \emph{shell} (by 40\%), compared to the baseline. \begin{figure}[h] \centering \includegraphics{rowclone/plots/perf-apps} \caption[RowClone-ZI performance]{Performance improvement of RowClone and RowClone-ZI.
\normalfont{Value on top indicates percentage improvement of RowClone-ZI over baseline.}} \label{plot:perf-apps} \end{figure} Table~\ref{tab:energy-apps} shows the percentage reduction in DRAM energy and memory bandwidth consumption with RowClone and RowClone-ZI compared to the baseline. While RowClone significantly reduces both energy and memory bandwidth consumption for \emph{bootup}, \emph{forkbench} and \emph{shell}, it has negligible impact on both metrics for the remaining three benchmarks. The lack of energy and bandwidth benefits in these three applications is due to serial execution caused by the cache misses incurred when the processor accesses the zeroed out pages (i.e., {\em phase 2}, as described above), which also leads to performance degradation in these workloads (as also described above). RowClone-ZI, which eliminates the cache misses in {\em phase 2}, significantly reduces energy consumption (between 15\% and 69\%) and memory bandwidth consumption (between 16\% and 81\%) for all benchmarks compared to the baseline. We conclude that RowClone-ZI can effectively improve performance, memory energy, and memory bandwidth efficiency in page copy and initialization intensive single-core workloads. \begin{table}[h]\small \centering \input{rowclone/tables/energy-apps} \caption[RowClone: DRAM energy/bandwidth reduction]{DRAM energy and bandwidth reduction due to RowClone and RowClone-ZI (indicated as +ZI)} \label{tab:energy-apps} \end{table} \subsection{Multi-core Evaluations} \label{sec:multi-core} As RowClone performs bulk data operations completely within DRAM, it significantly reduces the memory bandwidth consumed by these operations. As a result, RowClone can benefit other applications running concurrently on the same system.
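The multi-core results that follow are reported in weighted speedup. Assuming each application's IPC is available both when run together and when run alone, the metric is simply:

```python
def weighted_speedup(ipc_together, ipc_alone):
    """Sum over all cores of IPC(shared) / IPC(alone); higher is better.
    Both arguments are per-application IPC lists in matching order."""
    return sum(t / a for t, a in zip(ipc_together, ipc_alone))
```

For example, two applications each running at half their standalone IPC yield a weighted speedup of 1.0, the same as one application running alone at full speed.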
We evaluate this benefit of RowClone by running our copy/initialization-intensive applications alongside memory-intensive applications from the SPEC CPU2006 benchmark suite~\cite{spec2006} (i.e., those applications with last-level cache MPKI greater than 1). Table~\ref{tab:benchmarks} lists the set of applications used for our multi-programmed workloads. \begin{table}[h]\small \centering \input{rowclone/tables/benchmarks} \caption[RowClone: Benchmarks for multi-core evaluation]{List of benchmarks used for multi-core evaluation} \label{tab:benchmarks} \end{table} We generate multi-programmed workloads for 2-core, 4-core and 8-core systems. In each workload, half of the cores run copy/initialization-intensive benchmarks and the remaining cores run memory-intensive SPEC benchmarks. Benchmarks from each category are chosen at random. Figure~\ref{plot:s-curve} plots the performance improvement due to RowClone and RowClone-ZI for the 50 4-core workloads we evaluated (sorted based on the performance improvement due to RowClone-ZI). Two conclusions are in order. First, although RowClone degrades performance of certain 4-core workloads (with \emph{compile}, \emph{mcached} or \emph{mysql} benchmarks), it significantly improves performance for all other workloads (by 10\% across all workloads). Second, as in our single-core evaluations (Section~\ref{res:rowclone-ii-apps}), RowClone-ZI eliminates the performance degradation due to RowClone and consistently outperforms both the baseline and RowClone for all workloads (20\% on average).
\begin{figure}[h] \centering \includegraphics[scale=0.9]{rowclone/plots/s-curve} \caption[RowClone: 4-core performance]{System performance improvement of RowClone for 4-core workloads} \label{plot:s-curve} \end{figure} Table~\ref{tab:multi-core-ws} shows the number of workloads and six metrics that evaluate the performance, fairness, memory bandwidth and energy efficiency improvement due to RowClone compared to the baseline for systems with 2, 4, and 8 cores. For all three systems, RowClone significantly outperforms the baseline on all metrics. \begin{table}[h]\small \centering \input{rowclone/tables/multi-core-ws} \caption[RowClone: Multi-core results]{Multi-core performance, fairness, bandwidth, and energy} \label{tab:multi-core-ws} \end{table} To provide more insight into the benefits of RowClone on multi-core systems, we classify our copy/initialization-intensive benchmarks into two categories: 1)~moderately copy/initialization-intensive (\emph{compile}, \emph{mcached}, and \emph{mysql}) and 2)~highly copy/initialization-intensive (\emph{bootup}, \emph{forkbench}, and \emph{shell}). Figure~\ref{plot:multi-trend} shows the average improvement in weighted speedup for the different multi-core workloads, categorized based on the number of highly copy/initialization-intensive benchmarks. As the trends indicate, the performance improvement increases with increasing number of such benchmarks for all three multi-core systems, indicating the effectiveness of RowClone in accelerating bulk copy/initialization operations. \begin{figure}[h!] \centering \includegraphics[scale=0.9]{rowclone/plots/multi-trend} \caption[RowClone: Effect of increasing copy/initialization intensity]{Effect of increasing copy/initialization intensity} \label{plot:multi-trend} \end{figure} We conclude that RowClone is an effective mechanism to improve system performance, energy efficiency and bandwidth efficiency of future, bandwidth-constrained multi-core systems.
\subsection{Memory-Controller-based DMA} \label{sec:mc-dma} One alternative way to perform a bulk data operation is to use the memory controller to complete the operation using the regular DRAM interface (similar to some prior approaches~\cite{bulk-copy-initialize,copy-engine}). We refer to this approach as the memory-controller-based DMA (MC-DMA). MC-DMA can potentially avoid the cache pollution caused by inserting blocks (involved in the copy/initialization) unnecessarily into the caches. However, it still requires data to be transferred over the memory bus. Hence, it suffers from the large latency, bandwidth, and energy consumption associated with the data transfer. Because the applications used in our evaluations do not suffer from cache pollution, we expect the MC-DMA to perform comparably to or worse than the baseline. In fact, our evaluations show that MC-DMA degrades performance compared to our baseline by 2\% on average for the six copy/initialization intensive applications (16\% compared to RowClone). In addition, the MC-DMA does not conserve any DRAM energy, unlike RowClone. \section{Summary} In this chapter, we introduced RowClone, a technique for exporting bulk data copy and initialization operations to DRAM. Our fastest mechanism copies an entire row of data between rows that share a row buffer, with very few changes to the DRAM architecture, while leading to significant reduction in the latency and energy of performing bulk copy/initialization. We also proposed a more flexible mechanism that uses the internal bus of a chip to copy data between different banks within a chip. Our evaluations using copy and initialization intensive applications show that RowClone can significantly reduce memory bandwidth consumption for both single-core and multi-core systems (by 28\% on average for 8-core systems), resulting in significant system performance improvement and memory energy reduction (27\% and 17\%, on average, for 8-core systems).
We conclude that our approach of performing bulk copy and initialization completely in DRAM is effective in improving both system performance and energy efficiency for future, bandwidth-constrained, multi-core systems. We hope that greatly reducing the bandwidth, energy and performance cost of bulk data copy and initialization can lead to new and easier ways of writing applications that would otherwise need to be designed to avoid bulk data copy and initialization operations. \section{End-to-end System Design} \label{sec:system-design} So far, we have described RowClone, a DRAM substrate that can efficiently perform bulk data copy and initialization. In this section, we describe the changes to the ISA, the processor microarchitecture, and the operating system that will enable the system to efficiently exploit the RowClone DRAM substrate. \subsection{ISA Support} \label{sec:isa-changes} To enable the software to communicate occurrences of bulk copy and initialization operations to the hardware, we introduce two new instructions to the ISA: \texttt{memcopy}\xspace and \texttt{meminit}\xspace. Table~\ref{tab:isa-semantics} describes the semantics of these two new instructions. We deliberately keep the semantics of the instructions simple in order to relieve the software from worrying about microarchitectural aspects of RowClone such as row size, alignment, etc.~(discussed in Section~\ref{sec:offset-alignment-size}). Note that such instructions are already present in some of the instruction sets in modern processors -- e.g., \texttt{rep movsd}, \texttt{rep stosb}, \texttt{ermsb} in x86~\cite{x86-ermsb} and \texttt{mvcl} in IBM S/390~\cite{s390}. \begin{table}[h] \centering \input{rowclone/tables/isa-semantics} \caption{Semantics of the \texttt{memcopy}\xspace and \texttt{meminit}\xspace instructions} \label{tab:isa-semantics} \end{table} There are three points to note regarding the execution semantics of these operations.
First, the processor does not guarantee atomicity for either \texttt{memcopy}\xspace or \texttt{meminit}\xspace; note that existing systems also do not guarantee atomicity for such operations. Therefore, the software must take care of atomicity requirements using explicit synchronization. However, the microarchitectural implementation will ensure that any data in the on-chip caches is kept consistent during the execution of these operations (Section~\ref{sec:rowclone-cache-coherence}). Second, the processor will handle any page faults during the execution of these operations. Third, the processor can take interrupts during the execution of these operations. \subsection{Processor Microarchitecture Support} \label{sec:uarch-changes} The microarchitectural implementation of the new instructions, \texttt{memcopy}\xspace and \texttt{meminit}\xspace, has two parts. The first part determines if a particular instance of \texttt{memcopy}\xspace or \texttt{meminit}\xspace can be fully/partially accelerated by RowClone. The second part involves the changes required to the cache coherence protocol to ensure coherence of data in the on-chip caches. We discuss these parts in this section. \subsubsection{Source/Destination Alignment and Size} \label{sec:offset-alignment-size} For the processor to accelerate a copy/initialization operation using RowClone, the operation must satisfy certain alignment and size constraints. Specifically, for an operation to be accelerated by FPM, 1)~the source and destination regions should be within the same subarray, 2)~the source and destination regions should be row-aligned, and 3)~the operation should span an entire row. On the other hand, for an operation to be accelerated by PSM, the source and destination regions should be cache line-aligned and the operation must span a full cache line.
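Under these constraints, the partitioning of a request can be sketched as follows. The row and line sizes and the \texttt{same\_subarray} helper are assumptions (the latter would be derived from the DRAM module's internal address mapping), and the sketch assumes the source and destination share the same alignment:

```python
ROW, LINE = 8192, 64  # assumed row and cache-line sizes, in bytes

def same_subarray(src, dst):
    # Hypothetical stub: the real answer depends on the module's
    # internal row-to-subarray mapping.
    return True

def split_copy(src, dst, size):
    """Partition a copy request into (FPM, PSM, CPU) byte counts."""
    fpm = psm = cpu = 0
    off = 0
    while off < size:
        s, d, rem = src + off, dst + off, size - off
        if (s % ROW == 0 and d % ROW == 0 and rem >= ROW
                and same_subarray(s, d)):
            fpm += ROW           # row-aligned, row-sized: use FPM
            off += ROW
        elif s % LINE == 0 and d % LINE == 0 and rem >= LINE:
            psm += LINE          # line-aligned, line-sized: use PSM
            off += LINE
        else:
            step = min(LINE - s % LINE, rem)
            cpu += step          # leftover bytes fall back to the CPU
            off += step
    return fpm, psm, cpu
```

A row-aligned 4KB-plus-a-bit request would thus be served mostly by FPM, with a cache line handled by PSM and the trailing bytes left to the processor.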
Upon encountering a \texttt{memcopy}\xspace/\texttt{meminit}\xspace instruction, the processor divides the region to be copied/initialized into three portions: 1)~row-aligned row-sized portions that can be accelerated using FPM, 2)~cache line-aligned cache line-sized portions that can be accelerated using PSM, and 3)~the remaining portions that can be performed by the processor. For the first two regions, the processor sends appropriate requests to the memory controller which completes the operations and sends an acknowledgment back to the processor. Since \texttt{TRANSFER}\xspace copies only a single cache line, a bulk copy using PSM can be interleaved with other commands to memory. The processor completes the operation for the third region similarly to how it is done in today's systems. Note that the CPU can offload all these operations to the memory controller. In such a design, the CPU need not be made aware of the DRAM organization (e.g., row size and alignment, subarray mapping, etc.). \subsubsection{Managing On-Chip Cache Coherence} \label{sec:rowclone-cache-coherence} RowClone allows the memory controller to directly read/modify data in memory without going through the on-chip caches. Therefore, to ensure cache coherence, the controller appropriately handles cache lines from the source and destination regions that may be present in the caches before issuing the copy/initialization operations to memory. First, the memory controller writes back any dirty cache line from the source region as the main memory version of such a cache line is likely stale. Copying the data in-memory before flushing such cache lines will lead to stale data being copied to the destination region. Second, the controller invalidates any cache line (clean or dirty) from the destination region that is cached in the on-chip caches. This is because after performing the copy operation, the cached version of these blocks may contain stale data. 
The controller already has the ability to perform such flushes and invalidations to support Direct Memory Access (DMA)~\cite{intel-dma}. After performing the necessary flushes and invalidations, the memory controller performs the copy/initialization operation. To ensure that cache lines of the destination region are not cached again by the processor in the meantime, the memory controller blocks all requests (including prefetches) to the destination region until the copy or initialization operation is complete. While performing the flushes and invalidations mentioned above will ensure coherence, we propose a modified solution to handle dirty cache lines of the source region to reduce memory bandwidth consumption. When the memory controller identifies a dirty cache line belonging to the source region while performing a copy, it creates an in-cache copy of the source cache line with the tag corresponding to the destination cache line. This has two benefits. First, it avoids the additional memory flush required for the dirty source cache line. Second, and more importantly, the controller does not have to wait for all the dirty source cache lines to be flushed before it can perform the copy. In Section~\ref{res:rowclone-ii-apps}, we will consider another optimization, called RowClone-Zero-Insert, which inserts clean zero cache lines into the cache to further optimize Bulk Zeroing. This optimization does not require further changes to our proposed modifications to the cache coherence protocol. Although RowClone requires the controller to manage cache coherence, it does not affect memory consistency --- i.e., the semantics of concurrent readers or writers to the source or destination regions involved in a bulk copy or initialization operation. As mentioned before, such an operation is not guaranteed to be atomic even in current systems, and the software needs to perform the operation within a critical section to ensure atomicity.
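The coherence steps above, including the re-tagging optimization for dirty source lines, can be summarized in a toy model (our simplification: a single-level cache modeled as a dictionary, one cache line per address; none of this is the paper's interface):

```python
def coherent_bulk_copy(cache, memory, src_lines, dst_lines):
    """cache: {line_addr: (data, dirty)}; memory: {line_addr: data}.
    Mirrors the text: invalidate cached destination lines, re-tag dirty
    source lines as destination lines instead of flushing them, then
    perform the in-memory (FPM/PSM) copy."""
    for s, d in zip(src_lines, dst_lines):
        if s in cache and cache[s][1]:       # dirty source line:
            cache[d] = (cache[s][0], True)   # in-cache copy, tagged as destination
        elif d in cache:
            del cache[d]                     # stale after the copy: invalidate
        # (requests to d are assumed blocked until the operation completes)
    for s, d in zip(src_lines, dst_lines):   # the copy performed in memory
        memory[d] = memory[s]
    return cache, memory
```

Note that after a re-tag the memory copy of the destination line is stale, but the cache holds the up-to-date dirty line under the destination tag, so ordinary writeback later restores consistency.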
\subsection{Software Support} \label{sec:os-changes} The minimum support required from the system software is the use of the proposed \texttt{memcopy}\xspace and \texttt{meminit}\xspace instructions to indicate bulk data operations to the processor. Although one can have a working system with just this support, maximum latency and energy benefits can be obtained if the hardware is able to accelerate most copy operations using FPM rather than PSM. Increasing the likelihood of the use of the FPM mode requires further support from the operating system (OS) on two aspects: 1)~page mapping, and 2)~granularity of copy/initialization. \subsubsection{Subarray-Aware Page Mapping} \label{sec:subarray-awareness} The use of FPM requires the source row and the destination row of a copy operation to be within the same subarray. Therefore, to maximize the use of FPM, the OS page mapping algorithm should be aware of subarrays so that it can allocate a destination page of a copy operation in the same subarray as the source page. More specifically, the OS should have knowledge of which pages map to the same subarray in DRAM. We propose that DRAM expose this information to software using the small EEPROM that already exists in today's DRAM modules. This EEPROM, called the Serial Presence Detect (SPD)~\cite{spd}, stores information about the DRAM chips that is read by the memory controller at system bootup. Exposing the subarray mapping information will require only a few additional bytes to communicate the bits of the physical address that map to the subarray index.\footnote{To increase DRAM yield, DRAM manufacturers design chips with spare rows that can be mapped to faulty rows~\cite{spare-row-mapping}. 
Our mechanism can work with this technique by either requiring that each faulty row is remapped to a spare row within the same subarray, or exposing the location of all faulty rows to the memory controller so that it can use PSM to copy data across such rows.} Once the OS has the mapping information between physical pages and subarrays, it maintains multiple pools of free pages, one pool for each subarray. When the OS allocates the destination page for a copy operation (e.g., for a \emph{Copy-on-Write} operation), it chooses the page from the same pool (subarray) as the source page. Note that this approach does not require contiguous pages to be placed within the same subarray. As mentioned before, commonly used memory interleaving techniques spread out contiguous pages across as many banks/subarrays as possible to improve parallelism. Therefore, both the source and destination of a bulk copy operation can be spread out across many subarrays. \pagebreak \subsubsection{Granularity of Copy/Initialization} \label{sec:memory-interleaving} The second aspect that affects the use of FPM is the granularity at which data is copied or initialized. FPM has a minimum granularity at which it can copy or initialize data. There are two factors that affect this minimum granularity: 1)~the size of each DRAM row, and 2)~the memory interleaving employed by the controller. First, FPM copies \emph{all} the data of the source row to the destination row (across the entire DIMM). Therefore, the minimum granularity of copy using FPM is at least the size of the row. Second, to extract maximum bandwidth, some memory interleaving techniques map consecutive cache lines to different memory channels in the system. Therefore, to copy/initialize a contiguous region of data with such interleaving strategies, FPM must perform the copy operation in each channel. The minimum amount of data copied by FPM in such a scenario is the product of the row size and the number of channels. 
To maximize the likelihood of using FPM, the system or application software must ensure that the region of data copied (initialized) using the \texttt{memcopy}\xspace (\texttt{meminit}\xspace) instructions is at least as large as this minimum granularity. For this purpose, we propose to expose this minimum granularity to the software through a special register, which we call the \emph{Minimum Copy Granularity Register} (MCGR). On system bootup, the memory controller initializes the MCGR based on the row size and the memory interleaving strategy, which can later be used by the OS for effectively exploiting RowClone. Note that some previously proposed techniques such as sub-wordline activation~\cite{rethinking-dram} or mini-rank~\cite{threaded-module,mini-rank} can be combined with RowClone to reduce the minimum copy granularity, further increasing the opportunity to use FPM.
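A sketch of how the MCGR value might be derived and consulted by software (the function names and the interleaving flag are our assumptions; the two factors are the ones stated above):

```python
def mcgr(row_size_bytes, num_channels, line_interleaved=True):
    """Minimum copy granularity for FPM: one full row per channel when
    consecutive cache lines are interleaved across channels, else one row."""
    return row_size_bytes * (num_channels if line_interleaved else 1)

def prefer_fpm(region_bytes, row_size_bytes, num_channels):
    """Software-side check: the region must be at least MCGR bytes
    for the copy to be fully accelerated using FPM."""
    return region_bytes >= mcgr(row_size_bytes, num_channels)
```

In the proposed design the controller would compute this value once at bootup and expose it through the MCGR register; the OS then compares region sizes against it when deciding how to lay out and copy data.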
\section{Introduction} As a mathematical model for a linear inverse problem, we consider the ill-posed operator equation \begin{equation} \label{eq:opeq} A\,x\,=\,y\,, \end{equation} where $A$ is a bounded linear operator from an infinite-dimensional Banach space $X$ to an infinite-dimensional Hilbert space $H$ such that $\mathcal R(A)$, the range of $A$, is a non-closed subset of $H$. Let ${x^\dagger} \in X$ denote an exact solution of (\ref{eq:opeq}) with properties to be particularized later. Unless specified otherwise, the norm $\|\cdot\|$ in this study always refers to that in $H$. We assume that instead of the exact right-hand side $y \in \mathcal R(A)$ only noisy data ${y^\delta} \in H$ satisfying \begin{equation} \label{eq:noise} \|{y^\delta}-y\| \le \delta \end{equation} with noise level $\delta \ge 0$ are available. Based on ${y^\delta}$ we try to recover ${x^\dagger}$ in a stable approximate manner by using variational regularization with general \emph{convex} penalty functionals $J$. Precisely, we are going to analyze convergence conditions for minimizers of the Tikhonov functional \begin{equation*} T_{\alpha}(x;v) := \frac{1} {2} \norm{A x - v}{}^{2} + \alpha J(x),\quad x\in X, \end{equation*} with regularization parameter $\alpha>0$, for exact right-hand sides $v:=y$ and noisy data $v:={y^\delta}$. In this context, we distinguish regularized solutions \begin{equation}\label{eq:xa} {x_\alpha} \in \argmin_{x \in X} T_{\alpha}(x;y)\end{equation} and \begin{equation}\label{eq:xad} {x_{\alpha}^\delta} \in \argmin_{x \in X} T_{\alpha}(x;{y^\delta}),\end{equation} respectively. In case of multiple minimizers we select any family of regularized solutions ${x_\alpha}$ and ${x_{\alpha}^\delta}$ for $\alpha>0$. As will be seen from the subsequent results, in particular from the discussion in Remark~\ref{rem:multiple}, the specific choice has no impact on the convergence rate results.
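For orientation only (not part of the general analysis of this paper): in the special case $X=\mathbb{R}^n$ with the quadratic penalty $J(x)=\tfrac12\|x\|^2$, the minimizer in \eqref{eq:xad} is unique and has a familiar closed form, which the following sketch computes for a toy diagonal operator.

```python
import numpy as np

def tikhonov_minimizer(A, y_delta, alpha):
    """Minimizer of 0.5*||A x - y_delta||^2 + alpha * 0.5*||x||^2,
    i.e. x = (A^T A + alpha*I)^{-1} A^T y_delta (quadratic-penalty case only)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)
```

For a diagonal $A$ this reproduces the classical filter factors $\sigma_i^2/(\sigma_i^2+\alpha)$; for general convex penalties $J$ no such closed form exists and the minimizers are only characterized variationally.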
We are interested in estimates of the error between ${x_{\alpha}^\delta}$ and ${x^\dagger}$ and in proving corresponding convergence rates. In a Hilbert space $X$, the error norm is a canonical measure in this context, in particular if the penalty $J$ is of norm square type. For Banach spaces $X$ and general convex penalties $J$, however, norms are not always appropriate measures, and the study~\cite{BurOsh04} introduced alternatively the Bregman distance \begin{equation} \label{eq:bregman} \Bj\zeta(z;x) := J(x) - J(z) - \scalar{\zeta}{x - z},\quad x\in X, \quad \zeta \in \partial \reg(z) \subset X^*, \end{equation} with some subgradient $\zeta$ from the subdifferential $\partial \reg(z)$ of $J$ at the point $z \in X$, as a powerful error measure for regularized solutions of ill-posed problems in Banach spaces, see also, e.g.,~\cite{HoKaPoSc07,Resm05,Scherzer09,ScKaHoKa12}. We stress the fact that the subgradient $\zeta$ is taken at the first argument in the Bregman distance, and we recall that the Bregman distance is not symmetric in its arguments. Therefore, we highlight in~\eqref{eq:bregman} the base point~$z$ by indicating the corresponding subgradient, say~$\zeta$. It is a classical result that convergence rates for ill-posed problems require a regularity condition (abstract smoothness condition) for ${x^\dagger}$ as otherwise convergence can be arbitrarily slow. For linear problems in Hilbert space, a classical smoothness condition assumes that~${x^\dagger}\in \mathcal R\lr{A^{\ast}}$.
The corresponding Banach space assumption instead supposes that there is a subgradient $$\xi^{\dag}\in\partial \reg({x^\dagger}) \qquad \mbox{with} \qquad \xi^{\dag} = A^{\ast}w,\;\norm{w}\leq R.$$ For convex $J$ the Bregman distance is non-negative, and hence~\eqref{eq:bregman} implies that for all $x \in X$ the inequality $$ J({x^\dagger}) - J(x) \le \scalar{\xi^{\dag}}{{x^\dagger}-x} $$ holds, and then it is immediate (cf.~\cite[p.~349]{Ki16}) that we have \begin{equation} \label{eq:vi-benchmark} J({x^\dagger}) - J(x) \leq R \norm{A{x^\dagger} - Ax} \quad \mbox{for all} \quad x\in X. \end{equation} This represents a \emph{benchmark variational inequality} as Section~\ref{sec:relations} will reveal. If otherwise $\xi^{\dag}\not\in\mathcal R(A^{\ast})$, then a condition of type (\ref{eq:vi-benchmark}) must fail, but as shown in~\cite[Lemma~3.2]{Fl17} a variational inequality \begin{equation} \label{eq:vi-Phi} J({x^\dagger}) - J(x) \leq \mod\lr{\norm{A{x^\dagger} - Ax}} \quad \mbox{for all} \quad x\in X \end{equation} with a sub-linear \emph{index function}\footnote{Throughout, we call a function $\varphi\colon (0,\infty) \to (0,\infty)$ index function if it is continuous, strictly increasing and obeys the limit condition $\lim_{t\to +0} \varphi(t) = 0$.}~$\mod$ holds, and the quotient function $\mod(t)/t$ is strictly decreasing. Under a more restrictive assumption on~$\mod$ (concavity instead of sub-linearity) and for a more general setting this condition was introduced as formula (2.11) in \cite{Ki16}, and it was proven that (\ref{eq:vi-Phi}) yields the convergence rates \[ \Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger}) = \mathcal{O}(\mod(\delta)) \quad \mbox{as} \quad \delta \to 0\,. \] In the past years, convergence rates under variational source conditions (cf., e.g.,~\cite{Fl12,Gr10,HoMa12}) were expressed in terms of the Bregman distance~$B_{\xi^\dagger}({x^\dagger};{x_{\alpha}^\delta})$, and hence using the base point~${x^\dagger}$. 
In this context it is not clear whether a subgradient~$\xi^\dag\in \partial \reg({x^\dagger})$ exists, for instance if~${x^\dagger}$ is not in the interior of~$\mathrm{dom}(J):=\{x \in X:\,J(x)<\infty\}$. Taking as base point the minimizer~${x_{\alpha}^\delta}$, this cannot happen and the set~$\partial \reg({x_{\alpha}^\delta})$ is always non-empty (cf., e.g.,~\cite[Lemma~2.2]{Fl17}). This may be seen as an advantage of the present approach, following the original study~\cite{Ki16}. Without further notice we follow the convention from that study: if the subdifferential $\partial J({x_{\alpha}^\delta})$ is multi-valued, then we take for $\Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger})$ a subgradient $\xi_\alpha^\delta$ that satisfies the optimality condition \begin{equation} \label{eq:left0} A^*(A{x_{\alpha}^\delta}-{y^\delta})+\alpha \xi_\alpha^\delta=0. \end{equation} A remarkable feature of the error bounds under smoothness assumptions~(\ref{eq:vi-Phi}) is the splitting of the error, see also in a more general setting~\cite[Thm.~3.1]{Ki16}, as \begin{equation} \label{eq:err-split} B_{\xi_{\alpha}^{\delta}}({x_{\alpha}^\delta};{x^\dagger}) \leq \frac{\delta^{2}}{2\alpha} + \Psi(\alpha) \quad \mbox{for all} \quad \alpha>0, \end{equation} where the function~$\Psi$ is related to~$\mod$, and typically it will also be an index function. \medskip In this study, see Section~\ref{sec:main}, we analyze the condition \begin{equation} \label{eq:weaker} J({x^\dagger}) - J({x_\alpha}) \leq \Psi(\alpha) \quad \mbox{for all} \quad \alpha>0 \end{equation} (cf.~Assumption~\ref{ass:conv}, and its alternative, Assumption~\ref{ass:2prime}). Under these conditions a similar error splitting as in~(\ref{eq:err-split}) is shown as the main result. Notice that these bounds are required to hold only for the minimizers~${x_\alpha}$ of the noise-free Tikhonov functional~$T_{\alpha}(x;A{x^\dagger})$.
This resembles the situation for linear ill-posed problems in Hilbert spaces, where the error is decomposed into the noise propagation term, usually of the form~$\delta/\sqrt\alpha$, and some noise-free term depending on the solution smoothness, say~$\varphi(\alpha)$, which is called profile function in \cite{HofMat07}. We refer to a detailed discussion in Section~\ref{sec:main}. The error bounds will be complemented by some discussion on the equivalence of Assumptions~\ref{ass:conv} and~\ref{ass:2prime}. Also, a discussion on necessary conditions for an index function $\Psi$ to serve as the upper bound in inequality~(\ref{eq:weaker}) is given. We mention that the existence of an index function $\Psi$ satisfying (\ref{eq:weaker}) is an immediate consequence of \cite[Thm.~3.2]{Fl18} (see also \cite[Remark~2.6]{Fl17}) in combination with the results of Section~\ref{sec:relations}. Precisely, we highlight that the variational inequality (\ref{eq:vi-Phi}) implies the validity of~(\ref{eq:weaker}) for some specific index functions $\Psi$ related to~$\mod$ by some convex analysis arguments. Then, in Section~\ref{sec:examples} we present specific applications of this approach. In an appendix we give detailed proofs of the main results (Appendix~\ref{sec:proofs}) and some auxiliary discussion concerning convex index functions (Appendix~\ref{sec:convex-analysis}). \section{Assumptions and main results} \label{sec:main} In the subsequent analysis, \emph{convex} index functions will be of particular interest, i.e.,~index functions $\varphi$ which obey $$ \varphi\left(\frac{s+t}{2}\right) \leq \frac 1 2 \lr{\varphi(s) + \varphi(t)},\quad s,t\geq 0. $$ The inverse of a convex index function is a \emph{concave} index function, and hence the above inequality is reversed. We mention that concave index functions are \emph{sub-linear}, which means that these functions have the property that the quotients~$\varphi(\lambda)/\lambda$ are non-increasing.
Additional considerations concerning convex index functions are collected in Appendix~\ref{sec:convex-analysis}. The proofs of the results in this section are technical, and hence they are postponed to Appendix~\ref{sec:proofs}. \subsection{Assumptions} \label{sec:assumptions} Throughout this study we impose, e.g.,~along the lines of~\cite{ScKaHoKa12}, the following standard assumptions on the penalty. \begin{assumption}[Penalty]\label{ass:penalty} The function~$J: X \to [0,\infty]$ is a proper, convex functional defined on a Banach space $X$, which is lower semi-continuous with respect to weak (or weak$^*$) sequential convergence. Additionally, we assume that $J$ is a stabilizing (weakly coercive) penalty functional, i.e., the sublevel sets $\mathcal{M}_c:=\{x \in X:\,J(x) \le c\}$ of $J$ are weakly (or weak$^*$) sequentially compact for all $c \ge 0$. Moreover, we assume that at least one solution ${x^\dagger}$ of (\ref{eq:opeq}) with finite penalty value $J({x^\dagger})<\infty$ exists. \end{assumption} Consequently, for all $\alpha>0$ and $v \in H$, the sublevel sets of $T_{\alpha}(\cdot\,;v)$ are weakly (or weak$^*$) sequentially compact. This ensures the existence and stability of regularized solutions ${x_\alpha}$ and ${x_{\alpha}^\delta}$ which are the corresponding minimizers for $v=y$ and $v={y^\delta}$, respectively. In the sequel, we use the symbol ${x^\dagger}$ only for the always existing $J$-minimizing solutions of (\ref{eq:opeq}), i.e.~$J({x^\dagger})=\min \limits_{x \in X: Ax=y} J(x)$. The fundamental regularity condition is given as follows. To this end, let ${x_\alpha}$ be defined as in \eqref{eq:xa}. This assumption controls the deviation of the penalty at the minimizers from its value at the $J$-minimizing solution~${x^\dagger}$. \begin{assumption}[Defect for penalty]\label{ass:conv} There is an index function~$\Psi$ such that \begin{equation} \label{eq:new} J(x^\dag) - J({x_\alpha}) \leq \Psi(\alpha) \quad \mbox{for all} \quad \alpha>0.
\end{equation} \end{assumption} \medskip It is not difficult to conclude from the minimizing property of~${x_\alpha}$, \begin{equation} \label{eq:minprop} \frac{1} {2} \norm{A {x_\alpha} - A{x^\dagger}}{}^{2} + \alpha J({x_\alpha}) \le \alpha J({x^\dagger}) \,, \end{equation} that the left hand side of \eqref{eq:new} is nonnegative and hence that $$\lim_{\alpha \to 0}J({x_\alpha}) =J(x^\dag) \quad \mbox{and} \quad \frac{1}{2 \alpha} \norm{A {x_\alpha} - y}{}^{2} \le J(x^\dag) - J({x_\alpha}), $$ such that Assumption~\ref{ass:conv} also yields the estimate \begin{equation} \label{eq:residual-bound} \frac{1}{2\alpha}\norm{A {x_\alpha} - y}{}^{2} \leq \Psi(\alpha) \quad \mbox{for all} \quad \alpha>0. \end{equation} Instead of controlling the defect for the penalty~$J$ one might control the defect for the overall Tikhonov functional as follows. \renewcommand{\theassumption}{2$^{\,\prime}$} \begin{assumption}[Defect for Tikhonov functional]\label{ass:2prime} There is an index function~$\Psi$ such that \begin{equation}\label{eq:Modified} \frac{1}{\alpha}\left( T_{\alpha}({x^\dagger};A{x^\dagger}) - T_{\alpha}({x_\alpha};A {x^\dagger}) \right) \leq \Psi(\alpha) \quad \mbox{for all} \quad \alpha>0. \end{equation} \end{assumption} \medskip By explicitly writing the left hand side in~(\ref{eq:Modified}) we see that $$ \frac{1}{\alpha}\left( T_{\alpha}({x^\dagger};A{x^\dagger}) - T_{\alpha}({x_\alpha};A {x^\dagger}) \right) = J({x^\dagger}) - J({x_\alpha}) - \frac{1}{2\alpha}\norm{A {x_\alpha} - y}{}^{2}, $$ and hence Assumption~\ref{ass:conv} is stronger than Assumption~\ref{ass:2prime}, as stated above. One advantage of Assumption~\ref{ass:2prime} is that it is invariant with respect to the choice of the minimizers~${x_\alpha}$. This is not clear for Assumption~\ref{ass:conv}. As a remarkable fact we state that both assumptions are basically equivalent. 
\begin{prop}\label{thm:equivalence} Assumption~\ref{ass:2prime} yields that \begin{equation} \label{eq:equivalence3} J(x^\dag) - J({x_\alpha}) \leq 2 \Psi(\alpha) \quad \mbox{for all} \quad \alpha>0. \end{equation} Hence Assumption~\ref{ass:conv} is fulfilled with~$\Psi$ replaced by $2\Psi$. \end{prop} \begin{rem}\label{rem:multiple} The above result has an important impact, and we return to the choice of the minimizers~${x_\alpha},{x_{\alpha}^\delta}$ from \eqref{eq:xa} and~\eqref{eq:xad}, respectively. As mentioned before, the functional on the left-hand side of \eqref{eq:Modified} is independent of the choice of the minimizers~${x_\alpha}$, due to the uniqueness of the value of the Tikhonov functional at the minimizers (cf., e.g.,~\cite[Sec.~3.2]{ItoJin15}). Thus, if Assumption~\ref{ass:2prime} is fulfilled for one selection~${x_\alpha},$ $\alpha>0,$ then this holds true for arbitrary selections. Since Assumption~\ref{ass:2prime} implies Assumption~\ref{ass:conv} (at the expense of a factor~2) the latter will be fulfilled for any selection. Conversely, if Assumption~\ref{ass:conv} holds for some selection~${x_\alpha}, \ \alpha >0$, then this yields the validity of Assumption~\ref{ass:2prime}, but then extends to any other choice of minimizers. Again, by the above proposition this implies that any other choice of minimizers will obey Assumption~\ref{ass:conv}, by losing a factor $2$ at most. \end{rem} We finally discuss which index functions may serve as upper bounds in either of the assumptions~\ref{ass:conv} or~\ref{ass:2prime}, respectively. We formulate this as follows. \begin{prop} \label{prop:dich} Suppose Assumption~\ref{ass:conv} holds with index function $\Psi$. 
Then the following is true:\\ {\bf Either} $\;J({x^\dagger}) = \min \limits_{x\in X} J(x)$,\\ and then $J({x_\alpha}) =J({x^\dagger})$ for each $\alpha>0$, and any index function~$\Psi$ is a valid bound in \eqref{eq:new},\\ {\bf or} $\;J({x^\dagger}) > \min \limits_{x\in X} J(x)$,\\ and then $\Psi$ increases near zero at most linearly. \end{prop} We shall call the first case \emph{singular}. In this case, where $J({x^\dagger}) = \min _{x\in X} J(x)$, the choice of the regularization parameter loses importance, which is also the case if the phenomenon of exact penalization occurs (see \cite{BurOsh04} and more recently in~\cite{AnzHofMat14}). \subsection{Main results} \label{sec:main-results} We turn to stating the main results, which highlight the impact of Assumption~\ref{ass:conv} and Assumption~\ref{ass:2prime} on the overall error, measured by the Bregman distance. \begin{theo}\label{th:main} Under Assumption~\ref{ass:conv} we have that $$ \Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};x^\dag) \leq \frac{\delta^{2}}{2\alpha} + \Psi(\alpha) \quad \mbox{for all} \quad \alpha>0. $$ \end{theo} The proof of Theorem~\ref{th:main} is a simple consequence of the following result, which may be also of its own interest. \renewcommand{\thetheo}{1$^{\,\prime}$} \begin{theo}\label{thm:mod} Suppose that Assumption~\ref{ass:2prime}{} is satisfied with an index function $\Psi$. Then an error estimate of the type \begin{equation}\label{eq:upperbound2} \Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger}) \leq \frac{\delta^2}{2\alpha} + \Psi(\alpha)\quad \mbox{for all} \quad \alpha>0 \end{equation} holds. \end{theo} Since, as mentioned above, Assumption~\ref{ass:conv} is stronger than Assumption~\ref{ass:2prime} it is enough to prove Theorem~\ref{thm:mod}. 
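Before turning to the discussion, we record the quadratic-penalty instance of the Bregman distance (a routine computation, added here for orientation):

```latex
% Quadratic penalty on a Hilbert space X: J(x) = \tfrac12 \|x\|_X^2,
% so \partial J(z) = \{z\} after identifying X^* with X, and \eqref{eq:bregman} gives
\[
\Bj{z}(z;x) = \tfrac12\|x\|_X^2 - \tfrac12\|z\|_X^2 - \scalar{z}{x-z}
            = \tfrac12\|x-z\|_X^2, \quad x \in X.
\]
```

In this special case the Bregman distance is symmetric and reduces to one half of the squared norm distance, so the bounds below recover norm-square error estimates.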
\subsection{Discussion} \label{sec:discussion} Resulting from Theorems~\ref{th:main} and~\ref{thm:mod}, the best possible bound for the Bregman distance between the regularized solutions and ${x^\dagger}$ as a function of $\delta>0$ is obtained in both cases if $\alpha=\alpha_*>0$ is chosen such that the right-hand side $\frac{\delta^2}{2\alpha} + \Psi(\alpha)$ is minimized, i.e., \begin{equation}\label{eq:bestrate} B_{\xi_{\alpha_*}^\delta}\lr{x_{\alpha_*}^\delta;{x^\dagger}} \leq \inf_{\alpha >0} \set{\frac{\delta^2}{2\alpha} + \Psi(\alpha)}, \end{equation} which determines, from this perspective, the best possible convergence rate of $B_{\xi_{\alpha_*}^\delta}\lr{x_{\alpha_*}^\delta;{x^\dagger}}$ to zero as $\delta \to 0$. Consequently, the faster $\Psi(\alpha)$ decays to zero as $\alpha \to 0$, the higher this convergence rate is. As expressed in the non-singular case of Proposition~\ref{prop:dich}, the function~$\Psi$ cannot increase from zero super-linearly, and the limiting case is obtained for~$\Psi(\alpha) \sim \alpha,\ \alpha\to 0$. From this perspective, the best rate obtainable in this way is $B_{\xi_{\alpha_*}^\delta}\lr{x_{\alpha_*}^\delta;{x^\dagger}} \sim \delta$ as $\delta \to 0$, which is obtained whenever, for example, the regularization parameter is chosen as $\alpha_*=\alpha(\delta)\sim \delta$. For linear ill-posed equations in Hilbert spaces and using the standard penalty $J(x)=\|x\|_X^2$ (see Section~\ref{subsec:quadratic}), this results in the error rate $B_{\xi_{\alpha_*}^\delta}\lr{x_{\alpha_*}^\delta;{x^\dagger}}=\norm{x_{\alpha_*}^\delta- {x^\dagger}}_X^2 = \mathcal O(\delta)$.
However, resulting from Theorems~\ref{th:main} and~\ref{thm:mod} the overall best possible convergence rate $\norm{x_{\alpha_*}^\delta- {x^\dagger}}_X^2=\mathcal{O}(\delta^{4/3})$ attainable for Tikhonov regularization cannot be obtained, and indeed our analysis is confined to the low-rate case expressed by the range-type source condition ${x^\dagger}\in\mathcal R(A^\ast)$. This is also the case for all other approaches which are based on the minimizing property $T_\alpha({x_{\alpha}^\delta};{y^\delta}) \le T_\alpha({x^\dagger};{y^\delta})$ only, including approaches using variational source conditions (see Section~\ref{sec:relations} below). For alternative techniques leading to enhanced convergence rates we refer to \cite{NHHKT10, Resm05,ResSch06}, \cite[Sect.~4.2.4]{ScKaHoKa12} and references therein. \medskip Now we return to the error estimate (\ref{eq:upperbound2}) for general convex penalties~$J$. Since the upper bound with respect to $\alpha>0$ is decomposed into a sum of a continuous decreasing function~$\delta^2/(2\alpha)$ and an increasing continuous function $\Psi(\alpha)$, the minimizer always exists. Given~$\Psi$, let us assign the companion~$\Theta(\alpha):= \sqrt{\alpha \Psi(\alpha)},\ \alpha>0$. If we then let~$\alpha_\ast$ be obtained from calibrating both summands as \begin{equation} \alpha_\ast = \alpha_\ast(\delta) := \lr{\Theta^2}^{-1}\lr{\frac{\delta^2}{2}} = \Theta^{-1}\lr{\frac{\delta}{\sqrt 2}}, \end{equation} then we find that \begin{equation} \Bj{\xi_{\alpha_\ast}^\delta}(x_{\alpha_\ast}^\delta;{x^\dagger}) \leq 2 \Psi\lr{\Theta^{-1}\lr{\frac{\delta}{\sqrt 2}}}, \end{equation} and the optimality of this bound will be discussed in the examples presented below in Section~\ref{sec:examples}. \medskip It is interesting to separately discuss the singular case, i.e., when $\;J({x^\dagger}) = \min \limits_{x\in X} J(x)$.
We claim that then~$\Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger}) =0$ when the subgradient~$\xi_\alpha^\delta\in\partial \reg({x_{\alpha}^\delta})$ obeys the optimality condition (\ref{eq:left0}), i.e., we have $\alpha\xi_\alpha^\delta = A^\ast\lr{{y^\delta} - A{x_{\alpha}^\delta}}$. If we now look at the minimizing property of~${x_{\alpha}^\delta}$ then we see that $$ \frac 1 2 \norm{A{x_{\alpha}^\delta} - {y^\delta}}^2 + \alpha J({x_{\alpha}^\delta}) \leq \alpha J({x^\dagger}), $$ which, in the singular case, requires that~$\norm{A{x_{\alpha}^\delta} - {y^\delta}}=0$, and hence that~${\xi_\alpha^\delta} = 0$. This yields for the Bregman distance that \begin{align*} B_{\xi_{\alpha}^\delta}\lr{x_{\alpha}^\delta;{x^\dagger}} &=J({x^\dagger}) - J({x_{\alpha}^\delta}) + \scalar{\xi_\alpha^\delta}{{x_{\alpha}^\delta} - {x^\dagger}} \leq 0, \end{align*} such that the Bregman distance equals zero in the singular case. \medskip We already emphasized that the upper estimate of the error measure $\Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger})$ in (\ref{eq:upperbound2}) consists of two terms: the first is the $\delta$-dependent noise propagation, and the second is a $\delta$-independent term which expresses the smoothness of the solution ${x^\dagger}$ with respect to the forward operator $A$. In the study~\cite{HofMat07} such a decomposition was comprehensively analyzed for general linear regularization methods applied to~(\ref{eq:opeq}) in a Hilbert space setting, i.e., for linear mappings ${y^\delta} \mapsto {x_{\alpha}^\delta}$, and for the norm as an error measure the $\delta$-independent term was called \emph{profile function} there, because this term completely determines the error profile. For the current setting, the index function $\Psi$ plays a similar role, although the mapping ${y^\delta} \mapsto {x_{\alpha}^\delta}$ is \emph{nonlinear} for general convex penalties $J$ different from norm squares in Hilbert space $X$.
This shows the substantial meaning of the right-hand function $\mod$ in the inequality (\ref{eq:new}) of Assumption~\ref{ass:conv}. \section{Relation to variational inequalities} \label{sec:relations} In this section we shall prove that a variational inequality of type (\ref{eq:vi-Phi}) implies the validity of Assumption~\ref{ass:2prime} and a fortiori Assumption~\ref{ass:conv}. More precisely, we consider the situation that there is an index function~$\mod$ such that \begin{equation}\label{eq:kindermann} J({x^\dagger}) - J(x) \leq \mod(\|A x - A {x^\dagger} \|) \quad \mbox{for all} \quad x\in X. \end{equation} First, similarly to Proposition~\ref{prop:dich} we highlight that the choice of functions~$\mod$ in~\eqref{eq:kindermann} is not arbitrary. \begin{prop} \label{prop:nosuper} Suppose that a variational inequality~\eqref{eq:kindermann} holds with an index function~$\mod$. The following is true:\\ {\bf Either} $\;J({x^\dagger}) = \min \limits_{x\in X} J(x)$,\\ and then any index function~$\mod$ is a valid bound in \eqref{eq:kindermann},\\ {\bf or} $\;J({x^\dagger}) > \min \limits_{x\in X} J(x)$,\\ and then $\mod$ increases near zero at most linearly. \end{prop} \begin{proof} First, if~$\;J({x^\dagger}) = \min \limits_{x\in X} J(x)$ then the left hand side in~\eqref{eq:kindermann} is non-positive, and hence any non-negative upper bound is valid. Otherwise, suppose that~$\mod(t)/t$ decreases to zero as~$t\to 0$. The inequality (\ref{eq:kindermann}) taken at the point ${x^\dagger}+t(x-{x^\dagger}),\;0<t<1,$ attains the form $$J({x^\dagger})-J((1-t){x^\dagger}+tx) \le \mod(t\|Ax-A{x^\dagger}\|), $$ where we can estimate from below the left-hand side as $$ J({x^\dagger})-J((1-t){x^\dagger}+tx) \ge J({x^\dagger})-(1-t)J({x^\dagger})-tJ(x)=t(J({x^\dagger})-J(x)), $$ because $J$ is a convex functional. 
From this we directly derive $$ J({x^\dagger})-J(x) \le \frac{\mod(t\|Ax-A{x^\dagger}\|)}{t}= \|Ax-A{x^\dagger}\|\frac{\mod(t\|Ax-A{x^\dagger}\|)}{t\|Ax-A{x^\dagger}\|},$$ where under the assumption of the proposition the right-hand side tends to zero as $t \to 0$. Consequently, we have $J({x^\dagger}) \le J(x)$ for all $x \in X$. This completes the proof. \end{proof} The main result in this section reads as follows: \begin{prop} \label{pro:peter} Suppose that a variational inequality~(\ref{eq:kindermann}) holds for some index function~$\mod$. Let us consider the related index function $\tilde{\mod}(t):= \mod(\sqrt{t}),\;t>0$. Then the following assertions hold true. \begin{enumerate} \item The condition~(\ref{eq:Modified}) is valid with a function \begin{equation} \label{eq:infpos} \Psi(\alpha) = \sup_{t>0} \left[ \mod(t) - \frac{t^2}{2 \alpha}\right], \end{equation} which is increasing for $\alpha>0$ but may take the value~$+\infty$. \item If the function~$\tilde\mod$ is concave then the function~$\Psi$ from (\ref{eq:infpos}) has the representation \begin{equation*} \Psi(\alpha):= \frac{\tilde{\mod}^{-\ast}(2\alpha)}{2\alpha},\quad \alpha >0, \end{equation*} where $\tilde{\mod}^{-\ast}$ is the Fenchel conjugate to the convex index function $\tilde{\mod}^{-1}$ (cf.~Appendix~\ref{sec:convex-analysis}). \item\label{it:3} Finally, if moreover the quotient function $s^2/\mod(s)$ is an index function and hence strictly increasing for all $0<s<\infty$, then $\Psi$ also constitutes an index function. Theorem~\ref{thm:mod} yields the error estimate (\ref{eq:upperbound2}). \end{enumerate} \end{prop} \begin{proof} For the first assertion we find that \begin{align*} &\frac{1}{\alpha} \left(T_{\alpha}({x^\dagger};y) - T_{\alpha}({x_\alpha};y) \right) = J({x^\dagger}) - J({x_\alpha}) - \frac{1}{2\alpha} \|A {x_\alpha} - A {x^\dagger} \|^2 \\ & \leq \mod(\|A {x_\alpha} - A {x^\dagger} \| ) - \frac{1}{2\alpha} \|A {x_\alpha} - A {x^\dagger} \|^2.
\end{align*} Setting $t:= \norm{A {x_\alpha} - A {x^\dagger}}$ yields the function~$\Psi$ as stated. Now suppose that the function~$\tilde \mod$ is a concave index function. Then its inverse is a convex index function, and by the definition of the Fenchel conjugate, see~\eqref{eq:Fconjugate}, we find \begin{align*} &\sup_{t>0} \left[\mod(t) - \frac{t^2}{2 \alpha}\right] = \sup_{t>0} \left[\tilde{\mod}(t^2) - \frac{t^2}{2\alpha}\right] \\ & = \frac{1}{2 \alpha} \sup_{s>0} \left[ 2 \alpha s - \tilde{\mod}^{-1}(s) \right] = \frac{\tilde{\mod}^{-\ast}(2\alpha)}{2\alpha}, \end{align*} which proves the second assertion. It remains to establish that this function is an index function with the properties as stated. To this end we aim at applying Corollary~\ref{cor:appendix} with~$f(t):= \tilde\mod^{-1}(t),\ t>0$. We observe, after substituting~$t:= \tilde\mod(s^2)$, that $$ \frac{\tilde\mod^{-1}(t)}{t} = \frac{s^2}{\mod\lr{s}},\quad s>0, $$ which was assumed to be strictly increasing from 0 to $\infty$. Thus Corollary~\ref{cor:appendix} applies, and the proof is complete. \end{proof} Under the conditions of item (2) of Proposition~\ref{pro:peter} we can immediately derive a convergence rate for the Bregman distance as error measure. \begin{prop} \label{pro:stefan} If the function $\mod$ in (\ref{eq:kindermann}) is such that $\tilde{\mod}(t):= \mod(\sqrt{t})$ is a concave index function, then with an appropriately selected $\alpha$ the following convergence rate holds: \begin{equation} \label{eq:rateStefan} \Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger}) = \mathcal{O}(\mod(\delta)) \quad \mbox{as} \quad \delta \to 0.
\end{equation} \end{prop} \begin{proof} In the Fenchel-Young inequality~\eqref{eq:FYI}, used for~$f:= \tilde{\mod}^{-1}$ and assigning~$u:= \tilde{\mod}(\delta^2)$ and~$v:= 2\alpha$, we obtain \[ \mod(\delta) = \tilde{\mod}(\delta^2) \leq \frac{\delta^2}{2 \alpha} + \frac{\tilde{\mod}^{-*}(2 \alpha)}{2\alpha}. \] Taking $2 \alpha \in \partial \tilde{\mod}^{-1}(\delta^2)$, which exists by continuity of $\tilde{\mod}^{-1}$, yields equality in the Fenchel-Young inequality and hence in the above inequality. Thus, with such a choice and by \eqref{eq:upperbound2}, \[ \Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger}) \leq \frac{\delta^2}{2\alpha} + \frac{\tilde{\mod}^{-*}(2 \alpha)}{2\alpha} = \mod(\delta).\] \end{proof} We highlight the previous findings in the case that the function~$\mod$ in~\eqref{eq:kindermann} is a monomial. \begin{example}\label{xmpl:monomial} Let us prototypically consider the case that the function~$\mod$ is of power type, i.e.,\ $\mod(t):= t^{\mu},\ t>0$ for some $0 < \mu <\infty$. Then the function~$\tilde\mod$ is~$\tilde\mod(t)=t^{\mu/2}$. This function is concave whenever~$0< \mu \leq 2$. In that range also the quotients~$s^2/\mod(s),\ s>0$ are strictly increasing. For~$\mu>2$ the function~$\Psi$ is infinite for all~$\alpha >0$ and for $\mu=2$ it is a positive constant. For $0<\mu<2$, however, $\Psi$ is an index function. Namely, the inverse of~$\tilde\mod$ equals~$\tilde\mod^{-1}(t) = t^{2/\mu},\ t > 0$. Using the simple identity $(cf)^\ast(t) = c f^\ast(t/c),\ t>0$, valid for a convex function~$f$ and $c>0$, we see that the Fenchel conjugate function is for all $0<\mu<2$ $$ \tilde\mod^{-\ast}(t) = \frac{2-\mu}{\mu}\lr{\mu t/2}^{2/(2-\mu)},\ t > 0. $$ Then the quotient $$ \frac{\tilde{\mod}^{-\ast}(2\alpha)}{2\alpha} = \frac{2 - \mu}{2}\lr{\mu\alpha}^{\frac{\mu}{2-\mu}},\quad \alpha > 0, $$ is a strictly increasing index function as predicted by the proposition.
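The closed-form expression for the quotient above can be sanity-checked numerically. The following plain-Python sketch (a crude grid search standing in for the exact supremum; grid bounds chosen ad hoc for illustration) compares $\sup_{t>0}\,[\,t^\mu - t^2/(2\alpha)\,]$ with $\tfrac{2-\mu}{2}(\mu\alpha)^{\mu/(2-\mu)}$ for a few values of $\mu$ and $\alpha$:

```python
def psi_sup(mu, alpha, t_max=10.0, n=100000):
    """Grid-search approximation of Psi(alpha) = sup_t [ t^mu - t^2/(2*alpha) ]."""
    best = 0.0
    for i in range(1, n + 1):
        t = t_max * i / n
        best = max(best, t ** mu - t ** 2 / (2.0 * alpha))
    return best

def psi_closed(mu, alpha):
    """Closed form (2-mu)/2 * (mu*alpha)^(mu/(2-mu)), valid for 0 < mu < 2."""
    return 0.5 * (2.0 - mu) * (mu * alpha) ** (mu / (2.0 - mu))

# The interior maximizer t* = (mu*alpha)^(1/(2-mu)) lies well inside the grid
# for the parameter values below, so the grid search is reliable here.
for mu in (0.5, 1.0, 1.5):
    for alpha in (0.1, 0.5, 1.0):
        assert abs(psi_sup(mu, alpha) - psi_closed(mu, alpha)) < 1e-3
```

For instance, $\mu = 1$, $\alpha = 1/2$ gives $\sup_t [t - t^2] = 1/4$, matching the closed form.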
This function is sub-linear for~$\mu/(2-\mu)\leq 1$, i.e.,\ for~$0< \mu \leq 1$, and hence may serve as a bound in Assumption~\ref{ass:conv}, including the benchmark case~$\mod(t) = ct,\ t>0$, in which case the corresponding function~$\Psi$ is also linear. \end{example} \begin{rem} \label{rem:Flemming} We know from Proposition~\ref{prop:nosuper} that in the non-singular case the function~$\mod$ is at most linear, i.e.,\ the function~$\mod(s)/s$ is bounded away from zero. In particular this holds for concave index functions. In this particular case the function~$s/\mod(s)$ is non-decreasing, and hence the function $s(s/\mod(s))=s^2/\mod(s)$ is an index function. Thus item~\eqref{it:3} of Proposition~\ref{pro:peter} applies and yields that Assumption~\ref{ass:2prime} holds. Hence Theorem~\ref{thm:mod} applies and gives a convergence rate. \smallskip Note that \eqref{eq:kindermann} with the function~$\mod(t)=Rt$ has benchmark character. Indeed, if~(\ref{eq:kindermann}) holds with an index function~$\mod$ obeying $0<R:=\lim_{\alpha \to 0} \mod(\alpha)/\alpha< \infty$, then this implies the variational inequality $$J({x^\dagger}) - J(x) \leq R \norm{A{x^\dagger} - Ax} \quad \mbox{for all} \quad x\in X.$$ This was shown to hold if~$\mathcal R(A^*) \cap \partial J({x^\dagger})\not =\emptyset$, cf.~Eq.~\eqref{eq:vi-benchmark}. If such a linear bound fails then by the method of \emph{approximate variational source conditions} (cf.~\cite{FleHof10} and more comprehensively \cite{Fl12}) one can consider the strictly positive and decreasing \emph{distance function} $$d(R) := \sup_{x\in X}\set{J({x^\dagger}) - J(x) - R \norm{A{x^\dagger} - Ax}},\quad R>0.$$ We find that~$\lim_{R \to \infty} d(R)=0$, and its decay rate to zero as $R \to \infty$ measures the degree of violation of the benchmark variational inequality~(\ref{eq:vi-benchmark}).
Together with \cite[Lemma~3.2]{Fl17} it was proven that a variational inequality of type (\ref{eq:kindermann}) then holds with \begin{equation} \label{eq:distphi} \mod(\alpha)= 2d(\Theta^{-1}(\alpha)), \quad \mbox{where} \quad \Theta(R):=d(R)/R. \end{equation} It should be noted that this function $\mod$ is a sub-linear index function such that the quotient function $\mod(\alpha)/\alpha$ is non-increasing for all \linebreak $\alpha>0$. Hence, the convergence rate (\ref{eq:rateStefan}) also applies for the function~$\mod$ from~(\ref{eq:distphi}). \end{rem} \section{Examples} \label{sec:examples} Here we shall highlight the applicability of the main results in special situations. We start with the standard penalty in a Hilbert space context and then analyze other penalties as they are used in specific applications. \subsection{Quadratic Tikhonov regularization in Hilbert spaces} \label{subsec:quadratic} Suppose we are in the classical context of Tikhonov regularization in Hilbert spaces~$X$ and $Y$, where the penalty is given as~$J(x) := \frac 1 2 \norm{x}{}^{2},\ x\in X$. In this case, which has been comprehensively discussed in the literature (cf., e.g.,~\cite[Chap.~5]{EHN96} and \cite{AlbElb16,AndElb15,FHM11}), we can explicitly calculate the terms under consideration. First, let~${g_{\alpha}}(\lambda) := 1/(\alpha + \lambda)$ be the filter of Tikhonov regularization, with companion~${r_{\alpha}}(\lambda) = \alpha/(\alpha + \lambda)$. With these short-hands, we see that~$x^\dag - {x_\alpha} = {r_{\alpha}}(A^\ast A) x^\dag$, and also~$A(x^\dag - {x_\alpha}) = A{r_{\alpha}}(A^\ast A)x^\dag$. This yields \begin{equation}\label{eq:axa-axp} \frac{\norm{A{x_\alpha} - Ax^\dag}{}^{2} }{2\alpha} = \frac{\norm{A{r_{\alpha}}(A^\ast A)x^\dag}{}^{2}}{2\alpha} = \frac{ \norm{{r_{\alpha}}(A^\ast A)\lr{A^\ast A}^{1/2}x^\dag}{}^{2}}{2\alpha}.
\end{equation} We also see that \begin{align*} T_{\alpha}({x_\alpha};Ax^\dag) &= \frac{1} 2 \lr{ \norm{{r_{\alpha}}(A^\ast A)\lr{A^\ast A}^{1/2}x^\dag}^{2} + \alpha \norm{{x_\alpha}}^{2}}\\ & = \frac 1 2 \int \left[ \frac{\alpha^{2}\lambda}{(\alpha +\lambda)^{2}} + \frac {\alpha\lambda^{2}}{(\alpha + \lambda)^{2}}\right]dE_\lambda \|{x^\dagger}\|^2\\ & = \frac 1 2 \int \frac{\alpha\lambda}{(\alpha + \lambda)} dE_\lambda \|{x^\dagger}\|^2, \end{align*} which in turn yields \begin{equation} \label{eq:kindermann-functional} \begin{split} \frac{1}{\alpha} \left(T_{\alpha}({x^\dagger};y) - T_{\alpha}({x_\alpha};y) \right) & = \frac 1 2 \int \frac{ \alpha }{(\lambda +\alpha)} dE_\lambda \|{x^\dagger}\|^2\\ & = \frac 1 2 \norm{{r_{\alpha}}^{1/2}(A^\ast A)x^\dag}^{2}. \end{split} \end{equation} Finally, we bound \begin{equation}\label{eq:diff-J} \begin{split} J(x^\dag) - J({x_\alpha}) & = \frac 1 2 \lr{\norm{x^\dag}{}^{2} - \norm{{x_\alpha}}{}^{2}} = \frac 1 2 \scalar{x^\dag - {x_\alpha}}{x^\dag + {x_\alpha}}\\ & = \frac 1 2 \scalar{{r_{\alpha}}(A^\ast A) x^\dag}{\lr{I + \lr{\alpha + A^\ast A}^{-1}A^\ast A}x^\dag}\\ & \leq \norm{{r_{\alpha}}^{1/2}(A^\ast A)x^\dag}^{2}. \end{split} \end{equation} We observe that the right-hand sides in~(\ref{eq:kindermann-functional}) and~(\ref{eq:diff-J}) differ by a factor $\frac{1}{2}$, as predicted in Proposition~\ref{thm:equivalence}. In the classical setup of Tikhonov regularization, a regularity condition is usually imposed in form of a source condition. Thus, let us now assume that the element~$x^\dag$ obeys a source-wise representation~ \begin{equation}\label{eq:sc} x^\dag = \varphi(A^\ast A)v,\qquad \norm{v}\leq 1,\end{equation} for an index function~$\varphi$.
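The spectral identity (\ref{eq:kindermann-functional}) can be sanity-checked in a finite-dimensional diagonal model; the eigenvalues and coefficients below are an invented toy spectrum (purely illustrative), and the data are noise-free:

```python
# Diagonal toy model: A*A has eigenvalues lam, and x^dagger has
# coefficients c in the eigenbasis; noise-free data y = A x^dagger.
lam = [1.0, 0.3, 0.05, 0.004]
c = [0.7, -0.4, 1.1, 0.2]
alpha = 0.02

# Tikhonov minimizer in the eigenbasis: (A*A + alpha)^{-1} A*A x^dagger
x_alpha = [l / (l + alpha) * ci for l, ci in zip(lam, c)]

def T(x):
    """T_alpha(x; y) = 0.5*||Ax - y||^2 + alpha * 0.5*||x||^2 (diagonal model)."""
    residual = sum(l * (xi - ci) ** 2 for l, xi, ci in zip(lam, x, c))
    return 0.5 * residual + 0.5 * alpha * sum(xi ** 2 for xi in x)

lhs = (T(c) - T(x_alpha)) / alpha  # (1/alpha)(T(x^dagger) - T(x_alpha))
rhs = 0.5 * sum(alpha / (l + alpha) * ci ** 2 for l, ci in zip(lam, c))
assert abs(lhs - rhs) < 1e-12
```

The agreement is exact up to floating-point rounding, since both sides reduce to the same spectral integral.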
Then the estimate~(\ref{eq:diff-J}) reduces to bounding \begin{equation} \label{eq:diff-J-phi-bound} \begin{split} \norm{{r_{\alpha}}^{1/2}(A^\ast A)x^\dag}^{2} & \leq \norm{{r_{\alpha}}^{1/2}(A^\ast A)\varphi(A^\ast A)}^{2}\\ &\leq \norm{{r_{\alpha}}(A^\ast A)\varphi^{2}(A^\ast A)}, \end{split} \end{equation} where we used the estimate~$\norm{H^{1/2}}{}\leq \norm{H}^{1/2}$ for a self-adjoint non-negative operator~$H$. Then, if the function~$\varphi^{2}$ is sub-linear, we find that $$ \norm{{r_{\alpha}}(A^\ast A)\varphi^{2}(A^\ast A)}\leq \varphi^{2}(\alpha). $$ Hence, in the notation of \cite{MaPe03}, $\varphi^{2}$ is a qualification for Tikhonov regularization, and Assumption~\ref{ass:conv} holds true with the index function $\Psi(\alpha) = \varphi^{2}(\alpha)$. In particular, the rate~\eqref{eq:bestrate}, which is obtained by equilibrating both summands by letting the parameter~$\alpha_\ast$ be given as solution to the equation $\Theta(\alpha_\ast) = \delta/\sqrt 2$, yields the convergence rate $$\norm{{x^\dagger} - x_{\alpha_\ast}^\delta} \leq 2 \varphi\lr{\Theta^{-1}(\delta/\sqrt 2)}, $$ which is known to be optimal in the ``low smoothness'' case, i.e., ${x^\dagger} \in \mathcal R(A^*)$. Under the same condition on $\varphi$ we can also bound the right-hand side in~(\ref{eq:axa-axp}) as $$ \frac{ \norm{{r_{\alpha}}(A^\ast A)\lr{A^\ast A}^{1/2}x^\dag}{}^{2}}{2\alpha}\leq \frac{\lr{\sqrt{\alpha}\varphi(\alpha)}^{2}}{2\alpha} = \frac 1 2 \varphi^{2}(\alpha), $$ which verifies \eqref{eq:residual-bound}. We finally turn to discussing the maximal rate at which the function~$\Psi$ may tend to zero as~$\alpha\to 0$, provided that~${x^\dagger}\neq 0$. 
Considering the ratio~$\Psi(\alpha)/\alpha$ we find \begin{align*} \frac{\Psi(\alpha)}\alpha &\geq \frac{ \norm{{r_{\alpha}}(A^\ast A)\lr{A^\ast A}^{1/2}x^\dag}{}^{2}}{2\alpha^{2}} = \frac 1 {2\alpha^{2}}\int \frac{\alpha^{2}\lambda}{(\lambda + \alpha)^{2}} dE_\lambda \|{x^\dagger}\|^2\\ & \geq \frac 1 8 \int_{\lambda \geq \alpha} \frac 1 \lambda dE_\lambda \|{x^\dagger}\|^2= \frac 1 8 \norm{\chi_{[\alpha,\infty)}(A^\ast A) \lr{A^\ast A}^{-1/2}{x^\dagger}}^{2}. \end{align*} This shows that either~${x^\dagger} \in \mathcal D\lr{\lr{A^\ast A}^{-1/2}}$, and hence that~${x^\dagger} \in\mathcal R\lr{A^{\ast}}$, in which case the right-hand side is bounded away from zero (if~${x^\dagger}\neq 0$), or we have that~${x^\dagger} \not\in \mathcal R\lr{A^{\ast}}$, and the right-hand side diverges. Hence, for nonzero ${x^\dagger}$ the best attainable rate near zero of the function~$\Psi$ is linear, as also predicted in Proposition~\ref{prop:dich}. \subsection{ROF-Filter} We consider the celebrated ROF-filter in image processing \cite{RuOsFA92}: Let ${y^\delta} \in L^2(\mathbb{R}^2)$ represent a noisy image. Then a filtered version $x \in L^2(\mathbb{R}^2)\cap BV(\mathbb{R}^2)$ is computed by minimizing the Tikhonov functional \[ T_{\alpha}(x;{y^\delta}) = \frac{1}{2}\norm{x- {y^\delta}}_{L^2(\mathbb{R}^2)}^2 + \alpha |x|_{TV}, \] where $J(x) := |x|_{TV}$ denotes the total variation of $x$ on $\mathbb{R}^2$. Obviously, this can be put into our framework with $A$ being the embedding operator from $BV(\mathbb{R}^2)$ to $L^2(\mathbb{R}^2)$. For some special cases, where ${x^\dagger}$ is the characteristic function of simple geometric shapes, the minimizers can be computed explicitly. Denote by $B_{x,R}$ a ball with center $x$ and radius $R$. Consider first the case when ${x^\dagger}$ is the characteristic function of a ball $B_{0,R}$: \[ {x^\dagger}(s) = \chi_{B_{0,R}}(s) := \begin{cases} 1 & \text{if } \|s\|_{\mathbb{R}^2} \leq R, \\ 0 & \text{else}.
\end{cases} \] The minimizer ${x_\alpha}$ of $T_{\alpha}(\cdot\,;y)$ with exact data is given by (see, e.g., \cite{Me01}) \[ {x_\alpha}(s) = \max\{1- \tfrac{2 \alpha}{R},0\} \chi_{B_{0,R}}(s). \] Calculating the index function in Assumption~\ref{ass:conv} is now a simple task, since $|\chi_{B_{0,R}}|_{TV} = 2 \pi R$: \begin{align*} J({x^\dagger}) - J({x_\alpha}) &= \Psi(\alpha) = 2 \pi R\left(1 - \max\{1- \tfrac{2 \alpha}{R},0\} \right) \\ &=2 \pi R \min\{\frac{2 \alpha}{R},1\} = 4 \pi \alpha \quad \mbox{ if } \alpha < R/2. \end{align*} For comparison, we may compute the Bregman distances. For the asymptotically interesting case $\alpha < \frac{R}{2}$ we find that \[ \Bj{\xi_\alpha}({x_\alpha};{x^\dagger}) = \Bj{\xi_\alpha}({x^\dagger};{x_\alpha}) = 0 \qquad \forall \alpha < \frac{R}{2}, \] which yields a trivial rate but, of course, does not violate the upper bound $\mod(\alpha)$ in \eqref{eq:upperbound2} for $\delta= 0$. The squared norm of the residual for $\alpha < \frac{R}{2}$ is given by \[ \norm{A {x_\alpha} - y}^2 = 4 \pi \alpha^2, \] hence, \eqref{eq:residual-bound} clearly holds. We also observe that a variational inequality of the form~\eqref{eq:vi-Phi}, or~\eqref{eq:kindermann} above, holds with $\mod(s) \sim s$. For noisy data, ${x_{\alpha}^\delta}$ cannot be calculated analytically, but our results suggest for such ${x^\dagger}$ a parameter choice of the form $\alpha = \delta$, which provides the convergence rate \[ \Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger}) \leq (4 \pi +\frac{1}{2} ) \delta. \] A less simple situation arises when the exact solution is the characteristic function of the unit square \[ {x^\dagger}(s) = \chi_{[0,1]^2}(s) = \begin{cases}1 & \text{ if } s \in [0,1]^2 \\ 0 &\text{else} \end{cases}. \] An explicit solution is known here as well \cite{Chetal10}.
For $R>0$ define the rounded square \[ C_R := \bigcup_{x: B_{x,R} \subset [0,1]^2} B_{x,R}, \] which has the shape of a square with the four corners cut off and replaced by circular arcs of radius $R$ that meet the edges of the square tangentially. The solution satisfies $0\leq {x_\alpha} \leq 1$ and can be characterized by the level sets: for $s \in [0,1]$ \begin{align*} \{{x_\alpha} > s\} = \begin{cases} \emptyset & \text{ if } s \geq 1-\frac{\alpha}{R^*} \\ C_{\frac{\alpha}{1-s}} & \text{ if } s \leq 1-\frac{\alpha}{R^*} \end{cases}. \end{align*} Here $R^*$ is a limiting value, which can be computed explicitly. Since we are interested in the asymptotics $\alpha \to 0$, we generally impose the condition $\alpha \leq R^*$ as otherwise ${x_\alpha} = 0$. The index function $\Psi$ can now be calculated by the coarea formula \begin{align*} J({x^\dagger}) - J({x_\alpha}) = \Psi(\alpha) = 4 - \int_0^{1-\frac{\alpha}{R^*} } |C_{\frac{\alpha}{1-s}}|_{TV} \,ds. \end{align*} The value of $|C_{R}|_{TV}$ is the perimeter of $C_R$ and can be calculated by elementary geometry as $|C_{R}|_{TV} = 4 - 2(4-\pi)R$. Thus, evaluating the integral, we obtain \begin{align*} \Psi(\alpha) = \frac{4}{R^*} \alpha + 2(4-\pi) \alpha\lr{\log\lr{\frac{R^*}{\alpha}}}, \qquad \alpha \leq R^*. \end{align*} Thus, in this case, \[ \Psi(\alpha)\sim \alpha \log(1/\alpha) \qquad \text{ as } \alpha \to 0. \] The residual norm is given by \begin{align*} \|A {x_\alpha} - y\|^2 &= \|{x_\alpha} -{x^\dagger}\|_{L^2}^2 \\ &= \frac{\alpha^2}{{R^*}^2} + 2(4 - \pi) \alpha^2\left(\log\lr{\frac{R^*}{\alpha}}\right), \qquad \alpha < R^*. \end{align*} Obviously, the bound \eqref{eq:residual-bound} is satisfied. The approximation error in the Bregman distance (with our choice of the subgradient element) is hence given by \begin{align*} \Bj{\xi_\alpha}({x_\alpha};x^\dag) &= J({x^\dagger}) - J({x_\alpha}) - \frac{1}{\alpha} \|A {x_\alpha} - y\|^2 = \lr{\frac{4}{R^*} - \frac{1}{{R^*}^2}} \alpha, \qquad \alpha < R^*.
\end{align*} We observe that, for the square, the parameter choice that minimizes the upper bound \eqref{eq:upperbound2} differs from that for the ball, since here $\alpha \sim C \frac{\delta}{(\log(1/\delta))^{1/2}}$, which highlights the (well-known) dependence of the parameter choice on the regularity of the exact solution. Note also that the decay of the Bregman distance $\Bj{\xi_\alpha}({x_\alpha};x^\dag)$ alone is not well suited as a measure of regularity for ${x^\dagger}$, since the logarithmic factor that appears in the condition of Assumption~\ref{ass:conv} is not observed for this Bregman distance. \subsection{On $\ell^1$-regularization when sparsity is slightly missing} We consider the \emph{injective} continuous linear operator $A\colon \ell^1\to \ell^2$ and the penalty~$J(x):=\|x\|_{\ell^1} = \norm{x}_1$. Notice that $\ell^1=c_0^*$ and thus has a predual; we assume that $A$ is weak$^*$-to-weak continuous, so that the penalty~$J$ is stabilizing in this sense (see also~\cite{FleGer17}). The crucial additional assumption on the operator~$A$ is that the unit elements $e^{(k)}$ with $e_k^{(k)}=1$ and $e_i^{(k)}=0$ for $i \not=k$ satisfy source conditions $e^{(k)}=A^*f^{(k)},\;f^{(k)} \in Y$ for all $k \in \mathbb{N}$. Under these assumptions, and with~${x^\dagger} =(x_k^\dagger)_{k \in \mathbb{N}}\in X$ from (\ref{eq:opeq}), we assign the function \begin{equation} \label{eq:BFHPhi} \mod(t)=2 \inf \limits_{n \in \mathbb{N}} \left(\sum \limits_{k=n+1}^\infty |x_k^\dagger| + t\,\sum \limits_{k=1}^n \|f^{(k)}\|_Y \right),\quad t>0. \end{equation} Notice that the function~$\mod$ from~\eqref{eq:BFHPhi} is a concave index function. It was shown in~\cite{BurFleHof13} that then a variational inequality of the form \begin{equation*} \|x-{x^\dagger}\|_X \le \|x\|_X-\|{x^\dagger}\|_X + \mod\lr{\norm{A{x^\dagger} - Ax}} \quad \mbox{for all} \quad x\in X \end{equation*} holds true.
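The shape of the function $\mod$ from (\ref{eq:BFHPhi}) can be probed numerically. The sketch below evaluates the infimum over a truncated index range for invented, non-sparse coefficients $x_k^\dagger = k^{-2}$ and invented norms $\|f^{(k)}\|_Y$ (both purely illustrative, not taken from the references):

```python
# Invented illustration data: polynomially decaying (non-sparse) coefficients
# x_k^dagger = k^{-2} and assumed source-element norms ||f^(k)||_Y = 1 + k/10,
# both truncated at n_top terms.
n_top = 200
x_dagger = [k ** -2.0 for k in range(1, n_top + 1)]
f_norms = [1.0 + 0.1 * k for k in range(1, n_top + 1)]

def mod(t):
    """mod(t) = 2 * inf_n ( sum_{k>n} |x_k^dagger| + t * sum_{k<=n} ||f^(k)||_Y )."""
    tail = sum(x_dagger)
    head = 0.0
    best = tail  # n = 0: full tail, empty head
    for n in range(1, n_top + 1):
        tail -= x_dagger[n - 1]
        head += f_norms[n - 1]
        best = min(best, tail + t * head)
    return 2.0 * best

# For this non-sparse x^dagger the quotient mod(t)/t blows up as t -> 0,
# i.e., mod is strictly concave rather than linear near zero.
assert mod(0.01) / 0.01 > mod(0.1) / 0.1 > mod(1.0) / 1.0
```

As an infimum of affine functions of $t$, the computed $\mod$ is automatically concave and non-decreasing.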
This immediately implies the validity of the condition (\ref{eq:vi-Phi}) with the same index function $\mod$, and an application of~item (3) of Proposition~\ref{pro:peter} shows that the error estimate~(\ref{eq:upperbound2}) is valid for that $\mod$. The behavior of the index function $\mod$ from (\ref{eq:BFHPhi}) essentially depends on the decay rate of the tail $x_k^\dagger \to 0$ of the solution element~${x^\dagger}$. When sparsity is (slightly) missing, then the function~$\mod$ will be strictly concave. However, if~${x^\dagger}$ is sparse, i.e.,\ $x_k^\dagger=0$ for $k>n_{max}$, then the function~$\mod$ reduces to the linear function $$ \mod(t)= 2\left(\sum \limits_{k=1}^{n_{max}} \|f^{(k)}\|_Y \right)\,t, \quad t >0. $$ As Example~\ref{xmpl:monomial} highlights, this results in a linear companion function~$\Psi$. Thus Theorem~\ref{thm:mod} applies, and the choice of~$\alpha \sim \delta$ yields a rate for the Bregman distance $\Bj{\xi_\alpha^\delta}({x_{\alpha}^\delta};{x^\dagger}) = \mathcal O(\delta)$ as~$\delta\to 0$ in the sparse case. \section{Outlook to higher order rates} \label{sec:outlook} There might be a way to overcome the limitation to sub-linear functions~$\Psi$ in Assumptions~\ref{ass:conv} and~\ref{ass:2prime}. The underlying observation for this is the identity \begin{equation} \label{eq:noiseminus} B_{\xi_{\alpha}}\lr{x_{\alpha};{x^\dagger}} = \frac{2}{\alpha}(T_\alpha({x^\dagger};y)-T_\alpha({x_\alpha};y))-(J({x^\dagger})-J({x_\alpha})). \end{equation} The right-hand side above is again entirely based on noise-free quantities, and its decay could be used as a smoothness assumption.
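For the quadratic penalty of Section~\ref{subsec:quadratic}, where for $J(x)=\frac12\|x\|^2$ the Bregman distance reduces to $\frac12\|x_\alpha - x^\dagger\|^2$, the identity (\ref{eq:noiseminus}) can be checked directly in a small diagonal model (toy spectrum, invented for illustration):

```python
# Toy diagonal check of the identity (eq:noiseminus) for J(x) = 0.5*||x||^2,
# where the Bregman distance equals 0.5*||x_alpha - x^dagger||^2.
lam = [2.0, 0.5, 0.01]   # assumed eigenvalues of A*A
c = [1.0, -0.8, 0.3]     # coefficients of x^dagger; noise-free y = A x^dagger
alpha = 0.05

x_alpha = [l / (l + alpha) * ci for l, ci in zip(lam, c)]

def J(x):
    return 0.5 * sum(xi ** 2 for xi in x)

def T(x):
    fit = 0.5 * sum(l * (xi - ci) ** 2 for l, xi, ci in zip(lam, x, c))
    return fit + alpha * J(x)

bregman = 0.5 * sum((xa - ci) ** 2 for xa, ci in zip(x_alpha, c))
rhs = (2.0 / alpha) * (T(c) - T(x_alpha)) - (J(c) - J(x_alpha))
assert abs(bregman - rhs) < 1e-12
```

Both sides agree up to floating-point rounding, since the identity is exact term by term in the eigenbasis.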
If one could prove an inequality of the form \begin{equation} \label{eq:noiseplus} B_{\xi_{\alpha}^\delta}\lr{x_{\alpha}^\delta;{x^\dagger}} \le C_1\,B_{\xi_{\alpha}}\lr{x_{\alpha};{x^\dagger}}+ C_2\,\delta^2/\alpha,\quad \alpha>0, \end{equation} with positive constants $C_1$ and $C_2$, then this might open the pathway for higher order rates. Indeed, in Hilbert space~$X$ and for the standard penalty~$J(x) := \tfrac 1 2 \norm{x}_X^2$, cf.~Section~\ref{subsec:quadratic}, we find that $B_{\xi_{\alpha}^\delta}\lr{x_{\alpha}^\delta;{x^\dagger}}=\tfrac 1 2 \|{x_{\alpha}^\delta}-{x^\dagger}\|_X^2$, and hence that the inequality~(\ref{eq:noiseplus}) is satisfied with $C_1=2$ and $C_2=1$. Moreover, one can easily verify that $$ \frac{2}{\alpha}(T_\alpha({x^\dagger};y)-T_\alpha({x_\alpha};y))-(J({x^\dagger})-J({x_\alpha})) = \frac 1 2 \norm{{r_{\alpha}}(A^\ast A){x^\dagger}}^2, $$ with~${r_{\alpha}}(A^\ast A)= \alpha\lr{\alpha + A^\ast A}^{-1}$; the right-hand side is half the squared norm of the regularization error ${x^\dagger} - {x_\alpha}$ of (standard linear) Tikhonov regularization. This quantity is known to decay at order up to~$\mathcal O(\alpha^2)$ as $\alpha\to 0$, which then allows for higher rates $B_{\xi_{\alpha}^\delta}\lr{x_{\alpha}^\delta;{x^\dagger}}=\mathcal{O}(\delta^{4/3})$, attained under the limiting source condition ${x^\dagger}=A^*Aw,\;w \in X$, and for the a priori parameter choice $\alpha \sim \delta^{2/3}$. It is thus interesting to see whether and under which additional assumptions an inequality of the form~\eqref{eq:noiseplus} holds.
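The Hilbert-space claim above can be probed numerically as well; the sketch below uses an assumed diagonal model $A = \operatorname{diag}(\sqrt{\lambda_k})$ (an invented toy spectrum) and random data perturbations of noise level $\delta$, and checks the inequality (\ref{eq:noiseplus}) with $C_1 = 2$, $C_2 = 1$:

```python
import random

random.seed(0)
lam = [1.0, 0.2, 0.01]    # assumed eigenvalues of A*A, i.e. A = diag(sqrt(lam))
c = [0.9, -0.5, 0.4]      # x^dagger in the eigenbasis
alpha, delta = 0.05, 0.02

x_alpha = [l / (l + alpha) * ci for l, ci in zip(lam, c)]
b_free = 0.5 * sum((xa - ci) ** 2 for xa, ci in zip(x_alpha, c))

for _ in range(100):
    # random data perturbation e with ||e|| = delta
    e = [random.uniform(-1.0, 1.0) for _ in lam]
    norm = sum(ei ** 2 for ei in e) ** 0.5
    e = [delta * ei / norm for ei in e]
    # noisy minimizer: x_alpha^delta = (A*A + alpha)^{-1} (A*A x^dagger + A* e)
    x_d = [(l * ci + l ** 0.5 * ei) / (l + alpha)
           for l, ci, ei in zip(lam, c, e)]
    b_noisy = 0.5 * sum((xi - ci) ** 2 for xi, ci in zip(x_d, c))
    assert b_noisy <= 2.0 * b_free + delta ** 2 / alpha
```

Here the inequality even holds with room to spare, since $\|g_\alpha(A^\ast A)A^\ast e\|^2 \le \delta^2/(4\alpha)$ in the diagonal model.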
{ "attr-fineweb-edu": 1.642578, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUgCjxK7IDPADJ40n3
\section{Introduction} Intelligent robots require the ability to understand their environment through parsing and segmenting the 3D scene into meaningful objects. The rich appearance-based information contained in images renders vision a primary sensory modality for this task. \input{./tikz/teaser2.tex} In recent years, large progress has been achieved in semantic segmentation of images. Most current state-of-the-art approaches apply deep learning for this task. With RGB-D cameras, appearance as well as shape modalities can be combined to improve the semantic segmentation performance. Less explored, however, is the usage and fusion of multiple views onto the same scene which appears naturally in the domains of 3D reconstruction and robotics. Here, the camera is moving through the environment and captures the scene from multiple view points. Semantic SLAM aims at aggregating several views in a consistent 3D geometric and semantic reconstruction of the environment. In this paper, we propose a novel deep learning approach for semantic segmentation of RGB-D images with multi-view context. We base our network on a recently proposed deep convolutional neural network (CNN) for RGB and depth fusion~\cite{lingni16accv} and enhance the approach with multi-scale deep supervision. Based on the trajectory obtained through RGB-D simultaneous localization and mapping (SLAM), we further regularize the CNN training with multi-view consistency constraints as shown in Fig.~\ref{fig:teaser}. We propose and evaluate several variants to enforce multi-view consistency during training. A shared principle is using the SLAM trajectory estimate to warp network outputs of multiple frames into the reference view with ground-truth annotation. By this, the network not only learns features that are invariant under view-point change. Our semi-supervised training approach also makes better use of the annotated ground-truth data than single-view learning. 
This alleviates the need for large amounts of annotated training data which is expensive to obtain. Complementary to our training approach, we aggregate the predictions of our trained network in keyframes to increase segmentation accuracy at testing. The predictions of neighboring images are fused into the keyframe based on the SLAM estimate in a probabilistic way. In experiments, we evaluate the performance gain achieved through multi-view training and fusion at testing over single-view approaches. Our results demonstrate that multi-view max-pooling of feature maps during training best supports multi-view fusion at testing. Overall we find that enforcing multi-view consistency during training significantly improves fusion at test time versus fusing predictions from networks trained on single views. Our end-to-end training achieves state-of-the-art performance on the NYUDv2 dataset in single-view segmentation as well as multi-view semantic fusion. While the fused keyframe segmentation can be directly used in robotic perception, our approach can also be useful as a building block for semantic SLAM using RGB-D cameras. \input{./tikz/network2.tex} \section{Related Work} Recently, remarkable progress has been achieved in semantic image segmentation using deep neural networks and, in particular, CNNs. On many benchmarks, these approaches excell previous techniques by a great margin. {\bf Image-based Semantic Segmentation.} As one early attempt, Couprie~et~al.~\cite{couprie13iclr} propose a multiscale CNN architecture to combine information at different receptive field resolutions and achieved reasonable segmentation results. Gupta~et~al.~\cite{gupta14eccv} integrate depth into the R-CNN approach by Girshick~et~al.~\cite{girshick14_rcnn} to detect objects in RGB-D images. They convert depth into 3-channel HHA, \emph{i.e.,} disparity, height and angle encoding and achieve semantic segmentation by training a classifier for superpixels based on the CNN features. 
Long~et~al.~\cite{long15cvpr} propose a fully convolutional network (FCN) which enables end-to-end training for semantic segmentation. Since CNNs reduce the input spatial resolution by a great factor through layers pooling, FCN presents an upsample stage to output high-resolution segmentation by fusing low-resolution predictions. Inspired by FCN and auto-encoders~\cite{yoshua06nips}, encoder-decoder architectures have been proposed to learn upsampling with unpooling and deconvolution~\cite{noh15iccv}. For RGB-D images, Eigen~et~al.~\cite{eigen15iccv} propose to train CNNs to predict depth, surface normals and semantics with a multi-task network and achieve very good performance. FuseNet~\cite{lingni16accv} proposes an encoder-decoder CNN to fuse color and depth cues in an end-to-end training for semantic segmentation, which is shown to be more efficient in learning RGB-D features in comparison to direct concatenation of RGB and depth or the use of HHA. Recently, more complex CNN architectures have been proposed that include multi-resolution refinement~\cite{lin2017_refinenet}, dilated convolutions~\cite{yu2016_dilated} and residual units (e.g.,~\cite{wu2016_widerordeeper}) to achieve state-of-the-art single image semantic segmentation. Li~et~al.~\cite{li2016_lstmcf} use a LSTM recurrent neural network to fuse RGB and depth cues and obtain smooth predictions. Lin~et~al.~\cite{lin16exploring} design a CNN that corresponds to a conditional random field (CRF) and use piecewise training to learn both unary and pairwise potentials end-to-end. Our approach trains a network on multi-view consistency and fuses the results from multiple view points. It is complementary to the above single-view CNN approaches. {\bf Semantic SLAM.} In the domain of semantic SLAM, Salas-Moreno~et~al.~\cite{moreno14cvpr} developed the SLAM++ algorithm to perform RGB-D tracking and mapping at the object instance level. 
Hermans~et~al.~\cite{hermans14icra} proposed 3D semantic mapping for indoor RGB-D sequences based on RGB-D visual odometry and a random forest classifier that performs semantic image segmentation. The individual frame segmentations are projected into 3D and smoothed using a CRF on the point cloud. St\"uckler~et~al.~\cite{stueckler15_semslam} perform RGB-D SLAM and probabilistically fuse the semantic segmentations of individual frames obtained with a random forest in multi-resolution voxel maps. Recently, Armeni~et~al.~\cite{armeni16cvpr} propose a hierarchical parsing method for large-scale 3D point clouds of indoor environments. They first seperate point clouds into disjoint spaces, \emph{i.e.,} single rooms, and then further cluster points at the object level according to handcrafted features. {\bf Multi-View Semantic Segmentation.} In contrast to the popularity of CNNs for image-based segmentation, it is less common to apply CNNs for semantic segmentation on multi-view 3D reconstructions. Recently, Riegler~et~al.~\cite{riegler2016_octnet} apply 3D CNNs on sparse octree data structures to perform semantic segmentation on voxels. Nevertheless, the volumetric representations may discard details which are present at the original image resolution. McCormac~et~al.~\cite{mccormac2016_semanticfusion} proposed to fuse CNN semantic image segmentations on a 3D surfel map~\cite{whelan16_elasticfusion}. He~et~al.~\cite{he2017_std2p} propose to fuse CNN semantic segmentations from multiple views in video using superpixels and optical flow information. In contrast to our approach, these methods do not impose multi-view consistency during CNN training and cannot leverage the view-point invariant features learned by our network. Kundu~et~al.~\cite{kundu2016_featurespaceoptim} extend dense CRFs to videos by associating pixels temporally using optical flow and optimizing their feature similarity. 
Closely related to our approach for enforcing multi-view consistency is the approach by Su et al.~\cite{su2015_mvcnn} who investigate the task of 3D shape recognition. They render multiple views onto 3D shape models which are fed into a CNN feature extraction stage that is shared across views. The features are max-pooled across view-points and fed into a second CNN stage that is trained for shape recognition. Our approach uses multi-view pooling for the task of semantic segmentation and is trained using realistic imagery and SLAM pose estimates. Our trained network is able to classify single views, but we demonstrate that multi-view fusion using the network trained on multi-view consistency improves segmentation performance over single-view trained networks. \section{CNN Architecture for Semantic Segmentation} In this section, we detail the CNN architecture for semantic segmentation of each RGB-D image of a sequence. We base our encoder-decoder CNN on FuseNet~\cite{lingni16accv} which learns rich features from RGB-D data. We enhance the approach with multi-scale loss minimization, which gains additional improvement in segmentation performance. \subsection{RGB-D Semantic Encoder-Decoder} Fig.~\ref{fig:network} illustrates our CNN architecture. The network follows an encoder-decoder design, similar to previous work on semantic segmentation~\cite{noh15iccv}. The encoder extracts a hierarchy of features through convolutional layers and aggregates spatial information by pooling layers to increase the receptive field. The encoder outputs low-resolution high-dimensional feature maps, which are upsampled back to the input resolution by the decoder through layers of memorized unpooling and deconvolution. Following FuseNet~\cite{lingni16accv}, the network contains two branches to learn features from RGB ($\mathcal{F}_{rgb}$) and depth ($\mathcal{F}_{d}$), respectively. The feature maps from the depth branch are consistently fused into the RGB branch at each scale. 
We denote the fusion by~$\mathcal{F}_{rgb}\oplus \mathcal{F}_d$. The semantic label set is denoted as~$\mathcal{L} = \{1,2,\ldots, K\}$ and the category index is indicated with subscript~$j$. Following notation convention, we compute the classification score~$\mathcal{S}=(s_1, s_2, \ldots , s_K)$ at location~$\mathbf{x}$ and map it to the probability distribution~$\mathcal{P}=(p_1, p_2,\ldots, p_K)$ with the softmax function $\sigma(\cdot)$. Network inference obtains the probability \begin{equation}\label{eq:softmax} p_j(\mathbf{x}, \mathcal{W} \mid \mathcal{I}) = \sigma(s_j(\mathbf{x}, \mathcal{W})) = \frac{\exp (s_j(\mathbf{x}, \mathcal{W}))}{\sum_k^K \exp (s_k(\mathbf{x}, \mathcal{W}))} \;, \end{equation} of all pixels~$\mathbf{x}$ in the image for being labelled as class~$j$, given input RGB-D image~$\mathcal{I}$ and network parameters~$\mathcal{W}$. We use the cross-entropy loss to learn network parameters for semantic segmentation from ground-truth annotations $l_{gt}$, \begin{equation} L(\mathcal{W}) = - \frac{1}{N}\sum_i^N \sum_j^K \llbracket j =l_{gt} \rrbracket \log p_j(\mathbf{x}_i, \mathcal{W} \mid \mathcal{I})\;, \end{equation} where~$N$ is the number of pixels. This loss minimizes the Kullback-Leibler (KL) divergence between predicted distribution and the ground-truth, assuming the ground-truth has a one-hot distribution on the true label. \input{./tikz/multiscale.tex} \subsection{Multi-Scale Deep Supervision} The encoder of our network contains five $2\times 2$ pooling layers and downsamples the input resolution by a factor of 32. The decoder learns to refine the low resolution back to the original one with five memorized unpooling followed by deconvolution. In order to guide the decoder through the successive refinement, we adopt the deeply supervised learning method~\cite{lee15aistats, dosovitskiy15iccv} and compute the loss for all upsample scales. 
For this purpose, we append a classification layer at each deconvolution scale and compute the loss with respect to the ground truth at the respective resolution, which is obtained through stochastic pooling~\cite{zeiler2013_stochasticpooling} over the full-resolution annotation (see Fig.~\ref{fig:multiscale} for an example).

\section{Multi-View Consistent Learning and Prediction}\label{sec:consistency}
While CNNs have been shown to obtain state-of-the-art semantic segmentation performance on many datasets, most of these studies focus on single views. When observing a scene from a moving camera, such as on a mobile robot, the system obtains multiple different views of the same objects. The key innovation of this work is to explore the use of temporal multi-view consistency within RGB-D sequences for CNN training and prediction. For this purpose, we perform 3D data association by warping multiple frames into a common reference view. This then enables us to impose multi-view constraints during training. In this section, we describe several variants of such constraints. Notably, these methods can also be used at test time to fuse predictions from multiple views in a reference view.

\subsection{Multi-view Data Association Through Warping}
Instead of single-view training, we train our network on RGB-D sequences with poses estimated by a SLAM algorithm. We define each training sequence to contain one reference view~$\mathcal{I}_k$ with ground-truth semantic annotations and several overlapping views~$\mathcal{I}_i$ that are tracked towards $\mathcal{I}_k$. The relative poses~$\boldsymbol{\xi}$ of the neighboring frames are estimated with tracking algorithms such as DVO SLAM~\cite{kerl13iros}. In order to impose temporal consistency, we adopt the warping concept from multi-view geometry to associate pixels between view points and introduce warping layers into our CNN.
The warping layers synthesize the CNN output in a reference view from a different view at any resolution by sampling, given a pose estimate and the known depth. The warping layers can be viewed as a variant of spatial transformers~\cite{jaderberg15nips} with fixed transformation parameters. We now formulate the warping. Given a 2D image coordinate $\mathbf{x}\in\mathbb{R}^2$, the warped pixel location
\begin{equation}\label{eq:warping}
\mathbf{x}^\omega := \omega(\mathbf{x}, \boldsymbol{\xi})=\pi \big( \mathbf{T}(\boldsymbol{\xi}) \, \pi^{-1}(\mathbf{x}, Z_i(\mathbf{x}))\big) \;,
\end{equation}
is determined by the warping function~$\omega(\mathbf{x}, \boldsymbol{\xi})$, which transforms the location from one camera view to the other using the depth~$Z_i(\mathbf{x})$ at pixel~$\mathbf{x}$ in image~$\mathcal{I}_i$ and the SLAM pose estimate~$\boldsymbol{\xi}$. The function~$\pi$ and its inverse~$\pi^{-1}$ project homogeneous 3D coordinates to image coordinates and vice versa, while~$\mathbf{T}(\boldsymbol{\xi})$ denotes the homogeneous transformation matrix derived from the pose~$\boldsymbol{\xi}$. Using this association by warping, we synthesize the output of the reference view by sampling the feature maps of neighboring views using bilinear interpolation. Since the interpolation is differentiable, it is straightforward to back-propagate gradients through the warping layers. With a slight abuse of notation, we denote the operation of synthesizing the layer output~$\mathcal{F}$ given the warping by $\mathcal{F}^\omega := \mathcal{F}(\omega( \mathbf{x}, \boldsymbol{\xi}))$. We also apply deep supervision when training for multi-view consistency through warping. As shown in Fig.~\ref{fig:network}, the feature maps at each resolution of the decoder are warped into the common reference view.
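A minimal sketch of the warping function~$\omega$ of Eq.~\eqref{eq:warping} for a single pixel is given below; the pinhole intrinsics matrix \texttt{K\_cam} is an assumption for illustration, since $\pi$ and $\pi^{-1}$ are left abstract in the text:

```python
import numpy as np

def warp(x, Z, T, K_cam):
    """Warp pixel x from image I_i into the reference view (sketch of the warping equation).

    x: (2,) pixel coordinate in image I_i
    Z: depth Z_i(x) at that pixel
    T: (4, 4) homogeneous transform T(xi) derived from the SLAM pose estimate
    K_cam: (3, 3) pinhole intrinsics (assumed here; it defines pi and its inverse)
    """
    # pi^{-1}: back-project the pixel to a 3D point in the camera frame
    p = Z * np.linalg.inv(K_cam) @ np.array([x[0], x[1], 1.0])
    # apply the rigid-body transform between the two camera views
    q = (T @ np.append(p, 1.0))[:3]
    # pi: project the transformed point back to pixel coordinates
    uv = K_cam @ q
    return uv[:2] / uv[2]
```

In the network this mapping is evaluated for a whole grid of pixels and combined with bilinear sampling of the feature maps, as described above.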
Despite the need to perform warping at multiple scales, the warping grid only needs to be computed once at the input resolution, and is normalized to canonical coordinates within the range of $[-1, 1]$. The lower-resolution warping grids can then be generated efficiently through average pooling layers.

\subsection{Consistency Through Warp Augmentation}
One straightforward solution to enforce multi-view segmentation consistency is to warp the predictions of neighboring frames into the ground-truth annotated keyframe and to compute a supervised loss there. This approach can be interpreted as a type of data augmentation using the available nearby frames. We implement this consistency method by warping the keyframe into neighboring frames, synthesizing the classification score of the nearby frame from the keyframe's view point. We then compute the cross-entropy loss on this synthesized prediction. Within RGB-D sequences, objects can appear at various scales, image locations and view perspectives, and can exhibit color distortion due to uncontrolled lighting as well as shape distortion due to the rolling shutters of RGB-D cameras. Propagating the keyframe annotation into other frames implicitly regularizes the network predictions to be invariant under these transformations.

\subsection{Consistency Through Bayesian Fusion}
Given a sequence of measurements and predictions at test time, Bayesian fusion is frequently applied to aggregate the semantic segmentations of individual views. Let us denote the semantic labelling of a pixel by~$y$ and its measurement in frame~$i$ by~$z_i$. We use the notation~$z^i$ for the set of measurements up to frame~$i$. According to Bayes' rule,
\begin{align}\label{eq:b1}
p( y \mid z^i ) &= \frac{p( z_i \mid y, z^{i-1} ) \, p( y \mid z^{i-1} )}{p( z_i \mid z^{i-1} )} \\
&= \eta_i \, p( z_i \mid y, z^{i-1} ) \, p( y \mid z^{i-1} ) \;.
\end{align}
Suppose the measurements satisfy the \emph{i.i.d.} condition, i.e.
$p( z_i \mid y, z^{i-1} ) = p( z_i \mid y )$, and an equal a-priori probability for each class; then Equation~\eqref{eq:b1} simplifies to
\begin{equation}\label{eq:b2}
p( y \mid z^i ) = \eta_i \, p( z_i \mid y ) \, p( y \mid z^{i-1} ) = \prod_{i} \eta_i \, p( z_i \mid y ) \;.
\end{equation}
Put simply, Bayesian fusion can be implemented by taking the product of the semantic labelling likelihoods of the individual frames at a pixel and normalizing the product to yield a valid probability distribution. This process can also be implemented recursively on a sequence of frames.

When training our CNN for multi-view consistency using Bayesian fusion, we warp the predictions of neighboring frames into the keyframe using the SLAM pose estimate. We obtain the fused prediction at each keyframe pixel by summing up the unnormalized log labelling likelihoods instead of the individual frame softmax outputs. Applying the softmax to the sum of log labelling likelihoods yields the fused labelling distribution. This is equivalent to Eq.~\eqref{eq:b2} since
\begin{equation}\label{eq:fusioneq}
\frac{\prod_{i} p_{i,j}^\omega}{\sum_k^K \prod_i p_{i,k}^\omega} = \frac{\prod_{i} \sigma(s_{i,j}^\omega)} {\sum_k^K \prod_i \sigma(s_{i,k}^\omega)} = \sigma\left( \sum_i s_{i,j}^\omega \right) \;,
\end{equation}
where $s_{i,j}^\omega$ and $p_{i,j}^\omega$ denote the warped classification scores and probabilities, respectively, and $\sigma(\cdot)$ is the softmax function defined in Equation~\eqref{eq:softmax}.

\subsection{Consistency Through Multi-View Max-Pooling}
While Bayesian fusion provides an approach to integrate several measurements in the probability space, we also explore direct fusion in the feature space using multi-view max-pooling of the warped feature maps.
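The equivalence in Eq.~\eqref{eq:fusioneq} between the normalized product of per-view softmax outputs and the softmax of the summed scores can be checked numerically; a small illustrative sketch (not the Caffe implementation):

```python
import numpy as np

def softmax(s):
    # Standard softmax for a single score vector.
    e = np.exp(s - s.max())
    return e / e.sum()

def bayes_fuse_probs(probs):
    # Normalized product of the per-view label distributions, as in Eq. (b2).
    prod = np.prod(probs, axis=0)
    return prod / prod.sum()

def bayes_fuse_scores(scores):
    # Equivalent fusion in score space: softmax of the summed warped scores.
    return softmax(np.sum(scores, axis=0))
```

Both functions yield the same distribution for any set of warped scores $s_{i,j}^\omega$, which is why summing the unnormalized log labelling likelihoods before a single softmax implements Bayesian fusion.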
We warp the feature maps preceding the classification layers at each scale in our decoder into the keyframe and apply max-pooling over corresponding feature activations at the same warped location to obtain a pooled feature map in the keyframe,
\begin{equation}
\mathcal{F} = \operatorname{max\_pool} (\mathcal{F}^\omega_1, \mathcal{F}^\omega_2, \ldots, \mathcal{F}^\omega_N) \;.
\end{equation}
The fused feature maps are classified and the resulting semantic segmentation is compared to the keyframe ground-truth for loss calculation.

\section{Evaluation}
We evaluate our proposed approach on the NYUDv2 RGB-D dataset~\cite{silberman12eccv}. The dataset provides 1449 pixelwise annotated RGB-D images capturing various indoor scenes, and is split into 795 frames for training/validation (trainval) and 654 frames for testing. The original sequences that contain these 1449 images are also available with NYUDv2, whereas sequences are unfortunately not available for other large RGB-D semantic segmentation datasets. Using DVO-SLAM~\cite{kerl13iros}, we determine the camera poses of neighboring frames around each annotated keyframe to obtain multi-view sequences. This provides us with 267,675 RGB-D images in total, although tracking fails for 30 of the 1449 keyframes. Following the original trainval/test split, we use 770 sequences with 143,670 frames for training and 649 sequences with 124,005 frames for testing. For benchmarking, our method is evaluated on the 13-class~\cite{couprie13iclr} and 40-class~\cite{gupta13cvpr} semantic segmentation tasks. We use the raw depth images without inpainting missing values.

\subsection{Training Details}
We implemented our approach using the Caffe framework~\cite{jia14caffe}. For all experiments, the network parameters are initialized as follows.
The convolutional kernels in the encoder are initialized with the pretrained 16-layer VGGNet~\cite{simonyan14vgg} and the deconvolutional kernels in the decoder are initialized using He's method~\cite{he15iccv}. For the first layer of the depth encoder, we average the original three-channel VGG weights to obtain a single-channel kernel. We train the network with stochastic gradient descent (SGD)~\cite{bottou12sgd} with 0.9 momentum and 0.0005 weight decay. The learning rate is set to 0.001 and decays by a factor of 0.9 every 30k iterations. All images are resized to a resolution of $320\times240$ pixels as input to the network, and the predictions are produced at the same resolution. For downsampling, we use cubic interpolation for RGB images and nearest-neighbor interpolation for depth and label images. During training, we use a minibatch of 6 that comprises two sequences, with one keyframe and two tracking frames for each sequence. We apply random shuffling after each epoch, both across and within sequences. The network is trained until convergence. We observed that multi-view CNN training does not require significantly more iterations to converge. For multi-view training, we sample from the nearest frames first and include 10 further-away frames every 5 epochs. This alleviates the problem that tracking errors typically accumulate and image overlap decreases as the camera moves away from the keyframe.

\subsection{Evaluation Criteria}
We measure the semantic segmentation performance with three criteria: global pixelwise accuracy, average classwise accuracy and average intersection-over-union (IoU) score. These three criteria can be calculated from the confusion matrix. With $K$ classes, each entry of the $K\times K$ confusion matrix~$c_{ij}$ is the total number of pixels belonging to class~$i$ that are predicted to be class~$j$.
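A compact numpy sketch of these three criteria (illustrative only), with \texttt{c[i, j]} counting pixels of class $i$ predicted as class $j$:

```python
import numpy as np

def segmentation_metrics(c):
    # c: (K, K) confusion matrix with rows = ground truth, columns = prediction.
    diag = np.diag(c)
    pixelwise = diag.sum() / c.sum()                               # global pixelwise accuracy
    classwise = np.mean(diag / c.sum(axis=1))                      # average classwise accuracy
    iou = np.mean(diag / (c.sum(axis=1) + c.sum(axis=0) - diag))   # average IoU
    return pixelwise, classwise, iou
```

For a diagonal confusion matrix all three criteria equal 1, as expected for a perfect segmentation.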
The global pixelwise accuracy is computed as $\sum_{i}c_{ii} / \sum_{ij}c_{ij}$, the average classwise accuracy as $\frac{1}{K} \sum_{i} (c_{ii} / \sum_j c_{ij})$, and the average IoU score as $\frac{1}{K} \sum_i \big(c_{ii} / (\sum_j c_{ij} + \sum_j c_{ji} - c_{ii}) \big)$.
\begin{table}[t!]
\centering
\caption{Single-view semantic segmentation accuracy of our network in comparison to the state-of-the-art methods for NYUDv2 13-class and 40-class segmentation tasks.}
\label{tab:monotest}
\begin{tabular}{C{4ex} L{20ex} L{12ex} C{9ex} C{9ex} C{4ex} }
\toprule
&methods &input &pixelwise &classwise &IoU\\
\midrule
\multirow{9}{*}{\rot{\shortstack[c]{NYUDv2 \\ 13 classes}}}
&Couprie~et~al.~\cite{couprie13iclr} &RGB-D & 52.4 & 36.2 &-\\
&Hermans~et~al.~\cite{hermans14icra} &RGB-D & 54.2 & 48.0 &-\\
&SceneNet~\cite{ankur16cvpr} &DHA & 67.2 & 52.5 &-\\
&Eigen~et~al.~\cite{eigen15iccv} &RGB-D-N & 75.4 & 66.9 & 52.6 \\
&FuseNet-SF3~\cite{lingni16accv} &RGB-D & 75.8 & 66.2 & 54.2 \\
&MVCNet-Mono &RGB-D & 77.6 & 68.7 & 56.9 \\
&MVCNet-Augment &RGB-D & 77.6 & 69.3 & 57.2 \\
&MVCNet-Bayesian &RGB-D & \bf 77.8 & \it 69.4 & \bf 57.3 \\
&MVCNet-MaxPool &RGB-D & \it 77.7 & \bf 69.5 & \bf 57.3 \\
\cmidrule{2-6}
\multirow{9}{*}{\rot{\shortstack[c]{NYUDv2 \\40 classes}}}
&RCNN~\cite{gupta14eccv} &RGB-HHA & 60.3 & 35.1 & 28.6\\
&FCN-16s~\cite{long15cvpr} &RGB-HHA & 65.4 & 46.1 & 34.0\\
&Eigen~et~al.~\cite{eigen15iccv} &RGB-D-N & 65.6 & 45.1 & 34.1\\
&FuseNet-SF3~\cite{lingni16accv} &RGB-D & 66.4 & 44.2 & 34.0\\
&Context-CRF~\cite{lin16exploring} &RGB & 67.6 & 49.6 & 37.1\\
&MVCNet-Mono &RGB-D & \it 68.6 & 48.7 & 37.6\\
&MVCNet-Augment &RGB-D & \it 68.6 & \it 49.9 & \bf 38.0\\
&MVCNet-Bayesian &RGB-D & 68.4 & 49.5 & 37.4\\
&MVCNet-MaxPool &RGB-D & \bf 69.1 & \bf 50.1 & \bf 38.0\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{Multi-view segmentation accuracy of our network using Bayesian fusion for NYUDv2 13-class and 40-class segmentation.}
\label{tab:fusiontest}
\begin{tabular}{C{4ex} L{20ex} C{11ex} C{11ex} C{11ex} }
\toprule
&methods &pixelwise &classwise &IoU\\
\midrule
\multirow{5}{*}{\rot{\shortstack[c]{NYUDv2 \\ 13 classes}}}
&FuseNet-SF3~\cite{lingni16accv} &77.19 &67.46 &56.01 \\
&MVCNet-Mono &78.70 &69.61 &58.29 \\
&MVCNet-Augment &78.94 &\it 70.48 &58.93 \\
&MVCNet-Bayesian &\bf 79.13 &\it 70.48 &\it 59.04 \\
&MVCNet-MaxPool &\bf 79.13 &\bf 70.59 &\bf 59.07 \\
\cmidrule{2-5}
\multirow{5}{*}{\rot{\shortstack[c]{NYUDv2 \\40 classes}}}
&FuseNet-SF3~\cite{lingni16accv} &67.74 & 44.92 & 35.36 \\ %
&MVCNet-Mono &70.03 & 49.73 & 39.12 \\
&MVCNet-Augment &\it 70.34 & \it 51.73 & \bf 40.19\\
&MVCNet-Bayesian &70.24 & 51.18 & 39.74\\
&MVCNet-MaxPool &\bf 70.66 & \bf 51.78 & \it 40.07\\
\bottomrule
\end{tabular}
\end{table}
\begin{table*}
\centering
\caption{NYUDv2 13-class semantic segmentation IoU scores. Our method achieves the best per-class accuracy and average IoU.}
\label{tab:classwise} \setlength{\tabcolsep}{4.5pt} \begin{tabular}{C{4ex} L{24ex} cl*{16}{r} } \toprule &method &\block{icra1}{bed} &\block{icra2}{objects} &\block{icra3}{chair} &\block{icra4}{furniture} &\block{icra5}{ceiling} &\block{icra6}{floor} &\block{icra7}{decorat.} &\block{icra8}{sofa} &\block{icra9}{table} &\block{icra10}{wall} &\block{icra11}{window} &\block{icra12}{books} &\block{icra13}{TV} &\rot{\shortstack[l]{average\\accuracy}}\\ \midrule &class frequency &4.08 &7.31 &3.45 &12.71 &1.47 &9.88 &3.40 &2.84 &3.42 &24.57 &4.91 &2.78 &0.99 & \\ \midrule \multirow{6}{*}{\rot{\shortstack[c]{single-view}}} &Eigen~et~al.~\cite{eigen15iccv} &56.71 &38.29 &50.23 &54.76 &64.50 &89.76 &45.20 &47.85 &42.47 &74.34 &56.24 &45.72 &34.34 &53.88\\ &FuseNet-SF3~\cite{lingni16accv} &61.52 &37.95 &52.67 &53.97 &64.73 &89.01 &47.11 &57.17 &39.20 &75.08 &58.06 &37.64 &29.77 &54.14\\ &MVCNet-Mono &65.27 &37.82 &54.09 &59.39 &65.26 &89.15 &49.47 &57.00 &44.14 &75.31 &57.22 &49.21 &36.14 &56.88\\ &MVCNet-Augment &65.33 &38.30 &54.15 &\bf59.54 &\bf67.65 &89.26 &49.27 &55.18 &43.39 &74.59 &58.46 &\bf49.35 &\bf38.84 &57.18\\ &MVCNet-Bayesian &\bf65.76 &38.79 &\bf54.60 &59.28 &67.58 &89.69 &48.98 &\bf56.72 &42.42 &75.26 &\bf59.55 &49.27 &36.51 &57.26\\ &MVCNet-MaxPool &65.71 &\bf39.10 &54.59 &59.23 &66.41 &\bf89.94 &\bf49.50 &56.30 &\bf43.51 &\bf75.33 &59.11 &49.18 &37.37 &\bf57.33\\ \cmidrule{2-16} \multirow{5}{*}{\rot{\shortstack[c]{multi-view}}} &FuseNet-SF3~\cite{lingni16accv} &64.95 &39.62 &55.28 &55.90 &64.99 &89.88 &47.99 &\bf60.17 &42.40 &76.24 &59.97 &39.80 &30.91 &56.01\\ &MVCNet-Mono &67.11 &40.14 &56.39 &60.90 &66.07 &89.77 &50.32 &59.49 &46.12 &76.51 &59.03 &48.80 &37.13 &58.29\\ &MVCNet-Augment &68.22 &40.04 &56.55 &61.82 &67.88 &90.06 &50.85 &58.00 &\bf45.98 &75.85 &60.43 &50.50 &\bf39.89 &58.93\\ &MVCNet-Bayesian &\bf68.38 &\bf40.87 &\bf57.10 &\bf61.84 &\bf67.98 &\bf90.64 &50.05 &59.70 &44.73 &76.50 &\bf61.75 &\bf51.01 &36.99 &59.04\\ &MVCNet-MaxPool &68.09 &41.58 
&56.88 &61.56 &67.21 &\bf90.64 &50.69 &59.73 &45.46 &\bf76.68 &61.28 &50.60 &37.51 &\bf59.07\\
\bottomrule
\end{tabular}
\end{table*}
\input{./tikz/visualcmp.tex}
\input{./tikz/badeg.tex}

\subsection{Single Frame Segmentation}
In a first set of experiments, we evaluate the performance of several variants of our network for direct semantic segmentation of individual frames. This means we do not fuse predictions from nearby frames to obtain the final prediction in a frame. We predict semantic segmentations with our trained models on the 654 test images of the NYUDv2 dataset and compare our methods with state-of-the-art approaches. The results are shown in Table~\ref{tab:monotest}. Unless otherwise stated, we take the results from the original papers for comparison and report their best results (i.e. the SceneNet-FT-NYU-DO-DHA model for SceneNet~\cite{ankur16cvpr} and the VGG-based model for Eigen~et~al.~\cite{eigen15iccv}). The result of Hermans~et~al.~\cite{hermans14icra} is obtained after applying a dense CRF~\cite{krahenb11nips} for each image and between neighboring 3D points to further smooth their results. We also remark that the results reported here for the Context-CRF model are fine-tuned on NYUDv2 like in our approach to facilitate comparison. Furthermore, its network output is refined using a dense CRF~\cite{krahenb11nips}, which is claimed to increase the accuracy of the network by approximately 2\%. The results for FuseNet-SF3 are obtained with our own implementation. Our baseline model MVCNet-Mono is trained without multi-view consistency, which amounts to FuseNet with a multi-scale deeply supervised loss at the decoder. However, we apply single-image augmentation to train FuseNet-SF3 and MVCNet-Mono, with random scaling between $[0.8,1.2]$, random cropping and mirroring. This data augmentation is not used for multi-view training.
Nevertheless, our results show that the different variants of multi-view consistency training outperform the state-of-the-art methods for single-image semantic segmentation. Overall, multi-view max-pooling (MVCNet-MaxPool) has a small advantage over the other multi-view consistency training approaches (MVCNet-Augment and MVCNet-Bayesian).

\subsection{Multi-View Fused Segmentation}
Since we train on sequences, in the second set of experiments we also evaluate the fused semantic segmentation over the test sequences. The number of fused frames is fixed to 50, uniformly sampled over the entire sequence. Due to the lack of ground-truth for neighboring frames, we fuse the predictions of neighboring frames in the keyframes using Bayesian fusion according to Equation~\eqref{eq:fusioneq}. This fusion is typically applied for semantic mapping using RGB-D SLAM. The results are shown in Table~\ref{tab:fusiontest}. Bayesian multi-view fusion improves the semantic segmentation by approx. 2\% on all evaluation measures over single-view segmentation. Moreover, training for multi-view consistency achieves a stronger gain over single-view training (MVCNet-Mono) when fusing segmentations than for single-view segmentation. This performance gain is also observed in the qualitative results in Fig.~\ref{fig:visualcmp}. It can be seen that our multi-view consistency training and Bayesian fusion produce more accurate and homogeneous segmentations. Fig.~\ref{fig:bageg} shows typical challenging cases for our model. We also compare classwise and average IoU scores for 13-class semantic segmentation on NYUDv2 in Table~\ref{tab:classwise}. The results of Eigen~et~al.~\cite{eigen15iccv} are from their publicly available model tested at $320\times240$ resolution. The results demonstrate that our approach gives high performance gains across all occurrence frequencies of the classes in the dataset.
\section{Conclusion} In this paper we propose methods for enforcing multi-view consistency during the training of CNN models for semantic RGB-D image segmentation. We base our CNN design on FuseNet~\cite{lingni16accv}, a recently proposed CNN architecture in an encoder-decoder scheme for semantic segmentation of RGB-D images. We augment the network with multi-scale loss supervision to improve its performance. We present and evaluate three different approaches for multi-view consistency training. Our methods use an RGB-D SLAM trajectory estimate to warp semantic segmentations or feature maps from one view point to another. Multi-view max-pooling of feature maps overall provides the best performance gains in single-view segmentation and fusion of multiple views. We demonstrate the superior performance of multi-view consistency training and Bayesian fusion on the NYUDv2 13-class and 40-class semantic segmentation benchmark. All multi-view consistency training approaches outperform single-view trained baselines. They are key to boosting segmentation performance when fusing network predictions from multiple view points during testing. On NYUDv2, our model sets a new state-of-the-art performance using an end-to-end trained network for single-view predictions as well as multi-view fused semantic segmentation without further postprocessing stages such as dense CRFs. In future work, we want to further investigate integration of our approach in a semantic SLAM system, for example, through coupling of pose tracking and SLAM with our semantic predictions. \balance \bibliographystyle{ieeetr}
\section{The TT-PET scanner layout} \label{sec:intro} \subsection{Scanner architecture} \label{sub:scanner} The proposed small-animal TOF PET scanner (shown in figure \ref{fig:scannerlayout}) is a layered detector for \SI{511}{\kilo\electronvolt} photons based on monolithic timing pixel sensors purposely developed for this project. It will contain 1920 detector ASICs, for a total of approximately 1.5 million pixels, corresponding to a 3D granularity of 500$\times$500$\times$\SI{220}{\cubic\micro\meter}. The pixel detector has a target time resolution of $ \SI{30}{\pico\second} $ rms for the measurement of electrons over the entire detection volume, for which it requires a synchronization of all the ASICs to a \SI{10}{\pico\second} precision. The scanner is formed by 16 identical stacks of detectors, called ``towers'', shaped as wedges and interleaved by cooling blocks (described in detail in section \ref{sec:thermals}). A tower has a total thickness of \SI{22.5}{\milli\meter} and is formed by 60 detection layers, called modules, grouped in super-modules of 5 layers each. \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{TTPET_Nov2018-Reversed12.jpg} \caption{CAD image of the TT-PET scanner, with the 16 towers and the cooling blocks between them represented in blue. The wedge-shaped towers are formed by ASICs of three sizes, with larger ones at larger radii.} \label{fig:scannerlayout} \end{figure}\\ \subsection{Module layout and detection mechanism} \label{sub:types} The photon detection element of the scanner is the module, which is a layered structure made of a \SI{50}{\micro\meter} thick lead absorber, a dielectric spacer and a \SI{100}{\micro\meter} thick monolithic silicon pixel timing detector. A stack of two modules is shown in figure \ref{fig:stack}. The layers are connected to each other by means of a \SI{5}{\micro\meter} double-sided adhesive tape. 
The dielectric spacer has a thickness of \SI{50}{\micro\meter} and is made of a low-permittivity material, necessary to reduce the capacitive coupling between the sensor pixels and the lead absorber. Inside the lead layer the photon generates a high-energy electron by Compton scattering or photoelectric interaction, which traverses the dielectric spacer and deposits energy in the pixel detector. Geant4 simulations show that in almost 50\% of the cases the electron is reflected back by the following lead layer. This effect has the advantage of increasing the average signal inside the pixel, but at the risk of increasing the cluster size for electrons emitted at large angles. For this reason the distance between the ASIC backplane and the lead layer should be kept as small as possible.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\textwidth]{stack.pdf}
\caption{Two detection modules, including monolithic silicon detectors, lead converters, dielectric spacers and adhesive tape. The 60 detection modules of a tower are divided in 12 super-modules of 5 layers each. The lead and silicon layers are glued together with \SI{5}{\micro\meter} thick double-sided adhesive tape.}
\label{fig:stack}
\end{figure}\\
Three monolithic active pixel sensor types are needed to maximize the sensing area at larger radii of the tower stack-up. The modules are then grouped in three categories according to their area: 7$\times$\SI{24}{\milli\meter\squared}, 9$\times$\SI{24}{\milli\meter\squared} and 11$\times$\SI{24}{\milli\meter\squared}. The total power consumption scales linearly with the area of the module. In order to cover a larger area, two chips are used in each layer, side by side.

\subsection{The super-module}
\label{sub:supermodule}
The super-module consists of an assembly of 10 modules staggered in five layers, as illustrated in the central part of figure \ref{fig:wirebond-scheme} and displayed at the front and rear sides.
Each module is electrically connected via wire bonds to the super-module flex PCB. The front side is the only open access for the services of the scanner. The pigtail of each super-module connects radially to a patch panel which distributes the power, clock and data lines. Each service flex bends radially when connecting to the patch panel; for this reason the tower includes a stress release mechanism in order not to transmit any force to the module stack-up and to the wire bonds located at the front side.\\ Wire-bond connections are made on both sides of the stack: on the top for the ASIC power and data signals and on the bottom to reference the sensor backplane to ground. The stacked die wire bond concept and tests are discussed in section \ref{sub:interfaces}. In the assembly sequence, the back side wire bonds are made first, then a \SI{300}{\micro\meter} thick spacer for wire bond protection is glued. The function of the latter is to protect the wire bond inside a mechanical envelope as well as to protect the top side wire bonds when assembling super-modules together. \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{wire-bond.png} \caption{Super-module stack-up serviced with a flex circuit at the bottom and wire bond interconnections. In the construction, five modules are glued and staggered at the rear and front sides. A spacer with a centered slot is glued at the bottom to protect the wire bond envelopes.} \label{fig:wirebond-scheme} \end{figure}\\ One of the challenges in the conceptual design of this detector is to keep the tower thicknesses below \SI{22}{\milli\meter}\footnote{The size of the scanner was calculated to allow its insertion in a small animal MRI machine, with the target of performing combined PET/MRI scans on mice.}. With the nominal thickness of the components, the stack-up should reach \SI{20.4}{\milli\meter} thickness. 
The only way to assemble such a complex multilayer structure is to use a \SI{5}{\micro\meter} thick double-coated adhesive tape, which should allow this requirement to be met.

\section{Electrical Interfaces}
\subsection{Power and I/O distribution}
\label{sub:interfaces}
The challenge of providing interconnections to all the chips in a super-module imposed the development of a specific communication protocol that allows the 10 ASICs to be connected in a daisy-chain and behave as a single larger chip. In this way, most of the signals can be provided to the super-module with a single line, being shared or connected only to the first chip in the chain and then propagated from chip to chip. The only exception to this strategy is the global synchronization signal: while it is logically the same signal distributed to all chips, having a low-jitter line is very important for the chip performance, as any uncertainty on this signal adds directly to the time resolution of the system. It was thus chosen to provide a separate differential synchronization line to each chip. Table \ref{tab:signals} lists the signals needed for each ASIC.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Signal} & \textbf{Differential} & \textbf{Type of connection} \\ \hline
Analog VDD & No & Shared \\ \hline
Digital VDD & No & Shared \\ \hline
Analog Gnd & No & Shared \\ \hline
Digital Gnd & No & Shared \\ \hline
HV & No & Shared \\ \hline
Guard ring bias & No & Shared \\ \hline
Sync & Yes & Point-to-point \\ \hline
MOSI & Yes & Daisy-chained \\ \hline
MISO & Yes & Daisy-chained \\ \hline
Clock out & Yes & Daisy-chained \\ \hline
Trigger & Yes & Daisy-chained \\ \hline
Reset & No & Shared \\ \hline
\end{tabular}
\caption{List of signals that the super-module needs to operate and communicate with the readout system.
Daisy-chaining lines from chip to chip means that it is necessary to connect chips physically stacked on top of each other.}
\label{tab:signals}
\end{table}

\subsection{Dummy super-module construction and wire bonding tests}
\label{sub:dummy}
Two PCBs (shown in figure \ref{fig:pcb-layout}) were designed and produced with a standard glass-reinforced epoxy laminate substrate to study the feasibility of the electrical interconnection between the modules and the upstream services. The first one, simulating the super-module flex connected to the read-out electronics, was designed with a resistor network to check the integrity of the circuitry after wire bonding. The second dummy board was designed to mimic the module bond-pad interconnections. It has a thickness of \textasciitilde\SI{200}{\micro\meter} and a surface area similar to that of the ASIC. An additional requirement of the final stack-up is an adhesive layer between modules offering a small and uniform thickness. The \SI{5}{\micro\meter} double-coated adhesive tape used for this test was originally developed for compact electronics applications such as cellular phones and digital cameras. The adhesion performance was found to be exceptionally good in terms of shear strength, while remaining acceptable with respect to the peeling characteristics. A first test showed that the wire-bonding process often failed in the presence of a small bow of the glued substrates, as the bow caused the edge of the dummy chips to lift by a few microns.\\
The application technique and sequence were optimized in order to obtain a reliable adhesion performance of the tape, as the procedure proved very sensitive to them. With the new procedure the adhesion was very consistent for glass, plastic and low-surface-roughness metal samples. No trapped air bubbles were visible and the measured thickness was uniform within \SI{1}{\micro\meter}.
\begin{figure}[htbp]
\centering
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{pcb-layout-2.png}
\end{minipage}\hfill
\begin{minipage}{0.55\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{pcb-layout.png}
\end{minipage}
\centering
\caption{Layout of the PCBs used for wire bonding tests. On the left, the smaller PCB simulating the ASIC; on the right, the bigger one hosting the stack to be tested, which emulated the super-module flex.}
\label{fig:pcb-layout}
\end{figure}\\
Independent measurements were made to assess the resistance to shear stress, and all the values consistently exceeded the manufacturer's specification of \SI{265}{\newton} for an adhesion surface of 20$\times$\SI{20}{\milli\meter\squared} at room temperature. Thermal cycling tests were also performed between 10 and \SI{50}{\celsius} without any visible change, which is consistent with the technical datasheet, in which the adhesion performance is specified for a range of 10 to \SI{80}{\celsius}.\\
The staggered assembly of the five dummy modules was made with a simple jig with alignment pins. The step depth was designed to be \SI{500}{\micro\meter}, which is considered a realistic target for the wire bonding access in between the layers. The thin tape was first attached to each module and the liner was then removed only when connecting the modules to the parts already assembled. The modules were handled with a vacuum tool and the application force was applied according to the successfully tested procedure. The wire bonding was then made targeting a maximum loop height of \SI{200}{\micro\meter} and a minimum, uniform pull strength of more than \SI{10}{g}. The measurements on the stack-up showed a maximum loop height of about \SI{70}{\micro\meter} and an average pull strength of \SI{12}{g} with less than 5\% dispersion.\\
A wire-bonding routine was set up to test the reliability of the process.
Two samples were wire bonded without any failure and completed in less than 15 minutes each. The samples were then cycled ten times in temperature inside a climate chamber from 0 to \SI{50}{\celsius} and the electrical continuity was successfully checked with a resistor network and probe pads on the larger PCB.\\ A protection technique was also used, consisting of spraying the wire bonds with polyurethane in order to protect them against possible chemical issues due to water contamination or unintentional mechanical stresses\footnote{ The same technique was tested for the Insertable B-Layer of the ATLAS experiment at CERN\cite{IBL}.}. After this procedure, additional thermal cycling tests were performed, without showing any measurable degradation. \begin{figure}[htbp] \centering \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=0.9\textwidth]{IMG_3892.JPG} \end{minipage}\hfill \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=0.9\textwidth]{wirebond-pic2.png} \end{minipage}\hfill \begin{minipage}{0.25\textwidth} \centering \includegraphics[width=0.9\textwidth]{wirebond-pic3.png} \end{minipage} \centering \caption{Pictures of the stacked wire-bond test. The left picture shows the staggered assembly glued together using ultra-thin double-coated tape. The central picture illustrates the wire bond made over the six pad layers. The right picture was taken with a microscope showing the bond foot of three layers.} \label{fig:wirebond-pics} \end{figure}\\ \section{Thermal management} \label{sec:thermals} \subsection{Requirements and cooling strategy} \label{sub:requirements} Each TT-PET monolithic chip features many detection channels that are required to be powered at all times.
While the front-end uses a relatively low power compared to similar detectors (less than \SI{80}{\milli\watt\per\square\centi\meter}, see \cite{pierpaolo} and \cite{ivan}), this number is still much higher than the power consumption of any other block inside the ASIC (TDC, readout logic, etc.) due to the high granularity of the sensor. We can thus assume that all the power is drawn by the pixel front-ends, placed in two rows at the long edges of the chips.\\ In terms of thermal management of the detector, the key items in the design are the cooling blocks that are interposed between the towers. The total power to be dissipated is not very high (\textasciitilde\SI{300}{\watt}) but is contained in a small and confined volume. Therefore, particular care has to be taken to ensure a sufficient flow of the cooling fluid. The cooling block has to extract the heat through the fluid circulating across the channels. The challenge is to identify a non-metallic material that can be built with micro-channeling circuitry, which could be interfaced to the outside world through pipes and manifold systems. The target minimum Heat Transfer Coefficient (HTC) for the cooling blocks, estimated from FEA simulations, is \SI{4000}{\watt\per\meter\squared\per\kelvin}.\\ In order to qualify possible materials and designs, prototypes of the cooling blocks were built and later used in a thermal mock-up of the scanner tower. \begin{figure}[htbp] \centering \begin{minipage}{0.35\textwidth} \centering \includegraphics[width=0.9\textwidth]{coolingblock1.jpg} \end{minipage}\hfill \begin{minipage}{0.65\textwidth} \centering \includegraphics[width=0.9\textwidth]{coolingblock2.jpg} \end{minipage} \centering \caption{Two types of cooling blocks were produced based upon the same design.
One made of aluminum (on the left) and the second made of aluminum oxide ceramic (on the right).} \label{fig:coolingblock} \end{figure}\\ The wedge structure of the tower was simplified for the cooling block prototypes using a uniform thickness along the block length and width. The two types of blocks shown in figure \ref{fig:coolingblock}, one made of aluminum and another made of aluminum oxide ceramic (\ce{Al2O3}), were fabricated by a laser sintering method to assess their feasibility, performance and reliability. The typical precision that is achievable with laser sintering is in the order of \SI{100}{\micro\meter} in all directions, and the minimum wall thickness guaranteed by the manufacturer is \SI{400}{\micro\meter}. The design of both the thermal mock-up (see figure \ref{fig:coolingdrawing}) and the final version is done according to the aforementioned constraints. The process was found to be feasible in terms of manufacturability, but offered a poor yield. Out of the six parts produced, two showed an excellent coolant flow, while the other four failed due to partial or full clogging. The main reason is that the ratio between the channel length and its diameter is such that the channels cannot be evacuated easily during the construction and residual material tends to clog the pipes after a while. The final design will be modified so that the diameter of the holes will be increased by about 50\% while the wall thickness will be kept the same. \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{coolingblock-section.jpg} \caption{Drawing of the cooling block for the thermal mock-up. Dimensions are in \si{\milli\meter}.} \label{fig:coolingdrawing} \end{figure}\\ In the thermal mock-up the inlet/outlet fittings and U turns were 3D-printed with clear resin using a stereolithography technique.
This is a solution to be investigated for the final version, since the access and space between towers are very limited (standard flexible pipes at the lowest radii might interfere with the module flexes).\\ As a result of this test, the channel diameter of the final cooling block (see figure \ref{fig:newsize}) is foreseen to be \SI{1.3}{\milli\meter} and the distance between channels is balanced over the block height. This will limit the effect of channel clogging. To reduce the risk even more, the cooling block will be divided into 3 parts: the central and ``linear'' segment (only straight channels to ease the ``cleaning'' after processing), and 2 extremities that will be glued and sealed together. Figure \ref{fig:pipes} shows the limited space available at small radius in the tower assembly. A design currently under study foresees producing the extremities with laser sintering as well, integrating the needed channel U turns and organizing the inlet/outlet fitting at higher radius to avoid any interference with the module flexes.\\ \begin{figure}[htbp] \centering \begin{minipage}{0.2\textwidth} \centering \includegraphics[width=0.9\textwidth]{holes1.jpg} \end{minipage}\hfill \begin{minipage}{0.8\textwidth} \centering \includegraphics[width=0.9\textwidth]{holes2.jpg} \end{minipage} \centering \caption{Cross section of the modified cooling block proposed for the final design, with wider holes compared to the measured prototype.} \label{fig:newsize} \end{figure}\\ The manifolding between towers will be organized outside the detector envelope, so that only 2 pipes will run in and out of the patch panel with flexible tubing. More detailed studies will be done on the connections (fitting parts) of the cooling blocks to allow for modularity and ease of maintenance.
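The benefit of the wider channels can be estimated with the Hagen--Poiseuille law: for laminar flow at a fixed pressure drop and channel length, the volumetric flow rate scales with the fourth power of the channel diameter. The numerical sketch below is illustrative only and is not part of the published analysis; the \textasciitilde\SI{0.87}{\milli\meter} prototype diameter is inferred from the quoted \textasciitilde50\% increase to \SI{1.3}{\milli\meter}.

```python
# Illustrative Hagen-Poiseuille scaling: for laminar flow at fixed pressure
# drop and channel length, the volumetric flow rate goes as d^4, so a ~50%
# wider channel carries roughly 5x the flow and is far more tolerant of
# partial clogging. The 0.87 mm prototype diameter is an INFERRED value.

def laminar_flow_ratio(d_new_mm: float, d_old_mm: float) -> float:
    """Ratio of volumetric flow rates at equal pressure drop (Q ~ d^4)."""
    return (d_new_mm / d_old_mm) ** 4

gain = laminar_flow_ratio(1.3, 0.87)
print(f"flow gain ~ {gain:.1f}x")
```

Under these assumptions the modest diameter change buys about a factor of five in flow conductance, which is consistent with the decision to widen the holes rather than redesign the channel routing.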
\begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{pipes.jpg} \caption{CAD drawing of the inlet/outlet cooling block pipes.} \label{fig:pipes} \end{figure}\\ \subsection{Cooling test on the scanner thermal mock-up} \label{sub:mockup} A thermal mock-up of the tower (shown in figure \ref{fig:thermal}) was built to demonstrate the feasibility of the tower assembly with the interface to the cooling block, and to verify that the operating temperature remains below \SI{40}{\celsius}. The mock-up was connected to a chilled water system with a regulated inlet temperature at about \SI{10}{\celsius} and no pipe insulation installed along the distribution lines. In this condition a relative humidity of less than 50\% at an ambient temperature of \SI{21}{\celsius} is required in order to keep the dew point below the coolant temperature. The integrated electronics does not have strict temperature constraints, but the assembly must remain mechanically stable with minimum built-in stresses due to CTE mismatch. The total power, estimated to be \SI{18}{\watt} per tower, requires a cooling capacity of \SI{320}{\watt} including 10\% coming from the service power losses.\\ The thermal mock-up was equipped with heater pads and temperature sensors. The tower consisted of 12 super-modules each composed of 5 parts with a maximum area of 51$\times$\SI{15}{\milli\meter\squared}.
Each super-module includes: \begin{itemize} \item 1 laser-cut bare \SI{300}{\micro\meter} thick silicon die \item 1 water-jet-cut stainless steel plate with a thickness of \SI{1}{\milli\meter} \item 1 thermo-foil heater pad capable of delivering up to \SI{20}{\watt} with an intrinsic resistance of \textasciitilde\SI{40}{\ohm} and with a maximum thickness of \SI{250}{\micro\meter} \item 1 thin film NTC thermistor with a maximum thickness of \SI{500}{\micro\meter} \item 3 layers of ultra-thin double-sided adhesive tape of \SI{5}{\micro\meter} thickness \end{itemize} The overall thermal super-module thickness is nominally \SI{1.565}{\milli\meter}, not far from the target value for the real super-module, and it is made of materials with very similar thermal conductivities. An FEA model of this thermal mock-up was built to make comparisons with the measurements. In order to service the super-module tower, the orientation of the odd and even super-module services was alternated, given the increased thickness at the soldering-pad transition between the kapton and the wires, as illustrated in figure \ref{fig:thermal}. \begin{figure}[htbp] \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{supermodule-cad.jpg} \end{minipage}\hfill \begin{minipage}{0.55\textwidth} \centering \includegraphics[width=0.9\textwidth]{supermodule-pic.jpg} \end{minipage} \centering \caption{On the left is a CAD view of the thermal super-module with the cooling block attached and the pipe interconnection represented in translucent color. On the right is a picture of the thermal mock-up showing services of the odd and even super-module exiting on the two sides.} \label{fig:thermal} \end{figure}\\ The cooling blocks were thermally bonded at the side of the stack-up with a thermal paste interface and held with two nylon screws on each side (see figure \ref{fig:thermal-setup}).
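The nominal super-module thickness quoted above follows directly from the listed layers. The bookkeeping below is a sketch and assumes that the \SI{500}{\micro\meter} NTC thermistor sits beside the stack rather than adding to its height.

```python
# Layer bookkeeping for the nominal thermal super-module thickness.
# Assumption: the 500 um NTC thermistor does not add to the stack height.
layers_um = {
    "silicon die": 300,
    "stainless steel plate": 1000,
    "heater pad (max)": 250,
    "3 x adhesive tape": 3 * 5,
}

total_mm = sum(layers_um.values()) / 1000.0
print(f"nominal stack thickness = {total_mm:.3f} mm")  # 1.565 mm, as quoted
```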
The system was serviced with flexible silicone pipes that minimize the stress at the stereo-lithographic fittings of the cooling blocks. The cooling was set up with a manifolding system to have parallel flow in the two blocks. In addition, each cooling block was equipped with a heater pad allowing the injection of the equivalent power coming from the neighboring tower and with an NTC temperature sensor. The readout system of the NTC temperature sensors uses custom CAN controller readout boards. Each of the boards can handle up to 12 ADC channels, so two readout boards were daisy chained via the CAN interface links. Data were transferred to the PC via a CAN to USB interface. The system was designed to have a resolution of \SI{0.1}{\celsius}. In order to avoid systematic errors and reach the desired accuracy, each of the sensors connected to the board was calibrated for temperatures in a range between 5 and \SI{40}{\celsius} with a step of \SI{2}{\celsius}. The linearity error in this range was found to be less than 2\%. \begin{figure}[htbp] \centering \includegraphics[width=0.7\textwidth]{thermal-setup.jpg} \caption{Thermal mock-up with cooling blocks. The red wires are servicing the heater pads which are segmented into four power supply channels. The inlet and exhaust tubes are split in order to circulate the cooling fluid in parallel into the two blocks.} \label{fig:thermal-setup} \end{figure}\\ \subsection{Test results and comparison with FEA simulations} \label{sub:results} The flow speed of the coolant has a direct impact on the HTC and an asymmetry between the two blocks can lead to an undesired temperature distribution. Therefore, due to the yield issues described in section \ref{sub:requirements}, for the measurement campaign the mock-up was connected to the two cooling blocks with the best coolant flow, one aluminum block on one side and an \ce{Al2O3} ceramic one on the other side.
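The resistance-to-temperature conversion applied by the NTC readout boards described above is not specified in the text; a common choice for NTC thermistors is the Beta-parameter model sketched below, where the values of $R_0$, $T_0$ and $B$ are hypothetical examples rather than the calibrated constants of the actual sensors.

```python
import math

# Beta-parameter model for an NTC thermistor: 1/T = 1/T0 + ln(R/R0)/B.
# R0, T0 and B are EXAMPLE values, not the constants of the actual sensors,
# whose per-channel calibration is described in the text.

def ntc_temperature_c(r_ohm: float, r0_ohm: float = 10_000.0,
                      t0_c: float = 25.0, beta_k: float = 3950.0) -> float:
    """Convert an NTC resistance reading to a temperature in Celsius."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ohm / r0_ohm) / beta_k
    return 1.0 / inv_t - 273.15

print(f"{ntc_temperature_c(10_000.0):.1f} C")  # 25.0 C at the reference point
```

In practice the per-sensor calibration quoted in the text would replace these nominal constants, which is what keeps the linearity error below the 2\% level.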
The total power of up to \SI{36}{\watt} was injected at 4 different points: \begin{itemize} \item \SI{18}{\watt} distributed uniformly to the two heater pads of the cooling blocks \item \SI{6}{\watt} distributed uniformly to the 4 super-modules of the lower group \item \SI{6}{\watt} distributed uniformly to the 4 super-modules of the middle group \item \SI{6}{\watt} distributed uniformly to the 4 super-modules of the upper group \end{itemize} In this set-up, thanks to the parts made by stereolithography, it was possible to precisely monitor the temperature of the coolant at the inlet and outlet of the manifold by inserting and gluing an NTC film sensor inside a small slit, directly in contact with the cooling fluid. This feature allows measuring the power exchanged between the set-up and the environment through the aluminum jig and base plate. This power was measured to be approximately \SI{6}{\watt}, a sixth of the total injected power. Moreover, even though the coolant temperature was set at \SI{10}{\celsius}, there was an increase in temperature of \SI{1.8}{\celsius} due to the relatively long flexible pipe (\textasciitilde\SI{4}{\meter} back and forth). The HTC was estimated by measuring the temperature difference when no power was injected into the setup and when the full power was set, normalizing the result by the total surface between the cooling fluid and the block participating in the thermal exchange. Since it was not possible to measure the coolant temperature independently at the outlet of the two cooling blocks, the HTC was obtained by averaging the heat exchange for the two types of blocks in this condition and was found to be \textasciitilde\SI{8000}{\watt\per\meter\squared\per\kelvin}. This value together with the environmental temperature was then used in a thermal FEA simulation to validate the thermal model.
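Although the coolant flow rate is not quoted, it can be estimated by inverting the water heat balance $Q = \dot m\, c_p\, \Delta T$. The sketch below is illustrative only: it takes the manifold inlet/outlet temperatures of 11.8 and \SI{16}{\celsius} measured on the mock-up, and assumes that roughly \SI{30}{\watt} (the \SI{36}{\watt} injected minus the \textasciitilde\SI{6}{\watt} lost to the environment) is carried away by the water.

```python
# Heat balance Q = mdot * c_p * dT, inverted to estimate the coolant flow.
# Assumptions (inferred, not quoted directly in the text): ~30 W carried by
# the water (36 W injected minus ~6 W environmental loss) and an
# 11.8 C -> 16 C temperature rise across the manifold.
C_P_WATER = 4186.0  # J/(kg K), specific heat of water

def coolant_mass_flow(q_w: float, dt_k: float, c_p: float = C_P_WATER) -> float:
    """Mass flow (kg/s) needed to carry q_w watts with a dt_k kelvin rise."""
    return q_w / (c_p * dt_k)

mdot = coolant_mass_flow(30.0, 16.0 - 11.8)
print(f"estimated coolant flow ~ {mdot * 1000:.1f} g/s (~{mdot * 3600:.1f} kg/h)")
```

Under these assumptions the flow through the manifold is of order a few grams per second, a useful sanity check when sizing the final pipes and fittings.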
\begin{figure}[htbp] \centering \includegraphics[width=0.7\textwidth]{temps1.png} \includegraphics[width=0.7\textwidth]{temps2.png} \caption{Temperature distribution of the set-up at various steps of the cooling and power configuration (top) as compared to the temperature distribution of the 12 super-modules (bottom).} \label{fig:plots} \end{figure} The temperature distribution of the super-modules ranges from 18 to \SI{26}{\celsius} at the full power of the heaters (as shown in figure \ref{fig:plots}). The first dummy super-module has the lowest measured temperature because of the effect of the base plate. All the measurements show that the target of a maximum temperature of \SI{40}{\celsius} is achievable for a detector consisting of 16 towers and a power consumption of \SI{300}{\watt}, on the condition that the coolant flow inside the blocks is guaranteed and that the minimum HTC value of \SI{4000}{\watt\per\meter\squared\per\kelvin} is reached.\\ An FEA model, shown in figure \ref{fig:assembly}, was designed to be as close as possible to the real test bench in terms of geometry and materials, including: \begin{itemize} \item 3$\times$4 modules (as described previously) \item 1 aluminum cooling block \item 1 ceramic \ce{Al2O3} cooling block \item 1 aluminum base block (assembly jig) \item 1 aluminum base plate providing a thermal boundary condition (\SI{19}{\celsius} temperature at the bottom surface) \end{itemize} \begin{figure}[htbp] \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{assembly1.png} \end{minipage}\hfill \begin{minipage}{0.55\textwidth} \centering \includegraphics[width=0.9\textwidth]{assembly2.png} \end{minipage} \centering \caption{Assembly model of the full system (left) and sub-model of a super-module stack (right). The silicon plate is shown in blue, while the steel plate is shown in gray.
The heater pad where the heat flux is applied is at the very bottom.} \label{fig:assembly} \end{figure} The 12 modules are thermally connected to the side cooling blocks by thermal glue (loaded epoxy or equivalent). The blocks in the mock-up were insulated from the aluminum base block by interfacing Nomex foils. This is accounted for in the FEA model. The simulation was performed using the following setup.\\ Boundary conditions: \begin{itemize} \item The temperature is set at \SI{19}{\celsius} at the bottom side of the base plate in aluminum (constant). \item The air convection (\SI{19}{\celsius}) is applied to the side of the cooling blocks (vertical) and to the horizontal surfaces of the top module and the aluminum parts (HTC = \SI{1}{\watt\per\meter\squared\per\kelvin}) \item The coolant (water for inlet/outlet) through the cooling block channels is considered as a convective temperature applied to the surfaces (channels). \end{itemize} Loads: \begin{itemize} \item A heat flux is applied to each heater pad, corresponding to \SI{18}{\watt} in total (\SI{2.4}{\milli\watt\per\milli\meter\squared}) \item 2 heat fluxes are also applied to the cooling blocks sides to account for the effect of adjacent modules (\SI{9}{\watt} per cooling block, equivalent to \SI{0.07}{\watt\per\milli\meter\squared}) \end{itemize} The coolant temperature at the inlet and outlet of the manifold was set respectively to \SI{11.8}{\celsius} and \SI{16}{\celsius}, in accordance with the measurements on the mock-up. However, since the blocks are quite different (due to heat loss variations and different flows, leading to different HTCs), the temperature of the coolant in each block is expected to be different. The FEA model simulates the two blocks independently in terms of HTC and temperature. In the first simulation an HTC of \SI{10000}{\watt\per\meter\squared\per\kelvin} for the \ce{Al2O3} and \SI{6000}{\watt\per\meter\squared\per\kelvin} for the aluminum block were set.
In this condition the maximum temperature difference between the simulation and measurement was \SI{2}{\celsius}. To improve the matching between the simulation and the measurements, a parametric optimization on the two HTC values was performed. The optimal values of the HTCs are respectively \SI{5000}{\watt\per\meter\squared\per\kelvin} for the aluminum and \SI{9000}{\watt\per\meter\squared\per\kelvin} for the \ce{Al2O3} block, for which the temperature mapping at full power (\SI{36}{\watt}) is shown in figure \ref{fig:temperaturemapping}. The asymmetry between the two cooling blocks is clearly visible: temperatures on the cooling blocks have a discrepancy of 2 to \SI{3}{\celsius} for the same coolant temperature. The results, shown in table \ref{tab:FEA}, are in good agreement with the measurements, with an average error of less than \SI{1}{\celsius}, offering a good starting point towards simulations of the full scanner. \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|} \hline MOD \# & Measured Temp (\SI{}{\celsius})& FEA Temp (\SI{}{\celsius})\\ \hline 1 & 18.0 & 19.6 \\ \hline 2 & 20.5 & 20.6 \\ \hline 3 & 19.3 & 22.5 \\ \hline 4 & 21.8 & 22.9 \\ \hline 5 & 21.0 & 24.0 \\ \hline 6 & 23.6 & 24.3 \\ \hline 7 & 23.1 & 24.8 \\ \hline 8 & 24.8 & 24.8 \\ \hline 9 & 24.6 & 25.0 \\ \hline 10 & 25.6 & 24.8 \\ \hline 11 & 24.0 & 25.0 \\ \hline 12 & 26.0 & 24.6 \\ \hline \end{tabular} \caption{List of measured and simulated temperatures on each module of the thermal mock-up.} \label{tab:FEA} \end{table} \begin{figure}[htbp] \centering \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{temps-overall1.png} \end{minipage}\hfill \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{temps-overall2.png} \end{minipage} \centering \caption{Temperature mapping of the system after HTC fine tuning. On the left, the entire block is depicted, while on the right only the module is shown.
The temperatures range from \SI{19}{\celsius} to \SI{25}{\celsius} at the NTC locations (in the voids between modules).} \label{fig:temperaturemapping} \end{figure}\\ \section{Conclusions} \label{sec:conclusions} The TT-PET small animal scanner has an innovative multi-layer layout that presents many challenges in terms of implementation and production, in particular concerning the connection of electrical interfaces and services, and the cooling of a system of this complexity, in which the detector dead area must be minimized. A stacked wire-bond scheme was proposed. Test results on dummy substrates proved successful, showing a reliable and reproducible interconnection technique for the daisy-chained silicon detectors. 3D-printed cooling blocks using different materials were produced and found to perform according to simulations. Although they are capable of dissipating the power produced by the scanner, a new design is being investigated to improve the yield of the manufacturing process. \acknowledgments We would like to thank Prof. Allan Clark for his valuable suggestions and for reading this manuscript, and the technicians of the DPNC of the University of Geneva for their contribution. This study was funded by the SNSF SINERGIA grant CRSII2\_160808.
\section{Introduction} The expectation value of the Wilson loop (WL) operator $\langle} \def \ran {\rangle {\rm Tr\,}{\cal P} e^{i\int A}\ran $ is an important observable in any gauge theory. In ${\cal N}=4$ Super Yang-Mills (SYM), the Wilson-Maldacena loop (WML) \cite{Maldacena:1998im, Rey:1998ik}, which contains an extra scalar coupling making it locally-supersymmetric, was at the center of attention, but the study of the ordinary, ``non-supersymmetric'' WL is also of interest \cite{Alday:2007he,Polchinski:2011im} in the context of the AdS/CFT correspondence. Computing the large $N$ expectation value of the standard WL for some simple contours (like circle or cusp) should produce new non-trivial functions of the 't Hooft coupling $\lambda= g^2 N$ which are no longer controlled by supersymmetry but may still be possible to determine using the underlying integrability of the theory. Another motivation comes from considering correlation functions of local operators inserted along the WL: this should produce a new example of AdS$_{2}$/CFT$_{1}$ duality, similar but different from the one recently discussed in the WML case \cite{Cooke:2017qgm,Giombi:2017cqn}. In the latter case, correlators of local operators on the 1/2-BPS Wilson line have a $OSp(4^*|4)$ 1d superconformal symmetry, while in the ordinary WL case one expects a non-supersymmetric ``defect'' CFT$_1$ with $SO(3)\times SO(6)$ ``internal'' symmetry. On general grounds, for the standard WL defined for a smooth contour one should find that (i) all power divergences (that cancel in the WML case) exponentiate and factorize \cite{Polyakov:1980ca,Dotsenko:1979wb,Gervais:1979fv,Arefeva:1980zd,Dorn:1986dt,Marinho:1987fs} and (ii) all logarithmic divergences cancel as the gauge coupling is not running in ${\cal N}=4$ SYM theory. Thus its large $N$ expectation value should produce a nontrivial finite function of $\lambda$ (after factorising power divergences, or directly, if computed in dimensional regularization). 
It is useful to consider a 1-parameter family of Wilson loop operators with an arbitrary coefficient $\zeta$ in front of the scalar coupling which interpolates between the WL ($\zeta=0$) and the WML ($\zeta=1$) cases \cite{Polchinski:2011im} \begin{equation} \label{0} W^{(\zeta)} (C) = \frac{1}{N}\,\text{Tr}\,\mathcal P\,\exp\oint_{C}d\tau\,\big[i\,A_{\mu}(x)\,\dot x^{\mu} + \zeta \Phi_{m}(x) \, \theta^{m} \,|\dot x|\,\big], \qquad\qquad \theta_{m}^{2}=1\ . \end{equation} We may choose the direction $\theta_m$ of the scalar coupling in \rf{0} to be along 6-th direction, i.e. $\P_m \theta^m =\P_6$. Below we shall sometimes omit the expectation value brackets using the notation \begin{equation} \label{1111} {\rm WL}: \ \ \ \ \ \langle} \def \ran {\rangle W^{(0)} \ran \equiv W^{(0)} \ , \ \ \ \qquad\qquad \ \ \ {\rm WML}: \ \ \ \ \ \langle} \def \ran {\rangle W^{(1)}\ran \equiv {\rm W} \ . \end{equation} Ignoring power divergences, for generic $\zeta$ the expectation value $\langle} \def \ran {\rangle W^{(\z)} \ran$ for a smooth contour may have additional logarithmic divergences but it should be possible to absorb them into a renormalization of the coupling $\zeta$, {\em i.e.} \footnote{Here there is an analogy with a partition function of a renormalizable QFT: if $g_{\rm b}$ is bare coupling depending on cutoff $\L$ one has $Z_{\rm b} ( g_{\rm b}(\L), \L) = Z (g(\mu), \mu)$, \ \ $\mu {dZ\over d \mu} =\mu { \partial Z \over \partial \mu} + \beta (g) {\partial Z \over \partial g} =0$, \ \ $\beta =\mu {d g \over d \mu}$. In the present case the expectation value depends on $\mu$ via $\mu R$ where $R$ is the radius of the circle (which we often set to 1). A natural choice of renormalization point is then $\mu=R^{-1}$. 
} \begin{equation} \label{113} \langle} \def \ran {\rangle W^{(\zeta)}\ran \equiv W\big(\lambda; \zeta(\mu), \mu\big) \ , \ \ \ \ \qquad \mu {\partial \over \partial \mu} W + \beta_\zeta { \partial \over \partial \zeta} W =0 \ , \end{equation} where $\mu$ is a renormalization scale and the beta-function is, to leading order at weak coupling \cite{Polchinski:2011im} \begin{equation} \beta_\zeta =\mu { d \zeta \over d \mu } = - {\lambda \over 8 \pi^2} \zeta (1-\zeta^2) + \mathcal O ( \lambda^2) \ . \label{111} \end{equation} The WL and WML cases in \rf{1111} are the two conformal fixed points $\zeta=0$ and $\zeta=1$ where the logarithmic divergences cancel out automatically.\footnote{As the expectation value of the standard WL has no logarithmic divergences, combined with the fact that the straight line (or circle) preserves a subgroup of 4d conformal group this implies that one should have a 1d conformal $SL(2,R)$ invariance for the corresponding CFT on the line for all $\lambda$.} Given that the SYM action is invariant under the change of sign of $\Phi_m$ the fixed point points $\zeta=\pm 1$ are equivalent (we may resstrict $\zeta$ to be non-negative in \rf{0}). Our aim below will be to compute the leading weak and strong coupling terms in the WL expectation value for a circular contour in the planar limit. As is well known, the circular WML expectation value can be found exactly due to underlying supersymmetry; in the planar limit \cite{Erickson:2000af,Drukker:2000rr,Pestun:2007rz} (see also \cite{Zarembo:2016bbk}) \begin{equation} \label{1.1} {\rm W} (\text{circle}) = \frac{2}{\sqrt\lambda}\,I_{1}(\sqrt\lambda) = \begin{cases} 1+\frac{\lambda}{8}+\frac{\lambda^{2}}{192}+\cdots, &\qquad \lambda\ll 1\ , \\ \sqrt{\frac{2}{\pi}}\, { {1\over ( \sqrt \lambda)^{3/2} } }\, e^{\sqrt\lambda}\,\big(1-\frac{3}{8\,\sqrt\lambda}+\cdots\big), & \qquad \lambda \gg 1\ . 
\end{cases} \end{equation} For a straight line the expectation value of the WML is 1, and then for the circle its non-trivial value can be understood as a consequence of an anomaly in the conformal transformation relating the line to the circle \cite{Drukker:2000rr}. As this anomaly is due to an IR behaviour of the vector field propagator \cite{Drukker:2000rr}, one may wonder if the same anomaly argument may apply to the WL as well. Indeed, in this case ($\zeta=0$) there are no additional logarithmic divergences and then after all power divergences are factorized or regularized away one gets $ W^{(0)} (\text{line}) =1$; then the finite part of $ W^{(0)} (\text{circle}) $ may happen to be the same as in the WML case \rf{1.1}.\footnote{The conjecture that the circular WL may have the same value as the locally-supersymmetric WML one runs, of course, against the derivation of the expectation value of the latter based on localization \cite{Pestun:2007rz} as there is no reason why the localisation argument should apply in the standard WL case.} Some indication in favour of this is that the leading strong and weak coupling terms in the circular WL happen to be the same as in the WML case. The leading strong-coupling term is determined by the volume of the same minimal surface (${\rm AdS}_2$ with circle as a boundary) given by $2 \pi ( {1\over a} - 1) $ and (after subtracting the linear divergence) thus has the universal form $ \langle} \def \ran {\rangle W^{(\z)}\ran \sim e^{ \sqrt \lambda}$. At weak coupling, the circular WL and WML also have the same leading-order expectation value (again after subtracting linear divergence) $\langle} \def \ran {\rangle W^{(\z)} \ran= 1 + {1\over 8}\lambda + O(\lambda^2)$. However, as we shall see below, the subleading terms in WL in both weak and strong coupling expansion start to differ from the WML values, {\em i.e.}\, $\langle} \def \ran {\rangle W^{(\z)}(\rm circle)\ran$ develops dependence on $\zeta$. 
This implies, in particular, that the conformal anomaly argument of \cite{Drukker:2000rr} does not apply for $\zeta =0$.\footnote{This may be attributed to the presence of extra (power) divergences that do not cancel automatically in the standard WL case. For generic $\zeta$ there are also additional logarithmic divergences that break conformal invariance.} Explicitly, we shall find that at weak coupling (in dimensional regularization) \begin{align} &\label{1} \langle} \def \ran {\rangle W^{(\z)} \ran = 1 + {1\over 8} \lambda + \Big[ \frac{1}{192}+\frac{1}{128\,\pi^{2}} (1-\zeta^2)^2 \Big]\lambda^2 + {\cal O}(\lambda^3) \ . \end{align} This interpolates between the WML value in \rf{1.1} and the WL value ($\zeta=0$) \begin{equation} \label{91} W^{(0)}= 1 + {1\over 8} \lambda + \Big( \frac{1}{192}+\frac{1}{128\,\pi^{2}} \Big)\lambda^2 + {\cal O}(\lambda^3) \ . \end{equation} Note that the 2-loop correction in \rf{91} to the WML value in \rf{1.1} has a different transcendentality; it would be very interesting to find the all-order generalization of \rf{91}, i.e. the counterpart of the exact Bessel function expression in \rf{1.1} in the standard WL case. It is tempting to conjecture that the highest transcendentality part of $\langle} \def \ran {\rangle W \ran$ at each order in the perturbative expansion is the same for supersymmetric and non-supersymmetric Wilson loops and hence given by \rf{1.1}. The expression \rf{1} passes several consistency checks. The UV finiteness of the two-loop $\lambda^2$ term is in agreement with $\zeta$-independence of the one-loop term (cf. \rf{113},\rf{111} implying that UV logs should appear first at the next $\lambda^3$ order). 
The derivative of (log of) \rf{1} over $\zeta$ is proportional to the beta-function \rf{111} \begin{equation} \label{2} {\partial \over \partial \zeta } \log \langle} \def \ran {\rangle W^{(\z)} \ran = {\cal C} \, \beta_\zeta \ , \ \ \qquad \qquad {\cal C}= {\lambda\over 4} + {\cal O}(\lambda^2) \ , \end{equation} where ${\cal C}={\cal C}(\lambda,\zeta)$ should not have zeroes. This implies that the conformal points $\zeta=1$ and $\zeta=0$ are extrema (minimum and maximum) of $\langle} \def \ran {\rangle W^{(\z)} \ran $. This is consistent with the interpretation of $\langle} \def \ran {\rangle W^{(\z)} \ran $ as a 1d partition function on $S^1$ that may be computed in conformal perturbation theory near $\zeta=1$ or $\zeta=0$ conformal points. Indeed, eq.\rf{2} may be viewed as a special $d=1$ case of the relation ${\partial F\over \partial g_i} = {\cal C}^{ij} \beta_j $ for free energy $F$ on a sphere $S^d$ computed by perturbing a CFT$_d$ by a linear combination of operators $g_i O^i$ (see, e.g., \cite{Klebanov:2011gs,Fei:2015oha}). In the present case, the flow \cite{Polchinski:2011im} is driven by the scalar operator $\P_m \theta^m =\P_6$ in \rf{0} restricted to the line, and the condition ${\partial \over \partial \zeta } \langle} \def \ran {\rangle W^{(\z)} \ran \big|_{\zeta=0, 1} =0$ means that its one-point function vanishes at the conformal points, as required by the 1d conformal invariance. The parameter $\zeta$ may be viewed as a ``weakly relevant'' (nearly marginal up to ${\cal O}(\lambda)$ term, cf. \rf{111}) coupling constant running from $\zeta=0$ in the UV (the ordinary Wilson loop) to $\zeta=1$ in the IR (the supersymmetric Wilson loop). Note that our result (\ref{1}) implies that \begin{equation} \label{190} \log \langle} \def \ran {\rangle W^{(0)} \ran\ >\ \log \langle} \def \ran {\rangle W^{(1)} \ran \ . 
\end{equation} Hence, viewing $\langle W^{(\z)} \rangle = Z_{S^1} $ as a partition function of a 1d QFT on the circle, this is precisely consistent with the $F$-theorem \cite{Myers:2010xs, Klebanov:2011gs, Casini:2012ei, Giombi:2014xxa,Fei:2015oha}, which in $d=1$ (where it is analogous to the $g$-theorem \cite{Affleck:1991tk,Friedan:2003yc} applying to the boundary of a 2d theory) implies \begin{equation} \label{110} \widetilde F_{_{\rm UV}} > \widetilde F_{_{\rm IR}}\,, \qquad\qquad \widetilde F \equiv {\te \sin { \pi d \over 2} }\, \log Z_{S^d} \Big|_{d=1} = \log Z_{S^1} =-F\,. \end{equation} Moreover, we see that $ \langle W^{(\zeta)}\rangle$ decreases monotonically as a function of $\zeta$ from the non-supersymmetric to the supersymmetric fixed point. The second derivative of $\langle W^{(\z)} \rangle$, which from \rf{2} is proportional to the derivative of the beta-function \rf{111}, \begin{equation} \label{4} {\partial^2 \over \partial \zeta^2 }\log \langle W^{(\z)} \rangle \Big|_{\zeta=0, 1} = {\cal C}\, {\partial \beta_{\zeta} \over \partial \zeta} \Big|_{\zeta=0, 1} \ , \end{equation} should, on the other hand, be given by the integrated 2-point function of $\P_6$ restricted to the line and should thus be determined by the corresponding anomalous dimensions. Indeed, $ {\partial \beta_{\zeta} \over \partial \zeta} \big|_{\zeta=0, 1} $ reproduces \cite{Polchinski:2011im} the anomalous dimensions \cite{Alday:2007he} of $\P_6$ at the $\zeta=1$ and $\zeta=0$ conformal points \begin{equation} \begin{aligned} \label{3} &\Delta(\zeta) -1= {\partial \beta_\zeta \over \partial \zeta} = {\lambda\over 8\pi^2} ( 3 \zeta^2 -1) + {\cal O}(\lambda^2)\ , \\ &\Delta(1) = 1 + {\lambda \over 4 \pi^2} + \ldots \ , \qquad \ \ \ \ \ \ \ \Delta(0) = 1 - {\lambda \over 8 \pi^2} + \ldots \ .
\end{aligned} \end{equation} Again, this is a special case of a general relation, found in conformal perturbation theory, between the second derivative of the free energy on $S^d$ at a conformal point and anomalous dimensions. We shall explicitly verify this relation between ${\partial^2 \over \partial \zeta^2 } \langle W^{(\z)} \rangle \big|_{\zeta=0, 1}$ and the integrated 2-point function of $\P_6$ inserted into the circular Wilson loop in section \ref{sec4} below. The interpretation of $\langle W^{(\z)} \rangle$ as a partition function of an effective 1d QFT is strongly supported by its strong-coupling representation as the AdS$_5 \times S^5\ $ string theory partition function on a disc with mixed boundary conditions \cite{Polchinski:2011im} for the $S^5$ coordinates (in particular, Dirichlet for $\zeta=1$ and Neumann for $\zeta=0$ \cite{Alday:2007he}). As we will find in section \ref{secstr}, in contrast to the large $\lambda$ asymptotics of the WML $ \langle {\rm W} \rangle \sim (\sqrt{\lambda})^{-3/2} e^{\sqrt{\lambda}} + ... $ in \rf{1.1}, in the standard WL case one gets \begin{equation}\label{03} \langle W^{(0)} \rangle \sim \sqrt{\lambda} \, e^{\sqrt{\lambda}} + ... \ , \end{equation} so that the $F$-theorem inequality \rf{190},\rf{110} is satisfied also at strong coupling. At strong coupling, the counterpart of the $\P_6$ perturbation near the $\zeta=0$ conformal point is an extra boundary term (which to leading order is quadratic in the $S^5$ coordinates) added to the string action with Neumann boundary conditions to induce the boundary RG flow to the other conformal point. \footnote{ In particular, the boundary term is independent of the fermionic fields. When restricted to the AdS$_{2}$ minimal surface dual to the Wilson loop, the fermions will be assumed to have the usual unitary $\Delta=3/2$ boundary behaviour along the whole RG flow.
For the $S^{5}$ scalars, instead, the boundary deformation induces unitary mixed boundary conditions, and only in the Dirichlet case is supersymmetry unbroken \cite{Polchinski:2011im}. } The counterpart of $\zeta$ in \rf{0} is a (relevant) coupling $\kk= {{\rm f}} (\zeta; \lambda)$ (which is 0 for $\zeta=0$ and $\infty$ for $\zeta=1$) with the beta function (see section \ref{sec4.2}) $\beta_\kk= ( - 1 + {5\over \sqrt{\lambda}}) \kk + ...$. This implies that the strong-coupling dimensions of $\P_6$ near the two conformal points should be (in agreement with \cite{Alday:2007he,Giombi:2017cqn}) \begin{equation} \Delta-1 = \pm \big( - 1 + {5\over \sqrt{\lambda}} + ... \big) \ , \ \ \ {\rm i.e.} \ \ \ \ \Delta(0) = {5\over \sqrt{\lambda}} + ... \ , \ \ \ \ \Delta(1) = 2 - {5\over \sqrt{\lambda}} + ... \ . \label{04} \end{equation} This paper is organized as follows. In section \ref{sec2} we shall compute the two leading terms in the planar weak-coupling expansion of the circular WL. The structure of the computation will be similar to the one in the WML case in \cite{Erickson:2000af} (see also \cite{Young:2003pra}), but now the integrands (and thus the evaluation of the resulting path-ordered integrals) will be substantially more complicated. We shall then generalize to any value of $\zeta$ in \rf{0}, obtaining the expression in \rf{1}. In section \ref{sec4} we shall elaborate on the relation between the expansion of the generalized WL \rf{1} near the conformal points and the correlators of scalar operators inserted on the loop. In section \ref{secstr} we shall consider the strong-coupling (string theory) computation of the circular WL to 1-loop order in AdS$_5 \times S^5\ $ superstring theory, generalizing the previous discussions in the WML case. We shall also discuss the general $\zeta$ case in section \ref{sec4.2}. Some concluding remarks will be made in section \ref{secf}. In Appendix \ref{A} we shall comment on cutoff regularization.
In Appendix \ref{B} we shall explain different methods of computing the path-ordered integrals on a circle appearing in the 2-loop ladder diagram contribution to the generalized WL. \section{Weak coupling expansion}\label{sec2} Let us now consider the weak-coupling ($\lambda=g^2 N \ll 1$) expansion in planar ${\cal N}=4$ SYM theory and compute the first two leading terms in the expectation value of the generalized circular Wilson loop \rf{0} \begin{equation} \label{3.3} \langle W^{(\z)}\rangle = 1+\ \lambda\, W^{(\z)}_{1}+\ \lambda^{2}\, W^{(\z)}_{2}+\cdots . \end{equation} We shall first discuss explicitly the standard Wilson loop $W^{(0)}$ in \rf{1111}, comparing it to the Wilson-Maldacena loop ${\rm W}$ case in \cite{Erickson:2000af}, and then generalize to an arbitrary value of the parameter $\zeta$. \subsection{One-loop order} The perturbative computation of the WML was discussed in \cite{Erickson:2000af} (see also \cite{Young:2003pra}), which we shall follow and generalize. The order $\lambda$ contribution is\footnote{There is a misprint in the overall coefficient in \cite{Erickson:2000af}, corrected in \cite{Young:2003pra}. } \begin{equation}\label{3.4} {\rm W}_{1}(C) = \frac{1}{(4\,\pi)^{2}}\oint_{C} d\tau_{1}d\tau_{2}\ \frac{|\dot x(\tau_{1})|\,|\dot x(\tau_{2})|-\dot x(\tau_{1})\cdot \dot x(\tau_{2})}{ |x(\tau_{1})-x(\tau_{2})|^{2}} \ . \end{equation} Here the term $\dot x(\tau_{1})\cdot \dot x(\tau_{2})$ comes from the vector exchange (see Fig. \ref{fig:one-loop}) and the term $|\dot x(\tau_{1})|\,|\dot x(\tau_{2})|$ from the scalar exchange. This integral is finite for a smooth loop. In particular, for the straight line $x^\mu(\tau) = (\tau,0,0,0)$, the numerator in ${\rm W}_{1}$ is zero and thus \begin{equation} \label{3.5} {\rm W}_{1}(\text{line})=0.
\end{equation} For the circular loop, $x^\mu(\tau) = (\cos\tau, \sin\tau, 0, 0)$, the integrand in \rf{3.4} is constant \begin{equation}\label{399} \frac{|\dot x(\tau_1)|\,|\dot x(\tau_2)|-\dot x(\tau_1)\cdot \dot x(\tau_2)}{ |x(\tau_1)-x(\tau_2)|^{2}} = \frac{1}{2} \end{equation} and thus, in agreement with \rf{1.1},\rf{3.3}, \begin{equation} \label{3.6} {\rm W}_{1}(\text{circle}) = \frac{1}{(4\pi)^{2}}\,(2\pi)^{2}\,\frac{1}{2} = \frac{1}{8} \ . \end{equation} % \begin{figure}[t] \centering \includegraphics[scale=0.25]{./fig-oneloop.pdf} \caption[] { Gauge field exchange diagram contributing to the standard Wilson loop at the leading order. In the Wilson-Maldacena loop case there is an additional scalar exchange contribution. \label{fig:one-loop}} \end{figure} The analog of \rf{3.4} in the case of the standard WL is found by omitting the scalar exchange term $|\dot x(\tau_{1})|\,|\dot x(\tau_{2})|$ in the integrand. The resulting integral has a linear divergence (see Appendix \ref{A}) that can be factorized, or automatically ignored using dimensional regularization for the vector propagator with parameter $\omega=2-\varepsilon \to 2$. If we replace the dimension $4$ by $d=2 \omega\equiv 4 -2 \varepsilon$, the standard Euclidean 4d propagator becomes \begin{equation}\label{3.7} \Delta(x) = (-\partial^2)^{-1} = \frac{\Gamma(\omega-1)}{4\pi^{\omega}} { 1 \over |x|^{2 \omega -2}} \ . \end{equation} Then \begin{equation}\label{3.8} W^{(0)}_{1} = \frac{1}{(4\,\pi)^{2}}\oint d\tau_{1}d\tau_{2}\, \frac{-\dot x(\tau_{1})\cdot \dot x(\tau_{2})}{ |x(\tau_{1})-x(\tau_{2})|^{2}}\ \to \ \frac{\Gamma(\omega-1)}{16\pi^{\omega}}\oint d\tau_{1}d\tau_{2}\, \frac{-\dot x(\tau_{1})\cdot \dot x(\tau_{2})}{ |x(\tau_{1})-x(\tau_{2})|^{2\omega-2}}\,.
\end{equation} In the infinite line case we get ($L\to \infty$) \footnote{We use that $\int_{0}^{L}d\tau_{1}\int_{0}^{L}d\tau_{2}\,f(|\tau_{1}-\tau_{2}|) =2\,\int_{0}^{L}d\tau\,\,(L-\tau)\,f(\tau)$.} \begin{align} &\int_{0}^{L} d\tau_{1}\int_{0}^{L}d\tau_{2}\,\frac{1} {|\tau_{1}-\tau_{2}|^{2\omega-2}} = 2\, \int_{0}^{L} d\tau\,\frac{L-\tau}{\tau^{2\,(\omega-1)}} = \frac{L^{4-2\omega}} {2-\omega}\frac{1}{3-2\omega} \to 0 \ . \label{3.9} \end{align} The formal integral here is linearly divergent. If we use dimensional regularization to regulate both the UV and IR divergences (analytically continuing from the $\omega > 2 $ region), we get as in \rf{3.5} \begin{equation}\label{3.10} W^{(0)}_{1}(\text{line})=0. \end{equation} In the case of a circle, we may use \rf{3.4},\rf{3.6} to write ($\omega \equiv 2-\varepsilon\to 2$) \begin{align} \label{3.11} W^{(0)}_{1}(\text{circle}) &= {\rm W}_{1}(\text{circle}) -\frac{ \Gamma(\omega-1) }{16\pi^{\omega}}\oint d\tau_{1}d\tau_{2}\, \frac{1}{ |x(\tau_{1})-x(\tau_{2})|^{2\omega-2}}\notag \\ &= \frac{1}{8} -\frac{\Gamma(\omega-1) }{2^{2\omega+2}\pi^{\omega}}\oint d\tau_{1}d\tau_{2}\,\Big[ \sin^{2} \tfrac{\tau_{12}}{2}\Big]^{1-\omega} \ . \end{align} The integral here may be computed, e.g., by using the master integral in eq. (G.1) of \cite{Bianchi:2016vvm}\footnote{Alternative direct methods of computing similar integrals are discussed in Appendix \ref{B}.
We also note that such 2-point and 3-point integrals can be viewed as a special $d=1$ case of the conformal integrals on $S^d$ used in \cite{Cardy:1988cwa,Klebanov:2011gs}.} \begin{align} \label{3.12} \mathcal M(a,b,c) &\equiv \oint d{\tau_1}d{\tau_2}d{\tau_3} \Big[\sin^{2}\tfrac{\tau_{12}}{2}\Big]^{a} \Big[\sin^{2}\tfrac{\tau_{23}}{2}\Big]^{b} \Big[\sin^{2}\tfrac{\tau_{13}}{2}\Big]^{c}\notag \\ &= 8\,\pi^{3/2}\, \frac{ \Gamma(\frac{1}{2}+a)\Gamma(\frac{1}{2}+b)\Gamma(\frac{1}{2}+c)\Gamma(1+a+b+c)}{ \Gamma(1+a+c)\Gamma(1+b+c)\Gamma(1+a+b)} \ , \end{align} i.e. \begin{equation}\label{3331} \oint d\tau_{1}d\tau_{2}\,\Big[ \sin^{2}\tfrac{\tau_{12}}{2}\Big]^{1-\omega} = \frac{1}{2\pi}\,\mathcal M(1-\omega,0,0) = \frac{4\,\pi^{3/2}\,\Gamma(-\frac{1}{2}+\varepsilon)}{\Gamma(\varepsilon)} = - 8 \pi^2 \varepsilon + \mathcal O(\varepsilon^2). \end{equation} Plugging this into (\ref{3.11}), we get the same result as in (\ref{3.6}): \begin{equation}\label{3466} W^{(0)}_{1}(\text{circle}) = \frac{1}{8} \ . \end{equation} Thus the leading-order expectation values for the WML and WL are the same for both the straight line and the circle. \subsection{Two-loop order } At order $\lambda^{2}$ there are three types of planar contributions to the Wilson loop, shown in Fig. \ref{fig:two-loops}, that we shall denote as \begin{equation} \label{313} W^{(\z)}_2 = W^{(\z)}_{2,1} + W^{(\z)}_{2,2} + W^{(\z)}_{2,3}\ . \end{equation} \begin{figure}[t] \centering \includegraphics[scale=0.3]{./fig-twoloops.pdf} \caption[] { Order $\lambda^2$ contributions to the standard Wilson loop. The middle diagram contains the full self-energy 1-loop correction in SYM theory (with vector, ghost, scalar and fermion fields in the loop). For the Wilson-Maldacena loop there are additional diagrams with scalar propagators instead of some of the vector ones.
\label{fig:two-loops}} \end{figure} In the WML case it was found in \cite{Erickson:2000af} that the ladder diagram contribution ${\rm W}_{2,1}$ is finite. While the self-energy part ${\rm W}_{2,2}$ and the internal-vertex part ${\rm W}_{2,3}$ are separately logarithmically divergent (all power divergences cancel out in the WML case), their sum is finite; moreover, the finite part also vanishes in 4 dimensions (in Feynman gauge) \begin{equation}\label{2212} {\rm W}_{2,2} +{\rm W}_{2,3}=0\ . \end{equation} In the WL case, using dimensional regularization to discard power divergences, we find that the ladder diagram $W^{(0)}_{2,1}$ in Fig. \ref{fig:two-loops} has a logarithmic singularity (i.e. a pole in $\varepsilon=2-\omega$). The same is true for both the self-energy diagram $W^{(0)}_{2,2}$ and the internal-vertex diagram $W^{(0)}_{2,3}$. However, their sum in \rf{313} turns out to be finite (in agreement with the general expectation for a conformal WL operator in a theory where the gauge coupling is not running).\footnote{If one uses a power UV cutoff $a\to 0$, the remaining power divergences universally factorize as an exponential factor $\exp(-k\,{L\over a})$, where $L$ is the loop length. This can be interpreted as a mass renormalization of a test particle moving along the loop. } Let us now discuss each of these contributions in turn. \begin{figure}[t] \centering \includegraphics[scale=0.35]{./fig-rainbow.pdf} \caption[] { Two of the planar diagrams of ladder type, $W_{2,1}=W^{(a)}_{2,1} +W^{(b)}_{2,1} $, with path-ordered points $\tau_{1}, \dots, \tau_{4}$ in the WL ($\zeta=0$) case. For general $\zeta$ one also needs to add similar diagrams with scalar propagators. \label{fig:rainbow-ladder}} \end{figure} \subsubsection{Ladder contribution } The planar ladder diagram $W_{2,1}$ in Fig. \ref{fig:two-loops} arises from the quartic term in the expansion of the Wilson loop operator \rf{0}.
It is convenient to split the integration region into $4!$ ordered domains, {\em i.e.} $\tau_{1}>\tau_{2}>\tau_{3}>\tau_{4}$ and similar ones. Before the Wick contractions, all these are equivalent and cancel the $4!$ factor from the expansion of the exponential. There are two different planar Wick contractions shown in Fig. \ref{fig:rainbow-ladder}. In the WML case the expression for the first one is \cite{Erickson:2000af}\footnote{Here $x^{(i)}=x(\tau_i)$ and $d^{4}\bm{\tau} \equiv d\tau_{1}d\tau_{2}d\tau_{3}d\tau_4$.} \begin{equation} \label{3.17} {\rm W}_{2,1a} = \frac{\big[\Gamma(\omega-1)\big]^{2}}{64\,\pi^{2\omega}}\,\oint_{\tau_{1}>\tau_{2}>\tau_{3}>\tau_{4}} d^{4}\bm\tau\,\frac{(|\dot x^{(1)}|\,|\dot x^{(2)}|-\dot x^{(1)}\cdot \dot x^{(2)}) (|\dot x^{(3)}|\,|\dot x^{(4)}|-\dot x^{(3)}\cdot \dot x^{(4)})} {(|x^{(1)}-x^{(2)}|^{2}\,|x^{(3)}-x^{(4)}|^{2})^{\omega-1}}. \end{equation} The second diagram has a similar expression with $(1,2,3,4)\to (1,4,2,3)$. In the WML case these two contributions are equal and finite. Setting $\omega=2$ we find that the integrand in (\ref{3.17}) in the circle case is constant, as in \rf{399}. As a result, \begin{equation}\label{3.19} {\rm W}_{2,1} = {\rm W}_{2,1a} +{\rm W}_{2,1b} = 2\times \frac{1}{64\,\pi^{4}}\,\frac{(2\pi)^{4}}{4!}\,\Big(\frac{1}{2}\Big)^{2} = \frac{1}{192} \ . \end{equation} This already reproduces the coefficient of the $\lambda^2$ term in \rf{1.1} (consistently with the vanishing \rf{2212} of the rest of the contributions \cite{Erickson:2000af}). The corresponding expression in the WL case is found by dropping the scalar field exchanges, {\em i.e.}\, the $|\dot x |$ terms in the numerator of \rf{3.17}.
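The master integral \rf{3.12}, which underlies the evaluation of the integrals below, can be sanity-checked numerically. The following sketch (ours; the quadrature step and tolerances are illustrative assumptions) verifies the two-point case $b=c=0$ by direct midpoint-rule integration, as well as the small-$\varepsilon$ limit in \rf{3331}:

```python
import math

def M_formula(a, b, c):
    # right-hand side of the master integral (3.12)
    g = math.gamma
    return (8*math.pi**1.5 * g(0.5 + a)*g(0.5 + b)*g(0.5 + c)*g(1 + a + b + c)
            / (g(1 + a + c)*g(1 + b + c)*g(1 + a + b)))

def M_direct(a, n=200_000):
    # for b=c=0 the tau_3 integral gives 2*pi and the remaining double
    # integral reduces to M(a,0,0) = 4*pi^2 * int_0^{2pi} (sin^2(u/2))^a du
    h = 2*math.pi/n
    return 4*math.pi**2 * h*sum(math.sin((i + 0.5)*h/2)**(2*a) for i in range(n))

assert abs(M_direct(0.3)/M_formula(0.3, 0, 0) - 1) < 1e-3

# the limit (3331): (1/(2 pi)) M(1-omega,0,0) -> -8 pi^2 eps for omega = 2-eps
eps = 1e-6
lhs = M_formula(1 - (2 - eps), 0, 0)/(2*math.pi)
assert abs(lhs/(-8*math.pi**2*eps) - 1) < 1e-2
```

For $b=c=0$ the formula collapses to $8\pi^{5/2}\,\Gamma(\frac{1}{2}+a)/\Gamma(1+a)$, which is what the quadrature reproduces.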
Then for the circle we get \begin{align} \label{3.20} W^{(0)}_{2,1a} &= \frac{[\Gamma(\omega-1)]^{2}}{64\,\pi^{2\omega}}\, \int_{\tau_{1}>\tau_{2}>\tau_{3}>\tau_{4}}d^{4}\bm\tau\, \frac{\cos \tau_{12}\,\cos\tau_{34}}{(4\,\sin^{2}\frac{\tau_{12}}{2}\,4\,\sin^{2}\frac{\tau_{34}}{2}) ^{\omega-1}}\ , \notag \\ W^{(0)}_{2,1b} &= \frac{[\Gamma(\omega-1)]^{2}}{64\,\pi^{2\omega}}\, \int_{\tau_{1}>\tau_{2}>\tau_{3}>\tau_{4}}d^{4}\bm\tau\, \frac{\cos \tau_{14}\,\cos\tau_{23}}{(4\,\sin^{2}\frac{\tau_{14}}{2}\,4\,\sin^{2}\frac{\tau_{23}}{2}) ^{\omega-1}}\ . \end{align} The computation of these integrals is discussed in Appendix~\ref{B}. Setting $\omega=2-\varepsilon$ we get \begin{align} \label{3.21} W^{(0)}_{2,1a} &=\te \frac{[\Gamma(1-\varepsilon)]^{2}}{64\,\pi^{2\,(2-\varepsilon)}}\, \Big[\frac{\pi^{2}}{\varepsilon}+3\,\pi^{2}+\frac{\pi^{4}}{6}+\mathcal O(\varepsilon)\Big] = \frac{1}{64\,\pi^{2}\,\varepsilon}+\frac{1}{384}+\frac{3}{64\,\pi^{2}}+ \frac{\gamma_{\rm E}+\log\pi}{32\,\pi^{2}} +\mathcal O(\varepsilon),\notag \\ W^{(0)}_{2,1b} &=\te \frac{[\Gamma(1-\varepsilon)]^{2}}{64\,\pi^{2\,(2-\varepsilon)}}\, \Big[\frac{\pi^{2}}{2}+\frac{\pi^{4}}{6}+\mathcal O(\varepsilon)\Big] = \frac{1}{384}+\frac{1}{128\,\pi^{2}}+\mathcal O(\varepsilon). \end{align} The total ladder contribution in the WL case is thus \begin{equation} \label{3.22} W^{(0)}_{2,1} =W^{(0)}_{2,1a}+ W^{(0)}_{2,1b} = \frac{1}{64\,\pi^{2}\,\varepsilon}+\frac{1}{192}+\frac{7}{128\,\pi^{2}} +\frac{\gamma_{\rm E}+\log\pi}{32\,\pi^{2}}+\mathcal O(\varepsilon). \end{equation} \subsubsection{Self-energy contribution} \label{sec:self-wilson} It is convenient to represent the contribution $W_{2,2}$ of the self-energy diagram in Fig.
\ref{fig:two-loops} as \begin{equation} \label{3.23} W^{(\z)}_{2,2} = - \, \frac{[\Gamma(\omega-1)]^2}{8\,\pi^{\omega}(2-\omega)(2\omega-3)}\, \wW^{(\z)}_{1} \ , \end{equation} where, in the WML case, one has \cite{Erickson:2000af} \begin{equation} \label{3.24} {\wW^{(1)}_{1}} = \frac{1}{16\pi^{\omega}}\oint d\tau_{1}d\tau_{2}\, \frac{|\dot x(\tau_{1})|\,|\dot x(\tau_{2})|-\dot x(\tau_{1})\cdot \dot x(\tau_{2})}{ \big[|x(\tau_{1})-x(\tau_{2})|^{2}\big]^{2\omega-3}} \ . \end{equation} Again, the expression in the WL case is obtained by simply dropping the scalar exchange term $|\dot x(\tau_{1})|\,|\dot x(\tau_{2})|$ in the numerator of (\ref{3.24}): \begin{equation} \label{3.25} {\wW^{(0)}_1} = \frac{1}{16\pi^{\omega}}\oint d\tau_{1}d\tau_{2}\, \frac{-\dot x(\tau_{1})\cdot \dot x(\tau_{2})}{ \big[|x(\tau_{1})-x(\tau_{2})|^{2}\big]^{2\omega-3}} \ . \end{equation} Although (\ref{3.25}) is very similar to $W^{(0)}_{1}$ in \rf{3.8}, for $\omega\not=2$ there is a difference in the power in the denominator. \iffalse We may write \begin{align} \label{3.21} W^{(0)}_{2,2} &= {\rm W}_{2,2} -\frac{[\Gamma(\omega-1)]^{2}}{8\,\pi^{\omega}(2-\omega)(2\omega-3)}\,(\widetilde W^{(0)}_{1}-\widetilde {\rm W}_{1}). \end{align} To compute this quantity we need the expansion of $\widetilde W^{(0)}_{1}$ and $\widetilde {\rm W}_{1}$ to order $\varepsilon$ included.
\fi Specializing to the circle case we find (using the integral \rf{3.12}) \begin{align} \wW^{(1)}_{1} &= 2^{3-4\omega}\pi^{-\omega}\oint d\tau_{1}d\tau_{2}\, \Big[\sin^{2}\tfrac{\tau_{12}}{2}\Big]^{4-2\omega} = \frac{2^{3-4\omega}\pi^{-\omega}}{2\pi}\,\mathcal M(4-2\omega,0,0) \notag \\ &= \frac{1}{8}+\frac{1}{8}\log\pi\, \varepsilon+{\cal O}(\varepsilon^2) \ , \label{3.26}\\ \wW^{(0)}_{1} &= -4^{1-2\omega}\pi^{-\omega}\oint d\tau_{1}d\tau_{2}\, \Big[\sin^{2}\tfrac{\tau_{12}}{2}\Big]^{3-2\omega} +2^{3-4\omega}\pi^{-\omega} \oint d\tau_{1}d\tau_{2}\, \Big[\sin^{2}\tfrac{\tau_{12}}{2}\Big]^{4-2\omega}\notag \\ &= \frac{1}{8}+\frac{1}{8}(2+\log\pi)\,\varepsilon+ {\cal O}(\varepsilon^2). \label{3.27} \end{align} Then from (\ref{3.23}) we get \begin{align} \label{3.28} {\rm W}_{2,2} &= -\frac{1}{64\,\pi^{2}\,\varepsilon}-\frac{1}{32\,\pi^{2}}-\frac{\gamma_{\rm E}} {32\,\pi^{2}}-\frac{\log\pi}{32\,\pi^{2}}+{\cal O}(\varepsilon), \\ W^{(0)}_{2,2} &= -\frac{1}{64\,\pi^{2}\,\varepsilon}-\frac{1}{16\,\pi^{2}}-\frac{\gamma_{\rm E}} {32\,\pi^{2}}-\frac{\log\pi}{32\,\pi^{2}}+{\cal O}(\varepsilon) \ . \label{3.29} \end{align} Note that the difference between the WL and WML self-energy contributions is finite \begin{align} \label{3.30} W^{(0)}_{2,2} = {\rm W}_{2,2}-\frac{1}{32\,\pi^{2}} \ . \end{align} \subsubsection{Internal-vertex contribution} In the WML case, the internal-vertex diagram contribution in Fig. 
2 has the following expression \cite{Erickson:2000af} \begin{align} {\rm W}_{2,3} &= -\frac{1}{4}\,\oint d^{3}\bm{\tau} \, \varepsilon(\tau_{1}, \tau_{2},\tau_{3})\ \Big[|\dot x^{(1)}|\,|\dot x^{(3)}|-\dot x^{(1)}\cdot \dot x^{(3)}\Big]\notag \\ &\ \qquad \qquad \times \dot x^{(2)}\cdot \frac{\partial}{\partial x^{(1)}}\int d^{2\omega}y\, \Delta(x^{(1)}-y)\,\Delta(x^{(2)}-y)\,\Delta(x^{(3)}-y),\label{3.31} \end{align} where $\Delta(x)$ is the propagator \rf{3.7}, $d^{3}\bm{\tau} \equiv d\tau_{1}d\tau_{2}d\tau_{3}$ and $\varepsilon(\tau_{1},\tau_{2},\tau_{3})$ is the totally antisymmetric path ordering symbol equal to $1$ if $\tau_{1}>\tau_{2}>\tau_{3}$. Using the Feynman parameter representation for the propagators and specializing to the circle case \rf{3.31} becomes \begin{align} \label{3.32} &{\rm W}_{2,3} = \frac{\Gamma(2\omega-2)}{2^{2\omega+5}\,\pi^{2\omega}}\, \int_{0}^{1} [d^3\bm{\alpha}] \ \oint d^{3}\bm{\tau} \,\epsilon(\tau_{1},\tau_{2},\tau_{3})\ \notag\\ & \qquad \qquad \qquad \qquad \qquad \qquad \times \big(1-\cos\tau_{13}\big)\ \frac{\alpha\,(1-\alpha)\,\sin\tau_{12} +\alpha\,\gamma\,\sin\tau_{23}}{Q^{2\,\omega-2}}, \\ &\qquad \qquad [d^3\bm{\alpha}] \equiv d\alpha\,d\beta\,d\gamma\,(\alpha\beta\gamma)^{\omega-2}\, \delta(1-\alpha-\beta-\gamma) \ , \\ &\qquad \qquad Q \equiv \alpha\,\beta\,(1-\cos\tau_{12})+\beta\,\gamma\,(1-\cos\tau_{23}) +\gamma\,\alpha\,(1-\cos\tau_{13}) \ . \label{3.33} \end{align} The corresponding WL expression is found by omitting the scalar coupling term $|\dot x^{(1)}|\,|\dot x^{(3)}|$, {\em i.e.}\space by replacing the factor $(1-\cos\tau_{13}) $ by $( -\cos\tau_{13}) $. 
We can then represent the WL contribution as \begin{align} \label{3.34} W^{(0)}_{2,3} &= {\rm W}_{2,3} -\frac{\Gamma(2\omega-2)}{2^{2\omega+5}\pi^{2\omega}}\, J(\omega), \\ \label{3731} J(\omega) &\equiv \int_{0}^{1} [d^3\bm{\alpha}] \oint d^{3}\bm{\tau}\,\epsilon(\tau_{1},\tau_{2},\tau_{3})\ \frac{\alpha\,(1-\alpha)\,\sin\tau_{12} +\alpha\,\gamma\,\sin\tau_{23}}{Q^{2\,\omega-2}} \ . \end{align} In the WML case one finds that \rf{3.32} is related to ${\rm W}_{2,2}$ \cite{Erickson:2000af} \begin{equation} \label{3.36} {\rm W}_{2,3} =- {\rm W}_{2,2} + \mathcal O(\varepsilon) \ , \end{equation} where ${\rm W}_{2,2}$ was given in \rf{3.28}. Thus to compute $W^{(0)}_{2,3}$ it remains to determine $J(\omega)$. Let us first use that \begin{align} & \oint d^{3}\bm{\tau}\,\varepsilon(\tau_{1},\tau_{2},\tau_{3}) \ F(\tau_{1},\tau_{2},\tau_{3}) =\oint_{\tau_{1}>\tau_{2}>\tau_{3}}d^{3}\bm{\tau}\,\Big[ F(\tau_{1}, \tau_{2}, \tau_{3}) -F(\tau_{1}, \tau_{3}, \tau_{2}) \notag \\ &\qquad +F(\tau_{2}, \tau_{3}, \tau_{1}) -F(\tau_{2}, \tau_{1}, \tau_{3}) +F(\tau_{3}, \tau_{1}, \tau_{2}) -F(\tau_{3}, \tau_{2}, \tau_{1})\Big], \label{3355} \end{align} and relabel the Feynman parameters in each term. Then $J(\omega)$ takes a more symmetric form \begin{align} \label{3.38} J(\omega) &= 8\,\int_{0}^{1} [d^3\bm{\alpha}] \oint_{\tau_{1}>\tau_{2}>\tau_{3}} d^{3}\bm{\tau}\,\frac{(\alpha\,\beta+\beta\,\gamma +\gamma\,\alpha)\,\sin \frac{\tau_{12}}{2}\,\sin \frac{\tau_{13}}{2}\,\sin \frac{\tau_{23}}{2}} {Q^{2\,\omega-2}} \ . 
\end{align} Using the double Mellin-Barnes representation (see, for instance, \cite{Jantzen:2012cb}) \begin{equation}\label{3377} \frac{1}{(A+B+C)^{\sigma}} = \frac{1}{(2\,\pi\,i)^{2}}\frac{1}{\Gamma(\sigma)}\, \int_{-i\,\infty}^{+i\,\infty}du\,dv\,\frac{B^{u}\,C^{v}}{A^{\sigma+u+v}}\, \Gamma(\sigma+u+v)\,\Gamma(-u)\,\Gamma(-v), \end{equation} we can further rewrite (\ref{3.38}) as \begin{align} &J(\omega) = \frac{8}{(2\pi i)^{2}\,2^{2\omega-2} \Gamma(2\omega-2)} \oint_{\tau_{1}>\tau_{2}>\tau_{3}} d^{3}\bm{\tau} \int du dv \int_{0}^{1}d\alpha\, d\beta\, d\gamma\, (\alpha\beta\gamma)^{\omega-2}(\alpha\beta +\beta\gamma+\gamma\alpha)\label{3.40} \\ &\times \Gamma(2\omega-2+u+v)\Gamma(-u)\Gamma(-v)\, \frac{(\beta\gamma\sin^{2}\frac{\tau_{23}}{2})^{u} \,(\alpha\beta\,\sin^{2}\frac{\tau_{12}}{2})^{v}} {(\gamma\alpha\,\sin^{2}\frac{\tau_{13}}{2})^{2\omega-2+u+v}} \sin \tfrac{\tau_{12}}{2}\,\sin \tfrac{\tau_{13}}{2}\,\sin \tfrac{\tau_{23}}{2}\ .\nonumber \end{align} Integrating over $\alpha$,$\beta$,$\gamma$ using the relation \begin{equation}\label{3.41} \int_{0}^{1}\prod_{i=1}^{N} d\alpha_{i}\,\alpha_{i}^{\nu_{i}-1}\,\delta(1-\sum_{i} \alpha_{i}) = \frac{\Gamma(\nu_{1})\cdots\Gamma(\nu_{N})}{\Gamma(\nu_{1}+\cdots+ \nu_{N})} \ , \end{equation} gives the following representation for $J$ \begin{align} J(\omega) =& -\frac{1}{\pi^{2}\,2^{2\omega-3}}\frac{1}{\Gamma(2\omega-2)\Gamma(3-\omega)} \int_{-i\infty}^{+i\infty} du \int_{-i\infty}^{+i\infty}dv\, \ X(u,v)\ T(u,v)\ , \label{3.42} \\ &\ X(u, v) \equiv \Big(\frac{1}{u+v+\omega-1}-\frac{1}{u+\omega-1}-\frac{1}{v+\omega-1}\Big) \label{3.43} \\ & \qquad \times \Gamma(2\omega-2+u+v)\Gamma(-u)\Gamma(-v)\, \Gamma(2-u-\omega)\,\Gamma(2-v-\omega)\, \Gamma(u+v+\omega)\ , \ \ \notag \\ &T(u,v) \equiv\oint_{\tau_{1}>\tau_{2}>\tau_{3}} d^{3}\bm{\tau}\, \frac{(\sin^{2}\frac{\tau_{23}}{2})^{u+1/2}\,(\sin^{2}\frac{\tau_{12}}{2})^{v+1/2}} {(\sin^{2}\frac{\tau_{13}}{2})^{2\omega-2+u+v-1/2}}\ .\label{344} \end{align} A remarkable feature of
(\ref{3.42}), familiar in computations of similar integrals, is that the integrand is symmetric in the three $\tau_i$ variables, as one can show using a suitable linear change of the Mellin-Barnes integration parameters $u,v$.\footnote{ For instance, the exchange of $\tau_{1}$ and $\tau_{3}$ is compensated by redefining $(u,v)\to (u',v')$ with $ u+\frac{1}{2} = -(2\omega-2+u'+v'-1/2),\ \ -(2\omega-2+u+v-1/2) = u'+1/2, $ that is $ u=2-u'-v'-2\omega, \ v=v'. $ This change of variables leaves invariant the other part $T(u,v)$ of the integrand: it takes the same form when written in terms of $u',v'$.} As a result, we may effectively replace $T(u,v)$ by ${1\over 3!}$ times the integral along the full circle: \begin{align} T(u,v) \to {1 \over 3!} \oint_{0}^{2\pi}d^{3}\bm{\tau}\, \frac{(\sin^{2}\frac{\tau_{23}}{2})^{u+1/2}\,(\sin^{2}\frac{\tau_{12}}{2})^{v+1/2}} {(\sin^{2}\frac{\tau_{13}}{2})^{2\omega-2+u+v-1/2}}. \label{355} \end{align} Using again the master integral (\ref{3.12}), we find the following expression for $J(\omega)$ as a double integral \begin{align} J(\omega) &= -\frac{8\,\pi^{3/2}}{3!\,\pi^{2}\,2^{2\omega-3}}\frac{1}{\Gamma(2\omega-2) \Gamma(3-\omega)} \int_{-i\infty}^{+i\infty} du \int_{-i\infty}^{+i\infty}dv\, X(u,v) \notag \\ &\qquad \qquad \times \frac{\Gamma (u+1) \,\Gamma (v+1)\, \Gamma \Big(\frac{9}{2}-2 \omega \Big)\, \Gamma (-u-v-2 \omega +3)}{\Gamma (u+v+2) \,\Gamma (-u-2 \omega +4) \,\Gamma (-v-2 \omega +4)}. \label{366} \end{align} Writing all the factors in $X(u,v)$ in \rf{3.43} in terms of $\Gamma$-functions, we end up with \iffalse \footnote{The coefficient here comes from $-\frac{8\,\pi^{3/2}} {3!\,\pi^{2}\,2^{2\omega-3}}(2\,\pi\,i)^{2} = \frac{\pi^{3/2}}{3\times 2^{2\omega-7}}.
$ }\fi \begin{align} &J(\omega) = \frac{\pi^{3/2}}{3\times 2^{2\omega-7}}\, \frac{1}{\Gamma(2\omega-2) \Gamma(3-\omega)} \int_{-i\infty}^{+i\infty} \frac{du}{2\,\pi\,i} \int_{-i\infty}^{+i\infty} \frac{dv}{2\,\pi\,i}\ R(u,v) \label{3.47} \ , \\ & R(u,v) = \Gamma(2\omega-2+u+v)\Gamma(-u)\Gamma(-v) \,\Big[ \Gamma(1-u-\omega)\,\Gamma(2-v-\omega)\,\Gamma(u+v+\omega)\,\notag \\ & +\Gamma(2-u-\omega)\,\Gamma(1-v-\omega)\,\Gamma(u+v+\omega) +\Gamma(2-u-\omega)\,\Gamma(2-v-\omega)\,\Gamma(u+v+\omega-1)\Big]\,\notag \\ &\qquad \qquad \times \frac{\Gamma (u+1) \Gamma (v+1) \Gamma \Big(\frac{9}{2}-2 \omega \Big) \Gamma (-u-v-2 \omega +3)}{\Gamma (u+v+2) \Gamma (-u-2 \omega +4) \Gamma (-v-2 \omega +4)}. \end{align} This integral can be computed using the algorithms described in \cite{Czakon:2005rk} and by repeated application of the Barnes first and second lemmas \cite{bailey1935generalized}. The result expanded in $\varepsilon = 2 - \omega \to 0$ is \begin{equation}\label{3.49} J(2-\varepsilon) = \frac{8\,\pi^{2}}{\varepsilon}-8\,\pi^{2}\,(2\,\log 2-3)+ \mathcal O(\varepsilon). \end{equation} Using this in (\ref{3.34}) gives \begin{equation} \label{3.50} W^{(0)}_{2,3} = {\rm W}_{2,3}-\frac{1}{64\,\pi^{2}\,\varepsilon} -\frac{1}{64\,\pi^{2}}-\frac{\gamma_{\rm E}+\log\pi}{32\pi^{2}}+ \mathcal O(\varepsilon). \end{equation} \subsubsection{Total contribution to standard Wilson loop} From \rf{3.30} and \rf{3.50} we get \begin{equation} W^{(0)}_{2,2} + W^{(0)}_{2,3} = -\frac{1}{64\,\pi^{2}\,\varepsilon} -\frac{3}{64\,\pi^{2}}-\frac{\gamma_{\rm E}+\log\pi}{32\pi^{2}}+ \mathcal O(\varepsilon)\ , \label{3667} \end{equation} {\em i.e.}\space in contrast to the WML case \rf{2212},\rf{3.36}, the sum of the self-energy and internal vertex diagrams is no longer zero and is logarithmically divergent. The divergence is cancelled once we add the ladder contribution in \rf{3.22}.
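As a bookkeeping check (ours, not part of the analytic computation above), the cancellation can be verified with exact rational arithmetic: recording each contribution in \rf{3.22}, \rf{3.29} and \rf{3.50} (combined with \rf{3.36},\rf{3.28}) as coefficients of $1/(\pi^2\varepsilon)$, $1$, $1/\pi^2$ and $(\gamma_{\rm E}+\log\pi)/\pi^2$, the pole and the scheme-dependent $\gamma_{\rm E}+\log\pi$ terms drop out of the sum:

```python
from fractions import Fraction as F

# coefficients of (1/(pi^2 eps), 1, 1/pi^2, (gamma_E + log pi)/pi^2),
# read off from (3.22), (3.29), and (3.50) with (3.36),(3.28)
ladder  = (F(1, 64),  F(1, 192), F(7, 128), F(1, 32))   # W_{2,1}^{(0)}
selfen  = (F(-1, 64), F(0),      F(-1, 16), F(-1, 32))  # W_{2,2}^{(0)}
w23_wml = (F(1, 64),  F(0),      F(1, 32),  F(1, 32))   # = -W_{2,2}^{WML}
vertex  = tuple(x + y for x, y in
                zip(w23_wml, (F(-1, 64), F(0), F(-1, 64), F(-1, 32))))

total = tuple(x + y + z for x, y, z in zip(ladder, selfen, vertex))
assert total[0] == 0 and total[3] == 0    # 1/eps and (gamma_E+log pi) cancel
assert total[1] == F(1, 192) and total[2] == F(1, 128)  # finite two-loop part
```

The surviving finite part, $1/192 + 1/(128\pi^2)$, is precisely the total two-loop coefficient quoted below.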
Thus the total contribution to the WL expectation value at order $\lambda^2$ found from \rf{3.22},\rf{3667} is finite \begin{align}\label{350} &W^{(0)}_{2} = W^{(0)}_{2,1}+W^{(0)}_{2,2}+W^{(0)}_{2,3} = \frac{1}{192}+\frac{1}{128\,\pi^{2}} \ , \qquad \qquad W^{(0)}_2 ={\rm W}_2 + \frac{1}{128\,\pi^{2}} \ . \end{align} Then, using \rf{3.3},\rf{3466}, we get the final result for the expectation value of the ordinary Wilson loop \begin{align} &\label{3511} W^{(0)} = 1 + {1\over 8} \lambda + \Big(\frac{1}{192}+\frac{1}{128\,\pi^{2}}\Big)\,\lambda^2 + {\cal O}(\lambda^3) \ . \end{align} We conclude that the weak-coupling expectation values for the circular WML and WL start to differ at order $\lambda^2$. \subsection{Generalization to any $\zeta$ } Let us now generalize the above results for the leading and subleading terms in the weak-coupling expansion \rf{3.3} of the circular Wilson loop to the case of the generalized WL, i.e. to any value of the parameter $\zeta$ in \rf{0}. The computation follows the same lines as above. At leading order in $\lambda$ we find the same result as in the circular WML \rf{3.6} and WL \rf{3466} cases, {\em i.e.}, after subtracting the linear divergence, the quantity $W_1$ in \rf{3.3} has the universal ($\zeta$-independent) value \begin{equation} \label{3.53} W^{(\zeta)}_1 = { 1\over 8} \ .
\end{equation} Explicitly, using again dimensional regularization, we find, as in \rf{3.4},\rf{3.11},\rf{3331}, \begin{align} W^{(\z)}_{1} &=\frac{ \Gamma(\omega-1) }{16\pi^{\omega}}\oint d\tau_{1}d\tau_{2}\, \frac{\zeta^{2}-\dot x(\tau_{1})\cdot \dot x(\tau_{2})}{ |x(\tau_{1})-x(\tau_{2})|^{2\omega-2}}\nonumber \\ &= \frac{1}{8} -\frac{(1-\zeta^{2})\, \Gamma(\omega-1)}{16\pi^{\omega}}\oint {d\tau_{1}d\tau_{2}\over \big(4\, \sin^{2} \tfrac{\tau_{12}}{2}\big)^{\omega-1} } = \frac{1}{8}+ \frac{1}{8}(1-\zeta^2) \varepsilon + {\cal O}(\varepsilon^2) \ , \label{3.54} \end{align} where we set $\omega=2-\varepsilon$ and retained the term of order $ \varepsilon$, as it will contribute to the final result at order $\lambda^2$ in our dimensional regularization scheme upon replacing the bare coupling with the renormalized one. To order $\lambda$, however, one can safely drop this term, yielding (\ref{3.53}). Turning to the $\lambda^2$ order, the ladder diagram contributions in Fig.
\ref{fig:two-loops} generalizing the $\zeta=0$ expressions \rf{3.20} are \begin{align} W^{(\zeta)}_{2,1a} &= \frac{[\Gamma(\omega-1)]^{2}}{64\,\pi^{2\omega}}\, \int_{\tau_{1}>\tau_{2}>\tau_{3}>\tau_{4}}d^{4}\bm\tau\, \frac{(\zeta^{2}-\cos \tau_{12})\,(\zeta^{2}-\cos\tau_{34})}{(4\,\sin^{2}\frac{\tau_{12}}{2}\ 4\,\sin^{2}\frac{\tau_{34}}{2}) ^{\omega-1}}, \notag \\ W^{(\zeta)}_{2,1b} &= \frac{[\Gamma(\omega-1)]^{2}}{64\,\pi^{2\omega}}\, \int_{\tau_{1}>\tau_{2}>\tau_{3}>\tau_{4}}d^{4}\bm\tau\, \frac{(\zeta^{2}-\cos \tau_{14})\,(\zeta^{2}-\cos\tau_{23})}{(4\,\sin^{2}\frac{\tau_{14}}{2}\ 4\,\sin^{2}\frac{\tau_{23}}{2}) ^{\omega-1}}.\label{3543} \end{align} The result of their rather involved computation generalizing \rf{3.21} is (see Appendix \ref{B}) \begin{align} W^{(\zeta)}_{2,1a} &= \frac{[\Gamma(1-\varepsilon)]^{2}}{64\,\pi^{2\,(2-\varepsilon)}}\,\Big[ \frac{\pi^{2}\,(1-\zeta^{2})}{\varepsilon}+\pi^{2}\,(1-\zeta^2) ( 3 - \zeta^2) +\frac{\pi^{4}}{6} +\mathcal O(\varepsilon) \Big], \notag \\ W^{(\zeta)}_{2,1b} &= \frac{[\Gamma(1-\varepsilon)]^{2}}{64\,\pi^{2\,(2-\varepsilon)}}\,\Big[ \frac{\pi^{2}}{2}\,(1-\zeta^{2})^{2}+\frac{\pi^{4}}{6} +\mathcal O(\varepsilon)\Big],\label{3556} \end{align} with the sum being \begin{equation} W_{2,1}^{(\zeta)} = W^{(\zeta)}_{2,1a} + W^{(\zeta)}_{2,1b} =\frac{1}{192}+(1-\zeta^{2})\,\Big[ \frac{1}{64\,\pi^{2}\,\varepsilon}+\frac{1}{128\,\pi^{2}}\,(7-3\,\zeta^{2}) +\frac{\log\pi+\gamma_{\rm E}}{32\,\pi^{2}} \Big] + \mathcal O(\varepsilon) \ . \label{8899} \end{equation} For the self-energy contribution in Fig.
\ref{fig:two-loops} we find the expression \rf{3.23} where now \begin{align} {\widetilde W}^{(\zeta)}_{1} &= \frac{1}{16\pi^{\omega}}\oint d\tau_{1}d\tau_{2}\ \frac{\zeta^{2}\,|\dot x(\tau_{1})|\,|\dot x(\tau_{2})|-\dot x(\tau_{1})\cdot \dot x(\tau_{2})}{ \big[|x(\tau_{1})-x(\tau_{2})|^{2}\big]^{2\omega-3}} \notag \\ &= \zeta^{2}\,\widetilde W_{1}^{(1)}+(1-\zeta^{2})\,\widetilde W_{1}^{(0)} = \frac{1}{8}+\frac{1}{8}\,\Big[2\,(1-\zeta^{2})+\log\pi\Big]+ \mathcal O(\varepsilon), \label{3365} \end{align} with ${\widetilde W}_{1}^{(1)}$ and ${\widetilde W}_{1}^{(0)}$ given by \rf{3.24},\rf{3.26} and \rf{3.25},\rf{3.27}. Substituting this into (\ref{3.23}), we get \begin{equation}\label{3577} W^{(\z)}_{2,2} = \zeta^{2}\,W_{2,2}^{(1)}+(1-\zeta^{2})\,\Big[ -\frac{1}{64\,\pi^{2}\,\varepsilon}-\frac{1}{16\,\pi^{2}}-\frac{\gamma_{E}+\log\pi}{32\,\pi^{2}} \Big]+\mathcal O(\varepsilon) \ , \end{equation} where $W_{2,2}^{(1)} $ is given by \rf{3.28}. The internal-vertex diagram contribution in Fig. \ref{fig:two-loops} generalizing \rf{3.34} is \begin{equation}\label{3588} W^{(\z)}_{2,3} = W_{2,3}^{(1)}-(1-\zeta^{2})\,\frac{\Gamma(2\omega-2)}{2^{2\omega+5} \pi^{2\omega}}\,J(\omega) \ , \end{equation} where $J$ is given by \rf{3731},\rf{3.49} and $W_{2,3}^{(1)} $ is given by \rf{3.36},\rf{3.28}, i.e. \begin{equation}\label{3599} W^{(\z)}_{2,3} =-{\rm W}_{2,2}+(1-\zeta^{2})\,\Big[-\frac{1}{64\,\pi^{2}\,\varepsilon} -\frac{1}{64\,\pi^{2}}-\frac{\gamma_{\rm E}+\log\pi}{32\pi^{2}}\Big] + \mathcal O(\varepsilon)\ . \end{equation} Summing up the separate contributions given in \rf{8899},\rf{3577} and \rf{3599} we find that the ${1\over \varepsilon} \sim \log a $ logarithmic divergences cancel out, and we get the finite expression \begin{equation} \label{3.61} W^{(\z)}_{2} = W^{(\z)}_{2,1}+W^{(\z)}_{2,2}+W^{(\z)}_{2,3} = \frac{1}{192} +\frac{1}{128\pi^{2}}\,(1-\zeta^{2})\,(1-3\zeta^{2}) \ . 
\end{equation} The final result for the Wilson loop expectation value to order $\lambda^2$ that follows from \rf{3.54} and \rf{3.61} is then \begin{equation} \label{2633} \langle W^{(\z)} \rangle = 1+ \lambda \Big(\frac{1}{8}-\frac{1}{8}\zeta^2\varepsilon \Big) +\lambda^2 \left[ \frac{1}{192} +\frac{1}{128\pi^{2}}\,(1-\zeta^{2})\,(1-3\zeta^{2}) \right] + {\cal O}(\lambda^3)\ . \end{equation} Here it is important to retain the order $ \zeta^2\varepsilon $ part in the 1-loop term in \rf{3.54}: despite the cancellation of all ${1\over \varepsilon}$ terms to this order, $\zeta$ in the order $\lambda$ term is a bare coupling that contains poles that may effectively contribute at higher orders. Although $\lambda$ does not run in $d=4$, the presence of the term linear in $\zeta$ in the beta-function \rf{111} implies that the present case is best treated as a 2-coupling $g_i=(\lambda, \zeta)$ theory. In general, if $d= 4- 2 \varepsilon$ and we have a set of near-marginal couplings $g_i$ with mass dimensions $ u_i\varepsilon$, the bare couplings may be expressed in terms of the dimensionless renormalized couplings $g_i$ as \begin{align} \label{267} & g_i{}_{{\rm b}} = \mu^{u_i\varepsilon}\Big[g_i+\frac{1}{\varepsilon} K_i(g) + {\cal O}\big({1\over \varepsilon^2}\big)\Big]\,,\qquad \qquad \mu { d g_i{}_{\rm b} \over d \mu}=0, \\ & \label{268} \beta_i(g) = \mu { d g_i \over d \mu} = - \varepsilon u_i g_i - u_i K_i + \sum_j u_j g_j {\partial\over \partial g_j} K_i \ . \end{align} In the present case we may choose dimensions so that the gauge field and scalars $\P_m$ in the bare SYM action $ {N\over \lambda_{\rm b}} \int d^{d} x ( F^2+ D\P D\P + ...) $ have dimension 1 so that $\lambda_{\rm b}$ has dimension $2\varepsilon $, {\em i.e.}\space $\lambda_{\rm b} = \mu^{2 \varepsilon} \lambda$, or $u_\lambda= 2$ (and of course $K_{\lambda}=0$).
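Several of the numerical coefficients above admit quick independent cross-checks. The stdlib-only Python sketch below is ours, not part of the paper; it uses the standard closed form $\oint d\tau_1 d\tau_2\,\big(4\sin^2\tfrac{\tau_{12}}{2}\big)^{1-\omega} = 4\pi^2\,\Gamma(3-2\omega)/[\Gamma(2-\omega)]^2$ to verify the $\varepsilon$-expansion \rf{3.54}, checks that at $\zeta=1$, $\omega=2$ the ladder integrands \rf{3543} collapse to the constant $1/4$ (so that the two diagrams sum to the $1/192$ in \rf{8899}), and confirms that trading the bare $\zeta$ in \rf{2633} for the renormalized one via $K_\zeta = \frac{\lambda}{16\pi^2}\zeta(\zeta^2-1)$ from \rf{111} turns $(1-\zeta^2)(1-3\zeta^2)$ into $(1-\zeta^2)^2$:

```python
import math

pi = math.pi

# (a) eps-expansion (3.54): closed form of the contour integral,
#     oint dt1 dt2 (4 sin^2(t12/2))^{1-w} = 4 pi^2 Gamma(3-2w)/Gamma(2-w)^2, w = 2-eps
def W1(zeta, eps):
    I = 4*pi**2 * math.gamma(2*eps - 1) / math.gamma(eps)**2
    return 0.125 - (1 - zeta**2)*math.gamma(1 - eps)/(16*pi**(2 - eps)) * I

eps, zeta = 1e-4, 0.3
assert abs(W1(zeta, eps) - (0.125 + 0.125*(1 - zeta**2)*eps)) < 1e-7

# (b) at zeta = 1, w = 2 each factor (1-cos t)/(4 sin^2(t/2)) = 1/2, so the ladder
#     integrand is 1/4; each diagram is then 1/(64 pi^4) * (1/4) * (2 pi)^4/4! = 1/384
for t in (0.3, 1.7, 2.9):
    assert abs((1 - math.cos(t)) / (4*math.sin(t/2)**2) - 0.5) < 1e-12
ladder = 1/(64*pi**4) * 0.25 * (2*pi)**4 / math.factorial(4)
assert abs(2*ladder - 1/192) < 1e-15

# (c) zeta_b = zeta + K_zeta/eps with K_zeta = lam*zeta*(zeta^2-1)/(16 pi^2):
#     in the 1-loop term lam*(1/8 - zeta_b^2 eps/8) the pole hits the explicit eps
#     and leaves the finite shift lam^2 zeta^2 (1-zeta^2)/(64 pi^2)
for z in (0.0, 0.3, 0.7, 1.0, 1.5):
    shift = z**2 * (1 - z**2) / (64*pi**2)
    total = (1 - z**2)*(1 - 3*z**2)/(128*pi**2) + shift
    assert abs(total - (1 - z**2)**2/(128*pi**2)) < 1e-15
```

All three assertions reproduce the quoted coefficients to the stated accuracy.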
As the Wilson line integrand in \rf{0} should have dimension 1, this means that $\zeta_{\rm b}$ should have dimension zero, {\em i.e.} \space $u_\zeta=0$.\footnote{ This is natural as the dimension of the Wilson line integral is not changed. Note that the same is true if one redefines the SYM fields by a power of the gauge coupling $g$: then the dimension of $\P$ is the canonical ${d-2\over 2}= 1-\varepsilon$, but $g\, \P$, which then enters the Wilson loop \rf{0}, still has dimension 1.} Then from \rf{267},\rf{268} we learn that (using \rf{111}) \begin{equation} \label{269} \zeta_{\rm b} = \zeta + \frac{1}{\varepsilon} K_\zeta + {\cal O}\big({1\over \varepsilon^2}\big) \ , \ \ \ \ \ \ \ \ \beta_\zeta= u_\lambda \lambda {\partial\over \partial \lambda} K_\zeta\ , \qquad \qquad K_\zeta = {1\over 2} \beta_\zeta = \frac{\lambda}{16\pi^2}\zeta (\zeta^2-1) \ . \end{equation} The coupling $\zeta$ in \rf{2633} should actually be the bare coupling; replacing it with the renormalized coupling according to \rf{269} and then sending $\varepsilon \to 0$, we find the expression in \rf{1}, i.e. \begin{align} &\label{3500} \langle W^{(\z)}\rangle = 1 + {1\over 8} \lambda + \left[\frac{1}{192} +\frac{1}{128\pi^{2}}\,(1-\zeta^{2})^2\,\right]\lambda^2 + {\cal O}(\lambda^3) \ . \end{align} As we shall discuss in Appendix \ref{ab3}, there is an alternative regularization procedure in which the full 2-loop expression in \rf{3500} comes just from the type (b) ladder diagram contribution in \rf{3556} and thus the use of the evanescent 1-loop term in \rf{2633} is not required. \iffalse Note that the cancellation of logarithmic divergences at order $\lambda^2$ does not contradict the non-trivial renormalization \rf{111} of $\zeta$.
In general, renormalizability of $W^{(\z)}$ implies that $\langle W^{(\z)}\rangle \equiv F[\lambda, \zeta(a), \log a] = F[\lambda, \zeta(\mu), \log \mu]$ order by order in $\lambda$ and thus the finiteness of the $\lambda^2$ term is due to the absence of $\zeta$ dependence at order $\lambda$ in \rf{3500}. The $\zeta$-dependence of the $\lambda^2$ term in \rf{3500} implies the presence of logarithmic UV divergences at the $\lambda^3$ and higher orders. \fi \section{Relation to correlators of scalar operators on the Wilson loop}\label{sec4} The $\zeta$-dependence of the generalized WL \rf{0} can be viewed as being due to multiple insertions of the scalar operators on the loop. It is of interest to relate the expression \rf{3500} to what is known about 2-point functions of (scalar) operators on the line or circle (see \cite{Polyakov:2000ti,Drukker:2006xg,Alday:2007he,Sakaguchi:2007ba,Drukker:2011za,Correa:2012at,Cooke:2017qgm,Giombi:2017cqn}). Let us choose the scalar coupling in \rf{0} to be along the 6-th direction, {\em i.e.}\space $\P_m \theta^m =\P_6$, and denote the remaining 5 scalars not coupled directly to the loop as $\P_a$ ($a=1,...,5$). Let us also choose the contour to be the straight line $x^\mu= (\tau, 0, 0, 0)$ along the Euclidean time direction $x^0 =t$, so that the exponent in \rf{0} is simply $ \int dt ( i A_t + \zeta \P_6) $. For $\zeta=1$ or $\zeta=0$, when the loop preserves the conformal symmetry, the 2- (and higher) point functions of conformal operators inserted along the line can be interpreted as correlators in an effective (defect) 1d CFT. For example, for $\zeta=1$ \begin{equation}\label{7} \llangle O(t_1) O(t_2) \rrangle_{\rm line} \equiv \langle {\rm Tr}\,{\cal P} \big[O(x_1)O(x_2)\ e^{\int dt(iA_t+ \Phi_6)}\big]\rangle = \frac{C}{|t_{12}|^{2\Delta}}\ \ . \end{equation} Here in $ \langle {\rm Tr} ...
\rangle$ the operator $O(x)$ is a gauge-theory operator in the adjoint representation restricted to the line (with exponential factors appearing between and after $O(x_n(t_n))$ according to path ordering to preserve gauge invariance). We also use that in the WML case for a straight line the normalization factor is trivial, {\em i.e.}\space $\llangle 1 \rrangle =1$. A similar relation can be written for a circular loop using the map $t \to \tan{\tau\over 2}$: \begin{equation} \llangle O(\tau_1) O(\tau_2)\rrangle_{\rm circle}= \frac{C}{|2\sin\frac{\tau_{12}}{2} |^{2\Delta}}\,. \label{77} \end{equation} Here the gauge-theory expectation value is to be normalized with the non-trivial circle WML factor \rf{1.1} so that once again $\llangle 1 \rrangle =1$. In the $\zeta=0$ case one is to use \rf{3511} as the corresponding normalization factor. In what follows $\llangle ...\rrangle$ will refer to the expectation value in the effective CFT on the circle. The simplest example is the insertion of the ``orthogonal'' scalars $\P_a$ into the WML \rf{7}, in which case the dimension is protected, $\Delta=1$, while the norm is related to the Bremsstrahlung function $B(\lambda)$ \cite{Correa:2012at}\footnote{Let us recall that the leading tree-level value of the 2-point coefficient $C=\frac{\lambda}{8\,\pi^{2}}+\dots$ (with $\lambda\equiv g^2 N$) is found by taking into account that the adjoint scalar field is $\Phi = \Phi^r t^r$ with propagator $\langle\Phi^{r}(x)\Phi^{r'}(0)\rangle = \frac{g^{2}\,\delta^{rr'}}{4\pi^{2}\,x^{2}}$ ($r=1, ..., N^2-1$ is the $SU(N)$ algebra index) where the generators satisfy $\text{Tr}(t_{r}t_{r'}) = \frac{1}{2}\,\delta_{rr'}$, \ $t_r t_r =\ha N\, {\rm I}$. The trace $\delta^{rr'} \delta_{rr'} = N^2-1$ produces the factor of $N^2$ in the planar limit.
} \begin{align}\label{47} &\qquad \qquad \llangle \P_a(\tau_1) \P_b(\tau_2)\rrangle =\delta_{ab} \frac{C_0(\lambda) }{| 2\sin\frac{\tau_{12}}{2} |^{2}}\ \ , \ \qquad \qquad C_0 = 2 B(\lambda) \ , \\ &\qquad \qquad B(\lambda)\equiv {1 \over 2 \pi^2} {d \over d \log \lambda} \log \langle {\rm W} \rangle = \frac{\sqrt{\lambda } \, I_2(\sqrt{\lambda })}{4\pi ^2\, I_1(\sqrt{\lambda })}\ , \label{471}\\ &C_0 (\lambda\ll 1)= {\lambda\over 8 \pi^2} - {\lambda^2\over 192 \pi^2} + {\cal O}(\lambda^3)\ , \qquad C_0(\lambda\gg 1) = {\sqrt{\lambda}\over 2 \pi^2} - {3\over 4\pi^2} + {\cal O}({1\over \sqrt{\lambda}}) \ . \label{472} \end{align} The operator $\P_6$ which couples to the loop in this $\zeta=1$ case, on the other hand, gets renormalized and its scaling dimension is a non-trivial function of $\lambda$. At small $\lambda$ one gets\footnote{The definition of a good conformal operator may require the subtraction of a non-zero constant one-point function on the circle, which may depend on the regularization scheme. } \begin{equation} \label{48} \llangle \P_6 (\tau_1) \P_6 (\tau_2) \rrangle = \frac{C(\lambda) }{| 2\sin\frac{\tau_{12}}{2} |^{2\Delta}} \ , \qquad C = {\lambda\over 8 \pi^2} + {\cal O}(\lambda^2) \ , \qquad \Delta =1 + {\lambda\over 4 \pi^2} + {\cal O}(\lambda^2) \ . \end{equation} Here the anomalous dimension can be obtained by direct computation \cite{Alday:2007he} or by taking the derivative of the beta-function \rf{111} at the $\zeta=1$ conformal point \cite{Polchinski:2011im} as in \rf{3}. The leading term in $C$ is the same as in \rf{47},\rf{472} as it comes just from the free-theory correlator. At strong coupling the ``transverse'' scalars $\P_a$ should correspond to massless string coordinates $y_a$ in $S^5$ directions (with $\Delta=\Delta_+ =1$, cf.
\rf{2.6}) \cite{Giombi:2017cqn} while $\P_6$ should correspond \cite{malda} to the 2-particle world-sheet state $y_a y_a$ (see section \ref{sec4.2}), with dimension $\Delta=2\Delta_+ + {\cal O}({1\over \sqrt{\lambda}})=2 + {\cal O}({1\over \sqrt{\lambda}})$ \cite{Polchinski:2011im}. The subleading term in \begin{equation} \label{50} \Delta=2 - {5\over \sqrt{\lambda}} + {\cal O}\big({1\over (\sqrt{\lambda})^2}\big) , \end{equation} computed in \cite{Giombi:2017cqn} has a negative sign, consistent with the possibility of a smooth interpolation to the weak-coupling expansion in \rf{48} (see also section \ref{sec4.2}). In the case of the standard WL with no scalar coupling ($\zeta=0$) the defect CFT$_1$ has unbroken $SO(6)$ symmetry and thus all 6 scalars have the same correlators: \begin{align}\label{49} & \llangle \P_m \rrangle =0 \ , \ \ \ \ \ \ \ \ \qquad \llangle \P_m(\tau_1) \P_n(\tau_2) \rrangle =\delta_{mn} \frac{C (\lambda) }{|2\sin\frac{\tau_{12}}{2} |^{2\Delta}}\ ,\\ & \qquad C = {\lambda\over 8 \pi^2} + {\cal O}(\lambda^2) \ , \ \qquad \qquad \Delta= 1 - {\lambda\over 8 \pi^2} + {\cal O}(\lambda^2) \ . \label{499} \end{align} Here the leading term in $C$ is the same as in \rf{472},\rf{48} as it comes just from the free-theory correlator. The anomalous dimension in \rf{499} found by direct computation in \cite{Alday:2007he} is again the same as the derivative of the beta-function \rf{111} at the $\zeta=0$ conformal point \cite{Polchinski:2011im} (see \rf{3}).
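The weak- and strong-coupling expansions of $C_0 = 2B(\lambda)$ quoted in \rf{472} follow from the exact Bessel-function expression \rf{471}; a minimal stdlib-only numerical check (the series implementation of $I_n$ below is our own helper, not from the paper):

```python
import math

def besseli(n, x, terms=60):
    # modified Bessel function I_n(x) from its power series (stdlib-only sketch)
    return sum((x/2)**(2*k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def C0(lam):
    # C_0 = 2 B(lambda), with B = sqrt(lam) I_2(sqrt(lam)) / (4 pi^2 I_1(sqrt(lam)))
    x = math.sqrt(lam)
    return 2 * x * besseli(2, x) / (4 * math.pi**2 * besseli(1, x))

pi2 = math.pi**2

# weak coupling: C_0 = lam/(8 pi^2) - lam^2/(192 pi^2) + O(lam^3)
lam = 1e-3
assert abs(C0(lam) - (lam/(8*pi2) - lam**2/(192*pi2))) < 1e-12

# strong coupling: C_0 = sqrt(lam)/(2 pi^2) - 3/(4 pi^2) + O(1/sqrt(lam))
lam = 400.0
assert abs(C0(lam) - (math.sqrt(lam)/(2*pi2) - 3/(4*pi2))) < 1e-2
```

Both asymptotic regimes of \rf{472} are reproduced to the expected accuracy of the truncated expansions.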
At strong coupling, {\em i.e.}\space in the string theory description where the $S^5$ coordinates are to be subject to the Neumann boundary conditions restoring the $O(6)$ symmetry, one expects to find \cite{Alday:2007he} \begin{equation} \label{500} \Delta= {5 \over \sqrt{\lambda}} + {\cal O}\big({1\over (\sqrt{\lambda})^2}\big), \end{equation} which is consistent with the negative sign of the anomalous dimension at weak coupling in \rf{499}, suggesting that it decreases to zero at strong coupling.\footnote{It is interesting to notice that the data \rf{50},\rf{500} about strong-coupling dimensions of $\P_6$ near $\zeta=0$ and near $\zeta=1$ is consistent with the relation \cite{Polchinski:2011im} $2\Delta_+ + 2\Delta_-=2$, i.e. $ [ {5 \over \sqrt{\lambda}} + O({1\over (\sqrt{\lambda})^2})] + [ 2 - {5\over \sqrt{\lambda}} + O({1\over (\sqrt{\lambda})^2})] = 2 + {\cal O}({1\over (\sqrt{\lambda})^2}). $ Here $2\Delta_\pm$ are dimensions of perturbations near the two ends of the flow between the Dirichlet and Neumann b.c. which may be interpreted as being driven by the ``double-trace''-like operator constructed out of a massless 2d scalar with strong-coupling dimensions $\Delta_+ =1$ and $\Delta_-=0$ (see section \ref{sec4.2}). } As a test of our perturbative calculation of the expectation value \rf{3500} of the generalized WL \rf{0}, let us now relate its expansion near the conformal points $\zeta=0, 1$ to the above expressions for the 2-point functions of the $\Phi_6$ operator.
The expectation value of $W^{(\zeta)}$ for the circular contour ($| \dot x|=1$) expanded near $\zeta=0$ may be written as \begin{equation}\label{410} \langle W^{(\zeta)}\rangle = W^{(0)}\Big[ 1 + \zeta\, \big\llangle\oint d\tau \, \Phi_6(x(\tau)) \big\rrangle +\frac{\zeta^{2}}{2}\,\big\llangle \oint d\tau \, \Phi_{6}(x(\tau) ) \,\oint d\tau' \, \Phi_{6}(x(\tau')) \big \rrangle +{\cal O}(\zeta^3) \Big] , \end{equation} where $\llangle...\rrangle $ is defined as in \rf{7} but now for $\zeta=0$, i.e. with only the gauge field coupling $i\int d \tau \dot x^\mu A_\mu$ in the exponent and the normalization factor $W^{(0)} \equiv \langle W^{(0)}\rangle$ has weak-coupling expansion given in \rf{3511}. The order $\zeta$ (tadpole) term here vanishes automatically as in \rf{49} due to the $SO(6)$ symmetry, consistently with the conformal invariance. We may compute the $\zeta^2$ term here \begin{equation} \label{4.12} \langle W^{(\zeta)}\rangle_{\zeta^2} = \frac{\zeta^2}{2} W^{(0)} \,\int_{0}^{2\pi}d\tau\,\int_{0}^{2\pi}d\tau'\, \llangle\Phi_6(\tau)\,\Phi_6(\tau')\rrangle\ , \end{equation} directly using the conformal 2-point function \rf{49} with generic $C(\lambda)$ and $\Delta(\lambda)\equiv 1 + \gamma(\lambda)$. \iffalse \begin{equation} \label{999} \llangle \P_6 (\tau_1) \P_6 (\tau_2) \rrangle = \frac{C (\lambda) }{|2\sin\frac{\tau_{12}}{2} |^{2\D}}\ , \qquad C = \frac{\lambda}{8\pi^2} + {\cal O}(\lambda^2) \ , \ \qquad \Delta\equiv 1 + \g(\lambda) = 1 - \frac{\lambda}{8\pi^2} + {\cal O}(\lambda^2) \ . \end{equation} \fi Doing the integral over $\tau$ as in \rf{3331} and then expanding in small $\lambda$ using \rf{499} we obtain\footnote{This integral is similar to the one in \rf{3.26} and thus can be found by an analytic continuation in $\gamma$. 
Alternatively, we may use a cutoff regularization, see Appendix \ref{A}.} \begin{align}\label{413} \langle W^{(\zeta)} \rangle_{\zeta^2} & = \zeta^{2} W^{(0)}\,C(\lambda)\,\frac{\pi^{3/2}\,\Gamma(-\frac{1}{2}-\gamma(\lambda))}{2^{1+2\gamma(\lambda)}\,\Gamma(-\gamma(\lambda))} \no \\ &= \zeta^{2} W^{(0)}\,C(\lambda)\, \pi^2 \gamma (\lambda) \big[1 + {\cal O}(\gamma^2) \big] \ =\ - \zeta^2 \frac{\lambda^2 }{64\pi^2}+\mathcal O(\lambda^{3}) \ . \end{align} This precisely matches the term of order $\lambda^2 \zeta^2 $ in \rf{3500}. Comparing to the general relation \rf{4}, the higher order terms in the anomalous dimension $\gamma(\lambda)$ can be absorbed into the relation between ${\cal C}$ in \rf{2} and $C$ in \rf{49}. Next, let us consider the expansion of the WL \rf{0},\rf{3500} near the supersymmetric conformal point $\zeta=1$. The term of order $\zeta-1$ in this expansion is expected to vanish by conformal symmetry (provided a possible tadpole contribution is suitably subtracted),\footnote{As we have seen above, the dimensional regularization scheme that leads to \rf{2}, and thus implies the vanishing of the tadpole at the conformal point, effectively preserves the conformal invariance. } and the term of order $(\zeta-1)^2$ is to be related to the integrated two-point function on the supersymmetric WL \begin{equation}\label{4133} \langle W^{(\zeta)}\rangle_{(\zeta-1)^2} = \frac{1}{2}(\zeta-1)^2\, W^{(1)} \,\int_{0}^{2\pi}d\tau\,\int_{0}^{2\pi}d\tau'\, \llangle\Phi_6(\tau)\,\Phi_6(\tau')\rrangle_{\zeta=1}\ . \end{equation} Inserting here the conformal 2-point function (\ref{48}), we get the same integral as in \rf{4.12},(\ref{413}).
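The small-$\gamma$ behaviour of the $\Gamma$-function factor in \rf{413}, and the resulting value $-\zeta^2\lambda^2/(64\pi^2)$, can be checked numerically (a stdlib-only sketch, with the helper function ours):

```python
import math

pi = math.pi

def integrated_2pt(gamma):
    """Closed-form factor multiplying zeta^2 W^(0) C in (413):
    pi^{3/2} Gamma(-1/2 - gamma) / (2^{1+2 gamma} Gamma(-gamma)),
    i.e. one half of the double contour integral of |2 sin(tau12/2)|^{-2(1+gamma)}."""
    return pi**1.5 * math.gamma(-0.5 - gamma) / (2**(1 + 2*gamma) * math.gamma(-gamma))

# small-gamma behaviour: pi^2 * gamma * [1 + O(gamma)]
g = -1e-5                      # gamma = -lambda/(8 pi^2) < 0 at weak coupling
assert abs(integrated_2pt(g) - pi**2 * g) < 1e-7

# leading term of <W>_{zeta^2}: zeta^2 * C * pi^2 * gamma = -zeta^2 lambda^2/(64 pi^2)
lam, zeta = 0.1, 0.5
C, gamma = lam/(8*pi**2), -lam/(8*pi**2)
value = zeta**2 * C * integrated_2pt(gamma)
assert abs(value - (-zeta**2 * lam**2/(64*pi**2))) < 1e-7
```

The sign of the result is fixed by $\gamma<0$ at weak coupling, in accord with \rf{499}.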
Plugging in the values for $C=\frac{\lambda}{8\pi^2}+{\cal O}(\lambda^2)$ and $\gamma = \frac{\lambda}{4\pi^2}+{\cal O}(\lambda^2)$ from \rf{48}, we get \begin{equation}\label{3144} \langle W^{(\zeta)}\rangle_{(\zeta-1)^2} =\frac{\lambda^2}{32\pi^2} (\zeta-1)^{2}+{\cal O}(\lambda^3) \ , \end{equation} which is indeed in precise agreement with the term of order $(1-\zeta)^2$ in the expansion of \rf{3500} near $\zeta=1$ \begin{equation} \label{444} \langle W^{(\zeta)}\rangle = \langle W^{(1)}\rangle \Big\{1 + {\lambda^2\over 32 \pi ^2} \Big[{(\zeta-1)^2}{} + {(\zeta-1)^3}{}+\tfrac{1}{4} (\zeta-1)^4 \Big] + {\cal O}\big(\lambda^3\big) \Big\}\,. \end{equation} We may also compare the higher order terms in the small $\zeta$ or small $(1-\zeta)$ expansion to integrated higher-point conformal correlators of the $\zeta=0$ and $\zeta=1$ CFT's. The absence of the $\zeta^3$ term (and other $\zeta^{2n+1}$ terms) in the expansion near $\zeta=0$ is in agreement with the vanishing of the odd-point scalar correlators that follows from the $\P_m \to -\P_m$ symmetry of the SYM action. At the same time, the 3-point scalar $\P_6$ correlator at the $\zeta=1$ point is non-trivial (cf. also \cite{Kim:2017sju,Kim:2017phs}). In general, on the 1/2-BPS circular WL we should have \begin{equation} \label{445} \llangle\Phi_6(\tau_1)\,\Phi_6(\tau_2)\Phi_6(\tau_3) \rrangle_{\zeta=1} = { C_3 (\lambda) \over |2\sin\frac{\tau_{12}}{2} |^{\Delta} \ |2\sin\frac{\tau_{23}}{2} |^{\Delta}\ |2\sin\frac{\tau_{31}}{2} |^{\Delta} }\ , \end{equation} where at weak coupling $\Delta=1 + \gamma(\lambda) $ is the same as in \rf{48}, i.e. $\gamma= {\lambda \over 4 \pi^2} + {\cal O}(\lambda^2)$, and we should have $C_3= c_3 \lambda^2 + {\cal O}(\lambda^3)$.
Integrating \rf{445} using \rf{3.12} and then expanding in small $\lambda$ we get as in \rf{413},\rf{4133} \begin{align} &\langle W^{(\zeta)}\rangle_{(\zeta-1)^3} = {1\over 3!} (\zeta-1)^3\, \langle W^{(1)}\rangle \oint d\tau_1 \, d\tau_2 \, d \tau_3\ \llangle\Phi_6(\tau_1)\Phi_6(\tau_2) \Phi_6(\tau_3) \rrangle_{\zeta=1} \no \\ &\qquad = (\zeta-1)^3 \langle W^{(1)}\rangle \, C_3 \, \frac{\pi^{3/2}\, \Gamma(-\frac{\g}{2}) \ \Gamma(-\frac{1}{2}- {3\over 2} \gamma)}{3\cdot 2^{1+3\gamma}\, [\Gamma(-\gamma)]^3} = - {8\over 3} \pi^2 (\zeta-1)^3 C_3 \big[1 + {\cal O}(\lambda) \big]\ . \label{4455} \end{align} Comparing \rf{4455} to \rf{444} we conclude that \begin{equation} C_3 = -{3 \lambda^2 \over 256 \pi^4} + {\cal O}(\lambda^3) \ . \label{435} \end{equation} The $\zeta^4 $ term in the expansion of \rf{3500} should be related to the integrated value of the 4-point correlator of $\P_6$. To $\lambda^2$ order it is given just by the product of the two 2-point contributions (corresponding to the two ladder graphs; the third ordering is subleading in the planar limit) \begin{align} \label{4445} & \llangle \Phi_6(\tau_1) \Phi_6(\tau_2)\Phi_6(\tau_3)\Phi_6(\tau_4) \rrangle = \big[ G_0(\tau_1,\tau_2) \, G_0(\tau_3,\tau_4) + G_0(\tau_1,\tau_4) \, G_0(\tau_2,\tau_3) \no \\ & \qquad \qquad\qquad \qquad\qquad \qquad\qquad \qquad + {\cal O}(\lambda^3) \big] \theta(1,2,3,4) + {\rm permutations} \ , \end{align} where $G_0(\tau_1,\tau_2)= {\lambda\over 8 \pi^2} \frac{1 }{|2\sin\frac{\tau_{12}}{2} |^{2}}$ is the leading term in the 2-point correlator \rf{49} of $\P_6$ at $\zeta=0$ and $\theta(1,2,3,4)= \theta(\tau_1-\tau_2) \theta(\tau_2-\tau_3)\theta(\tau_3-\tau_4)$. To understand the precise relation between the integrated 4-point correlator and the $\zeta^4$ term in $\langle W^{(\z)}\rangle$ in \rf{3500} one should follow the logic of conformal perturbation theory by a nearly-marginal operator $O$ with dimension $\Delta=d-\epsilon$ (see, {\em e.g.}, \cite{Fei:2015oha}). 
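The rewriting of the $\lambda^2$ term of \rf{3500} as the $(\zeta-1)$-expansion in \rf{444}, and the value of $C_3$ in \rf{435}, can be verified with a few lines of stdlib-only Python (a sketch, not from the paper):

```python
import math

pi2, pi4 = math.pi**2, math.pi**4

# (1 - zeta^2)^2 = 4u^2 + 4u^3 + u^4 with u = zeta - 1, which turns the
# lambda^2/(128 pi^2) coefficient of (3500) into the bracket of (444)
for zeta in (-0.5, 0.2, 0.9, 1.3):
    u = zeta - 1
    lhs = (1 - zeta**2)**2 / (128*pi2)
    rhs = (u**2 + u**3 + 0.25*u**4) / (32*pi2)
    assert abs(lhs - rhs) < 1e-15

# matching the (zeta-1)^3 coefficient lambda^2/(32 pi^2) of (444) against
# -(8/3) pi^2 C_3 from (4455) fixes C_3 = -3 lambda^2/(256 pi^4), as in (435)
C3_coeff = -3 / (256*pi4)          # C_3 = C3_coeff * lambda^2 + ...
assert abs(-(8/3)*pi2*C3_coeff - 1/(32*pi2)) < 1e-15
```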
In the present case of $d=1$ near the $\zeta=0$ point we have $O=\P_6$ with dimension $\Delta=1 -\epsilon, \ \epsilon\equiv -\gamma = {\lambda\over 8\pi^2}+... \ll 1$ (see \rf{499}). One then perturbs by the dimension 1 operator $\zeta_{\rm b} O$, where the bare coupling $\zeta_{\rm b}$ is related to the dimensionless renormalized one by $\zeta_{{\rm b}} = \mu^{\epsilon} ( \zeta + {\lambda \over 16\pi^2 \epsilon } \zeta^3 +...)$, corresponding to the beta-function \rf{111}, i.e. $ \beta_\zeta = -\epsilon\, \zeta + {\lambda\over 8\pi^2} \zeta^3+... $. Computing $\langle W^{(\z)}\rangle$ in an expansion in powers of $\zeta_{\rm b}$, we get for the $\lambda^2$ term: $ \langle W^{(\z)}\rangle= \langle W^{(0)}\rangle \big[1+ \lambda^2( k_2 \zeta_{\rm b}^2 + k_4 \zeta_{\rm b}^4) + {\cal O}(\zeta_{\rm b}^6)\big]$ where $k_2 = - {1\over 64\pi^2}$ is the contribution of the integrated 2-point function given by \rf{413} and $k_4 = {1\over 64\pi^4} ( \pi^2 + \ha \pi^2) $ is the contribution of the integral of \rf{4445}, {\em i.e.}\space the sum of the $\zeta^4$ terms in the two ladder diagrams in \rf{3556}. Similarly to what happened in the dimensional regularization case in \rf{2633}, here the quadratic term contributes to the quartic one once expressed in terms of the renormalized coupling. Using $\zeta_{\rm b}=\zeta+ \ha \zeta^3 + ...$ we get $k_2 \zeta_{\rm b}^2 + k_4 \zeta_{\rm b}^4 = k_2 \zeta^2 + k_4' \zeta^4+ ...$, where $k_4'=k_4 + k_2= {1\over 128\pi^2}$, in agreement with the $\zeta^4$ coefficient in \rf{3500}. Similar considerations should apply to the $(\zeta-1)^4 \lambda^2$ term in the expansion \rf{444} near $\zeta=1$.
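The re-expansion of $k_2\zeta_{\rm b}^2 + k_4\zeta_{\rm b}^4$ in the renormalized coupling can be checked numerically; here $k_2$ is taken from \rf{413} and $k_4$ is read off directly from the $\zeta^4$ terms of the two ladder expressions \rf{3556} (a stdlib-only sketch):

```python
import math

pi2, pi4 = math.pi**2, math.pi**4

k2 = -1/(64*pi2)                               # integrated 2-point contribution, cf. (413)
k4 = (math.pi**2 + 0.5*math.pi**2)/(64*pi4)    # zeta^4 terms of the two ladders in (3556)

# zeta_b = zeta + (1/2) zeta^3 + ... (using eps = lambda/(8 pi^2) in the beta-function);
# re-expanding k2 zeta_b^2 + k4 zeta_b^4 in the renormalized coupling gives k4' = k4 + k2
k4p = k4 + k2
assert abs(k4p - 1/(128*pi2)) < 1e-15

for zeta in (0.1, 0.4, 0.8):
    zb = zeta + 0.5*zeta**3
    total = k2*zb**2 + k4*zb**4
    expected = k2*zeta**2 + k4p*zeta**4
    assert abs(total - expected) < 0.01*zeta**6     # agreement up to O(zeta^6)
```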
\iffalse \footnote{ \red{Potential issue:} if we assume that the 4-point function $G_{4}(\bm{\tau}) = \llangle\Phi_6(\tau_1)\,\Phi_6(\tau_2)\Phi_6(\tau_3)\,\Phi_6(\tau_4)\rrangle$ is symmetric under all permutations $\tau_{i}\leftrightarrow \tau_{j}$ as a consequence of crossing symmetry, then the relevant integral contributing to $\langle W^{(\zeta)}\rangle_{\zeta^{4}}$ is the unrestricted integral $\int_{0}^{2\pi} d^{4}\bm\tau\,G_{4}(\bm \tau)$ and each factorized contribution is proportional to the square $(\int d\tau \llangle\Phi_{6}(\tau)\Phi_{6}(0)\rrangle)^{2}$ that vanishes at tree level, {\em} cf. for instance (\ref{413}) that is zero for $\gamma\to 0$. This argument is not valid because crossing symmetry of $G_{4}(\bm{\tau})$ is not manifest at weak coupling where the planarity constraint forbids the crossed exchange diagram. The situation is different at strong coupling where all three channels contribute, see for instance the discussion in footnote 9 of \cite{Giombi:2017cqn}. \red{A possibly related fact} is the remark that the sum of all channels, so including the non planar one, is zero at lowest order in 4 dimensions, see (6.3) and (6.4) of \cite{Bianchi:2013rma}. } \fi \iffalse for operator like $(\Phi)^J$, e.g., $J=1$ for $\Phi^I$.. At weak coupling for $J=1$: $\Delta=1 - {\lambda \over 8 \pi} + O(\lambda^2)$. This is like BMN: expansion near geodesic in $S^5$ (com or zero mode quantization) or rather dim of vertex operator $e^{i J\varphi }$ as in open string in flat space of $- {1\over {\sqrt \l}} \def \D {\Delta} \nabla^2$ operator value as suggested in \cite{Polchinski:2011im}. This is then different pattern from the one suggested in \cite{Giombi:2017cqn} for operators on WML where $\Phi^i$ was directly represented by string coordinates $y^i$. In WL case ${\rm AdS}_2$ theory should have $O(6)$ invariance and susy should be broken. 
$\Delta=0$ for $S^5$ scalars suggests that we need vertex operators or derivatives of them -- integrate over zero mode... may be in WL case with $\Delta=0$ better use $\partial y^i$ as operators with dim 1 -- again interpolation to scalars at weak coupling. generalization to z(t) ? funct deriv \fi \section{Strong coupling expansion }\label{secstr} As discussed in \cite{Alday:2007he,Polchinski:2011im}, the AdS$_5 \times S^5\ $ string description of the standard Wilson loop should be given by the path integral with Dirichlet boundary condition along the boundary of $AdS_5$ and Neumann (instead of Dirichlet for the Wilson-Maldacena loop) condition for the $S^5$ coordinates. The case of the generalized WL \rf{0} may then correspond to mixed boundary conditions \cite{Polchinski:2011im}. Below we shall first discuss the subleading strong-coupling correction to the standard WL ($\zeta=0$) comparing it to the more familiar WML ($\zeta=1$) case and then consider the general $\zeta$ case. The strong coupling expansion of the straight-line or circular WL will be represented by the string partition function with the same ${\rm AdS}_2$ world-sheet geometry as in the WML case \cite{Berenstein:1998ij}. As the ${\rm AdS}_2$ is a homogeneous-space, the log of the string partition function should be proportional to the volume of ${\rm AdS}_2$ \cite{Drukker:2000ep,Buchbinder:2014nia}. In the straight-line case the volume of ${\rm AdS}_2$ with infinite ($L\to \infty$) line as a boundary is linearly divergent as ${L\over a}$. Thus the straight-line WL should be given just by an exponent of this linear 2d IR divergence. Linear UV divergences in WL for a smooth contour are known to factorize in general at weak coupling \cite{Dotsenko:1979wb}. After the separation of this linear divergence the straight-line WL should be thus equal to 1 as in the case of the locally-supersymmetric WML. The same should be true for the generalized WL \rf{0}. 
Similar arguments apply in the case of the circular WL where the minimal surface is again the ${\rm AdS}_2$ but now with a circle as its boundary. In this case the volume is (we fix the radius to be 1) \begin{equation} \label{2.1} V_{\rm AdS_2} = 2 \pi \Big( {1\over a} - 1\Big) \ , \end{equation} i.e. has a finite part and thus the expectation value may be a non-trivial function of the string tension ${\sqrt{\lambda}\over 2\pi} $. After factorizing the linearly divergent factor, the leading strong-coupling term will then have a universal $\sqrt{\lambda}$ form \begin{equation} \label{21} \langle W^{(\zeta)} \rangle \equiv e^{-F^{(\zeta)} (\lambda) } \ , \qquad \qquad F^{(\zeta)} =-\sqrt{\lambda} + F^{(\zeta)}_1 + {\cal O}( \tfrac{1}{\sqrt{\lambda}}) \ . \end{equation} The subleading terms $F^{(\zeta)}_1+ ...$ will, however, differ due to the different boundary conditions in the $S^5$ directions. \iffalse \footnote{ puzzle with strong-coupling claim that < WL> ~ exp sqrt lambda just like for BPS one. in BPS case tr P exp i A + P is not unitary matrix due to scalar coupling so it may not be surprising that at strong coupling < W > >>1 instead of < 1. But for standard WL we expect |< W >| <1 always? The resolution is "no" due to linear divergences -- exp (- sql ( 1/eps -1) ) indeed < 1 for any finite cutoff; it is the renormalized value that behaves strangely, but there is no paradox. } \fi \subsection{Standard Wilson loop}\label{sec4.1} Let us consider the 1-loop string correction in the standard WL case following the same approach as used in the WML case in \cite{Drukker:2000ep,Kruczenski:2008zk,Buchbinder:2014nia}.
As the fluctuation determinants for all the 2d fields (3 AdS$_5$ bosons with $m^2$=2, 8 fermions with $m^2=1$ and ghosts) except the $S^{5}$ massless scalars are the same, the ratio of the WML and WL expectation values \rf{1111} should be proportional to the ratio of the 1-loop string partition functions with the Dirichlet and Neumann boundary conditions in the five $S^5$ directions: \begin{equation}\label{2.2} \frac{\langle {\rm W}\rangle} {\langle W^{(0)} \rangle} = { e^{ - F^{(1)} } \over e^{- F^{(0)}}} = \ {\cal N}_0^{-1} \ \Big[\frac{\det (-\nabla^{2})_{\rm D}}{ \det' (-\nabla^{2})_{\rm N}}\Big]^{-5/2}\, \Big[1+\mathcal O({1\over \sqrt\lambda})\Big] \ . \end{equation} Here $-\nabla^{2}$ is the massless scalar wave operator in AdS$_{2}$ and ${\cal N}_0$ is the normalization factor of the $S^5$ zero modes present in the Neumann case \begin{equation} {\cal N}_0 = c_0\, (\sqrt{\lambda} )^{5/2} \ , \label{232} \end{equation} with $c_0$ being a numerical constant (representing the contributions of the renormalized volume of ${\rm AdS}_2$ and the volume of $S^5$).\footnote{In general, one is to separate the 0-mode integral and treat it exactly (cf. \cite{Miramontes:1990nf}).} The 1-loop corrections to $F^{(\zeta)}$ are thus related by \begin{equation} \label{2.3} F^{(1)}_{1}- F^{(0)}_{1} = 5\,\Big[ \tfrac{1}{2}\log \det (-\nabla^{2})_{\rm D}-\tfrac{1}{2}\log{\rm det}' (-\nabla^{2})_{\rm N}\Big] + \log {\cal N}_0 \ .
\end{equation} To compute this correction we may use the general result for the difference of effective actions with standard (D or +) and alternate (N or -) boundary conditions for a scalar with mass $m$ in AdS$_{d+1}$ \cite{Hartman:2006dy,Diaz:2007an} \begin{align} \delta \Gamma &= \Gamma_+ -\Gamma_- = \tfrac{1}{2}\log\det (-\nabla^2+m^{2})_{\rm D} -\tfrac{1}{2}\log\det (-\nabla^2+m^{2})_{\rm N} \ \no \\ \label{2.5} & = \tfrac{1}{2}\sum_{\ell=0}^{\infty} c_{d,\ell}\,\log \frac{\Gamma(\ell+\frac{d}{2}-\nu)}{\Gamma(\ell+\frac{d}{2}+\nu)},\qquad \qquad c_{d, \ell} = (2\ell+d-1)\,\frac{(\ell+d-2)!}{\ell!(d-1)!} \ , \end{align} where $\nu$ is defined by \begin{equation} \label{2.6} m^2 = \Delta(\Delta-d) \ , \ \ \ \ \ \ \Delta_{\pm} = \frac{d}{2}\pm\nu, \qquad \nu\equiv \sqrt{\frac{d^{2}}{4}+m^{2}} \ . \end{equation} In the present case of $d=1$ and $m=0$ the $\ell=0$ term with $\Gamma(\ell+\frac{d}{2}-\nu)$ is singular and should be dropped: this corresponds to projecting out the constant 0-mode present in the Neumann case. Then in the limit $d\to 1$ and $\nu\to {1\over 2} $ in (\ref{2.5}) we get (projecting out the 0-mode) \begin{equation} \label{2.7} \delta\Gamma' = -\sum_{\ell=1}^{\infty}\log \ell = \lim_{s\to 0}\frac{d}{ds}\sum_{\ell=1}^{\infty}\ell^{-s} = \zeta_{\rm R}'(0) = -\tfrac{1}{2} \log(2\pi) \ . \end{equation} One may also give an alternative derivation of \rf{2.7} using the relation between the AdS$_{d+1}$ bulk field and $S^d$ boundary conformal field partition functions: $ {Z_{-}}/{Z_{+}} = Z_{\rm conf}$ (see \cite{Diaz:2007an,Giombi:2013yva,Beccaria:2014jxa}). For a massive scalar in AdS$_{d+1}$ associated to an operator with dimension $\Delta_+$, the boundary conformal (source) field has canonical dimension $\Delta_- = d - \Delta_{+}$ and thus its kinetic term is $\int d^d x \, \varphi (-\partial^2)^\nu \varphi$, with $\nu= \Delta_+ - {d\over 2} $.
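The zeta-function value used in \rf{2.7} can be checked numerically, e.g. via a simple Euler--Maclaurin evaluation of $\zeta_{\rm R}(s)$ near $s=0$ (a stdlib-only sketch; the helper is ours):

```python
import math

def zeta_em(s, N=2000):
    """Riemann zeta via Euler-Maclaurin: sum_{l<=N} l^{-s} plus tail corrections
    N^{1-s}/(s-1) - N^{-s}/2 + s N^{-s-1}/12, accurate near s = 0."""
    tail = N**(1 - s)/(s - 1) - 0.5*N**(-s) + s*N**(-s - 1)/12.0
    return sum(l**(-s) for l in range(1, N + 1)) + tail

# regularized sum: -sum log l "=" zeta_R'(0) = -(1/2) log(2 pi)
h = 1e-3
zeta_prime_0 = (zeta_em(h) - zeta_em(-h)) / (2*h)
assert abs(zeta_prime_0 - (-0.5*math.log(2*math.pi))) < 1e-4
```

The same regularized value $\prod_{\ell\ge1}\ell \to \sqrt{2\pi}$ underlies the determinant in \rf{2.9}.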
In the present case of the massless scalar in ${\rm AdS}_2$ we have $d=1$, $\Delta_+ = 1, \ \Delta_-=0$ and $\nu= \tfrac{1}{2} $. The induced boundary CFT thus has the kinetic operator $\partial\equiv (-\partial^2)^{1/2} $ defined on $S^1$, and we find again \rf{2.7} \begin{equation} \label{2.9} \delta\Gamma' = -\log \frac{Z_{+}}{Z_{-}} = - \tfrac{1}{4} \log {\rm det}' ( - \partial ^2) = -\sum_{\ell=1}^{\infty}\log \ell \ , \end{equation} where we fixed the normalization constant in the $S^1$ eigenvalue to be 1. It is interesting to note that the zero-mode contribution in \rf{2.3} may be included automatically by ``regularizing'' the $m\to 0$ or $\nu\to \tfrac{1}{2}$ limit in \rf{2.5},\rf{2.6}. One may expect that for the Neumann boundary conditions, which are non-supersymmetric in the world-sheet theory \cite{Polchinski:2011im}, the massless $S^5$ scalars $y^a$ may get a 1-loop correction to their mass, $m^2 = - { k\over \sqrt{\lambda}} + {\cal O}\big( { 1 \over (\sqrt{\lambda})^2}\big) \to 0$.\footnote{This correction may be found by computing the 1-loop contribution to the propagator of $y^a$ in the ${\rm AdS}_2$ background. A similar correction to the scalar propagator with alternate b.c. should appear in higher spin theories in the context of vectorial AdS/CFT (there the effective coupling is $1/N$ instead of $1/\sqrt{\lambda}$), see e.g. \cite{Giombi:2017hpr}. Note that having a correction to the mass of a world-sheet excitation here does not run against the usual 2d conformal invariance constraint as we are expanding near a non-trivial background and are effectively in a physical gauge where the conformal freedom is fixed (cf. \cite{Giombi:2010bj}). } Then $\nu = \tfrac{1}{2} - { k\over \sqrt{\lambda}} + ...$ and $\Delta_- = { k\over \sqrt{\lambda}} + ...$; for agreement with \rf{500} we need to fix $k=5$.
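The shifted exponents quoted above follow directly from \rf{2.6} at $d=1$: writing $m^2=-\epsilon$ with $\epsilon=k/\sqrt{\lambda}$, one gets $\nu=\tfrac{1}{2}-\epsilon+{\cal O}(\epsilon^2)$ and $\Delta_-=\tfrac{1}{2}-\nu=\epsilon+{\cal O}(\epsilon^2)$. A quick symbolic check (a sketch using sympy, illustrative only):

```python
import sympy as sp

e = sp.symbols('epsilon', positive=True)   # epsilon = k/sqrt(lambda), the small 1-loop mass shift
nu = sp.sqrt(sp.Rational(1, 4) - e)        # nu = sqrt(d^2/4 + m^2) at d=1 with m^2 = -epsilon

# nu = 1/2 - epsilon + O(epsilon^2)
nu_ser = sp.series(nu, e, 0, 2).removeO()
assert sp.simplify(nu_ser - (sp.Rational(1, 2) - e)) == 0

# Delta_- = d/2 - nu = epsilon + O(epsilon^2)
delta_minus = sp.series(sp.Rational(1, 2) - nu, e, 0, 2).removeO()
assert sp.simplify(delta_minus - e) == 0
```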
We then get an extra $ -\tfrac{1}{2} \log |m^2| = - \tfrac{1}{2} \log { k\over \sqrt{\lambda}} $ term from the $\ell=0$ term in \rf{2.5}, i.e. \begin{equation} \delta \Gamma = \delta \Gamma' - \tfrac{1}{2} \log |m^2| = -\tfrac{1}{2} \log(2\pi) + \tfrac{1}{2} \log \sqrt{\lambda} - \tfrac{1}{2} \log k \ . \label{2111}\end{equation} This agrees with \rf{2.3},\rf{2.7} if we set $c_0 = k^{-5/2}$. Finally, from \rf{2.3},\rf{2.5} we find \begin{equation} \label{2.11} F^{(0)}_{1}= F^{(1)}_{1} - 5\delta\Gamma = F^{(1)}_{1} - 5\,\delta\Gamma' - \log {\cal N}_0 = F^{(1)}_{1} + \tfrac{5}{2}\, \log (2 \pi) - \big(\tfrac{5}{2}\, \log \sqrt{\lambda} + \log c_0\big) \ . \end{equation} Let us now recall that the direct computation of the determinants in the string 1-loop partition function for the circular WML gives (after using \rf{2.1} and separating out the linear divergence) \cite{Drukker:2000ep,Kruczenski:2008zk,Buchbinder:2014nia} (see also \cite{Kristjansen:2012nz,Bergamin:2015vxa,Forini:2017whz}) \begin{equation} \label{2.10} F_{1}^{(1)} = \tfrac{1}{2}\log(2\pi)\ . \end{equation} At the same time, the exact gauge-theory result \rf{1.1} for the WML implies that the total correction to the leading strong-coupling term should, in fact, be \begin{equation} \label{2.12} F_{1\, \rm tot}^{(1)} =\ \tfrac{1}{2}\log(2\pi) - \log 2 +\tfrac{3}{2}\log\sqrt\lambda\ . \end{equation} The ${3\over 2} \log\sqrt\lambda$ term may be attributed to the normalization of the three M\"obius symmetry zero modes on the disc \cite{Drukker:2000rr}, but the remaining $\log 2$ difference still remains to be understood.
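For reference, the exact planar result behind \rf{1.1} is the well-known Bessel-function expression $\langle {\rm W}\rangle = \frac{2}{\sqrt{\lambda}}\, I_1(\sqrt{\lambda})$ \cite{Erickson:2000af,Drukker:2000rr}, whose $\lambda\gg 1$ asymptotics $\langle {\rm W}\rangle \simeq \sqrt{2/\pi}\,(\sqrt{\lambda})^{-3/2}\, e^{\sqrt{\lambda}}$ is what produces the $\tfrac{3}{2}\log\sqrt{\lambda}$ term in \rf{2.12}. A numerical sketch (using mpmath; illustrative only):

```python
from mpmath import mp, besseli, sqrt, exp, pi, mpf

mp.dps = 30

def W_wml(lam):
    """Planar WML expectation value <W> = (2/sqrt(lam)) I_1(sqrt(lam))."""
    x = sqrt(lam)
    return 2 / x * besseli(1, x)

# strong-coupling asymptotics sqrt(2/pi) * x^(-3/2) * e^x, with x = sqrt(lambda)
x = mpf(100)                               # i.e. lambda = 10^4
asym = sqrt(2 / pi) * x**mpf('-1.5') * exp(x)
assert abs(W_wml(x**2) / asym - 1) < 0.01  # agreement already at the percent level
```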
It is then natural to conjecture that for the standard WL expanded at strong coupling the total value of the subleading term should be given by \rf{2.11} with the first term replaced by \rf{2.12}, {\em i.e.} \begin{equation} \label{2.13} F_{1 \, \rm tot}^{(0)} = F_{1\, \rm tot}^{(1)} + \tfrac{5}{2}\log(2\pi) - \log {\cal N}_0 = \ 3 \log(2\pi) - \log (2 c_0) - \log\sqrt\lambda \ . \end{equation} We then conclude that while the leading $\lambda \gg 1$ prediction for the log of the expectation value $\tilde F^{(\zeta)} \equiv \log \langle W^{(\zeta)} \rangle = - F^{(\zeta)}_{\rm tot} $ for the circular WML and WL is the same, $ \sqrt{\lambda}$ in \rf{21}, the subleading term in $\tilde F^{(0)}$ is larger than that in $\tilde F^{(1)}$ by $ \log {\cal N}_0 = {5\over 2} \log\sqrt\lambda +...$. This appears to be in agreement with the similar behavior \rf{190} observed at weak coupling and thus with the 1d analog of the F-theorem \rf{110}. While the strong-coupling behaviour of the WML, $ \langle {\rm W} \rangle \sim (\sqrt{\lambda})^{-3/2} e^{\sqrt{\lambda}} + ... $, follows from the exact Bessel function expression in \rf{1.1}, one may wonder which special function may give the above strong-coupling asymptotics $ \langle W^{(0)} \rangle \sim \sqrt{\lambda} \, e^{\sqrt{\lambda}} + ... $ of the standard WL. \subsection{General case}\label{sec4.2} Turning to the case of generic $0 < \zeta < 1$, one may imagine computing $\langle W^{(\zeta)}(\lambda)\rangle $ exactly to all orders in the weak-coupling expansion and expressing it in terms of the renormalized coupling $\zeta$ (in some particular scheme). One may then re-expand the resulting function at strong coupling (as in \rf{1.1}), expecting to match $F^{(\zeta)}_1$ in \rf{21} with \rf{2.12} and \rf{2.13} at the two conformal points.
A way to set up the strong-coupling (string-theory) computation for an arbitrary value of $\zeta$ may not be a priori clear, as non-conformal WL operators need not have a simple string-theory description. Below we shall develop a heuristic but rather compelling suggestion of \cite{Polchinski:2011im}. Starting with the AdS$_5 \times S^5$ string action and considering a minimal surface ending, e.g., on a line at the boundary of AdS$_5$, we may choose a static string gauge where $x^0=\tau, \ z= \sigma$ so that the induced metric is the ${\rm AdS}_2$ one: \ $ds^2= {1\over \sigma^2} ( d\sigma^2 +d\tau^2)$; in what follows we identify $z$ and $\sigma$.\footnote{ In the case of the circular boundary $ds^2= {1\over \sinh^2 \sigma} (d\sigma^2 + d\tau^2)$.} Let the 5 independent $S^5$ coordinates be $y^a$ (with the embedding coordinates being, e.g., $Y_a= { y_a\over 1 + {1\over 4} y^2}, \ Y_6= { 1 - {1\over 4} y^2\over 1 + {1\over 4} y^2}$, $ds^2_{S^5} = {dy_a dy_a \over (1 + {1\over 4} y^2)^2}$). In the WL case they are subject to the Neumann condition $\partial_z y^a\big|_{z\to 0} =0 $. One may then start with this Neumann ({\em i.e.}\ standard WL) case and perturb the corresponding string action $I^{(0)}$ by a boundary term that should induce the flow towards the other (Dirichlet or WML) fixed point \begin{align} \label{4a} & I(\varkappa ) = I^{(0)} + \delta I \ , \ \ \ \ \ I^{(0)} = T \int d\tau d z \big( \tfrac{1}{2} \sqrt h\, h^{pq} \partial_p y^a \partial_q y^a + ...\big) \ , \ \ \ \qquad T= {\sqrt{\lambda} \over 2\pi} \ , \\ & \delta I= - \varkappa \, T \int d\tau \, Y_6 \ , \ \ \ \ \qquad Y_6 = \sqrt{1- Y_a Y_a}= 1 - \tfrac{1}{2} y_a y_a + ... \label{4aa} \ . \end{align} In $ I^{(0)}$ we give only the part depending quadratically on the $S^5$ coordinates, and $h_{pq}$ is the induced ${\rm AdS}_2$ metric.
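The embedding formulas above and the quadratic expansion of $Y_6$ used in \rf{4aa} can be checked symbolically. A small sympy sketch (illustrative):

```python
import sympy as sp

y = sp.symbols('y1:6')                    # the 5 independent S^5 coordinates y^a
r2 = sum(yi**2 for yi in y)
f = 1 + r2 / 4
Ya = [yi / f for yi in y]                 # embedding coordinates Y_a
Y6 = (1 - r2 / 4) / f

# unit-sphere constraint: Y_a Y_a + Y_6^2 = 1
assert sp.cancel(sum(Y**2 for Y in Ya) + Y6**2 - 1) == 0

# Y_6 = 1 - (1/2) y_a y_a + O(y^4): the quadratic term in the boundary coupling delta I
t = sp.symbols('t')
ser = sp.series(Y6.subs([(yi, t * yi) for yi in y]), t, 0, 3).removeO()
assert sp.expand(ser - (1 - t**2 * r2 / 2)) == 0
```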
Here $\varkappa$ is a new coupling constant which should be a strong-coupling counterpart of $\zeta$: $\varkappa =0$ should correspond to $\zeta=0$ and $\varkappa =\infty$ to $\zeta=1$. $Y_6$ is then the counterpart of the operator $\Phi_6$ in \rf{0} perturbing the $\zeta=0$ conformal point at weak coupling. Note that for the ${\rm AdS}_2$ metric $ds^2= z^{-2} ( dz^2 + d \tau^2)$ with the boundary at $z=a \to 0$ the boundary metric is $ds= a^{-1} d \tau $, and thus it may be more natural to write $\delta I$ in \rf{4aa} as $\delta I= - \kappa\, T \int ds \, Y_6$, so that $ \varkappa= a^{-1} \kappa$. Then $\kappa$ will always appear together with the ${\rm AdS}_2$ IR cutoff factor $a^{-1}$ which, on the other hand, can also be interpreted -- from the world-sheet theory point of view -- as playing the same role as a UV cutoff $\Lambda$. The variation of the action $ I(\varkappa )$ implies that, to linear order, $y^a$ should satisfy the massless wave equation in ${\rm AdS}_2$ (so that near the ${\rm AdS}_2$ world-sheet boundary $y^a= z^{\Delta_+} u^a + z^{\Delta_-} v^a+ {\cal O}(z^2) = z\, u^a + v^a + {\cal O}(z^2) $) subject to the mixed (Robin) boundary condition\footnote{The tangent vector to the boundary is $t^p=( 0, z)$ and the outward normal to the boundary is $n^p= (-z, 0)$, so that $h^{pq} = n^p n^q + t^p t^q$.} \begin{equation} \label{552} (-\partial_z + \varkappa ) y^a \big|_{z\to
0} =0\ , \ \ \ \ {\rm i.e.} \qquad - u^a + \varkappa \, v^a =0 \ . \end{equation} The parameter $ 0 \leq \varkappa \leq \infty$ thus interpolates between the Neumann and Dirichlet boundary conditions. Note that in general one may add in \rf{4aa}, instead of $Y_6$, any linear combination $\theta^m Y_m $ with $\theta_m^2=1$ (cf. \rf{0}), writing the $S^5$ part of \rf{4a} as $\partial^p Y^m \partial_p Y_m$ with $Y^m Y_m =1$. Then the boundary condition becomes $ \big[ -\partial_z Y_m + \varkappa (\theta_m - \theta^k Y_k Y_m) \big] \big|_{z\to 0} =0$. For $\theta_m$ along the 6-th axis this reduces to \rf{552} to linear order in $y_a$. Like $\zeta$ at weak coupling \rf{111}, the new boundary coupling $\varkappa $ will need to be renormalized, i.e. it will be running with the 2d UV scale.\footnote{As already mentioned above, in the present case of the boundary of the ${\rm AdS}_2$ world sheet being at $z\to 0$ it is natural to add to the boundary term a factor of $z^{-1} =a^{-1} \to \infty$ that may then be interpreted as playing the same role as the world-sheet UV cutoff $\Lambda$; then this running may be interpreted as a flow with the ${\rm AdS}_2$ cutoff.} In \rf{4aa} $\varkappa$ is a renormalized coupling of effective mass dimension 1. In general, in the bare action one should have $\delta I_{{\rm b}} = - \Lambda \varkappa_{\rm b} T \int d \tau\, Y_6$, where $\Lambda \varkappa_{\rm b}= \mu \varkappa \big[ 1 + K\big({1 \over \sqrt{\lambda}}\big) \log { \Lambda\over \mu}\big] + ... $, with $\Lambda\to \infty$ being a UV cutoff and $\varkappa_{\rm b}$ and $\varkappa$ being dimensionless. We may choose the renormalization scale $\mu$ to be fixed as $\mu= R^{-1}$ in terms of the radius $R$ and set $R=1$, i.e.
measuring scales in units of $R$; then we may effectively treat $\varkappa$ as dimensionless.\footnote{In the case of the circular boundary the dependence on the radius $R$ that drops out at the conformal points remains for a generic value of $\varkappa $ or $\zeta$. One may fix, for example, $\mu R=1$ as a renormalization condition, or rescale $\varkappa$ by $R$ to make it dimensionless.} The dimensionless renormalized $\varkappa$ should be a non-trivial (scheme-dependent) function of the renormalized dimensionless parameter $\zeta$ and the string tension or 't Hooft coupling $\lambda$ \begin{equation} \label{4e} \varkappa={{\rm f}} (\zeta; \lambda) \ , \ \qquad\ \ \ \ \ {{\rm f}} (0; \lambda) =0 \ , \ \ \ {{\rm f}} (1; \lambda\gg 1) =\infty \ . \end{equation} Lack of information about this function prevents a direct comparison of the weak-coupling and strong-coupling pictures. Just as an illustration, one may assume that at large $\lambda$ one has $ \varkappa = { \zeta \over 1-\zeta} $, ensuring the right limits (cf. \rf{552}). The boundary $\varkappa$-term in \rf{4a} may be viewed as a special case of an ``open-string tachyon'' coupling depending on the $S^5$ coordinates: \begin{align}\label{44d} &\delta I_{\rm b} = \Lambda \int d \tau\, {\cal T}_{\rm b} (y) \ , \ \ \ \ \ \Lambda {\cal T}_{\rm b} = \mu \big[ {\cal T} - \log {\Lambda\over \mu}\, ( \alpha' D^2 + ...) {\cal T} + ...\big]\ , \\ &\beta_{\cal T} =\mu { d {\cal T} \over d \mu} = - {\cal T} - \alpha' D^2 {\cal T} + ... \ , \ \ \ \ \ \ \ \alpha' = {R^2\over \sqrt{\lambda}} \ . \label{44e} \end{align} Here $D^2$ is the Laplacian on $S^5$ (of radius $R$ that we set to 1) and $\beta_{\cal T}$ is the corresponding renormalization group function \cite{Callan:1986ja,Tseytlin:1986tt}.\footnote{The similar expression for the closed-string tachyon beta-function has the familiar extra factors of 2 and $\tfrac{1}{2}$: $\beta_{\rm T} = -2 {\rm T} - \tfrac{1}{2} \alpha' \nabla^2 {\rm T} + ...$.
} The ${\cal T}= \varkappa Y_6 $ term in $I(\varkappa)$ in \rf{4a} is an eigenfunction of the Laplacian, $D^2 Y_6 = -5\, Y_6$ ({\em e.g.} for small $y_a$ one has $D^2 Y_6 =(\partial_y^2+ ...) ( - \tfrac{1}{2} y_a y_a +...)= -5 + ...$). \footnote{ In general, the eigenfunctions of the Laplacian on $S^5$ are $C_{m_1 ... m_J} Y^{m_1} ... Y^{m_J}$ (where $C_{m_1 ... m_J} $ is totally symmetric and traceless) with $-D^2$ eigenvalue $J(J+ 4)$. For example, one may consider $(Y_1 + i Y_2)^J$. In the $J=1$ case we may choose any linear combination $C_m Y^m$ or any of the six $Y_m$, which will have the eigenvalue $5$. } As a result, we should expect to find that $\varkappa$ is renormalized according to \begin{equation} \label{40c} \Lambda \varkappa_{\rm b} = \mu \varkappa \big( 1 + {5\over \sqrt{\lambda}} \log {\Lambda\over \mu} + ... \big) \ , \ \ \ \ \ \ \beta_\varkappa = \mu {d \varkappa \over d \mu} =\big( - 1 + {5\over \sqrt{\lambda}} + ...\big)\varkappa + ... \ . \end{equation} This beta-function then gives another derivation of the strong-coupling dimension \rf{500} of the perturbing operator near the WL ($\zeta=0$) or $\varkappa=0$ fixed point: the coefficient of the linear term in the beta-function should be the anomalous dimension or $\Delta-1$.\footnote{ To recall, the argument of \cite{Alday:2007he} for the strong-coupling dimension $\Delta(0) = {5\over \sqrt{\lambda}} + ...$ of the scalar operator on the WL was based on considering ${\rm AdS}_2$ in global coordinates as conformal to a strip, $ds^2 = {1 \over \sin^2 \sigma} ( dt^2 + d \sigma^2) $, where $0 \leq \sigma < \pi$. Then the Hamiltonian with respect to global time is the dilatation operator, the mode constant in $\sigma$ should be the primary, and its energy is the conformal dimension.
The Hamiltonian of a quantized massless particle moving on $S^5$ is then proportional to the Laplacian on $S^5$, with the eigenvalue ${\alpha' \over R^2} J (J+4)$, the present case being that of $J=1$ (in the $\zeta=0$ case the dimension of all 6 scalars is the same due to the unbroken $O(6)$ symmetry).} This operator, identified as $\Phi_6$ from the weak-coupling point of view, is thus naturally associated with the quadratic $y_a y_a$ perturbation in \rf{4aa} \cite{malda,Giombi:2017cqn}. Note that in the opposite WML ($\zeta=1$) or $\varkappa\to \infty$ limit we may expect to find the same linear beta-function but with the opposite coefficient, as seen by rewriting the RG equation in \rf{40c} as $\mu {d \varkappa^{-1} \over d \mu} = - \big( - 1 + {5\over \sqrt{\lambda}} + ...\big)\varkappa^{-1} + ...$, with now $\varkappa^{-1} \to 0$ (an alternative is to reverse the UV and IR limits, i.e. $\log \mu \to -\log \mu$). Then the strong-coupling dimension of $\Phi_6$ should be given by $\Delta-1= 1 - {5\over \sqrt{\lambda}} + ...$ in agreement with \rf{50}. Another way to derive \rf{40c} is to use the general expression for the divergence of the determinant of a 2d scalar Laplacian in a curved background subject to the Robin boundary condition $(\partial_n + \kappa) \phi\big|_{\partial} =0 $ as in \rf{552} \cite{McKean:1967xf,Kennedy:1979ar} (see also Appendix B in \cite{Fradkin:1982ge} for a review) \begin{align} \label{266} & \Gamma_\infty= \tfrac{1}{2} \log \det ( -\nabla^2 + X)\Big|_{\infty} = - \tfrac{1}{2} A_0 \Lambda^{2} - A_1 \Lambda - A_2 \log\Lambda \ , \\ & A_0 = {1 \over 4 \pi} \int d^2 x \sqrt g \ , \ \ A_1 = {1\over 8 \sqrt \pi} \int_{\partial} ds \ , \ \ \ \ \ A_2 = { 1\over 6} \chi - {1 \over 4 \pi} \int d^2 x \sqrt g\, X - {1\over 2 \pi} \int_{\partial} ds \, \kappa\ . \nonumber \end{align} Here $\chi$ is the Euler number and $L=\int_{\partial} ds $ is the length of the boundary. In the present massless case $X=0$ and for the Euclidean ${\rm AdS}_2$ we have $\chi=1$.
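The eigenvalue statement used above, $-D^2$ eigenvalue $5$ for the $J=1$ harmonic $Y_6$, can be verified directly in the coordinates of \rf{4a}, where the round metric is $g_{ab}=\delta_{ab}\,(1+\tfrac{1}{4}y^2)^{-2}$. A sympy sketch (illustrative):

```python
import sympy as sp

y = sp.symbols('y1:6')
r2 = sum(yi**2 for yi in y)
f = 1 + r2 / 4                 # ds^2_{S^5} = dy_a dy_a / f^2
Y6 = (1 - r2 / 4) / f

# Laplace-Beltrami operator for the conformally flat metric g_ab = delta_ab/f^2 on S^5:
# sqrt(g) = f^{-5}, g^{ab} = f^2 delta^{ab}, so D^2 = f^5 d_a (f^{-3} d_a .)
D2_Y6 = f**5 * sum(sp.diff(sp.diff(Y6, yi) / f**3, yi) for yi in y)

# J=1 spherical harmonic: D^2 Y_6 = -J(J+4) Y_6 = -5 Y_6
assert sp.cancel(D2_Y6 + 5 * Y6) == 0
```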
For the circular boundary at $z=a\to 0 $ we have (for $R=1$) $L=2\pi a^{-1}$. To compare this to \rf{552} we note that for an outward normal to the boundary of ${\rm AdS}_2$ we have $(\partial_n + \kappa) \phi\big|_{\partial} = ( - z \partial_z + \kappa) \phi \big|_{z=a} $, so that we need to identify $ a^{-1} \kappa$ with $\varkappa$ in \rf{552}. Taking into account the factor of 5 for the massless scalars $y_a$ we thus find the same $\varkappa \log \Lambda$ divergence as in \rf{40c}.\footnote{Note that \rf{266} directly applies only for a finite non-zero $\varkappa$ (including $\varkappa=0$ of the Neumann condition). In the Dirichlet case ($\kappa \to \infty$) the sign of $A_1$ is reversed and the boundary contribution to the logarithmic divergence (the last term in $A_2$) is absent. Thus the D-limit or $\varkappa\to \infty$ cannot be taken directly in \rf{266} (see also \cite{Dowker:2004mq}). The logarithmic $\chi$ divergence and the quadratic divergence are universal, so they cancel in the difference of effective actions with different boundary conditions.
The linear divergence has the opposite sign for the Dirichlet as compared to the Neumann or Robin b.c.; this means it cancels in the difference \rf{2.56} of the effective actions for the Robin and Neumann conditions.} Explicitly, in the case of 5 massless scalars in AdS$_{d+1}$ with a spherical boundary and the mixed boundary conditions \rf{552} the analog of \rf{2.5} gives \cite{Hartman:2006dy,Diaz:2007an} (see eqs.~(3.2),(5.2) in \cite{Diaz:2007an}) \begin{equation} \label{2.55} F_1(\varkappa )- F_1(0) = \tfrac{5}{2}\sum_{\ell=0}^{\infty} c_{d,\ell}\,\log\big( 1 + \varkappa \, q_\ell\big) \ , \qquad \qquad q_\ell ={ 2 ^{2\nu} \ \Gamma(1+\nu) \over \Gamma( 1 - \nu) } \frac{\Gamma(\ell+\frac{d}{2}-\nu)}{\Gamma(\ell+\frac{d}{2}+\nu)}\, R^{2 \nu} \ , \end{equation} where $c_{d, \ell}$ is the same as in \rf{2.5} and in the present $d=1$ case $c_{d,0} = 1, \ c_{d,\ell>0}=2$. Since $\varkappa$ and $\zeta$ are related by \rf{4e}, the connection to the previous notation in \rf{2.3},\rf{2.11},\rf{2.10} is \begin{equation} \label{5522} F(\varkappa) \equiv F^{(\zeta)} \ , \qquad F(\infty) \equiv F^{(1)} \ , \ \ \ \ \ \ \ \ \ F(0) \equiv F^{(0)} \ . \end{equation} Then from \rf{2.55} \begin{equation} \label{2.56} F_1(\varkappa )- F^{(0)}_1 = 5 \sum_{\ell=1}^{\infty} \log\big( 1 + {\varkappa }\, {\ell}^{-1} \big) + {5\over 2} \log \big(1 + \varkappa\, |m^{-2}|\big) \ . \end{equation} Here we effectively set the radius $R$ to $1$, absorbing it into $\varkappa$ (which will then be dimensionless), and isolated the contribution of the $\ell=0$ mode (using that for $m^2\to 0$ we have $\nu= \tfrac{1}{2} + m^2 + ...$). The limit $\varkappa \to 0$ of \rf{2.56} is smooth provided it is taken before the $m^2\to 0$ one.
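The step from \rf{2.55} to \rf{2.56} uses that at $d=1$, $\nu=\tfrac{1}{2}$ and $R=1$ the coefficients collapse to $q_\ell = 1/\ell$ for $\ell\geq 1$: the prefactor is $2\,\Gamma(\tfrac{3}{2})/\Gamma(\tfrac{1}{2})=1$ and $\Gamma(\ell)/\Gamma(\ell+1)=1/\ell$. A numerical sketch (using mpmath; illustrative only):

```python
from mpmath import mp, gamma, mpf

mp.dps = 25

def q(ell, nu, d=1, R=1):
    """The coefficient q_ell entering F_1(kappa) - F_1(0)."""
    pre = 2**(2 * nu) * gamma(1 + nu) / gamma(1 - nu)
    return pre * gamma(ell + mpf(d) / 2 - nu) / gamma(ell + mpf(d) / 2 + nu) * R**(2 * nu)

nu = mpf(1) / 2                      # massless scalar in AdS_2 (d=1)
for ell in range(1, 8):
    assert abs(q(ell, nu) - mpf(1) / ell) < mpf(10) ** -20
```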
The limit $\varkappa\to \infty$ in \rf{2.56} may be formally taken before the summation, and then (using $ \sum_{\ell=1}^\infty 1 + \tfrac{1}{2} = \zeta_{\rm R}(0) + \tfrac{1}{2} = 0$) we recover the previous $\zeta=1$ result in \rf{2.7},\rf{2111},\rf{2.11}. Using \rf{2.7},\rf{2111},\rf{2.11} we may instead consider the difference between $F_1(\varkappa )$ and $ F_1(\infty) \equiv F^{(1)}_1$, {\em i.e.} \begin{equation}\label{2156} F_1(\varkappa ) - F_1(\infty) = 5 \sum_{\ell=1}^{\infty} \log\big(\ell + {\varkappa }\, \big) + {5\over 2} \log \varkappa \ , \ \ \ \ \ \ \ \ \varkappa >0 \ , \end{equation} where we assumed $\varkappa>0$ to drop the 1 in the log in the second term in \rf{2.56} and observed that the constant $S^5$ zero mode contribution $\sim \log |m^2| $, which is present only in the N-case ($\varkappa=0$), then cancels out. An alternative is to rewrite \rf{2156} in a form that has a regular expansion near $\varkappa=\infty$ \begin{equation} F_1(\varkappa)- F_1(\infty) = 5 \sum_{\ell=1}^{\infty} \log ( 1 + {\varkappa}^{-1} \ell) \ , \label{256x} \end{equation} where we used again the zeta-function regularization ($\zeta_{\rm R}(0) + \tfrac{1}{2} = 0$). Note that this expression comes out of the general expression in \cite{Diaz:2007an} or \rf{2.55} if we interchange the roles of $\Delta_+$ and $\Delta_-$ ({\em i.e.}\ set $\nu= - \tfrac{1}{2}$) and replace $\varkappa \to \varkappa^{-1}$. The infinite sum in \rf{2156} or \rf{256x} contains the expected logarithmic UV divergence as in \rf{40c},\rf{266} ($\epsilon = {\Lambda }^{-1} \to 0$), as can be seen using an explicit cutoff: $\sum_{\ell=1}^{\infty} e^{- \epsilon \ell} \log\big( \ell + {\varkappa} \big) \to \varkappa \sum_{\ell=1}^{\infty} e^{- \epsilon \ell} {\ell}^{-1} + ... = - \varkappa \log \epsilon + ... $ (we ignore the power divergence as in \rf{2.7}).
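The closed form of the finite part and its large-$\varkappa$ expansion given in \rf{256y} below can be cross-checked numerically. A short mpmath sketch (illustrative):

```python
from mpmath import mp, loggamma, log, pi, mpf

mp.dps = 30

def F_fin(k):
    """Finite part of F_1(kappa) - F_1(infinity):
    5 k (log k - 1) - 5 log Gamma(1+k) + (5/2) log(2 pi k)."""
    return 5 * k * (log(k) - 1) - 5 * loggamma(1 + k) + mpf(5) / 2 * log(2 * pi * k)

k = mpf(10)
# first terms of the large-kappa asymptotic expansion
series = -5 / (12 * k) + 1 / (72 * k**3) - 1 / (252 * k**5)
assert abs(F_fin(k) - series) < mpf('1e-8')
```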
In general, the term linear in $\varkappa$ in the finite part is thus scheme-dependent. The finite part of \rf{256x} can be found using the derivative of the Hurwitz zeta-function, or by simply expanding the log in powers of $\varkappa^{-1} \ell$ and then using the zeta-function to define the sum over $\ell$. As a result, \begin{align} \label{256y} &F_{1\, \rm fin}(\varkappa)- F_1(\infty) = 5 \varkappa (\log \varkappa -1) - 5 \log\big [\Gamma(1+ \varkappa)\big ] + {5\over 2} \log ( 2 \pi \varkappa)\nonumber \\ &\qquad \qquad = -5 \sum_{n=1}^\infty {(-1)^n \over n}\, \zeta_{\rm R}(-n)\, \varkappa^{-n} = - { 5 \over 12 \varkappa} + { 1 \over 72 \varkappa^3} - {1\over 252 \varkappa^5} + {\cal O}\big({1\over \varkappa^7}\big) \ . \end{align} Taken with the opposite sign, {\em i.e.}\ as $ \tilde F_{1\, \rm fin}(\varkappa)- \tilde F_1(\infty) $, this expression is a positive monotonically decreasing function, which is consistent with the F-theorem \rf{190},\rf{110}. \section{Concluding remarks}\label{secf} In this paper we computed the $\lambda^2$ term in the expectation value of the generalized circular Wilson loop \rf{1} depending on the parameter $\zeta$. The computation is considerably more involved than in the Wilson-Maldacena loop case \cite{Erickson:2000af}. In particular, in dimensional regularization, to obtain the finite $\lambda^2$ part one needs to take into account the ``evanescent'' dependence of the 1-loop term on the bare value of $\zeta$. It would be useful to extend the perturbative computation of $\langle W^{(\zeta)} \rangle$ to $\lambda^3$ order to see if the ladder diagrams may still be giving the most relevant contributions, with a hope to sum them up to all orders (at least in the standard WL case). The circular loop expectation value $\langle W^{(\zeta)} \rangle$ admits a natural interpretation as a special $d=1$ case of a partition function on the $d$-sphere and thus satisfies a $d=1$ analog of the F-theorem: we demonstrated the inequality \rf{190} at the first subleading orders at both weak and strong coupling.
The 2-loop term \rf{1} in $\langle W^{(\zeta)} \rangle$ determined in this paper effectively encodes several previously known results about the defect CFT$_1$ defined on the Wilson line: the 1-loop beta-function for $\zeta$ \cite{Polchinski:2011im} and the related anomalous dimensions of the scalar operator $\Phi_6$ near the two conformal points $\zeta=1$ and $\zeta=0$ \cite{Alday:2007he}. It would be interesting to further study the spectrum and correlation functions of operator insertions on the non-supersymmetric ($\zeta=0$) Wilson line. A particularly interesting insertion is the displacement operator $D_i \sim F_{ti}$, which has the protected dimension $\Delta=2$ as a consequence of conformal symmetry (see e.g. \cite{Billo:2016cpy}). The normalization of its two-point correlation function is an important observable of the CFT, which should be a non-trivial function of the 't Hooft coupling. This observable is also expected to appear in the small angle expansion of the cusp anomalous dimension, or in the expectation value of the WL at second order in small deformations of the loop around the circular shape. In the case of the supersymmetric Wilson-Maldacena loop, the analogous observable, known as the ``Bremsstrahlung function'', can be determined exactly by localization \cite{Correa:2012at} as well as integrability \cite{Correa:2012hh, Drukker:2012de}. It would be very interesting to find the corresponding quantity in the non-supersymmetric Wilson loop case.
Motivated by the 2-loop expression \rf{1} one may make a bold conjecture\footnote{We thank R. Roiban for a discussion of the possible exact structure of $ \langle W^{(\zeta)} \rangle$ and this suggestion.} that to all orders in $\lambda$ the renormalized expression for the circular loop will depend on $\zeta$ only through the combination $(1-\zeta^2)\lambda$, {\em i.e.}\ will have the form \begin{equation} \label{000} \langle W^{(\zeta)} \rangle = {\rm W} (\lambda) \Big[ 1 + {\cal Z}\big( (1-\zeta^2)\lambda \big) \Big] \ , \qquad \qquad {\cal Z}(x) = \sum^\infty_{n=2} c_n x^n \ , \end{equation} where ${\rm W} (\lambda) $ is the exact expression for the WML given in \rf{1.1}. If (in some particular renormalization scheme) all $c_n >0$, then for $0 \leq \zeta \leq 1$ this function will have its minimum at $\zeta=1$ and its maximum at $\zeta=0$, in agreement with the expected structure of the $\beta$-function in \rf{2} and the F-theorem \rf{190}. The standard WL expectation value will be given by $\langle W^{(0)}(\lambda)\rangle = {\rm W} (\lambda) \big[ 1 + {\cal Z} (\lambda ) \big]$. One may also try to determine the coefficients $c_n$ by using the fact that at each $\lambda^n$ order the term $\zeta^{2n}$ with the highest power of $\zeta$ should come from the ladder graphs. The large $\lambda$ behavior of the WL in \rf{03},\rf{2.13} suggests that one should have ${\cal Z}(\lambda \gg 1) \sim \lambda^{5/4}$. While localization does not apply to the non-supersymmetric circular Wilson loop case, it would be very interesting to see if $\langle W^{(\zeta)} \rangle$, and, more generally, the spectrum of local operator insertions on the loop, may be determined exactly in the planar limit using the underlying integrability of the large $N$ theory. Another important direction is to understand better the strong-coupling side, in particular, to shed light on the precise correspondence between the ``strong-coupling'' and ``weak-coupling'' parameters $\varkappa$ and $\zeta$ in \rf{4e}.
A related question is about the detailed comparison of the expansion of the Wilson loop expectation value near the conformal points to correlation functions of scalar operator insertions at strong coupling. \section*{Acknowledgments} We are grateful to L.
Griguolo, C. Imbimbo, V. Pestun, R. Roiban, D. Seminara, D. Young and K. Zarembo for very useful discussions. The work of S.G. is supported in part by the US NSF under Grant No.~PHY-1620542. AAT thanks KITP at UC Santa Barbara for hospitality while this work was in progress where his research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915. He was also supported by STFC grant ST/P000762/1 and the Russian Science Foundation grant 14-42-00047 at Lebedev Institute.
\section{Introduction} \label{intro} Cold atoms trapped in an optical lattice represent a perfect tool to implement and study different phenomena predicted by quantum mechanics in fields such as condensed-matter and solid-state physics~\cite{Bloch} and quantum information processing~\cite{Bloch2}, and to simulate gauge field theories~\cite{gauge}. In particular, the atomic motion in the optical lattice can be modified and driven by external forces, like the gravitational force or a static electromagnetic field, or by perturbing the optical lattice field itself, for instance through a phase or amplitude modulation or through the field scattered by rough surfaces along the optical path. The interplay between the external driving force and the modulation of the optical lattice field has recently been studied extensively with the aim of realizing a precise force sensor~\cite{myself}. In this paper, we describe two particular phenomena concerning a driven optical lattice. The first is the multimode interference between neighboring quantum states populated in an optical lattice. The second is the experimental investigation of the multiband structure of the energy spectrum and the associated transport. The paper is organized as follows: in Sect.~\ref{sec:1}, we briefly recall the main features of the dynamical system, i.e. a tilted optical lattice subjected to amplitude modulation of the intensity. Section~\ref{sec:2} deals with the analysis of our multimode atom interferometer, with a discussion about the interference peak width. Finally, Sect.~\ref{sec:3} shows the effect of interband resonant transitions induced by the amplitude modulation and a possible application to the study of the band structure and the calibration of the trap depth.
\section{Quantum dynamics in driven optical lattices} \label{sec:1} We experimentally study the dynamics of non-interacting ultracold atoms of mass $M$ in an accelerated one-dimensional optical lattice under the effect of a periodic driving of its amplitude. The effective Hamiltonian describing the system is \begin{equation}\label{eq:startH} \C{H}(z,p,t) = \frac{p^2}{2 M}+\frac{U_0}{2}\cos(2k_Lz)[1+\alpha \sin(\omega t)] +Fz\, \end{equation} where a periodic potential $(U_0/2)\cos(2k_Lz)\equiv U(z)$ is originated by the interference pattern of two counterpropagating laser beams with wavelength $\lambda_L = 2\pi/k_L$, $U_0$ is the depth of the lattice potential which is modulated with amplitude $\alpha$. This system has been extensively studied both theoretically and experimentally. In the absence of the amplitude modulation (AM), the external constant force $F$ breaks the translational symmetry due to the presence of the periodic potential $U(z)$. This results in the formation of the so-called Wannier-Stark (WS) ladders~\cite{Niu96}, where the eigenstates $|\Psi_{n,j}\rangle$ are localized quasi-stationary states constructed from the energy band of index $j$ and centered on the lattice site of index $n$ \cite{Gluck:2002p767}. The quasi-energies of the $j$th ladder $\C{E}_j$ are centered on the average energy value of the corresponding band $\C{E}_j=(2k_L)^{-1}\int_{-k_L}^{k_L}dk\ E_j(k)$, while WS state energies are mismatched at different sites by a constant step $Fd=\hbar\omega_B$, where $d=\lambda_L/2$ is the lattice period and $\omega_B$ is the Bloch frequency. In this system, quantum transport occurs as interference over macroscopic regions of phase space and tunneling between different ladders. 
In fact, because of the phase imprint $\omega_Bt$ which occurs from site to site~\cite{Roati04}, a wavepacket prepared in a superposition of WS states undergoes oscillations with angular frequency $\omega_B$ both in momentum and real space; these are called Bloch oscillations (BO). The BO phenomenon has been very well known in atomic physics since the seminal work of BenDahan et al.~\cite{BenDahan} and can be interpreted as a result of coherent interference among the WS states which compose the atomic wavepacket. On the other hand, atoms undergoing BOs may also experience interband tunneling, known as Landau-Zener (LZ) tunneling~\cite{Holt2000}. In this scenario, the periodic driving $\alpha (U_0/2)\cos(2k_Lz)\sin(\omega t)$ represents a flexible tool to manipulate quantum transport by inducing resonant transitions between WS states. In the case of atoms lying in the lowest energy band, resonant tunneling occurs when the atoms absorb or emit energy quanta $\hbar\omega$ which satisfy the condition \begin{equation}\label{eq:WSladders} \hbar\omega+\C{E}_0-\C{E}_j-\ell\hbar\omega_B = 0\, , \end{equation} where the integer $\ell$ is the lattice site difference between the resonantly coupled WS states. In the following, we will discuss two AM regimes: for $\hbar\omega \ll \C{E}_1-\C{E}_0$ intraband resonant tunneling takes place~\cite{Ivanov}, while for $\hbar\omega\geq\C{E}_1-\C{E}_0$, it is possible to induce resonant transitions between WS ladders~\cite{Wilkinson96,Gluck99}. \subsection{Intraband resonant tunneling: delocalization-enhanced Bloch oscillations} We first consider an ultracold atomic gas with sub-recoil temperature under the effect of a weak constant force $F$ and AM frequencies $\omega$ smaller than the first band gap. In this regime LZ tunneling is fully negligible, thus we drop the band index $j$, since under this assumption the atomic dynamics remains confined in the same initial lattice band at all times.
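For concreteness, the Bloch frequency entering the resonance condition~(\ref{eq:WSladders}) can be estimated for the setup used later in this paper ($^{88}$Sr in a vertical $\lambda_L$ = 532 nm lattice, with $F=Mg$ given by gravity). This is only a back-of-the-envelope sketch, using standard values of the constants and $g\approx 9.81$ m/s$^2$:

```python
# Estimate of the Bloch frequency omega_B = F d / hbar for 88Sr in a vertical
# lattice with lambda_L = 532 nm and F = M g (gravity). Constants are standard
# approximate values; the local value of g is an assumption.
hbar = 1.054571817e-34      # J s
u    = 1.66053906660e-27    # kg, atomic mass unit
g    = 9.81                 # m/s^2, approximate local gravity

M = 88 * u                  # mass of 88Sr (neglecting the mass defect)
lam_L = 532e-9              # m, lattice laser wavelength
d = lam_L / 2               # m, lattice period

omega_B = M * g * d / hbar                  # rad/s
nu_B = omega_B / (2 * 3.141592653589793)    # Hz
print(f"nu_B = {nu_B:.0f} Hz")  # close to the 574 Hz quoted later in the paper
```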
Coherent delocalization of matter waves by means of intraband resonant tunneling is established when near-resonant modulation is applied, i.e. $\omega \simeq\ell\,\omega_B$. It is possible to demonstrate that the Hamiltonian~(\ref{eq:startH}) can be rewritten as \begin{equation}\label{eq:WS_AMham} \C{H}'_{\mathrm{AM}} =\sum_{n=-\infty}^{+\infty}\bigg[n\frac{\hbar\delta}{\ell} | \Psi_{n}\rangle\langle\Psi_{n}| +\left(i\frac{\C{J}_\ell}{2}| \Psi_{n+\ell}\rangle \langle \Psi_{n}| +\mathrm{h.c.}\hspace{-1pt}\right)\bigg]\, , \end{equation} where $\delta=\omega-\ell\omega_B$ is the detuning and $\C{J}_\ell=(\alpha U_0/2)\langle \Psi_{n+\ell}|\cos(2k_Lz)|\Psi_{n}\rangle$ represents the tunneling rate. In this condition, the system exhibits a strict analogy with the Hamiltonian of a static lattice in the presence of an effective homogeneous force of magnitude $F_\delta=\hbar\delta/(\ell d)$. In the case of small detunings ($\delta\neq 0$), the transport dynamics is characterized by Bloch oscillations with time period $2\pi/\delta$, which were first observed through macroscopic oscillations of atomic wavepackets \cite{Alberti:2009p45}, and later studied with non-interacting Bose-Einstein condensates (BECs) \cite{Haller:2010hx}. In the resonant case $\delta=0$, the system is invariant under discrete translations, and coherent delocalization occurs, with the atomic wavepackets spreading ballistically in time. This results in a broadening of the width of the atomic spatial distribution according to $\sigma(t)\approx \C{J}_\ell \ell d\, t/\hbar $~\cite{Ivanov}. Hence, this system is a perfect tool to engineer matter-wave transport over macroscopic distances in lattice potentials, which is highly relevant to atom interferometry. \subsection{Interband resonant tunneling: Wannier-Stark spectroscopy} When $\hbar\omega\geq\C{E}_j-\C{E}_0$, AM can drive resonant interband transitions~\cite{Friebel,Stoeferle}.
In particular, in the dynamical system described by Eq.~(\ref{eq:startH}), it is possible to induce resonant transitions between WS ladders~\cite{Wilkinson96,Gluck99}. In this regime, it is no longer possible to neglect the non-zero lifetime of the excited ladders due to the LZ tunneling effect, which becomes the governing quantum transport phenomenon. Then the measurement of the number of remaining atoms after modulation of the trap depth (which is proportional to the survival probability $P(\omega)$) provides a direct observation of the WS ladder spectrum. If the AM is applied for a time $t_{\mathrm{mod}}$, the fraction of remaining atoms inside the optical lattice is given by \begin{equation}\label{eq:eq3} P(\omega)\simeq\exp\left[-\left(\frac{\alpha U_0}{2\hbar}\right)^2\,t_{\mathrm{mod}}\, \sum_{j,\ell}\frac{\C{C}_{j,\ell}^2\ \Gamma_j}{\Delta_{0,j}(\ell)^2+\Gamma_j^2}\right]\, , \end{equation} where the coefficients $\C{C}_{j,\ell}=\langle \Psi_{j,n+\ell}| \cos(2k_Lz)| \Psi_{0,n}\rangle$ are the real-valued overlap integrals between resonantly-coupled WS states (only transitions from the first ladder ``0'' are considered), and $\Delta_{0,j}(\ell)=\omega-(\C{E}_j-\C{E}_0)/\hbar-\ell\omega_B$ is the AM frequency detuning. In the case of a strong tilt, i.e. $Fd \simeq E_r$, it is possible to resolve each single resonance occurring at $\ell\omega_B+(\C{E}_j-\C{E}_0)/\hbar$ and thus measure $\omega_B$ with high precision~\cite{madison2}. In the opposite regime, i.e. $Fd \ll E_r$, the depletion spectrum will present a comb of resonances around the WS ladder central frequencies $\omega_{0,j}=(\C{E}_j-\C{E}_0)/\hbar$. In this case, the external force represents only a small perturbation to the band structure and interband transitions may probe the band energy dispersion~\cite{Heinze}.
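The shape of Eq.~(\ref{eq:eq3}), a comb of Lorentzian depletion dips, can be visualized with a toy evaluation. All coefficients, linewidths and frequencies below are hypothetical placeholders chosen only to show the structure, not fitted experimental values:

```python
import math

# Toy evaluation of the survival probability of the form of eq. (3): a comb of
# Lorentzian depletion resonances at Delta_{0,j}(l) = 0. Only the first excited
# ladder (j = 1) is kept; overlap integrals C_{1,l}, the linewidth Gamma_1 and
# the drive strength are invented for illustration.
alpha, U0_over_hbar, t_mod = 0.05, 2 * math.pi * 20e3, 0.25
omega_B  = 2 * math.pi * 574.0     # rad/s, Bloch frequency
omega_01 = 2 * math.pi * 20e3      # rad/s, ladder central frequency (assumed)
Gamma_1  = 2 * math.pi * 360.0     # rad/s, LZ linewidth (assumed)
C = {0: 0.1, 1: 0.05, -1: 0.05}    # hypothetical overlap integrals C_{1,l}

def P(omega):
    s = 0.0
    for l, c in C.items():
        delta = omega - omega_01 - l * omega_B
        s += c**2 * Gamma_1 / (delta**2 + Gamma_1**2)
    return math.exp(-(alpha * U0_over_hbar / 2)**2 * t_mod * s)

p_res = P(omega_01)                   # on resonance (l = 0)
p_off = P(omega_01 + 20 * omega_B)    # far off resonance
assert p_res < p_off                  # atoms are depleted on resonance
```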
\section{A multimode atom interferometer: DEBO phase sensitivity} \label{sec:2} Let us first consider our system, consisting of a tilted optical lattice, as an example of a multimode matter-wave interferometer. As stated in Sect.~\ref{sec:1}, a wavepacket which occupies $N$ lattice sites can be viewed as a coherent superposition of $N$ WS states, each evolving with a Bloch phase $\theta_B(t) = n\,\omega_B t=\frac{\pi}{k_L}k(t)$. In the case of a BEC, the number of coherent WS states is $N\simeq \sigma_{\mathrm{BEC}}/d$, and these states can interfere in time-of-flight (TOF). By increasing the number of populated lattice sites $N$, an enhancement of the Bloch phase sensitivity, as inferred from a least-squares fit, is predicted~\cite{Piazza}: \begin{equation}\label{eq:5} \Delta\theta_B \simeq \sqrt{\frac{1}{mN_{\mathrm{at}}}\times\frac{1}{N^2}}\,, \end{equation} where $m$ is the number of repetitions of the experiment and $N_{\mathrm{at}}$ is the number of atoms. This formula is readily understood by noting that the width of the interference peak scales as $\Delta k\propto 1/N$. The enhancement of the Bloch phase sensitivity in an incoherent ensemble of ultracold atoms, which enables high-precision measurements of gravity by means of the delocalization-enhanced Bloch oscillation (DEBO) technique~\cite{Poli2011,myself}, is also based on increasing the number of populated lattice sites $N$. Here we analyze our previous experimental data and perform numerical calculations on the DEBO multimode interferometer in order to quantitatively validate Eq.~(\ref{eq:5}) as predicted by Ref.~\cite{Piazza}. The DEBO technique consists in broadening each atomic wavefunction by means of a resonant AM burst applied for a time interval $t_{\mathrm{mod}}$, so that the wavefunction is delocalized over $N\simeq \C{J}_1t_{\mathrm{mod}}/\hbar$ lattice sites.
After evolving in the lattice, the atoms are suddenly released to produce a narrow interference pattern which occurs at the transient time $\tau_{\mathrm{trans}}=MNd^2/\hbar$~\cite{myself}. \begin{figure}[t] \centering \resizebox{0.70\columnwidth}{!}{% \includegraphics{Fig3new.pdf} } \caption{Experimental detection of the DEBO transient interference: from the atomic density distribution in TOF a least-squares fit extracts the momentum value $k(t)$ and thus the Bloch phase with an error of $\Delta\theta_B$. The inset shows numerical calculations (lines) of the DEBO effect compared with typical experimental results (points). The dashed line is the $1/e^2$ width of the wavefunction after TOF at t = $\tau_{\mathrm{trans}}$; the solid line is the rms width of the atomic cloud, which limits the smallest detectable $\Delta k$; circles and squares represent the peak width and the average Bloch phase error as a function of the number of occupied lattice sites $N$, respectively.} \label{fig:3} \end{figure} We realize the multimode interferometer as follows: we load an ensemble of $5\cdot10^4$ ultracold $^{88}$Sr atoms at a temperature of 1 $\mu$K into a vertical optical lattice with $\lambda_L$ = 532 nm, accelerated by gravity. Here the temperature is further reduced to $T \sim 0.6\,\mu$K by evaporation, so that the atoms fill only the lowest band. We then apply a servo-controlled AM burst with $\omega=\omega_B$, delocalizing the atomic wavefunctions over $N$ = 100 and 140 lattice sites. Figure~\ref{fig:3} shows a typical density profile after TOF detection of the DEBO interference peak. Here a narrow peak emerges from a broad distribution which reflects the WS momentum density, flat over the whole Brillouin zone. Typical results for our experiment are $\Delta k$ = $0.18\, k_L$ (blue circles), which corresponds to an error on the momentum $\sigma_k = 1.6\cdot10^{-2}\,k_L$ and thus on the Bloch phase $\Delta\theta_B =5\cdot10^{-2}$ rad.
We compare this result with the typical sensitivity on the Bloch phase of a $^{88}$Sr gas without the use of the DEBO technique, where $N\sim2$ is given by the thermal de Broglie wavelength. In the inset of Fig.~\ref{fig:3} we plot the Bloch phase uncertainty (red squares) obtained by means of the DEBO technique for different values of $N$, and we find that the error on the phase decreases as $\Delta\theta_B \propto N^{-0.7}$. We also investigated the DEBO phase sensitivity by numerical calculations. We calculated the single wavefunction at the end of the DEBO sequence for different values of $N$ and then performed a Gaussian least-squares fit to estimate the resulting $\Delta k$. The dashed line in Fig.~\ref{fig:3} shows that the dependence on the number of populated lattice sites is $\Delta k \propto N^{-3/4}$. However, this curve is still below the minimum detectable atomic cloud width, which corresponds to the initial cloud size $\sigma_z$ (solid line): by increasing the spatial extent of the wavefunction, and thus the interference time $\tau_{\mathrm{trans}}$, the relative weight of $\sigma_z$ decreases as $1/N$. Comparing the numerical calculations with the experimental data, we see that the experimental $\Delta k$ is about a factor of 3 higher than the $\sigma_z$ limit, mainly due to the interference pattern visibility $\C{V}\equiv (OD(\mathrm{max})-OD(\mathrm{min}))/(OD(\mathrm{max})+OD(\mathrm{min}))\sim 0.6$~\cite{Gerbier}, where $OD$ is the optical density of the atomic sample during absorption imaging. Regarding the Bloch phase error, we can approximate $\Delta\theta_B\simeq \Delta z_0\Delta k/\C{V}$, where $\Delta z_0\sim 5$\% is the relative error on the initial atomic cloud position, which is the major source of uncertainty on the determination of the interference peak.
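For the parameters of this experiment, the transient time $\tau_{\mathrm{trans}}=MNd^2/\hbar$ at which the narrow peak appears can be estimated with a short sketch (standard values of $\hbar$ and of the $^{88}$Sr mass are assumed):

```python
# Estimate of the DEBO transient time tau_trans = M N d^2 / hbar for 88Sr in a
# lambda_L = 532 nm lattice; constants are standard approximate values.
hbar = 1.054571817e-34   # J s
u = 1.66053906660e-27    # kg, atomic mass unit
M = 88 * u               # mass of 88Sr
d = 532e-9 / 2           # m, lattice period

def tau_trans(N):
    """Transient time at which the DEBO interference peak appears."""
    return M * N * d**2 / hbar

for N in (100, 140):
    print(f"N = {N:3d}: tau_trans = {1e3 * tau_trans(N):.1f} ms")
# For N ~ 100 the transient time is on the order of 10 ms.
```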
\section{Amplitude-modulation induced interband transitions and transport analysis} \label{sec:3} We have experimentally investigated AM-induced interband transitions in a vertical optical lattice on the same experimental setup as described in Sect.~\ref{sec:2}. In this case, we performed \textit{in-situ} WS ladder spectroscopy by applying an AM perturbation to the depth of the lattice potential with $\alpha$ = 5.3 \% for a modulation time $t_{\mathrm{mod}}$ = 250 ms. We used the AM driving as a probing field, scanning its frequency between 15 and 80 kHz and then counting the number of atoms remaining in the lattice by absorption imaging. The depth of the lattice was $U_0$ = 2.5 $E_r$, where $U_0$ has been calibrated as described in Ref.~\cite{myself} and the recoil energy is $E_r = h \times 8$ kHz. Given these numbers, we estimate that the interband transitions are centered around $\omega_{0,1}$ = 20 kHz and $\omega_{0,2}$ = 51 kHz. \begin{figure}[t] \centering \resizebox{0.70\columnwidth}{!}{% \includegraphics{Fig1.pdf} } \caption{Experimental spectrum of the survival probability as a function of the modulation frequency $\omega$. The shaded regions correspond to the frequencies at which interband transitions with $\Delta k=0$ are expected from the lowest band. The (red) dashed lines represent the resonances $\omega_{0,1}(0)$ = 20 kHz and $\omega_{0,2}(0)$ = 51 kHz.} \label{fig:1} \end{figure} Figure~\ref{fig:1} shows the fraction of remaining atoms as a function of the modulation frequency. Here we can notice two broad depletion features centered at about 20 kHz and 45 kHz, while a deep and narrow resonance occurs at 37 kHz. We compare the observed spectrum with the expected positions of the interband transitions by superimposing two colored regions on it, which correspond to the expected frequencies at which transitions to the first and second excited bands (``I'' and ``II'' in Fig.~\ref{fig:1}, respectively) should occur with $\Delta k =0$.
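The quoted band-center estimates can be cross-checked by diagonalizing the lattice Hamiltonian in a plane-wave basis. The following sketch neglects the weak tilt (here $\omega_B \ll$ band gaps) and is only an illustrative calculation, not the calibration procedure of the experiment; it yields values close to $\omega_{0,1}$ = 20 kHz and $\omega_{0,2}$ = 51 kHz:

```python
import numpy as np

# Plane-wave estimate of the band-center frequencies omega_{0,j} for the 1D
# lattice V(z) = (U0/2) cos(2 k_L z) with U0 = 2.5 E_r and E_r = h x 8 kHz.
# The weak tilt is neglected; this is a sketch to cross-check the quoted values.
Er_kHz = 8.0     # recoil energy in frequency units (kHz)
U0 = 2.5         # lattice depth in units of E_r
nmax = 10        # plane-wave cutoff: basis exp(i (q + 2n) k_L z)

def bands(q):
    """Band energies (units of E_r) at quasimomentum q (units of k_L)."""
    n = np.arange(-nmax, nmax + 1)
    H = np.diag((q + 2.0 * n) ** 2)
    off = (U0 / 4) * np.ones(2 * nmax)   # cos(2 k_L z) couples n <-> n +/- 1
    H = H + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)         # sorted ascending

qs = np.linspace(-1, 1, 201)
E = np.array([bands(q)[:3] for q in qs])   # three lowest bands over the zone
centers = E.mean(axis=0)                   # band centers, as in the text
f01 = (centers[1] - centers[0]) * Er_kHz   # kHz
f02 = (centers[2] - centers[0]) * Er_kHz   # kHz
print(f"omega_01 ~ {f01:.1f} kHz, omega_02 ~ {f02:.1f} kHz")
```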
In particular, in the frequency range between 15 and 30 kHz we observe a symmetric depletion centered around $\omega=\omega_{0,1}$, which approximately corresponds to the first excited band. At about $\omega$ = 30 kHz there is a local maximum of the fraction of remaining atoms, while the expected gap should appear at 34 kHz. For $\omega >$ 34 kHz, there is a second wide depletion region which corresponds to transitions to the second excited band. However, in this case we notice a marked asymmetry, with the minimum $P(\omega)$ region close to the gap frequency. Finally, the largest resonance peak occurs at 37 kHz, which is about $2\omega_{0,1}$, with a linewidth of about 2 kHz. Regarding the interband transitions to the first excited band, in our regime of a weak tilt on a shallow optical lattice ($\omega_B$ = 574 Hz $\ll\, U_0$), the expected LZ linewidth is $\Gamma_1$ = 360 Hz, which is comparable to the Bloch frequency. Hence, it is not possible to resolve the ladder resonances, and the presence of several accessible vertical states is responsible for the broad resonance feature. If we look at the interband transitions to the second excited band, we expect its WS states to be fully delocalized, so that the ladder is no longer discrete. Then the deepest resonance should be due to a parametric excitation mechanism with one of those states~\cite{Friebel}. We also notice that the asymmetry in the excited fraction of atoms at the beginning of the band ($k \simeq 0$) is present both in the numerical calculation carried out in Ref.~\cite{Gluck99} (although the calculation is performed for a phase-modulated lattice) and in the experimental data reported in~\cite{Heinze} (although this experiment does not employ a tilted optical lattice). Since we employ the system for high-sensitivity quantum transport measurements, we also observed the corresponding \textit{in-situ} broadening of the spatial atomic distribution as a function of the applied AM frequency.
The result is shown in Fig.~\ref{fig:2}. Here it is possible to observe two transport resonances, at 19.9(3) kHz and 37.5(3) kHz. The amplitudes of these resonances range only between 5 and 9 lattice sites $d$. By observing the fluctuations of the rms width of the atomic cloud $\sigma_0$, we find that they follow Gaussian statistics with standard deviation $\Delta\sigma_0 < 2\, d$. Thus the two broadening amplitudes are both larger than $2\Delta\sigma_0$. \begin{figure} \centering \resizebox{0.70\columnwidth}{!}{% \includegraphics{Fig2.pdf} } \caption{Experimental study of transport resonances during AM induced interband transitions. Each data point represents the difference between the \textit{in-situ} atomic cloud width after AM perturbation and the unperturbed one. As reported in Fig.\ref{fig:1}, the (red) dashed lines correspond to the interband expected resonances with $\ell=0$. The (green) line is a double-peak Lorentzian function best fit of the spatial broadening spectrum.} \label{fig:2} \end{figure} The observed transport resonance widths are of the order of 2 kHz, which is larger than the natural LZ linewidth $\Gamma_1$. However, in the case of the first resonance, this can be the result of the merging of several transitions with $|\ell|>0$. Assuming that this resonance corresponds to $\omega_{0,1}$, we can calculate the corresponding trap depth value. We find that for $\omega_{0,1}$ = 19.9(3) kHz, the lattice depth is 2.54(4) $E_r$, which is in agreement with the calibration method of Ref.~\cite{myself} and has the same relative precision. \section{Conclusions and outlook} In this paper we reported on two applications of quantum transport in a driven optical lattice, the first concerning the use of AM at the Bloch frequency to study the sensitivity of a multimode atom interferometer, the second concerning the AM excitation of interband transitions.
We compared the predicted $\Delta\theta_B \propto N^{-1}$ reduction of the phase uncertainty~\cite{Piazza} with the performance of our system, an incoherent ultracold Sr gas, using the DEBO technique. In our case, we observed a sizable increase of sensitivity, which scales nearly as $N^{-0.7}$. This scaling law is confirmed by numerical calculations, where the interference peak width $\Delta k\propto N^{-3/4}$. The reduced exponent in the DEBO case can be due to the non-uniform probability of occupying a particular lattice site. We then studied the response of the atomic system at modulation frequencies larger than the frequency separation between the lowest and the excited bands. We observed a clear frequency-dependent depletion of the lattice population, with the emergence of a gap between the first and the second band. A similar experiment was recently carried out to measure local gravity~\cite{Tack}, where the WS spectroscopy was performed by means of Raman transitions instead of AM of the lattice potential, but in that case the band structure was not resolved. We also observed a weak broadening of the atomic spatial distribution at two different modulation frequencies. The first of these corresponds to the center of the first excited band. Using this transport resonance, we calibrated the lattice depth with 5\% relative uncertainty. Further numerical investigation could provide a better model for both the depletion spectrum and the transport resonances, which can become a useful tool to perform band spectroscopy, as done in the case of degenerate Fermi gases~\cite{Heinze}. \paragraph{Acknowledgments} We acknowledge INFN, POR-FSE 2007-2013, iSense (EU--FP7) and LENS (Contract No.RII3 CT 2003 506350) for funding. We thank M.L. Chiofalo and A. Alberti for the theoretical model, S. Chauduri and F. Piazza for critical reading of the manuscript and useful discussions.
\section{Logical Characterization of $k$-\textsf{dtMVPA}{}} We view a timed word $w = (a_1, t_1), \ldots, (a_m, t_m)$ over alphabet $\Sigma=\langle \Sigma^i_c, \Sigma^i_{l}, \Sigma^i_r \rangle_{i=1}^n$ as a \emph{word structure} over the universe $U=\{1,2,\dots,|w|\}$ of positions in $w$. We borrow the definitions of the predicates $Q_a(i), \ERPred{a}(i), \EPPred{a}(i)$ from~\cite{BDKRT15}. Following~\cite{lics07}, we use the matching binary relation $\mu_j(i,k)$, which evaluates to true iff the $i$th position is a call and the $k$th position is its matching return corresponding to the $j$th stack. We introduce the predicate $\theta_j(i) \in I$, which evaluates to true on the word structure iff $w[i] = (a, t_i)$ with $a \in \Sigma^j_r$, and there is some $k<i$ such that $\mu_j(k,i)$ evaluates to true and $t_i-t_k \in I$. The predicate $\theta_j(i)$ measures the time elapsed between position $k$, where a call was made on stack $j$, and position $i$, its matching return. This time elapse is the age of the symbol pushed onto the stack during the call at position $k$. Since position $i$ is the matching return, this symbol is popped at $i$; if its age lies in the interval $I$, the predicate evaluates to true. We define MSO($\Sigma$), the MSO logic over $\Sigma$, as: \[ \varphi{:=}Q_a(x)~|~x {\in} X~|~\mu_j(x,y)~|~\ERPred{a}(x){\in} I~|~\EPPred{a}(x){\in} I~|~\theta_j(x){\in} I~|\neg\varphi~|~\varphi {\vee} \varphi|~\exists \, x.\varphi~|~\exists \, X.\varphi \] where $a {\in} \Sigma$, $x_a {\in} C_{\Sigma}$, $x$ is a first order variable and $X$ is a second order variable. The models of a formula $\phi \in \mbox{MSO}(\Sigma)$ are timed words $w$ over $\Sigma$. The semantics is standard, where first order variables are interpreted over positions of $w$ and second order variables over subsets of positions. We define the language $L(\varphi)$ of an MSO sentence $\varphi$ as the set of all words satisfying $\varphi$.
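Operationally, $\mu_j$ and $\theta_j$ can be evaluated on a timed word by ordinary bracket matching on the projection onto stack $j$'s alphabet. A small sketch follows; the example word, alphabet partition and interval are invented for illustration:

```python
# Sketch: evaluating the matching relation mu_j and the age predicate theta_j
# on a timed word, by bracket matching on one stack's calls and returns.
# The alphabet partition and the example word are invented for illustration.
from typing import List, Tuple

def matchings(word: List[Tuple[str, float]], calls, returns):
    """Return {return_pos: call_pos} for one stack, by bracket matching."""
    stack, mu = [], {}
    for pos, (a, _t) in enumerate(word):
        if a in calls:
            stack.append(pos)
        elif a in returns and stack:
            mu[pos] = stack.pop()
    return mu

def theta(word, calls, returns, interval):
    """Positions i where theta_j(i) holds: matched returns with age in interval."""
    lo, hi = interval
    mu = matchings(word, calls, returns)
    return {i for i, k in mu.items() if lo <= word[i][1] - word[k][1] <= hi}

# One stack with calls {'a'} and returns {'c'}; w = a a c c with timestamps.
w = [('a', 0.0), ('a', 1.0), ('c', 2.5), ('c', 6.0)]
print(matchings(w, {'a'}, {'c'}))       # {2: 1, 3: 0}
print(theta(w, {'a'}, {'c'}, (0, 3)))   # {2}: age 1.5 lies in [0,3]; age 6.0 does not
```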
Words in $Scope(\Sigma,k)$, for some $k$, can be captured by an MSO formula $Scope_k(\psi)= \displaystyle \bigwedge_{1 \leq j \leq n} Scope_k(\psi)^j$, where $n$ is the number of stacks and \\ \begin{center} $Scope_k(\psi)^j= \forall y Q_a(y) \wedge a \in \Sigma^r_j \Rightarrow( \exists x\mu_j(x,y) \wedge $ $( \psi_{kcnxt}^j \wedge \psi_{matcnxt}^j \wedge \psi_{noextracnxt} ))$ \end{center} where $\psi_{kcnxt}^j = \exists x_1,\ldots,x_k (x_1 \leq \ldots \leq x_k \leq y \displaystyle \bigwedge_{1 \leq q \leq k} (Q_a(x_q) \wedge a \in \Sigma_j \wedge (Q_b(x_q-1) \Rightarrow b \notin \Sigma_j))$, \noindent and $\psi_{matcnxt}^j= \displaystyle \bigvee_{1\leq q \leq k} \forall x_i (x_q \leq x_i \leq x (Q_c(x_i) \Rightarrow c \in \Sigma_j)) $, and \\ $\psi_{noextracnxt}= \exists x_l (x_1 \leq x_l \leq y) (Q_a(l) \wedge a \in \Sigma_j \wedge Q_b(x_{l}-1) \wedge b \in \Sigma_j) \Rightarrow x_l \in \{x_1,\ldots,x_k\}.$ Formulas $\psi_{noextracnxt}$ and $\psi_{kcnxt}^j$ say that there are at most $k$ contexts of the $j$-th stack, while formula $\psi_{matcnxt}^j$ specifies in which of these contexts the matching call position $x$ of the return position $y$ is found. Conjoining the formula obtained from a \textsf{dtMVPA}{} $M$ with $Scope_k(\psi)$ yields a formula which accepts only those words that lie in $L(M) \cap Scope(\Sigma,k)$. Likewise, if one considers any MSO formula $\zeta=\varphi \wedge Scope_k(\psi)$, it can be shown that the \textsf{dtMVPA}{} $M$ constructed for $\zeta$ will be a $k$-\textsf{dtMVPA}{}. Hence we have the following MSO characterization. \begin{theorem} \label{th:mso} A language $L$ over $\Sigma$ is accepted by a \textsf{$k$-scope \textsf{dtMVPA}{}} iff there is an MSO sentence $\varphi$ over $\Sigma$ such that $L(\varphi) \cap Scope(\Sigma,k)=L$. \end{theorem} The two directions, \textsf{dtMVPA}{} to MSO as well as MSO to \textsf{dtMVPA}{}, can be handled using standard techniques; the details can be found in Appendix~\ref{app:mso}.
\section{Introduction} The Vardi-Wolper~\cite{VARDI19941} recipe for automata-theoretic model-checking for a class of languages requires that class to be closed under Boolean operations and to have a decidable emptiness problem. Esparza, Ganty, and Majumdar~\cite{EGM12} coined the term ``perfect languages'' for classes of languages satisfying these properties. However, several important extensions of regular languages, such as pushdown automata and timed automata, do not satisfy these requirements. In order to lift the automata-theoretic model-checking framework to these classes of languages, appropriate restrictions have been studied, including visibly pushdown automata~\cite{AM04vpda} (VPA) and event-clock automata~\cite{AFH99} (ECA). Tang and Ogawa~\cite{VTO09} introduced a perfect class of timed context-free languages by generalizing both visibly pushdown automata and event-clock automata, obtaining event-clock visibly pushdown automata (ECVPA). In this paper we study a previously unexplored class of timed context-sensitive languages inspired by the scope-bounded restriction on multi-stack visibly pushdown languages introduced by La Torre, Napoli, and Parlato~\cite{TNP14}, and show that it is closed under Boolean operations and has a decidable emptiness problem. Moreover, we also present a logical characterization of the proposed subclass. \noindent \textbf{Visible Stack Operations.} Alur and Madhusudan~\cite{AM04vpda} introduced visibly pushdown automata as a specification formalism where the call and return edges are made visible in the structure of the word. This notion is formalized by giving an explicit partition of the alphabet into three disjoint sets of call, return, and internal (or local) symbols: a visibly pushdown automaton must push one symbol onto the stack while reading a call symbol, must pop one symbol (provided the stack is non-empty) while reading a return symbol, and must not touch the stack while reading an internal symbol.
\noindent\textbf{Visible Clock Resets.} Alur-Dill timed automata~\cite{AD90} are a generalization of finite automata with continuous variables called clocks, which grow at a uniform rate in each control location and whose valuations can be used to guard the transitions. Each transition can also reset clocks, which allows one to constrain a transition based on the duration since a previous transition was taken. However, this power of resetting clocks is also the reason why timed automata are not closed under complementation. In order to overcome this limitation, Alur, Fix, and Henzinger~\cite{AFH99} introduced event-clock automata, where the input symbols dictate the resets of the clocks. In an event-clock automaton every \ram{symbol} $a$ is implicitly associated with two clocks $x_a$ and $y_a$, where the ``recorder'' clock $x_a$ records the time since the last occurrence of the \ram{symbol} $a$, and the ``predictor'' clock $y_a$ predicts the time of the next occurrence of \ram{symbol} $a$. Hence, event-clock automata do not permit explicit resets of clocks; resets are implicitly governed by the input timed word. \noindent\textbf{Visible Stack Operations and Clock Resets in Multistack Setting.} We study dense-time event-clock multistack visibly pushdown automata (\textsf{dtMVPA}{}) that combine the event-clock dynamics of event-clock automata with multiple visibly pushdown stacks. We assume a partition of the alphabet among the various stacks, and a partition of the alphabet of each stack into call, return, and internal symbols. Moreover, we associate recorder and predictor clocks with each symbol. Inspired by Atig et al.~\cite{AAS12}, we consider our stacks to be dense-timed, i.e. we allow stack symbols to remember the time elapsed since they were pushed onto the stack. A finite timed word over an alphabet $\Sigma$ is a sequence $(a_1,t_1), \ldots, (a_n,t_n) \in (\Sigma {\times} {\mathbb R}_{\geq 0})^*$ such that $t_i \leq t_{i+1}$ for all $1 \leq i \leq n-1$.
Alternatively, we can represent a timed word as a tuple $(\seq{a_1,\ldots, a_n}, \seq{t_1, \ldots, t_n})$; we use both formats depending on the context and for technical convenience. Let $T\Sigma^*$ denote the set of finite timed words over $\Sigma$. We briefly discuss the concepts of rounds and scope as introduced in~\cite{TNP14}. Consider a pushdown automaton with $n$ stacks. For a stack $h$, a (timed) word is a stack-$h$ context if all of its symbols belong to the alphabet of stack $h$. A \emph{round} is a fixed sequence of exactly $n$ contexts, one for each stack. Any timed word can be partitioned into a sequence of contexts of the various stacks, and it is called $k$-round if it can be partitioned into $k$ rounds. We say that a timed word is $k$-scoped if for each return symbol of a stack its matching call symbol occurs within the last $k$ contexts of that stack. A visibly pushdown multistack event-clock automaton is \emph{scope-bounded} if all of its accepted words are $k$-scoped for a fixed $k \in \mathbb{N}$.
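The decomposition into maximal stack contexts described above can be sketched in Python. The following fragment is purely illustrative and not part of the formal development; the function name \texttt{maximal\_contexts} and the symbol-to-stack map are ours:

```python
# Illustrative sketch: split an (untimed) word into maximal contexts,
# i.e., maximal runs of consecutive symbols belonging to one stack.
from itertools import groupby

def maximal_contexts(word, stack_of):
    """Return a list of (stack, context) pairs, where each context is a
    maximal block of symbols all belonging to the same stack."""
    return [(h, "".join(g)) for h, g in
            groupby(word, key=lambda a: stack_of[a])]

# Symbols a, c belong to stack 1; b, d belong to stack 2 (our choice).
stack_of = {"a": 1, "c": 1, "b": 2, "d": 2}
ctxs = maximal_contexts("aabbddcc", stack_of)
assert ctxs == [(1, "aa"), (2, "bbdd"), (1, "cc")]
```

A word is then $k$-round if this list of contexts can be grouped into $k$ blocks, each containing one context per stack.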
\begin{figure}[t] \begin{center}\scalebox{0.8}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,% auto,node distance=2.8cm,semithick,inner sep=3pt,bend angle=45,scale=0.97] \tikzstyle{every state}=[circle,fill=black!0,minimum size=3pt] \node[initial,state, initial where=above] (0) at(0, 0) {$l_0$}; \node[state] (1) at(2.5, 0) {$l_1$}; \node[state] (2) at(5, 0) {$l_2$}; \node[state] (3) at(8, 0) {$l_3$}; \node[state] (4) at(12.2,0) {$l_4$}; \node[state] (5) at(12.2,-2) {$l_5$}; \node[state] (6) at(12.2,-4) {$l_6$}; \node[state] (7) at(8,-4) {$l_7$}; \node[state] (8) at(5,-4) {$l_8$}; \node[state] (9) at(2.5,-4) {$l_9$}; \node[state,accepting] (10) at(0,-4) {$l_{10}$}; \path (0) edge node [above] {$\hat{a}$, push$^1$($\$$)} (1) (1) edge [loop above] node [above] {$~~a$, push$^1$($\alpha$)} (1) (1) edge node [above] {$b$, push$^2$($\$$)} (2) (2) edge [loop above] node [above] {$~~b$, push$^2$($\$$)} (2) (2) edge node [above] {$d$, pop$^2$($\$$)} (3); \path (3) edge [loop above] node [above] {$d$, pop$^2$($\$$) } (3) (3) edge node [above] {$d$, pop$^2$($\$$) ${\in} [4,\infty)$} (4); \path (7) edge node [above] {$c$, pop$^1$($\$$)} (8); \path (8) edge node [above] {$\hat{c}$, pop$^1$($\alpha$)} (9); \path (8) edge [loop above] node [above] {$c$, pop$^1$($\$$)} (8); \path (8) edge node [below] {$x_{\hat{a}} \leq 5$} (9); \path (9) edge [loop above] node [above] {$d$, pop$^2$($\$$)} (9); \path (9) edge node [above] {$d$, pop$^2$($\$$)} (10); \path (9) edge node [below] {$x_b \leq 2 $} (10); \path (7) edge [loop above] node [above] {$b$, push$^2$($\$$)} (7); \path (6) edge node [above] {$b$, push$^2$($\$$)} (7); \path (6) edge node [below] {$y_d \leq 2$} (7); \path (4) edge [loop above] node [above] {$c$, pop$^1$($\$$)} (4); \path (4) edge node [left] {$c$, pop$^1$($\$$)} (5); \path (5) edge node [right] {$a$, push$^1$($\$$)} (6); \path (6) edge [loop right] node [right] {$a$, \ram{push}$^1$($\$$)} (6); \end{tikzpicture} } \end{center} \caption{Dense-time Multistack 
Visibly Pushdown Automata used in Example \ref{lab:ex:lan1}} \label{fig:dtvpa} \end{figure} To introduce some of the key ideas of our model, let us consider the following example. \begin{example} \label{lab:ex:lan1} Consider the timed language whose untimed component is of the form \\ $L=\{\hat{a}a^xb^yd^y c^la^l b^z c^x \hat{c}d^z \mid x,l,z \geq 1, ~y \geq 2\}$ with the following critical timing restrictions among the various symbols. The time delay between the first occurrence of $b$ and the last occurrence of $d$ in the substring $b^yd^y$ is at least $4$ time units. The time delay between this last occurrence of $d$ and the next occurrence of $b$ is at most $2$ time units. Finally, the last $d$ of the input string must appear within $2$ time units of the last $b$, and $\hat{c}$ must occur within $5$ time units of the corresponding $\hat{a}$. This language is accepted by the \textsf{dtMVPA}{} with two stacks shown in Figure~\ref{fig:dtvpa}. We annotate a transition with the symbol and the corresponding stack operations, if any. We write $pop^i$ or $push^i$ to emphasize pushes and pops on the $i$-th stack, and we use $pop^{i}(X) \in I$ to check whether the age of the popped symbol $X$ belongs to the interval $I$. In addition, we use simple constraints on the predictor/recorder clock variables corresponding to the symbols. Let $a,\hat{a}$ and $c,\hat{c}$ ($b$ and $d$, resp.) be the call and return symbols for the first (second, resp.) stack. The stack alphabet for the first stack is $\Gamma^1=\{\alpha,\$\}$ and for the second stack is $\Gamma^2=\{\$\}$. In Figure~\ref{fig:dtvpa} the clock $x_a$ measures the time since the occurrence of the last $a$, while the constraint $pop(\gamma) \in I$ checks whether the age of the popped symbol $\gamma$ is in the given interval $I$. This language is $3$-scoped and is accepted by a $6$-round \textsf{dtMVPA}{}. If we consider the Kleene star of this language, it will still be $3$-scoped.
Its machine can be built by fusing states $l_0$ and $l_{10}$ of the MVPA in Figure~\ref{fig:dtvpa}. \end{example} \noindent\textbf{Related Work.} The formalisms of timed automata and pushdown stacks have been combined before. The first such attempt, timed pushdown automata by Bouajjani et al.~\cite{Bouajjani1995}, was proposed as a timed extension of pushdown automata which uses global clocks and a timeless stack. We follow the dense-timed pushdown automata of Abdulla et al.~\cite{AAS12}; for this model, checking reachability of a given location from an initial one was shown to be decidable. Trivedi and Wojtczak~\cite{TW10} studied recursive timed automata, in which clock values can be pushed onto a stack using mechanisms like pass-by-value and pass-by-reference, and investigated reachability and termination problems for this model. Nested timed automata (\textsf{NeTA}), proposed by Li et al.~\cite{Li2013}, are a relatively recent model in which an instance of a timed automaton itself can be pushed onto the stack along with its clocks; the clocks of a pushed timed automaton progress uniformly while on the stack. From the perspective of logical characterization, timed matching logic, an existential fragment of second-order logic identified by Droste and Perevoshchikov~\cite{DP15}, characterizes dense-timed pushdown automata. We earlier~\cite{BDKRT15} studied an MSO logic for dense-timed visibly pushdown automata, which form a subclass of timed context-free languages; this subclass is closed under union, intersection, and complementation, and admits determinization. The work presented in this paper extends the results of~\cite{BDKPT16} for bounded-round \textsf{dtMVPA}{} to the case of bounded-scope \textsf{dtMVPA}{}. \noindent \textbf{Contributions.} We study bounded-scope \textsf{dtMVPA}{} and show that they are closed under Boolean operations and that their emptiness problem is decidable. We also present a logical characterization for these models.
\noindent \textbf{Organization of the paper:} In the next section we recall the definitions of event-clock and visibly pushdown automata. In Section \ref{lab:sec:def} we define $k$-scope dense-time multistack visibly pushdown automata with event clocks and state their properties. In the following section these properties are used to decide emptiness and determinizability of $k$-scope \textsf{ECMVPA}{}; building upon these results, we show decidability of the same properties for $k$-scope \textsf{dtMVPA}{}. In Section \ref{sec:mso} we give a logical characterization for the models introduced. \section{Preliminaries} \label{lab:sec:prelim} We only give a very brief introduction to the required concepts in this section; for a detailed background we refer the reader to~\cite{AD94,AFH99,AM04vpda}. We assume that the reader is comfortable with standard concepts such as context-free languages, pushdown automata, and MSO logic from automata theory, and clocks, event clocks, clock constraints, and valuations from timed automata. Before we introduce our model, we revisit the definition of event-clock automata. \subsection{Event-Clock Automata} The general class of timed automata (TA)~\cite{AD94} is not closed under Boolean operations. Event-clock automata (\textsf{ECA}{})~\cite{AFH99} are an important determinizable class of TA, and hence closed under Boolean operations. Here determinizability is achieved by making clock resets ``visible'': two clocks are associated with every action $a \in \Sigma$, the ``recorder'' clock $x_a$, which records the time since the last occurrence of action $a$, and the ``predictor'' clock $y_a$, which predicts the time until the next occurrence of action $a$.
For example, for a timed word $w = (a_1,t_1), (a_2,t_2), \dots, (a_n,t_n)$, the value of the event clock $x_{a}$ at position $j$ is $t_j-t_i$, where $i$ is the largest position preceding $j$ at which the action $a$ occurred. If no $a$ has occurred before the $j$-th position, then the value of $x_{a}$ is undefined, denoted by a special symbol $\vdash$. Similarly, the value of $y_{a}$ at position $j$ of $w$ is undefined if the symbol $a$ does not occur in $w$ after the $j$-th position; otherwise, it is $t_k-t_j$, where $k$ is the first occurrence of $a$ after $j$. Hence, event-clock automata do not permit explicit resets of clocks; resets are implicitly governed by the input timed word, which makes these automata determinizable and closed under all Boolean operations. We write $C$ for the set of all event clocks and ${\mathbb R}_{>0}^{\nan}$ for the set ${\mathbb R}_{> 0} \cup \{\vdash\}$. Formally, the clock valuation after reading the $j$-th prefix of the input timed word $w$, $\nu_j^w: C \mapsto {\mathbb R}_{>0}^{\nan}$, is defined as follows: $\nu_j^w(x_q) = t_j {-} t_i$ if there exists $i$ with $0 {\leq} i {<} j$ such that $a_i = q $ and $a_k \not = q$ for all $i {<} k {<} j$; otherwise $\nu_j^w(x_q) = \ \vdash$ (undefined). Similarly, $\nu_j^w(y_q) = t_m - t_j$ if there is $m {>} j$ such that $a_m = q$ and $a_l \not = q$ for all $j {<} l {<} m$; otherwise $\nu_j^w(y_q) = \vdash$. A clock constraint over $C$ is a Boolean combination of constraints of the form $z \sim c$ where $z \in C$, $c \in \mathbb{N}$ and $\sim \in \{\leq,\geq\}$. Given a clock constraint $z \sim c$ over $C$, we write $\nu_j^w \models (z \sim c)$ if $\nu_j^w(z) \sim c$. For a Boolean combination $\varphi$, $\nu_j^w \models \varphi$ is defined in the obvious way: if $\varphi=\varphi_1 \wedge \varphi_2$, then $\nu_j^w \models \varphi$ iff $\nu_j^w \models \varphi_1$ and $\nu_j^w \models \varphi_2$, and likewise for the other Boolean connectives.
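The recorder and predictor valuations above admit a direct computational reading. The following Python fragment is an illustrative sketch of ours, not part of the formal development, with \texttt{UNDEF} playing the role of the undefined value $\vdash$:

```python
# Illustrative sketch: event-clock values nu_j^w(x_a) and nu_j^w(y_a)
# for a timed word given as a list of (symbol, timestamp) pairs.

UNDEF = None  # stands for the undefined value |-

def recorder(word, j, a):
    """x_a at position j: time since the last occurrence of a strictly
    before position j, or UNDEF if there is none."""
    for i in range(j - 1, -1, -1):
        if word[i][0] == a:
            return word[j][1] - word[i][1]
    return UNDEF

def predictor(word, j, a):
    """y_a at position j: time until the next occurrence of a strictly
    after position j, or UNDEF if there is none."""
    for m in range(j + 1, len(word)):
        if word[m][0] == a:
            return word[m][1] - word[j][1]
    return UNDEF

w = [("a", 0.5), ("b", 1.0), ("a", 3.0), ("b", 3.5)]
assert recorder(w, 2, "a") == 2.5    # t_2 - t_0
assert recorder(w, 0, "a") is UNDEF  # no a before position 0
assert predictor(w, 1, "a") == 2.0   # t_2 - t_1
```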
Let $\Phi(C)$ denote the set of all clock constraints over $C$. \begin{definition} An event-clock automaton is a tuple $A = (L, \Sigma, L^0, F, E)$ where $L$ is a finite set of locations, $\Sigma$ is a finite alphabet, $L^0 \subseteq L$ is the set of initial locations, $F \subseteq L$ is the set of final locations, and $E$ is a finite set of edges of the form $(\ell, \ell', a, \varphi)$ where $\ell, \ell'$ are locations, $a \in \Sigma$, and $\varphi \in \Phi(C)$. \end{definition} The class of languages accepted by \textsf{ECA}{} is closed under Boolean operations and has decidable emptiness~\cite{AFH99}. \subsection{Visibly Pushdown Automata} The class of pushdown automata is not determinizable and also not closed under Boolean operations~\cite{HU79}. Determinizability is achieved by making the input alphabet ``visible'', that is, for a given input letter only one kind of stack operation is allowed; this yields the important subclass of visibly pushdown automata~\cite{AM04vpda}, which operate over words that dictate the stack operations. This notion is formalized by an explicit partition of the alphabet into three disjoint sets of \emph{call}, \emph{return}, and \emph{internal} symbols: a visibly pushdown automaton must push one symbol to the stack while reading a call symbol, must pop one symbol (provided the stack is non-empty) while reading a return symbol, and must not touch the stack while reading an internal symbol. \begin{definition} A visibly pushdown alphabet is a tuple $\seq{\Sigma_c, \Sigma_r, \Sigma_{l}}$ where $\Sigma_c$ is a \emph{call} alphabet, $\Sigma_r$ is a \emph{return} alphabet, and $\Sigma_{l}$ is an \emph{internal} alphabet.
A \textsf{visibly pushdown automaton (VPA)} over $\Sigma = \seq{\Sigma_c, \Sigma_r, \Sigma_{l}}$ is a tuple $(L, \Sigma, \Gamma, L^0, \Delta, F)$ where $L$ is a finite set of locations including a set $L^0 \subseteq L$ of initial locations, $\Gamma$ is a finite stack alphabet with a special end-of-stack symbol $\bottom$, $\Delta \subseteq (L {\times} \Sigma_c {\times} L {\times} (\Gamma {\setminus} \{\bottom\})) \cup (L {\times} \Sigma_r {\times}\Gamma {\times} L) \cup (L {\times} \Sigma_{l}{\times} L)$ is the transition relation, and $F \subseteq L$ is the set of final locations. \end{definition} Alur and Madhusudan~\cite{AM04vpda} showed that \textsf{VPA}{}s are determinizable and closed under Boolean operations. A language $L$ of finite words over a visibly pushdown alphabet $\Sigma$ is a \textsf{visibly pushdown language (VPL)} if there exists a VPA $M$ such that $L(M)=L$. The class of languages accepted by visibly pushdown automata is closed under Boolean operations and has a decidable emptiness problem~\cite{AM04vpda}. \section{Dense-Time Visibly Pushdown Multistack Automata} \label{lab:sec:def} This section introduces scope-bounded dense-timed multistack visibly pushdown automata and gives some properties of the words and languages accepted by these machines. Let $\Sigma= \seq{\Sigma^h_c, \Sigma^h_r, \Sigma^h_{l}}_{h=1}^n$ where $\Sigma^i_x \cap \Sigma^j_y=\emptyset$ whenever $i \neq j$ or $x \neq y$, for $x,y \in \{c,r,l\}$. Let $\Sigma^h=\seq{\Sigma^h_c, \Sigma^h_r, \Sigma^h_{l}}$. Let $\Gamma^h$ be the stack alphabet of the $h$-th stack and $\Gamma=\bigcup_{h=1}^n \Gamma^h$. For notational convenience, we assume that each symbol $a \in \Sigma^h$ has a unique recorder clock $x_a$ and predictor clock $y_a$ assigned to it. Let $C_h$ denote the set of event clocks corresponding to stack $h$ and $\Phi(C_h)$ the set of clock constraints over $C_h$. Let $cmax$ be the maximum constant used in the clock constraints $\Phi(C_h)$ over all stacks.
Let $\mathcal{I}$ denote the finite set of intervals $\{ [0,0],(0,1),[1,1], (1,2), \ldots,[cmax,cmax],(cmax,\infty)\}$. \begin{definition}[\cite{BDKPT16}] A dense-timed visibly pushdown multistack automaton (\textsf{dtMVPA}{}) over $\seq{\Sigma^h_c, \Sigma^h_r, \Sigma^h_{l}}_{h=1}^{n}$ is a tuple $(L, \Sigma, \Gamma, L^0, F, \Delta {=} (\Delta^h_c {\cup} \Delta^h_r {\cup} \Delta^h_{l})_{h=1}^n)$ where \begin{itemize} \item $L$ is a finite set of locations including a set $L^0 \subseteq L$ of initial locations, \item $\Gamma^h$ is the finite alphabet of stack $h$ and has a special end-of-stack symbol $\bottom^h$, \item $\Delta^h_c \subseteq (L {\times} \Sigma^h_c {\times} \Phi(C_h) {\times} L {\times} (\Gamma^h {\setminus} \{\bottom^h\}))$ is the set of call transitions, \item $\Delta^h_r \subseteq (L {\times} \Sigma^h_r {\times} \mathcal{I} {\times} \Gamma^h {\times} \Phi(C_h) {\times} L)$ is the set of return transitions, \item $\Delta^h_{l} \subseteq (L {\times} \Sigma^h_{l} {\times} \Phi(C_h) {\times} L)$ is the set of internal transitions, and \item $F {\subseteq} L$ is the set of final locations. \end{itemize} \end{definition} Let $w = (a_0,t_0), \dots, (a_e,t_e)$ be a timed word. A configuration of the \textsf{dtMVPA}{} is a tuple $(\ell, \nu_i^w, ((\gamma^1\sigma^1, age(\gamma^1\sigma^1)), \dots, (\gamma^n\sigma^n, age(\gamma^n\sigma^n))))$ where $\ell$ is the current location of the \textsf{dtMVPA}{}, the function $\nu_i^w$ gives the valuation of all the event clocks at position $i \leq |w|$, $\gamma^h\sigma^h \in \Gamma^h(\Gamma^h)^*$ is the content of stack $h$, with $\gamma^h$ the topmost symbol and $\sigma^h$ the string of stack contents below $\gamma^h$, and $age(\gamma^h\sigma^h)$ is a sequence of real numbers denoting the ages (the time elapsed since a stack symbol was pushed onto the stack) of all the stack symbols in $\gamma^h \sigma^h$. We follow the convention that $age(\bottom^h) = \seq{\vdash}$ (undefined).
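The timed-stack bookkeeping in such a configuration can be sketched concretely. The following Python fragment is a hypothetical illustration of ours (the names \texttt{age\_all}, \texttt{push}, \texttt{pop} are not from the paper), representing each stack as a list of (symbol, age) pairs:

```python
# Illustrative sketch: dense-timed stacks as lists of (symbol, age)
# pairs. A call pushes with age 0, time elapse ages every entry, and a
# return checks that the popped symbol's age lies in the transition's
# interval (here a closed interval [lo, hi] for simplicity).

def age_all(stacks, delta):
    """Age every symbol on every stack by the elapsed time delta."""
    for st in stacks:
        for k in range(len(st)):
            sym, age = st[k]
            st[k] = (sym, age + delta)

def push(stacks, h, gamma):
    stacks[h].append((gamma, 0.0))   # new symbol starts with age 0

def pop(stacks, h, lo, hi):
    """Pop from stack h; succeed only if the age is in [lo, hi]."""
    gamma, age = stacks[h].pop()
    return gamma if lo <= age <= hi else None

stacks = [[], []]
push(stacks, 0, "$")                  # call on stack 1 (index 0)
age_all(stacks, 4.5)                  # 4.5 time units elapse
assert pop(stacks, 0, 4, 10) == "$"   # age 4.5 lies in [4, 10]
```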
If for some string $\sigma^h \in (\Gamma^h)^*$ we have $age(\sigma^h) = \seq{t_1, t_2, \ldots, t_g}$, then for $\tau \in {\mathbb R}_{\geq 0}$ we write $age(\sigma^h) + \tau$ for the sequence $\seq{t_1+ \tau, t_2+ \tau, \ldots, t_g+\tau}$. For a sequence $\sigma^h = \seq{\gamma^h_{1}, \ldots, \gamma^h_{g}}$ and a stack symbol $\gamma^h$ we write $\gamma^h::\sigma^h$ for $\seq{\gamma^h, \gamma^h_{1}, \ldots, \gamma^h_{g}}$. A run of a \textsf{dtMVPA}{} on a timed word $w = (a_0,t_0), \dots, (a_e,t_e)$ is a sequence of configurations: \noindent $(\ell_0, \nu^{w}_0, (\seq{\bottom^1},\seq{\vdash}), \dots, (\seq{\bottom^n},\seq{\vdash}))$, $ (\ell_1, \nu^{w}_1,((\sigma^1_1, age(\sigma^1_1)), \dots, (\sigma^n_1, age(\sigma^n_1))))$,\\ $\dots, (\ell_{e+1}, \nu^{w}_{e+1}, (\sigma^{1}_{e+1}, age(\sigma^{1}_{e+1})), \dots, (\sigma^{n}_{e+1}, age(\sigma^{n}_{e+1})))$ where $\ell_i \in L$, $\ell_0 \in L^0$, $\sigma^h_i \in (\Gamma^h)^* \bottom^h$, and for each $i,~0 \leq i \leq e$, we have: \begin{itemize} \item If $a_i \in \Sigma^h_c$, then there is $(\ell_i, a_i,\varphi, \ell_{i+1}, \gamma^h) {\in} \Delta^h_c$ such that $\nu_i^w \models \varphi$. The symbol $\gamma^h \in \Gamma^h \backslash \{\bottom^h\}$ is then pushed onto stack $h$, and its age is initialized to zero, i.e., $(\sigma^{h}_{i+1}, age(\sigma^h_{i+1}))= (\gamma^h ::\sigma^h_i, 0::(age(\sigma^h_i)+(t_i-t_{i-1})))$. All symbols in all other stacks are unchanged, and they age by $t_i-t_{i-1}$. \item If $a_i \in \Sigma^h_r$, then there is $(\ell_i, a_i, I, \gamma^h, \varphi, \ell_{i+1}) \in \Delta^h_r$ such that $\nu_i^w \models \varphi$. Also, $\sigma^h_{i} = \gamma^h :: \kappa \in \Gamma^h (\Gamma^h)^*$ and $age(\gamma^h)+(t_i-t_{i-1}) \in I$. The symbol $\gamma^h$ is popped from stack $h$, giving $ \sigma^h_{i+1}=\kappa$, and the ages of the remaining stack symbols are updated, i.e., \ram{ $age(\sigma^h_{i+1})=age(\kappa)+ (t_i-t_{i-1}) $
} However, if $\gamma^h= \bottom^h$, then $\gamma^h$ is not popped. The contents of all other stacks remain unchanged, and simply age by $(t_i-t_{i-1})$. \item If $a_i {\in} \Sigma^h_{l}$, then there is $(\ell_i, a_i, \varphi, \ell_{i+1}) {\in} \Delta^h_{l}$ such that $\nu_i^w \models \varphi$. In this case all stacks remain unchanged, i.e., $\sigma^h_{i+1}{=}\sigma^h_i$, but their contents age by $t_i-t_{i-1}$, i.e., $age(\sigma^h_{i+1}){=}age(\sigma^h_i)+(t_{i}-t_{i-1})$ for all $1 \leq h \leq n$. \end{itemize} A run $\rho$ of a {\textsf{dtMVPA}{}} $M$ is accepting if it terminates in a final location. A timed word $w$ is an accepted word if there is an accepting run of $M$ on $w$. The language $L(M)$ of a \textsf{dtMVPA}{} $M$ is the set of all timed words $w$ accepted by $M$, and such a language is called a \textsf{dtMVPL}{}. A \textsf{dtMVPA}{} $M=(L, \Sigma, \Gamma, L^0, F, \Delta)$ is said to be \emph{deterministic} if it has exactly one start location, and for every configuration and input action exactly one transition is enabled.
Formally, we have the following conditions: for any two moves $(\ell, a, \phi_1, \ell',\gamma_1)$ and $(\ell, a, \phi_2, \ell'', \gamma_2)$ of $\Delta^h_c$, the constraint $\phi_1 \wedge \phi_2$ is unsatisfiable; for any two moves $(\ell, a, I_1, \gamma, \phi_1, \ell')$ and $(\ell, a, I_2, \gamma, \phi_2, \ell'')$ in $\Delta^h_r$, either $\phi_1 \wedge \phi_2$ is unsatisfiable or $I_1 \cap I_2 = \emptyset$; and for any two moves $(\ell, a, \phi_1, \ell')$ and $(\ell, a, \phi_2, \ell'')$ in $\Delta^h_{l}$, the constraint $\phi_1 \wedge \phi_2$ is unsatisfiable. An event-clock multistack visibly pushdown automaton (\textsf{ECMVPA}{}) is a \textsf{dtMVPA}{} where the stacks are untimed, i.e., a \textsf{dtMVPA}{} $(L, \Sigma, \Gamma, L^0, F, \Delta)$ with $I = [0, \infty)$ for every $(\ell, a, I, \gamma, \phi, \ell') {\in} \Delta^h_r$ is an \textsf{ECMVPA}{}. A \textsf{dtECVPA}{} is a \textsf{dtMVPA}{} restricted to a single stack. We now define a \emph{matching relation} $\sim_h$ on the positions of an input timed word $w$, which identifies matching call and return positions for each stack $h$. Note that this is possible because of the visibility of the input symbols. \begin{definition}[Matching relation] Consider a timed word $w$ over $\Sigma$. Let $\mathcal{P}^h_c$ (resp. $\mathcal{P}^h_r$) denote the set of positions in $w$ where a symbol from $\Sigma^h_c$, i.e., a call symbol (resp. $\Sigma^h_r$, i.e., a return symbol) occurs. \ram{Position $i$ (resp. $j$) is called a \emph{call position (resp.
return position).}} For each stack $h$, the timed word $w$ defines a \emph{matching relation} $\sim_h \subseteq \mathcal{P}^h_c \times \mathcal{P}^h_r$ satisfying the following conditions: \begin{enumerate} \item for all positions $i,j$ with $i \sim_h j$ we have $i < j$, \item for any call position $i \in \mathcal{P}^h_c$ and any return position $j \in \mathcal{P}^h_r$ with $i <j$, there exists $l$ with $i \leq l \leq j$ for which either $i \sim_h l$ or $l \sim_h j$, \item for each position $i \in \mathcal{P}^h_c$ (resp. $i \in \mathcal{P}^h_r$) there is at most one position $j \in \mathcal{P}^h_r$ (resp. $j \in \mathcal{P}^h_c$) with $i \sim_h j$ (resp. $j \sim_h i$). \end{enumerate} \end{definition} For $i \sim_h j$, position $i$ (resp. $j$) is called a \emph{matching call} (resp. \emph{matching return}). This definition of the matching relation extends the one defined by La Torre et al.~\cite{TNP16} to timed words. As the matching relation is completely determined by the stack discipline, and the timestamps of the input word play no role, the above definition uniquely identifies the matching relation for a given input word $w$ by the uniqueness proof of~\cite{TNP16}. Fix $k \in \mathbb{N}$. A \emph{stack-$h$ context} is a word in $\Sigma^h (\Sigma^h)^*$. Given a word $w$ and a stack $h$, the word $w$ has $k$ maximal $h$-contexts if $w \in (\Sigma^h)^* ( (\bigcup_{h' \neq h} \Sigma^{h'})^* (\Sigma^h)^* )^{k-1}$. A timed word over $\Sigma$ is \textbf{$k$-scoped} if for each matching call of stack $h$, its corresponding return occurs within at most $k$ maximal stack-$h$ contexts. Let $Scope(\Sigma,k)$ denote the set of all $k$-scoped timed words over $\Sigma$. For any fixed $k$, a $k$-scope \textsf{dtMVPA}{} over $\Sigma$ is a tuple $A=(k,M)$ where $M=(L, \Sigma, \Gamma, L^0, F, \Delta)$ is a \textsf{dtMVPA}{} over $\Sigma$.
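Since the matching relation is determined by the untimed word alone, it can be computed in a single left-to-right pass with an auxiliary stack of pending call positions. The following Python fragment is our own illustration, simplified to a single stack and untimed symbols:

```python
# Illustrative sketch: compute the matching relation ~_h for one stack
# from the visible alphabet alone (timestamps play no role). Returns
# read on an empty stack remain unmatched and are skipped.

def matching(word, calls, returns):
    """Return the set of pairs (i, j) with i ~ j, obtained by the usual
    stack discipline on call/return positions."""
    stack, matched = [], set()
    for j, a in enumerate(word):
        if a in calls:
            stack.append(j)
        elif a in returns and stack:
            matched.add((stack.pop(), j))
    return matched

# Stack-1 calls {a}, returns {c}: in "aacbc" positions 0, 1 are calls,
# and positions 2, 4 are returns (position 3 belongs to another stack).
assert matching("aacbc", {"a"}, {"c"}) == {(1, 2), (0, 4)}
```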
The language accepted by $A$ is $L(A)=L(M) \cap Scope(\Sigma,k)$ and is called a $k$-scope dense-timed multistack visibly pushdown language ($k$-scoped-\textsf{dtMVPL}{}). We define $k$-scoped-\textsf{ECMVPL}{} in a similar fashion. We now recall some key definitions from La Torre et al.~\cite{TNP14,TNP16} which help us extend the notion of scoped words from untimed to timed words. \begin{definition}[$k$-scoped splitting~\cite{TNP14,TNP16}] A \textsf{cut} of $w$ is $w_1 {:} w_2$ where $w=w_1w_2$; the cut of $w$ is marked by ``:''. A \textsf{cut is $h$-consistent} with the matching relation $\sim_h$ if no call occurring in $w_1$ matches a return in $w_2$ under $\sim_h$. A \textsf{splitting} of $w$ is a set of cuts $w_1 \ldots w_i:w_{i+1}\ldots w_m$ such that $w=w_1\ldots w_i w_{i+1}\ldots w_m$, for each $i$ in $\{1,\ldots,m-1\}$. An \textsf{$h$-consistent splitting} of $w$ is one in which every specified cut is $h$-consistent. A \textsf{context-splitting} of a word $w$ is a splitting $w_1:w_2:\ldots:w_m$ such that each $w_i$, for $i \in \{1,\ldots,m\}$, is an $h$-context for some stack $h$. A \textsf{canonical context-splitting} of a word is a context-splitting of $w$ in which no two consecutive contexts belong to the same stack. \end{definition} Given a context-splitting of a timed word $w$, we obtain its \textsf{$h$-projection} by removing all non-stack-$h$ contexts. Note that an $h$-projection is again a context-splitting. An ordered tuple of $m$ $h$-contexts is \textsf{$k$-bounded} if there exists an $h$-consistent splitting of this tuple in which each component of each cut is a concatenation of at most $k$ consecutive $h$-contexts of the given tuple. A \textsf{$k$-scoped splitting} of a word $w$ is the canonical splitting of $w$ equipped with additional cuts for each stack $h$ such that the $h$-projection of $w$ with these cuts is $k$-bounded.
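The $h$-consistency of a cut can be tested by counting pending calls before the cut position. The sketch below is ours and assumes, for simplicity, an untimed well-matched word with a single stack; \texttt{h\_consistent} is an illustrative name:

```python
# Illustrative sketch: a cut at position p is h-consistent if no call
# before p matches a return at or after p. Assuming the word is
# well-matched, this holds exactly when no calls are pending at p.

def h_consistent(word, p, calls, returns):
    depth = 0
    for a in word[:p]:
        if a in calls:
            depth += 1
        elif a in returns and depth > 0:
            depth -= 1
    return depth == 0  # pending calls at p would match later returns

w = "aacc"
assert h_consistent(w, 4, {"a"}, {"c"})      # cut after the whole word
assert not h_consistent(w, 1, {"a"}, {"c"})  # one call still pending
```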
The main purpose of introducing all the above definitions is to arrive at a scheme which permits us to split an input timed word of arbitrary length into $k$-scoped words. Using \cite{TNP14,TNP16} for untimed words, we get the following lemma. \begin{lemma}A timed word $w$ is $k$-scoped iff there is a $k$-scoped splitting of $w$. \label{lab:lm:kscope-spilt} \end{lemma} Next we describe the notion of switching vectors for timed words~\cite{BDKRT15}, which are used in the determinization of $k$-scope \textsf{dtMVPA}{}. \subsection{Switching vectors} \label{lab:subsec:split-switch} Let $A$ be a $k$-scoped \textsf{dtMVPA}{} over $\Sigma$ and let $w$ be a timed word accepted by $A$. Our aim is to simulate $A$ on $w$ by $n$ different \textsf{dtECVPA}{}s, one machine $A^h$ for the stack-$h$ inputs. We insert a special symbol $\#$ at the end of each maximal context to obtain a word $w'$ over $\Sigma \cup \{ \#,\#'\}$. We also have a recorder clock $x_{\#}$ and a predictor clock $y_{\#}$ for the symbol $\#$. For the $h$-th stack, let the \textsf{dtECVPA}{} $A^h$ be the restricted version of $A$ over the alphabet $\Sigma \cup \{ \#, \#'\}$ which simulates $A$ on input symbols from $\Sigma^h$. It is clear that at the symbol before $\#$, stack $h$ may be touched by the \textsf{dtMVPA}{} $A$, and at the first symbol after $\#$, stack $h$ may be touched again. But it may be the case that at a position where $\#$ occurs stack $h$ is not empty, i.e., the cut defined by the position of $\#$ may not be $h$-consistent. To capture the behaviour of $A^h$ over the timed word $w$ we use the notion of a switching vector. Let $m$ be the number of maximal $h$-contexts in the word $w$ and let $w^h$ be the $h$-projection of $w$, i.e., $w^h=u^h_1 \ldots u^h_m$; in particular, $m$ could be larger than $k$. A switching vector $\mathbb{V}^h$ of $A$ for the word $w$ is an element of $(L \times \mathcal{I} \times L)^m $, where $\mathbb{V}^h[l]=(q,I_l,q')$ if in the run of $A$ over $w^h$ we have $q \step{u^h_l} q'$.
Let $w'^h = u^h_1 \# u^h_{2}\# \ldots u^h_{m}\#$, where $u^h_i=(a^h_{i1},t^h_{i1}), (a^h_{i2},t^h_{i2}) \ldots (a^h_{i,s_i},t^h_{i,s_i})$ is a stack-$h$ context with $s_i=|u^h_i|$. We assign to each symbol $\#$ the timestamp of the last letter read in the preceding context, obtaining the word $\kappa^h=u^h_1 (\#,t^h_{1,s_1}) u^h_{2}(\#,t^h_{2,s_2}) \ldots u^h_{m}(\#,t^h_{m,s_m})$. From the word $w'^h$ we construct another word $\bar{w}^{h}$ by inserting the symbol $\#'$ at the places where the stack becomes empty after popping some symbol; if $\#'$ is immediately followed by $\#$, then we drop the $\#$ symbol. We do this in a canonical way as follows. In the word $w'^h$, consider the first call position $c_1$ and its corresponding return position $r_1$, and insert $\#'$ after position $r_1$. Then consider the next call position $c_2$ and its corresponding return position $r_2$ and insert $\#'$ after $r_2$. We repeat this construction for all call positions and their corresponding return positions in $w'^h$ to get a timed word $\bar{w}^{h}$ over $\Sigma \cup \{\#, \#'\}$. Let $\bar{w}^{h}=\bar{u}^h_1 \widehat{\#} \bar{u}^h_2 \widehat{\#} \ldots \widehat{\#} \bar{u}^h_z$, where $\widehat{\#}$ is either $\#$ or $\#'$ and each $\bar{u}^h_i=(\bar{a}^h_{i1},\bar{t}^h_{i1}), (\bar{a}^h_{i2},\bar{t}^h_{i2}) \ldots (\bar{a}^h_{i,s_i},\bar{t}^h_{i,s_i})$ is a timed word. The restriction of $A$ which reads $\bar{w}^h$ is denoted by $A^h_k$. Assigning to each symbol $\widehat{\#}$ the timestamp of the last letter read in the preceding context, we get the word $\bar{\kappa}^h= \bar{u}^h_1 (\widehat{\#},\bar{t}^h_{1,s_1}) \bar{u}^h_{2}(\widehat{\#},\bar{t}^h_{2,s_2}) \ldots \bar{u}^h_{z}(\widehat{\#},\bar{t}^h_{z,s_z})$, where $s_i=|\bar{u}^h_{i}|$ for $i$ in $\{1,\ldots,z\}$.
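The insertion of the $\#'$ markers admits a simple one-pass reading. The following Python fragment is our own simplified illustration (untimed symbols, one stack, \texttt{bar\_w} an illustrative name): it tracks the stack depth, appends \texttt{\#'} after every return that empties the stack, and drops a \texttt{\#} immediately following \texttt{\#'}:

```python
# Illustrative sketch of the construction of \bar{w}^h: insert #' after
# every return that leaves stack h empty, and drop a # that comes
# immediately after #'.

def bar_w(word, calls, returns):
    depth, out = 0, []
    for a in word:
        if a in calls:
            depth += 1
        if a == "#" and out and out[-1] == "#'":
            continue              # drop # right after #'
        out.append(a)
        if a in returns:
            depth -= 1
            if depth == 0:
                out.append("#'")  # stack h just became empty
    return out

w = list("aac#c#")                # two calls, two returns, # separators
assert bar_w(w, {"a"}, {"c"}) == ["a", "a", "c", "#", "c", "#'"]
```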
A stack-$h$ \emph{switching vector} $\bar{\mathbb{V}}^h$ is a $z$-tuple in $(L \times \mathcal{I} \times L)^z$, where $z > 0$ and for every $j \leq z$, if $\bar{\mathbb{V}}^h[j] = (q_j, I_j, q'_j)$ then there is a run of $A^h$ from location $q_j$ to $q'_j$. By the definition of a $k$-scoped word we are guaranteed to find at most $k$ symbols $\#$ between $c_j$ and $r_j$, and we also know that stack $h$ is empty whenever we encounter $\#'$ in the word. In other words, the switching vector $\bar{\mathbb{V}}^h$ of $A$ reading $\bar{w}^h$ can be seen as a product of switching vectors of $A$, each of length at most $k$. Therefore, $\bar{\mathbb{V}}^h = \displaystyle \Pi_{i=1}^r V^h_i$ where $r \leq z$ and $V^h_i \in (L \times \mathcal{I} \times L )^{\leq k}$. When we look at a timed word and refer to its switching vector, we view it as a tuple of switching triples, but when we look at switching vectors as part of a state of $A^h_k$, we view them as products of switching vectors of length at most $k$. A \emph{correct sequence of context switches} for $A^h_k$ w.r.t.\ $\bar{\kappa}^h$ is a sequence $\bar{\mathbb{V}}^h=P^h_1 P^h_2 \dots P^h_z$, where $P^h_{i}=(\ell^h_i,I^h_i, \ell'^h_i)$ with $I^h_i \in \mathcal{I}$ for $2 \leq i \leq z$, and $P^h_{1}=(\ell^h_{1},\nu^h_{1}, \ell'^h_{1})$, such that \begin{enumerate} \item Starting in $\ell^h_{1}$, with the $h$-th stack containing $\bottom^h$ and an initial valuation $\nu^h_{1}$ of all recorders and predictors of $\Sigma^h$, the \textsf{dtMVPA}{} $A$ processes $u^h_{1}$ and reaches some $\ell'^h_{1}$ with stack content $\sigma^h_{2}$ and clock valuation $\nu'^h_{1}$. The processing of $u^h_{2}$ by $A$ then starts at location $\ell^h_{2}$, after a time $t \in I^h_{2}$ has elapsed between the processing of $u^h_{1}$ and $u^h_{2}$.
Thus, $A$ starts processing $u^h_{2}$ in $(\ell^h_{2}, \nu^h_{2})$, where $\nu^h_{2}$ is the valuation of all recorders and predictors updated from $\nu'^h_{1}$ with respect to $t$. The stack content remains $\sigma^h_{2}$ when the processing of $u^h_{2}$ begins. \item In general, starting in $(\ell^h_{i}, \nu^h_{i})$, $i >1$, with the $h$-th stack containing $\sigma^h_{i}$, and $\nu^h_{i}$ obtained from $\nu^h_{i-1}$ by updating all recorders and predictors based on the time interval $I^h_{i}$ that records the time elapsed between processing $u^h_{i-1}$ and $u^h_{i}$, $A$ processes $u^h_{i}$ and reaches $(\ell'^h_{i}, \nu'^h_{i})$ with stack content $\sigma^h_{i+1}$. The processing of $u^{h}_{i+1}$ starts in a location $\ell^h_{i+1}$ after a time $t \in I^h_{i+1}$ has elapsed since processing $u^h_{i}$, the stack content being $\sigma^h_{i+1}$. \end{enumerate} These switching vectors were used to obtain the determinizability of $k$-round \textsf{dtMVPA}{}~\cite{BDKRT15}. In a $k$-round \textsf{dtMVPA}{}, we know that there are at most $k$ contexts of stack $h$, and hence the length of the switching vector is at most $k$ for any given word $w$; see, for example, the MVPA corresponding to the Kleene star of the language given in Example \ref{lab:ex:lan1}. In a $k$-scope MVPA, for a given $w$ we do not know beforehand the length of the switching vector. So we employ not just one switching vector but many, one after another, for the given word $w$, while maintaining that the length of each switching vector is at most $k$. This is possible because of the definition of $k$-scope \textsf{dtMVPA}{} and Lemma~\ref{lab:lm:kscope-spilt}. \begin{lemma}(Switching Lemma for $A^h_k$) \label{switch} Let $A=(k, L, \Sigma, \Gamma, L^0, F, \Delta)$ be a $k$-scope-\textsf{dtMVPA}{}. Let $w$ be a timed word with $m$ maximal $h$-contexts that is accepted by $A$.
Then we can construct a \ram{\textsf{dtECVPA}{}} $A^h_k$ over $\Sigma^h \cup \{\#,\#'\}$ such that $A^h_k$ has a run over $\bar{w}^{h}$ witnessed by a switching sequence $\bar{\mathbb{V}}^h = \displaystyle \Pi_{i=1}^r \bar{\mathbb{V}}^h_i$, where \ram{$r \leq z$} and $\bar{\mathbb{V}}^h_i \in (L \times \mathcal{I} \times L )^{\leq k}$, which ends in the last component $\bar{\mathbb{V}}^h_r$ of $\bar{\mathbb{V}}^h$ \mbox{~iff~} there exists a $k$-scoped switching sequence $\bar{\mathbb{V}}'^h$ of switching vectors of $A$ such that for any $v'$ of \ram{$\bar{\mathbb{V}}'^h$} there exist $v_i$ and $v_j$ in $\bar{\mathbb{V}}'^h$ with $i \leq j$ and $v'[1]=v_i[1]$ \ram{and $v'[|v'|]=v_j[|v_j|]$. } \end{lemma} \begin{proof} We construct a \textsf{dtECVPA}{} $A^h_k= (L^h, \Sigma \cup \{\#,\#'\}, \Gamma^h, L^0, F^h=F, \Delta^h)$, where $L^h \subseteq (L \times \mathcal{I} \times L )^{\leq k} \times (\Sigma \cup \{\#,\#'\})$ and $\Delta^h$ is given below. \ram{\begin{enumerate} \item For $a$ in $\Sigma$:\\ $(P^h_{1}, \ldots , P^h_{i}=(q,I^h_i,q'),b) \step{a, \phi} (P^h_{1}, \ldots , P'^h_{i}=(q,I'^h_i,q''),a)$, when $q' \step{a,\phi} q''$ is in $\Delta$ and $b \in \Sigma$. \item For $a$ in $\Sigma$:\\ $(P^h_{1}, \ldots , P^h_{i}=(q,I^h_i,q'),\#) \step{a, \phi \wedge x_{\#}=0} (P^h_{1}, \ldots , P'^h_{i}=(q,I'^h_i,q''),a)$, when $q' \step{a,\phi} q''$ is in $\Delta$. \item For $a$ in $\Sigma$:\\ $(P^h_{1}, \ldots , P^h_{i}=(q,I^h_i,q'),\#') \step{a, \phi \wedge x_{\#'}=0} (P^h_{1}, \ldots , P'^h_{i}=(q,I'^h_i,q''),a)$, when $q' \step{a,\phi} q''$ is in $\Delta$. \item For $a=\#$, \\ $(P^h_{1}, \ldots, P^h_{i}=(q,I^h_i,q'),b) \step{a,\phi \wedge x_{b} \in I'^h_{i+1}} (P^h_{1}, \ldots , P'^h_{i+1}=(q'',I'^h_{i+1}, q''),\#)$, when $q' \step{a,\phi} q''$ is in $\Delta$.
\item For $a=\#'$, \\ $(P^h_{1}, \ldots , P^h_{i}=(q,I^h_i,q'),a) \step{a,\phi \wedge x_{\#'} \in \hat{I}^h_1} (\hat{P}^h_{1}=(q'',\hat{I}^h_1,q''),\#')$, when $q' \step{a,\phi} q''$ is in $\Delta$. \end{enumerate}} Given a timed word $w$ accepted by $A$, when $A$ is restricted to $A^h$, it runs on $w'^h$, the projection of $w$ onto $\Sigma^h$, \ram{interspersed with $\#$ separating the maximal $h$-contexts in the original} word $w$. Let $v_1,v_2, \ldots, v_m$ be the sequence of switching vectors witnessed by $A^h$ while reading $w'^h$. Now when $w'^h$ is fed to the constructed machine $A^h_k$, it is interspersed with the new symbol $\#'$ whenever the stack is empty just after a return symbol is read. \ram{The word $\bar{w}^{h}$ thus constructed is again a collection of $z$ stack-$h$ } contexts, possibly more in number than in $w'^h$. Each newly created context is either equal to some context of $w'^h$ or is embedded in exactly one context of $w'^h$. These give rise to a sequence of switching vectors $v'_1, v'_2, \ldots, v'_z$, where \ram{$m \leq z$}. This explains the embedding of the switching vectors witnessed by $A^h_k$, while reading $\bar{w}^{h}$, into the switching vectors of $A$, while reading \ram{$w^h$. } $\Box$ \end{proof} Let $w$ be in $L(A)$. Then, as described above, we have a sequence of switching vectors $\bar{\mathbb{V}}^h$ for the stack-$h$ machine $A^h_k$. Let $d^h$ be the number of $h$-contexts in the $k$-scoped splitting of $w$, i.e., the number of $h$-contexts in $\bar{w}^h$. Then we have that many triples in the sequence of \ram{switching vectors} $\bar{\mathbb{V}}^h$. Therefore, $\bar{\mathbb{V}}^h = \Pi_{y \in \{1,\ldots,d^h\}} \langle l^h_y, I^h_y,l'^h_y \rangle$. We define a relation between elements of $\bar{\mathbb{V}}^h$ across all such sequences.
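The consistency requirement in the definition of a correct sequence of context switches can be illustrated with a small Python sketch. Everything here is an assumption for illustration: intervals are encoded as closed pairs $(lo,hi)$, and each context comes with a "run witness" $(entry, exit, start, end)$ recording the locations and times of the run on that context; the check for the first context's initial valuation is elided.

```python
def consistent(switching, witnesses):
    """Check a candidate stack-h switching sequence against run witnesses.

    switching: list of triples (l, (lo, hi), l2) as in P_i = (l_i, I_i, l'_i)
    witnesses: list of (entry_loc, exit_loc, start_time, end_time), one per
               h-context, in order of occurrence.
    """
    prev_end_time = None
    for (l, interval, l2), (s_loc, e_loc, s_time, e_time) in zip(switching, witnesses):
        lo, hi = interval
        # the triple must match the entry and exit locations of the run
        if (s_loc, e_loc) != (l, l2):
            return False
        # the gap since the previous h-context must fall in the guessed interval
        if prev_end_time is not None and not (lo <= s_time - prev_end_time <= hi):
            return False
        prev_end_time = e_time
    return True
```

For instance, two contexts separated by a gap of $1.5$ time units are consistent with the guessed interval $[1,2]$ but not with $[0,1]$.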
While reading the word $w$, for all $h$ and $h'$ in $\{1,\ldots,n\}$ and for some $y$ in $\{1,\ldots, d^h\}$ and some $y'$ in $\{1,\ldots, d^{h'}\}$, we define a relation $\mathit{follows}(h,y)=(h',y')$ if the $y$-th $h$-context is followed by the $y'$-th $h'$-context. \ram{A collection of correct sequences of context switches given via switching vectors} $(\bar{\mathbb{V}}^1,\ldots,\bar{\mathbb{V}}^n)$ is called \textbf{globally correct} if we can stitch together runs of all the $A^h_k$s on $\bar{w}^h$ using these switching vectors to get a run of $A$ on the word $w$. In the reverse direction, for a given $k$-scoped word $w$ over $\Sigma$ which is in $L(A)$, we have a collection of globally correct switching vectors $(\bar{\mathbb{V}}^1,\ldots,\bar{\mathbb{V}}^n)$. The following lemma enables us to construct a run of a $k$-scope \textsf{ECMVPA}{} on a word $w$ over $\Sigma$ from the runs of the \textsf{ECVPA}{}s $A^h_k$ on the $h$-projections of $w$ over $\Sigma^h$, with the help of switching vectors. \begin{lemma}[Stitching Lemma] Let $A=(k, L, \Sigma, \Gamma, L^0, F, \Delta)$ be a $k$-scope \textsf{dtMVPA}{}. Let $w$ be a $k$-scoped word over $\Sigma$. Then $w \in L(A)$ iff there exists a collection of globally correct sequences of switching vectors for the word $w$. \label{lab:lm:stitching} \end{lemma} \begin{proof} $(\Rightarrow)$: Assuming $w \in L(A)$, the existence of a collection of globally correct sequences follows easily. $(\Leftarrow)$: Assume that we have a collection of globally correct switching vectors $(\bar{\mathbb{V}}^1,\ldots,\bar{\mathbb{V}}^n)$ of $A$ for a word $w$. For each $h$ in $\{1,\ldots,n\}$ we have a $k$-scoped splitting of the word $w$ and a run of $A^h_k$ on $\bar{w}^h$, which uses $\bar{\mathbb{V}}^h$, the corresponding switching vector of $A$. Let $\bar{w}$ be the word over $\Sigma \cup \{\#^1,\ldots,\#^n\} \cup \ram{\{\#'^1,\ldots, \#'^n\}}$ obtained from $w$ by using the $k$-splitting as follows.
Let $w= w_1 w_2 \ldots w_d$. We first insert $\#^j$ before $w_1$ if $w_1$ is a $j$-context. Then for all $i$ in $\{1,\ldots,d-1\}$, we insert $\#^l$ between $w_i$ and \ram{$w_{i+1}$} if $w_{i+1}$ is an $l$-context. We also insert a special symbol $\#^{\$}$ to mark the end of the word $w$. Now, using the $k$-splitting of the word $w$, we insert $\#'^h$ as done when obtaining $\bar{w}^h$ from $w^h$. \ram{Now we build a composite machine whose constituents are $A^h_k$ for all $h$ in $\{1,\ldots,n\}$. Its initial state is $(p^1,\ldots, p^j,\ldots, p^n,j)$, where the $p^j$'s are the initial states of the $A^j_k$'s and the last component tells which constituent is processing the current symbol. Accordingly, initially we are processing the first $j$-context in $\bar{w}^j$.} We first run $A^j_k$ on $\bar{w}^j$, updating location $p^j$ to $p'^j$, where $p'^j= (\bar{v}^j=(l^j_1,I^j_1,l'^j_1),a)$. When it reads $\#^g$, then $p'^j= (\bar{v}^j=(l^j_1,I^j_1,l'^j_1),\#^g)$ and the composite state is changed to $(p^1,\ldots, p'^j,\ldots, p^n,g)$, meaning that the next context belongs to the $g$-th stack. \ram{So we start simulating $A$ on the first $g$-context in $\bar{w}^g$}. But to do that, $A^j_k$ should have left us in a state where $A^g_k$ can begin. That is, if $p^g= (\bar{v}^g=(l^g_1,I^g_1,l'^g_1),\#^g)$, we should have $l^g_1=l'^j_1$ and $x_{\#'^g}=0$, which will be the case as this is the first time we access the $g$-th stack. In general, if we are leaving a \ram{$j$-context} and the next context belongs to the $g$-th stack, then the time elapsed since reading the last symbol of the last \ram{$g$-context} should fall in the interval of the \ram{first} element of the next switching vector processing the \ram{next $g$-context} in $\bar{w}^g$, along with the matching of the state in which the previous context has left us. \end{proof} \section{Emptiness checking and Determinizability of scope-bounded ECMVPA} \label{lab:sec:det-kecmvpa} First we show that the emptiness problem is decidable using ideas from \cite{BDKPT16}.
Fix a $k \in \mathbb{N}$. \begin{theorem} \label{lab:tm:empt-kecmvpa} Emptiness checking for $k$-scope \textsf{ECMVPA}{} is decidable. \end{theorem} \begin{proof}[Proof sketch] Decidability of emptiness checking of $k$-round \textsf{ECMVPA}{} has been shown in \cite{BDKPT16}. That proof works for any general \textsf{ECMVPA}{}, as the notion of $k$-round has not been used. So the same proof can be used to decide emptiness of $k$-scope \textsf{ECMVPA}{}. \end{proof} The rest of the section is devoted to the proof of the following theorem. \begin{theorem} \label{lab:tm:det-kecmvpa} The class of $k$-scope \textsf{ECMVPA}{} is determinizable. \end{theorem} To show this we use the determinization of VPA \cite{AM04vpda}, and we recall the construction here for the reader's convenience. \subsubsection*{Determinization of VPA \cite{AM04vpda}} \label{vpa-det} Given a VPA $M=(Q, Q_{in}, \Gamma, \delta, Q_F)$, the idea in \cite{AM04vpda} is to do a subset construction. Let $w=w_1a_1w_2a_2w_3$ be a word such that every call in $w_1, w_2, w_3$ has a matching return, and $a_1, a_2$ are call symbols without matching returns. After reading $w$, the deterministic VPA has stack contents $(S_2,R_2,a_2)(S_1,R_1,a_1)\bottom$ and is in control state $(S,R)$. Here, $S_2$ contains all pairs of states $(q,q')$ such that starting with $q$ on $w_2$ and an empty stack (containing only $\bottom$), we reach $q'$ with stack $\bottom$. The set of pairs of states $S_2$ is called a summary for $w_2$. Likewise, $S_1$ is a summary for $w_1$ and $S$ is the summary for $w_3$. Here $R_i$ is the set of states reachable from the initial state after reading up to the end of $w_i$, $i=1,2$, and $R$ is the set of reachable states obtained on reading $w$. After $w_3$, if a call $a_3$ occurs, then $(S,R,a_3)$ is pushed on the stack, and the current state is $(S',R')$ where $S'=\{(q,q)\mid q \in Q\}$, while $R'$ is obtained by updating $R$ using all transitions for $a_3$.
The current control state $(S,R)$ is updated to $(S', R')$, where $R'$ in the case of call and internal symbols is the set of all reachable states obtained from $R$ using all possible transitions on the current symbol read, and the set $S'$ is obtained as follows: \begin{itemize} \item On reading an internal symbol $a$, $S$ evolves into $S'$ where $S'=\{(q,q') \mid \exists q'', (q, q'') \in S, (q'', a, q') \in \delta\}$. \item On reading a call symbol $a$, $(S, R,a)$ is pushed onto the stack, and the control state is $(S', R')$ where $S'=\{(q,q) \mid q \in Q\}$. On each call, $S'$ is re-initialized. \item On reading a return symbol $a'$, let the top of stack be $(S_1, R_1, a)$. This is popped. Thus, $a$ and $a'$ are a matching call-return pair. Let the string read so far be $waw'a'$. Clearly, $w$ and $w'$ are well-nested, i.e., all calls in them have seen their returns. For the well-nested string $w$ preceding $a$, we have $S_1$ consisting of all $(q,q'')$ such that starting in $q$ on $w$, we reach $q''$ with empty stack. Also, $S$ consists of pairs $(q_1,q_2)$ that have been obtained since the call symbol $a$ (corresponding to the return symbol $a'$) was pushed onto the stack. The set $S$ started out as $\{(q_1,q_1) \mid q_1 \in Q\}$ on pushing $a$, and contains pairs $(q_1,q_2)$ such that on reading the well-nested string between $a$ and $a'$, starting in $q_1$, we reach $q_2$. The set $S$ is updated to $S'$ by ``stitching'' $S_1$ and $S$ as follows: a pair $(q,q') \in S'$ if there is some $(q, q'') \in S_1$, and $(q'',a,q_1,\gamma) \in \delta$ (the push transition on $a$), $(q_1, q_2) \in S$, and $(q_2, a', \gamma, q')\in \delta$ (the pop transition on $a'$). In this case, a state $q' \in R'$ if there exists some location $q$ in $R_1$ with $(q,a,q_1,\gamma) \in \delta$ (the push transition on $a$), $(q_1, q_2) \in S$, and $(q_2, a', \gamma, q')\in \delta$ (the pop transition on $a'$).
The important point is that, on a return, the set of reachable states of the nondeterministic machine is updated using all possible summaries of the well-matched word read since the corresponding push transition in the past. \end{itemize} The set of final locations of the determinized VPA is $\{(S,R) \mid R$ contains a final state of the starting VPA$\}$, and its initial location is $(S_{in}, R_{in})$, where $S_{in}=\{(q,q) \mid q \in Q\}$ and $R_{in}$ is the set of all initial states of the starting VPA. \subsubsection*{Determinization of $k$-scope ECMVPA} In this section we show that $k$-scope \textsf{ECMVPA}{} are determinizable, using the result from \cite{VTO09} that single-stack \textsf{ECVPA}{} are determinizable. Let $A=(k, L, \Sigma, \Gamma, L^0, F, \Delta)$ be the $k$-scoped \textsf{ECMVPA}{} and let $A^h_k$ be the \textsf{ECVPA}{} over $\Sigma^h \cup \{\#^h, \#'^h\}$. Each $A^h_k$ is determinizable \cite{VTO09}. Recall from~\cite{VTO09} that an \textsf{ECVPA}{} $A^h_k$ is untimed to obtain a \textsf{VPA}{} $ut(A^h_k)$ by encoding the clock constraints of $A^h_k$ in an extended alphabet. This \textsf{VPA}{} can be converted back into an \textsf{ECVPA}{} $ec(ut(A^h_k))$ by using the original alphabet and replacing the clock constraints. This construction is such that $L(ec(ut(A^h_k)))=L(A^h_k)$, and both steps involved preserve determinism. Determinization of the \textsf{VPA}{} $ut(A^h_k)$ is done in the usual way \cite{AM04vpda}. This gives $Det(ut(A^h_k))$. Again, $ec(Det(ut(A^h_k)))$ converts this back into an \textsf{ECVPA}{} by simplifying the alphabet and writing the clock constraints. The set of locations is the same in $ec(Det(ut(A^h_k)))$ and $Det(ut(A^h_k))$. This translation also preserves determinism; hence $B^h_k=ec(Det(ut(A^h_k)))$ is a deterministic \textsf{ECVPA}{} language-equivalent to the \textsf{ECVPA}{} $A^h_k$. This process is also explained in Figure~\ref{fig:proof-dia}.
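The three summary updates of the VPA subset construction recalled above can be sketched operationally. The following Python sketch is purely illustrative: the transition relations are assumed to be encoded as dictionaries mapping a (state, symbol) pair, or (state, symbol, stack symbol) triple for returns, to a set of successors.

```python
def step_internal(S, R, a, delta_int):
    # S' = {(q,q') | exists q'': (q,q'') in S and (q'',a,q') in delta}
    S2 = {(q, q2) for (q, q1) in S for q2 in delta_int.get((q1, a), ())}
    R2 = {q2 for q1 in R for q2 in delta_int.get((q1, a), ())}
    return S2, R2

def step_call(S, R, a, Q, stack, delta_call):
    stack.append((S, R, a))                       # freeze the outer summary
    S2 = {(q, q) for q in Q}                      # re-initialized summary
    R2 = {p for q in R for (p, _g) in delta_call.get((q, a), ())}
    return S2, R2

def step_return(S, R, a2, stack, delta_call, delta_ret):
    S1, R1, a = stack.pop()                       # context of the matching call
    S2, R2 = set(), set()
    # "stitch": summary before the call + push + inner summary + pop
    for (q, q1) in S1:
        for (p, gamma) in delta_call.get((q1, a), ()):
            for (p1, p2) in S:
                if p1 == p:
                    for q2 in delta_ret.get((p2, a2, gamma), ()):
                        S2.add((q, q2))
    for q1 in R1:
        for (p, gamma) in delta_call.get((q1, a), ()):
            for (p1, p2) in S:
                if p1 == p:
                    for q2 in delta_ret.get((p2, a2, gamma), ()):
                        R2.add(q2)
    return S2, R2
```

A run of the determinized VPA then simply threads the pair $(S,R)$ and the symbolic stack through these three functions, one per input symbol.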
\begin{figure} \begin{center}\scalebox{0.75}{ \begin{tikzpicture} \tikzstyle{every node}=[draw=none,fill=none]; \node (A0) at (0,4) {nondeterministic $k$-scoped \textsf{ECMVPA}}; \node (B0) at (0,-2) {deterministic $k$-scoped \textsf{ECMVPA}}; \node (A) at (4,4) {$A$}; \node (A1) at (0,2) {$A_1$}; \node (A10) at (-3,2) {nondeterministic \textsf{ECVPA}{}s}; \node (Aj) at (4,2) {$A_j$}; \node (An) at (8,2) {$A_n$}; \node (B1) at (0,0) {$B_1$}; \node (B10) at (-3,0) {deterministic \textsf{ECVPA}{}s}; \node (Bj) at (4,0) {$B_j$}; \node (Bn) at (8,0) {$B_n$}; \node (B) at (4,-2) {$B=\langle B_1,\ldots,B_n\rangle$}; \draw[->,black,rounded corners] (A)--(A1); \draw[->,black,rounded corners] (A)--(Aj); \draw[->,black,rounded corners] (A)--(An); \draw[->,black,rounded corners] (A1)--(B1); \draw[->,black,rounded corners] (Aj)--(Bj); \draw[->,black,rounded corners] (An)--(Bn); \draw[->,black,rounded corners] (B1)--(B); \draw[->,black,rounded corners] (Bj)--(B); \draw[->,black,rounded corners] (Bn)--(B); \tikzstyle{every node}=[draw=none,fill=none]; \node (decompose) at (7.3,3) {$decompose$}; \node (determinize) at (5,1) {$determinize$}; \node (product) at (7,-1) {$product$}; \end{tikzpicture} } \end{center} \caption{Determinization of $k$-scoped \textsf{ECMVPA}} \label{fig:proof-dia} \end{figure} The locations of $B^h_k$ are thus of the form $(S,R)$, where $R$ is the set of all reachable control states of $A^h_k$ and $S$ is a set of ordered pairs of states of $A^h_k$, as seen in Section~\ref{vpa-det}. On reading $\kappa_j$, the $R$ component of the state reached in $B^h_k$ is the set $\{\langle V^h\rangle \mid V^h \mbox{ is the last component of a switching sequence } \bar{\mathbb{V}}^h \mbox{ of } A^h_k\}$. Lemmas~\ref{switch} and~\ref{lab:lm:stitching} follow easily using $B^h_k=ec(Det(ut(A^h_k)))$ in place of $A^h_k$.
We now obtain a deterministic $k$-scoped \textsf{ECMVPA}{} $B$ for the language of the $k$-scoped \textsf{ECMVPA}{} $A$ by simulating $B^1_k, \dots, B^n_k$ one after the other on reading $w$, with the help of globally correct sequences of $k$-scoped switching vectors of the $A^h_k$'s. The automaton $B$, in its finite control, keeps track of the locations of all the $B^h_k$'s, along with the valuations of all the recorders and predictors of $\Sigma$. It also remembers the current context number of the $B^h_k$'s to ensure correct simulation. Let $B^h_k=(Q^h, \Sigma^h \cup\{\#,\#'\},\Gamma^h, Q^h_0, F^h, \delta^h)$. Locations of $B^h_k$ have the form $(S^h, R^h)$. The initial state of $B^h_k$ is the set consisting of all $(S_{in}, R_{in})$ where $S_{in}=\{(q,q) \mid q$ is a state of $A^h_k\}$, and $R_{in}$ is the set of all initial states of $A^h_k$. Recall that a final state of $A^h_k$ is $\langle V^h \rangle$, where $V^h$ is the last component switching vector of a correct switching sequence $\bar{\mathbb{V}}^h$ of $A^h_k$. Thus, an accepting run in $B^h_k$ goes through states $(S_{in}, R_{in}), (S_1, R_1), \dots, (S_m, R_m), \langle V^h \rangle$. Locations of $B$ have the form $(q_1, \dots, q_n, h)$, where $q_y$ is a location of $B^y_k$ and $h$ is the stack number of the current context. Without loss of generality we assume that the first context in any word belongs to stack $1$. The initial location of $B$ is $(q_{11}, q_{12}, \dots, q_{1n},1)$, where $q_{1h}$ is the initial location of $B^h_k$. We define the set of final locations of $B$ to be $(\langle V^1 \rangle, \ldots, \langle V^{n-1} \rangle, \langle S_n,R_n \rangle)$, where $R_n$ is a set containing a tuple of the form $(k,l'_{kn}, V_n,a)$ and $l'_{kn}$ is in $F$, the set of final locations of $A$. We now explain the transitions $\Delta$ of $B$, using the transitions $\delta^h$ of $B^h_k$.
Recall that $B$ processes $w=w_1w_2 \dots w_m$, where each $w_i$ is a context of some stack. When $w$ is $k$-scoped we can see $w$ as the concatenation of contexts $w=\bar{w}_1\bar{w}_2\ldots\bar{w}_{\bar{m}}$, where consecutive contexts need not belong to different stacks, and $\bar{m} \geq m$. Let $\eta=(q_{1}, \dots, q_{h-1}, q_{h}, q_{h+1}, \dots, q_{n},h)$ and let \\ $\zeta= (q_{1}, \dots, q_{h-1}, q, q_{h+1}, \dots, q_{n},h)$, where $q_{h}$ is some state of $B^h_k$ while processing some context of the $h$-th stack. \begin{enumerate} \item Simulation of $B^h_k$ within an $h$-context (no context switch). \begin{itemize} \item $\langle \eta, a, \varphi, \zeta \rangle \in\Delta^h_{l}$ iff $(q_{h},a, \varphi, q) \in \delta_{l}^h$ \item $\langle \eta, a, \varphi, \zeta, \gamma \rangle \in \Delta^h_c$ iff $(q_{h},a, \varphi,\gamma, q) \in \delta_{c}^h$ \item $\langle \eta, a, \gamma, \varphi, \zeta \rangle \in \Delta^h_r$ iff $(q_{h},a, \gamma, \varphi, q) \in \delta_{r}^h$ \end{itemize} \item Change of context from $h$ to $h+1$. Assume that we have processed the $h$-context $\bar{w}_g$ and the next context is $\bar{w}_{g+1}$, which is an $(h+1)$-context. Let the current state of $B$ be $\eta=(q_{1}, \dots, q_{h-1}, q_{h}, q_{h+1}, \dots, q_{n},h)$ after processing $\bar{w}_g$. At this point $B^h_k$ reads the symbol $\#$ in $\bar{\kappa}^h$ and moves from state $q_{h}$ to $q'_{h}$. Therefore, the current state of $B$ becomes $\eta'=(q_{1}, \dots, q_{h-1}, q'_{h}, q_{h+1}, \dots, q_{n},h)$. At this point, $B$ invokes $B^{h+1}_k$ and starts reading the first symbol of $\bar{w}_{g+1}$ when $B$ is in state $\eta'$. This is meaningful because of Lemma~\ref{lab:lm:stitching}. The state $q_{h+1}$ is of the form $(S^{h+1}, R^{h+1})$, where $R^{h+1}$ contains states of $A^{h+1}_k$ of the form $(\bar{\mathbb{V}}^{h+1},a)$, which are the possible states after processing the last $(h+1)$-context.
The globally correct stitching sequence guarantees that for all $(\bar{\mathbb{V}}^h,a) \in R'^h$ we have at least one $(\bar{\mathbb{V}}^{h+1},a) \in R'^{h+1}$ such that if $\bar{\mathbb{V}}'^h[last]=(x,I,y)$ then $\bar{\mathbb{V}}'^{h+1}[first]=(y,I',z)$ and the valuation of the clocks at $\eta'$ belongs to $I'$, where $x,y$ are locations of $A$, and \ram{where $last$ is the last component of $\bar{\mathbb{V}}'^h$ after processing $\bar{w}_g$, and $first$ is the first component of the switching vector $\bar{\mathbb{V}}'^{h+1}$ for processing $\bar{w}_{g+1}$.} The component location $q_{h+1}$ will be replaced based on a transition of $B^{h+1}_k$, and $q'_{h}$ is also replaced with $q_{h}$ to take care of the transition on $\#$ in $B^h_k$, where $q_{h}=(S^{h}, R^{h})$ with $R^{h}$ containing all locations of the form $(\bar{\mathbb{V}}^h, a)$, where $a$ is the last symbol of $\bar{w}_{g}$. \begin{itemize} \item $\langle (\dots,q'_{h},q_{h+1}, \dots,h), a, \varphi, (\dots,q_{h},q, \dots,h+1) \rangle \in \Delta^{h+1}_{l}$ iff \\ $(q_{h+1}, a, \varphi, q) \in \delta^{h+1}_{l}$ \item $\langle (\dots,q'_{h},q_{h+1}, \dots,h), a, \varphi, (\dots,q_{h},q, \dots,h+1), \gamma \rangle \in \Delta^{h+1}_{c}$ iff \\ $(q_{h+1}, a, \varphi, q, \gamma) \in \delta^{h+1}_{c}$ \item $\langle (\dots,q'_{h},q_{h+1}, \dots,h), a, \gamma, \varphi, (\dots,q_{h},q, \dots,h+1) \rangle \in \Delta^{h+1}_{r}$ iff \\ $(q_{h+1}, a, \gamma, \varphi, q) \in \delta^{h+1}_{r}$ \end{itemize} Transitions of $B^{h+1}_k$ continue on $(\dots,q_{h},q, \dots,h+1)$, replacing only the $(h+1)$-th entry until $\bar{w}_{g+1}$ is read completely. \item Reading $\bar{w}_{\bar{m}}$, the last context of the word $w$. Without loss of generality assume that this is a context of $B^n_k$ and the previous context $\bar{w}_{\bar{m}-1}$ was a context of $B^{n-1}_k$.
When $B^{n}_k$ stops after reading the last symbol of $\bar{w}_{\bar{m}}$, it is in the state $\eta=(\mathcal{V}^1, \mathcal{V}^2,\ldots,\mathcal{V}^{n-1},(S'^n,R'^n),n)$, where no more symbols are to be read, and each $B^h_k$ is in the state $\langle \mathcal{V}^h \rangle$, where $\mathcal{V}^h$ is the last component of the $k$-scoped switching vector $\bar{\mathbb{V}}^h$, and where $R'^{n}$ is the set of all locations of the form $(\mathcal{V}^n,a)$, where $a$ is the last symbol of $\bar{w}_{\bar{m}}$. This is accepting iff the destination $\ell'_{kn}$ of the last tuple in $\mathcal{V}^n$ is in $F$. Note that we have ensured the following: \begin{enumerate} \item $\bar{\mathbb{V}}^h= \displaystyle \Pi_{1 \leq i \leq m_h} \mathcal{V}^h_i$ is part of a correct global sequence for all $1 \leq h \leq n$, where $m_h$ is the number of $k$-switching vectors of $A^h_k$ used in the computation. \item At the end of $\bar{w}_{\bar{m}}$, we reach in $B^n_k$ a state $(S'_{kn},R'_{kn})$ such that the destination $\ell'_{kn}$ of the last tuple in $\mathcal{V}^n$ is in $F$. \item When switching from one context to another, continuity of the state is used as given in the correct global sequence. \end{enumerate} \end{enumerate} The above conditions ensure correctness of local switching and a globally correct sequence in $A$. Clearly, $w \in L(B)$ iff $w \in L(A)$ iff there is some globally correct sequence $\bar{\mathbb{V}}^1 \dots \bar{\mathbb{V}}^n$. \section{Emptiness checking and Determinizability of scope-bounded \textsf{dtMVPA}{}} \label{sec2} Fix a $k \in \mathbb{N}$. \ram{The proof of emptiness is via an untiming-stack construction that yields a $k$-scope $\textsf{ECMVPA}$, for which emptiness is shown to be decidable in Theorem~\ref{lab:tm:empt-kecmvpa}}.
We first informally describe the \emph{untiming-the-stack} construction to obtain from a \ram{$k$-scope-\textsf{dtMVPA}{} $M$} over $\Sigma$ a \ram{$k$-scope-\textsf{ECMVPA}{} $M'$} over an extended alphabet $\Sigma'$ such that $L(M)=h(L(M'))$, where $h$ is the homomorphism $h: \Sigma' {\times} \mathbb{R}^{\geq 0}\rightarrow \Sigma {\times} \mathbb{R}^{\geq 0}$ defined as $h(a,t)=(a,t)$ for $a {\in} \Sigma$ and $h(a,t)=\epsilon$ for $a {\notin} \Sigma$. Our construction builds upon that of~\cite{BDKRT15}. Let $\kappa$ be the maximum constant used in the \ram{$k$-scope-\textsf{dtMVPA}{} $M$} while checking the age of a popped symbol in any of the stacks. Let us first consider a call transition $(l, a, \varphi, l', \gamma) \in \Delta^i_c$ encountered in $M$. To construct the \ram{$k$-scope-\textsf{ECMVPA}{} $M'$} from $M$, we guess the interval used in the return transition when $\gamma$ is popped from the $i$th stack. Assume the guess is an interval of the form $[0,\kappa)$. This amounts to checking that the age of $\gamma$ at the time of popping is ${<}\kappa$. In $M'$, the control switches from $l$ to a special location $(l'_{a, {<}\kappa}, \{{{<_i}\kappa}\})$, and the symbol $(\gamma, {{<}\kappa}, \texttt{first})$\footnote{It is sufficient to push $(\gamma, {{<}\kappa}, \texttt{first})$ in stack $i$, since the stack number is known as $i$} is pushed onto the $i$th stack. Let $Z_i^{\sim}=\{\sim_i c \mid c {\in} \mathbb{N}, c \leq \kappa, \sim {\in} \{{<}, \leq, >, \geq\}\}$. Let $\Sigma'_i=\Sigma^i \cup Z_i^{\sim}$ be the extended alphabet for transitions on the $i$th stack. All symbols of $Z_i^{\sim}$ are internal symbols in $M'$, i.e., $\Sigma'_i=\set{\Sigma^i_c, \Sigma^i_{l}\cup Z_i^{\sim}, \Sigma^i_r}$.
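The erasing homomorphism $h$ above can be made concrete with a two-line Python sketch (purely illustrative; timed words are assumed to be lists of (symbol, timestamp) pairs):

```python
def h(word, sigma):
    """Erase auxiliary symbols (those not in sigma) with their time stamps;
    original symbols pass through unchanged."""
    return [(a, t) for (a, t) in word if a in sigma]
```

Applying $h$ to a word of $L(M')$ deletes exactly the guessed-constraint symbols of $Z_i^{\sim}$ and recovers the corresponding word of $L(M)$.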
At location $(l'_{a,{<}\kappa}, \{{{<_i}\kappa}\})$, the new symbol ${{<_i}\kappa}$ is read and we have the following transition: $((l'_{a,{<}\kappa}, \{{{<_i}\kappa}\}), {{<_i}\kappa}, x_a=0, (l', \{{{<_i}\kappa}\}))$, which results in resetting the event recorder $x_{{<_i}\kappa}$ corresponding to the new symbol ${{<_i}\kappa}$. The constraint $x_a=0$ ensures that no time is elapsed by the new transition. The information ${{<_i}\kappa}$ is retained in the control state until $(\gamma, {{<}\kappa}, \texttt{first})$ is popped from the $i$th stack. At $(l', \{{{<_i}\kappa}\})$, we continue the simulation of $M$ from $l'$. Assume that we have another push operation on the $i$th stack at $l'$ of the form $(l',b, \psi, q, \beta)$. In $M'$, from $(l',\{{{<_i} \kappa}\})$, we first guess the constraint that will be checked when $\beta$ is popped from the $i$th stack. If the guessed constraint is again ${<_i}\kappa$, then control switches from $(l',\{{{<_i} \kappa}\})$ to $(q,\{{{<_i} \kappa}\})$, $(\beta,{{<}\kappa},-)$ is pushed onto the $i$th stack, and the simulation continues from $(q,\{{{<_i} \kappa}\})$. However, if the guessed pop constraint is ${<_i}\zeta$ for $\zeta \neq \kappa$, then control switches from $(l',\{{{<_i} \kappa}\})$ to $(q_{b,{<}\zeta},\{{{<_i}\kappa},{{<_i} \zeta}\})$ on reading $b$. The new obligation ${<_i}\zeta$ is also remembered in the control state. From $(q_{b,{<}\zeta},\{{{<_i}\kappa},{{<_i} \zeta}\})$, we read the new symbol ${<_i}\zeta$, which resets the event recorder $x_{{<_i}\zeta}$, and control switches to $(q, \{{{<_i}\kappa},{{<_i} \zeta}\})$, pushing $(\beta, {{<}\zeta}, \texttt{first})$ onto the $i$th stack. The idea thus is to keep the obligation ${{<_i}\kappa}$ alive in the control state until $\gamma$ is popped; the value of $x_{{<_i}\kappa}$ at the time of the pop determines whether the pop is successful or not.
If a further ${{<_i}\kappa}$ constraint is encountered while the obligation ${{<_i}\kappa}$ is already alive, then we do not reset the event clock $x_{{<_i}\kappa}$. The clock $x_{{<_i}\kappa}$ is reset only at the next call transition after $(\gamma, {{<}\kappa}, \texttt{first})$ is popped from the $i$th stack, when ${{<_i}\kappa}$ is again guessed. The case when the guessed pop constraint is of the form ${>_i}\kappa$ is similar. In this case, each time the guess is made, we reset the event recorder $x_{{>_i}\kappa}$ at the time of the push. If the age of a symbol pushed later is ${>}\kappa$, so will be the age of a symbol pushed earlier. In this case, the obligation ${{>}\kappa}$ is remembered only in the stack and not in the finite control. Handling guesses of the form $\geq \zeta \wedge \leq \kappa$ is similar, and we combine the ideas discussed above. \Suggestion{Now consider a return transition $(l, a, I, \gamma, \varphi, l')\in \Delta^i_r$ in $M$.} In $M'$, we are at some control state $(l,P)$. On reading $a$, we check the top symbol of the $i$th stack in $M'$. It is of the form $(\gamma, S, \texttt{first})$ or $(\gamma, S, -)$, where $S$ is a singleton set of the form $\{{<}\kappa\}$ or $\{{>}\zeta\}$, or a set of the form $\{{<}\kappa, {>}\zeta\}$\footnote{This last case happens when the age checked lies between $\zeta$ and $\kappa$}. Consider the case when the top symbol of the $i$th stack is $(\gamma, \{{<}\kappa, {>}\zeta\}, \texttt{first})$. In $M'$, on reading $a$, the control switches from $(l,P)$ to $(l',P')$ for $P'=P \backslash \{{<}\kappa\}$ iff the guard $\varphi$ evaluates to true, the interval $I$ is $(\zeta,\kappa)$ (this validates our guess made at the time of push), the value of clock $x_{{<_i}\kappa}$ is ${<} \kappa$, and the value of clock $x_{{>_i}\zeta}$ is ${>}\zeta$.
Note that the third component $\texttt{first}$ says that there are no symbols in the $i$th stack below $(\gamma, \{{<}\kappa, {>}\zeta\}, \texttt{first})$ whose pop constraint is ${<}\kappa$. Hence, we can remove the obligation ${<_i}\kappa$ from $P$ in the control state. If the top of stack symbol was $(\gamma, \{{<}\kappa, {>}\zeta\},-)$, then we know that the pop constraint ${<}\kappa$ is still alive for the $i$th stack. That is, there is some stack symbol below $(\gamma, \{{<}\kappa, {>}\zeta\},-)$ of the form $(\beta, S, \texttt{first})$ such that ${<}\kappa {\in} S$. In this case, we keep $P$ unchanged and control switches to $(l',P)$. Processing on another stack $j$ continues exactly as above; the set $P$ contains $<_i \kappa, \leq_j \eta$, and so on, depending on what constraints are remembered per stack. Note that the set $P$ in $(l, P)$ only contains constraints of the form $<_i \kappa$ or $\leq_i \kappa$ for each stack $i$, since we do not remember ${>}\zeta$ constraints in the finite control. We now give the formal construction. \subsection*{Reduction from \ram{$k$-scope-\textsf{dtMVPA}{} to $k$-scope-\textsf{ECMVPA}{}:}} Let $Z^{\sim}=\bigcup_{i=1}^n Z^{\sim}_i$ and let $S^{\sim}=\{\sim c \mid c {\in} \mathbb{N}, c \leq \kappa, \sim {\in} \{{<}, \leq, {>}, \geq, =\}\}$. Given a \ram{$k$-scope-\textsf{dtMVPA}{}} $M=(L, \Sigma, \Gamma, L^0, F, \Delta)$ with maximum constant $\kappa$ used in return transitions of all stacks, we construct a \ram{$k$-scope-\textsf{ECMVPA}{}} $M'=(L', \Sigma', \Gamma', L'^0, F', \Delta')$ where $L'{=}(L {\times} 2^{Z^{\sim}}) \cup (L_{\Sigma_i {\times} S^{\sim}} {\times} 2^{Z^{\sim}}) \cup (L_{\Sigma_i {\times} S^{\sim}{\times} S^{\sim}} {\times} 2^{Z^{\sim}})$, $\Sigma_i'=(\Sigma^i_c, \Sigma^i_{l} \cup Z_i^{\sim}, \Sigma^i_r)$ and $\Gamma_i'=\Gamma_i {\times} 2^{S^{\sim}} {\times} \{\texttt{first}, -\}$, $L'^0=\{(l^0, \emptyset) \mid l^0 {\in} L^0\}$, and $F'=\{(l^f, \emptyset) \mid l^f {\in} F\}$.
The transitions $\Delta'$ are defined as follows: \noindent{\emph{Internal Transitions}}. For every $(l,a,\varphi,l') {\in} \Delta^i_{l}$ we have the set of transitions $((l,P),a, \varphi,(l',P)) {\in} {\Delta^i}'_{l}$. \noindent{\emph{Call Transitions}}. For every $(l,a,\varphi,l',\gamma) {\in} \Delta^i_c$, we have the following classes of transitions in $M'$. \begin{enumerate} \item The first class of transitions corresponds to the guessed pop constraint being ${<}\kappa$. If the obligation ${<}\kappa$ is already alive in the state, there is no need to reset the clock $x_{{<_i}\kappa}$. Otherwise, the obligation ${<}\kappa$ is fresh, hence it is remembered as $\texttt{first}$ in the $i$th stack, and the clock $x_{{<_i}\kappa}$ is reset. \begin{eqnarray*} ((l,P), a, \varphi, (l', P), (\gamma,\{{<}\kappa\},-)) {\in} {\Delta^i}'_c && \: \text{if} \:{{<_i}\kappa} {\in} P \\ ((l,P), a, \varphi, (l'_{a,{{<}\kappa}}, P'), (\gamma,\{{<}\kappa\},\texttt{first})) {\in} {\Delta^i}'_c && \:\text{if}\: {{<_i}\kappa} {\notin} P \:\text{and}\: P'=P \cup \{{{<_i}\kappa}\}\\ ((l'_{a,{{<}\kappa}}, P'), {{<_i}\kappa}, x_a=0, (l', P')) {\in} {\Delta^i}'_{l} \end{eqnarray*} \item The second class of transitions corresponds to the guessed pop constraint ${>}\kappa$. The clock $x_{{>_i}\kappa}$ is reset, and the obligation is stored in the $i$th stack. \begin{eqnarray*} ((l,P), a, \varphi, (l'_{a,{{>}\kappa}}, P), (\gamma,\{{>}\kappa\},-)) {\in} {\Delta^i}'_c &\text{and} & ((l'_{a,{{>}\kappa}}, P), {{>_i}\kappa}, x_a{=}0, (l', P)) {\in} {\Delta^i}'_{l} \end{eqnarray*} \item Finally, the following transitions consider the case when the guessed pop constraint is ${>}\zeta$ and ${<}\kappa$. Depending on whether ${<}\kappa$ is alive or not, we have two cases. If alive, then we simply reset the clock $x_{{>_i}\zeta}$ and remember both obligations in the $i$th stack.
If ${<}\kappa$ is fresh, then we reset both clocks $x_{{>_i}\zeta}$ and $x_{{<_i}\kappa}$ and remember both obligations in the $i$th stack, and ${<_i}\kappa$ in the state. \begin{eqnarray*} ((l,P), a, \varphi, (l'_{a,{<}\kappa,{>}\zeta}, P'), (\gamma,\{{<}\kappa, {>}\zeta\},\texttt{first})) {\in} {\Delta^i}'_c && \text{if}\: {{<_i}\kappa} {\notin} P, P'{=}P \cup \{{<_i}\kappa\}\\ ((l'_{a,{<}\kappa,{>}\zeta},P'), {>_i}\zeta,x_a=0, (l'_{a,{<}\kappa},P')) {\in} {\Delta^i}'_{l}&&\\ ((l'_{a,{<}\kappa},P'), {<_i}\kappa,x_a=0, (l',P')) {\in} {\Delta^i}'_{l}~ &&\\ ((l,P), a, \varphi, (l'_{a,{>}\zeta}, P), (\gamma,\{{<}\kappa, {>}\zeta\},-)) {\in} {\Delta^i}'_c && \text{if}\: {{<_i}\kappa} {\in} P \end{eqnarray*} \end{enumerate} \noindent{\emph{Return Transitions}}. For every $(l,a,I,\gamma, \varphi,l') {\in} \Delta^i_r$, transitions in ${\Delta^i}'_r$ are: \begin{enumerate} \item $((l,P),a,(\gamma,\{{<}\kappa,{>}\zeta\},-),\varphi \wedge x_{{<_i}\kappa}{<}\kappa \wedge x_{{>_i}\zeta}{>}\zeta,(l',P))$ if $I=(\zeta,\kappa)$. \item $((l,P),a,(\gamma,\{{<}\kappa,{>}\zeta\},\texttt{first}),\varphi \wedge x_{{<_i}\kappa}{<}\kappa \wedge x_{{>_i}\zeta}{>}\zeta,(l',P'))$ \\ where $P' = P\backslash\{{<_i}\kappa\}$, if $I=(\zeta,\kappa)$. \item $((l,P),a,(\gamma,\{{<}\kappa\},-),\varphi \wedge x_{{<_i}\kappa}{<}\kappa,(l',P))$ if $I=[0,\kappa)$. \item $((l,P),a,(\gamma,\{{<}\kappa\},\texttt{first}),\varphi \wedge x_{{<_i}\kappa}{<}\kappa,(l',P'))$ with $P'{=}P\backslash\{{<_i}\kappa\}$ if $I{=}[0,\kappa)$. \item $((l,P),a,(\gamma,\{{>}\zeta\},-),\varphi \wedge x_{{>_i}\zeta}{>}\zeta,(l',P))$ if $I=(\zeta,\infty)$. \end{enumerate} For the pop to be successful in $M'$, the guess made at the time of the push must be correct, and indeed at the time of the pop, the age must match the constraint. The control state $(l^f,P)$ is reached in $M'$ on reading a word $w'$ iff $M$ accepts a string $w$ and reaches $l^f$.
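The bookkeeping behind the $\texttt{first}$/$-$ tags can be sketched for a single constraint class and one stack. This Python sketch is an illustration only (all names assumed); the actual construction additionally carries locations, guards, and the per-stack constraint sets.

```python
def push(P, stack, gamma, constraint):
    """Push gamma with a guessed pop constraint; return whether the
    corresponding event recorder must be reset now."""
    if constraint in P:
        # obligation already alive: tag '-' and do not reset the recorder
        stack.append((gamma, constraint, '-'))
        return False
    # fresh obligation: tag 'first', record it in P, reset the recorder
    P.add(constraint)
    stack.append((gamma, constraint, 'first'))
    return True

def pop(P, stack):
    """Pop the top symbol; a 'first'-tagged pop releases the obligation."""
    gamma, constraint, tag = stack.pop()
    if tag == 'first':
        P.discard(constraint)          # obligation no longer alive
    return gamma, constraint, tag
```

Only the oldest pending occurrence of a constraint is tagged $\texttt{first}$; popping any inner $-$-tagged copy leaves the obligation in $P$ untouched.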
Accepting locations of $M'$ are of the form $(l^f,P)$ for $P \subseteq Z^{\sim}$. If $w$ is a matched word then $P$ is empty, and there must not be any obligations pending at the end. Let $w=(a_1,t_1) \dots (a_i,t_i) \dots (a_n,t_n) {\in} L(M)$. If $a_i \in \Sigma^i_c$, we have in $L(M')$ a string $T_i$ between $(a_i,t_i)$ and $(a_{i+1},t_{i+1})$, with $|T_i| \leq 2$, and $T_i$ is a timed word of the form $(b_{1i},t_i)(b_{2i},t_i)$ or $(b_{1i},t_i)$. The time stamp $t_i$ remains unchanged, and either $b_{1i}$ is $<_i\kappa$ or $\leq_i \kappa$, or $b_{1i}$ is $>_i\zeta$, or $b_{1i}$ is $>_i\zeta$ and $b_{2i}$ is one of $<_i \kappa$ or $\leq_i \kappa$, for some $\kappa,\zeta \leq k$. This follows from the three kinds of call transitions in $M'$. In the construction above, it can be shown by induction on the length of accepted words that $h(L(M'))=L(M)$. Thus, $L(M') \neq \emptyset$ iff $L(M) \neq \emptyset$. If $M$ is a \ram{$k$-scope-\textsf{dtMVPA}{}, then $M'$ is a $k$-scope-\textsf{ECMVPA}{}}. \ram{Since $M'$ is a $k$-scope-\textsf{ECMVPA}{}, its emptiness check is decidable using Theorem~\ref{lab:tm:empt-kecmvpa}, which uses the standard region construction of event clock automata \cite{AFH99} to obtain a $k$-scope-\textsf{MVPA}{},} which has a decidable emptiness \cite{latin10}. \begin{theorem} \label{lab:tm:empt} The emptiness checking for $k$-scope \textsf{dtMVPA}{} is decidable. \end{theorem} In \cite{BDKPT16} we have shown that $k$-round \textsf{dtMVPA}{} are determinizable: an untiming construction yields a $k$-round \textsf{ECMVPA}{}, which is determinized and then converted back into a deterministic $k$-round \textsf{dtMVPA}{}. For $k$-scope \textsf{dtMVPA}{}, the stack untiming construction gives a $k$-scope \textsf{ECMVPA}{}, which is determinized to obtain a deterministic $k$-scope \textsf{ECMVPA}{}; converting this back yields a deterministic $k$-scope \textsf{dtMVPA}{}. The morphisms used for this conversion are the same as in \cite{BDKPT16}.
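The role of the morphism $h$ used above, namely erasing the auxiliary $Z^{\sim}$ symbols while keeping timestamps untouched, can be sketched in a few lines; the symbol names below are illustrative placeholders, not the ones fixed by the construction:

```python
def h(timed_word, aux_symbols):
    """Erasing morphism: delete the auxiliary obligation symbols of Z~,
    keeping the original letters together with their timestamps."""
    return [(a, t) for (a, t) in timed_word if a not in aux_symbols]

# Each call (a_i, t_i) of M is followed in a word of M' by at most two
# auxiliary symbols (|T_i| <= 2) carrying the same timestamp t_i:
Z = {"<1k", ">1z"}  # illustrative stand-ins for <_1 kappa and >_1 zeta
w_prime = [("a", 1.0), (">1z", 1.0), ("<1k", 1.0), ("b", 2.5)]
assert h(w_prime, Z) == [("a", 1.0), ("b", 2.5)]
```

Since $h$ only deletes letters, non-emptiness is preserved in both directions, which is what the emptiness argument relies on.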
\begin{theorem} \label{lab:tm:kdtdet} The $k$-scope \textsf{dtMVPA}{} are determinizable. \end{theorem} \begin{proof} Consider a $k$-scope \textsf{dtMVPA}{} $M=(L, \Sigma, \Gamma, L^0, F, \Delta)$ and the corresponding $k$-scope \textsf{ECMVPA}{} $M'=(L', \Sigma', \Gamma', L'^0, F', \Delta')$ as constructed in Section~\ref{app:untime}. From Theorem~\ref{lab:tm:det-kecmvpa} we know that $M'$ is determinizable. Let $Det(M')$ be the determinized automaton, so that $L(Det(M'))=L(M')$ and thus $L(M)=h(L(Det(M')))$. By construction of $M'$, the new symbols introduced in $\Sigma'$ are those of $Z^{\sim}$ ($\Sigma'_i=\Sigma_i \cup Z_i^{\sim}$ for each $i$th stack), and (i) no time elapse happens on reading symbols from $Z_i^{\sim}$, and (ii) no stack operations happen on reading symbols of $Z_i^{\sim}$. Consider any transition in $Det(M')$ involving the new symbols. Since $Det(M')$ is deterministic, let $(s_1, \alpha, \varphi, s_2)$ be the unique transition on $\alpha {\in} Z_i^{\sim}$. In the following, we eliminate these transitions on $Z_i^{\sim}$, preserving the language accepted by $M$ and the determinism of $Det(M')$. In doing so, we will construct a \ram{$k$-scope \textsf{dtMVPA}{}} $M''$ which is deterministic and which preserves the language of $M$. We now analyze the various forms of $\alpha {\in} Z_i^{\sim}$. \begin{enumerate} \item Assume that $\alpha$ is of the form ${>_i}\zeta$. Let $(s_1, \alpha, \varphi, s_2)$ be the unique transition on $\alpha$. By construction of $M'$ (and hence of $Det(M')$), we know that $\varphi$ is $x_a=0$ for some $a {\in} \Sigma^i$. We also know that in $Det(M')$ there is a unique transition $(s_0, a, \psi, s_1, (\gamma, \alpha, -))$ preceding $(s_1, \alpha, \varphi, s_2)$.
Since $(s_1, \alpha, \varphi, s_2)$ is a no-time-elapse transition and does not touch any stack, we can combine the two transitions from $s_0$ to $s_1$ and from $s_1$ to $s_2$ to obtain the call transition $(s_0, a, \psi, s_2, \gamma)$ for the $i$th stack. This eliminates the transition on ${>_i}\zeta$. \item Assume that $\alpha$ is of the form ${<_i}\kappa$. Let $(s_1, \alpha, \varphi, s_2)$ be the unique transition on $\alpha {\in} Z_i^{\sim}$. We also know that $\varphi$ is $x_a=0$ for some $a {\in} \Sigma^i$. From $M'$, we also know that in $Det(M')$ there is a unique transition of one of the following forms preceding $(s_1, \alpha, \varphi, s_2)$: \begin{itemize} \item[(a)] $(s_0, a, \psi, s_1, (\gamma, \alpha, -))$, (b) $(s_0, a, \psi, s_1, (\gamma, \alpha,\texttt{first}))$, or \item[(c)] $(s_0, {>_i}\zeta, \varphi, s_1)$, which is itself preceded by $(s'_0,a, \psi, s_0, (\gamma, \{\alpha, {>}\zeta\},X))$ for $X {\in} \{\texttt{first},-\}$. \end{itemize} As $(s_1, \alpha, \varphi, s_2)$ is a no-time-elapse transition and does not touch the stack, we can combine the two transitions from $s_0$ to $s_1$ (cases $(a)$ and $(b)$) and from $s_1$ to $s_2$ to obtain the call transition $(s_0, a, \psi, s_2, (\gamma, \alpha, -))$ or $(s_0, a, \psi, s_2, (\gamma, \alpha, \texttt{first}))$. This eliminates the transition on ${<_i}\kappa$. In the case of transition $(c)$, we first eliminate the local transition on ${>_i}\zeta$, obtaining $(s'_0,a, \psi, s_1, \gamma)$. This can then be combined with $(s_1, \alpha, \varphi, s_2)$ to obtain the call transition $(s'_0,a, \psi, s_2, \gamma)$. We have thus eliminated the local transitions on ${<_i}\kappa$. \end{enumerate} Merging transitions as done here does not affect transitions on any $\Sigma^i$, as it merely eliminates the newly added transitions on $\Sigma_i'\setminus \Sigma_i$. Recall that checking constraints on the recorders $x_{<_i \kappa}$ and $x_{>_i \zeta}$ was required during return transitions.
We now modify the pop operations in $Det(M')$ as follows. Return transitions have the following forms, and in all of these, $\varphi$ is a constraint checked on the clocks of $C_{\Sigma^i}$ in $M$ during the return: transitions $(s, a, (\gamma, \{{<}\kappa\}, X), \varphi \wedge x_{{<_i}\kappa} {<}\kappa, s')$ for $X {\in} \{-, \texttt{first}\}$ are modified to $(s, a, [0, \kappa), \gamma, \varphi, s')$; transitions $(s, a, (\gamma, \{{<}\kappa, {>}\zeta\}, X), \varphi \wedge x_{{>_i}\zeta} {{>}} \zeta \wedge x_{{<_i}\kappa} {{<}} \kappa, s')$ for $X {\in} \{-, \texttt{first}\}$ are modified to $(s, a, (\zeta, \kappa), \gamma, \varphi, s')$; and transitions $(s, a, (\gamma, \{{>}\zeta\}, -), \varphi \wedge x_{{>_i}\zeta} {>} \zeta, s')$ are modified to $(s, a, (\zeta, \infty), \gamma, \varphi, s')$. Now it is straightforward to verify that the $k$-scope $\textsf{dtMVPA}{}$ $M''$ obtained from the $k$-scope \textsf{ECMVPA}{} $Det(M')$ is deterministic. Also, since we have only eliminated symbols of $Z^{\sim}$, we have $L(M'')=h(L(Det(M')))=L(M)$. This completes the proof of determinizability of $k$-scope \textsf{dtMVPA}{}. \end{proof} It is easy to show that $k$-scoped \textsf{ECMVPA}{}s and $k$-scoped \textsf{dtMVPA}{}s are closed under union and intersection; using Theorem \ref{lab:tm:det-kecmvpa} and Theorem \ref{lab:tm:kdtdet} we get closure under complementation. \begin{theorem} \label{lab:tm:boolmain} The classes of $k$-scoped \textsf{ECMVPL}{}s and $k$-scoped \ram{\textsf{dtMVPL}{}s} are closed under Boolean operations. \end{theorem} \section{Logical Characterization of $k$-\textsf{dtMVPA}{}} \label{sec:mso} We view a timed word $w = (a_1, t_1), \ldots, (a_m, t_m)$ over the alphabet $\Sigma=\langle \Sigma^i_c, \Sigma^i_{l}, \Sigma^i_r \rangle_{i=1}^n$ as a \emph{word structure} over the universe $U=\{1,2,\dots,|w|\}$ of positions in $w$. We borrow the definitions of the predicates $Q_a(i), \ERPred{a}(i), \EPPred{a}(i)$ from~\cite{BDKRT15}.
Following~\cite{lics07}, we use the matching binary relation $\mu_j(i,k)$, which evaluates to true iff the $i$th position is a call and the $k$th position is its matching return corresponding to the $j$th stack. We introduce the predicate $\theta_j(i) \in I$, which evaluates to true on the word structure iff $w[i] = (a, t_i)$ with $a \in \Sigma^j_r$, and there is some $k<i$ such that $\mu_j(k,i)$ evaluates to true and $t_i-t_k \in I$. The predicate $\theta_j(i)$ measures the time elapsed between position $k$, where a call was made on stack $j$, and position $i$, its matching return. This time elapse is the age of the symbol pushed onto the stack during the call at position $k$. Since position $i$ is the matching return, this symbol is popped at $i$; if the age lies in the interval $I$, the predicate evaluates to true. We define MSO($\Sigma$), the MSO logic over $\Sigma$, as: \[ \varphi{:=}Q_a(x)~|~x {\in} X~|~\mu_j(x,y)~|~\ERPred{a}(x){\in} I~|~\EPPred{a}(x){\in} I~|~\theta_j(x){\in} I~|~\neg\varphi~|~\varphi {\vee} \varphi~|~\exists \, x.\varphi~|~\exists \, X.\varphi \] where $a {\in} \Sigma$, $x_a {\in} C_{\Sigma}$, $x$ is a first order variable and $X$ is a second order variable. The models of a formula $\phi \in \mbox{MSO}(\Sigma)$ are timed words $w$ over $\Sigma$. The semantics is standard, where first order variables are interpreted over positions of $w$ and second order variables over subsets of positions. We define the language $L(\varphi)$ of an MSO sentence $\varphi$ as the set of all words satisfying $\varphi$.
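The intended semantics of $\theta_j(i)\in I$ can be sketched operationally as follows; here the matching relation $\mu_j$ is supplied as a callback, and the interval is taken to be open at both ends purely for illustration:

```python
def theta(word, mu, j, i, lo, hi):
    """theta_j(i) in I: position i is a return on stack j whose matching
    call at position k (mu(j, k, i) holds, k < i) satisfies t_i - t_k in I.
    word is a list of (letter, timestamp) pairs, 0-indexed here."""
    for k in range(i):
        if mu(j, k, i):
            age = word[i][1] - word[k][1]  # age of the pushed symbol
            return lo < age < hi           # interval open for illustration
    return False

# call at position 0, matching return at position 2, on stack 1:
w = [("c", 0.0), ("a", 1.0), ("r", 2.5)]
mu = lambda j, k, i: (j, k, i) == (1, 0, 2)
assert theta(w, mu, 1, 2, 2.0, 3.0) is True   # age 2.5 lies in (2, 3)
assert theta(w, mu, 1, 2, 0.0, 1.0) is False  # age 2.5 not in (0, 1)
```

In an actual \textsf{dtMVPA}{}, this check is exactly what the stack-aging mechanism performs at pop time.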
Words in $Scope(\Sigma,k)$, for some $k$, can be captured by an MSO formula $Scope_k(\psi)= \displaystyle \bigwedge_{1 \leq j \leq n} Scope_k(\psi)^j$, where $n$ is the number of stacks and \\ \begin{center} $Scope_k(\psi)^j= \forall y\, (Q_a(y) \wedge a \in \Sigma^j_r) \Rightarrow ( \exists x\, \mu_j(x,y) \wedge ( \psi_{kcnxt}^j \wedge \psi_{matcnxt}^j \wedge \psi_{noextracnxt} ))$ \end{center} where $\psi_{kcnxt}^j = \exists x_1,\ldots,x_k\, (x_1 \leq \ldots \leq x_k \leq y \displaystyle \bigwedge_{1 \leq q \leq k} (Q_a(x_q) \wedge a \in \Sigma_j \wedge (Q_b(x_q-1) \Rightarrow b \notin \Sigma_j)))$, \noindent and $\psi_{matcnxt}^j= \displaystyle \bigvee_{1\leq q \leq k} \forall x_i\, ((x_q \leq x_i \leq x) \Rightarrow (Q_c(x_i) \Rightarrow c \in \Sigma_j)) $, and \\ $\psi_{noextracnxt}= \forall x_l\, ((x_1 \leq x_l \leq y) \wedge Q_a(x_l) \wedge a \in \Sigma_j \wedge Q_b(x_{l}-1) \wedge b \in \Sigma_j) \Rightarrow x_l \in \{x_1,\ldots,x_k\}.$ Formulas $\psi_{noextracnxt}$ and $\psi_{kcnxt}$ say that there are at most $k$ contexts of the $j$th stack, while formula $\psi_{matcnxt}$ says where the matching call position $x$ of the return position $y$ is found. Conjoining the formula obtained from a \textsf{dtMVPA}{} $M$ with $Scope_k(\psi)$ yields a formula that accepts only those words which lie in $L(M) \cap Scope(\Sigma,k)$. Likewise, if one considers any MSO formula $\zeta=\varphi \wedge Scope_k(\psi)$, it can be shown that the \textsf{dtMVPA}{} $M$ constructed for $\zeta$ will be a $k$-\textsf{dtMVPA}{}. Hence we have the following MSO characterization. \begin{theorem} \label{th:mso} A language $L$ over $\Sigma$ is accepted by a $k$-scope \textsf{dtMVPA}{} iff there is an MSO sentence $\varphi$ over $\Sigma$ such that $L(\varphi) \cap Scope(\Sigma,k)=L$. \end{theorem} The two directions, \textsf{dtMVPA}{} to MSO as well as MSO to \textsf{dtMVPA}{}, can be handled using standard techniques, and can be found in Appendix~\ref{app:mso}.
\section{Conclusion} In this work we have characterized timed context languages via $k$-scope \textsf{ECMVPA}{} and dense-timed $k$-scope \textsf{dtMVPA}{}, along with logical characterizations for both classes (for $k$-scope \textsf{ECMVPA}{}, an equivalent MSO is obtained by dropping the predicate $\theta_j(x){\in} I$--which checks if the age of the symbol pushed at position $x$ on stack $j$ falls in interval $I$--from the MSO of $k$-scope \textsf{dtMVPA}{}). Here, while reading an input symbol of a stack, clock constraints involve only the clocks of symbols associated with the same stack. It would be interesting to see if our results hold without this restriction. Another direction would be to use alternative methods, such as Ramsey-theorem-based \cite{Abdulla11,ramsey-vpa} or antichain-based \cite{FogartyV10,BruyereDG13} techniques, which avoid complementation (and hence determinization) when checking language inclusion. \bibliographystyle{splncs04}
\section{Design and Fabrication} \subsection{Theory and simulation} Taking modal dispersion and thermo-optic effects into account, i.e. $n_1(\omega_1,T)$ and $n_2(\omega_2,T)$, the QPM condition of the PPLN waveguide reads \cite{Wang:18PPLN, Chen:19} \begin{equation} \Delta K= n_{2}(\omega_2,T) - n_{1}(\omega_1,T) -\frac{2\pi c}{\Lambda \omega_2} = 0, \label{eq2} \end{equation} where $\Lambda$ is the poling period, and $n_1$ and $n_2$ are the effective refractive indices for the fundamental ($\omega_{1}$) and SH ($\omega_{2}$) modes, respectively. To derive its thermal dependency, we perform a Taylor series expansion of each term in Eq.~(\ref{eq2}) around one of their perfect QPM points $\omega^o_1$, $\omega^o_2$, and $T^o$ \cite{luo2018highly}: \begin{align} \label{eq3} n_{2}(\omega_2,T)= n_{2}(\omega_2^o,T^o)+\frac{\partial n_2}{\partial \omega_2}\Delta \omega_2+\frac{\partial n_2}{\partial T}\Delta T, \nonumber\\ n_{1}(\omega_1,T)= n_{1}(\omega_1^o,T^o)+\frac{\partial n_1}{\partial \omega_1}\Delta \omega_1+\frac{\partial n_1}{\partial T}\Delta T, \\ \frac{2\pi c}{\Lambda \omega_2} = \frac{2\pi c}{\Lambda \omega_2^o}\left(1-\frac{\Delta \omega_2}{\omega_2^o}\right),\nonumber \end{align} where $\omega_2 = 2\omega_1$, $\Delta \omega_2 = 2\Delta \omega_1$ and $n_{2}(\omega_2^o,T^o)-n_{1}(\omega_1^o,T^o)=\frac{2\pi c}{\Lambda \omega_2^o}$. When the temperature is perturbed, the system still needs to fulfill the QPM condition with the same poling period (thermal expansion is neglected here). Substituting the expansions of Eq.~(\ref{eq3}) into Eq.~(\ref{eq2}) and simplifying with the initial QPM condition $\Delta K=0$, we arrive at: \begin{eqnarray} \frac{\Delta \omega_1}{\Delta T} = \frac{\frac{\partial n_1}{\partial T}-\frac{\partial n_2}{\partial T}}{2\frac{\partial n_2}{\partial \omega_2}-\frac{\partial n_1}{\partial \omega_1}+\frac{4\pi c}{\Lambda \omega_2^{o2}}}.
\label{eq4} \end{eqnarray} From the above equation, the temperature dependency of the phase matching has two contributions. The first is in the numerator, given by the difference of the thermo-optic coefficients between the fundamental and SH modes. The second is in the denominator, governed by the group indices and an extra constant term. By increasing the numerator and/or reducing the denominator, we could increase the temperature tunability of the LNOI waveguide. For lithium niobate, $\frac{\partial n_e}{\partial T} \gg \frac{\partial n_o}{\partial T}$ \cite{TO2005}, so one could use the fundamental-frequency mode along the ordinary axis (o-polarized) and the SH mode along the extraordinary axis (e-polarized) in order to maximize the thermo-optic coefficient difference \cite{luo2018highly}. However, this comes at the expense of reducing the conversion efficiency by more than 30 times, due to the much smaller nonlinear tensor element $d_{31}$ it can access. This motivates us to explore another possibility: minimizing the denominator. By recognizing $n_g = c/v_g=n+\omega \frac{\partial n}{\partial \omega}$, the group velocity mismatch GVM $= 1/v_{g1}-1/v_{g2}$, and $n_{2}^o-n_{1}^o=\frac{2\pi c}{\Lambda \omega_2^o}$, Eq.~(\ref{eq4}) is reduced to: \begin{equation} \frac{\Delta \omega_1}{\Delta T} = \frac{\frac{\partial n_2}{\partial T}-\frac{\partial n_1}{\partial T}}{GVM\frac{c}{\omega_1}}. \label{eq5} \end{equation} Hence, the GVM can serve as an efficient knob to control the tunability, both in magnitude and in the direction of the QPM wavelength shift. Meanwhile, the largest nonlinear tensor element $d_{33}$ of lithium niobate can be accessed, thanks to the periodic poling. This method addresses the inefficiency drawback of Ref.~\cite{luo2018highly}. We validate the theoretical prediction by performing MODE simulations of the optical modes and their temperature dependency (Lumerical, Inc).
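For completeness, the algebra behind this reduction can be made explicit. Using $n_g=n+\omega\,\partial n/\partial\omega$ (so that $\partial n_2/\partial\omega_2=(n_{g2}-n_2)/\omega_2$ with $\omega_2=2\omega_1$) together with $n_2^o-n_1^o=\frac{2\pi c}{\Lambda\omega_2^o}$, the denominator of Eq.~(\ref{eq4}) collapses to a pure group-index difference:
\begin{equation*}
2\frac{\partial n_2}{\partial \omega_2}-\frac{\partial n_1}{\partial \omega_1}+\frac{4\pi c}{\Lambda \omega_2^{o2}}
= \frac{n_{g2}-n_2}{\omega_1}-\frac{n_{g1}-n_1}{\omega_1}+\frac{n_2-n_1}{\omega_1}
= \frac{n_{g2}-n_{g1}}{\omega_1}
= -\,\mathrm{GVM}\,\frac{c}{\omega_1},
\end{equation*}
so that flipping the signs of both the numerator and the denominator of Eq.~(\ref{eq4}) yields Eq.~(\ref{eq5}).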
As a proof of principle, we compare the thermal effects on the GVM for two LNOI waveguide geometries (Waveguide 1: 400$\times$1850 nm and Waveguide 2: 700$\times$1500 nm). As shown in Fig.~\ref{fig1}, we select the fundamental quasi-transverse-magnetic (quasi-TM) modes at both 1550 nm and 775 nm in Z-cut LNOI waveguides, which allows access to their largest nonlinear tensor element $d_{33}$ while maximizing the mode overlap. With the chosen geometries, the poling period for type-0 phase matching is calculated to be 2.45 $\mu m$ and 3.8 $\mu m$, respectively. Next, by incorporating a temperature-dependent Sellmeier equation into the Lumerical simulator, we calculate their phase matching curves at various temperatures. As shown in Fig.~\ref{fig2}, Waveguide 1 exhibits a red shift with a 0.2 nm/K slope. Waveguide 2, however, exhibits a blue shift with a 0.6 nm/K slope, opposite to the temperature dependency in typical cases \cite{luo2018highly, Wang:18PPLN, Chen:19, fejer1992quasi}. This distinct behavior indicates that the two geometries have GVMs of different signs. From these numbers we extract their GVMs to be -600 fs/mm and 180 fs/mm, respectively. Moreover, the ratio of the thermal tunabilities follows the inverse ratio of the GVM magnitudes, as dictated by Eq.~(\ref{eq5}). The above simulation results are in good agreement with our theoretical prediction, validating the relationship between GVM and thermal tunability. By using a thicker layer of lithium niobate ($\sim$700 nm), we are able to achieve a small and positive GVM ($<$200 fs/mm) and thus a larger thermal tunability. With the waveguide geometry of 700$\times$1500 nm, the required poling period is 55$\%$ larger than for the thinner one, which reduces the fabrication difficulty. Additionally, the better mode confinement in the thicker nanowaveguide promises lower optical propagation loss, which is important for many practical applications.
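These simulated slopes can be sanity-checked numerically against Eq.~(\ref{eq5}); the thermo-optic coefficient difference used below ($2.2\times10^{-5}$/K) is an illustrative assumption, not a value fitted from the simulations:

```python
C = 299_792_458.0   # speed of light, m/s
LAM1 = 1550e-9      # fundamental wavelength, m

def slope_nm_per_K(gvm_s_per_m, dn_dT_diff):
    # Eq. (5) recast for wavelength:
    # d(lambda_1)/dT = -lambda_1 * (dn2/dT - dn1/dT) / (GVM * c)
    return -LAM1 * dn_dT_diff / (gvm_s_per_m * C) * 1e9  # nm/K

DN = 2.2e-5                                 # assumed dn2/dT - dn1/dT, 1/K
wg1 = slope_nm_per_K(-600e-15 / 1e-3, DN)   # Waveguide 1: GVM = -600 fs/mm
wg2 = slope_nm_per_K(+180e-15 / 1e-3, DN)   # Waveguide 2: GVM = +180 fs/mm
# wg1 is positive (red shift, ~0.2 nm/K); wg2 is negative (blue shift,
# ~0.6 nm/K); |wg2/wg1| = |GVM_1/GVM_2| = 600/180, independent of DN.
```

Note that the sign of the shift and the inverse scaling with |GVM| follow directly from the formula, whatever value is assumed for the thermo-optic difference.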
Thus, in the following sections we will focus on the Waveguide 2 configuration, keeping in mind that the same technique is applicable to both. \begin{figure} \centering \includegraphics[width=3in]{fig1.jpg} \caption{Simulated profiles for the 1550-nm and 775-nm quasi-TM$_{00}$ modes. The waveguide cross section (height $\times$ top width) is 400$\times$1850 nm for (a) and (b), and 700$\times$1500 nm for (c) and (d). For the latter, the etched depth is 480 nm, so that a 220-nm LN slab layer remains. The simulated sidewall angle is 62$^\circ$ in all figures.} \label{fig1} \end{figure} \begin{figure} \centering \includegraphics[width=3.4in]{fig2.jpg} \caption{Simulated phase matching curves of Waveguide 1 (a) and Waveguide 2 (b) under different temperatures. Waveguide 1 has a negative GVM (-600 fs/mm) and red shifts at 0.2 nm/K. Waveguide 2 has a positive GVM (180 fs/mm) and blue shifts at 0.6 nm/K.} \label{fig2} \end{figure} \subsection{Fabrication} \begin{figure} \centering \includegraphics[width=3.4in]{fig3.jpg} \caption{(a) Schematic of periodic poling on the Z-cut LNOI wafer placed on an aluminum hot plate. (b) SEM image of a PPLN waveguide with 3.8 $\mu m$ poling period. The fringe pattern is formed after hydrofluoric (HF) acid etching and indicates good uniformity along the propagation direction.} \label{fig3} \end{figure} The PPLN waveguides are fabricated on a Z-cut thin film LNOI wafer (NANOLN Inc.), which is a 700 nm thick LN thin film bonded on a 2-$\mu m$ thermally grown silicon dioxide layer above a silicon substrate, as shown in Fig.~\ref{fig3}(a). We use a bi-layer electron-beam resist (495 PMMA A4 and 950 PMMA A4) and define the poling electrodes using electron-beam lithography (EBL, Elionix ELS-G100, 100 keV). Then 30-nm Cr and 60-nm Au layers are deposited via electron-beam evaporation (AJA e-beam evaporator). The desired comb-like poling electrode pattern (see the metal layer in Fig.~\ref{fig3}(a)) is then created by a metal lift-off process.
We apply several 1-ms high-voltage ($\sim$550 V) electrical pulses on the poling pads to form the domain inversion region. The whole sample is placed on a high-temperature ($\sim 300 ^\circ C$) heater. Elevated temperature is a critical factor for reducing the required coercive voltage in thin-film lithium niobate. Then a second EBL step is carried out to define the LN waveguide in the poled region. Using a process similar to that described in our previous work \cite{Chen:18}, an optimized ion milling step is used to shallowly etch the waveguide structure ($\sim$ 480 nm) with smooth sidewalls and the optimum sidewall angle. An RCA bath (5:1:1, deionized water, ammonia, and hydrogen peroxide) is applied delicately to remove the redeposition while minimizing the sidewall roughness caused by the uneven removal rates of the poled and unpoled regions. Later on, we apply a 2-$\mu m$ silicon oxide overcladding layer on the chip via plasma-enhanced chemical vapor deposition (PECVD). To examine the poling quality, we use hydrofluoric (HF) acid to attack the etched waveguide and check the poling pattern under a scanning electron microscope (SEM). As shown in Fig.~\ref{fig3}(b), we obtain uniform periodic domain inversion along the light propagation direction with such a short poling period (3.8 $\mu m$). Also, we achieve good poling uniformity in thickness (down to 480 nm in depth), which is quite challenging for thick ($>$ 400 nm) lithium niobate, because the poling electric field decays much faster in thin-film lithium niobate than in the traditional bulk case, especially when no ground layer is directly attached to the lithium niobate layer. \begin{figure} \centering \includegraphics[width=3.4in]{fig4.jpg} \caption{(a) The spectra of Fabry-Perot resonances formed by the LNOI waveguide and its two polished facets. (b) Measured phase matching curve at T = $45^\circ$C. The extracted GVM magnitude is 190 fs/mm.
The device is 1-mm long.} \label{fig4} \end{figure} \section{Experimental Results} In this section, we first characterize the linear optical properties of the fabricated 1-mm long PPLN nanowaveguide. A continuous-wave (CW) tunable laser (Santec 550, 1500-1630 nm) is used as the pump to excite the waveguide's fundamental quasi-TM mode through a fiber polarization controller. Two tapered fibers (2 $\mu$m spot diameter, by OZ Optics) are used for the input/output coupling. The coupling losses at 1550 nm and 775 nm are measured to be 5.4$\pm$0.3 dB and 4.7$\pm$0.3 dB, respectively. By using the Fabry-Perot (F-P) method described in \cite{Chen:19}, the propagation loss at 1562 nm is extracted to be about 2 dB/cm. The F-P fringes formed by the two waveguide facets are plotted in Fig.~\ref{fig4}(a). Currently, the propagation loss is primarily attributed to the surface roughness induced by the direct contact of the metal pads used for poling. It can be reduced to $<0.3$ dB/cm by avoiding the metal contact, e.g., by inserting a buffer layer between the lithium niobate layer and the top metal layer. Next, we characterize the waveguide's nonlinear properties and measure the phase matching curve for SHG. Using a similar setup as above, we sweep the wavelength of the telecom laser, and collect and measure the generated second-harmonic light in the visible band with a power meter. As shown in Fig.~\ref{fig4}(b), a clean $\mathrm{sinc}^2$-like curve is measured with minimal side peaks, which indicates high-quality poling. The highest efficiency is measured to be up to 2400 $\%$ W$^{-1}$cm$^{-2}$, and the bandwidth is about 18 nm. From the latter, the absolute value of the GVM is estimated to be 190 fs/mm, in agreement with the Lumerical simulation results. Finally, we study the waveguide's thermal tunability. Figures~\ref{fig5}(a) and (b) plot the phase matching profiles and their peak wavelengths measured at different temperatures.
As shown, the phase matching blue shifts with increasing temperature at a linear rate of $1.71$ nm/K. To the best of our knowledge, such blue shifting, representing a negative temperature dependency of the phase matching wavelength, is quite distinct from bulk LN waveguides or other LNOI waveguides reported so far \cite{Sua:18, luo2018highly, Wang:18PPLN, Chen:19,Elkus:19, Rao:19, niu2020optimizing}. To understand this result, we note that in Eq.~(\ref{eq5}) the numerator $\frac{\partial n_{775}}{\partial T} - \frac{\partial n_{1550}}{\partial T}$ is always positive. For efficient QPM, one can use either $d_{31}$ or $d_{33}$, i.e. 1550-nm o-polarized and 775-nm e-polarized for type-I, or 1550-nm e-polarized and 775-nm e-polarized for type-0. For LN, the fact that $\frac{\partial n_{e,775}}{\partial T} > \frac{\partial n_{e,1550}}{\partial T} > \frac{\partial n_{o,1550}}{\partial T}$ guarantees this positivity in both cases \cite{TO2005}. The negative temperature dependency can therefore only come from the sign of the GVM. In bulk LN or other LNOI waveguides, the GVM is always negative because of the material dispersion; thus they show a positive temperature dependency in wavelength (and hence a negative one in frequency). Figure~\ref{fig5}(a) also shows excellent consistency from 1530 to 1583 nm with $\pm$ 500$\%$ variation in efficiency, attributed mainly to the coupling instability. Meanwhile, the bandwidth varies within $\pm$ 1.5 nm due to material inhomogeneity or imperfect poling. We notice that the measured tunability ($\sim$ 1.71 nm/K) is almost 3 times larger than the simulated result ($\sim$ 0.6 nm/K), as shown in Fig.~\ref{fig5}(b). In addition to the thermo-optic effect, the waveguide also experiences various other effects, such as thermal expansion and the pyroelectric effect, which are not considered in the current model. These effects are even more significant in our wavelength-scale waveguides, since the optical modes are much more tightly confined and thus more sensitive to such environmental changes.
We will investigate these effects in future work. Being efficient and broadband, with a wide tuning range and large tuning capability, our technique is promising for frequency conversion of ultra-short coherent optical pulses, supercontinuum generation \cite{Jankowski:20}, mode-selective frequency conversion \cite{Shahverdi:18} and high-dimensional quantum information processing \cite{Ansari:18}. Currently, the tuning range is limited only by the external temperature controller, which can be replaced by an on-chip microheater \cite{Lee:17} to provide an even larger tuning range. \begin{figure} \centering \includegraphics[width=3.4in]{fig5.jpg} \caption{(a) Measured phase matching curves of the LNOI waveguide, blue shifting as the temperature increases from 25$^\circ$C to 55$^\circ$C in 5 $^\circ$C steps. (b) Temperature dependency of the QPM wavelength, with the tunability fitted to be -1.71 nm/K.} \label{fig5} \end{figure} \section{Conclusion} In summary, we have studied group velocity engineering as a robust tool to realize $\chi^{(2)}$ processes with both high efficiency and large thermal tunability in Z-cut, periodically-poled LNOI waveguides. As a case study, we have demonstrated second harmonic generation on chip at 1900$\pm$500 $\%$W$^{-1}$cm$^{-2}$, with the phase matching wavelength thermally tuned at -1.71 nm/K. This exceptional tunability, obtained without degrading the conversion efficiency or bandwidth, is valuable in offsetting the inevitable nanofabrication errors of $\chi^{(2)}$ circuits, especially as they are mass integrated on chip. Among other applications, our technique holds potential for coherent light generation from the visible to the mid-IR, providing both high power efficiency and wavelength tunability. Furthermore, with broadband (about 18 nm) phase matching in well-maintained profiles over 50 nm of thermal tuning, our chips are suitable for ultra-short pulse applications.
Finally, the type-0 periodic poling technique demonstrated in the present Z-cut LNOI waveguides could potentially be applied to LNOI microring cavities for significantly enhanced frequency conversion efficiency towards single photon nonlinearity \cite{chen2017observation, Chen:19optica, Lu:19}. {\bf Acknowledgement:} The research was supported in part by National Science Foundation (Award \#1641094 \& \#1842680) and National Aeronautics and Space Administration (Grant Number 80NSSC19K1618). Device fabrication was performed at Advanced Science Research Center, City University of New York. {\bf Disclosures:} The authors declare that there are no conflicts of interest related to this Letter.
\section{Introduction} This paper concerns the Cauchy problem for the Navier-Stokes system in the space-time domain $Q_{\infty}:=\mathbb{R}^3\times ]0,\infty[$ for vector-valued function $v=(v_1,v_2,v_3)=(v_i)$ and scalar function $p$, satisfying the equations \begin{equation}\label{directsystem} \partial_tv+v\cdot\nabla v-\Delta v=-\nabla p,\qquad\mbox{div}\,v=0 \end{equation} in $Q_{\infty}$, with the initial conditions \begin{equation}\label{directic} v(\cdot,0)=u_0(\cdot).\end{equation} This paper will concern a certain class of solutions to (\ref{directsystem})-(\ref{directic}), which we will call weak Leray-Hopf solutions. Before defining this class, we introduce some necessary notation. Let $J(\mathbb{R}^3)$ be the closure of $$C^{\infty}_{0,0}(\mathbb{R}^3):=\{u\in C_{0}^{\infty}(\mathbb{R}^3): \rm{div}\,\,u=0\}$$ with respect to the $L_{2}(\mathbb{R}^3)$ norm. Moreover, $\stackrel{\circ}J{^1_2}(\mathbb{R}^3)$ is defined as the completion of the space $C^\infty_{0,0}(\mathbb{R}^3)$ with respect to the $L_2$-norm and the Dirichlet integral $$\Big(\int\limits_{\mathbb{R}^3} |\nabla v|^2dx\Big)^\frac 12 .$$ Let us now define the notion of 'weak Leray-Hopf solutions' to the Navier-Stokes system. \begin{definition}\label{weakLerayHopf} Consider $0<S\leq \infty$. Let \begin{equation}\label{initialdatacondition} u_0\in J(\mathbb{R}^3). \end{equation} We say that $v$ is a 'weak Leray-Hopf solution' to the Navier-Stokes Cauchy problem in $Q_S:=\mathbb{R}^3 \times ]0,S[$ if it satisfies the following properties: \begin{equation} v\in \mathcal{L}(S):= L_{\infty}(0,S; J(\mathbb{R}^3))\cap L_{2}(0,S;\stackrel{\circ} J{^1_2}(\mathbb{R}^3)). \end{equation} Additionally, for any $w\in L_{2}(\mathbb{R}^3)$: \begin{equation}\label{vweakcontinuity} t\rightarrow \int\limits_{\mathbb R^3} w(x)\cdot v(x,t)dx \end{equation} is a continuous function on $[0,S]$ (the semi-open interval should be taken if $S=\infty$).
The Navier-Stokes equations are satisfied by $v$ in a weak sense: \begin{equation}\label{vsatisfiesNSE} \int\limits_{0}^{S}\int\limits_{\mathbb{R}^3}(v\cdot \partial_{t} w+v\otimes v:\nabla w-\nabla v:\nabla w) dxdt=0 \end{equation} for any divergence-free test function $$w\in C_{0,0}^{\infty}(Q_{S}):=\{ \varphi\in C_{0}^{\infty}(Q_{S}):\,\,\rm{div}\,\varphi=0\}.$$ The initial condition is satisfied strongly in the $L_{2}(\mathbb{R}^3)$ sense: \begin{equation}\label{vinitialcondition} \lim_{t\rightarrow 0^{+}} \|v(\cdot,t)-u_0\|_{L_{2}(\mathbb{R}^3)}=0. \end{equation} Finally, $v$ satisfies the energy inequality: \begin{equation}\label{venergyineq} \|v(\cdot,t)\|_{L_{2}(\mathbb{R}^3)}^2+2\int\limits_{0}^t\int\limits_{\mathbb{R}^3} |\nabla v(x,t')|^2 dxdt'\leq \|u_0\|_{L_{2}(\mathbb{R}^3)}^2 \end{equation} for all $t\in [0,S]$ (the semi-open interval should be taken if $S=\infty$). \end{definition} The corresponding global in time existence result, proven in \cite{Le}, is as follows. \begin{theorem} Let $u_0\in J(\mathbb{R}^3)$. Then, there exists at least one weak Leray-Hopf solution on $Q_{\infty}$. \end{theorem} There are two big open problems concerning weak Leray-Hopf solutions. \begin{enumerate} \item (Regularity)\footnote{This is closely related to one of the Millennium problems, see \cite{fefferman}.} Given any initial data $u_0\in J(\mathbb{R}^3)$, is there a global in time weak Leray-Hopf solution that is regular for all times\footnote{By regular for all time, we mean $C^{\infty}(\mathbb{R}^3 \times ]0,\infty[)$ with every space-time derivative in $L_{\infty}(\epsilon,T; L_{\infty}(\mathbb{R}^3))$ for any $0<\epsilon<T<\infty$.}? \item (Uniqueness) Given any initial data $u_0\in J(\mathbb{R}^3)$, is the associated global in time weak Leray-Hopf solution unique in the class of weak Leray-Hopf solutions?
\end{enumerate} Under certain restrictions on the initial data, it is known since \cite{Le} that 1)(Regularity) implies 2)(Uniqueness)\footnote{The connection made there concerns the slightly narrower class of 'turbulent solutions' defined by Leray in \cite{Le}.}. However, this implication may not be valid for more general classes of initial data. Indeed, certain unverified non-uniqueness scenarios for weak Leray-Hopf solutions have recently been suggested in \cite{jiasverak2015}. In the scenario suggested there, the non-unique solutions are regular. This paper is concerned with the following very natural question arising from 2)(Uniqueness). \begin{itemize} \item[] \textbf{(Q) Which $\mathcal{Z}\subset \mathcal{S}^{'}(\mathbb{R}^3)$ are such that $u_0 \in J(\mathbb{R}^3)\cap\mathcal{Z}$ implies uniqueness of the associated weak Leray-Hopf solutions on some time interval?} \end{itemize} A vast number of papers are related to this question. We now give some incomplete references, which are directly concerned with this question and closely related to this paper. It was proven in \cite{Le} that for $\mathcal{Z}=\stackrel{\circ} J{^1_2}(\mathbb{R}^3)$ and $\mathcal{Z}= L_{p}(\mathbb{R}^3)$ ($3<p\leq \infty$), we have short time uniqueness in the slightly narrower class of 'turbulent solutions'. The same conclusion was shown to hold in \cite{FJR} for the weak Leray-Hopf class. It was later shown in \cite{Kato} that $\mathcal{Z}= L_{3}(\mathbb{R}^3)$ was sufficient for short time uniqueness of weak Leray-Hopf solutions. At the start of the 21st Century, \cite{GP2000} provided a positive answer to question \textbf{(Q)} for the homogeneous Besov spaces $$\mathcal{Z}= \dot{B}_{p,q}^{-1+\frac{3}{p}}(\mathbb{R}^3)$$ with $p, q<\infty$ and $$\frac{3}{p}+\frac{2}{q}\geq 1.$$ An incomplete selection of further results in this direction are \cite{chemin}, \cite{dongzhang}-\cite{dubois}, \cite{germain} and \cite{LR1}, for example.
A more complete history regarding question \textbf{(Q)} can be found in \cite{germain}. An approach (which we will refer to as approach 1) to determining $\mathcal{Z}$ such that \textbf{(Q)} is true was first used for the Navier-Stokes equations in \cite{Le} and is frequently found in the literature. The principal aim of approach 1 is to show that for certain $\mathcal{Z}$ and $u_0\in\mathcal{Z}\cap J(\mathbb{R}^3)$, one can construct a weak Leray-Hopf solution $V(u_0)$ belonging to a path space $\mathcal{X}_{T}$ having certain features. Specifically, $\mathcal{X}_{T}$ has the property that any weak Leray-Hopf solution (with arbitrary initial data $u_0\in J(\mathbb{R}^3)$) in $\mathcal{X}_{T}$ is unique amongst all weak Leray-Hopf solutions with the same initial data. A crucial step in approach 1 is the establishment of appropriate estimates of the trilinear form $F:\mathcal{L}(T)\times\mathcal{L}(T)\times \mathcal{X}_{T}\times ]0,T[\rightarrow \mathbb{R}$ given by: \begin{equation}\label{trilinearformweakstrong} F(a,b,c,t):=\int_{0}^{t}\int\limits_{\mathbb{R}^3} (a\otimes c):\nabla b\, dyd\tau. \end{equation} As mentioned in \cite{germain}, estimates of this trilinear form typically play two roles. The first is to provide rigorous justification of the energy inequality for $w:= V(u_0)-u(u_0)$, where $u(u_0)$ is another weak Leray-Hopf solution with the same initial data. The second is to allow the applicability of Gronwall's lemma to infer $w\equiv 0$ on $Q_{T}$. The estimates of the trilinear form needed for approach 1 appear to be restrictive with regard to the spaces $\mathcal{Z}$ and $\mathcal{X}_{T}$ that can be considered. Consequently, \textbf{(Q)} has remained open for the Besov spaces $$\mathcal{Z}= \dot{B}_{p,q}^{-1+\frac{3}{p}}(\mathbb{R}^3)$$ with $p\in ]3,\infty[,\,\, q\in [1,\infty[$ and $$\frac{3}{p}+\frac{2}{q}<1.$$ The obstacle to using approach 1 in this case has been explicitly noted in \cite{GP2000} and \cite{germain}.
\begin{itemize} \item[] 'It does not seem possible to improve on the continuity (of the trilinear term) without using in a much deeper way that not only $u$ and $V(u_0)$ are in the Leray class $\mathcal{L}$ but also solutions of the equation.' (\cite{GP2000}) \end{itemize} For analogous Besov spaces on bounded domains, question \textbf{(Q)} has also been considered recently in \cite{farwiggiga2016}-\cite{farwiggigahsu2016}. There, a restricted version of \textbf{(Q)} is shown to hold. Namely, the authors prove uniqueness within the subclass of 'well-chosen weak solutions', describing weak Leray-Hopf solutions constructed by concrete approximation procedures. Furthermore, in \cite{farwiggiga2016}-\cite{farwiggigahsu2016} it is explicitly mentioned that a complete answer to \textbf{(Q)} for these cases is 'out of reach'. In this paper, we provide a positive answer to \textbf{(Q)} for $\mathcal{Z}= \mathbb{\dot{B}}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3)$, with any $3<p<\infty$. Here, $\mathbb{\dot{B}}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3)$ is the closure of the smooth compactly supported functions in $\dot{B}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3)$ and is such that $$\dot{B}^{-1+\frac{3}{p}}_{p,p}(\mathbb{R}^3)\hookrightarrow \mathbb{\dot{B}}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3).$$ In fact, this is a corollary of our main theorem, which provides a positive answer to \textbf{(Q)} for other classes of $\mathcal{Z}$. From this point onwards, for $p_0>3$, we will denote $$s_{p_0}:=-1+\frac{3}{p_0}<0.$$ Moreover, for $ 2<\alpha\leq 3$ and $p_{1}>\alpha$, we define $$s_{p_1,\alpha}:= -\frac{3}{\alpha}+\frac{3}{p_1}<0.$$ Now, we state the main theorem of this paper. \begin{theorem}\label{weakstronguniquenessBesov} Fix $2<\alpha \leq 3.$ \begin{itemize} \item For $2<\alpha< 3$, take any $p$ such that $\alpha <p< \frac{\alpha}{3-\alpha}$. \item For $\alpha=3$, take any $p$ such that $3<p<\infty$.
\end{itemize} Consider a weak Leray-Hopf solution $u$ to the Navier-Stokes system on $Q_{\infty}$, with initial data $$u_0\in VMO^{-1}(\mathbb{R}^3)\cap \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)\cap J(\mathbb{R}^3).$$ Then, there exists a $T(u_0)>0$ such that all weak Leray-Hopf solutions on $Q_{\infty}$, with initial data $u_0$, coincide with $u$ on $Q_{T(u_0)}:=\mathbb{R}^3\times ]0,T(u_0)[.$ \end{theorem} Let us remark that previous results of this type are given in \cite{chemin} and \cite{dongzhang}, with the additional assumption that $u_0$ belongs to a nonhomogeneous Sobolev space $H^{s}(\mathbb{R}^3)$, with $s>0$. By comparison, the assumptions of Theorem \ref{weakstronguniquenessBesov} are weaker. This follows because of the following embeddings. For $s>0$ there exists $2<\alpha\leq 3$ such that for $p\geq \alpha$: $${H}^{s}(\mathbb{R}^3)\hookrightarrow L_{\alpha}(\mathbb{R}^3)\hookrightarrow \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3). $$ \begin{cor}\label{cannoneweakstronguniqueness} Let $3<p<\infty$. Consider a weak Leray-Hopf solution $u$ to the Navier-Stokes system on $Q_{\infty}$, with initial data $$u_0\in \dot{\mathbb{B}}_{p,\infty}^{s_{p}}(\mathbb{R}^3)\cap J(\mathbb{R}^3).$$ Then, there exists a $T(u_0)>0$ such that all weak Leray-Hopf solutions on $Q_{\infty}$, with initial data $u_0$, coincide with $u$ on $Q_{T(u_0)}:=\mathbb{R}^3\times ]0,T(u_0)[.$ \end{cor} Our main tool to prove Theorem \ref{weakstronguniquenessBesov} is the new observation that weak Leray-Hopf solutions, with this class of initial data, have stronger continuity properties near $t=0$ than general members of the energy class $\mathcal{L}$. In \cite{cheminplanchon}, a similar property was obtained for the mild solution with initial data in $\dot{B}^{-\frac{1}{4}}_{4,4}(\mathbb{R}^3)$. Recently, in the case of 'global weak $L_3$ solutions' with $L_3(\mathbb{R}^3)$ initial data, properties of this type were established in \cite{sersve2016}.
See also \cite{barkerser16} for the case of $L^{3,\infty}$ initial data, in the context of 'global weak $L^{3,\infty}(\mathbb{R}^3)$ solutions'. Let us mention that throughout this paper, $$S(t)u_0(x):=(\Gamma(\cdot,t)\star u_0)(x),$$ where $\Gamma(x,t)$ is the kernel for the heat flow in $\mathbb{R}^3$. Here is our main Lemma. \begin{lemma}\label{estnearinitialforLeraywithbesov} Take $\alpha$ and $p$ as in Theorem \ref{weakstronguniquenessBesov}. Assume that $$ u_0 \in J(\mathbb{R}^3)\cap \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3). $$ Then for any weak Leray-Hopf solution $u$ on $Q_{T}:=\mathbb{R}^3 \times ]0,T[$, with initial data $u_0$, we infer the following. There exist $$\beta(p,\alpha)>0$$ and $$\gamma(\|u_{0}\|_{\dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)}, p,\alpha)>0$$ such that for $t\leq \min(1,\gamma,T)$: \begin{equation}\label{estimatenearinitialtime} \|u(\cdot,t)-S(t)u_0\|_{L_{2}}^{2}\leq t^{\beta} c(p,\alpha, \|u_0\|_{L_{2}(\mathbb{R}^3)}, \|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)}). \end{equation} \end{lemma} This then allows us to apply a less restrictive version of approach 1. Namely, we show that for any initial data in this specific class, there exists a weak Leray-Hopf solution $V(u_0)$ on $Q_T$, which belongs to a path space $\mathcal{X}_{T}$ with the following property: $\mathcal{X}_{T}$ grants uniqueness for weak Leray-Hopf solutions with the same initial data in this specific class (rather than for arbitrary initial data in $J(\mathbb{R}^3)$, as required in approach 1). A related strategy has been used in \cite{dongzhang}. However, in \cite{dongzhang} an additional restriction is imposed, requiring that the initial data has positive Sobolev regularity. \newpage \textbf{Remarks} \begin{enumerate} \item Another notion of solution to the Cauchy problem of the Navier-Stokes system was pioneered in \cite{Kato} and \cite{KatoFujita}.
These solutions, called 'mild solutions' to the Navier-Stokes system, are constructed using a contraction principle and are unique in their class. Many authors have given classes of initial data for which mild solutions of the Navier-Stokes system exist. See, for example, \cite{cannone}, \cite{GigaMiy1989}, \cite{KozYam1994}, \cite{Plan1996} and \cite{Taylor1992}. The optimal result in this direction was established in \cite{kochtataru}. The authors there proved global in time existence of mild solutions for solenoidal initial data with small $BMO^{-1}(\mathbb{R}^3)$ norm, as well as local in time existence for solenoidal $u_0\in {VMO}^{-1}(\mathbb{R}^3)$. Subsequently, the results of the paper \cite{LRprioux} implied that if $u_0 \in J(\mathbb{R}^3)\cap {VMO}^{-1}(\mathbb{R}^3)$ then the mild solution is a weak Leray-Hopf solution. Consequently, we formulate the following plausible conjecture \textbf{(C)}. \begin{itemize} \item[] \textbf{(C) Question (Q) is affirmative for $\mathcal{Z}={VMO}^{-1}(\mathbb{R}^3)$.} \end{itemize} \item In \cite{GregoryNote1}, the following open question was discussed: \begin{itemize} \item[] \textbf{(Q.1) { Assume that $u_{0}^{(k)}\in J(\mathbb{R}^3)$ are compactly supported in a fixed compact set and converge to $u_0\equiv 0$ weakly in $L_2(\mathbb{R}^3)$. Let $u^{(k)}$ be a weak Leray-Hopf solution with the initial value $u_0^{(k)}$. Can we conclude that $u^{(k)}$ converge to $u\equiv 0$ in the sense of distributions? }} \end{itemize} In \cite{GregoryNote1} it was shown that (Q.1) holds true under the following additional restrictions.
Namely \begin{equation}\label{initialdatanormbdd} \sup_{k}\|u_{0}^{(k)}\|_{L_{s}(\mathbb{R}^3)}<\infty\,\,\,\,\,\,\,\mbox{for some}\,\,3<s\leq\infty \end{equation} and that $u^{(k)}$ and its associated pressure $p^{(k)}$ satisfy the local energy inequality: \begin{equation}\label{localenergyinequality} \begin{split} \int\limits_{\mathbb R^3}\varphi(x,t)|u^{(k)}(x,t)|^2dx+2\int\limits_{0}^t\int\limits_{\mathbb R^3}\varphi |\nabla u^{(k)}|^2 dxds\leq\\ \leq \int\limits_{0}^{t}\int\limits_{\mathbb R^3}|u^{(k)}|^2(\partial_{t}\varphi+\Delta\varphi)+u^{(k)}\cdot\nabla\varphi(|u^{(k)}|^2+2p^{(k)}) dxds \end{split} \end{equation} for all non-negative functions $\varphi\in C_{0}^{\infty}(Q_{\infty}).$ Subsequently, in \cite{sersve2016} it was shown that the same conclusion holds with (\ref{initialdatanormbdd}) replaced by the weaker assumption that \begin{equation}\label{criticalL3seqbdd} \sup_{k}\|u_{0}^{(k)}\|_{L_{3}(\mathbb{R}^3)}<\infty. \end{equation} In \cite{barkerser16}, this was further weakened to boundedness of $u_{0}^{(k)}$ in $L^{3,\infty}(\mathbb{R}^3)$. Lemma \ref{estnearinitialforLeraywithbesov} has the consequence that \textbf{(Q.1)} still holds true if (\ref{initialdatanormbdd}) is replaced by the assumption that $u_{0}^{(k)}$ is bounded in the supercritical Besov spaces $\dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)$\footnote{ with $p$ and $\alpha$ as in Theorem \ref{weakstronguniquenessBesov}}. Consequently, as the following continuous embedding holds (recall $\alpha< p$ and $2<\alpha\leq 3$), $$L_{\alpha}(\mathbb{R}^3)\hookrightarrow \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3),$$ we see that this improves the previous assumptions under which \textbf{(Q.1)} holds true.
\item In \cite{Serrin} and \cite{prodi}, it was shown that if $u$ is a weak Leray-Hopf solution on $Q_{T}$ and satisfies \begin{equation}\label{ladyzhenskayaserrinprodi} u\in L_{q}(0,T; L_{p}(\mathbb{R}^3))\,\,\,\,\,\,\,\,\,\frac{3}{p}+\frac{2}{q}=1\,\, (3<p\leq \infty\,\,\mbox{and}\,\, 2\leq q<\infty), \end{equation} then $u$ coincides on $Q_{T}$ with any other weak Leray-Hopf solution with the same initial data. The same conclusion for the endpoint case $u\in L_{\infty}(0,T; L_{3}(\mathbb{R}^3))$ appeared to be much more challenging and was settled in \cite{ESS2003}. As a consequence of Theorem \ref{weakstronguniquenessBesov}, we are able to extend the uniqueness criterion (\ref{ladyzhenskayaserrinprodi}) for weak Leray-Hopf solutions. Let us state this as a Proposition. \begin{pro}\label{extendladyzhenskayaserrinprodi} Suppose $u$ and $v$ are weak Leray-Hopf solutions on $Q_{\infty}$ with the same initial data $u_0 \in J(\mathbb{R}^3)$. Then there exists an $\epsilon_{*}=\epsilon_{*}(p,q)>0$ such that if either \begin{itemize} \item \begin{equation}\label{extendladyzhenskayaserrinprodi1} u \in L^{q,s}(0,T; L^{p,s}(\mathbb{R}^3))\,\,\,\,\,\,\,\,\,\frac{3}{p}+\frac{2}{q}=1 \end{equation} \begin{equation}\label{integrabilitycondition1} (3<p< \infty\,,\,\, 2< q<\infty\,\,\mbox{and}\,\, 1\leq s<\infty) \end{equation} \item or \begin{equation}\label{extendladyzhenskayaserrinprodi2} u \in L^{q,\infty}(0,T; L^{p,\infty}(\mathbb{R}^3))\,\,\,\,\,\,\,\,\,\frac{3}{p}+\frac{2}{q}=1 \end{equation} \begin{equation}\label{integrabilitycondition2} (3< p< \infty\,,\,\, 2< q <\infty) \end{equation} with \begin{equation}\label{smallness} \|u\|_{L^{q,\infty}(0,T; L^{p,\infty}(\mathbb{R}^3))}\leq \epsilon_{*}, \end{equation} then $u\equiv v$ on $Q_{T}:=\mathbb{R}^3 \times ]0,T[$.
\end{itemize} \end{pro} Let us mention that for sufficiently small $\epsilon_{*}$, it was shown in \cite{Sohr} that if $u$ is a weak Leray-Hopf solution on $Q_{\infty}$ satisfying either (\ref{extendladyzhenskayaserrinprodi1})-(\ref{integrabilitycondition1}) or (\ref{extendladyzhenskayaserrinprodi2})-(\ref{smallness}), then $u$ is regular\footnote{By regular on $Q_{T}$, we mean $C^{\infty}(\mathbb{R}^3 \times ]0,T[)$ with every space-time derivative in $L_{\infty}(\epsilon,T; L_{\infty}(\mathbb{R}^3))$ for any $0<\epsilon<T$.} on $Q_{T}$. To the best of our knowledge, it was not previously known whether these conditions on $u$ were sufficient to grant uniqueness on $Q_{T}$, amongst all weak Leray-Hopf solutions with the same initial value.\par Uniqueness for the endpoint case $(p,q)=(3,\infty)$ of (\ref{extendladyzhenskayaserrinprodi2})-(\ref{smallness}) is simpler and already known. A proof can be found in \cite{LR1}, for example. Hence, we omit this case. \end{enumerate} \setcounter{equation}{0} \section{Preliminaries} \subsection{Notation} In this subsection, we will introduce notation that will be repeatedly used throughout the rest of the paper. We adopt the usual summation convention throughout the paper.
For arbitrary vectors $a=(a_{i}),\,b=(b_{i})$ in $\mathbb{R}^{n}$ and for arbitrary matrices $F=(F_{ij}),\,G=(G_{ij})$ in $\mathbb{M}^{n}$ we put $$a\cdot b=a_{i}b_{i},\,|a|=\sqrt{a\cdot a},$$ $$a\otimes b=(a_{i}b_{j})\in \mathbb{M}^{n},$$ $$FG=(F_{ik}G_{kj})\in \mathbb{M}^{n}\!,\,\,F^{T}=(F_{ji})\in \mathbb{M}^{n}\!,$$ $$F:G= F_{ij}G_{ij},\,|F|=\sqrt{F:F}.$$ For spatial domains and space-time domains, we will make use of the following notation: $$B(x_0,R)=\{x\in\mathbb{R}^3: |x-x_0|<R\},$$ $$B(\theta)=B(0,\theta),\,\,\,B=B(1),$$ $$ Q(z_0,R)=B(x_0,R)\times ]t_0-R^2,t_0[,\,\,\, z_{0}=(x_0,t_0),$$ $$Q(\theta)=Q(0,\theta),\,\,Q = Q(1),\,\, Q_{a,b}:=\mathbb{R}^3\times ]a,b[.$$ Here $-\infty\leq a<b\leq \infty.$ In the special case where $a=0$ we write $Q_{b}:= Q_{0,b}.$ For $\Omega\subseteq\mathbb{R}^3$, mean values of integrable functions are denoted as follows: $$[p]_{\Omega}=\frac{1}{|\Omega|}\int\limits_{\Omega}p(x)dx.$$ For $\Omega\subseteq\mathbb{R}^3$, the space $[C^{\infty}_{0,0}(\Omega)]^{L_{s}(\Omega)}$ is defined to be the closure of $$C^{\infty}_{0,0}(\Omega):=\{u\in C_{0}^{\infty}(\Omega): \rm{div}\,\,u=0\}$$ with respect to the $L_{s}(\Omega)$ norm. For $s=2$, we define $$J(\Omega):= [C^{\infty}_{0,0}(\Omega)]^{L_{2}(\Omega)}.$$ We define $\stackrel{\circ}J{^1_2}(\Omega)$ as the completion of $C^\infty_{0,0}(\Omega)$ with respect to the $L_2$-norm and the Dirichlet integral $$\Big(\int\limits_{\Omega} |\nabla v|^2dx\Big)^\frac 12 .$$ If $X$ is a Banach space with norm $\|\cdot\|_{X}$, then $L_{s}(a,b;X)$, with $a<b$ and $s\in [1,\infty[$, will denote the usual Banach space of strongly measurable $X$-valued functions $f(t)$ on $]a,b[$ such that $$\|f\|_{L_{s}(a,b;X)}:=\left(\int\limits_{a}^{b}\|f(t)\|_{X}^{s}dt\right)^{\frac{1}{s}}<+\infty.$$ The usual modification is made if $s=\infty$.
With this notation, we will define $$L_{s,l}(Q_{a,b}):= L_{l}(a,b; L_{s}(\mathbb{R}^3)).$$ Let $C([a,b]; X)$ denote the space of continuous $X$-valued functions on $[a,b]$ with the usual norm. In addition, let $C_{w}([a,b]; X)$ denote the space of $X$-valued functions, which are continuous from $[a,b]$ to the weak topology of $X$.\\ We define the following Sobolev spaces with mixed norms: $$ W^{1,0}_{m,n}(Q_{a,b})=\{ v\in L_{m,n}(Q_{a,b}): \|v\|_{L_{m,n}(Q_{a,b})}+\|\nabla v\|_{L_{m,n}(Q_{a,b})}<\infty\},$$ $$ W^{2,1}_{m,n}(Q_{a,b})=\{ v\in L_{m,n}(Q_{a,b}): \|v\|_{L_{m,n}(Q_{a,b})}+\|\nabla^2 v\|_{L_{m,n}(Q_{a,b})}+\|\partial_{t}v\|_{L_{m,n}(Q_{a,b})}<\infty\}.$$ \subsection{Relevant Function Spaces} \subsubsection{Homogeneous Besov Spaces and $BMO^{-1}$} We first introduce the frequency cut-off operators of the Littlewood-Paley theory. The definitions we use are contained in \cite{bahourichemindanchin}. For a tempered distribution $f$, let $\mathcal{F}(f)$ denote its Fourier transform. Let $C$ be the annulus $$\{\xi\in\mathbb{R}^3: 3/4\leq|\xi|\leq 8/3\}.$$ Let $\chi\in C_{0}^{\infty}(B(4/3))$ and $\varphi\in C_{0}^{\infty}(C)$ be such that \begin{equation}\label{valuesbetween0and1} \forall \xi\in\mathbb{R}^3,\,\,0\leq \chi(\xi), \varphi(\xi)\leq 1 , \end{equation} \begin{equation}\label{dyadicpartition} \forall \xi\in\mathbb{R}^3,\,\,\chi(\xi)+\sum_{j\geq 0} \varphi(2^{-j}\xi)=1 \end{equation} and \begin{equation}\label{dyadicpartition.1} \forall \xi\in\mathbb{R}^3\setminus\{0\},\,\,\sum_{j\in\mathbb{Z}} \varphi(2^{-j}\xi)=1. \end{equation} For a tempered distribution $a$, let us define for $j\in\mathbb{Z}$: \begin{equation}\label{dyadicblocks} \dot{\Delta}_j a:=\mathcal{F}^{-1}(\varphi(2^{-j}\xi)\mathcal{F}(a))\,\,\mbox{and}\,\, \dot{S}_j a:=\mathcal{F}^{-1}(\chi(2^{-j}\xi)\mathcal{F}(a)). \end{equation} Now we are in a position to define the homogeneous Besov spaces on $\mathbb{R}^3$. Let $s\in\mathbb{R}$ and $(p,q)\in [1,\infty]\times [1,\infty]$.
Then $\dot{B}^{s}_{p,q}(\mathbb{R}^3)$ is the space of tempered distributions $u$ such that \begin{equation}\label{Besovdef1} \lim_{j\rightarrow-\infty} \|\dot{S}_{j} u\|_{L_{\infty}(\mathbb{R}^3)}=0, \end{equation} \begin{equation}\label{Besovdef2} \|u\|_{\dot{B}^{s}_{p,q}(\mathbb{R}^3)}:= \Big(\sum_{j\in\mathbb{Z}}2^{jsq}\|\dot{\Delta}_{j} u\|_{L_p(\mathbb{R}^3)}^{q}\Big)^{\frac{1}{q}}<\infty \end{equation} (with the usual modification if $q=\infty$). \begin{remark}\label{besovremark1} This definition provides a Banach space if $s<\frac{3}{p}$, see \cite{bahourichemindanchin}. \end{remark} \begin{remark}\label{besovremark2} It is known that if $1\leq q_{1}\leq q_{2}\leq\infty$, $1\leq p_1\leq p_2\leq\infty$ and $s\in\mathbb{R}$, then $$\dot{B}^{s}_{p_1,q_{1}}(\mathbb{R}^3)\hookrightarrow\dot{B}^{s-3(\frac{1}{p_1}-\frac{1}{p_2})}_{p_2,q_{2}}(\mathbb{R}^3).$$ \end{remark} \begin{remark}\label{besovremark3} It is known that for $s=-2s_{1}<0$ and $p,q\in [1,\infty]$, the norm can be characterised by the heat flow. Namely, there exists a $C>1$ such that for all $u\in\dot{B}^{-2s_{1}}_{p,q}(\mathbb{R}^3)$: $$C^{-1}\|u\|_{\dot{B}^{-2s_{1}}_{p,q}(\mathbb{R}^3)}\leq \|\|t^{s_1} S(t)u\|_{L_{p}(\mathbb{R}^3)}\|_{L_{q}(\frac{dt}{t})}\leq C\|u\|_{\dot{B}^{-2s_{1}}_{p,q}(\mathbb{R}^3)}.$$ Here, $$S(t)u(x):=(\Gamma(\cdot,t)\star u)(x),$$ where $\Gamma(x,t)$ is the kernel for the heat flow in $\mathbb{R}^3$. \end{remark} We will also need the following Proposition, whose statement and proof can be found in \cite{bahourichemindanchin} (Proposition 2.22 there), for example. In the Proposition below we use the notation \begin{equation}\label{Sh} \mathcal{S}_{h}^{'}:=\{u\in\mathcal{S}^{'}(\mathbb{R}^3):\, \lim_{j\rightarrow -\infty}\|\dot{S}_{j}u\|_{L_{\infty}(\mathbb{R}^3)}=0\}. \end{equation} \begin{pro}\label{interpolativeinequalitybahourichemindanchin} A constant $C$ exists with the following properties.
If $s_{1}$ and $s_{2}$ are real numbers such that $s_{1}<s_{2}$ and $\theta\in ]0,1[$, then we have, for any $p\in [1,\infty]$ and any $u\in \mathcal{S}_{h}^{'}$, \begin{equation}\label{interpolationactual} \|u\|_{\dot{B}_{p,1}^{\theta s_{1}+(1-\theta)s_{2}}(\mathbb{R}^3)}\leq \frac{C}{s_2-s_1}\Big(\frac{1}{\theta}+\frac{1}{1-\theta}\Big)\|u\|_{\dot{B}_{p,\infty}^{s_1}(\mathbb{R}^3)}^{\theta}\|u\|_{\dot{B}_{p,\infty}^{s_2}(\mathbb{R}^3)}^{1-\theta}. \end{equation} \end{pro} Finally, $BMO^{-1}(\mathbb{R}^3)$ is the space of all tempered distributions such that the following norm is finite: \begin{equation}\label{bmo-1norm} \|u\|_{BMO^{-1}(\mathbb{R}^3)}:=\sup_{x\in\mathbb{R}^3,R>0}\Big(\frac{1}{|B(0,R)|}\int\limits_0^{R^2}\int\limits_{B(x,R)} |S(t)u|^2 dydt\Big)^{\frac{1}{2}}. \end{equation} Note that $VMO^{-1}(\mathbb{R}^3)$ is the subspace that coincides with the closure of the test functions $C_{0}^{\infty}(\mathbb{R}^3)$ with respect to the norm (\ref{bmo-1norm}). \subsubsection{Lorentz spaces} Given a measurable subset $\Omega\subset\mathbb{R}^{n}$, let us define the Lorentz spaces. For a measurable function $f:\Omega\rightarrow\mathbb{R}$ define: \begin{equation}\label{defdist} d_{f,\Omega}(\alpha):=\mu(\{x\in \Omega : |f(x)|>\alpha\}), \end{equation} where $\mu$ denotes the Lebesgue measure. The Lorentz space $L^{p,q}(\Omega)$, with $p\in [1,\infty[$, $q\in [1,\infty]$, is the set of all measurable functions $g$ on $\Omega$ such that the quasinorm $\|g\|_{L^{p,q}(\Omega)}$ is finite. Here: \begin{equation}\label{Lorentznorm} \|g\|_{L^{p,q}(\Omega)}:= \Big(p\int\limits_{0}^{\infty}\alpha^{q}d_{g,\Omega}(\alpha)^{\frac{q}{p}}\frac{d\alpha}{\alpha}\Big)^{\frac{1}{q}}, \end{equation} \begin{equation}\label{Lorentznorminfty} \|g\|_{L^{p,\infty}(\Omega)}:= \sup_{\alpha>0}\alpha d_{g,\Omega}(\alpha)^{\frac{1}{p}}. \end{equation}\\ It is known that for $p>1$ there exists a norm, equivalent to the quasinorms defined above, for which $L^{p,q}(\Omega)$ is a Banach space.
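As a simple illustration of these quasinorms (a standard computation, recorded here only for orientation and not used elsewhere), consider $f(x)=|x|^{-\frac{3}{p}}$ on $\mathbb{R}^3$ with $p\in ]1,\infty[$. Its distribution function is $$d_{f,\mathbb{R}^3}(\alpha)=\mu(\{x\in\mathbb{R}^3: |x|<\alpha^{-\frac{p}{3}}\})=\frac{4\pi}{3}\,\alpha^{-p},$$ so that $$\|f\|_{L^{p,\infty}(\mathbb{R}^3)}=\sup_{\alpha>0}\alpha\Big(\frac{4\pi}{3}\,\alpha^{-p}\Big)^{\frac{1}{p}}=\Big(\frac{4\pi}{3}\Big)^{\frac{1}{p}}<\infty,$$ although $f\notin L_{p}(\mathbb{R}^3)$, since $|f|^{p}=|x|^{-3}$ fails to be integrable both near the origin and near infinity.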
For $p\in [1,\infty[$ and $1\leq q_{1}< q_{2}\leq \infty$, we have the following continuous embedding \begin{equation}\label{Lorentzcontinuousembedding} L^{p,q_1}(\Omega) \hookrightarrow L^{p,q_2}(\Omega) \end{equation} and the inclusion is known to be strict. Let $X$ be a Banach space with norm $\|\cdot\|_{X}$, $ a<b$, $p\in [1,\infty[$ and $q\in [1,\infty]$. Then $L^{p,q}(a,b;X)$ will denote the space of strongly measurable $X$-valued functions $f(t)$ on $]a,b[$ such that \begin{equation}\label{Lorentznormbochner} \|f\|_{L^{p,q}(a,b; X)}:= \|\|f(t)\|_{X}\|_{L^{p,q}(a,b)}<\infty. \end{equation} In particular, if $1\leq q_{1}< q_{2}\leq \infty$, we have the following continuous embedding \begin{equation}\label{Bochnerlorentzcontinuousembedding} L^{p,q_1}(a,b; X) \hookrightarrow L^{p,q_2}(a,b; X) \end{equation} and the inclusion is known to be strict. Let us recall a Proposition known as 'O'Neil's convolution inequality' (Theorem 2.6 of \cite{O'Neil}), which will be used in proving Proposition \ref{extendladyzhenskayaserrinprodi}. \begin{pro}\label{O'Neil} Suppose $1\leq p_{1}, p_{2}, q_{1}, q_{2}, r, s\leq\infty$ are such that \begin{equation}\label{O'Neilindices1} \frac{1}{r}+1=\frac{1}{p_1}+\frac{1}{p_{2}} \end{equation} and \begin{equation}\label{O'Neilindices2} \frac{1}{q_1}+\frac{1}{q_{2}}\geq \frac{1}{s}. \end{equation} Suppose that \begin{equation}\label{fghypothesis} f\in L^{p_1,q_1}(\mathbb{R}^{n})\,\,\mbox{and}\,\,g\in L^{p_2,q_2}(\mathbb{R}^{n}). \end{equation} Then it holds that \begin{equation}\label{fstargconclusion1} f\star g \in L^{r,s}(\mathbb{R}^n), \end{equation} with \begin{equation}\label{fstargconclusion2} \|f\star g \|_{L^{r,s}(\mathbb{R}^n)}\leq 3r \|f\|_{L^{p_1,q_1}(\mathbb{R}^n)} \|g\|_{L^{p_2,q_2}(\mathbb{R}^n)}. \end{equation} \end{pro} Let us finally state and prove a simple Lemma, which we will make use of in proving Proposition \ref{extendladyzhenskayaserrinprodi}.
\begin{lemma}\label{pointwiselorentzdecreasing} Let $f:]0,T[\rightarrow ]0,\infty[$ be a function satisfying the following property: there exists a $C\geq 1$ such that for any $0<t_{0}\leq t_{1}<T$ we have \begin{equation}\label{almostdecreasing} f(t_{1})\leq C f(t_{0}). \end{equation} In addition, assume that for some $1\leq r<\infty$: \begin{equation}\label{florentz} f \in L^{r,\infty}(0,T). \end{equation} Then one can conclude that for all $t\in ]0,T[$: \begin{equation}\label{fpointwise} f(t)\leq \frac{2C\|f\|_{L^{r,\infty}(0,T)}}{t^{\frac{1}{r}}}. \end{equation} \end{lemma} \begin{proof} It suffices to prove that if $f$ satisfies the hypotheses of Lemma \ref{pointwiselorentzdecreasing}, along with the additional constraint \begin{equation}\label{lorentznormhalf} \|f\|_{L^{r,\infty}(0,T)}=\frac{1}{2}, \end{equation} then we must necessarily have that for any $0<t<T$ \begin{equation}\label{fpointwisereduction} f(t)\leq \frac{C}{t^{\frac{1}{r}}}. \end{equation} Indeed, the general case then follows by applying this to $f/(2\|f\|_{L^{r,\infty}(0,T)})$. The assumption (\ref{lorentznormhalf}) implies that \begin{equation}\label{weakcharacterisationdistribution} \sup_{\alpha>0} \alpha^r \mu(\{s\in]0,T[\,:\,f(s)>\alpha\})<1. \end{equation} Fixing $t\in ]0,T[$ and setting $\alpha=\frac{1}{t^{\frac{1}{r}}}$, we see that \begin{equation}\label{weakcharacterisationdistributiont} \mu(\{s\in]0,T[\,:\,f(s)>1/{t^{\frac{1}{r}}}\})<t. \end{equation} Since the set appearing in (\ref{weakcharacterisationdistributiont}) has measure strictly less than $t$, there exists $t_{0}\in ]0,t]$ with $f(t_{0})\leq 1/t^{\frac{1}{r}}$. For this $t_{0}$ we have \begin{equation}\label{almostdecreasingrecall} f(t)\leq Cf(t_0). \end{equation} Together, these two facts imply (\ref{fpointwisereduction}). \end{proof} \subsection{Decomposition of Homogeneous Besov Spaces} We next state and prove certain decompositions for homogeneous Besov spaces. These will play a crucial role in the proof of Theorem \ref{weakstronguniquenessBesov}. In the context of Lebesgue spaces, an analogous statement is Lemma II.I proven by Calderon in \cite{Calderon90}.
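To orient the reader, let us record the elementary truncation mechanism behind such decompositions (this computation reappears, applied to dyadic blocks, in the proof below). If $p_2<p_0<p_1$, $f\in L_{p_0}(\mathbb{R}^3)$ and $N>0$, then the splitting $f=f\chi_{|f|\leq N}+f\chi_{|f|>N}$ satisfies $$\|f\chi_{|f|\leq N}\|_{L_{p_1}}^{p_1}\leq N^{p_1-p_0}\|f\|_{L_{p_0}}^{p_0}\quad\mbox{and}\quad \|f\chi_{|f|> N}\|_{L_{p_2}}^{p_2}\leq N^{p_2-p_0}\|f\|_{L_{p_0}}^{p_0},$$ since $|f|^{p_1}\leq N^{p_1-p_0}|f|^{p_0}$ on the set $\{|f|\leq N\}$ and $|f|^{p_2}\leq N^{p_2-p_0}|f|^{p_0}$ on the set $\{|f|>N\}$. In the Besov setting this splitting is applied to each dyadic block separately, with a truncation level depending on the block.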
Before stating and proving this, we take note of a useful Lemma presented in \cite{bahourichemindanchin} (specifically, Lemma 2.23 and Remark 2.24 in \cite{bahourichemindanchin}). \begin{lemma}\label{bahouricheminbook} Let $s\in\mathbb{R}$ and $p,r\in [1,\infty]$, let $C^{'}$ be an annulus and let $(u^{(j)})_{j\in\mathbb{Z}}$ be a sequence of functions such that \begin{equation}\label{conditionsupport} \rm{Supp}\, \mathcal{F}(u^{(j)})\subset 2^{j}C^{'} \end{equation} and \begin{equation}\label{conditionseriesbound} \Big(\sum_{j\in\mathbb{Z}}2^{jsr}\|u^{(j)}\|_{L_p}^{r}\Big)^{\frac{1}{r}}<\infty. \end{equation} Assume in addition that \begin{equation}\label{indicescondition} s<\frac{3}{p}. \end{equation} Then the following holds true. The series $$\sum_{j\in\mathbb{Z}} u^{(j)}$$ converges (in the sense of tempered distributions) to some $u\in \dot{B}^{s}_{p,r}(\mathbb{R}^3)$, which satisfies the following estimate: \begin{equation}\label{besovboundlimitfunction} \|u\|_{\dot{B}^{s}_{p,r}}\leq C_s \Big(\sum_{j\in\mathbb{Z}}2^{jsr}\|u^{(j)}\|_{L_p}^{r}\Big)^{\frac{1}{r}}. \end{equation} \end{lemma} Now, we can state the proposition regarding the decomposition of homogeneous Besov spaces. Note that decompositions of a similar type can be obtained abstractly from real interpolation theory, applied to homogeneous Besov spaces. See Chapter 6 of \cite{berghlofstrom}, for example. \begin{pro}\label{Decompgeneral} For $i=0,1,2$ let $p_{i}\in ]1,\infty[$ and $s_i\in \mathbb{R}$, and let $\theta\in ]0,1[$, be such that $s_1<s_0<s_2$ and $p_2<p_0<p_1$. In addition, assume the following relations hold: \begin{equation}\label{sinterpolationrelation} s_1(1-\theta)+\theta s_2=s_0, \end{equation} \begin{equation}\label{pinterpolationrelation} \frac{1-\theta}{p_1}+\frac{\theta}{p_2}=\frac{1}{p_0}, \end{equation} \begin{equation}\label{besovbanachcondition} {s_i}<\frac{3}{p_i}.
\end{equation} Suppose that $u_0\in \dot{B}^{{s_{0}}}_{p_0,p_0}(\mathbb{R}^3).$ Then for all $\epsilon>0$ there exist $u^{1,\epsilon}\in \dot{B}^{s_{1}}_{p_1,p_1}(\mathbb{R}^3)$ and $u^{2,\epsilon}\in \dot{B}^{s_{2}}_{p_2,p_2}(\mathbb{R}^3)$ such that \begin{equation}\label{udecompgeneral} u_0= u^{1,\epsilon}+u^{2,\epsilon}, \end{equation} \begin{equation}\label{u_1estgeneral} \|u^{1,\epsilon}\|_{\dot{B}^{s_{1}}_{p_1,p_1}}^{p_1}\leq \epsilon^{p_1-p_0} \|u_0\|_{\dot{B}^{s_{0}}_{p_0,p_0}}^{p_0}, \end{equation} \begin{equation}\label{u_2estgeneral} \|u^{2,\epsilon}\|_{\dot{B}^{s_{2}}_{p_2,p_2}}^{p_2}\leq C( p_0,p_1,p_2, \|\mathcal{F}^{-1}\varphi\|_{L_1})\epsilon^{p_2-p_0} \|u_0\|_{\dot{B}^{s_{0}}_{p_0,p_0}}^{p_0} .\end{equation} \end{pro} \begin{proof} Denote $$f^{(j)}:= \dot{\Delta}_{j} u_0,$$ $$f^{(j)N}_{-}:=f^{(j)}\chi_{|f^{(j)}|\leq N} $$ and $$f^{(j)N}_{+}:=f^{(j)}(1-\chi_{|f^{(j)}|\leq N}). $$ It is easily verified that the following holds: $$ \|f^{(j)N}_{-}\|_{L_{p_1}}^{p_1}\leq N^{p_1-p_0}\|f^{(j)}\|_{L_{p_0}}^{p_0},$$ $$\|f^{(j)N}_{+}\|_{L_{p_2}}^{p_2}\leq N^{p_2-p_0}\|f^{(j)}\|_{L_{p_0}}^{p_0}.$$ Thus, we may write \begin{equation}\label{truncationest1} 2^{p_1 s_1 j}\|f^{(j)N}_{-}\|_{L_{p_1}}^{p_1}\leq N^{p_1-p_0}2^{(p_1 s_1-p_0 s_0)j} 2^{p_0 s_0 j}\|f^{(j)}\|_{L_{p_0}}^{p_0} \end{equation} \begin{equation}\label{truncationest2} 2^{p_2 s_2 j}\|f^{(j)N}_{+}\|_{L_{p_2}}^{p_2}\leq N^{p_2-p_0} 2^{(p_2 s_2-p_0s_0)j} 2^{p_0s_0j}\|f^{(j)}\|_{L_{p_0}}^{p_0}. \end{equation} With (\ref{truncationest1}) in mind, we define $$N(j,\epsilon,s_0,s_1,p_0,p_1):= \epsilon 2^{\frac{(p_0s_0-p_1s_1)j}{p_1-p_0}}.$$ For the sake of brevity we will write $N(j,\epsilon)$. Using the relations between the Besov indices given by (\ref{sinterpolationrelation})-(\ref{pinterpolationrelation}), we can infer that $$N(j,\epsilon)^{p_2-p_0} 2^{(p_2 s_2-p_0s_0)j} = \epsilon^{p_2-p_0}.$$ The crucial point is that this is independent of $j$.
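Let us briefly indicate, for the reader's convenience, one way to check this independence of $j$ directly. The relations (\ref{sinterpolationrelation})-(\ref{pinterpolationrelation}) state that $s_i$ and $\frac{1}{p_i}$ depend affinely on the common parameter $\theta$, and an elementary computation then yields the identity $$\frac{p_0s_0-p_1s_1}{p_1-p_0}=\frac{p_0s_0-p_2s_2}{p_2-p_0}=\frac{\frac{s_2}{p_1}-\frac{s_1}{p_2}}{\frac{1}{p_2}-\frac{1}{p_1}}.$$ The first equality is precisely the statement that the power of $2^{j}$ appearing in $N(j,\epsilon)^{p_2-p_0} 2^{(p_2 s_2-p_0s_0)j}$ vanishes.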
Thus, we infer \begin{equation}\label{truncationest1.1} 2^{p_1 s_1 j}\|f^{(j)N(j,\epsilon)}_{-}\|_{L_{p_1}}^{p_1}\leq \epsilon^{p_1-p_0} 2^{p_0 s_0 j}\|f^{(j)}\|_{L_{p_0}}^{p_0}, \end{equation} \begin{equation}\label{truncationest2.1} 2^{p_2 s_2 j}\|f^{(j)N(j,\epsilon)}_{+}\|_{L_{p_2}}^{p_2}\leq \epsilon^{p_2-p_0} 2^{p_0s_0j}\|f^{(j)}\|_{L_{p_0}}^{p_0}. \end{equation} Next, it is well known that for any $u\in \dot{B}^{s_0}_{p_0,p_0}(\mathbb{R}^3)$ the partial sums $\sum_{j=-m}^{m} \dot{\Delta}_{j} u$ converge to $u$ in the sense of tempered distributions as $m\rightarrow\infty$. Furthermore, we have that $\dot{\Delta}_{j}\dot{\Delta}_{j'} u=0$ if $|j-j'|>1.$ Combining these two facts allows us to observe that \begin{equation}\label{smoothingtruncations} \dot{\Delta}_{j} u_0= \sum_{|m-j|\leq 1} \dot{\Delta}_{m} f^{(j)}= \sum_{|m-j|\leq 1}\dot{\Delta}_{m} f_{-}^{(j)N(j,\epsilon)}+\sum_{|m-j|\leq 1}\dot{\Delta}_{m} f_{+}^{(j)N(j,\epsilon)}. \end{equation} Define \begin{equation}\label{decomp1eachpiece} u^{1,\epsilon}_{j}:= \sum_{|m-j|\leq 1}\dot{\Delta}_{m} f_{-}^{(j)N(j,\epsilon)}, \end{equation} \begin{equation}\label{decomp2eachpiece} u^{2,\epsilon}_{j}:= \sum_{|m-j|\leq 1}\dot{\Delta}_{m} f_{+}^{(j)N(j,\epsilon)}. \end{equation} It is clear that \begin{equation}\label{fouriersupport} \rm{Supp}\,\mathcal{F}(u^{1,\epsilon}_{j}), \rm{Supp}\,\mathcal{F}(u^{2,\epsilon}_{j})\subset 2^{j}C^{'}.
\end{equation} Here, $C'$ is the annulus defined by $C':=\{\xi\in\mathbb{R}^3: 3/8\leq |\xi|\leq 16/3\}.$ Using (\ref{truncationest1.1})-(\ref{truncationest2.1}), we can obtain the following estimates: \begin{equation}\label{decomp1est} 2^{p_1 s_1 j}\|u^{1,\epsilon}_{j}\|_{L_{p_1}}^{p_1}\leq \lambda_{1}(p_1, \|\mathcal{F}^{-1} \varphi\|_{L_1}) 2^{p_1 s_1 j}\|f^{(j)N(j,\epsilon)}_{-}\|_{L_{p_1}}^{p_1}\leq$$$$\leq \lambda_{1}(p_1, \|\mathcal{F}^{-1} \varphi\|_{L_1}) \epsilon^{p_1-p_0} 2^{p_0 s_0 j}\|f^{(j)}\|_{L_{p_0}}^{p_0}, \end{equation} \begin{equation}\label{decomp2est} 2^{p_2 s_2 j}\|u^{2,\epsilon}_{j}\|_{L_{p_2}}^{p_2}\leq \lambda_{2}(p_2, \|\mathcal{F}^{-1} \varphi\|_{L_1}) 2^{p_2 s_2 j}\|f^{(j)N(j,\epsilon)}_{+}\|_{L_{p_2}}^{p_2}\leq$$$$\leq \lambda_{2}(p_2, \|\mathcal{F}^{-1} \varphi\|_{L_1})\epsilon^{p_2-p_0} 2^{p_0 s_0 j}\|f^{(j)}\|_{L_{p_0}}^{p_0}. \end{equation} The properties (\ref{fouriersupport})-(\ref{decomp2est}) allow us to apply the results of Lemma \ref{bahouricheminbook}, which yields the desired decomposition with the choice $$u^{1,\epsilon}=\sum_{j\in\mathbb{Z}}u^{1,\epsilon}_{j},$$ $$u^{2,\epsilon}=\sum_{j\in\mathbb{Z}}u^{2,\epsilon}_{j}.$$ \end{proof} \begin{cor}\label{Decomp} Fix $2<\alpha \leq 3.$ \begin{itemize} \item For $2<\alpha< 3$, take $p$ such that $\alpha <p< \frac{\alpha}{3-\alpha}$. \item For $\alpha=3$, take $p$ such that $3<p<\infty$. \end{itemize} For $p$ and $\alpha$ satisfying these conditions, suppose that \begin{equation} u_0\in \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)\cap L_{2}(\mathbb{R}^3) \end{equation} and $\mathrm{div}\,u_0=0$ in the weak sense.
\\ Then the above assumptions imply that there exist $p_0$ with $\max{(p,4)}<p_0<\infty$ and $\delta>0$ such that for any $\epsilon>0$ there exist weakly divergence-free functions $\bar{u}^{1,\epsilon}\in \dot{B}^{s_{p_0}+\delta}_{p_0,p_0}(\mathbb{R}^3)\cap L_{2}(\mathbb{R}^3)$ and $\bar{u}^{2,\epsilon}\in L_2(\mathbb{R}^3)$ such that \begin{equation}\label{udecomp1} u_0= \bar{u}^{1,\epsilon}+\bar{u}^{2,\epsilon}, \end{equation} \begin{equation}\label{baru_1est} \|\bar{u}^{1,\epsilon}\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}}^{p_0}\leq \epsilon^{p_0-p} \|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{p}, \end{equation} \begin{equation}\label{baru_2est} \|\bar{u}^{2,\epsilon}\|_{L_2}^2\leq C(p,p_0,\|\mathcal{F}^{-1}\varphi\|_{L_1}) \epsilon^{2-p}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^p , \end{equation} \begin{equation}\label{baru_1est.1} \|\bar{u}^{1,\epsilon}\|_{L_2}\leq C(\|\mathcal{F}^{-1}\varphi\|_{L_1})\|u_0\|_{L_{2}}. \end{equation} \end{cor} \begin{proof} \begin{itemize} \item[] \textbf{ First case: $2<\alpha< 3$ and $\alpha <p< \frac{\alpha}{3-\alpha}$} \end{itemize} Under this condition, we can find $\max{(4,p)}<p_{0}<\infty$ such that \begin{equation}\label{condition} \theta:= \frac{\frac{1}{p}-\frac{1}{p_0}}{\frac{1}{2}-\frac{1}{p_0}}>\frac{6}{\alpha}-2. \end{equation} Clearly, $0<\theta<1$ and moreover \begin{equation}\label{summabilityindicerelation} \frac{1-\theta}{p_0}+\frac{\theta}{2}=\frac{1}{p}. \end{equation} Define \begin{equation}\label{deltadef} \delta:=\frac{1-\frac{3}{\alpha}+\frac{\theta}{2}}{1-\theta}. \end{equation} From (\ref{condition}), we see that $\delta>0$. One can also verify the following relation: \begin{equation}\label{regularityindicerelation} (1-\theta)(s_{p_0}+\delta)=s_{p,\alpha}.
\end{equation} The above relations allow us to apply Proposition \ref{Decompgeneral} to obtain the following decomposition (note that $\dot{B}^{0}_{2,2}(\mathbb{R}^3)$ coincides with $L_2(\mathbb{R}^3)$, with equivalent norms): \begin{equation}\label{udecomp} u_0= {u}^{1,\epsilon}+{u}^{2,\epsilon}, \end{equation} \begin{equation}\label{u_1est} \|{u}^{1,\epsilon}\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}}^{p_0}\leq \epsilon^{p_0-p} \|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^p, \end{equation} \begin{equation}\label{u_2est} \|{u}^{2,\epsilon}\|_{L_2}^2\leq C(p,p_0,\|\mathcal{F}^{-1}\varphi\|_{L_1})\epsilon^{2-p} \|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^p . \end{equation} For $j\in\mathbb{Z}$ and $m\in\mathbb{Z}$, it can be seen that \begin{equation}\label{besovpersistency} \|\dot{\Delta}_{m}\left( (\dot{\Delta}_{j}u_{0})\chi_{|\dot{\Delta}_{j}u_{0}|\leq N(j,\epsilon)}\right)\|_{L_{2}}\leq C(\|\mathcal{F}^{-1} \varphi\|_{L_1}) \|\dot{\Delta}_{j} u_0\|_{L_2}. \end{equation} It is known that $u_0\in L_{2}$ implies $$ \|u_0\|_{L_2}^2=\sum_{j\in\mathbb{Z}}\|\dot{\Delta}_{j} u_0\|_{L_2}^2.$$ Using this, (\ref{besovpersistency}) and the definition of $u^{1,\epsilon}$ from Proposition \ref{Decompgeneral}, we can infer that $$\|{u}^{1,\epsilon}\|_{L_2}\leq C(\|\mathcal{F}^{-1} \varphi\|_{L_1})\|u_0\|_{L_{2}}.$$ To establish the decomposition of the Corollary, we apply to each of $u^{1,\epsilon}$ and $u^{2,\epsilon}$ the Leray projector, which is a continuous linear operator on the homogeneous Besov spaces under consideration. \begin{itemize} \item[] \textbf{ Second case: $\alpha=3$ and $3<p<\infty$} \end{itemize} In the second case, we choose any $p_0$ such that $\max{(4,p)}<p_0<\infty$. With this $p_0$ we choose $\theta$ such that $$ \frac{1-\theta}{p_0}+\frac{\theta}{2}=\frac{1}{p} .$$ If we define \begin{equation} \delta:=\frac{\theta}{2(1-\theta)}>0, \end{equation} we see that \begin{equation}\label{regulairtyindexrelation} (s_{p_0}+\delta)(1-\theta)= s_p.
\end{equation} These relations allow us to obtain the decomposition of the Corollary by means of arguments identical to those presented in the first case of this proof. \end{proof} \setcounter{equation}{0} \section{Some Estimates Near the Initial Time for Weak Leray-Hopf Solutions} \subsection{Construction of Mild Solutions with Subcritical Besov Initial Data} Let $\delta>0$ be such that $s_{p_0}+\delta<0$ and define the space $$X_{p_0,\delta}(T):=\{f\in \mathcal{S}^{'}(\mathbb{R}^3\times ]0,T[): \sup_{0<t<T} t^{-\frac{s_{p_0}}{2}-\frac{\delta}{2}}\|f(\cdot,t)\|_{L_{p_0}(\mathbb{R}^3)}<\infty\}.$$ From Remarks \ref{besovremark2} and \ref{besovremark3}, we observe that \begin{equation}\label{Besovembedding} u_0 \in \dot{B}^{s_{p_0}+\delta}_{p_0,p_0}(\mathbb{R}^3)\Rightarrow \|S(t)u_{0}\|_{X_{p_0,\delta}(T)}\leq C\|u_0\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,\infty}}\leq C\|u_0\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}}. \end{equation} In this subsection, we construct mild solutions\footnote{This is in a similar spirit to a construction contained in a joint work with Gabriel Koch, to appear.} with weakly divergence-free initial data in $\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}(\mathbb{R}^3)\cap J(\mathbb{R}^3)$. Before constructing mild solutions, we briefly explain the relevant kernels and their pointwise estimates.
Let us consider the following Stokes problem: $$\partial_t v-\Delta v +\nabla q=-{\rm div}\,F,\qquad {\rm div }\,v=0$$ in $Q_T$, $$v(\cdot,0)=0.$$ Furthermore, assume that $F_{ij}\in C^{\infty}_{0}(Q_T).$ Then a formal solution of the above Cauchy problem has the form: $$v(x,t)=\int \limits_0^t\int \limits_{\mathbb R^3}K(x-y,t-s):F(y,s)dyds.$$ The kernel $K$ is derived with the help of the heat kernel $\Gamma$ as follows (summation over the repeated index $i$ is understood): $$\Delta_{y}\Phi(y,t)=\Gamma(y,t),$$ $$K_{mjs}(y,t):=\delta_{mj}\frac{\partial^3\Phi}{\partial y_i\partial y_i\partial y_s}(y,t)-\frac{\partial^3\Phi}{\partial y_m\partial y_j\partial y_s}(y,t).$$ Moreover, the following pointwise estimate is known: \begin{equation}\label{kenrelKest} |K(x,t)|\leq\frac{C}{(|x|^2+t)^2}. \end{equation} Define \begin{equation}\label{fluctuationdef} G(f\otimes g)(x,t):=\int\limits_0^t\int\limits_{\mathbb{R}^3} K(x-y,t-s): f\otimes g(y,s) dyds. \end{equation} \begin{theorem}\label{regularity} Consider $p_0$ and $\delta$ such that $4<p_0<\infty$, $\delta>0$ and $s_{p_0}+\delta<0$. Suppose that $u_0\in \dot{B}^{s_{p_0}+\delta}_{p_0,p_0}(\mathbb{R}^3)\cap J(\mathbb{R}^3)$. There exists a constant $c=c(p_0)$ such that if \begin{equation}\label{smallnessasmp} 4cT^{\frac{\delta}{2}}\|u_{0}\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}}<1, \end{equation} then there exists a $v\in X_{p_0,\delta}(T)$, which solves the Navier-Stokes equations (\ref{directsystem})-(\ref{directic}) in the sense of distributions and satisfies the following properties. The first property is that $v$ solves the following integral equation: \begin{equation}\label{vintegeqn} v(x,t):= S(t)u_{0}+G(v\otimes v)(x,t) \end{equation} in $Q_{T}$, along with the estimate \begin{equation}\label{vintegest} \|v\|_{X_{p_0,\delta}(T)}<2\|S(t)u_0\|_{X_{p_0,\delta}(T)}=:2M^{(0)} .
\end{equation} The second property is (recalling that by assumption $u_0\in J(\mathbb{R}^3)$): \begin{equation}\label{venergyspace} v\in\,W^{1,0}_{2}(Q_T)\cap C([0,T];J(\mathbb{R}^3))\cap L_{4}(Q_T) .\end{equation} Moreover, the following estimate is valid for $0<t\leq T$: \begin{equation}\label{energyinequalityfluctuation} \|G(v\otimes v)(\cdot,t)\|_{L_{2}}^2+\int\limits_0^t\int\limits_{\mathbb{R}^3} |\nabla G(v\otimes v)|^2dyd\tau\leq $$$$\leq C t^{\frac{1}{p_0-2}+2\delta\theta_{0}}(2M^{(0)})^{4\theta_0}\left(C t^{\frac{1}{2\theta_{0}-1} \left(\frac{1}{p_0-2}+2\delta\theta_0\right)}(2M^{(0)})^{\frac{4\theta_0}{2\theta_0-1}}+\|u_0\|_{L_2}^2\right)^{2(1-\theta_0)}. \end{equation} Here, \begin{equation}\label{thetadef} \frac 1 4=\frac{\theta_0}{p_0}+\frac{1-\theta_0}{2} \end{equation} and $C=C(p_0,\delta,\theta_0)$.\\ If $\pi_{v\otimes v}$ is the associated pressure, we have (here, $\lambda\in ]0,T[$): \begin{equation}\label{presspace} \pi_{v\otimes v} \in L_{2}(Q_{T})\cap L_{\frac{p_0}{2},\infty}(Q_{\lambda,T}). \end{equation} The final property is that for $\lambda\in]0,T[$ and $k=0,1,\ldots$, $l=0,1,\ldots$: \begin{equation}\label{vpsmooth} \sup_{(x,t)\in Q_{\lambda,T}}|\partial_{t}^{l}\nabla^{k} v|+|\partial_{t}^{l}\nabla^{k} \pi_{v\otimes v}|\leq c(p_0,\delta,\lambda,\|u_0\|_{\dot{B}_{p_0,p_0}^{s_{p_0}+\delta}},k,l ). \end{equation} \end{theorem} \begin{proof} Recall Young's convolution inequality: \begin{equation}\label{Youngs} \|f\star g\|_{L_{r}}\leq C_{p,q}\|f\|_{L_{p}}\|g\|_{L_{q}} \end{equation} where $1\leq p,q,r\leq\infty$ and $\frac 1 p+\frac 1 q=\frac 1 r +1$.
Applying this and the pointwise estimate (\ref{kenrelKest}) gives the following: \begin{equation}\label{Ktensormainest1} \|K(\cdot, t-\tau)\star (r\otimes r)(\cdot,\tau)\|_{L_{p_0}}\leq \|K(\cdot,t-\tau)\|_{L_{{(p_0)}^{'}}}\|r\otimes r\|_{L_{\frac{p_0}{2}}}\leq$$$$\leq C (t-\tau)^{-(1+\frac{s_{p_0}}{2})}\|r\|_{L_{p_0}}^2\leq C (t-\tau)^{-(1+\frac{s_{p_0}}{2})}\frac{\|r\|_{X_{p_0,\delta}(T)}^2}{\tau^{-s_{p_0}-\delta}}. \end{equation} One can then show that \begin{equation}\label{Gtensorest1} \|G(r\otimes r)\|_{X_{p_0,\delta}(T)}\leq CT^{\frac{\delta}{2}} \|r\|_{X_{p_0,\delta}(T)}^2. \end{equation} \begin{itemize} \item[] \textbf{Step 1: successive approximations} \end{itemize} Let $v^{(0)}:= S(t)u_0$ and, for $n=0,1,2,\ldots$, $$v^{(n+1)}:= v^{(0)}+G(v^{(n)}\otimes v^{(n)}).$$ Moreover, for $n=0,1,\ldots$ define: \begin{equation}\label{Mdef} M^{(n)}:=\|v^{(n)}\|_{X_{p_0,\delta}(T)}. \end{equation} Then using (\ref{Gtensorest1}) we have the following iterative relation: \begin{equation}\label{Miterative} M^{(n+1)}\leq M^{(0)}+CT^{\frac{\delta}{2}}(M^{(n)})^2. \end{equation} If \begin{equation}\label{Msmallnesscondition} 4CT^{\frac{\delta}{2}}M^{(0)}< 1, \end{equation} then one can show that for $n=1,2,\ldots$ we have \begin{equation}\label{Mbound} M^{(n)}<2M^{(0)}. \end{equation} \begin{itemize} \item[] \textbf{Step 2: establishing energy bounds} \end{itemize} First we note that by interpolation ($0\leq \tau\leq T$): \begin{equation}\label{interpolationineq} \|r(\cdot,\tau)\|_{L_{4}}\leq \|r(\cdot,\tau)\|_{L_{p_0}}^{\theta_0}\|r(\cdot,\tau)\|_{L_{2}}^{1-\theta_0}\leq \|r\|_{X_{p_0,\delta}(T)}^{\theta_0}\|r(\cdot,\tau)\|_{L_{2}}^{1-\theta_0}\tau^{\theta_0(\frac{s_{p_0}}{2}+\frac{\delta}{2})}. \end{equation} Recall that \begin{equation}\label{theta0recap} \frac{\theta_0}{p_0}+\frac{1-\theta_0}{2}=\frac{1}{4}. \end{equation} Specifically, \begin{equation}\label{theta0ident} \theta_0=\frac{1/4}{1/2-1/p_0}.
\end{equation} It is then immediate that $$\frac{-\theta_0 s_{p_0}}{2}= \frac{1-3/p_0}{4(1-2/p_0)}=\frac{1}{4}-\frac{1}{4(p_0-2)}<\frac{1}{4}.$$ From this, we conclude that for $0\leq t\leq T$: \begin{equation}\label{L4spacetimeest} \|r\|_{L_{4}(Q_t)}\leq C t^{\frac{\theta_{0}\delta}{2}+\frac{1}{4(p_0-2)}}\|r\|_{X_{p_0,\delta}(T)}^{\theta_0}\|r\|_{L_{2,\infty}(Q_t)}^{1-\theta_0}. \end{equation} Let $r\in L_{4}(Q_T)\cap X_{p_0,\delta}(T)\cap L_{2,\infty}(Q_T)$ and $R:= G(r\otimes r)$. Furthermore, define $\pi_{r\otimes r}:= \mathcal{R}_{i}\mathcal{R}_{j}(r_{i}r_{j})$, where $\mathcal{R}_{i}$ denotes the Riesz transform and repeated indices are summed. One can readily show that, on $Q_{T}$, the pair $(R, \pi_{r\otimes r})$ solves \begin{equation}\label{Reqn} \partial_{t} R-\Delta R+{\rm div}\,(r\otimes r)=-\nabla \pi_{r\otimes r}, \end{equation} \begin{equation}\label{Rdivfree} {\rm div}\, R=0, \end{equation} \begin{equation}\label{Rintialcondition} R(\cdot,0)=0. \end{equation} We can also infer that $R\in W^{1,0}_{2}(Q_T)\cap C([0,T]; J(\mathbb{R}^3))$, along with the estimate \begin{equation}\label{Renergyest} \|R(\cdot,t)\|_{L_{2}}^2+\int\limits_0^t\int\limits_{\mathbb{R}^3}|\nabla R(x,s)|^2 dxds\leq \|r\otimes r\|_{L_{2}(Q_t)}^2\leq$$$$\leq c\|r\|_{L_{4}(Q_{t})}^4 \leq C t^{2\theta_{0}\delta+\frac{1}{(p_0-2)}}\|r\|_{X_{p_0,\delta}(T)}^{4\theta_0}\|r\|_{L_{2,\infty}(Q_t)}^{4(1-\theta_0)}. \end{equation} Since the associated pressure is a composition of Riesz transforms acting on $r\otimes r$, we have the estimates \begin{equation}\label{presest1} \|\pi_{r\otimes r}\|_{L_{2}(Q_T)}\leq C\|r\|_{L_{4}(Q_T)}^2, \end{equation} \begin{equation}\label{presest2} \|\pi_{r\otimes r}\|_{L_{\frac{p_0}{2}, \infty}(Q_{\lambda,T})}\leq C(\lambda,T,p_0,\delta)\|r\|_{X_{p_0,\delta}(T)}^2. \end{equation} Moreover, for $n=0,1,\ldots$ and $t\in [0,T]$ define: \begin{equation}\label{Edef} E^{(n)}(t):=\|v^{(n)}\|_{L_{\infty}(0,t;L_2)}+\|\nabla v^{(n)}\|_{L_{2}(Q_t)}.
\end{equation} Clearly, by the assumptions on the initial data, $E^{(0)}(t)\leq \|u_0\|_{L_2}<\infty$. Then from (\ref{Renergyest}), we have the following iterative relation: \begin{equation}\label{Eiterative1} E^{(n+1)}(t)\leq E^{(0)}(t)+Ct^{\theta_{0}\delta+\frac{1}{2(p_0-2)}}(M^{(n)})^{2\theta_0}(E^{(n)}(t))^{2(1-\theta_0)}. \end{equation} From (\ref{thetadef}), it is clear that $2(1-\theta_0)<1$. Hence by Young's inequality: \begin{equation}\label{Eiterative2} E^{(n+1)}(t)\leq Ct^{\frac{1}{2\theta_0-1}(\theta_{0}\delta+\frac{1}{2(p_0-2)})}(M^{(n)})^{\frac{2\theta_0}{2\theta_0-1}}+ E^{(0)}(t)+\frac{1}{2}E^{(n)}(t). \end{equation} From (\ref{Mbound}) and iterating, we infer that for $t\in [0,T]$: \begin{equation}\label{Ebounded} E^{(n+1)}(t)\leq 2Ct^{\frac{1}{2\theta_0-1}(\theta_{0}\delta+\frac{1}{2(p_0-2)})}(2M^{(0)})^{\frac{2\theta_0}{2\theta_0-1}}+ 2E^{(0)}(t)\leq$$$$\leq 2Ct^{\frac{1}{2\theta_0-1}(\theta_{0}\delta+\frac{1}{2(p_0-2)})}(2M^{(0)})^{\frac{2\theta_0}{2\theta_0-1}}+ 2\|u_0\|_{L_{2}} . \end{equation} \begin{itemize} \item[]\textbf{Step 3: convergence and showing energy inequalities} \end{itemize} Using (\ref{Miterative})-(\ref{Mbound}), one can argue along the same lines as in \cite{Kato} to deduce that there exists $v\in X_{p_0,\delta}(T)$ such that \begin{equation}\label{v^nconverg1} \lim_{n\rightarrow \infty}\|v^{(n)}-v\|_{X_{p_0,\delta}(T)}=0. \end{equation} Furthermore, $v$ satisfies the integral equation (\ref{vintegeqn}) and solves the Navier-Stokes equations in the sense of distributions.
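For the reader's convenience, the contraction estimate behind (\ref{v^nconverg1}) can be sketched as follows; it uses only the bilinearity of $G$, the bound (\ref{Gtensorest1}) and the uniform bound (\ref{Mbound}).
\begin{align*}
% Splitting the difference of consecutive iterates via bilinearity of G:
\|v^{(n+1)}-v^{(n)}\|_{X_{p_0,\delta}(T)}
&=\|G(v^{(n)}\otimes v^{(n)})-G(v^{(n-1)}\otimes v^{(n-1)})\|_{X_{p_0,\delta}(T)}\\
&\leq CT^{\frac{\delta}{2}}\big(M^{(n)}+M^{(n-1)}\big)\|v^{(n)}-v^{(n-1)}\|_{X_{p_0,\delta}(T)}\\
&\leq 4CT^{\frac{\delta}{2}}M^{(0)}\,\|v^{(n)}-v^{(n-1)}\|_{X_{p_0,\delta}(T)}.
\end{align*}
By (\ref{Msmallnesscondition}) the factor $\kappa:=4CT^{\frac{\delta}{2}}M^{(0)}$ is strictly less than one, so the differences are summable at a geometric rate and $(v^{(n)})$ is Cauchy in $X_{p_0,\delta}(T)$.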
From $u_0\in J(\mathbb{R}^3)\cap \dot{B}^{s_{p_0}+\delta}_{p_0,p_0}(\mathbb{R}^3)$, (\ref{Ebounded}), (\ref{v^nconverg1}) and estimates analogous to (\ref{L4spacetimeest}) and (\ref{Renergyest}), applied to $v^{(m)}-v^{(n)}$, we have the following: \begin{equation}\label{v^nstrongconverg1} v^{(n)}\rightarrow v\,\,{\rm in}\,\, C([0,T];J(\mathbb{R}^3))\cap W^{1,0}_{2}(Q_T), \end{equation} \begin{equation}\label{v^ninitialcondition} v(\cdot,0)=u_0, \end{equation} \begin{equation}\label{v^nstrongconverg2} v^{(n)}\rightarrow v\,\,{\rm in}\,\, L_{4}(Q_T), \end{equation} \begin{equation}\label{presconvergence} \pi_{v^{(n+1)}\otimes v^{(n+1)}}\rightarrow \pi_{v\otimes v}\,\,{\rm in}\,\, L_{2}(Q_T) \,\,{\rm and}\,\, L_{\frac{p_0}{2},\infty}(Q_{\lambda,T}) \end{equation} where $0<\lambda<T$. Using this, along with (\ref{Renergyest}) and (\ref{Ebounded}), we infer the estimate (\ref{energyinequalityfluctuation}). \begin{itemize} \item[]\textbf{Step 4: estimate of higher derivatives} \end{itemize} It remains to prove the estimate (\ref{vpsmooth}). First note that from the definition of $X_{p_0,\delta}(T)$, we have the estimate: \begin{equation}\label{limitvelocityest} \|v\|_{L_{p_0,\infty}(Q_{\lambda,T})}\leq \lambda^{\frac 1 2(s_{p_0}+\delta)}\|v\|_{X_{p_0,\delta}(T)}. \end{equation} Since $\pi_{v\otimes v}$ is a composition of Riesz transforms acting on $v\otimes v$, we deduce from (\ref{limitvelocityest}) that \begin{equation}\label{limitpressureest} \|\pi_{v\otimes v}\|_{L_{\frac{p_0}{2},\infty}(Q_{\lambda,T})}\leq C \lambda^{s_{p_0}+\delta}\|v\|_{X_{p_0,\delta}(T)}^2. \end{equation} One can infer that $(v,\pi_{v\otimes v})$ satisfies the local energy equality. This can be shown using (\ref{venergyspace})-(\ref{presspace}) and a mollification argument.
If $(x,t)\in Q_{\lambda,T}$, then for $0<r^2<\frac{\lambda}{2}$ we can apply H\"{o}lder's inequality and (\ref{limitvelocityest})-(\ref{limitpressureest}) to infer \begin{equation}\label{CKNquantity} \frac{1}{r^2}\int\limits_{t-r^2}^{t}\int\limits_{B(x,r)}(|v|^3+|\pi_{v\otimes v}|^{\frac{3}{2}}) dyds\leq C\lambda^{\frac{3}{2}(s_{p_0}+\delta)} r^{3(1-\frac{3}{p_0})}\|u_0\|_{\dot{B}_{p_0,p_0}^{s_{p_0}+\delta}}^3. \end{equation} Clearly, there exists $r_0^2(\lambda, \varepsilon_{CKN}, \|u_0\|_{\dot{B}_{p_0,p_0}^{s_{p_0}+\delta}})<\frac{\lambda}{2}$ such that $$\frac{1}{r_0^2}\int\limits_{t-r_0^2}^{t}\int\limits_{B(x,r_0)}(|v|^3+|\pi_{v\otimes v}|^{\frac{3}{2}}) dyds\leq \varepsilon_{CKN}.$$ By the $\varepsilon$-regularity theory developed in \cite{CKN}, there exist universal constants $c_{0k}>0$ such that (for $(x,t)$ and $r$ as above) we have $$ |\nabla^{k} v(x,t)|\leq \frac{c_{0k}}{r_{0}^{k+1}}= C(k, \lambda, \|u_0\|_{\dot{B}_{p_0,p_0}^{s_{p_0}+\delta}}).$$ Thus, \begin{equation}\label{vhigherreg} \sup_{(x,t)\in Q_{\lambda,T}} |\nabla^k v|\leq C(k, \lambda, \|u_0\|_{\dot{B}_{p_0,p_0}^{s_{p_0}+\delta}}). \end{equation} Using this and (\ref{limitpressureest}), we obtain by local regularity theory for elliptic equations that \begin{equation}\label{preshigherreg} \sup_{(x,t)\in Q_{\lambda,T}} |\nabla^k \pi_{v\otimes v}|\leq C(k, \lambda, \|u_0\|_{\dot{B}_{p_0,p_0}^{s_{p_0}+\delta}}). \end{equation} From these estimates, the singular integral representation of $\pi_{v\otimes v}$, and the fact that $(v,\pi_{v\otimes v})$ satisfies the Navier-Stokes system, one can prove that the corresponding estimates hold for higher time derivatives of the velocity field and pressure. \end{proof} \subsection{Proof of Lemma \ref{estnearinitialforLeraywithbesov}} The proof of Lemma \ref{estnearinitialforLeraywithbesov} is achieved by careful analysis of certain decompositions of weak Leray-Hopf solutions, with initial data in the class considered there.
A key part of this involves decompositions of the initial data (Corollary \ref{Decomp}), together with properties of mild solutions whose initial data belongs to a subcritical homogeneous Besov space (Theorem \ref{regularity}). In the context of local energy solutions of the Navier-Stokes equations with $L_{3}$ initial data, related splitting arguments have been used in \cite{jiasverak}. Before proceeding, we state a known lemma, found for example in \cite{prodi} and \cite{Serrin}. \begin{lemma}\label{trilinear} Let $p\in ]3,\infty]$ and \begin{equation}\label{serrinpairs} \frac{3}{p}+\frac{2}{r}=1. \end{equation} Suppose that $w \in L_{p,r}(Q_T)$, $v\in L_{2,\infty}(Q_T)$ and $\nabla v\in L_{2}(Q_T)$. Then for $t\in ]0,T[$: \begin{equation}\label{continuitytrilinear} \int\limits_0^t\int\limits_{\mathbb{R}^3} |\nabla v||v||w| dxdt'\leq C\int\limits_0^t \|w\|_{L_{p}}^r \|v\|^{2}_{L_2}dt'+\frac{1}{2}\int\limits_0^t\int\limits_{\mathbb{R}^3} |\nabla v|^2 dxdt'. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{estnearinitialforLeraywithbesov}] Throughout this subsection, $u_0 \in \dot{B}_{p,p}^{s_{p,\alpha}}(\mathbb{R}^3)\cap L_2(\mathbb{R}^3)$. Here, $p$ and $\alpha$ satisfy the assumptions of Theorem \ref{weakstronguniquenessBesov}. We will write $u_0= \bar{u}^{1, \epsilon}+\bar{u}^{2, \epsilon}.$ Here the decomposition has been performed according to Corollary \ref{Decomp} (specifically, (\ref{udecomp1})-(\ref{baru_1est.1})), with $\epsilon>0.$ Thus, \begin{equation}\label{baru_1u_0decomp} \|\bar{u}^{1, \epsilon}\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}}^{p_0}\leq \epsilon^{p_0-p}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{p}, \end{equation} \begin{equation}\label{baru_2u_0decomp} \|\bar{u}^{2, \epsilon}\|_{L_2}^2\leq C(p,p_0,\|\mathcal{F}^{-1}\varphi\|_{L_1})\epsilon^{2-p}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{p} \end{equation} and \begin{equation}\label{baru_1u_0decomp.1} \|\bar{u}^{1, \epsilon}\|_{L_{2}}\leq C(\|\mathcal{F}^{-1}\varphi\|_{L_1})\|u_0\|_{L_2}.
\end{equation} Throughout this section we will let $w^{\epsilon}$ be the mild solution from Theorem \ref{regularity} generated by the initial data $\bar{u}^{1, \epsilon}$. Recall from Theorem \ref{regularity} that $w^{\epsilon}$ is defined on $Q_{T_{\epsilon}}$, where \begin{equation}\label{smallnessasmprecall} 4c(p_0)T_{\epsilon}^{\frac{\delta}{2}}\|\bar{u}^{1,\epsilon}\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}}<1. \end{equation} In accordance with this and (\ref{baru_1u_0decomp}), we will take \begin{equation}\label{Tepsilondef} T_{\epsilon}:=\frac{1}{\Big(8c(p_0)\epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}\Big)^{\frac{2}{\delta}}}. \end{equation} The two main estimates we will use are as follows. Using (\ref{Besovembedding}), (\ref{vintegest}) and (\ref{baru_1u_0decomp}), we have \begin{equation}\label{wepsilonkatoest} \|w^{\epsilon}\|_{X_{p_0,\delta}(T_{\epsilon})}<C\epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}. \end{equation} The second property, from Theorem \ref{regularity}, is (recalling that by assumption $\bar{u}^{1,\epsilon}\in L_{2}(\mathbb{R}^3)$): \begin{equation}\label{wepsilonenergyspace} w^{\epsilon}\in\,W^{1,0}_{2}(Q_{T_{\epsilon}})\cap C([0,T_{\epsilon}];J(\mathbb{R}^3))\cap L_{4}(Q_{T_{\epsilon}}) .\end{equation} Consequently, it can be shown that $w^{\epsilon}$ satisfies the energy equality: \begin{equation}\label{energyequalitywepsilon} \|w^{\epsilon}(\cdot,s)\|_{L_2}^2+2\int\limits_{s'}^{s}\int\limits_{\mathbb{R}^3} |\nabla w^{\epsilon}(y,\tau)|^2 dyd\tau=\|w^{\epsilon}(\cdot,s')\|_{L_2}^2 \end{equation} for $0\leq s'\leq s\leq T_{\epsilon}$.
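As a sanity check, the choice (\ref{Tepsilondef}) satisfies the smallness condition (\ref{smallnessasmprecall}) with room to spare: taking the $p_0$-th root in (\ref{baru_1u_0decomp}) and inserting (\ref{Tepsilondef}) gives
\begin{equation*}
% T_epsilon^{\delta/2} is by construction the reciprocal of 8c(p_0) times the Besov bound on \bar{u}^{1,\epsilon}:
4c(p_0)\,T_{\epsilon}^{\frac{\delta}{2}}\,\|\bar{u}^{1,\epsilon}\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}}
\leq 4c(p_0)\cdot
\frac{1}{8c(p_0)\,\epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}}
\cdot \epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}
=\frac{1}{2}<1.
\end{equation*}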
Moreover, using (\ref{energyinequalityfluctuation}), (\ref{baru_1u_0decomp}) and (\ref{baru_1u_0decomp.1}), the following estimate is valid for $0\leq s \leq T_{\epsilon}$: \begin{equation}\label{energyinequalitywepsilonfluctuation} \|w^{\epsilon}(\cdot,s)-S(s)\bar{u}^{1,\epsilon}\|_{L_{2}}^2+\int\limits_0^s\int\limits_{\mathbb{R}^3} |\nabla w^{\epsilon}(y,\tau)-\nabla S(\tau)\bar{u}^{1,\epsilon}(y)|^2dyd\tau\leq $$$$\leq C s^{\frac{1}{p_0-2}+2\delta\theta_{0}}(\epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}})^{4\theta_0}\times$$$$\times\left( s^{\frac{1}{2\theta_{0}-1} \left(\frac{1}{p_0-2}+2\delta\theta_0\right)}(\epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}})^{\frac{4\theta_0}{2\theta_0-1}}+\|u_0\|_{L_2}^2\right)^{2(1-\theta_0)}. \end{equation} Here, \begin{equation}\label{thetadefrecall} \frac 1 4=\frac{\theta_0}{p_0}+\frac{1-\theta_0}{2} \end{equation} and $C=C(p_0,\delta,\theta_0)$. Let $$S_{\epsilon}:=\min(T,T_{\epsilon}).$$ Define on $Q_{S_{\epsilon}}$ the function $v^{\epsilon}(x,t):= u(x,t)- w^{\epsilon}(x,t)$, where $u$ is an arbitrary weak Leray-Hopf solution with initial data $u_0$. Clearly we have: \begin{equation} \label{vepsilonspaces} v^{\epsilon}\in L_{\infty}(0,S_{\epsilon}; L_{2})\cap C_{w}([0,S_{\epsilon}]; J(\mathbb{R}^3))\cap W^{1,0}_{2}(Q_{S_{\epsilon}}) \end{equation} and \begin{equation} \label{vepsiloncontinuityattzero} \lim_{\tau\rightarrow 0}\|v^{\epsilon}(\cdot,\tau)-\bar{u}^{2,\epsilon}\|_{L_{2}}=0. \end{equation} Moreover, $v^{\epsilon}$ satisfies the following equations \begin{equation}\label{directsystemvepsilon} \partial_t v^{\epsilon}+v^{\epsilon}\cdot\nabla v^{\epsilon}+w^{\epsilon}\cdot\nabla v^{\epsilon}+v^{\epsilon}\cdot\nabla w^{\epsilon}-\Delta v^{\epsilon}=-\nabla p^{\epsilon},\qquad\mbox{div}\,v^{\epsilon}=0 \end{equation} in $Q_{S_{\epsilon}}$, with the initial condition \begin{equation}\label{directicvepsilon} v^{\epsilon}(\cdot,0)=\bar{u}^{2,\epsilon}(\cdot) \end{equation} in $\mathbb{R}^3$.
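Before estimating $v^{\epsilon}$, it is worth recording the exponent bookkeeping that places $w^{\epsilon}$ in the Serrin class $L_{p_0,r_0}$; here we recall the convention $s_{p_0}=\frac{3}{p_0}-1$ for the critical Besov index and the relation $\frac{3}{p_0}+\frac{2}{r_0}=1$, i.e. $r_0=\frac{2p_0}{p_0-3}$.
\begin{equation*}
% The borderline exponent: r_0 s_{p_0}/2 = -1 exactly, so the extra delta>0 makes the time integral converge.
\frac{r_0\,s_{p_0}}{2}
=\frac{p_0}{p_0-3}\left(\frac{3}{p_0}-1\right)
=-1,
\qquad\text{hence}\qquad
\int\limits_0^s \tau^{\frac{r_0(s_{p_0}+\delta)}{2}}\,d\tau
=\int\limits_0^s \tau^{-1+\frac{r_0\delta}{2}}\,d\tau
=\frac{2}{r_0\delta}\,s^{\frac{\delta r_0}{2}}.
\end{equation*}
In other words, the integral is critical at $\delta=0$ and it is precisely the subcriticality $\delta>0$ that produces the factor $s^{\frac{\delta r_0}{2}}$ appearing in the Serrin-norm estimate of $w^{\epsilon}$ below.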
From (\ref{wepsilonkatoest}), the definition of ${X_{p_0,\delta}(T_{\epsilon})}$ and (\ref{baru_1u_0decomp}), we see that for $0<s\leq T_{\epsilon}$: \begin{equation}\label{serrinnormvepsilon} \int\limits_{0}^{s} \|w^{\epsilon}(\cdot,\tau)\|_{L_{p_0}}^{r_0} d\tau\leq C(\delta,p_0) s^{\frac{\delta r_0}{2}} ( \epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}})^{{r_0}}. \end{equation} Here, $r_0\in ]2,\infty[$ is such that $$\frac{3}{p_0}+\frac{2}{r_0}=1.$$ Moreover, the following energy inequality holds for $s\in [0,S_{\epsilon}]$ \footnote{ This can be justified by applying Proposition 14.3 in \cite{LR1}, for example.}: \begin{equation}\label{venergyequality} \|v^{\epsilon}(\cdot,s)\|_{L_2}^2+ 2\int\limits_0^s\int\limits_{\mathbb{R}^3} |\nabla v^{\epsilon}(x,\tau)|^2 dxd\tau \leq \|\bar{u}^{2,\epsilon}\|_{L_2}^2+$$$$+2\int\limits_0^s\int\limits_{\mathbb{R}^3}v^{\epsilon}\otimes w^{\epsilon}:\nabla v^{\epsilon} dxd\tau. \end{equation} We may then apply Lemma \ref{trilinear} to infer that for $s\in [0,S_{\epsilon}]$: \begin{equation}\label{venergyinequalitynorms} \|v^{\epsilon}(\cdot,s)\|_{L_2}^2+ \int\limits_0^s\int\limits_{\mathbb{R}^3} |\nabla v^{\epsilon}(x,\tau)|^2 dxd\tau\leq \|\bar{u}^{2, \epsilon}\|_{L_2}^2+$$$$+C\int\limits_0^s \|v^{\epsilon}(\cdot,\tau)\|_{L_2}^2 \|w^{\epsilon}(\cdot,\tau)\|_{L_{p_0}}^{r_0} d\tau. \end{equation} By an application of Gronwall's lemma, we see that for $s\in [0,S_{\epsilon}]$: \begin{equation}\label{gronwallresult} \|v^{\epsilon}(\cdot,s)\|_{L_2}^2\leq C\|\bar{u}^{2,\epsilon}\|_{L_2}^2 \exp{\Big(C\int\limits_{0}^s \|w^{\epsilon}(\cdot,\tau)\|_{L_{p_0}}^{r_0} d\tau\Big)}.
\end{equation} We may then use (\ref{baru_2u_0decomp}) and (\ref{serrinnormvepsilon}) to infer for $s\in [0,S_{\epsilon}]$: \begin{equation}\label{vepsilonkineticenergybound} \|v^{\epsilon}(\cdot,s)\|_{L_2}^2\leq C(p_0,p)\epsilon^{2-p}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{p} \exp{\Big(C(\delta,p_0) s^{\frac{\delta r_0}{2}} \Big( \epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}\Big)^{r_0}\Big)}. \end{equation} Clearly, for $s\in [0,S_{\epsilon}]$ we have $$\|u(\cdot,s)-S(s)u_0\|_{L_2}^2\leq C(\|v^{\epsilon}(\cdot,s)\|_{L_{2}}^2+\|\bar{u}^{2,\epsilon}\|_{L_2}^2 +\|w^{\epsilon}(\cdot,s)-S(s)\bar{u}^{1,\epsilon}\|_{L_{2}}^2).$$ Thus, using this together with (\ref{baru_2u_0decomp}), (\ref{energyinequalitywepsilonfluctuation}) and (\ref{vepsilonkineticenergybound}), we infer that for $s\in [0,S_{\epsilon}]$: \begin{equation}\label{estnearinitialtimeepsilon} \|u(\cdot,s)-S(s)u_0\|_{L_2}^2\leq$$$$\leq C(p_0,p)\epsilon^{2-p}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{p} \Big(\exp{\Big(C(\delta,p_0) s^{\frac{\delta r_0}{2}} \Big( \epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}\Big)^{r_0}\Big)}+1\Big)+$$$$+\Big(C s^{\frac{1}{p_0-2}+2\delta\theta_{0}}\Big(\epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}\Big)^{4\theta_0}\times$$$$\times \left( s^{\frac{1}{2\theta_{0}-1} \left(\frac{1}{p_0-2}+2\delta\theta_0\right)}\Big(\epsilon^{\frac{p_0-p}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}\Big)^{\frac{4\theta_0}{2\theta_0-1}}+\|u_0\|_{L_2}^2\right)^{2(1-\theta_0)}\Big). \end{equation} Take $\epsilon= t^{-\gamma}$, where $0<t\leq\min(1,T)$ and $\gamma>0$. 
Observing (\ref{Tepsilondef}), we see that in order to also satisfy $t\in ]0,T_{\epsilon}]$ (and hence $t\in ]0,\min(1,S_{\epsilon})]$), we should take \begin{equation}\label{gammarequirement1} 0<\gamma<\frac{\delta p_0}{2(p_0-p)} \end{equation} and we should consider the additional restriction \begin{equation}\label{trequirement} 0<t<\min\Big(1,T,\Big( \frac{1}{8c} \|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{-\frac{p}{p_0}}\Big)^{\frac{2p_0}{\delta p_0-2\gamma(p_0-p)}}\Big). \end{equation} Under these restrictions, (\ref{estnearinitialtimeepsilon}) gives: \begin{equation}\label{estnearinitialtimeepsilon.1} \|u(\cdot,t)-S(t)u_0\|_{L_2}^2\leq$$$$\leq C(p_0,p)t^{\gamma(p-2)}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{p} \Big(\exp{\Big(C(\delta,p_0) t^{\frac{\delta r_0}{2}} \Big( t^{\frac{-\gamma(p_0-p)}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}\Big)^{r_0}\Big)}+1\Big)+$$$$+\Big(C t^{\frac{1}{p_0-2}+2\delta\theta_{0}}\Big(t^{\frac{-\gamma(p_0-p)}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}\Big)^{4\theta_0}\times$$$$\times\left( t^{\frac{1}{2\theta_{0}-1} \left(\frac{1}{p_0-2}+2\delta\theta_0\right)}\Big(t^{\frac{-\gamma(p_0-p)}{p_0}}\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}}^{\frac{p}{p_0}}\Big)^{\frac{4\theta_0}{2\theta_0-1}}+\|u_0\|_{L_2}^2\right)^{2(1-\theta_0)}\Big). \end{equation} Let us further choose $\gamma$ such that the following inequalities hold: \begin{equation}\label{gammarequirement2} \frac{\delta r_0}{2}-\frac{\gamma r_0(p_0-p)}{p_0}>0, \end{equation} \begin{equation}\label{gammarequirement3} \frac{1}{p_0-2}+2\delta\theta_0-\frac{4\gamma\theta_0(p_0-p)}{p_0}>0 \end{equation} and \begin{equation}\label{gammarequirement4} \frac{1}{2\theta_0-1}\Big(\frac{1}{p_0-2}+2\delta\theta_0\Big)-\frac{4\theta_0\gamma(p_0-p)}{p_0(2\theta_0-1)}>0.
\end{equation} With these choices, together with (\ref{gammarequirement1})-(\ref{trequirement}), we recover the conclusions of Lemma \ref{estnearinitialforLeraywithbesov}. \end{proof} \setcounter{equation}{0} \section{ Short Time Uniqueness of Weak Leray-Hopf Solutions for Initial Values in $VMO^{-1}$} \subsection{Construction of Strong Solutions} The approach we will take to prove Theorem \ref{weakstronguniquenessBesov} is as follows. We construct a weak Leray-Hopf solution, with initial data $u_0 \in J(\mathbb{R}^3)\cap \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)\cap VMO^{-1}(\mathbb{R}^3)$, by perturbation methods. We refer to this constructed solution as the ``strong solution''. Then, Lemma \ref{estnearinitialforLeraywithbesov} plays a crucial role in showing that the strong solution has sufficiently good properties to coincide with all weak Leray-Hopf solutions, with the same initial data, on some time interval $]0,T(u_0)[$. With this in mind, we now state the relevant Theorem related to the construction of this ``strong solution''. Let us introduce the necessary preliminaries. The path space $\mathcal{X}_{T}$ for the mild solutions constructed in \cite{kochtataru} is defined to be \begin{equation}\label{pathspaceBMO-1} \mathcal{X}_{T}:=\{ u\in \mathcal{S}^{'}(\mathbb{R}^3\times \mathbb{R}_{+}): \|u\|_{\mathcal{E}_{T}}<\infty\}. \end{equation} Here, \begin{equation}\label{pathspacenormdef} \|u\|_{\mathcal{E}_{T}}:= \sup_{0<t<T} \sqrt{t}\|u(\cdot,t)\|_{L_{\infty}(\mathbb{R}^3)}+$$$$+\sup_{(x,t)\in \mathbb{R}^3\times ]0,T[}\Big(\frac{1}{|B(0,\sqrt{t})|}\int\limits_0^t\int\limits_{|y-x|<\sqrt{t}} |u|^2 dyds\Big)^{\frac{1}{2}}. \end{equation} From (\ref{bmo-1norm}), we see that for $0<T\leq\infty$ \begin{equation}\label{BMO-1embeddingcritical} u_0 \in BMO^{-1}(\mathbb{R}^3)\Rightarrow \|S(t)u_{0}\|_{\mathcal{E}_{T}}\leq C\|u_0\|_{BMO^{-1}}.
\end{equation} Since $C_{0}^{\infty}(\mathbb{R}^3)$ is dense in $VMO^{-1}(\mathbb{R}^3)$, we can see from the above that for $u_0\in VMO^{-1}(\mathbb{R}^3)$ \begin{equation}\label{VMO-1shrinking} \lim_{T\rightarrow 0^{+}} \|S(t)u_{0}\|_{\mathcal{E}_{T}}=0. \end{equation} Recalling the definition of $G(f\otimes g)$ given by (\ref{fluctuationdef}), it was shown in \cite{kochtataru} that there exists a universal constant $C$ such that for all $f,\,g\in \mathcal{E}_{T}$ \begin{equation}\label{bilinbmo-1} \|G(f\otimes g)\|_{\mathcal{E}_{T}}\leq C\|f\|_{\mathcal{E}_{T}}\|g\|_{\mathcal{E}_{T}}. \end{equation} Here is the Theorem related to the construction of the ``strong solution''. The main features of this construction that are required for our purposes can already be inferred from ideas contained in \cite{LRprioux}. Since the proof is not explicitly contained in \cite{LRprioux}, we find it beneficial to sketch certain parts of the proof in the Appendix. \begin{theorem}\label{regularitycriticalbmo-1} Suppose that $u_0\in VMO^{-1}(\mathbb{R}^3)\cap J(\mathbb{R}^3).$ There exists a universal constant $\epsilon_0>0$ such that if \begin{equation}\label{smallnessasmpbmo-1} \|S(t)u_{0}\|_{\mathcal{E}_{T}}<\epsilon_0, \end{equation} then there exists a $v\in \mathcal{E}_{T}$, which solves the Navier-Stokes equations in the sense of distributions and satisfies the following properties. The first property is that $v$ solves the following integral equation: \begin{equation}\label{vintegeqnbmo-1} v(x,t):= S(t)u_{0}+G(v\otimes v)(x,t) \end{equation} in $Q_{T}$, along with the estimate \begin{equation}\label{vintegestbmo-1} \|v\|_{\mathcal{E}_{T}}<2\|S(t)u_0\|_{\mathcal{E}_{T}}. \end{equation} The second property is that $v$ is a weak Leray-Hopf solution on $Q_{T}$. If $\pi_{v\otimes v}$ is the associated pressure, we have (here, $\lambda\in ]0,T[$ and $p\in ]1,\infty[$): \begin{equation}\label{presspacebmo-1} \pi_{v\otimes v} \in L_{\frac{5}{3}}(Q_{T})\cap L_{\frac{p}{2},\infty}(Q_{\lambda,T}).
\end{equation} The final property is that for $\lambda\in]0,T[$ and $k=0,1,\ldots$, $l=0,1,\ldots$: \begin{equation}\label{vpsmoothbmo-1} \sup_{(x,t)\in Q_{\lambda,T}}\big(|\partial_{t}^{l}\nabla^{k} v|+|\partial_{t}^{l}\nabla^{k} \pi_{v\otimes v}|\big)\leq c(p_0,\lambda,\|u_0\|_{BMO^{-1}},\|u_0\|_{L_{2}},k,l ). \end{equation} \end{theorem} \subsection{Proof of Theorem \ref{weakstronguniquenessBesov}} \begin{proof} Let us now consider any other weak Leray-Hopf solution $u$, defined on $Q_{\infty}$ and with initial data $u_0 \in J(\mathbb{R}^3)\cap \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)\cap VMO^{-1}(\mathbb{R}^3).$ Let $\widehat{T}(u_0)$ be such that $$\|S(t)u_0\|_{\mathcal{E}_{\widehat{T}(u_0)}}< \epsilon_0,$$ where $\epsilon_0$ is from (\ref{smallnessasmpbmo-1}) of Theorem \ref{regularitycriticalbmo-1}. Consider $0<T<\widehat{T}(u_0)$, where $T$ is to be determined. Let $v:Q_{T}\rightarrow\mathbb{R}^3$ be as in Theorem \ref{regularitycriticalbmo-1}. From (\ref{vintegestbmo-1}), we have \begin{equation}\label{vbmo-1estuptoT} \|v\|_{\mathcal{E}_{T}}<2\|S(t)u_0\|_{\mathcal{E}_{T}}\leq 2\|S(t)u_0\|_{\mathcal{E}_{\widehat{T}(u_0)}}<2\epsilon_0. \end{equation} We define \begin{equation}\label{wdefbmo-1} w=u-v \in W^{1,0}_{2}(Q_{T})\cap C_{w}([0,T]; J(\mathbb{R}^3)). \end{equation} Moreover, $w$ satisfies the following equations \begin{equation}\label{directsystemwbmo-1} \partial_t w+w\cdot\nabla w+v\cdot\nabla w+w\cdot\nabla v-\Delta w=-\nabla q,\qquad\mbox{div}\,w=0 \end{equation} in $Q_{T}$, with the initial condition satisfied in the strong $L_{2}$ sense: \begin{equation}\label{initialconditionwbmo-1} \lim_{t\rightarrow 0^{+}}\|w(\cdot,t)\|_{L_{2}}=0. \end{equation} From the definition of $\mathcal{E}_{T}$, we have that $v\in L_{\infty}(Q_{\delta,T})$ for $0<\delta<T$.
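For completeness, this membership can be spelled out: by the definition of the norm in (\ref{pathspacenormdef}), for every $t\in[\delta,T]$,
\begin{equation*}
\|v(\cdot,t)\|_{L_{\infty}(\mathbb{R}^3)}\leq t^{-\frac{1}{2}}\|v\|_{\mathcal{E}_{T}}\leq \delta^{-\frac{1}{2}}\|v\|_{\mathcal{E}_{T}},
\end{equation*}
so that $\|v\|_{L_{\infty}(Q_{\delta,T})}\leq \delta^{-\frac{1}{2}}\|v\|_{\mathcal{E}_{T}}<\infty$.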
Using Proposition 14.3 in \cite{LR1}, one can deduce that for $t\in [\delta,T]$: \begin{equation}\label{wenergyequalitybmo-1} \begin{split} \|w(\cdot,t)\|_{L_2}^2\leq{}& \|u_{0}\|_{L_2}^2-2\int\limits_0^{t}\int\limits_{\mathbb{R}^3} |\nabla u|^2 dyd\tau+\|u_{0}\|_{L_2}^2-2\int\limits_0^{t}\int\limits_{\mathbb{R}^3} |\nabla v|^2 dyd\tau-{}\\ &-2\int\limits_{\mathbb{R}^3} u(y,\delta)\cdot v(y,\delta)dy+4\int\limits_{\delta}^t\int\limits_{\mathbb{R}^3} \nabla u:\nabla v dyd\tau+{}\\ &+2\int\limits_{\delta}^t\int\limits_{\mathbb{R}^3} v\otimes w:\nabla w dyd\tau. \end{split} \end{equation} Using Lemma \ref{trilinear} and (\ref{vintegestbmo-1}), we see that \begin{equation}\label{trilineatestwbmo-1} \begin{split} \int\limits_{\delta}^t\int\limits_{\mathbb{R}^3} |v||w||\nabla w| dyd\tau\leq{}& C\int\limits_{\delta}^t \|v(\cdot,\tau)\|_{L_{\infty}}^{2}\|w(\cdot,\tau)\|_{L_{2}}^2 d\tau+\frac{1}{2}\int\limits_{\delta}^t\int\limits_{\mathbb{R}^3}|\nabla w|^2 dyd\tau\\ \leq{}& C\|v\|_{\mathcal{E}_{T}}^{2}\int\limits_{\delta}^t \frac{\|w(\cdot,\tau)\|_{L_{2}}^{2}}{\tau} d\tau+ \frac{1}{2}\int\limits_{\delta}^t\int\limits_{\mathbb{R}^3}|\nabla w|^2 dyd\tau. \end{split} \end{equation} The main point now is that Lemma \ref{estnearinitialforLeraywithbesov} implies that there exist $\beta(p,\alpha)>0$ and $\gamma(\|u_{0}\|_{\dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)}, p,\alpha)>0$ such that for $0<t<\min (1,T, \gamma)$: \begin{equation}\label{estnearinitialtimeleraydifferenceslesbesgue} \|w(\cdot,t)\|_{L_{2}}^2\leq t^{\beta} c(p,\alpha, \|u_0\|_{L_{2}(\mathbb{R}^3)}, \|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)}). \end{equation} Hence, \begin{equation}\label{wweightedintimelesbesgue} \sup_{0<t<T} \frac{\|w(\cdot,t)\|_{L_2}^2}{t^{\beta}}<\infty.
\end{equation} This allows us to take $\delta\rightarrow 0$ in (\ref{wenergyequalitybmo-1}) to get: \begin{equation}\label{wenergyequalityuptoinitialtimebmo-1lesbesgue} \|w(\cdot,t)\|_{L_2}^2+2\int\limits_{0}^{t}\int\limits_{\mathbb{R}^3}|\nabla w|^2 dyd\tau \leq 2\int\limits_{0}^t\int\limits_{\mathbb{R}^3} v\otimes w:\nabla w dyd\tau. \end{equation} Using (\ref{trilineatestwbmo-1}) and (\ref{wweightedintimelesbesgue}), and writing $\frac{1}{\tau}=\tau^{\beta-1}\cdot\tau^{-\beta}$ in the time integral, we see that for $t\in [0,T]$: \begin{equation}\label{wmainest1bmo-1} \begin{split} \|w(\cdot,t)\|_{L_{2}}^2&\leq C\|v\|_{\mathcal{E}_{T}}^{2}\int\limits_{0}^t \frac{\|w(\cdot,\tau)\|_{L_{2}}^{2}}{\tau} d\tau\\ &\leq{\frac{C}{\beta}t^{\beta}\|v\|_{\mathcal{E}_{T}}^{2}}\sup_{0<\tau<T}\Big(\frac{\|w(\cdot,\tau)\|_{L_{2}}^2}{\tau^{\beta}}\Big). \end{split} \end{equation} Using this and (\ref{vbmo-1estuptoT}), we have \begin{equation}\label{wmainest2bmo-1} \sup_{0<\tau<T}\Big(\frac{\|w(\cdot,\tau)\|_{L_{2}}^2}{\tau^{\beta}}\Big)\leq{\frac{C'}{\beta}\|S(t)u_0\|_{\mathcal{E}_{T}}^{2}}\sup_{0<\tau<T}\Big(\frac{\|w(\cdot,\tau)\|_{L_{2}}^2}{\tau^{\beta}}\Big). \end{equation} Using (\ref{VMO-1shrinking}), we see that we can choose $0<T=T(u_0)<\widehat{T}(u_0)$ such that $$\|S(t)u_0\|_{\mathcal{E}_{T}}\leq \Big(\frac{\beta}{2C'}\Big)^{\frac 1 2}.$$ With this choice of $T(u_0)$, it immediately follows that $w=0$ on $Q_{T(u_0)}$. \end{proof} \subsection{Proof of Corollary \ref{cannoneweakstronguniqueness}} \begin{proof} Since $3<p<\infty$, it is clear that there exists an $\alpha:=\alpha(p)$ such that $2<\alpha<3$ and \begin{equation}\label{pcondition} \alpha<p<\frac{\alpha}{3-\alpha}.
\end{equation} With this $p$ and $\alpha$, we may apply Proposition \ref{interpolativeinequalitybahourichemindanchin} with $s_{1}= -\frac{3}{2}+\frac{3}{p},$ $s_{2}= -1+\frac{3}{p}$ and $\theta=6\Big(\frac{1}{\alpha}-\frac{1}{3}\Big).$ In particular, this gives for any $u_0\in \mathcal{S}^{'}_{h}$: $$ \|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,1}}\leq c(p,\alpha)\|u_0\|_{\dot{B}^{-\frac{3}{2}+\frac{3}{p}}_{p,\infty}}^{6(\frac{1}{\alpha}-\frac{1}{3})}\|u_0\|_{\dot{B}^{-1+\frac{3}{p}}_{p,\infty}}^{6(\frac{1}{2}-\frac{1}{\alpha})}. $$ From Remark \ref{besovremark2}, we see that $\dot{B}^{s_{p,\alpha}}_{p,1}(\mathbb{R}^3)\hookrightarrow \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3) $ and $L_{2}(\mathbb{R}^3) \hookrightarrow\dot{B}^{0}_{2,\infty}(\mathbb{R}^3)\hookrightarrow\dot{B}^{-\frac{3}{2}+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3).$ Thus, we have the inclusion $$ \dot{B}^{s_{p}}_{p,\infty}(\mathbb{R}^3)\cap L_{2}(\mathbb{R}^3)\subset \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3). $$ From this, along with the inclusion $\dot{B}^{s_p}_{p,\infty}(\mathbb{R}^3)\hookrightarrow VMO^{-1}(\mathbb{R}^3)$, we infer \begin{equation}\label{corollarybesovembeddings} \dot{B}^{s_p}_{p,\infty}(\mathbb{R}^3)\cap J(\mathbb{R}^3)\subset VMO^{-1}(\mathbb{R}^3)\cap \dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)\cap J(\mathbb{R}^3). \end{equation} From (\ref{pcondition}) and (\ref{corollarybesovembeddings}), we infer that the conclusions of Corollary \ref{cannoneweakstronguniqueness} are an immediate consequence of Theorem \ref{weakstronguniquenessBesov}. \end{proof} \begin{remark}\label{remarkcannoneweakstrong} From the above proof of Corollary \ref{cannoneweakstronguniqueness}, we see that \begin{equation}\label{besovinclusionforestnearinitialtime} \dot{B}^{s_{p}}_{p,\infty}(\mathbb{R}^3)\cap J(\mathbb{R}^3)\subset\dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)\cap J(\mathbb{R}^3).
\end{equation} Hence, the conclusions of Lemma \ref{estnearinitialforLeraywithbesov} apply if $u_{0}\in \dot{B}^{s_{p}}_{p,\infty}(\mathbb{R}^3)\cap J(\mathbb{R}^3).$ Using this, we claim that the assumptions of Corollary \ref{cannoneweakstronguniqueness} can be weakened. Namely, there exists a small $\tilde{\epsilon}_0=\tilde{\epsilon}_0(p)$ such that for all $u_{0} \in \dot{B}^{s_{p}}_{p,\infty}(\mathbb{R}^3)\cap J(\mathbb{R}^3)$ with \begin{equation}\label{smallkatonorm} \sup_{0<t<T} t^{\frac{-s_{p}}{2}}\|S(t)u_0\|_{L_{p}}<\tilde{\epsilon}_0, \end{equation} we have the following implication: all weak Leray-Hopf solutions on $Q_{\infty}$, with initial data $u_0$, coincide on $Q_{T}$. Indeed, taking any fixed $u_0$ in this class and taking $\tilde{\epsilon}_0$ sufficiently small, we may argue in a verbatim fashion as in the proof of Theorem \ref{regularity} (setting $\delta=0$). Consequently, there exists a weak Leray-Hopf solution $v$ on $Q_{T}$ such that \begin{equation}\label{Katosmall} \sup_{0<t<T}t^{\frac{-s_{p}}{2}}\|v(\cdot,t)\|_{L_{p}}<2\tilde{\epsilon}_0. \end{equation} Using this and Lemma \ref{estnearinitialforLeraywithbesov}, we may argue in a similar way to the proof of Theorem \ref{weakstronguniquenessBesov} to obtain the desired conclusion. \end{remark} \setcounter{equation}{0} \section{Uniqueness Criterion for Weak Leray-Hopf Solutions} Now let us state two known facts, which will be used in the proof of Proposition \ref{extendladyzhenskayaserrinprodi}. If $v$ is a weak Leray-Hopf solution on $Q_{\infty}$ with initial data $u_{0} \in J(\mathbb{R}^3)$, then $v$ satisfies the integral equation in $Q_{\infty}$: \begin{equation}\label{vintegeqnweakLerayhopf} v(x,t)= S(t)u_0+G(v\otimes v)(x,t). \end{equation} The second fact is as follows. Consider $3<p<\infty$ and $2<q<\infty$ such that $ {3}/{p}+{2}/{q}=1.
$ Then there exists a constant $C=C(p,q)$ such that for all $f, g\in L^{q,\infty}(0,T; L^{p,\infty}(\mathbb{R}^3))$ \begin{equation}\label{bicontinuityLorentz} \|G(f\otimes g)\|_{L^{q,\infty}(0,T; L^{p,\infty}(\mathbb{R}^3))}\leq C\|f\|_{L^{q,\infty}(0,T; L^{p,\infty}(\mathbb{R}^3))}\|g\|_{L^{q,\infty}(0,T; L^{p,\infty}(\mathbb{R}^3))}. \end{equation} These statements and their corresponding proofs can be found in \cite{LR1}\footnote{Specifically, Theorem 11.2 of \cite{LR1}.} and \cite{LRprioux}\footnote{Specifically, Lemma 6.1 of \cite{LRprioux}.}, for example. \subsection{Proof of Proposition \ref{extendladyzhenskayaserrinprodi}} \begin{proof} \begin{itemize} \item[] \textbf{Case 1: $u$ satisfies (\ref{extendladyzhenskayaserrinprodi2})-(\ref{smallness}) } \end{itemize} In this case, we see that the facts mentioned in the previous paragraph imply \begin{equation}\label{lorentzinitialdatasmall} \|S(t)u_0\|_{L^{q,\infty}(0,T; L^{p,\infty})}\leq \epsilon_{*}+C\epsilon_{*}^{2}. \end{equation} For $0<t_{0}<t_{1}$, we may apply O'Neil's convolution inequality (Proposition \ref{O'Neil}) to infer that \begin{equation}\label{almostdecreasingverified} \|S(t_1)u_0\|_{L^{p,\infty}}\leq 3p \|S(t_{0})u_0\|_{L^{p,\infty}}. \end{equation} This, in conjunction with (\ref{lorentzinitialdatasmall}), allows us to apply Lemma \ref{pointwiselorentzdecreasing} to obtain that for $0<t<T$ \begin{equation}\label{lorentzpointwiseexplicit} \|S(t)u_0\|_{L^{p,\infty}}\leq \frac{ 6p(\epsilon_{*}+C\epsilon_{*}^{2})}{t^{\frac{1}{q}}}. \end{equation} A further application of O'Neil's convolution inequality gives \begin{equation}\label{lorentzbesovembedding} \|S(t)u_0\|_{L_{2p}}\leq \frac{C'(p)\|S(t/2)u_0\|_{L^{p,\infty}}}{t^{\frac{3}{4p}}}. \end{equation} This and (\ref{lorentzpointwiseexplicit}) imply that \begin{equation}\label{katoclassestimate} \sup_{0<t<T} t^{\frac{-s_{2p}}{2}} \|S(t)u_0\|_{L_{2p}}\leq C''(p,q)(\epsilon_{*}+\epsilon_{*}^2).
\end{equation} Recalling that $u_0 \in J(\mathbb{R}^3)$ ($\subset \mathcal{S}_{h}^{'}$), we see that by Young's inequality $$ t^{\frac{-s_{2p}}{2}}\|S(t)u_0\|_{L_{2p}}\leq \frac{C'''(p)\|u_{0}\|_{L_{2}}}{t^{\frac{1}{4}}}.$$ From this and (\ref{katoclassestimate}), we deduce that \begin{equation}\label{initialdataspaces} u_{0} \in J(\mathbb{R}^3)\cap \dot{B}^{s_{2p}}_{2p,\infty}(\mathbb{R}^3). \end{equation} Using (\ref{katoclassestimate})-(\ref{initialdataspaces}) and Remark \ref{remarkcannoneweakstrong}, one reaches the desired conclusion provided $\epsilon_{*}$ is sufficiently small. \begin{itemize} \item[] \textbf{Case 2: $u$ satisfies (\ref{extendladyzhenskayaserrinprodi1})-(\ref{integrabilitycondition1}) } \end{itemize} The assumptions (\ref{extendladyzhenskayaserrinprodi1})-(\ref{integrabilitycondition1}) imply that $u\in L^{q,\infty}(0,T; L^{p,\infty}(\mathbb{R}^3))$ with \begin{equation}\label{shrinkingLorentz} \lim_{S\rightarrow 0} \|u\|_{L^{q,\infty}(0,S; L^{p,\infty})}=0. \end{equation} From \cite{Sohr}, we have that \begin{equation}\label{sohrreg} u \in L_{\infty}(\mathbb{R}^3 \times ]T',T[)\quad\text{for any}\quad 0<T'<T. \end{equation} Now, (\ref{shrinkingLorentz}) allows us to reduce to Case 1 on some time interval. This observation, combined with (\ref{sohrreg}), enables us to deduce that $u$ is unique on $Q_{T}$ amongst all other weak Leray-Hopf solutions with the same initial value. \end{proof} \setcounter{equation}{0} \section{Appendix} \subsection{Appendix I: Sketch of Proof of Theorem \ref{regularitycriticalbmo-1}} \begin{proof} \begin{itemize} \item[]\textbf{ Step 1: the mollified integral equation} \end{itemize} Let $\omega \in C_{0}^{\infty}(B(1))$ be a standard mollifier.
Moreover, denote $$\omega_{\epsilon}(x):= \frac{1}{\epsilon^3} \omega\Big(\frac{x}{\epsilon}\Big).$$ Recall Young's inequality: \begin{equation}\label{Youngsrecall} \|f\star g\|_{L^{r}}\leq C_{p,q}\|f\|_{L^{p}}\|g\|_{L^{q}} \end{equation} where $1\leq p,q,r\leq\infty$ and $\frac 1 p+\frac 1 q=\frac 1 r +1$. Applying this and the pointwise estimate (\ref{kenrelKest}) gives the following: \begin{equation}\label{Ktensormainest1recall} \begin{split} \|K(\cdot, t-\tau)\star (f\otimes (\omega_{\epsilon}\star g))(\cdot,\tau)\|_{L_{2}}&\leq \|K(\cdot,t-\tau)\|_{L_1}\|f\otimes (\omega_{\epsilon}\star g)\|_{L_{2}}\\&\leq C (t-\tau)^{-\frac{1}{2}}\|f\otimes (\omega_{\epsilon}\star g)\|_{L_{2}}\leq C (t-\tau)^{-\frac{1}{2}}\frac{\|f\|_{\mathcal{E}_T}\|g\|_{L_{\infty}(0,T; L_{2})}}{\tau^{\frac 1 2}}. \end{split} \end{equation} Integrating in $\tau$ and using $\int_0^t (t-\tau)^{-\frac{1}{2}}\tau^{-\frac{1}{2}}d\tau=\pi$, one can then show that \begin{equation}\label{Gtensorest1bmo-1} \|G(f\otimes (\omega_{\epsilon}\star g))\|_{L_{\infty}(0,T; L_{2})}\leq C\|f\|_{\mathcal{E}_T}\|g\|_{L_{\infty}(0,T; L_{2})}. \end{equation} From \cite{LRprioux}, it is seen that $\mathcal{E}_{T}$ is preserved by the operation of mollification: \begin{equation}\label{kochtatarupathspacemollify} \| \omega_{\epsilon}\star g\|_{\mathcal{E}_{T}}\leq C'\|g\|_{\mathcal{E}_{T}}. \end{equation} Here, $C'$ is independent of $T$ and $\epsilon$. Using this and (\ref{bilinbmo-1}), we obtain: \begin{equation}\label{Gtensorest2bmo-1} \|G(f\otimes (\omega_{\epsilon}\star g))\|_{\mathcal{E}_{T}}\leq C\|f\|_{\mathcal{E}_T}\|g\|_{\mathcal{E}_T}. \end{equation} We briefly describe the successive approximations. Let $v^{(0)}= S(t)u_0$ and, for $n=0,1,\ldots$, $$v^{(n+1)}= v^{(0)}+G(v^{(n)} \otimes (\omega_{\epsilon}\star v^{(n)})).$$ Moreover, for $n=0,1,\ldots$ define: \begin{equation}\label{Mdefbmo-1} M^{(n)}:=\|v^{(n)}\|_{\mathcal{E}_T} \end{equation} and \begin{equation}\label{Kdefbmo-1} K^{(n)}:=\|v^{(n)}\|_{L_{\infty}(0,T; L_{2})}.
\end{equation} Then using (\ref{Gtensorest1bmo-1}) and (\ref{Gtensorest2bmo-1}), we have the following iterative relations: \begin{equation}\label{Miterativebmo-1} M^{(n+1)}\leq M^{(0)}+C(M^{(n)})^2 \end{equation} and \begin{equation}\label{Kiterativebmo-1} K^{(n+1)}\leq K^{(0)}+CM^{(n)}K^{(n)}. \end{equation} If \begin{equation}\label{Msmallnessconditionbmo-1} 4CM^{(0)}< 1, \end{equation} then one can show that for $n=1,2,\ldots$ we have \begin{equation}\label{Mboundbmo-1} M^{(n)}<2M^{(0)} \end{equation} and \begin{equation}\label{Kboundbmo-1} K^{(n)}<2K^{(0)}. \end{equation} Using (\ref{Miterativebmo-1})-(\ref{Kboundbmo-1}) and arguing as in \cite{Kato}, we see that there exists $v^{\epsilon}\in L_{\infty}(0,T;L_{2})\cap\mathcal{E}_{T}$ such that \begin{equation}\label{converg1bmo-1} \lim_{n\rightarrow\infty}\|v^{(n)}-v^{\epsilon}\|_{\mathcal{E}_{T}}=0, \end{equation} \begin{equation}\label{converg2bmo-1} \lim_{n\rightarrow\infty}\|v^{(n)}-v^{\epsilon}\|_{L_{\infty}(0,T;L_{2})}=0 \end{equation} and $v^{\epsilon}$ solves the integral equation \begin{equation}\label{integraleqnmollified} v^{\epsilon}(x,t)= S(t)u_0+G(v^{\epsilon}\otimes (\omega_{\epsilon}\star v^{\epsilon}))(x,t) \end{equation} in $Q_{T}$. Define $$\pi_{v^{\epsilon}\otimes (\omega_{\epsilon}\star v^{\epsilon})}:= \mathcal{R}_{i}\mathcal{R}_{j}(v^{\epsilon}_{i}(\omega_{\epsilon}\star v^{\epsilon})_{j}),$$ where $\mathcal{R}_{i}$ denotes the Riesz transform and repeated indices are summed.
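For instance, granted (\ref{Msmallnessconditionbmo-1}), the uniform bound (\ref{Mboundbmo-1}) follows by induction on $n$: if $M^{(n)}<2M^{(0)}$, then (\ref{Miterativebmo-1}) gives
\begin{equation*}
M^{(n+1)}\leq M^{(0)}+C\big(2M^{(0)}\big)^{2}=M^{(0)}\big(1+4CM^{(0)}\big)<2M^{(0)},
\end{equation*}
and, once (\ref{Mboundbmo-1}) is available, (\ref{Kiterativebmo-1}) yields $K^{(n+1)}\leq K^{(0)}+2CM^{(0)}K^{(n)}$, from which (\ref{Kboundbmo-1}) follows since $2CM^{(0)}<\frac{1}{2}$.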
One can readily show that on $Q_{T}$, $(v^{\epsilon}, \pi_{v^{\epsilon}\otimes (\omega_{\epsilon}\star v^{\epsilon})})$ are solutions to the mollified Navier-Stokes system: \begin{equation}\label{vepsiloneqn} \partial_{t} v^{\epsilon}-\Delta v^{\epsilon} +\mbox{div}\,(v^{\epsilon}\otimes (\omega_{\epsilon}\star v^{\epsilon}))=-\nabla \pi_{v^{\epsilon}\otimes (\omega_{\epsilon}\star v^{\epsilon})}, \end{equation} \begin{equation}\label{vepsilondivfree} \mbox{div}\,v^{\epsilon}=0, \end{equation} \begin{equation}\label{vepsilonintialcondition} v^{\epsilon}(\cdot,0)=u_0(\cdot). \end{equation} We can also infer that $v^{\epsilon}\in W^{1,0}_{2}(Q_T)\cap C([0,T]; J(\mathbb{R}^3))$, along with the energy equality \begin{equation}\label{vepsilonenergyest} \|v^{\epsilon}(\cdot,t)\|_{L_{2}}^2+\int\limits_0^t\int\limits_{\mathbb{R}^3}|\nabla v^{\epsilon}(x,s)|^2 dxds= \|u_0\|^2_{L_2}. \end{equation} \begin{itemize} \item[]\textbf{Step 2: passing to the limit $\epsilon\rightarrow 0$} \end{itemize} Since $u_0 \in VMO^{-1}(\mathbb{R}^3)\cap J(\mathbb{R}^3)$, it is known from \cite{LRprioux} (specifically, Theorem 3.5 there) that there exists a $v\in \mathcal{E}_{T}$ such that \begin{equation}\label{convergmild} \lim_{\epsilon\rightarrow 0}\|v^{\epsilon}- v\|_{\mathcal{E}_{T}}=0 \end{equation} with $v$ satisfying the integral equation (\ref{vintegeqnbmo-1}) in $Q_{T}$. Using arguments from \cite{Le} and (\ref{vepsiloneqn})-(\ref{vepsilonenergyest}), we see that $v^{\epsilon}$ will converge to a weak Leray-Hopf solution on $Q_{T}$ with initial data $u_0$. Thus, $v\in \mathcal{E}_{T}$ is a weak Leray-Hopf solution. The remaining conclusions of Theorem \ref{regularitycriticalbmo-1} follow from similar reasoning as in the proof of the statements of Theorem \ref{regularity}, hence we omit the details.
\end{proof} \textbf{Acknowledgement.} The author wishes to warmly thank Kuijie Li, whose remarks on the first version of this paper led to an improvement of the statement of Theorem \ref{weakstronguniquenessBesov}.
\subsection{TTL Policy for Individual Caches} Consider the cache at node $j$. Each content $i$ is associated with a timer $T_{ij}$ under the TTL cache policy. While we focus on node $j,$ we omit the subscript ${}_{.j}.$ Consider the event when content $i$ is requested. There are two cases: (i) if content $i$ is not in the cache, content $i$ is inserted into the cache and its timer is set to $T_i;$ (ii) if content $i$ is in the cache, its timer is reset to $T_i$. The timer decreases at a constant rate and the content is evicted once its timer expires. \subsection{Replication Strategy for Cache Networks}\label{sub:cachenetwork} In a cache network, upon a cache hit, we need to specify how content is replicated along the reverse path towards the user that sent the request. \subsubsection{Content Request}\label{sec:content-request} The network serves requests for contents in $\mathcal{D}$ routed over the graph $G$. Any node in the network can generate a request for a content, which is forwarded along a fixed and unique path from the user towards a terminal node that is connected to a server that always contains the content. Note that the request need not reach the end of the path; it stops upon hitting a cache that stores the content. At that point, the requested content is propagated over the path in the reverse direction to the node that requested it. To be more specific, a request $(v, i, p)$ is determined by the node, $v$, that generated the request, the requested content, $i$, and the path, $p$, over which the request is routed. 
We denote a path $p$ of length $|p|=L$ as a sequence $\{v_{1p}, v_{2p}, \cdots, v_{Lp}\}$ of nodes $v_{lp}\in V$ such that $(v_{lp}, v_{(l+1)p})\in E$ for $l\in\{1, \cdots, L-1\},$ where $v_{Lp}=v.$ We assume that path $p$ is loop-free and terminal node $v_{1p}$ is the only node on path $p$ that accesses the server for content $i.$ \subsubsection{Replication Strategy} We consider TTL cache policies at every node in the cache network $G$ where each content has its own timer. Suppose content $i$ is requested and routed along path $p.$ There are two cases: (i) content $i$ is not in any cache along path $p,$ in which case content $i$ is fetched from the server and inserted into the first cache (denoted by cache $1$)\footnote{Since we consider a fixed path $p$, for simplicity we suppress the dependency on $p$ and $v$ and denote the caches on the path as nodes $1,\cdots, L$ directly.} on the path. Its timer is set to $T_{i1}$; (ii) if content $i$ is in cache $l$ along path $p,$ we consider the following strategies \cite{rodriguez16}: \begin{itemize} \item {\textbf{Move Copy Down (MCD)}:} content $i$ is moved to cache $l+1$ preceding cache $l$ in which $i$ is found, and the timer at cache $l+1$ is set to $T_{i{(l+1)}}$. Content $i$ is discarded once the timer expires; \item {\textbf{Move Copy Down with Push (MCDP)}:} MCDP behaves the same as MCD upon a cache hit. However, if timer $T_{il}$ expires, content $i$ is pushed one cache back to cache $l-1$ and the timer is set to $T_{i(l-1)}.$ \end{itemize} \subsection{Utility Function} Utility functions capture the satisfaction perceived by a user after being served a content. We associate each content $i\in\mathcal{D}$ with a utility function $U_i: [0,1]\rightarrow\mathbb{R}$ that is a function of hit probability $h_i$. $U_i(\cdot)$ is assumed to be increasing, continuously differentiable, and strictly concave.
In particular, for our numerical studies, we focus on the widely used $\beta$-fair utility functions \cite{srikant13} given by \begin{equation}\label{eq:utility} U_i(h)= \begin{cases} w_i\frac{h^{1-\beta}}{1-\beta},& \beta\geq0, \beta\neq 1;\\ w_i\log h,& \beta=1, \end{cases} \end{equation} where $w_i>0$ denotes a weight associated with content $i$. \subsubsection{MCDP}\label{mcdpmodel} Requests for content $i$ arrive according to a Poisson process with rate $\lambda_i.$ Under TTL, content $i$ spends a deterministic time in a cache if it is not requested, independent of all other contents. We denote the timer as $T_{il}$ for content $i$ in cache $l$ on the path $p,$ where $l\in\{1,\cdots, |p|\}.$ Denote by $t_k^i$ the $k$-th time that content $i$ is either requested or the timer expires. For simplicity, we assume that content is in cache $0$ (i.e., server) when it is not in the cache network. We can then define a discrete time Markov chain (DTMC) $\{X_k^i\}_{k\geq0}$ with $|p|+1$ states, where $X_k^i$ is the index of the cache that content $i$ is in at time $t_k^i.$ The event that the time between two requests for content $i$ exceeds $T_{il}$ occurs with probability $e^{-\lambda_i T_{il}}$; consequently we obtain the transition probability matrix of $\{X_k^i\}_{k\geq0}$ and compute the stationary distribution. Details can be found in Appendix~\ref{mcdpmodel-appendix}. 
The timer-average probability that content $i$ is in cache $l\in\{1,\cdots, |p|\}$ is \begin{subequations}\label{eq:hit-prob-mcdp} \begin{align} & h_{i1} = \frac{e^{\lambda_iT_{i1}}-1}{1+\sum_{j=1}^{|p|}(e^{\lambda_iT_{i1}}-1)\cdots (e^{\lambda_iT_{ij}}-1)},\label{eq:mcdp1}\\ & h_{il} = h_{i(l-1)}(e^{\lambda_iT_{il}}-1),\; l = 2,\cdots,|p|,\label{eq:mcdp2} \end{align} \end{subequations} where $h_{il}$ is also the hit probability for content $i$ at cache $l.$ \subsubsection{MCD} Again, under TTL, content $i$ spends a deterministic time $T_{il}$ in cache $l$ if it is not requested, independent of all other contents. We define a DTMC $\{Y_k^i\}_{k\geq0}$ by observing the system at the time that content $i$ is requested. Similar to MCDP, if content $i$ is not in the cache network, it is in cache $0$; thus we still have $|p|+1$ states. If $Y_k^i=l$, then the next request for content $i$ comes within time $T_{il}$ with probability $1-e^{-\lambda_iT_{il}}$, in which case $Y_{k+1}^i=l+1$; otherwise $Y_{k+1}^i=0$ due to the MCD policy. We can obtain the transition probability matrix of $\{Y_k^i\}_{k\geq0}$ and compute the stationary distribution; details are available in Appendix~\ref{mcdmodel-appendix}. By the PASTA property \cite{MeyTwe09}, it follows that the stationary probability that content $i$ is in cache $l\in\{1,\cdots, |p|\}$ is \begin{subequations}\label{eq:hit-prob-mcd} \begin{align} &h_{il}=h_{i0}\prod_{j=1}^{l}(1-e^{-\lambda_iT_{ij}}),\quad l=1,\cdots, |p|-1,\displaybreak[1]\label{eq:mcd2}\\ &h_{i|p|}=e^{\lambda_i T_{i|p|}}h_{i0}\prod_{j=1}^{|p|-1}(1-e^{-\lambda_iT_{ij}}),\label{eq:mcd3} \end{align} \end{subequations} where $h_{i0}=1/[1+\sum_{l=1}^{|p|-1}\prod_{j=1}^l(1-e^{-\lambda_i T_{ij}})+e^{\lambda_i T_{i|p|}}\prod_{j=1}^{|p|}(1-e^{-\lambda_i T_{ij}})].$ \subsubsection{MCDP} Since $0\leq T_{il}\leq \infty$, it is easy to check that $0\leq h_{il}\leq 1$ for $l\in\{1,\cdots,|p|\}$ from~(\ref{eq:mcdp1}) and~(\ref{eq:mcdp2}).
Furthermore, it is clear that there exists a mapping between $(h_{i1},\cdots, h_{i|p|})$ and $(T_{i1},\cdots, T_{i|p|}).$ By simple algebra, we obtain \begin{subequations}\label{eq:stationary-mcdp-timers} \begin{align} & T_{i1} = \frac{1}{\lambda_i}\log \bigg(1 + \frac{h_{i1}}{1-\big(h_{i1} + h_{i2} + \cdots + h_{i|p|}\big)}\bigg), \label{eq:mcdpttl1}\\ & T_{il} = \frac{1}{\lambda_i}\log \bigg(1 + \frac{h_{il}}{h_{i(l-1)}}\bigg),\quad l= 2,\cdots, |p|.\label{eq:mcdpttl2} \end{align} \end{subequations} Note that \begin{align}\label{eq:mcdp-constraint} h_{i1} + h_{i2} + \ldots + h_{i|p|} \leq 1, \end{align} must hold during the operation, which is always true for our caching policies. \subsubsection{MCD} Similarly, from~(\ref{eq:mcd2}) and~(\ref{eq:mcd3}), we simply check that there exists a mapping between $(h_{i1},\cdots, h_{i|p|})$ and $(T_{i1},\cdots, T_{i|p|}).$ Since $T_{il}\geq0,$ by~(\ref{eq:mcd2}), we have \begin{align} \label{eq:mcd-constraint1} h_{i(|p|-1)} \leq h_{i(|p|-2)} \leq \cdots \leq h_{i1} \leq h_{i0}. \end{align} By simple algebra, we can obtain \begin{subequations}\label{eq:stationary-mcd-timers} \begin{align} & T_{i1} = -\frac{1}{\lambda_i}\log \bigg(1 - \frac{h_{i1}}{1-\big(h_{i1} + h_{i2} + \cdots + h_{i|p|}\big)}\bigg), \label{eq:mcdttl1}\\ & T_{il} = -\frac{1}{\lambda_i}\log\bigg(1-\frac{h_{il}}{h_{i(l-1)}}\bigg),\quad l= 2, \cdots , |p|-1, \label{eq:mcdttl2}\\ & T_{i|p|} = \frac{1}{\lambda_i}\log\bigg(1+\frac{h_{i|p|}}{h_{i(|p|-1)}}\bigg). \label{eq:mcdttl3} \end{align} \end{subequations} Again \begin{align} \label{eq:mcd-constraint2} h_{i1} + h_{i2} + \cdots + h_{i|p|} \leq 1, \end{align} must hold during the operation to obtain~(\ref{eq:stationary-mcd-timers}), which is always true for MCD. 
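As a quick sanity check of the correspondence for MCDP, consider a toy example with $|p|=2$, $\lambda_i=1$ and $T_{i1}=T_{i2}=\log 2$, so that $e^{\lambda_iT_{il}}-1=1$ for $l=1,2$. Then (\ref{eq:hit-prob-mcdp}) gives
\begin{equation*}
h_{i1}=\frac{1}{1+1+1\cdot 1}=\frac{1}{3},\qquad h_{i2}=h_{i1}\cdot 1=\frac{1}{3},
\end{equation*}
and substituting these hit probabilities back into (\ref{eq:stationary-mcdp-timers}) recovers the timers:
\begin{equation*}
T_{i1}=\log\Big(1+\frac{1/3}{1-2/3}\Big)=\log 2,\qquad T_{i2}=\log\Big(1+\frac{1/3}{1/3}\Big)=\log 2,
\end{equation*}
confirming that (\ref{eq:stationary-mcdp-timers}) inverts (\ref{eq:hit-prob-mcdp}); note also that constraint (\ref{eq:mcdp-constraint}) holds, since $h_{i1}+h_{i2}=2/3\leq 1$.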
\subsubsection{MCDP}\label{sec:line-utility-max-mcdp} Given~(\ref{eq:stationary-mcdp-timers}) and~(\ref{eq:mcdp-constraint}), optimization problem~(\ref{eq:max-ttl}) under MCDP becomes \begin{subequations}\label{eq:max-mcdp} \begin{align} \text{\bf{L-U-MCDP:}} \max \quad&\sum_{i\in \mathcal{D}} \sum_{l=1}^{|p|} \psi^{|p|-l} U_i(h_{il}) \displaybreak[0]\\ \text{s.t.} \quad&\sum_{i\in \mathcal{D}} h_{il} \leq B_l,\quad l=1, \cdots, |p|, \displaybreak[1]\label{eq:hpbmcdp1}\\ & \sum_{l=1}^{|p|}h_{il}\leq 1,\quad\forall i \in \mathcal{D}, \displaybreak[2]\label{eq:hpbmcdp2}\\ &0\leq h_{il}\leq1, \quad\forall i \in \mathcal{D}, \end{align} \end{subequations} where~(\ref{eq:hpbmcdp1}) is the cache capacity constraint and~(\ref{eq:hpbmcdp2}) is due to the variable exchanges under MCDP as discussed in~(\ref{eq:mcdp-constraint}). \begin{prop} The optimization problem defined in~(\ref{eq:max-mcdp}) under MCDP has a unique global optimum. \end{prop} \subsubsection{MCD}\label{sec:line-utility-max-mcd} Given~(\ref{eq:mcd-constraint1}),~(\ref{eq:stationary-mcd-timers}) and~(\ref{eq:mcd-constraint2}), optimization problem~(\ref{eq:max-ttl}) under MCD becomes \begin{subequations}\label{eq:max-mcd} \begin{align} \text{\bf{L-U-MCD:}} \max \quad&\sum_{i\in \mathcal{D}} \sum_{l=1}^{|p|} \psi^{|p|-l} U_i(h_{il}) \displaybreak[0] \\ \text{s.t.} \quad&\sum_{i\in \mathcal{D}} h_{il} \leq B_l,\quad l=1, \cdots, |p|, \displaybreak[1]\label{eq:hpbmcd1}\\ &h_{i(|p|-1)} \leq \cdots \leq h_{i1} \leq h_{i0},\quad\forall i \in \mathcal{D}, \displaybreak[2]\label{eq:hpbmcd2}\\ & \sum_{l=1}^{|p|}h_{il}\leq 1,\quad\forall i \in \mathcal{D}, \displaybreak[3] \label{eq:hpbmcd3}\\ &0\leq h_{il}\leq1, \quad\forall i \in \mathcal{D}, \label{eq:hpbmcdp4} \end{align} \end{subequations} where~(\ref{eq:hpbmcd1}) is the cache capacity constraint, while~(\ref{eq:hpbmcd2}) and~(\ref{eq:hpbmcd3}) are due to the variable exchanges under MCD as discussed in~(\ref{eq:mcd-constraint1}) and~(\ref{eq:mcd-constraint2}).
\begin{prop} The optimization problem defined in~(\ref{eq:max-mcd}) under MCD has a unique global optimum. \end{prop} \subsubsection{Online Algorithm}\label{sec:line-online-primal} \begin{figure*}[htbp] \centering \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/opt_prob_primal_single_path_LRUm.pdf} \caption{Hit probability for MCDP under primal solution in a three-node linear cache network.} \label{mcdphp-line} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/cachesize_pdf_primal_single_path_LRUm.pdf} \caption{Cache size distribution for MCDP under primal solution in a three-node linear cache network.} \label{mcdpcs-line} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/opt_prob_psipoint1_LRUm.pdf} \caption{The impact of discount factor on the performance in a three-node linear cache network: $\psi=0.1.$} \label{fig:line-beta01} \end{minipage} \vspace{-0.1in} \end{figure*} In Sections~\ref{sec:line-utility-max-mcdp} and~\ref{sec:line-utility-max-mcd}, we formulated convex utility maximization problems with a fixed cache size. However, system parameters (e.g., cache size and request processes) can change over time, so it is not feasible to solve the optimization offline and implement the optimal strategy. Thus, we need to design online algorithms to implement the optimal strategy and adapt to changes in the presence of limited information. In the following, we develop such an algorithm for MCDP. A similar algorithm exists for MCD and is omitted due to space constraints. \noindent{\textit{\textbf{Primal Algorithm:}}} We aim to design an algorithm based on the optimization problem in~(\ref{eq:max-mcdp}), which is the primal formulation.
Denote $\boldsymbol h_i=(h_{i1},\cdots, h_{i|p|})$ and $\boldsymbol h=(\boldsymbol h_1,\cdots, \boldsymbol h_n).$ We first define the following objective function \begin{align}\label{eq:primal} Z(\boldsymbol h) = \sum_{i\in \mathcal{D}} \sum_{l=1}^{|p|} \psi^{|p|-l} U_i(h_{il})&-\sum_{l=1}^{|p|}C_l\left(\sum\limits_{i\in \mathcal{D}} h_{il} - B_l\right)\nonumber\\&-\sum_{i\in \mathcal{D}}\tilde{C}_i\left(\sum\limits_{l=1}^{|p|}h_{il}-1\right), \end{align} where $C_l(\cdot)$ and $\tilde{C}_i(\cdot)$ are convex and non-decreasing penalty functions denoting the cost for violating constraints~(\ref{eq:hpbmcdp1}) and~(\ref{eq:hpbmcdp2}). Therefore, it is clear that $Z(\cdot)$ is strictly concave. Hence, a natural way to obtain the maximal value of~(\ref{eq:primal}) is to use the standard \emph{gradient ascent algorithm} to move the variable $h_{il}$ for $i\in\mathcal{D}$ and $l\in\{1,\cdots,|p|\}$ in the direction of the gradient, given as \begin{align} \frac{\partial Z(\boldsymbol h)}{\partial h_{il}}= \psi^{|p|-l}U_i^\prime(h_{il})-C_l^\prime\left(\sum\limits_{i\in \mathcal{D}} h_{il} - B_l\right) - \tilde{C}_i^\prime\left(\sum\limits_{l=1}^{|p|}h_{il}-1\right), \end{align} where $U_i^\prime(\cdot),$ $C_l^\prime(\cdot)$ and $\tilde{C}_i^\prime(\cdot)$ denote partial derivatives w.r.t. $h_{il}.$ Since $h_{il}$ indicates the probability that content $i$ is in cache $l$, $\sum_{i\in \mathcal{D}} h_{il}$ is the expected number of contents currently in cache $l$, denoted by $B_{\text{curr},l}$. 
Therefore, the primal algorithm for MCDP is given by \begin{subequations}\label{eq:primal-mcdp} \begin{align} &T_{il}[k] \leftarrow \begin{cases} \frac{1}{\lambda_i}\log \bigg(1 + \frac{h_{il}[k]}{1-\big(h_{i1}[k] + h_{i2}[k] + \cdots + h_{i|p|}[k]\big)}\bigg),\quad l=1;\\ \frac{1}{\lambda_i}\log \bigg(1 + \frac{h_{il}[k]}{h_{i(l-1)}[k]}\bigg),\quad l= 2, \cdots , |p|,\label{eq:primal-mcdp-t} \end{cases}\\ &h_{il}[k+1]\leftarrow \max\Bigg\{0, h_{il}[k]+\zeta_{il}\Bigg[\psi^{|p|-l}U_i^\prime(h_{il}[k])\nonumber\\ &\qquad\qquad-C_l^\prime\left(B_{\text{curr},l} - B_l\right)-\tilde{C}_i^\prime\left(\sum\limits_{l=1}^{|p|}h_{il}[k]-1\right)\Bigg]\Bigg\},\label{eq:primal-mcdp-h} \end{align} \end{subequations} where $\zeta_{il} \geq0$ is the step-size parameter, and $k$ is the iteration number incremented upon each request arrival. \begin{theorem}\label{thm:mcdp-primal-conv} The primal algorithm given in~(\ref{eq:primal-mcdp}) converges to the optimal solution given a sufficiently small step-size parameter $\zeta_{il}.$ \end{theorem} \begin{proof} Since $U_i(\cdot)$ is strictly concave while $C_l(\cdot)$ and $\tilde{C}_i(\cdot)$ are convex,~(\ref{eq:primal}) is strictly concave; hence there exists a unique maximizer. Denote it by $\boldsymbol h^*.$ Define the following function \begin{align} Y(\boldsymbol h)=Z(\boldsymbol h^*)-Z(\boldsymbol h), \end{align} then it is clear that $Y(\boldsymbol h)\geq 0$ for any feasible $\boldsymbol h$ that satisfies the constraints in the original optimization problem, and $Y(\boldsymbol h)= 0$ if and only if $\boldsymbol h=\boldsymbol h^*.$ We prove that $Y(\boldsymbol h)$ is a Lyapunov function, from which it follows that the above primal algorithm converges to the optimum. Details are available in Appendix~\ref{appendix:convergence}. \end{proof} \subsubsection{Model Validations and Insights}\label{sec:validations-line-cache} In this section, we validate our analytical results with simulations for MCDP.
We consider a linear three-node cache network with cache capacities $B_l= 30$, $l=1, 2, 3.$ The total number of unique contents considered in the system is $n=100.$ We consider the Zipf popularity distribution with parameter $\alpha=0.8$, a log utility function, and discount factor $\psi=0.6.$ W.l.o.g., we assume that requests arrive according to a Poisson process with aggregate request rate $\Lambda=1.$ We first solve the optimization problem~(\ref{eq:max-mcdp}) using the Matlab routine \texttt{fmincon}. Then we implement our primal algorithm given in~(\ref{eq:primal-mcdp}), where we take the following penalty functions \cite{srikant13}: $C_l(x)= \max\{0,x - B_l\log(B_l+x)\}$ and $\tilde{C}_i(x)= \max\{0,x - \log(1+x)\}$. From Figure~\ref{mcdphp-line}, we observe that the hit probabilities produced by our algorithm match both the optimal and the empirical hit probabilities under MCDP. Figure~\ref{mcdpcs-line} shows the probability density of the number of contents in the cache network\footnote{The constraint~(\ref{eq:hpbmcdp1}) in problem~(\ref{eq:max-mcdp}) is on average cache occupancy. However, it can be shown that if $n\to\infty$ and $B_l$ grows sub-linearly, the probability of violating the target cache size $B_l$ becomes negligible \cite{dehghan16}.}. As expected, the densities are concentrated around the corresponding cache sizes. We further characterize the impact of the discount factor $\psi$ on performance by considering different values of $\psi$. Figure~\ref{fig:line-beta01} shows the result for $\psi=0.1.$ We observe that as $\psi$ decreases, the most popular contents are more likely to be cached in higher index caches (i.e., cache $3$) and the least popular contents in lower index caches (cache $1$), since cache hits at lower index caches are discounted more heavily.
This provides significant insight on the design of hierarchical caches, since in a linear cache network, a content enters the network via the first cache, and only advances to a higher index cache upon a cache hit. Under a stationary request process (e.g., a Poisson process), only popular contents are promoted to higher index caches, which is consistent with what we observe in Figure~\ref{fig:line-beta01}. A similar phenomenon has been observed in \cite{gast16,jiansri17} through numerical studies, while we characterize it through utility optimization. Second, we see that as $\psi$ increases, the performance difference between different caches decreases, and they become identical when $\psi=1$. This is because as $\psi$ increases, the performance degradation for cache hits at lower index caches decreases, and there is no difference between caches when $\psi=1.$ Due to space constraints, the results for $\psi=0.4, 0.6, 1$ are given in Appendix~\ref{appendix:line-validation-discount}. {We also compare our proposed scheme to replication strategies with LRU, LFU, FIFO and Random (RR) eviction policies. In a cache network, upon a cache hit, the requested content is usually replicated back in the network. There are three mechanisms in the literature: leave-copy-everywhere (LCE), leave-copy-probabilistically (LCP) and leave-copy-down (LCD), which differ in how the requested content is replicated along the reverse path. Due to space constraints, we refer interested readers to \cite{garetto16} for detailed explanations of these mechanisms. Furthermore, based on \cite{garetto16}, LCD significantly outperforms LCE and LCP; hence, we only consider LCD here. } {Figure~\ref{fig:line-comp-lcd} compares the performance of different eviction policies with the LCD replication strategy to our algorithm under MCDP for a three-node line cache network. We plot the optimal aggregated utilities of all the above policies, normalized to that under MCDP.
We observe that MCDP significantly outperforms all other eviction policies with LCD replication. Finally, we considered a larger line cache network, at the expense of longer simulation time. We again observe a significant gain of MCDP over the other eviction policies with LCD; the results are omitted here due to space constraints. } \subsubsection{Search and Fetch Cost} A request is sent along a path until it hits a cache that stores the requested content. We define \emph{search cost} (\emph{fetch cost}) as the cost of finding (serving) the requested content in the cache network (to the user). Consider cost as a function $c_s(\cdot)$ ($c_f(\cdot)$) of the hit probabilities. Then the expected search cost across the network is given as \begin{align}\label{eq:searching-cost} S_{\text{MCD}}=S_{\text{MCDP}}= \sum_{i\in \mathcal{D}} \lambda_i c_s\left(\sum_{l=0}^{|p|}(|p|-l+1)h_{il}\right). \end{align} The fetch cost has a similar expression, with $c_f(\cdot)$ replacing $c_s(\cdot).$ \subsubsection{Transfer Cost} Under TTL, upon a cache hit, the content either transfers to a higher index cache or stays in the current one; upon a cache miss, the content either transfers to a lower index cache (MCDP) or is discarded from the network (MCD). We define \emph{transfer cost} as the cost due to cache management upon a cache hit or miss. Consider the cost as a function $c_m(\cdot)$ of the hit probabilities. \noindent{\textit{\textbf{MCD:}}} Under MCD, since the content is discarded from the network once its timer expires, transfer costs are only incurred at cache hits. The requested content either transfers to a higher index cache, if it was in cache $l\in\{1,\cdots, |p|-1\}$, or stays in the same cache, if it was in cache $|p|.$ Then the expected transfer cost across the network for MCD is given as \begin{align}\label{eq:moving-cost-mcd} M_{\text{MCD}} = \sum_{i\in \mathcal{D}} \lambda_ic_m\left(1 - h_{i|p|}\right).
\end{align} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/line-comparison-LCD.pdf} \caption{Optimal aggregated utilities under different caching eviction policies with LCD replications, normalized to the aggregated utilities under MCDP.} \label{fig:line-comp-lcd} \vspace{-0.1in} \end{figure} \noindent{\textit{\textbf{MCDP:}}} {Note that under MCDP, there is a transfer upon each content request or timer expiry, except in two cases: (i) content $i$ is in cache $1$ and a timer expiry occurs, which happens with probability $\pi_{i1}e^{-\lambda_iT_{i1}}$; and (ii) content $i$ is in cache $|p|$ and a cache hit (request) occurs, which happens with probability $\pi_{i|p|}(1-e^{-\lambda_iT_{i|p|}})$. Then the transfer cost for content $i$ in steady state is \begin{align} M_{\text{MCDP}}^i&= \lim_{n\rightarrow \infty}M_{\text{MCDP}}^{i, n}=\lim_{n\rightarrow\infty}\frac{\frac{1}{n}\sum_{j=1}^n M_{\text{MCDP}}^{i, j}}{\frac{1}{n}\sum_{j=1}^n (t_j^i-t_{j-1}^i)}\nonumber\\ &=\frac{1-\pi_{i1}e^{-\lambda_iT_{i1}}-\pi_{i|p|}(1-e^{-\lambda_iT_{i|p|}})}{\sum_{l=0}^{|p|}\pi_{il}\mathbb{E}[t_j^i-t_{j-1}^i|X_j^i=l]}, \end{align} where $M_{\text{MCDP}}^{i, j}$ indicates whether a transfer cost is incurred for content $i$ at the $j$-th request or timer expiry, and $\mathbb{E}[t_j^i-t_{j-1}^i|X_j^i=l]=\frac{1-e^{-\lambda_iT_{il}}}{\lambda_i}$ is the average time content $i$ spends in cache $l.$} {Therefore, the transfer cost for MCDP is \begin{align}\label{eq:moving-cost-mcdp} M_{\text{MCDP}}&=\sum_{i\in\mathcal{D}}M_{\text{MCDP}}^i= \sum_{i\in\mathcal{D}}\frac{1-\pi_{i1}e^{-\lambda_iT_{i1}}-\pi_{i|p|}(1-e^{-\lambda_iT_{i|p|}})}{\sum_{l=0}^{|p|}\pi_{il}\frac{1-e^{-\lambda_iT_{il}}}{\lambda_i}}\nonumber\displaybreak[0]\\ &=\sum_{i\in\mathcal{D}}\Bigg(\frac{1-\pi_{i1}}{\sum_{l=0}^{|p|}\pi_{il}\mathbb{E}[t_j^i-t_{j-1}^i|X_j^i=l]}+\lambda_i (h_{i1}-h_{i|p|})\Bigg), \end{align} where $(\pi_{i0},\cdots, \pi_{i|p|})$ for $i\in\mathcal{D}$ is the stationary distribution of the DTMC $\{X_k^i\}_{k\geq0}$ defined
in Section~\ref{sec:ttl-stationary}. Due to space constraints, we relegate its explicit expression to Appendix~\ref{appendix:mcdp-cost2}.} \begin{remark}\label{rem:line-moving-cost} {The expected transfer cost $M_{\text{MCDP}}$~(\ref{eq:moving-cost-mcdp}) is a function of the timer values. Unlike the problem of maximizing the sum of utilities, it is difficult to express $M_{\text{MCDP}}$ as a function of the hit probabilities. } \end{remark} \begin{figure*}[htbp] \centering \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\linewidth]{figures/7node-cache.pdf} \caption{A seven-node binary tree cache network.} \label{fig:cache-7-nodes} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\textwidth]{figures/opt_prob_path4.pdf} \caption{Hit probability of MCDP under seven-node cache network where each path requests distinct contents.} \label{7nodecache-hit} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\textwidth]{figures/cachesize_pdf_path4.png} \caption{Cache size of MCDP under seven-node cache network where each path requests distinct contents.} \label{7nodecache-size} \end{minipage} \vspace{-0.1in} \end{figure*} \subsubsection{Optimization} Our goal is to determine optimal timer values at each cache in a linear cache network so that the total cost is minimized. To that end, we formulate the following optimization problem for MCDP \begin{subequations}\label{eq:min-mcdp1} \begin{align} \text{\bf{L-C-MCDP:}}\min \quad&{S}_{\text{MCDP}} +{F}_{\text{MCDP}} +{M}_{\text{MCDP}} \displaybreak[0]\\ &\text{Constraints in~(\ref{eq:max-mcdp}).} \end{align} \end{subequations} A similar optimization problem can be formulated for MCD and is omitted here due to space constraints.
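As a side illustration of the cost terms entering this objective, the expected search cost~(\ref{eq:searching-cost}) can be evaluated directly from the hit probabilities; the array layout below, with position $0$ modeling a miss served upstream, is an assumption of the sketch.

```python
def expected_search_cost(lam, h, c_s):
    """Expected search cost (eq. searching-cost) over a path of length L.

    lam[i]: request rate of content i; h[i][l] for l = 0..L: probability
    that content i is found at position l (l = 0 models a miss, served
    beyond the path at cost L+1 hops); c_s: search-cost function applied
    to the expected hop count.
    """
    L = len(h[0]) - 1
    cost = 0.0
    for i, rate in enumerate(lam):
        hops = sum((L - l + 1) * h[i][l] for l in range(L + 1))
        cost += rate * c_s(hops)
    return cost
```

With a linear $c_s$ this reduces to a request-rate-weighted expected hop count; the fetch cost follows by substituting $c_f(\cdot)$.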
\begin{remark} {As discussed in Remark~\ref{rem:line-moving-cost}, we cannot express the transfer cost of MCDP~(\ref{eq:moving-cost-mcdp}) in terms of hit probabilities; hence, we are not able to transform the optimization problem~(\ref{eq:min-mcdp1}) for MCDP into a convex one through a change of variables as we did in Section~\ref{sec:line-utility-max}. Solving the non-convex optimization~(\ref{eq:min-mcdp1}) is a subject of future work. However, we note that the transfer cost of MCD~(\ref{eq:moving-cost-mcd}) is simply a function of hit probabilities, and the corresponding optimization problem is convex as long as the cost functions are convex. } \end{remark} \subsection{Stationary Behavior}\label{sec:ttl-stationary} \input{03A-stationary} \subsection{From Timer to Hit Probability}\label{sec:ttl-hit-prob} \input{03B-ttl-hit-prob} \subsection{Maximizing Aggregate Utility}\label{sec:line-utility-max} \input{03C-max-utility} \subsection{Minimizing Overall Costs}\label{sec:line-cost-min} \input{03D-min-cost} \subsection{Common Requested Contents}\label{sec:general-cache-common} Now consider the case where different users share the same content; e.g., there are two requests $(v_1, i, p_1)$ and $(v_2, i, p_2).$ Suppose that cache $l$ is on both paths $p_1$ and $p_2$, where $v_1$ and $v_2$ request the same content $i$. If we cache separate copies on each path, the results from the previous section apply. However, maintaining redundant copies in the same cache decreases efficiency. A simple way to deal with this is to cache only one copy of content $i$ at $l$ to serve both requests from $v_1$ and $v_2.$ Though this reduces redundancy, it complicates the optimization problem.
In the following, we formulate a utility maximization problem for MCDP with TTL caches, where all users share the same set of requested contents $\mathcal{D}.$ \begin{subequations}\label{eq:max-mcdp-general-common} \begin{align} &\text{\bf{G-U-MCDP:}}\nonumber\displaybreak[0]\\ \max \quad&\sum_{i\in \mathcal{D}} \sum_{p\in \mathcal{P}^i} \sum_{l=1}^{|p|} \psi^{|p|-l} U_{ip}(h_{il}^{(p)}) \displaybreak[1]\\ \text{s.t.} \quad& \sum_{i\in \mathcal{D}} \bigg(1-\prod_{p:j\in\{1,\cdots,|p|\}}(1-h_{ij}^{(p)})\bigg) \leq B_j,\quad\forall j \in V, \displaybreak[2]\label{eq:max-mcdp-genenral-cons1}\\ & \sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}\leq 1,\quad \forall i \in \mathcal{D}, p \in \mathcal{P}^i, \displaybreak[3] \label{eq:max-mcdp-genenral-cons2}\\ &0\leq h_{ij}^{(p)}\leq 1, \quad\forall i\in\mathcal{D}, j\in\{1,\cdots,|p|\}, p\in\mathcal{P}^i,\label{eq:max-mcdp-genenral-cons3} \end{align} \end{subequations} where~(\ref{eq:max-mcdp-genenral-cons1}) is the cache capacity constraint, which also ensures that only one copy of content $i\in\mathcal{D}$ is counted at node $j$ across all paths $p$ that pass through node $j$; this is because the term $1-\prod_{p:j\in\{1,\cdots,|p|\}}(1-h_{ij}^{(p)})$ is the overall hit probability of content $i$ at node $j$ over all paths. ~(\ref{eq:max-mcdp-genenral-cons2}) is the constraint from the MCDP TTL cache policy as discussed in Section~\ref{sec:ttl-hit-prob}, and~(\ref{eq:max-mcdp-genenral-cons3}) requires each $h_{ij}^{(p)}$ to be a valid probability. \begin{example}\label{exm} Consider two requests $(v_1, i, p_1)$ and $(v_2, i, p_2)$ with paths $p_1$ and $p_2$ which intersect at node $j.$ Denote the corresponding path-perspective hit probabilities as $h_{ij}^{(p_1)}$ and $h_{ij}^{(p_2)}$, respectively. Then the term inside the outer summation of~(\ref{eq:max-mcdp-genenral-cons1}) is $1-(1-h_{ij}^{(p_1)})(1-h_{ij}^{(p_2)})$, i.e., the hit probability of content $i$ at node $j$.
\end{example} \begin{remark} Note that we assume independence between different requests $(v, i, p)$ in~(\ref{eq:max-mcdp-general-common}); e.g., in Example~\ref{exm}, if content $i$ was inserted at node $j$ due to request $(v_1, i, p_1),$ then a subsequent request $(v_2, i, p_2)$ is not counted as a cache hit from the perspective of path $p_2$. Our framework still holds if we follow the logical TTL MCDP on linear cache networks; however, in that case, the utilities will be larger than those considered here. \end{remark} Similarly, we can formulate a utility maximization problem for MCD; this can be found in Appendix~\ref{appendix:mcd-common}. \begin{prop} Since the feasible set defined by~(\ref{eq:max-mcdp-genenral-cons1}) is non-convex, the optimization problem defined in~(\ref{eq:max-mcdp-general-common}) under MCDP is a non-convex optimization problem. \end{prop} In the following, we develop an optimization framework that handles the non-convexity of this problem and provides a distributed solution.
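The coupling term in constraint~(\ref{eq:max-mcdp-genenral-cons1}), i.e., the overall hit probability of one content at a node shared by several paths, can be computed as follows; the plain-list interface is an illustrative choice.

```python
def node_occupancy(h_paths):
    """Overall hit probability of one content at a node shared by paths.

    h_paths: per-path hit probabilities h_ij^(p) of the content at this
    node. Under the independence assumption, the content occupies the
    node with probability 1 - prod_p (1 - h_ij^(p)), which is exactly
    the term inside the outer summation of constraint (cons1).
    """
    prob_absent = 1.0
    for h in h_paths:
        prob_absent *= (1.0 - h)
    return 1.0 - prob_absent
```

For the two intersecting paths of Example~\ref{exm} this reduces to $1-(1-h_{ij}^{(p_1)})(1-h_{ij}^{(p_2)})$.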
To this end, we first introduce the Lagrangian function {\footnotesize \begin{align}\label{eq:lagrangian} &L(\boldsymbol{h,\nu,\mu})=\sum_{i \in \mathcal{D}}\sum_{p \in \mathcal{P}^i}\sum_{l=1}^{|p|}\psi^{|p|-l}U_{ip}(h_{il}^{(p)})-\sum_{j\in V}\nu_{j}\Bigg(\sum_{i \in \mathcal{D}}\bigg[1-\nonumber\displaybreak[0]\\ &\prod_{p:j\in\{1,\cdots,|p|\}}(1-h_{ij}^{(p)})\bigg]-B_j\Bigg)-\sum_{i\in\mathcal{D}}\sum_{p\in\mathcal{P}^i}\mu_{ip}\Bigg(\sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}-1\Bigg), \end{align} } where the Lagrangian multipliers (price vector and price matrix) are $\boldsymbol\nu=(\nu_{j})_{j\in V},$ and $\boldsymbol \mu=(\mu_{ip})_{i\in\mathcal{D}, p\in\mathcal{P}}.$ The dual function can be defined as \begin{align}\label{eq:dual} d(\boldsymbol{\nu,\mu})=\sup_{\boldsymbol h} L(\boldsymbol{h,\nu,\mu}), \end{align} and the dual problem is given as \begin{align}\label{eq:dual-opt} \min_{\boldsymbol{ \nu,\mu}} \quad&d(\boldsymbol{\nu,\mu})=L(\boldsymbol h^*(\boldsymbol{\nu,\mu}), \boldsymbol{\nu,\mu}),\quad\text{s.t.}\quad\boldsymbol{\nu,\mu}\geq \boldsymbol{0}, \end{align} where the constraint is defined pointwise for $\boldsymbol{\nu,\mu},$ and $\boldsymbol h^*(\boldsymbol{\nu,\mu})$ is a function that maximizes the Lagrangian function for given $(\boldsymbol{\nu,\mu}),$ i.e., \begin{align}\label{eq:dual-opt-h} \boldsymbol h^*(\boldsymbol{\nu,\mu})=\arg\max_{\boldsymbol h}L(\boldsymbol{h,\nu,\mu}). \end{align} The dual function $d(\boldsymbol{\nu,\mu})$ is always convex in $(\boldsymbol{\nu,\mu})$ regardless of the concavity of the optimization problem~(\ref{eq:max-mcdp-general-common}) \cite{boyd04}. 
Therefore, it is always possible to iteratively solve the dual problem using \begin{align}\label{eq:dual-opt-lambda} &\nu_l[k+1]=\nu_l[k]-\gamma_l\frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \nu_l},\nonumber\displaybreak[0]\\ & \mu_{ip}[k+1]= \mu_{ip}[k]-\eta_{ip}\frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \mu_{ip}}, \end{align} where $\gamma_l$ and $\eta_{ip}$ are step sizes, and $\frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \nu_l}$ and $\frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \mu_{ip}}$ are the partial derivatives of $L(\boldsymbol{\nu,\mu})$ w.r.t. $\nu_l$ and $\mu_{ip},$ respectively, given by \begin{align}\label{eq:gradient-lambda-mu} \frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \nu_l} &=-\bigg(\sum_{i \in \mathcal{D}}\bigg[1-\prod_{p:l\in\{1,\cdots,|p|\}}(1-h_{il}^{(p)})\bigg]-B_l\bigg),\nonumber\displaybreak[0]\\ \frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \mu_{ip}} &= -\Bigg(\sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}-1\Bigg). \end{align} Sufficient and necessary conditions for the uniqueness of $\boldsymbol{\nu,\mu}$ are given in \cite{kyparisis85}. The convergence of the primal-dual algorithm consisting of~(\ref{eq:dual-opt-h}) and~(\ref{eq:dual-opt-lambda}) is guaranteed if the original optimization problem is convex. However, our problem is not convex. Nevertheless, in the following, we show that the duality gap is zero; hence~(\ref{eq:dual-opt-h}) and~(\ref{eq:dual-opt-lambda}) converge to the globally optimal solution. To begin with, we introduce the following results. \begin{theorem}\label{thm:sufficient} \cite{tychogiorgos13} (Sufficient Condition). If the price-based function $\boldsymbol h^*(\boldsymbol{\nu,\mu})$ is continuous at one or more of the optimal Lagrange multiplier vectors $\boldsymbol \nu^*$ and $\boldsymbol \mu^*$, then the iterative algorithm consisting of~(\ref{eq:dual-opt-h}) and~(\ref{eq:dual-opt-lambda}) converges to the globally optimal solution.
\end{theorem} \begin{theorem}\label{thm:necessary} \cite{tychogiorgos13} (Necessary Condition). If at least one constraint of~(\ref{eq:max-mcdp-general-common}) is active at the optimal solution, the condition in Theorem~\ref{thm:sufficient} is also a necessary condition. \end{theorem} Hence, if we can show the continuity of $\boldsymbol h^*(\boldsymbol{\nu,\mu})$ and that at least one constraint of~(\ref{eq:max-mcdp-general-common}) is active at the optimum, then by Theorems~\ref{thm:sufficient} and~\ref{thm:necessary} the duality gap is zero, i.e.,~(\ref{eq:dual-opt-h}) and~(\ref{eq:dual-opt-lambda}) converge to the globally optimal solution. \subsubsection{Model Validations and Insights}\label{sec:general-common-validations} We evaluate the performance of Algorithm~\ref{algo:primal-dual-alg} on the seven-node binary tree cache network shown in Figure~\ref{fig:cache-7-nodes}. We assume that there is a total of $100$ unique contents in the system, requested along four paths. The cache size is $B_v=10$ for $v=1,\cdots,7.$ We consider a log utility function, and the popularity distribution over these contents is Zipf with parameter $1.2.$ W.l.o.g., the aggregate request arrival rate is one, and the discount factor is $\psi=0.6.$ \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/tree-comparison-LCD-new.pdf} \caption{Optimal aggregated utilities under different caching eviction policies with LCD replications, normalized to that under MCDP.} \label{fig:tree-comp-lcd} \vspace{-0.1in} \end{figure} We solve the optimization problem in~(\ref{eq:max-mcdp-general-common}) using the Matlab routine \texttt{fmincon}. Then we implement our primal-dual algorithm given in Algorithm~\ref{algo:primal-dual-alg}. The result for path $p_4$ is presented in Figure~\ref{fig:primal-dual}. We observe that our algorithm yields the exact optimal hit probabilities under MCDP. Similar results hold for the other three paths and are hence omitted here.
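For illustration, one step of the multiplier updates~(\ref{eq:dual-opt-lambda}) with the gradients~(\ref{eq:gradient-lambda-mu}) can be sketched as follows; the scalar step sizes, dictionary bookkeeping, and explicit projection onto the nonnegative orthant are assumptions of this sketch, with the inner maximization $\boldsymbol h^*(\boldsymbol{\nu,\mu})$ treated as given.

```python
def dual_update(nu, mu, h, B, paths, gamma, eta):
    """One projected step of the multiplier updates (dual-opt-lambda).

    nu[j]: price of node j; mu[(i, p)]: price of the MCDP constraint for
    content i on path p; h[(i, j, p)]: current hit probabilities from the
    inner maximization (assumed given); B[j]: capacity of node j;
    paths[p]: list of nodes on path p; gamma, eta: scalar step sizes.
    """
    new_nu = {}
    for j in nu:
        # Occupancy at node j: sum_i (1 - prod_p (1 - h_ij^(p))).
        contents = {i for (i, jj, p) in h if jj == j}
        occ = 0.0
        for i in contents:
            absent = 1.0
            for (ii, jj, p), v in h.items():
                if ii == i and jj == j:
                    absent *= (1.0 - v)
            occ += 1.0 - absent
        # nu[k+1] = nu[k] - gamma * dL/dnu = nu[k] + gamma * (occ - B).
        new_nu[j] = max(0.0, nu[j] + gamma * (occ - B[j]))
    new_mu = {}
    for (i, p) in mu:
        s = sum(h[(i, j, p)] for j in paths[p])
        new_mu[(i, p)] = max(0.0, mu[(i, p)] + eta * (s - 1.0))
    return new_nu, new_mu
```

Each price rises when its constraint is violated and decays toward zero when slack, the usual subgradient behavior.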
{Similar to Section~\ref{sec:validations-line-cache}, we compare Algorithm~\ref{algo:primal-dual-alg} to LRU, LFU, FIFO and RR with LCD replications. Figure~\ref{fig:tree-comp-lcd} compares the performance of different eviction policies with the LCD replication strategy to our primal-dual algorithm under MCDP for the seven-node binary tree cache network shown in Figure~\ref{fig:cache-7-nodes}. We plot the optimal aggregated utilities of all the above policies, normalized to that under MCDP. We again observe a significant gain of MCDP over the other caching eviction policies with LCD.} \subsection{Contents, Servers and Requests} Consider the general cache network described in Section~\ref{sec:prelim}. Denote by $\mathcal{P}$ the set of all request paths, and by $\mathcal{P}^i$ the set of paths carrying requests for content $i.$ Suppose a cache at node $v$ serves two requests $(v_1, i_1, p_1)$ and $(v_2, i_2, p_2)$; then there are two cases: (i) non-common requested content, i.e., $i_1\neq i_2;$ and (ii) common requested content, i.e., $i_1=i_2.$ \subsection{Non-common Requested Content}\label{sec:general-cache-non-common} We first consider the case in which the requests $(v, i, p)$ passing through a node are for distinct contents. Since there is no coupling between different requests $(v, i, p),$ we can directly generalize the results for linear cache networks in Section~\ref{sec:line-cache}.
Hence, given the utility maximization formulation in~(\ref{eq:max-mcdp}), we can directly formulate the optimization problem for MCDP as \begin{subequations}\label{eq:max-mcdp-general} \begin{align} \text{\bf{G-N-U-MCDP:}} \max \quad&\sum_{i\in \mathcal{D}} \sum_{p\in\mathcal{P}^i}\sum_{l=1}^{|p|} \psi^{|p|-l} U_{ip}(h_{il}^{(p)}) \displaybreak[0]\\ \text{s.t.} \quad&\sum_{i\in \mathcal{D}} \sum_{p:l\in\{1,\cdots,|p|\}}h_{il}^{(p)} \leq B_l,\quad\forall l\in V, \displaybreak[1]\label{eq:hpbmcdp1-general}\\ & \sum_{l=1}^{|p|}h_{il}^{(p)}\leq 1,\quad\forall i \in \mathcal{D}, p\in\mathcal{P}^i,\displaybreak[2]\label{eq:hpbmcdp2-general}\\ &0\leq h_{il}^{(p)}\leq1, \quad\forall i \in \mathcal{D}, l\in\{1,\cdots,|p|\}, \nonumber\\ &\qquad\qquad\qquad p\in\mathcal{P}^i, \end{align} \end{subequations} where~(\ref{eq:hpbmcdp1-general}) is the cache capacity constraint and~(\ref{eq:hpbmcdp2-general}) follows the discussion for MCDP in~(\ref{eq:mcdp-constraint}). \begin{prop} Since the feasible set is convex and the objective function is strictly concave and continuous, the optimization problem defined in~(\ref{eq:max-mcdp-general}) under MCDP has a unique global optimum. \end{prop} We can similarly formulate a utility maximization problem for MCD for a general cache network; this can be found in Appendix~\ref{appendix:mcd-non-common}. \subsubsection{Model Validations and Insights} We consider a seven-node binary tree network, shown in Figure~\ref{fig:cache-7-nodes}, with node set $\{1,\cdots, 7\}$. There exist four paths $p_1=\{1, 5, 7\},$ $p_2=\{2, 5, 7\},$ $p_3=\{3, 6, 7\}$ and $p_4=\{4, 6, 7\}.$ Each leaf node serves requests for $100$ distinct contents, and the cache size is $B_v=30$ for $v\in\{1,\cdots, 7\}.$ Assume that content popularity on the four paths follows a Zipf distribution with parameter $\alpha_1=0.2,$ $\alpha_2=0.4,$ $\alpha_3=0.6$ and $\alpha_4=0.8,$ respectively.
We consider the log utility function $U_{ip}(x) = \lambda_{ip} \log x,$ where $\lambda_{ip}$ is the request arrival rate for content $i$ on path $p,$ and requests are described by a Poisson process with $\Lambda_p=1$ for $p=1, 2, 3, 4.$ The discount factor is $\psi=0.6$. Figures~\ref{7nodecache-hit} and~\ref{7nodecache-size} show results for path $p_4=\{4, 6, 7\}.$ From Figure~\ref{7nodecache-hit}, we observe that the hit probabilities produced by our algorithm match both the optimal and the empirical hit probabilities under MCDP. Figure~\ref{7nodecache-size} shows the probability density of the number of contents in the cache network. As expected, the density is concentrated around the corresponding cache sizes. Similar trends exist for paths $p_1$, $p_2$ and $p_3$ and are hence omitted here. \subsection{Content Distribution}\label{app:cdn} {Here we consider a general network topology with overlapping paths and common contents requested along different paths. Similar to~(\ref{eq:max-mcdp-general-common}) and Algorithm~\ref{algo:primal-dual-alg}, a non-convex optimization problem can be formulated and a primal-dual algorithm can be designed, respectively. Due to space constraints, we omit the details; instead, we show the performance on this general network. } {We consider a $2$-dimensional square grid with $16$ nodes, denoted as $G=(V, E)$. We assume a library of $|\mathcal{D}|=30$ unique contents. Each node has access to a subset of contents in the library. We assign a weight to each edge in $E$, selected uniformly from the interval $[1, 20].$ Next, we generate a set of requests in $G$ as follows. To ensure that paths overlap, we randomly select a subset $\tilde{V}\subset V$ with $|\tilde{V}|=12$ nodes to generate requests. Each node in $\tilde{V}$ can generate requests for contents in $\mathcal{D}$ following a Zipf distribution with parameter $\alpha=0.8.$ Requests are then routed over the shortest path between the requesting node in $\tilde{V}$ and the node in $V$ that caches the content.
Again, we assume that the aggregate request rate at each node in $\tilde{V}$ is one. We obtain performance similar to that shown in Figure~\ref{fig:primal-dual} for the different paths, and hence omit the plots due to space constraints. The aggregated optimal utilities obtained by our primal-dual algorithm and through a centralized solver are $-14.9$ and $-14.8,$ respectively.} \subsubsection{Caching and Compression} Again, we represent the network as a directed graph $G=(V, E).$ For simplicity, we consider a tree-structured WSN, as shown in Figure~\ref{fig:wsn-exm}. Each node is associated with a cache that is capable of storing a constant amount of content. Denote by $B_v$ the cache capacity at node $v\in V.$ Let $\mathcal{K}\subset V$ be the set of leaf nodes with $|\mathcal{K}|=K.$ Furthermore, we assume that each node $j$ that receives content from leaf node $k$ can compress it with a reduction ratio\footnote{Defined as the ratio of the volume of the output content to the volume of the input content at any node. We consider compression that only reduces the quality of content (e.g., removes redundant information); the total number of distinct contents in the system remains the same.} $\delta_{kj},$ where $0<\delta_{kj}\leq 1$, $\forall k, j.$ \subsubsection{Content Generation and Requests} We assume that leaf node $k\in\mathcal{K}$ continuously generates content, which is active for a time interval $W$ and requested by users outside the network. If there is no request for this content within that interval, it becomes inactive and is discarded from the system. The generated content is compressed and cached along the path between the leaf node and the sink node when a request is made for the active content. Denote this path by $p^k=(1, \cdots, |p^k|)$ between leaf node $k$ and the sink node.
Since we consider a tree-structured network, the total number of paths is $|\mathcal{K}|=K$; hence, w.l.o.g., $\mathcal{K}$ also denotes the set of all paths. Here, we consider MCDP and MCD with the TTL caching policy. In a WSN, each sensor (leaf node) generates a sequence of contents that users are interested in. Different sensors may generate different types of contents, i.e., no content is shared between different sensors. Hence, the cache performance analysis in a WSN can be mapped to the problem we considered in Section~\ref{sec:general-cache-non-common}. W.l.o.g., we consider a particular leaf node $k$ and the content that is active and requested by the users. For simplicity, we drop the superscript $k$ and denote the path as $p=(1,\cdots,|p|),$ where cache $|p|$ is the sink node that serves the requests and cache $1$ is the leaf node that generates the content. Let the set of contents generated by leaf node $k$ be $\mathcal{D}^{(p)}$, and let the request arrivals for $\mathcal{D}^{(p)}$ follow a Poisson process with rate $\lambda_i$ for $i\in\mathcal{D}^{(p)}.$ Let $h_{ij}^{(p)}$ and $T_{ij}^{(p)}$ be the hit probability and the TTL timer associated with content $i \in \mathcal{D}^{(p)}$ at node $j\in\{1,\cdots,|p|\},$ respectively.
Denote $\boldsymbol h_i^{(p)} =(h_{i1}^{(p)}, \cdots, h_{i|p|}^{(p)})$, $\boldsymbol \delta_i^{(p)} =(\delta_{i1}^{(p)}, \cdots, \delta_{i|p|}^{(p)})$ and $\boldsymbol T_i^{(p)}=(T_{i1}^{(p)},\cdots, T_{i|p|}^{(p)}).$ Let $\boldsymbol h=(\boldsymbol h_i^{(p)})$, $\boldsymbol \delta= (\boldsymbol\delta_i^{(p)})$ and $\boldsymbol T=(\boldsymbol T_i^{(p)})$ for $i\in\mathcal{D}^{(p)}$ and $p\in\mathcal{K}.$ \subsubsection{Utilities} Following an argument similar to that in Section~\ref{sec:general-cache-non-common}, the overall utility for content $i$ along path $p$ is given as \begin{align} \sum_{j=1}^{|p|}\psi^{|p|-j}U_{i}^{(p)}\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg), \end{align} where the utilities not only capture the hit probabilities but also characterize the content quality degradation due to compression along the path. \subsubsection{Costs} We consider the costs, e.g., delay, of routing the content along the path, which include the cost to forward content to the node that caches it, the cost to search for the content along the path, and the cost to fetch the cached content to the users that sent the requests. Again, we assume that the per-hop cost to transfer (search) the content along the path is a function $c_f(\cdot)$ ($c_s(\cdot)$) of the hit probabilities and compression ratios. \noindent{\textit{\textbf{Forwarding Costs:}}} Suppose a cache hit for content $i$ occurs at node $j\in\{1,\cdots, |p|\}$; then the total cost to forward content $i$ along $p$ is given as \begin{align} \sum_{j=1}^{|p|}\lambda_i\cdot j\cdot c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg). \end{align} \noindent{\textit{\textbf{Search Costs:}}} Given a cache hit for content $i$ at node $j\in\{1,\cdots, |p|\}$, the total cost to search for content $i$ along $p$ is given as \begin{align} \sum_{j=1}^{|p|}\lambda_i\cdot(|p|-j+1)\cdot c_s(h_{ij}^{(p)}).
\end{align} \noindent{\textit{\textbf{Fetching Costs:}}} Upon a cache hit for content $i$ on node $j\in\{1,\cdots, |p|\}$, the total cost to fetch content $i$ along $p$ is given as \begin{align} \sum_{j=1}^{|p|}\lambda_i\cdot(|p|-j+1)\cdot c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg). \end{align} \subsection{Optimization Formulation} Our objective is to determine a feasible TTL policy and compression ratio for content management in a tree-structured WSN to maximize the difference between utilities and costs, i.e., \begin{align}\label{eq:wsn-opt-obj} &F(\boldsymbol h, \boldsymbol \delta)=\sum_{p\in\mathcal{K}}\sum_{i\in\mathcal{D}^{(p)}}\Bigg\{\sum_{j=1}^{|p|}\psi^{|p|-j}U_{i}^{(p)}\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)\nonumber\displaybreak[0]\\ &\qquad-\sum_{j=1}^{|p|}\lambda_i\cdot j\cdot c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)- \sum_{j=1}^{|p|}\lambda_i\cdot(|p|-j+1)\cdot c_s(h_{ij}^{(p)})\nonumber\displaybreak[1]\\ &\qquad-\sum_{j=1}^{|p|}\lambda_i\cdot(|p|-j+1)\cdot c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)\Bigg\}\nonumber\displaybreak[2]\\ &=\sum_{p\in\mathcal{K}}\sum_{i\in\mathcal{D}^{(p)}}\Bigg\{\sum_{j=1}^{|p|}\psi^{|p|-j}U_{i}^{(p)}\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)\nonumber\displaybreak[3]\\ &-\Bigg[\sum_{j=1}^{|p|}\lambda_i(|p|+1)c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)+\sum_{j=1}^{|p|}\lambda_i(|p|-j+1) c_s(h_{ij}^{(p)})\Bigg]\Bigg\}. 
\end{align} Hence, the optimal TTL policy and compression ratios for MCDP should solve the following optimization problem: \begin{subequations}\label{eq:wsn-opt-mcdp} \begin{align} &\text{\bf{WSN-MCDP:}}\nonumber\\ \max\quad & F(\boldsymbol h, \boldsymbol \delta)\displaybreak[0]\\ \text{s.t.} \quad &\sum_{p:l\in p}\sum_{i\in \mathcal{D}^{(p)}} h_{il}^{(p)} \prod_{j=1}^{\mathcal{I}(l,p)}\delta_{ij}^{(p)}\leq B_l,\quad \forall l \in V,\displaybreak[1]\label{eq:wsn-opt-mcdp-cons1}\\ &c_c\Bigg(\sum_{i\in \mathcal{D}^{(p)}} \sum_{l=1}^{|p|}\prod_{j=1}^{l}\delta_{ij}^{(p)}\Bigg)\leq O^{(p)},\quad\forall p \in\mathcal{K},\label{eq:wsn-opt-mcdp-cons-qoi}\\ & \sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}\leq 1,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K}, \displaybreak[2]\label{eq:wsn-opt-mcdp-cons2}\\ &0\leq h_{ij}^{(p)}\leq1,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K},\displaybreak[3] \label{eq:wsn-opt-mcdp-cons3}\\ &0<\delta_{ij}^{(p)}\leq1,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K}, \label{eq:wsn-opt-mcdp-cons4} \end{align} \end{subequations} where $\mathcal{I}(l,p)$ is the index of node $l$ on path $p$, and constraint~(\ref{eq:wsn-opt-mcdp-cons-qoi}) ensures that the content will not be over-compressed, since the available energy $O^{(p)}$ for path $p$ is limited; $c_c(\cdot)$ is the per-unit energy consumption function for compression. Similarly, we can formulate an optimization problem for MCD; we relegate this formulation to Appendix~\ref{appendix:mcd-wsn}. It is easy to check that~(\ref{eq:wsn-opt-mcdp}) is a non-convex problem. In the following, we transform it into a convex one through Boyd's method (Section~4.5 of \cite{boyd04}).
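The transformation rests on the substitution $h_{ij}^{(p)}=e^{\sigma_{ij}^{(p)}}$ and $\delta_{ij}^{(p)}=e^{\tau_{ij}^{(p)}}$; a minimal sketch of this change of variables and of the resulting argument of $U_i^{(p)}$ follows (the flat per-path list layout is an assumption of the sketch).

```python
import math

def to_log_vars(h, delta):
    """Change of variables h = exp(sigma), delta = exp(tau).

    h, delta: per-path lists of hit probabilities h_ij and compression
    ratios delta_ij for one content; both must lie in (0, 1], so that
    sigma, tau <= 0 as required by the transformed constraints.
    """
    sigma = [math.log(x) for x in h]
    tau = [math.log(x) for x in delta]
    return sigma, tau

def effective_hit(sigma, tau, j):
    """Argument of U_i in the transformed objective at position j:
    exp(sigma_ij + sum_{l<=j} tau_il) = h_ij * prod_{l<=j} delta_il."""
    return math.exp(sigma[j] + sum(tau[: j + 1]))
```

Since products of the original variables become sums of exponents, each constraint turns into a sum of exponentials of affine functions, which is convex.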
\begin{figure*}[htbp] \centering \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/opt_prob_central_wsn_30docs_LRUm_w40.pdf} \caption{Hit probability of MCDP under a seven-node WSN.} \label{7nodewsn-hit} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/cachesize_pdf_central_wsn_30docs_LRUm_w40.pdf} \caption{Cache size of MCDP under a seven-node WSN.} \label{7nodewsn-size} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\linewidth]{figures/opt_delta_central_wsn_30docs_LRUm_w40.pdf} \caption{Compression ratio of MCDP under a seven-node WSN.} \label{7nodewsn-delta} \end{minipage} \vspace{-0.1in} \end{figure*} \subsection{Convex Transformation} First, we define two new sets of variables for $i\in\mathcal{D}^{(p)},$ $j\in\{1,\cdots, |p|\}$ and $p\in\mathcal{K}$ as follows: \begin{align} &\log h_{ij}^{(p)}\triangleq \sigma_{ij}^{(p)}, \quad i.e.,\quad h_{ij}^{(p)}=e^{\sigma_{ij}^{(p)}},\nonumber\displaybreak[0]\\ &\log \delta_{ij}^{(p)}\triangleq \tau_{ij}^{(p)},\quad i.e.,\quad \delta_{ij}^{(p)}=e^{\tau_{ij}^{(p)}}, \end{align} and denote $\boldsymbol \sigma_i^{(p)} =(\sigma_{i1}^{(p)}, \cdots, \sigma_{i|p|}^{(p)})$, $\boldsymbol \tau_i^{(p)} =(\tau_{i1}^{(p)}, \cdots, \tau_{i|p|}^{(p)})$ and $\boldsymbol \sigma=(\boldsymbol \sigma_i^{(p)})$, $\boldsymbol \tau= (\boldsymbol \tau_i^{(p)})$ for $i\in\mathcal{D}^{(p)}$ and $p\in\mathcal{K}.$ Then the objective function~(\ref{eq:wsn-opt-obj}) can be transformed into \begin{align}\label{eq:wsn-opt-obj-trans} &F(\boldsymbol \sigma, \boldsymbol \tau) =\sum_{p\in\mathcal{K}}\sum_{i\in\mathcal{D}^{(p)}}\Bigg\{\sum_{j=1}^{|p|}\psi^{|p|-j}U_{i}^{(p)}\Bigg(e^{\sigma_{ij}^{(p)}+\sum_{l=1}^j\tau_{il}^{(p)}}\Bigg)-\nonumber\displaybreak[0]\\ &\Bigg[\sum_{j=1}^{|p|}\lambda_i(|p|+1) c_f\Bigg(e^{\sigma_{ij}^{(p)}+\sum_{l=1}^j\tau_{il}^{(p)}}\Bigg)+ \sum_{j=1}^{|p|}\lambda_i(|p|-j+1)
c_s\Bigg(e^{\sigma_{ij}^{(p)}}\Bigg)\Bigg]\Bigg\}. \end{align} Similarly, we can transform the constraints. Then we obtain the following transformed optimization problem \begin{subequations}\label{eq:wsn-opt-mcdp-trans} \begin{align} &\text{\bf{WSN-T-MCDP:}}\nonumber\\ \max\quad & F(\boldsymbol \sigma, \boldsymbol \tau)\displaybreak[0]\\ \text{s.t.} \quad &\sum_{p:l\in p}\sum_{i\in \mathcal{D}^{(p)}} e^{\sigma_{il}^{(p)}+ \sum_{j=1}^{\mathcal{I}(l,p)}\tau_{ij}^{(p)}}\leq B_l,\quad \forall l \in V,\displaybreak[1]\label{eq:wsn-opt-mcdp-cons1-trans}\\ &c_c\Bigg(\sum_{i\in \mathcal{D}^{(p)}} \sum_{l=1}^{|p|}e^{\sum_{j=1}^{l}\tau_{ij}^{(p)}}\Bigg)\leq O^{(p)},\quad\forall p \in\mathcal{K},\\ & \sum_{j\in\{1,\cdots,|p|\}}e^{\sigma_{ij}^{(p)}}\leq 1,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K}, \displaybreak[2]\label{eq:wsn-opt-mcdp-cons2-trans}\\ & \sigma_{ij}^{(p)}\leq0,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K},\displaybreak[3] \label{eq:wsn-opt-mcdp-cons3-trans}\\ &\tau_{ij}^{(p)}\leq0,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K}, \label{eq:wsn-opt-mcdp-cons4-trans} \end{align} \end{subequations} where $\mathcal{I}(l,p)$ is the index of node $l$ on path $p.$ \begin{theorem} The transformed optimization problem in~(\ref{eq:wsn-opt-mcdp-trans}) is convex in $\boldsymbol \sigma$ and $\boldsymbol \tau$ when we consider the $\beta$-fair utility function with $\beta\geq1$ and increasing convex cost functions $c_f(\cdot),$ $c_s(\cdot)$ and $c_c(\cdot).$ \end{theorem} It can be easily checked that the objective function satisfies the second-order condition \cite{boyd04} in $\boldsymbol \sigma$ and $\boldsymbol \tau$ when the utility and cost functions satisfy these conditions. We omit the details due to space constraints. \begin{theorem} The optimization problems in~(\ref{eq:wsn-opt-mcdp-trans}) and~(\ref{eq:wsn-opt-mcdp}) are equivalent. \end{theorem} \begin{proof} This is clear from the way we convexified the problem: the change of variables $h_{ij}^{(p)}=e^{\sigma_{ij}^{(p)}}$, $\delta_{ij}^{(p)}=e^{\tau_{ij}^{(p)}}$ maps the feasible set of~(\ref{eq:wsn-opt-mcdp}) (with $h_{ij}^{(p)}>0$) onto that of~(\ref{eq:wsn-opt-mcdp-trans}) and preserves the objective value.
\end{proof} \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/7node-wsn-new.pdf} \vspace{-0.1in} \caption{A seven-node binary tree WSN. } \label{fig:wsn-7-nodes} \vspace{-0.1in} \end{figure} \subsection{Stationary Behaviors of MCDP and MCD} \subsubsection{MCDP}\label{mcdpmodel-appendix} Under the IRM model, requests for content $i$ arrive according to a Poisson process with rate $\lambda_i.$ As discussed earlier, for TTL caches, content $i$ spends a deterministic time in a cache if it is not requested, independent of all other contents. We denote the timer by $T_{il}$ for content $i$ in cache $l$ on the path $p,$ where $l\in\{1,\cdots, |p|\}.$ Let $t_k^i$ denote the $k$-th time that content $i$ is either requested or moved from one cache to another. For simplicity, we assume that content is in cache $0$ (i.e., the server) if it is not in the cache network. Then we can define a discrete time Markov chain (DTMC) $\{X_k^i\}_{k\geq0}$ with $|p|+1$ states, where $X_k^i$ is the index of the cache that content $i$ is in at time $t_k^i.$ Since the event that the time between two requests for content $i$ exceeds $T_{il}$ occurs with probability $e^{-\lambda_i T_{il}},$ the transition matrix of $\{X_k^i\}_{k\geq0}$ is given as {\footnotesize \begin{align} {\bf P}_i^{\text{MCDP}}= \begin{bmatrix} 0 & 1 \\ e^{-\lambda_iT_{i1}} & 0 & 1-e^{-\lambda_iT_{i1}} \\ &\ddots&\ddots&\ddots\\ &&e^{-\lambda_iT_{i(|p|-1)}} & 0 & 1-e^{-\lambda_iT_{i(|p|-1)}} \\ &&&e^{-\lambda_iT_{i|p|}} & 1-e^{-\lambda_iT_{i|p|}} \end{bmatrix}.
\end{align} } Let $(\pi_{i0},\cdots,\pi_{i|p|})$ denote the stationary distribution of ${\bf P}_i^{\text{MCDP}}$; then we have \begin{subequations}\label{eq:stationary-mcdp} \begin{align} & \pi_{i0} = \frac{1}{1+\sum_{j=1}^{|p|}e^{\lambda_iT_{ij}}\prod_{s=1}^{j-1}(e^{\lambda_iT_{is}}-1)},\displaybreak[0] \label{eq:stationary-mcdp0}\\ &\pi_{i1} = \pi_{i0}e^{\lambda_iT_{i1}},\displaybreak[1]\label{eq:stationary-mcdp1}\\ &\pi_{il} = \pi_{i0}e^{\lambda_iT_{il}}\prod_{s=1}^{l-1}(e^{\lambda_iT_{is}}-1),\; l = 2,\cdots, |p|.\label{eq:stationary-mcdp2} \end{align} \end{subequations} Then the average time that content $i$ spends in cache $l\in\{1,\cdots, |p|\}$ can be computed as \begin{align}\label{eq:averagetime-mcdp} \mathbb{E}[t_{k+1}^i-t_k^i|X_k^i = l]= \int_{0}^{T_{il}}\left(1-\left[1-e^{-\lambda_it}\right]\right)dt = \frac{1-e^{-\lambda_iT_{il}}}{\lambda_i}, \end{align} and $\mathbb{E}[t_{k+1}^i-t_k^i|X_k^i = 0] = \frac{1}{\lambda_i}.$ Given~(\ref{eq:stationary-mcdp}) and~(\ref{eq:averagetime-mcdp}), the time-average probability that content $i$ is in cache $l\in\{1,\cdots, |p|\}$ is \begin{align*} & h_{i1} = \frac{e^{\lambda_iT_{i1}}-1}{1+\sum_{j=1}^{|p|}\prod_{s=1}^{j}(e^{\lambda_iT_{is}}-1)},\\ & h_{il} = h_{i(l-1)}(e^{\lambda_iT_{il}}-1),\; l = 2,\cdots,|p|, \end{align*} where $h_{il}$ is also the hit probability for content $i$ at cache $l.$ \subsubsection{MCD}\label{mcdmodel-appendix} Again, for TTL caches, content $i$ spends a deterministic time $T_{il}$ in cache $l$ if it is not requested, independent of all other contents. We define a DTMC $\{Y_k^i\}_{k\geq0}$ by observing the system at the times that content $i$ is requested. Similarly, if content $i$ is not in the cache network, then it is in cache $0$; thus we still have $|p|+1$ states. If $Y_k^i=l$, then the next request for content $i$ comes within time $T_{il}$ with probability $1-e^{-\lambda_iT_{il}}$, and thus $Y_{k+1}^i=l+1;$ otherwise $Y_{k+1}^i=0$ due to the MCD policy.
Therefore, the transition matrix of $\{Y_k^i\}_{k\geq0}$ is given as {\footnotesize \begin{align} {\bf P}_i^{\text{MCD}}= \begin{bmatrix} e^{-\lambda_iT_{i1}} & 1-e^{-\lambda_iT_{i1}} & &\\ e^{-\lambda_iT_{i2}} & &1-e^{-\lambda_iT_{i2}} & &\\ \vdots&&\ddots&\\ e^{-\lambda_iT_{i|p|}} & && 1-e^{-\lambda_iT_{i|p|}} \\ e^{-\lambda_iT_{i|p|}} && &1-e^{-\lambda_iT_{i|p|}} \end{bmatrix}. \end{align} } Let $(\tilde{\pi}_{i0},\cdots,\tilde{\pi}_{i|p|})$ denote the stationary distribution of ${\bf P}_i^{\text{MCD}}$; then we have \begin{subequations}\label{eq:stationary-mcd-app} \begin{align} &\tilde{\pi}_{i0}=\frac{1}{1+\sum_{l=1}^{|p|-1}\prod_{j=1}^l(1-e^{-\lambda_i T_{ij}})+e^{\lambda_i T_{i|p|}}\prod_{j=1}^{|p|}(1-e^{-\lambda_i T_{ij}})},\displaybreak[0]\label{eq:mcd1-app}\\ &\tilde{\pi}_{il}=\tilde{\pi}_{i0}\prod_{j=1}^{l}(1-e^{-\lambda_iT_{ij}}),\quad l=1,\cdots, |p|-1,\displaybreak[1]\label{eq:mcd2-app}\\ &\tilde{\pi}_{i|p|}=e^{\lambda_i T_{i|p|}}\tilde{\pi}_{i0}\prod_{j=1}^{|p|}(1-e^{-\lambda_iT_{ij}}).\label{eq:mcd3-app} \end{align} \end{subequations} By the PASTA property \cite{MeyTwe09}, we immediately have that the stationary probability that content $i$ is in cache $l$ is given as \begin{align*} h_{il}=\tilde{\pi}_{il}, \quad l=0, 1, \cdots, |p|, \end{align*} where the $\tilde{\pi}_{il}$ are given in~(\ref{eq:stationary-mcd-app}). \subsection{The impact of the discount factor on performance in a linear cache network}\label{appendix:line-validation-discount} The results for $\psi=0.4, 0.6, 1$ are shown in Figures~\ref{fig:line-beta04},~\ref{fig:line-beta06} and~\ref{fig:line-beta1}. \subsection{Minimizing Overall Costs} \label{appendix:mcdp-cost2} In Section~\ref{sec:line-utility-max}, we aim to maximize the overall utility across all contents in the cache network, which captures user satisfaction. However, the communication costs for content transfers across the network are also critical in many network applications.
This cost includes (i) the search cost for finding the requested content in the network; (ii) the fetch cost to serve the content to the user; and (iii) the transfer cost for cache-internal management due to a cache hit or miss. In the following, we first characterize these costs for MCD. Then we formulate a cost-minimization problem to characterize the optimal TTL policy for content placement in a linear cache network. \subsubsection{Search Cost} Requests from users are sent along a path until they hit a cache that stores the requested content. We define the \emph{search cost} as the cost for finding the requested content in the cache network. Consider the cost as a function $c_s(\cdot)$ of the hit probabilities. Then the expected search cost across the network is given as \begin{align*} S_{\text{MCD}}=S_{\text{MCDP}}= \sum_{i\in \mathcal{D}} \lambda_ic_s\left(\sum_{l=0}^{|p|}(|p|-l+1)h_{il}\right). \end{align*} \subsubsection{Fetch Cost} Upon a cache hit, the requested content is sent to the user along the reverse direction of the path. We define the \emph{fetch cost} as the cost of fetching the content to serve the user who sent the request. Consider the cost as a function $c_f(\cdot)$ of the hit probabilities. Then the expected fetch cost across the network is given as \begin{align}\label{eq:fetching-cost} F_{\text{MCD}}=F_{\text{MCDP}}= \sum_{i\in \mathcal{D}} \lambda_ic_f\left(\sum_{l=0}^{|p|}(|p|-l+1)h_{il}\right). \end{align} \subsubsection{Transfer Cost} Under a TTL cache, upon a cache hit, the content either moves to a higher-index cache or stays in the current one, and upon a cache miss, the content either transfers to a lower-index cache (MCDP) or is discarded from the network (MCD). We define the \emph{transfer cost} as the cost due to cache management upon a cache hit or miss.
Consider the cost as a function $c_m(\cdot)$ of the hit probabilities. \noindent{\textit{\textbf{MCD:}}} Under MCD, since the content is discarded from the network once its timer expires, the transfer cost is incurred only upon a cache hit. To that end, the requested content either moves to a higher-index cache if it was in cache $l\in\{1,\cdots, |p|-1\}$ or stays in the same cache if it was in cache $|p|.$ Then the expected transfer cost across the network for MCD is given as \begin{align*} M_{\text{MCD}} = \sum_{i\in \mathcal{D}} \lambda_ic_m\left(1 - h_{i|p|}\right). \end{align*} \subsubsection{Total Costs} Given the search cost, fetch cost and transfer cost, the total costs for MCD and MCDP are defined as \begin{subequations}\label{eq:total-cost} \begin{align} &{SFM}_{\text{MCD}} =S_{\text{MCD}}+F_{\text{MCD}}+M_{\text{MCD}}, \displaybreak[0]\label{eq:total-cost-mcd}\\ &{SFM}_{\text{MCDP}} = S_{\text{MCDP}}+F_{\text{MCDP}}+M_{\text{MCDP}},\label{eq:total-cost-mcdp} \end{align} \end{subequations} where the corresponding costs are given in~(\ref{eq:searching-cost}),~(\ref{eq:fetching-cost}),~(\ref{eq:moving-cost-mcd}) and~(\ref{eq:moving-cost-mcdp}), respectively. \subsection{Proof of Theorem~\ref{thm:mcdp-primal-conv}: Convergence of Primal Algorithm}\label{appendix:primal-conv} \begin{proof} Since $U_i(\cdot)$ is strictly concave and $C_l(\cdot)$ and $\tilde{C}_i(\cdot)$ are convex, (\ref{eq:primal}) is strictly concave; hence there exists a unique maximizer, which we denote by $\boldsymbol h^*.$ Define the following function \begin{align} Y(\boldsymbol h)=Z(\boldsymbol h^*)-Z(\boldsymbol h), \end{align} then it is clear that $Y(\boldsymbol h)\geq 0$ for any feasible $\boldsymbol h$ that satisfies the constraints of the original optimization problem, and $Y(\boldsymbol h)= 0$ if and only if $\boldsymbol h=\boldsymbol h^*.$ Now, taking the derivative of $Y(\boldsymbol h)$ w.r.t.
time $t$, we have \begin{align}\label{eq:derivative} \frac{dY(\boldsymbol h)}{dt}&=-\frac{dZ(\boldsymbol h)}{dt}=-\sum_{i\in\mathcal{D}}\sum_{l\in\{1,\cdots, |p|\}}\frac{\partial Z(\boldsymbol h)}{\partial h_{il}}\cdot\frac{\partial h_{il}}{\partial t}\nonumber\displaybreak[0]\\ &=-\sum_{i\in\mathcal{D}}\sum_{l\in\{1,\cdots, |p|\}}\Bigg[\psi^{|p|-l} U^\prime_i(h_{il})-C^\prime_l\left(\sum\limits_{i\in \mathcal{D}} h_{il} - B_l\right)\nonumber\displaybreak[1]\\ &\qquad\qquad\qquad-\tilde{C}^\prime_i\left(\sum\limits_{l=1}^{|p|}h_{il}-1\right)\Bigg]\cdot\frac{\partial h_{il}}{\partial t}. \end{align} Next, we consider the term $\frac{\partial h_{il}}{\partial t}$; we have \begin{align}\label{eq:partial-ht} \frac{\partial h_{il}}{\partial t}=\frac{\partial h_{il}}{\partial T_{il}}\cdot\frac{\partial T_{il}}{\partial t}. \end{align} From the relation between $h_{il}$ and $(T_{i1},\cdots, T_{i|p|})$, we have {\footnotesize \begin{align}\label{eq:partial-htil} &\frac{\partial h_{il}}{\partial T_{il}}=\frac{\partial}{\partial T_{il}} \Bigg(\frac{\prod_{j=1}^l (e^{\lambda_iT_{ij}}-1)}{1+\sum_{k=1}^{|p|}\prod_{j=1}^k(e^{\lambda_iT_{ij}}-1)}\Bigg)\nonumber\displaybreak[0]\\ &=\frac{1}{\Bigg[1+\sum_{k=1}^{|p|}\prod_{j=1}^k(e^{\lambda_iT_{ij}}-1)\Bigg]^2}\cdot\Bigg( \lambda_i \prod_{j=1}^{l-1} (e^{\lambda_iT_{ij}}-1)\Bigg[1+\nonumber\displaybreak[1]\\ &\sum_{k=1}^{|p|}\prod_{j=1}^k(e^{\lambda_iT_{ij}}-1)\Bigg]- \lambda_i \prod_{j=1}^l (e^{\lambda_iT_{ij}}-1)\Bigg[\sum_{k=l}^{|p|}\prod_{j=1, j\neq l}^k(e^{\lambda_iT_{ij}}-1)\Bigg]\Bigg)\nonumber\displaybreak[2]\\ &=\frac{\lambda_i \prod_{j=1}^{l-1} (e^{\lambda_iT_{ij}}-1) \Bigg[1+\sum_{k=1}^{l-1}\prod_{j=1}^k(e^{\lambda_iT_{ij}}-1)\Bigg]}{\Bigg[1+\sum_{k=1}^{|p|}\prod_{j=1}^k(e^{\lambda_iT_{ij}}-1)\Bigg]^2}.
\end{align} } Note that in our primal algorithm, we update the hit probability upon each request in~(\ref{eq:primal-mcdp-h}), which further updates the timer in~(\ref{eq:primal-mcdp-t}) and in turn computes the new $B_{\text{curr}, l}$ used in the next update of~(\ref{eq:primal-mcdp-h}); hence, to be more precise, (\ref{eq:partial-ht}) should read \begin{align}\label{eq:partial-ht-new} \frac{\partial h_{il}}{\partial t}=\frac{\partial h_{il}}{\partial T_{il}}\cdot \Delta h_{il}, \end{align} where {\footnotesize \begin{align}\label{eq:delta-hil} \Delta h_{il}=\delta_{il}\Bigg[\psi^{|p|-l}U_i^\prime(h_{il})-C_l^\prime\left(B_{\text{curr},l} - B_l\right)-\tilde{C}_i^\prime\left(\sum\limits_{l=1}^{|p|}h_{il}-1\right)\Bigg]. \end{align} } Thus, from~(\ref{eq:derivative}),~(\ref{eq:partial-htil}),~(\ref{eq:partial-ht-new}) and~(\ref{eq:delta-hil}), we have {\footnotesize \begin{align}\label{eq:derivative-new} &\frac{dY(\boldsymbol h)}{dt}=-\sum_{i\in\mathcal{D}}\sum_{l\in\{1,\cdots, |p|\}}\delta_{il}\Bigg[\psi^{|p|-l} U^\prime_i(h_{il})-C^\prime_l\left(\sum\limits_{i\in \mathcal{D}} h_{il} - B_l\right)-\nonumber\displaybreak[0]\\ &\tilde{C}^\prime_i\left(\sum\limits_{l=1}^{|p|}h_{il}-1\right)\Bigg]^2\frac{\lambda_i \prod_{j=1}^{l-1} (e^{\lambda_iT_{ij}}-1) \Bigg[1+\sum_{k=1}^{l-1}\prod_{j=1}^k(e^{\lambda_iT_{ij}}-1)\Bigg]}{\Bigg[1+\sum_{k=1}^{|p|}\prod_{j=1}^k(e^{\lambda_iT_{ij}}-1)\Bigg]^2}\nonumber\displaybreak[1]\\ &\leq0, \end{align} } with equality if and only if $\boldsymbol h=\boldsymbol h^*.$ Therefore, $Y(\cdot)$ is a Lyapunov function and our primal algorithm converges to the unique maximizer $\boldsymbol h^*$ from any feasible initial point $\boldsymbol h.$ \end{proof} \subsection{Optimization Problem for MCD} \subsubsection{Non-common Content Requests under General Cache Networks}\label{appendix:mcd-non-common} Similarly, we can formulate a utility maximization optimization problem for MCD under a general cache network.
\begin{subequations}\label{eq:max-mcd-general} \begin{align} \text{\bf{G-N-U-MCD:}} \max \quad&\sum_{i\in \mathcal{D}} \sum_{p\in\mathcal{P}^i}\sum_{l=1}^{|p|} \psi^{|p|-l} U_{ip}(h_{il}^{(p)}) \displaybreak[0]\\ \text{s.t.} \quad&\sum_{i\in \mathcal{D}} \sum_{p: l\in p}h_{il}^{(p)} \leq B_l,\quad\forall l \in V, \displaybreak[1]\label{eq:hpbmcd1-general}\\ & \sum_{l=1}^{|p|}h_{il}^{(p)}\leq 1,\quad\forall i \in \mathcal{D}, p\in\mathcal{P}^i,\displaybreak[2]\label{eq:hpbmcd2-general}\\ & h_{i(|p|-1)}^{(p)} \leq \cdots \leq h_{i1}^{(p)} \leq h_{i0}^{(p)}, \quad\forall p\in\mathcal{P}^i, \label{eq:hpbmcd3-general}\\ &0\leq h_{il}^{(p)}\leq1, \quad\forall i \in \mathcal{D}, p\in\mathcal{P}^i. \end{align} \end{subequations} \begin{prop} Since the feasible set is convex and the objective function is strictly concave and continuous, the optimization problem defined in~(\ref{eq:max-mcd-general}) under MCD has a unique global optimum. \end{prop} \subsubsection{Common Content Requests under General Cache Networks}\label{appendix:mcd-common} Similarly, we can formulate the following optimization problem for MCD with TTL caches, \begin{subequations}\label{eq:max-mcd-general-common} \begin{align} &\text{\bf{G-U-MCD:}}\nonumber\\ \max \quad&\sum_{i\in \mathcal{D}} \sum_{p\in \mathcal{P}^i} \sum_{l=1}^{|p|} \psi^{|p|-l} U_{ip}(h_{il}^{(p)}) \displaybreak[0]\\ \text{s.t.} \quad& \sum_{i\in \mathcal{D}} \bigg(1-\prod_{p:j\in\{1,\cdots,|p|\}}(1-h_{ij}^{(p)})\bigg) \leq C_j,\quad\forall j \in V, \displaybreak[1] \label{eq:max-mcd-genenral-cons1}\\ & \sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}\leq 1,\quad \forall i \in \mathcal{D}, \forall p \in \mathcal{P}^i, \displaybreak[2]\label{eq:max-mcd-genenral-cons2}\\ & h_{i(|p|-1)}^{(p)} \leq \cdots \leq h_{i1}^{(p)} \leq h_{i0}^{(p)}, \quad\forall p\in\mathcal{P}^i, \displaybreak[3] \label{eq:max-mcd-genenral-cons3}\\ &0\leq h_{il}^{(p)}\leq 1, \quad\forall i\in\mathcal{D}, \forall p\in\mathcal{P}^i. \label{eq:max-mcd-genenral-cons4} \end{align} \end{subequations} \begin{prop} Since the feasible set is non-convex, the optimization problem defined in~(\ref{eq:max-mcd-general-common}) under MCD is a non-convex optimization problem. \end{prop} \subsubsection{MCDP} Since $0\leq T_{il}\leq \infty$, it is easy to check from~(\ref{eq:mcdp1}) and~(\ref{eq:mcdp2}) that $0\leq h_{il}\leq 1$ for $l\in\{1,\cdots,|p|\}$. Furthermore, it is clear that there exists a mapping between $(h_{i1},\cdots, h_{i|p|})$ and $(T_{i1},\cdots, T_{i|p|}).$ By simple algebra, we obtain \begin{subequations}\label{eq:stationary-mcdp-timers} \begin{align} & T_{i1} = \frac{1}{\lambda_i}\log \bigg(1 + \frac{h_{i1}}{1-\big(h_{i1} + h_{i2} + \cdots + h_{i|p|}\big)}\bigg), \label{eq:mcdpttl1}\\ & T_{il} = \frac{1}{\lambda_i}\log \bigg(1 + \frac{h_{il}}{h_{i(l-1)}}\bigg),\quad l= 2,\cdots, |p|.\label{eq:mcdpttl2} \end{align} \end{subequations} Note that \begin{align}\label{eq:mcdp-constraint} h_{i1} + h_{i2} + \ldots + h_{i|p|} \leq 1, \end{align} must hold during the operation, which is always true for our caching policies. \subsubsection{MCD} Similarly, from~(\ref{eq:mcd2}) and~(\ref{eq:mcd3}), it is easy to check that there exists a mapping between $(h_{i1},\cdots, h_{i|p|})$ and $(T_{i1},\cdots, T_{i|p|}).$ Since $T_{il}\geq0,$ by~(\ref{eq:mcd2}), we have \begin{align} \label{eq:mcd-constraint1} h_{i(|p|-1)} \leq h_{i(|p|-2)} \leq \cdots \leq h_{i1} \leq h_{i0}. \end{align} By simple algebra, we can obtain \begin{subequations}\label{eq:stationary-mcd-timers} \begin{align} & T_{i1} = -\frac{1}{\lambda_i}\log \bigg(1 - \frac{h_{i1}}{1-\big(h_{i1} + h_{i2} + \cdots + h_{i|p|}\big)}\bigg), \label{eq:mcdttl1}\\ & T_{il} = -\frac{1}{\lambda_i}\log\bigg(1-\frac{h_{il}}{h_{i(l-1)}}\bigg),\quad l= 2, \cdots , |p|-1, \label{eq:mcdttl2}\\ & T_{i|p|} = \frac{1}{\lambda_i}\log\bigg(1+\frac{h_{i|p|}}{h_{i(|p|-1)}}\bigg).
\label{eq:mcdttl3} \end{align} \end{subequations} Again, \begin{align} \label{eq:mcd-constraint2} h_{i1} + h_{i2} + \cdots + h_{i|p|} \leq 1, \end{align} must hold during the operation to obtain~(\ref{eq:stationary-mcd-timers}), which is always true for MCD. \subsection{Stationary Behavior}\label{sec:ttl-stationary} \input{03A-stationary} \subsection{From Timer to Hit Probability}\label{sec:ttl-hit-prob} \input{03B-ttl-hit-prob} \subsection{Maximizing Aggregate Utility}\label{sec:line-utility-max} \input{03C-max-utility} \subsection{Minimizing Overall Costs}\label{sec:line-cost-min} \input{03D-min-cost} \subsubsection{Model Validations and Insights}\label{sec:general-common-validations} We evaluate the performance of Algorithm~\ref{algo:primal-dual-alg} on a seven-node binary tree cache network, shown in Figure~\ref{fig:cache-7-nodes}. We assume that there are a total of $100$ unique contents in the system, requested along four paths. The cache size is given as $B_v=10$ for $v=1,\cdots,7.$ We consider a log utility function and the popularity distribution over these contents is Zipf with parameter $1.2.$ W.l.o.g., the aggregate request arrival rate is one.
The discount factor $\psi=0.6.$ \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/tree-comparison-LCD-new.pdf} \caption{Optimal aggregate utility under different caching eviction policies with LCD replication, normalized to that under MCDP.} \label{fig:tree-comp-lcd} \vspace{-0.1in} \end{figure} We solve the optimization problem in~(\ref{eq:max-mcdp-general-common}) using the Matlab routine \texttt{fmincon}. Then we implement our primal-dual algorithm given in Algorithm~\ref{algo:primal-dual-alg}. The result for path $p_4$ is presented in Figure~\ref{fig:primal-dual}. We observe that our algorithm yields the exact optimal hit probabilities under MCDP. Similar results hold for the other three paths and hence are omitted here. {Similar to Section~\ref{sec:validations-line-cache}, we compare Algorithm~\ref{algo:primal-dual-alg} to LRU, LFU, FIFO and RR with LCD replication. Figure~\ref{fig:tree-comp-lcd} compares the performance of different eviction policies with LCD replication strategies to our primal-dual algorithm under MCDP, in the case of the seven-node binary tree cache network shown in Figure~\ref{fig:cache-7-nodes}. We plot the optimal aggregate utility of all the above policies, normalized to that under MCDP. We again observe the significant gain of MCDP over other caching eviction policies with LCD.} \subsection{TTL Policy for Individual Caches} Consider the cache at node $j$. Each content $i$ is associated with a timer $T_{ij}$ under the TTL cache policy. Since we focus on node $j,$ we omit the subscript $j$ in the following. Consider the event when content $i$ is requested. There are two cases: (i) if content $i$ is not in the cache, content $i$ is inserted into the cache and its timer is set to $T_i;$ (ii) if content $i$ is in the cache, its timer is reset to $T_i$. The timer decreases at a constant rate and the content is evicted once its timer expires.
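As a quick sanity check on this single-cache policy (an illustrative sketch of ours, not part of the paper's evaluation), note that under Poisson requests with rate $\lambda_i$, a request is a hit exactly when the previous request for the same content arrived less than $T_i$ ago, so a simulation should recover the classical hit probability $h_i = 1-e^{-\lambda_i T_i}$:

```python
import math
import random

def ttl_hit_probability(lam, T, n_requests=200_000, seed=1):
    """Simulate one content in a single TTL cache with reset-on-request.

    Requests arrive as a Poisson process with rate `lam`; the timer is set
    (or reset) to T on every request, so a request is a hit iff the
    exponential inter-request gap is below T.
    """
    rng = random.Random(seed)
    hits = sum(rng.expovariate(lam) < T for _ in range(n_requests))
    return hits / n_requests

# The estimate should be close to the closed form 1 - exp(-lam * T).
```

With $\lambda T = 1$, for instance, the simulated hit probability concentrates around $1-e^{-1}\approx 0.632$.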
\subsection{Replication Strategy for Cache Networks}\label{sub:cachenetwork} In a cache network, upon a cache hit, we need to specify how content is replicated along the reverse path towards the user that sent the request. \subsubsection{Content Request}\label{sec:content-request} The network serves requests for contents in $\mathcal{D}$ routed over the graph $G$. Any node in the network can generate a request for a content, which is forwarded along a fixed and unique path from the user towards a terminal node that is connected to a server that always contains the content. Note that the request need not reach the end of the path; it stops upon hitting a cache that stores the content. At that point, the requested content is propagated over the path in the reverse direction to the node that requested it. To be more specific, a request $(v, i, p)$ is determined by the node, $v$, that generated the request, the requested content, $i$, and the path, $p$, over which the request is routed. We denote a path $p$ of length $|p|=L$ as a sequence $\{v_{1p}, v_{2p}, \cdots, v_{Lp}\}$ of nodes $v_{lp}\in V$ such that $(v_{lp}, v_{(l+1)p})\in E$ for $l\in\{1, \cdots, L-1\},$ where $v_{Lp}=v.$ We assume that path $p$ is loop-free and that the terminal node $v_{1p}$ is the only node on path $p$ that accesses the server for content $i.$ \subsubsection{Replication Strategy} We consider TTL cache policies at every node in the cache network $G$ where each content has its own timer. Suppose content $i$ is requested and routed along path $p.$ There are two cases: (i) content $i$ is not in any cache along path $p,$ in which case content $i$ is fetched from the server and inserted into the first cache (denoted by cache $1$)\footnote{Since we consider a fixed path $p$, for simplicity we drop the dependency on $p$ and $v$ and refer to the caches on the path as nodes $1,\cdots, L$ directly.} on the path.
Its timer is set to $T_{i1}$; (ii) if content $i$ is in cache $l$ along path $p,$ we consider the following strategies \cite{rodriguez16} \begin{itemize} \item {\textbf{Move Copy Down (MCD)}:} content $i$ is moved from cache $l$, in which it is found, to the next cache $l+1$, and the timer at cache $l+1$ is set to $T_{i{(l+1)}}$. Content $i$ is discarded once the timer expires; \item {\textbf{Move Copy Down with Push (MCDP)}:} MCDP behaves the same as MCD upon a cache hit. However, if timer $T_{il}$ expires, content $i$ is pushed one cache back to cache $l-1$ and the timer is set to $T_{i(l-1)}.$ \end{itemize} \subsection{Utility Function} Utility functions capture the satisfaction perceived by a user after being served a content. We associate each content $i\in\mathcal{D}$ with a utility function $U_i: [0,1]\rightarrow\mathbb{R}$ that is a function of the hit probability $h_i$. $U_i(\cdot)$ is assumed to be increasing, continuously differentiable, and strictly concave. In particular, for our numerical studies, we focus on the widely used $\beta$-fair utility functions \cite{srikant13} given by \begin{equation}\label{eq:utility} U_i(h)= \begin{cases} w_i\frac{h^{1-\beta}}{1-\beta},& \beta\geq0, \beta\neq 1;\\ w_i\log h,& \beta=1, \end{cases} \end{equation} where $w_i>0$ denotes a weight associated with content $i$.
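For reference, the $\beta$-fair family in~(\ref{eq:utility}) is straightforward to implement; the sketch below is our own illustration (the function name is not from the paper):

```python
import math

def beta_fair_utility(h, beta, w=1.0):
    """beta-fair utility U(h) = w * h^(1-beta)/(1-beta) for beta >= 0,
    beta != 1, and the logarithmic form w*log(h) at beta = 1."""
    if h <= 0 or h > 1:
        raise ValueError("hit probability must lie in (0, 1]")
    if beta == 1:
        return w * math.log(h)
    return w * h ** (1 - beta) / (1 - beta)
```

Here $\beta=1$ corresponds to proportional fairness, while larger $\beta$ penalizes low hit probabilities more heavily, approaching max-min fairness as $\beta$ grows.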
\subsection{Convex Transformation} First, we define two new sets of variables for $i\in\mathcal{D}^{(p)},$ $l\in\{1,\cdots, |p|\}$ and $p\in\mathcal{K}$ as follows: \begin{align} &\log h_{ij}^{(p)}\triangleq \sigma_{ij}^{(p)}, \quad i.e.,\quad h_{ij}^{(p)}=e^{\sigma_{ij}^{(p)}},\nonumber\displaybreak[0]\\ &\log \delta_{ij}^{(p)}\triangleq \tau_{ij}^{(p)},\quad i.e.,\quad \delta_{ij}^{(p)}=e^{\tau_{ij}^{(p)}}, \end{align} and denote $\boldsymbol \sigma_i^{(p)} =(\sigma_{i1}^{(p)}, \cdots, \sigma_{ip}^{(p)})$, $\boldsymbol \tau_i^{(p)} =(\tau_{i1}^{(p)}, \cdots, \tau_{ip}^{(p)})$ and $\boldsymbol \sigma=(\boldsymbol \sigma_i^{(p)})$, $\boldsymbol \tau= (\boldsymbol \tau_i^{(p)})$ for $i\in\mathcal{D}^{(p)}$ and $p\in\mathcal{K}.$ Then the objective function~(\ref{eq:wsn-opt-obj}) can be transformed into \begin{align}\label{eq:wsn-opt-obj-trans} &F(\boldsymbol \sigma, \boldsymbol \tau) =\sum_{p\in\mathcal{K}}\sum_{i\in\mathcal{D}^{(p)}}\Bigg\{\sum_{j=1}^{|p|}\psi^{|p|-j}U_{i}^{(p)}\Bigg(e^{\sigma_{ij}^{(p)}+\sum_{l=1}^j\tau_{il}^{(p)}}\Bigg)-\nonumber\displaybreak[0]\\ &\Bigg[\sum_{j=1}^{|p|}\lambda_i(|p|+1) c_f\Bigg(e^{\sigma_{ij}^{(p)}+\sum_{l=1}^j\tau_{il}^{(p)}}\Bigg)+ \sum_{j=1}^{|p|}\lambda_i(|p|-j+1) c_s\Bigg(e^{\sigma_{ij}^{(p)}}\Bigg)\Bigg]\Bigg\}. \end{align} Similarly, we can transform the constraints. 
Then we obtain the following transformed optimization problem \begin{subequations}\label{eq:wsn-opt-mcdp-trans} \begin{align} &\text{\bf{WSN-T-MCDP:}}\nonumber\\ \max\quad & F(\boldsymbol \sigma, \boldsymbol \tau)\displaybreak[0]\\ \text{s.t.} \quad &\sum_{p:l\in p}\sum_{i\in \mathcal{D}^{(p)}} e^{\sigma_{il}^{(p)}+ \sum_{j=1}^{\mathcal{I}(l,p)}\tau_{ij}^{(p)}}\leq B_l,\quad \forall l \in V,\displaybreak[1]\label{eq:wsn-opt-mcdp-cons1-trans}\\ &c_c\Bigg(\sum_{i\in \mathcal{D}^{(p)}} \sum_{l=1}^{|p|}e^{\sum_{j=1}^{l}\tau_{ij}^{(p)}}\Bigg)\leq O^{(p)},\quad\forall p \in\mathcal{K},\\ & \sum_{j\in\{1,\cdots,|p|\}}e^{\sigma_{ij}^{(p)}}\leq 1,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K}, \displaybreak[2]\label{eq:wsn-opt-mcdp-cons2-trans}\\ & \sigma_{ij}^{(p)}\leq0,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K},\displaybreak[3] \label{eq:wsn-opt-mcdp-cons3-trans}\\ &\tau_{ij}^{(p)}\leq0,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K}, \label{eq:wsn-opt-mcdp-cons4-trans} \end{align} \end{subequations} where $\mathcal{I}(l,p)$ is the index of node $l$ on path $p.$ \begin{theorem} The transformed optimization problem in~(\ref{eq:wsn-opt-mcdp-trans}) is convex in $\boldsymbol \sigma$ and $\boldsymbol \tau$ when we consider the $\beta$-fair utility function with $\beta\geq1$ and increasing convex cost functions $c_f(\cdot),$ $c_s(\cdot)$ and $c_c(\cdot).$ \end{theorem} It can be easily checked that the objective function satisfies the second-order condition \cite{boyd04} in $\boldsymbol \sigma$ and $\boldsymbol \tau$ when the utility and cost functions satisfy these conditions. We omit the details due to space constraints. \begin{theorem} The optimization problems in~(\ref{eq:wsn-opt-mcdp-trans}) and~(\ref{eq:wsn-opt-mcdp}) are equivalent. \end{theorem} \begin{proof} This is clear from the change of variables used to convexify the problem.
\end{proof} \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/7node-wsn-new.pdf} \vspace{-0.1in} \caption{A seven-node binary tree WSN.} \label{fig:wsn-7-nodes} \vspace{-0.1in} \end{figure} \subsubsection{Search and Fetch Cost} A request is sent along a path until it hits a cache that stores the requested content. We define \emph{search cost} (\emph{fetch cost}) as the cost of finding (serving) the requested content in the cache network (to the user). Consider the cost as a function $c_s(\cdot)$ ($c_f(\cdot)$) of the hit probabilities. Then the expected search cost across the network is given as \begin{align}\label{eq:searching-cost} S_{\text{MCD}}=S_{\text{MCDP}}= \sum_{i\in \mathcal{D}} \lambda_i c_s\left(\sum_{l=0}^{|p|}(|p|-l+1)h_{il}\right). \end{align} Fetch cost has a similar expression with $c_f(\cdot)$ replacing $c_s(\cdot).$ \subsubsection{Transfer Cost} Under TTL, upon a cache hit, the content either transfers to a higher index cache or stays in the current one, and upon a cache miss, the content either transfers to a lower index cache (MCDP) or is discarded from the network (MCD). We define \emph{transfer cost} as the cost due to cache management upon a cache hit or miss. Consider the cost as a function $c_m(\cdot)$ of the hit probabilities. \noindent{\textit{\textbf{MCD:}}} Under MCD, since the content is discarded from the network once its timer expires, transfer costs are incurred only upon cache hits. Upon a hit, the requested content either transfers to a higher index cache if it was in cache $l\in\{1,\cdots, |p|-1\}$ or stays in the same cache if it was in cache $|p|.$ Then the expected transfer cost across the network for MCD is given as \begin{align}\label{eq:moving-cost-mcd} M_{\text{MCD}} = \sum_{i\in \mathcal{D}} \lambda_ic_m\left(1 - h_{i|p|}\right).
\end{align} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/line-comparison-LCD.pdf} \caption{Optimal aggregated utilities under different caching eviction policies with LCD replications, normalized to the aggregated utilities under MCDP.} \label{fig:line-comp-lcd} \vspace{-0.1in} \end{figure} \noindent{\textit{\textbf{MCDP:}}} {Note that under MCDP, there is a transfer upon a content request or a timer expiry except in two cases: (i) content $i$ is in cache $1$ and a timer expiry occurs, which occurs with probability $\pi_{i1}e^{-\lambda_iT_{i1}}$; and (ii) content $i$ is in cache $|p|$ and a cache hit (request) occurs, which occurs with probability $\pi_{i|p|}(1-e^{-\lambda_iT_{i|p|}})$. Then the transfer cost for content $i$ at steady state is \begin{align} M_{\text{MCDP}}^i&= \lim_{n\rightarrow \infty}M_{\text{MCDP}}^{i, n}=\lim_{n\rightarrow\infty}\frac{\frac{1}{n}\sum_{j=1}^n M_{\text{MCDP}}^{i, j}}{\frac{1}{n}\sum_{j=1}^n (t_j^i-t_{j-1}^i)}\nonumber\\ &=\frac{1-\pi_{i1}e^{-\lambda_iT_{i1}}-\pi_{i|p|}(1-e^{-\lambda_iT_{i|p|}})}{\sum_{l=0}^{|p|}\pi_{il}\mathbb{E}[t_j^i-t_{j-1}^i|X_j^i=l]}, \end{align} where $M_{\text{MCDP}}^{i, j}$ indicates whether a transfer cost is incurred for content $i$ at the $j$-th request or timer expiry, and $\mathbb{E}[t_j^i-t_{j-1}^i|X_j^i=l]=\frac{1-e^{-\lambda_iT_{il}}}{\lambda_i}$ is the average time content $i$ spends in cache $l.$} {Therefore, the transfer cost for MCDP is \begin{align}\label{eq:moving-cost-mcdp} M_{\text{MCDP}}&=\sum_{i\in\mathcal{D}}M_{\text{MCDP}}^i= \sum_{i\in\mathcal{D}}\frac{1-\pi_{i1}e^{-\lambda_iT_{i1}}-\pi_{i|p|}(1-e^{-\lambda_iT_{i|p|}})}{\sum_{l=0}^{|p|}\pi_{il}\frac{1-e^{-\lambda_iT_{il}}}{\lambda_i}}\nonumber\displaybreak[0]\\ &=\sum_{i\in\mathcal{D}}\Bigg(\frac{1-\pi_{i1}}{\sum_{l=0}^{|p|}\pi_{il}\mathbb{E}[t_j^i-t_{j-1}^i|X_j^i=l]}+\lambda_i (h_{i1}-h_{i|p|})\Bigg), \end{align} where $(\pi_{i0},\cdots, \pi_{i|p|})$ for $i\in\mathcal{D}$ is the stationary distribution for the DTMC $\{X_k^i\}_{k\geq0}$ defined
in Section~\ref{sec:ttl-stationary}. Due to space constraints, we relegate its explicit expression to Appendix~\ref{appendix:mcdp-cost2}.} \begin{remark}\label{rem:line-moving-cost} {The expected transfer cost $M_{\text{MCDP}}$~(\ref{eq:moving-cost-mcdp}) is a function of the timer values. Unlike the problem of maximizing the sum of utilities, it is difficult to express $M_{\text{MCDP}}$ as a function of the hit probabilities.} \end{remark} \begin{figure*}[htbp] \centering \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\linewidth]{figures/7node-cache.pdf} \caption{A seven-node binary tree cache network.} \label{fig:cache-7-nodes} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\textwidth]{figures/opt_prob_path4.pdf} \caption{Hit probability of MCDP under seven-node cache network where each path requests distinct contents.} \label{7nodecache-hit} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\textwidth]{figures/cachesize_pdf_path4.png} \caption{Cache size of MCDP under seven-node cache network where each path requests distinct contents.} \label{7nodecache-size} \end{minipage} \vspace{-0.1in} \end{figure*} \subsubsection{Optimization} Our goal is to determine optimal timer values at each cache in a linear cache network so that the total cost is minimized. To that end, we formulate the following optimization problem for MCDP \begin{subequations}\label{eq:min-mcdp1} \begin{align} \text{\bf{L-C-MCDP:}}\min \quad&{S}_{\text{MCDP}} +{F}_{\text{MCDP}} +{M}_{\text{MCDP}} \displaybreak[0]\\ &\text{Constraints in~(\ref{eq:max-mcdp})}. \end{align} \end{subequations} A similar optimization problem can be formulated for MCD and is omitted here due to space constraints.
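Because the MCD costs~(\ref{eq:searching-cost}) and~(\ref{eq:moving-cost-mcd}) depend only on the hit probabilities, the objective of the MCD counterpart of this problem can be evaluated directly. A minimal sketch (names are ours; `hits[i]` holds $(h_{i0},\cdots,h_{i|p|})$ and the cost functions are supplied by the caller):

```python
def mcd_total_cost(lmbda, hits, c_s, c_f, c_m):
    """Evaluate S_MCD + F_MCD + M_MCD for a single path.

    S_MCD = sum_i lambda_i * c_s(sum_l (|p| - l + 1) * h_il); the fetch
    cost F_MCD uses c_f in place of c_s; and the transfer cost is
    M_MCD = sum_i lambda_i * c_m(1 - h_i|p|).
    """
    total = 0.0
    for i, hs in hits.items():
        P = len(hs) - 1  # path length |p|; hs = (h_i0, ..., h_i|p|)
        weighted = sum((P - l + 1) * h for l, h in enumerate(hs))
        total += lmbda[i] * (c_s(weighted) + c_f(weighted) + c_m(1 - hs[P]))
    return total
```

With identity cost functions and a single content with $(h_{i0}, h_{i1}) = (0.2, 0.8)$, the weighted term is $2\cdot0.2 + 1\cdot0.8 = 1.2$, giving a total cost of $1.2 + 1.2 + 0.2 = 2.6$.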
\begin{remark} {As discussed in Remark~\ref{rem:line-moving-cost}, we cannot express the transfer cost of MCDP~(\ref{eq:moving-cost-mcdp}) in terms of the hit probabilities, and hence we are unable to transform the optimization problem~(\ref{eq:min-mcdp1}) for MCDP into a convex one through a change of variables as we did in Section~\ref{sec:line-utility-max}. Solving the non-convex optimization~(\ref{eq:min-mcdp1}) is a subject of future work. However, we note that the transfer cost of MCD~(\ref{eq:moving-cost-mcd}) is simply a function of the hit probabilities, and the corresponding optimization problem is convex so long as the cost functions are convex. } \end{remark} \section{Introduction}\label{sec:intro} \input{01-introduction} \section{Related Work}\label{sec:related} \input{01A-related} \section{Preliminaries}\label{sec:prelim} \input{02-prelim} \section{Linear Cache Network}\label{sec:line-cache} \input{03-line-cache} \section{General Cache Networks}\label{sec:general-cache} \input{04-general-cache-network} \input{04A-common} \input{04B-duality}\label{sec:general-cache-duality} \section{Application}\label{sec:app} \input{05-app-cdn} \section{Conclusion}\label{sec:conclusion} \input{06-conclusion} \section{Acknowledgments}\label{sec:ack} \input{06a-ack} \bibliographystyle{ACM-Reference-Format} \subsection{Optimization Formulation} Our objective is to determine a feasible TTL policy and compression ratio for content management in a tree-structured WSN to maximize the difference between utilities and costs, i.e., \begin{align}\label{eq:wsn-opt-obj} &F(\boldsymbol h, \boldsymbol \delta)=\sum_{p\in\mathcal{K}}\sum_{i\in\mathcal{D}^{(p)}}\Bigg\{\sum_{j=1}^{|p|}\psi^{|p|-j}U_{i}^{(p)}\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)\nonumber\displaybreak[0]\\ &\qquad-\sum_{j=1}^{|p|}\lambda_i\cdot j\cdot c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)- \sum_{j=1}^{|p|}\lambda_i\cdot(|p|-j+1)\cdot c_s(h_{ij}^{(p)})\nonumber\displaybreak[1]\\
&\qquad-\sum_{j=1}^{|p|}\lambda_i\cdot(|p|-j+1)\cdot c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)\Bigg\}\nonumber\displaybreak[2]\\ &=\sum_{p\in\mathcal{K}}\sum_{i\in\mathcal{D}^{(p)}}\Bigg\{\sum_{j=1}^{|p|}\psi^{|p|-j}U_{i}^{(p)}\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)\nonumber\displaybreak[3]\\ &-\Bigg[\sum_{j=1}^{|p|}\lambda_i(|p|+1)c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg)+\sum_{j=1}^{|p|}\lambda_i(|p|-j+1) c_s(h_{ij}^{(p)})\Bigg]\Bigg\}. \end{align} Hence, the optimal TTL policy and compression ratio for MCDP should solve the following optimization problem: \begin{subequations}\label{eq:wsn-opt-mcdp} \begin{align} &\text{\bf{WSN-MCDP:}}\nonumber\\ \max\quad & F(\boldsymbol h, \boldsymbol \delta)\displaybreak[0]\\ \text{s.t.} \quad &\sum_{p:l\in p}\sum_{i\in \mathcal{D}^{(p)}} h_{il}^{(p)} \prod_{j=1}^{\mathcal{I}(l,p)}\delta_{ij}^{(p)}\leq B_l,\quad \forall l \in V,\displaybreak[1]\label{eq:wsn-opt-mcdp-cons1}\\ &c_c\Bigg(\sum_{i\in \mathcal{D}^{(p)}} \sum_{l=1}^{|p|}\prod_{j=1}^{l}\delta_{ij}^{(p)}\Bigg)\leq O^{(p)},\quad\forall p \in\mathcal{K},\label{eq:wsn-opt-mcdp-cons-qoi}\\ & \sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}\leq 1,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K}, \displaybreak[2]\label{eq:wsn-opt-mcdp-cons2}\\ &0\leq h_{ij}^{(p)}\leq1,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K},\displaybreak[3] \label{eq:wsn-opt-mcdp-cons3}\\ &0<\delta_{ij}^{(p)}\leq1,\quad \forall i \in \mathcal{D}^{(p)}, \forall p \in \mathcal{K}, \label{eq:wsn-opt-mcdp-cons4} \end{align} \end{subequations} where $\mathcal{I}(l,p)$ is the index of node $l$ on path $p$ and constraint~(\ref{eq:wsn-opt-mcdp-cons-qoi}) ensures that the content will not be over-compressed since the available energy $O^{(p)}$ is limited for path $p$, and $c_c(\cdot)$ is the per-unit energy consumption function for compression. Similarly, we can formulate an optimization problem for MCD.
We relegate this formulation to Appendix~\ref{appendix:mcd-wsn}. It is easy to check that~(\ref{eq:wsn-opt-mcdp}) is a non-convex problem. In the following, we transform it into a convex one using the method in Section~$4.5$ of \cite{boyd04}. \begin{figure*}[htbp] \centering \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/opt_prob_central_wsn_30docs_LRUm_w40.pdf} \caption{Hit probability of MCDP under a seven-node WSN.} \label{7nodewsn-hit} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/cachesize_pdf_central_wsn_30docs_LRUm_w40.pdf} \caption{Cache size of MCDP under a seven-node WSN.} \label{7nodewsn-size} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\linewidth]{figures/opt_delta_central_wsn_30docs_LRUm_w40.pdf} \caption{Compression ratio of MCDP under a seven-node WSN.} \label{7nodewsn-delta} \end{minipage} \vspace{-0.1in} \end{figure*} \subsection{Content Distribution}\label{app:cdn} {Here we consider a general network topology with overlapping paths and common contents requested along different paths. Similar to~(\ref{eq:max-mcdp-general-common}) and Algorithm~\ref{algo:primal-dual-alg}, a non-convex optimization problem can be formulated and a primal-dual algorithm can be designed, respectively. Due to space constraints, we omit the details. Instead, we show the performance of our approach on this general network. } {We consider a $2$-dimensional square grid with $16$ nodes, denoted as $G=(V, E)$. We assume a library of $|\mathcal{D}|=30$ unique contents. Each node has access to a subset of contents in the library. We assign a weight to each edge in $E$ selected uniformly from the interval $[1, 20].$ Next, we generate a set of requests in $G$ as follows. To ensure that paths overlap, we randomly select a subset $\tilde{V}\subset V$ with $|\tilde{V}|=12$ nodes to generate requests.
Each node in $\tilde{V}$ can generate requests for contents in $\mathcal{D}$ following a Zipf distribution with parameter $\alpha=0.8.$ Requests are then routed over the shortest path between the requesting node in $\tilde{V}$ and the node in $V$ that caches the content. Again, we assume that the aggregate request rate at each node in $\tilde{V}$ is one. We achieve performance similar to that shown in Figure~\ref{fig:primal-dual} for different paths, and hence omit the details due to space constraints. The aggregated optimal utilities obtained by our primal-dual algorithm and through a centralized solver are $-14.9$ and $-14.8,$ respectively.} \subsection{Common Requested Contents}\label{sec:general-cache-common} Now consider the case where different users share the same content, e.g., there are two requests $(v_1, i, p_1)$ and $(v_2, i, p_2).$ Suppose that cache $l$ is on both paths $p_1$ and $p_2$, where $v_1$ and $v_2$ request the same content $i$. If we cache separate copies on each path, results from the previous section apply. However, maintaining redundant copies in the same cache decreases efficiency. A simple way to deal with this is to cache only one copy of content $i$ at $l$ to serve both requests from $v_1$ and $v_2.$ Though this reduces redundancy, it complicates the optimization problem.
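Under the single-copy rule, a node stores content $i$ whenever at least one of the paths traversing it holds the content; assuming independence across paths, the expected occupancy is $1-\prod_p(1-h_{ij}^{(p)})$. A minimal sketch (the function name is ours):

```python
import math

def shared_occupancy(path_hit_probs):
    """Expected occupancy of a single shared copy of a content at a node:
    the node stores the content iff at least one traversing path has it
    cached, i.e. 1 - prod_p (1 - h^(p)), assuming independence across
    paths."""
    return 1 - math.prod(1 - h for h in path_hit_probs)
```

For example, two paths with per-path hit probability $0.5$ give an occupancy of $1-0.25=0.75$ rather than the $1.0$ that two separate copies would consume.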
In the following, we formulate a utility maximization problem for MCDP with TTL caches, where all users share the same requested contents $\mathcal{D}.$ \begin{subequations}\label{eq:max-mcdp-general-common} \begin{align} &\text{\bf{G-U-MCDP:}}\nonumber\displaybreak[0]\\ \max \quad&\sum_{i\in \mathcal{D}} \sum_{p\in \mathcal{P}^i} \sum_{l=1}^{|p|} \psi^{|p|-l} U_{ip}(h_{il}^{(p)}) \displaybreak[1]\\ \text{s.t.} \quad& \sum_{i\in \mathcal{D}} \bigg(1-\prod_{p:j\in\{1,\cdots,|p|\}}(1-h_{ij}^{(p)})\bigg) \leq B_j,\quad\forall j \in V, \displaybreak[2]\label{eq:max-mcdp-genenral-cons1}\\ & \sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}\leq 1,\quad \forall i \in \mathcal{D}, p \in \mathcal{P}^i, \displaybreak[3] \label{eq:max-mcdp-genenral-cons2}\\ &0\leq h_{il}^{(p)}\leq 1, \quad\forall i\in\mathcal{D}, l\in\{1,\cdots,|p|\}, p\in\mathcal{P}^i,\label{eq:max-mcdp-genenral-cons3} \end{align} \end{subequations} where~(\ref{eq:max-mcdp-genenral-cons1}) is the cache capacity constraint and ensures that only one copy of content $i\in\mathcal{D}$ is cached at node $j$ for all paths $p$ that pass through node $j$; this is because the term $1-\prod_{p:j\in\{1,\cdots,|p|\}}(1-h_{ij}^{(p)})$ is the overall hit probability of content $i$ at node $j$ over all paths. ~(\ref{eq:max-mcdp-genenral-cons2}) is the constraint from the MCDP TTL cache policy as discussed in Section~\ref{sec:ttl-hit-prob}, and~(\ref{eq:max-mcdp-genenral-cons3}) bounds the hit probabilities. \begin{example}\label{exm} Consider two requests $(v_1, i, p_1)$ and $(v_2, i, p_2)$ with paths $p_1$ and $p_2$ which intersect at node $j.$ Denote the corresponding path perspective hit probability as $h_{ij}^{(p_1)}$ and $h_{ij}^{(p_2)}$, respectively. Then the term inside the outer summation of~(\ref{eq:max-mcdp-genenral-cons1}) is $1-(1-h_{ij}^{(p_1)})(1-h_{ij}^{(p_2)})$, i.e., the hit probability of content $i$ in node $j$.
\end{example} \begin{remark} Note that we assume independence between different requests $(v, i, p)$ in~(\ref{eq:max-mcdp-general-common}); e.g., in Example~\ref{exm}, if content $i$ was inserted at node $j$ by request $(v_1, i, p_1),$ then a subsequent request $(v_2, i, p_2)$ is not counted as a cache hit from its own perspective. Our framework still holds if we follow the logical TTL MCDP on linear cache networks. However, in that case, the utilities will be larger than the ones we consider here. \end{remark} Similarly, we can formulate a utility maximization optimization problem for MCD. This can be found in Appendix~\ref{appendix:mcd-common}. \begin{prop} Since the feasible sets are non-convex, the optimization problem defined in~(\ref{eq:max-mcdp-general-common}) under MCDP is a non-convex optimization problem. \end{prop} In the following, we develop an optimization framework that handles the non-convexity issue in this optimization problem and provides a distributed solution.
To this end, we first introduce the Lagrangian function {\footnotesize \begin{align}\label{eq:lagrangian} &L(\boldsymbol{h,\nu,\mu})=\sum_{i \in \mathcal{D}}\sum_{p \in \mathcal{P}^i}\sum_{l=1}^{|p|}\psi^{|p|-l}U_{ip}(h_{il}^{(p)})-\sum_{j\in V}\nu_{j}\Bigg(\sum_{i \in \mathcal{D}}\bigg[1-\nonumber\displaybreak[0]\\ &\prod_{p:j\in\{1,\cdots,|p|\}}(1-h_{ij}^{(p)})\bigg]-B_j\Bigg)-\sum_{i\in\mathcal{D}}\sum_{p\in\mathcal{P}^i}\mu_{ip}\Bigg(\sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}-1\Bigg), \end{align} } where the Lagrange multipliers (price vector and price matrix) are $\boldsymbol\nu=(\nu_{j})_{j\in V},$ and $\boldsymbol \mu=(\mu_{ip})_{i\in\mathcal{D}, p\in\mathcal{P}}.$ The dual function can be defined as \begin{align}\label{eq:dual} d(\boldsymbol{\nu,\mu})=\sup_{\boldsymbol h} L(\boldsymbol{h,\nu,\mu}), \end{align} and the dual problem is given as \begin{align}\label{eq:dual-opt} \min_{\boldsymbol{ \nu,\mu}} \quad&d(\boldsymbol{\nu,\mu})=L(\boldsymbol h^*(\boldsymbol{\nu,\mu}), \boldsymbol{\nu,\mu}),\quad\text{s.t.}\quad\boldsymbol{\nu,\mu}\geq \boldsymbol{0}, \end{align} where the constraint is defined pointwise for $\boldsymbol{\nu,\mu},$ and $\boldsymbol h^*(\boldsymbol{\nu,\mu})$ is a function that maximizes the Lagrangian function for given $(\boldsymbol{\nu,\mu}),$ i.e., \begin{align}\label{eq:dual-opt-h} \boldsymbol h^*(\boldsymbol{\nu,\mu})=\arg\max_{\boldsymbol h}L(\boldsymbol{h,\nu,\mu}). \end{align} The dual function $d(\boldsymbol{\nu,\mu})$ is always convex in $(\boldsymbol{\nu,\mu})$ regardless of the concavity of the optimization problem~(\ref{eq:max-mcdp-general-common}) \cite{boyd04}.
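Concretely, each price can be moved opposite the partial derivative of the Lagrangian, i.e., along the violation of its constraint. A minimal sketch of one such price iteration (our data structures: `h[(i, p)][l]` is the hit probability of content $i$ at the $l$-th cache of path $p$, with $\boldsymbol h^*(\boldsymbol{\nu,\mu})$ computed separately; `paths[i]` lists the paths serving $i$ as node tuples; we add an explicit projection onto $\boldsymbol{\nu,\mu}\geq\boldsymbol 0$):

```python
import math

def dual_step(nu, mu, h, B, paths, gamma, eta):
    """One iteration of the price updates for the shared-content problem.

    nu[j]: price of node j's capacity; mu[(i, p)]: price of the per-path
    occupancy constraint of content i on path p. Each price moves
    opposite the Lagrangian's partial derivative (the constraint slack)
    and is projected to stay non-negative.
    """
    new_nu = {}
    for j in nu:
        # dL/dnu_j = -(sum_i [1 - prod_p (1 - h_ij^(p))] - B_j)
        occ = sum(
            1 - math.prod(
                1 - h[(i, p)][p.index(j)] for p in paths[i] if j in p
            )
            for i in paths
        )
        new_nu[j] = max(0.0, nu[j] - gamma * (B[j] - occ))
    new_mu = {}
    for (i, p) in mu:
        # dL/dmu_ip = -(sum_l h_il^(p) - 1)
        new_mu[(i, p)] = max(0.0, mu[(i, p)] - eta * (1 - sum(h[(i, p)])))
    return new_nu, new_mu
```

A price thus rises while its constraint is violated and decays toward zero while the constraint is slack, which is the usual price interpretation of dual iterations.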
Therefore, it is always possible to iteratively solve the dual problem using \begin{align}\label{eq:dual-opt-lambda} &\nu_l[k+1]=\nu_l[k]-\gamma_l\frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \nu_l},\nonumber\displaybreak[0]\\ & \mu_{ip}[k+1]= \mu_{ip}[k]-\eta_{ip}\frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \mu_{ip}}, \end{align} where $\gamma_l$ and $\eta_{ip}$ are the step sizes, and $\frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \nu_l}$ and $\frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \mu_{ip}}$ are the partial derivatives of $L(\boldsymbol{\nu,\mu})$ w.r.t. $\nu_l$ and $\mu_{ip},$ respectively, satisfying \begin{align}\label{eq:gradient-lambda-mu} \frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \nu_l} &=-\bigg(\sum_{i \in \mathcal{D}}\bigg[1-\prod_{p:l\in\{1,\cdots,|p|\}}(1-h_{il}^{(p)})\bigg]-B_l\bigg),\nonumber\displaybreak[0]\\ \frac{\partial L(\boldsymbol{\nu,\mu})}{\partial \mu_{ip}} &= -\Bigg(\sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}-1\Bigg). \end{align} Sufficient and necessary conditions for the uniqueness of $\boldsymbol{\nu,\mu}$ are given in \cite{kyparisis85}. The convergence of the primal-dual algorithm consisting of~(\ref{eq:dual-opt-h}) and~(\ref{eq:dual-opt-lambda}) is guaranteed if the original optimization problem is convex. However, our problem is not convex. Nevertheless, in the following, we show that the duality gap is zero, hence~(\ref{eq:dual-opt-h}) and~(\ref{eq:dual-opt-lambda}) converge to the globally optimal solution. To begin with, we introduce the following results. \begin{theorem}\label{thm:sufficient} \cite{tychogiorgos13} (Sufficient Condition). If the price-based function $\boldsymbol h^*(\boldsymbol{\nu,\mu})$ is continuous at one or more of the optimal Lagrange multiplier vectors $\boldsymbol \nu^*$ and $\boldsymbol \mu^*$, then the iterative algorithm consisting of~(\ref{eq:dual-opt-h}) and~(\ref{eq:dual-opt-lambda}) converges to the globally optimal solution.
\end{theorem} \begin{theorem}\label{thm:necessary} \cite{tychogiorgos13} If at least one constraint of~(\ref{eq:max-mcdp-general-common}) is active at the optimal solution, the condition in Theorem~\ref{thm:sufficient} is also a necessary condition. \end{theorem} Hence, if we can show the continuity of $\boldsymbol h^*(\boldsymbol{\nu,\mu})$ and that at least one constraint of~(\ref{eq:max-mcdp-general-common}) is active at the optimum, then given Theorems~\ref{thm:sufficient} and~\ref{thm:necessary}, the duality gap is zero, i.e., ~(\ref{eq:dual-opt-h}) and~(\ref{eq:dual-opt-lambda}) converge to the globally optimal solution. \subsubsection{MCDP}\label{sec:line-utility-max-mcdp} Given~(\ref{eq:stationary-mcdp-timers}) and~(\ref{eq:mcdp-constraint}), optimization problem~(\ref{eq:max-ttl}) under MCDP becomes \begin{subequations}\label{eq:max-mcdp} \begin{align} \text{\bf{L-U-MCDP:}} \max \quad&\sum_{i\in \mathcal{D}} \sum_{l=1}^{|p|} \psi^{|p|-l} U_i(h_{il}) \displaybreak[0]\\ \text{s.t.} \quad&\sum_{i\in \mathcal{D}} h_{il} \leq B_l,\quad l=1, \cdots, |p|, \displaybreak[1]\label{eq:hpbmcdp1}\\ & \sum_{l=1}^{|p|}h_{il}\leq 1,\quad\forall i \in \mathcal{D}, \displaybreak[2]\label{eq:hpbmcdp2}\\ &0\leq h_{il}\leq1, \quad\forall i \in \mathcal{D}, \end{align} \end{subequations} where~(\ref{eq:hpbmcdp1}) is the cache capacity constraint and~(\ref{eq:hpbmcdp2}) is due to the change of variables under MCDP as discussed in~(\ref{eq:mcdp-constraint}). \begin{prop} The optimization problem defined in~(\ref{eq:max-mcdp}) under MCDP has a unique global optimum.
\end{prop} \subsubsection{MCD}\label{sec:line-utility-max-mcd} Given~(\ref{eq:mcd-constraint1}),~(\ref{eq:stationary-mcd-timers}) and~(\ref{eq:mcd-constraint2}), optimization problem~(\ref{eq:max-ttl}) under MCD becomes \begin{subequations}\label{eq:max-mcd} \begin{align} \text{\bf{L-U-MCD:}} \max \quad&\sum_{i\in \mathcal{D}} \sum_{l=1}^{|p|} \psi^{|p|-l} U_i(h_{il}) \displaybreak[0] \\ \text{s.t.} \quad&\sum_{i\in \mathcal{D}} h_{il} \leq B_l,\quad l=1, \cdots, |p|, \displaybreak[1]\label{eq:hpbmcd1}\\ &h_{i(|p|-1)} \leq \cdots \leq h_{i1} \leq h_{i0},\quad\forall i \in \mathcal{D}, \displaybreak[2]\label{eq:hpbmcd2}\\ & \sum_{l=1}^{|p|}h_{il}\leq 1,\quad\forall i \in \mathcal{D}, \displaybreak[3] \label{eq:hpbmcd3}\\ &0\leq h_{il}\leq1, \quad\forall i \in \mathcal{D}, \label{eq:hpbmcdp4} \end{align} \end{subequations} where~(\ref{eq:hpbmcd1}) is the cache capacity constraint, and~(\ref{eq:hpbmcd2}) and~(\ref{eq:hpbmcd3}) are due to the change of variables under MCD as discussed in~(\ref{eq:mcd-constraint1}) and~(\ref{eq:mcd-constraint2}). \begin{prop} The optimization problem defined in~(\ref{eq:max-mcd}) under MCD has a unique global optimum.
\end{prop} \subsubsection{Online Algorithm}\label{sec:line-online-primal} \begin{figure*}[htbp] \centering \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/opt_prob_primal_single_path_LRUm.pdf} \caption{Hit probability for MCDP under primal solution in a three-node linear cache network.} \label{mcdphp-line} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/cachesize_pdf_primal_single_path_LRUm.pdf} \caption{Cache size distribution for MCDP under primal solution in a three-node linear cache network.} \label{mcdpcs-line} \end{minipage}\hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=1\columnwidth]{figures/opt_prob_psipoint1_LRUm.pdf} \caption{The impact of discount factor on the performance in a three-node linear cache network: $\psi=0.1.$} \label{fig:line-beta01} \end{minipage} \vspace{-0.1in} \end{figure*} In Sections~\ref{sec:line-utility-max-mcdp} and~\ref{sec:line-utility-max-mcd}, we formulated convex utility maximization problems with a fixed cache size. However, system parameters (e.g. cache size and request processes) can change over time, so it is not feasible to solve the optimization offline and implement the optimal strategy. Thus, we need to design online algorithms to implement the optimal strategy and adapt to the changes in the presence of limited information. In the following, we develop such an algorithm for MCDP. A similar algorithm exists for MCD and is omitted due to space constraints. \noindent{\textit{\textbf{Primal Algorithm:}}} We aim to design an algorithm based on the optimization problem in~(\ref{eq:max-mcdp}), which is the primal formulation. 
Denote $\boldsymbol h_i=(h_{i1},\cdots, h_{i|p|})$ and $\boldsymbol h=(\boldsymbol h_1,\cdots, \boldsymbol h_n).$ We first define the following objective function \begin{align}\label{eq:primal} Z(\boldsymbol h) = \sum_{i\in \mathcal{D}} \sum_{l=1}^{|p|} \psi^{|p|-l} U_i(h_{il})&-\sum_{l=1}^{|p|}C_l\left(\sum\limits_{i\in \mathcal{D}} h_{il} - B_l\right)\nonumber\\&-\sum_{i\in \mathcal{D}}\tilde{C}_i\left(\sum\limits_{l=1}^{|p|}h_{il}-1\right), \end{align} where $C_l(\cdot)$ and $\tilde{C}_i(\cdot)$ are convex and non-decreasing penalty functions denoting the cost for violating constraints~(\ref{eq:hpbmcdp1}) and~(\ref{eq:hpbmcdp2}). Therefore, it is clear that $Z(\cdot)$ is strictly concave. Hence, a natural way to obtain the maximal value of~(\ref{eq:primal}) is to use the standard \emph{gradient ascent algorithm} to move the variable $h_{il}$ for $i\in\mathcal{D}$ and $l\in\{1,\cdots,|p|\}$ in the direction of the gradient, given as \begin{align} \frac{\partial Z(\boldsymbol h)}{\partial h_{il}}= \psi^{|p|-l}U_i^\prime(h_{il})-C_l^\prime\left(\sum\limits_{i\in \mathcal{D}} h_{il} - B_l\right) - \tilde{C}_i^\prime\left(\sum\limits_{l=1}^{|p|}h_{il}-1\right), \end{align} where $U_i^\prime(\cdot),$ $C_l^\prime(\cdot)$ and $\tilde{C}_i^\prime(\cdot)$ denote the derivatives of the respective functions. Since $h_{il}$ indicates the probability that content $i$ is in cache $l$, $\sum_{i\in \mathcal{D}} h_{il}$ is the expected number of contents currently in cache $l$, denoted by $B_{\text{curr},l}$.
Therefore, the primal algorithm for MCDP is given by \begin{subequations}\label{eq:primal-mcdp} \begin{align} &T_{il}[k] \leftarrow \begin{cases} \frac{1}{\lambda_i}\log \bigg(1 + \frac{h_{il}[k]}{1-\big(h_{i1}[k] + h_{i2}[k] + \cdots + h_{i|p|}[k]\big)}\bigg),\quad l=1;\\ \frac{1}{\lambda_i}\log \bigg(1 + \frac{h_{il}[k]}{h_{i(l-1)}[k]}\bigg),\quad l= 2, \cdots , |p|,\label{eq:primal-mcdp-t} \end{cases}\\ &h_{il}[k+1]\leftarrow \max\Bigg\{0, h_{il}[k]+\zeta_{il}\Bigg[\psi^{|p|-l}U_i^\prime(h_{il}[k])\nonumber\\ &\qquad\qquad-C_l^\prime\left(B_{\text{curr},l} - B_l\right)-\tilde{C}_i^\prime\left(\sum\limits_{l=1}^{|p|}h_{il}[k]-1\right)\Bigg]\Bigg\},\label{eq:primal-mcdp-h} \end{align} \end{subequations} where $\zeta_{il} \geq0$ is the step-size parameter, and $k$ is the iteration number incremented upon each request arrival. \begin{theorem}\label{thm:mcdp-primal-conv} The primal algorithm given in~(\ref{eq:primal-mcdp}) converges to the optimal solution given a sufficiently small step-size parameter $\zeta_{il}.$ \end{theorem} \begin{proof} Since $U_i(\cdot)$ is strictly concave and $C_l(\cdot)$ and $\tilde{C}_i(\cdot)$ are convex, ~(\ref{eq:primal}) is strictly concave; hence there exists a unique maximizer, denoted by $\boldsymbol h^*.$ Define the following function \begin{align} Y(\boldsymbol h)=Z(\boldsymbol h^*)-Z(\boldsymbol h), \end{align} then it is clear that $Y(\boldsymbol h)\geq 0$ for any feasible $\boldsymbol h$ that satisfies the constraints in the original optimization problem, and $Y(\boldsymbol h)= 0$ if and only if $\boldsymbol h=\boldsymbol h^*.$ We prove that $Y(\boldsymbol h)$ is a Lyapunov function, from which it follows that the above primal algorithm converges to the optimum. Details are available in Appendix~\ref{appendix:convergence}. \end{proof} \subsubsection{Model Validations and Insights}\label{sec:validations-line-cache} In this section, we validate our analytical results with simulations for MCDP.
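In our simulations, each iteration performs the update in~(\ref{eq:primal-mcdp}). A minimal per-content sketch (names are ours and caches are 0-indexed; the utility and penalty derivatives, capacities and current occupancies are supplied by the caller):

```python
import math

def primal_step(h, lam, psi, zeta, U_prime, Cl_prime, Ct_prime, B, B_curr):
    """One gradient-ascent iteration of the MCDP primal algorithm for a
    single content along a path of |p| caches. Returns the updated hit
    probabilities and the timers they imply (eq. primal-mcdp-t)."""
    P = len(h)
    # Timers from the current hit probabilities.
    T = []
    for l in range(P):
        if l == 0:
            T.append(math.log(1 + h[0] / (1 - sum(h))) / lam)
        else:
            T.append(math.log(1 + h[l] / h[l - 1]) / lam)
    # Gradient ascent on each h_il, projected to stay non-negative.
    excess = sum(h) - 1
    h_new = [
        max(0.0, h[l] + zeta * (psi ** (P - 1 - l) * U_prime(h[l])
                                - Cl_prime(B_curr[l] - B[l])
                                - Ct_prime(excess)))
        for l in range(P)
    ]
    return h_new, T
```

With zero penalty derivatives the update reduces to plain discounted-utility ascent, which makes the role of the penalty terms (pushing back on capacity and occupancy violations) easy to see.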
We consider a linear three-node cache network with cache capacities $B_l= 30$, $l=1, 2, 3.$ The total number of unique contents considered in the system is $n=100.$ We consider the Zipf popularity distribution with parameter $\alpha=0.8$. We consider a log utility function and discount factor $\psi=0.6.$ W.l.o.g., we assume that requests arrive according to a Poisson process with aggregate request rate $\Lambda=1.$ We first solve the optimization problem~(\ref{eq:max-mcdp}) using the Matlab routine \texttt{fmincon}. Then we implement our primal algorithm given in~(\ref{eq:primal-mcdp}), where we take the following penalty functions \cite{srikant13} $C_l(x)= \max\{0,x - B_l\log(B_l+x)\}$ and $\tilde{C}_i(x)= \max\{0,x - \log(1+x)\}$. From Figure~\ref{mcdphp-line}, we observe that the empirical hit probabilities achieved by our algorithm match the optimal hit probabilities under MCDP exactly. Figure~\ref{mcdpcs-line} shows the probability density for the number of contents in the cache network\footnote{The constraint~(\ref{eq:hpbmcdp1}) in problem~(\ref{eq:max-mcdp}) is on average cache occupancy. However, it can be shown that if $n\to\infty$ and $B_l$ grows in a sub-linear manner, the probability of violating the target cache size $B_l$ becomes negligible \cite{dehghan16}.}. As expected, the density is concentrated around the corresponding cache sizes. We further characterize the impact of the discount factor $\psi$ on performance. We consider different values of $\psi$. Figure~\ref{fig:line-beta01} shows the result for $\psi=0.1.$ We observe that as $\psi$ decreases, cache hits at lower index caches are discounted more heavily, so the most popular contents are likely to be cached in higher index caches (i.e., cache $3$) and the least popular contents in lower index caches (cache $1$).
This provides significant insight into the design of hierarchical caches, since in a linear cache network, a content enters the network via the first cache, and only advances to a higher index cache upon a cache hit. Under a stationary request process (e.g., Poisson process), only popular contents will be promoted to higher index caches, which is consistent with what we observe in Figure~\ref{fig:line-beta01}. A similar phenomenon has been observed in \cite{gast16,jiansri17} through numerical studies, while we characterize this through utility optimization. Second, we see that as $\psi$ increases, the performance difference between different caches decreases, and they become identical when $\psi=1$. This is because as $\psi$ increases, the performance degradation for cache hits on a lower index cache decreases, and there is no difference between the caches when $\psi=1.$ Due to space constraints, the results for $\psi=0.4, 0.6, 1$ are given in Appendix~\ref{appendix:line-validation-discount}. {We also compare our proposed scheme to replication strategies with LRU, LFU, FIFO and Random (RR) eviction policies. In a cache network, upon a cache hit, the requested content is usually replicated back into the network; three mechanisms have been proposed in the literature: leave-copy-everywhere (LCE), leave-copy-probabilistically (LCP) and leave-copy-down (LCD), which differ in how the requested content is replicated along the reverse path. Due to space constraints, we refer interested readers to \cite{garetto16} for detailed explanations of these mechanisms. Furthermore, based on \cite{garetto16}, LCD significantly outperforms LCE and LCP. Hence, we only consider LCD here. } {Figure~\ref{fig:line-comp-lcd} compares the performance of different eviction policies with the LCD replication strategy to our algorithm under MCDP for a three-node linear cache network. We plot the optimal aggregated utility of each of the above policies, normalized to that under MCDP.
We observe that MCDP significantly outperforms all other caching evictions with LCD replications. Finally, we consider a larger line cache network via simulation. We again observe a large gain for MCDP over the other caching eviction policies with LCD; the detailed results are omitted here due to space constraints. } \subsection{Contents, Servers and Requests} Consider the general cache network described in Section~\ref{sec:prelim}. Denote by $\mathcal{P}$ the set of all requests, and by $\mathcal{P}^i$ the set of requests for content $i.$ Suppose a cache at node $v$ serves two requests $(v_1, i_1, p_1)$ and $(v_2, i_2, p_2)$; then there are two cases: (i) non-common requested content, i.e., $i_1\neq i_2;$ and (ii) common requested content, i.e., $i_1=i_2.$ \subsection{Non-common Requested Content}\label{sec:general-cache-non-common} We first consider the case in which the requests $(v, i, p)$ passing through each node are for distinct contents. Since there is no coupling between different requests $(v, i, p),$ we can directly generalize the results for linear cache networks in Section~\ref{sec:line-cache}.
Hence, given the utility maximization formulation in~(\ref{eq:max-mcdp}), we can directly formulate the optimization problem for MCDP as \begin{subequations}\label{eq:max-mcdp-general} \begin{align} \text{\bf{G-N-U-MCDP:}} \max \quad&\sum_{i\in \mathcal{D}} \sum_{p\in\mathcal{P}^i}\sum_{l=1}^{|p|} \psi^{|p|-l} U_{ip}(h_{il}^{(p)}) \displaybreak[0]\\ \text{s.t.} \quad&\sum_{i\in \mathcal{D}} \sum_{p:l\in\{1,\cdots,|p|\}}h_{il}^{(p)} \leq B_l,\quad\forall l\in V, \displaybreak[1]\label{eq:hpbmcdp1-general}\\ & \sum_{l=1}^{|p|}h_{il}^{(p)}\leq 1,\quad\forall i \in \mathcal{D}, p\in\mathcal{P}^i,\displaybreak[2]\label{eq:hpbmcdp2-general}\\ &0\leq h_{il}^{(p)}\leq1, \quad\forall i \in \mathcal{D}, l\in\{1,\cdots,|p|\}, \nonumber\\ &\qquad\qquad\qquad p\in\mathcal{P}^i, \end{align} \end{subequations} where~(\ref{eq:hpbmcdp1-general}) is the cache capacity constraint and~(\ref{eq:hpbmcdp2-general}) follows the discussion for MCDP in~(\ref{eq:mcdp-constraint}). \begin{prop} Since the feasible set is convex and the objective function is strictly concave and continuous, the optimization problem defined in~(\ref{eq:max-mcdp-general}) under MCDP has a unique global optimum. \end{prop} We can similarly formulate a utility maximization problem for MCD for a general cache network; this can be found in Appendix~\ref{appendix:mcd-non-common}. \subsubsection{Model Validations and Insights} We consider a seven-node binary tree network, shown in Figure~\ref{fig:cache-7-nodes}, with node set $\{1,\cdots, 7\}$. There exist four paths $p_1=\{1, 5, 7\},$ $p_2=\{2, 5, 7\},$ $p_3=\{3, 6, 7\}$ and $p_4=\{4, 6, 7\}.$ Each leaf node serves requests for $100$ distinct contents, and the cache size is $B_v=30$ for $v\in\{1,\cdots, 7\}.$ Assume that the contents requested on paths $p_1,\cdots,p_4$ follow Zipf distributions with parameters $\alpha_1=0.2,$ $\alpha_2=0.4,$ $\alpha_3=0.6$ and $\alpha_4=0.8,$ respectively.
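The constraints of G-N-U-MCDP can be checked mechanically for a candidate allocation of hit probabilities. The sketch below is a minimal illustration; the data layout (dictionaries keyed by content and path) is our own choice, not from the paper.

```python
import numpy as np

def feasible(h, paths, B, tol=1e-9):
    """Check the G-N-U-MCDP constraints for hit probabilities.

    h[(i, p)] is an array of h_il over the positions l = 1..|p| of path p;
    paths maps a path id p to its node list; B[v] is the capacity of node v.
    """
    occupancy = {v: 0.0 for v in B}
    for (i, p), hv in h.items():
        if hv.min() < 0.0 or hv.max() > 1.0:     # 0 <= h_il <= 1
            return False
        if hv.sum() > 1.0 + tol:                 # at most one copy per content/path
            return False
        for l, v in enumerate(paths[p]):
            occupancy[v] += hv[l]                # expected occupancy at node v
    return all(occupancy[v] <= B[v] + tol for v in B)
```

A solver (e.g., the primal algorithm) would only move within this feasible set; the checker is useful for validating its output.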
We consider the log utility function $U_{ip}(x) = \lambda_{ip} \log x,$ where $\lambda_{ip}$ is the request arrival rate for content $i$ on path $p,$ and requests on each path arrive according to a Poisson process with aggregate rate $\Lambda_p=1$ for $p=1, 2, 3, 4.$ The discount factor is $\psi=0.6$. Figures~\ref{7nodecache-hit} and~\ref{7nodecache-size} show results for path $p_4=\{4, 6, 7\}.$ From Figure~\ref{7nodecache-hit}, we observe that the hit probabilities computed by our algorithm match both the optimal and the empirical hit probabilities under MCDP. Figure~\ref{7nodecache-size} shows the probability density for the number of contents in the cache network. As expected, the density is concentrated around the corresponding cache sizes. Similar trends exist for paths $p_1$, $p_2$ and $p_3$ and are hence omitted here. \subsection{Minimizing Overall Costs} \label{appendix:mcdp-cost2} In Section~\ref{sec:line-utility-max}, we aim to maximize overall utilities across all contents over the cache network, which captures user satisfaction. However, the communication cost for content transfers across the network is also critical in many network applications. This cost includes (i) the search cost for finding the requested content in the network; (ii) the fetch cost to serve the content to the user; and (iii) the transfer cost for internal cache management due to a cache hit or miss. In the following, we first characterize these costs for MCD. Then we formulate a minimization problem to characterize the optimal TTL policy for content placement in a linear cache network. \subsubsection{Search Cost} Requests from users are sent along a path until they hit a cache that stores the requested content. We define the \emph{search cost} as the cost for finding the requested content in the cache network. Consider the cost as a function $c_s(\cdot)$ of the hit probabilities.
Then the expected search cost across the network is given as \begin{align}\label{eq:searching-cost} S_{\text{MCD}}=S_{\text{MCDP}}= \sum_{i\in \mathcal{D}} \lambda_ic_s\left(\sum_{l=0}^{|p|}(|p|-l+1)h_{il}\right). \end{align} \subsubsection{Fetch Cost} Upon a cache hit, the requested content will be sent to the user along the reverse direction of the path. We define the \emph{fetch cost} as the cost of fetching the content to serve the user who sent the request. Consider the cost as a function $c_f(\cdot)$ of the hit probabilities. Then the expected fetch cost across the network is given as \begin{align}\label{eq:fetching-cost} F_{\text{MCD}}=F_{\text{MCDP}}= \sum_{i\in \mathcal{D}} \lambda_ic_f\left(\sum_{l=0}^{|p|}(|p|-l+1)h_{il}\right). \end{align} \subsubsection{Transfer Cost} Under a TTL cache, upon a cache hit, the content either moves to a higher index cache or stays in the current one; upon a cache miss, the content either transfers to a lower index cache (MCDP) or is discarded from the network (MCD). We define the \emph{transfer cost} as the cost due to cache management upon a cache hit or miss. Consider the cost as a function $c_m(\cdot)$ of the hit probabilities. \noindent{\textit{\textbf{MCD:}}} Under MCD, since the content is discarded from the network once its timer expires, the transfer cost is caused only by cache hits. To that end, the requested content either moves to a higher index cache if it was in cache $l\in\{1,\cdots, |p|-1\}$ or stays in the same cache if it was in cache $|p|.$ Then the expected transfer cost across the network for MCD is given as \begin{align}\label{eq:moving-cost-mcd} M_{\text{MCD}} = \sum_{i\in \mathcal{D}} \lambda_ic_m\left(1 - h_{i|p|}\right). \end{align} \subsubsection{Total Costs} Given the search, fetch and transfer costs, the total costs for MCD and MCDP are defined as \begin{subequations}\label{eq:total-cost} \begin{align} &{SFM}_{\text{MCD}} =S_{\text{MCD}}+F_{\text{MCD}}+M_{\text{MCD}}, \displaybreak[0]\label{eq:total-cost-mcd}\\ &{SFM}_{\text{MCDP}} = S_{\text{MCDP}}+F_{\text{MCDP}}+M_{\text{MCDP}},\label{eq:total-cost-mcdp} \end{align} \end{subequations} where the corresponding costs are given in~(\ref{eq:searching-cost}),~(\ref{eq:fetching-cost}),~(\ref{eq:moving-cost-mcd}) and~(\ref{eq:moving-cost-mcdp}), respectively. \subsubsection{MCDP}\label{mcdpmodel} Requests for content $i$ arrive according to a Poisson process with rate $\lambda_i.$ Under TTL, content $i$ spends a deterministic time in a cache if it is not requested, independent of all other contents. We denote the timer as $T_{il}$ for content $i$ in cache $l$ on the path $p,$ where $l\in\{1,\cdots, |p|\}.$ Denote by $t_k^i$ the $k$-th time that content $i$ is either requested or its timer expires. For simplicity, we assume that content is in cache $0$ (i.e., the server) when it is not in the cache network. We can then define a discrete time Markov chain (DTMC) $\{X_k^i\}_{k\geq0}$ with $|p|+1$ states, where $X_k^i$ is the index of the cache that content $i$ is in at time $t_k^i.$ The event that the time between two requests for content $i$ exceeds $T_{il}$ occurs with probability $e^{-\lambda_i T_{il}}$; consequently we obtain the transition probability matrix of $\{X_k^i\}_{k\geq0}$ and compute its stationary distribution. Details can be found in Appendix~\ref{mcdpmodel-appendix}.
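The MCDP chain just described is easy to verify numerically. The sketch below (illustrative only; the timer values are arbitrary) builds the transition matrix, solves for its stationary distribution, and checks the closed-form timer-average hit probabilities derived in the appendix.

```python
import numpy as np

lam = 1.0                              # request rate lambda_i
T = np.array([0.5, 1.0, 1.5])          # timers T_{i1},...,T_{i|p|}; here |p| = 3
q = np.exp(-lam * T)                   # timer-expiry probabilities e^{-lambda_i T_{il}}
n = len(T) + 1                         # states 0,...,|p| (state 0 = server)

# Transition matrix of the DTMC {X_k^i} described above.
P = np.zeros((n, n))
P[0, 1] = 1.0                          # from the server, the next request caches the content
for l in range(1, n - 1):
    P[l, l - 1] = q[l - 1]             # timer expires: demote to cache l-1
    P[l, l + 1] = 1.0 - q[l - 1]       # requested in time: promote to cache l+1
P[n - 1, n - 2] = q[-1]
P[n - 1, n - 1] = 1.0 - q[-1]          # highest cache: stay there on a hit

# Stationary distribution: left Perron eigenvector of P.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Timer-average (= hit) probabilities, weighting states by mean sojourn times.
soj = np.concatenate(([1.0 / lam], (1.0 - q) / lam))
h = pi * soj / np.sum(pi * soj)

# Closed-form hit probabilities h_{i1}, h_{il} = h_{i(l-1)}(e^{lam T_il} - 1).
g = np.exp(lam * T) - 1.0
h1 = g[0] / (1.0 + np.cumprod(g).sum())
h_closed = h1 * np.concatenate(([1.0], np.cumprod(g[1:])))
```

The eigenvector route agrees with the closed form, which is a convenient sanity check when experimenting with other timer configurations.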
The timer-average probability that content $i$ is in cache $l\in\{1,\cdots, |p|\}$ is \begin{subequations}\label{eq:hit-prob-mcdp} \begin{align} & h_{i1} = \frac{e^{\lambda_iT_{i1}}-1}{1+\sum_{j=1}^{|p|}(e^{\lambda_iT_{i1}}-1)\cdots (e^{\lambda_iT_{ij}}-1)},\label{eq:mcdp1}\\ & h_{il} = h_{i(l-1)}(e^{\lambda_iT_{il}}-1),\; l = 2,\cdots,|p|,\label{eq:mcdp2} \end{align} \end{subequations} where $h_{il}$ is also the hit probability for content $i$ at cache $l.$ \subsubsection{MCD} Again, under TTL, content $i$ spends a deterministic time $T_{il}$ in cache $l$ if it is not requested, independent of all other contents. We define a DTMC $\{Y_k^i\}_{k\geq0}$ by observing the system at the times that content $i$ is requested. Similar to MCDP, if content $i$ is not in the cache network, it is in cache $0$; thus we still have $|p|+1$ states. If $Y_k^i=l$, then the next request for content $i$ comes within time $T_{il}$ with probability $1-e^{-\lambda_iT_{il}}$, in which case $Y_{k+1}^i=l+1;$ otherwise $Y_{k+1}^i=0$ due to the MCD policy. We can obtain the transition probability matrix of $\{Y_k^i\}_{k\geq0}$ and compute its stationary distribution; details are available in Appendix~\ref{mcdmodel-appendix}. By the PASTA property \cite{MeyTwe09}, it follows that the stationary probability that content $i$ is in cache $l\in\{1,\cdots, |p|\}$ is \begin{subequations}\label{eq:hit-prob-mcd} \begin{align} &h_{il}=h_{i0}\prod_{j=1}^{l}(1-e^{-\lambda_iT_{ij}}),\quad l=1,\cdots, |p|-1,\displaybreak[1]\label{eq:mcd2}\\ &h_{i|p|}=e^{\lambda_i T_{i|p|}}h_{i0}\prod_{j=1}^{|p|}(1-e^{-\lambda_iT_{ij}}),\label{eq:mcd3} \end{align} \end{subequations} where $h_{i0}=1/[1+\sum_{l=1}^{|p|-1}\prod_{j=1}^l(1-e^{-\lambda_i T_{ij}})+e^{\lambda_i T_{i|p|}}\prod_{j=1}^{|p|}(1-e^{-\lambda_i T_{ij}})].$ \subsubsection{Caching and Compression} Again, we represent the network as a directed graph $G=(V, E).$ For simplicity, we consider a tree-structured WSN, as shown in Figure~\ref{fig:wsn-exm}.
Each node is associated with a cache that is capable of storing a constant amount of content. Denote by $B_v$ the cache capacity at node $v\in V.$ Let $\mathcal{K}\subset V$ be the set of leaf nodes with $|\mathcal{K}|=K.$ Furthermore, we assume that each node $j$ that receives content from leaf node $k$ can compress it with a reduction ratio\footnote{Defined as the ratio of the volume of the output content to the volume of the input content at any node. We consider compression that only reduces the quality of content (e.g., removes redundant information), while the total number of distinct contents in the system remains the same.} $\delta_{kj},$ where $0<\delta_{kj}\leq 1$, $\forall k, j.$ \subsubsection{Content Generation and Requests} We assume that leaf node $k\in\mathcal{K}$ continuously generates content, which remains active for a time interval $W$ and is requested by users outside the network. If there is no request for such content in that time interval, the generated content becomes inactive and is discarded from the system. The generated content is compressed and cached along the path between the leaf node and the sink node when a request is made for the active content. Denote by $p^k=(1, \cdots, |p^k|)$ the path between leaf node $k$ and the sink node. Since we consider a tree-structured network, the total number of paths is $|\mathcal{K}|=K;$ hence, w.l.o.g., $\mathcal{K}$ is also used to denote the set of all paths. Here, we consider MCDP and MCD with the TTL caching policy. In a WSN, each sensor (leaf node) generates a sequence of contents that users are interested in. Different sensors may generate different types of contents, i.e., no contents are shared in common between different sensors. Hence, the cache performance analysis in a WSN can be mapped to the problem we considered in Section~\ref{sec:general-cache-non-common}. W.l.o.g., we consider a particular leaf node $k$ and the content that is active and requested by the users.
For simplicity, we drop the superscript $k$ and denote the path as $p=(1,\cdots,|p|),$ where cache $|p|$ is the sink node that serves the requests and cache $1$ is the leaf node that generates the content. Let the set of contents generated by leaf node $k$ be $\mathcal{D}^{(p)}$, and let the request arrivals for $\mathcal{D}^{(p)}$ follow a Poisson process with rate $\lambda_i$ for $i\in\mathcal{D}^{(p)}.$ Let $h_{ij}^{(p)}, T_{ij}^{(p)}$ be the hit probability and the TTL timer associated with content $i \in \mathcal{D}^{(p)}$ at node $j\in\{1,\cdots,|p|\},$ respectively. Denote $\boldsymbol h_i^{(p)} =(h_{i1}^{(p)}, \cdots, h_{i|p|}^{(p)})$, $\boldsymbol \delta_i^{(p)} =(\delta_{i1}^{(p)}, \cdots, \delta_{i|p|}^{(p)})$ and $\boldsymbol T_i^{(p)}=(T_{i1}^{(p)},\cdots, T_{i|p|}^{(p)}).$ Let $\boldsymbol h=(\boldsymbol h_i^{(p)})$, $\boldsymbol \delta= (\boldsymbol\delta_i^{(p)})$ and $\boldsymbol T=(\boldsymbol T_i^{(p)})$ for $i\in\mathcal{D}^{(p)}$ and $p\in\mathcal{K}.$ \subsubsection{Utilities} Following an argument similar to that in Section~\ref{sec:general-cache-non-common}, the overall utility for content $i$ along path $p$ is given as \begin{align} \sum_{j=1}^{|p|}\psi^{|p|-j}U_{i}^{(p)}\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg), \end{align} where the utility not only captures the hit probabilities but also characterizes the content quality degradation due to compression along the path. \subsubsection{Costs} We consider the costs, e.g., delay, of routing the content along the path, which include the cost to forward the content to the node that caches it, the cost to search for the content along the path, and the cost to fetch the cached content to the users that sent the requests.
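The compression-aware discounted utility above is straightforward to evaluate for one content on one path; the helper below is a minimal sketch (the function name and the default log utility are our own choices).

```python
import numpy as np

def path_utility(h, delta, psi, U=np.log):
    """Sum_j psi^(|p|-j) * U(h_j * prod_{l<=j} delta_l) for one content on a path.

    h and delta are arrays over positions j = 1..|p|; U is the utility
    function (log by default, as in the evaluation section).
    """
    h = np.asarray(h, dtype=float)
    eff = h * np.cumprod(np.asarray(delta, dtype=float))  # quality-degraded hit prob.
    weights = psi ** (len(h) - 1 - np.arange(len(h)))     # psi^(|p|-j)
    return float(np.sum(weights * U(eff)))
```

With a concave $U$, compression ($\delta_l<1$) strictly lowers the utility, which is the degradation effect the formulation is meant to trade off against caching closer to the leaf.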
Again, we assume that the per hop cost to transfer (search) the content along the path is a function $c_f(\cdot)$ ($c_s(\cdot)$) of the hit probabilities and compression ratios. \noindent{\textit{\textbf{Forwarding Costs:}}} If a cache hit for content $i$ occurs at node $j\in\{1,\cdots, |p|\}$, the total cost to forward content $i$ along $p$ is given as \begin{align} \sum_{j=1}^{|p|}\lambda_i\cdot j\cdot c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg). \end{align} \noindent{\textit{\textbf{Search Costs:}}} Given a cache hit for content $i$ at node $j\in\{1,\cdots, |p|\}$, the total cost to search for content $i$ along $p$ is given as \begin{align} \sum_{j=1}^{|p|}\lambda_i\cdot(|p|-j+1)\cdot c_s(h_{ij}^{(p)}). \end{align} \noindent{\textit{\textbf{Fetching Costs:}}} Upon a cache hit for content $i$ at node $j\in\{1,\cdots, |p|\}$, the total cost to fetch content $i$ along $p$ is given as \begin{align} \sum_{j=1}^{|p|}\lambda_i\cdot(|p|-j+1)\cdot c_f\Bigg(h_{ij}^{(p)}\prod_{l=1}^j\delta_{il}^{(p)}\Bigg). \end{align} \subsection{Stationary Behaviors of MCDP and MCD} \subsubsection{MCDP}\label{mcdpmodel-appendix} Under the IRM model, requests for content $i$ arrive according to a Poisson process with rate $\lambda_i.$ As discussed earlier, for TTL caches, content $i$ spends a deterministic time in a cache if it is not requested, independent of all other contents. We denote the timer as $T_{il}$ for content $i$ in cache $l$ on the path $p,$ where $l\in\{1,\cdots, |p|\}.$ Denote by $t_k^i$ the $k$-th time that content $i$ is either requested or moved from one cache to another. For simplicity, we assume that content is in cache $0$ (i.e., the server) if it is not in the cache network.
Then we can define a discrete time Markov chain (DTMC) $\{X_k^i\}_{k\geq0}$ with $|p|+1$ states, where $X_k^i$ is the index of the cache that content $i$ is in at time $t_k^i.$ Since the event that the time between two requests for content $i$ exceeds $T_{il}$ happens with probability $e^{-\lambda_i T_{il}},$ the transition matrix of $\{X_k^i\}_{k\geq0}$ is given as {\footnotesize \begin{align} {\bf P}_i^{\text{MCDP}}= \begin{bmatrix} 0 & 1 \\ e^{-\lambda_iT_{i1}} & 0 & 1-e^{-\lambda_iT_{i1}} \\ &\ddots&\ddots&\ddots\\ &&e^{-\lambda_iT_{i(|p|-1)}} & 0 & 1-e^{-\lambda_iT_{i(|p|-1)}} \\ &&&e^{-\lambda_iT_{i|p|}} & 1-e^{-\lambda_iT_{i|p|}} \end{bmatrix}. \end{align} } Let $(\pi_{i0},\cdots,\pi_{i|p|})$ be the stationary distribution of ${\bf P}_i^{\text{MCDP}}$; then we have \begin{subequations}\label{eq:stationary-mcdp} \begin{align} & \pi_{i0} = \frac{1}{1+\sum_{j=1}^{|p|}e^{\lambda_iT_{ij}}\prod_{s=1}^{j-1}(e^{\lambda_iT_{is}}-1)},\displaybreak[0] \label{eq:stationary-mcdp0}\\ &\pi_{i1} = \pi_{i0}e^{\lambda_iT_{i1}},\displaybreak[1]\label{eq:stationary-mcdp1}\\ &\pi_{il} = \pi_{i0}e^{\lambda_iT_{il}}\prod_{s=1}^{l-1}(e^{\lambda_iT_{is}}-1),\; l = 2,\cdots, |p|.\label{eq:stationary-mcdp2} \end{align} \end{subequations} Then the average time that content $i$ spends in cache $l\in\{1,\cdots, |p|\}$ can be computed as \begin{align}\label{eq:averagetime-mcdp} \mathbb{E}[t_{k+1}^i-t_k^i|X_k^i = l]= \int_{0}^{T_{il}}\left(1-\left[1-e^{-\lambda_it}\right]\right)dt = \frac{1-e^{-\lambda_iT_{il}}}{\lambda_i}, \end{align} and $\mathbb{E}[t_{k+1}^i-t_k^i|X_k^i = 0] = \frac{1}{\lambda_i}.$ Given~(\ref{eq:stationary-mcdp}) and~(\ref{eq:averagetime-mcdp}), the timer-average probability that content $i$ is in cache $l\in\{1,\cdots, |p|\}$ is \begin{align*} & h_{i1} = \frac{e^{\lambda_iT_{i1}}-1}{1+\sum_{j=1}^{|p|}(e^{\lambda_iT_{i1}}-1)\cdots (e^{\lambda_iT_{ij}}-1)},\\ & h_{il} = h_{i(l-1)}(e^{\lambda_iT_{il}}-1),\; l = 2,\cdots,|p|, \end{align*} where $h_{il}$ is also the hit
probability for content $i$ at cache $l.$ \subsubsection{MCD}\label{mcdmodel-appendix} Again, for TTL caches, content $i$ spends a deterministic time $T_{il}$ in cache $l$ if it is not requested, independent of all other contents. We define a DTMC $\{Y_k^i\}_{k\geq0}$ by observing the system at the times that content $i$ is requested. Similarly, if content $i$ is not in the cache network, then it is in cache $0$; thus we still have $|p|+1$ states. If $Y_k^i=l$, then the next request for content $i$ comes within time $T_{il}$ with probability $1-e^{-\lambda_iT_{il}}$, in which case $Y_{k+1}^i=l+1$; otherwise $Y_{k+1}^i=0$ due to the MCD policy. Therefore, the transition matrix of $\{Y_k^i\}_{k\geq0}$ is given as {\footnotesize \begin{align} {\bf P}_i^{\text{MCD}}= \begin{bmatrix} e^{-\lambda_iT_{i1}} & 1-e^{-\lambda_iT_{i1}} & &\\ e^{-\lambda_iT_{i2}} & &1-e^{-\lambda_iT_{i2}} & &\\ \vdots&&\ddots&\\ e^{-\lambda_iT_{i|p|}} & && 1-e^{-\lambda_iT_{i|p|}} \\ e^{-\lambda_iT_{i|p|}} && &1-e^{-\lambda_iT_{i|p|}} \end{bmatrix}.
\end{align} } Let $(\tilde{\pi}_{i0},\cdots,\tilde{\pi}_{i|p|})$ be the stationary distribution of ${\bf P}_i^{\text{MCD}}$; then we have \begin{subequations}\label{eq:stationary-mcd-app} \begin{align} &\tilde{\pi}_{i0}=\frac{1}{1+\sum_{l=1}^{|p|-1}\prod_{j=1}^l(1-e^{-\lambda_i T_{ij}})+e^{\lambda_i T_{i|p|}}\prod_{j=1}^{|p|}(1-e^{-\lambda_i T_{ij}})},\displaybreak[0]\label{eq:mcd1-app}\\ &\tilde{\pi}_{il}=\tilde{\pi}_{i0}\prod_{j=1}^{l}(1-e^{-\lambda_iT_{ij}}),\quad l=1,\cdots, |p|-1,\displaybreak[1]\label{eq:mcd2-app}\\ &\tilde{\pi}_{i|p|}=e^{\lambda_i T_{i|p|}}\tilde{\pi}_{i0}\prod_{j=1}^{|p|}(1-e^{-\lambda_iT_{ij}}).\label{eq:mcd3-app} \end{align} \end{subequations} By the PASTA property \cite{MeyTwe09}, we immediately have that the stationary probability that content $i$ is in cache $l\in\{1,\cdots, |p|\}$ is given as \begin{align*} h_{il}=\tilde{\pi}_{il}, \quad l=0, 1, \cdots, |p|, \end{align*} where the $\tilde{\pi}_{il}$ are given in~(\ref{eq:stationary-mcd-app}). \subsection{The impact of the discount factor on the performance of a linear cache network}\label{appendix:line-validation-discount} The results for $\psi=0.4, 0.6, 1$ are shown in Figures~\ref{fig:line-beta04},~\ref{fig:line-beta06} and~\ref{fig:line-beta1}. \subsection{Optimization Problem for MCD} \subsubsection{Non-common Content Requests under General Cache Networks}\label{appendix:mcd-non-common} Similarly, we can formulate a utility maximization problem for MCD under a general cache network.
\begin{subequations}\label{eq:max-mcd-general} \begin{align} \text{\bf{G-N-U-MCD:}} \max \quad&\sum_{i\in \mathcal{D}} \sum_{p\in\mathcal{P}^i}\sum_{l=1}^{|p|} \psi^{|p|-l} U_{ip}(h_{il}^{(p)}) \displaybreak[0]\\ \text{s.t.} \quad&\sum_{i\in \mathcal{D}} \sum_{p:l\in\{1,\cdots,|p|\}}h_{il}^{(p)} \leq B_l,\quad\forall l\in V, \displaybreak[1]\label{eq:hpbmcd1-general}\\ & \sum_{l=1}^{|p|}h_{il}^{(p)}\leq 1,\quad\forall i \in \mathcal{D}, p\in\mathcal{P}^i,\displaybreak[2]\label{eq:hpbmcd2-general}\\ & h_{i(|p|-1)}^{(p)} \leq \cdots \leq h_{i1}^{(p)} \leq h_{i0}^{(p)}, \quad\forall p\in\mathcal{P}^i, \label{eq:hpbmcd3-general}\\ &0\leq h_{il}^{(p)}\leq1, \quad\forall i \in \mathcal{D}, p\in\mathcal{P}^i. \end{align} \end{subequations} \begin{prop} Since the feasible set is convex and the objective function is strictly concave and continuous, the optimization problem defined in~(\ref{eq:max-mcd-general}) under MCD has a unique global optimum. \end{prop} \subsubsection{Common Content Requests under General Cache Networks}\label{appendix:mcd-common} Similarly, we can formulate the following optimization problem for MCD with TTL caches, \begin{subequations}\label{eq:max-mcd-general-common} \begin{align} &\text{\bf{G-U-MCD:}}\nonumber\\ \max \quad&\sum_{i\in \mathcal{D}} \sum_{p\in \mathcal{P}^i} \sum_{l=1}^{|p|} \psi^{|p|-l} U_{ip}(h_{il}^{(p)}) \displaybreak[0]\\ \text{s.t.} \quad& \sum_{i\in \mathcal{D}} \bigg(1-\prod_{p:j\in\{1,\cdots,|p|\}}(1-h_{ij}^{(p)})\bigg) \leq C_j,\quad\forall j \in V, \displaybreak[1] \label{eq:max-mcd-genenral-cons1}\\ & \sum_{j\in\{1,\cdots,|p|\}}h_{ij}^{(p)}\leq 1,\quad \forall i \in \mathcal{D}, \forall p \in \mathcal{P}^i, \displaybreak[2]\label{eq:max-mcd-genenral-cons2}\\ & h_{i(|p|-1)}^{(p)} \leq \cdots \leq h_{i1}^{(p)} \leq h_{i0}^{(p)}, \quad\forall p\in\mathcal{P}^i, \displaybreak[3] \label{eq:max-mcd-genenral-cons3}\\ &0\leq h_{il}^{(p)}\leq 1, \quad\forall i\in\mathcal{D}, \forall p\in\mathcal{P}^i. \label{eq:max-mcd-genenral-cons4} \end{align} \end{subequations} \begin{prop} Since the feasible set is non-convex, the optimization problem defined in~(\ref{eq:max-mcd-general-common}) under MCD is a non-convex optimization problem. \end{prop} \section{Introduction}\label{sec:intro} \input{01-introduction} \section{Related Work}\label{sec:related} \input{01A-related} \section{Preliminaries}\label{sec:prelim} \input{02-prelim} \section{Linear Cache Network}\label{sec:line-cache} \input{03-line-cache} \section{General Cache Networks}\label{sec:general-cache} \input{04-general-cache-network} \input{04A-common} \input{04B-duality}\label{sec:general-cache-duality} \section{Application}\label{sec:app} \input{05-app-cdn} \section{Conclusion}\label{sec:conclusion} \input{06-conclusion} \section{Acknowledgments}\label{sec:ack} \input{06a-ack} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Higher-order topological phases, recently introduced in higher-dimensional lattices, extend the conventional understanding of topologically nontrivial materials: a $d$-dimensional lattice hosts not only first-order ($d-1$)-dimensional edge states but also $n$-th-order ($d-n$)-dimensional edge states~\cite{T20171, T20172, T20181, T20190, T20191, T20192, T20201, T20202}. Second-order corner states in two-dimensional (2D) lattices have been widely investigated since 2019 in sonic~\cite{Es1, Es2, Es3, Es4, Es5, Es6}, ring resonator~\cite{Er}, waveguide~\cite{Ew1, Ew2, Ew3}, cavity~\cite{Ec1, Ec2}, and cold atom~\cite{Ecold} systems. Recently, higher-order topological states in three-dimensional lattices have also been reported~\cite{E3D1, E3D2}. The investigations of higher-order topological phases in both theory and experiment promote and extend the development of topological photonics~\cite{Topo_review_1, Topo_review_2}. The current principles for seeking higher-order topological states are mainly based on analyzing spatial or (and) nonspatial symmetries~\cite{T20171, T20172,TPRL118,T20181,Langbehn2017,Song2017,Linhu2018,Max2018}. In spatial-symmetric (such as inversion- or rotational-symmetric) systems, higher-order topological states may originate from quantized dipole polarization~\cite{TPRL118,T20181} or multipole moments~\cite{Langbehn2017,Song2017}. In nonspatial-symmetric (such as chiral-symmetric) systems, corner states may arise due to nontrivial edge winding numbers~\cite{Linhu2018}. By combining nonspatial and spatial symmetries, second-order topological insulators and superconductors have been partially classified~\cite{Max2018}. The existing schemes, which require delicate designs of the overall symmetry in a top-to-bottom approach, cannot provide insight into the connection between lower-order and higher-order topological states.
Since lower-order edge states are well known, we may wonder whether it is possible to use lower-order topological states as building blocks for assembling higher-order topological states. If possible, what is their topological correspondence to the bulk? Here, we theoretically propose and experimentally demonstrate a bottom-to-top scheme for constructing topological corner states by using topological edge states as building blocks. In each direction the topological edge states are snapshot states of a topological pumping driven by a changing one-dimensional dipole moment, which is related to a Chern number. Such a scheme naturally extends the Chern number to a vector Chern number with individual components separately defined in each direction, from lower- to higher-dimensional lattices. The hierarchical relation between the two-dimensional Zak phase~\cite{TPRL118,T20181} and the vector Chern number can be understood as follows: a varying two-dimensional dipole polarization generates quantized charge pumping in two directions. The fact that corner states are guaranteed by a nontrivial vector Chern number can be termed \emph{bulk-corner correspondence}, and such corner states inherit the topological origin of edge states as a dimensional reduction of the quantum Hall phase. We emphasize that such corner states do not require any fine tuning of spatial or nonspatial symmetries. Taking the off-diagonal Aubry-Andr\'e-Harper (AAH) model as an example, we theoretically analyze the topological origin when extending the lattice from one dimension to two dimensions. We construct the two-dimensional photonic topological lattice and successfully observe the higher-order topological corner states predicted in theory.
Our model gives an intuitive understanding of the emergence of higher-order topological phases in higher-dimensional lattices, which connects the topological phases in lattices of different dimensions and provides a convenient tool for constructing higher-order topological phases in higher-dimensional lattices. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Figure_1.png} \caption{\textbf{Schematic of constructing corner states.} \textbf{(a)} Three types of edge states in one-dimensional lattices. The edge states in a one-dimensional lattice can be regarded as the building blocks for higher-order topological states. \textbf{(b-c)} Corner states in two- and three-dimensional lattices built from edge states in one-dimensional lattices. We can find the connection between the topological states in lattices of different dimensions by projecting the higher-dimensional lattice onto one-dimensional lattices along the x, y and z axes, respectively.} \label{f1} \end{figure} \section{Vector Chern number and corner states} We explore a systematic method to construct corner states in a two-dimensional square lattice. Consider a particle whose position is denoted by $(i,j)$, where $i$ and $j$ are the site indices in the $x$ and $y$ directions, respectively. The coupling between the $(i,j)$th and $(k,l)$th lattice sites has the form $H_x(i,k)\delta_{j,l}+\delta_{i,k}H_y(j,l)$, where $H_{x}(i,k)$ is the coupling matrix along the $x$ direction, irrelevant to positions in the $y$ direction, and vice versa. The motion of a particle hopping in such a lattice is governed by the Hamiltonian, \begin{equation} H=H_x\otimes I_y+I_x\otimes H_y, \end{equation} where $I_{s}$ is an identity matrix in the $s$ direction with $s\in \{x, y\}$.
Once we obtain the eigenvalues $E_p^{s}$ and eigenstates $|\psi_p^{s}\rangle$ of $H_s$, where $p$ are quantum numbers, we can immediately prove that \begin{equation} H|\psi_m^{x}\rangle\otimes |\psi_n^{y}\rangle=(E_m^{x}+E_n^{y})|\psi_m^{x}\rangle\otimes |\psi_n^{y}\rangle, \end{equation} that is, $E_m^{x}+E_n^{y}$ and $|\psi_m^{x}\rangle\otimes|\psi_n^{y}\rangle$ are eigenvalues and eigenstates of $H$, respectively. If $|\psi_m^{x}\rangle$ and $|\psi_n^{y}\rangle$ are topological edge states, the product of these edge states becomes a topological corner state in two dimensions. Hence the search for topological corner states is transformed into separately designing a coupling matrix in each direction that supports topological edge states. Consider that the coupling matrix $H_s$ is controlled by two parameters $\phi^{s}$ and $\theta^{s}$, which satisfy $H_s(\phi^{s},\theta^{s})=H_s(\phi^{s}+2\pi,\theta^{s}+2\pi)$. In practice, $\phi^{s}$ is a modulated phase, and $\theta^{s}$ is a twisted phase acquired when a particle hops across the boundary in the $s$ direction under the twisted boundary condition. To characterize the bulk topology, we can define the vector Chern number $(C_{\mathbf{u}}^{x}; C_{\mathbf{v}}^{y})$ with individual components given by \begin{equation} C_{\mathbf{w}}^{s}= \frac{1}{{2\pi i}}\int_{BZ} d \theta^{s}{d\phi^{s} \det [\mathcal{F}({\phi^{s},\theta^{s}})]}. \end{equation} Here, $ [\mathcal{F}({\phi^{s},\theta^{s}})]^{m,n}=\partial_{\phi^{s}} A_{\theta^{s}}^{m,n}-\partial_{\theta^{s}} A_{\phi^{s}}^{m,n} +i[A_{\phi^{s}},A_{\theta^{s}}]^{m,n} $ are elements of the non-Abelian Berry curvature with elements of the Berry connection $(A_\mu)^{m,n}=\langle \psi_{m}^s({\phi^{s},\theta^{s}})|\nabla_\mu|\psi_{n}^s({\phi^{s},\theta^{s}})\rangle$, where $m,n\in \mathbf{w}$, a subset of nearly degenerate bands.
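The eigenvalue additivity stated above holds for arbitrary Hermitian blocks and can be checked numerically; the sketch below (random symmetric matrices with arbitrarily chosen dimensions, purely illustrative) verifies it.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_sym(n):
    """Random real symmetric matrix standing in for a coupling matrix H_s."""
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2.0

Hx, Hy = rand_sym(4), rand_sym(5)
# H = H_x (x) I_y + I_x (x) H_y, as in the Hamiltonian above.
H = np.kron(Hx, np.eye(5)) + np.kron(np.eye(4), Hy)

Ex, Vx = np.linalg.eigh(Hx)
Ey, Vy = np.linalg.eigh(Hy)

# |psi_m^x> (x) |psi_n^y> should be an eigenstate of H with energy E_m^x + E_n^y.
m, n = 1, 2
psi = np.kron(Vx[:, m], Vy[:, n])
residual = np.linalg.norm(H @ psi - (Ex[m] + Ey[n]) * psi)
```

The vanishing residual confirms that any pair of one-dimensional eigenstates combines into a two-dimensional eigenstate, which is the mechanism exploited below for edge states.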
If both components of the vector Chern number $(C_{\mathbf{u}}^{x} ; C_{\mathbf{v}}^{y})$ are nonzero integers, under the open boundary condition there exists at least one topological corner state at some fixed modulated phases $(\phi^x,\phi^y)$. We term this relation \emph{bulk-corner correspondence}, which gives a clear topological origin of corner states, i.e., a dimensional reduction of a four-dimensional topological space characterized by the vector Chern number. \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth]{Figure_2.png} \caption{\textbf{The density of states for (a) one-dimensional AAH lattice and (b) two-dimensional AAH lattices.} The finite energy gaps between corner states and extended states inherit from those between edge states and extended states. The corner states are constructed from the edge states; the former has twice the energy of the latter. The local densities of states are shown in the insets: the bulk state and edge state of the one-dimensional AAH lattice are shown in (i) and (ii), respectively; the edge states and corner states of the two-dimensional AAH lattice are shown in (iii, v) and (iv, vi), respectively. The parameters adopted in the simulation are $t_{x(y)}=0.5$, $\lambda_{x(y)}=0.95$, $b_{x(y)}=(\sqrt{5}+1)/2$; the one-dimensional lattice has 15 sites and the two-dimensional lattice has 15$\times$15 sites. DOS: density of states.} \label{f2} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{Figure_3.png} \caption{\textbf{Corner states.} \textbf{(a)} Schematic of the fabricated photonic quasicrystal. Nine input sites are set in the lattice; four waveguides marked as C1, C2, C3 and C4 are designed to observe the corner states. Four waveguides marked as B1, B2, B3 and B4 are designed to observe the edge states. The waveguide marked as T is designed to observe the bulk state. \textbf{(b)} Spectrum of the one-dimensional \textit{off-diagonal} AAH lattice.
For the 16-site lattice, two boundary modes (green lines) cross in the band gap; for the 15-site lattice, only one boundary mode connects the bands separated by the gap. The red dashed lines indicate the $\phi$ adopted in the experiment. \textbf{(c-d)} Measured corner states. The photons are confined at the corners when we excite the lattice corner sites (c). The white dotted squares mark the ranges of the corner states. The quantified results confirm the theoretically predicted topological corner states arising from extending a topological lattice hosting edge states from one dimension to two dimensions.} \label{f3} \end{figure*}
\begin{figure*}[!t] \centering \includegraphics[width=1.0\linewidth]{Figure_4.png} \caption{\textbf{Edge states.} \textbf{(a)} Measured edge states. We inject the photons into the lattices from the input sites B1, B2, B3, and B4, respectively; the photons are localized at the edges of the lattices. \textbf{(b)} The trivial cases. The photons cannot be confined at the edges of the lattices. The white dotted squares mark the ranges of the edge states. \textbf{(c)} The quantified results. The edge states can be conveniently extended from lower dimensions to higher dimensions.} \label{f4} \end{figure*}
For explicitness, we focus on a two-dimensional \textit{off-diagonal} AAH lattice with hopping strengths varying in space, \begin{align} \label{eq1}\nonumber H = &\sum_{i,j} t_x[1+\lambda_x \cos(2\pi b_x i+\phi^{x})]\hat{a}_{i,j}\hat{a}_{i+1, j}^\dagger \\ &+t_y[1+\lambda_y \cos(2\pi b_y j+\phi^{y})]\hat{a}_{i, j}\hat{a}_{i, j+1}^\dagger + H.c., \end{align} where $\hat{a}_{i, j}^\dagger$ ($\hat{a}_{i, j}$) is the creation (annihilation) operator at site ($i, j$), $t_{x(y)}$ are the average coupling strengths, and $\lambda_{x(y)}$, $b_{x(y)}$, and $\phi^{x(y)}$ are the modulation strengths, frequencies, and phases, respectively.
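As an illustration, the following sketch builds the one-dimensional restriction of the coupling matrix in Eq.~\eqref{eq1} for an open chain; the function name and the chiral-symmetry check are ours, not from the text:

```python
import numpy as np

# Sketch (ours) of the 1D off-diagonal AAH coupling matrix in Eq. (1):
# an open chain whose nearest-neighbour hoppings are modulated as
# t [1 + lam cos(2 pi b i + phi)].  Parameter values follow the text.
def aah_offdiag(n_sites, t=0.5, lam=0.95, b=(np.sqrt(5) + 1) / 2, phi=0.0):
    i = np.arange(1, n_sites)  # bond index (our convention)
    hop = t * (1 + lam * np.cos(2 * np.pi * b * i + phi))
    return np.diag(hop, 1) + np.diag(hop, -1)

H = aah_offdiag(15, phi=0.14 * np.pi)
E = np.linalg.eigvalsh(H)
# A purely off-diagonal open chain is bipartite (chiral-symmetric),
# so its spectrum is symmetric about zero.
print(np.allclose(np.sort(E), np.sort(-E)))  # True
```

Stacking such matrices in the $x$ and $y$ directions via the Kronecker-sum construction above yields the two-dimensional lattice of Eq.~\eqref{eq1}.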
In numerical calculations of the vector Chern number, we choose $t_{x(y)}=0.5$, $\lambda_{x(y)}=0.95$, $b_{x(y)}=(\sqrt{5}+1)/2$, and a total of $15\times 15$ sites for the two-dimensional lattice. There are three subsets of bands in each direction, and the vector Chern number takes the values $(1,-2,1;1,-2,1)$, indicating the existence of topological corner states. In each direction, there are three types of one-dimensional topological edge states: one localized at both edges, one localized at the left edge, and one localized at the right edge; see Fig.~\ref{f1}(a). These edge states constitute the basic building blocks for constructing topological corner states in higher dimensions. As shown in Fig.~\ref{f1}(b), we can construct corner states by using the edge states in both the $x$ and $y$ directions (see Supplemental Materials for more details~\cite{SM}). Since the couplings along the $x$ and $y$ directions are independent, the robustness of the corner states inherits that of the one-dimensional edge states in each dimension. Taking the edge states in the $x$ direction as an example, there are energy gaps between edge states and extended states; see Fig.~\ref{f2}(a).
% These topological edge states are robust to considerable disorder, perturbations and long-range couplings that mix the two dimensions, provided the energy gaps remain open.
Hence the constructed corner states share a similar topological protection with the one-dimensional edge states: there are finite energy gaps between corner states and extended states; see Fig.~\ref{f2}(b). Apart from the corner states, products of an edge state in one direction and an extended state in the other direction form a continuum subset which is also separated from the extended states. This approach can be naturally generalized to construct corner states in a three-dimensional periodically modulated lattice, see Fig.~\ref{f1}(c), along with hinge and surface states (see Supplemental Materials for more details~\cite{SM}).
Moreover, when $b_x=b_y=1/2$, the above model reduces to a two-dimensional Su-Schrieffer-Heeger (SSH) lattice, where the couplings are staggered in both the $x$ and $y$ directions~\cite{TPRL118}. Indeed, topological corner states have been predicted with an alternative theory of two-dimensional polarization~\cite{T20201,T20181} and observed in experiments on photonic crystal slabs~\cite{Es1,Ew2,Ew3}. The variation of the two-dimensional polarization~\cite{TPRL118} can give rise to topological charge pumping in two dimensions, which is characterized by the vector Chern number. However, on the one hand, these corner states can be more easily and naturally understood in our theoretical framework, namely as the product of two edge states in the $x$ and $y$ directions. On the other hand, in contrast to the two-dimensional polarization, which relies on spatial symmetries, our theory of the vector Chern number can also predict corner states without requiring any fine-tuning of symmetries. \section{Experimental implementation} In experiment, we first realize the Hamiltonian~\eqref{eq1} in a two-dimensional array of waveguides with modulated spacing. The site number is 15 or 16 in both the $x$ and $y$ directions; the average coupling strength $t_x=t_y=t$ is set to 0.3 for photons with a wavelength of 810 nm; the modulation amplitude $\lambda_x=\lambda_y=\lambda$ is set to 0.5; the periodic parameter $b_x=b_y=b$ is $(\sqrt5+1)/2$; and the initial phases $\phi^{x}$ and $\phi^{y}$ are set to the same value. We fabricate the photonic waveguide lattices according to the Hamiltonian using the femtosecond laser direct-writing technique~\cite{fabri_1, fabri_2, fabri_3, fabri_4, PIT_Gap}. As shown in Fig.~\ref{f3}(a), the propagation direction of the waveguide lattice maps to the evolution time; hence the fabricated three-dimensional waveguide lattice realizes the designed two-dimensional \textit{off-diagonal} AAH lattice.
We further set nine sites for injecting the photons, including four corner sites labeled C1 to C4, four boundary sites labeled B1 to B4, and one site in the lattice center labeled T. According to the theoretical prediction, a corner state appears when we extend a one-dimensional topological lattice with an edge state to a two-dimensional lattice, and the corresponding topological origin is also extended to higher dimensions. As shown in Fig.~\ref{f3}(b), there are edge states at both ends of the 16-site one-dimensional \textit{off-diagonal} AAH lattice with initial phase $\phi$ = 0.14$\pi$. We fabricate the two-dimensional \textit{off-diagonal} AAH lattice with initial phases $\phi^{x}=\phi^{y}$ = 0.14$\pi$ to demonstrate the predicted corner states. We inject photons with a wavelength of 810 nm into the lattice from the four corners, respectively; the photons will be confined at the excited lattice corners if the theoretically predicted higher-order corner states exist. As shown in Fig.~\ref{f3}(c), the photon output distributions after a 40 mm evolution distance are localized within the white dotted squares, which mark the ranges of the corner states. In Fig.~\ref{f3}(d), we give the quantified results for the measured corner states, calculated as \begin{equation} \xi_{p,q}=\sum_{i,j}\hat{a}_{i,j}^\dagger\hat{a}_{i,j} \quad (|i-p|\leq l, |j-q|\leq l), \end{equation} where $(p, q)$ denotes the excited site indices, and $l$ sets the range of the corner states, adopted as $l=3$. In contrast to the 16$\times$16-site lattice with $\phi=0.14\pi$, there is no corner state for the case of $\phi=0.75\pi$, and the photons flow out of the corner-state range. This is because there is no edge state in the one-dimensional AAH lattice for $\phi=0.75\pi$. Furthermore, we fabricate three two-dimensional 15$\times$15-site lattices with phases $\phi=0.14\pi$, 0.75$\pi$, and 1.25$\pi$, respectively.
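A hypothetical implementation of the quantifier $\xi_{p,q}$, reading $\hat{a}_{i,j}^\dagger\hat{a}_{i,j}$ as the measured output intensity at site $(i,j)$; the array name and the toy data below are our assumptions, not from the text:

```python
import numpy as np

# Hypothetical sketch of the quantifier xi_{p,q}: the measured output
# intensity (standing in for <a†_{ij} a_{ij}>) summed over the
# (2l+1) x (2l+1) window around the excited site (p, q); l = 3 as in
# the text.  `intensity` and the toy data are our assumptions.
def corner_weight(intensity, p, q, l=3):
    return intensity[max(p - l, 0):p + l + 1,
                     max(q - l, 0):q + l + 1].sum()

intensity = np.zeros((15, 15))
intensity[0, 0] = 1.0  # all light trapped on the corner site
print(corner_weight(intensity, 0, 0))    # 1.0
print(corner_weight(intensity, 14, 14))  # 0.0
```

For a normalized output distribution, a value close to 1 indicates that the light stays within the corner-state range.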
There is only a left (right) edge state for the one-dimensional lattice with $\phi=0.14\pi$ (1.25$\pi$); therefore the corner state can only be observed by exciting the lattice from input C2 (C4). Similar to the 16$\times$16-site lattice, there is no corner state for the case of $\phi=$ 0.75$\pi$. We excite the lattices from inputs C2 and C4, respectively, and measure the photon output distributions; the quantified results, together with those for the 16$\times$16-site lattices, confirm the theoretical prediction of topological corner states arising from extending a topological lattice with edge states from one dimension to two dimensions. The corner states appearing in the two-dimensional lattice require the combination of one-dimensional lattices possessing edge states in both the $x$ and $y$ directions. In contrast, the edge states in higher-dimensional lattices, as products of an edge state in one direction and an extended state in the other direction, can be naturally extended from the edge states in lower-dimensional lattices. As shown in Fig.~\ref{f4}(a), we inject photons into the 16$\times$16-site lattice with $\phi$ = 0.14$\pi$ from the inputs B1, B2, B3 and B4, respectively; the photons are confined at the boundaries. For the case of $\phi$ = 0.75$\pi$, there is no edge state that can be extended, so the photons flow out of the boundary ranges, as shown in Fig.~\ref{f4}(b) for the cases of B2 and B3. The quantified results in Fig.~\ref{f4}(c) show the observed edge states extended from the one-dimensional lattices. Intuitively, edge states are extended from points to lines when the topological lattices are extended from one dimension to two dimensions. In conclusion, we present a theoretical explanation of the topological origin of higher-order topological phases in higher-dimensional lattices, connecting them to topological phases in lower-dimensional lattices.
We experimentally observe the theoretically predicted higher-order topological corner states in two-dimensional \textit{off-diagonal} AAH lattices. Our model intuitively explains the connection between topological phases in lattices of different dimensions, applies universally to various models, and promises to be a convenient and practical tool for constructing higher-order topological phases in higher-dimensional lattices.\\ \begin{acknowledgments} The authors thank Yidong Chong and Jian-Wei Pan for helpful discussions. X.-M.J. is supported by National Key R\&D Program of China (2019YFA0308700, 2017YFA0303700), National Natural Science Foundation of China (11761141014, 61734005, 11690033), Science and Technology Commission of Shanghai Municipality (17JC1400403), Shanghai Municipal Education Commission (2017-01-07-00-02-E00049). X.-M.J. acknowledges additional support from a Shanghai talent program. C. Lee is supported by the Key-Area Research and Development Program of GuangDong Province (2019B030330001), the National Natural Science Foundation of China (11874434, 11574405), and the Science and Technology Program of Guangzhou (201904020024). Y. K. is partially supported by the Office of China Postdoctoral Council (20180052), the National Natural Science Foundation of China (11904419), and the Australian Research Council (DP200101168). \end{acknowledgments}
\section{Introduction}\label{sec:introduction}\input{introduction} \section{Background}\label{sec:background}\input{background} \section{Nominal regular $\omega$-languages}\label{sec:languages}\input{languages} \section{Finite automata}\label{sec:hd-automata} \input{hd-automata} \section{Synchronized product}\label{sec:sync-product} \input{synchronized-product} \section{Boolean operations and decidability}\label{sec:boolean-operations-decidability}\input{boolean-operations-decidability} \section{Ultimately-periodic words}\label{sec:up-words}\input{up-words} \section{Conclusions}\label{sec:conclusions} \input{future-work} \input{related-work} \paragraph{Acknowledgements.} The authors thank Nikos Tzevelekos, Emilio Tuosto and Gianluca Mezzetti for several fruitful discussions related to nominal automata. \bibliographystyle{splncs}
\section{Introduction and the main result}\label{sec:intro} Let $A$ be a self-adjoint possibly unbounded operator on a separable Hilbert space $\cH$ such that the spectrum of $A$ is separated into two disjoint components, that is, \begin{equation}\label{eq:specSep} \spec(A)=\sigma\cup\Sigma\quad\text{ with }\quad d:=\dist(\sigma,\Sigma)>0\,. \end{equation} Let $V$ be a bounded self-adjoint operator on $\cH$. It is well known (see, e.g., \cite[Theorem V.4.10]{Kato66}) that the spectrum of the perturbed self-adjoint operator $A+V$ is confined in the closed $\norm{V}$-neighbourhood of the spectrum of the unperturbed operator $A$, that is, \begin{equation}\label{eq:specPert} \spec(A+V)\subset \overline{\cO_{\norm{V}}\bigl(\spec(A)\bigr)}\,, \end{equation} where $\cO_{\norm{V}}\bigl(\spec(A)\bigr)$ denotes the open $\norm{V}$-neighbourhood of $\spec(A)$. In particular, if \begin{equation}\label{eq:pertNormBound} \norm{V} < \frac{d}{2}\,, \end{equation} then the spectrum of the operator $A+V$ is likewise separated into two disjoint components $\omega$ and $\Omega$, where \[ \omega=\spec(A+V)\cap \cO_{d/2}(\sigma)\quad \text{ and }\quad \Omega=\spec(A+V)\cap \cO_{d/2}(\Sigma)\,. \] Therefore, under condition \eqref{eq:pertNormBound}, the two components of the spectrum of $A+V$ can be interpreted as perturbations of the corresponding original spectral components $\sigma$ and $\Sigma$ of $\spec(A)$. Clearly, the condition \eqref{eq:pertNormBound} is sharp in the sense that if $\norm{V}\ge d/2$, the spectrum of the perturbed operator $A+V$ may not have separated components at all. The effect of the additive perturbation $V$ on the spectral subspaces for $A$ is studied in terms of the corresponding spectral projections. Let $\EE_A(\sigma)$ and $\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)$ denote the spectral projections for $A$ and $A+V$ associated with the Borel sets $\sigma$ and $\cO_{d/2}(\sigma)$, respectively. 
It is well known that $\norm{\EE_A(\sigma)-\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)}\le 1$ since the corresponding inequality holds for every difference of orthogonal projections in $\cH$, see, e.g., \cite[Section 34]{AG93}. Moreover, if \begin{equation}\label{eq:projAcute} \norm{\EE_A(\sigma)-\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)} < 1\,, \end{equation} then the spectral projections $\EE_A(\sigma)$ and $\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)$ are unitarily equivalent, see, e.g., \cite[Theorem I.6.32]{Kato66}. In this sense, if inequality \eqref{eq:projAcute} holds, the spectral subspace $\Ran\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)$ can be understood as a rotation of the unperturbed spectral subspace $\Ran\EE_A(\sigma)$. The quantity \[ \arcsin\bigl(\norm{\EE_A(\sigma) - \EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)}\bigr) \] serves as a measure for this rotation and is called the \emph{maximal angle} between the spectral subspaces $\Ran\EE_A(\sigma)$ and $\Ran\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)$. A short survey on the concept of the maximal angle between closed subspaces of a Hilbert space can be found in \cite[Section 2]{AM13}; see also \cite{DK70}, \cite[Theorem 2.2]{KMM03:2}, \cite[Section 2]{Seel13}, and references therein. It is a natural question whether the bound \eqref{eq:pertNormBound} is sufficient for inequality \eqref{eq:projAcute} to hold, or if one has to impose a stronger bound on the norm of the perturbation $V$ in order to ensure \eqref{eq:projAcute}. Basically, the following two problems arise: \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \item What is the best possible constant $c_\text{opt}\in\bigl(0,\frac{1}{2}\bigr]$ such that \[ \arcsin\bigl(\norm{\EE_A(\sigma)-\EE_{A+V}(\cO_{d/2}\bigl(\sigma)\bigr)}\bigr)<\frac{\pi}{2}\quad\text{ whenever }\quad \norm{V}<c_\text{opt}\cdot d\ ? 
\] \item Which function $f\colon[0,c_\text{opt})\to\bigl[0,\frac{\pi}{2}\bigr)$ is best possible in the estimate \[ \arcsin\bigl(\norm{\EE_A(\sigma)-\EE_{A+V}(\cO_{d/2}\bigl(\sigma)\bigr)}\bigr) \le f\biggl(\frac{\norm{V}}{d}\biggr)\,, \quad \norm{V}<c_\text{opt}\cdot d\ ? \] \end{enumerate} Both the constant $c_{\mathrm{opt}}$ and the function $f$ are supposed to be universal in the sense that they are independent of the operators $A$ and $V$. Note that we have made no assumptions on the disposition of the spectral components $\sigma$ and $\Sigma$ other than \eqref{eq:specSep}. If, for example, $\sigma$ and $\Sigma$ are additionally assumed to be subordinated, that is, $\sup\sigma<\inf\Sigma$ or vice versa, or if one of the two sets lies in a finite gap of the other one, then the corresponding best possible constant in problem (i) is known to be $\frac{1}{2}$, and the best possible function $f$ in problem (ii) is given by $f(x)=\frac{1}{2}\arcsin\bigl(2x\bigr)$, see, e.g., \cite[Lemma 2.3]{KMM03} and \cite[Theorem 5.1]{Davis63}; see also \cite[Remark 2.9]{Seel13}. However, under the sole assumption \eqref{eq:specSep}, both problems are still unsolved. It has been conjectured that $c_\text{opt}=\frac{1}{2}$ (see \cite{AM13}; cf.\ also \cite{KMM03} and \cite{KMM07}), but there is no proof available for that yet. So far, only lower bounds on the optimal constant $c_\text{opt}$ and upper bounds on the best possible function $f$ can be given. For example, in \cite[Theorem 1]{KMM03} it was shown that \[ c_{\mathrm{opt}} \ge \frac{2}{2+\pi}=0.3889845\ldots \] and \begin{equation}\label{eq:KMM} f(x) \le \arcsin\Bigl(\frac{\pi}{2}\,\frac{x}{1-x}\Bigr)<\frac{\pi}{2}\quad\text{ for }\quad 0\le x < \frac{2}{2+\pi}\,. 
\end{equation} In \cite[Theorem 6.1]{MS10} this result was strengthened to \[ c_{\mathrm{opt}} \ge \frac{\sinh(1)}{\exp(1)}=0{.}4323323\ldots \] and \begin{equation}\label{eq:MS} f(x) \le \frac{\pi}{4}\log\Bigl(\frac{1}{1-2x}\Bigr) < \frac{\pi}{2}\quad\text{ for }\quad 0\le x <\frac{\sinh(1)}{\exp(1)}\,. \end{equation} Recently, Albeverio and Motovilov have shown in \cite[Theorem 5.4]{AM13} that \begin{equation}\label{eq:AMConst} c_\text{opt} \ge c_* = 16\,\frac{\pi^6-2\pi^4+32\pi^2-32}{(\pi^2+4)^4} = 0{.}4541692\ldots \end{equation} and \begin{equation}\label{eq:AM} f(x) \le M_*(x)<\frac{\pi}{2}\quad \text{ for }\quad 0\le x < c_*\,, \end{equation} where \begin{equation}\label{eq:AMFunc} M_*(x)= \begin{cases} \frac{1}{2}\arcsin(\pi x) & \text{for}\quad 0\le x\le \frac{4}{\pi^2+4}\,,\\[0.1cm] \frac{1}{2}\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr) + \frac{1}{2}\arcsin\Bigl(\pi\,\frac{(\pi^2+4)x-4} {\pi^2-4}\Bigr) & \text{for}\quad \frac{4}{\pi^2+4} < x \le \frac{8\pi^2}{(\pi^2+4)^2}\,,\\[0.2cm] \arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr) + \frac{1}{2}\arcsin\Bigl(\pi\,\frac{(\pi^2+4)^2x-8\pi^2} {(\pi^2-4)^2}\Bigr) & \text{for}\quad \frac{8\pi^2}{(\pi^2+4)^2} < x \le c_*\,. \end{cases} \end{equation} It should be noted that the first two results \eqref{eq:KMM} and \eqref{eq:MS} were originally formulated in \cite{KMM03} and \cite{MS10}, respectively, only for the case where the operator $A$ is assumed to be bounded. However, both results admit an immediate, straightforward generalization to the case where the operator $A$ is allowed to be unbounded, see, e.g., \cite[Proposition 3.4 and Theorem 3.5]{AM13}. The aim of the present work is to sharpen the estimate \eqref{eq:AM}. More precisely, our main result is as follows. 
\begin{introtheorem}\label{thm:mainResult} Let $A$ be a self-adjoint operator on a separable Hilbert space $\cH$ such that the spectrum of $A$ is separated into two disjoint components, that is, \[ \spec(A)=\sigma\cup\Sigma\quad\text{ with }\quad d:=\dist(\sigma,\Sigma)>0\,. \] Let $V$ be a bounded self-adjoint operator on $\cH$ satisfying \[ \norm{V} < c_\mathrm{crit}\cdot d \] with \[ c_\mathrm{crit}=\frac{1-\bigl(1-\frac{\sqrt{3}}{\pi}\bigr)^3}{2}=3\sqrt{3}\,\frac{\pi^2-\sqrt{3}\pi+1}{2\pi^3}= 0{.}4548399\ldots \] Then, the spectral projections $\EE_A(\sigma)$ and $\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)$ for the self-adjoint operators $A$ and $A+V$ associated with $\sigma$ and the open $\frac{d}{2}$-neighbourhood $\cO_{d/2}(\sigma)$ of $\sigma$, respectively, satisfy the estimate \begin{equation}\label{eq:mainResult} \arcsin\bigl(\norm{\EE_A({\sigma})-\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)}\bigr) \le N\biggl(\frac{\norm{V}}{d}\biggr) < \frac{\pi}{2}\,, \end{equation} where the function $N\colon[0,c_\mathrm{crit}]\to\bigl[0,\frac{\pi}{2}\bigr]$ is given by \begin{equation}\label{eq:mainResultFunc} N(x) = \begin{cases} \frac{1}{2}\arcsin(\pi x) & \text{ for }\quad 0\le x\le \frac{4}{\pi^2+4}\,,\\[0.15cm] \arcsin\Bigl(\sqrt{\frac{2\pi^2x-4}{\pi^2-4}}\,\Bigr) & \text{ for }\quad \frac{4}{\pi^2+4} < x < 4\,\frac{\pi^2-2} {\pi^4}\,,\\[0.15cm] \arcsin\bigl(\frac{\pi}{2}(1-\sqrt{1-2x}\,)\bigr) & \text{ for }\quad 4\,\frac{\pi^2-2}{\pi^4} \le x \le \kappa\,,\\[0.15cm] \frac{3}{2}\arcsin\bigl(\frac{\pi}{2}(1-\sqrt[\leftroot{4}3]{1-2x}\,)\bigr) & \text{ for }\quad \kappa < x \le c_\mathrm{crit}\,. 
\end{cases} \end{equation} Here, $\kappa\in\bigl(4\frac{\pi^2-2}{\pi^4},2\frac{\pi-1}{\pi^2}\bigr)$ is the unique solution to the equation \begin{equation}\label{eq:kappa} \arcsin\Bigl(\frac{\pi}{2}\bigl(1-\sqrt{1-2\kappa}\,\bigr)\Bigr)=\frac{3}{2}\arcsin\Bigl(\frac{\pi}{2} \bigl(1-\sqrt[\leftroot{4}3]{1-2\kappa}\,\bigr)\Bigr) \end{equation} in the interval $\bigl(0,2\frac{\pi-1}{\pi^2}\bigr]$. The function $N$ is strictly increasing, continuous on $[0,c_\mathrm{crit}]$, and continuously differentiable on $(0,c_\mathrm{crit})\setminus\{\kappa\}$. \end{introtheorem} Numerical calculations give $\kappa=0{.}4098623\ldots$ The estimate \eqref{eq:mainResult} in Theorem \ref{thm:mainResult} remains valid if the constant $\kappa$ in the definition of the function $N$ is replaced by any other constant within the interval $\bigl(4\frac{\pi^2-2}{\pi^4},2\frac{\pi-1}{\pi^2}\bigr)$, see Remark \ref{rem:kappaRepl} below. However, the particular choice \eqref{eq:kappa} ensures that the function $N$ is continuous and as small as possible. In particular, we have $N(x) = M_*(x)$ for $0\le x\le\frac{4}{\pi^2+4}$ and \[ N(x) < M_*(x)\quad\text{ for }\quad \frac{4}{\pi^2+4} < x \le c_*\,, \] where $c_*$ and $M_*$ are given by \eqref{eq:AMConst} and \eqref{eq:AMFunc} respectively, see Remark \ref{rem:estOptimality} below. From Theorem \ref{thm:mainResult} we immediately deduce that \[ c_{\mathrm{opt}}\ge c_\mathrm{crit} > c_* \] and \[ f(x) \le N(x)<\frac{\pi}{2}\quad\text{ for }\quad 0\le x< c_\mathrm{crit}\,. \] Both are the best respective bounds for the two problems (i) and (ii) known so far. 
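The constants in Theorem \ref{thm:mainResult} can be reproduced numerically. The sketch below (ours, not part of the proof) evaluates $c_\mathrm{crit}$ from its closed form and solves equation \eqref{eq:kappa} for $\kappa$ by bisection; the bracketing interval follows the theorem, while the solver itself is our choice:

```python
import math

# Sketch (ours): reproduce the constants of Theorem A.
c_crit = (1 - (1 - math.sqrt(3) / math.pi) ** 3) / 2  # closed form

def gap(x):
    """Difference between the two sides of the equation defining kappa."""
    lhs = math.asin(math.pi / 2 * (1 - math.sqrt(1 - 2 * x)))
    rhs = 1.5 * math.asin(math.pi / 2 * (1 - (1 - 2 * x) ** (1 / 3)))
    return lhs - rhs

# Bisection on the bracket (4(pi^2-2)/pi^4, 2(pi-1)/pi^2) from the theorem.
a = 4 * (math.pi ** 2 - 2) / math.pi ** 4
b = 2 * (math.pi - 1) / math.pi ** 2
for _ in range(100):
    mid = 0.5 * (a + b)
    if gap(a) * gap(mid) <= 0:
        b = mid
    else:
        a = mid
kappa = 0.5 * (a + b)
print(f"c_crit = {c_crit:.7f}, kappa = {kappa:.7f}")
```

The printed values agree with $c_\mathrm{crit}=0{.}4548399\ldots$ and $\kappa=0{.}4098623\ldots$ stated above.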
The paper is organized as follows: In Section \ref{sec:optProb}, based on the triangle inequality for the maximal angle and a suitable a priori rotation bound for small perturbations (see Proposition \ref{prop:genRotBound}), we formulate a constrained optimization problem, whose solution provides an estimating function for the maximal angle between the corresponding spectral subspaces, see Definition \ref{def:optProb}, Proposition \ref{prop:mainEstimate}, and Theorem \ref{thm:solOptProb}. In this way, the approach by Albeverio and Motovilov in \cite{AM13} is optimized and, in particular, a proof of Theorem \ref{thm:mainResult} is obtained. The explicit solution to the optimization problem is given in Theorem \ref{thm:solOptProb}, which is proved in Section \ref{sec:solOptProb}. The technique used there involves variational methods and may also be useful for solving optimization problems of a similar structure. Finally, Appendix \ref{app:sec:inequalities} is devoted to some elementary inequalities used in Section \ref{sec:solOptProb}. \section{An optimization problem}\label{sec:optProb} In this section, we formulate a constrained optimization problem, whose solution provides an estimate on the maximal angle between the spectral subspaces associated with isolated parts of the spectrum of the corresponding perturbed and unperturbed operators, respectively. In particular, this yields a proof of Theorem \ref{thm:mainResult}. We make the following notational setup. \begin{hypothesis}\label{app:hypHyp} Let $A$ be as in Theorem \ref{thm:mainResult}, and let $V\neq0$ be a bounded self-adjoint operator on the Hilbert space $\cH$. For $0\le t<\frac{1}{2}$, introduce $B_t:=A+td\,\frac{V}{\norm{V}}$, $\Dom(B_t):=\Dom(A)$, and denote by $P_t:=\EE_{B_t}\bigl(\cO_{d/2}(\sigma)\bigr)$ the spectral projection for $B_t$ associated with the open $\frac{d}{2}$-neighbourhood $\cO_{d/2}(\sigma)$ of $\sigma$. 
\end{hypothesis} Under Hypothesis \ref{app:hypHyp}, one has $\norm{B_t-A}=td<\frac{d}{2}$ for $0\le t<\frac{1}{2}$. Taking into account the inclusion \eqref{eq:specPert}, the spectrum of each $B_t$ is likewise separated into two disjoint components, that is, \[ \spec(B_t) = \omega_t\cup\Omega_t\quad\text{ for }\quad 0\le t<\frac{1}{2}\,, \] where \[ \omega_t=\spec(B_t)\cap\overline{\cO_{td}(\sigma)}\quad\text{ and }\quad \Omega_t=\spec(B_t)\cap\overline{\cO_{td}(\Sigma)}\,. \] In particular, one has \begin{equation}\label{eq:specGapBound} \delta_t:=\dist(\omega_t,\Omega_t) \ge (1-2t)d>0 \quad\text{ for }\quad 0\le t<\frac{1}{2}\,. \end{equation} Moreover, the mapping $\bigl[0,\frac{1}{2}\bigr)\ni t\mapsto P_t$ is norm continuous, see, e.g., \cite[Theorem 3.5]{AM13}; cf.\ also the forthcoming estimate \eqref{eq:localKMM}. For arbitrary $0\le r\le s<\frac{1}{2}$, we can consider $B_s=B_r+(s-r)d\frac{V}{\norm{V}}$ as a perturbation of $B_r$. Taking into account the a priori bound \eqref{eq:specGapBound}, we then observe that \begin{equation}\label{eq:localPert} \frac{\norm{B_s-B_r}}{\delta_r} = \frac{(s-r)d}{\dist(\omega_r,\Omega_r)}\le\frac{s-r}{1-2r} < \frac{1}{2} \quad\text{ for }\quad 0\le r\le s<\frac{1}{2}\,. \end{equation} Furthermore, it follows from \eqref{eq:localPert} and the inclusion \eqref{eq:specPert} that $\omega_s$ is exactly the part of $\spec(B_s)$ that is contained in the open $\frac{\delta_r}{2}$-neighbourhood of $\omega_r$, that is, \begin{equation}\label{eq:localSpecPert} \omega_s = \spec(B_s)\cap\cO_{\delta_r/2}(\omega_r) \quad\text{ for }\quad 0\le r\le s<\frac{1}{2}\,. \end{equation} Let $t\in\bigl(0,\frac{1}{2}\bigr)$ be arbitrary, and let $0=t_0<t_1<\dots<t_{n+1}=t$ with $n\in\N_0$ be a finite partition of the interval $[0,t]$. Define \begin{equation}\label{eq:defLambda} \lambda_j := \frac{t_{j+1}-t_j}{1-2t_j}<\frac{1}{2}\,,\quad j=0,\dots,n\,. 
\end{equation} Recall that the mapping $\rho$ given by \begin{equation}\label{eq:metric} \rho(P,Q)=\arcsin\bigl(\norm{P-Q}\bigr)\quad\text{ with }\quad P,Q\ \text{ orthogonal projections in }\cH\,, \end{equation} defines a metric on the set of orthogonal projections in $\cH$, see \cite{Brown93}, and also \cite[Lemma 2.15]{AM13} and \cite{MS10}. Using the triangle inequality for this metric, we obtain \begin{equation}\label{eq:projTriangle} \arcsin\bigl(\norm{P_0-P_t}\bigr) \le \sum_{j=0}^n \arcsin\bigl(\norm{P_{t_j}-P_{t_{j+1}}}\bigr)\,. \end{equation} Considering $B_{t_{j+1}}$ as a perturbation of $B_{t_j}$, it is clear from \eqref{eq:localPert} and \eqref{eq:localSpecPert} that each summand of the right-hand side of \eqref{eq:projTriangle} can be treated in the same way as the maximal angle in the general situation discussed in Section \ref{sec:intro}. For example, combining \eqref{eq:localPert}--\eqref{eq:defLambda} with the bound \eqref{eq:KMM} yields \begin{equation}\label{eq:localKMM} \norm{P_{t_j}-P_{t_{j+1}}} \le \frac{\pi}{2}\,\frac{\lambda_j}{1-\lambda_j} = \frac{\pi}{2}\,\frac{t_{j+1}-t_j}{1-t_j-t_{j+1}} \le \frac{\pi}{2}\,\frac{t_{j+1}-t_j}{1-2t_{j+1}}\,,\quad j=0,\dots,n\,, \end{equation} where we have taken into account that $\norm{P_{t_j}-P_{t_{j+1}}}\le1$ and that $\frac{\pi}{2}\frac{\lambda_j}{1-\lambda_j}\ge 1$ if $\lambda_j\ge\frac{2}{2+\pi}$. Obviously, the estimates \eqref{eq:projTriangle} and \eqref{eq:localKMM} hold for arbitrary finite partitions of the interval $[0,t]$. In particular, if partitions with arbitrarily small mesh size are considered, then, as a result of $\frac{t_{j+1}-t_j}{1-2t_{j+1}}\le\frac{t_{j+1}-t_j}{1-2t}$, the norm of each corresponding projector difference in \eqref{eq:localKMM} is arbitrarily small as well. At the same time, the corresponding Riemann sums \[ \sum_{j=0}^n \frac{t_{j+1}-t_j}{1-2t_{j+1}} \] are arbitrarily close to the integral $\int_0^t\frac{1}{1-2\tau}\,\dd\tau$. 
Since $\frac{\arcsin(x)}{x}\to1$ as $x\to0$, we conclude from \eqref{eq:projTriangle} and \eqref{eq:localKMM} that \[ \arcsin\bigl(\norm{P_0-P_t}\bigr) \le \frac{\pi}{2}\int_0^t \frac{1}{1-2\tau}\,\dd\tau= \frac{\pi}{4}\log\Bigl(\frac{1}{1-2t}\Bigr)\,. \] Once the bound \eqref{eq:KMM} has been generalized to the case where the operator $A$ is allowed to be unbounded, this argument is an easy and straightforward way to prove the bound \eqref{eq:MS}. Albeverio and Motovilov demonstrated in \cite{AM13} that a stronger result can be obtained from \eqref{eq:projTriangle}. They considered a specific finite partition of the interval $[0,t]$ and used a suitable a priori bound (see \cite[Corollary 4.3 and Remark 4.4]{AM13}) to estimate the corresponding summands of the right-hand side of \eqref{eq:projTriangle}. This a priori bound, which is related to the Davis-Kahan $\sin2\Theta$ theorem from \cite{DK70}, is used in the present work as well. We therefore state the corresponding result in the following proposition for future reference. It should be noted that our formulation of the statement slightly differs from the original one in \cite{AM13}. A justification of this modification, as well as a deeper discussion on the material including an alternative, straightforward proof of the original result \cite[Corollary 4.3]{AM13}, can be found in \cite{Seel13}. \begin{proposition}[{\cite[Corollary 2]{Seel13}}]\label{prop:genRotBound} Let $A$ and $V$ be as in Theorem \ref{thm:mainResult}. If $\norm{V}\le \frac{d}{\pi}$, then the spectral projections $\EE_{A}(\sigma)$ and $\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)$ for the self-adjoint operators $A$ and $A+V$ associated with the Borel sets $\sigma$ and $\cO_{d/2}(\sigma)$, respectively, satisfy the estimate \[ \arcsin\bigl(\norm{\EE_A(\sigma)-\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)}\bigr) \le \frac{1}{2}\arcsin\Bigl(\frac{\pi}{2}\cdot 2\,\frac{\norm{V}}{d}\Bigr)\le\frac{\pi}{4}\,. 
\] \end{proposition} The estimate given by Proposition \ref{prop:genRotBound} is universal in the sense that the estimating function $x\mapsto\frac{1}{2}\arcsin(\pi x)$ depends neither on the unperturbed operator $A$ nor on the perturbation $V$. Moreover, for perturbations $V$ satisfying $\norm{V}\le\frac{4}{\pi^2+4}\,d$, this a priori bound on the maximal angle between the corresponding spectral subspaces is the strongest one available so far, cf.\ \cite[Remark 5.5]{AM13}. Assume that the given partition of the interval $[0,t]$ additionally satisfies \begin{equation}\label{eq:seqParamCond} \lambda_j = \frac{t_{j+1}-t_j}{1-2t_j}\le\frac{1}{\pi}\,,\quad j=0,\dots,n\,. \end{equation} In this case, it follows from \eqref{eq:localPert}, \eqref{eq:localSpecPert}, \eqref{eq:projTriangle}, and Proposition \ref{prop:genRotBound} that \begin{equation}\label{eq:essEstimate} \arcsin\bigl(\norm{P_0-P_t}\bigr) \le \frac{1}{2}\sum_{j=0}^n \arcsin(\pi\lambda_j)\,. \end{equation} Along with a specific choice of the partition of the interval $[0,t]$, estimate \eqref{eq:essEstimate} is the essence of the approach by Albeverio and Motovilov in \cite{AM13}. In the present work, we optimize the choice of the partition of the interval $[0,t]$, so that for every fixed parameter $t$ the right-hand side of inequality \eqref{eq:essEstimate} is minimized. An equivalent and more convenient reformulation of this approach is to maximize the parameter $t$ in estimate \eqref{eq:essEstimate} over all possible choices of the parameters $n$ and $\lambda_j$ for which the right-hand side of \eqref{eq:essEstimate} takes a fixed value. Obviously, we can generalize estimate \eqref{eq:essEstimate} to the case where the finite sequence $(t_j)_{j=1}^n$ is allowed to be just increasing and not necessarily strictly increasing. Altogether, this motivates the following considerations. 
\begin{definition}\label{def:params} For $n\in\N_0$ define \[ D_n:=\Bigl\{ (\lambda_j)\in l^1(\N_0) \Bigm| 0\le \lambda_j\le\frac{1}{\pi}\ \ \text{ for }\ \ j\le n\ \ \text{ and }\ \ \lambda_j=0\ \ \text{ for }\ \ j\ge n+1 \Bigr\}\,, \] and let $D:=\bigcup_{n\in\N_0} D_n$. \end{definition} Every finite partition of the interval $[0,t]$ that satisfies condition \eqref{eq:seqParamCond} is related to a sequence in $D$ in the obvious way. Conversely, the following lemma makes it possible to recover the finite partition of the interval $[0,t]$ from this sequence. \begin{lemma}\label{lem:seq} \hspace*{2cm} \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item For every $x\in\bigl[0,\frac{1}{2}\bigr)$ the mapping $\bigl[0,\frac{1}{2}\bigr]\ni t\mapsto t+x(1-2t)$ is strictly increasing. \item For every $\lambda=(\lambda_j)\in D$ the sequence $(t_j)\subset\R$ given by the recursion \begin{equation}\label{eq:seqDef} t_{j+1} = t_j + \lambda_j (1-2t_j)\,,\quad j\in\N_0\,,\quad t_0=0\,, \end{equation} is increasing and satisfies $0\le t_j<\frac{1}{2}$ for all $j\in\N_0$. Moreover, one has $t_j=t_{n+1}$ for $j\ge n+1$ if $\lambda\in D_n$. In particular, $(t_j)$ is eventually constant. \end{enumerate} \begin{proof} The proof of claim (a) is straightforward and is hence omitted. For the proof of (b), let $\lambda=(\lambda_j)\in D$ be arbitrary and let $(t_j)\subset\R$ be given by \eqref{eq:seqDef}. Observe that $t_0=0<\frac{1}{2}$ and that (a) implies that \[ 0 \le t_{j+1}=t_j + \lambda_j(1-2t_j) < \frac{1}{2} + \lambda_j\Bigl(1-2\cdot\frac{1}{2}\Bigr) = \frac{1}{2} \quad\text{ if }\quad 0\le t_j<\frac{1}{2}\,. \] Thus, the two-sided estimate $0\le t_j<\frac{1}{2}$ holds for all $j\in\N_0$ by induction. In particular, it follows that $t_{j+1}-t_j=\lambda_j(1-2t_j)\ge 0$ for all $j\in\N_0$, that is, the sequence $(t_j)$ is increasing. Let $n\in\N_0$ be such that $\lambda\in D_n$.
Since $\lambda_j=0$ for $j\ge n+1$, it follows from the definition of $(t_j)$ that $t_{j+1}=t_j$ for $j\ge n+1$, that is, $t_j=t_{n+1}$ for $j\ge n+1$. \end{proof} \end{lemma} It follows from part (b) of the preceding lemma that for every $\lambda\in D$ the sequence $(t_j)$ given by \eqref{eq:seqDef} yields a finite partition of the interval $[0,t]$ with $t=\max_{j\in\N_0}t_j<\frac{1}{2}$. In this respect, the approach of optimizing the parameter $t$ in \eqref{eq:essEstimate} with a fixed right-hand side can now be formalized in the following way. \begin{definition}\label{def:optProb} Let $W\colon D\to l^\infty(\N_0)$ denote the (non-linear) operator that maps every sequence in $D$ to the corresponding increasing and eventually constant sequence given by the recursion \eqref{eq:seqDef}. Moreover, let $M\colon\bigl[0,\frac{1}{\pi}\bigr]\to\bigl[0,\frac{\pi}{4}\bigr]$ be given by \[ M(x):=\frac{1}{2}\arcsin(\pi x)\,. \] Finally, for $\theta\in\bigl[0,\frac{\pi}{2}\bigr]$ define \[ D(\theta):=\biggl\{(\lambda_j)\in D \biggm| \sum_{j=0}^\infty M(\lambda_j)=\theta\biggr\}\subset D \] and \begin{equation}\label{eq:optProb} T(\theta):=\sup\bigl\{\max W(\lambda) \bigm| \lambda\in D(\theta)\bigr\}\,, \end{equation} where $\max W(\lambda):=\max_{j\in\N_0}t_j$ with $(t_j)=W(\lambda)$. \end{definition} For every fixed $\theta\in\bigl[0,\frac{\pi}{2}\bigr]$, it is easy to verify that indeed $D(\theta)\neq\emptyset$. Moreover, one has $0\le T(\theta)\le\frac{1}{2}$ by part (b) of Lemma \ref{lem:seq}, and $T(\theta)=0$ holds if and only if $\theta=0$. In order to compute $T(\theta)$ for $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$, we have to maximize $\max W(\lambda)$ over $\lambda\in D(\theta)\subset D$. This constrained optimization problem plays the central role in the approach presented in this work. The following proposition shows how this optimization problem is related to the problem of estimating the maximal angle between the corresponding spectral subspaces.
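The recursion behind the operator $W$ is easy to experiment with numerically. The following sketch (illustrative Python; the names are ours) runs \eqref{eq:seqDef} for a sample element of $D_2$ and checks the assertions of part (b) of Lemma \ref{lem:seq}.

```python
import math

def W(lams):
    # (t_j) from the recursion t_{j+1} = t_j + lambda_j * (1 - 2 t_j), t_0 = 0,
    # cf. eq. (seqDef); lams lists the entries of a sequence in D up to the zero tail
    t = [0.0]
    for l in lams:
        t.append(t[-1] + l * (1.0 - 2.0 * t[-1]))
    return t

def M(x):
    # M(x) = (1/2) arcsin(pi x) for 0 <= x <= 1/pi
    return 0.5 * math.asin(math.pi * x)

lams = [0.3, 0.25, 0.1, 0.0]      # an element of D_2: all entries <= 1/pi ~ 0.318
t = W(lams)
theta = sum(M(l) for l in lams)   # this particular lams then lies in D(theta)
# Lemma lem:seq (b): increasing, confined to [0, 1/2), eventually constant
assert all(a <= b for a, b in zip(t, t[1:]))
assert all(0.0 <= tj < 0.5 for tj in t)
assert t[-1] == t[-2]
print(t, theta)
```

The final value also agrees with the closed form $\frac{1}{2}\bigl(1-\prod_j(1-2\lambda_j)\bigr)$ obtained in Section \ref{sec:solOptProb}.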
\begin{proposition}\label{prop:mainEstimate} Assume Hypothesis \ref{app:hypHyp}. Let $\bigl[0,\frac{\pi}{2}\bigr]\ni\theta\mapsto S(\theta)\in\bigl[0,S\bigl(\frac{\pi}{2}\bigr)\bigr]\subset\bigl[0,\frac{1}{2}\bigr]$ be a continuous, strictly increasing (hence invertible) mapping with \[ 0\le S(\theta) \le T(\theta)\quad\text{ for }\quad 0\le \theta<\frac{\pi}{2}\,. \] Then \[ \arcsin\bigl(\norm{P_0-P_t}\bigr) \le S^{-1}(t)\quad\text{ for }\quad 0\le t< S\Bigl(\frac{\pi}{2}\Bigr)\,. \] \begin{proof} Since the mapping $\theta\mapsto S(\theta)$ is invertible, it suffices to show the inequality \begin{equation}\label{eq:mainEstForS} \arcsin\bigl(\norm{P_0-P_{S(\theta)}}\bigr) \le \theta\quad\text{ for }\quad 0\le\theta<\frac{\pi}{2}\,. \end{equation} Considering $T(0)=S(0)=0$, the case $\theta=0$ in inequality \eqref{eq:mainEstForS} is obvious. Let $\theta\in\bigl(0,\frac{\pi}{2}\bigr)$. In particular, one has $T(\theta)>0$. For arbitrary $t$ with $0\le t<T(\theta)$ choose $\lambda=(\lambda_j)\in D(\theta)$ such that $t<\max W(\lambda)\le T(\theta)$. Denote $(t_j):=W(\lambda)$. Since $t_j<\frac{1}{2}$ for all $j\in\N_0$ by part (b) of Lemma \ref{lem:seq}, it follows from the definition of $(t_j)$ that \begin{equation}\label{eq:locSeqI} \frac{t_{j+1}-t_j}{1-2t_j}=\lambda_j\le\frac{1}{\pi} \quad\text{ for all }\quad j\in\N_0\,. \end{equation} Moreover, considering $t<\max W(\lambda)=\max_{j\in\N_0}t_j$, there is $k\in\N_0$ such that $t_k\le t< t_{k+1}$. In particular, one has \begin{equation}\label{eq:locSeqII} \frac{t-t_k}{1-2t_k} < \frac{t_{k+1}-t_k}{1-2t_k} = \lambda_k\le\frac{1}{\pi}\,. 
\end{equation} Using the triangle inequality for the metric $\rho$ given by \eqref{eq:metric}, it follows from \eqref{eq:localPert}, \eqref{eq:localSpecPert}, \eqref{eq:locSeqI}, \eqref{eq:locSeqII}, and Proposition \ref{prop:genRotBound} that \[ \begin{aligned} \arcsin\bigl(\norm{P_0-P_t}\bigr) &\le \sum_{j=0}^{k-1}\arcsin\bigl(\norm{P_{t_j}-P_{t_{j+1}}}\bigr) + \arcsin\bigl(\norm{P_{t_k}-P_t}\bigr)\\ &\le \sum_{j=0}^{k-1} M(\lambda_j) + M(\lambda_k) \le \sum_{j=0}^\infty M(\lambda_j)=\theta\,, \end{aligned} \] that is, \begin{equation}\label{eq:mainEstFort} \arcsin\bigl(\norm{P_0-P_t}\bigr) \le \theta\quad\text{ for all }\quad 0\le t < T(\theta)\,. \end{equation} Since the mapping $\bigl[0,\frac{1}{2}\bigr)\ni\tau\mapsto P_\tau$ is norm continuous and $S(\theta)<S\bigl(\frac{\pi}{2}\bigr)\le\frac{1}{2}$, estimate \eqref{eq:mainEstFort} also holds for $t=S(\theta)\le T(\theta)$. This shows \eqref{eq:mainEstForS} and, hence, completes the proof. \end{proof} \end{proposition} It turns out that the mapping $\bigl[0,\frac{\pi}{2}\bigr]\ni\theta\mapsto T(\theta)$ is continuous and strictly increasing. It therefore satisfies the hypotheses of Proposition \ref{prop:mainEstimate}. In this respect, it remains to compute $T(\theta)$ for $\theta\in\bigl[0,\frac{\pi}{2}\bigr]$ in order to prove Theorem \ref{thm:mainResult}. This is done in Section \ref{sec:solOptProb} below. For convenience, the following theorem states the corresponding result in advance. \begin{theorem}\label{thm:solOptProb} In the interval $\bigl(0,\frac{\pi}{2}\bigr]$ the equation \[ \Bigl(1-\frac{2}{\pi}\sin\vartheta\Bigr)^2 = \biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\vartheta}{3}\Bigr)\biggr)^3 \] has a unique solution $\vartheta\in\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr),\frac{\pi}{2}\bigr)$.
Moreover, the quantity $T(\theta)$ given in \eqref{eq:optProb} has the representation \begin{equation}\label{eq:solOptProb} T(\theta) = \begin{cases} \frac{1}{\pi}\sin(2\theta) & \text{ for }\quad 0 \le \theta \le \arctan\bigl(\frac{2}{\pi}\bigr)=\frac{1}{2} \arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)\,,\\[0.1cm] \frac{2}{\pi^2}+\frac{\pi^2-4}{2\pi^2}\sin^2\theta & \text{ for }\quad \arctan\bigl(\frac{2}{\pi} \bigr)<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)\,,\\[0.1cm] \frac{1}{2}-\frac{1}{2}\bigl(1-\frac{2}{\pi}\sin\theta\bigr)^2 & \text{ for }\quad \arcsin\bigl(\frac{2}{\pi}\bigr)\le\theta\le\vartheta\,,\\[0.1cm] \frac{1}{2}-\frac{1}{2}\Bigl(1-\frac{2}{\pi}\sin\bigl(\frac{2\theta}{3}\bigr)\Bigr)^3 &\text{ for }\quad \vartheta < \theta \le \frac{\pi}{2}\,. \end{cases} \end{equation} The mapping $\bigl[0,\frac{\pi}{2}\bigr]\ni\theta\mapsto T(\theta)$ is strictly increasing, continuous on $\bigl[0,\frac{\pi}{2}\bigr]$, and continuously differentiable on $\bigl(0,\frac{\pi}{2}\bigr)\setminus\{\vartheta\}$. \end{theorem} Theorem \ref{thm:mainResult} is now a straightforward consequence of Proposition \ref{prop:mainEstimate} and Theorem \ref{thm:solOptProb}. \begin{proof}[Proof of Theorem \ref{thm:mainResult}] According to Theorem \ref{thm:solOptProb}, the mapping $\bigl[0,\frac{\pi}{2}\bigr]\ni\theta\mapsto T(\theta)$ is strictly increasing and continuous. Hence, its range is the whole interval $[0,c_\mathrm{crit}]$, where $c_\mathrm{crit}$ is given by $c_\mathrm{crit}=T\bigl(\frac{\pi}{2}\bigr)=\frac{1}{2}-\frac{1}{2}\bigl(1-\frac{\sqrt{3}}{\pi}\bigr)^3$. Let $N=T^{-1}\colon[0,c_\mathrm{crit}]\to\bigl[0,\frac{\pi}{2}\bigr]$ denote the inverse of this mapping. Obviously, the function $N$ is also strictly increasing and continuous. Moreover, using representation \eqref{eq:solOptProb}, it is easy to verify that $N$ is explicitly given by \eqref{eq:mainResultFunc}.
In particular, the constant $\kappa=T(\vartheta)=\frac{1}{2}-\frac{1}{2}\bigl(1-\frac{2}{\pi}\sin\vartheta\bigr)^2\in \bigl(4\frac{\pi^2-2}{\pi^4},2\frac{\pi-1}{\pi^2}\bigr)$ is the unique solution to equation \eqref{eq:kappa} in the interval $\bigl(0,2\frac{\pi-1}{\pi^2}\bigr]$. Furthermore, the function $N$ is continuously differentiable on $(0,c_\mathrm{crit})\setminus\{\kappa\}$ since the mapping $\theta\mapsto T(\theta)$ is continuously differentiable on $\bigl(0,\frac{\pi}{2}\bigr)\setminus\{\vartheta\}$. Let $V$ be a bounded self-adjoint operator on $\cH$ satisfying $\norm{V}<c_\mathrm{crit}\cdot d$. The case $V=0$ is obvious. Assume that $V\neq0$. Then, $B_t:=A+td\frac{V}{\norm{V}}$, $\Dom(B_t):=\Dom(A)$, and $P_t:=\EE_{B_t}\bigl(\cO_{d/2}(\sigma)\bigr)$ for $0\le t<\frac{1}{2}$ satisfy Hypothesis \ref{app:hypHyp}. Moreover, one has $A+V=B_\tau$ with $\tau = \frac{\norm{V}}{d}<c_\mathrm{crit}=T\bigl(\frac{\pi}{2}\bigr)$. Applying Proposition \ref{prop:mainEstimate} to the mapping $\theta\mapsto T(\theta)$ finally gives \begin{equation}\label{eq:finalEstimate} \arcsin\bigl(\norm{\EE_A(\sigma)-\EE_{A+V}\bigl(\cO_{d/2}(\sigma)\bigr)}\bigr)=\arcsin\bigl(\norm{P_0-P_\tau}\bigr) \le N(\tau) = N\Bigl(\frac{\norm{V}}{d}\Bigr)\,, \end{equation} which completes the proof. \end{proof} \begin{remark}\label{rem:kappaRepl} Numerical evaluations give $\vartheta=1{.}1286942\ldots<\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)=2\arctan\bigl(\frac{2}{\pi}\bigr)$ and $\kappa = T(\vartheta) = 0{.}4098623\ldots<\frac{8\pi^2}{(\pi^2+4)^2}$. However, the estimate \eqref{eq:finalEstimate} remains valid if the constant $\kappa$ in the explicit representation for the function $N$ is replaced by any other constant within the interval $\bigl(4\frac{\pi^2-2}{\pi^4},2\frac{\pi-1}{\pi^2}\bigr)$.
This can be seen by applying Proposition \ref{prop:mainEstimate} to each of the two mappings \[ \theta\mapsto \frac{1}{2}-\frac{1}{2}\Bigl(1-\frac{2}{\pi}\sin\theta\Bigr)^2\quad\text{ and }\quad \theta\mapsto \frac{1}{2}-\frac{1}{2}\biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^3\,. \] These mappings indeed satisfy the hypotheses of Proposition \ref{prop:mainEstimate}. Both are obviously continuous and strictly increasing, and, by particular choices of $\lambda\in D(\theta)$, it is easy to see from the considerations in Section \ref{sec:solOptProb} that they are less than or equal to $T(\theta)$, see equation \eqref{eq:limitEquiParam} below. \end{remark} The statement of Theorem \ref{thm:solOptProb} actually goes beyond that of Theorem \ref{thm:mainResult}. As a matter of fact, instead of equality in \eqref{eq:solOptProb}, it would be sufficient for the proof of Theorem \ref{thm:mainResult} to have that the right-hand side of \eqref{eq:solOptProb} is just less than or equal to $T(\theta)$. This, in turn, is rather easy to establish by particular choices of $\lambda\in D(\theta)$, see Lemma \ref{lem:critPoints} and the proof of Lemma \ref{lem:twoParams} below. However, Theorem \ref{thm:solOptProb} states that the right-hand side of \eqref{eq:solOptProb} provides an exact representation for $T(\theta)$, and most of the considerations in Section \ref{sec:solOptProb} are required to show this stronger result. As a consequence, the bound from Theorem \ref{thm:mainResult} is optimal within the framework of the approach based on estimate \eqref{eq:essEstimate}. In fact, the following observation shows that a bound substantially stronger than the one from Proposition \ref{prop:genRotBound} is required, at least for small perturbations, in order to improve on Theorem \ref{thm:mainResult}.
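The numerical values of $\vartheta$, $\kappa$, and $c_\mathrm{crit}$ quoted in Remark \ref{rem:kappaRepl} and in the proof above can be reproduced with a few lines of code (an illustrative sketch; the bisection scheme and variable names are ours):

```python
import math

def f(v):
    # difference of the two sides of the defining equation for vartheta
    lhs = (1.0 - (2.0 / math.pi) * math.sin(v)) ** 2
    rhs = (1.0 - (2.0 / math.pi) * math.sin(2.0 * v / 3.0)) ** 3
    return lhs - rhs

# bisection on (arcsin(2/pi), pi/2), where Theorem thm:solOptProb locates the root
a, b = math.asin(2.0 / math.pi), math.pi / 2.0
for _ in range(100):
    c = 0.5 * (a + b)
    if f(a) * f(c) <= 0.0:
        b = c
    else:
        a = c
vartheta = 0.5 * (a + b)
kappa = 0.5 - 0.5 * (1.0 - (2.0 / math.pi) * math.sin(vartheta)) ** 2
c_crit = 0.5 - 0.5 * (1.0 - math.sqrt(3.0) / math.pi) ** 3
print(vartheta, kappa, c_crit)
```

The inequalities $\vartheta<\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)$ and $\kappa<\frac{8\pi^2}{(\pi^2+4)^2}$ from Remark \ref{rem:kappaRepl} can be confirmed in the same way.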
\begin{remark} One can modify the approach \eqref{eq:essEstimate} by replacing the term $M(\lambda_j)=\frac{1}{2}\arcsin(\pi\lambda_j)$ by $N(\lambda_j)$ and relaxing the condition \eqref{eq:seqParamCond} to $\lambda_j\le c_\mathrm{crit}$. Yet, it follows from Theorem \ref{thm:solOptProb} that the corresponding optimization procedure leads to exactly the same result \eqref{eq:solOptProb}. This can be seen from the fact that each $N(\lambda_j)$ is of the form of the right-hand side of \eqref{eq:essEstimate} (cf.\ the computation of $T(\theta)$ in Section \ref{sec:solOptProb} below), so that we are actually dealing with essentially the same optimization problem. In this sense, the function $N$ is a fixed point in the approach presented here. \end{remark} We close this section with a comparison of Theorem \ref{thm:mainResult} with the strongest previously known result by Albeverio and Motovilov from \cite{AM13}. \begin{remark}\label{rem:estOptimality} One has $N(x)=M_*(x)$ for $0\le x\le \frac{4}{\pi^2+4}$, and the inequality $N(x)<M_*(x)$ holds for all $\frac{4}{\pi^2+4}<x\le c_*$, where $c_*\in\bigl(0,\frac{1}{2}\bigr)$ and $M_*\colon[0,c_*]\to\bigl[0,\frac{\pi}{2}\bigr]$ are given by \eqref{eq:AMConst} and \eqref{eq:AMFunc}, respectively. Indeed, it follows from the computation of $T(\theta)$ in Section \ref{sec:solOptProb} (see Remark \ref{rem:AMvsS} below) that \[ x < T(M_*(x))\le c_\mathrm{crit}\quad\text{ for }\quad \frac{4}{\pi^2+4} < x \le c_*\,. \] Since the function $N=T^{-1}\colon[0,c_{\mathrm{crit}}]\to\bigl[0,\frac{\pi}{2}\bigr]$ is strictly increasing, this implies that \[ N(x)< N\bigl(T(M_*(x))\bigr)=M_*(x)\quad \text{ for }\quad \frac{4}{\pi^2+4}<x\le c_*\,. \] \end{remark} \section{Proof of Theorem \ref{thm:solOptProb}}\label{sec:solOptProb} We split the proof of Theorem \ref{thm:solOptProb} into several steps.
We first reduce the problem of computing $T(\theta)$ to the problem of solving suitable finite-dimensional constrained optimization problems, see equations \eqref{eq:supOptn} and \eqref{eq:extremalProb}. The corresponding critical points are then characterized in Lemma \ref{lem:critPoints} using Lagrange multipliers. The crucial tool to reduce the set of relevant critical points is provided by Lemma \ref{lem:paramSubst}. Finally, the finite-dimensional optimization problems are solved in Lemmas \ref{lem:twoParams}, \ref{lem:threeParams}, and \ref{prop:solOptProb}. Throughout this section, we make use of the notations introduced in Definitions \ref{def:params} and \ref{def:optProb}. In addition, we fix the following notations. \begin{definition} For $n\in\N_0$ and $\theta\in\bigl[0,\frac{\pi}{2}\bigr]$ define $D_n(\theta):=D(\theta)\cap D_n$. Moreover, let \[ T_n(\theta):=\sup\bigl\{\max W(\lambda) \bigm| \lambda\in D_n(\theta)\bigr\}\quad\text{ if }\quad D_n(\theta)\neq\emptyset\,, \] and set $T_n(\theta):=0$ if $D_n(\theta)=\emptyset$. \end{definition} As a result of $D(0)=D_n(0)=\{0\}\subset l^1(\N_0)$, we have $T(0)=T_n(0)=0$ for every $n\in\N_0$. Let $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$ be arbitrary. Since $D_0(\theta) \subset D_1(\theta) \subset D_2(\theta)\subset \dots$, we obtain \[ T_0(\theta) \le T_1(\theta) \le T_2(\theta) \le \dots \] Moreover, we observe that \begin{equation}\label{eq:supOptn} T(\theta)=\sup_{n\in\N_0}T_n(\theta)\,. \end{equation} In fact, we show below that $T_n(\theta)=T_2(\theta)$ for every $n\ge 2$, so that $T(\theta)=T_2(\theta)$, see Lemma \ref{prop:solOptProb}. Let $n\in\N$ be arbitrary and let $\lambda=(\lambda_j)\in D_n$. Denote $(t_j):=W(\lambda)$. It follows from part (b) of Lemma \ref{lem:seq} that $\max W(\lambda)=t_{n+1}$. Moreover, we have \[ 1-2t_{j+1} = 1-2t_j -2\lambda_j(1-2t_j) = (1-2t_j)(1-2\lambda_j)\,,\quad j=0,\dots,n\,. \] Since $t_0=0$, this implies that \[ 1-2t_{n+1} = \prod_{j=0}^n (1-2\lambda_j)\,. 
\] In particular, we obtain the explicit representation \begin{equation}\label{eq:seqExplRepr} \max W(\lambda)=t_{n+1} = \frac{1}{2}\biggl(1-\prod_{j=0}^n(1-2\lambda_j)\biggr)\,. \end{equation} An immediate conclusion of representation \eqref{eq:seqExplRepr} is the following statement. \begin{lemma}\label{lem:paramPermut} For $\lambda=(\lambda_j)\in D_n$ the value of $\max W(\lambda)$ does not depend on the order of the entries $\lambda_0,\dots,\lambda_n$. \end{lemma} Another implication of representation \eqref{eq:seqExplRepr} is the fact that $\max W(\lambda)=t_{n+1}$ can be considered as a continuous function of the variables $\lambda_0,\dots,\lambda_n$. Since the set $D_n(\theta)$ is compact as a closed bounded subset of an $(n+1)$-dimensional subspace of $l^1(\N_0)$, we deduce that $T_n(\theta)$ can be written as \begin{equation}\label{eq:extremalProb} T_n(\theta) = \max\bigl\{t_{n+1} \bigm| (t_j)=W(\lambda)\,,\ \lambda\in D_n(\theta)\bigr\}\,. \end{equation} Hence, $T_n(\theta)$ is determined by a finite-dimensional constrained optimization problem, which can be studied by use of Lagrange multipliers. Taking into account the definition of the set $D_n(\theta)$, it follows from equation \eqref{eq:extremalProb} and representation \eqref{eq:seqExplRepr} that there is some point $(\lambda_0,\dots,\lambda_n)\in\bigl[0,\frac{1}{\pi}\bigr]^{n+1}$ such that \[ T_n(\theta) = t_{n+1} = \frac{1}{2}\biggl(1-\prod_{j=0}^n(1-2\lambda_j)\biggr)\quad\text{ and }\quad \sum_{j=0}^n M(\lambda_j)=\theta\,, \] where $M(x)=\frac{1}{2}\arcsin(\pi x)$ for $0\le x\le\frac{1}{\pi}$. In particular, if $(\lambda_0,\dots,\lambda_n)\in\bigl(0,\frac{1}{\pi}\bigr)^{n+1}$, then the method of Lagrange multipliers gives a constant $r\in\R$, $r\neq0$, with \[ \frac{\partial t_{n+1}}{\partial\lambda_k} = r\cdot M'(\lambda_k) = r\cdot\frac{\pi}{2\sqrt{1-\pi^2\lambda_k^2}} \quad\text{ for }\quad k=0,\dots,n\,. 
\] Hence, in this case, for every $k\in\{0,\dots,n-1\}$ we obtain \begin{equation}\label{eq:critPointsCond} \frac{\sqrt{1-\pi^2\lambda_k^2}}{\sqrt{1-\pi^2\lambda_{k+1}^2}} = \frac{\dfrac{\partial t_{n+1}}{\partial\lambda_{k+1}}}{\dfrac{\partial t_{n+1}}{\partial\lambda_k}} = \frac{\prod\limits_{\substack{j=0\\ j\neq k+1}}^n (1-2\lambda_j)}{\prod\limits_{\substack{j=0\\ j\neq k}}^n(1-2\lambda_j)} = \frac{1-2\lambda_k}{1-2\lambda_{k+1}}\,. \end{equation} This leads to the following characterization of critical points of the mapping $\lambda\mapsto\max W(\lambda)$ on $D_n(\theta)$. \begin{lemma}\label{lem:critPoints} For $n\ge 1$ and $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$ let $\lambda=(\lambda_j)\in D_n(\theta)$ with $T_n(\theta)=\max W(\lambda)$. Assume that $\lambda_0\ge\dots\ge\lambda_n$. If, in addition, $\lambda_0<\frac{1}{\pi}$ and $\lambda_n>0$, then either one has \[ \lambda_0=\dots=\lambda_n=\frac{1}{\pi}\sin\Bigl(\frac{2\theta}{n+1}\Bigr)\,, \] so that \begin{equation}\label{eq:limitEquiParam} \max W(\lambda) = \frac{1}{2}-\frac{1}{2}\left(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{n+1}\Bigr)\right)^{n+1}\,, \end{equation} or there is $l\in\{0,\dots,n-1\}$ with \begin{equation}\label{eq:critPointsNotEqual} \frac{4}{\pi^2+4}>\lambda_0=\dots=\lambda_l>\frac{2}{\pi^2}>\lambda_{l+1}=\dots=\lambda_n>0\,. \end{equation} In the latter case, $\lambda_0$ and $\lambda_n$ satisfy \begin{equation}\label{eq:critPointsRels} \lambda_0+\lambda_n =\frac{4\alpha^2}{\pi^2+4\alpha^2}\quad\text{ and }\quad \lambda_0\lambda_n=\frac{\alpha^2-1}{\pi^2+4\alpha^2}\,, \end{equation} where \begin{equation}\label{eq:critPointsAlpha} \alpha = \frac{\sqrt{1-\pi^2\lambda_0^2}}{1-2\lambda_0} = \frac{\sqrt{1-\pi^2\lambda_n^2}}{1-2\lambda_n}\in(1,m)\,,\quad m:=\frac{\pi}{2}\tan\Bigl(\arcsin\Bigl(\frac{2}{\pi}\Bigr)\Bigr)\,. \end{equation} \begin{proof} Let $\lambda_0<\frac{1}{\pi}$ and $\lambda_n>0$. In particular, one has $(\lambda_0,\dots,\lambda_n)\in\bigl(0,\frac{1}{\pi}\bigr)^{n+1}$. 
Hence, it follows from \eqref{eq:critPointsCond} that \begin{equation}\label{eq:defAlpha} \alpha:=\frac{\sqrt{1-\pi^2\lambda_k^2}}{1-2\lambda_k} \end{equation} does not depend on $k\in\{0,\dots,n\}$. If $\lambda_0=\lambda_n$, then all $\lambda_j$ coincide and one has $\theta=(n+1)M(\lambda_0)=\frac{n+1}{2}\arcsin(\pi\lambda_0)$, that is, $\lambda_0=\dots=\lambda_n=\frac{1}{\pi}\sin\bigl(\frac{2\theta}{n+1}\bigr)$. Inserting this into representation \eqref{eq:seqExplRepr} yields equation \eqref{eq:limitEquiParam}. Now assume that $\lambda_0>\lambda_n$. A straightforward calculation shows that $x=\frac{2}{\pi^2}$ is the only critical point of the mapping \begin{equation}\label{eq:constrParamMapping} \Bigl[0,\frac{1}{\pi}\Bigr]\ni x\mapsto \frac{\sqrt{1-\pi^2x^2}}{1-2x}\,, \end{equation} cf.\ \figurename\ \ref{fig:functionPlot}. The image of this point is $\bigl(1-\frac{4}{\pi^2}\bigr)^{-1/2}=m>1$. Moreover, $0$ and $\frac{4}{\pi^2+4}$ are mapped to $1$, and $\frac{1}{\pi}$ is mapped to $0$. In particular, every value in the interval $(1,m)$ has exactly two preimages under the mapping \eqref{eq:constrParamMapping}, the value $1$ has exactly the two preimages $0$ and $\frac{4}{\pi^2+4}$, and all the other values in the range $[0,m]$ have only one preimage. Since $\lambda_0>\lambda_n>0$ by assumption, it follows from \eqref{eq:defAlpha} that $\alpha$ has two distinct positive preimages. Hence, $\alpha\in(1,m)$ and $\frac{4}{\pi^2+4}>\lambda_0>\frac{2}{\pi^2}>\lambda_n>0$. Furthermore, there is $l\in\{0,\dots,n-1\}$ with $\lambda_0=\dots=\lambda_l$ and $\lambda_{l+1}=\dots=\lambda_n$. This proves \eqref{eq:critPointsNotEqual} and \eqref{eq:critPointsAlpha}.
Finally, the relations \eqref{eq:critPointsRels} follow from the fact that the equation $\frac{\sqrt{1-\pi^2z^2}}{1-2z}=\alpha$ can be rewritten as \[ 0 = z^2 - \frac{4\alpha^2}{\pi^2+4\alpha^2}\,z + \frac{\alpha^2-1}{\pi^2+4\alpha^2} = (z-\lambda_0)(z-\lambda_n) = z^2 - (\lambda_0+\lambda_n)z + \lambda_0\lambda_n\,.\qedhere \] \end{proof} \end{lemma} \begin{figure}[ht]\begin{center} \scalebox{0.5}{\includegraphics{figure.eps}} \caption{The mapping $\bigl[0,\frac{1}{\pi}\bigr]\ni x\mapsto \frac{\sqrt{1-\pi^2x^2}}{1-2x}$.} \label{fig:functionPlot} \end{center} \end{figure} The preceding lemma is one of the main ingredients for solving the constrained optimization problem that defines the quantity $T_n(\theta)$ in \eqref{eq:extremalProb}. However, it is still a hard task to compute $T_n(\theta)$ from the corresponding critical points. Especially the case \eqref{eq:critPointsNotEqual} in Lemma \ref{lem:critPoints} is difficult to handle and needs careful treatment. An efficient computation of $T_n(\theta)$ therefore requires a technique that makes it possible to narrow down the set of relevant critical points. The following result provides an adequate tool for this and is thus crucial for the remaining considerations. The idea behind this approach may also prove useful for solving similar optimization problems. \begin{lemma}\label{lem:paramSubst} For $n\ge 1$ and $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$ let $\lambda=(\lambda_j)\in D_n(\theta)$. If $T_n(\theta)=\max W(\lambda)$, then for every $k\in\{0,\dots,n\}$ one has \[ \max W\bigl((\lambda_0,\dots,\lambda_k,0,\dots)\bigr) = T_k(\theta_k)\quad\text{ with }\quad \theta_k = \sum_{j=0}^k M(\lambda_j) \le \theta\,. \] \begin{proof} Suppose that $T_n(\theta)=\max W(\lambda)$. The case $k=n$ in the claim obviously agrees with this hypothesis. Let $k\in\{0,\dots,n-1\}$ be arbitrary and denote $(t_j):=W(\lambda)$. It follows from part (b) of Lemma \ref{lem:seq} that $t_{k+1} = \max W\bigl( (\lambda_0,\dots,\lambda_k,0,\dots) \bigr)$.
In particular, one has $t_{k+1}\le T_k(\theta_k)$ since $(\lambda_0,\dots,\lambda_k,0,\dots)\in D_k(\theta_k)$. Assume that $t_{k+1}<T_k(\theta_k)$, and let $\gamma=(\gamma_j)\in D_k(\theta_k)$ with $\max W(\gamma)=T_k(\theta_k)$. Denote $\mu:=(\gamma_0,\dots,\gamma_k,\lambda_{k+1},\dots,\lambda_n,0,\dots)\in D_n(\theta_n)$, where $\theta_n=\sum_{j=0}^n M(\lambda_j)=\theta$ since $\lambda\in D_n(\theta)$, and $(s_j):=W(\mu)$. Again by part (b) of Lemma \ref{lem:seq}, one has $s_{k+1}=\max W(\gamma)>t_{k+1}$ and $s_{n+1}=\max W(\mu)\le T_n(\theta_n)$. Taking into account part (a) of Lemma \ref{lem:seq} and the definition of the operator $W$, one obtains that \[ t_{k+2} = t_{k+1} + \lambda_{k+1}(1-2t_{k+1}) < s_{k+1} + \lambda_{k+1}(1-2s_{k+1}) = s_{k+2}\,. \] Iterating this estimate eventually gives $t_{n+1}<s_{n+1}\le T_n(\theta_n)$, which contradicts the case $k=n$ from above. Thus, $\max W\bigl((\lambda_0,\dots,\lambda_k,0,\dots)\bigr)=t_{k+1}=T_k(\theta_k)$ as claimed. \end{proof} \end{lemma} Lemma \ref{lem:paramSubst} states that if a sequence $\lambda\in D_n(\theta)$ solves the optimization problem for $T_n(\theta)$, then every truncation of $\lambda$ solves the corresponding reduced optimization problem. This makes it possible to exclude many sequences in $D_n(\theta)$ from the considerations once the optimization problem is understood for small $n$. The number of parameters in \eqref{eq:extremalProb} can thereby be reduced considerably. The following lemma demonstrates this technique. It implies that the condition $\lambda_0<\frac{1}{\pi}$ in Lemma \ref{lem:critPoints} is always satisfied except for one single case, which can be treated separately. \begin{lemma}\label{lem:lambda_0} For $n\ge 1$ and $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$ let $\lambda=(\lambda_j)\in D_n(\theta)$ with $T_n(\theta)=\max W(\lambda)$ and $\lambda_0\ge\dots\ge\lambda_n$. If $\lambda_0=\frac{1}{\pi}$, then $\theta=\frac{\pi}{2}$ and $n=1$. \begin{proof} Let $\lambda_0=\frac{1}{\pi}$ and define $\theta_1:= M(\lambda_0)+M(\lambda_1)\le\theta$.
It is obvious that $\lambda\in D_1\bigl(\frac{\pi}{2}\bigr)$ is equivalent to $\theta_1=\theta=\frac{\pi}{2}$. Assume that $\theta_1<\frac{\pi}{2}$. Clearly, one has $\theta_1\ge M(\lambda_0)=\frac{\pi}{4}$ and $\lambda_1 = \frac{1}{\pi}\sin\bigl(2\theta_1-\frac{\pi}{2}\bigr) = -\frac{1}{\pi}\left(1-2\sin^2\theta_1\right)\in\bigl[0,\frac{1}{\pi}\bigr)$. Taking into account representation \eqref{eq:seqExplRepr}, for $\mu:=(\lambda_0,\lambda_1,0,\dots)\in D_1(\theta_1)$ one computes \[ \begin{aligned} \max W(\mu) &= \frac{1}{2}-\frac{1}{2}(1-2\lambda_0)(1-2\lambda_1) = (\lambda_0+\lambda_1) - 2\lambda_0\lambda_1\\ &= \frac{2}{\pi}\sin^2\theta_1+\frac{2}{\pi^2}\left(1-2\sin^2\theta_1\right)=\frac{2}{\pi^2} + \frac{2\pi-4} {\pi^2}\sin^2\theta_1\,. \end{aligned} \] Since $\arcsin\bigl(\frac{1}{\pi-1}\bigr)<\frac{\pi}{4}\le\theta_1<\frac{\pi}{2}$, it follows from part (\ref{app:lem1:a}) of Lemma \ref{app1:lem1} that \[ \max W(\mu) < \frac{2}{\pi}\Bigl(1-\frac{1}{\pi}\sin\theta_1\Bigr)\sin\theta_1 = \frac{1}{2}-\frac{1}{2}\Bigl(1-\frac{2}{\pi}\sin\theta_1\Bigr)^2 \le T_1(\theta_1)\,, \] where the last inequality is due to representation \eqref{eq:limitEquiParam}. This is a contradiction to Lemma \ref{lem:paramSubst}. Hence, $\theta_1=\theta=\frac{\pi}{2}$ and, in particular, $\lambda=\mu\in D_1\bigl(\frac{\pi}{2}\bigr)$. Obviously, one has $D_1\bigl(\frac{\pi}{2}\bigr)=\left\{\bigl(\frac{1}{\pi},\frac{1}{\pi},0,\dots\bigr)\right\}$, so that $\lambda=\bigl(\frac{1}{\pi},\frac{1}{\pi},0,\dots\bigr)$. Taking into account that $\sin\bigl(\frac{\pi}{3}\bigr)=\frac{\sqrt{3}}{2}$, it follows from representations \eqref{eq:seqExplRepr} and \eqref{eq:limitEquiParam} that \[ \max W(\lambda) = \frac{1}{2}-\frac{1}{2}\Bigl(1-\frac{2}{\pi}\Bigr)^2 < \frac{1}{2}-\frac{1}{2}\biggl(1-\frac{\sqrt{3}}{\pi}\biggr)^3\le T_2\Bigl(\frac{\pi}{2}\Bigr)\,. \] Since $\max W(\lambda)=T_n(\theta)$ by hypothesis, this implies that $n=1$.
\end{proof} \end{lemma} We are now able to solve the finite-dimensional constrained optimization problem in \eqref{eq:extremalProb} for every $\theta\in\bigl[0,\frac{\pi}{2}\bigr]$ and $n\in\N$. We start with the case $n=1$. \begin{lemma}\label{lem:twoParams} The quantity $T_1(\theta)$ has the representation \[ T_1(\theta) = \begin{cases} T_0(\theta)=\frac{1}{\pi}\,\sin(2\theta) & \text{ for }\quad 0\le\theta\le\arctan\bigl(\frac{2}{\pi}\bigr)=\frac{1}{2}\arcsin\bigl(\frac{4\pi} {\pi^2+4}\bigr)\,,\\[0.1cm] \frac{2}{\pi^2}+\frac{\pi^2-4}{2\pi^2}\sin^2\theta & \text{ for }\quad \arctan\bigl(\frac{2}{\pi}\bigr)<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)\,,\\[0.1cm] \frac{1}{2}-\frac{1}{2}\bigl(1-\frac{2}{\pi}\sin\theta\bigr)^2 & \text{ for }\quad \arcsin\bigl(\frac{2}{\pi}\bigr)\le\theta\le\frac{\pi}{2}\,. \end{cases} \] In particular, if $0<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)$ and $\lambda=(\lambda_0,\lambda_1,0,\dots)\in D_1(\theta)$ with $\lambda_0=\lambda_1$, then the strict inequality $\max W(\lambda)<T_1(\theta)$ holds. The mapping $\bigl[0,\frac{\pi}{2}\bigr]\ni\theta\mapsto T_1(\theta)$ is strictly increasing, continuous on $\bigl[0,\frac{\pi}{2}\bigr]$, and continuously differentiable on $\bigl(0,\frac{\pi}{2}\bigr)$. \begin{proof} Since $T_1(0)=T_0(0)=0$, the representation is obviously correct for $\theta=0$. For $\theta=\frac{\pi}{2}$ one has $D_1\bigl(\frac{\pi}{2}\bigr)=\left\{\bigl(\frac{1}{\pi},\frac{1}{\pi},0,\dots\bigr)\right\}$, so that $T_1\bigl(\frac{\pi}{2}\bigr)=\frac{1}{2}-\frac{1}{2}\bigl(1-\frac{2}{\pi}\bigr)^2$ by representation \eqref{eq:seqExplRepr}. This also agrees with the claim. Now let $\theta\in\bigl(0,\frac{\pi}{2}\bigr)$ be arbitrary. Obviously, one has $D_0(\theta)=\left\{\bigl(\frac{1}{\pi}\sin(2\theta),0,\dots\bigr)\right\}$ if $\theta\le\frac{\pi}{4}$, and $D_0(\theta)=\emptyset$ if $\theta>\frac{\pi}{4}$.
Hence, \begin{equation}\label{eq:oneParam} T_0(\theta)=\frac{1}{\pi}\sin(2\theta)\quad\text{ if }\quad 0<\theta\le\frac{\pi}{4}\,, \end{equation} and $T_0(\theta)=0$ if $\theta>\frac{\pi}{4}$. By Lemmas \ref{lem:paramPermut}, \ref{lem:critPoints}, and \ref{lem:lambda_0} there are only two sequences in $D_1(\theta)\setminus D_0(\theta)$ that need to be considered in order to compute $T_1(\theta)$. One of them is given by $\mu=(\mu_0,\mu_1,0,\dots)$ with $\mu_0=\mu_1=\frac{1}{\pi}\sin\theta\in\bigl(0,\frac{1}{\pi}\bigr)$. For this sequence, representation \eqref{eq:limitEquiParam} yields \begin{equation}\label{eq:twoParamsEqui} \max W(\mu) = \frac{1}{2}-\frac{1}{2}\Bigl(1-\frac{2}{\pi}\sin\theta\Bigr)^2= \frac{2}{\pi}\Bigl(1-\frac{1}{\pi}\sin\theta\Bigr)\sin\theta\,. \end{equation} The other sequence in $D_1(\theta)\setminus D_0(\theta)$ that needs to be considered is $\lambda=(\lambda_0,\lambda_1,0,\dots)$ with $\lambda_0$ and $\lambda_1$ satisfying $\frac{4}{\pi^2+4}>\lambda_0>\frac{2}{\pi^2}>\lambda_1>0$ and \begin{equation}\label{eq:twoParamsRels} \lambda_0 + \lambda_1 = \frac{4\alpha^2}{\pi^2+4\alpha^2}\,,\quad \lambda_0\lambda_1 = \frac{\alpha^2-1}{\pi^2+4\alpha^2}\,, \end{equation} where \begin{equation}\label{eq:twoParamsAlpha} \alpha = \frac{\sqrt{1-\pi^2\lambda_0^2}}{1-2\lambda_0} = \frac{\sqrt{1-\pi^2\lambda_1^2}}{1-2\lambda_1}\in(1,m)\,,\quad m=\frac{\pi}{2}\tan\Bigl(\arcsin\Bigl(\frac{2}{\pi}\Bigr)\Bigr)\,. \end{equation} It will turn out shortly that this sequence $\lambda$ exists if and only if $\arctan\bigl(\frac{2}{\pi}\bigr)<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)$. Using representation \eqref{eq:seqExplRepr} and the relations in \eqref{eq:twoParamsRels}, one obtains \begin{equation}\label{eq:twoParamsLimitAlpha} \max W(\lambda) = \frac{1}{2}-\frac{1}{2}(1-2\lambda_0)(1-2\lambda_1) = (\lambda_0+\lambda_1) - 2\lambda_0\lambda_1 = 2\,\frac{\alpha^2+1}{\pi^2+4\alpha^2}\,.
\end{equation} The objective is to rewrite the right-hand side of \eqref{eq:twoParamsLimitAlpha} in terms of $\theta$. It follows from \begin{equation}\label{eq:twoParams2Theta} 2\theta = \arcsin(\pi\lambda_0) + \arcsin(\pi\lambda_1) \end{equation} and the relations \eqref{eq:twoParamsRels} and \eqref{eq:twoParamsAlpha} that \begin{equation}\label{eq:twoParamsSin2ThetaInAlpha} \begin{aligned} \sin(2\theta) &= \pi\lambda_0\sqrt{1-\pi^2\lambda_1^2} + \pi\lambda_1\sqrt{1-\pi^2\lambda_0^2} = \alpha\pi\lambda_0(1-2\lambda_1) + \alpha\pi\lambda_1(1-2\lambda_0)\\ &= \alpha\pi\left(\lambda_0+\lambda_1 - 4\lambda_0\lambda_1\right) = \frac{4\alpha\pi}{\pi^2+4\alpha^2}\,. \end{aligned} \end{equation} Taking into account that $\sin(2\theta)>0$, equation \eqref{eq:twoParamsSin2ThetaInAlpha} can be rewritten as \[ \alpha^2 - \frac{\pi}{\sin(2\theta)}\alpha + \frac{\pi^2}{4}=0\,. \] In turn, this gives \[ \alpha = \frac{\pi}{2\sin(2\theta)}\left(1\pm\sqrt{1-\sin^2(2\theta)}\right) = \frac{\pi}{2}\,\frac{1\pm \abs{\cos^2\theta-\sin^2\theta}}{2\sin\theta\cos\theta}\,, \] that is, \begin{equation}\label{eq:twoParamsPossAlpha} \alpha = \frac{\pi}{2}\tan\theta\quad\text{ or }\quad \alpha = \frac{\pi}{2}\cot\theta\,. \end{equation} We show that the second case in \eqref{eq:twoParamsPossAlpha} does not occur. Since $1<\alpha<m<\frac{\pi}{2}$, by equation \eqref{eq:twoParamsSin2ThetaInAlpha} one has $\sin(2\theta)<1$, which implies that $\theta\neq\frac{\pi}{4}$. Moreover, combining relations \eqref{eq:twoParamsRels} and \eqref{eq:twoParamsAlpha}, $\lambda_1$ can be expressed in terms of $\lambda_0$ alone. Hence, by equation \eqref{eq:twoParams2Theta} the quantity $\theta$ can be written as a continuous function of the sole variable $\lambda_0\in \bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$. 
Taking the limit $\lambda_0\to\frac{4}{\pi^2+4}$ in equation \eqref{eq:twoParams2Theta} then implies that $\lambda_1\to0$ and, therefore, $\theta\to\frac{1}{2}\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)<\frac{\pi}{4}$. This yields $\theta<\frac{\pi}{4}$ for every $\lambda_0\in\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$ by continuity, that is, the sequence $\lambda$ can exist only if $\theta<\frac{\pi}{4}$. Taking into account that $\alpha$ satisfies $1<\alpha<m=\frac{\pi}{2}\tan\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr)\bigr)$, it now follows from \eqref{eq:twoParamsPossAlpha} that the sequence $\lambda$ exists if and only if $\arctan\bigl(\frac{2}{\pi}\bigr)<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)$ and, in this case, one has \begin{equation}\label{eq:twoParamsAlphaInTheta} \alpha=\frac{\pi}{2}\tan\theta\,. \end{equation} Combining equations \eqref{eq:twoParamsLimitAlpha} and \eqref{eq:twoParamsAlphaInTheta} finally gives \begin{equation}\label{eq:twoParamsLimitTheta} \begin{aligned} \max W(\lambda) &= \frac{1}{2}\,\frac{\frac{4}{\pi^2}+\tan^2\theta}{1+\tan^2\theta} = \frac{2}{\pi^2}\cos^2\theta + \frac{1}{2}\sin^2\theta = \frac{2}{\pi^2} + \frac{\pi^2-4}{2\pi^2}\,\sin^2\theta \end{aligned} \end{equation} for $\arctan\bigl(\frac{2}{\pi}\bigr)<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)$. As a result of Lemmas \ref{lem:paramPermut}, \ref{lem:critPoints}, and \ref{lem:lambda_0}, the quantities \eqref{eq:oneParam}, \eqref{eq:twoParamsEqui}, and \eqref{eq:twoParamsLimitTheta} are the only possible values for $T_1(\theta)$, and we have to determine which of them is the greatest. The easiest case is $\theta>\frac{\pi}{4}$ since then \eqref{eq:twoParamsEqui} is the only possibility for $T_1(\theta)$. The quantity \eqref{eq:twoParamsLimitTheta} is relevant only if $\arctan\bigl(\frac{2}{\pi}\bigr)<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)<\frac{\pi}{4}$. 
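(For orientation, note that $\arctan\bigl(\frac{2}{\pi}\bigr)\approx 0.567$ and $\arcsin\bigl(\frac{2}{\pi}\bigr)\approx 0.690$, both of which are indeed smaller than $\frac{\pi}{4}\approx 0.785$.)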
In this case, it follows from parts (\ref{app:lem1:b}) and (\ref{app:lem1:c}) of Lemma \ref{app1:lem1} that \eqref{eq:twoParamsLimitTheta} gives the greatest value of the three possibilities and, hence, is the correct term for $T_1(\theta)$ here. For $0<\theta\le\arctan\bigl(\frac{2}{\pi}\bigr)<2\arctan\bigl(\frac{1}{\pi}\bigr)$, by part (\ref{app:lem1:d}) of Lemma \ref{app1:lem1} the quantity \eqref{eq:oneParam} is greater than \eqref{eq:twoParamsEqui}. Therefore, $T_1(\theta)$ is given by \eqref{eq:oneParam} in this case. Finally, consider the case $\arcsin\bigl(\frac{2}{\pi}\bigr)\le\theta\le\frac{\pi}{4}$. Since $2\arctan\bigl(\frac{1}{\pi}\bigr)<\arcsin\bigl(\frac{2}{\pi}\bigr)$, it follows from part (\ref{app:lem1:e}) of Lemma \ref{app1:lem1} that \eqref{eq:twoParamsEqui} is greater than \eqref{eq:oneParam} and, hence, coincides with $T_1(\theta)$. This completes the computation of $T_1(\theta)$ for $\theta\in\bigl[0,\frac{\pi}{2}\bigr]$. In particular, it follows from the discussion of the two cases $0<\theta\le\arctan\bigl(\frac{2}{\pi}\bigr)$ and $\arctan\bigl(\frac{2}{\pi}\bigr)<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)$ that $\max W(\mu)$ is always strictly less than $T_1(\theta)$ if $0<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)$. The piecewise defined mapping $\bigl[0,\frac{\pi}{2}\bigr]\ni\theta\mapsto T_1(\theta)$ is continuously differentiable on each of the corresponding subintervals. It remains to prove that the mapping is continuous and continuously differentiable at the points $\theta=\arctan\bigl(\frac{2}{\pi}\bigr)=\frac{1}{2}\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)$ and $\theta=\arcsin\bigl(\frac{2}{\pi}\bigr)$. Taking into account that $\sin^2\theta=\frac{4}{\pi^2+4}$ for $\theta=\frac{1}{2}\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)$, the continuity is straightforward to verify. 
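Indeed, at $\theta=\frac{1}{2}\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)$ both adjacent branches take the value \[ \frac{1}{\pi}\sin(2\theta) = \frac{4}{\pi^2+4} = \frac{2}{\pi^2} + \frac{\pi^2-4}{2\pi^2}\cdot\frac{4}{\pi^2+4}\,, \] and at $\theta=\arcsin\bigl(\frac{2}{\pi}\bigr)$ both adjacent branches take the value \[ \frac{2}{\pi}\Bigl(1-\frac{1}{\pi}\sin\theta\Bigr)\sin\theta = \frac{4}{\pi^2}\Bigl(1-\frac{2}{\pi^2}\Bigr) = \frac{2}{\pi^2} + \frac{\pi^2-4}{2\pi^2}\cdot\frac{4}{\pi^2}\,. \]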
The continuous differentiability follows from the relations \[ \frac{\pi^2-4}{\pi^2}\sin\theta\cos\theta = \frac{2}{\pi}\Bigl(1-\frac{2}{\pi}\sin\theta\Bigr)\cos\theta \quad\text{ for }\quad \theta=\arcsin\Bigl(\frac{2}{\pi}\Bigr)\, \] and \[ \frac{2}{\pi} \cos(2\theta) = \frac{\pi^2-4}{2\pi^2}\sin(2\theta) = \frac{\pi^2-4}{\pi^2}\sin\theta\cos\theta \quad\text{ for }\quad \theta = \frac{1}{2}\arcsin\Bigl(\frac{4\pi}{\pi^2+4}\Bigr)\,, \] where the latter is due to \[ \cot\Bigl(\arcsin\Bigl(\frac{4\pi}{\pi^2+4}\Bigr)\Bigr) = \frac{\sqrt{1-\frac{16\pi^2}{(\pi^2+4)^2}}}{\frac{4\pi}{\pi^2+4}} = \frac{\pi^2-4}{4\pi}\,. \] This completes the proof. \end{proof} \end{lemma} So far, Lemma \ref{lem:paramSubst} has been used only to obtain Lemma \ref{lem:lambda_0}. Its whole strength becomes apparent in connection with Lemma \ref{lem:paramPermut}. This is demonstrated in the following corollary to Lemma \ref{lem:twoParams}, which states that in \eqref{eq:critPointsNotEqual} the sequences with $l\in\{0,\dots,n-2\}$ do not need to be considered. \begin{corollary}\label{cor:critPoints:ln} In the case \eqref{eq:critPointsNotEqual} in Lemma \ref{lem:critPoints} one has $l=n-1$. \begin{proof} The case $n=1$ is obvious. For $n\ge 2$ let $\lambda=(\lambda_0,\dots,\lambda_n,0,\dots)\in D_n(\theta)$ with \[ \frac{4}{\pi^2+4}>\lambda_0=\dots=\lambda_l > \frac{2}{\pi^2} > \lambda_{l+1}=\dots=\lambda_n>0 \] for some $l\in\{0,\dots,n-2\}$. In particular, one has $0<\lambda_{n-1}=\lambda_n<\frac{2}{\pi^2}$, which implies that $0<\tilde\theta:=M(\lambda_{n-1})+M(\lambda_n)<\arcsin\bigl(\frac{2}{\pi}\bigr)$. Hence, it follows from Lemma \ref{lem:twoParams} that \[ \max W\bigl((\lambda_{n-1},\lambda_n,0,\dots)\bigr) < T_1(\tilde\theta)\,. \] By Lemmas \ref{lem:paramPermut} and \ref{lem:paramSubst} one concludes that \[ \max W(\lambda) = \max W\bigl((\lambda_{n-1},\lambda_n,\lambda_0,\dots,\lambda_{n-2},0,\dots)\bigr)<T_n(\theta)\,.
\] This leaves $l=n-1$ as the only possibility in \eqref{eq:critPointsNotEqual}. \end{proof} \end{corollary} We now turn to the computation of $T_2(\theta)$ for $\theta\in\bigl[0,\frac{\pi}{2}\bigr]$. \begin{lemma}\label{lem:threeParams} In the interval $\bigl(0,\frac{\pi}{2}\bigr]$ the equation \begin{equation}\label{eq:defVartheta} \Bigl(1-\frac{2}{\pi}\sin\vartheta\Bigr)^2 = \biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\vartheta}{3}\Bigr)\biggr)^3 \end{equation} has a unique solution $\vartheta\in\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr),\frac{\pi}{2}\bigr)$. Moreover, the quantity $T_2(\theta)$ has the representation \[ T_2(\theta) = \begin{cases} T_1(\theta) & \text{ for }\quad 0\le\theta\le\vartheta\,,\\[0.1cm] \dfrac{1}{2}-\dfrac{1}{2}\biggl(1-\dfrac{2}{\pi}\sin\Bigl(\dfrac{2\theta}{3}\Bigr)\biggr)^3 & \text{ for }\quad \vartheta<\theta\le\frac{\pi}{2}\,. \end{cases} \] In particular, one has $T_1(\theta)<T_2(\theta)$ if $\theta>\vartheta$, and the strict inequality $\max W(\lambda)<T_2(\theta)$ holds for $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$ and $\lambda=(\lambda_0,\lambda_1,\lambda_2,0,\dots)\in D_2(\theta)$ with $\lambda_0=\lambda_1>\lambda_2>0$. The mapping $\bigl[0,\frac{\pi}{2}\bigr]\ni\theta\mapsto T_2(\theta)$ is strictly increasing, continuous on $\bigl[0,\frac{\pi}{2}\bigr]$, and continuously differentiable on $\bigl(0,\frac{\pi}{2}\bigr)\setminus\{\vartheta\}$. \begin{proof} Since $T_2(0)=T_1(0)=0$, the case $\theta=0$ in the representation for $T_2(\theta)$ is obvious. Let $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$ be arbitrary. It follows from Lemmas \ref{lem:paramPermut}, \ref{lem:critPoints}, and \ref{lem:lambda_0} and Corollary \ref{cor:critPoints:ln} that there are only two sequences in $D_2(\theta)\setminus D_1(\theta)$ that need to be considered in order to compute $T_2(\theta)$. One of them is $\mu=(\mu_0,\mu_1,\mu_2,0,\dots)$ with $\mu_0=\mu_1=\mu_2=\frac{1}{\pi}\sin\bigl(\frac{2\theta}{3}\bigr)$.
For this sequence, representation \eqref{eq:limitEquiParam} yields \begin{equation}\label{eq:threeParamsLimitEqui} \max W(\mu) = \frac{1}{2}-\frac{1}{2}\biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^3\,. \end{equation} The other sequence in $D_2(\theta)\setminus D_1(\theta)$ that needs to be considered is $\lambda=(\lambda_0,\lambda_1,\lambda_2,0,\dots)$, where $\frac{4}{\pi^2+4}>\lambda_0=\lambda_1>\frac{2}{\pi^2}>\lambda_2>0$ and $\lambda_0$ and $\lambda_2$ are given by \eqref{eq:critPointsRels} and \eqref{eq:critPointsAlpha}. Using representation \eqref{eq:seqExplRepr}, one obtains \begin{equation}\label{eq:threeParamsLimitNotEqual} \max W(\lambda) = \frac{1}{2}-\frac{1}{2}(1-2\lambda_0)^2(1-2\lambda_2)\,. \end{equation} According to Lemma \ref{app:lem3}, this sequence $\lambda$ can exist only if $\theta$ satisfies the two-sided estimate $\frac{3}{2}\arcsin\bigl(\frac{2}{\pi}\bigr)<\theta\le\arcsin\bigl(\frac{12+\pi^2}{8\pi}\bigr)+\frac{1}{2}\arcsin\bigl(\frac{12-\pi^2}{4\pi}\bigr)$. However, if $\lambda$ exists, combining Lemma \ref{app:lem3} with equations \eqref{eq:threeParamsLimitEqui} and \eqref{eq:threeParamsLimitNotEqual} yields \[ \max W(\lambda) < \max W(\mu)\,. \] Therefore, in order to compute $T_2(\theta)$ for $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$, it remains to compare \eqref{eq:threeParamsLimitEqui} with $T_1(\theta)$. In particular, for every sequence $\lambda=(\lambda_0,\lambda_1,\lambda_2,0,\dots)\in D_2(\theta)$ with $\lambda_0=\lambda_1>\lambda_2>0$ the strict inequality $\max W(\lambda) < T_2(\theta)$ holds.
According to Lemma \ref{app:lem2}, there is a unique $\vartheta\in\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr),\frac{\pi}{2}\bigr)$ such that \[ \Bigl(1-\frac{2}{\pi}\sin\theta\Bigr)^2 < \biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^3 \quad\text{ for }\quad 0 < \theta < \vartheta \] and \[ \Bigl(1-\frac{2}{\pi}\sin\theta\Bigr)^2 > \biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^3 \quad\text{ for }\quad \vartheta < \theta \le \frac{\pi}{2}\,. \] These inequalities imply that $\vartheta$ is the unique solution to equation \eqref{eq:defVartheta} in the interval $\bigl(0,\frac{\pi}{2}\bigr]$. Moreover, taking into account Lemma \ref{lem:twoParams}, equation \eqref{eq:threeParamsLimitEqui}, and the inequality $\vartheta>\arcsin\bigl(\frac{2}{\pi}\bigr)$, it follows that $T_1(\theta) < \max W(\mu)$ if and only if $\theta>\vartheta$. This proves the claimed representation for $T_2(\theta)$. By Lemma \ref{lem:twoParams} and the choice of $\vartheta$ it is obvious that the mapping $\bigl[0,\frac{\pi}{2}\bigr]\ni\theta\mapsto T_2(\theta)$ is strictly increasing, continuous on $\bigl[0,\frac{\pi}{2}\bigr]$, and continuously differentiable on $\bigl(0,\frac{\pi}{2}\bigr)\setminus\{\vartheta\}$. \end{proof} \end{lemma} In order to prove Theorem \ref{thm:solOptProb}, it remains to show that $T(\theta)$ coincides with $T_2(\theta)$. \begin{proposition}\label{prop:solOptProb} For every $\theta\in\bigl[0,\frac{\pi}{2}\bigr]$ and $n\ge 2$ one has $T(\theta)=T_n(\theta)=T_2(\theta)$. \begin{proof} Since $T(0)=0$, the case $\theta=0$ is obvious. Let $\theta\in\bigl(0,\frac{\pi}{2}\bigr]$ be arbitrary. As a result of equation \eqref{eq:supOptn}, it suffices to show that $T_n(\theta)=T_2(\theta)$ for all $n\ge 3$. Let $n\ge 3$ and let $\lambda=(\lambda_j)\in D_n(\theta)\setminus D_{n-1}(\theta)$. The objective is to show that $\max W(\lambda)<T_n(\theta)$. First, assume that $\lambda_0=\dots=\lambda_n=\frac{1}{\pi}\sin\bigl(\frac{2\theta}{n+1}\bigr)>0$.
We examine the two cases $\lambda_0<\frac{2}{\pi^2}$ and $\lambda_0\ge\frac{2}{\pi^2}$. If $\lambda_0<\frac{2}{\pi^2}$, then $2M(\lambda_0)<\arcsin\bigl(\frac{2}{\pi}\bigr)$. In this case, it follows from Lemma \ref{lem:twoParams} that $\max W\bigl((\lambda_0,\lambda_0,0,\dots)\bigr)<T_1(\tilde\theta)$ with $\tilde\theta=2M(\lambda_0)$. Hence, by Lemma \ref{lem:paramSubst} one has $\max W(\lambda)<T_n(\theta)$. If $\lambda_0\ge\frac{2}{\pi^2}$, then \[ (n+1)\arcsin\Bigl(\frac{2}{\pi}\Bigr) \le 2(n+1)M(\lambda_0) = 2\theta \le \pi\,, \] which is possible only if $n\le3$, that is, $n=3$. In this case, one has $\lambda_0=\frac{1}{\pi}\sin\bigl(\frac{\theta}{2}\bigr)$. Taking into account representation \eqref{eq:limitEquiParam}, it follows from Lemma \ref{app:lem4} that \[ \max W(\lambda)=\frac{1}{2}-\frac{1}{2}\biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{\theta}{2}\Bigr)\biggr)^4 < \frac{1}{2}-\frac{1}{2}\biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^3 \le T_2(\theta)\le T_n(\theta)\,. \] So, one concludes that $\max W(\lambda)<T_n(\theta)$ again. Now, assume that $\lambda=(\lambda_j)\in D_n(\theta)\setminus D_{n-1}(\theta)$ satisfies $\lambda_0=\dots=\lambda_{n-1}>\lambda_n > 0$. Since, in particular, $\lambda_{n-2}=\lambda_{n-1}>\lambda_n>0$, Lemma \ref{lem:threeParams} implies that \[ \max W\bigl((\lambda_{n-2},\lambda_{n-1},\lambda_n,0,\dots)\bigr) < T_2(\tilde\theta)\quad\text{ with }\quad \tilde\theta = \sum_{j=n-2}^n M(\lambda_j)\,. \] It follows from Lemmas \ref{lem:paramPermut} and \ref{lem:paramSubst} that \[ \max W(\lambda) = \max W\bigl((\lambda_{n-2},\lambda_{n-1},\lambda_n,\lambda_0,\dots,\lambda_{n-3},0,\dots)\bigr)<T_n(\theta)\,, \] that is, $\max W(\lambda)< T_n(\theta)$ once again. 
Hence, by Lemmas \ref{lem:paramPermut}, \ref{lem:critPoints}, and \ref{lem:lambda_0} and Corollary \ref{cor:critPoints:ln} the inequality $\max W(\lambda) < T_n(\theta)$ holds for all $\lambda\in D_n(\theta)\setminus D_{n-1}(\theta)$, which implies that $T_n(\theta)=T_{n-1}(\theta)$. Now the claim follows by induction. \end{proof} \end{proposition} We close this section with the following observation, which, together with Remark \ref{rem:estOptimality} above, shows that the estimate from Theorem \ref{thm:mainResult} is indeed stronger than the previously known estimates. \begin{remark}\label{rem:AMvsS} It follows from the previous considerations that \[ x < T(M_*(x))\quad\text{ for }\quad \frac{4}{\pi^2+4}<x\le c_*\,, \] where $c_*\in\bigl(0,\frac{1}{2}\bigr)$ and $M_*\colon[0,c_*]\to\bigl[0,\frac{\pi}{2}\bigr]$ are given by \eqref{eq:AMConst} and \eqref{eq:AMFunc}, respectively. Indeed, let $x\in\bigl(\frac{4}{\pi^2+4},c_*\bigr]$ be arbitrary and set $\theta:=M_*(x)>\frac{1}{2}\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)$. Define $\lambda\in D_2(\theta)$ by \[ \lambda:=\Bigl(\frac{4}{\pi^2+4},\frac{1}{\pi}\sin\Bigl(2\theta-\arcsin\Bigl(\frac{4\pi}{\pi^2+4}\Bigr)\Bigr),0,\dots\Bigr) \quad\text{ if }\quad \theta\le\arcsin\Bigl(\frac{4\pi}{\pi^2+4}\Bigr) \] and by \[ \lambda:=\Bigl(\frac{4}{\pi^2+4},\frac{4}{\pi^2+4},\frac{1}{\pi}\sin\Bigl(2\theta-2\arcsin\Bigl(\frac{4\pi}{\pi^2+4}\Bigr)\Bigr),0,\dots\Bigr)\quad\text{ if }\quad \theta>\arcsin\Bigl(\frac{4\pi}{\pi^2+4}\Bigr)\,. \] Using representation \eqref{eq:seqExplRepr}, a straightforward calculation shows that in both cases one has \[ x=M_*^{-1}(\theta)=\max W(\lambda)\,. \] If $\theta=\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)>\vartheta$ (cf.\ Remark \ref{rem:kappaRepl}), that is, $\lambda=\bigl(\frac{4}{\pi^2+4},\frac{4}{\pi^2+4},0,\dots\bigr)$, then it follows from Lemma \ref{lem:threeParams} that $\max W(\lambda) \le T_1(\theta) < T_2(\theta)$.
If $\theta\neq\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)$, then the inequality $\max W(\lambda)<T_2(\theta)$ holds since, in this case, $\lambda$ is none of the critical points from Lemma \ref{lem:critPoints}. So, in either case one has $x=\max W(\lambda)<T_2(\theta)=T(\theta)=T(M_*(x))$. \end{remark} \begin{appendix} \section{Proofs of some inequalities}\label{app:sec:inequalities} \begin{lemma}\label{app1:lem1} The following inequalities hold: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item $\frac{2}{\pi^2} + \frac{2\pi-4}{\pi^2}\sin^2\theta < \frac{2}{\pi}\left(1-\frac{1}{\pi}\sin\theta\right)\sin\theta$ \quad\text{ for }\quad $\arcsin\bigl(\frac{1}{\pi-1}\bigr) < \theta < \frac{\pi}{2}$\,,\label{app:lem1:a} \item $\frac{1}{\pi}\sin(2\theta) < \frac{2}{\pi^2} + \frac{\pi^2-4}{2\pi^2}\sin^2\theta$\quad\text{ for }\quad $\arctan\bigl(\frac{2}{\pi}\bigr)<\theta\le\frac{\pi}{4}$\,,\label{app:lem1:b} \item $\frac{2}{\pi}\left(1-\frac{1}{\pi}\sin\theta\right)\sin\theta < \frac{2}{\pi^2} + \frac{\pi^2-4}{2\pi^2}\sin^2\theta$ \quad\text{ for }\quad $\theta\neq\arcsin\bigl(\frac{2}{\pi}\bigr)$\,,\label{app:lem1:c} \item $\frac{2}{\pi}\left(1-\frac{1}{\pi}\sin\theta\right)\sin\theta < \frac{1}{\pi}\sin(2\theta)$\quad\text{ for }\quad $0<\theta<2\arctan\bigl(\frac{1}{\pi}\bigr)$\,,\label{app:lem1:d} \item $\frac{2}{\pi}\left(1-\frac{1}{\pi}\sin\theta\right)\sin\theta > \frac{1}{\pi}\sin(2\theta)$\quad\text{ for }\quad $2\arctan\bigl(\frac{1}{\pi}\bigr) < \theta < \pi$\,.\label{app:lem1:e} \end{enumerate} \begin{proof} One has \[ \begin{aligned} \frac{2}{\pi}\Bigl(1-\frac{1}{\pi}\sin\theta\Bigr)\sin\theta &- \Bigl(\frac{2}{\pi^2}+\frac{2\pi-4}{\pi^2}\sin^2\theta\Bigr)\\ &= - \frac{2(\pi-1)}{\pi^2} \Bigl(\sin^2\theta - \frac{\pi}{\pi-1}\sin\theta+\frac{1}{\pi-1}\Bigr)\\ &= - \frac{2(\pi-1)}{\pi^2} \biggl(\Bigl(\sin\theta-\frac{\pi}{2(\pi-1)}\Bigr)^2-\frac{(\pi-2)^2}{4(\pi-1)^2}\biggr), \end{aligned} \] which is strictly positive if and only if \[ 
\Bigl(\sin\theta-\frac{\pi}{2(\pi-1)}\Bigr)^2 < \frac{(\pi-2)^2}{4(\pi-1)^2}\,. \] A straightforward analysis shows that the last inequality holds for $\arcsin\bigl(\frac{1}{\pi-1}\bigr)<\theta<\frac{\pi}{2}$, which proves (\ref{app:lem1:a}). For $\theta_0:=\arctan\bigl(\frac{2}{\pi}\bigr)=\frac{1}{2} \arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)$ one has $\sin(2\theta_0)=\frac{4\pi}{\pi^2+4}$ and $\sin^2\theta_0=\frac{4}{\pi^2+4}$. Thus, the inequality in (b) becomes an equality for $\theta=\theta_0$. Therefore, in order to show (b), it suffices to show that the corresponding estimate holds for the derivatives of both sides of the inequality, that is, \[ \frac{2}{\pi}\cos(2\theta) < \frac{\pi^2-4}{2\pi^2}\sin(2\theta)\quad\text{ for }\quad \theta_0 < \theta < \frac{\pi}{4}\,. \] This inequality is equivalent to $\tan(2\theta)>\frac{4\pi}{\pi^2-4}$ for $\theta_0<\theta<\frac{\pi}{4}$, which, in turn, follows from $\tan(2\theta_0)=\frac{2\tan\theta_0}{1-\tan^2\theta_0}=\frac{4\pi}{\pi^2-4}$. This implies (\ref{app:lem1:b}). The claim (\ref{app:lem1:c}) follows immediately from \[ \frac{2}{\pi^2} + \frac{\pi^2-4}{2\pi^2}\sin^2\theta - \frac{2}{\pi}\Bigl(1-\frac{1}{\pi}\sin\theta\Bigr)\sin\theta = \frac{1}{2}\Bigl(\frac{2}{\pi}-\sin\theta\Bigr)^2\,. \] Finally, observe that \begin{equation}\label{app:eq:lem1:de} \frac{1}{\pi}\sin(2\theta) - \frac{2}{\pi}\Bigl(1-\frac{1}{\pi}\sin\theta\Bigr)\sin\theta= \frac{2}{\pi}\Bigl(\cos\theta-1+\frac{1}{\pi}\sin\theta\Bigr)\sin\theta\,. \end{equation} For $0<\theta<\pi$, the right-hand side of \eqref{app:eq:lem1:de} is positive if and only if $\frac{1-\cos\theta}{\sin\theta}=\tan\bigl(\frac{\theta}{2}\bigr)$ is less than $\frac{1}{\pi}$. This is the case if and only if $\theta < 2\arctan\bigl(\frac{1}{\pi}\bigr)$, which proves (\ref{app:lem1:d}). The proof of claim (\ref{app:lem1:e}) is analogous. 
\end{proof} \end{lemma} \begin{lemma}\label{app:lem2} There is a unique $\vartheta\in\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr),\frac{\pi}{2}\bigr)$ such that \[ \Bigl(1-\frac{2}{\pi}\sin\theta\Bigr)^2 < \biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^3\quad\text{ for }\quad 0 < \theta < \vartheta \] and \[ \Bigl(1-\frac{2}{\pi}\sin\theta\Bigr)^2 > \biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^3\quad\text{ for }\quad \vartheta < \theta \le\frac{\pi}{2}\,. \] \begin{proof} Define $u,v,w\colon\R\to\R$ by \[ u(\theta):=\sin\Bigl(\frac{2\theta}{3}\Bigr),\quad v(\theta):=\frac{\pi}{2}-\frac{\pi}{2}\Bigl(1-\frac{2}{\pi}\sin\theta\Bigr)^{2/3}\,,\quad\text{ and }\quad w(\theta):=u(\theta) - v(\theta)\,. \] Obviously, the claim is equivalent to the existence of $\vartheta\in\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr),\frac{\pi}{2}\bigr)$ such that $w(\theta)<0$ for $0<\theta<\vartheta$ and $w(\theta)>0$ for $\vartheta<\theta\le\frac{\pi}{2}$. Observe that $u'''(\theta)=-\frac{8}{27}\cos\bigl(\frac{2\theta}{3}\bigr)<0$ for $0\le\theta\le\frac{\pi}{2}$. In particular, $u''$ is strictly decreasing on the interval $\bigl[0,\frac{\pi}{2}\bigr]$. Moreover, $u'''$ is strictly increasing on $\bigl[0,\frac{\pi}{2}\bigr]$, so that the inequality $u'''\ge u'''(0)=-\frac{8}{27}>-\frac{1}{2}$ holds on $\bigl[0,\frac{\pi}{2}\bigr]$. One computes \begin{equation}\label{app:eq:lem2:d4v} v^{(4)}(\theta) = \frac{2\pi^{1/3}}{81}\,\frac{p(\sin\theta)}{(\pi-2\sin\theta)^{10/3}}\quad\text{ for }\quad 0\le\theta\le\frac{\pi}{2}\,, \end{equation} where \[ p(x)=224-72\pi^2 + 27\pi^3x - (160+36\pi^2)x^2 + 108\pi x^3 - 64x^4\,. \] The polynomial $p$ is strictly increasing on $[0,1]$ and has exactly one root in the interval $(0,1)$. Combining this with equation \eqref{app:eq:lem2:d4v}, one obtains that $v^{(4)}$ has a unique zero in $\bigl(0,\frac{\pi}{2}\bigr)$ and that $v^{(4)}$ changes its sign from minus to plus there.
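Here, the existence of the root of $p$ follows from \[ p(0) = 224-72\pi^2 < 0 \quad\text{ and }\quad p(1) = 27\pi^3 - 108\pi^2 + 108\pi = 27\pi(\pi-2)^2 > 0\,. \]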
Observing that $v'''(0)<-\frac{1}{2}$ and $v'''\bigl(\frac{\pi}{2}\bigr)=0$, this yields $v'''< 0$ on $\bigl[0,\frac{\pi}{2}\bigr)$, that is, $v''$ is strictly decreasing on $\bigl[0,\frac{\pi}{2}\bigr]$. Moreover, it is easy to verify that $v'''\bigl(\frac{\pi}{3}\bigr)< v'''(0)$, so that $v'''\le v'''(0)<-\frac{1}{2}$ on $\bigl[0,\frac{\pi}{3}\bigr]$. Since $u'''>-\frac{1}{2}$ on $\bigl[0,\frac{\pi}{2}\bigr]$ as stated above, it follows that $w'''=u'''-v'''>0$ on $\bigl[0,\frac{\pi}{3}\bigr]$, that is, $w''$ is strictly increasing on $\bigl[0,\frac{\pi}{3}\bigr]$. Recall that $u''$ and $v''$ are both decreasing functions on $\bigl[0,\frac{\pi}{2}\bigr]$. Observing the inequality $u''\bigl(\frac{\pi}{2}\bigr) > v''\bigl(\frac{\pi}{3}\bigr)$, one deduces that \begin{equation}\label{app:eq:lem2:ddw} w''(\theta) = u''(\theta) - v''(\theta) \ge u''\Bigl(\frac{\pi}{2}\Bigr) - v''\Bigl(\frac{\pi}{3}\Bigr)>0\quad\text{ for }\quad \theta\in\Bigl[\frac{\pi}{3},\frac{\pi}{2}\Bigr]\,. \end{equation} Moreover, one has $w''(0)<0$. Combining this with \eqref{app:eq:lem2:ddw} and the fact that $w''$ is strictly increasing on $\bigl[0,\frac{\pi}{3}\bigr]$, one concludes that $w''$ has a unique zero in the interval $\bigl(0,\frac{\pi}{2}\bigr)$ and that $w''$ changes its sign from minus to plus there. Since $w'(0)=0$ and $w'\bigl(\frac{\pi}{2}\bigr)=\frac{1}{3}>0$, it follows that $w'$ has a unique zero in $\bigl(0,\frac{\pi}{2}\bigr)$, where it changes its sign from minus to plus. Finally, observing that $w(0)=0$ and $w\bigl(\frac{\pi}{2}\bigr)>0$, in the same way one arrives at the conclusion that $w$ has a unique zero $\vartheta\in\bigl(0,\frac{\pi}{2}\bigr)$ such that $w(\theta)<0$ for $0<\theta<\vartheta$ and $w(\theta)>0$ for $\vartheta<\theta<\frac{\pi}{2}$. As a result of $w\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr)\bigr)<0$, one has $\vartheta>\arcsin\bigl(\frac{2}{\pi}\bigr)$. 
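A numerical evaluation gives $\vartheta\approx 1.13$; in particular, $\vartheta$ is only slightly smaller than $\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)\approx 1.133$ (cf.\ Remark \ref{rem:kappaRepl}).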
\end{proof} \end{lemma} \begin{lemma}\label{app:lem3} For $x\in\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$ let \begin{equation}\label{app:eq:lem3:alphay} \alpha := \frac{\sqrt{1-\pi^2x^2}}{1-2x}\quad\text{ and }\quad y:=\frac{4\alpha^2}{\pi^2+4\alpha^2} - x \,. \end{equation} Then, $\theta:=\arcsin(\pi x)+\frac{1}{2}\arcsin(\pi y)$ satisfies the inequalities \begin{equation}\label{app:eq:lem3:theta} \frac{3}{2}\arcsin\Bigl(\frac{2}{\pi}\Bigr) < \theta \le \arcsin\Bigl(\frac{12+\pi^2}{8\pi}\Bigr)+\frac{1}{2}\arcsin\Bigl(\frac{12-\pi^2}{4\pi}\Bigr) \end{equation} and \begin{equation}\label{app:eq:lem3:estimate} \left(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\right)^3 < (1-2x)^2(1-2y)\,. \end{equation} \begin{proof} One has $1<\alpha<m:=\frac{\pi}{2}\tan\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr)\bigr)$, $y\in\bigl(0,\frac{2}{\pi^2}\bigr)$, and $\alpha=\frac{\sqrt{1-\pi^2y^2}}{1-2y}$ (cf.\ Lemma \ref{lem:critPoints}). Moreover, taking into account that $\alpha^2=\frac{1-\pi^2x^2}{(1-2x)^2}$ by \eqref{app:eq:lem3:alphay}, one computes \begin{equation}\label{app:eq:lem3:y} y=\frac{4-(\pi^2+4)x}{\pi^2+4-4\pi^2x}\,. \end{equation} Observe that $\alpha\to m$ and $y\to\frac{2}{\pi^2}$ as $x\to\frac{2}{\pi^2}$, and that $\alpha\to1$ and $y\to 0$ as $x\to\frac{4}{\pi^2+4}$. With this and taking into account \eqref{app:eq:lem3:y}, it is convenient to consider $\alpha=\alpha(x)$, $y=y(x)$, and $\theta=\theta(x)$ as continuous functions of the variable $x\in\bigl[\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr]$. Straightforward calculations show that \[ 1-2y(x) = \frac{\pi^2-4}{\pi^2+4-4\pi^2x}\cdot (1-2x)\quad\text{ for }\quad \frac{2}{\pi^2}\le x\le \frac{4}{\pi^2+4}\,, \] so that \[ y'(x)=-\frac{(\pi^2-4)^2}{(\pi^2+4-4\pi^2x)^2} = -\frac{(1-2y(x))^2}{(1-2x)^2}\quad\text{ for }\quad \frac{2}{\pi^2}< x< \frac{4}{\pi^2+4}\,.
\] Taking into account that $\alpha(x)=\alpha\bigl(y(x)\bigr)$, that is, $\frac{\sqrt{1-\pi^2x^2}}{\sqrt{1-\pi^2y(x)^2}}=\frac{1-2x}{1-2y(x)}$, this leads to \begin{equation}\label{app:eq:lem3:dtheta} \begin{aligned} \theta'(x) &=\frac{\pi}{\sqrt{1-\pi^2x^2}} + \frac{\pi y'(x)}{2\sqrt{1-\pi^2y(x)^2}} = \frac{\pi}{2\sqrt{1-\pi^2x^2}}\left(2+\frac{1-2x}{1-2y(x)}\cdot y'(x)\right)\\ &=\frac{\pi}{2\sqrt{1-\pi^2x^2}} \left(2-\frac{\pi^2-4}{\pi^2+4-4\pi^2x}\right) = \frac{\pi}{2\sqrt{1-\pi^2x^2}}\cdot\frac{12+\pi^2-8\pi^2x}{\pi^2+4-4\pi^2x}\,. \end{aligned} \end{equation} In particular, $x=\frac{12+\pi^2}{8\pi^2}$ is the only critical point of $\theta$ in the interval $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$ and $\theta'$ changes its sign from plus to minus there. Moreover, using $y\bigl(\frac{2}{\pi^2}\bigr)=\frac{2}{\pi^2}$ and $y\bigl(\frac{4}{\pi^2+4}\bigr)=0$, one has $\theta\bigl(\frac{2}{\pi^2}\bigr)=\frac{3}{2}\arcsin\bigl(\frac{2}{\pi}\bigr)<\arcsin\bigl(\frac{4\pi}{\pi^2+4}\bigr)= \theta\bigl(\frac{4}{\pi^2+4}\bigr)$, so that \[ \frac{3}{2}\arcsin\Bigl(\frac{2}{\pi}\Bigr) < \theta(x) \le \theta\Bigl(\frac{12+\pi^2}{8\pi^2}\Bigr)\quad\text{ for }\quad \frac{2}{\pi^2} < x < \frac{4}{\pi^2+4}\,. \] Since $y\bigl(\frac{12+\pi^2}{8\pi^2}\bigr)=\frac{12-\pi^2}{4\pi^2}$, this proves the two-sided inequality \eqref{app:eq:lem3:theta}. Further calculations show that \begin{equation}\label{app:eq:lem3:d2theta} \theta''(x) = \frac{\pi^3}{2}\,\frac{p(x)}{(1-\pi^2x^2)^{3/2}\left(\pi^2+4-4\pi^2x\right)^2}\quad\text{ for }\quad \frac{2}{\pi^2} < x < \frac{4}{\pi^2+4}\,, \end{equation} where \[ p(x)=16-4\pi^2 + (48+16\pi^2+\pi^4)x - 8\pi^2(12+\pi^2)x^2 + 32\pi^4x^3\,. \] The polynomial $p$ is strictly negative on the interval $\bigl[\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr]$, so that $\theta'$ is strictly decreasing. 
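Note that $\frac{2}{\pi^2}\approx 0.203 < \frac{12+\pi^2}{8\pi^2}\approx 0.277 < \frac{4}{\pi^2+4}\approx 0.288$, so that this critical point indeed lies in the interior of the interval under consideration.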
Define $w\colon\bigl[\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr]\to\R$ by \[ w(x):=(1-2x)^2\cdot \bigl(1-2y(x)\bigr) - \biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta(x)}{3}\Bigr)\biggr)^3\,. \] The claim \eqref{app:eq:lem3:estimate} is equivalent to the inequality $w(x)>0$ for $\frac{2}{\pi^2}<x<\frac{4}{\pi^2+4}$. Since $y\bigl(\frac{2}{\pi^2}\bigr)=\frac{2}{\pi^2}$ and, hence, $\theta\bigl(\frac{2}{\pi^2}\bigr)=\frac{3}{2}\arcsin\bigl(\frac{2}{\pi}\bigr)$, one has $w\bigl(\frac{2}{\pi^2}\bigr)=0$. Moreover, a numerical evaluation gives $w\bigl(\frac{4}{\pi^2+4}\bigr)>0$. Therefore, in order to prove $w(x)>0$ for $\frac{2}{\pi^2}<x<\frac{4}{\pi^2+4}$, it suffices to show that $w$ has exactly one critical point in the interval $\bigl(\frac{2}{\pi^2}, \frac{4}{\pi^2+4}\bigr)$ and that $w$ takes its maximum there. Using \eqref{app:eq:lem3:dtheta} and taking into account that $\sqrt{1-\pi^2x^2}=\alpha(x)(1-2x)$, one computes \[ \begin{aligned} \frac{\dd}{\dd x} (1-2x)^2\bigl(1-2y(x)\bigr) &= -4(1-2x)\bigl(1-2y(x)\bigr) - 2(1-2x)^2y'(x)\\ &= -2(1-2x)\bigl(1-2y(x)\bigr)\left(2+\frac{1-2x}{1-2y(x)}\cdot y'(x)\right)\\ &= -\frac{4}{\pi}(1-2x)^2\bigl(1-2y(x)\bigr)\alpha(x)\theta'(x)\,. \end{aligned} \] Hence, for $\frac{2}{\pi^2}<x<\frac{4}{\pi^2+4}$ one obtains \[ \begin{aligned} w'(x) &=-\frac{4}{\pi}\theta'(x)\cdot \left(\alpha(x) (1-2x)^2\bigl(1-2y(x)\bigr)-\left(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta(x)} {3}\Bigr)\right)^2\cos\Bigl(\frac{2\theta(x)}{3}\Bigr)\right)\\ &=-\frac{4}{\pi}\theta'(x)\cdot \bigl(u(x)-v(x)\bigr)\,, \end{aligned} \] where $u,v\colon\bigl[\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr]\to\R$ are given by \[ u(x):=\alpha(x) (1-2x)^2\bigl(1-2y(x)\bigr)\,,\quad v(x):=\left(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta(x)}{3}\Bigr)\right)^2\cos\Bigl(\frac{2\theta(x)}{3}\Bigr)\,. \] Suppose that the difference $u(x)-v(x)$ is strictly negative for all $x\in\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$. 
In this case, $w'$ and $\theta'$ have the same zeros on $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$, and $w'(x)$ and $\theta'(x)$ have the same sign for all $x\in\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$. Combining this with \eqref{app:eq:lem3:dtheta}, one concludes that $x=\frac{12+\pi^2}{8\pi^2}$ is the only critical point of $w$ in the interval $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$ and that $w$ takes its maximum in this point. Hence, it remains to show that the difference $u-v$ is indeed strictly negative on $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$. Since $\alpha\bigl(\frac{2}{\pi^2}\bigr)=\frac{\pi}{2}\tan\bigl(\arcsin\bigl(\frac{2}{\pi}\bigr)\bigr)$, $y\bigl(\frac{2}{\pi^2}\bigr)=\frac{2}{\pi^2}$, and $\theta\bigl(\frac{2}{\pi^2}\bigr)=\frac{3}{2}\arcsin\bigl(\frac{2}{\pi}\bigr)$, it is easy to verify that $u\bigl(\frac{2}{\pi^2}\bigr)=v\bigl(\frac{2}{\pi^2}\bigr)$ and $u'\bigl(\frac{2}{\pi^2}\bigr)=v'\bigl(\frac{2}{\pi^2}\bigr)<0$. Therefore, it suffices to show that $u'<v'$ holds on the whole interval $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$. One computes \begin{equation}\label{app:eq:lem3:d2u} u''(x) = \frac{(\pi^2-4)q(x)}{(1-\pi^2x^2)^{3/2}(\pi^2+4-4\pi^2x)^3} \end{equation} where \[ \begin{aligned} q(x) &= (128-80\pi^2-\pi^6)+12\pi^2(\pi^2+4)^2x-12\pi^2(7\pi^4+24\pi^2+48)x^2\\ &\quad+32\pi^4(5\pi^2+12)x^3+24\pi^4(\pi^4+16)x^4 -96\pi^6(\pi^2+4)x^5 + 128\pi^8x^6\,. \end{aligned} \] A further analysis shows that $q''$, which is a polynomial of degree $4$, has exactly one root in the interval $\bigl[\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr]$ and that $q''$ changes its sign from minus to plus there. Moreover, $q'$ takes a positive value in this root of $q''$, so that $q'>0$ on $\bigl[\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr]$, that is, $q$ is strictly increasing on this interval. Since $q\bigl(\frac{4}{\pi^2+4}\bigr)<0$, one concludes that $q<0$ on $\bigl[\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr]$. 
It follows from \eqref{app:eq:lem3:d2u} that $u''<0$ on $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$, so that $u'$ is strictly decreasing. In particular, one has $u'<u'\bigl(\frac{2}{\pi^2}\bigr)<0$ on $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$. A straightforward calculation yields \begin{equation}\label{app:eq:lem3:dv} v'(x) = -\frac{2}{3}\biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta(x)}{3}\Bigr)\biggr)\cdot \theta'(x)\cdot r\biggl(\sin\Bigl(\frac{2\theta(x)}{3}\Bigr)\biggr)\,, \end{equation} where $r(t)=\frac{4}{\pi}+t-\frac{6}{\pi}t^2$. The polynomial $r$ is positive and strictly decreasing on the interval $\bigl[\frac{1}{2},1\bigr]$. Moreover, taking into account \eqref{app:eq:lem3:theta}, one has $\frac{1}{2}<\sin\bigl(\frac{2\theta(x)}{3}\bigr)<1$. Combining this with equation \eqref{app:eq:lem3:dv}, one deduces that $v'(x)$ has the opposite sign of $\theta'(x)$ for all $\frac{2}{\pi^2}<x<\frac{4}{\pi^2+4}$. In particular, by \eqref{app:eq:lem3:dtheta} it follows that $v'(x)\ge0$ if $x\ge\frac{12+\pi^2}{8\pi^2}$. Since $u'<0$ on $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$, this implies that $v'(x)>u'(x)$ for $\frac{12+\pi^2}{8\pi^2}\le x<\frac{4}{\pi^2+4}$. If $\frac{2}{\pi^2}<x<\frac{12+\pi^2}{8\pi^2}$, then one has $\theta'(x)>0$. In particular, $\theta$ is strictly increasing on $\bigl(\frac{2}{\pi^2},\frac{12+\pi^2}{8\pi^2}\bigr)$. Recall that $\theta'$ is strictly decreasing by \eqref{app:eq:lem3:d2theta}. Combining all this with equation \eqref{app:eq:lem3:dv} again, one deduces that on the interval $\bigl(\frac{2}{\pi^2},\frac{12+\pi^2}{8\pi^2}\bigr)$ the function $-v'$ can be expressed as a product of three positive, strictly decreasing terms. Hence, on this interval $v'$ is negative and strictly increasing.
Recall that $u'<u'\bigl(\frac{2}{\pi^2}\bigr)=v'\bigl(\frac{2}{\pi^2}\bigr)$ on $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$, which now implies that \[ u'(x) < u'\Bigl(\frac{2}{\pi^2}\Bigr) = v'\Bigl(\frac{2}{\pi^2}\Bigr) < v'(x)\quad\text{ for }\quad \frac{2}{\pi^2} < x < \frac{12+\pi^2}{8\pi^2}\,. \] Since the inequality $u'(x)<v'(x)$ has already been shown for $x\ge\frac{12+\pi^2}{8\pi^2}$, one concludes that $u'<v'$ holds on the whole interval $\bigl(\frac{2}{\pi^2},\frac{4}{\pi^2+4}\bigr)$. This completes the proof. \end{proof} \end{lemma} \begin{lemma}\label{app:lem4} One has \[ \biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^3 < \left(1-\frac{2}{\pi}\sin\Bigl(\frac{\theta}{2}\Bigr)\right)^4 \quad\text{ for }\quad 0<\theta\le\frac{\pi}{2}\,. \] \begin{proof} The proof is similar to the one of Lemma \ref{app:lem2}. Define $u,v,w\colon\R\to\R$ by \[ u(\theta):=\sin\Bigl(\frac{\theta}{2}\Bigr)\,,\quad v(\theta):=\frac{\pi}{2}-\frac{\pi}{2}\biggl(1-\frac{2}{\pi}\sin\Bigl(\frac{2\theta}{3}\Bigr)\biggr)^{3/4}\,, \quad\text{ and }\quad w(\theta):=u(\theta)-v(\theta)\,. \] Obviously, the claim is equivalent to the inequality $w(\theta)<0$ for $0<\theta\le\frac{\pi}{2}$. Observe that $u'''(\theta)=-\frac{1}{8}\cos\bigl(\frac{\theta}{2}\bigr)<0$ for $0\le\theta\le\frac{\pi}{2}$. In particular, $u'''$ is strictly increasing on $\bigl[0,\frac{\pi}{2}\bigr]$ and satisfies $u'''\ge u'''(0)=-\frac{1}{8}$. One computes \begin{equation}\label{app:eq:lem4:d4v} v^{(4)}(\theta) = \frac{\pi^{1/4}}{54}\,\frac{p\Bigl(\sin\bigl(\frac{2\theta}{3}\bigr)\Bigr)} {\Bigl(\pi-2\sin\bigl(\frac{2\theta}{3}\bigr)\Bigr)^{13/4}}\quad\text{ for }\quad 0\le\theta\le\frac{\pi}{2}\,, \end{equation} where \[ p(x) = 45-16\pi^2 + 4\pi(1+2\pi^2)x - (34+20\pi^2)x^2 + 44\pi x^3 - 27x^4\,. \] The polynomial $p$ is strictly increasing on $\bigl[0,\frac{\sqrt{3}}{2}\bigr]$ and has exactly one root in the interval $\bigl(0,\frac{\sqrt{3}}{2}\bigr)$.
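Here, the interval $\bigl[0,\frac{\sqrt{3}}{2}\bigr]$ is the relevant one since $\sin\bigl(\frac{2\theta}{3}\bigr)$ ranges over $\bigl[0,\frac{\sqrt{3}}{2}\bigr]$ for $0\le\theta\le\frac{\pi}{2}$, and the existence of the root follows from $p(0)=45-16\pi^2<0$ together with a direct evaluation showing that $p\bigl(\frac{\sqrt{3}}{2}\bigr)>0$.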
Combining this with equation \eqref{app:eq:lem4:d4v}, one obtains that $v^{(4)}$ has a unique zero in the interval $\bigl(0,\frac{\pi}{2}\bigr)$ and that $v^{(4)}$ changes its sign from minus to plus there. Moreover, it is easy to verify that $v'''\bigl(\frac{\pi}{2}\bigr)<v'''(0)<-\frac{1}{8}$. Hence, one has $v'''<-\frac{1}{8}$ on $\bigl[0,\frac{\pi}{2}\bigr]$. Since $u'''\ge-\frac{1}{8}$ on $\bigl[0,\frac{\pi}{2}\bigr]$ as stated above, this implies that $w'''=u'''-v'''>0$ on $\bigl[0,\frac{\pi}{2}\bigr]$, that is, $w''$ is strictly increasing on $\bigl[0,\frac{\pi}{2}\bigr]$. With $w''(0)<0$ and $w''\bigl(\frac{\pi}{2}\bigr)>0$ one deduces that $w''$ has a unique zero in $\bigl(0,\frac{\pi}{2}\bigr)$ and that $w''$ changes its sign from minus to plus there. Since $w'(0)=0$ and $w'\bigl(\frac{\pi}{2}\bigr)>0$, it follows that $w'$ has a unique zero in $\bigl(0,\frac{\pi}{2}\bigr)$, where it changes its sign from minus to plus. Finally, observing that $w(0)=0$ and $w\bigl(\frac{\pi}{2}\bigr)<0$, one concludes that $w(\theta)<0$ for $0<\theta\le\frac{\pi}{2}$. \end{proof} \end{lemma} \end{appendix} \section*{Acknowledgements} The author is indebted to his Ph.D.\ advisor Vadim Kostrykin for introducing him to this field of research and fruitful discussions. The author would also like to thank Andr\'{e} H\"anel for a helpful conversation.
\section{Introduction} In this paper we study a general class of quantum Hamiltonians which are relevant to a number of different physical systems. The Hamiltonians we consider are defined on a $2N$ dimensional phase space $\{x_1,...,x_N,p_1,...,p_N\}$ with $[x_i, p_j] = i \delta_{ij}$. They take the form \begin{align} H = H_0 - U \sum_{i=1}^M \cos(C_i) \label{genHamc} \end{align} where $H_0$ is a \emph{quadratic} function of the phase space variables $\{x_1,...,x_N,p_1,...,p_N\}$, and $C_i$ is \emph{linear} in these variables. The $C_i$'s can be chosen arbitrarily except for two restrictions: \begin{enumerate} \item{$\{C_1,...,C_M\}$ are linearly independent.} \item{$[C_i, C_j]$ is an integer multiple of $2\pi i$ for all $i,j$.} \end{enumerate} Here, the significance of the second condition is that it guarantees that the cosine terms commute with one another: $[\cos(C_i),\cos(C_j)] = 0$ for all $i,j$. For small $U$, we can straightforwardly analyze these Hamiltonians by treating the cosine terms as perturbations to $H_0$. But how can we study these systems when $U$ is large? The most obvious approach is to expand around $U = \infty$, just as in the small $U$ case we expand around $U=0$. But in order to make such an expansion, we first need to be able to solve these Hamiltonians exactly in the infinite $U$ limit. The purpose of this paper is to describe a systematic procedure for obtaining such a solution, at least at low energies. The basic idea underlying our solution is that when $U$ is very large, the cosine terms act as \emph{constraints} at low energies. Thus, the low energy spectrum of $H$ can be described by an effective Hamiltonian $H_{\text{eff}}$ defined within a constrained Hilbert space $\mathcal{H}_{\text{eff}}$. This effective Hamiltonian $H_{\text{eff}}$ is quadratic in $\{x_1,...,x_N,p_1,...,p_N\}$ since $H_0$ is quadratic and the constraints are linear in these variables. 
We can therefore diagonalize $H_{\text{eff}}$ and in this way determine the low energy properties of $H$. Our main result is a general recipe for finding the exact low energy spectrum of $H$ in the limit $U \rightarrow \infty$. This recipe consists of two steps and is only slightly more complicated than what is required to solve a conventional quadratic Hamiltonian. The first step involves finding creation and annihilation operators for the low energy effective Hamiltonian $H_{\text{eff}}$ (Eq. \ref{auxeqsumm}). The second step of the recipe involves finding integer linear combinations of the $C_i$'s that have simple commutation relations with one another. In practice, this step amounts to finding a change of basis that puts a particular integer skew-symmetric matrix into canonical form (Eq. \ref{zprime}). Once these two steps are completed, the low energy spectrum can be written down immediately (Eq. \ref{energyspectsumm}). In addition to our exact solution in the infinite $U$ limit, we also discuss how to analyze these systems when $U$ is large but finite. In particular, we show that in the finite $U$ case, we need to add small (non-quadratic) corrections to the effective Hamiltonian $H_{\text{eff}}$ in order to reproduce the low energy physics of $H$. One of our key results is a discussion of the general form of these finite $U$ corrections, and how they scale with $U$. Our results are useful because there are a number of physical systems where one needs to understand the effect of cosine-like interactions on a quadratic Hamiltonian. An important class of examples are the edges of Abelian fractional quantum Hall (FQH) states. 
Previously it has been argued that a general Abelian FQH edge can be modeled as a collection of $p$ chiral Luttinger liquids with Hamiltonian~\cite{wenedge, 1991-FrohlichKerler, WenReview, WenBook} \begin{equation*} H_0 = \int dx \frac{1}{4\pi} (\partial_x \Phi)^T V (\partial_x \Phi) \end{equation*} Here $\Phi(x) = (\phi_1(x),...,\phi_p(x))$, with each component $\phi_I$ describing a different (bosonized) edge mode, while $V$ is a $p \times p$ matrix that describes the velocities and density-density interactions between the different edge modes. The commutation relations for the $\phi_I$ operators are $[\phi_I(x), \partial_y \phi_J(y)] = 2\pi i (K^{-1})_{IJ} \delta(x-y)$ where $K$ is a symmetric, integer $p \times p$ matrix which is determined by the bulk FQH state. The above Hamiltonian $H_0$ is quadratic and hence exactly soluble, but in many cases it is unrealistic because it describes an edge in which electrons do not scatter between the different edge modes. In order to incorporate scattering into the model, we need to add terms to the Hamiltonian of the form \begin{equation*} H_{\text{scat}} = \int U(x) \cos(\Lambda^T K \Phi - \alpha(x)) dx \end{equation*} where $\Lambda$ is a $p$-component integer vector that describes the number of electrons scattered from each edge mode~\cite{kanefisher}. However, it is usually difficult to analyze the effect of these cosine terms beyond the small $U$ limit where perturbative techniques are applicable. (An important exception is when $\Lambda$ is a null vector~\cite{Haldane}, i.e. $\Lambda^T K \Lambda = 0$: in this case, the fate of the edge modes can be determined by mapping the system onto a Sine-Gordon model~\cite{wang2013weak}). Our results can provide insight into this class of systems because they allow us to construct exactly soluble toy models that capture the effect of electron scattering at the edge. 
Such toy models can be obtained by replacing the above continuum scattering term $H_{\text{scat}}$ by a collection of discrete impurity scatterers, $U \sum_i \cos(\Lambda^T K \Phi(x_i) - \alpha_i)$, and then taking the limit $U \rightarrow \infty$. It is not hard to see that the latter cosine terms obey conditions (1) and (2) from above, so we can solve the resulting models exactly using our general recipe. Importantly, this construction is valid for \emph{any} choice of $\Lambda$, whether or not $\Lambda$ is a null vector. Although the application to FQH edge states is one of the most interesting aspects of our results, our focus in this paper is on the general formalism rather than the implications for specific physical systems. Therefore, we will only present a few simple examples involving a fractional quantum spin Hall edge with different types of impurities. The primary purpose of these examples is to demonstrate how our formalism works rather than to obtain novel results. We now discuss the relationship with previous work. One paper that explores some related ideas is Ref.~\onlinecite{gottesman2001encoding}. In that paper, Gottesman, Kitaev, and Preskill discussed Hamiltonians similar to (\ref{genHamc}) for the case where the $C_i$ operators do not commute, i.e. $[C_i, C_j] \neq 0$. They showed that these Hamiltonians can have degenerate ground states and proposed using these degenerate states to realize qubits in continuous variable quantum systems. Another line of research that has connections to the present work involves the problem of understanding constraints in quantum mechanics. In particular, a number of previous works have studied the problem of a quantum particle that is constrained to move on a surface by a strong confining potential~\cite{JensenKoppe,da1981quantum}. 
This problem is similar in spirit to the one we study here, particularly for the special case where $[C_i, C_j] = 0$: in that case, if we identify $C_i$ as position coordinates $x_i$, then the Hamiltonian (\ref{genHamc}) can be thought of as describing a particle that is constrained to move on a periodic array of hyperplanes. Our proposal to apply our formalism to FQH edge states also has connections to the previous literature. In particular, it has long been known that the problem of an impurity in a non-chiral Luttinger liquid has a simple exact solution in the limit of infinitely strong backscattering~\cite{kane1992transmission, kane1992transport,chamon, chamon0}. The infinite backscattering limit for a single impurity has also been studied for more complicated Luttinger liquid systems~\cite{ChklovskiiHalperin,chamon1997distinct, PhysRevLett.99.066803, ganeshan2012fractional}. The advantage of our approach to these systems is that our methods allow us to study not just single impurities but also multiple coherently coupled impurities, and to obtain the full quantum dynamics, not just transport properties. The paper is organized as follows. In section \ref{summsect} we summarize our formalism and main results. In section \ref{examsect} we illustrate our formalism with some examples involving fractional quantum spin Hall edges with impurities. We discuss directions for future work in the conclusion. The appendices contain the general derivation of our formalism as well as other technical results. \section{Summary of results}\label{summsect} \subsection{Low energy effective theory} \label{effsummsect} Our first result is an \emph{effective theory} that describes the low energy spectrum of \begin{align*} H = H_0 - U \sum_{i=1}^M \cos(C_i) \end{align*} in the limit $U \rightarrow \infty$. This effective theory consists of an effective Hamiltonian $H_{\text{eff}}$ and an effective Hilbert space $\mathcal{H}_{\text{eff}}$. 
Conveniently, we find a simple algebraic expression for $H_{\text{eff}}$ and $\mathcal{H}_{\text{eff}}$ that holds in the most general case. Specifically, the effective Hamiltonian is given by \begin{equation} H_{\text{eff}} = H_0 - \sum_{i,j=1}^{M} \frac{(\mathcal{M}^{-1})_{ij}}{2} \cdot \Pi_i \Pi_j \label{Heffgenc} \end{equation} where the operators $\Pi_1,..., \Pi_{M}$ are defined by \begin{equation} \Pi_{i} = \frac{1}{2\pi i} \sum_{j=1}^{M} \mathcal{M}_{ij} [C_j, H_0] \end{equation} and where $\mathcal{M}_{ij}$ is an $M \times M$ matrix defined by \begin{equation} \mathcal{M} = \mathcal{N}^{-1}, \quad \mathcal{N}_{ij} = -\frac{1}{4\pi^2}[C_i,[C_j,H_0]] \end{equation} This effective Hamiltonian is defined on an effective Hilbert space $\mathcal{H}_{\text{eff}}$, which is a \emph{subspace} of the original Hilbert space $\mathcal{H}$, and consists of all states $|\psi\>$ that satisfy \begin{equation} \cos(C_i)|\psi\> = |\psi\>, \quad i=1,...,M \label{Hilbeff} \end{equation} A few remarks about these formulas: first, notice that $\mathcal{M}$ and $\mathcal{N}$ are matrices of \emph{$c$-numbers} since $H_0$ is quadratic and the $C_i$'s are linear combinations of $x_j$'s and $p_j$'s. Also notice that the $\Pi_i$ operators are linear functions of $\{x_1,...,x_N,p_1,...,p_N\}$. These observations imply that the effective Hamiltonian $H_{\text{eff}}$ is always \emph{quadratic}. Another important point is that the $\Pi_i$ operators are conjugate to the $C_i$'s: \begin{equation} [C_i, \Pi_j] = 2\pi i \delta_{ij} \end{equation} This means that we can think of the $\Pi_i$'s as generalized momentum operators. Finally, notice that \begin{equation} [C_i, H_{\text{eff}}] = 0 \label{CiH0} \end{equation} The significance of this last equation is that it shows that the Hamiltonian $H_{\text{eff}}$ can be naturally defined within the above Hilbert space (\ref{Hilbeff}). We can motivate this effective theory as follows. 
First, it is natural to expect that the lowest energy states in the limit $U \rightarrow \infty$ are those that minimize the cosine terms. This leads to the effective Hilbert space given in Eq. \ref{Hilbeff}. Second, it is natural to expect that the dynamics in the $C_1,...,C_M$ directions freezes out at low energies. Hence, the terms that generate this dynamics, namely $\sum_{ij} \frac{(\mathcal{M}^{-1})_{ij}}{2}\Pi_i \Pi_j$, should be removed from the effective Hamiltonian. This leads to Eq. \ref{Heffgenc}. Of course this line of reasoning is just an intuitive picture; for a formal derivation of the effective theory, we refer the reader to appendix \ref{derivsect}. At what energy scale is the above effective theory valid? We show that $H_{\text{eff}}$ correctly reproduces the energy spectrum of $H$ for energies less than $\sqrt{U/m}$ where $m$ is the maximum eigenvalue of $\mathcal{M}_{ij}$. One implication of this result is that our effective theory is only valid if $\mathcal{N}$ is \emph{non-degenerate}: if $\mathcal{N}$ were degenerate then $\mathcal{M}$ would have an infinitely large eigenvalue, which would mean that there would be no energy scale below which our theory is valid. Physically, the reason that our effective theory breaks down when $\mathcal{N}$ is degenerate is that in this case, the dynamics in the $C_1,...,C_M$ directions does not completely freeze out at low energies. To see an example of these results, consider a one dimensional harmonic oscillator with a cosine term: \begin{equation} H = \frac{p^2}{2m} + \frac{K x^2}{2} - U \cos(2\pi x) \end{equation} In this case, we have $H_0 = \frac{p^2}{2m} + \frac{K x^2}{2}$ and $C = 2\pi x$. If we substitute these expressions into Eq. \ref{Heffgenc}, a little algebra gives \begin{equation*} H_{\text{eff}} = \frac{K x^2}{2} \end{equation*} As for the effective Hilbert space, Eq. 
\ref{Hilbeff} tells us that $\mathcal{H}_{\text{eff}}$ consists of position eigenstates \begin{equation*} \mathcal{H}_{\text{eff}} = \{|x=q\>, \quad q = \text{(integer)}\} \end{equation*} If we now diagonalize the effective Hamiltonian within the effective Hilbert space, we obtain eigenstates $|x=q\>$ with energies $E = \frac{K q^2}{2}$. Our basic claim is that these eigenstates and energies should match the low energy spectrum of $H$ in the $U \rightarrow \infty$ limit. In appendix \ref{Heffex1sect}, we analyze this example in detail and we confirm that this claim is correct (up to a constant shift in the energy spectrum). To see another illustrative example, consider a one dimensional harmonic oscillator with \emph{two} cosine terms, \begin{equation} H = \frac{p^2}{2m} + \frac{K x^2}{2} - U \cos(d p) - U \cos(2\pi x) \label{example2summ} \end{equation} where $d$ is a positive integer. This example is fundamentally different from the previous one because the arguments of the cosines do not commute: $[x,p] \neq 0$. This property leads to some new features, such as degeneracies in the low energy spectrum. To find the effective theory in this case, we note that $H_0 = \frac{p^2}{2m} + \frac{K x^2}{2}$ and $C_1 = dp$, $C_2 = 2\pi x$. With a little algebra, Eq. \ref{Heffgenc} gives \begin{equation*} H_{\text{eff}} = 0 \end{equation*} As for the effective Hilbert space, Eq. \ref{Hilbeff} tells us that $\mathcal{H}_{\text{eff}}$ consists of all states $|\psi\>$ satisfying \begin{equation*} \cos(2\pi x) |\psi\> = \cos(dp) |\psi\> = |\psi\>. \end{equation*} One can check that there are $d$ linearly independent states obeying the above conditions; hence if we diagonalize the effective Hamiltonian within the effective Hilbert space, we obtain $d$ exactly degenerate eigenstates with energy $E = 0$. The prediction of our formalism is therefore that $H$ has a $d$-fold ground state degeneracy in the $U \rightarrow \infty$ limit. 
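The first of the two examples above also lends itself to a direct numerical consistency check of Eqs. \ref{Heffgenc}--\ref{Hilbeff}. The sketch below is our own illustration (with arbitrary illustrative values $m=2$, $K=3$), not a computation taken from the appendices: it represents $x$ and $p$ as truncated harmonic oscillator matrices, evaluates the commutators defining $\mathcal{N}$, $\Pi$ and $H_{\text{eff}}$, and compares them with the closed forms $\mathcal{N} = 1/m$, $\Pi = p$ and $H_{\text{eff}} = K x^2/2$. The basis truncation only pollutes the last few rows and columns, which are discarded before the comparison.

```python
import numpy as np

m, K, n = 2.0, 3.0, 60
low = np.diag(np.sqrt(np.arange(1, n)), 1)        # lowering operator
X = (low + low.T) / np.sqrt(2)                    # [X, P] = i away from the cutoff
P = 1j * (low.T - low) / np.sqrt(2)

H0 = P @ P / (2 * m) + K * X @ X / 2
C = 2 * np.pi * X
comm = lambda A, B: A @ B - B @ A

Nmat = -comm(C, comm(C, H0)) / (4 * np.pi ** 2)   # expect (1/m) * identity
Mc = m                                            # scalar M = N^{-1} here
Pi = Mc * comm(C, H0) / (2j * np.pi)              # expect Pi = p
Heff = H0 - (1 / Mc) / 2 * Pi @ Pi                # expect K x^2 / 2

t = n - 6                                         # drop truncation-polluted rows
ok_N = np.allclose(Nmat[:t, :t], np.eye(n)[:t, :t] / m)
ok_H = np.allclose(Heff[:t, :t], (K * X @ X / 2)[:t, :t])
print(ok_N, ok_H)
```

An analogous check can be run for the two-cosine example, where the same commutators give a diagonal $\mathcal{N}$ and $H_{\text{eff}} = 0$.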
In appendix \ref{Heffex2sect}, we analyze this two-cosine example and confirm the predicted $d$-fold ground state degeneracy. \subsection{Diagonalizing the effective theory} \label{diagsummsect} We now move on to discuss our second result, which is a recipe for diagonalizing the effective Hamiltonian $H_{\text{eff}}$. Note that this diagonalization procedure is unnecessary for the two examples discussed above, since $H_{\text{eff}}$ is very simple in these cases. However, in general, $H_{\text{eff}}$ is a complicated quadratic Hamiltonian which is defined within a complicated Hilbert space $\mathcal{H}_{\text{eff}}$, so diagonalization is an important issue. In fact, in practice, the results in this section are more useful than those in the previous section because we will see that we can diagonalize $H_{\text{eff}}$ without explicitly evaluating the expression in Eq. \ref{Heffgenc}. Our recipe for diagonalizing $H_{\text{eff}}$ has two steps. The first step is to find creation and annihilation operators for $H_{\text{eff}}$. Formally, this amounts to finding all operators $a$ that are linear combinations of $\{x_1,...,x_N, p_1,...,p_N\}$, and satisfy \begin{align} [a, H_{\text{eff}}] &= E a, \nonumber \\ [a, C_i] &= 0, \quad i=1,...,M \label{commeq} \end{align} for some scalar $E \neq 0$. While the first condition is the usual definition of creation and annihilation operators, the second condition is less standard; the motivation for this condition is that $H_{\text{eff}}$ commutes with $C_i$ (see Eq. \ref{CiH0}). As a result, we can impose the requirement $[a,C_i] = 0$ and we will still have enough quantum numbers to diagonalize $H_{\text{eff}}$ since we can use the $C_i$'s in addition to the $a$'s. 
Alternatively, there is another way to find creation and annihilation operators which is often more convenient: instead of looking for solutions to (\ref{commeq}), one can look for solutions to \begin{align} [a,H_0] &= E a + \sum_{j=1}^M \lambda_{j} [C_j,H_0], \nonumber \\ [a, C_i] &= 0, \quad i=1,...,M \label{auxeqsumm} \end{align} for some scalars $E,\lambda_j$ with $E \neq 0$. Indeed, we show in appendix \ref{altmethsect} that every solution to (\ref{commeq}) is also a solution to (\ref{auxeqsumm}) and vice versa, so these two sets of equations are equivalent. In practice, it is easier to work with Eq. \ref{auxeqsumm} \textcolor{black}{than} Eq. \ref{commeq} because Eq. \ref{auxeqsumm} is written in terms of $H_0$, and thus it does not require us to work out the expression for $H_{\text{eff}}$. The solutions to (\ref{commeq}), or equivalently (\ref{auxeqsumm}), can be divided into two classes: ``annihilation operators'' with $E > 0$, and ``creation operators'' with $E < 0$. Let $a_1,...,a_K$ denote a complete set of linearly independent annihilation operators. We will denote the corresponding $E$'s by $E_1,...,E_K$ and the creation operators by $a_1^\dagger,...,a_K^\dagger$. The creation/annihilation operators should be normalized so that \begin{align*} [a_k, a_{k'}^\dagger] =\delta_{kk'}, \quad [a_k, a_{k'}] = [a_k^\dagger, a_{k'}^\dagger] = 0 \end{align*} We are now ready to discuss the second step of the recipe. This step involves searching for linear combinations of $\{C_1,...,C_M\}$ that have simple commutation relations with one another. The idea behind this step is that we ultimately need to construct a complete set of quantum numbers for labeling the eigenstates of $H_{\text{eff}}$. Some of these quantum numbers will necessarily involve the $C_i$ operators since these operators play a prominent role in the definition of the effective Hilbert space, $\mathcal{H}_{\text{eff}}$. 
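To make Eq. \ref{auxeqsumm} concrete, the following sketch checks a hand-worked solution numerically for an assumed toy system (our own illustration, not a system treated in the text): two unit-frequency oscillators coupled by $g x_1 x_2$, with a single pinned combination $C = 2\pi x_1$. A short calculation shows that $a = (p_2 - i x_2 - i g x_1)/\sqrt{2}$ satisfies Eq. \ref{auxeqsumm} with $E = 1$ and $\lambda = -ig/(2\sqrt{2}\pi)$; the condition $[a, C] = 0$ is what forces the $x_1$ admixture in $a$ while forbidding any $p_1$ component.

```python
import numpy as np

n = 24
low = np.diag(np.sqrt(np.arange(1, n)), 1)
x = (low + low.T) / np.sqrt(2)
p = 1j * (low.T - low) / np.sqrt(2)
I = np.eye(n)

X1, P1 = np.kron(x, I), np.kron(p, I)               # mode 1
X2, P2 = np.kron(I, x), np.kron(I, p)               # mode 2
g = 0.3
H0 = 0.5 * (P1 @ P1 + P2 @ P2 + X1 @ X1 + X2 @ X2) + g * X1 @ X2
C = 2 * np.pi * X1

comm = lambda A, B: A @ B - B @ A

a_op = (P2 - 1j * X2 - 1j * g * X1) / np.sqrt(2)    # candidate solution
E, lam = 1.0, -1j * g / (2 * np.sqrt(2) * np.pi)

R1 = comm(a_op, H0) - E * a_op - lam * comm(C, H0)  # first line of (auxeqsumm)
R2 = comm(a_op, C)                                  # second line of (auxeqsumm)

# keep only matrix elements unaffected by truncating each mode's basis
keep = np.array([i < n - 3 and j < n - 3 for i in range(n) for j in range(n)])
ix = np.ix_(keep, keep)
ok = np.abs(R1[ix]).max() < 1e-9 and np.abs(R2).max() < 1e-9
print(ok)
```

Note that $E = 1$ is independent of $g$: once $x_1$ is frozen by the cosine, the $g x_1 x_2$ coupling is linear in $x_2$ and therefore shifts the equilibrium of mode 2 without changing its frequency.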
However, the $C_i$'s are unwieldy because they have complicated commutation relations with one another. Thus, it is natural to look for linear combinations of $C_i$'s that have simpler commutation relations. With this motivation in mind, let $\mathcal{Z}_{ij}$ be the $M \times M$ matrix defined by \begin{equation} \mathcal{Z}_{ij} = \frac{1}{2\pi i} [C_i, C_j] \end{equation} The matrix $\mathcal{Z}_{ij}$ is integer and skew-symmetric, but otherwise arbitrary. Next, let \begin{align} C_i' = \sum_{j=1}^M \mathcal{V}_{ij} C_j \textcolor{black}{+ \chi_i} \end{align} for some matrix $\mathcal{V}$ \textcolor{black}{and some vector $\chi$.} Then, $[C_i', C_j'] = 2\pi i \mathcal{Z}_{ij}'$ where $\mathcal{Z}' = \mathcal{V} \mathcal{Z} \mathcal{V}^T$. The second step of the recipe is to find a matrix $\mathcal{V}$ with integer entries and determinant $\pm 1$, such that $\mathcal{Z}'$ takes the simple form \begin{equation} \mathcal{Z}' = \begin{pmatrix} 0_I & -\mathcal{D} & 0 \\ \mathcal{D} & 0_I & 0 \\ 0 & 0 & 0_{M-2I} \end{pmatrix}, \quad \mathcal{D} = \begin{pmatrix} d_1 & 0 & \dots & 0 \\ 0 & d_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & d_I \end{pmatrix} \label{zprime} \end{equation} Here $I$ is some integer with $0 \leq I \leq M/2$ and $0_I$ denotes an $I \times I$ matrix of zeros. In mathematical language, $\mathcal{V}$ is an integer change of basis that puts $\mathcal{Z}$ into \emph{skew-normal} form. 
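As an illustration of how such a $\mathcal{V}$ can be found in practice, the sketch below (our own illustration; any standard algorithm for the skew-normal form of an integer matrix could be substituted) performs integer Gaussian elimination by unimodular congruences. It returns the equivalent ``pairwise-block'' variant of the normal form, with $2 \times 2$ blocks $\bigl(\begin{smallmatrix} 0 & d_i \\ -d_i & 0 \end{smallmatrix}\bigr)$ followed by zeros; the layout of Eq. \ref{zprime} is then obtained by a further permutation and sign changes of basis vectors.

```python
import numpy as np

def skew_normal_form(Z):
    """Unimodular congruence Z' = V Z V^T taking an integer skew-symmetric
    matrix to 2x2 blocks [[0, d_i], [-d_i, 0]] followed by a zero block."""
    Z = np.array(Z, dtype=int)
    n = Z.shape[0]
    V = np.eye(n, dtype=int)

    def add(i, k, c):                  # basis change e_k -> e_k + c e_i
        Z[k, :] += c * Z[i, :]; Z[:, k] += c * Z[:, i]; V[k, :] += c * V[i, :]

    def swap(i, k):                    # basis change e_i <-> e_k
        Z[[i, k], :] = Z[[k, i], :]; Z[:, [i, k]] = Z[:, [k, i]]
        V[[i, k], :] = V[[k, i], :]

    r = 0
    while r + 1 < n:
        nz = [(abs(Z[i, j]), i, j)
              for i in range(r, n) for j in range(i + 1, n) if Z[i, j]]
        if not nz:
            break                      # remaining block vanishes
        _, i, j = min(nz)              # smallest entry -> gcd-style descent
        if i != r:
            swap(r, i)
        if j != r + 1:
            swap(r + 1, j)
        if Z[r, r + 1] < 0:
            swap(r, r + 1)             # make the pivot positive
        d = Z[r, r + 1]
        for k in range(r + 2, n):      # reduce rows r, r+1 modulo d
            add(r + 1, k, -(Z[r, k] // d))
            add(r, k, Z[r + 1, k] // d)
        if not (Z[r, r + 2:].any() or Z[r + 1, r + 2:].any()):
            r += 2                     # block done; else retry, pivot shrank
    return Z, V

# a 2x2 example with d = 3 (a single swap suffices here)
Zp, V = skew_normal_form([[0, -3], [3, 0]])   # Zp = [[0, 3], [-3, 0]]

# a 4x4 example; its Pfaffian is -1, so both blocks must come out with d_i = 1
Z4 = np.array([[0, 2, 1, 0], [-2, 0, 0, 3], [-1, 0, 0, 1], [0, -3, -1, 0]])
Zp4, V4 = skew_normal_form(Z4)
print(Zp4)
```

Since every elementary step is unimodular, $\det \mathcal{V} = \pm 1$ is automatic, and the resulting $d_i$ satisfy $\prod_i d_i = |\operatorname{Pf}(\mathcal{Z})|$ when $\mathcal{Z}$ is non-degenerate.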
It is known that such a change of basis always exists, though it is not unique.\cite{NewmanBook} \textcolor{black}{After finding an appropriate $\mathcal{V}$, the offset $\chi$ should then be chosen so that \begin{equation} \chi_i = \pi \cdot \sum_{j < k} \mathcal{V}_{ij} \mathcal{V}_{ik} \mathcal{Z}_{jk} \pmod{2\pi} \label{chicond} \end{equation} The reason for choosing $\chi$ in this way is that it ensures that $e^{iC_i'}|\psi\> = |\psi\>$ for any $|\psi\> \in \mathcal{H}_{\text{eff}}$, as can be easily seen from the Campbell-Baker-Hausdorff formula.} Once we perform these two steps, we can obtain the complete energy spectrum of $H_{\text{eff}}$ with the help of a few results that we prove in appendix \ref{diagsect}.\footnote{These results rely on a small technical assumption, see Eq. \ref{Rassumption}.} Our first result is that $H_{\text{eff}}$ can always be written in the form \begin{equation} H_{\text{eff}} = \sum_{k=1}^K E_k a_k^\dagger a_k + F \left(C_{2I+1}',...,C_M'\right) \label{Heffdiagsumm} \end{equation} where $F$ is some (a priori unknown) quadratic function. Our second result (which is really just an observation) is that the following operators all commute with each other: \begin{equation} \{e^{i C_{1}'/d_1},...,e^{i C_{I}'/d_I},e^{i C_{I+1}'},...,e^{i C_{2I}'},C_{2I+1}',...,C_M'\} \label{commop} \end{equation} Furthermore, these operators commute with the occupation number operators $\{a_1^\dagger a_1,...,a_K^\dagger a_K\}$. Therefore, we can simultaneously diagonalize (\ref{commop}) along with $\{a_k^\dagger a_k\}$. We denote the simultaneous eigenstates by \begin{align*} |\theta_1,...,\theta_I, \varphi_1,...,\varphi_I, x_{I+1}',...,x_{M-I}', n_1,...,n_K\> \end{align*} or, in more abbreviated form, $|\v{\theta}, \v{\varphi},\v{x}',\v{n}\>$. 
Here the different quantum numbers are defined by \begin{align} &e^{i C_i'/d_{i}} |\v{\theta}, \v{\varphi},\v{x}',\v{n}\> = e^{i\theta_{i}} |\v{\theta}, \v{\varphi},\v{x}',\v{n}\>, \ i = 1,...,I \nonumber \\ &e^{i C_i'} |\v{\theta}, \v{\varphi},\v{x}',\v{n}\> = e^{i\varphi_{i-I}} |\v{\theta}, \v{\varphi},\v{x}',\v{n}\>, \ i = I+1,...,2I \nonumber\\ &C_i' |\v{\theta}, \v{\varphi},\v{x}',\v{n}\> = 2\pi x_{i-I}' |\v{\theta}, \v{\varphi},\v{x}',\v{n}\>, \ i = 2I+1, ..., M \nonumber \\ &a_k^\dagger a_k |\v{\theta}, \v{\varphi},\v{x}',\v{n}\> = n_k |\v{\theta}, \v{\varphi},\v{x}',\v{n}\>, \ k = 1,...,K \label{thetaphixn} \end{align} where $0 \leq \theta_i, \varphi_i < 2\pi$, while $x_i'$ is real valued and $n_k$ ranges over non-negative integers. By construction the $|\v{\theta}, \v{\varphi},\v{x}',\v{n}\>$ states form a complete basis for the Hilbert space $\mathcal{H}$. Our third result is that a \emph{subset} of these states form a complete basis for the effective Hilbert space $\mathcal{H}_{\text{eff}}$. This subset consists of all $|\v{\theta}, \v{\varphi},\v{x}',\v{n}\>$ for which \textcolor{black}{\begin{enumerate} \item{$\v{\theta} = (2\pi \alpha_1/d_1,..., 2\pi \alpha_I/d_I)$ with $\alpha_i =0,1,...,d_i-1$.} \item{$\v{\varphi} = (0,0,...,0)$.} \item{$(x_{I+1}',...,x_{M-I}') = (q_1,...,q_{M-2I})$ for some integers $q_i$.} \end{enumerate}} We will denote this subset of eigenstates by $\{|\v{\alpha},\v{q},\v{n}\>\}$. Putting this together, we can see from equations (\ref{Heffdiagsumm}) and (\ref{thetaphixn}) that the $|\v{\alpha},\v{q},\v{n}\>$ are eigenstates of $H_{\text{eff}}$, with eigenvalues \begin{equation} E = \sum_{k=1}^K n_k E_k + F(2\pi q_1,...,2\pi q_{M-2I}) \label{energyspectsumm} \end{equation} We therefore have the full eigenspectrum of $H_{\text{eff}}$ --- up to the determination of the function $F$. 
With a bit more work, one can go further and compute the function $F$ (see appendix \ref{eigensect}) but we will not discuss this issue here because in many cases of interest it is more convenient to find $F$ using problem-specific approaches. To see examples of this diagonalization procedure, we refer the reader to section \ref{examsect}. As for the general derivation of this procedure, see appendix \ref{diagsect}. \subsection{Degeneracy} \label{degsummsect} One implication of Eq. \ref{energyspectsumm} which is worth mentioning is that the energy $E$ is independent of the quantum numbers $\alpha_1,...,\alpha_I$. Since $\alpha_i$ ranges over $0 \leq \alpha_i \leq d_i - 1$, it follows that every eigenvalue of $H_{\text{eff}}$ has a degeneracy of (at least) \begin{equation} D = \prod_{i=1}^I d_i \label{Deg} \end{equation} In the special case where $\mathcal Z_{ij}$ is non-degenerate (i.e. the case where $M = 2I$), this degeneracy can be conveniently written as \begin{equation} D = \sqrt{\det(\mathcal{Z})} \end{equation} since \begin{equation*} \det(\mathcal{Z}) = \det(\mathcal{Z}') = \prod_{i=1}^I d_i^2 \end{equation*} For an example of this degeneracy formula, consider the Hamiltonian (\ref{example2summ}) discussed in section \ref{effsummsect}. In this case, $C_1 = dp$ while $C_2 = 2\pi x$ so \begin{equation*} \mathcal Z_{ij} = \frac{1}{2\pi i} [C_i, C_j] = \begin{pmatrix} 0 & -d \\ d & 0 \end{pmatrix} \end{equation*} Thus, the above formula predicts that the degeneracy for this system is $D = \sqrt{\det(\mathcal Z)} = d$, which is consistent with our previous discussion. \subsection{Finite $U$ corrections} \label{finUsummsect} We now discuss our last major result. To understand this result, note that while $H_{\text{eff}}$ gives the exact low energy spectrum of $H$ in the infinite $U$ limit, it only gives \emph{approximate} results when $U$ is large but finite. 
Thus, to complete our picture we need to understand what types of corrections we need to add to $H_{\text{eff}}$ to obtain an exact effective theory in the finite $U$ case. It is instructive to start with a simple example: $H = \frac{p^2}{2m} + \frac{K x^2}{2} - U \cos(2\pi x)$. As we discussed in section \ref{effsummsect}, the low energy effective Hamiltonian in the infinite $U$ limit is $H_{\text{eff}} = \frac{K x^2}{2}$ while the low energy Hilbert space $\mathcal{H}_{\text{eff}}$ is spanned by position eigenstates $\{|q\>\}$ where $q$ is an integer. Let us consider this example in the case where $U$ is large but finite. In this case, we expect that there is some small amplitude for the system to tunnel from one cosine minimum $x=q$ to another minimum, $x = q-n$. Clearly we need to add correction terms to $H_{\text{eff}}$ that describe these tunneling processes. But what are these correction terms? It is not hard to see that the most general possible correction terms can be parameterized as \begin{equation} \sum_{n=-\infty}^\infty e^{inp} \cdot \epsilon_n(x) \end{equation} where $\epsilon_n(x)$ is some unknown function which also depends on $U$. Physically, each term $e^{inp}$ describes a tunneling process $|q\> \rightarrow |q-n\>$ since $e^{inp} |q\> = |q-n\>$. The coefficient $\epsilon_n(x)$ describes the amplitude for this process, which may depend on $q$ in general. (The one exception is the $n=0$ term, which does not describe tunneling at all, but rather describes corrections to the onsite energies for each minimum). Having developed our intuition with this example, we are now ready to describe our general result. Specifically, in the general case we show that the finite $U$ corrections can be written in the form \begin{equation} \sum_{\v{m}} e^{i \sum_{j=1}^M m_j \Pi_j} \cdot \epsilon_{\v{m}}(\{a_k, a_k^\dagger, C_{2I+i}'\}) \label{finUsumm} \end{equation} with the sum running over $M$-component integer vectors $\v{m} = (m_1,...,m_M)$. 
Here, the $\epsilon_{\v{m}}$ are unknown functions of $\{a_1,...,a_K,a_1^\dagger,...,a_K^\dagger,C_{2I+1}',...,C_M'\}$ which also depend on $U$. We give some examples of these results in section \ref{examsect}. For a derivation of the finite $U$ corrections, see appendix \ref{finUsect}. \subsection{Splitting of ground state degeneracy} \label{degsplitsummsect} One application of Eq. \ref{finUsumm} is to determining how the ground state degeneracy of $H_{\text{eff}}$ splits at finite $U$. Indeed, according to standard perturbation theory, we can find the splitting of the ground state degeneracy by projecting the finite $U$ corrections onto the ground state subspace and then diagonalizing the resulting $D \times D$ matrix. The details of this diagonalization problem are system dependent, so we cannot say much about it in general. However, we would like to mention a related result that is useful in this context. This result applies to any system in which the commutator matrix $\mathcal{Z}_{ij}$ is non-degenerate. Before stating the result, we first need to define some notation: let $\Gamma_1,...,\Gamma_M$ be operators defined by \begin{equation} \Gamma_i = \sum_j (\mathcal{Z}^{-1})_{ji} C_j \label{Gammadef} \end{equation} Note that, by construction, the $\Gamma_i$ operators obey the commutation relations \begin{align} [C_i,\Gamma_j] = 2\pi i \delta_{ij}, \quad [a_k,\Gamma_j] = [a_k^\dagger, \Gamma_j] = 0 \end{align} With this notation, our result is that \begin{equation} \<\alpha'|e^{i \sum_{j=1}^M m_j \Pi_j} \cdot \epsilon_{\v{m}} |\alpha\> = u_{\v{m}} \cdot \<\alpha'|e^{i \sum_{j=1}^M m_j \Gamma_j}|\alpha\> \label{finUidsumm} \end{equation} where $|\alpha\>, |\alpha'\>$ are ground states and $u_{\v{m}}$ is some unknown proportionality constant. 
This result is useful because it is relatively easy to compute the matrix elements of $e^{i \sum_{j=1}^M m_j \Gamma_j}$; hence the above relation allows us to compute the matrix elements of the finite $U$ corrections (up to the constants $u_{\v{m}}$) without much work. We derive this result in appendix \ref{finUsect}. \section{Examples} \label{examsect} In this section, we illustrate our formalism with some concrete examples. These examples involve a class of two dimensional electron systems in which the spin-up and spin-down electrons form $\nu = 1/k$ Laughlin states with opposite chiralities~\cite{BernevigZhang}. These states are known as ``fractional quantum spin Hall insulators.'' We will be primarily interested in the \emph{edges} of fractional quantum spin Hall (FQSH) insulators~\cite{LevinStern}. Since the edge of the Laughlin state can be modeled as a single chiral Luttinger liquid, the edge of a FQSH insulator consists of two chiral Luttinger liquids with opposite chiralities --- one for each spin direction (Fig. \ref{fig:cleanedge}). The examples that follow will explore the physics of the FQSH edge in the presence of impurity-induced scattering. More specifically, in the first example, we consider a FQSH edge with a single magnetic impurity; in the second example we consider a FQSH edge with multiple magnetic impurities; in the last example we consider a FQSH edge with alternating magnetic and superconducting impurities. In all cases, we study the impurities in the infinite scattering limit, which corresponds to $U \rightarrow \infty$ in (\ref{genHamc}). Then, in the last subsection we discuss how our results change when the scattering strength $U$ is large but finite. We emphasize that the main purpose of these examples is to illustrate our formalism rather than to derive novel results. In particular, many of our findings regarding these examples are known previously in the literature in some form. 
Of all the examples, the last one, involving magnetic and superconducting impurities, is perhaps the most interesting: we find that this system has a ground state degeneracy that grows exponentially with the number of impurities. This ground state degeneracy is closely related to the previously known topological degeneracy that appears when a FQSH edge is proximity coupled to alternating ferromagnetic and superconducting strips~\cite{lindner2012fractionalizing, cheng2012superconducting, barkeshli2013twist,vaezi2013fractional, clarke2013exotic}. Before proceeding, we need to explain what we mean by ``magnetic impurities'' and ``superconducting impurities.'' At a formal level, a magnetic impurity is a localized object that scatters spin-up electrons into spin-down electrons. Likewise, a superconducting impurity is a localized object that scatters spin-up electrons into spin-down \emph{holes}. More physically, a magnetic impurity can be realized by placing the tip of a ferromagnetic needle in proximity to the edge while a superconducting impurity can be realized by placing the tip of a superconducting needle in proximity to the edge. \subsection{Review of edge theory for clean system} \begin{figure}[tb] \centering \includegraphics[width=4.5cm,height=4.0cm]{fig0.pdf} \caption{The fractional quantum spin Hall edge consists of two counter-propagating chiral Luttinger liquids --- one for each spin direction ($\uparrow,\downarrow$)} \label{fig:cleanedge} \end{figure} As discussed above, the edge theory for the $\nu = 1/k$ fractional quantum spin Hall state consists of two chiral Luttinger liquids with opposite chiralities --- one for each spin direction (Fig. \ref{fig:cleanedge}). The purpose of this section is to review the Hamiltonian formulation of this edge theory.\cite{WenReview,WenBook,LevinStern} More specifically, we will discuss the edge theory for a disk geometry where the circumference of the disk has length $L$.
Since we will work in a Hamiltonian formulation, in order to define the edge theory we need to specify the Hamiltonian, the set of physical observables, and the canonical commutation relations. We begin with the set of physical observables. The basic physical observables in the edge theory are a collection of operators $\{\partial_y \phi_\uparrow(y), \partial_y \phi_\downarrow(y)\}$ along with two additional operators $\phi_\uparrow(y_0)$, $\phi_\downarrow(y_0)$ where $y_0$ is an arbitrary, but fixed, point on the boundary of the disk. The $\{\partial_y \phi_\uparrow(y), \partial_y \phi_\downarrow(y),\phi_\uparrow(y_0),\phi_\downarrow(y_0)\}$ operators can be thought of as the fundamental phase space operators in this system, i.e. the analogues of the $\{x_1,...,x_N,p_1,...,p_N\}$ operators in section \ref{summsect}. Like $\{x_1,...,x_N,p_1,...,p_N\}$, all other physical observables can be written as functions/functionals of $\{\partial_y \phi_\uparrow(y), \partial_y \phi_\downarrow(y),\phi_\uparrow(y_0),\phi_\downarrow(y_0)\}$. Two important examples are the operators $\phi_\uparrow(y)$ and $\phi_\downarrow(y)$ which are defined by \begin{equation} \phi_\sigma(y) \equiv \phi_\sigma(y_0) + \int_{y_0}^y \partial_x \phi_\sigma dx, \quad \sigma = \uparrow, \downarrow \label{phidef} \end{equation} where the integral runs from $y_0$ to $y$ in the clockwise direction. The physical meaning of these operators is as follows: the density of spin-up electrons at position $y$ is given by $\rho_\uparrow(y) = \frac{1}{2\pi} \partial_y \phi_\uparrow$ while the density of spin-down electrons is $\rho_\downarrow(y) = \frac{1}{2\pi} \partial_y \phi_\downarrow$.
The total charge $Q$ and total spin $S^z$ on the edge are given by $Q = Q_\uparrow + Q_\downarrow$ and $S^z = 1/2(Q_\uparrow - Q_\downarrow)$ with \begin{align*} Q_\sigma = \frac{1}{2\pi} \int_{-L/2}^{L/2} \partial_y \phi_\sigma dy, \quad \sigma = \uparrow,\downarrow \end{align*} Finally, the spin-up and spin-down electron creation operators take the form \begin{align*} \psi_\uparrow^\dagger = e^{i k \phi_\uparrow}, \quad \psi_\downarrow^\dagger = e^{-i k \phi_\downarrow} \end{align*} In the above discussion, we ignored an important subtlety: $\phi_\uparrow(y_0)$ and $\phi_\downarrow(y_0)$ are actually \emph{compact} degrees of freedom which are only defined modulo $2\pi/k$. In other words, strictly speaking, $\phi_\uparrow(y_0)$ and $\phi_\downarrow(y_0)$ are \emph{not} well-defined operators: only $e^{i k \phi_\uparrow(y_0)}$ and $e^{ik \phi_\downarrow(y_0)}$ are well-defined. (Of course the same also goes for $\phi_\uparrow(y)$ and $\phi_\downarrow(y)$, in view of the above definition). Closely related to this fact, the conjugate ``momenta'' $Q_\uparrow$ and $Q_\downarrow$ are actually discrete degrees of freedom which can take only integer values. The compactness of $\phi_\uparrow(y_0),\phi_\downarrow(y_0)$ and discreteness of $Q_\uparrow, Q_\downarrow$ are inconvenient for us since the machinery discussed in section \ref{summsect} is designed for systems in which all the phase space operators are real-valued, rather than systems in which some operators are angular valued and some are integer valued. To get around this issue, we will initially treat $\phi_\uparrow(y_0)$ and $\phi_\downarrow(y_0)$ and the conjugate momenta $Q_\uparrow$, $Q_\downarrow$ as \emph{real valued} operators. We will then use a trick (described in the next section) to dynamically generate the compactness of $\phi_\uparrow(y_0), \phi_\downarrow(y_0)$ as well as the discreteness of $Q_\uparrow, Q_\downarrow$.
Let us now discuss the commutation relations for the $\{\partial_y \phi_\uparrow(y), \partial_y \phi_\downarrow(y),\phi_\uparrow(y_0),\phi_\downarrow(y_0)\}$ operators. Like the usual phase space operators $\{x_1,...,x_N,p_1,...,p_N\}$, the commutators of $\{\partial_y \phi_\uparrow(y), \partial_y \phi_\downarrow(y),\phi_\uparrow(y_0),\phi_\downarrow(y_0)\}$ are $c$-numbers. More specifically, the basic commutation relations are \begin{align} [\partial_x\phi_\uparrow(x),\partial_{y}\phi_{\uparrow}(y)] &=\frac{2\pi i}{k} \partial_x\delta(x-y) \nonumber \\ [\partial_x\phi_{\downarrow}(x),\partial_{y}\phi_{\downarrow}(y)] &=-\frac{2\pi i}{k} \partial_x\delta(x-y) \nonumber \\ [\phi_\uparrow(y_0), \partial_y \phi_\uparrow(y)] &= \frac{2\pi i}{k}\delta(y-y_0) \nonumber \\ [\phi_\downarrow(y_0), \partial_y \phi_\downarrow(y)] &= -\frac{2\pi i}{k}\delta(y-y_0) \label{phicommrel0} \end{align} with the other commutators vanishing: \begin{align*} [\phi_\uparrow(y_0), \partial_y \phi_\downarrow(y)] &= [\phi_\downarrow(y_0), \partial_y \phi_\uparrow(y)] =0 \\ [\phi_\uparrow(y_0), \phi_\downarrow(y_0)] &= [\partial_x\phi_\uparrow(x),\partial_{y}\phi_{\downarrow}(y)] = 0 \end{align*} Using these basic commutation relations, together with the definition of $\phi_\sigma(y)$ (\ref{phidef}), one can derive the more general relations \begin{align} [\phi_\uparrow(x),\partial_{y}\phi_{\uparrow}(y)] &=\frac{2\pi i}{k}\delta(x-y) \nonumber \\ [\phi_{\downarrow}(x),\partial_{y}\phi_{\downarrow}(y)] &=-\frac{2\pi i}{k}\delta(x-y) \nonumber \\ [\phi_\uparrow(x), \partial_y \phi_{\downarrow}(y)] &= 0 \label{phicommrel} \end{align} as well as \begin{align} [\phi_\uparrow(x), \phi_\uparrow(y)] &= \frac{\pi i}{k} \text{sgn}(x,y) \nonumber \\ [\phi_\downarrow(x), \phi_\downarrow(y)] &= -\frac{\pi i}{k} \text{sgn}(x,y) \nonumber \\ [\phi_\uparrow(x), \phi_\downarrow(y)] &= 0 \label{genphicommrel} \end{align} where the $\text{sgn}$ function is defined by $\text{sgn}(x,y) = +1$ if $y_0 < x < y$ and 
$\text{sgn}(x,y) = -1$ if $y_0 < y < x$, with the ordering defined in the clockwise direction. The latter commutation relations (\ref{phicommrel}) and (\ref{genphicommrel}) will be particularly useful to us in the sections that follow. Having defined the physical observables and their commutation relations, the last step is to define the Hamiltonian for the edge theory. The Hamiltonian for a perfectly clean, homogeneous edge is \begin{equation} H_0 = \frac{kv}{4\pi}\int_{-L/2}^{L/2}[(\partial_{x}\phi_{\uparrow}(x))^{2}+(\partial_{x}\phi_{\downarrow}(x))^{2}]dx \label{Hclean} \end{equation} where $v$ is the velocity of the edge modes. At this point, the edge theory is complete except for one missing element: we have not given an explicit definition of the Hilbert space of the edge theory. There are two different (but equivalent) definitions that one can use. The first, more abstract, definition is that the Hilbert space is the unique irreducible representation of the operators $\{\partial_y \phi_\uparrow, \partial_y \phi_\downarrow,\phi_\uparrow(y_0),\phi_\downarrow(y_0)\}$ and the commutation relations (\ref{phicommrel0}). (This is akin to defining the Hilbert space of the 1D harmonic oscillator as the irreducible representation of the Heisenberg algebra $[x,p] = i$). 
The second definition, which is more concrete but also more complicated, is that the Hilbert space is spanned by the complete orthonormal basis $\{|q_\uparrow, q_\downarrow, \{n_{p\uparrow}\}, \{n_{p\downarrow}\}\>\}$ where the quantum numbers $q_\uparrow, q_\downarrow$ range over all integers \footnote{Actually, \unexpanded{$q_\uparrow, q_\downarrow$} range over arbitrary real numbers in our fictitious representation of the edge: as explained above, we initially pretend that \unexpanded{$Q_\uparrow, Q_\downarrow$} are not quantized, and then introduce quantization later on using a trick.} while $n_{p\uparrow}, n_{p\downarrow}$ range over all nonnegative integers for each value of $p = 2\pi/L, 4\pi/L,...$. These basis states have a simple physical meaning: $|q_\uparrow, q_\downarrow, \{n_{p\uparrow}\}, \{n_{p\downarrow}\}\>$ corresponds to a state with charge $q_\uparrow$ and $q_\downarrow$ on the two edge modes, and with $n_{p \uparrow}$ and $n_{p \downarrow}$ phonons with momentum $p$ on the two edge modes. \subsection{Example 1: Single magnetic impurity} \label{fmsisect} With this preparation, we now proceed to study a fractional quantum spin Hall edge with a single magnetic impurity in a disk geometry of circumference $L$ (Fig. \ref{fig:si}a). We assume that the impurity, which is located at $x = 0$, generates a backscattering term of the form $\frac{U}{2}(\psi_\uparrow^\dagger(0) \psi_\downarrow(0) + h.c)$. Thus, in the bosonized language, the system with an impurity is described by the Hamiltonian \begin{align} &H=H_0-U\cos(C), \quad C = k (\phi_\uparrow(0) + \phi_\downarrow(0)) \label{fmsi} \end{align} where $H_0$ is defined in Eq. \ref{Hclean}. Here, we temporarily ignore the question of how we regularize the cosine term; we will come back to this point below. Our goal is to find the low energy spectrum of $H$ in the strong backscattering limit, $U \rightarrow \infty$. We will accomplish this using the results from section \ref{summsect}. 
Note that, in using these results, we implicitly assume that our formalism applies to systems with infinite dimensional phase spaces, even though we only derived it in the finite dimensional case. \begin{figure}[tb] \centering \includegraphics[width=8.0cm,height=4.0cm]{fig1.pdf} \caption{(a) A magnetic impurity on a fractional quantum spin Hall edge causes spin-up electrons to backscatter into spin-down electrons. (b) In the infinite backscattering limit, the impurity effectively reconnects the edge modes.} \label{fig:si} \end{figure} First we describe a trick for correctly accounting for the compactness of $\phi_\uparrow(y_0), \phi_\downarrow(y_0)$ and the quantization of $Q_\uparrow, Q_\downarrow$. The idea is simple: we initially treat these variables as if they are real valued, and then we introduce compactness \emph{dynamically} by adding two additional cosine terms to our Hamiltonian: \begin{equation} H = H_0 - U \cos(C) - U \cos(2\pi Q_\uparrow) - U \cos(2\pi Q_\downarrow) \end{equation} These additional cosine terms effectively force $Q_\uparrow$ and $Q_\downarrow$ to be quantized at low energies, thereby generating the compactness that we seek.\footnote{Strictly speaking we also need to add (infinitesimal) quadratic terms to the Hamiltonian of the form \unexpanded{$\epsilon( \phi_\uparrow(y_0)^2 + \phi_\downarrow(y_0)^2)$} so that the $\mathcal{N}$ matrix is non-degenerate. However, these terms play no role in our analysis so we will not include them explicitly.} We will include all three cosine terms in our subsequent analysis. The next step is to calculate the low energy effective Hamiltonian $H_{\text{eff}}$ and low energy Hilbert space $\mathcal{H}_{\text{eff}}$. Instead of working out the expressions in Eqs. \ref{Heffgenc}, \ref{Hilbeff}, we will skip this computation and proceed directly to finding creation and annihilation operators for $H_{\text{eff}}$ using Eq. \ref{auxeqsumm}. 
(This approach works because equation (\ref{auxeqsumm}) does not require us to find the explicit form of $H_{\text{eff}}$). According to Eq. \ref{auxeqsumm}, we can find the creation and annihilation operators for $H_{\text{eff}}$ by finding all operators $a$ such that (1) $a$ is a linear combination of our fundamental phase space operators $\{\partial_y \phi_\uparrow, \partial_y \phi_\downarrow, \phi_\uparrow(y_0), \phi_\downarrow(y_0)\}$ and (2) $a$ obeys \begin{align} [a, H_{0}] &= E a + \lambda [C, H_0] + \lambda_\uparrow [Q_\uparrow, H_0] + \lambda_\downarrow [Q_\downarrow, H_0] \nonumber \\ [a, C] &= [a,Q_\uparrow] = [a,Q_\downarrow] = 0 \label{auxeqsi} \end{align} for some scalars $E, \lambda, \lambda_\uparrow, \lambda_\downarrow$ with $E \neq 0$. To proceed further, we note that the constraint $[a, Q_\uparrow] = [a, Q_\downarrow] = 0$ implies that $\phi_\uparrow(y_0), \phi_\downarrow(y_0)$ cannot appear in the expression for $a$. Hence, $a$ can be written in the general form \begin{equation} a=\int_{-L/2}^{L/2} [f_{\uparrow}(y)\partial_{y}\phi_{\uparrow}(y)+f_{\downarrow}(y)\partial_{y}\phi_{\downarrow}(y)] dy \label{genmode} \end{equation} Substituting this expression into the first line of Eq.~\ref{auxeqsi}, we obtain the differential equations \begin{align*} -ivf_{\uparrow}'(y) & =Ef_{\uparrow}(y)+\lambda kiv\delta(y) \nonumber \\ ivf_{\downarrow}'(y) & = Ef_{\downarrow}(y)-\lambda kiv\delta(y) \end{align*} (The $\lambda_\uparrow, \lambda_\downarrow$ terms drop out of these equations since $Q_\uparrow, Q_\downarrow$ commute with $H_0$). These differential equations can be solved straightforwardly.
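Away from $y=0$ the solutions are plane waves, and the delta function only produces a jump in their amplitudes: writing $f_\uparrow(y) = A(y) e^{ipy}$ with $p = E/v$, the first equation reduces to $A'(y) = -\lambda k \, \delta(y) e^{-ipy}$, so $A$ jumps by $-\lambda k$ across $y=0$. A quick numerical illustration of this jump condition, with a narrow Gaussian standing in for the delta function (all parameter values below are arbitrary):

```python
import numpy as np

# Check the jump condition for  -i v f' = E f + lam*k*i*v*delta(y):
# with f(y) = A(y) e^{ipy}, p = E/v, one gets A'(y) = -lam*k*delta(y)*e^{-ipy},
# so A jumps by -lam*k across y = 0.  Parameters are illustrative choices.
v, E, lam, k = 1.0, 2.0, 0.7, 3
p = E / v
w = 1e-3                                   # width of the regularized delta
y = np.linspace(-0.5, 0.5, 200001)
dy = y[1] - y[0]
delta = np.exp(-y**2 / (2*w**2)) / (w*np.sqrt(2*np.pi))

A1 = 1.0 + 0.5j                            # arbitrary incoming amplitude
A2 = A1 - lam*k*np.sum(delta*np.exp(-1j*p*y))*dy
assert abs(A2 - (A1 - lam*k)) < 1e-4       # jump matches A2 = A1 - lam*k
```

The same computation with the sign of the equation flipped gives the jump $B_2 = B_1 - \lambda k$ for $f_\downarrow$.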
The most general solution takes the form \begin{align} f_{\uparrow}(y) & = e^{ipy}[A_{1}\Theta(-y)+A_{2}\Theta(y)] \nonumber \\ f_{\downarrow}(y) & = e^{-ipy}[B_{1}\Theta(-y)+B_{2}\Theta(y)] \label{solsi} \end{align} where $p = E/v$, and \begin{equation} A_{2}=A_{1}-\lambda k, \quad B_{2}=B_{1}-\lambda k \label{bcsi} \end{equation} (these jump conditions follow from integrating the differential equations across the delta function at $y=0$). Here $\Theta$ is the Heaviside step function defined as \begin{equation*} \Theta(x) = \begin{cases} 0 & -L/2 \leq x < 0 \\ 1 & 0 \leq x \leq L/2 \end{cases} \end{equation*} (Note that the above expressions (\ref{solsi}) for $f_\uparrow, f_\downarrow$ do not obey periodic boundary conditions at $x = \pm L/2$; we will not impose these boundary conditions until later in our calculation). Eliminating $\lambda$ from (\ref{bcsi}), we see that \begin{equation} A_2 - A_1 = B_2 - B_1 \label{constraintAB1} \end{equation} We still have to impose one more condition on $a$, namely $[a,C] = 0$. This condition leads to a second constraint on $A_1, A_2, B_1, B_2$, but the derivation of this constraint is somewhat subtle. The problem is that if we simply substitute (\ref{genmode}) into $[a,C] = 0$, we find \begin{equation} f_{\uparrow}(0) = f_{\downarrow}(0) \label{conssi} \end{equation} It is unclear how to translate this relation into one involving $A_1, A_2, B_1, B_2$ since $f_\uparrow, f_\downarrow$ are discontinuous at $x=0$ and hence $f_\uparrow(0), f_\downarrow(0)$ are ill-defined. The origin of this puzzle is that the cosine term in Eq. \ref{fmsi} contains short-distance singularities and hence is not well-defined. To resolve this issue we regularize the argument of the cosine term, replacing $C = k(\phi_\uparrow(0)+\phi_\downarrow(0))$ with \begin{equation} C \rightarrow \int_{-L/2}^{L/2} k ( \phi_\uparrow(x) + \phi_\downarrow(x)) \tilde{\delta}(x) dx \label{Cregsi} \end{equation} where $\tilde{\delta}$ is a narrowly peaked function with $\int \tilde{\delta}(x) dx =1$. Here, we can think of $\tilde{\delta}$ as an approximation to a delta function.
Note that $\tilde{\delta}$ effectively introduces a short-distance cutoff and thus makes the cosine term non-singular. After making this replacement, it is straightforward to repeat the above analysis and solve the differential equations for $f_\uparrow, f_\downarrow$. In appendix \ref{regapp}, we work out this exercise, and we find that with this regularization, the condition $[a,C] = 0$ leads to the constraint \begin{equation} \frac{A_1 + A_2}{2} = \frac{B_1 + B_2}{2} \label{constraintAB2} \end{equation} Combining our two constraints on $A_1,B_1, A_2, B_2$ (\ref{constraintAB1},\ref{constraintAB2}), we obtain the relations \begin{align} A_1 = B_1, \quad A_2 = B_2 \end{align} So far we have not imposed any restriction on the momentum $p$. The momentum constraints come from the periodic boundary conditions on $f_\uparrow, f_\downarrow$: \begin{align*} f_\uparrow(-L/2) = f_\uparrow(L/2), \quad f_\downarrow(-L/2) = f_\downarrow(L/2) \end{align*} Using the explicit form of $f_\uparrow, f_\downarrow$, these boundary conditions give \begin{equation*} A_1 e^{-ipL/2} = A_2 e^{ipL/2}, \quad B_1 e^{ipL/2} = B_2 e^{-ipL/2} \end{equation*} from which we deduce \begin{equation} e^{2ipL} = 1, \quad A_2 = A_1 e^{-ipL} \end{equation} Putting this all together, we see that the most general possible creation/annihilation operator for $H_{\text{eff}}$ is given by \begin{align*} a_p = A_1 \int_{-L/2}^{L/2} &(e^{ipy} \partial_y \phi_\uparrow + e^{-ipy} \partial_y \phi_\downarrow) \Theta(-y) \nonumber \\ + e^{-ipL}&(e^{ipy} \partial_y \phi_\uparrow + e^{-ipy} \partial_y \phi_\downarrow) \Theta(y) dy \end{align*} where $p$ is quantized as $p = \pm \pi/L, \pm 2\pi/L,...$ and $E_p = vp$. (Note that $p=0$ does not correspond to a legitimate creation/annihilation operator according to the definition given above, since we require $E \neq 0$). 
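As a sanity check on this quantization condition, note that the two constraints (\ref{constraintAB1}), (\ref{constraintAB2}) together with the periodic boundary conditions form a homogeneous linear system in $(A_1, A_2, B_1, B_2)$; a nonzero solution exists exactly where its determinant vanishes. A minimal numerical sketch (edge length $L$ arbitrary):

```python
import numpy as np

# Linear system in (A1, A2, B1, B2):
#   A2 - A1 = B2 - B1,  A1 + A2 = B1 + B2,
#   A1 e^{-ipL/2} = A2 e^{ipL/2},  B1 e^{ipL/2} = B2 e^{-ipL/2}.
# Nontrivial solutions exist exactly when det M(p) = 0, i.e. p = n*pi/L.
L = 1.0                                     # arbitrary edge circumference

def M(p):
    e = np.exp(1j*p*L/2)
    return np.array([[ -1,  1,  1,   -1],   # A2 - A1 = B2 - B1
                     [  1,  1, -1,   -1],   # A1 + A2 = B1 + B2
                     [1/e, -e,  0,    0],   # periodicity of f_up
                     [  0,  0,  e, -1/e]])  # periodicity of f_down

for n in (1, 2, 3, -1):                     # det vanishes at p = n*pi/L ...
    assert abs(np.linalg.det(M(n*np.pi/L))) < 1e-10
assert abs(np.linalg.det(M(0.5*np.pi/L))) > 1.0   # ... and not in between
```

In fact, working out the determinant by hand gives $\det M(p) = -4i \sin(pL)$, whose zeros are precisely $p = n\pi/L$.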
Following the conventions from \textcolor{black}{section \ref{diagsummsect}}, we will refer to the operators with $E_p > 0$ --- or equivalently $p > 0$ --- as ``annihilation operators'' and the other operators as ``creation operators.'' Also, we will choose the normalization constant $A_1$ so that $[a_p, a_{p'}^\dagger] = \delta_{pp'}$ for $p,p' > 0$. This gives the expression \begin{align} a_p = \sqrt{\frac{k}{4\pi|p|L}} \int_{-L/2}^{L/2} &(e^{ipy} \partial_y \phi_\uparrow + e^{-ipy} \partial_y \phi_\downarrow) \Theta(-y) \nonumber \\ + e^{-ipL}&(e^{ipy} \partial_y \phi_\uparrow + e^{-ipy} \partial_y \phi_\downarrow) \Theta(y) dy \end{align} The next step is to compute the commutator matrix $\mathcal{Z}_{ij}$. In the case at hand, we have three cosine terms $\{\cos(C_1), \cos(C_2), \cos(C_3)\}$, where \begin{equation*} C_1 = C, \quad C_2 = 2\pi Q_\uparrow, \quad C_3= 2\pi Q_\downarrow \end{equation*} Therefore $\mathcal{Z}_{ij}$ is given by \begin{equation*} \mathcal{Z}_{ij} = \frac{1}{2\pi i} [C_i, C_j] = \begin{pmatrix} 0 & 1 & -1 \\ -1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \end{equation*} \textcolor{black}{To proceed further we need to find an appropriate change of variables of the form $C_i' = \sum_{j=1}^{3} \mathcal{V}_{ij} C_j + \chi_i$. Here, $\mathcal V$ should be an integer matrix with determinant $\pm 1$ with the property that $ \mathcal{Z}_{ij}' = \frac{1}{2\pi i} [C_i', C_j']$ is in skew-normal form, while $\chi$ should be a real vector satisfying Eq. \ref{chicond}.} It is easy to see that the following change of variables does the job: \begin{equation*} C_1' = C_1, \quad C_2' = -2\pi Q_\uparrow, \quad C_3' = 2\pi Q_\uparrow + 2\pi Q_\downarrow \end{equation*} Indeed, for this change of variables, \begin{equation*} \mathcal{Z}' = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \end{equation*} We can see that this is in the canonical skew-normal form shown in Eq. \ref{zprime}, with the parameters $M=3$, $I = 1$, $d_1 = 1$.
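This change of variables can be checked mechanically: reading off the integer matrix $\mathcal{V}$ from the definitions of the $C_i'$, one verifies that $\mathcal{Z}' = \mathcal{V} \mathcal{Z} \mathcal{V}^T$ and that $\det \mathcal{V} = \pm 1$. A minimal numerical sketch:

```python
import numpy as np

# Single-impurity commutator matrix for (C1, C2, C3) = (C, 2*pi*Q_up, 2*pi*Q_down)
Z = np.array([[ 0, 1, -1],
              [-1, 0,  0],
              [ 1, 0,  0]])
V = np.array([[1,  0, 0],    # C1' = C1
              [0, -1, 0],    # C2' = -2*pi*Q_up
              [0,  1, 1]])   # C3' = 2*pi*(Q_up + Q_down)

Zp = V @ Z @ V.T             # commutator matrix of the primed variables
assert np.allclose(Zp, [[0, -1, 0], [1, 0, 0], [0, 0, 0]])
assert round(abs(np.linalg.det(V))) == 1   # integer matrix with det = +-1
```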
We are now in a position to write down the low energy effective Hamiltonian $H_{\text{eff}}$: according to Eq. \ref{Heffdiagsumm}, $H_{\text{eff}}$ must take the form \begin{equation} H_{\text{eff}} = \sum_{p > 0} v p a_p^\dagger a_p + F \cdot (C_3')^2 \label{Heffsi1} \end{equation} where $F$ is some (as yet unknown) constant. To determine the constant $F$, we make two observations. First, we note that the first term in Eq. \ref{Heffsi1} can be rewritten as $\sum_{p \neq 0} \frac{v |p|}{2} (a_{-p} a_p)$. Second, we note that $C_3' = 2\pi Q$ is proportional to $a_{p =0}$. Given these observations, it is natural to interpret the $F (C_3')^2$ term as the missing $p = 0$ term in the sum. This suggests that we can fix the coefficient $F$ using continuity in the $p \rightarrow 0$ limit. To this end, we observe that \begin{equation*} \lim_{p \rightarrow 0} \frac{v|p|}{2} a_{-p} a_p = \frac{vk}{8 \pi L} \cdot (C_3')^2 \end{equation*} We conclude that $F = \frac{vk}{8 \pi L}$. Substituting this into (\ref{Heffsi1}), we derive \begin{equation} H_{\text{eff}} = \sum_{p > 0} v p a_p^\dagger a_p + \frac{vk}{8 \pi L} \cdot (C_3')^2 \label{Heffsi2} \end{equation} where the sum runs over $p = \pi/L, 2\pi/L, ...$. In addition to the effective Hamiltonian, we also need to discuss the effective Hilbert space $\mathcal{H}_{\text{eff}}$ in which this Hamiltonian is defined. According to the results of \textcolor{black}{section \ref{diagsummsect}}, the effective Hilbert space $\mathcal{H}_{\text{eff}}$ is spanned by states $\{|q,\{n_p\}\>\}$ where $|q,\{n_p\}\>$ is the unique simultaneous eigenstate of the form \begin{align*} e^{iC_1'}|q,\{n_p\}\> &= |q,\{n_p\}\>, \\ e^{iC_2'}|q,\{n_p\}\> &= |q,\{n_p\}\>, \\ C_3' |q,\{n_p\}\> &= 2\pi q |q,\{n_p\}\>, \\ a_p^\dagger a_p |q,\{n_p\}\> &= n_p |q,\{n_p\}\> \end{align*} Here $n_p$ runs over non-negative integers, while $q$ runs over all integers. 
Note that we do not need to label the $\{|q,\{n_p\}\>\}$ basis states with $\alpha$ quantum numbers since $d_1 = 1$ so there is no degeneracy. Having derived the effective theory, all that remains is to diagonalize it. Fortunately we can accomplish this without any extra work: from (\ref{Heffsi2}) it is clear that the $\{|q,\{n_p\}\>\}$ basis states are also eigenstates of $H_{\text{eff}}$ with energies given by \begin{equation} E = \sum_{p > 0} v p n_p + \frac{\pi vk}{2L} \cdot q^2 \label{Esi} \end{equation} We are now finished: the above equation gives the complete energy spectrum of $H_{\text{eff}}$, and thus the complete low energy spectrum of $H$ in the limit $U \rightarrow \infty$. To understand the physical interpretation of this energy spectrum, we can think of $n_p$ as describing the number of phonon excitations with momentum $p$, while $q$ describes the total charge on the edge. With these identifications, the first term in (\ref{Esi}) describes the total energy of the phonon excitations --- which are linearly dispersing with velocity $v$ --- while the second term describes the charging/capacitative energy of the edge. It is interesting that at low energies, our system has only \emph{one} branch of phonon modes and one charge degree of freedom, while the clean edge theory (\ref{Hclean}) has two branches of phonon modes and two charge degrees of freedom --- one for each spin direction. The explanation for this discrepancy can be seen in Fig. \ref{fig:si}b: in the infinite $U$ limit, the impurity induces perfect backscattering which effectively reconnects the edges to form a single chiral edge of length $2L$. \subsection{Example 2: Multiple magnetic impurities} \label{fmmisect} We now consider a fractional quantum spin Hall edge in a disk geometry with $N$ magnetic impurities located at positions $x_1,...,x_N$ (Fig. \ref{fig:mi}a). 
Modeling the impurities in the same way as in the previous section, the Hamiltonian is \begin{align} H=H_0-U\sum_{i=1}^{N}\cos(C_i), \quad C_{i}=k (\phi_{\uparrow}(x_i)+\phi_{\downarrow}(x_i)) \label{fmmi} \end{align} where $H_0$ is defined in Eq. \ref{Hclean}. As in the single impurity case, our goal is to understand the low energy physics of $H$ in the limit $U \rightarrow \infty$. We can accomplish this using the same approach as before. The first step is to take account of the compactness of $\phi_\uparrow, \phi_\downarrow$ and the discrete nature of $Q_\uparrow, Q_\downarrow$ by adding two additional cosine terms to our Hamiltonian: \begin{equation*} H = H_0 - U\sum_{i=1}^{N}\cos(C_i) - U \cos(2\pi Q_\uparrow) - U \cos(2\pi Q_\downarrow) \end{equation*} \begin{figure}[tb] \centering \includegraphics[width=8cm,height=4.5cm]{fig2.pdf} \caption{(a) A collection of $N$ magnetic impurities on a fractional quantum spin Hall edge. The impurities are located at positions $x_1,...,x_N$. (b) In the infinite backscattering limit, the impurities effectively reconnect the edge modes, breaking the edge into $N$ disconnected components.} \label{fig:mi} \end{figure} Next, we find the creation and annihilation operators for $H_{\text{eff}}$ using Eq. \ref{auxeqsumm}. That is, we search for all operators $a$ such that (1) $a$ is a linear combination of our fundamental phase space operators $\{\partial_y \phi_\uparrow, \partial_y \phi_\downarrow, \phi_\uparrow(y_0), \phi_\downarrow(y_0)\}$ and (2) $a$ obeys \begin{align} [a, H_{0}] &= E a + \sum_{i=1}^N \lambda_i [C_i, H_0] + \lambda_\uparrow [Q_\uparrow, H_0] + \lambda_\downarrow [Q_\downarrow, H_0] \nonumber \\ [a, C_j] &= [a,Q_\uparrow] = [a,Q_\downarrow] = 0 \label{auxeqfm} \end{align} for some $E, \textcolor{black}{\lambda_i}, \lambda_\uparrow, \lambda_\downarrow$ with $E \neq 0$. 
Given that $[a, Q_\uparrow] = [a, Q_\downarrow] = 0$, we know that $\phi_\uparrow(y_0), \phi_\downarrow(y_0)$ cannot appear in the expression for $a$. Hence, $a$ can be written in the general form \begin{equation*} a=\int_{-L/2}^{L/2} [f_{\uparrow}(y)\partial_{y}\phi_{\uparrow}(y)+f_{\downarrow}(y)\partial_{y}\phi_{\downarrow}(y)] dy \end{equation*} Substituting this expression into the first line of Eq.~\ref{auxeqfm}, we obtain \begin{align*} -ivf_{\uparrow}'(y) & =Ef_{\uparrow}(y)+ kiv \sum_{j=1}^N \lambda_j \delta(y - x_j) \nonumber \\ ivf_{\downarrow}'(y) & = Ef_{\downarrow}(y)- kiv \sum_{j=1}^N \lambda_j \delta(y-x_j) \end{align*} Solving these differential equations gives \begin{align*} f_{\uparrow}(y) &=\sum_{j=1}^{N}A_{j}e^{ipy}\Theta(x_{j-1}<y<x_{j}) \nonumber \\ f_{\downarrow}(y) &=\sum_{j=1}^{N}B_{j}e^{-ipy}\Theta(x_{j-1}<y<x_{j}) \end{align*} where $p = E/v$ and where \begin{equation*} A_{j+1}=A_{j}-\lambda_{j}k e^{-ipx_{j}}, \quad B_{j+1}=B_{j}-\lambda_{j}k e^{ipx_{j}} \end{equation*} Here, $\Theta(a < y < b)$ is defined to take the value $1$ if $y$ is in the interval $[a,b]$ and $0$ otherwise. Also, we use a notation in which $x_{0}$ is identified with $x_N$. Eliminating $\lambda_j$, we derive \begin{equation} (A_{j+1} - A_{j}) e^{ipx_j} = (B_{j+1} - B_{j}) e^{-ipx_j} \label{constraintABfm1} \end{equation} We still have to impose the condition $[a,C_j] = 0$, which gives an additional set of constraints on $\{A_j, B_j\}$. As in the single impurity case, we regularize the cosine terms to derive these constraints. That is, we replace $C_j = k(\phi_\uparrow(x_j)+\phi_\downarrow(x_j))$ with \begin{equation} C_j \rightarrow \int_{-L/2}^{L/2} k ( \phi_\uparrow(x) + \phi_\downarrow(x)) \tilde{\delta}(x-x_j) dx \label{Cregmi} \end{equation} where $\tilde{\delta}$ is a narrowly peaked function with $\int \tilde{\delta}(x) dx =1$, i.e., an approximation to a delta function.
With this regularization, it is not hard to show that $[a,C_j] = 0$ gives the constraint \begin{equation} \frac{1}{2}(A_{j} + A_{j+1}) e^{ipx_j} = \frac{1}{2}(B_{j} + B_{j+1}) e^{-ipx_j} \label{constraintABfm2} \end{equation} Combining (\ref{constraintABfm1}), (\ref{constraintABfm2}) we derive \begin{align} A_j e^{ipx_j} = B_j e^{-ipx_j}, \quad A_{j+1} e^{ipx_j} = B_{j+1} e^{-ipx_j} \label{constraintABfm3} \end{align} Our task is now to find all $\{A_j, B_j,p\}$ that satisfy (\ref{constraintABfm3}). For simplicity, we will specialize to the case where the impurities are uniformly spaced with spacing $s$, i.e. $x_{j+1} - x_j = s = L/N$ for all $j$. In this case, equation (\ref{constraintABfm3}) implies that $e^{2ips} = 1$, so that $p$ is quantized in integer multiples of $\pi/s$. For any such $p$, (\ref{constraintABfm3}) has $N$ linearly independent solutions of the form \begin{align*} B_j &= A_j = 0 \ \ \text{for } j \neq m \\ B_j &= A_j e^{2i p x_j} \neq 0 \ \ \text{for } j = m \end{align*} with $m=1,...,N$. Putting this all together, we see that the most general possible creation/annihilation operator for $H_{\text{eff}}$ is given by \begin{align*} a_{pm} = \sqrt{\frac{k}{4\pi|p|s}} \int_{-L/2}^{L/2} &[(e^{ipy}\partial_y \phi_\uparrow + e^{2ip x_m} e^{-ipy} \partial_y \phi_\downarrow) \\ & \cdot \Theta(x_{m-1}<y<x_{m})]dy \end{align*} with $E_{pm} = vp$. Here the index $m$ runs over $m=1,...,N$ while $p$ takes values $\pm \pi/s, \pm 2\pi/s,...$. (As in the single impurity case, $p=0$ does not correspond to a legitimate creation/annihilation operator, since we require that $E \neq 0$). Following the conventions from \textcolor{black}{section \ref{diagsummsect}}, we will refer to the operators with $E > 0$ --- or equivalently $p > 0$ --- as ``annihilation operators'' and the other operators as ``creation operators.'' Note that we have normalized the $a$ operators so that $[a_{pm}, a_{p'm'}^\dagger] = \delta_{pp'} \delta_{mm'}$ for $p,p' > 0$. 
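One can check directly that the single-component solutions quoted above satisfy both relations in (\ref{constraintABfm3}) for uniform spacing. A minimal numerical sketch (values of $N$, $s$, and the choice $p = \pi/s$ are arbitrary):

```python
import numpy as np

# For uniformly spaced impurities x_j = j*s with e^{2ips} = 1, verify that the
# ansatz A_j = 0 except at j = m, with B_j = A_j e^{2ip x_j}, satisfies
#   A_j e^{ip x_j} = B_j e^{-ip x_j}  and  A_j e^{ip x_{j-1}} = B_j e^{-ip x_{j-1}}
# (indices cyclic, with x_0 identified with x_N).
N, s = 5, 1.0
x = s*np.arange(1, N+1)                    # impurity positions x_1, ..., x_N
p = np.pi/s                                # smallest allowed |p|
assert abs(np.exp(2j*p*s) - 1) < 1e-12

for m in range(N):
    A = np.zeros(N, dtype=complex)
    A[m] = 1.0
    B = A*np.exp(2j*p*x)
    for j in range(N):
        jm1 = (j - 1) % N                  # previous impurity, cyclically
        r1 = A[j]*np.exp(1j*p*x[j])   - B[j]*np.exp(-1j*p*x[j])
        r2 = A[j]*np.exp(1j*p*x[jm1]) - B[j]*np.exp(-1j*p*x[jm1])
        assert abs(r1) < 1e-9 and abs(r2) < 1e-9
```

For $p$ not an integer multiple of $\pi/s$, the second relation fails at $j = m$, which is the numerical counterpart of the quantization condition $e^{2ips} = 1$.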
The next step is to compute the commutator matrix $\mathcal{Z}_{ij} = \frac{1}{2\pi i} [C_i, C_j]$. Let us denote the $N+2$ cosine terms as $\{\cos(C_1),...,\cos(C_{N+2})\}$ where $C_{N+1} = 2\pi Q_\uparrow$, $C_{N+2} = 2\pi Q_\downarrow$. Using (\ref{genphicommrel}) we find that $\mathcal{Z}_{ij}$ takes the form \begin{eqnarray*} \mathcal{Z}_{ij} &=& \begin{pmatrix} 0 & \cdots & 0 & 1 & -1 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \cdots & 0 & 1 & -1 \\ -1 & \cdots & -1 & 0 & 0 \\ 1 & \cdots & 1 & 0 & 0 \end{pmatrix} \end{eqnarray*} \textcolor{black}{To proceed further we need to find an appropriate change of variables of the form $C_i' = \sum_{j=1}^{N+2} \mathcal{V}_{ij} C_j + \chi_i$. Here, $\mathcal V$ should be chosen so that $ \mathcal{Z}_{ij}' = \frac{1}{2\pi i} [C_i', C_j']$ is in skew-normal form, while $\chi$ should be chosen so that it obeys Eq. \ref{chicond}.} It is easy to see that the following change of variables does the job: \begin{align*} &C_1' = C_1, \quad C_2' = -2\pi Q_\uparrow, \quad C_3' = 2\pi Q_\uparrow + 2\pi Q_\downarrow \\ &C_m' = C_{m-2} - C_{m-3}, \quad m= 4,...,N+2 \end{align*} Indeed, it is easy to check that \begin{eqnarray*} \mathcal{Z}_{ij}' &=& \frac{1}{2\pi i} [C_i', C_j'] \nonumber \\ &=& \begin{pmatrix} 0 & -1 & 0 & \cdots & 0 \\ 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix} \end{eqnarray*} We can see that this is in the canonical skew-normal form shown in Eq. \ref{zprime}, with the parameters $M = N+2$, $I = 1$, $d_1 = 1$. We are now in a position to write down the low energy effective Hamiltonian $H_{\text{eff}}$: according to Eq. 
\ref{Heffdiagsumm}, $H_{\text{eff}}$ must take the form \begin{equation} H_{\text{eff}} = \sum_{m=1}^{N} \sum_{p > 0} v p a_{pm}^\dagger a_{pm} + F(C_3',..., C_{N+2}') \label{Hefffm} \end{equation} where the sum runs over $p = \pi/s, 2\pi/s,...$ and where $F$ is some quadratic function of $N$ variables. To determine $F$, we first need to work out more concrete expressions for $C_m'$. The $m=3$ case is simple: $C_3' = 2\pi Q$. On the other hand, for $m = 4,...,N+2$, we have \begin{eqnarray*} C_m' &=& k(\phi_\uparrow(x_{m-2}) + \phi_\downarrow(x_{m-2})) \\ &-& k(\phi_\uparrow(x_{m-3}) + \phi_\downarrow(x_{m-3})) \\ &=& k \int_{x_{m-3}}^{x_{m-2}} (\partial_y \phi_\uparrow + \partial_y \phi_\downarrow) dy \end{eqnarray*} where the second line follows from the definition of $\phi_\uparrow, \phi_\downarrow$ (\ref{phidef}) along with the assumption that the impurities are arranged in the order $y_0 < x_1 < ... < x_N$ in the clockwise direction. With these expressions we can now find $F$. We use the same trick as in the single impurity case: we note that the first term in Eq. \ref{Hefffm} can be rewritten as $\sum_m \sum_{p \neq 0} \frac{v |p|}{2} (a_{-pm} a_{pm})$, and we observe that \begin{equation*} \lim_{p \rightarrow 0} \frac{v|p|}{2} a_{-pm} a_{pm} = \frac{v}{8 \pi k s} \cdot (C_{m+2}')^2 \end{equation*} for $m=2,...,N$, while \begin{equation*} \lim_{p \rightarrow 0} \frac{v|p|}{2} a_{-p1} a_{p1} = \frac{v}{8 \pi k s} \cdot (k C_3' - \sum_{m=4}^{N+2} C_{m}')^2 \end{equation*} Assuming that $F$ reproduces the missing $p=0$ piece of the first term in Eq. \ref{Hefffm}, we deduce that \begin{eqnarray} F(C_3',...,C_{N+2}') &=& \frac{v}{8 \pi k s} \cdot \sum_{m=4}^{N+2} (C_m')^2 \nonumber \\ &+& \frac{v}{8 \pi k s} \cdot (k C_3' - \sum_{m=4}^{N+2} C_{m}')^2 \end{eqnarray} In addition to the effective Hamiltonian, we also need to discuss the effective Hilbert space $\mathcal{H}_{\text{eff}}$.
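As in the single impurity case, the multi-impurity change of variables can be cross-checked mechanically, verifying $\mathcal{Z}' = \mathcal{V} \mathcal{Z} \mathcal{V}^T$ and $\det \mathcal{V} = \pm 1$ for small $N$; a minimal numerical sketch (the value $N = 5$ is arbitrary):

```python
import numpy as np

N = 5                                   # number of impurities (illustrative)
M = N + 2                               # N impurity terms + two charge terms

# Z_{ij} = (1/2*pi*i)[C_i, C_j]: the C_i commute among themselves, while
# [C_i, 2*pi*Q_up] = 2*pi*i and [C_i, 2*pi*Q_down] = -2*pi*i.
Z = np.zeros((M, M), dtype=int)
Z[:N, N], Z[:N, N+1] = 1, -1
Z[N, :N], Z[N+1, :N] = -1, 1
assert np.array_equal(Z, -Z.T)

V = np.zeros((M, M), dtype=int)
V[0, 0] = 1                             # C1' = C1
V[1, N] = -1                            # C2' = -2*pi*Q_up
V[2, N], V[2, N+1] = 1, 1               # C3' = 2*pi*(Q_up + Q_down)
for r in range(3, M):                   # Cm' = C_{m-2} - C_{m-3} (1-indexed m)
    V[r, r-2], V[r, r-3] = 1, -1

Zp = V @ Z @ V.T
expected = np.zeros((M, M), dtype=int)
expected[0, 1], expected[1, 0] = -1, 1  # skew-normal form, I = 1, d_1 = 1
assert np.array_equal(Zp, expected)
assert round(abs(np.linalg.det(V))) == 1
```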
Applying the results of \textcolor{black}{section \ref{diagsummsect}}, we see that $\mathcal{H}_{\text{eff}}$ is spanned by states $\{|\v{q},\{n_{pm}\}\>\}$ where $|\v{q},\{n_{pm}\}\>$ is the unique simultaneous eigenstate of the form \begin{align*} e^{iC_1'} |\v{q},\{n_{pm}\}\> &= |\v{q},\{n_{pm}\}\>, \\ e^{iC_2'} |\v{q},\{n_{pm}\}\> &= |\v{q},\{n_{pm}\}\>, \\ C_i' |\v{q},\{n_{pm}\}\> &= 2\pi q_{i-2} |\v{q},\{n_{pm}\}\>, \ i = 3,...,N+2\\ a_{pm}^\dagger a_{pm} |\v{q},\{n_{pm}\}\> &= n_{pm} |\v{q},\{n_{pm}\}\> \end{align*} Here $n_{pm}$ runs over non-negative integers, while $\v{q}$ is an $N$ component vector, $\v{q} = (q_1,...,q_N)$ where each component $q_i$ runs over all integers. As in the single impurity case, we do not need to label the $\{|\v{q},\{n_{pm}\}\>\}$ basis states with $\alpha$ quantum numbers since $d_1 = 1$ and thus there is no degeneracy. Now that we have derived the effective theory, all that remains is to diagonalize it. To do this, we note that the $\{|\v{q},\{n_{pm}\}\>\}$ basis states are also eigenstates of $H_{\text{eff}}$ with energies given by \begin{eqnarray} E = \sum_{m=1}^N \sum_{p > 0} v p n_{pm} &+& \frac{\pi v}{2 k s} \sum_{m=2}^{N} q_m^2 \label{Efm} \\ &+& \frac{\pi v}{2 k s} (k q_1 - q_2 -...-q_N)^2 \nonumber \end{eqnarray} The above equation gives the complete low energy spectrum of $H$ in the limit $U \rightarrow \infty$. Let us now discuss the physical interpretation of these results. As in the single impurity case, when $U \rightarrow \infty$ the impurities generate perfect backscattering, effectively reconnecting the edge modes. The result, as shown in Fig. \ref{fig:mi}b, is the formation of $N$ disconnected chiral modes living in the $N$ intervals, $[x_N,x_1], [x_1, x_2], ...,[x_{N-1}, x_N]$. With this picture in mind, the $n_{pm}$ quantum numbers have a natural interpretation as the number of phonon excitations with momentum $p$ on the $m$th disconnected component of the edge. 
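The charging part of the spectrum (\ref{Efm}) is easy to enumerate by brute force. The sketch below (with illustrative parameter values of our choosing) lists the lowest charging energies for $k=3$; for these parameters the cheapest charged excitation costs two charging units, i.e. $\pi v/ks$, since changing the charge of a single interval necessarily also changes the total charge:

```python
import itertools
import math

v, k, N, s = 1.0, 3, 3, 1.0      # illustrative values; s = L/N
u = math.pi * v / (2 * k * s)    # charging energy unit in Eq. (Efm)

def charging_energy(q):
    """Charging part of E for q = (q_1,...,q_N), with no phonons excited."""
    return u * (sum(qm ** 2 for qm in q[1:])
                + (k * q[0] - sum(q[1:])) ** 2)

levels = sorted(charging_energy(q)
                for q in itertools.product(range(-3, 4), repeat=N))
assert levels[0] == 0.0                # unique neutral ground state q = 0
assert math.isclose(levels[1], 2 * u)  # e.g. q = (0, 1, -1): two quasiparticles
```

(For $k=1$ the cheapest charged excitation instead costs a single unit, realized by adding one electron to one interval while raising the total charge by one.)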
Likewise, if we examine the definition of $q_m$, we can see that $q_m/k$ is equal to the total charge in the $m$th component of the edge, i.e. the total charge in the interval $[x_{m-1},x_m]$, for $m=2,...,N$. On the other hand, the quantum number $q_1$ is slightly different: $q_1$ is equal to the total charge on the entire boundary of the disk $[-L/2, L/2]$. Note that since $q_m$ is quantized to be an integer for all $m=1,...,N$, it follows that the charge in each interval $[x_{m-1},x_m]$ is quantized in integer multiples of $1/k$ while the total charge on the whole edge is quantized as an \emph{integer}. These quantization laws are physically sensible: indeed, the fractional quantum spin Hall state supports quasiparticle excitations with charge $1/k$, so it makes sense that disconnected components of the edge can carry such charge, but at the same time we also know that the \emph{total} charge on the boundary must be an integer. Putting this all together, we see that the first term in (\ref{Efm}) can be interpreted as the energy of the phonon excitations, summed over all momenta and all disconnected components of the edge. Similarly the second term can be interpreted as the charging energy of the disconnected components labeled by $m=2,...,N$, while the third term can be interpreted as the charging energy of the first component labeled by $m=1$. So far in this section we have considered magnetic impurities which backscatter spin-up electrons into spin-down electrons. These impurities explicitly break time reversal symmetry. However, one can also consider non-magnetic impurities which preserve time reversal symmetry and backscatter \emph{pairs} of spin-up electrons into pairs of spin-down electrons. 
When the scattering strength $U$ is sufficiently strong these impurities can cause a \emph{spontaneous} breaking of time reversal symmetry, leading to a two-fold degenerate ground state.~\cite{XuMoore,WuBernevigZhang,LevinStern} This physics can also be captured by an appropriate toy model and we provide an example in Appendix \ref{ssb}. \subsection{Example 3: Multiple magnetic and superconducting impurities} \label{fmscsect} We now consider a fractional quantum spin Hall edge in a disk geometry of circumference $L$ with $2N$ alternating magnetic and superconducting impurities. We take the magnetic impurities to be located at positions $x_1,x_3,...,x_{2N-1}$ while the superconducting impurities are located at positions $x_2,x_4,...,x_{2N}$ (Fig. \ref{fig:fmscmi}a). We assume that the magnetic impurity at $x_i$ generates a backscattering term of the form $\frac{U}{2}(\psi_\uparrow^\dagger(x_i) \psi_\downarrow(x_i) + h.c.)$, while the superconducting impurity at $x_i$ generates a pairing term of the form $\frac{U}{2}(\psi_\uparrow^\dagger(x_i) \psi_\downarrow^\dagger(x_i) + h.c.)$. The Hamiltonian is then \begin{align} &H=H_0-U\sum_{i=1}^{2N}\cos(C_i) \\ &C_{i}=k (\phi_{\uparrow}(x_i)+(-1)^{i+1} \phi_{\downarrow}(x_i)) \label{Hfmsc} \end{align} where $H_0$ is defined in Eq. \ref{Hclean}. As in the previous cases, our goal is to understand the low energy physics of $H$ in the limit $U \rightarrow \infty$. As before, we take account of the compactness of $\phi_\uparrow, \phi_\downarrow$ and the discrete nature of $Q_\uparrow, Q_\downarrow$ by adding two additional cosine terms to our Hamiltonian: \begin{equation} H = H_0 - U\sum_{i=1}^{2N}\cos(C_i) - U \cos(2\pi Q_\uparrow) - U \cos(2\pi Q_\downarrow) \end{equation} \begin{figure}[tb] \centering \includegraphics[width=8cm,height=4.2cm]{fig3.pdf} \caption{ (a) A collection of $2N$ alternating magnetic and superconducting impurities on a fractional quantum spin Hall edge.
The magnetic impurities are located at positions $x_1,x_3,...,x_{2N-1}$ while the superconducting impurities are located at positions $x_2,x_4,...,x_{2N}$. The magnetic impurities scatter spin-up electrons into spin-down electrons while the superconducting impurities scatter spin-up electrons into spin-down holes. (b) In the infinite $U$ limit, the impurities effectively reconnect the edge modes, breaking the edge into $2N$ disconnected components.} \label{fig:fmscmi} \end{figure} Next, we find the creation and annihilation operators for $H_{\text{eff}}$ using Eq. \ref{auxeqsumm}. That is, we search for all operators $a$ such that (1) $a$ is a linear combination of our fundamental phase space operators $\{\partial_y \phi_\uparrow, \partial_y \phi_\downarrow, \phi_\uparrow(y_0), \phi_\downarrow(y_0)\}$ and (2) $a$ obeys \begin{align} [a, H_{0}] &= E a + \sum_{i=1}^{2N} \lambda_i [C_i, H_0] + \lambda_\uparrow [Q_\uparrow, H_0] + \lambda_\downarrow [Q_\downarrow, H_0] \nonumber \\ [a, C_j] &= [a,Q_\uparrow] = [a,Q_\downarrow] = 0 \label{auxeqscfm} \end{align} for some $E,\textcolor{black}{\lambda_i}, \lambda_\uparrow, \lambda_\downarrow$ with $E \neq 0$. As before, since $[a, Q_\uparrow] = [a, Q_\downarrow] = 0$, it follows that $\phi_\uparrow(y_0), \phi_\downarrow(y_0)$ cannot appear in the expression for $a$.
Hence, $a$ can be written in the general form \begin{equation*} a=\int_{-L/2}^{L/2} [f_{\uparrow}(y)\partial_{y}\phi_{\uparrow}(y)+f_{\downarrow}(y)\partial_{y}\phi_{\downarrow}(y)] dy \end{equation*} Substituting this expression into the first line of Eq.~\ref{auxeqscfm}, we obtain \begin{align*} -ivf_{\uparrow}'(y) & = Ef_{\uparrow}+kiv\sum_{j}\lambda_{j}\delta(y-x_{j})\\ ivf_{\downarrow}'(y) & = Ef_{\downarrow}-kiv\sum_{j}(-1)^{j+1}\lambda_{j}\delta(y-x_{j}) \end{align*} Solving the above first order differential equations we get \begin{align*} f_{\uparrow}(y) &=\sum_{j=1}^{2N}A_{j}e^{ipy}\Theta(x_{j-1}<y<x_{j}) \nonumber \\ f_{\downarrow}(y) &=\sum_{j=1}^{2N}B_{j}e^{-ipy}\Theta(x_{j-1}<y<x_{j}) \end{align*} where $p = E/v$ and where \begin{align*} A_{j+1}=A_{j}-\lambda_{j}k e^{-ipx_{j}}, B_{j+1}=B_{j}-(-1)^{j+1}\lambda_{j}k e^{ipx_{j}} \end{align*} Here, $\Theta(a < y < b)$ is defined to take the value $1$ if $y$ is in the interval $[a,b]$ and $0$ otherwise. Also, we use a notation in which $x_0$ is identified with $x_{2N}$. Eliminating $\lambda_j$, we derive \begin{equation} (A_{j+1} - A_{j}) e^{ipx_j} = (-1)^{j+1} (B_{j+1} - B_{j}) e^{-ipx_j} \label{constraintABfmsc1} \end{equation} We still have to impose the requirement $[a,C_j] = 0$ and derive the corresponding constraint on $\{A_j,B_j\}$. As in the previous cases, the correct way to do this is to regularize the cosine terms, replacing \begin{equation} C_j \rightarrow \int_{-L/2}^{L/2} k ( \phi_\uparrow(x) + (-1)^{j+1} \phi_\downarrow(x)) \tilde{\delta}(x-x_j) dx \label{Cregsc} \end{equation} where $\tilde{\delta}$ is a narrowly peaked function with $\int \tilde{\delta}(x) dx =1$, i.e., an approximation to a delta function.
With this regularization, it is not hard to show that $[a,C_j] = 0$ gives the constraint \begin{equation} \frac{1}{2}(A_{j} + A_{j+1}) e^{ipx_j} = \frac{1}{2} (-1)^{j+1} (B_{j} + B_{j+1}) e^{-ipx_j} \label{constraintABfmsc2} \end{equation} Combining (\ref{constraintABfmsc1}), (\ref{constraintABfmsc2}) we derive \begin{align} A_j e^{ipx_j} &= (-1)^{j+1} B_j e^{-ipx_j}, \nonumber \\ A_{j+1} e^{ipx_j} &= (-1)^{j+1} B_{j+1} e^{-ipx_j} \label{constraintABfmsc3} \end{align} Our task is now to find all $\{A_j, B_j,p\}$ that satisfy (\ref{constraintABfmsc3}). For simplicity, we will specialize to the case where the impurities are uniformly spaced with spacing $s$, i.e. $x_{j+1} - x_j = s = L/(2N)$ for all $j$. In this case, equation (\ref{constraintABfmsc3}) implies that $e^{2ips} = -1$, so that $p$ is quantized in half-odd-integer multiples of $\pi/s$. For any such $p$, (\ref{constraintABfmsc3}) has $2N$ linearly independent solutions of the form \begin{align*} B_j &= A_j = 0 \ \ \text{for } j \neq m \\ B_j &= (-1)^{j+1} A_j e^{2i p x_j} \neq 0 \ \ \text{for } j = m \end{align*} with $m=1,...,2N$. Putting this all together, we see that the most general possible creation/annihilation operator for $H_{\text{eff}}$ is given by \begin{align*} a_{pm} &= \int_{-L/2}^{L/2} [(e^{ipy}\partial_y \phi_\uparrow + (-1)^{m+1} e^{2ip x_m} e^{-ipy} \partial_y \phi_\downarrow) \\ & \cdot \Theta(x_{m-1}<y<x_{m})]dy \cdot \sqrt{\frac{k}{4\pi|p|s}} \end{align*} with $E_{pm} = vp$. Here the index $m$ runs over $m=1,...,2N$ while $p$ takes values $\pm \pi/2s, \pm 3\pi/2s,...$. Note that we have normalized the $a$ operators so that $[a_{pm}, a_{p'm'}^\dagger] = \delta_{pp'} \delta_{mm'}$ for $p,p' > 0$. The next step is to compute the commutator matrix $\mathcal{Z}_{ij} = \frac{1}{2\pi i} [C_i,C_j]$. Let us denote the $2N+2$ cosine terms as $\{\cos(C_1),...,\cos(C_{2N+2})\}$ where $C_{2N+1} = 2\pi Q_\uparrow$, $C_{2N+2} = 2\pi Q_\downarrow$.
Using the commutation relations (\ref{genphicommrel}), we find \begin{eqnarray*} \mathcal{Z}_{ij} &=& \begin{pmatrix} 0 & k & 0 & k & \cdots & 0 & k & 1 & -1 \\ -k & 0 & k & 0 & \cdots & k & 0 & 1 & 1 \\ 0 &-k & 0 & k & \cdots & 0 & k & 1 & -1 \\ -k & 0 &-k & 0 & \cdots & k & 0 & 1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & -k & 0 & -k & \cdots & 0 & k & 1 & -1 \\ -k & 0 & -k & 0 & \cdots & -k & 0 & 1 & 1 \\ -1 &-1 & -1 & -1 & \cdots &-1 &-1 & 0 & 0 \\ 1 & -1 & 1 & -1 & \cdots & 1 & -1 & 0 & 0 \end{pmatrix} \end{eqnarray*} \textcolor{black}{To proceed further we need to find an appropriate change of variables of the form $C_i' = \sum_{j=1}^{2N+2} \mathcal{V}_{ij} C_j + \chi_i$. Here, $\mathcal V$ should be chosen so that $ \mathcal{Z}_{ij}' = \frac{1}{2\pi i} [C_i', C_j']$ is in skew-normal form, while $\chi$ should be chosen so that it obeys Eq. \ref{chicond}.} It is easy to see that the following change of variables does the job: \begin{align*} &C_m' = C_{2m+1} - C_{2m-1}, \quad m= 1,...,N-1, \\ &C_N' = 2\pi Q_\uparrow + 2\pi Q_\downarrow, \quad C_{N+1}' = C_1, \\ &C_m' = C_{2m-2N-2} - C_{2N}, \quad m= N+2,...,2N\\ &C_{2N+1}' = C_{2N}-C_1 -2\pi k Q_\uparrow \textcolor{black}{+ \pi}, \quad C_{2N+2}' = -2\pi Q_\uparrow \end{align*} Indeed, it is easy to check that \begin{eqnarray*} \mathcal{Z}_{ij}' &=& \frac{1}{2\pi i} [C_i', C_j'] \nonumber \\ &=& \begin{pmatrix} 0_{N+1} & -\mathcal{D} \\ \mathcal{D} & 0_{N+1} \end{pmatrix}, \end{eqnarray*} where $\mathcal{D}$ is the $N+1$ dimensional diagonal matrix \begin{equation*} \mathcal{D} = \begin{pmatrix} 2k & 0 & \cdots & 0 & 0 \\ 0 & 2k & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & 2 & 0 \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix} \end{equation*} We can see that this is in the canonical skew-normal form shown in Eq. \ref{zprime}, with the parameters $M = 2N+2$, $I = N+1$ and \begin{equation*} d_1 = ...
= d_{N-1} = 2k, \quad d_{N} = 2, \quad d_{N+1} = 1 \end{equation*} With these results we can write down the low energy effective Hamiltonian $H_{\text{eff}}$: according to Eq. \ref{Heffdiagsumm}, $H_{\text{eff}}$ must take the form \begin{equation} H_{\text{eff}} = \sum_{m=1}^{2N} \sum_{p > 0} v p a_{pm}^\dagger a_{pm} \label{Hefffmsc} \end{equation} where the sum runs over $p = \pi/2s, 3\pi/2s,...$. Notice that $H_{\text{eff}}$ does not include a term of the form $F(C_{2I+1}',...,C_M')$ which was present in the previous examples. The reason that this term is not present is that $M = 2I$ in this case --- that is, none of the $C_i'$ terms commute with all the other $C_j'$. This is closely related to the fact that the momentum $p$ is quantized in odd integer multiples of $\pi/2s$, so unlike the previous examples, we cannot construct an operator $a_{pm}$ with $p=0$ (sometimes called a ``zero mode'' operator). Let us now discuss the effective Hilbert space $\mathcal{H}_{\text{eff}}$. According to the results of \textcolor{black}{section \ref{diagsummsect}}, $\mathcal{H}_{\text{eff}}$ is spanned by states $\{|\v{\alpha},\{n_{pm}\}\>\}$ where $|\v{\alpha},\{n_{pm}\}\>$ is the unique simultaneous eigenstate of the form \begin{align} e^{iC_i'/2k} |\v{\alpha},\{n_{pm}\}\> &= e^{i\pi \alpha_{i}/k}|\v{\alpha},\{n_{pm}\}\>, \ i = 1,...,N-1 \nonumber \\ e^{iC_N'/2} |\v{\alpha},\{n_{pm}\}\> &= e^{i\pi \alpha_N}|\v{\alpha},\{n_{pm}\}\>, \nonumber \\ e^{iC_{N+1}'} |\v{\alpha},\{n_{pm}\}\> &= |\v{\alpha},\{n_{pm}\}\>, \nonumber \\ e^{iC_i'} |\v{\alpha},\{n_{pm}\}\> &= |\v{\alpha},\{n_{pm}\}\>, \ i =N+2,...,2N+2 \nonumber \\ a_{pm}^\dagger a_{pm} |\v{\alpha},\{n_{pm}\}\> &= n_{pm} |\v{\alpha},\{n_{pm}\}\> \label{quantnumdef3} \end{align} Here the label $n_{pm}$ runs over non-negative integers, while $\v{\alpha}$ is an abbreviation for the $N$ component integer vector $(\alpha_1,...,\alpha_{N})$ where $\alpha_N$ runs over two values $\{0,1\}$, and the
other $\alpha_i$'s run over $\{0,1,...,2k-1\}$. As in the previous cases, we can easily diagonalize the effective theory: clearly the $\{|\v{\alpha},\{n_{pm}\}\>\}$ basis states are also eigenstates of $H_{\text{eff}}$ with energies given by \begin{equation} E = \sum_{m=1}^{2N} \sum_{p > 0} v p n_{pm} \label{Efmsc} \end{equation} The above equation gives the complete low energy spectrum of $H$ in the limit $U \rightarrow \infty$. An important feature of the above energy spectrum (\ref{Efmsc}) is that the energy $E$ is independent of $\v{\alpha}$. It follows that every state, including the ground state, has a degeneracy of \begin{equation} D = 2 \cdot (2k)^{N-1} \label{degfmsc} \end{equation} since this is the number of different values that $\v{\alpha}$ ranges over. We now discuss the physical meaning of this degeneracy. As in the previous examples, when $U \rightarrow \infty$, the impurities reconnect the edge modes, breaking the edge up into $2N$ disconnected components associated with the intervals $[x_{2N},x_1],[x_1, x_2],...,[x_{2N-1},x_{2N}]$ (Fig. \ref{fig:fmscmi}b). The $n_{pm}$ quantum numbers describe the number of phonon excitations of momentum $p$ in the $m$th component of the edge. The $\v{\alpha}$ quantum numbers also have a simple physical interpretation. Indeed, if we examine the definition of $\v{\alpha}$ (\ref{quantnumdef3}), we can see that for $i \neq N$, $e^{i \pi \alpha_i/k} = e^{i \pi q_i}$ where $q_i$ is the total charge in the interval $[x_{2i-1}, x_{2i+1}]$ while $e^{i \pi \alpha_{N}} = e^{i \pi q}$ where $q$ is the total charge on the edge. Thus, for $i \neq N$, $\alpha_i/k$ is the total charge in the interval $[x_{2i-1}, x_{2i+1}]$ modulo $2$ while $\alpha_N$ is the total charge on the edge modulo $2$.
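As in the previous example, the change of variables used earlier in this subsection can be verified numerically. The sketch below (helper names are ours, not from the text) builds $\mathcal{Z}$ for the $2N+2$ cosine terms together with the matrix $\mathcal{V}$, and checks that $\mathcal{V}\mathcal{Z}\mathcal{V}^T$ reproduces the skew-normal form with $\mathcal{D} = \text{diag}(2k,...,2k,2,1)$; the shift $\chi$ does not enter the commutators:

```python
import numpy as np

def Z_fmsc(N, k):
    """Z_ij = [C_i, C_j]/(2*pi*i): 2N alternating magnetic/superconducting
    impurities, plus C_{2N+1} = 2*pi*Q_up and C_{2N+2} = 2*pi*Q_down."""
    M = 2 * N + 2
    Z = np.zeros((M, M), dtype=int)
    for i in range(2 * N):
        for j in range(i + 1, 2 * N):
            if (j - i) % 2 == 1:          # odd separation: entry k
                Z[i, j] = k
        Z[i, 2 * N] = 1                   # column for 2*pi*Q_up
        Z[i, 2 * N + 1] = 1 if i % 2 else -1  # 2*pi*Q_down: -1 at magnetic sites
    return Z - Z.T                        # antisymmetrize

def V_fmsc(N, k):
    """Rows give C'_i = sum_j V_ij C_j (all indices 0-based)."""
    M = 2 * N + 2
    V = np.zeros((M, M), dtype=int)
    for m in range(N - 1):                 # C'_m = C_{2m+1} - C_{2m-1}
        V[m, 2 * m + 2], V[m, 2 * m] = 1, -1
    V[N - 1, 2 * N], V[N - 1, 2 * N + 1] = 1, 1  # C'_N = 2*pi*(Q_up + Q_down)
    V[N, 0] = 1                                  # C'_{N+1} = C_1
    for r in range(N + 1, 2 * N):          # C'_m = C_{2m-2N-2} - C_{2N}
        V[r, 2 * (r - N) - 1], V[r, 2 * N - 1] = 1, -1
    V[2 * N, 2 * N - 1], V[2 * N, 0], V[2 * N, 2 * N] = 1, -1, -k
    V[2 * N + 1, 2 * N] = -1               # C'_{2N+2} = -2*pi*Q_up
    return V

N, k = 4, 3
Zp = V_fmsc(N, k) @ Z_fmsc(N, k) @ V_fmsc(N, k).T
d = [2 * k] * (N - 1) + [2, 1]
expected = np.zeros_like(Zp)
for i, di in enumerate(d):
    expected[i, N + 1 + i], expected[N + 1 + i, i] = -di, di
assert np.array_equal(Zp, expected)
```

The check passes for any $N \geq 2$ and $k \geq 1$, confirming the parameters $I = N+1$ and $d_1 = ... = d_{N-1} = 2k$, $d_N = 2$, $d_{N+1} = 1$ quoted above.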
The quantum number $\alpha_N$ ranges over two possible values $\{0,1\}$ since the total charge on the edge must be an integer while the other $\alpha_i$'s range over $2k$ values $\{0,1,...,2k-1\}$ since the fractional quantum spin Hall state supports excitations with charge $1/k$ and hence the charge in the interval $[x_{2i-1}, x_{2i+1}]$ can be any integer multiple of this elementary value. It is interesting to compare our formula for the degeneracy (\ref{degfmsc}) to that of Refs.~\onlinecite{lindner2012fractionalizing, cheng2012superconducting, barkeshli2013twist,vaezi2013fractional, clarke2013exotic}. Those papers studied a closely related system consisting of a FQSH edge in proximity to alternating ferromagnetic and superconducting strips (Fig. \ref{fig:fmscmi2}a). The authors found that this related system has a ground state degeneracy of $D = (2k)^{N-1}$, which agrees with our result up to a factor of $2$; the difference arises because Refs.~\onlinecite{lindner2012fractionalizing, cheng2012superconducting, barkeshli2013twist,vaezi2013fractional, clarke2013exotic} did not include the two-fold degeneracy associated with fermion parity. In fact, it is not surprising that the two systems share the same degeneracy since one can tune from their system to our system by shrinking the size of the ferromagnetic and superconducting strips while at the same time increasing the strength of the proximity coupling (see Fig. \ref{fig:fmscmi2}b-c).
Although our system shares the same degeneracy as the one studied in Refs.~\onlinecite{lindner2012fractionalizing, cheng2012superconducting, barkeshli2013twist,vaezi2013fractional, clarke2013exotic}, one should keep in mind that there is an important difference between the two degeneracies: the degeneracy in Refs.~\onlinecite{lindner2012fractionalizing, cheng2012superconducting, barkeshli2013twist,vaezi2013fractional, clarke2013exotic} is topologically protected and cannot be split by any local perturbation, while our degeneracy is not protected and splits at any finite value of $U$, as we explain in the next section. That being said, if we modify our model in a simple way, we \emph{can} capture the physics of a topologically protected degeneracy. In particular, the only modification we would need to make is to replace each individual magnetic impurity with a long array of many magnetic impurities, and similarly we would replace each individual superconducting impurity with a long array of many superconducting impurities. After making this change, the degeneracy would remain nearly exact even at finite $U$, with a splitting which is exponentially small in the length of the arrays. \begin{figure}[tb] \centering \includegraphics[width=8.5cm,height=3.5cm]{fig4.pdf} \caption{(a) A fractional quantum spin Hall edge in proximity to alternating ferromagnetic and superconducting strips. (b-c) By shrinking the size of the ferromagnetic and superconducting strips while at the same time increasing the strength of the proximity coupling, we can continuously deform the system into a fractional quantum spin Hall edge with magnetic and superconducting \emph{impurities}.} \label{fig:fmscmi2} \end{figure} \subsection{Finite \texorpdfstring{$U$}{U} corrections} In the previous sections we analyzed the low energy physics of three different systems in the limit $U \rightarrow \infty$. In this section, we discuss how these results change when $U$ is large but \emph{finite}. 
\subsubsection{Single magnetic impurity} We begin with the simplest case: a single magnetic impurity on the edge of a $\nu = 1/k$ fractional quantum spin Hall state. We wish to understand how finite $U$ corrections affect the low energy spectrum derived in section \ref{fmsisect}. We follow the approach outlined in section \ref{finUsummsect}. According to this approach, the first step is to construct the operator $\Pi$ which is conjugate to the argument of the cosine term, $C = k (\phi_\uparrow(0) + \phi_\downarrow(0))$. To do this, we regularize $C$ as in equation (\ref{Cregsi}), replacing $C \rightarrow \int_{-L/2}^{L/2} k ( \phi_\uparrow(x) + \phi_\downarrow(x)) \tilde{\delta}(x) dx$. For concreteness, we choose the regulated delta function $\tilde{\delta}$ to be \begin{equation*} \tilde{\delta}(x) = \begin{cases} \frac{1}{b} & |x| \leq b/2 \\ 0 & |x| > b/2 \end{cases} \end{equation*} With this regularization, we find: \begin{align*} [C, H_0] &= \frac{kvi}{b} \int_{-b/2}^{b/2} \textcolor{black}{(\partial_x \phi_\uparrow - \partial_x \phi_\downarrow)} dx \nonumber \\ [C,[C,H_0]] &= -\frac{4\pi k v}{b} \end{align*} so that \begin{align*} \mathcal{M} &= \frac{\pi b}{k v} \nonumber \\ \Pi &= \frac{1}{2} \int_{-b/2}^{b/2}\textcolor{black}{(\partial_x \phi_\uparrow - \partial_x \phi_\downarrow)} dx \end{align*} According to Eq. \ref{finUsumm}, the low energy theory at finite $U$ is obtained by adding terms to $H_{\text{eff}}$ (\ref{Heffsi2}) of the form $\sum_{n= -\infty}^{n=\infty} e^{i n \Pi} \epsilon_n(\{a_p, a_p^\dagger\}, C_3')$. Here, the $\epsilon_n$ are some unknown functions whose precise form cannot be determined without more calculation. We should mention that the $\epsilon_n$ functions also depend on $U$ --- in fact, $\epsilon_n \rightarrow 0$ as $U \rightarrow \infty$ --- but for notational simplicity we have chosen not to show this dependence explicitly. 
In what follows, instead of computing $\epsilon_n$, we take a more qualitative approach: we simply assume that $\epsilon_n$ contains all combinations of $a_p, a_p^\dagger, C_3'$ that are not forbidden by locality or other general principles, and we derive the consequences of this assumption. The next step is to analyze the effect of the above terms on the low energy spectrum. This analysis depends on which parameter regime one wishes to consider; here, we focus on the limit where $L \rightarrow \infty$, while $U$ is fixed but large. In this case, $H_{\text{eff}}$ (\ref{Heffsi2}) has a gapless spectrum, so we cannot use conventional perturbation theory to analyze the effect of the finite $U$ corrections; instead we need to use a renormalization group (RG) approach. This RG analysis has been carried out previously~\cite{kane1992transmission, kane1992transport} and we will not repeat it here. Instead, we merely summarize a few key results: first, one of the terms generated by finite $U$, namely $e^{i \Pi}$, is relevant for $k > 1$ and marginal for $k=1$. Second, the operator $e^{i \Pi}$ can be interpreted physically as describing a quasiparticle tunneling event where a charge $1/k$ quasiparticle tunnels from one side of the impurity to the other. Third, this operator drives the system from the $U =\infty$ fixed point to the $U=0$ fixed point. These results imply that when $k > 1$, for any finite $U$, the low energy spectrum in the thermodynamic limit $L \rightarrow \infty$ is always described by the $U=0$ theory $H_0$. Thus, in this case, the finite $U$ corrections have an important effect on the low energy physics. We note that these conclusions are consistent with the RG analysis of magnetic impurities given in Ref.~\onlinecite{BeriCooper}. \subsubsection{Multiple magnetic impurities} We now move on to consider a system of $N$ equally spaced magnetic impurities on an edge of circumference $L$. 
As in the single impurity case, the first step in understanding the finite $U$ corrections is to compute the $\Pi_i$ operators that are conjugate to the $C_i$'s. Regularizing the cosine terms as in the previous case, a straightforward calculation gives \begin{align*} \mathcal{M}_{ij} &= \frac{\pi b}{k v} \delta_{ij} \nonumber \\ \Pi_i &= \frac{1}{2} \int_{x_i-b/2}^{x_i+b/2}\textcolor{black}{(\partial_x \phi_\uparrow - \partial_x \phi_\downarrow)}dx \end{align*} where $i,j=1,...,N$. \footnote{We do not bother to compute the two remaining $\Pi_i$'s since we do not want to include finite $U$ corrections from the corresponding cosine terms \unexpanded{$\cos(2\pi Q_\uparrow)$} and \unexpanded{$\cos(2\pi Q_\downarrow)$}. Indeed, these terms were introduced as a mathematical trick and so their $U$ coefficients must be taken to infinity to obtain physical results.} According to Eq. \ref{finUsumm}, the finite $U$ corrections contribute additional terms to $H_{\text{eff}}$ (\ref{Hefffm}) of the form $\sum_{\v{n}} e^{i \sum_{j=1}^N n_j \Pi_j} \epsilon_{\v{n}}$ where $\v{n} =(n_1,...,n_N)$ is an $N$-component integer vector. Here, $\epsilon_{\v{n}}$ is some unknown function of the operators $\{a_{pm}, a_{pm}^\dagger, C_m'\}$ which vanishes as $U \rightarrow \infty$. We now discuss how the addition of these terms affects the low energy spectrum in two different parameter regimes. First we consider the limit where $L \rightarrow \infty$ with $U$ and $N$ fixed. This case is a simple generalization of the single impurity system discussed above, and it is easy to see that the same renormalization group analysis applies here. Thus, in this limit the finite $U$ corrections have a dramatic effect for $k > 1$ and cause the low energy spectrum to revert back to the $U=0$ system for any finite value of $U$, no matter how large. The second parameter regime that we consider is where $L, N \rightarrow \infty$ with $U$ and $L/N$ fixed. 
The case is different from the previous one because $H_{\text{eff}}$ (\ref{Hefffm}) has a finite \emph{energy gap} in this limit (of order $v/s$ where $s = L/N$). Furthermore, $H_{\text{eff}}$ has a unique ground state. These two properties are stable to small perturbations, so we conclude that the system will continue to have a unique ground state and an energy gap for finite but sufficiently large $U$. The presence of this energy gap at large $U$ is not surprising. Indeed, in the above limit, our system can be thought of as a toy model for a fractional quantum spin Hall edge that is proximity coupled to a ferromagnetic strip. It is well known that a ferromagnet can open up an energy gap at the edge if the coupling is sufficiently strong\cite{LevinStern, BeriCooper}, which is exactly what we have found here. \subsubsection{Multiple magnetic and superconducting impurities} \label{finUscsect} Finally, let us discuss a system of $2N$ equally spaced alternating magnetic and superconducting impurities on an edge of circumference $L$. As in the previous cases, the first step in understanding the finite $U$ corrections is to compute the $\Pi_i$ operators that are conjugate to the $C_i$'s. Regularizing the cosine terms as in the previous cases, a straightforward calculation gives \begin{eqnarray*} \mathcal{M}_{ij} &=& \frac{\pi b}{k v} \delta_{ij} \nonumber \\ \Pi_i &=& \frac{1}{2} \int_{x_i-b/2}^{x_i+b/2}\textcolor{black}{(\partial_x \phi_\uparrow - (-1)^{i+1} \partial_x \phi_\downarrow)} dx \end{eqnarray*} where $i,j=1,...,2N$. As a first step towards understanding the finite $U$ corrections, we consider a scenario in which only one of the impurities/cosine terms has a finite coupling constant $U$, while the others have a coupling constant which is infinitely large. This scenario is easy to analyze because we only have to include the corrections associated with a \emph{single} impurity. 
For concreteness, we assume that the impurity in question is superconducting rather than magnetic and we label the corresponding cosine term by $\cos(C_{2j})$ (in our notation the superconducting impurities are labeled by even integers). Having made these choices, we can immediately write down the finite $U$ corrections: according to Eq. \ref{finUsumm}, these corrections take the form \begin{equation*} \textcolor{black}{\sum_{n=-\infty}^\infty} e^{i n \Pi_{2j}} \epsilon_{n}(\{a_{pm},a_{pm}^\dagger\}), \end{equation*} where the $\epsilon_n$ are some unknown functions which vanish as $U \rightarrow \infty$. Our next task is to understand how these corrections affect the low energy spectrum. The answer to this question depends on which parameter regime one wishes to study: here we will focus on the regime where $L,N \rightarrow \infty$ with $L/N$ and $U$ fixed. In this limit, $H_{\text{eff}}$ (\ref{Hefffmsc}) has a finite energy gap of order $v/s$ where $s = L/(2N)$. At the same time, the ground state is highly degenerate: in fact, the degeneracy is exponentially large in the system size, growing as $D = 2\cdot(2k)^{N-1}$. Given this energy spectrum, it follows that at the lowest energy scales, the only effect of the finite $U$ corrections is to split the ground state degeneracy. To analyze this splitting, we need to compute the matrix elements of the finite $U$ corrections between different ground states and then diagonalize the resulting $D \times D$ matrix. Our strategy will be to use the identity (\ref{finUidsumm}) which relates the matrix elements of the finite $U$ corrections to the matrix elements of $e^{in\Gamma_{2j}}$. Following this approach, the first step in our calculation is to compute $\Gamma_{2j}$. Using the definition (\ref{Gammadef}), we find \begin{equation*} \Gamma_{2j} = \frac{1}{2k} (C_{2j+1} - C_{2j-1}) \end{equation*} assuming $j \neq N$.
(The case $j = N$ is slightly more complicated due to our conventions for describing the periodic boundary conditions at the edge, so we will assume $j \neq N$ in what follows). The next step is to find the matrix elements of the operator $e^{in \Gamma_{2j}}$ between different ground states. To this end, we rewrite $\Gamma_{2j}$ in terms of the $C_i'$ operators: $\Gamma_{2j} = \frac{C_j'}{2k}$. The matrix elements of $e^{in \Gamma_{2j}}$ can now be computed straightforwardly using the known matrix elements of $C_j'$ (see Eqs. \ref{eciprimemat1}-\ref{eciprimemat2}): \begin{equation} \<\v{\alpha}'|e^{in\Gamma_{2j}}|\v{\alpha}\> = e^{\frac{\pi in\alpha_j}{k}} \delta_{\v{\alpha}' \v{\alpha}} \label{Gamma2j} \end{equation} where $|\v{\alpha}\>$ denotes the ground state $|\v{\alpha}\> \equiv |\v{\alpha},\v{n} = 0\>$. At this point we apply the identity (\ref{finUidsumm}) which states that the matrix elements of $e^{i n \Pi_{2j}} \epsilon_{n}(\{a_{pm},a_{pm}^\dagger\})$ are equal to the matrix elements of $u_n \cdot e^{i n \Gamma_{2j}}$ where $u_n$ is some unknown proportionality constant. Using this identity together with (\ref{Gamma2j}), we conclude that the matrix elements of the finite $U$ corrections are given by \begin{equation*} f \left(\frac{\alpha_j}{k}\right) \delta_{\v{\alpha}' \v{\alpha}} \end{equation*} where $f(x) = \sum_{n= -\infty}^{\infty} u_n e^{\pi inx}$. We are now in a position to determine the splitting of the ground states. To do this, we note that while we don't know the values of the $u_n$ constants and therefore we don't know $f(x)$, we expect that generically the function $f$ will have a unique minimum for $\alpha_j \in \{0,1,...,2k-1\}$. Assuming this is the case, we conclude that the finite $U$ corrections favor a particular value of $\alpha_j$, say $\alpha_j = 0$. Thus these corrections reduce the ground state degeneracy from $D$ to $D/2k$.
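The counting behind this reduction can be reproduced with a short numerical sketch. Here $f$ is a stand-in function with a unique minimum at $\alpha_j = 0$ (the generic situation assumed above, not a computed quantity), and the states are labeled as in (\ref{quantnumdef3}):

```python
import itertools
import math

k, N = 3, 4
j = 1                      # the single finite-U superconducting impurity (j != N)

def f(x):
    """Placeholder splitting function with a unique minimum at x = 0."""
    return -math.cos(math.pi * x)

# all ground states |alpha>: alpha_1..alpha_{N-1} in {0,...,2k-1}, alpha_N in {0,1}
states = [a + (aN,)
          for a in itertools.product(range(2 * k), repeat=N - 1)
          for aN in (0, 1)]
D = len(states)
assert D == 2 * (2 * k) ** (N - 1)       # Eq. (degfmsc)

# diagonal matrix elements f(alpha_j / k) split the ground state manifold
energies = [f(alpha[j - 1] / k) for alpha in states]
emin = min(energies)
ground = [a for a, e in zip(states, energies) if math.isclose(e, emin)]
assert len(ground) == D // (2 * k)       # degeneracy reduced from D to D/2k
```

Since the matrix elements depend only on $\alpha_j$, exactly a fraction $1/2k$ of the basis states attain the minimum, reproducing the reduction $D \rightarrow D/2k$.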
So far we have analyzed the case where one of the superconducting impurities is characterized by a finite coupling constant $U$ while the other impurities are at infinite coupling. Next, suppose that \emph{all} the superconducting impurities have finite $U$ while all the magnetic impurities have infinite $U$. In this case, similar analysis as above shows that the matrix elements of the finite $U$ corrections are of the following form: \footnote{In fact, in deriving this expression we used one more piece of information in addition to the general considerations discussed above, namely that the superconducting impurities are completely disconnected from one another by the magnetic impurities and therefore the finite $U$ corrections do not generate any ``interaction'' terms, e.g. of the form $g(\alpha_i, \alpha_j) \delta_{\boldsymbol{\alpha}' \boldsymbol{\alpha}}$.} \begin{equation} \left[\sum_{j=1}^{N-1} f\left(\frac{\alpha_j}{k} \right) + f \left(\alpha_N-\frac{1}{k} \sum_{j=1}^{N-1} \alpha_j \right) \right] \delta_{\v{\alpha}' \v{\alpha}} \label{finUmatelt} \end{equation} To determine the splitting of the ground states, we need to understand the eigenvalue spectrum of the above matrix. Let us assume that $f$ has a unique minimum at some $\alpha_j = q$ --- which is what we expect generically. Then, as long as the system size is commensurate with this value in the sense that $N q$ is a multiple of $k$, we can see that the above matrix has a unique ground state $|\v{\alpha}\>$ with $\alpha_1 = ... = \alpha_{N-1} = q$ and $\alpha_N = Nq/k$ (mod $2$). Furthermore, this ground state is separated from the lowest excited states by a finite gap, which is set by the function $f$. Thus, in this case, the finite $U$ corrections completely split the ground state degeneracy leading to a unique ground state with an energy gap. Likewise, we can consider the opposite scenario where the magnetic impurities have finite $U$ while the superconducting impurities have infinite $U$. 
Again, a similar analysis shows that the corrections favor a unique ground state which is separated from the excited states by a finite gap. The main difference from the previous case is that the matrix elements of the finite $U$ corrections are \emph{off-diagonal} in the $|\v{\alpha}\>$ basis, so the ground state is a superposition of many different $|\v{\alpha}\>$ states. To complete the discussion, let us consider the case where \emph{all} the impurities, magnetic and superconducting, have finite $U$. If the magnetic impurities are at much stronger coupling than the superconducting impurities, or vice versa, then presumably the finite $U$ corrections drive the system to one of the two gapped phases discussed above. On the other hand, if the two types of impurities have comparable values of $U$, then the low energy physics is more delicate, since the finite $U$ corrections associated with the two types of impurities do not commute with one another, i.e. $[e^{i\Gamma_{2j}}, e^{i\Gamma_{2j \pm 1}}] \neq 0$. In this case, a more quantitative analysis is required to determine the fate of the low energy spectrum. \section{Conclusion} In this paper we have presented a general recipe for computing the low energy spectrum of Hamiltonians of the form (\ref{genHamc}) in the limit $U \rightarrow \infty$. This recipe is based on the construction of an effective Hamiltonian $H_{\text{eff}}$ and an effective Hilbert space $\mathcal{H}_{\text{eff}}$ describing the low energy properties of our system in the infinite $U$ limit. The key reason that our approach works is that this effective Hamiltonian is quadratic, so there is a simple procedure for diagonalizing it. While our recipe gives exact results in the infinite $U$ limit, it provides only approximate results when $U$ is finite; in order to obtain the exact spectrum in the finite $U$ case, we need to include additional (non-quadratic) terms in $H_{\text{eff}}$.
As part of this work, we have discussed the general form of these finite $U$ corrections and how they scale with $U$. However, we have not discussed how to actually compute these corrections. One direction for future research would be to develop quantitative approaches for obtaining these corrections --- for example using the instanton approach outlined in Ref.~\onlinecite{coleman1988aspects}. Some of the most promising directions for future work involve applications of our formalism to different physical systems. In this paper, we have focused on the application to Abelian fractional quantum Hall edges, but there are several other systems where our formalism could be useful. For example, it would be interesting to apply our methods to superconducting circuits --- quantum circuits built out of inductors, capacitors, and Josephson junctions. In particular, several authors have identified superconducting circuits with protected ground state degeneracies that could be used as qubits.\cite{KitaevCurrentMirror,Gladchenko,BrooksKitaevPreskill,Dempster} The formalism developed here might be useful for finding other circuits with protected degeneracies. \begin{acknowledgments} We thank Chris Heinrich for stimulating discussions. SG gratefully acknowledges support by NSF-JQI-PFC and LPS-MPO-CMTC. ML was supported in part by the NSF under grant No. DMR-1254741. \end{acknowledgments}
\section{Introduction} During the past several years, significant progress has been made in the observation of the heavy-light mesons. In 2003, two new narrow charm-strange mesons $D^*_{sJ}(2317)$ and $D^*_{sJ}(2460)$ were observed by BaBar, CLEO and Belle \cite{Aubert:2003fg,Aubert:2003pe,Besson:2003cp,Abe:2003jk,Krokovny:2003zq}. Recently, BaBar reported another two new charm-strange mesons, i.e. $D^*_{sJ}(2860)$ with a width of $(47\pm 17)$ MeV and $D^*_{sJ}(2690)$ with a width of $(112\pm 43)$ MeV, in the $DK$ decay channel~\cite{Aubert:2006mh}. Meanwhile, Belle reported a new vector state $D^*_{sJ}(2708)$ with a width of $(108\pm 23^{+36}_{-31})$ MeV \cite{:2007aa}. The $D^*_{sJ}(2690)$ and $D^*_{sJ}(2708)$ are believed to be the same state, since their masses and widths are consistent with each other. In the $B$ meson sector, two narrow states $B_1(5725)$ and $B^*_2(5740)$ were reported by CDF \cite{B:2007CDF} and are assigned as orbitally excited $B$ mesons. They were confirmed by the D0 collaboration with slightly different masses \cite{Abazov:2007vq}. The CDF collaboration also reported their strange analogues, $B_{s1}(5829)$ and $B^*_{s2}(5840)$, as orbitally excited $B_s$ mesons~\cite{:2007tr}. The $B^*_{s2}(5840)$ was also observed by the D0 collaboration \cite{:2007sna}. Reviews of the recent experimental status of the heavy-light mesons can be found in Refs.~\cite{Bianco:2005hj,Kravchenko:2006qx,Waldi:2007bp,Zghiche:2007im, Poireau:2007cb,Mommsen:2006ai,Kreps:2007mz}. To understand the nature of the heavy-light mesons, especially the newly observed states, and to establish the heavy-light meson spectroscopy, much effort has been made in both experiment and theory.
For example, one can find recent discussions of the dynamics and decay properties of the heavy-light mesons by Close and Swanson \cite{Close:2005se} and Godfrey \cite{Godfrey:2005ww}, and other previous analyses in Refs.~\cite{Godfrey:1985xj,Godfrey:1986wj,Isgur:1991wq,Di Pierro:2001uu,Falk:1995th,Eichten:1993ub,Henriksson:2000gk,Tregoures:1999du,Goity:1998jr, Zhu:1998wy,Dai:1998ve,Orsland:1998de}. For the newly observed heavy-light mesons, such as $D^*_{sJ}(2860)$ and $D^*_{sJ}(2690)$, various attempts have been made to explain their nature \cite{vanBeveren:2006st,Zhang:2006yj,Wei:2006wa,Close:2006gr,Colangelo:2007ds, Wang:2007nfa,Guo:2006rp,Faessler:2008vc,Yasui:2007dv,Colangelo:2006rq,Koponen:2007nr}. Many systematic studies are devoted to establishing the $D$, $D_s$, $B$, and $B_s$ spectroscopies~\cite{Vijande:2004he,Vijande:2006hj,Matsuki:2007zz,Vijande:2007ke,Li:2007px,Swanson:2007rf}, while some earlier works can be found in Refs.~\cite{Godfrey:1985xj,Ebert:1997nk}. Recent reviews of the status of theoretical studies of the heavy-light mesons can be found in Refs.~\cite{Barnes:2005zy,Swanson:2005tq,Swanson:2006st,Rosner:2006jz,Zhu:2007xb,Rosner:2006sv, Nicotri:2007in,Klempt:2007cp}. On the one hand, the improved experimental measurements help clarify some old questions about the spectrum. On the other hand, they also raise some new ones which need further experimental and theoretical study \cite{Colangelo:2007ur,Ananthanarayan:2007sa}. For instance, $D^*(2640)$, reported by DELPHI in $D^{*+}\pi^+\pi^-$~\cite{Abreu:1998vk} as the first radially excited state, has not yet been confirmed by any other experiment. The spin-parity of the narrow $D_1(2420)$ also needs confirmation. The status of the broad $D^*_0(2400)$ is not clear at all; its measured mass and width have large uncertainties.
For the $D_s$ spectroscopy, the low masses of the $D^*_{sJ}(2317)$ and $D^*_{sJ}(2460)$ still cannot be well explained by theory; whether they are exotic states is an open question. Theoretical predictions for the $D^*_{sJ}(2860)$ and $D^*_{sJ}(2690)$ are far from convergence. The narrow state $D^*_{sJ}(2632)$ seen by the SELEX Collaboration~\cite{Evdokimov:2004iy} cannot be naturally explained by any existing theory. Nevertheless, since the flavor symmetry of the heavy-light mesons is badly broken, mixing of states with the same $J^P$ may occur. This adds further complexity to the meson spectrum, and further theoretical investigations are needed. In this work, we make a systematic study of the strong decays of heavy-light mesons in a chiral quark model. In the heavy-quark infinite mass limit, the flavor symmetry no longer exists in the heavy-light mesons, which allows us to describe the initial and final $D$, $D_s$, $B$, and $B_s$ mesons in a nonrelativistic framework self-consistently. The meson decay proceeds through a single-quark transition by the emission of a pseudoscalar meson. An effective chiral Lagrangian is then introduced to account for the quark-meson coupling. Since the quark-meson coupling is invariant under the chiral transformation, some of the low-energy properties of QCD are retained. This approach is similar to that used in Refs.~\cite{Godfrey:1985xj,Godfrey:1986wj}, except that the two constants in the decay amplitudes of Refs.~\cite{Godfrey:1985xj,Godfrey:1986wj} are replaced by two energy-dependent factors deduced from the chiral Lagrangian in our model. The chiral quark model approach has been well developed and widely applied to meson photoproduction reactions~\cite{Manohar:1983md,qk1,qk2,qkk,Li:1997gda,qkk2,qk3,qk4,qk5}. Its recent extension to the $\pi N$ scattering process and to the strong decays of charmed baryons also turns out to be successful and inspiring~\cite{Zhong:2007fx,Zhong:2007gp}.
The paper is organized as follows. In the subsequent section, the heavy-light meson in the quark model is outlined. Then, the non-relativistic quark-meson couplings are given in Sec.\ \ref{qmc}. The decay amplitudes are deduced in Sec.\ \ref{apt}. We present our calculations and discussions in Sec.\ \ref{cd}. Finally, a summary is given in Sec.\ \ref{sum}. \section{meson spectroscopy } \subsection{Harmonic oscillator states} For a heavy-light $\bar{Q}q$ system consisting of a light quark 1 and a heavy quark 2 with masses $m_1$ and $m_2$, respectively, the eigenstates are conventionally generated by a harmonic oscillator potential \begin{eqnarray} \label{hm1} \mathcal{H}=\frac{1}{2m_1}\mathbf{p}^2_1+\frac{1}{2m_2}\mathbf{p}^2_2+ \frac{3}{2}K(\mathbf{r}_1-\mathbf{r}_2)^2, \end{eqnarray} where the vectors $\textbf{r}_{j}$ and $\textbf{p}_j$ are the coordinate and momentum of the $j$-th quark in the meson rest frame, and $K$ describes the oscillator potential strength, which is independent of the flavor quantum number. One defines the Jacobi coordinates to eliminate the c.m. variables: \begin{eqnarray} \mathbf{r}&=&\mathbf{r}_1-\mathbf{r}_2,\label{zb1}\\ \mathbf{R}_{c.m.}&=&\frac{m_1\mathbf{r}_1+m_2\mathbf{r}_2}{m_1+m_2}\label{zb3}. \end{eqnarray} With the above relations (\ref{zb1}--\ref{zb3}), the oscillator Hamiltonian (\ref{hm1}) reduces to \begin{eqnarray} \label{hm2} \mathcal{H}=\frac{P^2_{cm}}{2 M}+\frac{1}{2\mu}\mathbf{p}^2+ \frac{3}{2}Kr^2, \end{eqnarray} where \begin{eqnarray} \label{mom} \mathbf{p}=\mu\dot{\mathbf{r}},\ \ \mathbf{P}_{c.m.}=M \mathbf{\dot{R}}_{c.m.}, \end{eqnarray} with \begin{eqnarray} \label{mass} M=m_1+m_2,\ \ \mu=\frac{m_1 m_2}{m_1+m_2}.
\end{eqnarray} From Eqs.~(\ref{zb1}--\ref{zb3}) and (\ref{mom}), the coordinates $\mathbf{r}_j$ can be expressed as functions of the Jacobi coordinate $\textbf{r}$: \begin{eqnarray} \mathbf{r}_1&=&\mathbf{R}_{c.m.}+\frac{\mu}{m_1}\mathbf{r},\\ \mathbf{r}_2&=&\mathbf{R}_{c.m.}-\frac{\mu}{m_2}\mathbf{r}, \end{eqnarray} and the momenta $\mathbf{p}_j$ are given by \begin{eqnarray} \mathbf{p}_1&=&\frac{m_1}{M}\mathbf{P}_{c.m.}+\mathbf{p},\\ \mathbf{p}_2&=&\frac{m_2}{M}\mathbf{P}_{c.m.}-\mathbf{p}. \end{eqnarray} In standard notation, the principal quantum number of the oscillator is $N=2n+l$, the energy of a state is given by \begin{eqnarray} E_N&=&(N+\frac{3}{2})\omega, \end{eqnarray} and the frequency of the oscillator is \begin{eqnarray}\label{freq} \omega=(3K/\mu)^{1/2}. \end{eqnarray} In the quark model, the useful oscillator parameter is defined by \begin{eqnarray} \label{par} \alpha^2=\mu \omega=\sqrt{\frac{2m_2}{m_1+m_2}}\beta^2, \end{eqnarray} where $\beta$ is the commonly used harmonic oscillator parameter with a universal value $\beta=0.4$ GeV. The wave function of an oscillator is then given by \begin{eqnarray} \psi^{n}_{l m}=R_{n l}Y_{l m}. \end{eqnarray} \begin{table}[ht] \caption{The total wave function for the heavy-light mesons, denoted by $|n ^{2S+1} L_{J}\rangle$.
The Clebsch-Gordan series for the spin and angular-momentum addition $|n ^{2S+1} L_{J} \rangle= \sum_{m+S_z=J_z} \langle Lm,SS_z|JJ_z \rangle \psi^n_{Lm} \chi_{S_z}\Phi $ has been omitted, where $\Phi$ is the flavor wave function.} \label{wfS} \begin{tabular}{|c|c|c|c|c|c|c }\hline\hline $|n ^{2S+1} L_J\rangle$ & $J^P$ &\ \ wave function \\ \hline 1 $^1 S_0$ & $0^-$ & $\psi^{0}_{00}\chi^0\Phi$ \\ \hline 1 $^3 S_1$& $1^-$ & $\psi^{0}_{00}\chi^1_{S_z}\Phi$ \\ \hline 1 $^1 P_1$& $1^+$ &$\psi^{0}_{1m}\chi^0\Phi$ \\ \hline 1 $^3 P_0$& $0^+$ &$\psi^{0}_{1 m}\chi^1_{S_z}\Phi$ \\ \hline 1 $^3 P_1$& $1^+$ &$\psi^{0}_{1 m}\chi^1_{S_z}\Phi$ \\ \hline 1 $^3 P_2$& $2^+$ &$\psi^{0}_{1 m}\chi^1_{S_z}\Phi$ \\ \hline 2 $^1 S_0$& $0^-$ &$\psi^{1}_{00}\chi^0\Phi$ \\ \hline 2 $^3 S_1$& $1^-$ &$\psi^{1}_{00}\chi^1_{S_z}\Phi$ \\ \hline 1 $^1 D_2$& $2^-$ &$\psi^{0}_{2 m}\chi^0\Phi$ \\ \hline 1 $^3 D_1$& $1^-$ &$\psi^{0}_{2 m}\chi^1_{S_z}\Phi$ \\ \hline 1 $^3 D_2$& $2^-$ &$\psi^{0}_{2 m}\chi^1_{S_z}\Phi$ \\ \hline 1 $^3 D_3$& $3^-$ &$\psi^{0}_{2 m}\chi^1_{S_z}\Phi$ \\ \hline \end{tabular} \end{table} \subsection{Spin wave functions} The usual spin wave functions are adopted. For the spin-0 state, it is \begin{eqnarray} \chi^0&=&\frac{1}{\sqrt{2}}(\uparrow\downarrow-\downarrow\uparrow), \end{eqnarray} and for the spin-1 states, the wave functions are \begin{eqnarray} \chi^1_{1}&=&\uparrow\uparrow,\ \ \ \chi^1_{-1}=\downarrow \downarrow ,\nonumber\\ \chi^1_{0}&=&\frac{1}{\sqrt{2}} (\uparrow\downarrow+\downarrow\uparrow) . \end{eqnarray} We take the heavy-quark infinite mass limit as an approximation to construct the total wave function without flavor symmetry. All the wave functions up to $1D$ states are listed in Tab. \ref{wfS}. 
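As a quick numerical illustration of Eqs.~(\ref{mass}) and (\ref{par}), the sketch below evaluates the reduced mass and the oscillator parameter $\alpha$ for a $c\bar q$ pair, using $\beta=0.4$ GeV and the constituent quark masses quoted later in Sec.~\ref{cd}:

```python
import math

# Reduced mass (Eq. mass) and oscillator parameter (Eq. par) for a c-qbar pair,
# with beta = 0.4 GeV and the constituent masses used later in this paper.
beta = 0.4              # GeV, universal oscillator parameter
m1, m2 = 0.350, 1.700   # GeV: light (u/d) and heavy (c) constituent masses

mu = m1 * m2 / (m1 + m2)                                    # reduced mass
alpha = math.sqrt(math.sqrt(2 * m2 / (m1 + m2)) * beta**2)  # Eq. (par)
print(f"mu = {mu:.3f} GeV, alpha = {alpha:.3f} GeV")  # mu ~ 0.29, alpha ~ 0.45
```

For the $b\bar q$ and strange cases one only changes the masses; $\alpha$ stays in the $0.4$--$0.5$ GeV range, reflecting the weak $m_2$ dependence of Eq.~(\ref{par}).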
\section{The quark-meson couplings }\label{qmc} In the chiral quark model, the low-energy quark-meson interactions are described by the effective Lagrangian \cite{Li:1997gda,qk3} \begin{eqnarray} \label{lg} \mathcal{L}=\bar{\psi}[\gamma_{\mu}(i\partial^{\mu}+V^{\mu}+\gamma_5A^{\mu})-m]\psi +\cdot\cdot\cdot, \end{eqnarray} where $V^{\mu}$ and $A^{\mu}$ correspond to vector and axial currents, respectively. They are given by \begin{eqnarray} V^{\mu} &=& \frac{1}{2}(\xi\partial^{\mu}\xi^{\dag}+\xi^{\dag}\partial^{\mu}\xi), \nonumber\\ A^{\mu} &=& \frac{1}{2i}(\xi\partial^{\mu}\xi^{\dag}-\xi^{\dag}\partial^{\mu}\xi), \end{eqnarray} with $ \xi=\exp{(i \phi_m/f_m)}$, where $f_m$ is the meson decay constant. For the $SU(3)$ case, the pseudoscalar-meson octet $\phi_m$ can be expressed as \begin{eqnarray} \phi_m=\pmatrix{ \frac{1}{\sqrt{2}}\pi^0+\frac{1}{\sqrt{6}}\eta & \pi^+ & K^+ \cr \pi^- & -\frac{1}{\sqrt{2}}\pi^0+\frac{1}{\sqrt{6}}\eta & K^0 \cr K^- & \bar{K}^0 & -\sqrt{\frac{2}{3}}\eta}, \end{eqnarray} and the quark field $\psi$ is given by \begin{eqnarray}\label{qf} \psi=\pmatrix{\psi(u)\cr \psi(d) \cr \psi(s) }. \end{eqnarray} From the leading order of the Lagrangian [see Eq.~(\ref{lg})], we obtain the standard quark-meson pseudovector coupling at tree level \begin{eqnarray}\label{coup} H_m=\sum_j \frac{1}{f_m}I_j \bar{\psi}_j\gamma^{j}_{\mu}\gamma^{j}_{5}\psi_j\partial^{\mu}\phi_m, \end{eqnarray} where $\psi_j$ represents the $j$-th quark field in a hadron, and $I_j$ is the isospin operator to be given later. In the quark model, the non-relativistic form of Eq.
(\ref{coup}) is written as \cite{Zhong:2007fx,Li:1997gda,qk3} \begin{eqnarray}\label{ccpk} H^{nr}_{m}&=&\sum_j\Big\{\frac{\omega_m}{E_f+M_f}\mbox{\boldmath$\sigma$\unboldmath}_j\cdot \textbf{P}_f+ \frac{\omega_m}{E_i+M_i}\mbox{\boldmath$\sigma$\unboldmath}_j \cdot \textbf{P}_i \nonumber\\ &&-\mbox{\boldmath$\sigma$\unboldmath}_j \cdot \textbf{q} +\frac{\omega_m}{2\mu_q}\mbox{\boldmath$\sigma$\unboldmath}_j\cdot \textbf{p}'_j\Big\}I_j \varphi_m, \end{eqnarray} where $\mbox{\boldmath$\sigma$\unboldmath}_j$ corresponds to the Pauli spin vector of the $j$-th quark in a hadron. $\mu_q$ is a reduced mass given by $1/\mu_q=1/m_j+1/m'_j$, where $m_j$ and $m'_j$ stand for the masses of the $j$-th quark in the initial and final hadrons, respectively. For emitting a meson, we have $\varphi_m=\exp({-i\textbf{q}\cdot \textbf{r}_j})$, and for absorbing a meson we have $\varphi_m=\exp({i\textbf{q}\cdot \textbf{r}_j})$. In the above non-relativistic expansions, $\textbf{p}'_j=\textbf{p}_j-\frac{m_j}{M}\mathbf{P}_{c.m.}$ is the internal momentum for the $j$-th quark in the initial meson rest frame. $\omega_m$ and $\textbf{q}$ are the energy and three-vector momentum of the light meson, respectively. The isospin operator $I_j$ in Eq. (\ref{coup}) is expressed as \begin{eqnarray} I_j=\cases{ a^{\dagger}_j(u)a_j(s) & for $K^+$, \cr a^{\dagger}_j(s)a_j(u) & for $K^-$,\cr a^{\dagger}_j(d)a_j(s) & for $K^0$, \cr a^{\dagger}_j(s)a_j(d) & for $\bar{K^0}$,\cr a^{\dagger}_j(u)a_j(d) & for $\pi^-$,\cr a^{\dagger}_j(d)a_j(u) & for $\pi^+$,\cr \frac{1}{\sqrt{2}}[a^{\dagger}_j(u)a_j(u)-a^{\dagger}_j(d)a_j(d)] & for $\pi^0$, \cr \cos\theta \frac{1}{\sqrt{2}}[a^{\dagger}_j(u)a_j(u)+a^{\dagger}_j(d)a_j(d)]\cr -\sin\theta a^{\dagger}_j(s)a_j(s)& for $\eta$,} \end{eqnarray} where $a^{\dagger}_j(u,d,s)$ and $a_j(u,d,s)$ are the creation and annihilation operators for the $u$, $d$ and $s$ quarks. 
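For instance, with the $\theta=45^\circ$ convention adopted below, the $\eta$ matrix elements of $I_j$ evaluate to simple numbers (a quick check, with $\pi^0$ included for comparison):

```python
import math

# Numerical values of the eta (and pi0) isospin factors, read directly off the
# definition of I_j above, for the mixing angle theta = 45 degrees.
theta = math.radians(45.0)
gI_pi0_u  = 1 / math.sqrt(2)                  # u quark line emitting a pi0
gI_eta_ud = math.cos(theta) / math.sqrt(2)    # u or d quark line emitting an eta
gI_eta_s  = -math.sin(theta)                  # s quark line emitting an eta
print(gI_pi0_u, gI_eta_ud, gI_eta_s)          # 0.707..., 0.5, -0.707...
```

At $\theta=45^\circ$ the light-quark $\eta$ factor is exactly $1/2$, so $\eta$ emission from a $u/d$ line is suppressed by a factor of $1/4$ in the width relative to an $s$-quark line.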
Generally, $\theta$ ranges from about $32^\circ$ to $43^\circ$, depending on whether the quadratic or linear mass relation is applied \cite{PDG}. In our convention, $\theta=45^\circ$ corresponds to the mixing scheme of Ref.~\cite{Close:2005se}. We apply the same value in order to compare with Ref.~\cite{Close:2005se}. However, we note in advance that, within the commonly accepted range of $\theta$, our results do not show great sensitivity, due to the relatively large uncertainties of the present experimental data. \begin{table}[ht] \caption{The spin-factors used in this work.} \label{gfactor} \begin{tabular}{|l |l| l|c|c|c|c }\hline \hline $g^z_{10}=\langle \chi^0|\sigma_{1z}|\chi^1_{0}\rangle=1$ \\ $g^+_{10}=\langle \chi^0|\sigma^+_{1}|\chi^1_{-1}\rangle=\sqrt{\frac{1}{2}}$ \\ $g^-_{10}=\langle \chi^0|\sigma^-_{1}|\chi^1_{1}\rangle=-\sqrt{\frac{1}{2}}$ \\ $g^z_{01}=\langle \chi^1_{0}|\sigma_{1z}|\chi^0\rangle=1$ \\ $g^+_{01}=\langle \chi^1_{1}|\sigma^+_{1}|\chi^0\rangle=-\sqrt{\frac{1}{2}}$ \\ $g^z_{11}=\langle \chi^1_{1}|\sigma_{1z}|\chi^1_{1}\rangle=1$ \\ $g^+_{11}=\langle \chi^1_{1}|\sigma^+_{1}|\chi^1_{0}\rangle=\sqrt{\frac{1}{2}}$ \\ \hline \end{tabular} \end{table} \begin{center} \begin{table}[ht] \caption{ The decay amplitudes for $|n \ ^{2S+1} L_{J} \rangle\rightarrow |1^1S_0\rangle \mathbb{P}$. $g_I$ is an isospin factor, which is defined by $g_I=\langle \phi_\Sigma|I_1|\phi_\Lambda\rangle$. In Tabs. \ref{asa}--\ref{ase}, the overall factor $F(q')=\exp \left(-\frac{q^{\prime 2}}{4\alpha^2}\right)$, which plays the role of the decay form factor, is omitted for simplicity, where $q^\prime= (\mu/m_1)q$. In the tables, we have defined $\mathcal{R}\equiv(\mathcal{G}q-\frac{1}{2}hq^\prime)$. The various spin-factors used in this work are listed in Tab.
\ref{gfactor}.} \label{asa} \begin{tabular}{|c|c|c|c|c|c|c }\hline\hline initial state & amplitude\\ \hline $1^3S_1 (1^-) $ & $g_Ig^z_{10}\mathcal{R} $\\ \hline $1^1P_1 (1^+) $ &forbidden\\ \hline $1^3P_0 (0^+) $ &$i\frac{1}{\sqrt{6}}g_Ig^z_{10}\mathcal{R}\frac{q^\prime}{\alpha} +i\frac{1}{\sqrt{6}}g_I(\sqrt{2}g^+_{10}+g^z_{10})h\alpha $ \\ \hline $1^3P_1 (1^+) $ &forbidden\\ \hline $1^3P_2 (2^+) $ &$i\frac{1}{\sqrt{3}}g_Ig^z_{10}\mathcal{R}\frac{q^\prime}{\alpha} $\\ \hline $2^1S_0 (0^-) $ &forbidden \\ \hline $2^3S_1 (1^-) $ &$\frac{1}{\sqrt{24}}g_Ig^z_{10}\mathcal{R}(\frac{q^\prime}{\alpha})^2 +\sqrt{\frac{1}{6}}g_Ig^z_{10}hq^\prime $ \\ \hline $1^1D_2 (2^-) $ &forbidden\\ \hline $1^3D_1 (1^-) $ &$\frac{1}{\sqrt{30}}g_Ig^z_{10}\mathcal{R}(\frac{q^\prime}{\alpha})^2 +\sqrt{\frac{3}{5}}g_Ig^+_{10}hq^\prime $\\ \hline $1^3D_2 (2^-) $ &forbidden\\ \hline $1^3D_3 (3^-) $ & $-\frac{1}{\sqrt{20}}g_Ig^z_{10}\mathcal{R}(\frac{q^\prime}{\alpha})^2 $\\ \hline \end{tabular} \end{table} \end{center} \section{strong decays}\label{apt} \begin{center} \begin{table}[ht] \caption{ The decay amplitudes for $|n \ ^{2S+1} L_{J} \rangle\rightarrow |1^3S_1\rangle \mathbb{P}$. 
} \label{asb} \begin{tabular}{|c|c|c|c|c|c|c }\hline\hline $|n \ ^{2S+1} L_{J}\rangle$ & $ J_z$ & amplitude\\ \hline $1^1P_1 (1^+) $ & $\pm 1$ &$ i g_Ig^+_{01}h\alpha $\\ & 0 & $-i\frac{1}{\sqrt{2}}g_Ig^z_{01}\mathcal{R}\frac{q^\prime}{\alpha} -i\frac{1}{\sqrt{2}}g_Ig^z_{01}h\alpha $\\ \hline $1^3P_0 (0^+) $ & & forbidden\\ \hline $1^3P_1 (1^+) $ &$\pm 1$ &$i\frac{1}{2}g_Ig^z_{11}\mathcal{R}\frac{q^\prime}{\alpha} +i\frac{1}{2}g_I(g^z_{11}+\sqrt{2}g^+_{11})h\alpha $\\ &0& $\sqrt{2}g_Ig^+_{11}h\alpha $\\ \hline $1^3P_2 (2^+) $ & $\pm 1$ &$-i\frac{1}{2}g_Ig^z_{11}\mathcal{R}\frac{q^\prime}{\alpha} $\\ &0&0\\ \hline $2^1S_0 (0^-) $ & 0 &$\frac{1}{\sqrt{24}}g_Ig^z_{10}\mathcal{R}(\frac{q^\prime}{\alpha})^2 +\sqrt{\frac{1}{6}}g_Ig^z_{10}hq^\prime $ \\ \hline $2^3S_1 (1^-) $ & $\pm 1$ &$\pm\left\{\frac{1}{\sqrt{24}}g_Ig^z_{11}\mathcal{R}(\frac{q^\prime}{\alpha})^2F+\sqrt{\frac{1}{6}}g_Ig^z_{11}hq^\prime \right\}$ \\ &0& 0\\ \hline $1^1D_2 (2^-)$ & $\pm 1$ &$\frac{1}{\sqrt{2}}g^+_{01}g_Ihq^\prime $\\ &0 &$-\sqrt{\frac{1}{12}}g_Ig^z_{01}\mathcal{R}(\frac{q^\prime}{\alpha})^2 -\sqrt{\frac{1}{3}}g_Ig^+_{01}hq^\prime $\\ \hline $1^3D_1 (1^-)$ & $\pm 1$ &$\mp\left[\sqrt{\frac{1}{120}}g_Ig^z_{11}\mathcal{R}(\frac{q^\prime}{\alpha})^2 +\sqrt{\frac{5}{12}}g_Ig^+_{11}hq^\prime \right]$\\ &0&0\\ \hline $1^3D_2 (2^-)$ & $\pm 1$ &$\sqrt{\frac{1}{24}}g_Ig^z_{10}\mathcal{R}(\frac{q^\prime}{\alpha})^2 +\sqrt{\frac{3}{4}}g_Ig^+_{11}hq^\prime $\\ &0& $ g^+_{11}g_Ihq^\prime $\\ \hline $1^3D_3 (3^-)$ & $\pm 1$ & $\mp \sqrt{\frac{1}{30}}g_Ig^z_{11}\mathcal{R}(\frac{q^\prime}{\alpha})^2 $\\ &0&0\\ \hline \end{tabular} \end{table} \end{center} \begin{center} \begin{table}[ht] \caption{ The decay amplitudes for $|n \ ^{2S+1} L_{J} \rangle\rightarrow |1^3P_0 \rangle\mathbb{P}$, where we have defined $\mathcal{W}\equiv\mathcal{G}q(-1+\frac{q^{\prime 2}}{4\alpha^2})$, $\mathcal{S}\equiv h\alpha(1-\frac{q^{\prime 2}}{2\alpha^2})$} \label{asc} \begin{tabular}{|c|c|c|c|c|c|c }\hline\hline $|n \ 
^{2S+1} L_{J}\rangle$ & $ J_z$ & amplitude\\ \hline $2^1S_0 (0^-) $ & 0 &$i\frac{1}{3}g_Ig^z_{01}\mathcal{W}\frac{q^\prime}{\alpha}-i\frac{1}{3}g_Ig^z_{01}h\alpha \mathcal{A} $ \\ \hline $2^3S_1 (1^-) $ & & forbidden \\ \hline &0 &$i\frac{\sqrt{2}}{3}g_Ig^z_{01}\mathcal{W}\frac{q^\prime}{\alpha}+i\frac{\sqrt{2}}{3}g_Ig^z_{01}h\alpha \mathcal{A} $\\ $1^1D_2 (2^-)$ & $\pm 1$ &0\\ & $\pm 2$ &$-i\frac{\sqrt{2}}{3}g_Ig^+_{01}h\alpha $\\ \hline $1^3D_1 (1^-)$ & & forbidden \\ \hline &0& $-i \frac{\sqrt{6}}{3}g^+_{11}g_I \mathcal{S} $\\ $1^3D_2 (2^-)$ &$\pm 1$& 0\\ & $\pm 2$ & $-i \frac{2}{3}g^+_{11}g_I h\alpha F-i \frac{\sqrt{2}}{6}g^z_{11}g_I\mathcal{S} $\\ \hline $1^3D_3 (3^-)$ & $\pm 2$ & $-i \frac{\sqrt{2}}{3}g^+_{11}g_Ih\alpha F+i \frac{1}{3}g^z_{11}g_I \mathcal{S} $\\ &0& 0\\ \hline \end{tabular} \end{table} \end{center} \begin{center} \begin{table}[ht] \caption{ The decay amplitudes for $|n \ ^{2S+1} L_{J} \rangle\rightarrow |1^1P_1 \rangle\mathbb{P}$. } \label{asd} \begin{tabular}{|c|c|c|c|c|c|c }\hline\hline $|n \ ^{2S+1} L_{J}\rangle$ & $(J^f_z, J^i_z)$ & amplitude\\ \hline $2^1S_0 (0^-) $ & &forbidden \\ \hline $2^3S_1 (1^-) $ & $\pm(1,-1)$ & $-i\sqrt{\frac{2}{3}}g_Ig^+_{10}h\alpha (1+\frac{q^{\prime 2}}{4\alpha^2}) $ \\ & $(0,0)$ & $-i\frac{1}{\sqrt{3}}g_Ig^z_{10}\mathcal{W}\frac{q^\prime}{\alpha} +i\frac{1}{\sqrt{3}}g_Ig^z_{10}h\alpha \mathcal{A} $ \\ \hline $1^1D_2 (2^-)$ & &forbidden\\ \hline $1^3D_1 (1^-)$ & $\pm(1,-1)$&$-i \sqrt{\frac{3}{20}}g_Ig^z_{10}\mathcal{G}q\frac{q^\prime}{\alpha} -i\sqrt{\frac{1}{30}}g_Ig^+_{10}\mathcal{S} $\\ & $(0,0)$ & $-i\sqrt{\frac{4}{15}}g_Ig^z_{10}\mathcal{W}\frac{q^\prime}{\alpha} +i\sqrt{\frac{4}{15}}g_Ig^z_{10}h\alpha \mathcal{A} $ \\ \hline \hline $1^3D_2 (2^-)$ &$\pm (1,-1)$& $-i\frac{1}{\sqrt{12}}g_Ig^z_{10}\mathcal{R}\frac{q^\prime}{\alpha} +i\frac{1}{\sqrt{24}}g_Ig^+_{10}h q^\prime\frac{q^\prime}{\alpha} $\\ \hline $1^3D_3 (3^-)$ & $\pm(1,-1)$ & $-i 
\sqrt{\frac{1}{30}}(\sqrt{2}g^z_{10}-g^+_{10})g_Ihq^\prime\frac{q^\prime}{\alpha} $\\ &(0,0)& $i\sqrt{\frac{1}{5}}g_I[g^z_{10}\mathcal{W}\frac{q^\prime}{\alpha}-\sqrt{2}g^z_{10}h\alpha \mathcal{A} +2g^+_{10}\mathcal{S}] $ \\ \hline \end{tabular} \end{table} \end{center} \begin{center} \begin{table}[ht] \caption{ The decay amplitudes for $|n \ ^{2S+1} L_{J} \rangle\rightarrow |1^3P_1 \rangle\mathbb{P}$. } \label{ase} \begin{tabular}{|c|c|c|c|c|c|c }\hline\hline $|n \ ^{2S+1} L_{J}\rangle$ & $(J^f_z, J^i_z)$ & amplitude\\ \hline $2^1S_0 (0^-) $ & &forbidden \\ \hline $2^3S_1 (1^-) $ & $\pm(1,-1)$ & $-i\frac{1}{\sqrt{3}}g_Ig^+_{11}h\alpha (1+\frac{q^{\prime 2}}{4\alpha^2}) $ \\ & $\pm(1,1)$ & $i\frac{1}{\sqrt{6}}g_Ig^z_{11}\mathcal{W}\frac{q^\prime}{\alpha} -i\frac{1}{\sqrt{6}}g_Ig^z_{11}h\alpha \mathcal{A} $ \\ \hline & $\pm(0, 2)$ & $\pm ig^+_{01}h \alpha $\\ $1^1D_2 (2^-)$ & $\pm(1,1)$ & $\pm i\frac{1}{\sqrt{2}}g^+_{01} \mathcal{S} $\\ & $\pm(1,-1)$ & $\pm i\frac{1}{2}g_Ig^z_{01}(\mathcal{R}\frac{q^\prime}{\alpha}+h \alpha) $\\ \hline $1^3D_1 (1^-)$ & $\pm(1,-1)$&$-i\sqrt{\frac{1}{60}}g_Ig^+_{11}h\alpha(1+\frac{q^{\prime 2}}{2\alpha^2}) $\\ & $\pm(1,1)$ & $-i\sqrt{\frac{1}{30}}g_Ig^z_{11}(\mathcal{W}\frac{q^\prime}{\alpha}-h\alpha \mathcal{A} -\frac{3}{2}\mathcal{S}) $ \\ \hline \hline &$\pm(0,2)$ & $\pm i\frac{1}{\sqrt{12}}g_Ig^z_{11}(\mathcal{R}\frac{q^\prime}{\alpha} +3 h \alpha ) $\\ $1^3D_2 (2^-)$ &$\pm (1,-1)$& $\pm[-i\sqrt{\frac{1}{3}}g_Ig^z_{11}h\alpha -i\sqrt{\frac{1}{12}}g^+_{11}\mathcal{S} ]$\\ & $\pm (1,1)$ & $\pm i\sqrt{\frac{1}{6}}g_Ig^z_{11}(\mathcal{W}\frac{q^\prime}{\alpha}-h\alpha \mathcal{A} -\frac{1}{2}\mathcal{S}) $ \\ \hline &$\pm(0,2)$ & $i\frac{1}{\sqrt{6}}g_Ig^z_{11}\mathcal{R}\frac{q^\prime}{\alpha} $\\ $1^3D_3 (3^-)$ &$\pm (1,-1)$& $-i\sqrt{\frac{1}{60}}g_Ig^+_{11}h q^\prime \frac{q^\prime}{\alpha} $\\ & $\pm (1,1)$ & $-i\sqrt{\frac{2}{15}}g_Ig^z_{11}[\mathcal{W}\frac{q^\prime}{\alpha}-h\alpha \mathcal{A} +\mathcal{S}] $ \\ \hline 
\end{tabular} \end{table} \end{center} For a heavy-light meson $\bar{Q}q$, because the pseudoscalar mesons $\mathbb{P}$ couple only to the light quarks, the strong decay amplitudes for the process $\mathbb{M}_i \rightarrow \mathbb{M}_f \mathbb{P} $ can be written as \begin{eqnarray} \label{am} &&\mathcal{M}(\mathbb{M}_i \rightarrow \mathbb{M}_f \mathbb{P})\nonumber\\ & =& \left\langle \mathbb{M}_f\left|\left\{\mathcal{G}\mbox{\boldmath$\sigma$\unboldmath}_1\cdot \textbf{q} +h \mbox{\boldmath$\sigma$\unboldmath}_1\cdot \textbf{p}'_1\right\}I_1 e^{-i\textbf{q}\cdot \textbf{r}_1}\right|\mathbb{M}_i\right\rangle , \end{eqnarray} with \begin{eqnarray} \mathcal{G}\equiv-\left(\frac{\omega_m}{E_f+M_f}+1\right),\ \ h\equiv\frac{\omega_m}{2\mu_q}. \end{eqnarray} $\mathbb{M}_i$ and $\mathbb{M}_f$ are the initial and final meson wave functions, and they are listed in Tab. \ref{wfS}. In the initial-meson-rest frame, the energy and momentum of the initial meson $\mathbb{M}_i$ are denoted by $(E_i, \textbf{P}_i$), while those of the final state meson $\mathbb{M}_f$ and the emitted pseudoscalar meson $\mathbb{P}$ are denoted by $(E_f, \textbf{P}_f)$ and $(\omega_m, \textbf{q})$. Note that $\textbf{P}_i=0$ and $\textbf{P}_f=-\textbf{q}$. The form of Eq.~(\ref{am}) is similar to that in Refs. \cite{Godfrey:1985xj,Godfrey:1986wj}, except that the factors $\mathcal{G}$ and $h$ in this work have explicit dependence on the energies of the final hadrons. In the calculations, we select $\mathbf{q}=q\hat{z}$, namely the emitted meson moves along the $z$ axis. Finally, we can work out the decay amplitudes for the various processes, $\mathbb{M}\rightarrow |1^1S_0\rangle \mathbb{P}$, $\mathbb{M}\rightarrow |1^3S_1\rangle \mathbb{P}$, $\mathbb{M}\rightarrow |1^3P_0\rangle \mathbb{P}$, $\mathbb{M}\rightarrow |1^1P_1\rangle \mathbb{P}$ and $\mathbb{M}\rightarrow |1^3P_1\rangle \mathbb{P}$, which are listed in Tabs. \ref{asa}--\ref{ase}, respectively. Some analytical features can be learned here.
From Tab. \ref{asa}, one sees that the decays of $1^1P_1$, $1^3P_1$, $2^1S_0$, $1^1D_2$ and $1^3D_2$ into $|1^1S_0\rangle \mathbb{P}$ are forbidden by parity conservation. The decay amplitudes for $1^3S_1$, $1^3P_2$, and $1^3D_3 \rightarrow |1^1S_0\rangle \mathbb{P}$ are proportional to $\mathcal{R}$ (i.e. proportional to $q$), $\mathcal{R} q^\prime/\alpha$ and $\mathcal{R} (q^\prime/\alpha)^2$, respectively. This is crucial for understanding the small branching ratios for $D^*(2007)\to D\pi$, as we will see later. In contrast, the decay amplitude for $1^3P_0\rightarrow |1^1S_0\rangle \mathbb{P}$ has two terms: one is proportional to $\mathcal{R} q^\prime/\alpha$, while the other is proportional to $\alpha$. Similarly, the decay amplitudes for $2^3S_1\rightarrow |1^1S_0\rangle \mathbb{P}$ and $1^3D_1\rightarrow |1^1S_0\rangle \mathbb{P}$ also have two terms, of which one is proportional to $\mathcal{R} (q^\prime/\alpha)^2$ and the other to $q^\prime$. This feature will have certain implications for their branching ratios into different $|1^1S_0\rangle \mathbb{P}$ states. From Tab. \ref{asb}, one sees that the decay of $1^3P_0$ into $|1^3S_1\rangle \mathbb{P}$ is forbidden. Among the three helicity amplitudes $\mathcal{M}_{\pm}$ and $\mathcal{M}_{0}$, the longitudinal one $\mathcal{M}_0$ vanishes for $1^3P_2$, $2^3S_1$, $1^3D_1$, and $1^3D_3$ into $|1^3S_1\rangle \mathbb{P}$. From Tabs. \ref{asc}--\ref{ase}, we can see that the decays of $2^3S_1$ and $1^3D_1$ into $|1^3P_0\rangle \mathbb{P}$, of $2^1S_0$ and $1^1D_2$ into $|1^1P_1\rangle \mathbb{P}$, and of $2^1S_0$ into $|1^3P_1\rangle \mathbb{P}$ are forbidden by parity conservation. These selection rules are useful for the classification of states.
\section{calculations and analysis}\label{cd} With the transition amplitudes, one can calculate the partial decay width with \begin{equation}\label{dww} \Gamma=\left(\frac{\delta}{f_m}\right)^2\frac{(E_f+M_f)|\textbf{q}|}{4\pi M_i(2J_i+1)} \sum_{J_{iz},J_{fz}}|\mathcal{M}_{J_{iz},J_{fz}}|^2 , \end{equation} where $J_{iz}$ and $J_{fz}$ stand for the third components of the total angular momenta of the initial and final heavy-light mesons, respectively. $\delta$ is a global parameter that accounts for the strength of the quark-meson couplings. In the heavy-light meson transitions, the flavor symmetry no longer holds. Treating the light pseudoscalar meson as a chiral field while treating the heavy-light mesons as a constituent quark system is an approximation. This brings uncertainties to the coupling vertices and form factors. The parameter $\delta$ is introduced to take such effects into account. It was determined in our previous study of the strong decays of the charmed baryons \cite{Zhong:2007gp}. Here, we fix its value to be the same as that in Ref.~\cite{Zhong:2007gp}, i.e. $\delta=0.557$. In the calculation, the standard parameters of the quark model are adopted. For the $u$, $d$, $s$, $c$ and $b$ constituent quark masses we set $m_u=m_d=350$ MeV, $m_s=550$ MeV, $m_c=1700$ MeV and $m_b=5100$ MeV, respectively. The decay constants for the $\pi$, $K$, $\eta$ and $D$ mesons, $f_\pi=132$ MeV, $f_K=f_{\eta}=160$ MeV, and $f_D=226$ MeV, are used. For the masses of all the heavy-light mesons, the PDG values are adopted in the calculations \cite{PDG}. To partly remedy the inadequacy of the non-relativistic wave function as the relative momentum $q$ increases, a commonly used Lorentz boost factor is introduced into the decay amplitudes~\cite{qkk,qk5,Zhong:2007fx}, \begin{eqnarray} \mathcal{M}(q)\rightarrow \gamma_f \mathcal{M}(\gamma_fq), \end{eqnarray} where $\gamma_f=M_f/E_f$.
In most decays, the three-momenta carried by the final-state mesons are relatively small, which means that the non-relativistic prescription is reasonable and corrections from the Lorentz boost are not drastic. \begin{table}[ht] \caption{Predictions of the strong decay widths (in MeV) for the heavy-light mesons. For comparison, the experimental data and some other model predictions are listed.} \label{width} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c }\hline\hline & notation & channel & $\Gamma( \mathrm{this\ work} ) $ & $\Gamma$\cite{Close:2005se}& $\Gamma$\cite{Godfrey:2005ww} & $\Gamma $ \cite{Orsland:1998de} & $\Gamma$\cite{Zhu:1998wy,Dai:1998ve} & $\Gamma$\cite{Falk:1995th} & $\Gamma_{exp}$ \\ \hline \hline $D^*(2007)^0$ &$(1^3S_1) 1^-$ & $D^0\pi^0$ & 58 keV & 16 keV & & 39 keV & & & $<2.1$ MeV \\ $D^*(2010)^+$ & & $D^0\pi^+$ & 77 keV & 25 keV & & 60 keV & & & $64\pm 15$ keV \\ & & $D^+\pi^0$ & 35 keV & 11 keV & & 27 keV & & & $29\pm7$ keV \\ \hline $D^*_0(2352)$ &$(1^3P_0) 0^+$ & $D\pi$ & 248 & & 277 & & & & $261\pm50$ \\ $D^*_0(2403)$ & & & 266 & 283 & & & & & $283\pm58$ \\ \hline $D_1(2420)$ &$(1^1P_1) 1^+$ & $D^*\pi$ & 84 & & & & & & \\ & $(P_1) 1^+$ & & 21.6 & 22 & 25 & & & & $25\pm 6$ \\ \hline $D'_1(2430)$ &$(1^3P_1) 1^+$ & $D^*\pi$ & 152 & & & & & & \\ &$( P_1')1^+$ & & 220 & 272 & 244 & & & & $384\pm117$ \\ \hline $D^*_2(2460)^0$ &$(1^3P_2) 2^+$ & $D\pi$ & 39 & 35 & 37 & & 13.7 & & \\ & & $D^*\pi$ & 19 & 20 & 18 & & 6.1 & & \\ & & $D \eta$ & 0.1 & 0.08 & & & & & \\ & & total & 59 & 55 & 55 & & 20 & & $43\pm 4$ \\ \hline\hline $D_{s1}(2536)$ &$(1^1P_1) 1^+$ & $D^*K$ & 59 & & & & & & \\ &$ (P_1) 1^+$ & & 0.35 & 0.8 & 0.34 & & & & $< 2.3$ \\ \hline $D^*_{s2}(2573)$ &$(1^3P_2) 2^+$ & $DK$ & 16 & 27 & 20 & & & & \\ & & $D^*K$ & 1 & 3.1 & 1 & & & & \\ & & $D_s \eta$ & 0.4 & 0.2 & & & & & \\ & & total & 17 & 30 & 21 & & & & $15^{+5}_{-4}$ \\ \hline $D^*_{sJ}(2860)$ &$(1^3D_3) 3^-$ & $DK$ & 27 & & & & & & \\ & & $D^*K$ & 11 & & & & & & \\ & & $D_s \eta$ & 3 & & & & & &
\\ & & $D^*_s \eta$ & 0.3 & & & & & & \\ & & $D K^*$ & 0.4 & & & & & & \\ & & total & 42 & & & & & & $48\pm 17$ \\ \hline\hline $B^*_0(5730)$ &$ (^3P_0) 0^+$ & $B\pi$ & $272$ & & & 141 & 250 & & \\ \hline $B_1(5725)$ &$ (P_1) 1^+$ & $B^*\pi$ & 30 & & & 20 & & & $20\pm 12$ \\ \hline &$(1^3P_1) 1^+$ & $B^*\pi$ & 153 & & & & & & \\ $B'_1(5732)$ &$ (P_1') 1^+$ & $B^*\pi$ & 219 & & & 139 & 250 & & $128\pm 18$ \\ \hline $B^*_2(5740)$ &$(1^3P_2) 2^+$ & $B \pi$ & 25 & & & 15 & 3.9 & & \\ & & $B^*\pi$ & 22 & & & 14 & 3.4 & & \\ & & total & 47 & & & 29 & 7.3 & $16\pm 6$ & $22^{+7}_{-6}$ \\ \hline\hline $B^*_{s0}(5800)$ & $ (^3P_0) 0^+$ & $B K$ & 227 & & & & & & \\ \hline $B_{s1}(5830)$ & $ (P_1) 1^+$ & $B^* K$ & $0.4\sim 1$ & & & & & $3\pm 1$ & 1 \\ \hline $B_{s1}'(5830)$ & $ (P_1') 1^+$ & $B^* K$ & $149$ & & & & & & \\ \hline $B^*_{s2}(5839)$ &$(1^3P_2) 2^+$ & $B K$ & 2 & & & & & & \\ & & $B^*K$ & $0.12$ & & & & & & \\ & & total & 2 & & & & & $7\pm 3$ & 1 \\ \hline\hline \end{tabular} \end{table} \subsection{Strong decays of 1$S$ states} Due to isospin violation, $D^{*+}$ is about 3 MeV heavier than the neutral $D^{*0}$~\cite{PDG}. This small mass difference renders $D^{*0}\to D^+\pi^-$ kinematically forbidden, while $D^{*+}\to D^0\pi^+$ and $D^+\pi^0$ are allowed but strongly kinematically suppressed. Moreover, as shown in Tab.~\ref{asa}, the decay amplitudes of the $1S$ states are proportional to the final-state momentum $q$. For the decays $D^{*0}\to D^0\pi^0$, $D^{*+}\to D^0\pi^+$ and $D^+\pi^0$, whose thresholds are close to the $D^*$ masses, this leads to a further dynamical suppression of the partial decay widths. As shown in Tab.~\ref{width}, our calculations are in remarkable agreement with the experimental data. Since $q$ is small, the form-factor corrections from the quark model are negligible.
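This kinematic suppression can be checked with the masses alone. In the sketch below, the masses are standard PDG-type values inserted for illustration; the $\Gamma\propto g_I^2\, q^3$ scaling follows from Eq.~(\ref{dww}) with amplitudes linear in $q$, and the $2\!:\!1$ isospin weights for $D^0\pi^+$ versus $D^+\pi^0$ are the standard Clebsch-Gordan factors:

```python
import math

def q_cm(M, m1, m2):
    """Two-body decay momentum in the rest frame of M (MeV)."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M) if lam > 0 else 0.0

# PDG-type masses in MeV (illustrative inputs, not model outputs)
M_Dst, m_D0, m_Dp = 2010.26, 1864.84, 1869.66
m_pip, m_pi0 = 139.57, 134.98

q_charged = q_cm(M_Dst, m_D0, m_pip)   # D*+ -> D0 pi+
q_neutral = q_cm(M_Dst, m_Dp, m_pi0)   # D*+ -> D+ pi0

# Gamma ~ g_I^2 q^3 near threshold, with isospin weights 2 : 1
ratio = 2.0 * (q_charged / q_neutral)**3
print(f"q(D0 pi+) = {q_charged:.1f} MeV, q(D+ pi0) = {q_neutral:.1f} MeV, "
      f"Gamma ratio = {ratio:.2f}")
```

Both momenta come out below 40 MeV, and the ratio is slightly above 2, dominated by the isospin factor with a small kinematic enhancement from the momentum difference.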
The ratio $\Gamma(D^0\pi^+)/\Gamma(D^+\pi^0)\simeq 2$ is thus dominated by the isospin factor $g_I$; this agrees well with the prediction of Ref.~\cite{Close:2005se}. \subsection{Strong decays of 1$P$ states} In the $LS$ coupling scheme, there are four 1$P$ states: $^3P_0$, $^3P_1$, $^3P_2$ and $^1P_1$. For $^3P_0$, the transition to $|1^3S_1\rangle \mathbb{P}$ is forbidden. The $^1P_1$ and $^3P_1$ states can couple to $|1^3S_1\rangle \mathbb{P}$, but not to $|1^1S_0\rangle \mathbb{P}$. In contrast, $^3P_2$ couples to both $|1^3S_1\rangle \mathbb{P}$ and $|1^1S_0\rangle \mathbb{P}$. In the decay amplitudes of $^3P_0$, $^1P_1$, and $^3P_1$, the term $h\alpha F$ dominates the partial decay widths, and their decay widths are usually much broader than those of $^3P_2$. Between the amplitudes of the $^1P_1$ and $^3P_1$ decays, we approximately have \begin{eqnarray} \mathcal{M}(^1P_1\rightarrow |1^3S_1\rangle \mathbb{P})_{J_z}\simeq \frac{1}{\sqrt{2}}\mathcal{M}(^3P_1\rightarrow |1^3S_1\rangle \mathbb{P})_{J_z}, \end{eqnarray} since the term $\mathcal{R}\frac{q^\prime}{\alpha}F$ is negligible when the decay channel threshold is close to the initial meson mass. As a consequence, the decay widths of the $^1P_1$ states are narrower than those of the $^3P_1$ states. \subsubsection{$^3P_0$ states} $D^*_0(2400)$ is listed in the PDG~\cite{PDG} as a broad $^3P_0$ state. Its mass values from the Belle~\cite{belle-04d} and FOCUS~\cite{focus-04a} Collaborations differ considerably, though the FOCUS result is consistent with the potential quark model prediction of 2403 MeV~\cite{Godfrey:1985xj}. In experiment, only the $D\pi$ channel is observed, since the other channels are forbidden. The term $h \alpha F$ in the amplitude, which is proportional to the oscillator parameter $\alpha$, accounts for the broad decay width. Using the PDG average mass of 2352 MeV and the FOCUS value of 2403 MeV, we calculate its partial decay widths into $D\pi$ and present them in Tab.~\ref{width}.
They are in good agreement with the data~\cite{PDG,focus-04a}. In the $B$ meson sector, $B^*_0$ and $B^*_{s0}$, the $^3P_0$ states, have not been confirmed experimentally. The predicted masses of the $B^*_0$ and $B^*_{s0}$ mesons are about 5730 MeV and 5800 MeV, respectively \cite{Vijande:2007ke}. Their only open strong decay channels are $B\pi$ and $BK$, respectively. Applying the theoretically predicted masses, we obtain broad decay widths for both states, i.e. $\Gamma(B^*_0)=272$ MeV and $\Gamma(B^*_{s0})=227$ MeV. Our prediction of $\Gamma(B^*_0)=272$ MeV is compatible with the QCDSR prediction $\Gamma(B^*_0)\simeq 250$ MeV \cite{Zhu:1998wy}. Such broad widths may explain why these states have not yet been identified in experiment. \subsubsection{$^3P_2$ states} In the PDG, the decay width of $D^{*}_2(2460)^0$ is $\Gamma=43\pm 4$ MeV, while that of $D^{*}_2(2460)^{\pm}$ is $\Gamma=29\pm 5$ MeV. Since there is no obvious dynamical reason for such a significant difference, it may simply be due to experimental uncertainties. Our prediction of $\Gamma=59$ MeV, as the sum of the partial widths of $D\pi$, $D^*\pi$ and $D\eta$, is comparable with the data. Moreover, the partial width ratio \begin{eqnarray} R\equiv\frac{\Gamma(D\pi)}{\Gamma(D^*\pi)}\simeq 2.1 \end{eqnarray} obtained here is in good agreement with the data, $R\simeq 2.3\pm 0.6$~\cite{PDG}. $D^*_{s2}(2573)$ is assigned as a $^3P_2$ state. Its total width is $\Gamma_{exp}=15^{+5}_{-4}$ MeV, and the width ratio between $D^*K$ and $DK$ is $R\equiv\Gamma(D^*K)/\Gamma(DK)<0.33$~\cite{PDG}. Our predictions for the total width and the ratio $R$, \begin{eqnarray} \Gamma &=&17 \ \mathrm{MeV}, \\ R&\equiv&\frac{\Gamma(D^*K)}{\Gamma(D K)}\simeq 6\%, \end{eqnarray} are consistent with the data. Notice that the $D^*K$ width, $\sim 1$ MeV, is an order of magnitude smaller than the $DK$ width.
Apart from the kinematic phase-space suppression, its transition amplitude also suffers a dynamical suppression, since it is proportional to $\mathcal{R}{q^\prime}/\alpha$. This explains its absence in experiment. Although the $D\eta /D_s \eta$ channel is also open for $D^*_2(2460) /D^*_{s2}(2573)$, its partial width is negligibly small, i.e. $< 1$ MeV. In the $B$ meson sector, a candidate for the $^3P_2$ state has been reported by the CDF Collaboration, with mass \cite{B:2007CDF} \begin{eqnarray} M(B^*_2)=5740\pm 2\pm 1 \ \mathrm{MeV}. \end{eqnarray} The D0 Collaboration also observed the same state, with a slightly different mass, $M(B^*_2)=5746.8\pm 2.4\pm 1.7 \ \mathrm{MeV}$ \cite{Abazov:2007vq}. Assigning $B^*_2$ as a $^3P_2$ state, the predicted total width, as the sum of the $B\pi$ and $B^*\pi$ partial widths, is \begin{eqnarray} \Gamma(B^*_2)\simeq 47\ \mathrm{MeV}, \end{eqnarray} which is about a factor of two larger than the CDF measurement $\Gamma(B^*_2)_{exp}\simeq 22^{+7}_{-6}$ MeV. The two partial widths into $B\pi$ and $B^*\pi$ are comparable with each other, and the predicted width ratio is \begin{eqnarray} R\equiv\frac{\Gamma( B^*\pi)}{\Gamma( B^*\pi)+\Gamma(B\pi)}=0.47 \ . \end{eqnarray} This is in good agreement with the recent D0 data, $R=0.475\pm 0.095\pm 0.069$ \cite{Abazov:2007vq}. The CDF Collaboration also reported an observation of the strange analogue of $B^*_2$, $B^*_{s2}$ \cite{:2007tr}, with mass \begin{eqnarray} M(B^*_{s2})=5840\pm 1 \ \mathrm{MeV}. \end{eqnarray} With this mass, we obtain the partial decay widths $\Gamma(B^*K)=0.12$ MeV and $\Gamma(BK)=2$ MeV, which give the strong decay width and the width ratio between $B^*K$ and $BK$: \begin{eqnarray} \Gamma(B^*_{s2})\simeq 2 \ \mathrm{MeV},\ R\equiv\frac{\Gamma(B^*K)}{\Gamma(BK)}\simeq 6\%. \end{eqnarray} The decay width is consistent with the data, $\Gamma(B^*_{s2})_{exp}\sim 1 $ MeV \cite{Akers:1994fz}.
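The quoted ratios for $B^*_2$ and $B^*_{s2}$ follow from plain arithmetic on the partial widths listed in Tab.~\ref{width}; a minimal check (no model input beyond those numbers):

```python
# Partial widths (MeV) quoted in Tab. "width" for the 1^3P_2 B states
G_Bpi, G_Bstarpi = 25.0, 22.0     # B*_2  -> B pi,  B* pi
G_BstarK, G_BK = 0.12, 2.0        # B*_s2 -> B* K,  B K

R_B2 = G_Bstarpi / (G_Bstarpi + G_Bpi)   # B*_2 ratio quoted as 0.47
R_Bs2 = G_BstarK / G_BK                  # B*_s2 ratio quoted as ~6%
print(f"R(B*_2) = {R_B2:.2f}, R(B*_s2) = {R_Bs2:.0%}")
```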
The partial width of the $B^*K$ channel is negligibly small, so this channel will likely evade experimental observation; a measurement of $\Gamma(BK)$ with improved statistics, however, should be very interesting. \subsubsection{The mixed states} The $D_{1}(2420)$ and $D'_1(2430)$ listed in the PDG~\cite{PDG} correspond to a narrow and a broad state, respectively. Their two-body pionic decays are seen only in $D^*\pi$. If they are pure $P$-wave states, they should correspond to $^1P_1$ and $^3P_1$. The decay widths calculated by assigning them as $^1P_1$ and $^3P_1$ are listed in Tab.~\ref{width}. If $D_1(2420)$ is a pure $^1P_1$ state, its decay width is significantly overestimated (84 MeV against the measured $25\pm 6$ MeV), while if $D'_1(2430)$ is a pure $^3P_1$ state, its decay width is underestimated by about a factor of 2. Similarly large discrepancies are found if one simply exchanges the assignments. Thus, the pure $^1P_1$ and $^3P_1$ scenario cannot explain the nature of $D_1(2420)$ and $D'_1(2430)$. \begin{center} \begin{figure}[ht] \centering \epsfxsize=8 cm \epsfbox{D2420.eps} \caption{ The decay widths of $D_1(2420)$, $D'_1(2430)$ and $D_{s1}(2536)$ as functions of the mixing angle $\phi$.} \label{mix} \end{figure} \end{center} Since the heavy-light mesons are not charge-conjugation eigenstates, state mixing between the spin $\textbf{S}=0$ and $\textbf{S}=1$ states with the same $J^P$ can occur. The physical states with $J^P=1^+$ would be given by \begin{eqnarray} |P_1'\rangle=+\cos (\phi) |^1P_1\rangle+\sin(\phi)|^3P_1\rangle, \\ |P_1\rangle=-\sin (\phi) |^1P_1\rangle+\cos(\phi)|^3P_1\rangle. \end{eqnarray} Our present knowledge of the $D_1(2420)$-$D'_1(2430)$ mixing is still limited. The determination of the mixing angle is correlated with the quark potential and the masses of the states~\cite{Godfrey:1986wj}.
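The mixing relations above amount to a simple rotation of the two basis amplitudes. The snippet below is an illustration with hypothetical amplitude values (the actual magnitudes and relative sign of the basis amplitudes are model outputs, cf. Tab.~\ref{asa}): it shows that the orthogonal mixing conserves the summed squared amplitude for any $\phi$, while one physical combination is completely suppressed when $\tan\phi$ matches the amplitude ratio.

```python
import math

def mixed_amplitudes(a_1P1, a_3P1, phi):
    """Rotate the basis amplitudes into the physical states:
       |P1'> =  cos(phi)|1P1> + sin(phi)|3P1>,
       |P1>  = -sin(phi)|1P1> + cos(phi)|3P1>."""
    c, s = math.cos(phi), math.sin(phi)
    return c * a_1P1 + s * a_3P1, -s * a_1P1 + c * a_3P1

# hypothetical basis amplitudes (arbitrary units), using the approximate
# relation a(1P1) ~ a(3P1)/sqrt(2) quoted in the text
a3 = 1.0
a1 = a3 / math.sqrt(2.0)

phi = math.radians(-55.0)          # the mixing angle favored in the text
a_P1p, a_P1 = mixed_amplitudes(a1, a3, phi)

# orthogonal mixing conserves the summed squared amplitude for any phi
assert abs((a_P1p**2 + a_P1**2) - (a1**2 + a3**2)) < 1e-12

# the |P1> combination vanishes when tan(phi) equals the ratio a3/a1
phi0 = math.atan(a3 / a1)
_, a_suppressed = mixed_amplitudes(a1, a3, phi0)
assert abs(a_suppressed) < 1e-12
```

Which physical state is suppressed at a given $\phi$ depends on the relative sign of the two basis amplitudes, which is fixed by the model; the sketch only illustrates the rotation itself.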
The analysis of Ref.~\cite{Close:2005se} suggests that a mixed state dominated by $S$-wave decay will have a broad width, while one dominated by $D$-wave decay will be narrow. Assuming that the heavy-quark spin-orbit interaction is positive leads to the assignment of $D'_1(2430)$ and $D_1(2420)$ as the mixed $|P_1'\rangle$ and $|P_1\rangle$ states, respectively, with a negative mixing angle $\phi=-54.7^\circ$. However, this implies that $D_1$ is heavier than $D'_1$, a possibility that the present experimental precision seems unable to rule out~\cite{PDG}. An additional piece of information supporting such a scenario is that a positive spin-orbit interaction leads to a $2^+$ state heavier than the $0^+$, which indeed agrees with experiment~\cite{Close:2005se}. In our calculation, we plot the pionic decay widths of the mixed states $|P_1'\rangle$ and $|P_1\rangle$ as functions of $\phi$ in Fig.~\ref{mix}, and determine the optimal mixing angle by looking for the best description of the experimental data. With $\phi=-(55\pm 5)^\circ$, $D_1(2420)$, as the $|P_1\rangle$ mixed state, has a narrow decay width of $\Gamma\simeq 22$ MeV. This value agrees well with the experimental data (see Tab.~\ref{width}). Our prediction for the width of $D'_1(2430)$ as the broad $|P'_1\rangle$ state is $\Gamma\simeq 217$ MeV, which also agrees with the data \cite{PDG}. Note that there are still large uncertainties in the $D'_1(2430)$ measurements, and further experimental investigation is needed. Such a mixing scenario may also occur among the $D_{s1}$ states, leading to $D_{s1}(2460)$ and $D_{s1}(2536)$ as the mixed $|P'_1\rangle$ and $|P_1\rangle$ states, respectively. Note that $D_{s1}(2460)$ has a relatively light mass, below the $D^*K$ threshold, while the $DK$ channel is forbidden for a $1^+$ state. Therefore, its strong decay is essentially forbidden, which makes it a narrow state.
On the other hand, $D_{s1}(2536)$, as a $|P_1\rangle$ mixed state with the mixing angle $\phi=-(55\pm 5)^\circ$, has a decay width consistent with the data ($\Gamma< 2.3$ MeV), \begin{equation} \Gamma(D_{s1}(2536))\simeq 0.4\sim 2.5 \ \mathrm{MeV}. \end{equation} In contrast, if $D_{s1}(2536)$ were a pure $^1P_1$ state, its decay width would be 59 MeV, overestimated by a factor of about 20. We also derive the width ratio \begin{equation} R\equiv\frac{\Gamma(D^*(2007)^0K^+)}{\Gamma(D^*(2010)^+K^0)}\simeq 1.2\sim 1.7, \end{equation} which is consistent with the experimental result, $R=1.27\pm 0.27$. In Fig.~\ref{mix}(B), the dependence of the strong decay width $\Gamma(D_{s1}(2536))$ on the mixing angle $\phi$ is shown, with $D_{s1}(2536)$ treated as the mixed $|P_1\rangle$ state. It should be mentioned that the recent measurements of the angular decomposition of $D_{s1}(2536)^+\rightarrow D^{*+}K^0_S$ indicate configuration mixing within $D_{s1}(2536)$ \cite{:2007dya}. In the $B$ meson sector, two new narrow excited $B_1$ and $B_{s1}$ mesons were recently reported by CDF, with masses \begin{eqnarray} M(B_1)=5725\pm 2\pm 1 \ \mathrm{MeV},\\ M(B_{s1})=5829\pm 1 \ \mathrm{MeV}. \end{eqnarray} The D0 Collaboration also observed the same $B_1$ state, with a slightly different mass, $M(B_1)=5720\pm 2.4\pm 1.4 \ \mathrm{MeV}$. The narrowness of these two axial-vector states makes them good candidates for the narrow partners in the state-mixing scheme. For $B_1$ as a $|P_1\rangle$ state, its strong decay width to $B^*\pi$ is predicted to be \begin{eqnarray} \Gamma(B_1)\simeq 30\ \mathrm{MeV}.
\end{eqnarray} With the strong decay widths for $B_2^*\to B\pi$ and $B^*\pi$ calculated above, we obtain the strong decay width ratio \begin{eqnarray} R\equiv\frac{\Gamma(B_1)}{\Gamma(B_1)+\Gamma(B^*_2)}=0.34, \end{eqnarray} which is compatible with the recent D0 data, $R=0.477\pm 0.069\pm 0.062$ \cite{Abazov:2007vq}. Note that $B_J^*(5732)$ in the PDG~\cite{PDG} is a broad state with $\Gamma_{exp}=128\pm 18$ MeV. The PDG average mass is $5698\pm 8$ MeV, which makes it lighter than $B_1(5725)$. This makes it a natural candidate for the mixed light partner $|P_1'\rangle$, for which the predicted width is $\Gamma(B_1')=219$ MeV; this result is compatible with the QCDSR prediction $\Gamma(B_1')\simeq 250$ MeV. As a test, we find that if $B_J^*(5732)$ is a pure $^3P_1$ state, its decay width is 153 MeV, which seems to agree well with the PDG suggested value. Whether $B_J^*(5732)$ is the mixed state $|P'_1\rangle$, a pure $^3P_1$ state, or some other configuration needs further improved experimental measurements. Similarly, for $B_{s1}$ as a $|P_1\rangle$ state, its strong decay width and the ratio of its width to the sum of the $B_{s1}$ and $B^*_{s2}$ widths are \begin{eqnarray} && \Gamma(B_{s1})\simeq 0.4\sim 1\ \mathrm{MeV},\\ && R\equiv\frac{\Gamma(B_{s1})}{\Gamma(B_{s1})+\Gamma(B^*_{s2})}=0.02\sim 0.6. \end{eqnarray} The predicted width $\Gamma(B_{s1})$ agrees with the data, $\Gamma(B_{s1})_{exp}\sim 1$ MeV \cite{Akers:1994fz}. Since the mass of $|P_1'\rangle$ is slightly lower than that of $|P_1\rangle$, the mass of $B'_{s1}$ (as a $|P_1'\rangle$ state) should be less than 5830 MeV. If we nevertheless assume that the mass of $B'_{s1}$ is around 5830 MeV, we obtain a broad decay width into the $B^*K$ channel, \begin{eqnarray} && \Gamma(B'_{s1})\simeq 149\ \mathrm{MeV}. \end{eqnarray} We should point out that the mass of $B'_{s1}$ is most likely below the $B^*K$ threshold, so that the decay $B'_{s1}\rightarrow B^*K$ is kinematically forbidden. In this case, $B'_{s1}$ will be very narrow.
Its decay properties should then be similar to those of $D_{s1}(2460)$. The isospin-violating decay $B'_{s1}\rightarrow B_s^*\pi$ and the radiative decay $B'_{s1}\rightarrow B_s^*\gamma$ will be the dominant decay modes. A recent study of this scenario was given by Wang with light-cone sum rules~\cite{Wang:2008wz}. \subsection{Strong decays of 2$S$ states} \begin{center} \begin{figure}[ht] \centering \epsfxsize=8 cm \epsfbox{aa.eps} \centering \epsfxsize=8 cm \epsfbox{bb.eps}\caption{ The partial widths of $D(2^1S_0)$ and $D(2^3S_1)$ as functions of the mass.}\label{2sd} \end{figure} \end{center} \begin{center} \begin{figure}[ht] \centering \epsfxsize=8 cm \epsfbox{Ds.eps} \caption{ The partial widths of $D_s(2^1S_0)$ and $D_s(2^3S_1)$ as functions of the mass.}\label{2sds} \end{figure} \end{center} The radially excited heavy-light mesons are still not well established in experiment, although there are several candidates, such as $D^{*}(2640)^{\pm}$ \cite{Abreu:1998vk}, $D^*_{sJ}(2632)$ \cite{Evdokimov:2004iy} and $D^*_{sJ}(2700)$ \cite{Aubert:2006mh,:2007aa}. In theory, the radially excited $D$ states $2^1S_0$ and $2^3S_1$ were predicted to have masses of $\sim 2.58$ and $\sim 2.64$ GeV, respectively \cite{Godfrey:1985xj}, while the corresponding radially excited $D_s$ states were predicted at $\sim 2.6$ and $\sim 2.7$ GeV, respectively \cite{Godfrey:1985xj,Close:2006gr}. In this section, we study the strong decays of these excited states into various channels. The mass uncertainties propagate into the predicted partial decay widths, and some of the partial widths are sensitive to the meson masses. Therefore, we present the total and partial strong decay widths of the radially excited $D$ and $D_s$ states as functions of their masses, within the theoretically expected ranges, in Figs.~\ref{2sd} and \ref{2sds}, respectively.
For a given initial mass, by comparing the relative magnitudes of the different partial widths between theoretical prediction and experimental measurement, one can extract additional information about the quantum numbers of the initial meson. \subsubsection{Radially excited $D$ mesons} A $2^1S_0$ state with a mass around 2.64 GeV can decay into $D^*\pi$, $D^*\eta$, $D^*_sK$ and $D^*_0(2400)\pi$. In Fig.~\ref{2sd}, the partial widths and the total strong decay width are shown over a mass range. Among these channels, $D^*\pi$ dominates the decays; the total decay width is $\sim 14$ MeV at $m(D(2^1S_0))=2.58$ GeV and shows a flat mass dependence. Note that the threshold of the $D^*_s K$ channel is very close to 2.64 GeV, so some sensitivity to this opening channel occurs in the mass range around 2.6 GeV: this partial width increases quickly with the mass and will compete with the $D^*\eta$ channel. For the radially excited $2^3S_1$ state, the dominant decay channel is $D_1(2420)\pi$, while the other partial widths are much smaller (see the lower panel of Fig.~\ref{2sd}). Again, the $D^*\pi$ partial width appears insensitive to the initial $D$ meson mass. Comparing the decay patterns of $2^1S_0$ and $2^3S_1$ in Fig.~\ref{2sd} proves useful for clarifying $D^{*}(2640)^{\pm}$. This state was first seen by DELPHI in the $D^{*+}\pi^+\pi^-$ channel with a narrow width $< 15$ MeV \cite{Abreu:1998vk}, but has not yet been confirmed by other experiments. If it is a genuine resonance, it fits better as a $2^3S_1$ state than as a $2^1S_0$, since its dominant decay into $D^{*+}\pi^+\pi^-$ can proceed via the main channel $D^{*}(2640)^+\to D_1(2420)^0\pi^+$. In contrast, the assignment as a $2^1S_0$ state would imply a dominant $D^*\pi$ decay channel, which is not supported by the data.
Although the predicted width of $\sim 34$ MeV overestimates the data by nearly a factor of two, the more urgent task is to establish this state in experiment and to measure its partial decay widths into both $D^*\pi$ and $D^*\pi\pi$ more precisely. \subsubsection{Radially excited $D_s$ mesons}\label{radial-ds} There are experimental signals for several excited $D_s$ states, i.e. $D_{sJ}(2632)$~\cite{Evdokimov:2004iy}, $D_{sJ}(2690)$, $D_{sJ}(2860)$~\cite{Aubert:2006mh}, and $D_{sJ}(2708)$~\cite{:2007aa,belle-2006}, for which the spectroscopic classification is still unsettled. The $D_{sJ}(2690)$ and $D_{sJ}(2708)$ are likely the same state, as they have similar masses and both are broad. We shall compare these experimental observations with our model predictions in order to learn more about their spectroscopic classifications. $D_{sJ}(2632)$ was reported by SELEX as a narrow state, i.e. $\Gamma< 17$ MeV, in the $D_s\eta$ and $DK$ channels \cite{Evdokimov:2004iy}. The measured ratio of the partial widths is \begin{eqnarray} R\equiv\frac{\Gamma(D^0 K^+ )}{ \Gamma(D_s \eta)}=0.16\pm 0.06. \end{eqnarray} Its dominant decay into $D_s\eta$ makes it difficult to accommodate in any simple $c\bar{s}$ scenario~\cite{barnes-004}. In particular, since a $2^1S_0$ state is forbidden to decay into $D_s\eta$ and $DK$, this rules out $D_{sJ}(2632)$ as a radially excited $0^-$ state. As shown in Fig.~\ref{2sds}, the decay of a $2^3S_1$ state turns out to be dominated by $D^*K$ and possibly $DK$, while its decay into $D_s\eta$ is rather small. Therefore, a simple $2^3S_1$ assignment cannot explain its decay pattern either. Further investigations of the nature of $D_{sJ}(2632)$ can be found in the literature; here we restrict our attention to the output of our model calculations.
$D^*_{sJ}(2860)$ and $D^*_{sJ}(2690)$ from BaBar~\cite{Aubert:2006mh} (or $D_{sJ}(2708)$ from Belle~\cite{:2007aa,belle-2006}) have widths \begin{eqnarray} \Gamma(D_{sJ}(2860))=48\pm 17 \ \mathrm{MeV},\\ \Gamma(D_{sJ}(2690))=112\pm 43 \ \mathrm{MeV} , \end{eqnarray} and both are observed in the $DK$ channel, while no evidence is seen in the $D^*K$ and $D_s\eta$ modes. Comparing these with Fig.~\ref{2sds} shows that neither of them easily fits the $2^1S_0$ or $2^3S_1$ assignment. Fixing the masses of the $2^1S_0$ and $2^3S_1$ states as suggested by the quark model~\cite{Close:2006gr}, i.e. $m(D_s(2^1S_0))=2.64$ GeV and $m(D_s(2^3S_1))=2.71$ GeV, we obtain the strong decay widths \begin{eqnarray} \Gamma(D_{s}(2^1S_0))\simeq 11 \ \mathrm{MeV},\\ \Gamma(D_{s}(2^3S_1))\simeq 14 \ \mathrm{MeV}, \end{eqnarray} which turn out to be narrow. For $D_s(2^1S_0)$, the predicted dominant decay mode is $D^*K$, while the $DK$ channel is forbidden. For $D_s(2^3S_1)$, there are two main decay channels, $D^*K$ and $DK$, with a ratio of \begin{eqnarray} R\equiv\frac{\Gamma(D^*K)}{\Gamma(DK)}\simeq 2.6 \ . \end{eqnarray} The $D_s \eta$ channel is also open, but its width is negligibly small in comparison with $DK$ and $D^*K$. As $D^*_{sJ}(2860)$ has a mass large enough to fit a $D$-wave state, we examine its $D$-wave assignments in the following subsection. \begin{widetext} \begin{center} \begin{figure}[ht] \centering \epsfxsize=12 cm \epsfbox{Dw.eps} \caption{ The partial widths of $D(1^1D_2)$, $D(1^3D_1)$, $D(1^3D_2)$ and $D(1^3D_3)$ as functions of the mass.}\label{dw} \end{figure} \end{center} \end{widetext} \begin{widetext} \begin{center} \begin{figure}[ht] \centering \epsfxsize=12 cm \epsfbox{DwDs.eps} \caption{ The partial widths of $D_s(1^1D_2)$, $D_s(1^3D_1)$, $D_s(1^3D_2)$ and $D_s(1^3D_3)$ as functions of the mass. When we calculate the partial width of the $DK^*$ channel, the $D$ meson is treated as the emitted pseudoscalar meson in the $SU(4)$ case \cite{Zhong:2007gp}.
}\label{dsw} \end{figure} \end{center} \end{widetext} \subsection{Strong decays of $1D$ states} The theoretically predicted masses of the $D$-wave excited $D$ and $D_s$ mesons lie in the range $2.8\sim 2.9$ GeV \cite{Godfrey:1985xj,Matsuki:2007zz}. To illustrate the decay properties of these $D$-wave states, we plot their main partial decay widths over their possible mass regions in Figs.~\ref{dw} and \ref{dsw} for $D$ and $D_s$, respectively. \subsubsection{Excited $D$ mesons} In Fig.~\ref{dw}, the decays of the four $D$-wave states are presented. The widths of the $D(1^1D_2)$ and $D(1^3D_2)$ states are dominated by the $D^*\pi$ decay, while the $D\pi/D\eta$ channels are forbidden. At $\sim 2.8$ GeV, they have very broad widths, larger than 300 MeV. As $D^*$ dominantly decays into $D\pi$, such broad widths imply that their dominant final states are $D\pi\pi$, and it might be difficult to identify them in experiment. This may explain why these states still evade experimental observation. For $D(1^3D_1)$, $D^*\pi$ is also the main decay channel, but with a much smaller width. In contrast, $D\pi$ dominates the decay of $D(1^3D_3)$. With the theoretically suggested masses $m(D(1^3D_1))=2.82$ GeV and $m(D(1^3D_3))=2.83$ GeV \cite{Godfrey:1985xj}, the total pionic decay widths of $D(1^3D_1)$ and $D(1^3D_3)$ are predicted to be \begin{eqnarray} \Gamma(D(1^3D_1))\simeq 93 \ \mathrm{MeV},\\ \Gamma(D(1^3D_3))\simeq 130 \ \mathrm{MeV}, \end{eqnarray} and the predicted ratios of the $D\pi$ to $D^*\pi$ widths are \begin{eqnarray} R(D(1^3D_1))\equiv \frac{\Gamma(D\pi)}{\Gamma(D^*\pi)}\simeq 0.12,\\ R(D(1^3D_3))\equiv \frac{\Gamma(D\pi)}{\Gamma(D^*\pi)}\simeq 1.7. \end{eqnarray} For $D(1^3D_3)$, the dominance of the $D\pi$ decay suggests that it is relatively more accessible in experiment.
\subsubsection{Excited $D_s$ mesons} For $D_s(1^1D_2)$ and $D_s(1^3D_2)$, the three important partial widths, $D^*K$, $DK^*$ and $D_s^*\eta$, are presented in Fig.~\ref{dsw}; both states are dominated by the $D^*K$ decay. It is interesting that a mass of $\sim$ 2.8 GeV lies right around the $DK^*$ threshold. Although this decay mode is negligible near threshold, it can become important if the masses of these two $D$-wave states lie above 2.8 GeV. At 2.8 GeV, the total widths are dominated by the $D^*K$ mode, \begin{eqnarray} \Gamma(D_s(1^1D_2))\simeq 61 \ \mathrm{MeV},\\ \Gamma(D_s(1^3D_2))\simeq 84 \ \mathrm{MeV}. \end{eqnarray} These results can guide a search for these two states around 2.8 GeV. As shown in Fig.~\ref{dsw}, $D^*K$ and $DK$ are the two dominant decay modes of $D_s(1^3D_1)$ if its mass is around $2.8$ GeV, and the predicted width is relatively narrow. Implications of such an assignment will be discussed in subsection~\ref{2s-1d-mix}. In contrast to $D_s(1^3D_1)$, the dominant decay mode of the $D_s(1^3D_3)$ state around 2.8 GeV is $DK$, and with a higher mass the $D^*K$ channel becomes increasingly important. This feature fits the experimental observation of $D^*_{sJ}(2860)$ well, and makes $1^3D_3$ a possible assignment for this state. To be specific, for $D^*_{sJ}(2860)$ as a $D_s(1^3D_3)$ state, the predicted width is \begin{eqnarray} \Gamma(D_s(1^3D_3))\simeq 41 \ \mathrm{MeV}, \end{eqnarray} and the dominant decay mode is $DK$. In comparison, the decays via $DK^*$ and $D_s\eta$ are much less important (see Fig.~\ref{dsw} and Tab.~\ref{width}). The ratio of $DK$ to $D^*K$ is found to be \begin{eqnarray} R(D_s(1^3D_3))\equiv\frac{\Gamma(DK)}{\Gamma(D^{*}K)}\simeq 2.3 \ , \end{eqnarray} which is also consistent with the experiment~\cite{Aubert:2006mh}. This assignment agrees with the results of Refs.~\cite{Colangelo:2006rq,Zhang:2006yj,Koponen:2007nr}.
Some models also suggested that $D^*_{sJ}(2860)$ could be a $2^3P_0$ state \cite{vanBeveren:2006st,Zhang:2006yj,Close:2006gr}, for which only the $DK$ and $D_s \eta$ decay modes are allowed. In our model, a $2^3P_0$ state leads to the decay amplitude \begin{eqnarray} &&\mathcal{M}(2^3P_0\rightarrow |1^1S_0\rangle \mathbb{P})\nonumber\\ &=& i\frac{1}{4}\sqrt{\frac{1}{15}}g_I F \frac{q^\prime}{\alpha}\left[ g^z_{10}\mathcal{G}q\frac{q^{\prime 2}}{\alpha^2} -\frac{1}{3} g^z_{10}h q^\prime(1+\frac{q^{\prime 2}}{2\alpha^2})\right.\nonumber\\ &&\left.- 2\sqrt{2} g^+_{10} h q^\prime (7-\frac{q^{\prime 2}}{\alpha^2})\right] \end{eqnarray} which gives a partial decay width to $DK$ of about $\Gamma=184$ MeV, much broader than the experimental observation. \subsection{The $2^3S_1$-$1^3D_1$ mixing}\label{2s-1d-mix} \begin{figure}[ht] \centering \epsfxsize=8 cm \epsfbox{sdmix.eps} \caption{ The partial widths of the low vector (A) and the high vector (B) as functions of the mixing angle $\phi$. }\label{sdmix} \end{figure} In Ref.~\cite{Close:2006gr}, a mixing scheme between $2^3S_1$ and $1^3D_1$ was proposed as a solution for the relatively broad $D^*_{sJ}(2690)$, i.e. \begin{eqnarray}\label{mix-s-d} |D^*_{s1}(2690)\rangle & =& \sin(\phi)|1^3D_1\rangle +\cos(\phi)|2^3S_1\rangle \ ,\nonumber\\ |D^*_{s1}(2810)\rangle &= & \cos(\phi)|1^3D_1\rangle - \sin(\phi)|2^3S_1\rangle \ , \end{eqnarray} where an orthogonal state $D^*_{s1}(2810)$ was also predicted. The mixing angle was found to favor $\phi=-0.5$ radians, i.e. $\phi\simeq -27^{\circ}$. According to this mixing scheme, $D^*_{s1}(2810)$ will also be a broad state, dominated by the $1^3D_1$ configuration. Taking the mixing scheme of Eq.~(\ref{mix-s-d}), we plot the widths of $D^*_{sJ}(2690)$ and $D^*_{sJ}(2810)$ as functions of the mixing angle $\phi$ in Fig.~\ref{sdmix}.
The figure shows that, if we take the mixing angle $\phi\simeq -27^\circ$ predicted in Ref.~\cite{Close:2006gr}, the decay of $D^*_{sJ}(2690)$ is dominated by $D^*K$, which disagrees with the experimental observation. Moreover, the predicted decay width of $D^*_{sJ}(2690)$, $\Gamma\sim 25$ MeV, underestimates the data by at least a factor of $2$. If instead $D^*_{sJ}(2690)$ is taken as a pure $1^3D_1$ state, the predicted decay width is $\Gamma\sim 42$ MeV, close to the lower limit of the data; however, the ratio $R=\Gamma(DK)/\Gamma(D^*K)\sim 0.8$ disagrees with the observation that the $DK$ channel dominates the decays. If we set $\phi\simeq 30^\circ$, which implies that the sign of the spin-orbit splitting term is negative so as to keep the correct mass ordering, the $D^*K$ channel of the $D^*_{sJ}(2690)$ decay is completely suppressed. This is consistent with the observations, except that the decay width, $\Gamma\sim 15$ MeV, is too small compared with the data. For the high vector state $D^*_{sJ}(2810)$, with $\phi\simeq \pm 30^\circ$, its width is $\sim 40-60$ MeV and is dominated by $D^*K$; meanwhile, its branching ratio to $DK$ is still sizeable. Our test of the $2S$-$1D$ mixing scenario for $D^*_{sJ}(2690)$ thus does not fit the data very well. To clarify the nature of $D^*_{sJ}(2690)$, more accurate measurements of its width and of the ratio $R=\Gamma(DK)/\Gamma(D^*K)$, as well as an experimental search for the accompanying $D^*_{sJ}(2810)$ in the $DK$ and $D^*K$ channels, are needed. \section{Summary}\label{sum} In the chiral quark model framework, we systematically study the strong decays of the heavy-light mesons via $\mathbb{M}\rightarrow |1^1S_0\rangle \mathbb{P}$, $\mathbb{M}\rightarrow |1^3S_1\rangle \mathbb{P}$, $\mathbb{M}\rightarrow |1^3P_0\rangle \mathbb{P}$, $\mathbb{M}\rightarrow |1^1P_1\rangle \mathbb{P}$, and $\mathbb{M}\rightarrow |1^3P_1\rangle \mathbb{P}$.
By adopting commonly used values for the constituent quark masses and the pseudoscalar meson decay constants, we make a full analysis of the strong decays of all the excited $D^*, \ D_s^*, \ B^*$ and $B_s^*$ states, and find that most of the available data can be consistently explained in this framework. We summarize our major results as follows. \subsection{Excited $D$ mesons} The calculated partial decay widths for the $D^*$ ($1^3S_1$), for $D^*_0(2400)$ as a $1^3P_0$ state, and for $D^*_2(2460)$ as a $1^3P_2$ state are in good agreement with the data, and support their assignments as low-lying excited $D^*$ states. State mixing between $1^1P_1$ and $1^3P_1$ is favored. With the mixing angle $\phi\simeq-(55\pm 5)^\circ$, which is consistent with the prediction of Ref.~\cite{Godfrey:1986wj}, the narrow $D_1(2420)$ and the broad $D'_1(2430)$ can be well explained as mixed states of $1^1P_1$ and $1^3P_1$. A precise measurement of the mass of the broad $D'_1(2430)$ is needed. Our results show that assigning $D^*(2640)$ to be a radially excited $2^3S_1$ state naturally explains its observation in the $D^{*+}\pi^+\pi^-$ final state, although the predicted width of $\sim 34$ MeV still shows some discrepancy with the data. A search for the $D^*(2640)$ in the $D_1(2420)\pi$ channel is strongly recommended. Although the $2S$ and $1D$ excited $D$ states are still not well established, we have analyzed their partial decay widths in their possible mass regions, which should be useful for future experimental studies. The decay widths of the $2S$ states turn out to be narrow, of the order of a few tens of MeV. Our results show that $D(2^1S_0)$ is dominated by the $D^*\pi$ decay channel, while both $D^*\pi$ and $D_1(2420)\pi$ are important for $D(2^3S_1)$. Both $D(1^1D_2)$ and $D(1^3D_2)$ have very broad widths, $\gtrsim 300$ MeV, which may make them difficult to identify in experiment. The decay widths of $D(1^3D_1)$ and $D(1^3D_3)$ are $\gtrsim 100$ MeV.
The former decays dominantly into $D^*\pi$, with a sizeable width to $D\pi$ as well; for $D(1^3D_3)$, both $D\pi$ and $D^*\pi$ are important. \subsection{Excited $D_s$ mesons} $D_{s1}(2460)$ and $D_{s1}(2536)$ are identified as partners arising from the mixing of $^1P_1$ and $^3P_1$. With the mixing angle $\phi\simeq-(55\pm 5)^\circ$, our predictions for $D_{s1}(2536)$ are in good agreement with the data. Note that $D_{s1}(2460)$ lies below its strong-decay threshold. $D^*_{s2}(2573)$ is consistent with a $1^3P_2$ state. Its partial decay width to $DK$ is dominant, while that to $D^*K$ is very small ($\sim 1$ MeV). This explains its absence in the $D^*K$ channel~\cite{PDG}. For the not-well-established states, by analyzing their decay properties, we find that $D^*_{sJ}(2860)$ strongly favors a $D_s(1^3D_3)$ assignment, while $D^*_{sJ}(2632)$ and $D^*_{sJ}(2690)$ cannot be fitted by any pure $D_s^*$ configuration. For the unobserved $2S$ states, $D_s(2^1S_0)$ and $D_s(2^3S_1)$, the decay widths are predicted to be of the order of $\sim 15$ MeV, dominated by the $D^*K$ decay mode. For the unobserved $1D$ states, $D_s(1^1D_2)$, $D_s(1^3D_2)$ and $D_s(1^3D_1)$, the decay widths are of the order of $60\sim 80$ MeV at a mass of $\sim 2.8$ GeV. The $D_s(1^1D_2)$ and $D_s(1^3D_2)$ decays are dominated by the $D^*K$ mode, while $D_s(1^3D_1)$ is dominated by $D^*K$ and $DK$ together. \subsection{Excited $B$ mesons} We also study the decay properties of the newly observed bottom states $B_1(5725)$ and $B^*_2(5740)$. Our calculations strongly suggest that $B_1(5725)$ is the mixed $|P_1\rangle$ state, and that $B^*_2(5740)$ satisfies the $1^3P_2$ assignment. The $B_J^*(5732)$, which was first reported by the L3 Collaboration, can be naturally explained as the broad partner ($|P'_1\rangle$) of $B_1(5725)$ in the $^1P_1$-$^3P_1$ mixing scheme. Its predicted width, $\Gamma(B_1')=219$ MeV, is larger than the PDG suggested value $\Gamma_{exp}=128\pm 18$ MeV.
In contrast, as a pure $1^3P_1$ state, its decay width is $\Gamma(B_1')=153$ MeV. Whether $B_J^*(5732)$ is a mixed state $|P'_1\rangle$, a pure $1^3P_1$ state, or some other configuration should be studied further. The theoretical prediction for the mass of the $B^*_0(^3P_0)$ meson is about 5730 MeV. Its decay into $B\pi$ has a broad width $\Gamma(B^*_0)=272$ MeV according to our prediction. \subsection{Excited $B_s$ mesons} The two new narrow bottom-strange mesons $B_{s1}(5830)$ and $B^*_{s2}(5839)$ observed by CDF are likely the mixed state $|P_1\rangle$ and the $1^3P_2$ state, respectively, though their decay widths and ratios have not been measured. For $B_{s1}(5830)$ as a $|P_1\rangle$ state, the predicted width and decay ratio are $\Gamma(B_{s1})\simeq (0.4\sim 1)$ MeV and $\Gamma(B_{s1})/(\Gamma(B_{s1})+\Gamma(B^*_{s2}))=0.02\sim 0.6$. For $B^*_{s2}(5839)$ as the $1^3P_2$ state, the predicted decay width and width ratio are $\Gamma(B^*_{s2})\simeq 2 \ \mathrm{MeV}$ and $\Gamma(B^*K)/\Gamma(BK) \simeq 6\%$. The theoretical prediction for the $B^*_{s0}$ mass is about 5800 MeV. In our model it has a broad width $\Gamma(B^*_{s0})=227$ MeV. For $B'_{s1}$, if its mass is above the $B^*K$ threshold, it will be a broad state with $\Gamma(B'_{s1})\simeq 149$ MeV; otherwise, it should be a narrow state. It should be mentioned that uncertainties in the quark model parameters give rise to uncertainties in the theoretical results. A qualitative examination shows that these uncertainties can be as large as $10-20\%$, which is typical for quark model approaches. Interestingly, the systematics arising from such a simple prescription are useful for gaining insight into the effective degrees of freedom inside these heavy-light mesons and the underlying dynamics of their strong decays. \section*{ Acknowledgements } The authors wish to thank F.E. Close and S.-L. Zhu for useful discussions.
This work is supported, in part, by the National Natural Science Foundation of China (Grants 10675131 and 10775145), Chinese Academy of Sciences (KJCX3-SYW-N2), the U.K. EPSRC (Grant No. GR/S99433/01), the Post-Doctoral Programme Foundation of China, and K. C. Wong Education Foundation, Hong Kong.
\section{Introduction}\label{intro} The cotangent cohomology groups $T^{n}$ with small ${n}$ play an important role in the deformation theory of singularities: $T^1$ classifies infinitesimal deformations and the obstructions land in $T^2$. Originally these groups were constructed ad hoc; the correct way to obtain them is as the cohomology of the cotangent complex. This also yields higher $T^{n}$, which no longer have a direct meaning in terms of deformations. \par In this paper we study these higher cohomology groups $T_Y^{n}$ for rational surface singularities $Y$. For a large class of rational surface singularities, including quotient singularities, we obtain their dimensions. For an explicit formula for the Poincar\'{e} series $P_Y(t)$, see \zitat{hps}{app}. \par Our methods are a combination of the following three items: \begin{itemize}\vspace{-2ex} \item[(1)] We use the hyperplane section machinery of \cite{BC} to move freely between surface singularities, partition curves, and fat points. It suffices to compute the cohomology groups $T_Y^{n}$ for special singularities to obtain the number of generators for all rational surface singularities. \item[(2)] In many cases, cotangent cohomology may be obtained via Harrison cohomology, which is much easier to handle. Using a Noether normalisation, the Harrison complex becomes linear over a larger ring than just ${\,I\!\!\!\!C}$ (which is our ground field). \item[(3)] Taking for $Y$ a cone over a rational normal curve, we may use the explicit description of $T_Y^{n}$ obtained in \cite{AQ} by toric methods. \vspace{-1ex}\end{itemize} The descriptions in (2) and (3) complement each other and show that $T^{n}_Y$ of the cone over the rational normal curve is concentrated in degree $-{n}$. This allows us to compute its dimension as an Euler characteristic. The paper is organised as follows.
After recalling the definitions of cotangent and Harrison cohomology, we review their computation for the case of the fat point of minimal multiplicity and give the explicit formula for its Poincar\'e series (we are indebted to Duco van Straten and Ragnar Buchweitz for help on this point).\\ Section 3 describes the applications of Noether normalisation to the computation of Harrison cohomology. The main result is a lower bound on the degree of the cotangent cohomology of Cohen-Macaulay singularities of minimal multiplicity, cf.\ Corollary \zitat{noeth}{zero}(2).\\ In the next section toric methods are used to deal with the cone over the rational normal curve. In this special case we can bound the degree of the cohomology groups from above as well. As a consequence, we obtain complete information about the Poincar\'e series.\\ Finally, using these results as input for the hyperplane machinery, we find in the last section the Poincar\'e series for the partition curves and show that their $T^{n}$ is annihilated by the maximal ideal. This then implies that the number of generators of $T^{n}_Y$ is the same for all rational surface singularities. \par {\bf Notation:} We would like to give the following guideline concerning the notation for Poincar\'e series. The symbol $Q$ denotes those series involving Harrison cohomology of the actual space or ring with values in ${\,I\!\!\!\!C}$, while $P$ always refers to the usual cotangent cohomology of the space itself. Moreover, if these letters come with a tilde, then a finer grading than the usual ${Z\!\!\!Z}$-grading is involved. \par \section{Cotangent cohomology and Harrison cohomology}\label{cotan} \neu{cotan-gen} Let $A$ be a commutative algebra of essentially finite type over a base-ring $S$. For any $A$-module $M$, one gets the {\it Andr\'e--Quillen\/} or {\it cotangent cohomology groups\/} as \[ T^{n}(A/S,M):=H^{n}\big({\rm Hom}_A(\L^{A/S}_*,M)\big) \] with $\L^{A/S}_*$ being the so-called cotangent complex.
We are going to recall the major properties of this cohomology theory. For the details, including the definition of $\L^{A/S}_*$, see \cite{Loday}. \par If $A$ is a smooth $S$-algebra, then $T^{n}(A/S,M)=0$ for ${n}\geq1$ and all $A$-modules $M$. For general $A$, a short exact sequence of $A$-modules gives a long exact sequence in cotangent cohomology. Moreover, the Zariski-Jacobi sequence takes care of ring homomorphisms $S \to A \to B$; for a $B$-module $M$ it looks like \[ \cdots \longrightarrow T^{n}(B/A,M) \longrightarrow T^{n}(B/S,M) \longrightarrow T^{n}(A/S,M) \longrightarrow T^{{n}+1}(B/A,M) \longrightarrow \cdots \] The cotangent cohomology behaves well under base change. Given a co-cartesian diagram \[ \cdmatrix{ A&\longrightarrow &A' \cr \mapup{\phi}&&\mapup{}\cr S&\longrightarrow &S' \cr} \] with $\phi$ flat, and an $A'$-module $M'$, there is a natural isomorphism \[ T^{n}(A'/S',M')\cong T^{n}(A/S,M')\,. \] If, moreover, $S'$ is a flat $S$-module, then for any $A$-module $M$ \[ T^{n}(A'/S',M\otimes_S S')\cong T^{n}(A/S,M)\otimes_S S'\,. \] \par \neu{cotan-harr} To describe Harrison cohomology, we first recall Hochschild cohomology. While this concept works also for non-commutative unital algebras, we assume here the same setting as before. For an $A$-module $M$, we consider the complex \[ C^{n}(A/S,M):=\mbox{\rm Hom}_S(A^{\otimes {n}}, M) \] with differential \[ \displaylines{\qquad (\delta f) (a_0,\dots,a_{n}):= {}\hfill\cr\hfill a_0f(a_1,\dots,a_{n})+ \sum_{i=1}^{{n}}(-1)^i f(a_0,\dots,a_{i-1}a_{i},\dots,a_{n}) +(-1)^{{n}+1}a_{n} f(a_0,\dots,a_{{n}-1})\,. \qquad} \] {\em Hochschild cohomology} $HH^{n}(A/S,M)$ is the cohomology of this complex. It can also be computed from the so-called {\em reduced} subcomplex $\overline{C}^{\kss \bullet}(A/S,M)$ consisting only of those maps $f\colon A^{\otimes {n}}\to M$ that vanish whenever at least one of the arguments equals 1. 
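\par {\bf Example:} To illustrate the differential in the lowest degree (this observation is standard and included only for orientation), take ${n}=1$. Then \[ (\delta f)(a_0,a_1)=a_0f(a_1)-f(a_0a_1)+a_1f(a_0)\,, \] so the $1$-cocycles are exactly the $S$-linear derivations from $A$ to $M$. Moreover, since $A$ is commutative and acts symmetrically on $M$, every $0$-cochain is a cocycle; hence $HH^1(A/S,M)=\mathop{\rm Der}\nolimits_S(A,M)$, and the same description applies to $\mathop{\rm Harr}\nolimits^1(A/S,M)$ because the shuffle condition is vacuous in degree one.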
\par {\bf Definition:} A permutation $\sigma\in S_{n}$ is called a $(p,{n}-p)$-shuffle if $\,\sigma(1)<\dots<\sigma(p)$ and $\,\sigma(p+1)<\dots<\sigma({n})$. Moreover, in the group algebra ${Z\!\!\!Z}[S_{n}]$ we define the elements \vspace{-2ex} \[ {\rm sh}_{p,{n}-p}:=\sum_{\mbox{\footnotesize $(p,{n}-p)$-shuffles}} \hspace{-1.5em}\mbox{sgn}(\sigma)\, \sigma \hspace{2em}\mbox{and}\hspace{2em} {\rm sh}:=\sum_{p=1}^{{n}-1}\mbox{\rm sh}_{p,{n}-p}\,. \vspace{-1ex} \] \par The latter element $\,{\rm sh}\in{Z\!\!\!Z}[S_{n}]$ gives rise to the so-called shuffle invariant subcomplexes \[ C_{\rm sh}^{n}(A/S,M):= \big\{f\in \mbox{\rm Hom}_S(A^{\otimes {n}}, M)\bigm| f({\rm sh}(\underline{a}))=0 \; \mbox{ for every }\; \underline{a}\in A^{\otimes {n}} \big\} \subseteq C^{n}(A/S,M) \] and $\overline{C}_{\rm sh}^{n}(A/S,M)\subseteq \overline{C}^{n}(A/S,M)$ defined in the same manner. Both complexes yield the same cohomology, which is called {\em Harrison cohomology\/}: \[ \mathop{\rm Harr}\nolimits^{n}(A/S,M):= H^{n}\big( C_{\rm sh}^{\kss \bullet}(A/S,M)\big) = H^{n}\big( \overline{C}_{\rm sh}^{\kss \bullet}(A/S,M)\big)\,. \vspace{-2ex} \] \par \neu{cotan-compare} The following well known result compares the cohomology theories defined so far. Good references are \cite{Loday} or \cite{Pal}. \par {\bf Theorem:} {\em If\/ $I\!\!\!\!Q\subseteq S$, then Harrison cohomology is a direct summand of Hochschild cohomology. Moreover, if $A$ is a {\it flat\/} $S$-module, then \[ \,T^{{n}}(A/S,M) \cong \mathop{\rm Harr}\nolimits^{{n}+1}(A/S,M)\,. \vspace{-3ex} \] } \par \neu{cotan-example} As an example, we consider the fat point $Z_m$ ($m\geq 2$) with minimal multiplicity $d=m+1$. Let $V$ be an $m$-dimensional ${\,I\!\!\!\!C}$-vector space and let $A={\cal O}_{Z_m}$ be the ring ${\,I\!\!\!\!C}\oplus V$ with trivial multiplication $V^2=0$. First we compute the Hochschild cohomology $HH^{\kss \bullet} (A/{\,I\!\!\!\!C},A)$. 
The reduced complex is \[ \overline C^{n}(A/{\,I\!\!\!\!C},A) = \mbox{\rm Hom}_{\,I\!\!\!\!C}(V^{\otimes {n}},A)\,. \] Because $ab=0\in A$ for all $a,b\in V$, the differential reduces to \[ (\delta f) (a_0,\dots,a_{n})= a_0f(a_1,\dots,a_{n})+ (-1)^{{n}+1}a_{n} f(a_0,\dots,a_{{n}-1})\,. \] We conclude that $\delta f =0 $ if and only if $\;\mbox{\rm im}\, f \subset V$; hence \[ HH^{n}(A/{\,I\!\!\!\!C},A)= \mbox{\rm Hom}\,(V^{\otimes {n}},V)\Big/ \delta \,\mbox{\rm Hom}\,(V^{\otimes ({n}-1)}, {\,I\!\!\!\!C}) \,. \] On the complex $\overline C^{n}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C}) = \mbox{\rm Hom}_{\,I\!\!\!\!C}(V^{\otimes {n}},{\,I\!\!\!\!C})$ the differential is trivial, so $\mbox{\rm Hom}\,(V^{\otimes {n}}, {\,I\!\!\!\!C})=HH^{n}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})$. We finally obtain \[ HH^{n}(A/{\,I\!\!\!\!C},A)= HH^{n}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})\otimes V\Big/\delta_* \,HH^{{n}-1}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})\;, \] where the map $\delta_*$ is injective. For the Harrison cohomology, one has to add again the condition of shuffle invariance: \[ \mathop{\rm Harr}\nolimits^{n}(A/{\,I\!\!\!\!C},A)= \mathop{\rm Harr}\nolimits^{n}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})\otimes V\Big/\delta_* \,\mathop{\rm Harr}\nolimits^{{n}-1}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C}) \,. \] \par {\bf Proposition:} (\cite{SchSt}) {\em Identifying the Hochschild cohomology $HH^{\kss \bullet} (A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})$ with the tensor algebra $T V^*$ on the dual vector space $V^*$, the Harrison cohomology $\mathop{\rm Harr}\nolimits^{\kss \bullet} (A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})\subseteq T V^*$ consists of the primitive elements in $T V^*$. They form a free graded Lie algebra $L$ on $V^*$ with $V^*$ sitting in degree $-1$. 
} {\bf Proof:} The tensor algebra $TV^*$ is a Hopf algebra with comultiplication \[ \Delta(x_1\otimes \cdots\otimes x_{n}) :=\sum_p \sum_{(p,{n}-p)-{\rm shuffles}\ \sigma\hspace{-2em}}\hspace{-0.1em} \mbox{\rm sgn}(\sigma)\, (x_{\sigma(1)}\otimes\cdots\otimes x_{\sigma(p)})\otimes (x_{\sigma(p+1)}\otimes\cdots\otimes x_{\sigma({n})}) \,. \] It is the dual of the Hopf algebra $T^cV$ with shuffle multiplication \[ (v_1\otimes \cdots\otimes v_p)*(v_{p+1}\otimes \cdots\otimes v_{n})= \sum_{(p,{n}-p)-{\rm shuffles}\ \sigma\hspace{-2em}}\hspace{-0.1em} \mbox{\rm sgn}(\sigma)\cdot v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma({n})}\,. \] In particular, for any $f\in TV^*$ and $a,b \in T^cV$ one has $(\Delta f) (a,b) = f(a*b)$. Hence, the condition that $f$ vanishes on shuffles is equivalent to $\,\Delta f = f\otimes1+1\otimes f$, i.e.\ to $f$ being primitive in $TV^*$. \qed The dimension of $\mathop{\rm Harr}\nolimits^{n}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})$ follows now from the dimension of the space of homogeneous elements in the free Lie algebra, which was first computed in the graded case in \cite{Ree}. {\bf Lemma:} {\em $\dim_{\,I\!\!\!\!C} \mathop{\rm Harr}\nolimits^{n}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C}) = \frac 1{n} \sum_{d|{n}}(-1)^{{n}+\frac {n} d}\mu(d)\,m^{\frac {n} d}\,$ with $\mu$ denoting the M\"obius function. } {\bf Proof:} In the free Lie algebra $L$ on $V^*$, we choose an ordered basis $p_i$ of the even degree homogeneous parts $L_{2{\kss \bullet}}$ as well as an ordered basis $q_i$ of the odd degree ones. Since $TV^*$ is the universal enveloping algebra of $L$, a basis for $TV^*$ is given by the elements of the form $p_1^{r_1}p_2^{r_2}\cdots p_k^{r_k}q_1^{s_1}\cdots q_l^{s_l}$ with $r_i\geq0$ and $s_i=0,1$. 
In particular, if $c_{n}:=\dim L_{-{n}}$, then the Poincar\'{e} series of the tensor algebra \[ \sum_{n} \dim T^{n} V^*\cdot t^{n} =\sum_{n} m^{n} \,t^{n}={1\over 1-mt} \] may be alternatively described as \[ \prod_{{n}\ {\rm even}}(1+t^{n}+t^{2{n}}+\cdots)^{c_{n}} \prod_{{n}\ {\rm odd}}(1+t^{n})^{c_{n}}\,. \] Replacing $t$ by $-t$ and taking logarithms, the comparison of both expressions yields \[ -\log (1+mt)\;=\;-\sum_{{n}\ {\rm even}}c_{n}\log(1-t^{n})+ \sum_{{n}\ {\rm odd}}c_{n}\log(1-t^{n}) \;=\; -\sum_{{n}}(-1)^{n} c_{n}\log(1-t^{n})\,. \] Hence \[ \sum_{n} {1\over {n}}\,(-m)^{n}\,t^{n} = \sum_{d, \nu}(-1)^d {1\over \nu}\,c_d\,t^{d\,\nu}\,, \] and by comparing the coefficients we find \[ (-m)^{n}=\sum_{d|{n}}(-1)^d\,d\,c_d \,. \] Now the result follows via M\"obius inversion. \qed \par We collect the dimensions in the Poincar\'e series \[ Q_{Z_m}(t):=\sum_{{n}\geq 1} \dim\, \mathop{\rm Harr}\nolimits^{{n}}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})\cdot t^{n} =\sum_{{n}\geq 1} c_n t^n\;. \] \section{Harrison cohomology via Noether normalisation}\label{noeth} \neu{noeth-norm} Let $Y$ be a Cohen-Macaulay singularity of dimension $N$ and multiplicity $d$; denote by $A$ its local ring. Choosing a {\em Noether normalisation}, i.e.\ a flat map $Y\to {\,I\!\!\!\!C}^N$ of degree $d$, provides a regular local ring $P$ of dimension $N$ and a homomorphism $P\to A$ turning $A$ into a free $P$-module of rank $d$. Strictly speaking, this might only be possible after passing to an \'etale covering. Alternatively one can work in the analytic category, see \cite{Pal} for the definition of analytic Harrison cohomology. \par {\bf Proposition:} {\em Let $A$ be a free $P$-module as above. If \/$M$ is any $A$-module, then $T^{n}(A/{\,I\!\!\!\!C},M) \cong T^{n}(A/P,M)$ for ${n}\geq 2$. Moreover, the latter equals $\mathop{\rm Harr}\nolimits^{{n}+1}(A/P,M)$. 
} \par {\bf Proof:} The Zariski-Jacobi sequence from \zitat{cotan}{gen} for ${\,I\!\!\!\!C}\to P \to A$ reads \[ \cdots \longrightarrow T^{n}(A/P,M) \longrightarrow T^{n}(A/{\,I\!\!\!\!C},M) \longrightarrow T^{n}(P/{\,I\!\!\!\!C},M)\longrightarrow T^{{n}+1}(A/P,M) \longrightarrow \cdots \] As $P$ is regular, we have $T^{n}(P/{\,I\!\!\!\!C},M)=0$ for ${n}\geq1$ for all $P$-modules. On the other hand, since $A$ is flat over $P$, we may use \zitat{cotan}{compare}. \qed \par \neu{noeth-V} A rational surface singularity has minimal multiplicity, in the sense that $\mbox{\rm embdim}\,Y=\mbox{\rm mult}\,Y+\dim Y -1$. In this situation we may choose coordinates $(z_1,\dots,z_{d+1})$ such that the projection on the $(z_d,z_{d+1})$-plane is a Noether normalisation. Using the above language, this means that $P={\,I\!\!\!\!C}[z_d,z_{d+1}]_{(z_d,z_{d+1})}$, and $\{1,z_1,\dots,z_{d-1}\}$ provides a basis of $A$ as a $P$-module. \par More generally, for a Cohen-Macaulay singularity of minimal multiplicity we may take coordinates $(z_1,\dots,z_{d+N-1})$ such that projection on the last $N$ coordinates $(z_d,\dots,z_{d+N-1})$ is a Noether normalisation. {\bf Lemma:} {\em $\;\ku{m}_A^2\;\subseteq\; \ku{m}_P\cdot\ku{m}_A$ \hspace{0.5em} and \hspace{0.5em} $(\ku{m}_P\cdot\ku{m}_A)\cap P\;\subseteq \;\ku{m}_P^2\,$. } \par {\bf Proof:} Every product $z_iz_j\in\ku{m}_A^2$ may be decomposed as $z_iz_j=p_0 + \sum_{v=1}^{d-1}p_vz_v$ with some $p_v\in P$. Since $\{z_1,\dots,z_{d+N-1}\}$ is a basis of $\raisebox{.5ex}{$\ku{m}_A$}\big/\raisebox{-.5ex}{$\ku{m}_A^2$}$, we obtain $p_0\in\ku{m}_P^2$ and $p_v\in\ku{m}_P$ for $v\geq 1$. The second inclusion follows from the fact that $z_1,\dots,z_{d-1}\in A$ are linearly independent over $P$. 
\qed {\bf Proposition:} {\em For a Cohen-Macaulay singularity of minimal multiplicity $d$ one has for ${n}\geq 1$ $T^{n}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})= T^{n}(A/P,{\,I\!\!\!\!C})=T^{n}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})$ with $V:=\raisebox{.5ex}{$\ku{m}_A$}\big/\raisebox{-.5ex}{$\ku{m}_PA$}$ being the $(d-1)$-dimensional vector space spanned by $z_1,\dots,z_{d-1}$. } {\bf Proof:} The equality $T^{n}(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})= T^{n}(A/P,{\,I\!\!\!\!C})$ was already the subject of Proposition \zitat{noeth}{norm} with $M:={\,I\!\!\!\!C}$; it remains to treat the missing case of ${n}=1$. Using again the Zariski-Jacobi sequence, we have to show that $T^0(A/{\,I\!\!\!\!C},{\,I\!\!\!\!C})\to T^0(P/{\,I\!\!\!\!C},{\,I\!\!\!\!C})$ is surjective. However, since this map is dual to the homomorphism $\raisebox{.5ex}{$\ku{m}_P$}\big/\raisebox{-.5ex}{$\ku{m}_P^2$}\to \raisebox{.5ex}{$\ku{m}_A$}\big/\raisebox{-.5ex}{$\ku{m}_A^2$}$, which is injective by the lemma above, we are done. \vspace{0.5ex}\\ The second equality $T^{n}(A/P,{\,I\!\!\!\!C})=T^{n}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})$ follows by base change, cf.\ \zitat{cotan}{gen}: \[ \cdmatrix{ A&\longrightarrow &\hspace{-0.3em}{\,I\!\!\!\!C}\oplus V \cr \mapup{{\rm flat}}&&\mapup{}\cr P&\longrightarrow &\hspace{-0.3em}{\,I\!\!\!\!C} \cr} \vspace{-3ex} \] \qed \par \neu{noeth-rnc} The previous proposition reduces the cotangent cohomology with ${\,I\!\!\!\!C}$-coefficients of rational surface singularities of multiplicity $d$ to that of the fat point $Z_m$ with $m=d-1$, discussed in \zitat{cotan}{example}. \par {\bf Example:} Denote by $Y_d$ the cone over the rational normal curve of degree $d$. It may be described by the equations encoded in the condition \[ \mbox{\rm rank}\; \pmatrix{z_0 & z_1 & \dots & z_{d-2} & z_{d-1} \cr z_1 & z_2 & \dots & z_{d-1} & z_d \cr} \;\leq 1\;. \] As Noether normalisation we take the projection on the $(z_0,z_d)$-plane. 
With $\deg z_i:=[i,1]\in{Z\!\!\!Z}^2$, the local ring $A_d$ of $Y_d$ admits a ${Z\!\!\!Z}^2$-grading. We would like to show how this grading affects the modules $T^{\kss \bullet}(A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})=T^{\kss \bullet}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})$ (excluding $T^0$), i.e.\ we are going to determine the dimensions $\, \dim T^{\kss \bullet}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-R)$ for $R\in{Z\!\!\!Z}^2$. \hspace{1ex}\\ We know that for every ${n}$ \[ T^{{n}-1}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-R) =\mathop{\rm Harr}\nolimits^{{n}}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-R)\subseteq T^{{n}} V^\ast(-R)=0 \] unless ${n}={\rm ht}(R)$, where ${\rm ht}(R):=R_2$ denotes the part carrying the standard ${Z\!\!\!Z}$-grading. Hence, we just need to calculate the numbers \[ c_{R}:= \dim\, \mathop{\rm Harr}\nolimits^{\kht(R)}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-R) \] and can proceed as in the proof of Proposition \zitat{cotan}{example}. Via the formal power series \[ \sum_{R\in{Z\!\!\!Z}^2} \dim\, T^{\kht(R)}V^\ast(-R) \cdot x^R \in {\,I\!\!\!\!C}[|{Z\!\!\!Z}^2|] \] we obtain the equation \[ -\log \big(1+x^{[1,1]}+\cdots +x^{[d-1,1]}\big)= -\sum_{R\in{Z\!\!\!Z}^2}(-1)^{\kht(R)}\,c_R\cdot\log(1-x^R)\,. \] In particular, if ${\rm ht}(R)={n}$, then the coefficient of $x^R$ in \[ (-1)^{n} \big(x^{[1,1]}+\cdots +x^{[d-1,1]}\big)^{n}= (-1)^{n} \left( \frac{x^{[d,1]}-x^{[1,1]}}{x^{[1,0]}-1} \right)^{n} \] equals $\sum_{R^\prime|R} (-1)^{\kht(R^\prime)}\,{\rm ht}(R^\prime)\cdot c_{R^\prime}$. \vspace{0.5ex} Again, we have to use M\"obius inversion to obtain an explicit formula for the dimensions $c_R$. 
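\par Spelled out explicitly (with $b_R$ an auxiliary abbreviation used only here), this inversion reads as follows: let $b_R$ denote the coefficient of $x^R$ in $(-1)^{\kht(R)}\big(x^{[1,1]}+\cdots +x^{[d-1,1]}\big)^{\kht(R)}$. The divisors $R^\prime$ of $R$ are exactly the lattice points $R/\nu$ with $\nu\geq 1$ and $R/\nu\in{Z\!\!\!Z}^2$, so M\"obius inversion of the relation above yields \[ c_R\;=\;\frac{(-1)^{\kht(R)}}{{\rm ht}(R)} \sum_{\nu\geq 1,\; R/\nu\in{Z\!\!\!Z}^2}\mu(\nu)\, b_{R/\nu}\,. \]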
\par {\bf Remarks:} \begin{itemize}\vspace{-2ex} \item[(1)] The multigraded Poincar\'{e} series \[ \widetilde{Q}_{Z_{d-1}}(x):=\sum_{R\in{Z\!\!\!Z}^2} \dim\, \mathop{\rm Harr}\nolimits^{\kht(R)}({\,I\!\!\!\!C}\oplus V/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-R)\cdot x^R = \sum_{R} c_R\,x^R \] is contained in the completion of the semigroup ring ${\,I\!\!\!\!C}\big[{Z\!\!\!Z}_{\geq 0}\,[1,1]+{Z\!\!\!Z}_{\geq 0}\,[d-1,1]\big]$. \item[(2)] The cohomology groups $\,\mathop{\rm Harr}\nolimits^{{n}}(A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-R)$ vanish unless ${n}={\rm ht}(R)$, even for $n=1$. The corresponding Poincar\'{e} series $\widetilde Q_{Y_d}(x)$ equals $\widetilde{Q}_{Z_{d-1}}(x) +x^{[0,1]} + x^{[d,1]}$. The two additional terms arise from $\,\mathop{\rm Harr}\nolimits^{1}(P/{\,I\!\!\!\!C},{\,I\!\!\!\!C})=T^0(P/{\,I\!\!\!\!C},{\,I\!\!\!\!C})$ in the exact sequence of \zitat{noeth}{norm}. \vspace{-1ex}\end{itemize} \par \neu{noeth-zero} Let $Y$ be a Cohen-Macaulay singularity of minimal multiplicity $d\geq 3$. \par {\bf Lemma:} {\em The natural map $T^{n}(A/P,A)\to T^{n}(A/P,{\,I\!\!\!\!C})$ is the zero map. } \par {\bf Proof:} We compute $T^{n}(A/P,{\kss \bullet})$ with the reduced Harrison complex which sits in the reduced Hochschild complex. Using the notation of the beginning of \zitat{noeth}{V}, a reduced Hochschild $({n}+1)$-cocycle $f$ is, by $P$-linearity, determined by its values on the $({n}+1)$-tuples of the coordinates $z_1$, \dots, $z_{d-1}$. Suppose $f(z_{i_0}, \dots, z_{i_{n}}) \notin \ku{m}_A$. Since $d\geq 3$, we may choose a $z_k$ with $k\in\{1,\dots,d-1\}$ and $k\neq i_0$. Hence \[ \displaylines{\qquad 0=(\delta f)(z_{i_0}, \dots, z_{i_{n}},z_k)= z_{i_0}f(z_{i_1}, \dots, z_k) \pm f(z_{i_0}, \dots, z_{i_{n}})z_k \hfill\cr\hfill{}+ \mbox{ terms containing products $z_iz_j$ as arguments}\;. 
\qquad} \] Since $\ku{m}_A^2\;\subseteq\; \ku{m}_P\cdot\ku{m}_A$ by Lemma \zitat{noeth}{V}, we may again apply $P$-linearity to see that the latter terms are contained in $\ku{m}_P\cdot A$. Hence, modulo $\,\ku{m}_P=\ku{m}_P+\ku{m}_A^2$, these terms vanish, but the resulting equation inside $V=\raisebox{.5ex}{$\ku{m}_A$}\big/\raisebox{-.5ex}{$\ku{m}_PA$}$ contradicts the fact that $z_{i_0}$ and $z_k$ are linearly independent. \qed \par {\bf Corollary:} {\em \begin{itemize}\vspace{-2ex} \item[{\rm(1)}] The map $T^{n}(A/P,\ku{m}_A)\to T^{n}(A/P, A)$ is surjective. In particular, every element of the group $T^{n}(A/P, A)$ may be represented by a cocycle $f\colon A^{\otimes({n}+1)}\to\ku{m}_A$. \item[{\rm(2)}] If $P\to A$ is ${Z\!\!\!Z}$-graded with $\deg\,z_i=1$ for every $i$ \/{\rm(}such as for the cone over the rational normal curve presented in Example {\rm\zitat{noeth}{rnc})}, then $T^{n}(A/P, A)$ sits in degree $\geq -{n}$. \vspace{-1ex}\end{itemize} } \section{The cone over the rational normal curve}\label{cone} \neu{cone-poinc} Let $Y_d$ be the cone over the rational normal curve of degree $d\geq 3$. In Example \zitat{noeth}{rnc} we have calculated the multigraded Poincar\'{e} series $\widetilde{Q}_{Y_d}(x) = \sum_{R} \dim\, \mathop{\rm Harr}\nolimits^{\kht(R)}(A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-R)\cdot x^R$. The usual Poincar\'{e} series $Q_{Y_d}(t)=\sum_{{n}\geq 1} \dim\, \mathop{\rm Harr}\nolimits^{{n}}(A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})\cdot t^{n}$ is related to it via the substitution $x^R\mapsto t^{\kht(R)}$.\\ The goal of the present section is to obtain information about \[ P_{Y_d}(t):=\sum_{{n}\geq 1} \dim\, T^{{n}}(A_d/{\,I\!\!\!\!C},A_d)\cdot t^{n} \vspace{-1ex} \] or its multigraded version \[ \widetilde{P}_{Y_d}(x,t):=\sum_{{n}\geq 1}\sum_{R\in{Z\!\!\!Z}^2} \dim\, T^{{n}}(A_d/{\,I\!\!\!\!C},A_d)(-R)\cdot x^R\, t^{n} \in {\,I\!\!\!\!C}[|{Z\!\!\!Z}^2|][|t|]\,.
\] The first series may be obtained from the latter by substituting $1$ for all monomials $x^R$, i.e.~$P_{Y_d}(t)=\widetilde{P}_{Y_d}(1,t)$. \neu{cone-toric} In \cite{AQ}, Proposition (5.2), combinatorial formulas have been obtained for the dimension of the vector spaces $T^{n}(-R):=T^{{n}}(A_d/{\,I\!\!\!\!C},A_d)(-R)$. The point is that $Y_d$ equals the affine toric variety $Y_\sigma$ with $\sigma$ the plane polyhedral cone \[ \sigma:= {I\!\!R}_{\geq 0}\cdot (1,0) + {I\!\!R}_{\geq 0}\cdot (-1,d) = \big\{ (x,y)\in{I\!\!R}^2\,\big|\; y\geq 0\,;\; d\,x+y\geq 0\big\} \subseteq {I\!\!R}^2\,. \] The lattice containing the multidegrees $R$ may be identified with the dual of the lattice ${Z\!\!\!Z}^2$ inside ${I\!\!R}^2$, and the results of \cite{AQ} for this special cone may be described as follows: \begin{itemize}\vspace{-2ex} \item[(0)] $T^0(-R)$ is two-dimensional if $R\leq 0$ on $\sigma$. It has dimension $1$ if $R$ is still non-positive on one of the $\sigma$-generators $(1,0)$ or $(-1,d)$, but yields exactly $1$ at the other one. $T^0(-R)$ vanishes in every other case. \item[(1)] $T^1(-R)$ is one-dimensional for $R=[1,1]$ and $R=[d-1,1]$; it is two-dimensional for the degrees in between, i.e.\ for $R=[2,1],\dots,[d-2,1]$. Altogether this means that $\,\dim T^1=2d-4$. \item[(2)] The vector space $T^2$ lives exclusively in the degrees of height two. In more detail, we have \[ \renewcommand{\arraystretch}{1.1} \dim T^2(-R)=\left\{\begin{array}{cl} k-2& \mbox{for } \,R=[k,2] \;\mbox{with }\,2\leq k\leq d-1\\ d-3 &\mbox{for } \,R=[d,2] \\ 2d-k-2 &\mbox{for } \,R=[k,2] \;\mbox{with } \,d+1\leq k\leq 2d-2\,. \end{array}\right. \vspace{-2ex} \] \vspace{-1ex}\end{itemize} To formulate the result for the higher cohomology groups, we need some additional notation. If $R\in{Z\!\!\!Z}^2$, then let $K_R$ be the finite set \[ K_R:= \big\{ r\in{Z\!\!\!Z}^2\setminus\{0\}\,\big|\; r\geq 0 \mbox{ on } \sigma, \mbox{ but } \,r<R \mbox{ on } \sigma\setminus\{0\} \big\}\,.
\] Every such set $K\subseteq{Z\!\!\!Z}^2$ gives rise to a complex $C^{\kss \bullet}(K)$ with \[ C^{{n}}(K):= \Big\{\varphi\colon \big\{(\lambda_1,\dots,\lambda_{n})\in K^{n}\, \big|\;\mbox{$\sum_v$}\lambda_v\in K\big\} \to {\,I\!\!\!\!C}\,\Big|\; \varphi \mbox{ is shuffle invariant}\Big\}\;, \] equipped with the modified, inhomogeneous Hochschild differential $d\colon C^{{n}}(K) \to C^{{n}+1}(K)$ given by \[ \displaylines{\qquad (d\varphi)(\lambda_0,\dots,\lambda_{n}):={}\hfill\cr\hfill \varphi(\lambda_1,\dots,\lambda_{n}) + \sum_{v=1}^{n} (-1)^v \varphi(\lambda_0,\dots,\lambda_{v-1}+\lambda_v,\dots,\lambda_{n}) + (-1)^{{n}+1}\varphi(\lambda_0,\dots,\lambda_{{n}-1})\,. \qquad} \] Denoting the cohomology of $C^{\kss \bullet}(K)$ by $H\!A^{\kss \bullet}(K)$, we may complete our list with the last point \begin{itemize}\vspace{-2ex} \item[(3)] $T^{n}(-R) = H\!A^{{n}-1}(K_R)\,$ for $\,{n}\geq 3$. \vspace{-1ex}\end{itemize} \par {\bf Remark:} The explicit description of $T^2(-R)$ does almost fit into the general context of $n\geq 3$. The correct formula is $T^2(-R)=\raisebox{0.3ex}{$H\!A^1(K_R)$}\big/ \raisebox{-0.3ex}{$(\span_{\,I\!\!\!\!C} K_R)^\ast$}$. \par \neu{cone-vanish} The previous results on $T^{n}(-R)$ have two important consequences. Let $\Lambda:=\{R\in{Z\!\!\!Z}^2\,|\; R\geq 0 \mbox{ on } \sigma\}$, $\Lambda_+:=\Lambda\setminus\{0\}$ and $\mbox{\rm int}\,\Lambda:=\{R\in{Z\!\!\!Z}^2\,|\; R> 0 \mbox{ on } \sigma\setminus \{0\}\}$. \par {\bf Proposition:} {\em Let ${n}\geq 1$. \begin{itemize}\vspace{-2ex} \item[{\rm(1)}] $T^{n}(-R)=0$ unless $R$ is strictly positive on $\sigma\setminus\{0\}$, i.e.\ unless $R\in\Lambda_+$. \item[{\rm(2)}] $T^{n}(-R)=0$ unless ${\rm ht}(R)={n}$. In particular, $T^{n}$ is killed by the maximal ideal of $A_d$. \vspace{-1ex} \vspace{-1ex}\end{itemize} } \par {\bf Proof:} (1) If $R$ is not positive on $\sigma\setminus\{0\}$, then $K_R=\emptyset$. 
\vspace{1ex}\\ (2) If ${n}-1\geq {\rm ht}(R)$, then $C^{{n}-1}(K_R)=0$ for trivial reasons. Hence $T^{n}$ sits in degree $\leq (-{n})$. But this is exactly the opposite inequality from Corollary \zitat{noeth}{zero}(2). \qed \par In particular, we may shorten our Poincar\'{e} series to \[ \widetilde{P}_{Y_d}(x):=\sum_{\kht(R)\geq 1} \dim\, T^{\kht(R)}(A_d/{\,I\!\!\!\!C},A_d)(-R)\cdot x^R \in \hat{A}_d = {\,I\!\!\!\!C}[|\Lambda|]\subseteq {\,I\!\!\!\!C}[|{Z\!\!\!Z}^2|]\,. \] We obtain $P_{Y_d}(t)$ from $\widetilde{P}_{Y_d}(x)$ via the substitution $x^R\mapsto t^{\kht(R)}$. \par \pagebreak[2] \neu{cone-geq3} {\bf Lemma:} {\em \begin{itemize}\vspace{-2ex} \item[{\rm(1)}] Let $R\in {Z\!\!\!Z}^2$ with ${\rm ht}(R)\geq 3$. Then \[ \displaylines{\quad \dim \,T^{\kht(R)}(-R)= {\displaystyle\sum\limits_ {\smash{r\in \mbox{\rm\footnotesize int} \Lambda,\, R-r\in\Lambda_+\hspace{-2em}}}} (-1)^{\kht(r)-1} \dim \,\mathop{\rm Harr}\nolimits^{\kht(R)-\kht(r)}(A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(r-R) \hfill\cr\hfill{} +\, (-1)^{\kht(R)-1} \dim \,H\!A^1(K_R) \,.\qquad} \] \item[{\rm(2)}] For $R\in{Z\!\!\!Z}^2$ with ${\rm ht}(R)=1$ or $2$, the right hand side of the above formula always yields zero. \vspace{-1ex}\end{itemize} } {\bf Proof:} (1) The vanishing of $T_Y^{n}(-R)$ for ${n}\neq{\rm ht}(R)$ together with the equality $T_Y^{n}(-R)=H\!A^{{n}-1}(K_R)$ for ${n}\geq 3$ implies that the complex $C^{\kss \bullet}(K_R)$ is exact up to the first and the $({\rm ht}(R)-1)$-th place. In particular, we obtain \[ \dim \,T^{\kht(R)}(-R)=\sum_{{n}\geq 1} (-1)^{\kht(R)-1+{n}} \dim \,C^{n}(K_R) + (-1)^{\kht(R)-1} \dim\, H\!A^1(K_R) \] where the sum is a finite one because $C^{\geq\kht(R)}(K_R)=0$. 
Now the trick is to replace the differential of the inhomogeneous complex $C^{\kss \bullet}(K_R)$ by its homogeneous part $d^\prime\colon C^{n}(K_R)\to C^{{n}+1}(K_R)$ defined as \[ (d^\prime\varphi)(\lambda_0,\dots,\lambda_{n}):= \sum_{v=1}^{n} (-1)^v \varphi(\lambda_0,\dots,\lambda_{v-1}+\lambda_v,\dots,\lambda_{n})\,. \] Then $\big(C^{\kss \bullet}(K_R),\,d^\prime\big)$ splits into a direct sum $\oplus_{r\in K_R} V^{\kss \bullet}(-r)$ with \[ V^{{n}}(-r):= \Big\{\varphi\colon \big\{(\lambda_1,\dots,\lambda_{n})\in \Lambda_+^{n}\, \big|\;\mbox{$\sum_v$}\lambda_v=r\big\} \to {\,I\!\!\!\!C}\,\Big|\; \varphi \mbox{ is shuffle invariant}\Big\}\,. \] On the other hand, since $A_d={\,I\!\!\!\!C}[\Lambda]$, we recognise this exactly as the reduced complex computing $\mathop{\rm Harr}\nolimits^{\kss \bullet} (A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-r)$. Hence, \[ \dim \,T^{\kht(R)}(-R) = \sum_{{n}\geq 1,\,r\in K_R\hspace{-1em}} (-1)^{\kht(R)-1+{n}} \dim\,\mathop{\rm Harr}\nolimits^{n}(A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-r) \,+\, (-1)^{\kht(R)-1} \dim\, H\!A^1(K_R). \] Finally, we replace $r$ by $R-r$ and recall that $\mathop{\rm Harr}\nolimits^{n}(A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(-r)=0$ unless $n={\rm ht}(r)$. \par (2) If ${\rm ht}(R)=2$, then the right hand side equals $\#K_R-\#K_R=0$. If ${\rm ht}(R)=1$, then no summand at all survives. \qed \par \neu{cone-result} Let $\,F(x):=\sum_{v=1}^{d-1} x^{[v,1]}-x^{[d,2]}= \frac{x^{[d,1]}-x^{[1,1]}}{x^{[1,0]}-1} -x^{[d,2]}$. Using the Poincar\'{e} series $\widetilde{Q}_{Y_d}(x)$ of \zitat{noeth}{rnc}, we obtain the following formula: \par {\bf Theorem:} {\em The multigraded Poincar\'{e} series of the cone over the rational normal curve of degree $d$ equals \[ \widetilde{P}_{Y_d}(x) \;=\; \frac{F(x)\cdot(\widetilde{Q}_{Y_d}(x)+2)} {\big(x^{[0,1]}+1\big)\big(x^{[d,1]}+1\big)} - \frac{x^{[1,1]}}{x^{[0,1]}+1} - \frac{x^{[d-1,1]}}{x^{[d,1]}+1}\,. 
\] } \par {\bf Proof:} The previous lemma implies that \[ \displaylines{\qquad \widetilde{P}_{Y_d}(x) = \sum_{\kht(R)=1,2\hspace{-1em}}\dim\,T^{\kht(R)}(-R)\cdot x^R + \sum_{ R\in\Lambda_+}(-1)^{\kht(R)-1} \dim \,H\!A^1(K_R)\cdot x^R \hfill\cr\hfill{}+ \sum_{\hspace{-2.0em}\stackrel {\scriptstyle R\in\Lambda_+} {\scriptstyle r\in\mbox{\rm\footnotesize int}\Lambda,\,R-r\in\Lambda_+} \hspace{-2.0em}} (-1)^{\kht(r)-1} \dim \,\mathop{\rm Harr}\nolimits^{\kht(R-r)}(A_d/{\,I\!\!\!\!C},{\,I\!\!\!\!C})(r-R) \cdot x^R\,. \qquad} \] Using the description of $T^1(-R)$ and $T^2(-R)$ from \zitat{cone}{toric}, including the remark at the very end, we obtain for the first two summands \[ \Big(2\sum_{v=1}^{d-1}x^{[v,1]} - x^{[1,1]}-x^{[d-1,1]}\Big) + \sum_{\scriptstyle R\in\Lambda_+}(-1)^{\kht(R)-1} \dim \,\span(K_R)\cdot x^R\,, \] which is equal to \[ 2\hspace{-0.5em}\sum_{R\in{\rm int}\Lambda} (-1)^{\kht(R)-1} x^R \;+\; \sum_{k\geq 1} (-1)^k x^{[1,k]} \;+\; \sum_{k\geq 1} (-1)^k x^{[kd-1,k]}\,. \] The third summand in the above formula for $\widetilde{P}_{Y_d}(x)$ may be approached by summing over $r$ first. Then, substituting $s:=R-r\in\Lambda_+$ and splitting $x^R$ into the product $x^r\cdot x^s$, we see that this summand is nothing else than \[ \Big(\hspace{-0.5em}\sum_{r\in{\rm int}\Lambda} (-1)^{\kht(r)-1} x^r \Big)\cdot \widetilde{Q}_{Y_d}(x)\,. \] In particular, we obtain \[ \widetilde{P}_{Y_d}(x)= \Big(\hspace{-0.5em}\sum_{R\in{\rm int}\Lambda} (-1)^{\kht(R)-1} x^R \Big)\cdot \Big(\widetilde{Q}_{Y_d}(x)+2\Big) \;+\; \sum_{k\geq 1} (-1)^k x^{[1,k]} \;+\; \sum_{k\geq 1} (-1)^k x^{[kd-1,k]}\,. \] Finally, we should calculate the infinite sums. The latter two are geometric series; they yield $-x^{[1,1]}\big/\big(x^{[0,1]}+1\big)$ and $-x^{[d-1,1]}\big/\big(x^{[d,1]}+1\big)$, respectively. 
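Both geometric series, and the closed forms just obtained, are easy to check numerically; a minimal sketch in Python, representing the monomial $x^{[v,k]}$ by $s^{v}u^{k}$ for illustrative values $|s|,|u|<1$ inside the region of convergence:

```python
# Check the two geometric series above; the monomial x^{[v,k]} is
# represented as s**v * u**k (illustrative values), d is the degree
# of the rational normal curve.
s, u, d = 0.3, 0.2, 5

# sum_{k>=1} (-1)^k x^{[1,k]}   and   sum_{k>=1} (-1)^k x^{[kd-1,k]}
lhs1 = sum((-1) ** k * s * u ** k for k in range(1, 200))
lhs2 = sum((-1) ** k * s ** (k * d - 1) * u ** k for k in range(1, 200))

# closed forms: -x^{[1,1]}/(x^{[0,1]}+1)   and   -x^{[d-1,1]}/(x^{[d,1]}+1)
rhs1 = -s * u / (u + 1)
rhs2 = -s ** (d - 1) * u / (s ** d * u + 1)

print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))  # both at machine-precision level
```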
With the first sum we proceed as follows: \[ \renewcommand{\arraystretch}{1.8} \begin{array}{rcl} \sum_{R}(-1)^{\kht(R)-1}x^R &=& -\sum_{k\geq 1}\sum_{v=1}^{kd-1} (-1)^k x^{[v,k]} \\ &=& -\sum_{k\geq 1} (-1)^k x^{[1,k]} \,\big(x^{[kd-1,0]}-1\big)\big/ \big(x^{[1,0]}-1\big)\\ &=& -\Big(\sum_{k\geq 1} (-1)^k x^{[kd,k]} - \sum_{k\geq 1} (-1)^k x^{[1,k]}\Big) \Big/\Big(x^{[1,0]}-1\Big) \\ &=& \Big(x^{[d,1]}\big/\big(1+x^{[d,1]}\big) - x^{[1,1]}\big/\big(1+x^{[0,1]}\big)\Big) \Big/\Big(x^{[1,0]}-1\Big) \\ &=& \Big( x^{[d,1]} - x^{[d+1,2]} + x^{[d,2]} - x^{[1,1]}\Big) \Big/\Big( \big(x^{[d,1]}+1\big)\big(x^{[1,0]}-1\big) \big(x^{[0,1]}+1\big)\Big)\\ &=& F(x) \Big/\Big( \big(x^{[d,1]}+1\big)\big(x^{[0,1]}+1\big)\Big)\,. \end{array} \vspace{-2ex} \] \qed \par \neu{cone-example} As example we determine the ${\rm ht}=3$ part of $\widetilde{P}_{Y_d}(x)$. We need the first terms of $\widetilde{Q}_{Z_{d-1}}(x)$. By \zitat{noeth}{rnc} the ${\rm ht}=1$ part is just $x^{[1,1]}+\cdots +x^{[d-1,1]}$, whereas the ${\rm ht}=2$ part is \[ \frac12 \big((x^{[1,1]}+\cdots +x^{[d-1,1]})^2 -(x^{[2,2]}+x^{[4,2]}+\cdots +x^{[2(d-1),2]})\big)= \frac{(x^{[d,1]}-x^{[1,1]})(x^{[d,1]}-x^{[2,1]})} {(x^{[1,0]}-1)^2(x^{[1,0]}+1)} \;. \] Inserting this in the formula for $\widetilde{P}_{Y_d}(x)$ we finally find the grading of $T^3_{Y_d}$: \[ \frac{(x^{[d,1]}-x^{[1,1]})(x^{[d-1,1]}-x^{[2,1]}) (x^{[d-2,1]}-x^{[2,1]})} {(x^{[1,0]}-1)^2(x^{[2,0]}-1)} \;. \] For even $d$ we get the symmetric formula \[ (x^{[1,1]}+\cdots +x^{[d-1,1]}) (x^{[2,1]}+x^{[3,1]}+\cdots +x^{[d-3,1]}+x^{[d-2,1]}) (x^{[2,1]}+x^{[4,1]}+\cdots +x^{[d-4,1]}+x^{[d-2,1]}) \;. \] \neu{cone-forget} Applying the ring homomorphism $x^R\mapsto t^{\kht(R)}$ to the formula of Theorem \zitat{cone}{result} yields: {\bf Corollary:} {\em The ordinary Poincar\'{e} series $P_{Y_d}(t)$ of the cone over the rational normal curve equals \[ P_{Y_d}(t)\,=\;\Big(Q_{Y_d}(t)+2\Big)\cdot \frac{(d-1)\,t-t^2}{(t+1)^2} \;-\; \frac{2\,t}{t+1}\,. 
\] } \par \section{Hyperplane sections}\label{hps} Choosing a Noether normalisation of an $N$-dimensional singularity $Y$ means writing $Y$ as the total space of an $N$-parameter family. In this situation we have compared the cohomology of $Y$ with that of the $0$-dimensional special fibre. Cutting down the dimension step by step leads to the comparison of the cohomology of a singularity and its hyperplane section. \par \neu{hps-setup} First we recall the main points from \cite{BC}. Let $f\colon Y\to{\,I\!\!\!\!C}$ be a flat map such that both $Y$ and the special fibre $H$ have isolated singularities. By $T^{n}_Y$ and $T^{n}_H$ we simply denote the cotangent cohomology $T^{n}({\cal O}_Y/{\,I\!\!\!\!C},{\cal O}_Y)$ and $T^{n}({\cal O}_H/{\,I\!\!\!\!C},{\cal O}_H)$, respectively. \par {\bf Main Lemma:} (\cite{BC}, (1.3)) {\em There is a long exact sequence \[ T^1_H \longrightarrow T^2_Y \stackrel{\cdot f}{\longrightarrow} T^2_Y \longrightarrow T^2_H \longrightarrow T^3_Y \stackrel{\cdot f}{\longrightarrow} T^3_Y \longrightarrow \dots\,. \] Moreover, $\dim\,\raisebox{0.4ex}{$T^2_Y$}\big/ \raisebox{-0.4ex}{$f\cdot T^2_Y$}= \tau_H-e_{H,Y}$ with $\tau_H:=\dim\,T^1_H$ and $e_{H,Y}$ denoting the dimension of the smoothing component containing $f$ inside the versal base space of $H$. } \par This lemma will be an important tool for the comparison of the Poincar\'{e} series $P_Y(t)$ and $P_H(t)$ of $Y$ and $H$, respectively. However, since we are not only interested in the dimension, but also in the number of generators of the cohomology groups, we introduce the following notation. If $M$ is a module over a local ring $(A,\ku{m}_A)$, then \[ {\rm cg}(M):=\dim_{\,I\!\!\!\!C} \raisebox{0.5ex}{$M$}\big/ \raisebox{-0.5ex}{$\ku{m}_AM$} \] is the number of elements in a minimal generating set of $M$. By $P^\cg_Y(t)$ and $P^\cg_H(t)$ we denote the Poincar\'{e} series using ``${\rm cg}$'' instead of ``$\dim$''. Similarly, $\tau_{\kss \bullet}^\cg:={\rm cg}(T^1_{\kss \bullet})$.
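In the special case where $M$ is a monomial ideal in a localised polynomial ring, Nakayama's lemma identifies ${\rm cg}(M)$ with the number of monomial generators not divisible by another one. A small illustrative sketch in Python (the example ideal is hypothetical):

```python
def cg_monomial_ideal(gens):
    """cg of the monomial ideal generated by `gens` (exponent tuples):
    by Nakayama's lemma this equals the number of minimal generators,
    i.e. the generators not divisible by a different one."""
    def divides(g, h):
        return all(gi <= hi for gi, hi in zip(g, h))
    return sum(1 for h in gens
               if not any(g != h and divides(g, h) for g in gens))

# M = (x^2, x*y, y^3, x^2*y) in C[x, y]; x^2*y = x * (x*y) is redundant.
print(cg_monomial_ideal([(2, 0), (1, 1), (0, 3), (2, 1)]))  # 3
```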
\par {\bf Proposition:} {\em \begin{itemize}\vspace{-2ex} \item[\rm{(1)}] Assume that $f\cdot T^{n}_Y=0$ for ${n}\geq 2$. Then $\,P_H(t)=(1+1/t)\,P_Y(t) - \tau_Y\,(t+1) + e_{H,Y}\,t$. \item[{\rm(2)}] If\/ $\ku{m}_H\cdot T^{n}_H=0$ for ${n}\geq 2$, then $\,P^\cg_H(t)=(1+1/t)\,P^\cg_Y(t) - \tau^\cg_Y\,(t+1) + (\tau_H^\cg-\tau_H+e_{H,Y})\,t$. \vspace{-1ex}\end{itemize} } \par {\bf Proof:} In the first case the long exact sequence of the Main Lemma splits into short exact sequences \[ 0\longrightarrow T^{n}_Y \longrightarrow T^{n}_H \longrightarrow T^{{n}+1}_Y \longrightarrow 0 \] for ${n}\geq 2$. Moreover, the assumption that $f$ annihilates $T_Y^2$ implies that $e_{H,Y}=\tau_H-\dim\,T^2_Y$.\\ For the second part we follow the arguments of \cite{BC}, (5.1). The short sequences have to be replaced by \[ 0\longrightarrow \raisebox{0.5ex}{$T^{n}_Y$}\big/ \raisebox{-0.5ex}{$f\cdot T^{n}_Y$} \longrightarrow T^{n}_H \longrightarrow \ker\big[f\colon T^{{n}+1}_Y \to T^{{n}+1}_Y\big]\longrightarrow 0\,. \] Since $T^{{n}+1}_Y$ is finite-dimensional, the dimensions of $\ker\big[f\colon T^{{n}+1}_Y \to T^{{n}+1}_Y\big]$ and $\raisebox{0.5ex}{$T^{n}_Y$}\big/ \raisebox{-0.5ex}{$f\cdot T^{n}_Y$}$ are equal. Now, the claim follows from the fact that $\raisebox{0.5ex}{$T^{n}_Y$}\big/ \raisebox{-0.5ex}{$f\cdot T^{n}_Y$} =\raisebox{0.5ex}{$T^{n}_Y$}\big/ \raisebox{-0.5ex}{$\ku{m}_Y\, T^{n}_Y$}$, which is a direct consequence of the assumption $\ku{m}_H\cdot T^{n}_H=0$. \qed \par \neu{hps-partition} We would like to apply the previous formulas to partition curves $H(d_1,\dots,d_r)$. They are defined as the wedge of the monomial curves $H(d_i)$ described by the equations \[ \mbox{\rm rank}\; \pmatrix{z_1 & z_2 & \dots & z_{d_i-1} & z_{d_i} \cr z_1 & z_3 & \dots & z_{d_i} & z_1^2 \cr} \;\leq 1 \] (\cite{BC}, 3.2). The point making partition curves so exciting is that they occur as the general hypersurface sections of rational surface singularities. 
Moreover, they sit right in between the cone over the rational normal curve $Y_d$ and the fat point $Z_{d-1}$ with $d:=d_1+\dots+d_r$. \par {\bf Theorem:} {\em Let $H:=H(d_1,\dots,d_r)$ be a partition curve. For ${n}\geq 2$ the modules $T^{n}_H$ are annihilated by the maximal ideal $\ku{m}_H$. The corresponding Poincar\'{e} series is \[ P_H(t)\;=\; \frac{\displaystyle d-1-t}{\displaystyle t+1} \,Q_{Z_{d-1}}(t) +\tau_H\,t -(d-1)^2\,t\,. \vspace{-2ex} \] } \par {\bf Proof:} We write $Y:=Y_d$ and $Z:=Z_{d-1}$. The idea is to compare $P_H(t)$ and $P^\cg_H(t)$ which can be calculated from $P_{Y}(t)$ and $P^\cg_{Z}(t)$, respectively. Firstly, since $\ku{m}_YT^{n}_Y=0$ for ${n}\geq 1$, we obtain from Proposition \zitat{hps}{setup}(1) and Corollary \zitat{cone}{forget} that \[ \renewcommand{\arraystretch}{1.6} \begin{array}{rcl} P_H(t) &=& (1+1/t)\,P_Y(t) - \tau_Y\,(t+1) + e_{H,Y}\,t\\ &=& \Big(Q_Y(t)+2\Big) \frac{\displaystyle (d-1)-t}{\displaystyle t+1} -2 - (2d-4)(t+1) + e_{H,Y}\,t \\ &=& \frac{\displaystyle d-1-t}{\displaystyle t+1} \,Q_Z(t) -2\,t\,(d-1) + e_{H,Y}\,t\,, \end{array} \] where we used that $Q_Y(t)=Q_Z(t)+2\,t$. On the other hand, since $\ku{m}_ZT^{n}_Z=0$ for all ${n}$, we can use the second part of Proposition \zitat{hps}{setup} to get \[ P_Z(t)\;=\; P^\cg_Z(t)\;=\; (1+1/t)\,P^\cg_H(t) - \tau^\cg_H\,(t+1) + e_{Z,H}\,t\,. \] The calculations of \zitat{cotan}{example} give us $P_Z(t)$ explicitly: we have $\dim T^{n}_Z = (d-1) c_{{n}+1}-c_{n}$ and $\dim T^0_Z = (d-1)^2$. Therefore \[ P_Z(t) \;=\; \frac{\displaystyle d-1-t}{\displaystyle t}\, Q_Z(t) - (d-1)^2\,. \] Hence, \[ P^\cg_H(t)=\frac{\displaystyle d-1-t}{\displaystyle t+1} \,Q_Z(t) + \tau_H^\cg\,t - \frac{t}{t+1} \Big((d-1)^2+e_{Z,H}\,t\Big)\,. \] Finally, we use that $\,\tau_H-e_{H,Y}=(d-1)(d-3)$ and $\,e_{Z,H}=(d-1)^2$ (see \cite{BC}, (4.5) and (6.3.2) respectively). This implies the $P_H(t)$-formula of the theorem as well as \[ P_H(t)-P^\cg_H(t)\;=\; \big(\tau_H-\tau_H^\cg\big)\,t\,. 
\] In particular, if ${n}\geq 2$, then the dimension of the modules $T^{n}_H$ equals the number of generators, i.e.\ they are killed by the maximal ideal. \qed \par {\bf Corollary:} {\em The number of generators of $T^{\geq 2}$ is the same for all rational surface singularities with fixed multiplicity $d$. } \par {\bf Proof:} Apply again Proposition \zitat{hps}{setup}(2). \qed \par \neu{hps-app} We have seen that $\dim\,T^{n}= {\rm cg}\,T^{n}$ ($n\geq2$) for the cone over the rational normal curve. This property holds for a larger class of singularities, including quotient singularities. \par {\bf Theorem:} {\em Let $Y$ be a rational surface singularity such that the projectivised tangent cone has only hypersurface singularities. Then the dimension of $T^{n}$ for ${n}\geq 3$ equals the number of generators. } \par {\bf Proof:} Under the assumptions of the theorem the tangent cone $\overline Y$ of $Y$ also has finite-dimensional $T^{n}$, ${n}\geq2$. With $d:=\mbox{mult}(Y)$ we shall show that $\dim\,T^{n}_{\overline Y} = \dim\,T^{n}_{Y_d}$ for ${n}\geq 3$. As $Y$ is a deformation of its tangent cone $\overline Y$, semi-continuity implies that $\dim T^{n}_Y = \dim T^{n}_{Y_d}$, which equals the number of generators of $T^{n}_Y$. The advantage of working with $\overline Y$ is that it is a homogeneous singularity, so Corollary \zitat{noeth}{zero}(2) applies. The general hyperplane section $\overline H$ of $\overline Y$ is in general a non-reduced curve. In fact, it is a wedge of curves described by the equations \[ \mbox{\rm rank}\; \pmatrix{z_1 & z_2 & \dots & z_{d_i-1} & z_{d_i} \cr z_1 & z_3 & \dots & z_{d_i} & 0 \cr} \;\leq 1\,, \] which is the tangent cone to the curve $H(d_i)$. The curve $\overline H$ is also a special section of the cone over the rational normal curve; to see this it suffices to take the cone over a suitable divisor of degree $d$ on $\P^1$.
Applying the Main Lemma to $\overline H$ and $Y_d$ we obtain the short exact sequences \[ 0\longrightarrow T^{n}_{Y_d} \longrightarrow T^{n}_{\overline H} \longrightarrow T^{{n}+1}_{Y_d} \longrightarrow 0\;, \] which show that for $n\geq2$ the dimension of $T^{n}_{\overline H}$ is the same as that of a partition curve of multiplicity $d$. Moreover, as the module $T^{n}_{Y_d}$ is concentrated in degree $-{n}$, $T^{{n}+1}_{Y_d}$ in degree $-({n}+1)$ and the connecting homomorphism, being induced by a coboundary map, has degree $-1$, it follows that $T^{n}_{\overline H}$ is concentrated in degree $-{n}$. We now look again at $\overline H$ as hyperplane section of $\overline Y$. The short exact sequence corresponding to the second one in the proof of Proposition \zitat{hps}{setup} yields that $\ker\big[f\colon T^{{n}+1}_{\overline Y} \to T^{{n}+1}_{\overline Y}\big]$ is concentrated in degree $-({n}+1)$ for $n\geq2$. The part of highest degree in $T^{{n}+1}_{\overline Y}$ is contained in this kernel, as multiplication by $f$ increases the degree. On the other hand, $T^{{n}+1}_{\overline Y}$ sits in degree $\geq -(n+1)$ by Corollary \zitat{noeth}{zero}(2). Therefore $T^{{n}+1}_{\overline Y}$ is concentrated in degree $-(n+1)$ and its dimension equals the number of generators, which is the same as for all rational surface singularities of multiplicity $d$. \qed \par Note that we cannot conclude anything in the case ${n}=2$ and in fact the result does not hold for the famous counterexample (\cite{BC}, 5.5).
In the formula of Corollary \zitat{cone}{forget} we then only need to introduce a correction term for $\tau=\dim\, T^1$. \qed \par
\section{Introduction} The last two decades have witnessed remarkable developments in studies of out-of-equilibrium dynamics in isolated quantum many-body systems, mainly promoted by experimental advances in atomic physics \cite{BI08,LDA16}. On another front, it has become possible to study {\it open} many-body physics in a highly controlled manner \cite{NS08,BG13,PYS15,RB16,LHP17,Tomita2017,LJ19,CL19}. Under such nonunitary perturbations exerted by an external observer or a Markovian environment, the dynamics of an open quantum system can be described by the quantum trajectory approach \cite{DJ92,HC93,MU92}. With the ability to measure single quanta of many-body systems \cite{BWS09,SJF10,MM15,CLW15,PMF15,OA15,EH15,EGJ15,RY16,AA16}, one should unravel open-system dynamics based on microscopic information, e.g., the number of quantum jumps that occurred in a certain time interval \cite{Daley2014,YA18}. Non-Hermitian quantum physics is a useful description for investigating such dynamics conditioned on measurement outcomes. From a broader perspective, we remark that non-Hermitian quantum physics has also been applied to problems of nuclear and mesoscopic resonances \cite{HF58,NM98}, flux-line depinning \cite{HH96,IA04}, and quantum transport \cite{GGG15}. In particular, the Feshbach projection approach \cite{HF58} has recently been revisited in the context of correlated electrons \cite{VK17,YT18}. Non-Hermitian physics has also attracted considerable interest from other diverse subfields of physics in this decade \cite{EG18,HX2016,Gong2018,LJY19,FZN19,Liu2019,Ashida2020}; one prominent example is its application to linear dynamics in classical systems. The reason is that one can regard a set of classical wave equations as the one-body Schr{\"o}dinger equation, allowing one to simulate non-Hermitian wave physics in classical setups \cite{AR05,REG07}.
More specifically, recent advances in this direction have been sparked by realizations \cite{GA09,RCE10} of parity-time (PT) symmetric quantum mechanics \cite{Bender1998} with fabricated gain and loss in chip-scale nanophotonic systems. In PT-symmetric systems, spectra can exhibit real-to-complex transitions typically accompanying singularities known as exceptional points \cite{TK80}. While such transitions can also be found for other types of antilinear symmetries \cite{CMB02}, an advantage of the PT symmetry is its physical simplicity, allowing one to ensure the symmetry via spatial modulation of dissipative media~\cite{Ozdemir2019,Peng2014,Peng2014b,Jing2014,Jing2015,Liu2016,Zhang2018,Liu2017,Quijandria2018,Zhang2015}. Subsequent studies revealed a great potential for exploring non-Hermitian physics in a variety of classical systems such as mechanical systems \cite{KB17}, electrical circuits \cite{CHL18,IS18,EM19a}, and acoustic or hydrodynamic systems \cite{SC16,ZW18,SK19}. Yet, these developments have so far been mainly restricted to one-body wave phenomena. Recently, there have been significant efforts to extend the paradigm of non-Hermitian physics to yet another realm of quantum many-body systems. Already, a number of unconventional many-particle phenomena that have no analogues in Hermitian regimes have been found \cite{YA17nc,LTE14,MG16,YA16crit,SJP16,LAS18,NM18,GA18,Hamazaki2019,YK19,YT19,LE19,MN19,GC19}. In particular, it has been theoretically proposed to realize a PT-symmetric quantum many-body system in ultracold atoms by using a spatially modulated dissipative optical potential, where the renormalization group (RG) fixed points and RG flows unique to non-Hermitian regimes have been predicted \cite{YA17nc}. Similar anomalous RG flows have also been studied in the Kondo model \cite{LAS18,NM18}. A PT-symmetric Bose-condensed system has also been theoretically proposed by considering a double-well optical lattice \cite{Dizdarevic2018}.
More recently, studies have been extended to nonequilibrium dynamical regimes \cite{YS17,DB192,DB19}, where the role of quantum jumps in time evolution has been elucidated \cite{YA18,YA18therm,NM19e,GC20}. The aim of this paper is to report our realization of a PT-symmetric non-Hermitian quantum many-body system using ultracold ytterbium (Yb) atoms in an optical lattice. While a highly controllable system of ultracold atoms in an optical lattice is inherently an isolated quantum many-body system, this controllability also enables us to realize open quantum systems by coupling the system to the environment. We used controlled one-body atom loss as dissipation. Specifically, we investigate the loss dynamics of a one-dimensional Bose gas, which, as revealed by our theory, is the key to understanding this non-Hermitian PT-symmetric system. This paper is organized as follows: In Sec. 2 we describe our newly developed theoretical framework. Then we describe our experiment with one-body loss in Sec. 3. Finally, we discuss the obtained results and summarize our work in Sec. 4. In the Appendix we describe the results for two-body loss. \section{Theoretical framework} In this section, we develop a theoretical framework to describe the experimental system considered in the present paper. We note that the formulation described here is equally applicable to both bosonic and fermionic systems. \subsection{Model} We consider atoms trapped in an optical lattice created by a far-detuned laser. We then discuss superimposing a dissipative optical lattice created by near-resonant light with a possible phase shift $\phi$. When the shift $\phi$ satisfies a certain condition discussed below, an effective non-Hermitian Hamiltonian in the system can satisfy the PT symmetry in the sense of a passive system.
Suppose that atoms have the ground $|g\rangle$ and excited $|e\rangle$ states with the atomic frequency $\omega_{0}$, and the excited state has decay channels to other states with the total decay rate $\Gamma$, which is much larger than the spontaneous emission rate of $|e\rangle$ to $|g\rangle$. The dynamics can then be described by the many-body Lindblad equation: \begin{eqnarray} \frac{d\hat{\rho}}{dt} = -\frac{i}{\hbar}[\hat{H},\hat{\rho}]-\frac{\Gamma}{2}\int\left[\hat{\Psi}_{e}^{\dagger}({\bf x})\hat{\Psi}_{e}({\bf x})\hat{\rho}+\hat{\rho}\hat{\Psi}_{e}^{\dagger}({\bf x})\hat{\Psi}_{e}({\bf x})-2\hat{\Psi}_{e}({\bf x})\hat{\rho}\hat{\Psi}_{e}^{\dagger}({\bf x})\right]d{\bf x}, \end{eqnarray} where $\hat{\Psi}_{g,e}^\dagger$ ($\hat{\Psi}_{g,e}$) is the creation (annihilation) operator of the ground- or excited-state atom, and $\hat{H}$ is the Hamiltonian of the two-level atoms, \begin{eqnarray} \hat{H} & \!=\! & \int d{\bf x}\left[\sum_{i=g,e}\hat{\Psi}_{i}^{\dagger}({\bf x})\hat{H}_{i,{\rm CM}}({\bf x})\hat{\Psi}_{i}({\bf x})\!+\!\hbar\omega_{0}\hat{\Psi}_{e}^{\dagger}({\bf x})\hat{\Psi}_{e}({\bf x})\!+\!\hat{H}_{{\rm I}}({\bf x})\!+\!\hat{H}_{{\rm int}}({\bf x})\right],\\ \hat{H}_{i,{\rm CM}}({\bf x}) &\! =\! & -\frac{\hbar^{2}\nabla^{2}}{2m}\!+\!U_{i}({\bf x})\;\;\;\;\;\;(i=g,e),\\ \hat{H}_{{\rm I}}({\bf x}) & = & -\left({\bf d}\cdot{\bf E}({\bf x},t)\hat{\Psi}_{g}^{\dagger}({\bf x})\hat{\Psi}_{e}({\bf x})+{\rm H.c.}\right),\\ \hat{H}_{{\rm int}}({\bf x}) & = & \frac{u}{2}\hat{\Psi}_{g}^{\dagger}({\bf x})\hat{\Psi}_{g}^{\dagger}({\bf x})\hat{\Psi}_{g}({\bf x})\hat{\Psi}_{g}({\bf x}). \label{eq:u} \end{eqnarray} Here, $U_{i}({\bf x})$ are conservative optical potentials created by far-detuned light, ${\bf E}({\bf x},t)=2{\bf E}_{0}({\bf x})\cos(\omega_{{\rm L}}t)$ is the electric-field of near-resonant standing-wave light, and ${\bf d}=\langle e|\hat{{\bf d}}|g\rangle$ is the electric dipole moment. 
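On the single-atom level, an atom that decays out of the $\{|g\rangle,|e\rangle\}$ manifold never returns, so the conditional two-level dynamics is generated by an effective $2\times2$ non-Hermitian matrix. A minimal numerical sketch (in a frame rotating at the laser frequency, assuming resonance $\delta=\omega_{\rm L}-\omega_{0}=0$, $\hbar=1$, and illustrative parameter values) recovers the well-known effective ground-state loss rate $|\Omega|^{2}/\Gamma$ for $\Omega\ll\Gamma$:

```python
import cmath

Gamma, Omega = 1.0, 0.1   # illustrative: weak drive, Omega << Gamma

# Conditional (no-decay) dynamics: i d(psi)/dt = H_eff psi with
# H_eff = [[0, Omega/2], [Omega/2, -1j*Gamma/2]] in the {|g>, |e>} basis.
trace = -1j * Gamma / 2
det = -(Omega / 2) ** 2
disc = cmath.sqrt(trace ** 2 / 4 - det)
eigvals = [trace / 2 + disc, trace / 2 - disc]

# An eigenmode's population decays as exp(2*Im(lam)*t); the slowly
# decaying branch is the dressed ground state.
loss_rate = min(-2 * lam.imag for lam in eigvals)
print(loss_rate)          # ~0.0101, close to Omega**2 / Gamma = 0.01
```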
We then transform to the rotating frame \begin{equation} \hat{\tilde{\Psi}}_{e}({\bf x})\equiv e^{i\omega_{{\rm L}}t}\hat{\Psi}_{e}({\bf x}). \end{equation} Applying the rotating wave approximation, the Heisenberg equation of the motion becomes \begin{eqnarray} \dot{\hat{\tilde{\Psi}}}_{e}({\bf x}) & = & i\delta\hat{\tilde{\Psi}}_{e}({\bf x})+\frac{i\Omega^{*}({\bf x})}{2}\hat{\Psi}_{g}({\bf x})-\frac{\Gamma}{2}\hat{\tilde{\Psi}}_{e}({\bf x}),\\ \Omega({\bf x}) & \equiv & \frac{2{\bf d}\cdot{\bf E}_{0}({\bf x})}{\hbar}, \end{eqnarray} where $\delta\equiv\omega_{{\rm L}}-\omega_{0}$ is the detuning frequency. We assume the condition $|\delta+i\Gamma/2|>\Omega$ and then perform the adiabatic elimination of the excited state $|e\rangle$: \begin{eqnarray} \hat{\tilde{\Psi}}_{e}({\bf x}) & \simeq & -\frac{1}{\delta+\frac{i\Gamma}{2}}\frac{\Omega^{*}({\bf x})}{2}\hat{\Psi}_{g}({\bf x})\\ & \simeq & \frac{i\Omega^{*}({\bf x})}{\Gamma}\hat{\Psi}_{g}({\bf x}), \end{eqnarray} where we use the resonant condition $\delta\ll\Gamma$. The master equation of the ground-state atoms now reduces to \begin{equation} \frac{d\hat{\rho}}{dt}=-\frac{i}{\hbar}(\hat{H}_{{\rm eff}}\hat{\rho}-\hat{\rho}\hat{H}_{{\rm eff}}^{\dagger})+\int dx\frac{|\Omega({\bf x})|^{2}}{\Gamma}\hat{\Psi}({\bf x})\hat{\rho}\hat{\Psi}^{\dagger}({\bf x}),\label{eq:master} \end{equation} where we abbreviate the index $g$ and redefine the mass and the potential by incorporating the renormalization factor induced by the excited state. We here also introduce the effective non-Hermitian Hamiltonian \begin{equation} \hat{H}_{{\rm eff}}=\int d{\bf x}\hat{\Psi}^{\dagger}({\bf x})\left(-\frac{\hbar^{2}\nabla^{2}}{2m}+U({\bf x})-\frac{i\hbar|\Omega({\bf x})|^{2}}{2\Gamma}\right)\hat{\Psi}({\bf x})+\frac{u}{2}\int d{\bf x}\hat{\Psi}^{\dagger}({\bf x})\hat{\Psi}^{\dagger}({\bf x})\hat{\Psi}({\bf x})\hat{\Psi}({\bf x}). 
\end{equation} For the sake of simplicity, hereafter we set $u=0$ and consider a one-dimensional (1D) system subject to an off-resonant optical lattice \begin{equation} U(x)=\frac{V_{0}}{2}\cos\left(\frac{2\pi x}{d}\right), \end{equation} where $V_{0}$ is the lattice depth and $d=\lambda/2$ is the lattice constant. We consider superimposing a near-resonant optical lattice having the (shorter) lattice constant $d/2=\lambda/4$ with a phase shift $\phi$. The resulting time-evolution equation is \cite{KSJ98,RS05,YA17nc} \begin{eqnarray} \frac{d\hat{\rho}}{dt}&=&-\frac{i}{\hbar}(\hat{H}_{{\rm eff}}\hat{\rho}-\hat{\rho}\hat{H}_{{\rm eff}}^{\dagger})+\int dx\gamma_{0}\left(1+\sin\left(\frac{4\pi x}{d}+\phi\right)\right)\hat{\Psi}(x)\hat{\rho}\hat{\Psi}^{\dagger}(x),\label{eq:lind}\\ \hat{H}_{{\rm eff}} & = & \int dx\hat{\Psi}^{\dagger}(x)\left(-\frac{\hbar^{2}\nabla^{2}}{2m}+V(x)\right)\hat{\Psi}(x)-\frac{i\hbar\gamma_{0}}{2}\hat{N},\label{eq:non-her}\\ V(x) & = & \frac{V_{0}}{2}\left(\cos\left(\frac{2\pi x}{d}\right)-i\gamma\sin\left(\frac{4\pi x}{d}+\phi\right)\right),\\ \gamma & \equiv & \frac{2|d|^{2}\mathcal{E}_{0}^{2}}{\hbar\Gamma V_{0}},\;\;\;\;\;\;\gamma_{0} \equiv \frac{V_{0}\gamma}{\hbar}. \end{eqnarray} We remark that the effective Hamiltonian $\hat{H}_{\rm eff}$ satisfies the PT symmetry at the phase shift $\phi=0$ aside from a global imaginary constant $-i\hbar \gamma_0 N/2$ because of the relation $V(x)=V^*(-x)$. In this case, the real-to-complex spectral transition occurs at $\gamma_{\rm PT}\simeq 0.25$. Below this threshold $\gamma<\gamma_{\rm PT}$, the PT symmetry is unbroken, i.e., every eigenstate respects the symmetry and the entire spectrum is real. In particular, the band structure associated with the complex potential $V(x)$ exhibits gaps. 
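The location of the threshold can be estimated analytically at $k=0$. There, the two plane waves $e^{\pm 2\pi ix/d}$ (kinetic energy $4E_{r}$, with $E_{r}=\hbar^{2}\pi^{2}/(2md^{2})$ the recoil energy) are coupled directly by the imaginary lattice, with matrix elements $\mp\gamma V_{0}/4$, and at second order through the $k=0$ plane wave by the real lattice. A sketch of this two-level estimate (second-order degenerate perturbation theory; we assume $V_{0}=4E_{r}$, the depth used in Fig.~\ref{fig:num}) places the exceptional point of the second and third bands at $\gamma=V_{0}/(16E_{r})=0.25$:

```python
import cmath

V0 = 4.0   # lattice depth in units of the recoil energy E_r (as in Fig. 2)

def band23_splitting(gamma):
    """k = 0 splitting of the second and third bands from degenerate
    perturbation theory in the plane waves exp(+-2*pi*i*x/d).
    Direct coupling via the imaginary lattice: -+ gamma*V0/4;
    second-order coupling via the k = 0 state: (V0/4)**2 / (4 - 0)."""
    t_pm = -gamma * V0 / 4 + (V0 / 4) ** 2 / 4   # <+1| V_eff |-1>
    t_mp = +gamma * V0 / 4 + (V0 / 4) ** 2 / 4   # <-1| V_eff |+1>
    return 2 * cmath.sqrt(t_pm * t_mp)           # imaginary above threshold

gamma_PT = V0 / 16                               # = 0.25 for V0 = 4 E_r
print(band23_splitting(0.2))                     # real: PT unbroken
print(band23_splitting(0.5))                     # imaginary: PT broken
```

Exact diagonalization in a larger plane-wave basis shifts this estimate only slightly, consistent with $\gamma_{\rm PT}\simeq0.25$ quoted above.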
As we increase $\gamma$, the energy gap between the second and third bands at $k=0$ becomes smaller and closes at $\gamma=\gamma_{\rm PT}$, where two merging eigenstates at $k=0$ coalesce into one, leading to the exceptional point. Above the threshold $\gamma>\gamma_{\rm PT}$, the PT symmetry is broken, i.e., eigenstates around $k=0$ turn to have complex eigenvalues. \subsection{Relating the non-Hermitian spectrum to loss dynamics } Signatures of the PT-symmetry breaking in the non-Hermitian Hamiltonian $\hat{H}_{{\rm eff}}$ can be extracted from the dynamics of the density matrix $\hat{\rho}$ obeying the master equation~(\ref{eq:master}). To this end, we focus on the time evolution of the loss rate $L(t)$ of the ground-state atoms: \begin{equation} L(t)\equiv-\frac{1}{N(t)}\frac{dN(t)}{dt}=-\frac{1}{N(t)}{\rm Tr}\left[\frac{d\hat{\rho}(t)}{dt}\int dx\hat{\Psi}^{\dagger}(x)\hat{\Psi}(x)\right]. \end{equation} We then introduce the quantity $\mathcal{L}_{\lambda}(\hat{\rho})$ defined as follows: \begin{equation} \mathcal{L}_{\lambda}(\hat{\rho})\equiv\frac{1}{N}{\rm Tr}\left[e^{\frac{i\hat{H}_{{\rm eff}}\lambda}{\hbar}}\hat{\rho}e^{-\frac{i\hat{H}_{{\rm eff}}^{\dagger}\lambda}{\hbar}}\right],\;\;\;N\equiv{\rm Tr}[\hat{N}\hat{\rho}].\label{eq:calL} \end{equation} From Eqs. 
(\ref{eq:non-her}) and (\ref{eq:calL}), one can readily show the following relation: \begin{equation} \left.\frac{\partial\mathcal{L}_{\lambda}(\hat{\rho}(t))}{\partial\lambda}\right|_{\lambda=0}=L(t).\label{eq:pre1} \end{equation} To simplify the expression, we employ the spectral decomposition of the many-body non-Hermitian Hamiltonian: \begin{eqnarray} \hat{H}_{{\rm eff}}&=&\sum_{\alpha}\mathcal{E}_{\alpha}|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}|,\\ \mathcal{E}_{\alpha}&=&E_{\alpha}+\frac{i\Gamma_{\alpha}}{2}-\frac{i\hbar\gamma_{0} N_\alpha}{2}, \end{eqnarray} where $\mathcal{E}_{\alpha}$ is a complex eigenvalue with real part $E_{\alpha}$, $\Gamma_{\alpha}$ characterizes its imaginary part, $N_\alpha$ is a particle number, and $|\Psi_{\alpha}\rangle$ and $|\Phi_{\alpha}\rangle$ are the corresponding right and left eigenvectors, respectively, which satisfy $\langle\Psi_{\alpha}|\Phi_{\beta}\rangle = \delta_{\alpha\beta}$ and $\sum_{\alpha}|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}| = \hat{I}$.
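These biorthogonality relations, and the nonorthogonality of the right eigenvectors among themselves that becomes important below, can be made concrete with a toy $2\times2$ non-Hermitian matrix (a hypothetical example in Python):

```python
import cmath

# Toy non-Hermitian matrix H = [[0, a], [b, 0]] with a != b.
a, b = 1.0, 0.25
H = [[0.0, a], [b, 0.0]]
lam = cmath.sqrt(a * b)                 # eigenvalues are +lam and -lam

# Unnormalised right eigenvectors |Psi_s> and left (row) eigenvectors <Phi_s|;
# dividing by <Phi_s|Psi_s> = 2ab would give <Phi_s|Psi_s'> = delta_{ss'}.
psi = {+1: [a, lam], -1: [a, -lam]}     # H |Psi_s> = s*lam |Psi_s>
phi = {+1: [b, lam], -1: [b, -lam]}     # <Phi_s| H = s*lam <Phi_s|

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def braket(w, v):                       # <w|v>, conjugating the bra
    return sum(wi.conjugate() * vi for wi, vi in zip(w, v))

overlap_LR = braket(phi[+1], psi[-1])   # a*b - lam**2 = 0 (biorthogonality)
overlap_RR = braket(psi[+1], psi[-1])   # a**2 - a*b != 0 (nonorthogonality)
print(overlap_LR, overlap_RR)
```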
From Eq.~(\ref{eq:pre1}), one can relate the loss dynamics to the spectrum of the non-Hermitian Hamiltonian via \begin{eqnarray} L(t) & = & -\frac{1}{N(t)}\frac{2}{\hbar}{\rm Im}\left[\sum_{\alpha}\mathcal{E}_{\alpha}{\rm Tr}\left[|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}|\hat{\rho}(t)\right]\right]\nonumber\\ & = & \sum_{\alpha}u_{\alpha}(t)\frac{\Gamma_{\alpha}}{\hbar}+2\sum_{\alpha}v_{\alpha}(t)\frac{E_{\alpha}}{\hbar}+\gamma_{0},\label{eq:main1} \end{eqnarray} where we introduce the coefficients $u_{\alpha},v_{\alpha}$ by \begin{eqnarray} u_{\alpha}(t) & \equiv & -\frac{1}{N(t)}{\rm Re}\left[{\rm Tr}\left[|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}|\hat{\rho}(t)\right]\right]=-\frac{1}{N(t)}{\rm Tr}\left[\frac{|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}|+|\Phi_{\alpha}\rangle\langle\Psi_{\alpha}|}{2}\hat{\rho}(t)\right],\\ v_{\alpha}(t) & \equiv & -\frac{1}{N(t)}{\rm Im}\left[{\rm Tr}\left[|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}|\hat{\rho}(t)\right]\right]=-\frac{1}{N(t)}{\rm Tr}\left[\frac{|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}|-|\Phi_{\alpha}\rangle\langle\Psi_{\alpha}|}{2}\hat{\rho}(t)\right].\label{eq:v} \end{eqnarray} Here, the density matrix $\hat{\rho}(t)$ appearing in $L(t)$ is originally defined as the Lindblad master equation in Eq. (\ref{eq:lind}). On the other hand, since we consider noninteracting particles with one-body loss, the loss dynamics is fully characterized by a single-particle effective non-Hermitian dynamics. From Eq. (\ref{eq:main1}), we infer that the long-time average of the loss rate (subtracted by the offset $\gamma_{0}$) can be used to probe signatures of the PT-symmetry breaking: \begin{equation} \mathcal{O}(\gamma)\equiv\overline{\gamma_{0}-L(t)}\equiv\frac{1}{T}\int_{0}^{T}dt\left[\gamma_{0}-L(t)\right], \end{equation} where we choose $T$ to be $T\gg1/\gamma_{0}$. Above the threshold ($\gamma>\gamma_{\rm PT}$), there exists a conjugate pair of complex eigenvalues in the spectrum. 
Since a positive imaginary eigenvalue ($\Gamma_{\alpha}>0$) indicates exponential lasing of the associated eigenmode, which turns out to be evident after the reduction of $L(t)$ to the language of single-particle dynamics, in the time regime $t>1/\gamma_{0}$ the dominant contribution in the sum of Eq. (\ref{eq:main1}) will come from the eigenstate $\alpha_{{\rm max}}$ whose imaginary part of the eigenvalue is maximum, resulting in the approximation \begin{equation}\label{eq:ex1} \mathcal{O}(\gamma)\propto\frac{\Gamma_{\alpha_{{\rm max}}}}{\hbar}>0\;\;\;\;\;\;\;(\gamma>\gamma_{\rm PT}). \end{equation} In particular, in the vicinity of the threshold, the imaginary part grows with a square-root scaling, and thus we obtain $\mathcal{O}(\gamma)\propto\sqrt{\gamma-\gamma_{\rm PT}}$. Meanwhile, below the threshold ($\gamma<\gamma_{{\rm PT}}$), all the eigenvalues are real, and thus the first term in Eq. (\ref{eq:main1}) is absent; there are no lasing eigenmodes, in contrast to the PT-broken regime. One can also check that the time average of $v_{\alpha}(t)$ should vanish, i.e., $\overline{v_{\alpha}(t)}\simeq0$ in the PT-unbroken regime. To see this, we start by considering the simplest conditional density matrix evolving in the subspace of the initial total-atom number $N_0$, \begin{equation} \hat{\tilde{\rho}}_{N_0}(t)=\hat{P}_{N_0}\hat{\rho}(t)\hat{P}_{N_0}=e^{-i\hat{H}_{{\rm eff}}t/\hbar}\hat{\rho}(0)e^{i\hat{H}_{{\rm eff}}^{\dagger}t/\hbar}, \end{equation} where $\hat{P}_{N_0}$ denotes the projection operator on the subspace of the atom number $N_0$.
Then, we can calculate $v_{\alpha}(t)$ in this particular sector as \begin{eqnarray} & & -\frac{1}{N(t)}{\rm Im}\left[{\rm Tr}\left[|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}|\hat{\tilde{\rho}}_{N_0}(t)\right]\right]\\ & = & -\frac{e^{-N_0\gamma_{0}t}}{N(t)}{\rm Im}\left[{\rm Tr}\left[\sum_{\beta}e^{iE_{\beta}t/\hbar}|\Phi_{\beta}\rangle\langle\Psi_{\beta}|\Psi_{\alpha}\rangle\langle\Phi_{\alpha}|e^{-iE_{\alpha}t/\hbar}\hat{\tilde{\rho}}_{N_0}(0)\right]\right]\\ & = & -\frac{e^{-N_0\gamma_{0}t}}{N(t)}\sum_{\beta\neq\alpha}{\rm Im}\left[e^{i(E_{\beta}-E_{\alpha})t/\hbar}\langle\Psi_{\beta}|\Psi_{\alpha}\rangle{\rm Tr}\left[|\Phi_{\beta}\rangle\langle\Phi_{\alpha}|\hat{\tilde{\rho}}_{N_0}(0)\right]\right].\label{eq:pre2} \end{eqnarray} Crucially, in non-Hermitian systems the left and right eigenvectors constitute a biorthonormal set, while the right eigenvectors themselves are in general nonorthogonal to each other, i.e., $\langle\Psi_{\beta}|\Psi_{\alpha}\rangle\neq0$ for $\beta\neq\alpha$, leading to an oscillatory behavior of the loss rate. When $E_{\beta}\neq E_{\alpha}$ for $\beta\neq\alpha$, the quantity~\eqref{eq:pre2} vanishes after taking the time average over a sufficiently long time. Generalizing the argument to the other sectors of atom numbers $N-1,N-2,\ldots$, we can conclude that $\overline{v_{\alpha}(t)}\simeq0$, resulting in the relation in the PT-unbroken regime \begin{equation}\label{eq:ex2} \mathcal{O}(\gamma)\simeq0\;\;\;\;\;\;(\gamma<\gamma_{{\rm PT}}). \end{equation} Figure~\ref{fig:num} shows results of numerical simulations for time evolutions of the (negative) loss rate $\gamma_0-L(t)$ in (a) the PT-unbroken and (b) the PT-broken regimes.
The results demonstrate the qualitative features in Eqs.~\eqref{eq:ex1} and \eqref{eq:ex2} as expected from the above arguments; in the PT-unbroken regime, the loss rate $\gamma_0-L(t)$ oscillates around zero in time due to the nonorthogonality inherent in the non-Hermitian dynamics (Fig.~\ref{fig:num}(a)), while in the PT-broken regime it relaxes to a nonzero value corresponding to the maximum imaginary part of the eigenvalues (Fig.~\ref{fig:num}(b)). We note that, since the Liouvillian of the master equation in the case of one-body loss is a quadratic operator, the density matrix during the time evolution can be exactly expressed as a Gaussian state when the initial state is an equilibrium state. \begin{figure}[t] \centering\includegraphics[width=150mm,bb=0 0 433 162]{images/numerical_simu.pdf} \caption{Numerical results on the dynamics of the (negative) loss rate $\gamma_0-L(t)$ in (a) the PT-unbroken regime at $\gamma=0.2$ and (b) the PT-broken regime at $\gamma=0.5$. The initial state is assumed to be the Gibbs distribution of fermions $\hat{\rho}(0)\propto e^{-\beta\hat{H}}$ with $\beta=1/(k_{\rm B}T)$, $T=0.5T_F$, and $\hat{H}$ being the Hamiltonian including an off-resonant periodic potential (but without dissipative terms). The dissipative optical potential is quenched at time $t=0$, and the density matrix is evolved in time according to the master equation~\eqref{eq:lind}. The time scale is given in units set by the recoil energy $E_r=\hbar^2\pi^2/(2md^2)$. We set the lattice depth to be $V_0=4E_r$.} \label{fig:num} \end{figure} \section{Experimental result} We successfully realize a PT-symmetric quantum many-body system using a Bose-Einstein condensate (BEC) of ${}^{174}\text{Yb}$ atoms in an optical superlattice. One-body loss is introduced by exploiting the rich internal degrees of freedom of Yb atoms. This chapter describes the experimental schemes and results for one-body loss.
\subsection {Experimental setup} Our experimental setup for the preparation of ${}^{174}\text{Yb}$ BEC and the basic optical lattice system are similar to our previous study~\cite{Tomita2017}. In the present experiment, we create a two-dimensional (2D) array of one-dimensional (1D) tubes by tight confinement along the X and Z axes provided by a 2D optical lattice with 532 nm light (see Fig.~\ref{fig:setup} (a)). We additionally introduce the superlattice which consists of two optical lattices generated by the retro-reflection of laser beams at 556 nm and 1112 nm along the Y axis (see Fig.~\ref{fig:setup} (b)). \begin{figure}[b] \centering\includegraphics[width=6in]{images/setup.pdf} \caption{(a) Experimental setup of our lattice system. Two-dimensional optical lattices (532 nm) are formed along the X and Z axes to create tight confinement potentials for 1D tubes. (b) Off-resonant (top) and on-resonant (bottom) lattice potentials with PT-symmetry. The superlattice of on-resonant and off-resonant lattices is formed along the Y axis. (c) Schematic diagram of frequency stabilization for the superlattice of on-resonant and off-resonant lattices. For the light source of the 556-nm and 1112-nm lattices, we used two fiber lasers operating at 1112 nm. One of the fiber lasers was used for the on-resonant lattice. The laser frequency was stabilized with an ultra low expansion cavity. Then 556 nm light was obtained with wave-guided second harmonic generation (SHG). Another fiber laser was used for the off-resonant lattice. The laser frequency was stabilized with a wavemeter. The frequency difference between the two fiber lasers was monitored with another wavemeter.} \label{fig:setup} \end{figure} Here we call the lattice generated by 556 nm light ``on-resonant lattice'' since the wavelength is resonant to the ${}^{1}\text{S}_0$-${}^{3}\text{P}_1$ transition of Yb atoms and the lattice by 1112 nm ``off-resonant lattice'' since it does not correspond to any resonance. 
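As a quick cross-check of the lattice energy scales used below, the recoil energy $E_{R,\lambda}=h^{2}/(2m\lambda^{2})$ (equivalently $E_r=\hbar^{2}\pi^{2}/(2md^{2})$ for lattice spacing $d=\lambda/2$) can be evaluated in a few lines; the atomic mass is approximated here by the mass number, $m\simeq174\,$u.

```python
h = 6.62607015e-34     # Planck constant [J s]
u = 1.66053906660e-27  # atomic mass unit [kg]
m_yb = 174 * u         # mass of 174Yb, rounded to the mass number [kg]

def recoil_frequency(lam):
    """Recoil energy E_{R,lambda} = h^2/(2 m lambda^2), returned as E_R/h in Hz."""
    return h / (2 * m_yb * lam**2)

f_532 = recoil_frequency(532e-9)    # 532-nm lattice: roughly 4 kHz
f_1112 = recoil_frequency(1112e-9)  # 1112-nm lattice: a factor (532/1112)^2 smaller
```

This gives $E_{R,532\,{\rm nm}}/h\approx4$~kHz, consistent with the kHz-scale loss rates reported below.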
To realize the PT symmetry of the system, the relative phase between the on-resonant and off-resonant lattices is finely tuned. The on-resonant lattice provides the excitation of the atoms in the ground state ${}^{1}\text{S}_0$ of our interest to the excited ${}^{3}\text{P}_1$ state, naturally introducing one-body loss in the system. However, to prevent an unnecessary heating effect, it is also required that the excited atom should not return to the ground state, as is assumed in the theory. For this purpose, we exploited the repumping technique, which is described in detail in the next subsection. \subsection{Repumping} Yb atoms excited to the ${}^{3}\text{P}_1$ state decay into the ${}^{1}\text{S}_0$ ground state by spontaneous emission. As a result, the atom obtains a recoil energy. The single recoil energy is, however, not enough for the atoms to escape from a deep optical lattice such as that in our experiment (see Sec.~3.3 for details). Moreover, because photon absorption and emission constitute a closed cycle, an atom escapes from the trap only after repeated scattering raises its total energy above the depth of the trap potential. This causes serious heating in the system, which is quite unfavorable for our purpose. In order to realize the successful removal of an atom after a single excitation, we additionally use the ${}^{3}\text{P}_1$-${}^{3}\text{S}_1$ excitation (see Fig.~\ref{fig:repump}). \begin{figure}[t] \centering\includegraphics[width=6in]{images/repump.pdf} \caption{Repumping scheme for one-body loss. (a) Atoms in the ground state ${}^{1}\text{S}_0$ are excited to the ${}^{3}\text{P}_1$ state. (b) Atoms in the ${}^{3}\text{P}_1$ state are excited to the ${}^{3}\text{S}_1$ state, rather than decaying to the ground state. (c) Atoms in the ${}^{3}\text{S}_1$ state spontaneously decay to one of the ${}^{3}\text{P}_2$, ${}^{3}\text{P}_1$, and ${}^{3}\text{P}_0$ states.
(d) All atoms are excited again to the ${}^{3}\text{S}_1$ state.} \label{fig:repump} \end{figure} The ${}^{3}\text{P}_1$-${}^{3}\text{S}_1$ transition is a strong dipole-allowed transition, and if the Rabi frequency of the ${}^{3}\text{P}_1$-${}^{3}\text{S}_1$ transition is taken to be quite large compared to that of the ${}^{1}\text{S}_0$-${}^{3}\text{P}_1$ transition, the atoms in the ${}^{3}\text{P}_1$ state are more likely to be excited into the ${}^{3}\text{S}_1$ state than to undergo a spontaneous decay to the ${}^{1}\text{S}_0$ ground state. Consequently, the excited atoms in the ${}^{3}\text{S}_1$ state will spontaneously decay into any one of the ${}^{3}\text{P}_2$, ${}^{3}\text{P}_1$, and ${}^{3}\text{P}_0$ states. An atom in the ${}^{3}\text{P}_1$ state will be excited into the ${}^{3}\text{S}_1$ state again. In addition, an atom in the ${}^{3}\text{P}_2$ (${}^{3}\text{P}_0$) state will be excited into the ${}^{3}\text{S}_1$ state by an additional ${}^{3}\text{P}_2$-${}^{3}\text{S}_1$ (${}^{3}\text{P}_0$-${}^{3}\text{S}_1$) transition. As a result, once an atom in the ground state is excited into the ${}^{3}\text{P}_1$ state, the atom is removed from the trap without going back to the ground state. The typical Rabi frequency in our experiment is 1 MHz (60 MHz) for the ${}^{1}\text{S}_0$-${}^{3}\text{P}_1$ (${}^{3}\text{P}_1$-${}^{3}\text{S}_1$) excitation. Therefore, the effective two-level model with one-body loss is realized. In the following we refer to the three light beams resonant with the ${}^{3}\text{P}_x$-${}^{3}\text{S}_1$ ($x=0,1,2$) transitions as repumping beams. \begin{figure}[t] \centering\includegraphics[width=6in]{images/one-body-repump.pdf} \caption{(a)-(c) Absorption images of atoms after 16 ms time-of-flight. (a) Resonant ${}^{1}\text{S}_0$-${}^{3}\text{P}_1$ light is not irradiated. (b) The resonant light is applied for 1.5 ms with simultaneous application of repumping beams.
(c) The resonant light is applied for 1.5 ms without repumping beams. (d)-(f) Atom linear density integrated along the vertical direction. (d)-(f) correspond to (a)-(c), respectively.} \label{fig:one-body-repump} \end{figure} Absorption images after 16 ms time-of-flight (ToF) reveal the effects of the repumping beams. Figure~\ref{fig:one-body-repump}(a) shows a typical BEC ToF signal obtained without ${}^{1}\text{S}_0$-${}^{3}\text{P}_1$ resonant light irradiation. In the presence of the ${}^{1}\text{S}_0$-${}^{3}\text{P}_1$ resonant light irradiation, the atom numbers are reduced, as shown in Figs.~\ref{fig:one-body-repump}(b) and (c). When no repumping beams are applied, considerable thermal components appear, as shown in Fig.~\ref{fig:one-body-repump}(c). In contrast, owing to the simultaneous application of the repumping beams, the heating effect is significantly reduced, as shown in Fig.~\ref{fig:one-body-repump}(b), which indicates that the atoms are removed from the trap without going back to the ground state in the presence of the repumping beams. \subsection{Relative phase measurement} The relative phase $\phi$ between the on-resonant and off-resonant lattices should be finely tuned in order to satisfy the PT-symmetry condition, which is essential in the present experiment. In order to determine the relative phase, we measure the dependence of the atom loss on the relative phase. \begin{figure}[t] \centering\includegraphics[width=6in]{images/one-body-scan.pdf} \caption{(a),(b) Off-resonant (top) and on-resonant (bottom) lattice potentials. The relative phases $\phi$ are (a) $+\pi/2$ and (b) $-\pi/2$, respectively. (c) Remaining atom number as a function of the frequency difference between the on-resonant and off-resonant lattices.
The solid line shows the fitting result using a sinusoidal function.} \label{fig:one-body-scan} \end{figure} In our experiment the off-resonant lattice is deep enough that the trapped atoms are localized in each site. In this case, the behavior of the atom loss caused by the on-resonant lattice depends on the relative phase. When the relative phase is $\phi =+\pi/2$ ($\phi =-\pi/2$), the atom loss takes its maximum (minimum) value (see Fig.~\ref{fig:one-body-scan} (a) and (b)). In order to measure the dependence of the atom loss on the relative phase, we performed the following experiment. After preparation of the BEC in an optical trap, the optical lattices were adiabatically ramped up to the depths of ($V_x$, $V_y$, $V_z$)=($18E_{R, 532\text{nm}}$, $8E_{R,1112\text{nm}}$, $15 E_{R, 532\text{nm}}$), where $V_i$ ($i=x,y,z$) are the lattice depths along the $i$ direction, and $E_{R, \lambda}=h^2/(2m\lambda^2)$ is the recoil energy. Here, the atoms form a Mott insulator state in the center of the trap. Then the on-resonant lattice and the repumping beams were simultaneously applied for 0.5 ms, and the remaining atom number was measured by absorption imaging. The relative phase is controlled by changing the frequency difference between the fundamental light for the on-resonant lattice (556 nm) before second-harmonic generation and the off-resonant lattice light (1112 nm). Figure~\ref{fig:one-body-scan}(c) shows the remaining atom number as a function of the frequency difference. A clear periodic dependence is observed, from which we can determine the phase at which the PT-symmetric condition ($\phi=0$) is satisfied. This clear dependence did not appear without the repumping beams. In this way, we successfully realize a PT-symmetric quantum many-body system in a well-controllable manner. \subsection{Loss rate measurement} As we discussed in the theoretical framework, the atom loss rate is an important quantity to characterize the phase of the PT-symmetric system.
To measure the loss behavior, we first adiabatically turned on the two 532-nm optical lattices along the X and Z directions as well as the off-resonant lattice along the Y direction up to ($V_x$, $V_y$, $V_z$)=($18E_{R, 532\text{nm}}$, $5 E_{R, 1112\text{nm}}$, $15 E_{R, 532\text{nm}}$). Then we suddenly turned on the on-resonant lattice and the repumping beams. The relative phase was stabilized to the PT-symmetric condition point. Figure~\ref{fig:one-body} shows the remaining atom numbers as a function of hold time in the PT-symmetric lattice. \begin{figure}[t] \centering\includegraphics[width=6in]{images/one-body.pdf} \caption{(a),(b) Remaining atom number as a function of hold time with (a) small or (b) large one-body loss. The solid lines show exponential fits. From the fits, we estimated the loss rate $\gamma'$ to be (a) $2.9$ or (b) $9.6$ kHz. (c),(d) Residuals of the fits in the cases of (a),(b), respectively.} \label{fig:one-body} \end{figure} The solid lines are fitting results with the exponential function \begin{equation} N(t)=N_0 \exp(-\gamma' t). \end{equation} Figure~\ref{fig:one-body}(a) shows a weak-loss case. From the exponential fit of the remaining atom number, we estimated the loss rate $\gamma'$ to be $2.9(1)$ kHz. At this parameter, PT symmetry is not broken, and the observed decay is well fitted by an exponential due to the constant decay term represented by the last term of Eq.~(\ref{eq:non-her}). Figure~\ref{fig:one-body}(b) shows a strong-loss case. The light intensity of the ${}^{1}\text{S}_0$-${}^{3}\text{P}_1$ transition is about 3.3 times higher than in the case of Fig.~\ref{fig:one-body}(a). At this parameter, PT symmetry is expected to be broken between the first excited band and the second excited band in the non-interacting case. The atom loss was well fitted by an exponential decay.
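The exponential fit used above can be reproduced with a simple log-linear least-squares estimate of $\gamma'$; the sketch below runs on synthetic data mimicking the weak-loss case (illustration only, not the measured data set):

```python
import math

def fit_decay_rate(times, counts):
    """Least-squares slope of ln N(t) = ln N0 - gamma' * t, returning gamma'."""
    logs = [math.log(n) for n in counts]
    tbar = sum(times) / len(times)
    lbar = sum(logs) / len(logs)
    num = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
    den = sum((t - tbar) ** 2 for t in times)
    return -num / den

# Synthetic exponential decay with gamma' = 2.9 kHz (t in ms, so 1/ms == kHz).
gamma_true = 2.9
ts = [0.1 * k for k in range(11)]
Ns = [1.0e4 * math.exp(-gamma_true * t) for t in ts]
gamma_fit = fit_decay_rate(ts, Ns)
```

On noiseless synthetic data the fit recovers $\gamma'$ exactly; in practice, shot-to-shot atom-number noise sets the quoted uncertainty.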
While additional loss would be the signature of the PT-broken phase, the loss rate $\gamma'$ was $9.6(3)$ kHz, which corresponds, within the error bars, to 3.3 times the loss rate observed in the weak-loss case of Fig.~\ref{fig:one-body}(a). This means that the current measurement uncertainty is too large to detect the expected transition between the PT-unbroken and PT-broken phases. \section{Discussion and Conclusion} We developed the theoretical framework and experimental setup for a PT-symmetric lattice using one-body loss. As a result, we successfully created a PT-symmetric non-Hermitian many-body system using an ultracold Bose gas. The atom-loss behaviors we measured were well described by the usual one-body atom-loss model. In order to observe the unique behavior of the transition between the PT-unbroken and PT-broken phases with respect to atom loss, the stability and sensitivity of the atom-number measurement should be improved. Here we used the absorption imaging method, with which it is difficult to measure atom numbers smaller than $10^3$ because of noise on the images originating from interference fringes. We have already developed another imaging method, fluorescence imaging: atoms are recaptured by a magneto-optical trap (MOT) operating on the ${}^1\text{S}_0$ - ${}^1\text{P}_1$ transition, and the fluorescence from the MOT is measured~\cite{Kato2012, Kato2016, Takasu2017}. We have already confirmed that we can measure atom numbers of about $10^2$ Yb atoms with an accuracy of about $10$ atoms~\cite{Kato2012, Takasu2017}. Another possible direction is to create an ultracold atom system with a {\it gain} term. Naively, it is hard to include a gain term by injection of atoms from the environment. However, an effective gain term with an anti-magic lattice in a two-orbital system has been theoretically suggested~\cite{Gong2018}. In addition, an effective atom-gain term by use of a tilted lattice has been theoretically suggested~\cite{Kreibich2014}.
When operated in the presence of interactions, the experimental system developed here should allow one to study novel many-body phenomena unique to non-Hermitian regimes such as anomalous quantum phase transitions and criticality \cite{YA17nc,LAS18,NM18,MN19}. Even in noninteracting regimes, unique nonequilibrium features such as the violation of the Lieb-Robinson bound \cite{YA18,DB19} and the unconventional Kibble-Zurek mechanism \cite{YS17,DB192} should be investigated with the help of single-atom-resolved measurement techniques. \section*{Acknowledgment} This work was supported by the Grant-in-Aid for Scientific Research of the Ministry of Education, Culture, Sports, Science and Technology / Japan Society for the Promotion of Science (MEXT/JSPS KAKENHI) Nos. JP25220711, JP17H06138, JP18H05405, JP18H05228, and JP19K23424; the Impulsing Paradigm Change through Disruptive Technologies (ImPACT) program; Japan Science and Technology Agency CREST (No. JPMJCR1673); and the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Number JPMXS0118069021. Y.A. acknowledges support from the Japan Society for the Promotion of Science through KAKENHI Grant No.~JP16J03613. R.H. was supported by the Japan Society for the Promotion of Science through the Program for Leading Graduate Schools (ALPS) and a JSPS fellowship (JSPS KAKENHI Grant No.~JP17J03189). Y.K. was supported by a JSPS fellowship (JSPS KAKENHI Grant No.~JP17J00486).
\section{Introduction} All models that intend to describe the baryon asymmetry of the universe (BAU) by electroweak baryogenesis (EWB)\cite{Kuzmin:1985mm} depend on extensions of the Standard Model (SM), since the SM fails on the following grounds: {\bf A)} First order phase transition: Sakharov\cite{Sakharov:1967dj} pointed out that baryogenesis necessarily requires non-equilibrium physics. The expansion of the universe is too slow at the electroweak scale, and one needs bubble nucleation during a first order EWPT. The phase diagram of the Standard Model has been studied in detail\cite{Kajantie:1996qd}, and it is well known that there is no first order phase transition in the Standard Model for the experimentally allowed Higgs mass. {\bf B)} Sphaleron bound: To avoid washout after the phase transition, the {\it vev} of the broken Higgs field has to meet the criterion $\langle\Phi\rangle\gtrsim T_c$, i.e. a strong first order phase transition. This results in the Shaposhnikov bound on the Higgs mass\cite{Farrar:hn}. {\bf C)} Lack of CP violation: Since the only source of CP violation in the Standard Model is the Cabibbo-Kobayashi-Maskawa (CKM) matrix (apart from the neutrino mass matrix, which provides an even tinier source of CP violation), one has to face the fact that it is too weak to account for the observed magnitude of the BAU. In the following we will address the last point, the lack of sufficient CP violation, in the framework of kinetic theory. The strong first order phase transition is assumed to occur at about $T_c \simeq 100$ GeV and is parametrised by the velocity of the phase boundary (wall velocity) $v_w$ and its thickness $l_w$. In this article we will focus on the following main points: \begin{itemize} \item We will demonstrate how CP violating sources can arise in semiclassical Boltzmann type equations. \item We argue that the Jarlskog determinant as an upper bound of CP violation in the SM is not valid during the EWPT.
\item We give a rough estimate of the CP violating sources during the EWPT and conclude that the source is by orders larger than considering the Jarlskog determinant but still insufficient to explain the magnitude of the BAU. \end{itemize} \section{Axial Currents in Kinetic Theory} Starting point in kinetic theory are the exact Schwinger-Dyson equation for the two point functions in the closed time-path (CTP) formalism. \begin{eqnarray} && \hskip -1.0 cm e^{-i\Diamond} \{ S_0^{-1} - \Sigma_R , S^< \} - e^{-i\Diamond} \{ \Sigma^< , S_R \} \nonumber\\ &=& \frac{1}{ 2} e^{-i\Diamond} \{ \Sigma^< , S^> \} - \frac{1}{ 2} e^{-i\Diamond} \{ \Sigma^> , S^< \}, \\ && \hskip -1.0 cm e^{-i\Diamond} \{ S_0^{-1} - \Sigma_R , { A} \} - e^{-i\Diamond} \{ \Sigma_A , S_R \} \nonumber =0, \end{eqnarray} where we have used the definitions and relations \begin{eqnarray} && S^{\bar t}:=S^{--}, \, S^t:=S^{++}, \, S^<:=S^{+-}, \, S^>:=S^{-+}, \nonumber \\ && { A}:=\frac{i}{2}(S^<-S^>), S_R:=S^t-\frac{i}{2}(S^<+S^>), \nonumber \\ && 2\Diamond\{A,B\} := \partial_{X^\mu} A \partial_{k_\mu} B -\partial_{k_\mu} A \partial_{X^\mu} B. \end{eqnarray} S denotes the Wightman function, $S_0^{-1}$ the free inverse propagator and $\Sigma$ the selfenergy. $S^{\pm\pm}$ denotes the entries of the $2\times 2$ Keldysh matrices and all functions depend on the average coordinate X and the momentum k. To simplify the equations, one can perform a gradient expansion. The terms on the left hand side will be expanded up to first order, whereas the collision terms on the right hand side vanish in equilibrium and are just kept up to zeroth order. The expansion parameter is formally $\partial_X/k$, which close to equilibrium and for typical thermal excitations reduces to $( l_w * T_c )^{-1}$. We will not solve the full transport equations, but only look for the appearance of CP-violating source terms. 
To start with, we consider a toy model with only one flavour and a mass term that contains a space-dependent complex phase\cite{Kainulainen:2001cn}. The inverse propagator in a convenient coordinate system reads \begin{eqnarray} k_0 \gamma_0 + k_3 \gamma_3 + m_R(X_3) + i m_I(X_3) \gamma_5. \end{eqnarray} Using the spin projection operator $P_s = \frac{1}{2}(1 + s\gamma_0\gamma_3\gamma_5)$ the Schwinger-Dyson equations can be decoupled and finally yield ($m e^{i \theta} = m_R + i \, m_I$) \begin{eqnarray} \left( k_0^2 - k_3^2 - m^2 + s \frac{ m^2\theta^\prime}{k_0} \right) Tr(\gamma_0 S^{<}_s) &=& 0 \\ \left( k_3 \partial_{X_3} - \frac{(m^2)^\prime}{2} \partial_{k_3} +s\frac{ (m^2\theta^\prime)^\prime }{2 k_0} \partial_{k_3} \right) Tr(\gamma_0 S^{<}_s) &=& Coll. \end{eqnarray} We see that, in our approximation, the quasi-particle picture is still valid, since the Wightman function fulfills an algebraic constraint. Furthermore, the first-order corrections lead to a source term that is proportional to the derivative of the complex phase of the mass and is therefore CP violating. Performing the calculation with several flavours, one finds the generalization of this CP-violating term, reading $Tr(m^{\dagger\prime} m - m^\dagger m^\prime)$. \section{Enhancement of CP Violation in the SM} Jarlskog proposed an upper bound for CP-violating effects in the Standard Model.
Following her argument of rephasing invariants, the first CP-violating quantity constructed out of the Yukawa couplings is the Jarlskog determinant\cite{Jarlskog:1988ii} \begin{eqnarray} && \hskip -0.3in Im \, det[\tilde m_u \,\tilde m_u^\dagger, \tilde m_d \,\tilde m_d^\dagger] \nonumber \\ &=& Tr(C m_u^4 C^\dagger m_d^4 C m_u^2 C^\dagger \tilde m_d^2) \nonumber \\ &\approx& -2J \cdot m_t^4 m_b^4 m_c^2 m_s^2. \label{jarlsdet} \end{eqnarray} When applied to the case of electroweak baryogenesis, one finds the upper bound on the BAU\cite{Farrar:sp} \begin{eqnarray} &[\frac{g_W^2}{2M_W^2}]^7 J \cdot m_t^6 m_b^4 m_c^2 m_s^2 \approx 10^{-22}.& \label{shapbound} \end{eqnarray} However, two assumptions that are needed for this argument are not fulfilled during the electroweak phase transition\cite{Konstandin:2003dx}. {\bf A)} Since the mass matrix is space dependent, one needs space-dependent diagonalization matrices to transform to the mass eigenbasis. This leads to new physically relevant quantities that can be CP violating as well. As a generalization of the CP-violating source term in the kinetic toy model above, we found $Tr(m^{\dagger\prime} m - m^\dagger m^\prime)$. However, in the Standard Model this term vanishes at tree level, since the mass matrix is proportional to its derivative. {\bf B)} The argument of Jarlskog is based on the fact that the examined quantity is perturbative in the Yukawa couplings. The calculation of the selfenergy in a thermal plasma involves integrations over divergent logarithms of the form \begin{eqnarray} \hskip -0.2in h_2(\omega,\kappa) &=& \frac{1}{\kappa} \int_0^\infty \frac{d|{\bf p}|}{2\pi} \Big( \frac{|{\bf p}|}{\epsilon_h} L_2(\epsilon_h,|{\bf p}|) f_B(\epsilon_h) \nonumber \\ && \hskip 0.5in - \frac{|{\bf p}|}{\epsilon_u} L_1(\epsilon_u,|{\bf p}|) f_F(\epsilon_u) \Big) .
\nonumber \\ \hskip -0.3in L_{1/2}(\epsilon,|{\bf p}|) &=& \log \left( \frac {\omega^2 - \kappa^2 \pm \Delta + 2 \epsilon \omega + 2 \kappa |{\bf p}|} {\omega^2 - \kappa^2 \pm \Delta + 2 \epsilon \omega - 2 \kappa |{\bf p}|} \right) \nonumber \\ && + \log \left( \frac {\omega^2 - \kappa^2 \pm \Delta - 2 \epsilon \omega + 2 \kappa |{\bf p}|} {\omega^2 - \kappa^2 \pm \Delta - 2 \epsilon \omega - 2 \kappa |{\bf p}|} \right), \nonumber \end{eqnarray} that lead to a significant space dependence of the selfenergy. \begin{figure}[ht] \centerline{\epsfxsize=3.1in\epsfbox{konstandin_fig1.eps}} \caption{Dependence of $h_2$ on the Higgs vev $\Phi$ in \% of its value $\Phi^0=246$ GeV at $T=0$. The external energies and momenta are fixed at $\omega=105$ GeV to $\omega=120$ GeV, $k=100$ GeV, and the mass of the quark in the loop is $m_u=100$ GeV. \label{inter}} \end{figure} Since the space dependence is due to a resonance with the plasma particles, the selfenergy is highly sensitive to the quark masses and the W mass, which both change continuously in the wall profile. \begin{figure}[ht] \centerline{\epsfxsize=3.1 in\epsfbox{konstandin_fig2.eps}} \caption{Dependence of $h_2^\prime$ on the mass of the quark in the loop with an on-shell external quark of mass $m_e=4$ GeV. The Higgs vev is chosen in a range of 25\% to 100\% of its value in the broken phase at $T=0$. \label{inter2}} \end{figure} However, since CP-violating effects only appear as an interference of the two-loop and the one-loop term, an estimation of the source term leads to the upper bound\cite{Konstandin:2003dx} $$ \frac{\delta\omega}{\omega} \sim J \cdot m_t^4 m_s^2 m_b^2 m_c^2 \frac{ \alpha_w^3 h_2^\prime} {m_W^8 l_w T^3} \approx 10^{-15}.$$ We conclude that the axial current is enhanced by seven orders of magnitude. Still, the CP-violating source due to the CKM matrix might be too weak to account for the BAU.
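The algebraic structure behind the Jarlskog determinant can be checked numerically: for up- and down-type mass-squared matrices $M_u^2={\rm diag}(m_{u_i}^2)$ and $M_d^2=C\,{\rm diag}(m_{d_i}^2)\,C^\dagger$, the exact identity $|{\rm Im}\,\det[M_u^2,M_d^2]|=2|J|\prod_{i<j}|m_{u_i}^2-m_{u_j}^2|\prod_{i<j}|m_{d_i}^2-m_{d_j}^2|$ holds for any unitary $C$. The sketch below uses rounded, PDG-like mixing angles and quark masses purely as illustrative inputs.

```python
import cmath, math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def diag(v):
    return [[v[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

def ckm(t12, t13, t23, delta):
    """Standard parametrization of a 3x3 unitary mixing matrix."""
    s12, c12 = math.sin(t12), math.cos(t12)
    s13, c13 = math.sin(t13), math.cos(t13)
    s23, c23 = math.sin(t23), math.cos(t23)
    e = cmath.exp(1j * delta)
    return [[c12 * c13, s12 * c13, s13 / e],
            [-s12 * c23 - c12 * s23 * s13 * e,
             c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
            [s12 * s23 - c12 * c23 * s13 * e,
             -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]]

# Illustrative angles (rad) and masses (GeV); rounded PDG-like values.
C = ckm(0.227, 0.0037, 0.0422, 1.14)
mu2 = [0.002**2, 1.27**2, 173.0**2]   # (m_u^2, m_c^2, m_t^2)
md2 = [0.005**2, 0.095**2, 4.18**2]   # (m_d^2, m_s^2, m_b^2)

Cdag = [[C[i][j].conjugate() for i in range(3)] for j in range(3)]
Mu2 = diag(mu2)                        # up-type mass basis
Md2 = matmul(matmul(C, diag(md2)), Cdag)

P1, P2 = matmul(Mu2, Md2), matmul(Md2, Mu2)
comm = [[P1[i][j] - P2[i][j] for j in range(3)] for i in range(3)]
lhs = abs(det3(comm).imag)

# Jarlskog invariant from a rephasing-invariant quartet of CKM elements.
J = abs((C[0][1] * C[1][2] * C[0][2].conjugate() * C[1][1].conjugate()).imag)
prod = lambda m2: abs((m2[2] - m2[1]) * (m2[2] - m2[0]) * (m2[1] - m2[0]))
rhs = 2 * J * prod(mu2) * prod(md2)
```

Since every factor of quark masses appears through squared-mass differences, the smallness of $J\sim10^{-5}$ and of the light-quark masses suppresses the invariant, which is the content of the bound quoted above.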
\vskip 0.3 cm {\bf Acknowledgements.} I would like to thank M.G.~Schmidt and T.~Prokopec for the nice collaboration.
\section{Introduction} Klein-Gordon (KG) and Dirac oscillators \cite{Moshinsky 1989,Bruce 1993,Dvoeg 1994,Mirza 2004} have received much attention over the years. KG-oscillators have been studied in G\"{o}del-type spacetimes (e.g., \cite{Moshinsky 1989,Bruce 1993,Dvoeg 1994,Das 2008,Carvalho 2016,Garcia 2017,Vitoria 2016}), in cosmic string spacetime and Kaluza-Klein theory backgrounds (e.g., \cite{Ahmed1 2021,Boumal 2014}), in Minkowski spacetime with a space-like dislocation \cite{Mustafa1 2022}, in Som-Raychaudhuri spacetime \cite{Wang 2015}, and in (1+2)-dimensional G\"{u}rses space-time backgrounds (e.g., \cite{Gurses 1994,Ahmed1 2019,Mustafa2 2022}). The KG-oscillators in a (1+2)-dimensional G\"{u}rses spacetime described by the metric \begin{equation} ds^{2}=-dt^{2}+dr^{2}-2\Omega r^{2}dtd\theta +r^{2}\left( 1-\Omega ^{2}r^{2}\right) d\theta ^{2}=g_{\mu \nu }dx^{\mu }dx^{\nu };\text{ }\mu ,\nu =0,1,2, \label{e1} \end{equation} were investigated by Ahmed \cite{Ahmed1 2019}, using $a_{_{0}}=b_{_{0}}=e_{_{0}}=1$, $b_{_{1}}=c_{_{0}}=\lambda _{_{0}}=0$, and vorticity $\Omega =-\mu /3$, in the G\"{u}rses metric \begin{equation} ds^{2}=-\phi dt^{2}+2qdtd\theta +\frac{h^{2}\psi -q^{2}}{a_{_{0}}}d\theta ^{2}+\frac{1}{\psi }dr^{2} \label{e2} \end{equation} (i.e., as in Eq.(5) of \cite{Gurses 1994}), where \begin{equation} \phi =a_{_{0}},\,\psi =b_{_{0}}+\frac{b_{_{1}}}{r^{2}}+\frac{3\lambda _{_{0}}}{4}r^{2},\,q=c_{_{0}}+\frac{e_{_{0}}\mu }{3}r^{2},\,h=e_{_{0}}r,\,\lambda _{_{0}}=\lambda +\frac{\mu ^{2}}{27}. \label{e3} \end{equation} In this note, we shall show that there are more quantum mechanical features embedded in the spectroscopic structure of the KG-oscillators in the background of such a G\"{u}rses spacetime metric (\ref{e1}) than those reported by Ahmed \cite{Ahmed1 2019}, should this model be properly addressed. Throughout this note, such KG-oscillators shall be called KG-G\"{u}rses oscillators. We organize the current note in the following manner.
In section 2, we revisit KG-oscillators in the (1+2)-dimensional G\"{u}rses spacetime of (\ref{e1}) and present them in a more general form that includes position-dependent mass (PDM, which is a metaphoric notion) settings along with Mirza-Mohadesi's KG-oscillator recipe \cite{Mirza 2004}. We observe that the KG-G\"{u}rses oscillators are introduced as a byproduct of the very nature of the G\"{u}rses spacetime structure. This motivates us to first elaborate and discuss, in section 3, the effects of G\"{u}rses spacetime on the energy levels of the KG-G\"{u}rses oscillators, without the KG-oscillator prescription of Mirza-Mohadesi \cite{Mirza 2004}. Therein, we report that such KG-G\"{u}rses oscillators admit vorticity-energy correlations as well as spacetime associated degeneracies (STADs). In section 4, we discuss Ahmed's model \cite{Ahmed1 2019}, which includes the Mirza-Mohadesi \cite{Mirza 2004} recipe, and pinpoint Ahmed's \cite{Ahmed1 2019} improper treatment of the model at hand. We consider the PDM KG-G\"{u}rses oscillators in section 5. We discuss and report KG pseudo-G\"{u}rses oscillators in section 6, where we observe that they admit isospectrality and invariance with the KG G\"{u}rses-oscillators and inherit the same vorticity-energy correlations as well as STADs. Our concluding remarks are given in section 7.
\section{KG-G\"{u}rses oscillators and PDM settings} The covariant and contravariant metric tensors corresponding to the (1+2)-dimensional G\"{u}rses spacetime of (\ref{e1}), respectively, read \begin{equation} g_{\mu \nu }=\left( \begin{tabular}{ccc} $-1\smallskip $ & $0$ & $-\Omega r^{2}$ \\ $0$ & $1\smallskip $ & $0$ \\ $-\Omega r^{2}$ & $\,0$ & $\,r^{2}\left( 1-\Omega ^{2}r^{2}\right) $ \end{tabular} \right) \Longleftrightarrow g^{\mu \nu }=\left( \begin{tabular}{ccc} $\left( \Omega ^{2}r^{2}-1\right) $ & $0\smallskip $ & $-\Omega $ \\ $0$ & $1\smallskip $ & $0$ \\ $-\Omega $ & $\,0$ & $\,1/r^{2}$ \end{tabular} \right) \text{ };\text{ \ }\det \left( g_{\mu \nu }\right) =-r^{2}. \label{e4} \end{equation} Then the corresponding KG-equation is given by \begin{equation} \frac{1}{\sqrt{-g}}\partial _{\mu }\left( \sqrt{-g}g^{\mu \nu }\partial _{\nu }\Psi \right) =m^{2}\Psi . \label{e5} \end{equation} However, we shall now use the momentum operator \begin{equation} p_{\mu }\longrightarrow p_{\mu }+i\mathcal{F}_{\mu }, \label{e6} \end{equation} so that it incorporates the KG-oscillator prescription of Mirza-Mohadesi \cite{Mirza 2004} as well as the position-dependent mass (PDM) settings proposed by Mustafa \cite{Mustafa1 2022}. Here $\mathcal{F}_{\mu }=\left( 0,\mathcal{F}_{r},0\right) $, with $\mathcal{F}_{r}=\eta r$, $\eta =m\omega $, as in \cite{Ahmed1 2019}, or $\mathcal{F}_{r}=\eta r+g^{\prime }\left( r\right) /4g\left( r\right) $ to also include PDM settings as in Mustafa \cite{Mustafa1 2022}. This would suggest that Ahmed's model is retrieved when the positive-valued scalar multiplier $g\left( r\right) =1$. Nevertheless, the reader should be aware that the regular momentum operator $p_{\mu }$ is replaced by the PDM-momentum operator $p_{\mu }+i\mathcal{F}_{\mu }$ to describe PDM KG-particles in general (for more details on this issue the reader is advised to refer to \cite{Mustafa1 2022}).
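A quick numerical sanity check of (\ref{e4}) — that the displayed $g^{\mu\nu}$ is indeed the inverse of $g_{\mu\nu}$ and that $\det(g_{\mu\nu})=-r^{2}$ — can be performed at arbitrary sample values of $r$ and $\Omega$:

```python
def g_cov(r, W):
    """Covariant Guerses metric g_{mu nu} of Eq. (e1)/(e4), with W = Omega."""
    return [[-1.0, 0.0, -W * r**2],
            [0.0, 1.0, 0.0],
            [-W * r**2, 0.0, r**2 * (1 - W**2 * r**2)]]

def g_con(r, W):
    """Contravariant metric g^{mu nu} as displayed in Eq. (e4)."""
    return [[W**2 * r**2 - 1, 0.0, -W],
            [0.0, 1.0, 0.0],
            [-W, 0.0, 1.0 / r**2]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

r, W = 1.7, 0.3                           # arbitrary sample point (r, Omega)
prod = matmul(g_cov(r, W), g_con(r, W))   # should be the 3x3 identity
det_cov = det3(g_cov(r, W))               # should equal -r**2
```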
Under such assumptions, the KG-equation (\ref{e5}) would transform into \begin{equation} \frac{1}{\sqrt{-g}}\left( \partial _{\mu }+\mathcal{F}_{\mu }\right) \left[ \sqrt{-g}g^{\mu \nu }\left( \partial _{\nu }-\mathcal{F}_{\nu }\right) \Psi \right] =m^{2}\Psi , \label{e7} \end{equation} which consequently yields \begin{equation} \left\{ -\partial _{t}^{2}+\left( \Omega \,r\,\partial _{t}-\frac{1}{r}\partial _{\theta }\right) ^{2}+\partial _{r}^{2}+\frac{1}{r}\partial _{r}-M\left( r\right) -m^{2}\right\} \Psi =0, \label{e8} \end{equation} where \begin{equation} M\left( r\right) =\frac{\mathcal{F}_{r}}{r}+\mathcal{F}_{r}^{\prime }+\mathcal{F}_{r}^{2}. \label{e9} \end{equation} We now substitute \begin{equation} \Psi \left( t,r,\theta \right) =\exp \left( i\left[ \ell \theta -Et\right] \right) \psi \left( r\right) =\exp \left( i\left[ \ell \theta -Et\right] \right) \frac{R\left( r\right) }{\sqrt{r}} \label{e10} \end{equation} to imply \begin{equation} R^{\prime \prime }\left( r\right) +\left[ \lambda -\frac{\left( \ell ^{2}-1/4\right) }{r^{2}}-\tilde{\omega}^{2}r^{2}-\tilde{M}\left( r\right) \right] R\left( r\right) =0, \label{e11} \end{equation} where $\ell =0,\pm 1,\pm 2,\cdots $ is the magnetic quantum number, \begin{equation} \tilde{M}\left( r\right) =-\frac{3}{16}\left( \frac{g^{\prime }\left( r\right) }{g\left( r\right) }\right) ^{2}+\frac{1}{4}\frac{g^{^{\prime \prime }}\left( r\right) }{g\left( r\right) }+\frac{1}{4}\frac{g^{\prime }\left( r\right) }{rg\left( r\right) }+\frac{1}{2}\frac{g^{\prime }\left( r\right) }{g\left( r\right) }\eta r, \label{e12} \end{equation} and \begin{equation} \lambda =E^{2}-2\,\Omega \,\ell \,E-2\eta -m^{2}\text{ ; \ }\tilde{\omega}^{2}=\Omega ^{2}E^{2}+\eta ^{2}. \label{e13} \end{equation} It is obvious that we retrieve Ahmed's model \cite{Ahmed1 2019} when $g\left( r\right) =1$.
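The reduction from (\ref{e9}) to (\ref{e12}) can be verified numerically: with $\mathcal{F}_{r}=\eta r+g^{\prime}/4g$ one should find $M(r)-\tilde{M}(r)=2\eta+\eta^{2}r^{2}$, the constant $2\eta$ being absorbed into $\lambda$ and $\eta^{2}r^{2}$ into $\tilde{\omega}^{2}r^{2}$ of (\ref{e13}). The profile $g(r)=1+ar^{2}$ and the parameter values below are arbitrary illustrative choices.

```python
eta, a = 0.7, 0.4           # illustrative parameter values

g = lambda r: 1 + a * r**2
dg = lambda r: 2 * a * r
d2g = lambda r: 2 * a

def F(r):
    """F_r = eta*r + g'/(4g), the PDM-extended oscillator term."""
    return eta * r + dg(r) / (4 * g(r))

def M(r, h=1e-5):
    """Eq. (e9): M = F/r + F' + F^2, with F' by central finite difference."""
    dF = (F(r + h) - F(r - h)) / (2 * h)
    return F(r) / r + dF + F(r)**2

def M_tilde(r):
    """Eq. (e12), written out term by term."""
    return (-3.0 / 16 * (dg(r) / g(r))**2 + d2g(r) / (4 * g(r))
            + dg(r) / (4 * r * g(r)) + 0.5 * (dg(r) / g(r)) * eta * r)

r = 1.3
diff = M(r) - M_tilde(r)    # should equal 2*eta + eta**2 * r**2
```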
Moreover, we observe that the KG-G\"{u}rses oscillators are introduced as a byproduct of the very nature of the G\"{u}rses spacetime structure. This motivates us to first elaborate and discuss the effects of G\"{u}rses spacetime on the energy levels of the KG-G\"{u}rses oscillators, without the KG-oscillator prescription of Mirza-Mohadesi \cite{Mirza 2004} (i.e., with $\eta =0$). \section{KG-G\"{u}rses oscillators: vorticity-energy correlations and spacetime associated degeneracies} It is obvious that KG-G\"{u}rses oscillators are introduced by the very structure of G\"{u}rses spacetime. That is, for $\eta =0$ and $g\left( r\right) =1$, our KG-equation (\ref{e11}) collapses into the two-dimensional Schr\"{o}dinger oscillator% \begin{equation} R^{\prime \prime }\left( r\right) +\left[ \lambda -\frac{\left( \ell ^{2}-1/4\right) }{r^{2}}-\Omega ^{2}E^{2}r^{2}\right] R\left( r\right) =0, \label{e14} \end{equation}% which admits exact textbook solvability so that the eigenvalues and radial eigenfunctions, respectively, read% \begin{equation} \lambda =2\left\vert \Omega E\right\vert \left( 2n_{r}+\left\vert \ell \right\vert +1\right) \label{e15} \end{equation}% and \begin{equation} R\left( r\right) \sim r^{\left\vert \ell \right\vert +1/2}\exp \left( -\frac{\left\vert \Omega E\right\vert r^{2}}{2}\right) L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \Omega E\right\vert r^{2}\right) \Longleftrightarrow \psi \left( r\right) \sim r^{\left\vert \ell \right\vert }\exp \left( -\frac{\left\vert \Omega E\right\vert r^{2}}{2}\right) L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \Omega E\right\vert r^{2}\right) , \label{e16} \end{equation}% where $L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \Omega E\right\vert r^{2}\right) $ are the generalized Laguerre polynomials.
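For completeness, we sketch the textbook argument behind (\ref{e15}) (our own summary, with $\omega \equiv \left\vert \Omega E\right\vert $). Writing $R\left( r\right) =r^{\left\vert \ell \right\vert +1/2}\exp \left( -\omega r^{2}/2\right) f\left( x\right) $ with $x=\omega r^{2}$, equation (\ref{e14}) reduces to the associated Laguerre equation \begin{equation} xf^{\prime \prime }\left( x\right) +\left( \left\vert \ell \right\vert +1-x\right) f^{\prime }\left( x\right) +\left( \frac{\lambda }{4\omega }-\frac{\left\vert \ell \right\vert +1}{2}\right) f\left( x\right) =0, \end{equation} and normalizability forces the series to terminate, i.e., $\lambda /4\omega -\left( \left\vert \ell \right\vert +1\right) /2=n_{r}\in \left\{ 0,1,2,\cdots \right\} $, which reproduces (\ref{e15}) with $f\left( x\right) =L_{n_{r}}^{\left\vert \ell \right\vert }\left( x\right) $.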
Now with the help of (\ref{e13}) and (\ref{e15}) we obtain% \begin{equation} E^{2}-2\,\Omega \,\ell \,E-m^{2}=2\left\vert \Omega E\right\vert \left( 2n_{r}+\left\vert \ell \right\vert +1\right) . \label{e16.1} \end{equation}% \begin{figure}[!ht] \centering \includegraphics[width=0.3\textwidth]{1a.eps} \includegraphics[width=0.3\textwidth]{1b.eps} \caption{\small { The energy levels for KG-G\"{u}rses oscillators of (\ref{e16.4}) and (\ref{e16.5}) are plotted with $m=1$ (a) for $n_r=0, \, \ell=0,\pm1,\pm2$, and (b) for $n_{r}=3$, $\ell=0,\pm1, \pm2,\pm3$.}} \label{fig1} \end{figure}% This result should be dealt with diligently and rigorously, as mandated by the very nature of $\left\vert \Omega E\right\vert =\Omega _{\pm }E_{\pm }\geq 0$ or $\left\vert \Omega E\right\vert =-\Omega _{\mp }E_{\pm }\geq 0$ (which secures the finiteness and square integrability of the radial wavefunction (\ref{e16})), where $\Omega _{\pm }=\pm \left\vert \Omega \right\vert $ and $E_{\pm }=\pm \left\vert E\right\vert $. That is, for $\left\vert \Omega E\right\vert =\Omega _{\pm }E_{\pm }$ in (\ref{e16.1}) we obtain% \begin{equation} E_{\pm }^{2}-2\,\Omega _{\pm }E_{\pm }\,\tilde{n}_{+}-m^{2}=0;\;\tilde{n}_{+}=2n_{r}+\left\vert \ell \right\vert +\ell \,+1, \label{e16.2} \end{equation}% and for $\left\vert \Omega E\right\vert =-\Omega _{\mp }E_{\pm }$ we get% \begin{equation} E_{\pm }^{2}+2\,\Omega _{\mp }E_{\pm }\,\tilde{n}_{-}\,-m^{2}=0. \label{e16.3} \end{equation}% Here $\tilde{n}_{-}=2n_{r}+\left\vert \ell \right\vert -\ell \,+1$. This allows us to cast% \begin{equation} E_{\pm }=\Omega _{\pm }\,\tilde{n}_{+}\pm \sqrt{\Omega ^{2}\tilde{n}_{+}^{2}+m^{2}}\Rightarrow \left\{ \begin{tabular}{l} $E_{+}=\Omega _{\pm }\,\tilde{n}_{+}+\sqrt{\Omega ^{2}\tilde{n}_{+}^{2}+m^{2}}$ \\ $E_{-}=\Omega _{\pm }\,\tilde{n}_{+}-\sqrt{\Omega ^{2}\tilde{n}_{+}^{2}+m^{2}}$% \end{tabular}% \right.
, \label{e16.4} \end{equation}% for $\left\vert \Omega E\right\vert =\Omega _{\pm }E_{\pm }$ and% \begin{equation} E_{\pm }=-\Omega _{\mp \,}\tilde{n}_{-}\,\pm \sqrt{\Omega ^{2}\tilde{n}_{-}^{2}+m^{2}}\Rightarrow \left\{ \begin{tabular}{l} $E_{+}=-\Omega _{-\,}\tilde{n}_{-}\,+\sqrt{\Omega ^{2}\tilde{n}_{-}^{2}+m^{2}}$ \\ $E_{-}=-\Omega _{+\,}\tilde{n}_{-}\,-\sqrt{\Omega ^{2}\tilde{n}_{-}^{2}+m^{2}}$% \end{tabular}% \right. . \label{e16.5} \end{equation}% Consequently, one may rearrange such energy levels and cast them so that% \begin{equation} E_{\pm }^{\left( \Omega _{+}\right) }=\pm \left\vert \Omega \right\vert \,\tilde{n}_{\pm }\pm \sqrt{\Omega ^{2}\tilde{n}_{\pm }^{2}+m^{2}}, \label{e16.6} \end{equation}% for positive vorticity, and% \begin{equation} E_{\pm }^{\left( \Omega _{-}\right) }=\pm \left\vert \Omega \right\vert \,\tilde{n}_{\mp }\pm \sqrt{\Omega ^{2}\tilde{n}_{\mp }^{2}+m^{2}}, \label{e16.7} \end{equation}% for negative vorticity. Notably, we observe that $\tilde{n}_{\pm }\left( \ell =\pm \ell \right) =\tilde{n}_{\mp }\left( \ell =\mp \ell \right) $, which in effect introduces the so-called vorticity-energy correlations, so that $E_{\pm }^{\left( \Omega _{+}\right) }\left( \ell =\pm \ell \right) =E_{\pm }^{\left( \Omega _{-}\right) }\left( \ell =\mp \ell \right) $. We have, therefore, four branches of energy levels, such that the upper half (above the $E=0$ line) is represented by $E_{+}$ and the lower half (below the $E=0$ line) is represented by $E_{-}$ in the correlations mentioned above. For massless KG-G\"{u}rses oscillators we obtain $E_{\pm }^{\left( \Omega _{+}\right) }=\pm 2\left\vert \Omega \right\vert \,\tilde{n}_{\pm }$ and $E_{\pm }^{\left( \Omega _{-}\right) }=\pm 2\left\vert \Omega \right\vert \,\tilde{n}_{\mp }$. Moreover, in Figures 1(a) and 1(b) we observe yet a new type of degeneracy in each branch of the energy levels (i.e., in each quarter of the figures).
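The vorticity-energy correlations quoted above follow directly from the definitions of $\tilde{n}_{\pm }$ in (\ref{e16.2}) and (\ref{e16.3}) (a one-line check): \begin{equation} \tilde{n}_{+}\left( \ell \right) =2n_{r}+\left\vert \ell \right\vert +\ell +1,\quad \tilde{n}_{-}\left( -\ell \right) =2n_{r}+\left\vert -\ell \right\vert -\left( -\ell \right) +1=\tilde{n}_{+}\left( \ell \right) , \end{equation} so that (\ref{e16.6}) and (\ref{e16.7}) indeed coincide under $\ell \rightarrow -\ell $. The degeneracies observed in Figures 1(a) and 1(b) can be traced to the same structure: $\tilde{n}_{+}$ is independent of $\ell $ for $\ell =-\left\vert \ell \right\vert $, and $\tilde{n}_{-}$ is independent of $\ell $ for $\ell =+\left\vert \ell \right\vert $.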
That is, states with the quantum number $\tilde{n}_{+}=2n_{r}+\left\vert \ell \right\vert +\ell \,+1$ collapse into the $\ell =0$ state for all $\ell =-\left\vert \ell \right\vert $, and states with $\tilde{n}_{-}=2n_{r}+\left\vert \ell \right\vert -\ell \,+1$ collapse into the $\ell =0$ state for all $\ell =+\left\vert \ell \right\vert $. This type of degeneracy is introduced by the structure of spacetime (G\"{u}rses spacetime is used here) and therefore such degeneracies should be called, hereinafter, spacetime associated degeneracies (STADs). \section{KG-G\"{u}rses plus Mirza-Mohadesi's oscillators} We now consider KG-G\"{u}rses plus Mirza-Mohadesi's oscillators with $\eta \neq 0$ and $g\left( r\right) =1$. In this case, our KG-equation (\ref{e11}) collapses again into the two-dimensional Schr\"{o}dinger oscillator% \begin{equation} R^{\prime \prime }\left( r\right) +\left[ \lambda -\frac{\left( \ell ^{2}-1/4\right) }{r^{2}}-\tilde{\omega}^{2}r^{2}\right] R\left( r\right) =0, \label{e17} \end{equation}% which admits exact textbook solvability so that the eigenvalues and radial eigenfunctions, respectively, read% \begin{equation} \lambda =2\left\vert \tilde{\omega}\right\vert \left( 2n_{r}+\left\vert \ell \right\vert +1\right) =2\left\vert \Omega E\right\vert \sqrt{1+\frac{\eta ^{2}}{\Omega ^{2}E^{2}}}\left( 2n_{r}+\left\vert \ell \right\vert +1\right) \label{e18} \end{equation}% and \begin{equation} R\left( r\right) \sim r^{\left\vert \ell \right\vert +1/2}\exp \left( -\frac{\left\vert \tilde{\omega}\right\vert r^{2}}{2}\right) L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \tilde{\omega}\right\vert r^{2}\right) \Longleftrightarrow \psi \left( r\right) \sim r^{\left\vert \ell \right\vert }\exp \left( -\frac{\left\vert \tilde{\omega}\right\vert r^{2}}{2}\right) L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \tilde{\omega}\right\vert r^{2}\right) .
\label{e19} \end{equation}% Then, equation (\ref{e13}) along with (\ref{e18}) implies% \begin{equation} E^{2}-2\Omega E\ell -2\left\vert \Omega E\right\vert \sqrt{1+\frac{\eta ^{2}}{\Omega ^{2}E^{2}}}\left( 2n_{r}+\left\vert \ell \right\vert +1\right) -\left( m^{2}+2\eta \right) =0. \label{e20} \end{equation}% It is obvious that for $\eta =0$ in (\ref{e20}) one would exactly obtain the results for the KG-G\"{u}rses oscillators discussed above. In Figure 2(a), we notice that the vorticity-energy correlations as well as STADs are now only partially valid because of the energy shifts introduced by Mirza-Mohadesi's \cite{Mirza 2004} parameter $\eta $. In Figures 2(b) and 2(c) we can clearly observe such shifts in each quarter of the figures. That is, quarters 1 and 2 are for $\Omega =\Omega _{+}=+\left\vert \Omega \right\vert $ (i.e., for $E_{\pm }^{\left( \Omega _{+}\right) }$), and 3 and 4 are for $\Omega =\Omega _{-}=-\left\vert \Omega \right\vert $ (i.e., for $E_{\pm }^{\left( \Omega _{-}\right) }$). At this point, it should be pointed out that this equation was improperly treated by Ahmed \cite{Ahmed1 2019}, as he expressed the energies in terms of $\tilde{\omega}$, where $\tilde{\omega}=\sqrt{\Omega ^{2}E^{2}+\eta ^{2}}$ (see (16) vs (21) with (22) and (16) vs (35) with (36) of \cite{Ahmed1 2019}). That is, the energies are expressed in terms of quantities that themselves depend on the energies, and his results, from his equation (21) to the end of his paper, are thereby rendered misleading and incorrect. His results should therefore be replaced by those reported in the current note.% \begin{figure}[!ht]
\centering \includegraphics[width=0.3\textwidth]{2a.eps} \includegraphics[width=0.3\textwidth]{2b.eps} \includegraphics[width=0.3\textwidth]{2c.eps} \caption{\small { The energy levels for KG-G\"{u}rses plus Mirza-Mohadesi's oscillators of (\ref{e20}) are plotted with $ m=1$ (a) for $\eta=5$, $n_r=1$, $\ell=0, \pm1, \pm2$, (b) for $n_{r}=2$, $\ell=1$, $\eta=0,1,3,6,9$ and (c) for $n_{r}=2$, $\ell=-2$, $\eta=0,1,3,6,9$.}} \label{fig2} \end{figure}% \section{PDM KG-G\"{u}rses oscillators} In this section we consider PDM settings for KG-G\"{u}rses oscillators, where $g\left( r\right) =\exp \left( 2\beta r^{2}\right) ;\;\beta \geq 0$. Under such settings, the KG-equation (\ref{e11}) reads% \begin{equation} R^{\prime \prime }\left( r\right) +\left[ \lambda -\frac{\left( \ell ^{2}-1/4\right) }{r^{2}}-\tilde{\Omega}^{2}r^{2}\right] R\left( r\right) =0, \label{e21} \end{equation}% with% \begin{equation} \lambda =E^{2}-2\,\Omega \,\ell \,E-2\beta -m^{2}\text{ ; \ }\tilde{\Omega}^{2}=\Omega ^{2}E^{2}+\beta ^{2}. \label{e22} \end{equation}% In this case, the eigenvalues and radial wavefunctions, respectively, read% \begin{equation} \lambda =2\left\vert \tilde{\Omega}\right\vert \left( 2n_{r}+\left\vert \ell \right\vert +1\right) =2\left\vert \Omega E\right\vert \sqrt{1+\frac{\beta ^{2}}{\Omega ^{2}E^{2}}}\left( 2n_{r}+\left\vert \ell \right\vert +1\right) , \label{e23} \end{equation}% and \begin{equation} R\left( r\right) \sim r^{\left\vert \ell \right\vert +1/2}\exp \left( -\frac{\left\vert \tilde{\Omega}\right\vert r^{2}}{2}\right) L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \tilde{\Omega}\right\vert r^{2}\right) \Longleftrightarrow \psi \left( r\right) \sim r^{\left\vert \ell \right\vert }\exp \left( -\frac{\left\vert \tilde{\Omega}\right\vert r^{2}}{2}\right) L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \tilde{\Omega}\right\vert r^{2}\right) .
\label{e23.1} \end{equation}% Consequently, the energies are given by% \begin{equation} E^{2}-2\,\Omega \,\ell \,E-2\beta -m^{2}=2\left\vert \Omega E\right\vert \sqrt{1+\frac{\beta ^{2}}{\Omega ^{2}E^{2}}}\left( 2n_{r}+\left\vert \ell \right\vert +1\right) . \label{e24} \end{equation}% Obviously, the effect of $\beta $ on the energy levels is the same as that of the Mirza-Mohadesi oscillator \cite{Mirza 2004} parameter $\eta $. This suggests that Mirza-Mohadesi's oscillators \cite{Mirza 2004} may very well be considered a special case of PDM KG-oscillators. \section{KG pseudo-G\"{u}rses oscillators: vorticity-energy correlations and spacetime associated degeneracies} We now consider a spacetime described by the metric% \begin{equation} ds^{2}=-dt^{2}+g\left( r\right) \,dr^{2}-2\Omega Q\left( r\right) r^{2}dtd\theta +Q\left( r\right) r^{2}\left( 1-\Omega ^{2}Q\left( r\right) r^{2}\right) d\theta ^{2}. \label{e25} \end{equation}% Next, let us introduce a transformation of the radial part so that% \begin{equation} \rho =\sqrt{Q\left( r\right) }r=\int \sqrt{g\left( r\right) }dr\Rightarrow \sqrt{g\left( r\right) }=\sqrt{Q\left( r\right) }\left[ 1+\frac{Q^{\prime }\left( r\right) }{2Q\left( r\right) }r\right] , \label{e26} \end{equation}% where $\rho ,r\in \left[ 0,\infty \right) $, and hence $Q\left( r\right) $ is a positive-valued dimensionless scalar multiplier (so is $g\left( r\right) $). In this case, our spacetime metric (\ref{e25}) now reads% \begin{equation} ds^{2}=-dt^{2}+\,d\rho ^{2}-2\Omega \rho ^{2}dtd\theta +\rho ^{2}\left( 1-\Omega ^{2}\rho ^{2}\right) d\theta ^{2}.
\label{e27} \end{equation}% This metric looks very much like that of G\"{u}rses (\ref{e1}), and consequently the KG-equation (\ref{e14}) that describes KG-G\"{u}rses oscillators is indeed invariant and isospectral with the corresponding KG pseudo-G\"{u}rses oscillators' equation% \begin{equation} R^{\prime \prime }\left( \rho \right) +\left[ \lambda -\frac{\left( \ell ^{2}-1/4\right) }{\rho ^{2}}-\Omega ^{2}E^{2}\rho ^{2}\right] R\left( \rho \right) =0. \label{e27.11} \end{equation} Hence, our KG pseudo-G\"{u}rses oscillators would copy the same energies as the KG-G\"{u}rses oscillators of (\ref{e16.6}) and (\ref{e16.7}) (discussed in section 3), so that% \begin{equation} E_{\pm }^{\left( \Omega _{+}\right) }=\pm \left\vert \Omega \right\vert \,\tilde{n}_{\pm }\pm \sqrt{\Omega ^{2}\tilde{n}_{\pm }^{2}+m^{2}}, \label{e27.1} \end{equation}% for positive vorticity, and% \begin{equation} E_{\pm }^{\left( \Omega _{-}\right) }=\pm \left\vert \Omega \right\vert \,\tilde{n}_{\mp }\pm \sqrt{\Omega ^{2}\tilde{n}_{\mp }^{2}+m^{2}}, \label{e27.2} \end{equation}% for negative vorticity. However, the radial wavefunctions are now given by \begin{equation} R\left( \rho \right) \sim \rho ^{\left\vert \ell \right\vert +1/2}\exp \left( -\frac{\left\vert \Omega E\right\vert \rho ^{2}}{2}\right) L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \Omega E\right\vert \rho ^{2}\right) \Longleftrightarrow \psi \left( \rho \right) \sim \rho ^{\left\vert \ell \right\vert }\exp \left( -\frac{\left\vert \Omega E\right\vert \rho ^{2}}{2}\right) L_{n_{r}}^{\left\vert \ell \right\vert }\left( \left\vert \Omega E\right\vert \rho ^{2}\right) . \label{e27.3} \end{equation} The following notes on our spacetime metric (\ref{e25}) are unavoidable. \begin{description} \item[(a)] The spacetime metric (\ref{e27}) looks very much like the G\"{u}rses spacetime of (\ref{e1}) and should therefore be called, hereinafter, pseudo-G\"{u}rses spacetime.
\item[(b)] If we set $\Omega =-\mu /3$, $a_{_{0}}=1$ in \begin{equation} \phi =a_{_{0}},\,\psi =b_{_{0}}+\frac{b_{_{1}}}{\rho ^{2}}+\frac{3\lambda _{_{0}}}{4}\rho ^{2},\,q=c_{_{0}}+\frac{e_{_{0}}\mu }{3}\rho ^{2},\,h=e_{_{0}}\rho ,\,\lambda _{_{0}}=\lambda +\frac{\mu ^{2}}{27}, \label{e28} \end{equation}% of (\ref{e3}) and use% \begin{equation} Q\left( r\right) =e_{_{0}}+\frac{3c_{_{0}}}{\mu r^{2}}\Longleftrightarrow g\left( r\right) =\frac{\mu e_{_{0}}^{2}r^{2}}{\mu e_{_{0}}r^{2}+3c_{_{0}}}, \label{e29} \end{equation}% (where the parametric values are adjusted so that $Q\left( r\right) $ and $g\left( r\right) $ are positive-valued functions, i.e., $c_{_{0}}<0$), we obtain% \begin{equation} q=c_{_{0}}+\frac{e_{_{0}}\mu }{3}r^{2},\;\psi =\frac{1}{e_{_{0}}}+\frac{3c_{_{0}}}{\mu e_{_{0}}^{2}r^{2}}. \label{e30} \end{equation}% This is yet another feasible structure for the G\"{u}rses spacetime of (\ref{e2}) and (\ref{e3}) with% \begin{equation} b_{_{0}}=\frac{1}{e_{_{0}}},\,b_{_{1}}=\frac{3c_{_{0}}}{\mu e_{_{0}}^{2}},\,\lambda _{_{0}}=0,\text{ }h=e_{_{0}}r. \label{e31} \end{equation} \item[(c)] As long as condition (\ref{e26}) is satisfied, all KG pseudo-G\"{u}rses oscillators (including the one in (b) above) in the spacetime model of (\ref{e27}) admit isospectrality and invariance with the KG-G\"{u}rses oscillators (\ref{e14}), and they inherit the same vorticity-energy correlations, $E_{\pm }^{\left( \Omega _{+}\right) }\left( \ell =\pm \ell \right) =E_{\pm }^{\left( \Omega _{-}\right) }\left( \ell =\mp \ell \right) $, as well as the spacetime associated degeneracies discussed in section 3. \end{description} \section{Concluding remarks} In the current proposal, we revisited KG-oscillators in the (1+2)-dimensional G\"{u}rses spacetime of (\ref{e1}) so that PDM settings and Mirza-Mohadesi's KG-oscillators \cite{Mirza 2004} are included.
We have observed that KG-G\"{u}rses oscillators are introduced as a byproduct of the very nature of the G\"{u}rses spacetime structure. This has, in turn, motivated us to first elaborate and discuss the effects of G\"{u}rses spacetime on the energy levels of the KG-G\"{u}rses oscillators. We have found that such KG-G\"{u}rses oscillators admit vorticity-energy correlations as well as spacetime associated degeneracies (STADs) (documented in Figures 1(a) and 1(b)). However, for KG-G\"{u}rses plus Mirza-Mohadesi's oscillators we have observed that the vorticity-energy correlations as well as STADs are only partially valid because of the energy shifts introduced by Mirza-Mohadesi's \cite{Mirza 2004} parameter $\eta $ (documented in Figures 2(a), 2(b), and 2(c)). Nevertheless, this model was studied by Ahmed \cite{Ahmed1 2019}, whose treatment was improper and whose reported results are incorrect. Consequently, his reported results (from his equation (21) to the end of his paper) should be replaced by the ones reported in the current study. Moreover, we have shown that PDM settings may very well have the same effect on the spectrum as that reported for KG-G\"{u}rses plus Mirza-Mohadesi's oscillators. Yet, a new set of so-called KG pseudo-G\"{u}rses oscillators is introduced and is shown to be invariant and isospectral with KG-G\"{u}rses oscillators. Therefore, such KG pseudo-G\"{u}rses oscillators would inherit the vorticity-energy correlations as well as the STADs of the KG-G\"{u}rses oscillators. \textbf{Data Availability Statement} The author confirms that all relevant data are included in the article and/or its supplementary information files. The author confirms that there are no online supplementary files (for web only publication) in this article. \bigskip
\section{Introduction} In [KNS1], a class of functional relations, the T-system, was proposed. It is a family of functional relations for a set of commuting transfer matrices of solvable lattice models associated with any quantum affine algebra $U_q({\cal G}_{r}^{(1)})$. Using the T-system, we can calculate various physical quantities [KNS2] such as the correlation lengths of the vertex models and the central charges of RSOS models. The T-system is not only a family of transfer matrix functional relations but also a two-dimensional Toda field equation on discrete space-time, and it has beautiful pfaffian and determinant solutions [KOS,KNH,TK] (see also, [T]). In [KS1], the analytic Bethe ansatz [R1] was carried out for fundamental representations of the Yangians $Y({\cal G})$ [D], where ${\cal G}=B_{r}$, $C_{r}$ and $D_{r}$. That is, eigenvalue formulas in dressed vacuum form were proposed for the transfer matrices of solvable vertex models. These formulae are Yangian analogues of the Young tableaux for ${\cal G}$ and satisfy certain semi-standard-like conditions. It was proven that they are free of poles under the Bethe ansatz equation. Furthermore, for the ${\cal G}=B_{r}$ case, these formulae were extended to the case of finite dimensional modules labeled by skew-Young diagrams $\lambda \subset \mu$ [KOS]. In the analytic Bethe ansatz context, the above-mentioned solutions of the T-system correspond to the eigenvalue formulae of the transfer matrices in dressed vacuum form labeled by rectangular Young diagrams $\lambda =\phi, \mu=(m^a)$ (see also, [BR,KLWZ,K,KS2,S2]). The purpose of this paper is to extend similar analyses to the Lie superalgebra ${\cal G}=sl(r+1|s+1)$ [Ka] case (see also [C] for a comprehensive account of Lie superalgebras). Throughout this paper, we frequently use notation similar to that presented in [KS1,KOS,TK]. Studying supersymmetric integrable models is important not only in mathematical physics but also in condensed matter physics (cf.[EK,FK,KE,S1,ZB]).
For example, the supersymmetric $t-J$ model received much attention in connection with high $T_{c}$ superconductivity. In the supersymmetric models, the R-matrix satisfies the graded Yang-Baxter equation [KulSk]. The transfer matrix is defined as a {\it super} trace of the monodromy matrix. As a result, extra signs appear in the Bethe ansatz equation and in the eigenvalue formula of the transfer matrix. There are several inequivalent choices of simple root system for a Lie superalgebra. We treat the so-called distinguished simple root system [Ka] in the main text. We introduce the Young superdiagram [BB1], which is associated with a covariant tensor representation. To be exact, this Young superdiagram differs from the classical one in that it carries a spectral parameter $u$. In contrast to the ordinary Young diagram, there is no restriction on the number of rows. We define a semi-standard-like tableau on it. Using this tableau, we introduce the function ${\cal T}_{\lambda \subset \mu}(u)$ (\ref{Tge1}). This should be the fusion transfer matrix in dressed vacuum form in the analytic Bethe ansatz. We prove the pole-freeness of ${\cal T}^{a}(u)={\cal T}_{(1^{a})}(u)$, a crucial property of the analytic Bethe ansatz. Due to the same mechanism presented in [KOS], the function ${\cal T}_{\lambda \subset \mu}(u)$ has a determinant expression whose matrix elements are only the functions associated with Young superdiagrams of shape $\lambda = \phi $; $\mu =(m)$ or $(1^{a})$. It can be viewed as a quantum analogue of the Jacobi-Trudi and Giambelli formulae for the Lie superalgebra $sl(r+1|s+1)$. Then one can easily show that the function ${\cal T}_{\lambda \subset \mu}(u)$ is free of poles under the Bethe ansatz equation (\ref{BAE}). Among the above-mentioned eigenvalue formulae of transfer matrices in dressed vacuum form associated with rectangular Young superdiagrams, we present a class of transfer matrix functional relations. It is a special case of the Hirota bilinear difference equation [H].
Deguchi and Martin [DM] discussed the spectrum of the fusion model from the point of view of representation theory (see also, [MR]). The present paper will partially give an elementary account of their result from the point of view of the analytic Bethe ansatz. The outline of this paper is given as follows. In section 2, we carry out the analytic Bethe ansatz based upon the Bethe ansatz equation (\ref{BAE}) associated with the distinguished simple roots. The observation that the Bethe ansatz equation can be expressed in terms of the root system of a Lie algebra traces back to [RW] (see also, [Kul] for the $sl(r+1|s+1)$ case). Moreover, Kuniba et al. [KOS] conjectured that the left hand side of the Bethe ansatz equation (\ref{BAE}) can be written as a ratio of certain \symbol{96}Drinfeld polynomials' [D]. We introduce the function ${\cal T}_{\lambda \subset \mu}(u)$, which should be the transfer matrix whose auxiliary space is a finite dimensional module of the super Yangian $Y(sl(r+1|s+1))$ [N] or of the quantum affine superalgebra $U_{q}(sl(r+1|s+1)^{(1)})$ [Y], labeled by the skew-Young superdiagram $\lambda \subset \mu$. The origin of the function ${\cal T}^{1}(u)$ goes back to the eigenvalue formula of the transfer matrix of the Perk-Schultz model [PS1,PS2,Sc], which is a multi-component generalization of the six-vertex model (see also [Kul]). In addition, the function ${\cal T}^{1}(u)$ reduces to the eigenvalue formula of the transfer matrix derived by the algebraic Bethe ansatz (for example, [FK]: the $r=1,s=0$ case; [EK]: the $r=0,s=1$ case; [EKS1,EKS2]: the $r=s=1$ case). In section 3, we propose functional relations, the T-system, associated with the transfer matrices in dressed vacuum form defined in the previous section. Section 4 is devoted to a summary and discussion. In appendix A, we briefly mention the relation between the fundamental $L$ operator and the transfer matrix. In this paper, we treat mainly the expressions related to covariant representations. For contravariant ones, we present several expressions in Appendix B.
Appendices C and D provide some expressions related to non-distinguished simple roots of $sl(1|2)$. Appendix E explains how to represent the eigenvalue formulae of transfer matrices in dressed vacuum form ${\cal T}_{m}(u)$ and ${\cal T}^{a}(u)$ in terms of the functions ${\cal A}_{m}(u)$, ${\cal A}^{a}(u)$, ${\cal B}_{m}(u)$ and ${\cal B}^{a}(u)$, which are analogous to the fusion transfer matrices of $U_q({\cal G}^{(1)})$ vertex models (${\cal G}=sl_{r+1}, sl_{s+1}$). \section{Analytic Bethe ansatz} A Lie superalgebra [Ka] is a ${\bf Z}_2$ graded algebra ${\cal G} ={\cal G}_{\bar{0}} \oplus {\cal G}_{\bar{1}}$ with a product $[\; , \; ]$, whose homogeneous elements $a\in {\cal G_{\alpha}},b\in {\cal G_{\beta}}$ $(\alpha, \beta \in {\bf Z}_2=\{\bar{0},\bar{1} \})$ and $c\in {\cal G}$ satisfy the following relations. \begin{eqnarray} \left[a,b\right] \in {\cal G}_{\alpha+\beta}, \nonumber \\ \left[a,b\right]=-(-1)^{\alpha \beta}[b,a], \\ \left[a,[b,c]\right]=[[a,b],c]+(-1)^{\alpha \beta} [b,[a,c]]. \nonumber \end{eqnarray} The set of non-zero roots can be divided into the set of non-zero even roots (bosonic roots) $\Delta_0^{\prime}$ and the set of odd roots (fermionic roots) $\Delta_1$. For the $sl(r+1|s+1)$ case, they read \begin{equation} \Delta_0^{\prime}= \{ \epsilon_{i}-\epsilon_{j} \} \cup \{\delta_{i}-\delta_{j}\}, i \ne j ;\quad \Delta_1=\{\pm (\epsilon_{i}-\delta_{j})\} \end{equation} where $\epsilon_{1},\dots,\epsilon_{r+1};\delta_{1},\dots,\delta_{s+1}$ are a basis of the dual space of the Cartan subalgebra with the bilinear form $(\ |\ )$ such that \begin{equation} (\epsilon_{i}|\epsilon_{j})=\delta_{i\, j}, (\epsilon_{i}|\delta_{j})=(\delta_{i}|\epsilon_{j})=0 , (\delta_{i}|\delta_{j})=-\delta_{i\, j}. \end{equation} There are several choices of simple root system, reflecting choices of Borel subalgebra. The simplest system of simple roots is the so-called distinguished one [Ka] (see figure \ref{distinguished}).
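As a worked illustration of this bilinear form (our own example), consider $sl(2|2)$, i.e., $r=s=1$: the distinguished simple roots are $\alpha_1=\epsilon_{1}-\epsilon_{2}$, $\alpha_2=\epsilon_{2}-\delta_{1}$, $\alpha_3=\delta_{1}-\delta_{2}$, with the Gram matrix \begin{equation} \left( (\alpha_a|\alpha_b) \right)_{1\le a,b \le 3}= \left( \begin{array}{ccc} 2 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & -2 \end{array} \right) , \end{equation} so that $(\alpha_2|\alpha_2)=0$ identifies $\alpha_2=\alpha_{r+1}$ as the odd root.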
Let $\{\alpha_1,\dots,\alpha_{r+s+1} \}$ be the distinguished simple roots of the Lie superalgebra $sl(r+1|s+1)$ \begin{eqnarray} \alpha_i = \epsilon_{i}-\epsilon_{i+1} \quad i=1,2,\dots,r, \nonumber \\ \alpha_{r+1} = \epsilon_{r+1}-\delta_{1} \\ \alpha_{j+r+1} = \delta_{j}-\delta_{j+1} , \quad j=1,2,\dots,s \nonumber \end{eqnarray} and with the grading \begin{equation} {\rm deg}(\alpha_a)=\left\{ \begin{array}{@{\,}ll} 0 & \mbox{for even root} \\ 1 & \mbox{for odd root} \end{array} \right. \end{equation} In particular, for the distinguished simple roots we have $\deg(\alpha_{a})=\delta_{a,r+1}$. \begin{figure} \setlength{\unitlength}{0.75pt} \begin{center} \begin{picture}(480,30) \put(10,20){\circle{20}} \put(20,20){\line(1,0){40}} \put(70,20){\circle{20}} \put(80,20){\line(1,0){20}} \put(110,20){\line(1,0){10}} \put(130,20){\line(1,0){10}} \put(150,20){\line(1,0){20}} \put(180,20){\circle{20}} \put(190,20){\line(1,0){40}} \put(240,20){\circle{20}} \put(232.929,12.9289){\line(1,1){14.14214}} \put(232.929,27.07107){\line(1,-1){14.14214}} \put(250,20){\line(1,0){40}} \put(300,20){\circle{20}} \put(310,20){\line(1,0){20}} \put(340,20){\line(1,0){10}} \put(360,20){\line(1,0){10}} \put(380,20){\line(1,0){20}} \put(410,20){\circle{20}} \put(420,20){\line(1,0){40}} \put(470,20){\circle{20}} \put(6,0){$\alpha_{1}$} \put(66,0){$\alpha_{2}$} \put(176,0){$\alpha_{r}$} \put(235,0){$\alpha_{r+1}$} \put(295,0){$\alpha_{r+2}$} \put(405,0){$\alpha_{r+s}$} \put(463,0){$\alpha_{r+s+1}$} \end{picture} \end{center} \caption{Dynkin diagram for the Lie superalgebra $sl(r+1|s+1)$ corresponding to the distinguished simple roots: a white circle denotes an even root $\alpha_i$; a grey circle (with a cross) denotes an odd root $\alpha_j$ with $(\alpha_j|\alpha_j)=0$.} \label{distinguished} \end{figure} We consider the following type of Bethe ansatz equation (cf. [Kul,RW,KOS]).
\numparts \begin{eqnarray} -\frac{P_{a}(u_k^{(a)}+\frac{1}{t_{a}})}{P_{a}(u_k^{(a)}-\frac{1}{t_{a}})} =(-1)^{\deg(\alpha_a)} \prod_{b=1}^{r+s+1}\frac{Q_{b}(u_k^{(a)}+(\alpha_a|\alpha_b))} {Q_{b}(u_k^{(a)}-(\alpha_a|\alpha_b))}, \label{BAE} \\ Q_{a}(u)=\prod_{j=1}^{N_{a}}[u-u_j^{(a)}], \label{Q_a} \\ P_{a}(u)=\prod_{j=1}^{N}P_{a}^{(j)}(u), \\ P_{a}^{(j)}(u)=[u-w_{j}]^{\delta_{a,1}} \label{drinfeld}, \end{eqnarray} \endnumparts where $[u]=(q^u-q^{-u})/(q-q^{-1})$; $N_{a} \in {\bf Z }_{\ge 0}$; $u, w_{j}\in {\bf C}$; $a,k \in {\bf Z}$ ($1\le a \le r+s+1$,$\ 1\le k\le N_{a}$); $t_{a}=1$ for $1\le a \le r+1$, $t_{a}=-1$ for $r+2\le a \le r+s+1$. In this paper, we suppose that $q$ is generic. The left hand side of the Bethe ansatz equation (\ref{BAE}) is related to the quantum space. We suppose that it is given by the ratio of some \symbol{96}Drinfeld polynomials' labeled by skew-Young diagrams $\tilde{\lambda} \subset \tilde{\mu}$ (cf.[KOS]). For simplicity, we consider only the case $\tilde{\lambda}=\phi, \tilde{\mu}=(1) $. The generalization to the case of an arbitrary skew-Young diagram will be achieved by the empirical procedures mentioned in [KOS]. The factor $(-1)^{\deg(\alpha_a)}$ in the Bethe ansatz equation (\ref{BAE}) appears so as to make the transfer matrix a {\it super} trace of the monodromy matrix. We define the sets \begin{eqnarray} J=\{ 1,2,\dots,r+s+2\} , \quad J_{+}=\{ 1,2,\dots,r+1\} , \nonumber \\ J_{-}=\{ r+2,r+3,\dots,r+s+2\}, \label{set} \end{eqnarray} with the total order \begin{eqnarray} 1\prec 2 \prec \cdots \prec r+s+2 \label{order} \end{eqnarray} and with the grading \begin{equation} p(a)=\left\{ \begin{array}{@{\,}ll} 0 & \mbox{for $a \in J_{+}$} \\ 1 & \mbox{for $a \in J_{-}$ } \quad . \end{array} \right.
\label{grading} \end{equation} For $a \in J $, set \begin{eqnarray} \fl z(a;u)=\psi_{a}(u) \frac{Q_{a-1}(u+a+1)Q_{a}(u+a-2)}{Q_{a-1}(u+a-1)Q_{a}(u+a)} \qquad {\rm for} \quad a \in J_{+}, \nonumber \\ \fl z(a;u)=\psi_{a}(u) \frac{Q_{a-1}(u+2r-a+1)Q_{a}(u+2r-a+4)} {Q_{a-1}(u+2r-a+3)Q_{a}(u+2r-a+2)} \qquad {\rm for} \quad a \in J_{-}, \label{z+} \end{eqnarray} where $Q_{0}(u)=1, Q_{r+s+2}(u)=1$ and \begin{equation} \psi_{a}(u)= \left\{ \begin{array}{@{\,}ll} P_{1}(u+2) & \mbox{for } \quad a=1 \\ P_{1}(u) & \mbox{for } \quad a \in J-\{1\} \end{array} \label{psi} \right. . \end{equation} In this paper, we often express the function $z(a;u)$ as the box $\framebox{a}_{u}$, whose spectral parameter $u$ will often be abbreviated. Under the Bethe ansatz equation, we have \begin{numparts} \begin{eqnarray} \fl Res_{u=-b+u_{k}^{(b)}}(z(b;u)+z(b+1;u))=0 \quad 1\le b \le r \label{res1} \\ \fl Res_{u=-r-1+u_{k}^{(r+1)}}(z(r+1;u)-z(r+2;u))=0 \label{res2} \\ \fl Res_{u=-2r-2+b+u_{k}^{(b)}}(z(b;u)+z(b+1;u))=0 \quad r+2\le b \le r+s+1 \label{res3} \end{eqnarray} \end{numparts} We will use the functions ${\cal T}^{a}(u)$ and ${\cal T}_{m}(u)$ ($a \in {\bf Z }$; $m \in {\bf Z }$; $u \in {\bf C }$) determined by the following generating series \begin{numparts} \begin{eqnarray} \fl (1+z(r+s+2;u)X)^{-1}\cdots (1+z(r+2;u)X)^{-1} (1+z(r+1;u)X)\cdots (1+z(1;u)X) \nonumber \\ \fl =\sum_{a=-\infty}^{\infty} {\cal F}^{a}(u+a-1) {\cal T}^{a}(u+a-1)X^{a}, \label{generating} \end{eqnarray} \begin{equation} \fl {\cal F}^{a}(u)= \left\{ \begin{array}{@{\,}ll} \prod_{j=1}^{a-1} P_{1}(u-2j+a-1) & \mbox{for } \quad a \ge 2 \\ 1 & \mbox{for } \quad a=1 \\ \frac{1}{P_{1}(u-1)} & \mbox{for } \quad a=0 \\ 0 & \mbox{for } \quad a \le -1 \\ \end{array} \right. 
, \end{equation} \begin{eqnarray} \fl (1-z(1;u)X)^{-1}\cdots (1-z(r+1;u)X)^{-1} (1-z(r+2;u)X)\cdots (1-z(r+s+2;u)X)\nonumber \\ \fl =\sum_{m=-\infty}^{\infty} {\cal T}_{m}(u+m-1)X^{m}, \end{eqnarray} \end{numparts} where $X$ is the shift operator $X=\e^{2\partial_{u}}$. In particular, we have ${\cal T}^{0}(u)=P_{1}(u-1)$; ${\cal T}_{0}(u)=1$; ${\cal T}^{a}(u)=0$ for $a<0$; ${\cal T}_{m}(u)=0$ for $m<0$. We remark that the origin of the function ${\cal T}^{1}(u)$ and of the Bethe ansatz equation (\ref{BAE}) traces back to the eigenvalue formula of the transfer matrix and the Bethe ansatz equation of the Perk-Schultz model [Sc], up to the vacuum part, some gauge factors and extra signs, after some redefinition (see also, [Kul]). Let $\lambda \subset \mu$ be a skew-Young superdiagram labeled by the sequences of non-negative integers $\lambda =(\lambda_{1},\lambda_{2},\dots)$ and $\mu =(\mu_{1},\mu_{2},\dots)$ such that $\mu_{i} \ge \lambda_{i}: i=1,2,\dots;$ $\lambda_{1} \ge \lambda_{2} \ge \dots \ge 0$; $\mu_{1} \ge \mu_{2} \ge \dots \ge 0$, and let $\lambda^{\prime}=(\lambda_{1}^{\prime},\lambda_{2}^{\prime},\dots)$ be the conjugate of $\lambda $ (see figures \ref{young} and \ref{conjyoung}).
\begin{figure} \begin{center} \setlength{\unitlength}{2pt} \begin{picture}(50,55) \put(0,0){\line(0,1){20}} \put(10,0){\line(0,1){30}} \put(20,10){\line(0,1){40}} \put(30,20){\line(0,1){30}} \put(40,20){\line(0,1){30}} \put(50,30){\line(0,1){20}} \put(0,0){\line(1,0){10}} \put(0,10){\line(1,0){20}} \put(0,20){\line(1,0){40}} \put(10,30){\line(1,0){40}} \put(20,40){\line(1,0){30}} \put(20,50){\line(1,0){30}} \end{picture} \end{center} \caption{Young superdiagram with shape $\lambda \subset \mu$ : $\lambda=(2,2,1,0,0)$, $\mu=(5,5,4,2,1)$} \label{young} \end{figure} \begin{figure} \begin{center} \setlength{\unitlength}{2pt} \begin{picture}(50,55) \put(0,0){\line(0,1){30}} \put(10,0){\line(0,1){30}} \put(20,0){\line(0,1){40}} \put(30,10){\line(0,1){40}} \put(40,30){\line(0,1){20}} \put(50,40){\line(0,1){10}} \put(0,0){\line(1,0){20}} \put(0,10){\line(1,0){30}} \put(0,20){\line(1,0){30}} \put(0,30){\line(1,0){40}} \put(20,40){\line(1,0){30}} \put(30,50){\line(1,0){20}} \end{picture} \end{center} \caption{Young superdiagram with shape $\lambda ^{\prime} \subset \mu^{\prime}$ : $\lambda^{\prime}=(3,2,0,0,0)$, $\mu^{\prime}=(5,4,3,3,2)$} \label{conjyoung} \end{figure} On this skew-Young superdiagram $\lambda \subset \mu$, we assign coordinates $(i,j)\in {\bf Z}^{2}$ such that the row index $i$ increases as we go downwards and the column index $j$ increases as we go from left to right, and such that $(1,1)$ is on the top left corner of $\mu$. Define an admissible tableau $b$ on the skew-Young superdiagram $\lambda \subset \mu$ as a set of elements $b(i,j)\in J$ labeled by the coordinates $(i,j)$ mentioned above, obeying the following rules (admissibility conditions).
\begin{enumerate} \item For any elements of $J_{+}$, \begin{numparts} \begin{equation} b(i,j) \prec b(i+1,j) \end{equation} \item for any elements of $J_{-}$, \begin{equation} b(i,j) \prec b(i,j+1), \end{equation} \item and for any elements of $J$, \begin{equation} b(i,j) \preceq b(i,j+1),\quad b(i,j) \preceq b(i+1,j). \end{equation} \end{numparts} \end{enumerate} Let $B(\lambda \subset \mu)$ be the set of admissible tableaux on $\lambda \subset \mu$. For any skew-Young superdiagram $\lambda \subset \mu$, define the function ${\cal T}_{\lambda \subset \mu}(u)$ as follows: \begin{equation} \fl {\cal T}_{\lambda \subset \mu}(u)= \frac{1}{{\cal F}_{\lambda \subset \mu}(u)} \sum_{b \in B(\lambda \subset \mu)} \prod_{(i,j) \in (\lambda \subset \mu)} (-1)^{p(b(i,j))} z(b(i,j);u-\mu_{1}+\mu_{1}^{\prime}-2i+2j) \label{Tge1} \end{equation} where the product is taken over the coordinates $(i,j)$ on $\lambda \subset \mu$ and \begin{equation} \fl {\cal F}_{\lambda \subset \mu}(u)= \prod_{j=1}^{\mu_{1}} {\cal F}^{\mu_{j}^{\prime}-\lambda_{j}^{\prime}} (u+\mu_{1}^{\prime}-\mu_{1}-\mu_{j}^{\prime}-\lambda_{j}^{\prime}+2j-1). \end{equation} In particular, for an empty diagram $\phi$, set ${\cal T}_{\phi}(u)={\cal F}_{\phi}(u)=1$. The following relations should be valid for the same reason as in [KOS]; namely, they can be verified by induction on $\mu_{1}$ or $\mu_{1}^{\prime}$.
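As a concrete illustration of the admissibility conditions (i)-(iii), the following short script (ours, not part of the original construction; names are illustrative) enumerates the admissible tableaux $B((2^2))$ for $sl(2|1)$, i.e. $r=1$, $s=0$, $J=\{1,2,3\}$, $J_{+}=\{1,2\}$, $J_{-}=\{3\}$:

```python
from itertools import product

# sl(2|1): r = 1, s = 0, so J = {1,2,3}, J_+ = {1,2}, J_- = {3}
J_plus, J_minus = {1, 2}, {3}
J = sorted(J_plus | J_minus)

def admissible(t):
    """Check the admissibility conditions (i)-(iii) for a tableau t,
    given as a list of rows of a rectangular diagram."""
    rows, cols = len(t), len(t[0])
    for i in range(rows):
        for j in range(cols):
            # (iii) weak increase to the right and downwards
            if j + 1 < cols and not t[i][j] <= t[i][j + 1]:
                return False
            if i + 1 < rows and not t[i][j] <= t[i + 1][j]:
                return False
            # (i) a J_+ entry must strictly increase down its column
            if i + 1 < rows and t[i][j] in J_plus and not t[i][j] < t[i + 1][j]:
                return False
            # (ii) a J_- entry must strictly increase along its row
            if j + 1 < cols and t[i][j] in J_minus and not t[i][j] < t[i][j + 1]:
                return False
    return True

# enumerate B((2^2)): all 3^4 fillings of the 2x2 diagram, keep admissible ones
tableaux = [((a, b), (c, d))
            for a, b, c, d in product(J, repeat=4)
            if admissible([[a, b], [c, d]])]
```

It recovers exactly the four tableaux that appear in the expansion of ${\cal T}_{(2^2)}(u)$ below.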
\numparts \begin{eqnarray} \fl {\cal T}_{\lambda \subset \mu}(u)=\det_{1 \le i,j \le \mu_{1}} ({\cal T}^{\mu_{i}^{\prime}-\lambda_{j}^{\prime}-i+j} (u-\mu_{1}+\mu_{1}^{\prime}-\mu_{i}^{\prime}-\lambda_{j}^{\prime}+i+j-1)) \label{Jacobi-Trudi1} \\ \fl =\det_{1 \le i,j \le \mu_{1}^{\prime}} ({\cal T}_{\mu_{j}-\lambda_{i}+i-j} (u-\mu_{1}+\mu_{1}^{\prime}+\mu_{j}+\lambda_{i}-i-j+1)) \label{Jacobi-Trudi2} \end{eqnarray} \endnumparts For example, in the case $\lambda=\phi$, $\mu=(2^2)$, $r=1$, $s=0$, we have \begin{eqnarray} {\cal T}_{(2^2)}(u)= \frac{1}{{\cal F}_{(2^2)}(u)} \left( \begin{array}{|c|c|}\hline 1 & 1 \\ \hline 2 & 2 \\ \hline \end{array} -\begin{array}{|c|c|}\hline 1 & 1 \\ \hline 2 & 3 \\ \hline \end{array} - \begin{array}{|c|c|}\hline 1 & 2 \\ \hline 2 & 3 \\ \hline \end{array} +\begin{array}{|c|c|}\hline 1 & 3 \\ \hline 2 & 3 \\ \hline \end{array} \right) \nonumber \\ \fl = P_1(u+2)P_1(u+4)\frac{Q_2(u-2)}{Q_2(u+2)}- P_1(u+2)P_1(u+4) \frac{Q_1(u+1)Q_2(u-2)}{Q_1(u+3)Q_2(u+2)} \nonumber \\ - P_1(u+2)^2\frac{Q_1(u+5)Q_2(u-2)}{Q_1(u+3)Q_2(u+4)}+ P_1(u+2)^2\frac{Q_2(u-2)}{Q_2(u+4)} \\ =\begin{array}{|cc|} {\cal T}^{2}(u-1) & {\cal T}^{3}(u) \\ {\cal T}^{1}(u) & {\cal T}^{2}(u+1) \\ \end{array} \nonumber \end{eqnarray} where \begin{eqnarray} {\cal T}^{1}(u) =\begin{array}{|c|}\hline 1 \\ \hline \end{array} +\begin{array}{|c|}\hline 2 \\ \hline \end{array} -\begin{array}{|c|}\hline 3 \\ \hline \end{array} \\ =P_1(u+2)\frac{Q_1(u-1)}{Q_1(u+1)} +P_1(u)\frac{Q_1(u+3)Q_2(u)}{Q_1(u+1)Q_2(u+2)} -P_1(u)\frac{Q_2(u)}{Q_2(u+2)}, \nonumber \\ {\cal T}^{2}(u) =\frac{1}{{\cal F}^{2}(u)} \left( \begin{array}{|c|}\hline 1 \\ \hline 2 \\ \hline \end{array} - \begin{array}{|c|}\hline 1 \\ \hline 3 \\ \hline \end{array} - \begin{array}{|c|}\hline 2 \\ \hline 3 \\ \hline \end{array} + \begin{array}{|c|}\hline 3 \\ \hline 3 \\ \hline \end{array} \right) \\ =P_1(u+3)\frac{Q_2(u-1)}{Q_2(u+1)} -P_1(u+3)\frac{Q_1(u)Q_2(u-1)}{Q_1(u+2)Q_2(u+1)} \nonumber \\
-P_1(u+1)\frac{Q_1(u+4)Q_2(u-1)}{Q_1(u+2)Q_2(u+3)} +P_1(u+1)\frac{Q_2(u-1)}{Q_2(u+3)}, \nonumber \\ {\cal T}^{3}(u) =\frac{1}{{\cal F}^{3}(u)} \left( - \begin{array}{|c|}\hline 1 \\ \hline 2 \\ \hline 3 \\ \hline \end{array} + \begin{array}{|c|}\hline 1 \\ \hline 3 \\ \hline 3 \\ \hline \end{array} + \begin{array}{|c|}\hline 2 \\ \hline 3 \\ \hline 3 \\ \hline \end{array} - \begin{array}{|c|}\hline 3 \\ \hline 3 \\ \hline 3 \\ \hline \end{array} \right) \\ =-P_1(u+4)\frac{Q_2(u-2)}{Q_2(u+2)} +P_1(u+4)\frac{Q_1(u+1)Q_2(u-2)}{Q_1(u+3)Q_2(u+2)} \nonumber \\ +P_1(u+2)\frac{Q_1(u+5)Q_2(u-2)}{Q_1(u+3)Q_2(u+4)} -P_1(u+2)\frac{Q_2(u-2)}{Q_2(u+4)}. \nonumber \end{eqnarray} Remark 1: If we drop the $u$ dependence of (\ref{Jacobi-Trudi1}) and (\ref{Jacobi-Trudi2}), they reduce to the classical Jacobi-Trudi and Giambelli formulae for $sl(r+1|s+1)$ [BB1,PT], which yield classical (super)characters. \\ Remark 2: In the case $\lambda =\phi$ and $s=-1$, $(\ref{Jacobi-Trudi1})$ and $(\ref{Jacobi-Trudi2})$ correspond to the quantum analogues of the Jacobi-Trudi and Giambelli formulae for $sl_{r+1}$ [BR].\\ Remark 3: $(\ref{Jacobi-Trudi1})$ and $(\ref{Jacobi-Trudi2})$ have the same form as the quantum Jacobi-Trudi and Giambelli formulae for $U_{q}(B_{n}^{(1)})$ in [KOS], but the function ${\cal T}^{a}(u)$ is quite different. The following theorem is essential in the analytic Bethe ansatz; it can be proved along similar lines to the proof of Theorem 3.3.1 in [KS1]. \begin{theorem}\label{polefree} For any integer $a$, the function ${\cal T}^a(u)$ is free of poles under the condition that the Bethe ansatz equation (\ref{BAE}) is valid. \end{theorem} First, we present a lemma necessary for the proof of Theorem \ref{polefree}. Lemma \ref{lemma2box} is the $sl(r+1|s+1)$ version of Lemma 3.3.2 in [KS1] and follows straightforwardly from the definition of $z(a;u)$ (\ref{z+}).
\begin{lemma} \label{lemma2box} For any $b \in J_{+}-\{r+1\}$, the function \begin{equation} \begin{array}{|c|l}\cline{1-1} b & _u \\ \cline{1-1} b+1 & _{u-2}\\ \cline{1-1} \end{array} \label{2box} \end{equation} does not contain the function $Q_b$ (\ref{Q_a}). \end{lemma} Proof of Theorem \ref{polefree}. For simplicity, we assume that the vacuum parts are formally trivial, that is, the left hand side of the Bethe ansatz equation (\ref{BAE}) is constantly $-1$. We prove that ${\cal T}^a(u)$ is free of color $b$ poles, namely, $Res_{u=u_{k}^{(b)}+\cdots}{\cal T}^a(u)=0$ for any $b \in J-\{r+s+2\}$, under the condition that the Bethe ansatz equation (\ref{BAE}) is valid. The function $z(c;u)=\framebox{$c$}_{u}$ with $c\in J$ has a color $b$ pole only for $c=b$ or $b+1$, so we shall trace only \framebox{$b$} or \framebox{$b+1$}. Denote by $S_{k}$ the partial sum of ${\cal T}^a(u)$ which contains $k$ boxes among \framebox{$b$} or \framebox{$b+1$}. Apparently, $S_{0}$ does not have a color $b$ pole. This is also the case with $S_{2}$ for $b\in J_{+}-\{r+1\}$, since the admissible tableaux have the same subdiagrams as in (\ref{2box}) and thus do not involve $Q_{b}$ by Lemma \ref{lemma2box}. Now we examine $S_{1}$, which is the summation of the tableaux of the form \begin{equation} \begin{array}{|c|}\hline \xi \\ \hline b \\ \hline \zeta \\ \hline \end{array} \qquad \qquad \begin{array}{|c|}\hline \xi \\ \hline b+1 \\ \hline \zeta \\ \hline \end{array} \label{tableaux1} \end{equation} where \framebox{$\xi$} and \framebox{$\zeta$} are columns with total length $a-1$ which do not involve \framebox{$b$} and \framebox{$b+1$}. Thanks to the relations (\ref{res1}-\ref{res3}), the color $b$ residues in these tableaux (\ref{tableaux1}) cancel each other under the Bethe ansatz equation (\ref{BAE}). From now on, we deal with $S_{k}$ only for $3 \le k \le a$, and for $k=2$ with $b\in J_{-} \cup \{r+1 \}-\{r+s+2\}$.
Here only the case $b \in \{r+1 \} \cup J_{-}-\{r+s+2\}$ needs to be considered because, for $b \in J_{+}- \{r+1 \}$, \framebox{$b$} or \framebox{$b+1$} appears at most twice in one column.\\
%
The case $b=r+1$: $S_{k}$ $(k\ge 2)$ is the summation of the tableaux of the form \begin{equation} \begin{array}{|c|l}\cline{1-1} \xi & \\ \cline{1-1} r+1 & _v \\ \cline{1-1} r+2 & _{v-2} \\ \cline{1-1} \vdots & \\ \cline{1-1} r+2 & _{v-2k+2} \\ \cline{1-1} \zeta & \\ \cline{1-1} \end{array} = \frac{Q_{r+1}(v+r-2k+1)Q_{r+2}(v+r)}{Q_{r+1}(v+r+1)Q_{r+2}(v+r+2)}X_{3} \label{tableauxk1} \end{equation} and \begin{equation} \begin{array}{|c|l}\cline{1-1} \xi & \\ \cline{1-1} r+2 & _v \\ \cline{1-1} r+2 & _{v-2} \\ \cline{1-1} \vdots & \\ \cline{1-1} r+2 & _{v-2k+2}\\ \cline{1-1} \zeta & \\ \cline{1-1} \end{array} =\frac{Q_{r+1}(v+r-2k+1)Q_{r}(v+r)}{Q_{r+1}(v+r+1)Q_{r}(v+r+2)} X_{3} \label{tableauxk2} \end{equation} where \framebox{$\xi$} and \framebox{$\zeta$} are columns with total length $a-k$ which do not contain \framebox{$r+1$} and \framebox{$r+2$}; $v=u+h_1$: $h_1$ is some shift parameter; the function $X_{3}$ does not contain the function $Q_{r+1}$. Obviously, the color $b=r+1$ residues in (\ref{tableauxk1}) and (\ref{tableauxk2}) cancel each other under the Bethe ansatz equation (\ref{BAE}).
\\ The case $b \in J_{-}-\{r+s+2\}$: $S_{k} (k \ge 2)$ is the summation of the tableaux of the form \begin{eqnarray} f(k,n,\xi,\zeta,u):= \begin{array}{|c|l}\cline{1-1} \xi & \\ \cline{1-1} b & _v \\ \cline{1-1} \vdots & \\ \cline{1-1} b & _{v-2n+2}\\ \cline{1-1} b+1 & _{v-2n} \\ \cline{1-1} \vdots & \\ \cline{1-1} b+1 & _{v-2k+2}\\ \cline{1-1} \zeta & \\ \cline{1-1} \end{array} \nonumber \\ =\frac{Q_{b-1}(v+2r+3-2n-b)Q_{b}(v+2r+4-b)} {Q_{b-1}(v+2r+3-b)Q_{b}(v+2r+4-2n-b)} \\ \times \frac{Q_{b}(v+2r+2-2k-b)Q_{b+1}(v+2r+3-2n-b)} {Q_{b}(v+2r+2-2n-b)Q_{b+1}(v+2r+3-2k-b)} X_{4} ,\quad 0 \le n \le k \nonumber \label{tableauxk3} \end{eqnarray} where \framebox{$\xi$} and \framebox{$\zeta$} are columns with total length $a-k$, which do not contain \framebox{$b$} and \framebox{$b+1$}; $v=u+h_2$: $h_2$ is some shift parameter and is independent of $n$; the function $X_{4}$ does not have color $b$ pole and is independent of $n$. $f(k,n,\xi,\zeta,u)$ has color $b$ poles at $u=-h_2-2r-2+b+2n+u_{p}^{(b)}$ and $u=-h_2-2r-4+b+2n+u_{p}^{(b)}$ for $1 \le n \le k-1$; at $u=-h_2-2r-2+b+u_{p}^{(b)}$ for $n=0$ ; at $u=-h_2-2r-4+b+2k+u_{p}^{(b)}$ for $n=k$. Evidently, color $b$ residue at $u=-h_2-2r-2+b+2n+u_{p}^{(b)}$ in $f(k,n,\xi,\zeta,u)$ and $f(k,n+1,\xi,\zeta,u)$ cancel each other under the Bethe ansatz equation (\ref{BAE}). Thus, under the Bethe ansatz equation (\ref{BAE}), $\sum_{n=0}^{k}f(k,n,\xi,\zeta,u)$ is free of color $b$ poles, so is $S_{k}$. \rule{5pt}{10pt} \\ Applying Theorem\ref{polefree} to (\ref{Jacobi-Trudi1}), one can show that ${\cal T}_{\lambda \subset \mu}(u)$ is free of poles under the Bethe ansatz equation (\ref{BAE}). The function ${\cal T}_{\lambda \subset \mu}(u)$ should express the eigenvalue of the transfer matrix whose auxiliary space $W_{\lambda \subset \mu}(u)$ is labeled by the skew-Young superdiagram with shape $\lambda \subset \mu$. 
We assume that $W_{\lambda \subset \mu}(u)$ is a finite-dimensional module of the super Yangian $Y(sl(r+1|s+1))$ [N] (or the quantum super affine algebra $U_{q}(sl(r+1|s+1)^{(1)})$ [Y] in the trigonometric case). On the other hand, in the case $\lambda =\phi$, the highest weight representation of the Lie superalgebra $sl(r+1|s+1)$, which is a classical counterpart of $W_{\mu}(u)$, is characterized by the highest weight whose Kac-Dynkin labels $a_{1},a_{2},\dots ,a_{r+s+1}$ [BMR] are given as follows: \begin{eqnarray} a_{j}=\mu_{j}-\mu_{j+1} \quad {\rm for} \quad 1 \le j \le r \nonumber\\ a_{r+1}=\mu_{r+1}+\eta_{1} \label{KacDynkin} \\ a_{j+r+1}=\eta_{j}-\eta_{j+1} \quad {\rm for} \quad 1 \le j \le s \nonumber \end{eqnarray} where $\eta_{j}=\max\{\mu_{j}^{\prime}-r-1,0 \}$; $\mu_{r+2} \le s+1$ for the covariant case. One can read the relations (\ref{KacDynkin}) from the \symbol{96}top term\symbol{39} [KS1,KOS] in (\ref{Tge1}) for large $q^u$ (see figure \ref{top}). The \symbol{96}top term\symbol{39} in (\ref{Tge1}) is the term labeled by the tableau $b$ such that \begin{equation} b(i,j)= \left\{ \begin{array}{@{\,}ll} i & \mbox{ for } \quad 1 \le j \le \mu_{i} \quad \mbox{ and } \quad 1 \le i \le r+1 \\ r+j+1 & \mbox{ for } \quad 1 \le j \le \mu_{i} \quad \mbox{ and } \quad r+2 \le i \le \mu_{1}^{\prime} . \end{array} \right.
\end{equation} % \begin{figure} \begin{center} \setlength{\unitlength}{2pt} \begin{picture}(50,60) \put(0,0){\line(0,1){60}} \put(10,0){\line(0,1){60}} \put(20,10){\line(0,1){50}} \put(30,30){\line(0,1){30}} \put(40,40){\line(0,1){20}} \put(50,50){\line(0,1){10}} \put(0,0){\line(1,0){10}} \put(0,10){\line(1,0){20}} \put(0,20){\line(1,0){20}} \put(0,30){\line(1,0){30}} \put(0,40){\line(1,0){40}} \put(0,50){\line(1,0){50}} \put(0,60){\line(1,0){50}} \put(4,4){4} \put(4,14){4} \put(4,24){4} \put(4,34){3} \put(4,44){2} \put(4,54){1} \put(14,14){5} \put(14,24){5} \put(14,34){3} \put(14,44){2} \put(14,54){1} \put(24,34){3} \put(24,44){2} \put(24,54){1} \put(34,44){2} \put(34,54){1} \put(44,54){1} \end{picture} \end{center} \caption{Young supertableau corresponding to the top term for $sl(3|2)$; $\lambda \subset \mu$ : $\lambda=\phi$, $\mu=(5,4,3,2,2,1)$} \label{top} \end{figure} Then, for large $q^u$, we have \begin{eqnarray} \prod_{(i,j) \in \mu} (-1)^{p(b(i,j))} z(b(i,j);u+\mu_{1}^{\prime}-\mu_{1}-2i+2j) \nonumber \\ = (-1)^{\sum_{i=r+2}^{\mu_{1}^{\prime}}\mu_{i}} \left\{ \prod_{i=1}^{r+1} \prod_{j=1}^{\mu_{i}} z(i;u+\mu_{1}^{\prime}-\mu_{1}-2i+2j) \right\} \nonumber \\ \times \left\{ \prod_{j=1}^{\mu_{r+2}} \prod_{i=r+2}^{\mu_{j}^{\prime}} z(r+j+1;u+\mu_{1}^{\prime}-\mu_{1}-2i+2j) \right\} \nonumber \\ \approx (-1)^{\sum_{i=r+2}^{\mu_{1}^{\prime}}\mu_{i}} q^{-2\sum N_{b}a_{b}t_{b}}. \end{eqnarray} Here we omit the vacuum part $\psi_{a}$. The \symbol{96}top term\symbol{39} is considered to be related with the \symbol{96}highest weight vector\symbol{39}. See [KS1,KOS], for more details. 
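As a quick numerical check of (\ref{KacDynkin}), the following sketch (helper names are ours, not part of the original text) computes the Kac-Dynkin labels for the $sl(3|2)$ example of figure \ref{top}, i.e. $\mu=(5,4,3,2,2,1)$, $r=2$, $s=1$:

```python
# Kac-Dynkin labels from the relations of the text -- illustrative sketch
def conjugate(mu):
    """Conjugate partition mu' of mu (mu given with mu[0] > 0)."""
    return [sum(1 for m in mu if m >= j) for j in range(1, mu[0] + 1)]

def kac_dynkin(mu, r, s):
    mu = list(mu) + [0] * (r + s + 2)                  # pad with zeros
    mup = conjugate(mu) + [0] * (r + s + 2)
    eta = [max(mup[j] - r - 1, 0) for j in range(len(mup))]  # eta[j] = eta_{j+1}
    a = [mu[j] - mu[j + 1] for j in range(r)]          # a_1 .. a_r
    a.append(mu[r] + eta[0])                           # a_{r+1} = mu_{r+1} + eta_1
    a += [eta[j] - eta[j + 1] for j in range(s)]       # a_{r+2} .. a_{r+s+1}
    return a

# sl(3|2) (r=2, s=1) with mu = (5,4,3,2,2,1) as in the figure
labels = kac_dynkin([5, 4, 3, 2, 2, 1], r=2, s=1)
```

For this $\mu$ one finds $\mu^{\prime}=(6,5,3,2,1)$, $\eta=(3,2,0,\dots)$, and the labels $(a_1,a_2,a_3,a_4)=(1,1,6,1)$.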
\section{Functional equations} Consider the following Jacobi identity: \begin{equation} \fl {D}\left[ \begin{array}{c} b \\ b \end{array} \right] {D} \left[ \begin{array}{c} c \\ c \end{array} \right]- {D}\left[ \begin{array}{c} b \\ c \end{array} \right] {D}\left[ \begin{array}{c} c \\ b \end{array} \right]= {D}\left[ \begin{array}{cc} b & c\\ b & c \end{array} \right] {D}, \quad b \ne c \label{jacobi} \end{equation} where $D$ is the determinant of a matrix and ${D}\left[ \begin{array}{ccc} a_{1} & a_{2} & \dots \\ b_{1} & b_{2} & \dots \end{array} \right]$ is its minor with the rows $a_{\alpha}$ and the columns $b_{\beta}$ removed. Set $\lambda = \phi$, $\mu =(m^a)$ in (\ref{Jacobi-Trudi1}). From the relation (\ref{jacobi}), we have \begin{equation} \fl {\cal T}_{m}^{a}(u-1) {\cal T}_{m}^{a}(u+1) = {\cal T}_{m+1}^{a}(u) {\cal T}_{m-1}^{a}(u)+ g_{m}^{a}(u) {\cal T}_{m}^{a-1}(u) {\cal T}_{m}^{a+1}(u) \label{t-sys1} \end{equation} where $a,m \ge 1$; ${\cal T}_{m}^{a}(u)={\cal T}_{(m^a)}(u)$: $a,m \ge 1$; ${\cal T}_{m}^{0}(u)=1$: $m \ge 0$; ${\cal T}_{0}^{a}(u)=1$: $a \ge 0$; $g_{m}^{1}(u)=\prod_{j=1}^{m} P_{1}(u-m+2j-2)$: $m \ge 1$; $g_{m}^{a}(u)=1$: $a \ge 2$ and $m \ge 0$, or $a=1$ and $m=0$. Note that the following relation holds: \begin{equation} g_{m}^{a}(u+1)g_{m}^{a}(u-1)=g_{m+1}^{a}(u)g_{m-1}^{a}(u) \quad {\rm for } \quad a,m \ge 1. \end{equation} The functional equation (\ref{t-sys1}) is a special case of the Hirota bilinear difference equation [H]. In addition, there are some restrictions on it, which we consider below. \begin{theorem}\label{vanish} ${\cal T}_{\lambda \subset \mu}(u)=0$ if $\lambda \subset \mu$ contains a rectangular subdiagram with $r+2$ rows and $s+2$ columns (see [DM,MR]). \end{theorem} Proof. We assume that the coordinate of the top left corner of this subdiagram is $(i_1,j_1)$. Consider the tableau $b$ on this Young superdiagram $\lambda \subset \mu$.
Fill the first column of this subdiagram from the top to the bottom by the elements of $b(i,j_1) \in J$: $i_1 \le i \le i_1+r+1$, so as to meet the admissibility conditions (i), (ii) and (iii). We find $b(i_1+r+1,j_1) \in J_{-}$. Then we have $r+2 \preceq b(i_1+r+1,j_1) \prec b(i_1+r+1,j_1+1) \prec \dots \prec b(i_1+r+1,j_1+s+1) $. This contradicts the condition $b(i_1+r+1,j_1+s+1) \preceq r+s+2$. \rule{5pt}{10pt} \\ As a corollary, we have \begin{equation} {\cal T}_{m}^{a}(u)=0 \quad {\rm for} \quad a \ge r+2 \quad {\rm and} \quad m \ge s+2. \label{vanish2} \end{equation} Consider the admissible tableaux on the Young superdiagram with shape $(m^{r+1})$. From the admissibility conditions (i), (ii) and (iii), only such tableaux as $b(i,j)=i$ for $1 \le i \le r+1$ and $1\le j \le m-s-1$ are admissible. Then we have, \begin{numparts} \begin{eqnarray} \fl {\cal T}_{m}^{r+1}(u)={\cal T}_{(m^{r+1})}(u) \nonumber \\ \fl= \frac{1}{{\cal F}_{(m^{r+1})}(u)} \sum_{b \in B(m^{r+1})} \prod_{(i,j) \in (m^{r+1})} (-1)^{p(b(i,j))} z(b(i,j);u+r+1-m-2i+2j) \nonumber \\ \fl=\frac{1}{{\cal F}_{(m^{r+1})}(u)} \prod_{i=1}^{r+1} \prod_{j=1}^{m-s-1} (-1)^{p(i)} z(i;u+r+1-m-2i+2j) \nonumber \\ \fl \times \sum_{b \in B((s+1)^{r+1})} \prod_{i=1}^{r+1} \prod_{j=m-s}^{m} (-1)^{p(b(i,j))} z(b(i,j);u+r+1-m-2i+2j) \nonumber \\ \fl={\cal F}^{m-s}(u+r-s+2) \frac{Q_{r+1}(u-m)}{Q_{r+1}(u+m-2s-2)} \times {\cal T}_{s+1}^{r+1}(u+m-s-1),\label{red1} \\ \quad m \ge s+1. \nonumber \end{eqnarray} Similarly, we have \begin{eqnarray} \fl {\cal T}_{s+1}^{a}(u) = (-1)^{(s+1)(a-r-1)} \frac{Q_{r+1}(u-a-s+r)}{Q_{r+1}(u+a-s-r-2)} \times {\cal T}_{s+1}^{r+1}(u+a-r-1) , \nonumber \\ \quad a \ge r+1. \label{red2} \end{eqnarray} \end{numparts} From the relations (\ref{red1}) and (\ref{red2}), we obtain \begin{theorem}\label{dual} For $a \ge 1$ and $r \ge 0$, the following relation is valid. \begin{equation} {\cal T}_{a+s}^{r+1}(u)=(-1)^{(s+1)(a-1)} {\cal F}^{a}(u+r-s+2) {\cal T}_{s+1}^{r+a}(u). 
\end{equation} \end{theorem} Applying the relation (\ref{vanish2}) to (\ref{t-sys1}), we obtain \begin{numparts} \begin{equation} \fl {\cal T}_{m}^{r+1}(u-1) {\cal T}_{m}^{r+1}(u+1) = {\cal T}_{m+1}^{r+1}(u) {\cal T}_{m-1}^{r+1}(u) \quad m \ge s+2, \label{laplace1} \end{equation} \begin{equation} \fl {\cal T}_{s+1}^{a}(u-1) {\cal T}_{s+1}^{a}(u+1) = g_{s+1}^{a}(u) {\cal T}_{s+1}^{a-1}(u) {\cal T}_{s+1}^{a+1}(u) \quad a \ge r+2. \label{laplace2} \end{equation} \end{numparts} Thanks to Theorem \ref{dual}, (\ref{laplace1}) is equivalent to (\ref{laplace2}). From Theorem \ref{dual}, we also have \begin{eqnarray} \fl {\cal T}_{s+1}^{r+1}(u-1) {\cal T}_{s+1}^{r+1}(u+1) = {\cal T}_{s+2}^{r+1}(u)({\cal T}_{s}^{r+1}(u)+ (-1)^{s+1}\frac{{\cal T}_{s+1}^{r}(u)}{{\cal F}^{2}(u+r-s+2)}). \end{eqnarray} Remark: In the relation (\ref{red1}), we assume that the parameter $m$ takes only integer values. However, by \symbol{96}analytic continuation\symbol{39}, $m$ may take non-integer values, except at some \symbol{96}singular points\symbol{39}, for example, at which the right-hand side of (\ref{red1}) contains constant terms. We can easily observe this fact from the right-hand side of (\ref{red1}) as long as the normalization factor ${\cal F}^{m-s}(u)$ is disregarded. This seems to correspond to the fact that the $(r+1)$th Kac-Dynkin label $a_{r+1}$ in (\ref{KacDynkin}) can take non-integer values [Ka]. Furthermore, these circumstances seem to be connected with the lattice models based upon the solution of the graded Yang-Baxter equation, which depends on a non-additive continuous parameter (see for example, [M,PF]). \section{Summary and discussion} In this paper, we have carried out the analytic Bethe ansatz for the Lie superalgebra $sl(r+1|s+1)$. Pole-freeness of the eigenvalue formulae of transfer matrices in dressed vacuum form was shown for a wide class of finite-dimensional representations labeled by skew-Young superdiagrams.
Functional relations have been given, especially for the eigenvalue formulae of transfer matrices in dressed vacuum form labeled by rectangular Young superdiagrams; they are a special case of the Hirota bilinear difference equation with some restrictive relations. It should be emphasized that the method presented in this paper is also applicable even if factors like extra signs (different from those of (\ref{BAE})), gauge factors, etc. appear in the Bethe ansatz equation (\ref{BAE}). This is because such factors do not affect the analytic properties of the right-hand side of the Bethe ansatz equation (\ref{BAE}). It would be an interesting problem to extend similar analyses to mixed representation cases [BB2]. So far we have only found several determinant representations of mixed tableaux. The simplest one is given as follows. \begin{equation} \sum_{(a,b)\in X} (-1)^{p(a)+p(b)}\dot{z}(a;u+s)z(b;u+r)= \begin{array}{|cc|} \dot{{\cal T}}^{1}(u+s) & 1 \\ 1 & {\cal T}^{1}(u+r) \end{array} \label{mix} \end{equation} where $X=\{ (a,b): a\in \dot{J}; b \in J;(a,b) \ne (-1,1) \}$ for $sl(r+1|s+1): r \ne s$; $\dot{{\cal T}}^{1}(u)$ and $\dot{J}$ are the expressions related to contravariant representations (see Appendix B). Here we assume that the vacuum parts are formally trivial. Note that (\ref{mix}) reduces to the classical one for $sl(r+1|s+1); r \ne s$ [BB2], if we drop the $u$ dependence. In this paper, we mainly consider the Bethe ansatz equations for the distinguished root system. The case of non-distinguished root systems can be achieved by some modifications of the sets $J_{+}$, $J_{-}$ and the function $z(a;u)$ without changing the set $J$ and the tableau sum rule (see Appendices C and D). It will be interesting to extend the analysis presented in this paper to other Lie superalgebras, such as $osp(m|2n)$. \ack The author would like to thank Professor A Kuniba for continual encouragement, useful advice and comments on the manuscript.
He also thanks Dr J Suzuki for helpful discussions and for pointing out some mistakes in an earlier version of the manuscript, and Professor T Deguchi for useful comments.
\section{Seit Approach} Our goal in this paper is to address the challenges from the previous section without affecting the performance or the quality of service of tenants' applications. Our solution, Seit, provides an automated, highly efficient system that classifies tenants' traffic and reroutes it based on the receiving tenant's requirements. Seit manages and controls inter-tenant traffic dynamically in two steps (automatic tenant classification followed by automatic tenant traffic routing), as described next. \subsection{Automatic Tenant Classification} In the current version of Seit, we focus on classifying tenants based on their behavior, although we envision that the classification can be accomplished with other metrics, such as a tenant's location, ownership, or applications. By behavior we mean here the intention of a tenant to cause harm to other tenants or to the cloud datacenter. This harm can be, for example, DoS, DDoS, sending malicious traffic, overwhelming the victim's network and resources, or unauthorized access to the datacenter or to other tenants' resources. \begin{figure}[h] \centering \includegraphics[width=\columnwidth, height=10cm]{graphs/IBR.pdf} \caption{IBR} \label{fig:batteryLog} \end{figure} To detect bad behavior in an efficient and quick manner, we implemented the Introduction-Based Routing (IBR) approach used in P2P systems. With IBR, peers choose with whom they interact by incorporating feedback, which gives participants a basis for making their relationship choices. IBR is a decentralized system that requires minimal information sharing among peers and is designed to scale to Internet-sized networks with no modifications to operating systems or the network backbone. To better understand IBR, consider the following example, shown in figure~\ref{fig:batteryLog}. Nodes A, B, C and D are peers in a P2P system. The green lines indicate established connections. Each node maintains a reputation score for the nodes connected to it.
When node A wants to communicate with D, it must follow the chain of connections between it and D and ask each node in the chain for an introduction to the next one. A starts by asking B for an introduction to C. Node B looks at A's behavior history (represented by the reputation score AB, where XY denotes the reputation score that node Y maintains for node X) and decides whether or not to forward A's request to C. If the request is forwarded, C looks at the behavior history of B (BC) and decides whether or not to accept the introduction request. The process continues until A reaches D. If B, C, or D rejects a request, node A will not be able to communicate with D. After the connection is established between A and C, C assigns A a reputation score AC, which is X*BC, where X is a discount factor. Similarly, D assigns A a reputation score AD, which is X*CD. If A starts behaving badly (e.g., sending malicious packets to D), D will decrease A's score AD and also decrease C's score CD, since C took the responsibility of introducing A to D. C will do the same, decreasing A's score AC and B's score BC, since B introduced A to C. Finally, B will decrease A's reputation score AB. With this approach, nodes take extra caution about whom they introduce, and eventually badly behaving nodes will be left alone, with no other nodes willing to introduce them once their scores fall below the minimum accepted score. In Seit, we use the same approach but at the tenant level. Since tenants own more than one host in the datacenter, they are responsible for their hosts' behavior, and any misbehaving host will affect the tenant's reputation. Moreover, although we could make Seit's reputation system completely decentralized, we decided to take advantage of the existence of a centralized management entity in the cloud to give the IBR system a more robust implementation. When tenants detect bad behavior, they not only make decisions about accepting future introductions, but also inform the Cloud Manager (CM) of this behavior.
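The reputation bookkeeping described above can be sketched as follows (the discount factor X, the penalty value, and all function names are illustrative assumptions on our part, not part of Seit's specification):

```python
# Sketch of IBR reputation bookkeeping for the A -> B -> C -> D example.
X = 0.9  # introduction discount factor (illustrative value)

# rep[holder][subject]: the reputation score `holder` keeps for `subject`
rep = {
    "B": {"A": 0.8},
    "C": {"B": 0.9},
    "D": {"C": 0.9},
}

def introduce(introducer, newcomer, target):
    """`introducer` vouches for `newcomer` to `target`; the newcomer's
    initial score is discounted from the introducer's own score there."""
    rep[target][newcomer] = X * rep[target][introducer]

def report_bad_behavior(victim, offender, introducer, penalty=0.3):
    """The victim penalizes both the offender and the node that
    took the responsibility of introducing it."""
    rep[victim][offender] -= penalty
    rep[victim][introducer] -= penalty

# A is introduced along the chain B -> C -> D
introduce("B", "A", "C")   # C's score for A is X times C's score for B
introduce("C", "A", "D")   # D's score for A is X times D's score for C

# A misbehaves towards D: both A and its introducer C lose reputation at D
report_bad_behavior("D", "A", "C")
```

In a real deployment the same decrement would then propagate down the chain (C penalizing A and B, and B penalizing A), isolating persistently misbehaving nodes.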
This information helps the CM get a wide view of its datacenter, of the tenants sharing its resources, and of their overall behavior, which enables valuable decisions about tenants (isolating bad tenants, blocking their traffic, etc.). Further, in traditional IBR in P2P networks, nodes perform a hill-climbing algorithm in which hosts advertise the nodes they accept introductions from to other nodes. Instead, we make each tenant advertise the nodes it accepts introductions from only to the CM, which can then calculate the shortest/best paths and return them to all tenants. We believe that by involving the CM in our system, we save tenants the extra overhead and delay of searching for and discovering introduction paths to destinations (especially in large datacenters with thousands of tenants). \subsection{Automatic Differentiated Traffic Routing} Once tenants are classified based on their behavior, we can easily classify and route their traffic by implementing SDN. When one of a tenant's hosts (the source) starts communicating with a host (the destination) that belongs to another tenant (after an introduction), Seit looks at the source's reputation score and matches it to the routing decision for the destination (next section). Right after that, Seit informs the OpenFlow controller of this information (source IP, destination IP, protocol, action, etc.) so that the controller can apply the appropriate rule for the traffic between the two hosts.
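A minimal sketch of the rule Seit might hand to the controller is shown below; the field names loosely follow the style of Floodlight's static flow pusher REST interface, but the exact schema and values here are our assumptions, not the controller's actual API:

```python
import json

# Illustrative construction of a flow rule for the OpenFlow controller.
def build_rule(src_ip, dst_ip, protocol, action, name="seit-rule-1"):
    return {
        "name": name,
        "eth_type": "0x0800",   # match IPv4 traffic
        "ipv4_src": src_ip,
        "ipv4_dst": dst_ip,
        "ip_proto": protocol,
        "actions": action,      # e.g. forward to an IDS port, drop, ...
    }

# Hypothetical example: a source host whose tenant has a mid-range score
# talking to another tenant's host, so traffic is steered through the IDS
rule = build_rule("10.0.1.5", "10.0.2.7", "6", "output=ids-port")
payload = json.dumps(rule)  # body that would be POSTed to the controller
```

The actual HTTP push to the controller (endpoint URL, authentication) is deployment-specific and omitted here.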
\subsection{Scoring system} \begin{table} \begin{center} \caption{Tenant Score Table} \begin{tabular}{| p{1.2cm} | p{1.4cm} | p{1.4cm} | p{1.4cm} |} \hline Score / Application & Mail & Web Server & CDN \\ \hline 0.0 - 0.1 & Block & Block & Block\\ \hline 0.1 - 0.2 & Block & Block & Forward to IDS\\ \hline 0.2 - 0.5 & Block & Forward to IDS & Forward to IDS\\ \hline 0.5 - 0.8 & Forward to IDS & Forward to Proxy & Forward to IDS\\ \hline 0.8 - 1.0 & Forward to IDS & Allow & Allow\\ \hline \end{tabular} \label{tab:waspMsgs} \end{center} \end{table} A unique aspect of Seit is that tenants are the ones who decide how traffic is classified and handled. Table~\ref{tab:waspMsgs} shows an example of one tenant's scoring system. Tenants specify how traffic is handled at different score levels for a variety of applications (or destinations in their network), since different applications might have different sensitivities []. For example, tenant X requests that all incoming traffic from badly behaving tenants be rerouted to an IDS and that all incoming traffic from well-behaving tenants be allowed to reach its destination directly. Tenant X does not have to classify the incoming traffic of a particular tenant (although it can do so in Seit); the classification is applied to any tenant that wants to connect to tenant X. If tenant Y is communicating with tenant X and changes its behavior from good to bad (e.g., sends malicious packets to tenant X), the Seit system will automatically change the handling of the traffic and make it flow through the IDS. With this mechanism, there is no need for manual configuration of the network and middleboxes (MBs). \section{Seit} \label{sec:seit} Seit was designed with the above challenges in mind. Figure~\ref{fig:arch} shows an overview of Seit's architecture. Seit includes a collection of interfaces that are specific to the individual components and parties within the cloud.
Seit also consists of a centralized\footnote{We capitalize on the centralized nature of the cloud for this implementation, but can envision a decentralized implementation which is less efficient, but equivalent in functionality.} reputation manager that interfaces with both the cloud provider(s) and each tenant. In this section, we introduce these two main components and describe how they address the above challenges. \begin{figure}[t] \centering \includegraphics[width=0.75\columnwidth]{graphs/arch.pdf} \caption{Seit's architecture} \label{fig:arch} \end{figure} \section{Background} \subsection{P2P Reputation Systems} \subsection{Tenant Behaviors} \subsection{Impact of Attacks on Tenants} \subsection{Flow Classifications} \section{Conclusions and Future Work} \label{sec:conclusion} Cloud systems today are fostering ecosystems of interacting services. In this paper, we presented {\em Seit}, an inter-tenant framework that manages interactions within the cloud through the use of a reputation-based system. The Seit architecture overcomes key challenges of using a reputation system in cloud environments around integration, isolation, stability, and resiliency. Using a practical implementation, we demonstrate Seit's benefits across a wide spectrum of cloud services. As future work, we plan to integrate more components and improve the overall performance of Seit. We also want to study the implications of incremental deployments, where some tenants do not implement Seit. Finally, we want to study scalability challenges when managing a large number of components and tenants, especially across autonomous geo-distributed clouds. \vspace{0.1in} \section{Deployment Model} \section{Discussion} In this section we explore and discuss some aspects of Seit. \subsection{Using IBR} We used Introduction-Based Routing because it provides an interesting model of reputation and of how reputation can increase and decrease through introductions.
However, we do not claim any contribution in the P2P area, and we refrain from discussing the gaming aspects of the system, as they are out of the scope of this paper. In addition, the Seit engine can use other reputation systems as well. In the end, all a tenant needs is the reputation scores of the other tenants it interacts with, in order to apply the appropriate actions toward them. \subsection{Overhead of establishing connection} As we showed in this paper, tenants using Seit's framework can only communicate with each other after their Seit engines establish the connection. We understand that this approach may present an extra delay compared to systems without Seit's framework, but we believe the overhead is small and is incurred only when the engines establish the connection. Once the services start communicating, all traffic takes the normal time. \subsection{Role of Reputation Manager} The reputation manager (RM) is only an addition to Seit and not an essential component of it. We implemented the reputation manager component and its interface in our current system for the benefits it can provide to Seit's framework. However, we understand that in some situations we will not be able to use it. In other words, Seit should be able to work in an environment without the need to rely on the RM. For example, if we have tenants that interact through the Seit system in Amazon, we cannot force Amazon to install a reputation manager, but the tenants in it should still be able to interact as in any P2P system. \subsection{Complexity of configuring Seit's executors and sensors} Two open questions are how configuration scales, especially with large numbers of VMs, and how to ensure that policies among components do not conflict. \subsection{Tenants that do not use Seit's framework} One of the benefits of Seit's framework is that tenants who use it can interact with those who do not, without any conflict. Each Seit tenant has the ability to choose which tenants to interact with individually and based on their reputation.
It can also choose to interact with non-Seit tenants and assign them the reputation that fits them best. We assume that if a tenant decides not to use Seit's framework but interacts with tenants that do, it will always get the lowest reputation, as it would be risky to trust it. \subsection{Symmetry of reputation and its reflection on QoS} Peers do not know their own reputations at their different neighbors. After Seit establishes a connection between two tenants and each knows the reputation score of the other, each tenant will inform the other of the QoS it should receive based on its score. For example, suppose we have two tenants, a web server and an SQL database service, and the SQL service has agreed to provide the web server with 5GB of space. Where the SQL account and data are stored is not the web server's concern, nor is how its traffic is monitored through an IDS or firewall. All it cares about is that it gets 5GB and a connection to the database. If for any reason the SQL service shuts down its database or does not provide the 5GB, the web server will decrease the SQL service's reputation score. \section{Evaluation} \label{sec:evaluation} In this section, we evaluate both the performance of Seit's implementation through micro-benchmarks as well as the benefits of using reputation in a number of contexts. We used three large Linux servers to run the experiments: {\em Server~1:} 64GB RAM, 24 Intel CPUs (2.4GHz each), running the reputation manager; {\em Server~2:} 64GB RAM, 24 CPUs (2.4GHz each), running Mininet; {\em Server~3:} 32GB RAM, 12 Intel CPUs (2.00GHz each), running Floodlight. \subsection{Performance Overhead} Despite its benefits, a reputation-based system does introduce some overheads. In this subsection, we study the extent of these overheads.
\nip{Query Throughput and Latency.} The reputation manager performs a query when a tenant wants to connect to another tenant. We performed a micro-benchmark where we varied the number of tenants in the network and calculated both the throughput (number of queries per second) and latency (average time to calculate a single query) of our implementation. Shown in Figure~\ref{fig:throughput} is the throughput that the reputation manager can handle for a given number of tenants. To calculate throughput, we took a snapshot of the reputation graph from a simulated execution, and injected queries at a fixed rate. The max throughput was the maximum query rate we were able to achieve such that the total time to receive all responses was within a small threshold of the total time to send all queries (the response time is linearly related to the request time until overload, at which point it becomes exponentially related). It is important to note that these results (i) reflect initial queries, which will not be frequent, and (ii) reflect a single instance. This can be mitigated if the reputation manager is designed as a distributed component (left as future work). Shown in Figure~\ref{fig:latency} is the average latency of a single query, on the order of milliseconds. This is similar (in order of magnitude) to the overheads imposed, for example, by a typical SDN flow setup (and we expect queries to be less frequent than flow setups). \begin{figure} \centering \subcaptionbox{\label{fig:throughput}}{ \includegraphics[width=0.45\textwidth]{graphs/throughput} }\par\medskip \vspace{-0.1in} \subcaptionbox{\label{fig:latency}}{ \includegraphics[width=0.45\textwidth]{graphs/latency} } \vspace{-0.1in} \caption{Reputation manager benchmark} \label{TS} \vspace{-0.2in} \end{figure} \nip{Impact of Dynamic Behavior.} To see how much dynamic behavior impacts an example component, we varied the frequency of reputation change notifications sent to an HAProxy component.
In our setup with HAProxy running in its own virtual machine, iperf~\cite{iperf} reported HAProxy with a static configuration as being able to achieve a rate of 8.08 Gbps. Having Seit update the reputation (meaning the reputation changed enough to cross a threshold) at a rate of once every second reduced the throughput of HAProxy only to 7.78 Gbps; at an (extreme) rate of once every millisecond, the throughput dropped to 5.58 Gbps. \subsection{Seit Benefits} The main motivation for using Seit is that it can improve a variety of aspects of a cloud's operation. Here, we evaluate the benefit of Seit in three contexts chosen to show: (i) security improvements, (ii) efficiency gains (cost savings), and (iii) revenue gains. \subsection{Setup and Parameters} We built an evaluation platform using Mininet~\cite{mininet} to emulate a typical cloud environment. In each experiment, we run the Seit reputation manager and configure a tenant setup specific to that experiment. This evaluation platform allows us to specify four key parts of an experiment: \begin{itemize} \item \emph{Graph Construction:} How the graph is built ({\em i.e., } how interconnections are made). \vspace{-1ex} \item \emph{Sensor Configuration:} What the sensor is ({\em i.e., } what the sensor detects in order to provide feedback). \vspace{-1ex} \item \emph{Reputation Use:} What can be controlled ({\em i.e., } what component/configuration reputation impacts). \vspace{-1ex} \item \emph{Traffic Pattern:} What the traffic pattern is. Importantly, in each case rates are chosen based on the emulation platform's limitations (absolute values of rates are not meaningful). \end{itemize} \subsection{Improved Security by Isolating Attackers} \label{sec:eval:iso} One benefit of Seit is that it provides the ability to cluster good participants and effectively isolate bad participants.
Here, we show Seit's effectiveness in thwarting a simulated denial of service (DoS) attack, where an attacker overwhelms its victims with packets/requests in order to exhaust the victims' resources (or, in the case of elastically scalable services, cause the victim to spend more money). In our evaluation, we mimic an attack that happened on Amazon EC2~\cite{ec2dos1, ec2dos2}, where hackers exploited a bug in the Amazon EC2 API to gain access to other tenants' accounts and then flooded other servers with UDP packets. We considered three scenarios in each run. The first scenario is a data center where tenants do not use any kind of reputation feedback, and each tenant independently makes a local decision to block or allow communication. In the other two scenarios, tenants use Seit, and thus report any attacks they detect and use reputation to isolate bad tenants. The only difference between the two scenarios is the attack pattern: in one, the attacker attacks its victims sequentially; in the other, the attacker establishes connections with all of its victims simultaneously and attacks them in parallel. \begin{itemize} \item \emph{Graph Construction:} For the graph construction, we select the number of tenants for the given run of the experiment. For each tenant, we select the number of other tenants it connects to randomly from 1 to 5. For each tenant, we set their `tenant quality': 3\% are explicitly marked as attackers with a tenant quality of 0, and the rest are randomly assigned a quality metric from 0.1 to 1.0. \vspace{-1ex} \item \emph{Sensor Configuration:} While Seit can support a variety of sensors, in this experiment we use a simplified model, where traffic explicitly consists of good packets and bad packets, and a simple sensor that detects ``bad'' packets with probability 0.9.
\vspace{-1ex} \item \emph{Reputation Use:} The IaaS network controller (in our case, a Floodlight controller) blocks a tenant from being able to send traffic to another tenant when the reputation of the sender drops below some value. \vspace{-1ex} \item \emph{Traffic Pattern:} Tenants generate traffic according to a simplistic model of a fixed rate and fixed inter-packet gap, where the probability of sending a good packet or bad packet is based on the tenant quality configuration ({\em e.g., } $q<0.05$, representing a malicious tenant, always sends bad packets; $0.05\le q<0.5$, representing a careless tenant, sends bad packets infrequently and in proportion to the tenant quality; and $q\ge 0.5$ always sends good packets). Attackers send traffic at a rate 10 times higher than other tenants (1 packet every 10ms vs.\ 1 every 100ms). Each attacker sends a total of 100 packets to each target, while the rest send 50 packets. Attackers deviate from the connection graph determined above, instead attempting connections to a random 25\% of the other tenants. \end{itemize} We measured the total number of attack packets generated by attackers and the total number of these packets that reached the victims. We varied the total number of tenants in each run, starting from 32 tenants up to 1024 tenants. As shown in Figure \ref{fig:ddos}, without Seit, over 90\% of the attack packets reach the victims, overloading the security middleboxes. With Seit, on the other hand, we are able to isolate the attacker. With more tenants in the cloud, we are able to block more attack traffic, as there is a greater amount of information about the attacker. A byproduct (not shown) is that, in this experiment, we also decrease the total overall traffic on the network by blocking at the source.
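The traffic model in the Traffic Pattern bullet above can be sketched as follows. This is a minimal illustration under our reading of the quality thresholds; the exact bad-packet probability for a careless tenant is our assumption, since the text only says bad packets are sent ``infrequently, and in proportion to the tenant quality'':

```python
import random

def packet_is_bad(q: float) -> bool:
    """Sample one packet under the simplified traffic model:
    q < 0.05        -> malicious tenant, always sends bad packets;
    0.05 <= q < 0.5 -> careless tenant, sends bad packets with a
                       probability taken here to be (1 - q) * 0.1
                       (an assumption; the exact mapping is not fixed);
    q >= 0.5        -> always sends good packets."""
    if q < 0.05:
        return True
    if q < 0.5:
        return random.random() < (1 - q) * 0.1
    return False

def sensor_detects(is_bad: bool, detect_prob: float = 0.9) -> bool:
    """The simple sensor from the experiment: flags a bad packet
    with probability 0.9 and never flags a good packet."""
    return is_bad and random.random() < detect_prob
```

Each detection would then be reported as negative feedback to the reputation manager, which is what drives the isolation shown in the results.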
\begin{figure} \centering \includegraphics[width =\columnwidth]{graphs/DDOS.pdf} \vspace{-0.1in} \caption{Effectiveness of protecting tenants in an IaaS cloud} \label{fig:ddos} \vspace{-0.05in} \end{figure} \subsection{Decreased Costs by Managing Middlebox Chaining Policy} Another benefit of being able to differentiate users with reputation is that we can decrease the cost of operating security middleboxes, without compromising security. Here, we explore this benefit: \begin{itemize} \item \emph{Graph Construction:} The graph construction is identical to the experiment in Section~\ref{sec:eval:iso}, except that we fix the total number of tenants at 1024. \vspace{-1ex} \item \emph{Sensor Configuration:} The sensor configuration is identical to the experiment in Section~\ref{sec:eval:iso}. \vspace{-1ex} \item \emph{Reputation Use:} Each tenant directs other tenants connecting to it either through a middlebox or allows them to bypass it, based on the reputation of the connecting tenant. This represents a simple middlebox chaining policy. \vspace{-1ex} \item \emph{Traffic Pattern:} The traffic pattern is identical to the experiment in Section~\ref{sec:eval:iso}. \end{itemize} We place the constraint that a single middlebox instance can only handle 10 packets per second. This then allows us to capture the tradeoff between cost and security effectiveness (in our experiments, measured as the number of bad packets that ultimately reached an end host). We ran two variants of this experiment. In one variant, we allow the number of middleboxes to scale to what is needed to match the traffic in a given time interval. In the other variant, we fix the budget to a specific number of middleboxes, in which case, if the middleboxes are overloaded, they will fail to process every packet.
In each case, we calculate the total cost of operation (the number of middlebox instances needed) as well as the security effectiveness (the percentage of attack packets that reached the destination host). As shown in Figure~\ref{fig:resource_vs_security}, using Seit yields a distinct improvement in security when held to a fixed budget, and a distinct reduction in cost when targeting a specific security coverage under varying load. \begin{figure} \centering \subcaptionbox{\label{fig:middlebox_cost}}{ \includegraphics[width=0.45\textwidth]{graphs/middlebox_cost} }\par\medskip \vspace{-0.1in} \subcaptionbox{\label{fig:attack_missed}}{ \includegraphics[width=0.45\textwidth]{graphs/attack_missed} } \vspace{-0.1in} \caption{Resource saving and security} \label{fig:resource_vs_security} \vspace{-0.1in} \end{figure} \subsection{Increasing Revenue by Managing PaaS Broker Search} In PaaS clouds, such as CloudFoundry, service providers offering similar services need a way to differentiate themselves. With a reputation system such as the one Seit provides, service providers that have the highest quality of service are rewarded with more customers. Using the Seit integration with the CloudFoundry broker, which sorts and filters search results, we evaluate the relationship between quality of service and revenue. \begin{itemize} \item \emph{Graph Construction:} In this experiment, we have 1024 tenants, of which we selected 256 tenants as service providers, 256 tenants as both service providers and service users, and the rest as service users only. For simplicity, we assume all service providers are providing identical services. To distinguish between service providers, we use four discrete tenant quality values: 0.2, 0.4, 0.6, and 0.8 (higher is better). To bootstrap the experiment, we create an initial graph where, for each service user tenant, we randomly select the number of service provider tenants it connects to (from 1 to 5).
\vspace{-1ex} \item \emph{Sensor Configuration:} Here, clients make requests and receive responses. The sensor detects whether a request got a response or not; dropped requests are a proxy for poor service. \vspace{-1ex} \item \emph{Reputation Use:} For a PaaS broker, service users perform a search for services to use. In the broker, we filter and sort the search results according to the service providers' reputations. We assume a client performs a new search every 20 seconds. The service user will choose among the top results with some probability distribution (the 1st search result is chosen 85\% of the time, the 2nd result 10\% of the time, and the 3rd result 5\% of the time). \vspace{-1ex} \item \emph{Traffic Pattern:} As we are attempting to show the incentive to a service provider, we make the simplifying assumption that a service user always sends a good request. The service provider will either respond or not based on its tenant quality. Every tenant sends a packet every second to the service providers it connects to. If a tenant receives a response, it will increase the reputation of the service provider and update its neighbors with this information. If no response is received, the same process happens, but with a negative impact. \end{itemize} We run the experiment for two minutes and, as a proxy for revenue, count the number of times a service user selects a given service provider. As shown in Figure~\ref{fig:revenue}, the expected benefits hold: tenants with the greatest tenant quality (0.8) had the greatest revenue (they were selected over 2000 times), while tenants with the lowest tenant quality had the least revenue (being selected around 300 times).
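The broker's filter, sort, and choose steps used in this experiment can be sketched as follows. The function names and the `min_rep` cutoff are our own; the selection probabilities are the ones stated in the Reputation Use bullet:

```python
import random

def broker_search(providers, rep, min_rep=0.1):
    """Filter out providers the searching tenant rates below
    min_rep, then sort the remainder by reputation (descending),
    mirroring the broker's filter-and-sort step.  `rep` maps
    provider -> the searching tenant's reputation view of it."""
    hits = [p for p in providers if rep.get(p, 0.0) >= min_rep]
    return sorted(hits, key=lambda p: rep[p], reverse=True)

def choose_provider(results):
    """Choose among the top results with the experiment's
    distribution: 1st 85%, 2nd 10%, 3rd 5% of the time
    (shorter result lists fall back to the last entry)."""
    r = random.random()
    if r < 0.85 or len(results) < 2:
        return results[0]
    if r < 0.95 or len(results) < 3:
        return results[1]
    return results[2]
```

Because high-reputation providers dominate the top slots, they absorb most selections, which is the revenue skew the figure reports.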
\begin{figure} \centering \includegraphics[width =\columnwidth]{graphs/revenue-quality} \caption{Revenue increase for high quality service providers} \label{fig:revenue} \vspace{-0.1in} \end{figure} \section{Design Goals} \label{sec:goals} Reputation systems have been used in peer-to-peer systems to prevent leechers, and in person-to-person communication systems to prevent unwanted communication (e.g., Ostra). Applying reputation to the cloud poses unique challenges: the interactions are machine-to-machine, highly variable, and highly dynamic. Here we highlight design goals toward addressing these challenges. \subsection{Local Reputation for Machine-to-Machine Communication} We need a reputation system for machine-to-machine interactions that reflects individual tenant views: different tenants interact differently, and different tenants have different views of what is `bad'. The reputations must be stable, while still being dynamically (and automatically) adjustable, and must effectively cluster tenants into groups (e.g., isolating bad tenants). \subsection{Programmable Network Infrastructure} Tenants need an API to influence the infrastructure and how traffic is handled, toward a more efficient infrastructure (in terms of cost, QoS, etc.). Examples include security (e.g., HAProxy sending traffic to server A for trusted clients or server B for untrusted clients), tenant performance and cost (e.g., letting trusted traffic bypass an IDS, so that fewer VMs are needed, cost is lower, and performance increases since there is one less box to traverse), and cloud provider performance and cost (e.g., blocking at the source of the cloud provider's network). \subsection{Programmable Sensing Infrastructure} We need the ability to mark something as `bad', but in machine-to-machine interactions this is not necessarily binary (good or bad). \subsection{Reputation Manager} \label{sec:ibr} The reputation manager is responsible for maintaining a view of the reputations as perceived by various tenants. The core of the reputation manager is a reputation graph and means to query the graph.
\subsubsection{Reputation Graph} In Seit, reputation is modeled as a graph with nodes representing tenants\footnote{In the future, we will explore reputations for a finer granularity than tenants.} and edges representing one node's (tenant's) view of the other node (tenant) when the two have direct communication with each other. Here, we describe the mechanism for building and using this reputation graph. Seit adapts the introduction-based routing (IBR) protocol~\cite{Frazier2011} used in P2P networks for tenant interactions because of its ability to incorporate feedback in a highly dynamic graph, and its resilience to sybil attacks. IBR, however, is not an intrinsic requirement of Seit. Seit requires only that reputation scores are maintained for each tenant's view of other tenants and uses these scores to determine the form of interaction between them. We, thus, do not consider the full spectrum of reputation system properties; considerations such as the possibility of gaming the system are out of the scope of this paper. \nip{Calculating Reputations.} IBR in P2P networks allows peers to use participant feedback as a basis for making their relationship choices. While IBR can be decentralized (as it was originally designed for P2P networks), in centralizing the design, we internally use the IBR model, but eliminate the signaling protocol between peers. We give an example of IBR's reputation calculation in Figure~\ref{fig:IBR}. The main idea is that tenants can pass feedback to each other based on their introductions to other tenants. Positive feedback results in a higher reputation, and negative feedback in a lower reputation. Nodes $A$, $B$, $C$ and $D$ in Figure~\ref{fig:IBR} are peers. The straight lines indicate established connections. Each node maintains a reputation score (trust level) for the nodes connected to it. 
When Node $A$ wants to communicate with Node $D$, it must follow the chain of connections between it and $D$, asking each node in the chain for an introduction to the next. Node $A$ starts by asking $B$ for an introduction to $C$. Node $B$ looks at node $A$'s behavior history (represented by the reputation score $AB$) in its local repository and decides whether or not to forward $A$'s request to $C$. If the request is forwarded, $C$ looks at the behavior history of $B$ ($BC$) and decides whether or not to accept the introduction request. The process continues until $A$ reaches $D$. If $B$, $C$, or $D$ rejects a request, node $A$ will not be able to communicate with $D$. After the connection is established between $A$ and $C$, $C$ assigns a reputation score to $A$ ($CA$) of $x \times R(BC)$, where $x$ is a scaling parameter and $R(BC)$ is $C$'s reputation score for $B$, the introducer. Similarly, $D$ assigns a reputation score $DA$ of $y \times R(DC)$. If $A$ starts behaving negatively ({\em e.g., } sending malicious packets to $D$), $D$ will decrease $A$'s score $DA$ and also decrease $C$'s score $DC$, since $C$ took the responsibility and introduced $A$ to $D$. $C$ will do the same, decreasing $A$'s score $CA$ and $B$'s score, since $B$ introduced $A$ to $C$. Finally, $B$ will decrease $A$'s reputation score $AB$. This approach ensures that nodes are especially cautious about whom they introduce. Eventually, misbehaving nodes will be isolated, as no other nodes will be willing to introduce them once their scores fall below the minimum trust level. We will show this property in Section~\ref{sec:properties}. To reiterate, the IBR introduction mechanism is hidden from tenants. They simply ask for a connection to another tenant and are informed whether a path is available and accepted, and if so, what the reputation score is.
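The score assignment and feedback propagation just described can be sketched as follows. Here `rep[(x, y)]` denotes node x's score for node y, and the `scale` and `penalty` values are illustrative assumptions (the paper's $x$ and $y$ parameters, and the size of the decrement, are not fixed in the text):

```python
def assign_initial_scores(rep, chain, scale=0.5):
    """chain = [A, B, C, D]: A reached D through introducers B and C.
    Each node past the first introducer assigns A an initial score
    derived from its own score for the node that introduced A to it,
    e.g. C's score for A = scale * (C's score for B), and
         D's score for A = scale * (D's score for C)."""
    a = chain[0]
    for introducer, node in zip(chain[1:], chain[2:]):
        rep[(node, a)] = scale * rep[(node, introducer)]

def negative_feedback(rep, chain, penalty=0.1):
    """If A misbehaves, each node lowers both A's score and the score
    of the introducer that vouched for A: D lowers its scores for A
    and C, C lowers its scores for A and B, and B lowers its score
    for A."""
    a = chain[0]
    rep[(chain[1], a)] = rep.get((chain[1], a), 0.0) - penalty
    for introducer, node in zip(chain[1:], chain[2:]):
        rep[(node, a)] -= penalty
        rep[(node, introducer)] -= penalty
```

Running both functions on a chain `['A', 'B', 'C', 'D']` reproduces the cascade in the text: one misbehaving act by $A$ dents every introducer's standing along the path, which is what makes nodes cautious about whom they introduce.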
\nip{Bootstrapping the Reputation Graph.} When a tenant joins Seit's framework, it receives a default global reputation score (assigned by the reputation manager) and a zero local reputation score until it begins to interact with other tenants. Upon being introduced to a tenant, the introduced tenant's initial reputation score will be based on the introducer's score; it can then evolve based on the reputation calculations described above. In general, the speed with which a tenant builds a reputation depends on several factors, such as the number of tenants it interacts with, the services it provides to these tenants, and any privacy and security threats to these tenants. A new tenant does, however, have an incentive to provide good services: its actions at one tenant can propagate to others through introductions, influencing its reputation at these other tenants. Since individual tenants do not have a global view of these introduction relationships, these dynamics also make it difficult for a malicious tenant to target a particular victim, as it must first find an introduction chain to reach the target. \subsubsection{Configurable Query Interface} In centralizing the IBR mechanism, we can provide a highly configurable interface to query the reputation. Here, we elaborate on both the initial query configuration as well as subsequent queries. \nip{Initial Query.} A reputation query can be as simple as calculating the shortest path between two nodes, creating a new edge between the nodes, and then only updating the direct edge between the two nodes upon reputation feedback (direct feedback). In the reputation mechanism we are using, the feedback also impacts the edges of the initial path between the two nodes (indirect feedback). This adds an interesting aspect to the initial query: should an intermediate node allow the search to include certain outgoing edges in the query or not (or, in IBR terms, whether a node should make an introduction).
To support each user's flexibility, Seit provides the ability to configure two aspects: \begin{itemize} \item {\bf Outgoing Edge Selectivity:} The idea behind introduction-based routing is analogous to real life: if a person I trust introduces someone, I am likely to trust that person more than if they were a random stranger. If I have a good interaction with that person, it generally strengthens my trust in the person that made the introduction. A bad interaction can have the opposite effect. As such, people are generally selective in introductions, especially in relationships where there is a great deal of good will built up. In Seit, the tenant has full control over the thresholds for which an introduction is made. It can introduce everyone with a low threshold; it can also be selective with a very high threshold. \item {\bf Query Rate Limit:} A consideration in serving as an intermediate node (making introductions) is the magnitude of the potential impact to one's reputation. For this, Seit includes the ability to limit the rate at which a given node serves as an intermediate node. In doing so, it allows the system to adapt the reputations based on these new interactions, so that future requests to serve as an intermediate node will have that additional information. As an extreme example, say there is no rate limiting and Tenant $A$'s reputation is above the threshold for Tenant $B$ to introduce it to Tenants $C$ through $Z$. Tenant $A$ then attacks $C$ through $Z$, and $B$'s reputation suffers accordingly. If instead $B$ were rate limited, $A$ could only connect to $C$, and would need to build up a good reputation with $C$, or wait a sufficient time, to be able to connect to the other tenants. \end{itemize} \nip{Subsequent (implicit) Queries.} In Seit, we support the view that any interaction should reflect the current reputation and that it is the current reputation that should be used when handling any interaction.
In other words, a reputation query should be performed continuously for all ongoing interactions. This, of course, would be highly impractical. Instead, Seit integrates triggers within the reputation manager. Reputation changes whenever feedback is received---positive or negative. Within Seit, the paths through the graph that are affected by any single edge update are tracked. Then, upon an update of an edge due to feedback being received, Seit examines a list of thresholds in order to notify a tenant when a threshold has been crossed (notifications are ultimately sent to each component). These threshold lists come from the shims within each tenant's infrastructure. The shims are what the tenant configures (or leaves at the defaults) to specify actions to take based upon different reputation values. When a shim is initialized, or the configuration changes, the shim notifies the Seit reputation manager of these values. \section{Implementation} \label{sec:impl} We have built a prototype of the Seit reputation manager, and integrated it with several cloud components. We discuss these here. \begin{table*}[t] \small \centering \newcommand\T{\rule{0pt}{2.6ex}} \newcommand\B{\rule[-1.2ex]{0pt}{0pt}} \begin{tabular}{|m{1.2in}|m{1in}|m{4in}|} \hline \rowcolor[gray]{0.9} {\bf Category} & {\bf System} & {\bf Description} \T \B \\ \hline IaaS SDN Controller & Floodlight~\cite{floodlight} & The shim maps the reputation to OpenFlow rules via the Floodlight REST API to block or direct traffic. \\ \hline PaaS Broker & CloudFoundry~\cite{cloudfoundry} & The shim interfaces between the CloudFoundry broker and the CloudFoundry command line interface (used by the users) to filter and sort the marketplace results based on their reputation. \\ \hline Load Balancer & HAProxy~\cite{haproxy} & This shim alters the configurations written in a haproxy.cfg file to specify load balancing based on the reputation (directing tenants to servers based on reputation).
Upon every change, the shim tells HAProxy to reload the configuration. \\ \hline Infrastructure Monitoring & Nagios~\cite{nagios} & We took advantage of JNRPE (Java Nagios Remote Plugin Executor)~\cite{jnrpe} to build a Java plugin that listens for reputation updates sent by Seit's reputation manager, displays this sentiment, and configures alerts for when the sentiment (the collective reputation of the tenant running Nagios) drops. \\ \hline Intrusion Detection System & Snort~\cite{snort} & Snort alerts are configured to log to Syslog. By using SWATCH~\cite{swatch} to monitor Syslog, the Seit shim is alerted to all Snort alerts. The shim parses the alerts, extracts information such as the source IP and alert type, and sends the feedback to the reputation manager. \\ \hline \end{tabular} \vspace{-0.1in} \caption{Implemented shims} \label{tab:plugins} \end{table*} We prototyped the reputation manager in approximately 7300 lines of Java code. We implemented the reputation manager as a scalable Java-based server that uses Java NIO to efficiently handle a large number of tenant connections. We also provide an admin API to set up, install, and view policies across the cloud, as well as facilitate new tenants and their services. Rather than the reputation manager interfacing with each component, within each tenant we built a per-tenant server to serve as a proxy between the tenant's components and the reputation manager. This proxy is a lightweight Java process that can be installed on any tenant machine. It listens on two separate interfaces for internal and external communications. The internal interface is used to communicate with a tenant's own components, while the external interface is used to communicate with the reputation manager.
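As a concrete illustration of the trigger mechanism that the shims rely on (shims register threshold lists with the reputation manager, which notifies them when an update crosses a threshold), the core decision might be sketched as follows; the function names and the exact crossing semantics are our assumptions:

```python
def crossed_thresholds(old, new, thresholds):
    """Return the registered thresholds crossed when a tracked
    reputation value moves from `old` to `new`, in either
    direction (here: a threshold t is crossed if it lies in the
    half-open interval between the two values)."""
    lo, hi = min(old, new), max(old, new)
    return [t for t in sorted(thresholds) if lo < t <= hi]

def components_to_notify(old, new, registrations):
    """registrations maps a component (shim) to its threshold list;
    a component is notified after a single edge update only if the
    update crossed at least one of its thresholds."""
    return [c for c, ts in registrations.items()
            if crossed_thresholds(old, new, ts)]
```

This is what keeps notification traffic low: a reputation can fluctuate freely between thresholds without any shim (e.g., the HAProxy or Floodlight shim) being contacted.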
The proxy can be configured with a text configuration file that specifies the following: (i) a list of components, each of which has the name of the component, its IP, its type (service, executor, sensor), a component description, and its tasks; (ii) the edge selectivity threshold, which specifies when to refuse or accept connection or introduction requests from a tenant; and (iii) the query rate limit. All communication in Seit is performed through a common messaging interface. The API includes (1) a registration request from the components when they boot up for the first time, (2) a connection request when one tenant requires communication with another tenant, which includes both the initial request and a message to approve (sent to both source and destination) or reject the request (sent only to the source), (3) a feedback message containing a component's desire to positively or negatively impact the reputation score for a given connection (impacting both the reputation of the other tenant and that of the introducers responsible), and (4) a configuration message setting new thresholds and query configurations. We built a shim interface for a number of cloud components, one for each example discussed in Section~\ref{sec:interfaces}. Table~\ref{tab:plugins} summarizes these components. \section{Introduction} Building and deploying any distributed ``app'' today is radically different from a decade ago. Where traditional applications of the past required dedicated infrastructure and middleware stacks, today's apps not only run on shared---cloud---infrastructure, they rely on many services residing within and outside of the underlying cloud. An app, for example, can use Facebook for authentication, Box for storage, Twilio for messaging, Square for payments, Google AdSense for advertising, etc.
This trend of deploying and consuming services (often referred to as the {\em mesh economy}) can be seen by the rapid growth of cloud platforms which integrate services (and not just compute) like Amazon Web Services~\cite{aws}, Microsoft Azure~\cite{windowsazure}, Heroku~\cite{heroku}, IBM BlueMix~\cite{bluemix}, and CloudFoundry~\cite{cloudfoundry}, to name a few. More importantly, these emerging platforms further encourage the construction of apps from even smaller---micro---services ({\em e.g., } GrapheneDB, Redis, 3scale, etc.), where such services are developed and managed by different tenants. Despite the shift in how applications are being disaggregated, the management of security between services and tenants remains largely the same: {\em one focused on a perimeter defense with a largely static security configuration}. This is exacerbated by the isolation mechanisms being provided by cloud providers. It is also reinforced by the research community, which has also focused on technologies that ensure isolation \cite{Shieh2011seawall, Guo2010secondnet, Popa2012faircloud, Varadarajan2014scheduler}. While isolation enables tenants to reason about their perimeter, perimeter defense is widely considered insufficient~\cite{Zhang2012crossvm, Ristenpart2009heyyou, Zhang2014paasside, Wool2004, Bartal2004, Kaplan2014, thomson2014darkreading}. Attackers are largely indistinguishable from innocent parties and can rely on their relative anonymity to bypass the perimeter. Furthermore, erecting virtual perimeters in the cloud wastes an opportunity for efficiency optimizations available in a multi-tenant cloud infrastructure, especially since inter-tenant communication has been shown to be an important component in intra-cloud data center traffic~\cite{Ballani2013}. Such intra-cloud traffic will, of course, only grow as apps move onto Platform as a Service (PaaS) clouds like CloudFoundry, Heroku, and IBM BlueMix. 
The insufficient nature of static security configuration has received some attention from the research community in the form of highly programmable network infrastructures such as software-defined networking (SDN)~\cite{casado2007ethane, openflow} and network function virtualization (NFV) or software-defined middlebox infrastructure~\cite{NFV,Tennenhouse97asurvey, simple, comb, xomb, mbox_models, Gember-Jacobson2014opennf}. To date, existing research has largely focused on the systems to enable a dynamic security infrastructure, but leaves the automated use of these newly programmable infrastructures as an open topic. In this paper, we argue that the cloud needs a reputation system. Reputation systems leverage the existence of many, collaborating parties to automatically form opinions about each other. A cloud can leverage the power of the crowd---the many interacting tenants---to achieve three primary goals: (1) focus isolation-related resources on tenants that are more likely to cause problems, (2) optimize communication between behaving parties, and (3) enable a security posture that automatically adapts to the dynamicity of tenants and services entering and leaving a cloud infrastructure. In addition to the above three goals, we believe a reputation-based system can encourage a culture of self-policing. A report from Verizon~\cite{verizon} shows that a large majority of compromises are detected by a third party, not the infected party itself, as the infected software starts interacting with external services. In service-centric clouds, being able to monitor sentiment through the reputation system will allow a well-intentioned tenant to know that something is wrong (e.g., when its reputation drops). This is a missing feature in traditional, security-centric infrastructures that are based on isolation from others. We introduce Seit\footnote{Seit means reputation or renown in the Arabic language.}, a general reputation-based framework.
Seit defines simple, yet generic interfaces that can be easily integrated into different cloud management stacks. These interfaces interact with a reputation manager that maintains tenant reputations and governs introductions between different tenants. Together, the interfaces and reputation manager enable reputation-based service differentiation in a way that maintains stable reputations at each tenant and separates misbehaving tenants from well-behaved tenants. Specifically, this paper makes the following contributions: \begin{itemize} \item An \emph{architecture} that represents a practical realization of an existing reputation mechanism that is resilient to common attacks on reputation systems and adapted to the cloud's dynamically changing operating environment. We optimize that mechanism with a query interface that supports the abstraction of a continuous query. \vspace{-1ex} \item A demonstration of feasibility through a \emph{prototype} implementation and \emph{integration} of Seit across a number of popular cloud and network components, including: Floodlight~\cite{floodlight} (an SDN controller), CloudFoundry~\cite{cloudfoundry} (a Platform as a Service cloud), HAProxy~\cite{haproxy} (a load balancer), Snort~\cite{snort} (an intrusion detection system), and Nagios~\cite{nagios} (a monitoring system). \vspace{-1ex} \item A proof, through an \emph{analytical model}, that the system is able to effectively isolate bad tenants, and that the system can remain stable despite the high dynamics.
\vspace{-1ex} \item The demonstration, through an \emph{evaluation}, of the effectiveness of Seit and the cloud components we integrated, showing (1) an effective ability to protect tenants from attackers by propagating information about malicious behavior by way of reputation updates, (2) effectiveness in reducing the cost of security middlebox infrastructure without compromising security, and (3) the incentives to provide good service in a PaaS cloud where users have information about service reputation in selecting a provider. \end{itemize} The remainder of the paper is organized as follows. In Section~\ref{sec:related} we survey related work. Section~\ref{sec:motivation} highlights how a reputation-based system can benefit different cloud environments, and also identifies key design challenges. Section~\ref{sec:seit} describes the architecture of Seit. Section~\ref{sec:impl} provides Seit's implementation details. We analyze the isolation and stability properties of Seit in Section~\ref{sec:properties}. We evaluate Seit's efficacy in Section~\ref{sec:evaluation}. The paper concludes in Section~\ref{sec:conclusion}. \section{Reputation Matters} \label{sec:motivation} In this section, we elaborate on the potential benefits provided by a cloud reputation-based system, covering four different systems. Using these examples, we then identify the key design challenges in building a cloud reputation-based system. \subsection{Motivational Examples} \label{sec:examples} \nip{Reputation-augmented SDN Controller.} An IaaS cloud provides tenants with an ability to dynamically launch and terminate virtual machines (VMs). The provider's network is also becoming highly programmable using SDN approaches. SDN policies, in general, use {\em explicit} rules for managing flows (blocking some, allowing some, or directing some through a sequence of (security) middleboxes~\cite{casado2007ethane}).
A reputation-based system would extend the SDN interfaces to enable flow control using {\em implicit} rules. Instead of a tenant needing to specify, for example, specific flows to block, it could specify that flows originating from sources with low reputation scores should be blocked. This is illustrated in Figure~\ref{fig:ex_iaas}(a), where Tenant $T2$ is blocking the traffic from $T1$. Likewise, the tenant can specify a set of middlebox traversal paths that then get tied to a given reputation score. This is illustrated in Figure~\ref{fig:ex_iaas}(b), where Tenant $T3$ views $T1$ as good, so gives it a direct path; Tenant $T3$ also views $T2$ as suspect, so forces its traffic through a middlebox. \begin{figure} \centering \includegraphics[width=\columnwidth]{graphs/motivation_ex_iaas.pdf} \caption{Example of an IaaS provider using a reputation-based system} \label{fig:ex_iaas} \vspace{-0.1in} \end{figure} \nip{Reputation-augmented PaaS Brokers.} PaaS clouds offer the ability to compose applications using services provided by the platform. In some cases, a PaaS cloud provides all of the services ({\em e.g., } Microsoft Azure~\cite{windowsazure}). In other cases, platforms, such as CloudFoundry~\cite{cloudfoundry}, provide an environment where many service providers can offer their services. As illustrated in Figure~\ref{fig:ex_paas}, service consumers use a {\em broker} to (1) discover services and (2) bind to them. Service discovery, in general, implements a simple search capability, focusing on returning a one-to-one match with the needed service ({\em e.g., } version 2.6 of MongoDB). With a reputation-based system, service discovery can be enriched to include many new metrics that capture the quality of service as perceived by other users. So, if two different providers offer version 2.6 of MongoDB, then consumers can reason about which service offers better quality.
In a similar way, a reputation-based system can be useful for service providers during the binding phase, since it maintains historical information on consumers of the service. In CloudFoundry, for example, the service provider ({\em e.g., } MongoDB) is responsible for implementing multi-tenancy. Most data stores do not create separate containers (or VMs) per tenant; they simply create different table spaces for each tenant. With a reputation-based system, service providers can implement different tenant isolation primitives. An untrusted tenant is provisioned a separate container and is charged more because of the additional computing resources that are required. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{graphs/motivation_ex_paas.pdf} \caption{Example of a PaaS provider using a reputation-based system} \label{fig:ex_paas} \vspace{-0.1in} \end{figure} \nip{Reputation-based Load Balancing.} Tenants and services can directly implement a reputation-based system without explicit support from the cloud providers. The resulting system in this setup would resemble a peer-to-peer reputation-based system, where reputation is used to provide differentiated services. Figure~\ref{fig:ex_tenant} illustrates the integration of a reputation-based system into a web service, where load balancers are used to distribute client load across identical instances of the service. Typically, load balancers aim for even distribution~\cite{nginx}. With a reputation-based system, the web service can differentiate its users based on their reputation, directing good/trusted clients to a set of servers, and bad/untrusted clients to a different set of servers. \nip{Sentiment-based Self-policing.} In traditional infrastructures, the administrator has great visibility into what is happening inside the infrastructure through a variety of monitoring tools. The administrator, however, has limited visibility into how the infrastructure is viewed externally.
Major outages are obvious and are easy to detect. Other types of issues might, at best, result in an email (or other reporting mechanism) being sent to the administrator. Sentiment analysis is widely used in corporations ({\em e.g., } monitoring Twitter feeds to observe whether there is any positive or negative chatter affecting the brand~\cite{henshen2012, o2010tweets}). With a reputation-based system, a service can monitor its sentiment as perceived by its consumers. By monitoring one's sentiment, the tenant can determine whether others are having a negative experience interacting with it, then trigger a root cause analysis. This is supported by a report from Verizon~\cite{verizon} which says that in many cases, infiltrations are largely detected by external parties, not by the infected party itself. \subsection{Design Challenges} \label{sec:challenges} Reputation systems have been used in peer-to-peer systems to prevent leechers~\cite{qiu2004modeling, Kamvar2003eigentrust, Sirivianos2007dandelion, Sherman2012fairtorrent}, and in person-to-person communication systems to prevent unwanted communication~\cite{mislove2008ostra}. Applying reputation to the cloud poses unique challenges: the interactions are machine-to-machine, highly variable, and highly dynamic. In this subsection, we identify five design challenges when building a cloud reputation-based system. \nip{Integration.} Cloud components and services come in all shapes and sizes. Integrating a reputation-based system should not require substantial development effort. Similar to service life-cycle calls in PaaS clouds~\cite{cloudfoundry}, a reputation-based system must define simple, yet generic interfaces that can be easily implemented by service providers. More importantly, the interfaces should support a configurable query language that enables efficient continuous queries and feedback. \nip{Interpretation of a Reputation.} Depending on the service, reputations can be interpreted in many ways.
Here, there is a challenge in defining what constitutes good or bad, and doing so automatically ({\em i.e., } a human will not be explicitly marking something as good or bad). Furthermore, in machine-to-machine communication, an interaction is not necessarily binary (good or bad), as there is a wide range of possible interactions. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{graphs/motivation_ex_tenant.pdf} \caption{Example of a tenant using a reputation-based system} \label{fig:ex_tenant} \vspace{-0.1in} \end{figure} \nip{Isolation.} In a human-centric reputation-based system ({\em e.g., } Stack Overflow~\cite{stack-overflow}), a global user reputation is desirable. In contrast, the cloud consists of a wide variety of systems. The reputation mechanism must be effective in clustering tenants into groups ({\em e.g., } to isolate bad tenants) based on both local (tenant) view and global view. \nip{Stability.} The ability to isolate bad tenants prevents system oscillations as tenants adjust their reputations: instead, misbehaving tenants will converge to a low reputation, and other tenants to a higher reputation. Moreover, the reputation mechanism should be stable to short-term fluctuations in behavior. For instance, if a tenant accidentally misbehaves for a short time before resuming its normal behavior, it should be able to eventually recover its reputation instead of being immediately and permanently blacklisted by other tenants. \nip{Resiliency.} Finally, a reputation mechanism must be resilient to attacks on the reputation mechanism itself. In particular, an attacker may falsely build up a good reputation before launching an attack through, for example, sybils or other tenants controlled by it that say good things about the attacker.
\section{Overview} \subsection{Integration Interfaces} \label{sec:interfaces} To make Seit easy to integrate into cloud systems, we need interfaces between user components ({\em e.g., } firewalls, load balancers, network controllers, etc.) and the reputation manager. Seit includes a framework to create a shim around existing components. As shown in Figure~\ref{fig:shim}, Seit shims extend a component's existing interface with (potentially) two additional interfaces that interact with Seit's reputation manager. The two additional interfaces represent two subcomponents: inbound and outbound logic. The shim's inbound logic interprets how incoming reputation updates should impact the execution of the component. The outbound logic translates how alerts, events, and status updates from the component should impact the reputation that is sent back to the reputation manager. Both inbound and outbound logics are component-specific; some components only implement one of these two logics. Here, we provide a few examples to clarify the design and interface of shims: \begin{itemize} \item \textbf{SDN Controller in IaaS Clouds:} The IaaS network controller is what manages the physical cloud network. We assume that the network controller has an interface to set up a logical topology ({\em e.g., } place a firewall in between the external network and local network), and an interface to block traffic.\footnote{Even though the former does not fully exist yet, we believe it will as the API becomes richer and as research in the space progresses.} The shim's inbound logic will extend these capabilities to make use of reputations. For example, if reputation is less than 0, block; between 0 and 0.8, direct to a security middlebox; greater than 0.8, provide a direct connection.
\item \textbf{PaaS Broker:} The PaaS broker's responsibility is to effectively serve as a discovery mechanism for services in the cloud. With Seit, we can extend it to enrich and filter the results. Whenever a search request arrives at the broker, the shim's outbound logic would interpose on the request and query the reputation manager to get the reputations of the searched services; the inbound logic would then sort and filter the results based on user-defined criteria. For example, it may be configured to filter out any services that would have less than a 0.3 reputation for a given user, and sort the remaining results. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{graphs/shim_interface.pdf} \caption{Seit shim generic interface.} \label{fig:shim} \end{figure} \item \textbf{Load Balancer:} As mentioned earlier, a reputation-augmented load balancer can be used to provide differentiated services to trusted and untrusted users. Here, the shim's inbound logic assigns new connections to servers based on the tenant's reputation score. \item \textbf{Infrastructure Monitor:} Infrastructure monitoring tools present information about the infrastructure to the administrators. Monitoring can be used as a way to alert administrators of changes in services' reputations. This would be implemented in the shim's inbound logic. It can also be used to update the reputation of services based on monitoring information ({\em e.g., } detecting port scans by a tenant). This would be implemented in the shim's outbound logic. \item \textbf{Intrusion Detection System:} An intrusion detection system (IDS) monitors network traffic and looks for signatures within packets or performs behavioral analysis of the traffic to detect anomalies. In this case, the shim's outbound logic is designed to intercept the alerts from the IDS, and allow users to configure the feedback weights for each alert type.
For example, the shim can decrease the tenant's reputation by 0.1 when seeing a connection drop alert. Similarly, it can decrease the reputation by 0.5 when seeing a port scan alert. \end{itemize} The above discussion presented only a few examples. We envision all components being integrated with Seit to provide feedback. For simplicity, the above discussion also focused on negative feedback. Positive feedback is an important aspect as well. Positive feedback might be time-based, packet-based, or connection-based. For example, a web server might provide positive feedback when it goes through an entire session with well-formed HTTP requests. An IDS might provide positive feedback for every megabyte of traffic that does not trigger an alert. \section{Analysis of Seit's Isolation and Stability} \label{sec:properties} Using IBR allows Seit to both separate misbehaving tenants from well-behaved tenants and maintain stable reputations at each tenant. In this section, we formally show that these properties hold. We consider an IBR system with $N$ tenants, each of whom desires services from other tenants and can provide some services in return.\footnote{Different tenants may provide different services; our analysis is agnostic to the type of the service.} We consider a series of discrete timeslots $t = 0,1,2,\ldots$ and use $q_{ij}[t] \in \left[-1,1\right]$ to denote tenant $i$'s feedback on the services provided by tenant $j$ to $i$ at time $t$. This feedback may include both the received fraction of tenant $i$'s requested service (e.g., 3GB out of a requested 5GB of SQL storage) and whether the service was useful (e.g., sending malicious packets). We let $R_{ij}[t] \in [0,1]$ denote tenant $j$'s reputation score at tenant $i$ during timeslot $t$. Tenants update their reputation scores in every timeslot according to feedback from the previous timeslot, as described in Section \ref{sec:ibr}.
We suppose that these updates are linear in the feedback received, and define $q_{ij}^{\rm ibr}[t]$ as a weighted average of the feedback $q_{lk}[t]$ provided by all tenant pairs that contribute to $j$'s reputation at tenant $i$ (e.g., including tenants that $j$ introduced to $i$). The reputation dynamics then follow \begin{equation} R_{ij}[t + 1] = \max\left\{(1 - \alpha)R_{ij}[t] + \alpha q_{ij}^{\rm ibr}[t], 0\right\} \label{eq:ibr} \end{equation} where $\alpha\in (0,1)$ is a parameter chosen by tenant $i$. A larger $\alpha$ allows the reputations to evolve more quickly. \nip{Isolation of Misbehaving Tenants.} In typical scenarios, tenants will likely act based on their reputation scores of other tenants: for instance, tenant $i$ would likely provide better service to tenants with higher reputation scores. We can approximate this behavior by supposing that the service provided by tenant $j$ to tenant $i$ (and thus $i$'s feedback $q_{ij}[t]$ on $j$'s service) is proportional to $i$'s reputation score at $j$: $q_{ij}[t] = \pm R_{ji}[t]$, where the sign of $q_{ij}[t]$ is fixed and determined by whether tenant $j$ is a ``good'' or ``bad'' tenant. The reputation dynamics (\ref{eq:ibr}) are then linear in the $R_{ij}$, allowing us to determine the equilibrium reputations: \begin{prop}[Equilibrium Reputations] \label{prop:equilibria} Equilibria of (\ref{eq:ibr}) occur when, for each pair of tenants $(i,j)$, either $q_{ij}^{\rm ibr}[t] = R_{ij}[t]$ or $R_{ij}[t] = 0$ and $q_{ij}^{\rm ibr}[t] < 0$. These equilibria are Lyapunov-stable, and the system converges to this equilibrium. \end{prop} \begin{proof} We can find the equilibria specified by solving (\ref{eq:ibr}) with $q_{ij}[t] = \pm R_{ji}[t]$. To see that the equilibria are Lyapunov-stable, we note that (\ref{eq:ibr}) can be written as $R[t + 1] = \Sigma R[t]$, where $R[t]$ is a vector of the $R_{ij}[t]$ and $\Sigma$ a constant matrix. It therefore suffices to show that $\Sigma$ has no eigenvalue larger than 1. 
We now write $\Sigma = (1 - \alpha)I_{2N} + \alpha\Sigma_1$, where each row of $\Sigma_1$ sums to 1 since $q_{ij}^{\rm ibr}$ is a weighted average. Thus, the maximum eigenvalue of $\alpha\Sigma_1$ is $\alpha$, and that of $\Sigma$ is $(1 - \alpha) + \alpha = 1$. Since linear systems either diverge or converge to an equilibrium and the $R_{ij}$ are bounded, the system must converge to this (unique) equilibrium. \end{proof} This result shows that equilibria are reached when tenants agree with each other's reputations: the overall feedback $q_{ij}^{\rm ibr}[t]$ that tenant $i$ receives from tenant $j$ is consistent with tenant $j$'s reputation at tenant $i$. We can interpret Prop. \ref{prop:equilibria}'s result as showing that at the equilibrium, tenants segregate into two different groups: one group of ``bad'' tenants who provide bad-quality service and have zero reputation, receiving no service; and one group of ``good'' tenants with positive reputations, who receive a positive amount of service. Thus, tenants may experience a desirable ``race to the top:'' good tenants will receive good service from other good tenants, incentivizing them to provide even better service to these tenants. Bad tenants experience an analogous ``race to the bottom.'' We illustrate these findings in Figure \ref{fig:reputations}, which simulates the behavior of 100 tenants, 10 of which are assumed to be malicious $\left(q_{ij}[t] = -1\right)$. Tenants' reputations are assumed to be specified as in (\ref{eq:ibr}) and are randomly initialized between 0 and 1, with $\alpha = 0.1$. The figure shows the average reputation over time of both bad and good tenants. We see that good tenants consistently maintain high reputations at good tenants, while bad tenants quickly gain bad reputations at all tenants.
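The simulation just described is straightforward to reproduce. The sketch below is a minimal version of it, under the simplifying assumptions that $q_{ij}^{\rm ibr}[t]$ is approximated by the direct feedback (the introduction-based weighting is ignored), with $q_{ij}[t] = -1$ when $j$ is malicious, as in the figure, and $q_{ij}[t] = R_{ji}[t]$ otherwise (the proportional-service model above); the tenant counts, random initialization, and $\alpha = 0.1$ follow the description in the text.

```python
import numpy as np

def simulate_reputations(n=100, n_bad=10, alpha=0.1, steps=300, seed=0):
    """Iterate R_ij[t+1] = max((1-alpha)*R_ij[t] + alpha*q_ij[t], 0), with
    q_ij[t] = -1 for malicious j and q_ij[t] = R_ji[t] for good j."""
    rng = np.random.default_rng(seed)
    R = rng.random((n, n))           # R[i, j]: tenant j's reputation at tenant i
    np.fill_diagonal(R, 0.0)         # self-reputation is not meaningful
    bad = np.zeros(n, dtype=bool)
    bad[:n_bad] = True               # the first n_bad tenants are malicious
    for _ in range(steps):
        q = np.where(bad[None, :], -1.0, R.T)  # q[i, j]: feedback of i on j
        R = np.maximum((1.0 - alpha) * R + alpha * q, 0.0)
        np.fill_diagonal(R, 0.0)
    return R[n_bad:, n_bad:].mean(), R[:, :n_bad].mean()
```

With these parameters, the good-good reputations settle near the averages of their initial values, while each malicious tenant's reputation hits zero within a handful of steps, matching the qualitative behavior reported for Figure~\ref{fig:reputations}.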
\begin{figure} \centering \includegraphics[width = 0.45\textwidth]{Figures/ReputationsRand_NoIntros} \vspace{-0.05in} \caption{Average reputations for ``good'' and ``bad'' tenants over time.} \label{fig:reputations} \vspace{-0.1in} \end{figure} \nip{Stability.} While the analysis above considers binary ``bad'' and ``good'' tenants, some ``bad'' misbehaviors are not always malicious. For instance, tenants may occasionally send misconfigured packets by accident. Such tenants should be able to recover their reputations over time, instead of being immediately blacklisted at the client. Conversely, if malicious tenants occasionally send useful traffic in order to confuse their targets, they should not be able to improve their reputations permanently. We now show that this is the case: \begin{prop}[Reputation Stability] \label{prop:finite} Let $\left\{R_{ij}[t],t \geq 0\right\}$ and $\left\{R'_{ij}[t],t \geq 0\right\}$ respectively denote the reputation scores given the feedback $q_{ij}^{\rm ibr}[t]$ and ${q_{ij}'}^{\rm ibr}[t]$. Suppose $q_{ij}^{\rm ibr}[0] \neq {q_{ij}'}^{\rm ibr}[0]$ and $q_{ij}[t] = q_{ij}'[t]$ for $t > 0$. Then $\lim_{t\rightarrow\infty} \left|R_{ij}[t] - R_{ij}'[t]\right| = 0$. \end{prop} The proof follows directly from (\ref{eq:ibr}). If tenants misbehave temporarily ($q_{ij}^{\rm ibr}[0] < 0$ and ${q_{ij}'}^{\rm ibr}[0] > 0$ but $q_{ij}^{\rm ibr}[t] > 0$ for $t > 0$), the effect of this initial misbehavior on their reputations disappears over time. \section{Related Work} \label{sec:related} Seit builds on past research in cloud systems and reputation systems. Here, we discuss these works. \nip{Cloud Systems for Inter-tenant Communication.} Communication within clouds has largely focused on isolation mechanisms between tenants. A few systems focused explicitly on handling inter-tenant communication.
In particular, CloudPolice~\cite{Popa2010cloudpolice} implements a hypervisor-based access control mechanism, and Hadrian~\cite{Ballani2013} proposed a new network sharing framework which revisited the guarantees given to tenants. Seit is largely orthogonal to these. Most closely related to Seit is Jobber~\cite{jobber}, which proposed using reputation systems in the cloud. With Seit, we provide a practical implementation, demonstrate stability and isolation, integrate with real cloud components, and provide a full evaluation. \nip{Reputation for Determining Communication.} Leveraging reputation has been explored in many areas to create robust, trustworthy systems. For example, Introduction-Based Routing (IBR) creates incentives to separate misbehaving network participants by leveraging implicit trust relationships and per-node discretion~\cite{Frazier2011}. With Seit, we leverage IBR in a practical implementation for a cloud system and extend it to support more dynamic use. Ostra~\cite{mislove2008ostra} studied the use of trust relationships among users, which already exist in many applications (namely, social networks), to thwart unwanted communication. Ostra's credit scheme keeps the overall credit balance unchanged at all times so that malicious, colluding users can pass credits only between themselves, protecting against sybils. SybilGuard~\cite{Yu2008SybilGuard} also uses social networks to identify a user with multiple identities. At a high level, Ostra and Seit have some similarity, namely in the objective of thwarting unwanted communication. The biggest difference is that Ostra was designed for person-to-person communication (email, social networks, etc.), whereas Seit is designed for machine-to-machine communication. This then impacts how communication is handled. In Ostra, communication is either wanted or unwanted; unwanted communication is blocked. Seit goes beyond simply being able to block certain users and allows for a variety of responses.
Further, being machine-to-machine as opposed to being person-to-person impacts how feedback is handled. In Ostra, feedback was explicit, based on human input, whereas Seit integrates with a variety of systems that provide implicit feedback of varying strength. \nip{Reputation for Peer-to-peer Systems.} In peer-to-peer systems, reputation-based approaches involving other participants are used to make better decisions about cooperation, such as preventing free-riders and filtering inauthentic file pieces. For example, EigenTrust~\cite{Kamvar2003eigentrust} aims to filter inauthentic file pieces by maintaining a unique global reputation score of each peer based on the peer's history of uploads. Dandelion~\cite{Sirivianos2007dandelion}, PledgeRouter~\cite{Landa2009Sybil}, FairTorrent~\cite{Sherman2012fairtorrent}, and the one-hop reputation system~\cite{Piatek2008Onehop} aim at minimizing the overhead of maintaining reputation scores across peers either by placing a central trusted server or by limiting the scope of reputation calculations. Seit targets an environment where there is not a direct give-and-take relationship; as such, we leverage a richer reputation mechanism that maintains a graph of all user interactions and uses it to determine local reputation for each user.
\section{Introduction} Mathematical modelling of damping in materials is a classical problem in structural dynamics, and not a fully solved one. The high dimensionality of structural finite element models combines with the non-analyticity of physically realistic damping models to produce nontrivial numerical challenges in dynamic simulation. This paper makes two contributions in this important area. The first contribution, which is in the area of simple but effective numerical integration, leads to the second contribution, which is in data-driven model order reduction. Although the linear viscous damping model is mathematically simple and numerically convenient, it is not really correct. It has been experimentally known for a long time that, for many materials subjected to periodic loading, the internal dissipation per cycle is frequency-independent \cite{kimball1927internal}. In the traditional linear viscous damping model the dissipation per cycle is proportional to frequency. Hysteretic dissipation, which is a rate-independent \cite{muravskii2004frequency} mechanism, is preferred by many structural dynamics researchers because it is more realistic. However, hysteresis involves nonanalytic behavior with slope changes at every reversal of loading direction. Numerical integration for structural dynamics with hysteretic damping needs greater care than with linear viscous damping. The difficulty grows greatly with finite element models wherein mesh refinement leads to high structural frequencies which require tiny time steps; and wherein the nonanalytic hysteretic damping is finely resolved in space as well. In this paper we first consider time integration of the vibration response for beams\footnote{Our approach extends directly to frames modeled using beam elements.
The approach may eventually be generalized to two or three dimensional elements.} with distributed hysteretic damping, and then consider model order reduction by projecting the dynamics onto a few vibration modes. Model order reduction is not trivial in this case because the distributed hysteresis needs to be projected onto lower dimensions as well. Initial numerical solutions of the full system, using a {\em semi-implicit} integration scheme developed in the first part of this paper, will be used in the second part to construct accurate lower order models with hysteretic damping. \subsection{Explicit, implicit, and semi-implicit integration}\label{explicit_implicit_def} A key idea in numerical integration of ordinary differential equations (ODEs) is summarized here for a general reader although it is known to experts. We consider ODE systems written in first order form, $\dot \by = \bsy{f}(\by,t)$, where $\by$ is a state vector and $t$ is time. In single-step methods, which we consider in this paper, we march forward in time using some algorithm equivalent to \begin{equation} \label{eqdef} \by(t+h)=\by(t)+h \cdot \bsy{H}(t, h, \by(t), \by(t+h)). \end{equation} The specific form of $\bsy{H}$ above is derived from the form of $\bsy{f}(\by,t)$ and depends on the integration algorithm. The actual evaluation of $\bsy{H}$ may involve one or multiple steps, but that is irrelevant: the method is single-step in time. For smooth systems, $\bsy{H}$ is guided by the first few terms in Taylor expansions of various quantities about points of interest in the current time step. In such cases, as $h \rightarrow 0$, the error in the computed solution goes to zero like $h^m$ for some $m>0$. If $m>1$, the convergence is called superlinear. If $m=2$, the convergence is called quadratic. Values of $m>2$ are easily possible for smooth systems with moderate numbers of variables: see, e.g., the well known Runge-Kutta methods \cite{chapra2011}. 
For large structural systems, the difficulty is that for the $h^m$ scaling to hold, the time step $h$ may need to be impractically small. For example, the highest natural frequency of a refined finite element (FE) model for a structure may be, say, $10^6$ Hz. This structure may be forced at, say, 10 Hz. If a numerical integration algorithm requires time steps that resolve the highest frequency in the structure, i.e., time steps much smaller than $10^{-6}$ seconds in this example, then that algorithm is impractical for the problem at hand. A high order of convergence that only holds for time steps that are smaller than $10^{-6}$ seconds is of little use. We wish to obtain accurate results with time steps much smaller than the forcing period, e.g., $10^{-2}$ or $10^{-3}$ seconds. To develop such practically useful algorithms, we must consider the stability of the numerical solution for larger values of $h$, i.e., $10^{-6} < h < 10^{-2}$, say. To that end, we consider the nature of $\bsy{H}$. If $\bsy{H}$ does not depend explicitly on $\by(t+h)$, the algorithm is called explicit. Otherwise it is called implicit. If the dynamics is nonlinear, $\bsy{H}$ is usually a nonlinear function of its $\by$-arguments. Then each implicit integration step requires iterative solution for $\by(t+h)$. For linear dynamics, with $\bsy{H}$ linear in its $\by$-arguments, the $\by(t+h)$ can be moved over to the left hand side and the integration step can proceed without iteration, although usually with matrix inversion. The algorithm is still called implicit in such cases: implicitness and iteration are not the same thing. Finally, if we begin with an implicit scheme in mind but drop some of the troublesome nonlinear $\by(t+h)$ terms from $\bsy{H}$ or approximate them in some ``explicit'' way while retaining other $\by(t+h)$ terms within $\bsy{H}$, then we call the algorithm semi-implicit. The specific algorithms discussed later in this paper will illustrate these issues.
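These notions can be illustrated with the scalar test equation $\dot y = -\lambda y$, where a large $\lambda$ plays the role of the highest structural frequency. The following sketch (a standard textbook comparison, not an algorithm from this paper) shows that with a time step $h \gg 1/\lambda$ the explicit (forward) Euler step blows up, while the implicit (backward) Euler step remains stable:

```python
def forward_euler(lam, h, y0=1.0, steps=50):
    # Explicit: H depends only on y(t), so y(t+h) = (1 - h*lam) * y(t).
    y = y0
    for _ in range(steps):
        y = (1.0 - h * lam) * y
    return y

def backward_euler(lam, h, y0=1.0, steps=50):
    # Implicit: y(t+h) = y(t) - h*lam*y(t+h), solved as y(t+h) = y(t)/(1 + h*lam).
    y = y0
    for _ in range(steps):
        y = y / (1.0 + h * lam)
    return y

lam, h = 1.0e6, 1.0e-3          # stiff rate; step far larger than 1/lam
y_exp = forward_euler(lam, h)   # amplification |1 - h*lam| = 999 > 1: blows up
y_imp = backward_euler(lam, h)  # amplification 1/(1 + h*lam) < 1: decays to 0
```

For linear dynamics the implicit step needs no iteration, only the division above (a matrix solve in the multi-dimensional case), which is one reason implicit and semi-implicit methods are attractive for stiff structural models.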
\subsection{Contribution of this paper} In this paper we present a semi-implicit approach that can be used for high dimensional finite element models with distributed hysteresis. For simplicity, we choose the widely used Bouc-Wen model \cite{bouc1967forced,wen1976method,sivaselvan2000hysteretic,kottari2014consistent,%
charalampakis2008identification,yar1987parameter,charalampakis2009bouc,visintin2013differential} as the damping mechanism in our study. After presenting and validating our numerical integration method, we present a new way to obtain useful lower order models for the structure, starting from a refined FE model. A key issue in model order reduction is that the refined or full FE model computes hysteretic variables at a large number of Gauss points in the structure, and a smaller subset needs to be selected systematically for the model order reduction to be practical. Thus, the contribution of the present paper is twofold. First, we present a simple semi-implicit algorithm for a structural FE model with distributed hysteresis and demonstrate its convergence and utility. Second, we use this algorithm to compute some responses of the structure and use those responses to construct accurate lower order models with reduced numbers of both vibration modes and hysteretic variables. These lower order models can be used for quick subsequent simulations of the structural dynamics under similar, but not identical, initial conditions or loading. \subsection{Representative literature review} We now briefly review some numerical integration methods that are available in popular software or research papers. We first observe that structural systems with Bouc-Wen hysteresis continue to be studied in research papers using algorithms that are not as efficient as the one we will present in this paper.
For example, as recently as 2019, Vaiana et al.\ \cite{vaiana2019nonlinear} considered a lumped-parameter model and used an explicit time integration method from Chang's family \cite{chang2010new}. That method requires extremely small time steps: the authors used {\em one hundredth} of the time period of the highest structural mode. Such small time steps are impractical for highly refined FE models. Our algorithm will allow much larger time steps. Thus, in the specific area of FE models with distributed simple hysteresis, we will make a useful contribution in this paper. Next, we acknowledge that the commercial finite element software Abaqus \cite{abaqus2011abaqus} allows users to specify fairly complex material responses and also to choose between explicit and implicit numerical integration options. For many nonlinear and nonsmooth dynamic problems, explicit integration needs to be used in Abaqus. As outlined above, implicit or semi-implicit algorithms can be useful for somewhat simpler material modeling and in-house FE codes. Considering general purpose software for numerical work, many academic researchers dealing with hysteresis may begin with the popular software package Matlab. Matlab's \cite{matlab2010} built-in function \texttt{ode15s} is designed for stiff systems but not specifically for systems with hysteresis. We have found that \texttt{ode15s} can handle ODEs from FE models with a modest number of elements and with hysteresis, but it struggles with higher numbers of elements because its adaptive time steps become too small. For those programming their own integration routines in structural dynamics, the well known Newmark method \cite{newmark1959method} from 1959 remains popular (see, e.g., \cite{bathe2007conserving,lee2017new}), although it cannot guarantee stability and may require extremely small time steps as noted in, e.g., \cite{hilber1977improved}.
In that paper \cite{hilber1977improved} of 1977, Hilber, Hughes and Taylor modified the Newmark method and obtained unconditional stability for linear structural problems. However, this highly popular method (known as HHT-$\alpha$) shows slow dissipation of higher modes even without hysteretic damping. An appreciation of the issues faced for full three-dimensional (3D) simulation with hysteretic damping can be gained, e.g., from the work of Triantafyllou and Koumousis \cite{triantafyllou2014hysteretic}. Their formulation is actually based on plasticity in 3D; they compare their solutions with those from Abaqus; and they include dynamics through the Newmark algorithm. Their algorithm is rather advanced for a typical analyst to implement quickly. Note that our present application is easier than theirs because we have only one-dimensional Bouc-Wen hysteresis. Additionally, our primary interest is in model order reduction. And so we develop for our current use a numerical integration approach that is simpler than that in \cite{triantafyllou2014hysteretic}. Finally, readers interested in hybrid simulations (with an experimental structure in the loop) may see, e.g., Mosqueda and Ahmadizadeh \cite{mosqueda2007combined,mosqueda2011iterative} who used a modified version of the HHT-$\alpha$ for calculating the seismic response of a multi-degree-of-freedom structure. Purely simulation-based studies such as the present one can hopefully provide theoretical inputs into planning for such hybrid simulations in future work. \subsection{Our present approach} We will begin with an existing implicit scheme for linear structural dynamics without hysteresis that is both simple and significantly superior to both the Newmark and HHT-$\alpha$ methods. 
In 1995, Pich\'e \cite{piche1995stable} presented an L-stable method from the Rosenbrock family (see \cite{wanner1996solving} and references therein) which is stable, implicit without iteration for linear structures, and second order accurate. For smooth nonlinear systems of the form $$\mbf{M} \ddot{\bsy{x}} + \bsy{f}(\bsy{x}, \dot{\bsy{x}}) = \bsy{0},$$ Pich\'e suggests a one-time linearization of $\bsy{f}$ at the start of each time step, leading to a semi-implicit algorithm. Here we extend Pich\'e's formulation to include hysteretic damping in the otherwise-linear structural dynamics. In our method the stiffness and inertia terms are integrated implicitly (without iteration, being linear) and the nonsmooth hysteresis is monitored at Gauss points \cite{maiti2018vibrations} and treated with explicit time-integration. Thus, overall, our method is semi-implicit. Our proposed approach, when applied to a refined FE model of an Euler-Bernoulli beam with distributed Bouc-Wen hysteretic damping, is easy to implement and can be dramatically faster than Matlab's \texttt{ode15s}. In fact, it continues to work well beyond refinement levels where \texttt{ode15s} stops working. We emphasize that \texttt{ode15s} is not really designed for such systems. We use it here because it has adaptive step sizing and error estimation: when it does work, it can be highly accurate. For this reason, to examine the performance of our algorithm for modest numbers of elements, we will use results obtained from \texttt{ode15s} with a tight error tolerance. With higher numbers of elements, \texttt{ode15s} fails to run because the time steps required become too small. Having shown the utility of our proposed semi-implicit integration algorithm, we will turn to the second contribution of this paper, namely model order reduction. Such Reduced Order Models (ROMs) can save much computational effort.
In the physical system, if only a few lower modes are excited, the displacement vector can be approximated as a time-varying linear combination of those modes only. The challenge here is that the number of Gauss points used for the hysteresis needs to be reduced too, and that is where we offer a significant contribution. Note that modal reduction, without the added complication of hysteretic damping, is well known. For example, Stringer et al.\ \cite{stringer2011modal,samantaray2009steady} successfully applied modal reduction to rotor systems, including ones with gyroscopic effects. Other recent examples in dynamics and vibrations can be seen in \cite{bhattacharyya2021energy,expbhattachcusumano2021}. Proper Orthogonal Decomposition (POD) \cite{chatterjee2000introduction} based reduced order models are often used in fluid mechanics: a representative sample may be found in \cite{sengupta2015enstrophy,clark2015developing,berkooz1993proper,holmes1997low}. Finally, in work more directly related to ours, for an Euler-Bernoulli beam with hysteretic dissipation, Maiti et al.\ \cite{maiti2018vibrations} used the first few undamped modes to expand the solution and performed the virtual work integration using a few Gauss points chosen over the full domain. However, they used a different hysteresis model motivated by distributed microcracks. Furthermore, in our finite element model, virtual work integrations are performed over individual elements and not the whole domain. Consequently, the number of Gauss points in the model increases with mesh refinement. When we project the response onto a few modes, we must reduce the number of Gauss points retained as well, and a practical method of doing this is the second contribution of this paper.
In what follows, we present the finite element model in section \ref{fe_formulation}, outline the numerical integration algorithm in section \ref{SE_scheme}, develop the approach for model order reduction in section \ref{ROM_sec}, and present our results in section \ref{results}. \section{Governing equations and Finite Element formulation}\label{fe_formulation} \begin{figure}[ht!] \centering \includegraphics[scale=0.5]{fig1.eps} \caption{A cantilever beam.} \label{beam_fig1} \end{figure} The governing equation of the deflection $ {u}({x},{t}) $ of a cantilever beam (shown in Fig.\ \ref{beam_fig1}) with a dissipative bending moment $ M_{\trm{d}}=\gamma_{\trm{h}} z(x,t) $ is \begin{equation}\label{eq_gov} \rho A \pddf{{u}}{{t}}+\pddxtwo{{x}}\left(E I \pddf{ {u}}{{x}}+{\gamma}_{\trm{h}} z\right)=0, \end{equation} where the beam's material density, cross section and flexural rigidity are $\rho$, $A$ and $EI$ respectively. The parameter $ \gamma_{\trm{h}} $ is the strength of hysteretic dissipation. The hysteretic damping variable $ z $ is defined at each $ x $-location along the beam length, is governed pointwise by the Bouc-Wen model, and is driven pointwise by the local curvature $$ {\chi}({x},{t})\approx \pddf{{u}}{{x}}$$ in the governing equation \begin{equation}\label{Bouc_wen} \dot{z}=\left({\bar{A}}-{\alpha}\: \trm{sign}\left(\dot{\chi} \,z\right) \abs{z}^{n_{\trm{h}}}-{\beta} \abs{z}^{n_{\trm{h}}}\right)\dot{\chi} \end{equation} where the Bouc-Wen model parameters satisfy \begin{equation} \label{BWp} \bar{A}>0,\,\alpha>0,\,\beta\in (-\alpha,\alpha)\:\trm{and}\:n_{\trm{h}}>0. \end{equation} The parameters in Eq.\ (\ref{BWp}) and the hysteretic variable $z$ are dimensionless. The dissipation strength parameter $\gamma_{\trm{h}}$ has units of Nm (i.e., units of moments). 
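The Bouc-Wen evolution equation above is straightforward to express in code. The sketch below (the function name is ours) uses as defaults the parameter values chosen later in section \ref{parameter_choose}:

```python
import math

def bouc_wen_rate(z, chi_dot, Abar=0.065, alpha=0.8, beta=0.5, n_h=0.5):
    """Right-hand side of the Bouc-Wen evolution equation for z.

    Defaults follow the parameter values chosen later in the paper;
    sign(0) is taken as 0, a common convention.
    """
    sgn = math.copysign(1.0, chi_dot * z) if chi_dot * z != 0.0 else 0.0
    zn = abs(z) ** n_h
    return (Abar - alpha * sgn * zn - beta * zn) * chi_dot

# From z = 0 the response is initially linear in the curvature rate:
rate0 = bouc_wen_rate(0.0, 1.0)          # equals Abar
# The slope differs between loading (chi_dot*z > 0) and unloading,
# which is the source of the hysteresis loop:
rate_load = bouc_wen_rate(0.001, 1.0)
rate_unload = bouc_wen_rate(0.001, -1.0)
```

Because $n_{\trm{h}} = 0.5 < 1$, note that the right hand side is not Lipschitz in $z$ at $z = 0$, a first hint of the nonsmoothness that shapes the numerical treatment later.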
The FE model involves beam elements for displacement variables, virtual work calculations for the hysteretic moment through domain integrals based on Gauss points within each element, and ODE solution in time. We use the Galerkin method wherein we look for an admissible solution $\hat{u}({x},t)$ so that the residual, \begin{equation} \mcl{R}(\hat{u})=\rho A \pddf{\hat{u}}{t}+\pddxtwo{x}\left(E I \pddf{\hat{u}}{x}+\gamma_{\trm{h}} z\right), \end{equation} is orthogonal to basis functions for the space of $\hat{u}$. Mathematically, \begin{equation}\label{galerkin} <\mcl{R}(\hat{u}),\phi_i> \, =\int_{\Omega}\mcl{R}(\hat{u})\phi_i({x}) \dd \Omega=0, \end{equation} where $$ <f_1,f_2> \, = \int_{\Omega} f_1({x}) f_2({x}) \dd \Omega,$$ and $\phi_i$ is the $i^{\rm th}$ basis function. \subsection{Element matrices and virtual work integrals} The elemental stiffness and inertia matrices are routine and given in many textbooks: each beam element has two nodes, and each node has one translation and one rotation. The hysteresis variable $z$ is a scalar quantity distributed in space. However, it enters the discretized equations through integrals, and so $z$ values only need to be known at Gauss points within each element. The evolution of $z$ at the Gauss points is computed using Eq.\ (\ref{Bouc_wen}), and is thus driven by displacement variables. Some further details are given in appendix \ref{appB}. 
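For completeness, the standard ingredients can be sketched in a few lines, assuming the usual cubic Hermite interpolation on each element (this is textbook material, not a novel part of the formulation; names are ours):

```python
import numpy as np

def beam_element_stiffness(EI, L):
    # Standard Euler-Bernoulli beam element (2 nodes, one translation and
    # one rotation per node); see any FE textbook.
    return (EI / L**3) * np.array([
        [ 12.0,   6*L, -12.0,   6*L],
        [  6*L, 4*L*L,  -6*L, 2*L*L],
        [-12.0,  -6*L,  12.0,  -6*L],
        [  6*L, 2*L*L,  -6*L, 4*L*L]])

def curvature_row(L, xi):
    # Second derivatives of the cubic Hermite shape functions at local
    # coordinate xi in [0, 1]: one row of the matrix mapping nodal
    # displacements/rotations to curvature at a Gauss point.
    return np.array([
        (12*xi - 6) / L**2,
        (6*xi - 4) / L,
        (6 - 12*xi) / L**2,
        (6*xi - 2) / L])
```

As a consistency check, integrating the outer product of `curvature_row` with itself (times $EI$) over the element with two-point Gauss quadrature reproduces `beam_element_stiffness` exactly, since the integrand is quadratic in the local coordinate.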
\subsection{Global equations} After assembly we obtain a system of equations of the form \begin{subequations} \begin{equation}\label{full_model} \underset{\trm{Mass}}{\mbf{M}\ddot{\bsy{q}}}+\underset{\trm{Stiffness}}{\mbf{K}\bsy{q}}+\underset{\trm{Hysteresis}}{\mbf{A}\bsy{z}}=\underset{\trm{Forcing}}{\bsy{f}_0(t)},\,\, \mbf{M}, \mbf{K} \in \mbb{R}^{N\times N}, \, \bsy{q}\in \mbb{R}^N,\;\mbf{A}\in \mbb{R}^{N\times n_{\trm{g}} n_{\trm{e}}}, \, \bsy{z}\in\mbb{R}^{n_{\trm{g}} n_{\trm{e}}}, \end{equation} \begin{equation}\label{full_model_zdot} \dot{\bsy{z}}=(\bar{A}-\alpha\: \trm{sign}(\dot{\bsy{\chi}}\circ\bsy{z})\circ\abs{\boldsymbol{z}}^{\circ\, n_{\trm{h}}}-\beta \abs{\bsy{z}}^{\circ\, n_{\trm{h}}})\circ\dot{\boldsymbol{\chi}},\qquad \boldsymbol{\chi}=\mbf{B} \boldsymbol{q},\;\mbf{B}\in \mbb{R}^{n_{\trm{e}} n_{\trm{g}}\times N}, \end{equation} \end{subequations} where the different symbols mean the following: \begin{enumerate} \item $\bsy{q}$ is a column vector of nodal displacements and rotations for $n_{\trm{e}}$ beam elements, with $N = 2 n_{\trm{e}}$ for a cantilever beam, \item $\mbf{M}$ and $\mbf{K}$ are mass and stiffness matrices of size $2\,n_{\trm{e}} \times 2\,n_{\trm{e}}$, \item $\bsy{z}$ is a column vector of length $n_{\trm{g}} n_{\trm{e}}$ from $n_{\trm{g}}$ Gauss points per element, \item $(\cdot \circ \cdot)$ and $(\cdot)^{\circ\, (\cdot)}$ denote elementwise multiplication and exponentiation, \item $\mbf{A}$ is a matrix of weights used to convert $\bsy{z}$ values into virtual work integrals, \item $\bsy{\chi}$ is a column vector of curvatures at the Gauss points, \item $\mbf{B}$ maps nodal displacements and rotations $\bsy{q}$ to curvatures $\bsy{\chi}$ at the Gauss points, and \item $\bsy{f}_0(t)$ incorporates applied forces. \end{enumerate} In Eq.\ (\ref{full_model}) above we can include viscous damping by adding $\mbf{C}\dot{\bsy{q}}$, for some suitable $\mbf{C}$, on the left hand side. 
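For modest $n_{\trm{e}}$, the assembled system above can be written in first order form with state $[\bsy{q},\dot{\bsy{q}},\bsy{z}]$ and handed to a general-purpose stiff solver such as \texttt{ode15s}. A Python sketch of that packing (dense algebra for brevity; names are illustrative):

```python
import numpy as np

def make_rhs(M, K, A, B, f0, bw):
    """First-order form of the global equations: state y = [q, q_dot, z].

    bw = (Abar, alpha, beta, n_h); M, K, A, B as in the text. A sketch for
    feeding the assembled system to a generic stiff ODE solver."""
    N = M.shape[0]
    Minv = np.linalg.inv(M)          # fine for small N; factorize for large N
    Abar, alpha, beta, n_h = bw
    def rhs(t, y):
        q, qdot, z = y[:N], y[N:2*N], y[2*N:]
        chidot = B @ qdot                      # curvature rates at Gauss points
        zn = np.abs(z) ** n_h
        zdot = (Abar - alpha*np.sign(chidot*z)*zn - beta*zn) * chidot
        qddot = Minv @ (f0(t) - K @ q - A @ z) # structural accelerations
        return np.concatenate([qdot, qddot, zdot])
    return rhs
```

As the text notes, such generic stiff solvers become impractical as the mesh is refined, which motivates the dedicated scheme of the next section.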
\section{Time integration} \label{SE_scheme} We will develop a semi-implicit method adapted from Pich\'e's \cite{piche1995stable} work. Equation (\ref{full_model}) has a structural part (stiffness and inertia) and a hysteresis part. The structural part is integrated implicitly (section \ref{implicit_integration}) and the hysteresis part marches in time, following an explicit algorithm which takes care of the nonsmooth slope changes due to zero-crossing of the time derivative of the curvature (section \ref{explicit_integration}). In section \ref{ROM_sec}, our semi-implicit algorithm will be used to generate a large amount of data which will be used to construct reduced order models. \subsection{Implicit integration for the structural part} \label{implicit_integration} Pich\'e's algorithm uses the numerical parameter $\gamma=1-\frac{1}{\sqrt{2}}$; this symbol should not be confused with Euler's constant. This subsection presents implicit integration for the structural part alone; the hysteretic variable vector $\bsy{z}$ is not integrated in an implicit step, a compromise that simplifies the algorithm greatly. We now proceed as follows. \begin{enumerate} \item Define \[\bsy{z}(t_0)=\bsy{z}_0,\quad \bsy{F}_0=-\mbf{A}\bsy{z}_0+\bsy{f}_0(t_0),\quad \by_0=\bsy{q}(t_0),\quad \bv_0=\dot{\bsy{q}}(t_0) ,\quad \dot{\bsy{\chi}}_0=\mbf{B}\bv_0,\] \[\dot{\bsy{z}}_0= \left(\bar{A}-\alpha\,\trm{sign}(\dot{\bsy{\chi}}_0\circ\bsy{z}_0)\circ \abs{\bsy{z}_0}^{\circ n_{\trm{h}}}-\beta \abs{\bsy{z}_0}^{\circ n_{\trm{h}}} \right)\circ \dot{\bsy{\chi}}_0, \quad \dot{\bsy{F}}_0=-\mbf{A}\dot{\bsy{z}}_0+\ddx{t}\bsy{f}_0(t)\bigg\rvert_{t=t_0},\] \[\bsy{r}_0=\mbf{K}\by_0 + \mbf{C}\bv_0 \quad \trm{(the term }\mbf{C}\bv_0 \trm{ is dropped if linear viscous damping is not included)}. \] \item Define \[\tilde{\mbf{M}}=\mbf{M}+\gamma h\mbf{C}+(\gamma h)^2\mbf{K}.
\] \item Define (first stage) \[\tilde{\bsy{e}}=h{\tilde{\mbf{M}}}^{-1}\left(\bsy{F}_0-\bsy{r}_0+h\gamma(\dot{\bsy{F}}_0-\mbf{K}\bsy{v}_0) \right).\] \[\tilde{\bsy{d}}=h(\bsy{v}_0+\gamma\tilde{\bsy{e}}).\] \item Define (second stage) \[ \bF_{\half}=-\mbf{A}\left(\bsy{z}_0+\frac{h}{2}\dot{\bsy{z}}_0\right)+\bsy{f}_0\left(t_0+\half h\right), \quad \bsy{r}_{\half}=\mbf{K}\left(\by_0+\half\tilde{\bsy{d}}\right)+\mbf{C}\left(\bv_0+\half\tilde{\bsy{e}} \right),\] \[ \bsy{e}=h\tilde{\mbf{M}}^{-1}\left(\bF_{\half}-\bsy{r}_{\half}+(h\gamma)\left(2\gamma-\half\right)\mbf{K}\tilde{\bsy{e}}+\gamma\mbf{C}\tilde{\bsy{e}}\right) \] and \[ \bsy{d}=h\left(\bsy{v}_0+\left(\half-\gamma\right)\tilde{\bsy{e}}+\gamma\bsy{e} \right). \] \item Finally \[ \bsy{y}(t_0+h)=\by_0+\bsy{d},\qquad\bv(t_0+h)=\bv_0+\bsy{e}. \] \item Define \[ \dot{\bsy{\chi}}_1=\dot{\bsy{\chi}}(t_0+h)=\mbf{B}(\bv_0+\bsy{e}) \] \end{enumerate} Note that the assignment of $ \dot{\bsy{\chi}}_1 $ to $ \dot{\bsy{\chi}}(t_0+h) $ above is tentative at this stage; if there is a sign change, we will make a correction, as discussed in the next subsection. \subsection{Explicit integration for the hysteretic part} \label{explicit_integration} Due to the nonanalyticity of hysteresis models, we integrate the hysteresis part using an explicit step. There are slope changes in the hysteretic response whenever $\dot{\chi}$ at any Gauss point crosses zero in time. We will accommodate the sign change of $\dot{\chi}$ within an explicit time step in a second sub-step, after first taking a time step assuming that there is no sign change. Since each $\dot{z}_i$ depends on $ \dot{\chi}_i $ only, accounting for zero crossings can be done separately at each Gauss point, after such a preliminary step. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{fig2.eps} \caption{Zero-crossing of $ \dot{\chi} $ at any particular Gauss point within a time step.
For details on associated quantities shown, see the main text.} \label{crossing} \end{figure} We first check elementwise and identify the entries of $ \dot{\bsy{\chi}}_0 $ and $ \dot{\bsy{\chi}}_1 $ that have the same sign, and construct sub-vectors $ \dot{\bsy{\chi}}_{0\trm{u}} $ and $ \dot{\bsy{\chi}}_{1\trm{u}} $ out of those entries. The corresponding elements from the vector $ \bsy{z}_0 $ are used to construct a sub-vector $ \bsy{z}_{0\trm{u}} $. Here the subscript ``$ {\rm u} $'' stands for ``unchanged''. These elements can be used in a simple predictor-corrector step for improved accuracy as follows. \[ \bsy{s}_1=\left(\bar{A}-\alpha \,\trm{sign}(\dot{\bsy{\chi}}_{0\trm{u}}\circ\bsy{z}_{0\trm{u}})\circ\abs{\bsy{z}_{0\trm{u}}}^{\circ n_{\trm{h}}}-\beta \abs{\bsy{z}_{0\trm{u}}}^{\circ n_{\trm{h}}} \right)\circ \dot{\bsy{\chi}}_{0\trm{u}}, \] \[ \bsy{z}_{\trm{iu}}=\bsy{z}_{0\trm{u}}+h \bsy{s}_1, \] \[ \bsy{s}_2=\left(\bar{A}-\alpha \,\trm{sign}(\dot{\bsy{\chi}}_{1\trm{u}}\circ\bsy{z}_{\trm{iu}})\circ\abs{\bsy{z}_{\trm{iu}}}^{\circ n_{\trm{h}}}-\beta \abs{\bsy{z}_{\trm{iu}}}^{\circ n_{\trm{h}}} \right)\circ \dot{\bsy{\chi}}_{1\trm{u}},\] and \begin{equation} \label{eq:hst} \bsy{z}_{1\trm{u}}=\bsy{z}_{0\trm{u}}+h \frac{(\bsy{s}_1+\bsy{s}_2)}{2}. \end{equation} Next, we address the remaining entries of $\dot{\bsy{\chi}}_0$ and $\dot{\bsy{\chi}}_1$, those that have crossed zero and flipped sign. We construct sub-vectors $\dot{\bsy{\chi}}_{0\trm{f}}$ and $\dot{\bsy{\chi}}_{1\trm{f}}$ out of them, where the ``${\rm f}$'' subscript stands for ``flipped''. The corresponding elements from the vector $\bsy{z}_0$ are used to construct sub-vector $\bsy{z}_{0\trm{f}}$. Using linear interpolation to approximate the zero-crossing instants within the time step, we define \[ \bsy{h}_0=-h\frac{\dot{\bsy{\chi}}_{0\trm{f}}}{(\dot{\bsy{\chi}}_{1\trm{f}}-\dot{\bsy{\chi}}_{0\trm{f}})}\quad \trm{(defined elementwise)}.
\] We define $$ \bsy{s}_1=\left(\bar{A}-\alpha \,\trm{sign}(\dot{\bsy{\chi}}_{0\trm{f}}\circ\bsy{z}_{0\trm{f}})\circ\abs{\bsy{z}_{0\trm{f}}}^{\circ n_{\trm{h}}}-\beta \abs{\bsy{z}_{0\trm{f}}}^{\circ n_{\trm{h}}} \right)\circ \dot{\bsy{\chi}}_{0\trm{f}},$$ and $$ \bsy{z}_{\trm{mf}}=\bsy{z}_{0\trm{f}}+\bsy{h}_0 \circ \frac{\bsy{s}_1}{2},$$ which is consistent with Eq.\ (\ref{eq:hst}) because $\dot{\bsy{\chi}} = \bsy{0}$ at the end of the sub-step. Next we complete the step using \[ \bsy{s}_2=\left(\bar{A}-\alpha \,\trm{sign}(\dot{\bsy{\chi}}_{1\trm{f}}\circ\bsy{z}_{\trm{mf}})\circ\abs{\bsy{z}_{\trm{mf}}}^{\circ n_{\trm{h}}}-\beta \abs{\bsy{z}_{\trm{mf}}}^{\circ n_{\trm{h}}} \right)\circ \dot{\bsy{\chi}}_{1\trm{f}} \] and \[ \bsy{z}_{1\trm{f}}=\bsy{z}_{\trm{mf}}+(\bsy{h}-\bsy{h}_0)\circ \frac{\bsy{s}_2}{2} \] where $\bsy{h}$ is a vector of the same dimensions as $\bsy{h}_0$ and with all elements equal to $h$. Finally, having the incremented values $\bsy{z}_1$ at all locations, those with signs unchanged and those with signs flipped, we use \[ \bsy{z}(t_0+h)=\bsy{z}_1. \] We clarify that the explicit integration of the hysteretic variables, as outlined in this subsection, is a compromise adopted to avoid difficulties due to the nonanalyticity of the hysteresis model. Continuing to integrate the inertia and stiffness parts implicitly will still allow us to use usefully large steps, as will be seen in section \ref{results}. Having the numerical integration algorithm in place, we now turn to the second contribution of this paper, namely model order reduction. \section{Model order reduction}\label{ROM_sec} For reduced order modeling, we must reduce both the number of active vibration modes and the number of Gauss points used to compute the hysteretic dissipation. Of these two, the first is routine.
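The reduced model developed below will reuse the explicit hysteretic sub-step of section \ref{explicit_integration} unchanged, so for reference we record it here as a minimal per-Gauss-point Python sketch (scalar form with illustrative names; a sketch of the scheme, not production code):

```python
import math

def bw_rate(z, chi_dot, Abar, alpha, beta, n_h):
    # Scalar Bouc-Wen rate; sign(0) taken as 0.
    sgn = math.copysign(1.0, chi_dot * z) if chi_dot * z != 0.0 else 0.0
    zn = abs(z) ** n_h
    return (Abar - alpha * sgn * zn - beta * zn) * chi_dot

def hysteresis_step(z0, chi_dot0, chi_dot1, h, p=(0.065, 0.8, 0.5, 0.5)):
    """One explicit step for z at a single Gauss point.

    If chi_dot changes sign inside the step, the step is split at the
    zero crossing, located by linear interpolation."""
    if chi_dot0 * chi_dot1 >= 0.0:
        # Sign unchanged: plain predictor-corrector (Heun) step.
        s1 = bw_rate(z0, chi_dot0, *p)
        zi = z0 + h * s1
        s2 = bw_rate(zi, chi_dot1, *p)
        return z0 + 0.5 * h * (s1 + s2)
    # Sign flipped: split at the interpolated zero crossing h0.
    h0 = -h * chi_dot0 / (chi_dot1 - chi_dot0)
    s1 = bw_rate(z0, chi_dot0, *p)
    zm = z0 + 0.5 * h0 * s1          # trapezoid; chi_dot = 0 at the crossing
    s2 = bw_rate(zm, chi_dot1, *p)
    return zm + 0.5 * (h - h0) * s2
```

In the full algorithm this update is applied componentwise, handling the ``unchanged'' and ``flipped'' sub-vectors together.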
\subsection{Reduction in the number of vibration modes}\label{modal_rom} The undamped normal modes and natural frequencies are given by the eigenvalue problem \begin{equation} (\mbf{K}-\omega^2\mbf{M})\bsy{v}=\bsy{0} \end{equation} for $N$ eigenvector-eigenvalue pairs $(\bsy{v}_i,\omega_i)$. In a finely meshed FE model, $N$ is large. In such cases, we may compute only the first several such pairs using standard built-in iterative routines in software packages. The $N$-dimensional displacement vector $\bsy{q}$ is now approximated as a linear combination of $r \ll N $ modes. To this end, we construct a matrix $\mbf{R}$ with the first $r$ eigenvectors as columns so that \begin{equation}\label{modal_approx} \bsy{q}(t)\approx\sum_{k=1}^{r}\bsy{v}_k {\xi}_k(t)=\mbf{R}\bsy{\xi}(t), \end{equation} where $\mbf{R}=[\bv_1\,\;\bv_2\,\dots\, \bv_r]$, and where the elements of $\bsy{\xi}(t)$ are called modal coordinates. Substituting Eq.\ (\ref{modal_approx}) in Eq.\ (\ref{full_model}) and pre-multiplying by $\mbf{R}^{\top}$, we obtain \begin{equation}\label{modal_rom_1} \mbf{R}^{\top}\mbf{M}\mbf{R}\:\ddot{\bsy{\xi}}+ \mbf{R}^{\top}\mbf{K}\mbf{R}\:\bsy{\xi}+\mbf{R}^{\top}\mbf{A}\bsy{z}=\mbf{R}^\top \bsy{f}_0(t), \end{equation} which yields equations of the form \begin{subequations} \begin{equation}\label{modal_rom_2} \tilde{\mbf{M}}\:\ddot{\bsy{\xi}}+\tilde{\mbf{K}}\:\bsy{\xi}+\mbf{R}^{\top}\mbf{A}\bsy{z}=\mbf{R}^\top\bsy{f}_0(t), \end{equation} \begin{equation}\label{modal_rom_z} \dot{\bsy{z}}=(\bar{A}-\alpha\: \trm{sign}(\dot{\bsy{\chi}}_{\trm{a}}\circ\bsy{z})\circ\abs{\bsy{z}}^{\circ\, n_{\trm{h}}}-\beta \abs{\bsy{z}}^{\circ\, n_{\trm{h}}})\circ\dot{\bsy{\chi}}_{\trm{a}},\quad \dot{\bsy{\chi}}_{\trm{a}}=\mbf{B}\mbf{R}\dot{\bsy{\xi}}. \end{equation} \end{subequations} In the above, due to orthogonality of the normal modes, the matrices $\tilde{\mbf{M}}$ and $\tilde{\mbf{K}}$ are diagonal.
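In code, this modal reduction amounts to a few lines. The sketch below (dense algebra for a small model; names are ours) uses a Cholesky reduction of the generalized eigenvalue problem, whereas a finely meshed model would use sparse iterative eigensolvers as noted above:

```python
import numpy as np

def modal_basis(M, K, r):
    """First r undamped modes of (K - w^2 M) v = 0, M-orthonormalized."""
    L = np.linalg.cholesky(M)                    # M = L L^T
    Linv = np.linalg.inv(L)
    w2, W = np.linalg.eigh(Linv @ K @ Linv.T)    # eigenvalues ascending
    R = Linv.T @ W[:, :r]                        # columns: mode shapes
    return R, np.sqrt(w2[:r])                    # modes, natural frequencies

# With this normalization, R.T @ M @ R is the identity and R.T @ K @ R is
# diagonal (squared natural frequencies), so the reduced mass and stiffness
# matrices in the projected equations are diagonal.
```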
Also, $\bsy{\chi}_{\trm{a}}$ is an approximation of the original $\bsy{\chi}$ because we have projected onto a few vibration modes, but it still has the same number of elements as $\bsy{\chi}$ and requires numerical integration of the same number of nonsmooth hysteresis equations. A reduction in the number of hysteretic variables is necessary. \subsection{Reduction in the number of hysteretic variables}\label{hyst_var_reduction_section}\label{choose_z_subset} The system still has a large number ($n_{\trm{g}} n_{\trm{e}}$) of hysteretic damping variables. These are arranged in the elements of $\bsy{z}$ in Eq.\ (\ref{full_model}). Selecting a smaller set of basis vectors and projecting the dynamics of the evolving $\bsy{z}$ onto those has analytical difficulties. Here we adopt a data-driven approach to select a sub-vector of $\bsy{z}$, \begin{equation}\label{z_s} \bsy{z}_{\trm{s}}=[z_{j_1}\, z_{j_2}\,\dots\, z_{j_m}]^{\top}\,\qquad m\ll n_{\trm{g}} n_{\trm{e}}, \quad j_k\in \{1,2,\dots n_{\trm{g}} n_{\trm{e}}\},\,k\in\{1,2,\dots m\}. \end{equation} The indices $j_1,\,j_2,\,\dots j_m $ must now be chosen, along with a matrix ${\bf P}$, such that \begin{equation}\label{P_condition} \mbf{P}\bsy{z}_{\trm{s}} (t) \approx \mbf{R}^{\top}\mbf{A}\bsy{z}(t). \end{equation} Working with that reduced set of hysteretic variables, we will be able to use a reduced set of driving local curvatures \begin{equation}\label{chi_s} \dot{\bsy{\chi}}_{\trm{s}}=[\dot{\chi}_{\trm{a}_{j_1}}\,\dot{\chi}_{\trm{a}_{j_2}}\,\dots\,\dot{\chi}_{\trm{a}_{j_m}}]^{\top}, \end{equation} a sub-vector of $\dot{\bsy{\chi}}_{\trm{a}}$. In other words, we will work with a subset of the original large number of Gauss points used to compute virtual work integrals for the hysteretic dissipation. In our proposed data-driven approach, selection of the indices $j_1,\,j_2,\,\dots j_m$ is the only significant issue. The matrix ${\bf P}$ can then be found by a simple matrix calculation.
However, for implementing our approach, we must first generate a sufficiently large amount of data using numerical integration of the full equations with representative initial conditions and/or forcing. To this end, we use the semi-implicit integration method presented in section \ref{SE_scheme}. This is the reason why we have offered two seemingly-unrelated contributions together in this paper; the numerical integration method proposed in the first part is used to generate the data necessary for the second part. The data generation process is relatively slow, but once it is done and the reduced order model is constructed, we can use it for many subsequent quicker calculations. Beyond its academic interest, our approach is practically useful in situations where the reduced order model will be used repeatedly after it is developed. \subsubsection{Selecting a subset of hysteretic variables} \label{z_selection} For data generation, we numerically integrate the full system, over some time interval $[0,T]$ with $N_t $ points in time. This process is carried out $N_{\trm{s}}$ times with different random representative initial conditions\footnote{ For example, for unforced transients, we set the initial modal coordinate values for the first few modes to be random numbers of appropriate size to match displacement magnitudes of interest. Each simulation would have a different random set of initial modal coordinate values. The integration duration $T$ is selected to obtain several cycles of oscillation of the lowest mode, along with some significant decay in oscillation amplitude.}. For each such solution with a random initial condition, upon arranging the $z$ variables' solutions side by side in columns, a matrix of size $n_{\trm{g}} n_{\trm{e}}\ts N_t$ is generated. This matrix is divided by its Frobenius norm, i.e., the square root of the sum of squares of all elements. 
We stack $N_{\trm{s}}$ such normalized matrices, one from each simulation, side by side to form a bigger matrix $\mbf{Z}$ of size ${n_{\trm{g}} n_{\trm{e}} \ts N_t N_{\trm{s}}}$. Clearly, the dimension of the row-space of $\mbf{Z}$ is at most $n_{\trm{g}}n_{\trm{e}}$, because that is the total number of rows. Identification of a useful subset of hysteretic variables is now a matter of selecting rows of $\mbf{Z}$ which contain a high level of independent information. Selecting a finite subset of the rows of $\mbf{Z}$ is a combinatorial optimization problem. For example, if we start with $n_{\trm{g}} n_{\trm{e}} = 900$ and want to select a subset of size, say, 100, then the total number of subsets possible is $$\frac{900!}{800! \, 100!} \approx 10^{135}.$$ The proper orthogonal decomposition (POD) \cite{chatterjee2000introduction} is not directly useful here because it does not select a subset of rows. Here, for a simple practical solution, we use a greedy algorithm as follows. \begin{enumerate} \item Of all the rows of $\mbf{Z}$, we first select the one with the largest 2-norm and set it aside; we also record its index (or row number), which we call $j_1$. We normalize this row, and from the remaining rows of $\mbf{Z}$ we subtract their respective components along the $ j_1^{\rm th} $ row direction. This yields a reduced and modified $\mbf{Z}$. \item Next, of all the rows of the reduced and modified $\mbf{Z}$, we select the one with the largest 2-norm and set it aside; we also record its index (or row number in the original $\mbf{Z}$), and call it $j_2$. We normalize the row and subtract from the still-remaining rows their components along it. This yields a further reduced and further modified $\mbf{Z}$. \item To clarify, we now have two indices selected ($j_1$ and $j_2$); and $n_{\trm{g}}n_{\trm{e}}-2$ rows of $\mbf{Z}$ remaining in contention, where for each of these remaining rows their components along the already-selected rows have been removed.
\item Proceeding as above, we select a third row (corresponding to the largest 2-norm in the remaining portion). We record its index (call it $j_3$), normalize the row, and remove its component from the remaining $n_{\trm{g}}n_{\trm{e}}-3$ rows. \item We proceed like this for as many rows as we wish to include in the reduced order model. A termination criterion based on the norm of the remaining part can be used if we wish. \item In the end, we are interested only in the selected indices $j_1, j_2, \cdots, j_m$. What remains of the matrix $\mbf{Z}$ is discarded. \end{enumerate} \subsubsection{Finding the matrix P}\label{P_mat} With the indices chosen, a submatrix $\mbf{Z}_{\trm{s}}$ is assembled using the $(j_1,j_2,\dots,j_m)^{\trm{th}}$ rows of the original (and not the reduced) $\mbf{Z}$. We now solve the straightforward least squares problem, \begin{equation}\label{least_sqr_problem} \mbf{P} = \mathop{\mathrm{argmin}}_{\hat{\mbf{P}}\in \mathbb{R}^{r\times m}}\;\;{\lvert\lvert \mbf{R}^{\top}\mbf{A}\mbf{Z}- \hat{\mbf{P}}\mbf{Z}_{\trm{s}} \rvert\rvert}_{\trm{F}},\quad \trm{where }\lvert\lvert\cdot\rvert\rvert_{\trm{F}} \trm{ denotes the Frobenius norm.} \end{equation} The above problem, upon taking matrix transposes, is routine in, e.g., Matlab. The final reduced order model becomes \begin{subequations} \begin{equation}\label{ROM} \tilde{\mbf{M}}\ddot{\bsy{\xi}}+\tilde{\mbf{K}}\bsy{\xi}+\mbf{P}\bsy{z}_{\trm{s}}=\mbf{R}^\top\bsy{f}_0(t). \end{equation} \begin{equation}\label{ROM_z} \dot{\bsy{z}}_{\trm{s}}=\left(\bar{A}-\alpha\: \trm{sign}(\dot{\bsy{\chi}}_{\trm{s}}\circ\bsy{z}_{\trm{s}})\circ\abs{\bsy{z}_{\trm{s}}}^{\circ\, n_{\trm{h}}}-\beta \abs{\bsy{z}_{\trm{s}}}^{\circ\, n_{\trm{h}}}\right)\circ\dot{\bsy{\chi}}_{\trm{s}}, \quad \dot{ \bsy{\chi}}_{\rm s}=\mbf{B}_{\rm s}\mbf{R}\dot{\bsy{\xi}}, \end{equation} \end{subequations} where $ \mbf{B}_{\rm s} $ is a submatrix of $ \mbf{B} $ constructed by retaining the $(j_1,j_2,\dots,j_m)^{\trm{th}}$ rows of the latter.
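A compact Python sketch of the subset selection and the least squares fit (dense algebra; names are illustrative, and the row bookkeeping follows the greedy steps above):

```python
import numpy as np

def greedy_rows(Z, m):
    """Greedy selection of m row indices of the snapshot matrix Z.

    At each pass the largest-norm remaining row is recorded and normalized,
    and its component is removed from all other rows (in effect a pivoted
    Gram-Schmidt process on rows)."""
    Zw = Z.astype(float).copy()
    idx = []
    for _ in range(m):
        norms = np.linalg.norm(Zw, axis=1)
        norms[idx] = -1.0                  # exclude already-chosen rows
        j = int(np.argmax(norms))
        idx.append(j)
        u = Zw[j] / np.linalg.norm(Zw[j])  # unit vector along chosen row
        Zw -= np.outer(Zw @ u, u)          # remove components along u
        Zw[j] = 0.0
    return idx

def fit_P(RtA, Z, idx):
    # Least squares fit of P such that P @ Z[idx] approximates RtA @ Z
    # (Frobenius norm), solved via the transposed problem as in the text.
    Zs = Z[idx]
    P_T, *_ = np.linalg.lstsq(Zs.T, (RtA @ Z).T, rcond=None)
    return P_T.T
```

On data whose rows are near-multiples of a few independent signals, `greedy_rows` recovers one representative index per signal, after which `fit_P` reproduces the projected hysteretic force to within the residual of the selected subset.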
Equations (\ref{ROM}) and (\ref{ROM_z}) are equivalent to a $2r+m$ dimensional system of first order equations ($r$ modes and $m$ hysteretic variables). Note that the right hand side of Eq.\ (\ref{ROM}) may seem like it has a large number of elements, but in practice for many problems, external forcing is restricted to a few locations or can be reduced to a few effective locations (e.g., if the forces acting at several nodes maintain fixed proportions to each other). We do not investigate this aspect of size reduction because we have so far not made simplifying assumptions about $\bsy{f}_0(t)$. When the forcing is zero, of course, the right hand side is zero. We now turn to the results obtained using, first, our integration routine; and then our approach for developing reduced order models. \section{Results}\label{results} Our main aim is accurate simulation of hysteretic dissipation, which is most easily seen in the unforced decaying response of the structure. So we will first consider some unforced transient responses of the structure to examine both stability and numerical accuracy of our semi-implicit integration algorithm. Subsequently we will present reduced order models and show their efficacy compared to the full model. In the results from direct numerical integration using our proposed semi-implicit algorithm, we will check for three things: (i) the theoretically expected power law decay for small amplitude vibrations, (ii) the absence of instabilities arising from higher modes, and (iii) overall accuracy. For error calculations with low dimensional systems, i.e., when the number of elements in the FE model is small, we will use Matlab's \texttt{ode15s} for comparison with both the absolute and relative error tolerances set to $10^{-10}$; this is because \texttt{ode15s} has built-in adaptive step sizing and is accurate when it works.
For high dimensional systems, \texttt{ode15s} does not work, and we will use the proposed semi-implicit algorithm itself with a very small step size $\left( h=2^{-23} \right)$ for error estimates. We now select some parameter values to be used in the simulation. \subsection{Choice of parameter values}\label{parameter_choose} We consider a beam of length $1\,\trm{m}$, with elasticity and density matching steel, flexural rigidity $EI = 2666.7\,\trm{N}\,\trm{m}^2$, and mass per unit length $ \bar{m} = 3.14\,\trm{kg}\,\trm{m}^{-1}$. These parameter values are somewhat arbitrary, but representative of many engineering applications. For many metals, 0.2\% strain is near the border of elastic behavior. For the beam parameters above, if the beam is statically deflected under a single transverse tip load, then 0.2\% strain at the fixed end of the beam corresponds to a tip deflection of 6 cm, which is taken as a reasonable upper bound for the vibration amplitudes to be considered in our simulations below. Next, we consider the hysteretic damping itself. The index $n_{\rm h}$ in the Bouc-Wen model is primarily taken to be 0.5 for reasons explained a little later. A larger value is considered in subsection \ref{Lnh}, and only in subsection \ref{Lnh}, to clarify an issue in convergence. The parameters $\alpha$ and $\beta$ are somewhat arbitrary; we have found that the values $\alpha=0.8$ and $\beta=0.5$ yield hysteresis loops of reasonable shape. It remains to choose the Bouc-Wen parameter $\bar A$. It is known that for small amplitude oscillations, the Bouc-Wen dissipation follows a power law.
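As a quick sanity check on the numbers above: the stated $EI$ and $\bar{m}$ are consistent with a solid square steel section, and the 6 cm figure then follows from elementary beam theory. The sketch below assumes $E = 200\,$GPa and $\rho = 7850\,$kg/m$^3$ for steel (assumed values, not taken from the text) and a solid square cross-section.

```python
# Consistency check for the stated beam parameters.
E, rho = 200e9, 7850.0           # Pa, kg/m^3 -- assumed values for "steel"
EI, mbar, L = 2666.7, 3.14, 1.0  # N m^2, kg/m, m (from the text)

I = EI / E                       # second moment of area
A = mbar / rho                   # cross-sectional area
a = A ** 0.5                     # side length if the section is a solid square
assert abs(I - a**4 / 12) / I < 0.01   # square-section hypothesis holds

# Static tip load on a cantilever: curvature at the fixed end is 3*delta/L^2,
# and extreme-fiber strain is curvature times half the section depth.
delta = 0.06                     # 6 cm tip deflection
strain = (3 * delta / L**2) * (a / 2)
print(a, strain)                 # ~0.02 m side; strain close to 0.2%
```

Under these assumptions the implied section is roughly 2 cm square, and a 6 cm tip deflection gives a fixed-end strain of about 0.18\%, consistent with the 0.2\% bound quoted above.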
An estimate for the upper limit of forcing amplitude in the Bouc-Wen model, below which the power law should hold, is given in \cite{bhattacharjee2013dissipation} as \begin{equation}\label{Abar_val1} \chi_{\rm max}= \left( 2\,{\frac {\bar{A}^{2-2\,n_{\trm{h}}} \left( 1+n_{\trm{h}} \right) \left( 1+2\,n_{\trm{h}} \right) \left( 2+3\,n_{\trm{h}} \right) }{ \left( 2\,{n_{\trm{h}}}^{2}{ \alpha}^{2}+4\,n_{\trm{h}}{\alpha}^{2}-n_{\trm{h}}{\beta}^{2}+2\,{\alpha}^{2} \right) \left( 2+n_{\trm{h}} \right) }} \right) ^{{\frac {1}{2\,n_{\trm{h}}}}}. \end{equation} Choosing $\chi_{\rm max}$ to correspond to the above-mentioned 0.2\% strain at the fixed end, in turn corresponding to a static free-end deflection of 6 cm for the above beam, we find from Eq.\ (\ref{Abar_val1}) that $$\bar A = 0.065.$$ Only one physical model parameter remains to be chosen, namely $\gamma_{\trm{h}}$. To choose this parameter value, we note that the amplitude decay in free vibrations with Bouc-Wen hysteretic dissipation is not exponential. Thus, the idea of percentage of damping is only approximately applicable. Upon looking at the reduction in amplitude after a certain number of oscillations (say $M$), an equivalent ``percentage damping'' can be approximated using the formula \begin{equation}\label{zeta_est} \zeta_{\trm{equiv}}\approx\frac{1}{2\pi M}\ln\left(\frac{A_1}{A_{1+M}}\right). \end{equation} Metal structures often have 1--2\,\% damping \cite{orban2011damping}. We will choose values of the hysteretic dissipation parameter $\gamma_{\trm{h}}$ (recall Eq.\ (\ref{eq_gov})) to obtain $0.01 \le \zeta_{\trm{equiv}} \le 0.02$. Numerical trial and error shows that $$\gamma_{\trm{h}}=3000 \, {\rm Nm}$$ is a suitable value (see Fig.\ \ref{decay}). \begin{figure}[h!] \centering \includegraphics[scale=0.65]{fig3.eps} \caption{Tip displacement for $\gamma_{\trm{h}}=3000$ and $ n_{\rm h}=0.5 $ shows an approximate equivalent damping of about 1.6\%, as per Eq.\ (\ref{zeta_est}).
Here 10 beam elements were used, with 3 hysteresis Gauss points per element. Time integration was done using Matlab's \texttt{ode15s} with error tolerances set to $10^{-10}$.} \label{decay} \end{figure} \subsection{Semi-implicit integration: stability and accuracy} Before we examine the performance of our proposed semi-implicit integration method, we will verify that the FE model indeed displays power law damping at small amplitudes, as predicted by theoretical analyses of the Bouc-Wen model. The small amplitude dissipation per cycle of the Bouc-Wen model is proportional to amplitude to the power $n_{\trm{h}}+2$ \cite{bhattacharjee2013dissipation}. Since the energy in the oscillation is proportional to amplitude squared, we should eventually observe small amplitude oscillations in the most weakly damped mode with amplitude $A$ obeying $$A\dot{A}=\mcl{O}\left(A^{n_{\trm{h}}+2}\right)$$ whence for $n_{\trm{h}} > 0$ \begin{equation} \label{power_law_dissipation} A=\mcl{O}\left(t^{-\frac{1}{n_{\trm{h}}}}\right). \end{equation} For the Bouc-Wen model, letting $n_{\trm{h}} \rightarrow 0$ produces hysteresis loops of parallelogram-like shape, and so we prefer somewhat larger values of $n_{\trm{h}}$; however, to have a significant decay rate even at small amplitudes, we prefer somewhat smaller values of $n_{\trm{h}}$. As a tradeoff, we have chosen $n_{\trm{h}}=\half$. We expect an eventual decay of vibration amplitudes like $1/t^2$. For our beam model, with 10 elements and 3 Gauss points per element, and with our semi-implicit numerical integration algorithm\footnote{% Matlab's \texttt{ode15s} struggles with such long simulations on a small desktop computer; our algorithm works quickly.}, the computed tip displacement asymptotically displays the expected power law decay rate, as seen on linear axes in Fig.\ \ref{beam_decay_image}a and more clearly in the logarithmic plot of Fig.\ \ref{beam_decay_image}b. 
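The decay rate in Eq.\ (\ref{power_law_dissipation}) can be made explicit: the amplitude equation $\dot{A} = -cA^{n_{\trm{h}}+1}$ (with an arbitrary positive constant $c$, introduced here for illustration) has the closed-form solution $A(t) = \left(A_0^{-n_{\trm{h}}} + n_{\trm{h}} c t\right)^{-1/n_{\trm{h}}}$, whose log-log slope approaches $-1/n_{\trm{h}} = -2$ for $n_{\trm{h}} = 1/2$. A minimal numerical check:

```python
import numpy as np

n_h, c, A0 = 0.5, 1.0, 1.0

def A(t):
    # closed-form solution of dA/dt = -c * A**(n_h + 1)
    return (A0 ** (-n_h) + n_h * c * t) ** (-1.0 / n_h)

t1, t2 = 1.0e3, 1.0e4
slope = np.log(A(t2) / A(t1)) / np.log(t2 / t1)
print(slope)   # approaches -1/n_h = -2 at large t
```

This is the straight-line behavior seen in the logarithmic plot of Fig.\ \ref{beam_decay_image}b.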
The frequency of the residual oscillation is close to the first undamped natural frequency of the beam. Higher modes are not seen in the long-term power law decay regime. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig4.eps} \caption{The long term oscillation in the beam tip response shows very slow power law decay when $ n_{\rm h}=\half$. A small portion of the solution is shown zoomed within the left subplot. The aim of this simulation is to show that the model and numerical integration method together retain the correct asymptotic behavior far into the small-amplitude regime.} \label{beam_decay_image} \end{figure} Having checked the asymptotic power law decay rate, we next ensure that the semi-implicit algorithm does not produce spurious oscillations in higher modes within the computed solution. This issue is important because, with increasing mesh refinement in the FE model, very high frequencies are unavoidable. While those high frequency modes exist in principle, their response should be small if any external excitation is at low frequencies and initial conditions involve only the lower modes. To examine this aspect, we denote the time period of the highest mode present in the structural model by $T_{\trm{min}}$. As the number of elements increases, $T_{\trm{min}}$ decreases, as indicated in Fig.\ \ref{T_min_and_ytip_fft}a. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig5.eps} \caption{(a) Variation of $ T_{\min} $ with $ n_{\trm{e}} $ and (b) Frequency content of the transient tip displacement response when the first three modes are disturbed in a uniform FE model with 100 elements with $ n_{\rm h}=0.5$.} \label{T_min_and_ytip_fft} \end{figure} We now use our semi-implicit method to simulate an FE model with 100 elements and with nonzero initial conditions along only the first 3 modes. In the computed solution, we expect to see frequency content corresponding to the first three modes only. 
Figure \ref{T_min_and_ytip_fft}b indeed shows only three peaks. Spurious responses of the higher modes are not excited. Note that this simulation was done with a time step of $h=10^{-4}$, which is more than 275 times larger than the time period of the highest mode for $n_{\rm e} = 100$, which is $ T_{\min}=3.6\ts 10^{-7}\,\trm{sec} $ (see Fig.\ \ref{T_min_and_ytip_fft}a). It is the implicit part of the code that allows stable numerical integration with such a large step size. Having checked that spurious oscillations in very high modes are {\em not} excited, it remains to check that accurate results are obtained with time steps $h$ which are sufficiently small compared to the highest mode of interest. To this end, the convergence of the solution with decreasing $h$ is depicted in Fig.\ \ref{match_1}. For the beam modeled using 10 elements, the first five natural frequencies are $$16.3, \, 102.2, \, 286.2,\, 561.3 \mbox{ and } 929.3 \mbox{ Hz}.$$ Numerical simulation results using our semi-implicit algorithm are shown in Fig.\ \ref{match_1} for $h = 10^{-3}$, $10^{-4}$ and $10^{-5}$. The overall solution is plotted on the left, and a small portion is shown enlarged on the right. Only 10 elements were used for this simulation to allow use of Matlab's {\tt ode15s}, which has adaptive step sizing and allows error tolerance to be specified (here we used $10^{-10}$). It is seen in the right subplot that although all three solutions from the semi-implicit method are stable, the one with $h=10^{-3}$ does not do very well in terms of accuracy; the one with $h=10^{-4}$ is reasonably close and may be useful for practical purposes; and the one with $h=10^{-5}$ is indistinguishable at plotting accuracy from the {\tt ode15s} solution. The match between our semi-implicit integration with $h=10^{-5}$ and Matlab's {\tt ode15s} with error tolerances set to $10^{-10}$ indicates that both these solutions are highly accurate. 
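The step-halving comparison just described can be distilled into an empirical order estimate: $\log_2$ of the error ratio when $h$ is halved, computed against an accurate reference. The toy sketch below applies a semi-implicit (symplectic) Euler scheme to an undamped oscillator — not the structural model or the algorithm of this paper — purely to illustrate the bookkeeping; a first-order scheme should give estimates near 1.

```python
import numpy as np

def endpoint_error(h, omega=2.0 * np.pi, T=0.7):
    """Semi-implicit (symplectic) Euler for x'' = -omega^2 x, x(0)=1, v(0)=0."""
    n = int(round(T / h))
    x, v = 1.0, 0.0
    for _ in range(n):
        v -= h * omega**2 * x      # update velocity first ...
        x += h * v                 # ... then position with the NEW velocity
    return abs(x - np.cos(omega * n * h))

errs = [endpoint_error(h) for h in (2e-3, 1e-3, 5e-4)]
orders = [np.log2(e1 / e2) for e1, e2 in zip(errs, errs[1:])]
print(orders)                      # empirical order near 1 for this scheme
```

The same ratio computation, applied to the error norms defined next, underlies the convergence plots that follow.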
\subsection{Order of convergence} \label{sscn_er_conv} Using simulations with relatively few elements and over relatively short times, we can compare the results obtained from our semi-implicit method (SIM) with \texttt{ode15s}, and examine convergence as $h$ is made smaller. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{fig6.eps} \caption{Tip displacement response calculated using different time steps (compared to \texttt{ode15s}) with $ n_{\rm h}=0.5$.} \label{match_1} \end{figure} Figure \ref{match_1} shows the tip response for an FE model with 10 elements for different time step sizes and compares them with the \texttt{ode15s} solution. Here we will use two different error measures. \subsubsection{RMS error} We choose a fairly large number ($N_\trm{E} = 128$) of points evenly spaced in time, and for each $h$ we calculate the response at those instants of time. The overall time interval was chosen to be $[0,1]$. Clearly, $h$ can be successively halved in a sequence of simulations to ensure that the solution is always computed at exactly the $N_\trm{E}$ time instants of interest (along with other intermediate points). We define our RMS error measure as \begin{equation} e_{\trm{rms}}(h)=\sqrt{ \frac{1}{N_{\trm{E}}}\sum_{k=1}^{N_{\trm{E}}}{\left(y_h(t_k)-y_{\trm{accurate}}(t_k)\right)^2}}, \quad N_\trm{E} = 128. \end{equation} In the above, when the number of elements is modest (e.g., $n_{\rm e} = 10$), we use the highly accurate solution obtained from \texttt{ode15s} as the ``accurate'' one. With larger numbers of elements, \texttt{ode15s} cannot be used for validation. Having seen the overall accuracy of the semi-implicit method (SIM) with fewer elements when compared with \texttt{ode15s}, we use the SIM solution with extremely small time steps as the ``accurate'' one for error estimation with larger numbers of elements (e.g., $n_{\rm e} = 30$). Results are shown in Fig.\ \ref{error_plot1}.
It is seen that for relatively small values of $\gamma_{\trm{h}}$, a significant regime of approximately quadratic convergence is obtained (Figs.\ \ref{error_plot1}a and \ref{error_plot1}c). This means that for lightly damped structures the performance of the semi-implicit algorithm is excellent. For larger values of $\gamma_{\trm{h}}$, however, the convergence plot is more complicated, and there is no significant regime of nearly quadratic convergence (Figs.\ \ref{error_plot1}b and \ref{error_plot1}d). This is because the damping model is strongly non-analytic, and strong damping makes the role of that non-analyticity stronger. However, over a significant regime and in an average sense, it appears that the convergence is superlinear (i.e., the average slope exceeds unity in a log-log plot), and so the integration algorithm still performs well. A larger value of $ n_{\rm h} $ will be considered in subsection \ref{Lnh}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig7b.eps} \caption{Time step vs.\ RMS error (128 equispaced points in time) for $ n_{\rm e}=10$ and 30 with $ n_{\rm h}=0.5 $. In the plots, the lines with $ {\rm slope=2} $ indicate how close the convergence is to quadratic behaviour. For a smaller value of $ \gamma_{\rm h} $, the convergence is nearly quadratic. For a larger value of $ \gamma_{\rm h} $ there is a deviation from quadratic convergence. } \label{error_plot1} \end{figure} The role of nonanalyticity in the Bouc-Wen model, and the way in which it is handled in our semi-implicit algorithm, are worth emphasizing. See Fig.\ \ref{hys_loop}. \clearpage \begin{figure}[h!] \centering \includegraphics[scale=0.65]{fig8.eps} \caption{Hysteresis loop for $z_1$ driven by $\chi_1$ (at Gauss point ``1'' near the fixed end).} \label{hys_loop} \end{figure} In the figure, point A shows a discontinuous slope change due to the sign change of the driving term, $\dot{\chi}$.
This point is handled with an {\em explicit} attempt to locate the instant of change in direction, with one integration step taken on each side of that instant. Point B indicates a sign change in $z_1$. Our proposed algorithm does not separately identify sign changes of $z$ within one time step, in the interest of simplicity. Due to nonanalyticity at both points A and B, integration errors increase when the nonsmoothness dominates (high $\gamma_{\trm{h}}$ and/or small $n_{\rm h}$). \subsubsection{Error at a fixed instant of time} We now consider the absolute value of the error at some fixed instant of time $t=\tau$, \begin{equation} e_{\tau}(h)=\lvert y_h(\tau)-y_{\trm{accurate}}(\tau)\rvert. \end{equation} Here, we use $\tau = 1$. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{fig9.eps} \caption{Error at $t=1$, for FE models with 10 and 30 elements with $n_{\rm h}=0.5$.} \label{fxt_error} \end{figure} Straight-line fits on log-log plots of $ e_{\tau}(h) $ versus $ h $ in Figs.\ \ref{fxt_error}a and \ref{fxt_error}b show average slopes that slightly exceed unity. While these slopes are not rigorously proved to exceed unity, the overall convergence rate of the integration algorithm may be considered roughly linear (on average) over a sizeable range of step sizes. It is emphasized that these error computations are done in a regime where (i) Matlab's \texttt{ode15s} does not work at all, (ii) explicit methods are unstable unless extremely small time steps are used, (iii) properly implicit algorithms are both complex and not guaranteed to converge, and (iv) $n_{\rm h} = 0.5$ in the Bouc-Wen model is relatively small. Considering these four difficulties, the semi-implicit method (SIM) proposed in this paper may be said to be simple, effective, and accurate. \subsection{Larger $n_{\rm h}$} \label{Lnh} Everywhere in this paper except for this single subsection, we have used $n_{\rm h} = 0.5$.
In this subsection only, we consider $n_{\rm h}=1.5$. Due to the greater smoothness of the hysteresis loop near point B of Fig.\ \ref{hys_loop}, we expect better convergence for this higher value of $n_{\rm h}$. Some parameter choices must be made again. Using the yield criterion of subsection \ref{parameter_choose}, for $ n_{\rm h}=1.5 $, we find we now require $$\bar{A}=608.9\,.$$ Subsequently, we find an approximate equivalent damping ratio $\zeta_{\rm equiv}=0.015$ for $ \gamma_{\rm h}=0.3$. It is interesting to note that with the change in $ n_{\rm h}$ and for the physical behavior regime of interest, $\bar{A}$ and $ \gamma_{\rm h}$ have individually changed a lot but their product has varied only slightly (195 Nm for about 1.6\% damping in the $n_{\rm h}=0.5$ case, and 183 Nm for about 1.5\% damping in the $n_{\rm h}=1.5$ case). The decay of tip response amplitude (Fig.\ \ref{decay_nh_1point5}) for these parameters (with $n_{\rm h}=1.5$) looks similar to the case studied in the rest of this paper ($n_{\rm h}=0.5$; compare Fig.\ \ref{decay}). \begin{figure}[h] \centering \includegraphics[scale=0.65]{fig9a.eps} \caption{Tip displacement for $\gamma_{\trm{h}}=0.3$ and $ n_{\rm h}=1.5 $ shows an approximate equivalent damping of about 1.5\%, as per Eq.\ (\ref{zeta_est}). Here 10 beam elements were used, with 3 hysteresis Gauss points per element. Time integration was done using Matlab's \texttt{ode15s} with error tolerances set to $10^{-10}$.} \label{decay_nh_1point5} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{fig9b.eps} \caption{Time step vs.\ RMS error (128 equispaced points in time) for $ n_{\rm e}=10$ (a) and 30 (b) with $ n_{\rm h}=1.5 $. For both cases, it is evident that the convergence is nearly quadratic.
} \label{error_plot3} \end{figure} The main point to be noted in this subsection is that with $n_{\rm h}=1.5$, $\bar{A}=608.9$, $\gamma_{\rm h}=0.3$, and all other parameters the same as before, we do indeed observe superior convergence over a significant range of step sizes: see Fig.\ \ref{error_plot3} for FE models with 10 and 30 elements, and compare with Fig.\ \ref{error_plot1}. For the case with 10 elements, the convergence is nearly quadratic. With 30 elements the convergence is slightly slower than quadratic, but much faster than linear. Note that these estimates are from numerics only: analytical estimates are not available. As mentioned above, some of the difficulty with accurate integration of hysteretically damped structures comes from the zero crossings of the hysteretic variable $z$ itself. In this paper, for simplicity, we have avoided special treatment of these zero crossings. However, the results of this subsection show that the effect of these zero crossings is milder if $n_{\rm h}$ is larger. We now turn to using results obtained from this semi-implicit integration method to develop data-driven lower order models of the hysteretically damped structure. \subsection{Reduced order models} In our beam system, we have 2 degrees of freedom per node (one displacement and one rotation). Let the total number of differential equations being solved, in first-order form, be called $N_{\rm D}$. If there are $n_{\rm e}$ elements, we have $N_{\rm D} = 4 n_{\rm e} + n_{\rm e}n_{\rm g}$ first-order differential equations. In a reduced order model with $r$ modes and $m$ Gauss points, we have $N_{\rm D}=2r+m$ first-order differential equations. Although the size of the problem is reduced significantly in this way, the accuracy can remain acceptable. For demonstration, we consider two FE models of a cantilever beam, with hysteresis, as follows: \begin{itemize} \item[(i)] 100 elements with 3 Gauss points each ($N_{\rm D} = 700$).
\item[(ii)] 150 elements with 3 Gauss points each ($N_{\rm D} = 1050$). \end{itemize} For each of the two systems above, the datasets used for selecting the subset of hysteretic states (or Gauss points) were generated by solving the full systems 60 times each for the time interval 0 to 1 with random initial conditions and time step $h=10^{-4}$, exciting only the first 3 modes ($r=3$). Data from each of these 60 solutions were retained at 1000 equispaced points in time ($N_t=1000$; recall subsection \ref{z_selection}). All reduced order model (ROM) results below are for $r=3$ retained modes. Further, the number of hysteresis Gauss points is denoted by $m$. Results for $m=n_{\rm e}$ are shown in Figs.\ \ref{rom_response_fig}a and \ref{rom_response_fig}c. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{fig10.eps} \caption{Comparison between ROM and full model, for two cases (see text for details). Tip displacements (left) and hysteresis curves (right). In subplots (b) and (d), the retained hysteresis Gauss point closest to the fixed end of the beam has been selected for display of results.} \label{rom_response_fig} \end{figure} We now quantitatively assess the accuracy of the ROMs. To this end, we write $y^{(m)}_{\rm tip(ROM)} (t)$ and $y_{\rm tip(FM)}(t)$ for the ROM and full model outputs, respectively. We then compute the error measure \begin{equation} \mcl{E}_{\rm rms}=\sqrt{ \frac{1}{N_{\rm E}}\sum_{k=1}^{N_{\rm E}} {\left(y^{(m)}_{\rm tip(ROM)}(t_k)- y_{\rm tip(FM)}(t_k) \right )^2} }, \, N_{\rm E}=10001, \end{equation} for different values of $m$ and for an integration time interval $[0,1]$ (with $h=10^{-4}$). \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{fig11.eps} \caption{Error convergence of ROM with increasing number of Gauss points retained. For comparison, if $y^{(m)}_{\rm tip(ROM)}(t)$ is set identically to zero, the error measure obtained is 0.006 for both models.
Thus, for reasonable accuracy, $m \ge 50$ may be needed.} \label{rom_error_fig} \end{figure} \clearpage The variation of $ \mcl{E}_{\rm rms} $ with $ m $, for both models, is shown in Fig.\ \ref{rom_error_fig}. This error measure is not normalized. If we wish to normalize it, then we should use the quantity obtained when we set $y^{(m)}_{\rm tip(ROM)}(t)$ identically to zero: that quantity is 0.006 for both models. Thus, reasonable accuracy is obtained for $m \ge 50$, and good accuracy (about 1\% error) is obtained only for $m>100$. These numbers refer to a specific case with random initial conditions; however, Fig.\ \ref{rom_error_fig} is representative for other, similar, initial conditions (details omitted). \section{Conclusions} Structural damping is nonlinear and, empirically, dominated by rate-independent mechanisms. Hysteresis models are therefore suitable, but present numerical difficulties when used in large finite element models. Fully implicit numerical integration of such highly refined FE models presents difficulties with convergence in addition to algorithmic complexities. With this view, we have proposed simpler but effective approaches, in two stages, to simulations of such structural dynamics. The first stage consists of a semi-implicit integration routine that is relatively simple to implement, and appears to give linear convergence or better. Moreover, the time steps required for stability can be much larger than the time period of the highest mode in the FE model. Subsequently, we have used the results from that semi-implicit integration to develop data-driven reduced order models for subsequent rapid case studies or other simulations. Thus, our contribution is twofold: first, we present a simple and practical numerical integration method that can be easily implemented; second, we use the results from that integration algorithm to further reduce the size of the model. 
Although we have worked exclusively with a cantilever beam in this paper, our approach can be directly extended to frames that can be modeled using beam elements. We hope that future work will also examine ways in which the present approach can be extended to genuinely two- or three-dimensional structures. While we have not yet worked out such cases, the issues addressed in the present paper should prove useful in tackling such higher-dimensional problems. \section*{Acknowledgements} Aditya Sabale helped BG with the initial finite element formulation for beams. AC thanks the Department of Science and Technology, Government of India, for earlier support on a project to study hysteretic damping; this work arose from a continued interest in that topic.
\section{Introduction} Consider a normal hierarchical model where, for $i = 1, \ldots, p$, \begin{eqnarray*} (y_{i} \mid \beta_i, \sigma^2) &\sim& \mbox{N}(\beta_i, \sigma^2) \\ (\beta_i \mid \lambda^2, \sigma^2) &\sim& \mbox{N}(0, \lambda^2 \sigma^2) \\ \lambda^2 &\sim& g(\lambda^2) \, . \end{eqnarray*} This prototype case embodies a very general problem in Bayesian inference: how to choose default priors for top-level variances (here $\lambda^2$ and $\sigma^2$) in a hierarchical model. The routine use of Jeffreys' prior for the error variance, $ p( \sigma^2 ) \propto \sigma^{-2} $, poses no practical issues. This is not the case for $ p( \lambda^2 ) $, however, as the improper prior $ p( \lambda^2 ) \propto \lambda^{-2} $ leads to an improper posterior. This can be seen from the marginal likelihood: $$ p( y \mid \lambda^2 ) \propto \prod_{i=1}^p ( 1 + \lambda^2 )^{- \frac{1}{2}} \exp \left ( - \frac{1}{2} \sum_{i=1}^p \frac{ y_i^2 }{1 + \lambda^2 } \right ) \, , $$ where we have taken $ \sigma^2 = 1 $ for convenience. This is positive at $ \lambda^2 = 0$; therefore, whenever the prior $p(\lambda^{2})$ fails to be integrable at the origin, so too will the posterior. A number of default choices have been proposed to overcome this issue. A classic reference is \citet{tiao:tan:1965}; a very recent one is \citet{morris:tang:2011}, who use a flat prior $ p( \lambda^2 ) \propto 1 $. We focus on a proposal by \citet{gelman:2006}, who studies the class of half-$t$ priors for the scale parameter $\lambda$: $$ p(\lambda \mid d) \propto \left( 1 + \frac{\lambda^2}{d} \right)^{-(d+1)/2} \, $$ for some degrees-of-freedom parameter $d$. The half-$t$ prior has the appealing property that its density evaluates to a nonzero constant at $\lambda = 0$. This distinguishes it from the usual conjugate choice of an inverse-gamma prior for $\lambda^2$, whose density vanishes at $\lambda = 0$. 
As \citet{gelman:2006} points out, posterior inference under these priors is no more difficult than it is under an inverse-gamma prior, using the simple trick of parameter expansion. These facts lead to a simple, compelling argument against the use of the inverse-gamma prior for variance terms in models such as that above. Since the marginal likelihood of the data, considered as a function of $\lambda$, does not vanish when $\lambda = 0$, neither should the prior density $p(\lambda)$. Otherwise, the posterior distribution for $\lambda$ will be inappropriately biased away from zero. This bias, moreover, is most severe near the origin, precisely in the region of parameter space where the benefits of shrinkage become most pronounced. This paper studies the special case of a half-Cauchy prior for $\lambda$ with three goals in mind. First, we embed it in the wider class of hypergeometric inverted-beta priors for $\lambda^2$, and derive expressions for the resulting posterior moments and marginal densities. Second, we derive expressions for the classical risk of Bayes estimators arising from this class of priors. In particular, we prove a result that allows us to characterize the improvements in risk near the origin ($\Vert \boldsymbol{\beta} \Vert \approx 0$) that are possible using the wider class. Having proven our risk results for all members of this wider class, we then return to the special case of the half-Cauchy; we find that the frequentist risk profile of the resulting Bayes estimator is quite favorable, and rather similar to that of the positive-part James--Stein estimator. Therefore Bayesians can be comfortable using the prior on purely frequentist grounds. Third, we attempt to provide some insight about the use of such priors in situations where $\boldsymbol{\beta}$ is expected to be sparse. 
We find that the arguments of \citet{gelman:2006} in favor of the half-Cauchy are, if anything, amplified in the presence of sparsity, and that the inverse-gamma prior can have an especially distorting effect on posterior inference for sparse signals. Overall, our results provide a complementary set of arguments in addition to those of \citet{gelman:2006} that support the routine use of the half-Cauchy prior: its excellent (frequentist) risk properties, and its sensible behavior in the presence of sparsity compared to the usual conjugate alternative. Bringing all these arguments together, we contend that the half-Cauchy prior is a sensible default choice for a top-level variance in Gaussian hierarchical models. We echo the call for it to replace inverse-gamma priors in routine use, particularly given the availability of a simple parameter-expanded Gibbs sampler for posterior computation. \section{Inverted-beta priors and their generalizations} Consider the family of inverted-beta priors for $\lambda^2$: $$ p(\lambda^2) = \frac{ (\lambda^2)^{b-1} \ (1+\lambda^2)^{-(a+b)}} {\mbox{Be}(a,b)} \, , $$ where $\mbox{Be}(a,b)$ denotes the beta function, and where $a$ and $b$ are positive reals. A half-Cauchy prior for $\lambda$ corresponds to an inverted-beta prior for $\lambda^2$ with $a = b = 1/2$. This family also generalizes the robust priors of \citet{strawderman:1971} and \citet{BergerAnnals1980}; the normal-exponential-gamma prior of \citet{griffin:brown:2005}; and the horseshoe prior of \citet{Carvalho:Polson:Scott:2008a}. The inverted-beta distribution is also known as the beta-prime or Pearson Type VI distribution. An inverted-beta random variable is equal in distribution to the ratio of two gamma-distributed random variables having shape parameters $a$ and $b$, respectively, along with a common scale parameter. 
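The gamma-ratio representation just mentioned gives a direct way to sample from these priors. For $a = b = 1/2$, the implied distribution of $\lambda = \sqrt{\lambda^2}$ is half-Cauchy, which has median $1$; a minimal Monte Carlo sketch checks this correspondence (the seed and sample size are arbitrary choices for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
a = b = 0.5                  # inverted-beta parameters for the half-Cauchy case
n = 200_000

# lambda^2 ~ inverted-beta(a, b): ratio of two gammas with common scale
lam2 = rng.gamma(a, 1.0, n) / rng.gamma(b, 1.0, n)
lam = np.sqrt(lam2)          # should be distributed as half-Cauchy

# The standard half-Cauchy has median 1, i.e. CDF(1) = 1/2
print(np.median(lam), (lam < 1.0).mean())
```

The sample median and the empirical mass below $1$ both land near their half-Cauchy values, confirming that the $a=b=1/2$ member of the family is the prior studied in the rest of the paper.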
The inverted-beta family is itself a special case of a new, wider class of hypergeometric inverted-beta distributions having the following probability density function: \begin{equation} \label{MTIBdensity} p(\lambda^2) = C^{-1} (\lambda^2)^{b-1} \ (\lambda^2 + 1)^{-(a+b)} \ \exp \left\{ -\frac{s}{1+\lambda^2} \right\} \ \left\{ \tau^2 + \frac{1-\tau^2}{1+\lambda^2} \right\}^{-1} \, , \end{equation} for $a>0$, $b>0$, $\tau^2>0$, and $s \in \mathbb{R}$. This comprises a wide class of priors leading to posterior moments and marginals that can be expressed using confluent hypergeometric functions. In Appendix \ref{hyperg.integral.details.appendix} we give details of these computations, which yield \begin{equation} \label{normalizing.constant.phi1} C = e^{-s} \ \mbox{Be}(a, b) \ \Phi_1(b, 1, a + b, s, 1-1/\tau^2) \, , \end{equation} where $\Phi_1$ is the degenerate hypergeometric function of two variables \citep[9.261]{gradshteyn:ryzhik:1965}. This function can be calculated accurately and rapidly by transforming it into a convergent series of $\phantom{}_2 F_1$ functions \citep[\S 9.2 of][]{gradshteyn:ryzhik:1965, gordy:1998}, making evaluation of (\ref{normalizing.constant.phi1}) quite fast for most choices of the parameters. Both $\tau$ and $s$ are global scale parameters, and do not control the behavior of $p(\lambda)$ at $0$ or $\infty$. The parameters $a$ and $b$ are analogous to those of the beta distribution. Smaller values of $a$ encourage heavier tails in $p(\beta)$, with $a=1/2$, for example, yielding Cauchy-like tails. Smaller values of $b$ encourage $p(\beta)$ to have more mass near the origin, and eventually to become unbounded; $b = 1/2$ yields, for example, $p(\beta) \approx \log(1+1/\beta^2)$ near $0$. We now derive expressions for the moments of $p(\boldsymbol{\beta} \mid \mathbf{y}, \sigma^2)$ and the marginal likelihood $p(\mathbf{y} \mid \sigma^2)$ for priors in this family.
As a special case, we easily obtain the posterior mean for $\boldsymbol{\beta}$ under a half-Cauchy prior on $\lambda$. Given $\lambda^2$ and $\sigma^2$, the posterior distribution of $\boldsymbol{\beta}$ is multivariate normal, with mean $m$ and variance $V$ given by $$ m = \left(1 - \frac{1}{1+\lambda^2} \right) \mathbf{y} \quad , \quad V = \left(1 - \frac{1}{1+\lambda^2} \right) \sigma^2 \, . $$ Define $\kappa = 1/(1+\lambda^2)$. By Fubini's theorem, the posterior mean and variance of $\boldsymbol{\beta}$ are \begin{eqnarray} \mbox{E}(\boldsymbol{\beta} \mid \mathbf{y}, \sigma^2) &=& \{ 1 - \mbox{E}(\kappa \mid \mathbf{y}, \sigma^2) \} \mathbf{y} \label{normalpostmean} \\ \mbox{var}(\boldsymbol{\beta} \mid \mathbf{y}, \sigma^2) &=& \{ 1 - \mbox{E}(\kappa \mid \mathbf{y}, \sigma^2) \} \sigma^2 \label{normalpostvar} \, , \end{eqnarray} now conditioning only on $\sigma^2$. It is most convenient to work with $p(\kappa)$ instead: \begin{equation} \label{HBdensity1} p(\kappa) \propto \kappa^{a - 1} \ (1-\kappa)^{b-1} \ \left\{ \frac{1}{\tau^2} + \left(1 - \frac{1}{\tau^2} \right) \kappa \right\}^{-1} e^{-\kappa s} \, . \end{equation} The joint density for $\kappa$ and $\mathbf{y}$ takes the same functional form: $$ p(y_1 , \ldots, y_p, \kappa) \propto \kappa^{a' - 1} \ (1-\kappa)^{b-1} \ \left\{ \frac{1}{\tau^2} + \left(1 - \frac{1}{\tau^2} \right) \kappa \right\}^{-1} e^{-\kappa s'} \, , $$ with $a' = a + p/2$, and $s' = s + Z / 2\sigma^2$ for $Z = \sum_{i=1}^p y_i^2$. Hence the posterior for $\lambda^2$ is also a hypergeometric inverted-beta distribution, with parameters $(a', b, \tau^2, s')$. Next, the moment-generating function of (\ref{HBdensity1}) is easily shown to be $$ M(t) = e^t \ \frac{\Phi_1(b, 1, a + b, s-t, 1-1/\tau^2)}{ \Phi_1(b, 1, a + b, s, 1-1/\tau^2)} \, . $$ See, for example, \citet{gordy:1998}. 
Expanding $\Phi_1$ as a sum of $\phantom{}_1 F_1$ functions and using the differentiation rules given in Chapter 15 of \citet{Abra:Steg:1964} yields \begin{equation} \label{equation.moments} \mbox{E}(\kappa^n \mid \mathbf{y}, \sigma^2) = \frac{(a')_n}{(a' + b)_n} \frac{\Phi_1(b, 1, a' + b + n, s', 1-1/\tau^2)}{ \Phi_1(b, 1, a' + b, s', 1-1/\tau^2)} \, . \end{equation} Combining the above expression with (\ref{normalpostmean}) and (\ref{normalpostvar}) yields the conditional posterior mean and variance for $\boldsymbol{\beta}$, given $\mathbf{y}$ and $\sigma^2$. Similarly, the marginal density $p(\mathbf{y} \mid \sigma^2)$ is a simple expression involving the ratio of prior to posterior normalizing constants: $$ p(\mathbf{y} \mid \sigma^2) = (2\pi \sigma^2)^{-p/2} \ \exp \left( - \frac{Z}{2\sigma^2} \right) \ \frac{\mbox{Be}(a', b)}{\mbox{Be}(a, b)} \ \frac{\Phi_1(b, 1, a' + b, s', 1-1/\tau^2)} {\Phi_1(b, 1, a + b, s, 1-1/\tau^2)} \, . $$ \begin{figure} \begin{center} \includegraphics[width=5.5in]{graphs/shrink-example.pdf} \caption{\label{fig:shrinkageexample} Ten true means drawn from a standard normal distribution; data from these means under standard normal noise; shrinkage estimates under the half-Cauchy prior for $\lambda$. } \end{center} \end{figure} \section{Classical risk results} These priors are useful in situations where standard priors like the inverse-gamma or Jeffreys' are inappropriate or ill-behaved. Non-Bayesians will find them useful for generating easily computable shrinkage estimators that have known risk properties. Bayesians will find them useful for generating computationally tractable priors for a variance parameter. We argue that these complementary but overlapping goals can both be satisfied for the special case of the half-Cauchy. To show this, we first characterize the risk properties of the Bayes estimators that result from the wider family of priors used for a normal mean under a quadratic loss. 
Our analysis shows that: \begin{enumerate} \item The hypergeometric--beta family provides a large class of Bayes estimators that will perform no worse than the MLE in the tails, i.e.~when $\Vert \boldsymbol{\beta} \Vert^2$ is large. \item Major improvements over the James--Stein estimator are possible near the origin. This can be done in several ways: by choosing $a$ large relative to $b$, by choosing $a$ and $b$ both less than $1$, by choosing $s$ negative, or by choosing $\tau < 1$. Each of these choices involves a compromise somewhere else in the parameter space. \end{enumerate} We now derive expressions for the classical risk, as a function of $\Vert \boldsymbol{\beta} \Vert$, for the resulting Bayes estimators under hypergeometric inverted-beta priors. Assume without loss of generality that $\sigma^2 = 1$, and let $p(\mathbf{y}) = \int p(\mathbf{y} | \boldsymbol{\beta}) p(\boldsymbol{\beta}) d \boldsymbol{\beta}$ denote the marginal density of the data. Following \citet{stein:1981}, write the mean-squared error of the posterior mean $\hat{\boldsymbol{\beta}}$ as $$ \mbox{E}( \Vert \hat{\boldsymbol{\beta}} - \boldsymbol{\beta} \Vert^2) = p + \mbox{E}_{\mathbf{y}} \left ( \Vert g(\mathbf{y}) \Vert^2 + 2 \sum_{i=1}^p \frac{\partial}{\partial y_i} g_i (\mathbf{y}) \right ) \, , $$ where $g(\mathbf{y}) = \nabla \log p(\mathbf{y}) $. In turn this can be written as $$ \mbox{E}( \Vert \hat{\boldsymbol{\beta}} - \boldsymbol{\beta} \Vert^2) = p + 4 E_{\mathbf{y} \mid \boldsymbol{\beta}} \left ( \frac{\nabla^2 \sqrt{p(\mathbf{y})}}{\sqrt{p(\mathbf{y})} } \right ) \, . $$ We now state our main result concerning computation of this quantity. \begin{proposition} \label{expression.for.MSE} Suppose that $\boldsymbol{\beta} \sim \mbox{N}_p(0, \lambda^2 I)$, that $\kappa = 1/(1+\lambda^2)$, and that the prior $p(\kappa)$ is such that $\lim_{ \kappa \rightarrow 0 , 1} \kappa ( 1 - \kappa ) p( \kappa ) = 0 $.
Define $$ m_p ( Z ) = \int_0^1 \kappa^{ \frac{p}{2} } e^{ - \frac{Z}{2} \kappa } p ( \kappa ) \ \mbox{d} \kappa \, $$ for $Z = \sum_{i=1}^p y_i^2$. Then as a function of $\boldsymbol{\beta}$, the quadratic risk of the posterior mean under $p(\kappa)$ is \begin{equation} \label{hb.risk.decomposition1} \mbox{E}( \Vert \hat{\boldsymbol{\beta}} - \boldsymbol{\beta} \Vert^2) = p + 2 E_{Z \mid \boldsymbol{\beta}} \left\{ Z \frac{ m_{p+4} ( Z ) }{m_{p} (Z)} - p g(Z) - \frac{Z}{2} g(Z)^2 \right\} \, , \end{equation} where $ g(Z)= E( \kappa \mid Z ) $, and where \begin{equation} \label{hb.risk.decomposition2} Z \frac{ m_{p+4} ( Z ) }{m_{p} (Z)} = ( p+ Z + 4) g(Z) - ( p+2 ) - E_{ \kappa \mid Z } \left\{ 2 \kappa ( 1 - \kappa ) \frac{ p^{\prime} ( \kappa ) }{ p ( \kappa ) } \right\} \, . \end{equation} \end{proposition} \begin{proof} See Appendix \ref{appendix.MSEproof}. \end{proof} \begin{figure} \begin{center} \includegraphics[width=5.5in]{graphs/MSEplot-HC.pdf} \caption{\label{horseshoeMSE1.figure} Mean-squared error as a function of $\Vert \boldsymbol{\beta} \Vert$ for $p=7$ and $p=15$. Solid line: James--Stein estimator. Dotted line: Bayes estimator under a half-Cauchy prior for $\lambda$.} \end{center} \end{figure} Proposition \ref{expression.for.MSE} is useful because it characterizes the risk in terms of two known quantities: the integral $m_p(Z)$, and the posterior expectation $g(Z) = \mbox{E}(\kappa \mid Z)$. Using the results of the previous section, these are easily obtained under a hypergeometric inverted-beta prior for $\lambda^2$. Furthermore, given $\Vert \boldsymbol{\beta} \Vert$, $Z = U^2 + V$ in distribution, where \begin{eqnarray*} U &\sim& \mbox{N} \left( \Vert \boldsymbol{\beta} \Vert, 1 \right) \\ V &\sim& \chi^2_{p-1} \, . \end{eqnarray*} The risk of the Bayes estimator is therefore easy to evaluate as a function of $\Vert \boldsymbol{\beta} \Vert^2$. 
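To make this concrete, the following Monte Carlo sketch (Python; hypothetical helper names, assuming $\sigma^2 = 1$ and the half-Cauchy special case $a = b = 1/2$, $\tau = 1$, $s = 0$, where $\Phi_1$ reduces to $\phantom{}_1 F_1$) estimates the risk by simulating $\mathbf{y} \sim \mbox{N}(\boldsymbol{\beta}, I)$ directly rather than through the $U^2 + V$ decomposition:

```python
import numpy as np
from scipy.special import hyp1f1

def shrinkage_weight(Z, p):
    """g(Z) = E(kappa | Z) under the half-Cauchy prior (a = b = 1/2,
    tau = 1, s = 0, sigma = 1); here Phi_1 reduces to 1F1 since y = 0."""
    a1, b = 0.5 + p / 2.0, 0.5      # a' = a + p/2
    s1 = Z / 2.0                    # s' = s + Z / (2 sigma^2)
    return (a1 / (a1 + b)) * hyp1f1(b, a1 + b + 1.0, s1) / hyp1f1(b, a1 + b, s1)

def mc_risk(beta_norm, p, n=20000, seed=0):
    """Monte Carlo estimate of the quadratic risk of the posterior mean
    at a parameter with ||beta|| = beta_norm."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(p)
    beta[0] = beta_norm
    y = beta + rng.standard_normal((n, p))
    Z = np.sum(y * y, axis=1)
    shrink = 1.0 - shrinkage_weight(Z, p)     # per-sample shrinkage factor
    return np.mean(np.sum((shrink[:, None] * y - beta) ** 2, axis=1))
```

For $p=7$ this reproduces the qualitative behavior in Figure \ref{horseshoeMSE1.figure}: risk well below $p$ near the origin, returning toward $p$ in the tails.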
These expressions can be compared to those of, for example, \citet{george:liang:xu:2006}, who consider Kullback--Leibler predictive risk for similar priors. Our interest is in the special case $a=b=1/2$, $\tau=1$, and $s=0$, corresponding to a half-Cauchy prior for the global scale $\lambda$. Figure \ref{horseshoeMSE1.figure} shows the classical risk of the Bayes estimator under this prior for $p=7$ and $p=15$. The risk of the James--Stein estimator is shown for comparison. These pictures look similar for other values of $p$, and show overall that the half-Cauchy prior for $\lambda$ leads to a Bayes estimator that is competitive with the James--Stein estimator. Figure \ref{fig:shrinkageexample} shows a simple example of the posterior mean under the half-Cauchy prior for $\lambda$ when $p=10$, calculated for fixed $\sigma$ using the results of the previous section. For this particular value of $\boldsymbol{\beta}$ the expected squared-error risk of the MLE is $10$, and the expected squared-error risk of the half-Cauchy posterior mean is $8.6$. A natural question is: of all the hypergeometric inverted-beta priors, why choose the half-Cauchy? There is no iron-clad reason to do so, of course, and we can imagine many situations where subjective information would support a different choice. But in examining many other members of the class, we have observed that the half-Cauchy seems to occupy a sensible ``middle ground'' in terms of frequentist risk. To study this, we are able to appeal to the theory of the previous section. See, for example, Figure \ref{fig:hypbetaMSE}, which compares several members of the class for the case $p=7$. Observe that large gains over James--Stein near the origin are possible, but only at the expense of minimaxity. The half-Cauchy, meanwhile, still improves upon the James--Stein estimator near the origin, but does not sacrifice good risk performance in other parts of the parameter space.
From a purely classical perspective, it looks like a sensible default choice, suitable for repeated general use. \begin{figure} \begin{center} \includegraphics[width=5.2in]{graphs/hypbeta-MSE1.pdf} \\ \vspace{1pc} \includegraphics[width=5.2in]{graphs/hypbeta-MSE3.pdf} \caption{\label{fig:hypbetaMSE} Mean-squared error as a function of $\Vert \boldsymbol{\beta} \Vert^2$ for $p=7$ and various cases of the hypergeometric inverted-beta hyperparameters.} \end{center} \end{figure} \section{Global scale parameters in local-shrinkage models} A now-canonical modification of the basic hierarchical model from the introduction involves the use of local shrinkage parameters: \begin{eqnarray*} (y_{i} \mid \beta_i, \sigma^2) &\sim& \mbox{N}(\beta_i, \sigma^2) \\ (\beta_i \mid \lambda^2, u_i^2, \sigma^2) &\sim& \mbox{N}(0, \lambda^2 \sigma^2 u_i^2) \\ u_i^2 &\sim& f(u_i^2) \\ \lambda^2 &\sim& g(\lambda^2) \, . \end{eqnarray*} Mixing over $u_i$ leads to a non-Gaussian marginal for $\beta_i$. For example, choosing an exponential prior for each $u_i^2$ results in a Laplace (lasso-type) prior. This class of models provides a Bayesian alternative to penalized-likelihood estimation. When the underlying vector of means is sparse, these global-local shrinkage models can lead to large improvements in both estimation and prediction compared with pure global shrinkage rules. There is a large literature on the choice of $p(u_i^2)$, with \citet{polson:scott:2010a} providing a recent review. As many authors have documented, strong global shrinkage combined with heavy-tailed local shrinkage is why these sparse Bayes estimators work so well at sifting signals from noise. Intuitively, the idea is that $\lambda$ acts as a global parameter that adapts to the underlying sparsity of the signal. 
When few signals are present, it is quite common for the marginal likelihood of $\mathbf{y}$ as a function of $\lambda$ to concentrate near $0$ (``shrink globally''), and for the signals to be flagged via very large values of the local shrinkage parameters $u_i^2$ (``act locally''). Indeed, in some cases the marginal maximum-likelihood solution can be the degenerate $\hat{\lambda} = 0$ \citep[see, for example,][]{tiao:tan:1965}. The classical risk results of the previous section no longer apply to a model with these extra local-shrinkage parameters, since the marginal distribution of $\boldsymbol{\beta}$, given $\lambda$, is not multivariate normal. Nonetheless, the case of sparsity serves only to amplify the purely Bayesian argument in favor of the half-Cauchy prior for a global scale parameter---namely, the argument that $p(\lambda \mid \mathbf{y})$ should not be artificially pulled away from zero by an inverse-gamma prior. \begin{figure} \begin{center} \includegraphics[width=5.5in]{graphs/MLcomparison.pdf} \caption{\label{fig:MLcomparison} The black line shows the marginal likelihood of the data as a function of $\lambda$ under a horseshoe prior for each $\beta_i$. (The likelihood has been renormalized to obtain a maximum of $1$.) The blue and red lines show two priors for $\lambda$: the half-Cauchy, and that induced by an inverse-Gamma prior on $\lambda^2$.} \end{center} \end{figure} Figure \ref{fig:MLcomparison} vividly demonstrates this point. We simulated data from a sparse model where $\boldsymbol{\beta}$ contained the entries $(5,4,3,2,1)$ along with $45$ zeroes, and where $y_{ij} \sim \mbox{N}(\beta_i,1)$ for $j=1,2,3$. We then used Markov chain Monte Carlo to compute the marginal likelihood of the data as a function of $\lambda$, assuming that each $\beta_i$ has a horseshoe prior \citep{Carvalho:Polson:Scott:2008a}.
This can be approximated by assuming a flat prior for $\lambda$ (here truncated above at 10), and then computing the conditional likelihood $p(\mathbf{y} \mid \lambda, \sigma, u_1^2, \ldots, u_p^2)$ over a discrete grid of $\lambda$ values at each step of the Markov chain. The marginal likelihood function can then be approximated as the pointwise average of the conditional likelihood over the samples from the joint posterior. This marginal likelihood has been renormalized to obtain a maximum of $1$ and then plotted alongside two alternatives: a half-Cauchy prior for $\lambda$, and the prior induced by assuming that $\lambda^2 \sim \mbox{IG}(1/2,1/2)$. Under the inverse-gamma prior, there will clearly be an inappropriate biasing of $p(\lambda)$ away from zero, which will negatively affect the ability of the model to handle sparsity efficiently. For data sets with even more ``noise'' entries in $\mathbf{y}$, the distorting effect of a supposedly ``default'' inverse-gamma prior will be even more pronounced, as the marginal likelihood will favor values of $\lambda$ very near zero (along with a small handful of very large $u_i^2$ terms). \section{Discussion} On strictly Bayesian grounds, the half-Cauchy is a sensible default prior for scale parameters in hierarchical models: it tends to a constant at $\lambda = 0$; it is quite heavy-tailed; and it leads to simple conjugate MCMC routines, even in more complex settings. All these desirable features are summarized by \citet{gelman:2006}. Our results give a quite different, classical justification for this prior in high-dimensional settings: its excellent quadratic risk properties. The fact that two independent lines of reasoning both lead to the same prior is a strong argument in its favor as a default proper prior for a shared variance component. We also recommend scaling the $\beta_i$'s by $\sigma$, as reflected in the hierarchical model from the introduction.
This is the approach taken by \citet[Section 5.2]{jeffreys1961}, and we cannot improve upon his arguments. In addition, our hypergeometric inverted-beta class provides a useful generalization of the half-Cauchy prior, in that it allows for greater control over global shrinkage through $\tau$ and $s$. It leads to a large family of estimators with a wide range of possible behavior, and generalizes the form noted by \citet{maruyama:1999}, which contains the positive-part James--Stein estimator as a limiting, improper case. Further study of this class may yield interesting frequentist results, quite apart from the Bayesian implications considered here. The expressions for marginal likelihoods also have connections with recent work on generalized $g$-priors \citep{maruyama:george:2008,polson:scott:2010b}. Finally, all estimators arise from proper priors on $\lambda^2$, and will therefore be admissible. There are still many open issues in default Bayes analysis for hierarchical models that are not addressed by our results. One issue is whether to mix further over the scale in the half-Cauchy prior, $ \lambda \sim C^+ ( 0 , \tau ) $. One possibility here is simply to let $ \tau \sim C^+ (0,1)$. We then get the following default ``double'' half-Cauchy prior for $ \lambda$: $$ p( \lambda ) = \frac{4}{\pi^2} \int_0^\infty \frac{1}{1+ \tau^2} \frac{1}{ \tau( 1 + \frac{\lambda^2}{\tau^2} )} d \tau = \frac{4}{\pi^2} \, \frac{\ln | \lambda |}{\lambda^2 -1} \, . $$ Admittedly, it is difficult to know where to stop in this ``turtles all the way down'' approach to mixing over hyperparameters. (Why not, for example, mix still further over a scale parameter for $\tau$?) Even so, this prior has a number of appealing properties.
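As a quick numerical check of this closed form (illustrative Python; note that the two $C^+$ densities each carry a $2/\pi$ normalization, giving $4/\pi^2$ overall):

```python
import numpy as np
from scipy.integrate import quad

def half_cauchy_pdf(x, scale=1.0):
    """Density of C+(0, scale) on (0, infinity)."""
    return (2.0 / np.pi) * scale / (scale**2 + x**2)

def double_half_cauchy_mix(lam):
    """p(lambda) obtained by mixing lambda ~ C+(0, tau) over tau ~ C+(0, 1)."""
    return quad(lambda tau: half_cauchy_pdf(tau) * half_cauchy_pdf(lam, scale=tau),
                0.0, np.inf)[0]

def double_half_cauchy_closed(lam):
    """Closed form (4 / pi^2) * log(lambda) / (lambda^2 - 1), lambda > 0."""
    return (4.0 / np.pi**2) * np.log(lam) / (lam**2 - 1.0)
```

The mixture integral and the closed form agree pointwise, and the closed form integrates to one over $(0, \infty)$.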
It is proper, and therefore leads to a proper posterior; it is similar in overall shape to Jeffreys' prior; and it is unbounded at the origin, and will therefore not down-weight the marginal likelihood as much as the half-Cauchy for near-sparse configurations of $\boldsymbol{\beta}$. The implied prior on the shrinkage weight $\kappa$ for the double half-Cauchy is $$ p( \kappa ) \propto \frac{ \ln \left ( \frac{1-\kappa}{\kappa} \right ) }{ 1 - 2 \kappa } \frac{1}{\sqrt{\kappa(1-\kappa)}} \, . $$ This is like the horseshoe prior on the shrinkage weight \citep{Carvalho:Polson:Scott:2008a}, but with an extra factor that comes from the fact that one is letting the scale itself be random with a $C^+(0,1)$ prior. We can also transform to the real line by letting $ \psi = \ln \lambda^2 $. For the half-Cauchy prior $p(\lambda) \propto 1 /(1 + \lambda^2) $ this transformation leads to $$ p( \psi ) \propto \frac{ e^{ \frac{\psi}{2} } }{ 1+ e^\psi } = \left ( e^{ \frac{\psi}{2} } + e^{ - \frac{\psi}{2} } \right )^{-1} = \frac{1}{2} \, \mbox{sech} \left ( \frac{\psi}{2} \right ) \, . $$ This is the hyperbolic secant distribution, which may provide fertile ground for further generalizations or arguments involving sensible choices for a default prior. A more difficult issue concerns the prior scaling for $\lambda$ in the presence of unbalanced designs---that is, when $y_{ij} \sim \mbox{N}(\beta_i, \sigma^2)$ for $j = 1, \ldots, n_i$, and the $n_i$'s are not necessarily equal. In this case most formal non-informative priors for $\lambda$ (e.g.~the reference prior for a particular parameter ordering) involve complicated functions of the $n_i$'s \citep[see, e.g.][]{yang:berger:1997}. These expressions emerge from a particular mathematical formalism that, in turn, embodies a particular operational definition of ``non-informative.'' We have focused on default priors that occupy a middle ground between formal non-informative analysis and pure subjective Bayes.
This is clearly an important situation for the many practicing Bayesians who do not wish to use noninformative priors, whether for practical, mathematical, or philosophical reasons. An example of a situation in which formal noninformative priors for $\lambda$ should not be used on mathematical grounds is when $\boldsymbol{\beta}$ is expected to be sparse; see \cite{scottberger06} for a discussion of this issue in the context of multiple testing. It is by no means obvious how, or even whether, the $n_i$'s should play a role in scaling $\lambda$ within this (admittedly ill-defined) paradigm of ``default'' Bayes. Finally, another open issue is the specification of default priors for scale parameters in non-Gaussian models. For example, in logistic regression, the likelihood is highly sensitive to large values of the underlying linear predictor. It is therefore not clear whether something so heavy-tailed as the half-Cauchy is an appropriate prior for the global scale term for logistic regression coefficients. All of these issues merit further research.
\section{CONCLUSION} \label{sec:conclusion} We have proposed a 3D attentional formulation of the active object recognition problem. This is mainly motivated by the resemblance between the mechanisms of human attention and active perception~\cite{bajcsy1988active}, and the significant progress made in utilizing attentional models to address complicated vision tasks such as image captioning~\cite{Xu2015Cap}. In developing such a model, we utilize RNNs for learning and storing the internal representation of the object being observed, CNNs for performing depth-based recognition, and STNs for selecting the next-best-views and detecting the foreground object. The carefully designed 3D STN makes the whole network differentiable and hence easy to train. Experiments on both real and synthetic datasets demonstrate the efficiency and robustness of our active recognition model. \paragraph*{Limitations} A drawback of learning a policy offline is that physical restrictions during testing are hard to incorporate, and when the environment changes, the computed policy may no longer be useful. This problem can be alleviated by learning from a large amount of training data and cases using a high-capacity learning model such as deep neural networks, as we do. Although our method employs a 2D-STN for object localization, it does not handle mutual occlusion between objects, which is a frequent case in cluttered scenes. One solution is to train the recognition network using depth images with synthetic occlusion. In our current model, the 2D-STN is trained separately. We have tested training it together with the main networks; however, this makes the training converge much more slowly without notably improving recognition performance. \paragraph*{Future works} In the future, we would like to investigate a principled solution for handling object occlusion, e.g., using STN to help localize the shape parts which are both visible and discriminative, in a similar spirit to~\cite{xiao2015}.
Another interesting direction is to study multi-agent attention in cooperative vision tasks, such as multi-robot scene reconstruction and understanding. It is particularly interesting to study the shared and distinct attentional patterns among heterogeneous robots such as mobile robots and drones. \input{simulation} \section{INTRODUCTION} \label{sec:intro} Active object recognition plays a central role in robot-operated autonomous scene understanding and object manipulation. The problem involves planning the views of the visual sensor of a robot to maximally increase the accuracy and confidence of object recognition. Recently, active recognition has received a significant boost~\cite{Wu15}, thanks to the fast development of 3D sensing techniques (e.g., depth cameras) and the proliferation of 3D shape repositories. Our work also adopts the 3D geometric data-driven approach, under the setting of 2.5D depth acquisition. For view selection, most existing works lie in the paradigm of information-theoretic view evaluation (e.g.~\cite{denzler2002,huber2012}). For example, from a set of candidates, the view maximizing the mutual information between observations and object classes is selected. Such methods often present two issues. First, to estimate the mutual information, unobserved views must be sampled and the corresponding data must be synthesized from a learned generative model, making the view estimation inefficient~\cite{Wu15}. Second, the object recognition model is typically learned independently of the view planner, although the two are closely coupled in an active recognition system~\cite{jayaraman2016}. Some works formulate active recognition as a reinforcement learning problem, to learn a viewing policy under various observations. In particular, a few recent works have attempted end-to-end reinforcement learning based on recurrent neural networks~\cite{jayaraman2016,xu2016}.
Applying a learned policy is apparently much more efficient than sampling from a generative model. However, these models are known to be hard to train, with difficult parameter tuning and relatively long training time~\cite{Mnih2014}. Moreover, the success of these methods highly depends on the heuristically designed reward function. \input{teaser} The recent development of attention-based deep models has led to significant success in 2D vision tasks based on RGB images~\cite{Mnih2014,Xu2015Cap}. Attention-based models achieve both efficiency and accuracy by focusing the processing only on the most informative parts of the input with respect to a given task. Information gained from different fixations is integrated into an internal representation, to approach the desired goal and guide future attention. Such a mechanism, being both goal-directed and stimulus-driven~\cite{Corbetta2002}, fits well with the problem setting of active recognition, accommodating object recognition and guided acquisition in a unified optimization. However, the popular formulation of attention models based on recurrent neural networks~\cite{Mnih2014} suffers from a non-differentiable recognition loss over the attentional locations, making network optimization by back-propagation infeasible. To make it learnable, the training is often turned into a partially observable Markov decision process (POMDP), which comes back to reinforcement learning. The recently introduced differentiable Spatial Transformer Networks (STN) can be used to actively predict image locations for 2D object detection and recognition~\cite{jaderberg2015spatial}. Motivated by this, we opt to use STN units as our localization networks. However, extending the standard STN to predict views in 3D space while keeping its differentiability is non-trivial.
To facilitate the back-propagation of the loss gradient from a 2.5D depth image to 3D viewing parameters, we propose to parameterize the depth value at each pixel $(x,y)$ in a depth image over the parameters of the corresponding view $(\theta,\phi)$: $d(x,y)=f(\theta,\phi)$, through a ray-casting based depth computation at each pixel. Furthermore, to avoid the distraction of background clutter and make the recognition attend only to the foreground object of interest, we devise a second-level 2D STN for foreground object localization in each view, saving the extra phase of static object detection~\cite{atanasov2014}. Our two-level attention model produces efficient view planning and robust object recognition, as demonstrated by experimental evaluations on both synthetic and real data. We have integrated our method into the autonomous scene scanning system in~\cite{Xu15}, operated by a PR2 robot holding a depth camera, achieving active object recognition in a cluttered scene (Figure~\ref{fig:teaser}). Our main contributions include: \begin{itemize} \item A 3D attentional network for active object recognition through integrating RNN and STN. \item A differentiable extension of STN for view selection in 3D space, leading to a learnable attentional network which can be trained efficiently. \item A 2D localization network based on STN for localizing the foreground object of interest in the presence of background clutter, achieving robust recognition. \end{itemize} \section{APPROACH} \label{sec:method} We first provide an architectural overview of our recurrent attentional model, followed by a detailed introduction of the individual subnetworks. Specifically, we elaborate the two levels of spatial transformer networks, responsible for view selection and foreground detection, respectively. Finally, we provide details about model training and inference. \subsection{Architecture Overview} Figure~\ref{fig:model} shows the architecture of our recurrent attentional model.
The main body of the model is a Recurrent Neural Network (RNN) for modeling the sequential dependencies among consecutive views. At each time step, the model takes a depth image as input, extracts image feature, amalgamates information of past views, makes a prediction of the object class and produces a next-best-view for future observation. To achieve that, we embed three subnetworks into the RNN: a 3D-STN for selecting views in 3D space, a 2D-STN for detecting object of interest in each view, and a shape classifier for depth-based object recognition. Our model works as follows. Given an input depth image of the current view, it first selects within the image a rectangle region (not necessarily axis-aligned) containing the object being recognized, through estimating the transformation (including translation, rotation and scaling) of the region. This is done by a 2D localization network which is essentially a stack of convolutional layers (Conv$_{\text{2D}}$). The region is then transformed back to the input image size by the depth image generator (DG$_{\text{2D}}$) and fed into another stack of convolutional layers (Conv$_{\text{3D}}$) for feature extraction. The extracted features are aggregated with those extracted from the past views, with the help of the RNN hidden layers. The aggregated features are then used for classification and next-best-view (NBV) prediction, with the fully connected layers FC$_{\text{class}}$ and FC$_{\text{loc}}$, respectively. Based on the predicted NBV parameters, a depth image is generated (rendered or captured by the depth image generator DG$_{\text{3D}}$), serving as the input to the next time step. As shown in Figure~\ref{fig:model}, the 2D-STN is the subnetwork constituted by the 2D localization network Conv$_{\text{2D}}$ and the 2D depth image sampler DG$_{\text{2D}}$. 
The 3D-STN is the subnetwork encompassing the convolutional layers Conv$_{\text{3D}}$, the fully connected layers FC$_{\text{loc}}$ and the depth image generator DG$_{\text{3D}}$. The shape classifier is composed of Conv$_{\text{3D}}$ and FC$_{\text{class}}$, which is a standard Convolutional Neural Network (CNN) classifier. Therefore, the convolutional layers are shared by the CNN classifier and the 3D-STN. \input{model} \subsection{Spatial Transformer Networks} The spatial transformer is a differentiable sampling-based network, which gives neural networks the ability to actively spatially transform the input data, without any spatial supervision in training. Generally, a spatial transformer is composed of a localization network and a generator. The STN achieves spatial attention by first passing the input image into a localization network which regresses the transformation, and then generating a transformed image with the generator. The transformed image is deemed to be easier to recognize or classify, thus better approaching the desired task. The STN fits well with our problem setting. Due to its differentiability, it enables end-to-end training with back-propagation, making the network easier to learn. It is relatively straightforward to employ it for object localization in the image of a given view. However, when using it to predict views in 3D, we face the problem of non-differentiable pixel values with respect to the viewing parameters, which is addressed by our work. \input{sphere} \subsection{2D-STN for Object Detection} Object recognition in a real scene is especially challenging due to scene clutter. The recognition of an object of interest is usually distracted by background clutter. To address this issue, a natural choice is to conduct object detection to remove the background distraction. However, object detection itself is a non-trivial task. We approach object detection using spatial transformer networks due to their easy integration with our model.
Since this STN performs localization in depth images, we refer to it as 2D-STN. There are two subnetworks in 2D-STN: a \emph{2D localization network} Conv$_{\text{2D}}$ and a \emph{depth image generator} DG$_{\text{2D}}$. At a given time step $t$ in the recurrent model (Figure~\ref{fig:model}), Conv$_{\text{2D}}$ takes an input depth image (possibly with background) $d_t$ and outputs the spatial transformation parameters $m_t$: \begin{equation} m_t = f_{\text{Conv}}^{\text{2D}}(d_t, W_{\text{Conv}}^{\text{2D}}), \label{eq:2dstn1} \end{equation} where $W_{\text{Conv}}^{\text{2D}}$ are the weights to learn in Conv$_{\text{2D}}$. The parameters are then used by DG$_{\text{2D}}$ to generate a transformed image $\hat{d}_t$ containing only the foreground object: \begin{equation} \hat{d}_t = f_{\text{DG}}^{\text{2D}}(d_t, m_t). \label{eq:2dstn2} \end{equation} Our 2D-STN differs from the original STN in~\cite{jaderberg2015spatial} in a few aspects. First, for training data generation, unlike~\cite{jaderberg2015spatial} where transformation and background noise are randomly added, we opt to use depth images of indoor scenes as our background, to better fit our goal of object detection in an indoor scene. Specifically, we synthesize training data by using the depth of the object as foreground while adopting the depth of scenes from the NYU dataset~\cite{Silberman2012} as background. Since the background scenes also contain many household objects, the 2D-STN can be enhanced by such training data, so that it is capable of detecting the foreground object being recognized, out of background clutter (see Figure~\ref{fig:vis} and~\ref{fig:sim}). Second, the network is adapted to the size of our input depth images (Figure~\ref{fig:2dstn}). Third, since depth images are unlikely to distort with shear or anisotropic scaling, we remove the shear and anisotropic-scaling components of the affine transformation in the standard STN.
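For concreteness, the restricted transformation that remains (translation, rotation and isotropic scale) and the bilinear sampler can be sketched as follows (an illustrative numpy version, not our actual implementation):

```python
import numpy as np

def similarity_grid(height, width, scale, angle, tx, ty):
    """Sampling grid for the restricted transformation of the 2D-STN:
    isotropic scaling, rotation and translation (no shear or anisotropy),
    in normalized [-1, 1] image coordinates."""
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, height),
                         np.linspace(-1.0, 1.0, width), indexing="ij")
    c, s = np.cos(angle), np.sin(angle)
    xs_src = scale * (c * xs - s * ys) + tx
    ys_src = scale * (s * xs + c * ys) + ty
    return xs_src, ys_src

def bilinear_sample(img, xs_src, ys_src):
    """Bilinear sampling of img at the normalized source coordinates,
    as in the STN sampler; sampling weights are differentiable in the
    transformation parameters."""
    h, w = img.shape
    x = (xs_src + 1.0) * (w - 1) / 2.0
    y = (ys_src + 1.0) * (h - 1) / 2.0
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    return (img[y0, x0] * (1 - dx) * (1 - dy)
            + img[y0, x0 + 1] * dx * (1 - dy)
            + img[y0 + 1, x0] * (1 - dx) * dy
            + img[y0 + 1, x0 + 1] * dx * dy)
```

The identity parameters reproduce the input exactly, and a half-turn rotation flips the image in both axes, as expected.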
\subsection{3D-STN for View Selection} Given an input depth image, the goal of 3D-STN is to extract image features, regress the 3D viewing parameters of the next-best-view (with respect to the recognition task) and generate (render or capture) a new depth image from that view. 3D-STN is again comprised of two subnetworks: \emph{3D localization network} (Conv$_{\text{3D}}$ + FC$_{\text{loc}}$) and \emph{depth image generator} DG$_{\text{3D}}$. During forward passing, the localization network takes the depth image $\hat{d}_t$ as input and outputs the next best view $v_{t+1}$. The view is parameterized in the local spherical coordinate system built w.r.t. the initial view $v_0$ (Figure~\ref{fig:sphere} left), and represented as a tuple $(\theta_t,\varphi_t)$, where $\theta_t$ is azimuth and $\varphi_t$ elevation. Note that the viewing parameters do not include the radial distance to the object center. This is because our 2D-STN is able to deal with object scaling more easily, thus simplifying the 3D-STN. Specifically, the convolutional network Conv$_{\text{3D}}$ first extracts features from the depth image output by the 2D-STN: \begin{equation} u_t = f_{\text{Conv}}^{\text{3D}}(\hat{d}_t, W_{\text{Conv}}^{\text{3D}}), \label{eq:3dstn1} \end{equation} where $W_{\text{Conv}}^{\text{3D}}$ are the weights of Conv$_{\text{3D}}$. These features are amalgamated with those of past views stored in the RNN hidden layer $h_{t-1}$: \begin{equation} h_{t} = g(W_{ih}u_t + W_{hh}h_{t-1}), \label{eq:rnn} \end{equation} where $g(\cdot)$ is a nonlinear activation function. $W_{ih}$ and $W_{hh}$ are the weights for input-to-hidden and hidden-to-hidden connections, respectively. The aggregated feature in $h_t$ is then used to regress the viewing parameters of the next step: \begin{equation} (\theta_{t+1},\varphi_{t+1}) = f_{\text{FC}}^{\text{loc}}(h_t, W_{\text{FC}}^{\text{loc}}), \label{eq:3dstn2} \end{equation} where $W_{\text{FC}}^{\text{loc}}$ are the weights of FC$_{\text{loc}}$.
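The aggregation and regression steps above can be sketched as follows (a toy numpy version with random weights; in the real network $u_t$ comes from Conv$_{\text{3D}}$ and all weights are learned):

```python
import numpy as np

# toy dimensions; the real feature extractor Conv_3D is a CNN
FEAT, HID = 64, 128
rng = np.random.default_rng(0)
W_ih = rng.normal(scale=0.1, size=(HID, FEAT))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(HID, HID))    # hidden-to-hidden weights
W_loc = rng.normal(scale=0.1, size=(2, HID))     # FC_loc head: (theta, phi)

def rnn_step(u_t, h_prev):
    """One aggregation step h_t = g(W_ih u_t + W_hh h_{t-1}),
    with tanh as the nonlinearity g."""
    return np.tanh(W_ih @ u_t + W_hh @ h_prev)

def next_best_view(h_t):
    """FC_loc head: regress azimuth and elevation of the next view."""
    theta, phi = W_loc @ h_t
    return theta, phi

h = np.zeros(HID)
for _ in range(3):                     # three simulated glimpses
    u = rng.normal(size=FEAT)          # stand-in for Conv_3D features
    h = rnn_step(u, h)
theta_next, phi_next = next_best_view(h)
```

The hidden state accumulates information across glimpses, and the same $h_t$ feeds both the view regressor and, below, the classifier head.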
The viewing parameters are then used by the depth image generator DG$_{\text{3D}}$ to render a new depth image from the training 3D shape during training, or to capture one against the object of interest in testing: \begin{equation} d_{t+1} = f_{\text{DG}}^{\text{3D}}(\theta_{t+1},\varphi_{t+1}). \label{eq:3dstn3} \end{equation} In order to make our attentional network learnable, we require the generated depth image to be differentiable with respect to the viewing parameters. This is achieved by casting a ray from the camera projection center through a pixel, and computing the intersection point between the ray and the 3D shape (if any). Therefore, as shown in Figure~\ref{fig:sphere} (right), given the viewing parameters ($\theta_t,\varphi_t$) and a 3D shape $S$, the depth value at any given pixel $(x,y)$ of the depth image can be computed as $d_t(x,y)=f_{\text{ray}}(x, y, p, \theta_t, \varphi_t)$, where $p$ is the intersection point. $f_{\text{ray}}$ simply computes the distance between $p$ and pixel $(x,y)$, which is differentiable. A basic assumption behind this parameterization of depth values w.r.t. viewing parameters and the intersection point is that the intersection point does not change when the view change is small, so the first-order derivative over the viewing parameters can be approximated by keeping the point fixed. \subsection{Classification Network} The depth image output by the 2D-STN at each time step is passed into a \emph{CNN classifier} (Conv$_{\text{3D}}$ + FC$_{\text{class}}$) for class label prediction. Note that the classification is also based on the aggregated features of both current and past views: \begin{equation} c_t = f_{\text{FC}}^{\text{class}}(h_t, W_{\text{FC}}^{\text{class}}), \label{eq:class} \end{equation} where $W_{\text{FC}}^{\text{class}}$ are the weights of FC$_{\text{class}}$.
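The ray-cast depth rendering used by DG$_{\text{3D}}$ above can be sketched for the simplest shape, a sphere, where the ray--shape intersection is closed-form. This toy version (our own construction, not the paper's renderer) returns the depth of the central pixel for a camera on the viewing sphere looking at the origin:

```python
import numpy as np

def camera_pos(theta, phi, R=3.0):
    """Camera position on the viewing sphere (azimuth theta, elevation phi)."""
    return R * np.array([np.cos(phi) * np.cos(theta),
                         np.cos(phi) * np.sin(theta),
                         np.sin(phi)])

def ray_depth(theta, phi, center, radius, R=3.0):
    """Depth of the central pixel: distance along the ray from the camera
    toward the origin to its first intersection with the sphere
    |x - center| = radius (np.inf if the ray misses)."""
    p = camera_pos(theta, phi, R)
    direction = -p / np.linalg.norm(p)          # central ray looks at origin
    oc = p - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius**2)
    if disc < 0:
        return np.inf
    return -b - np.sqrt(disc)                   # nearer root = first hit
```

For a unit sphere at the origin the depth is $R - 1$ for every view; for generic shapes the derivative of the depth w.r.t. $(\theta,\varphi)$ is approximated, as stated in the text, by holding the intersection point fixed.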
\input{2dstn} \subsection{Training and Inference} For the sake of convergence speed, we opt to decompose the training of our model into two parts: 1) the 3D-STN, RNN and CNN classifier; 2) the 2D-STN. The first part is trained by virtually recognizing the 3D shapes in the training dataset, using rendered depth images. For each 3D shape, we randomly select $50$ views as initial views. Starting from each initial view, we run virtual recognition for an episode of $10$ time steps. In each step, the current depth image is input to the network, which outputs both a class label and the next-best-view. The network is then trained with two loss functions: the cross-entropy loss of classification and the movement cost incurred by the next-best-view. The movement cost is measured as the great-circle arc length from the current view to the next-best-view in the spherical coordinate system. The training tunes the parameters of the 3D-STN, the RNN and the classification network simultaneously, using back-propagation through time (BPTT)~\cite{Mozer1989}. To train the 2D-STN, we link it to a pre-trained CNN classifier (we use AlexNet~\cite{krizhevsky2012}), as shown in Figure~\ref{fig:2dstn}, in order to train it with respect to an image classification task. During training, we fix the parameters of the AlexNet and only tune the parameters of the 2D-STN, thus making the training easier to converge. At inference time, a captured depth image is passed into our attentional model, for both object classification and next-best-view prediction. This is repeated until termination conditions are met. We set two termination conditions for inference: 1) the classification uncertainty, measured by the Shannon entropy of the classification probability, is less than a threshold ($0.1$); 2) the maximum number ($15$) of time steps has been reached.
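The two ingredients above, the great-circle movement cost and the entropy-based termination test, can be sketched as follows (threshold $0.1$ and budget $15$ as stated in the text; the function names are ours):

```python
import numpy as np

def movement_cost(view_a, view_b):
    """Great-circle arc length between views (theta, phi) on the unit
    viewing sphere; theta is azimuth, phi is elevation."""
    (t1, p1), (t2, p2) = view_a, view_b
    c = (np.sin(p1) * np.sin(p2)
         + np.cos(p1) * np.cos(p2) * np.cos(t1 - t2))
    return np.arccos(np.clip(c, -1.0, 1.0))

def should_terminate(probs, t, entropy_thresh=0.1, max_steps=15):
    """Stop when the Shannon entropy of the class distribution falls below
    the threshold, or when the step budget is used up."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))
    return entropy < entropy_thresh or t >= max_steps
```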
\section{RELATED WORK} \label{sec:related} Active object recognition has a rich literature in robotics, vision and graphics (surveys are available in, e.g.,~\cite{scott2003,roy2004active}). We provide a brief review of related work, categorized into information theoretic and policy learning approaches, as well as some recent attempts at end-to-end learning. \paragraph*{Information theoretic approaches} The information theoretic formulation represents a standard approach to active vision problems. The basic idea is to quantify the information gain of each view by measuring the mutual information between observations and object classes~\cite{denzler2002}, the entropy reduction of object hypotheses~\cite{borotschnig2000}, or the decrease of belief uncertainty about the object that generated the observations~\cite{callari2001}. The optimal views are those which are expected to receive the maximal information gain. The estimation of information gain usually involves learning a generative object model (likelihood or belief state) so that the posterior distribution of the object class under different views can be estimated. Different methods have been utilized to estimate information gain, such as Monte Carlo sampling~\cite{denzler2002}, Gaussian Process Regression~\cite{huber2012} and reinforcement learning~\cite{arbel2001entropy}. \paragraph*{Policy learning approaches} Another line of research seeks to learn viewing policies. The problem is often viewed as one of stochastic optimal control and cast as a partially observable Markov decision process. In~\cite{paletta2000active}, reinforcement learning is utilized to learn, offline, an approximate policy that maps a sequence of observations to a discriminative viewpoint. Kurniawati et al. employ a point-based approximate solver to obtain a non-greedy policy offline~\cite{kurniawati2008sarsop}. In contrast to offline learning, Lauri et al.
attempted to apply Monte Carlo tree search (MCTS) to obtain an online active hypothesis testing policy for view selection~\cite{lauri2015active}. Our method learns and compiles viewing policies into the hidden layers of an RNN, leading to a high-capacity view planning model. \paragraph*{End-to-end learning approaches} The recent rapid development of deep learning models has spurred interest in end-to-end learning of active vision policies~\cite{levine2016end}. Malmir et al. use deep Q-learning to find the optimal policy for view selection from raw images~\cite{malmir2016deep}. Our method shares similarities with the recent work of Wu et al.~\cite{Wu15} in taking 2.5D depth images as input and 3D shapes as training data. They adopt a volumetric representation of 3D shapes and train a Convolutional Deep Belief Network (CDBN) to model the joint distribution over volume occupancy and shape category. By sampling from the distribution, shape completion can be performed based on observed depth images, over which virtual scanning is conducted to estimate the information gain of a view. Different from their method, our attention model is trained offline, hence no online sampling is required, making it efficient for online active recognition. The works of Jayaraman and Grauman~\cite{jayaraman2016} and Xu et al.~\cite{xu2016} are the most related to ours, where the recognition and control modules are jointly optimized based on reinforcement learning. We employ spatial transformer units~\cite{jaderberg2015spatial} as our locator networks to obtain a fully differentiable network. \section{RESULTS \& EVALUATION} \label{sec:results} \subsection{Parameters and statistics} \label{subsec:impl} \paragraph*{3D shape database for training} Our method has been trained and tested with a large-scale 3D shape database, the Princeton ModelNet40 dataset~\cite{Wu15}, containing $12,311$ shapes in $40$ object categories.
$80\%$ of the dataset was used for training and the remaining $20\%$ for testing. During training, a 3D model needs to be rendered into a $224 \times 224$ depth image from any given viewpoint, using the ray-casting algorithm (parallel implementation on the GPU). All depth values are normalized into $[0,255]$. \paragraph*{Parameters of subnetworks} The unit size of the hidden layer of the RNN is $1024$. We use the ReLU activation function, $f(x) = \max(0,x)$, for all hidden layers, for fast training. The classification network (CNN) takes the AlexNet architecture~\cite{krizhevsky2012}, which contains $5$ convolutional layers and $3$ fully connected layers followed by a soft-max layer. The same parameter settings as in the original paper are used for the various layers. The 3D-STN contains $5$ convolutional layers (shared with the classification CNN) and $2$ fully connected layers. In summary, about $62M$ parameters in total are optimized in training the 3D-STN, the RNN and the CNN. The 2D-STN contains $4$ convolutional layers as well as a fully connected layer. The first layer has $64$ filters of size $11$ and stride $4$; the second layer has $192$ filters of size $5$; the third layer has $64$ filters of size $3$; the fourth layer has $20$ filters of size $3$. In total there are about $61M$ parameters in the 2D-STN. \subsection{Evaluation and comparison} \paragraph*{Comparison of NBV estimation} To evaluate the performance of NBV estimation, we compare our attentional method against four alternatives, including a baseline method and three state-of-the-art ones. The baseline method \emph{Random} selects the next view randomly. The state-of-the-art NBV techniques being compared include \emph{3DShapeNets}~\cite{Wu15}, \emph{MV-RNN}~\cite{Xu15} and the active recognition method based on \emph{Pairwise} learning~\cite{johns2016pairwise}.
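For the four convolutional layers of the 2D-STN listed above, the weight-plus-bias count can be tallied directly. This is a sketch assuming single-channel depth input; the quoted $\sim$$61M$ total is dominated by the fully connected layer, whose size is not specified here:

```python
def conv_params(layers, in_channels=1):
    """Weights + biases for a chain of square conv layers, each given as a
    (num_filters, kernel_size) pair; strides do not affect the count."""
    total, c_in = 0, in_channels
    for c_out, k in layers:
        total += k * k * c_in * c_out + c_out
        c_in = c_out
    return total

# The four convolutional layers of the 2D-STN as described above:
stn2d_convs = [(64, 11), (192, 5), (64, 3), (20, 3)]
```

Under these assumptions the conv stack alone contributes 437,396 parameters, confirming that the bulk of the $\sim$$61M$ must sit in the fully connected layer.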
The tests were conducted on the test shapes in ModelNet40 with the task of classification over the $40$ classes. For a fair comparison of the quality of estimated NBVs, we let each method predict its own NBV sequences, and uniformly use a pre-trained Multi-View CNN shape classifier (MV-CNN)~\cite{Su15MV} for shape classification. \input{plot_nbv} Figure~\ref{fig:plot-nbv}(a) plots the recognition rate over the number of predicted NBVs. Our method achieves the fastest increase in accuracy. To further evaluate the efficiency of the NBVs, we plot in (b) the information gain of different NBVs, measured by the decrease of the Shannon entropy of the classification probability distributions. Compared to the alternatives, the NBV sequences output by our method attain larger information gain in early steps, demonstrating higher efficiency. \paragraph*{Timings} Table~\ref{tab:timing} lists the training and testing time of our method on both the ModelNet10 (containing $4905$ shapes in $10$ categories) and ModelNet40 datasets. Since the 2D-STN is trained outside the main networks (RNN, CNN and 3D-STN), we report its training time separately. Table~\ref{tab:timecomp} compares the training time of three methods, i.e., 3DShapeNets, MV-RNN and ours. The training of 3DShapeNets involves learning the generative model with a CDBN. MV-RNN is trained with reinforcement learning. The comparison shows the training efficiency of our model over the two alternatives. All timings were obtained on a workstation with an Intel$^{\circledR}$ Xeon E5-2670 @ 2.30GHz $\times$ 24, 64GB RAM and an Nvidia$^{\circledR}$ Quadro M5000 graphics card. \input{tab_timing} \input{tab_time_comp} \paragraph*{NBV estimation with background clutter} The advantage of our method is best demonstrated in a cluttered scene, where the recognition of a targeted object could be distracted by background clutter. With the 2D-STN, our method can robustly ``detect'' the object out of the background.
We evaluate this on a \emph{benchmark dataset} containing both synthetic and real test scenes, with inter-object occlusion (see Figure~\ref{fig:benchmark} for a few samples). Each targeted object in the test scenes was manually labeled with a ground-truth ModelNet40 category. We compare in Figure~\ref{fig:plot-2dstn} the recognition accuracy of four methods: 3DShapeNets, MV-RNN, and our method with and without the object localization network (2D-STN), on the two datasets. The comparisons show that our method, enhanced by the object localization in each view, increases the recognition accuracy by $8.9\%$ and $10.3\%$ on average for the synthetic and real datasets, respectively. Our method also significantly outperforms the two state-of-the-art methods being compared, demonstrating its strong resilience against background clutter. \input{plot_2dstn} \input{benchmark} \paragraph*{Robustness against imperfect views} In real scenarios, a robot can hardly move the camera exactly on the viewing sphere around the object of interest, so the camera cannot always point to the center of the object while keeping a fixed distance to it. Such imperfect views may cause severe degradation of recognition accuracy, since the model was trained under perfect viewing conditions. Here, a significant benefit of the 2D-STN is that our model can always ``track'' the object of interest under shifted viewing angles and varied viewing distances. To evaluate this, we test the robustness of our method against view perturbations. Specifically, we introduce view perturbations during the testing phase by altering the viewing direction of an estimated NBV by an angle of up to $\pm30^{\circ}$ and the viewing distance by up to $\pm20\%$ of the original one. In Figure~\ref{fig:plot-viewperturb}, we plot the recognition accuracy over the amount of view perturbation, for our method with and without the 2D-STN.
The results demonstrate that, with the object localization, our method gains robustness to perturbed views. For reference, the results of 3DShapeNets and MV-RNN are also given. \paragraph*{Visualization of attentions} To visually investigate the behavior of our attention model, we visualize in Figure~\ref{fig:vis} the sequence of NBVs, along with the attended regions in the depth image at each view, during the recognition of several real objects. We also test our algorithm on the newly released dataset of RGBD object scans~\cite{Choi2016}, which provides RGBD videos targeting objects in various categories, captured by humans. Since there is no camera pose information, we run SLAM-based depth fusion~\cite{Niessner2013} on each depth video and record the camera pose for each frame. Such a parameterized depth video can serve as our viewing space, from which our method can ``acquire'' depth images based on the viewing parameters. The results demonstrate that our method can correctly recognize the objects with plausibly planned views. After determining the object category, we utilize the MV-CNN method~\cite{Su15MV} to retrieve a 3D model (shown to the right) from the recognized category, using the acquired depth images as input. \input{plot_viewperturb} \paragraph*{Testing on a robot platform} We have integrated our active object recognition with the autonomous scene scanning and reconstruction system proposed in~\cite{Xu15}, to automate scene modeling by recognizing the objects in the scene, retrieving the best matched 3D shape and placing it into the scene, similar to~\cite{Salas-Moreno2013} and~\cite{Xu16}. The system is built on top of a Willow Garage PR2 robot holding a Microsoft Kinect (version 2), performing a rough scene reconstruction. Note that the reconstructed scene geometry is not used by our recognition, but only serves as the reference for scene modeling.
Targeting an object of interest, our method drives the robot to acquire a series of depth images from the estimated NBVs, and outputs the object's category. We then retrieve a 3D model from the category as above, and place it into the scene by aligning the depth images against the scene geometry, similar to~\cite{Xu16}. In Figure~\ref{fig:sim}, we show the active recognition for scene modeling with the PR2 platform. In each depth image along the NBV sequence, we highlight the attended regions (corresponding to the object of interest). The final retrieved 3D model is placed into the partially reconstructed scene, shown to the right. \input{visualization}
\section{Introduction} In this paper we study the gravitational response of vortices that carry localized amounts of external magnetic flux, called {\em Dark Strings} or {\em Dark Vortices} in the literature \cite{DStrings, DSGravity}. The goal is to understand how their back-reaction influences the transverse geometry through which they move, and the geometry that is induced on their own world-sheet. We find the initially surprising result that the gravitational response of such an object is locally independent of the amount of flux it contains, and show how this can be simply understood. \subsubsection*{Motivation} Why study the gravitational response of Dark Vortices? Vortices are among the simplest stable solitons and arise in many theories with spontaneously broken $U(1)$ gauge symmetries \cite{NOSolns}. They can arise cosmologically as relics of epochs when the Universe passes through symmetry-breaking phase transitions. Such cosmic strings are widely studied \cite{CStrings} because, unlike other types of cosmic defects, they need not be poisonous for later cosmology since the resulting cosmic tangle tends not to come to dominate the energy density in a problematic way. In the simplest models a vortex defines a region outside of which the $U(1)$ symmetry is broken while inside it remains (relatively) unbroken, and as a result all magnetic $U(1)$ flux is confined to lie completely within the vortex interior. However in theories with more than one $U(1)$ factor more complicated patterns can also exist, for which magnetic fields outside the vortex can also acquire a localized intra-vortex component. Such vortices naturally arise in `Dark Photon' models \cite{DPhoton}, for which the ordinary photon mixes kinetically \cite{Bob} with a second, spontaneously broken, $U(1)$ gauge field (such fields have been widely studied as Dark Matter candidates \cite{DPDarkMatter}).
Cosmic strings of this type could carry localized ordinary magnetic flux, even though the $U(1)_{\scriptscriptstyle EM}$ gauge group remains unbroken \cite{DStrings, DSGravity}. Of most interest are parameters where the vortex's transverse thickness is much smaller than the sizes of interest for the geometry transverse to the source. In such situations only a few vortex properties are important, including the tension (energy per unit length) and the amount of flux localized on the vortex (or more generally brane-localized flux, or BLF for short). Indeed these two quantities (call them $T_b$ and $\zeta_b$) provide the coefficients of the leading terms in any derivative expansion of a vortex action (for which more explicit forms are also given below), \begin{equation} \label{Sbforms} S_b = - T_b \, \int \omega + \zeta_b \int \, \star \, A + \cdots \,, \end{equation} where $\omega$ is the volume form of the codimension-two surface and $\star A$ is the Hodge dual of the $U(1)$ field strength, $A_{{\scriptscriptstyle M} {\scriptscriptstyle N}} = \partial_{\scriptscriptstyle M} A_{\scriptscriptstyle N} - \partial_{\scriptscriptstyle N} A_{\scriptscriptstyle M}$ whose flux is carried by the vortex. 
These are the leading terms inasmuch as all terms represented by the ellipses involve two or more derivatives.\footnote{A single-derivative term involving the world-sheet extrinsic curvature is also possible, but our focus here is on straight motionless vortices.} In four dimensions both $\omega$ and $\star A$ are 2-forms and so can be covariantly integrated over the 2-dimensional world-sheet of a cosmic string, while in $D=d+2$ dimensions they are $d$ forms that can be integrated over the $d$-dimensional world volume of a codimension-2 surface.\footnote{That is, a brane with precisely two transverse off-brane dimensions.} Previous workers have studied gravitational response in the absence of brane-localized flux \cite{OtherGVs}, but our particular interest is on how $\zeta_b$ competes with $T_b$ to influence the geometry. Our analysis extends recent numerical studies \cite{DSGravity} of how dark strings gravitate, including in particular an effective field theory analysis of the BLF term and its gravitational properties. Besides being of practical interest for Dark Photon models, part of our motivation for this study also comes from brane-world models within which the familiar particles of the Standard model reside on a 3+1 dimensional brane or `vortex' within a higher-dimensional space.\footnote{Our restriction to codimension-2 branes makes $d=4$ and $D=6$ the most interesting case of this type \cite{GS}.} Comparatively little is known about how higher-codimension branes situated within compact extra dimensions back-react gravitationally to influence their surrounding geometries,\footnote{By contrast, back-reaction is fairly well-explored for codimension-1 objects due to the extensive study of Randall-Sundrum models \cite{RS}.} and codimension-2 objects provide a simple nontrivial starting point for doing so. 
In particular, a key question in any such model is what stabilizes the size and shape of the transverse compact dimensions, and answering this question can hinge on understanding how the geometry responds to the presence of the branes. Since long-range inter-brane forces vary only logarithmically in two transverse dimensions, they do not fall off with distance and so brane back-reaction and inter-brane forces are comparatively more important for codimension-2 objects than for objects of higher codimension. Furthermore, several mechanisms are known for stabilizing extra dimensions, and the main ones involve balancing inter-brane gravitational forces against the cost of distorting extra-dimensional cycles wrapped by branes or threaded by topological fluxes \cite{SS, GKP, GoldWis, 6DStabnb, 6DStab}. Since brane-localized flux is the leading way fluxes and uncharged branes directly couple to one another, it is crucial for understanding how flux-carrying vortices interact with one another and their transverse environment. Localized flux has recently also been recognized to play a role in the stability of compact geometries \cite{dahlen}. Finally, the fact that cosmic strings can have flat world-sheets for any value of their string tension \cite{OtherGVs} has been used to suggest \cite{CLP, CG} they may contain the seeds of a mechanism for understanding the cosmological constant problem \cite{CCprob}. But a solution to the cosmological constant problem also involves understanding how the curvature of the world-sheet varies as its tension and other properties vary. This requires a critical study of how codimension-2 objects back-react onto their own induced geometry, such as we give here.
Although extra-dimensional branes are not in themselves expected to be sufficient to provide a solution (for instance, one must also deal with the higher-dimensional cosmological constant), the techniques developed here can also be applied to their supersymmetric alternatives \cite{SLED}, for which higher-derivative cosmological constants are forbidden by supersymmetry and whose ultimate prospects remain open at this point. We make this application in a companion paper \cite{Companion}. \subsubsection*{Results} Our study leads to the following result:~{\em brane-localized flux does not gravitate.} This result is most intuitively understood when it is the dual field $F = \star\, A$ that is held fixed when varying the metric, since in this case the BLF term $S_{\scriptscriptstyle BLF} = \zeta \int F$ is metric-independent. We show how the same result can also be seen when $A$ is fixed; more precisely, we show that the $\zeta_b$ (or BLF) term of \pref{Sbforms} induces a universal renormalization of the brane's tension, so that the brane's gravitational response is governed only by the total tension including this renormalization. This renormalization is universal in the sense that it does not depend on the size of any macroscopic magnetic field in which the vortex may sit. (The central discussion, with equations, can be found between eqs.~\pref{Tzeta-UV} and \pref{endofhighlight}, below.) Of course the BLF term {\em does} contribute to the external Maxwell equations, generating a flux localized at the vortex position with size proportional to $\zeta_b$. Among other things this ensures that a test charge that moves around the vortex acquires the Aharonov-Bohm phase implied by the localized flux. But its gravitational influence is precisely cancelled by the back-reaction of the Maxwell field, through the gravitational field set up by the localized flux to which the BLF term gives rise.
Since an external macroscopic observer cannot resolve the energy of the vortex-localized BLF term from the energy of the localized magnetic field to which it gives rise, macroscopic external gravitational measurements only see their sum, which is zero. The presence of the localized energy in the induced magnetic field does change the total energy density of the vortex, however, which can be regarded as renormalizing the vortex tension. This renormalization is independent of the strength of any outside magnetic fields. This failure of the BLF term to gravitate has important implications for the curvature that is induced on the vortex world-sheet. To see why, consider the trace-reversed Einstein equations in $D = d+2$ dimensions, which state\footnote{We use Weinberg's curvature conventions \cite{Wbg}, which differ from those of MTW \cite{MTW} only by an overall sign in the definition of the Riemann tensor. Coordinates $x^{\scriptscriptstyle M}$ label all $D$ dimensions while $x^\mu$ ($x^m$) label the $d$-dimensional (2-dimensional) subspaces.} \begin{equation} R_{{\scriptscriptstyle M}{\scriptscriptstyle N}} + \kappa^2 \left( T_{{\scriptscriptstyle M}{\scriptscriptstyle N}} - \frac{1}{d} \, g_{{\scriptscriptstyle M}{\scriptscriptstyle N}} \, {T^{\scriptscriptstyle P}}_{\scriptscriptstyle P} \right) = 0 \,. \end{equation} What is special about this equation is that the factor of $1/d$ ensures that the on-brane stress-energy often drops out of the expression for the on-brane curvature, which is instead governed purely by the {\em off}-brane stress energy. Consequently it is of particular interest to know when $T_{mn}$ vanishes for some reasonable choice of brane lagrangian. 
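The $1/d$ coefficient follows from tracing the Einstein equations in $D = d+2$ dimensions; a quick NumPy check (flat metric and an arbitrary symmetric stress tensor, purely as an algebraic verification) confirms the trace-reversed form displayed above:

```python
import numpy as np

D, d = 6, 4                                   # D = d + 2 spacetime dimensions
g = np.diag([-1.0] + [1.0] * (D - 1))         # flat metric, signature (-,+,...,+)
rng = np.random.default_rng(0)

# Arbitrary symmetric stress-energy, and the Ricci tensor that solves
# the Einstein equation R_MN - (1/2) g_MN R = -kappa^2 T_MN:
kappa2 = 1.0
T = rng.normal(size=(D, D)); T = (T + T.T) / 2
T_tr = np.einsum("MN,MN->", np.linalg.inv(g), T)   # the trace T^P_P
R_scalar = 2 * kappa2 * T_tr / (D - 2)             # from tracing both sides
R = -kappa2 * T + 0.5 * g * R_scalar

# Trace-reversed form quoted in the text, with 1/(D-2) = 1/d:
assert np.allclose(R + kappa2 * (T - g * T_tr / d), 0.0)
```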
$T_{mn}$ would vanish in particular when the brane action is dominated by its tension \begin{equation} \label{tensionSE} T_{\mu\nu} = T_b \, g_{\mu\nu} \; \frac{\delta(y)}{\sqrt{g_2}} \,, \end{equation} where $\delta(y)$ is some sort of regularized delta-like function with support only at the brane position. But the derivation of \pref{tensionSE} from \pref{Sbforms} is complicated by two issues: is there a dependence on the transverse metric hidden in the regularized $\delta(y)$ (which is designed, after all, to discriminate based on proper distance from the vortex); and (for flux-containing branes) what of the metrics appearing in the Hodge dual, $\star A$, of the BLF term? The results found here imply these two issues are not obstructions to deriving \pref{tensionSE} from \pref{Sbforms}. They do this in two ways. First they show how $T_{mn}$ can be derived without ad-hoc assumptions about the metric-dependence of $\delta(y)$. Second, they show that the apparent dependence of the BLF terms on the transverse metric components, $g_{mn}$, is an illusion, because it is completely cancelled by a similar dependence in the gauge-field back-reaction. The remainder of this paper shows how this works in detail. We use three different techniques to do so. \begin{itemize} \item The first works within a UV completion of the dark vortex, for which we explicitly solve all field equations for a system that allows Nielsen-Olesen type vortex solutions. In this construction the BLF term can arise if there is a kinetic mixing, $\varepsilon Z_{{\scriptscriptstyle M} {\scriptscriptstyle N}} A^{{\scriptscriptstyle M} {\scriptscriptstyle N}}$, between the $U(1)$ gauge field, $Z_{\scriptscriptstyle M}$, of the Nielsen-Olesen vortex, and the external gauge field, $A_{\scriptscriptstyle M}$, whose flux is to be localized. 
In this case the mixing of the two gauge fields can be diagonalized explicitly, leading to the advertised cancellation of the BLF coupling as well as a renormalization of the $Z_{\scriptscriptstyle M}$ gauge coupling, $e^2 \to \hat e^2 = e^2 / (1 - \varepsilon^2)$. \item Second, we compute the couplings $T$ and $\zeta$ of the effective action for the codimension-2 vortex in the limit where the length scales of the transverse geometry are much larger than the vortex size. This has the form of \pref{Sbforms}, with $\zeta_b \propto \varepsilon/e$. We verify that it reproduces the physics of the full UV theory, including in particular the cancellation of BLF gravitational interaction and the renormalization of the brane tension quadratically in $\zeta$. \item Finally we compare both of these approaches to explicit numerical calculations of the metric-vortex profiles as functions of the various external parameters, to test the robustness of our results. \end{itemize} \subsubsection*{A road map} The remainder of the paper is organized as follows. The next section, \S\ref{section:system}, describes the action and field equations for the microscopic (or UV) system of interest. \S\ref{subsec:actionFE} shows this consists of a `bulk' sector (the metric plus a gauge field, $A_{\scriptscriptstyle M}$) coupled to a `vortex' sector (a charged scalar, $\Psi$, and a second gauge field, $Z_{\scriptscriptstyle M}$). The vortex sector is designed to support Nielsen-Olesen vortices and these provide the microscopic picture of how the codimension-2 objects arise. The symmetry ans\"atze used for these solutions are described in \S\ref{subsec:ansatz} and the order-of-magnitude scales given by the parameters of the system are summarized in \S\ref{subsec:scales}. Solutions to the field equations describing a single isolated vortex are then described in detail in \S\ref{section:isolatedvortex}, including both analytic and numerical results for the field profiles. 
The logic of this section, starting in \S\ref{subsec:vortexsoln}, is to integrate the field equations in the radial direction, starting from initial conditions at the centre of the vortex and working our way out. The goal is to compute the values of the fields and their first derivatives just outside the vortex. In general we find a three-parameter set of choices for initial conditions (modulo coordinate conditions), that can be taken to be the flux quantum, $n$, for the vortex together with two integration constants ($Q$ and $\check R$) that describe the size of the ambient external magnetic field and the curvature of the on-brane directions.\footnote{For a given vortex lagrangian the tension of the vortex is controlled in terms of $n$ by parameters in the lagrangian. We can also take the tension to be a separate dial -- independent of $n$ --- if we imagine having several vortex sectors with different coupling constants in each sector.} The resulting formulae for the fields and derivatives external to the vortex provide the initial data for further integration into the bulk, and are efficiently captured through their implications for the asymptotic near-vortex form of the bulk solutions, described in \S\ref{subsec:nearvortex}. In \S\ref{subsec:vortexEFT} these expressions for the near-vortex fields and derivatives are also used to match with the effective vortex description of \pref{Sbforms} to infer expressions for $T_b$ and $\zeta_b$ in terms of microscopic parameters. The point of view shifts in \S\ref{section:interactions} from the perspective of a single vortex to the question of how the bulk responds once the two vortices at each end are specified.\footnote{Using electrostatics in 3 spatial dimensions as an analogy, \S\ref{section:isolatedvortex} does the analog of relating the coefficient of $1/r$ in the electrostatic potential to the charge defined by the properties of the source. 
Then \S\ref{section:interactions} asks what the equilibrium configuration is for a collection of charges given the resulting electrostatic potential.} This is done in two ways. One option is to continue integrating the field equations radially away from the first source (with $n$, $Q$ and $\check R$ specified as initial data as before) and thereby learn the properties of the source at the other end of the transverse space (by studying the singularities of the geometry where it closes off and compactifies). Alternatively, one can take the properties of the two sources as given and instead infer the values of $Q$ and $\check R$ that are consistent with the source properties: the two flux quanta $n_+$ and $n_-$, and the overall quantum $N$ for the total magnetic flux in the transverse dimensions. After \S\ref{subsec:integrals} first provides a set of exact integral expressions for quantities like $\check R$ in terms of other properties of the source and bulk solutions, \S\ref{subsec:Exactsolns} describes the exact solutions for the bulk that are maximally symmetric in the on-brane directions and interpolate between any pair of source vortices. Finally, \S\ref{section:discussion} summarizes our results and describes several open directions. Some useful but subsidiary details of the calculations are given in several Appendices. \section{The system of interest} \label{section:system} We start by outlining the action and field equations for the system of interest. It consists of an Einstein-Maxwell sector (the `bulk') coupled to a `vortex' --- or `brane' --- sector, consisting of a complex scalar coupled to a second $U(1)$ gauge field. For generality we imagine both of these sectors live in $D = d+2$ spacetime dimensions, though the cases of most practical interest are the cosmic string [with $(D,d) = (4,2)$] and the brane-world picture [with $(D,d) = (6,4)$].
\subsection{Action and field equations} \label{subsec:actionFE} The action of interest is $S = S_{\scriptscriptstyle B} + S_{\scriptscriptstyle V}$ with bulk action given by \begin{eqnarray} \label{SB} S_{\scriptscriptstyle B} &=& - \int {\hbox{d}}^{d+2}x \; \sqrt{-g} \left[ \frac{1}{2\kappa^2} \; g^{{\scriptscriptstyle M}{\scriptscriptstyle N}} \, {\cal R}_{{\scriptscriptstyle M} {\scriptscriptstyle N}} + \frac14 \, A_{{\scriptscriptstyle M} {\scriptscriptstyle N}} A^{{\scriptscriptstyle M} {\scriptscriptstyle N}} + \Lambda \right] \nonumber\\ &=:& - \int {\hbox{d}}^{d+2}x \; \sqrt{-g} \; \Bigl( L_{\scriptscriptstyle EH} + L_{\scriptscriptstyle A} + \Lambda \Bigr) \end{eqnarray} where $A_{{\scriptscriptstyle M} {\scriptscriptstyle N}} = \partial_{\scriptscriptstyle M} A_{\scriptscriptstyle N} - \partial_{\scriptscriptstyle N} A_{\scriptscriptstyle M}$ is a $D$-dimensional gauge field strength, ${\cal R}_{{\scriptscriptstyle M}{\scriptscriptstyle N}}$ denotes the $D$-dimensional Ricci tensor and the last line defines the $L_i$ in terms of the corresponding item in the previous line. The vortex part of the action is similarly given by \begin{eqnarray} \label{SV} S_{\scriptscriptstyle V} &=& - \int {\hbox{d}}^{d+2}x \; \sqrt{-g} \left[ \frac14 \, Z_{{\scriptscriptstyle M} {\scriptscriptstyle N}} Z^{{\scriptscriptstyle M} {\scriptscriptstyle N}} + \frac{\varepsilon}2 \, Z_{{\scriptscriptstyle M} {\scriptscriptstyle N}} A^{{\scriptscriptstyle M} {\scriptscriptstyle N}} + D_{\scriptscriptstyle M} \Psi^* \, D^{\scriptscriptstyle M} \Psi + \lambda \, \left(\Psi^* \Psi - \frac{v^2}{2} \right)^2 \right] \nonumber\\ &=:& - \int {\hbox{d}}^{d+2}x \; \sqrt{-g} \; \Bigl( L_{\scriptscriptstyle Z} + L_{\rm mix} + L_\Psi + V_b \Bigr) \,, \end{eqnarray} where $D_{\scriptscriptstyle M} \Psi := \partial_{\scriptscriptstyle M} \Psi - i e Z_{\scriptscriptstyle M} \, \Psi$, and the second line again defines the various $L_i$. 
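For orientation, the origin of the shifted coupling $\hat e^2 = e^2/(1-\varepsilon^2)$ advertised in the introduction can already be seen at this level, by completing the square in the gauge kinetic and mixing terms (a short side calculation; the combination $A_{\scriptscriptstyle M} + \varepsilon Z_{\scriptscriptstyle M}$ reappears below as the mixed field $\check A_{\scriptscriptstyle M}$):

```latex
% Completing the square in the quadratic gauge-field terms of S_B + S_V:
\frac14 \, A_{{\scriptscriptstyle M} {\scriptscriptstyle N}} A^{{\scriptscriptstyle M} {\scriptscriptstyle N}}
+ \frac14 \, Z_{{\scriptscriptstyle M} {\scriptscriptstyle N}} Z^{{\scriptscriptstyle M} {\scriptscriptstyle N}}
+ \frac{\varepsilon}2 \, Z_{{\scriptscriptstyle M} {\scriptscriptstyle N}} A^{{\scriptscriptstyle M} {\scriptscriptstyle N}}
= \frac14 \left( A + \varepsilon Z \right)_{{\scriptscriptstyle M} {\scriptscriptstyle N}}
  \left( A + \varepsilon Z \right)^{{\scriptscriptstyle M} {\scriptscriptstyle N}}
+ \frac{1 - \varepsilon^2}{4} \, Z_{{\scriptscriptstyle M} {\scriptscriptstyle N}} Z^{{\scriptscriptstyle M} {\scriptscriptstyle N}} \,.
```

Canonically normalizing the second term, $\hat Z_{\scriptscriptstyle M} := \sqrt{1-\varepsilon^2}\, Z_{\scriptscriptstyle M}$, then converts the coupling $e Z_{\scriptscriptstyle M}$ appearing in $D_{\scriptscriptstyle M} \Psi$ into $\hat e \hat Z_{\scriptscriptstyle M}$ with $\hat e^2 = e^2/(1-\varepsilon^2)$.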
For later purposes it is useful to write $\sqrt{2} \; \Psi = \psi \, e^{i \Omega}$ and adopt a unitary gauge for which the phase, $\Omega$, is set to zero, though this gauge will prove to be singular at the origin of the vortex solutions we examine later. In this gauge the term $L_\Psi$ in $S_{\scriptscriptstyle V}$ can be written \begin{equation} L_\Psi = D_{\scriptscriptstyle M} \Psi^* D^{\scriptscriptstyle M} \Psi = \frac12 \Bigl( \partial_{\scriptscriptstyle M} \psi \, \partial^{\scriptscriptstyle M} \psi + e^2 \psi^2 Z_{\scriptscriptstyle M} Z^{\scriptscriptstyle M} \Bigr) \end{equation} and the potential becomes \begin{equation} V_b = \frac{\lambda}4 \, \Bigl( \psi^2 - v^2 \Bigr)^2 \,. \end{equation} It is also useful to group the terms in the brane and bulk lagrangians together according to how many metric factors and derivatives appear, with \begin{eqnarray} \nonumber &&\phantom{OO}L_{\rm kin} := \frac12 \, g^{{\scriptscriptstyle M}{\scriptscriptstyle N}} \partial_{\scriptscriptstyle M} \psi \, \partial_{\scriptscriptstyle N} \psi \,, \qquad L_{\rm gge} := L_{\scriptscriptstyle A} + L_{\scriptscriptstyle Z} + L_{\rm mix} \\ &&L_{\rm pot} := \Lambda + V_b \qquad \hbox{and} \qquad L_{\rm gm} := \frac12 \,e^2 \psi^2 \, g^{{\scriptscriptstyle M} {\scriptscriptstyle N}} Z_{\scriptscriptstyle M} Z_{\scriptscriptstyle N} \,. \end{eqnarray} For this system the field equations for the two Maxwell fields are \begin{equation} \label{checkAeq} \frac{1}{\sqrt{-g}} \, \partial_{\scriptscriptstyle M} \Bigl[ \sqrt{-g} \Bigl( A^{{\scriptscriptstyle M} {\scriptscriptstyle N}} + \varepsilon \, Z^{{\scriptscriptstyle M} {\scriptscriptstyle N}} \Bigr) \Bigr] = 0 \,, \end{equation} and \begin{equation} \label{Z0eq} \frac{1}{\sqrt{-g}} \, \partial_{\scriptscriptstyle M} \Bigl[ \sqrt{-g} \Bigl( Z^{{\scriptscriptstyle M} {\scriptscriptstyle N}} + \varepsilon \, A^{{\scriptscriptstyle M} {\scriptscriptstyle N}} \Bigr) \Bigr] = e^2 \psi^2 Z^{\scriptscriptstyle N} \,.
\end{equation} The scalar field equation in unitary gauge becomes \begin{equation} \label{Psieom} \frac{1}{\sqrt{-g}} \, \partial_{\scriptscriptstyle M} \Bigl( \sqrt{-g} \; g^{{\scriptscriptstyle M} {\scriptscriptstyle N}} \partial_{\scriptscriptstyle N} \psi \Bigr) = e^2 \psi Z_{\scriptscriptstyle M} Z^{\scriptscriptstyle M} + \lambda \, \psi \Bigl(\psi^2 - v^2 \Bigr) \,, \end{equation} while the Einstein equations can be written in their trace-reversed form \begin{equation} \label{TrRevEin} {\cal R}_{{\scriptscriptstyle M} {\scriptscriptstyle N}} = - \kappa^2 X_{{\scriptscriptstyle M} {\scriptscriptstyle N}} \,, \end{equation} where $X_{{\scriptscriptstyle M} {\scriptscriptstyle N}} := T_{{\scriptscriptstyle M} {\scriptscriptstyle N}} - (1/d) \, T \, g_{{\scriptscriptstyle M} {\scriptscriptstyle N}}$ and the stress-energy tensor is \begin{eqnarray} T_{{\scriptscriptstyle M}{\scriptscriptstyle N}} &=& \partial_{\scriptscriptstyle M} \psi \, \partial_{\scriptscriptstyle N} \psi + e^2 \psi^2 Z_{\scriptscriptstyle M} Z_{\scriptscriptstyle N} + A_{{\scriptscriptstyle M}{\scriptscriptstyle P}} {A_{\scriptscriptstyle N}}^{\scriptscriptstyle P} + Z_{{\scriptscriptstyle M} {\scriptscriptstyle P}} {Z_{\scriptscriptstyle N}}^{\scriptscriptstyle P} \\ && \qquad + \frac{\varepsilon}2 \, \Bigl( A_{{\scriptscriptstyle M}{\scriptscriptstyle P}} {Z_{\scriptscriptstyle N}}^{\scriptscriptstyle P} + Z_{{\scriptscriptstyle M} {\scriptscriptstyle P}} {A_{\scriptscriptstyle N}}^{\scriptscriptstyle P} \Bigr) - g_{{\scriptscriptstyle M} {\scriptscriptstyle N}} \Bigl( L_{\rm kin} + L_{\rm gm} + L_{\rm pot} + L_{\rm gge} \Bigr) \,.\nonumber \end{eqnarray} \subsection{Symmetry ans\"atze} \label{subsec:ansatz} We seek vortex solutions for which the brane/vortex sector describes energy localized along a time-like $d$-dimensional subspace, with nontrivial profiles in the two transverse dimensions. 
Accordingly, our interest is in configurations that are maximally symmetric in the $d$ dimensions (spanned by $x^\mu$) and axially symmetric in the 2 `transverse' dimensions (spanned by $y^m$). We take the fields to depend only on the proper distance, $\rho$, from the points of axial symmetry, and assume the only nonzero components of the gauge field strengths lie in the transverse two directions: $A_{mn}$ and $Z_{mn}$. We choose the metric to be of warped-product form \begin{equation} \label{productmetric} {\hbox{d}} s^2 = g_{{\scriptscriptstyle M} {\scriptscriptstyle N}} \, {\hbox{d}} x^{\scriptscriptstyle M} {\hbox{d}} x^{\scriptscriptstyle N} = g_{mn} \, {\hbox{d}} y^m {\hbox{d}} y^n + g_{\mu\nu} \, {\hbox{d}} x^\mu {\hbox{d}} x^\nu \,, \end{equation} with \begin{equation} \label{warpedprod} g_{mn} = g_{mn}(y) \qquad \hbox{and} \qquad g_{\mu\nu} = W^2(y) \, \check g_{\mu\nu}(x) \,, \end{equation} where $\check g_{\mu\nu}$ is the maximally symmetric metric on $d$-dimensional de Sitter, Minkowski or anti-de Sitter space. The corresponding Ricci tensor is ${\cal R}_{{\scriptscriptstyle M} {\scriptscriptstyle N}} \, {\hbox{d}} x^{\scriptscriptstyle M} {\hbox{d}} x^{\scriptscriptstyle N} = {\cal R}_{\mu\nu} \, {\hbox{d}} x^\mu {\hbox{d}} x^\nu + {\cal R}_{mn} \, {\hbox{d}} y^m {\hbox{d}} y^n$, and is related to the Ricci curvatures, $\check R_{\mu\nu}$ and $R_{mn}$, of the metrics $\check g_{\mu\nu}$ and $g_{mn}$ by \begin{equation} {\cal R}_{\mu\nu} = \check R_{\mu\nu} + g^{mn} \Bigl[ (d-1) \partial_m W \partial_n W + W \nabla_m \nabla_n W \Bigr] \, \check g_{\mu\nu} \,, \end{equation} and \begin{equation} \label{cR2vsR2} {\cal R}_{mn} = R_{mn} + \frac{d}{W} \; \nabla_m \nabla_n W \,, \end{equation} where $\nabla$ is the 2D covariant derivative built from $g_{mn}$. 
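Tracing the first of these relations with $g^{\mu\nu} = \check g^{\mu\nu}/W^2$ (and using $\check g^{\mu\nu} \check g_{\mu\nu} = d$) gives a combination that is used repeatedly below:

```latex
% Trace of the on-brane Ricci tensor for the warped-product metric:
g^{\mu\nu} {\cal R}_{\mu\nu}
= \frac{1}{W^2} \left\{ \check R
+ g^{mn} \Bigl[ d (d-1) \, \partial_m W \, \partial_n W
+ d \, W \, \nabla_m \nabla_n W \Bigr] \right\} \,.
```

For radial profiles $W = W(\rho)$ in the axially symmetric 2D metrics used below (for which $\nabla^2 W = B^{-1} (B W')'$), the bracket collapses to the total-derivative combination $\check R/W^2 + (d/BW^d)\bigl(B W' W^{d-1}\bigr)'$ that appears in the Einstein equations.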
We work with axially symmetric 2D metrics, for which we may make the coordinate choice \begin{equation} \label{xdmetric} g_{mn} \, {\hbox{d}} y^m {\hbox{d}} y^n = A^2(r) \, {\hbox{d}} r^2 + B^2(r) \, {\hbox{d}} \theta^2 = {\hbox{d}} \rho^2 + B^2(\rho) \, {\hbox{d}} \theta^2 \,, \end{equation} where the proper radial distance satisfies ${\hbox{d}} \rho = A(r) {\hbox{d}} r$. With these choices the field equations simplify to the following system of coupled nonlinear ordinary differential equations. \medskip\noindent{\em Gauge fields} \medskip\noindent The gauge field equations become \begin{equation} \label{Acheckeom} \left( \frac{ W^d \check A_\theta'}{B} \right)' = 0 \,, \end{equation} and \begin{equation} \label{Zeom} \frac{1 - \varepsilon^2}{BW^d} \, \left( \frac{ W^d Z_\theta'}{B} \right)' = \frac{e^2 \psi^2 Z_\theta}{B^2} \,, \end{equation} where primes denote differentiation with respect to proper distance, $\rho$, and we define the mixed gauge field, \begin{equation} \label{AZcheckA} \check A_{{\scriptscriptstyle M}} := A_{{\scriptscriptstyle M}} + \varepsilon \, Z_{{\scriptscriptstyle M}} \,. \end{equation} Notice that the off-diagonal contribution to $L_{\rm gge}$ vanishes when this is expressed in terms of $\check A_{\scriptscriptstyle M}$ rather than $A_{\scriptscriptstyle M}$, since \begin{equation} \label{Lggemixed} L_{\rm gge} = L_{\scriptscriptstyle A} + L_{\scriptscriptstyle Z} + L_{\rm mix} = \check L_{\scriptscriptstyle A} + \check L_{\scriptscriptstyle Z} \,, \end{equation} where \begin{equation} \check L_{\scriptscriptstyle A} := \frac14 \, \check A_{mn} \check A^{mn} \quad \hbox{and} \quad \check L_{\scriptscriptstyle Z} := \frac14 \, (1-\varepsilon^2) Z_{mn}Z^{mn} \,. \end{equation} Notice also that \pref{Zeom} has the same form as it would have had in the absence of the $A-Z$ mixing, \pref{AZcheckA}, provided we make the replacement $e^2 \to \hat e^2$, with \begin{equation} \hat e^2 := \frac{e^2}{1 - \varepsilon^2} \,.
\end{equation} Clearly stability requires the gauge-mixing parameter to satisfy $\varepsilon^2 < 1$, and semi-classical methods require us to stay away from this upper limit. \medskip\noindent{\em Scalar field} \medskip\noindent The field equation for $\psi(\rho)$ similarly simplifies to \begin{equation} \label{Psieom2} \frac{1}{BW^d} \, \Bigl( BW^d \, \psi' \Bigr)' = e^2 \psi \left( \frac{Z_\theta}{B} \right)^2 + \lambda \, \psi \Bigl(\psi^2 - v^2 \Bigr) \,. \end{equation} \medskip\noindent{\em Einstein equations} \medskip\noindent The nontrivial components of the matter stress-energy become \begin{equation} \label{Tmunurhotot} T_{\mu\nu} = - g_{\mu\nu} \; \varrho \,, \qquad {T^\rho}_\rho = {\cal Z} - {\cal X} \qquad \hbox{and} \qquad {T^\theta}_\theta = -( {\cal Z} + {\cal X} ) \,, \end{equation} where \begin{equation} \varrho := L_{\rm kin} + L_{\rm gm} + L_{\rm pot} + L_{\rm gge} \,, \end{equation} and \begin{equation} {\cal X} := L_{\rm pot} - L_{\rm gge} \qquad \hbox{and} \qquad {\cal Z} := L_{\rm kin} - L_{\rm gm} \,. \end{equation} In later sections it is useful to split $\varrho = \varrho_{\rm loc} + \check \varrho_{\scriptscriptstyle B}$, ${\cal X} = {\cal X}_{\rm loc} + \check {\cal X}_{\scriptscriptstyle B}$ and ${\cal Z} = {\cal Z}_{\rm loc}+ {\cal Z}_{\scriptscriptstyle B}$ into vortex and bulk parts, which we do as follows: \begin{eqnarray} \label{StressEnergyVBsplit} &&\check \varrho_{\scriptscriptstyle B} := \Lambda + \check L_{\scriptscriptstyle A} \,, \qquad \varrho_{\rm loc} := L_{\rm kin} + L_{\rm gm} + V_b + \check L_{\scriptscriptstyle Z} \nonumber\\ &&\check {\cal X}_{\scriptscriptstyle B} := \Lambda - \check L_{\scriptscriptstyle A} \,, \qquad {\cal X}_{\rm loc} := V_b - \check L_{\scriptscriptstyle Z} \\ \hbox{and} \quad &&{\cal Z}_{\scriptscriptstyle B} := 0 \,, \qquad\qquad\;\; {\cal Z}_{\rm loc} := L_{\rm kin} - L_{\rm gm} = {\cal Z} \,.
\nonumber \end{eqnarray} The components of the trace-reversed Einstein equations governing the $d$-dimensional on-vortex geometry therefore become \begin{equation} \label{4DTrRevEin} {\cal R}_{\mu\nu} = - \kappa^2 X_{\mu\nu} = - \frac{2}{d} \; \kappa^2{\cal X} \; g_{\mu\nu} \,, \end{equation} of which maximal symmetry implies the only nontrivial combination is the trace \begin{equation} \label{avR4-v1} {\cal R}_{(d)} := g^{\mu\nu} {\cal R}_{\mu\nu} = \frac{\check R}{W^2} + \frac{d}{BW^d} \Bigl( BW' W^{d-1} \Bigr)' = -2\kappa^2 {\cal X} \,, \end{equation} and we use the explicit expression for ${\cal R}_{(d)}$ in terms of $\check R$ and $W$. The components dictating the 2-dimensional transverse geometry similarly are ${\cal R}_{mn} = - \kappa^2 X_{mn}$, which has two nontrivial components. One can be taken to be its trace \begin{equation} \label{avR2} {\cal R}_{(2)} := g^{mn} {\cal R}_{mn} = R + d \left( \frac{W''}{W} + \frac{B'W'}{BW} \right) = - \kappa^2 {X^m}_m = - 2 \kappa^2 \left[ \varrho - \left( 1 - \frac{2}{d} \right) {\cal X} \right] \,, \end{equation} and the other can be the difference between its two diagonal elements \begin{equation} {{\cal G}^\rho}_\rho - {{\cal G}^\theta}_\theta = {{\cal R}^\rho}_\rho - {{\cal R}^\theta}_\theta = - \kappa^2 \left( {T^\rho}_\rho - {T^\theta}_\theta \right) \,. \end{equation} Writing out the curvature and stress energy shows this last equation becomes \begin{equation} \label{newEinstein} \frac{B}{W} \left( \frac{W'}{B} \right)' = - \frac{2}{d} \; \kappa^2 {\cal Z} \,. \end{equation} \medskip\noindent{\em Other useful combinations of Einstein equations} \medskip\noindent Other linear combinations of the Einstein equations are not independent, but are sometimes more useful. 
The first is the $(\theta\theta)$ component of the trace-reversed Einstein equation ${{\cal R}^\theta}_\theta = - \kappa^2 {X^\theta}_\theta $ which reads \begin{equation} \label{XthetathetaeEinstein} \frac{ (B' W^d)' }{BW^d} = - \kappa^2 \left[ \varrho - {\cal Z} - \left( 1 - \frac{2}{d} \right) {\cal X} \right] = - 2\kappa^2 \left( L_{\rm gm} + L_{\rm gge} + \frac{{\cal X}}{d} \right) \,. \end{equation} A second useful form is the $(\rho\rho)$ Einstein equation, ${{\cal G}^\rho}_\rho = - \kappa^2 {T^\rho}_\rho$, which is special in that all second derivatives with respect to $\rho$ drop out. This leaves the following `constraint' on the initial conditions for the integration in the $\rho$ direction: \begin{eqnarray} \label{constraint} d \left( \frac{B'W'}{BW} \right) + \frac{\check R}{2W^2} + \frac{d(d-1)}{2} \left( \frac{W'}{W} \right)^2 &=& \kappa^2 \Bigl( {\cal Z} - {\cal X} \Bigr) \nonumber\\ &=& \kappa^2 \Bigl( L_{\rm kin} - L_{\rm gm} - L_{\rm pot} + L_{\rm gge} \Bigr) \,. \end{eqnarray} \subsection{Scales and hierarchies} \label{subsec:scales} Before solving these field equations, we first briefly digress to summarize the relevant scales that appear in their solutions. The fundamental parameters of the problem are the gravitational constant, $\kappa$; the gauge couplings, $e$ (for $Z_{\scriptscriptstyle M}$) and $g_{\scriptscriptstyle A}$ (for $A_{\scriptscriptstyle M}$); the scalar self-coupling, $\lambda$, and the scalar vev $v$. These have the following engineering dimensions in powers of mass: \begin{equation} \left[ \kappa \right] = 1-D/2 \,, \qquad \left[ e \right] = \left[ g_{\scriptscriptstyle A} \right] = 2-D/2 \,, \qquad \left[ \lambda \right] = 4-D \,, \quad \hbox{and} \quad \left[ v \right] = D/2-1 \,. \end{equation} To these must be added the dimensionless parameter, $\varepsilon$, that measures the mixing strength for the two gauge fields. 
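As a mechanical cross-check of this dimensional bookkeeping, one can verify that each term in the lagrangian densities above has mass dimension $D$, and that a few combinations used in this section have the dimensions of a length, a curvature or a pure number. (A small illustrative script; the assignment $[Z_{\scriptscriptstyle M}] = D/2 - 1$ follows from the gauge kinetic term.)

```python
# Consistency check of the quoted engineering dimensions (bookkeeping only).
# Every term in the lagrangian density must have mass dimension D.
def dims(D):
    return {"kappa": 1 - D/2, "e": 2 - D/2, "lam": 4 - D, "v": D/2 - 1}

for D in (4, 6):
    d = dims(D)
    psi, Z = d["v"], D/2 - 1          # [psi] = [v]; [Z_M] from its kinetic term
    assert 2 * (psi + 1) == D                 # kinetic term (d psi)^2
    assert d["lam"] + 4 * psi == D            # potential term lambda psi^4
    assert 2 * (d["e"] + psi + Z) == D        # gauge-matter term e^2 psi^2 Z^2
    assert -2 * d["kappa"] + 2 == D           # Einstein-Hilbert term R / kappa^2
    # derived scales used in this section:
    assert d["e"] + d["v"] == 1                        # 1/(e v) is a length
    assert 2 * d["kappa"] + 2 * d["v"] == 0            # kappa^2 v^2 dimensionless
    assert 2 * (d["kappa"] + d["e"]) + 4 * d["v"] == 2 # kappa^2 e^2 v^4 a curvature
print("dimension bookkeeping consistent for D = 4, 6")
```
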
In terms of these we shall find that the energy density of the vortex is of order $e^2 v^4$, and that this is localized within a region of order \begin{equation} r_\varv = \frac{1}{ev} \,. \end{equation} The effective energy-per-unit-area of the vortex is therefore of order $e^2 v^4 r_\varv^2 = v^2$. These energies give rise to $D$-dimensional curvatures within the vortex of order $1/L_\varv^2 = \kappa^2 e^2 v^4$ and integrated dimensionless gravitational effects (like conical defect angles) of order $\kappa^2 v^2$. We work in a regime where $\kappa v \ll 1$ to ensure that the gravitational response to the energy density of the vortex is weak, and so, for example, defect angles are small and $L_\varv \gg r_\varv$. By contrast, far from the vortex the curvature scale in the bulk turns out to be of order $1/r_{\scriptscriptstyle B}^2$ where \begin{equation} r_{\scriptscriptstyle B} = \frac{{\cal N} \kappa}{g_{\scriptscriptstyle A}} \,, \end{equation} and ${\cal N}$ is a dimensionless measure of the total amount of $A_{\scriptscriptstyle M}$ flux that threads the compact transverse dimensions. Since our interest is in the regime where the vortex is much smaller than the transverse dimensions, we assume throughout that $r_\varv \ll r_{\scriptscriptstyle B}$, and so \begin{equation} \frac{g_{\scriptscriptstyle A}}{e{\cal N}} \ll \kappa v \ll 1 \,. \end{equation} \section{Isolated vortices} \label{section:isolatedvortex} We now describe some solutions to the above field equations, starting with the local properties of an isolated vortex within a much larger ambient bulk geometry. Our goal is to relate the properties of the vortex to the asymptotic behaviour of the bulk fields and their derivatives outside of (but near to) the vortex itself, with a view to using these as matching conditions when replacing the vortex with an effective codimension-2 localized object.
These matching conditions are then used in later sections to see how several vortices interact with one another within a compact transverse geometry. To this end we regard the field equations as equations to be integrated in the radial direction, given a set of `initial conditions' at the vortex centre. \subsection{Vortex solutions} \label{subsec:vortexsoln} For vortex solutions the vortex scalar vanishes at $\rho = 0$, and the vortex fields approach their vacuum values, $\psi \to v$ and\footnote{In unitary gauge.} $Z_{\scriptscriptstyle M} \to 0$, at large $\rho$. Because we work in the regime $\kappa v \ll 1$, these solutions closely resemble the familiar Nielsen-Olesen solutions \cite{NOSolns} found in the absence of gravitational fields. Our analysis in this section reduces to that of \cite{OtherGVs} in the limit of no gauge mixing, $\varepsilon = 0$, and a trivial bulk. The asymptotic approach to the far-field vacuum values can be understood by linearizing the field equations about their vacuum configurations, writing $\psi = v + \delta \psi$ and $Z_\theta = 0 + \delta Z_\theta$. We find in this way that both $\delta \psi$ and $\delta Z_{\scriptscriptstyle M}$ describe massive particles, with respective masses given by \begin{equation} m^2_{\scriptscriptstyle Z} = \hat e^2 v^2 \quad \hbox{and} \quad m^2_\Psi = 2\lambda v^2 \,. \end{equation} From this we expect the approach to asymptopia to be exponentially fast over scales of order $r_{\scriptscriptstyle Z} = m_{\scriptscriptstyle Z}^{-1}$ and $r_\Psi = m_\Psi^{-1}$. Indeed, this expectation is borne out by explicit numerical evaluation. Notice that the two vortex scales are identical, $r_\varv := r_{\scriptscriptstyle Z} = r_\Psi$, in the special BPS case, defined by $\hat \beta = 1$, where \begin{equation} \hat \beta := {\hat e^2}/{2\lambda} \,, \end{equation} and so the BPS case satisfies $\hat e^2 = 2 \lambda$. For convenience we also define $\beta = e^2 / 2 \lambda = (1 - \varepsilon^2) \hat \beta$.
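These mass formulae follow from a one-line linearization of \pref{Zeom} and \pref{Psieom} about the vacuum (a sketch, dropping terms quadratic in the fluctuations; the massless combination $\check A_{\scriptscriptstyle M}$ decouples and stays massless):

```latex
% Linearization about psi = v, Z_M = 0 in unitary gauge:
(1-\varepsilon^2)\, \Box \, \delta Z_{\scriptscriptstyle N}
   = e^2 v^2 \, \delta Z_{\scriptscriptstyle N}
\quad\Longrightarrow\quad
m^2_{\scriptscriptstyle Z} = \frac{e^2 v^2}{1-\varepsilon^2} = \hat e^2 v^2 \,,
\qquad
\Box \, \delta\psi
   = \lambda \, (v + \delta\psi) \Bigl[ (v + \delta\psi)^2 - v^2 \Bigr]
   \simeq 2 \lambda v^2 \, \delta\psi
\quad\Longrightarrow\quad
m^2_\Psi = 2\lambda v^2 \,.
```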
\subsubsection*{Boundary conditions near the origin} We start with a statement of the boundary conditions to be imposed at $\rho = 0$, which express that the transverse metric, $g_{mn}$, is locally flat and that all vectors (and so in particular the gradients of all scalars) must vanish there. For the metric functions we therefore impose the conditions \begin{equation} \label{WBBC} W(0) = W_0 \,, \quad W'(0) = 0 \qquad \hbox{and} \qquad B(0) = 0 \quad \hbox{and} \quad B'(0) = 1 \,. \end{equation} We can choose $W_0 = 1$ by rescaling the $d$-dimensional coordinates, but this can only be done once, so the {\em change}, $\Delta W$, between the inside and the outside of the vortex (or between the centres of different vortices) is a physical quantity to be determined by the field equations. Similarly, for the vortex scalar we demand \begin{equation} \label{psiBC} \psi(0) = \psi'(0) = 0 \,, \end{equation} or we could trade one of these for the requirement that $\psi \to v$ far from the vortex core. Nonsingularity of the bulk gauge field strengths implies they must take the form \begin{equation} A_{mn} = f_{\scriptscriptstyle A} \, \epsilon_{mn} \,, \quad Z_{mn} = f_{\scriptscriptstyle Z} \, \epsilon_{mn} \quad \hbox{and so} \quad \check A_{mn} = \check f_{\scriptscriptstyle A} \, \epsilon_{mn} \,, \end{equation} where $\epsilon_{\rho\theta} = \sqrt{g_2} = B$ is the volume form for the 2D metric $g_{mn}$. Since $\check A_{mn}$ is nonsingular, we know $\check f_{\scriptscriptstyle A}$ is regular at $\rho = 0$, and because $B(\rho) \simeq \rho$ near $\rho = 0$ we see that $\check A_{\rho\theta} \propto \rho$ near the origin. Consequently, in a gauge where $\check A_{\scriptscriptstyle M} \, {\hbox{d}} x^{\scriptscriptstyle M} = \check A_\theta(\rho) \, {\hbox{d}} \theta$ we should expect $\check A_\theta = {\cal O}(\rho^2)$ near the origin.
Naively, the same should be true for the vortex gauge fields $A_{\scriptscriptstyle M}$ and $Z_{\scriptscriptstyle M}$; however, the gauge transformation required to remove the phase everywhere from the order parameter $\Psi = \psi e^{i \Omega}$ ({\em i.e.} to reach unitary gauge) is singular at the origin, where $\Psi$ vanishes and so $\Omega$ becomes ambiguous. Consequently, in this gauge $Z_\theta$ (and so also $A_{\scriptscriptstyle M}$) does not vanish near the origin like $\rho^2$. Instead, because in this gauge $Z_{\scriptscriptstyle M} \to 0$ far from the vortex, flux quantization demands that \begin{equation} \label{ZBC} -\frac{2\pi n}{e} = \Phi_{\scriptscriptstyle Z}(\rho < \rho_\varv) := \oint_{\rho = \rho_\varv} Z = 2\pi \int_0^{\rho_\varv} {\hbox{d}} \rho \, \partial_\rho Z_\theta = 2\pi \Bigl[ Z_\theta(\rho_\varv) - Z_\theta(0) \Bigr] = -2\pi Z_\theta(0)\,, \end{equation} where $n$ is an integer, and we choose $\rho = \rho_\varv$ to be far enough from the vortex that $Z_{\scriptscriptstyle M} \to 0$ there. We therefore require $Z_\theta$ to satisfy the boundary condition: \begin{equation} Z_\theta(0) = \frac{n}{e} \quad \hbox{and so therefore} \quad A_\theta(0) = -\frac{n \varepsilon}{e} \,, \end{equation} where the second equality follows from $\check A_\theta (0) = 0$. \subsubsection*{Vortex solutions} It is convenient to normalize the vortex fields as \begin{equation} Z_\theta = \frac{n}{e} \; P(\rho) \qquad \hbox{and} \qquad \psi = v \; F(\rho) \, \end{equation} so that $F = 1$ corresponds to the vacuum value $\psi = v$, while the boundary conditions at $\rho = 0$ become \begin{equation} F(0) = 0 \,, \quad P(0) = 1 \,; \end{equation} the vacuum configuration in the far-field limit is \begin{equation} F(\infty) = 1 \,, \quad P(\infty) = 0 \,.
\end{equation} In terms of $P$ and $F$ the $Z_{\scriptscriptstyle M}$ field equations boil down to \begin{equation} \label{Peq} \frac{1}{BW^d} \left( \frac{ W^d \, P'}{ B} \right)' = \frac{\hat e^2 v^2 F^2 P}{B^2} \,, \end{equation} while the $\psi$ equation reduces to \begin{equation} \label{Feq} \frac{1}{BW^d} \Bigl( BW^d \; F' \Bigr)' = \frac{n^2 P^2 F}{B^2} + \lambda v^2\, F\left( F^2 - 1 \right) \,. \end{equation} Although closed-form solutions to these are not known, they are easily integrated numerically for given $B$ and $W$, and the results agree with standard flat-space results when $B = \rho$ and $W = 1$. See, for example, Fig.~\ref{fig:flatprofiles}. \subsubsection*{BPS special case} In the special case where $W = 1$ and $\hat e^2 = e^2/(1-\varepsilon^2) = 2 \lambda$ (and so $\hat \beta = 1$), eqs.~\pref{Peq} and \pref{Feq} are equivalent to the first-order equations,\footnote{The simplicity of these equations is understood in supersymmetric extensions of these models, since supersymmetry can require $e^2 = 2\lambda$ and the vortices in this case break only half of the theory's supersymmetries.} \begin{equation} \label{BPSeqs} B F' = n F P \qquad \hbox{and} \qquad \frac{n P'}{\hat e B} = \sqrt{\frac{\lambda}{2}} \; v^2 \left( F^2 - 1 \right) \,. \end{equation} We show later that $W = 1$ also solves the Einstein equations when $\hat e^2 = 2 \lambda$, and so this choice provides a consistent solution to all the field equations in this case. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{flatprofiles.pdf} \caption{A comparison of BPS and non-BPS vortex profiles on a flat background for differing values of $\hat \beta = \hat e^2/(2\lambda)$. The (blue) profile vanishing at the origin is the scalar profile $F$ and the (red) profile that decreases from the origin is the vector profile $P$.
To find the profiles in flat space we set $B = \rho$ and $W=1.$ The left plot uses $\hat \beta = 1$ and the right plot uses $\hat \beta = 0.1$, with this being the only parameter that controls vortex profiles in flat space. } \label{fig:flatprofiles} \end{figure} When eqs.~\pref{BPSeqs} and $W = 1$ hold, they also imply \begin{equation} L_{\rm kin} = \frac12 \, (\partial \psi)^2 = \frac{e^2}2 \, \psi^2 Z_{\scriptscriptstyle M} Z^{\scriptscriptstyle M} = L_{\rm gm}\,, \end{equation} and \begin{equation} \label{cLzeqVb} \check L_{\scriptscriptstyle Z} := \frac14 \, (1-\varepsilon^2) Z_{mn}Z^{mn} = \frac{\lambda}{4} ( \psi^2 - v^2 )^2 = V_b \,, \end{equation} which further imply that the vortex contributions to ${\cal Z}$ and ${\cal X}$ cancel out, \begin{equation} {\cal Z} = L_{\rm kin} - L_{\rm gm} = 0 \quad \hbox{and} \quad {\cal X}_{\rm loc} = V_b - \check L_{\scriptscriptstyle Z} = 0 \,, \end{equation} leaving only the bulk contribution to ${\cal X}$: \begin{equation} {\cal X} = \check {\cal X}_{\scriptscriptstyle B} = \Lambda - \check L_{\scriptscriptstyle A} \,. \end{equation} As can be seen from eq.~\pref{newEinstein}, it is the vanishing of ${\cal Z}$ that allows $W = 1$ to solve the Einstein equations. Finally, the vortex part of the action evaluates in this case to the simple result \begin{equation} {\cal T}_\varv := \frac{1}{\sqrt{- \check g } } \int {\hbox{d}}^2y \, \sqrt{-g}\Bigl[ L_\Psi + V_b + \check L_{\scriptscriptstyle Z} \Bigr] = 2\pi \int {\hbox{d}} \rho \, B \; \Bigl[ L_\Psi + V_b + \check L_{\scriptscriptstyle Z} \Bigr] = \pi n v^2 \,. \end{equation} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{profilecomparison.pdf} \caption{A comparison of the profiles $F$ and $P$ for the vortex in flat space (dashed curves) and the full gravitating vortex solution (solid lines). 
For each case the (blue) profile that vanishes at the origin is the scalar profile $F$ and the (red) profile that decreases from the origin is the vector profile $P$. The parameters used in the plot are $d=4$, $\varepsilon = 0.3$, $\beta = 3$, $Q = 0.01 \, e v^2$, $\Lambda = Q^2/2$, $\kappa v = 0.6$ and $\check R = 0$ with the same values of $\beta$ and $\varepsilon$ chosen for the non-gravitating solution.} \label{fig:gravprofiles} \end{figure} \subsubsection*{Bulk equations} To obtain a full solution for a vortex coupled to gravity we must also solve the bulk field equations for $W$, $B$ and $\check A_{\rho\theta}$. The simplest of these to solve is the Maxwell equation, \pref{Acheckeom}, whose solution is \begin{equation} \label{Acheckeomsoln} \check A_{\rho\theta} = \frac{QB}{W^d} \,, \end{equation} where $Q$ is an integration constant. This enters into the Einstein equations, \pref{avR4-v1}, \pref{avR2} and \pref{newEinstein}, through the combination $\check L_{\scriptscriptstyle A} = \frac12 (Q/W^d)^2$. These can be numerically integrated out from $\rho = 0$, starting with the boundary conditions \pref{WBBC} (for which we choose $W_0 = 1$), \pref{psiBC} and \pref{ZBC}, provided that the curvature scalar, $\check R$, for the metric $\check g_{\mu\nu}$ is also specified. Once this is done all field values and their derivatives are completely determined by the field equations for $\rho > 0$ and one such solution is shown in Fig.~\ref{fig:bulksoln}. As we shall see, many useful quantities far from the vortex depend only on certain integrals over the vortex profiles, rather than their detailed form. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{bulksolution.pdf} \caption{These plots illustrate the bulk geometry for BPS vortices ($\beta = 1$) with parameters $d=4$, $\varepsilon = 0$, $\beta = \hat \beta = 1$, $Q = 0.05 \, ev^2$, $\Lambda = Q^2/2$ and $\kappa v = 0.3$ (which also imply $\check R = 0$). 
In the top left plot, the solution for $B$ is plotted (in blue) below the (red) metric function $B_{\rm sphere}$ of a sphere with radius $r_{\scriptscriptstyle B} = (200/3) r_\varv .$ The presence of a vortex does not change the size of the bulk (since the full solution for $B$ still vanishes at $\rho = \pi r_{\scriptscriptstyle B} $) and the metric function $B$ is still approximately spherical with $B \approx 0.95 \times B_{\rm sphere}$ for these parameters. The top right plot shows that when $\beta = 1$ and $\Lambda = Q^2 / 2$, a constant warp factor solves the field equations. The bottom left plot shows that the derivative of the metric function $B' \approx 0.95$ outside of the vortex core, at $\rho \roughly> 4 r_\varv$. The bottom right plot shows that $B' \approx -0.95$ at the pole which lies opposite to the vortex core, indicating the presence of a conical singularity at that pole.} \label{fig:bulksoln} \end{figure} \subsection{Integral relations} Our main interest in later sections is in how the vortices affect the bulk within which they reside, and this is governed by the boundary conditions they imply for the metric --- {\em i.e.} on quantities like $W$, $W'$, $B$, $B'$ and $\check R$ --- as well as for other bulk fields exterior to, but nearby, the vortex. In particular, simple integral expressions exist for derivatives of bulk fields --- {\em e.g.} $W'$ and $B'$ --- in this near-vortex limit, and we pause here to quote explicit expressions for these. \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{cap.pdf} \caption{An illustration of the matching done at $\rho = \rho_\varv.$ The light grey surface is a cartoon of the bulk geometry. The bump on top of the surface represents the localized modifications to the approximately spherical bulk geometry that arise due to the vortex. 
The dark ring represents the circle at $\rho = \rho_\varv$ that lies sufficiently far outside the vortex that its fields are exponentially suppressed, but close enough to the vortex so that its proper distance from the pole is still ${\cal O}(r_\varv).$} \label{fig:cap} \end{figure} For instance, imagine integrating the Einstein equation, \pref{avR4-v1}, over the transverse dimensions out to a proper distance $\rho = \rho_\varv \simeq {\cal O}(r_\varv)$ outside of (but not too far from) the vortex (see Figure \ref{fig:cap}). This gives \begin{equation} \label{intWval} d \,B W^d \; \partial_\rho \ln W \Bigr|_{\rho=\rho_\varv} = \left[ B \Bigl( W^d \Bigr)' \right]_{\rho=\rho_\varv} = - \frac{1}{2\pi} \Bigl\langle 2\kappa^2 {\cal X} + W^{-2} \check R \Bigr\rangle_{\varv} \,, \end{equation} where we introduce the notation \begin{equation} \left\langle {\cal O} \right\rangle_\varv := \frac{1}{\sqrt{- \check g } } \int\limits_{X_\varv} {\hbox{d}}^2x \, \sqrt{-g} \; {\cal O} = 2\pi \int_0^{\rho_\varv} {\hbox{d}} \rho \, B W^d \; {\cal O} \,, \end{equation} and use the boundary condition $W'(0) = 0$ at the vortex centre. This identifies explicitly the specific combination of vortex quantities relevant for specifying $W'$ just outside the vortex. A second integral relation of this type starts instead with the $(\theta\theta)$ component of the trace-reversed Einstein equation: ${{\cal R}^\theta}_\theta = - \kappa^2 {X^\theta}_\theta$, or \pref{XthetathetaeEinstein}, which integrates to give \begin{equation} \label{BprmWdint} \Bigl( B' W^d \Bigr)_{\rho = \rho_\varv} = 1 - \frac{\kappa^2}{2\pi} \left\langle \varrho - \left( 1 - \frac{2}{d} \right) {\cal X} - {\cal Z} \right\rangle_{\varv} \,, \end{equation} given the boundary condition $B'W^d \to 1$ as $\rho \to 0$. This can be used to infer the implications of the vortex profiles on $B'$ just outside the vortex.
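As a simple consistency check of \pref{BprmWdint}, consider the weak-gravity limit in which $W \simeq 1$ and the averages $\langle {\cal X} \rangle_\varv$ and $\langle {\cal Z} \rangle_\varv$ are negligible compared with $\langle \varrho \rangle_\varv$. Then \pref{BprmWdint} reduces to
\begin{equation}
B'(\rho_\varv) \simeq 1 - \frac{\kappa^2}{2\pi} \left\langle \varrho \right\rangle_\varv \,,
\end{equation}
so the conical defect angle just outside the vortex, $\delta := 2\pi \left[ 1 - B'(\rho_\varv) \right] \simeq \kappa^2 \left\langle \varrho \right\rangle_\varv$, reproduces the standard proportionality between the defect angle and the integrated energy density (tension) of a small codimension-two source.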
For many purposes our interest is in the order of magnitude of the integrals on the right-hand sides of expressions like \pref{intWval} or \pref{BprmWdint} and these sometimes contain a surprise. In particular, naively one might think the integrals on the right-hand sides would generically be order $v^2$ and so would contribute at order $\kappa^2 v^2$ to the quantities on the left-hand sides. Although this is true for $\varrho$, the surprise is that the quantities $\langle {\cal X} \rangle_\varv$ and $\langle {\cal Z} \rangle_\varv$ can be much smaller than this, being suppressed by powers of $r_\varv/r_{\scriptscriptstyle B}$ when the vortex is much smaller than the transverse space, $r_\varv \ll r_{\scriptscriptstyle B}$, and this has important implications for how vortices influence their surroundings. One way to understand this suppression is to evaluate explicitly the suppressed quantities in the flat-space limit, where it can be shown (for instance) that the vortex solutions described above imply $\langle {\cal X} \rangle_{\varv\,{\rm flat}} = 0$. Appendix \ref{appsec:SEConservation} proves this as a general consequence of stress-energy conservation (or hydrostatic equilibrium) within the vortex, with the vortex dynamically adjusting to ensure it is true. (Alternatively, the vanishing of $\langle {\cal X} \rangle_{\varv}$ on flat space can also be derived as a consequence of making the vortex action stationary with respect to rescalings of the size of the vortex.) More generally, for curved geometries we find numerically that in the generic situation when $r_\varv \sim r_{\scriptscriptstyle B}$ all terms in \pref{intWval} are similar in size and not particularly small, but this is no longer true once a hierarchy in scales exists between the size of the vortex and that of the transverse dimensions. 
In particular, as shown in Appendix \ref{appsec:SEConservation}, for solutions where $\check R$ is suppressed by $1/r_{\scriptscriptstyle B}^2$ the vortex also dynamically adjusts to suppress $\langle {\cal X} \rangle_\varv$ by powers of $1/r_{\scriptscriptstyle B}$. The next sections provide several other ways to understand this suppression, associated with the constraints imposed by the Bianchi identities on the left-hand sides of near-vortex boundary conditions. \subsection{Near-vortex asymptotics} \label{subsec:nearvortex} Because the vortex fields, $\delta \psi = \psi - v$ and $Z_{\scriptscriptstyle M}$, fall off exponentially they can be neglected to exponential accuracy `outside' of the vortex; {\em i.e.} at distances $\rho_\varv \roughly> r_\varv \sim 1/ev$. The forms of the metric functions $B$ and $W$ are then governed by the Einstein equations with only bulk-field stress-energy. This section describes the approximate form taken by these bulk solutions outside of the vortex sources, but not far outside (in units of the bulk curvature radius, say). We do so in two steps. We first solve for the bulk fields external to an isolated vortex in an infinite transverse space. We then find approximate asymptotic solutions for vortices sitting within compact spaces, under the assumption that the compact space is much larger than the transverse vortex size and that the region of interest for the solutions is very close to the vortex: $r_\varv \roughly< \rho \ll r_{\scriptscriptstyle B}$. \subsubsection*{Infinite transverse space} We start with solutions where the space transverse to the vortex is not compact, since these should share many features of the bulk sufficiently close to a vortex residing within a large but finite transverse space. Concretely, the merit of seeking non-compact solutions is that the boundary conditions at infinity are fixed and determine many of the bulk integration constants.
As seen in \S\ref{section:interactions}, compact spaces are more complicated from this point of view because these constants are instead set dynamically by the adjustment of the various vortices to each other's presence. But those near-vortex boundary conditions that are dictated by vortex properties should not care about distant details of the vortex environment, and so can be explored most simply within an isolated-vortex framework. To find isolated solutions we first write the Einstein equations in the exterior region $\rho > \rho_\varv$ where the vortex fields can be neglected: \begin{equation} \label{einsteinthetatheta} \frac{( W^d B' )'}{W^d B} = -\kappa^2 \left[ \left( \frac{d-1}{d} \right) Q^2 W^{-2d} + \frac{2 \Lambda}{d} \right] \,, \end{equation} and \begin{equation} \label{einsteindiff} d B \left( \frac{W'}{B} \right)' = 0 \,, \end{equation} and \begin{equation} \label{einsteinrhorho} W^{-2} \check R + \frac{( (W^d)' B)'}{W^d B} = \kappa^2 \left( Q^2 W^{-2d } - 2 \Lambda \right) \,. \end{equation} In this section (and only this section) we assume the transverse space does not close off, so $B(\rho) > 0$ strictly for all values $\rho > \rho_\varv$. Integrating \pref{einsteindiff} from $\rho_\varv$ to arbitrary $\rho > \rho_\varv$ gives \begin{equation} \label{kcoeff} \frac{W'}{B} = \frac{W'_\varv}{B_\varv} = k \,, \end{equation} where $k$ is an integration constant and a $\varv$ subscript indicates that the bulk field is evaluated at $\rho = \rho_\varv$. Evaluating this at infinity tells us $k = 0$ if we demand $W'$ vanishes there. More generally, if $k \ne 0$ and $B$ monotonically increases then $|W|$ must diverge at infinity, even if $B$ (and so $W'$) is bounded. Since $B > 0$, the case $k < 0$ is excluded because it would imply that $W$ vanishes at finite $\rho > \rho_\varv$. If we also exclude $W \to \infty$ as $\rho \to \infty$ then we must have $k = 0$, for which integrating eq.~\pref{kcoeff} implies $W = W_\varv$ is constant everywhere outside the vortex.
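The claims made above for $k \ne 0$ can be made explicit with a simple bound. Integrating $W' = k B$ and using $B(\rho) \ge B_\varv > 0$ for monotonically increasing $B$ gives
\begin{equation}
W(\rho) = W_\varv + k \int_{\rho_\varv}^{\rho} {\hbox{d}} \tilde\rho \; B(\tilde\rho) \ge W_\varv + k \, B_\varv \left( \rho - \rho_\varv \right) \qquad \hbox{when} \quad k > 0 \,,
\end{equation}
so $W$ grows at least linearly at infinity, while for $k < 0$ the same integral gives $W(\rho) \le W_\varv - |k| \, B_\varv ( \rho - \rho_\varv )$, forcing $W$ to vanish at a finite distance $\rho - \rho_\varv \le W_\varv / ( |k| B_\varv )$. Only $k = 0$ escapes both conclusions.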
Using this result in eq.~\pref{einsteinthetatheta} gives \begin{equation} \frac{B''}{B} = -\kappa^2 \left[ \left( \frac{d-1}{d} \right) Q^2 W_\varv^{-2d} + \frac{2 \Lambda}{d} \right] =: Y_d \,, \end{equation} where the constancy of the right hand side (which we call $Y_d$) implies the transverse dimensions have constant curvature. Solving this for $B$ in the region $\rho > \rho_\varv$ gives elementary solutions whose properties depend on the sign of $Y_d$ : \begin{itemize} \item $Y_d = - 1/\ell^2 < 0$: This implies $B$ is a linear combination of $\sin(\rho/\ell)$ and $\cos(\rho/\ell)$ and so eventually passes through zero to pinch off at some $r_\star > \rho_\varv$. This gives a compact transverse space, which we discard in this section. % \item $Y_d = + 1/\ell^2 > 0:$ This implies $B$ is a linear combination of $\sinh(\rho/\ell)$ and $\cosh(\rho/\ell)$ and so increases exponentially for large $\rho$. This corresponds to a vortex sitting within an infinite-volume transverse hyperbolic space with curvature radius $\ell$. % \item $Y_d = 0:$ This forces $B'' = 0$ which gives the flat solution $B = B_\varv + B_\varv'(\rho - \rho_\varv)$. \end{itemize} A flat transverse space is found by tuning the bulk cosmological constant such that $Y_d = 0$, and so \begin{equation} \label{Lambdachoice} \Lambda = - \frac{1}{2} \left( d-1 \right) Q^2 W_\varv^{-2d} < 0 \,. \end{equation} Having $\Lambda$ more negative than this gives a hyperbolic transverse space and more positive gives a compact transverse space. 
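For later reference, the exterior solutions in these three cases are easily written in closed form in terms of the data $B_\varv := B(\rho_\varv)$ and $B'_\varv := B'(\rho_\varv)$:
\begin{eqnarray}
Y_d = -\frac{1}{\ell^2} &:& \quad B(\rho) = B_\varv \cos \left( \frac{\rho - \rho_\varv}{\ell} \right) + \ell \, B'_\varv \sin \left( \frac{\rho - \rho_\varv}{\ell} \right) \,, \nonumber\\
Y_d = +\frac{1}{\ell^2} &:& \quad B(\rho) = B_\varv \cosh \left( \frac{\rho - \rho_\varv}{\ell} \right) + \ell \, B'_\varv \sinh \left( \frac{\rho - \rho_\varv}{\ell} \right) \,, \\
Y_d = 0 &:& \quad B(\rho) = B_\varv + B'_\varv \left( \rho - \rho_\varv \right) \,, \nonumber
\end{eqnarray}
with the first of these pinching off, $B(r_\star) = 0$, at the point where $\tan \left[ ( r_\star - \rho_\varv )/\ell \right] = - B_\varv / ( \ell \, B'_\varv )$.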
Evaluating \pref{einsteinrhorho} at the position $\rho = \rho_\varv$, using \pref{Lambdachoice} and constant $W = W_\varv$ then gives \begin{equation} \label{Rsolved} W_\varv^{-2} \check R = \kappa^2 \left( Q^2 W_\varv^{-2d} - 2 \Lambda \right) = d \kappa^2 Q^2 W^{-2d}_\varv = - 2 \kappa^2 \Lambda \left( \frac{d}{d-1} \right) > 0 \,, \end{equation} which in our curvature conventions represents a strictly anti-de Sitter (AdS) geometry for the directions parallel to the vortex whenever the transverse directions are noncompact. As argued in more detail in \S\ref{subsec:Exactsolns}, in general the 2D curvature scale, $R = \pm 2/\ell^2$, and the $d$-dimensional scale, $\check R$, are independent functions of the two dimensionful parameters: $1/r_\Lambda^2 \propto \kappa^2\Lambda$ and $1/r_{\scriptscriptstyle A}^2 \propto \kappa^2 Q^2$. Of special interest is the one-parameter subspace of configurations for which either $R$ or $\check R$ vanishes, and the above shows that the case $\check R = 0$ necessarily involves finite-volume transverse dimensions while flat transverse space ($R = 0$) implies an AdS on-vortex geometry, so the two subspaces intersect only as both $r_\Lambda$ and $r_{\scriptscriptstyle A}$ tend to infinity ({\em i.e.} for $\Lambda,\, Q \to 0$). It is the constancy of $W = W_\varv$ in the bulk for isolated vortices that reflects something general about vortices: that $W' \to 0$ in the near-vortex limit. Indeed, although \S\ref{section:Rugbyballgeom} gives explicit compact solutions with $W' \ne 0$ in the bulk, in all cases $W'$ approaches zero in the immediate vicinity of the small source vortices. This carries implications for the integrated vortex stress-energy, such as $\langle {\cal X} \rangle_\varv$.
Using $W'_\varv = 0$ in \pref{intWval} allows us to write \begin{equation} 2 \kappa^2 {\langle} {\cal X} {\rangle}_\varv = - \check R \, {\langle} W^{-2} {\rangle}_\varv \,, \end{equation} which is useful because it shows that $\langle {\cal X} \rangle_\varv$ is very generally suppressed by two powers of a curvature scale: since $\langle W^{-2} \rangle_\varv = 2\pi \int_0^{\rho_\varv} {\hbox{d}} \rho \, B \, W^{d-2} \sim \pi \rho_\varv^2$ when $W \simeq 1$ and $B \simeq \rho$ inside the vortex, the right-hand side is of order $\rho_\varv^2/\check\ell^2 \ll 1$ if $\check R \sim 1/\check\ell^2 \ll 1/\rho_\varv^2$. We expect this result also to hold for vortices situated within compact transverse dimensions. \subsubsection*{Asymptotic forms} We next return to the case of real interest: small vortices situated within a much larger (but compact) transverse space. In general, the presence of a vortex introduces apparent singularities into the bulk geometry whose properties are dictated by those of the vortex. These singularities are only apparent because they are smoothed out once the interior structure of the vortex is included, since the geometry then responds to the stress-energy of the vortex interior. This section characterizes these singularities more precisely with a view to relating them to the properties of the source vortices. One way to characterize the position of the apparent singularity is to define it to occur at the point where the expression for $B_{\rm bulk}(\rho)$ obtained using only the bulk field equations would vanish: $B_{\rm bulk}(\rho_\star) = 0$ (see Figure \ref{fig:rhostar}). Here $\rho_\star$ is of order the vortex size, and need not occur precisely at $\rho = 0$ (despite the boundary condition $B(0) = 0$ inside the vortex) because $B_{\rm bulk}$ is found by solving only the bulk field equations without the vortex fields. \begin{figure}[h] \centering \includegraphics[width=0.55\textwidth]{rhostar.pdf} \caption{A cartoon illustration of the definition of $\rho_\star$. The (blue) metric function $B$ increases linearly away from the origin with unit slope $B(\rho) \approx \rho$.
Outside of the vortex $\rho \roughly> \rho_\varv$ the solution is also linear in $\rho$ but with $B(\rho) \approx \alpha \rho$. The straight (red) line extrapolates this exterior behaviour to the point, $\rho = \rho_\star$, where the external $B$ would have vanished if the vortex had not intervened first. } \label{fig:rhostar} \end{figure} The nature of the singularity at $\rho = \rho_\star$ is most simply described by expanding the bulk field equations in powers of proper distance, $\hat\rho = \rho - \rho_\star$, away from the apparent singularity, \begin{eqnarray} \label{powerformsapp} W &=& W_0 \left( \frac{\hat\rho}{r_{\scriptscriptstyle B}} \right)^w + W_1 \left( \frac{\hat\rho}{r_{\scriptscriptstyle B}} \right)^{w+1} + W_2 \left( \frac{\hat\rho}{r_{\scriptscriptstyle B}} \right)^{w+2} + \cdots \,, \nonumber\\ B &=& B_0 \left( \frac{\hat\rho}{r_{\scriptscriptstyle B}} \right)^b + B_1 \left( \frac{\hat\rho}{r_{\scriptscriptstyle B}} \right)^{b+1} + B_2 \left( \frac{\hat\rho}{r_{\scriptscriptstyle B}} \right)^{b+2} + \cdots \,. \end{eqnarray} where $r_{\scriptscriptstyle B}$ is again a scale of order the bulk curvature scale. It is the leading powers, $b$ and $w$, that describe the potential singularity, and their form is constrained by the bulk field equations. In particular, as shown in Appendix~\ref{KasnerApp}, the leading terms in the expansion of the Einstein equations around $\hat\rho = 0$ imply that $w$ and $b$ satisfy the two Kasner conditions\footnote{Our treatment here follows closely that of \cite{6DdS}, which in turn is based on the classic BKL treatment of near-singularity time-dependence \cite{BKL}. } \cite{Kasner}: \begin{equation} \label{branekasner} dw + b = 1 \qquad \text{and} \qquad d w^2 + b^2 = 1 \,. \end{equation} The second of these in turn implies $w$ and $b$ must reside within the intervals \begin{equation} | w | \le \frac{1}{\sqrt{d}} \qquad \text{and} \qquad |b| \le 1 \,.
\end{equation} Substituting $b = 1 - dw$ from the first of these conditions into the second gives $d \left( 1 + d \right) w^2 = 2 d w$, so the Kasner conditions have precisely two solutions: either $w = 0$ and $b = 1$ (as is true for flat-space solutions) or $w = 2/(1+d)$ and $b = (1-d)/(1+d)$. Since we know that a non-gravitating vortex lives in a geometry with $w = 0$ and $b = 1$, this is also the root we must use in the weak-gravity limit $( \kappa v )^2 \ll 1$. This describes a conical singularity if $B'(\rho = \rho_\star) \ne 1$. The field equations also dictate all but two of the remaining coefficients, $B_i$ and $W_i$, of the series solution. For instance eq.~\pref{newEinstein} applied outside the vortex implies $W' = k B$ for constant $k$. This implies $W_1 = 0$ and $W_2 = \frac{1}{2} \, k \, \alpha\,r_{\scriptscriptstyle B}^2$ and so on, giving \begin{eqnarray} \label{powerformsolnW'kB} W &=& W_\star + \left( \frac{k \alpha}{2} \right) \hat\rho^2 + \cdots \,, \nonumber\\ B &=& \alpha \, \hat \rho + \cdots \,, \end{eqnarray} where $W_\star = \lim_{\rho \to \rho_\star} W$. For any such singular point we therefore have the boundary conditions \begin{equation} \lim_{\rho \to \rho_\star} W^\prime = 0 \qquad \text{and} \qquad \lim_{\rho \to \rho_\star} B^\prime =: \alpha = {\rm const} \,, \end{equation} as is indeed found in detailed numerical integrations (see Figure \ref{fig:logprofiles}). \begin{figure}[t] \centering \includegraphics[width=\textwidth]{loglog.pdf} \caption{Log-log plots of the near-vortex geometry for parameters $d=4$, $\beta = 3$, $\varepsilon = 0.3$, $Q = 0.01 \, ev^2$, $\Lambda = Q^2/2$, $\kappa v = 0.6$ and $\check R = 0.$ The bulk in this case has a radius of $r_{\scriptscriptstyle B} = (500/3) r_\varv$. Outside of the vortex $\rho \roughly> r_\varv$ the geometry exhibits Kasner-like behaviour $B' \approx \alpha \ne 1$ and $W' \approx 0$.
} \label{fig:logprofiles} \end{figure} It is the slope $B_\star^\prime = \alpha$ and $W_\star$ (where we fix $W(0) = 1$ within the vortex and so are not free to again choose $W_\star = 1$ elsewhere) that convey the properties of the vortex to the bulk, and so should be governed by vortex properties, such as by boundary conditions like \pref{intWval} or \pref{BprmWdint}, rather than by bulk field equations. Notice that we expect both $W_\star - 1$ and $\alpha -1$ to be of order $\kappa^2 v^2$ (see below) and so if $W_2 = \frac{1}{2} k \, \alpha \, r_{\scriptscriptstyle B}^2$ is ${\cal O}(1)$ then we expect $k \simeq {\cal O}(1/r_{\scriptscriptstyle B}^2)$. This, in turn, implies that $W' \simeq {\cal O}(r_\varv / r_{\scriptscriptstyle B}^2)$ at any near-vortex point of order $r_\varv$ away from $\rho_\star$. For $r_\varv \ll r_{\scriptscriptstyle B}$ we expect $W$ to be approximately constant in the near-vortex region exterior to the vortex, up to ${\cal O}(\rho^2/r_{\scriptscriptstyle B}^2)$ corrections. We also expect $B'$ to be similarly constant, up to ${\cal O}(\rho/r_{\scriptscriptstyle B})$ corrections. These expectations are verified by explicit numerical integrations of the vortex/bulk profiles, such as in Fig.~\ref{fig:logprofiles}. The explicit relation between $\alpha$ and vortex properties is set by near-vortex boundary conditions, such as \pref{intWval} or \pref{BprmWdint}. Using the series expansion to evaluate $W$ and $B$ at $\rho = \rho_\varv$, \begin{equation} W = W_\varv + W'_\varv (\rho - \rho_\varv) + \cdots \qquad \hbox{and} \qquad B = B_\varv + \alpha \, ( \rho - \rho_\varv) + \cdots \,, \end{equation} where $W_\varv = W_\star + \frac{k \alpha}{2} (\rho_\varv - \rho_\star)^2 + \cdots$, while $W_\varv' = k B_\varv = k \alpha (\rho_\varv - \rho_\star) + \cdots$ and so on.
Inserting these into the left-hand side of eqs.~\pref{intWval} then gives \begin{equation} \label{intWvalmatch} d \,B_\varv W_\varv^{d-1} W_\varv' = dk W_\star^{d-1} \alpha^2 \hat\rho_\varv^2 + \cdots = - \frac{1}{2\pi} \Bigl\langle 2\kappa^2 {\cal X} + W^{-2} \check R \Bigr\rangle_{\varv} \,, \end{equation} which confirms that the vortex adjusts to make the right-hand side ${\cal O}(r_\varv^2/r_{\scriptscriptstyle B}^2)$. Similarly \pref{BprmWdint} becomes \begin{equation} \label{BprmWdintmatch} B_\varv' W_\varv^d = \alpha W_\star^d + \cdots = 1 - \frac{\kappa^2}{2\pi} \left\langle \varrho - \left( 1 - \frac{2}{d} \right) {\cal X} - {\cal Z} \right\rangle_{\varv} \,, \end{equation} and so on. \subsection{Effective description of a small vortex} \label{subsec:vortexEFT} If the vortex is much smaller than the transverse space then most of the details of its structure should not be important when computing how it interacts with its environment. Its dynamics should be well described by an effective $d$-dimensional action that captures its transverse structure in a multipole expansion. The lowest-derivative `brane' action of this type that depends on the nontrivial bulk fields outside the vortex is $S_b = \int {\hbox{d}}^d x \; {\cal L}_b$ with \begin{equation} \label{dSeff} {\cal L}_b = -\sqrt{-\gamma} \left[ T - \frac{\zeta}{d\,!} \, \epsilon^{\mu\nu\lambda\rho} \tilde A_{\mu\nu\lambda\rho} + \cdots \right]_{\rho = \rho_b} = -\sqrt{-\gamma} \left[ T + \frac{\zeta}{2} \, \epsilon^{mn} A_{mn} + \cdots \right]_{\rho = \rho_b} \,, \end{equation} where $\gamma$ denotes the determinant of the induced metric on the $d$-dimensional world-volume of the vortex centre of mass (which in the coordinates used here is simply $\gamma_{\mu\nu} = g_{\mu\nu}$ evaluated at the brane position). 
The tensor $\tilde A_{\mu\nu\lambda\rho} := \frac12 \, \epsilon_{\mu\nu\lambda\rho mn} A^{mn}$ is proportional to the $D$-dimensional Hodge dual of the bulk field strength; a quantity that can be invariantly integrated over the $d$-dimensional world-volume of the codimension-2 vortex. All unwritten terms covered by the ellipses in \pref{dSeff} involve two or more derivatives. The dimensionful effective parameters $T$ and $\zeta$ respectively represent the vortex's tension and localized flux, in a way we now make precise. To fix them in terms of the properties of the underlying vortex we perform a matching calculation; computing their effects on the bulk fields and comparing this to the parallel calculation using the full vortex solution. To do this we must be able to combine the $d$-dimensional action \pref{dSeff} with the $D$-dimensional action, $S_{\scriptscriptstyle B}$, for the bulk fields. To make this connection we promote \pref{dSeff} to a $D$-dimensional action by multiplying it by a `localization' function, $\delta(y)$, writing the $D$-dimensional lagrangian density as \begin{equation} \label{Leffdelta} {\cal L}_{\rm tot} = {\cal L}_{\scriptscriptstyle B}(g_{{\scriptscriptstyle M}{\scriptscriptstyle N}},A_{\scriptscriptstyle M}) + {\cal L}_b (g_{{\scriptscriptstyle M}{\scriptscriptstyle N}},A_{\scriptscriptstyle M}) \, \delta(y) \,. \end{equation} Here ${\cal L}_{\scriptscriptstyle B}$ is as given in \pref{SB} and $\delta(y)$ is a delta-function-like regularization function that has support only in a narrow region around the vortex position $\rho = \rho_b$, normalized so that $\int_{\scriptscriptstyle V} {\hbox{d}}^2 y \; \delta(y) = 1$. Although we can regard $\delta(y)$ as being independent of the $d$-dimensional metric, $g_{\mu\nu}$, and gauge field, $A_{\scriptscriptstyle M}$, we {\em cannot} consider it to be independent of the transverse metric, $g_{mn}$, because $\delta(y)$ must depend on the proper distance from the vortex. 
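For concreteness, in the coordinates used here a simple (purely illustrative) choice for such a regularization smears the brane over a small proper width $\epsilon$ in an axially symmetric way,
\begin{equation}
\delta(y) = \frac{f_\epsilon ( \rho - \rho_b )}{2\pi} \qquad \hbox{with} \qquad \int {\hbox{d}} \rho \; f_\epsilon ( \rho - \rho_b ) = 1 \,,
\end{equation}
so that $\int_{\scriptscriptstyle V} {\hbox{d}}^2 y \; \delta(y) = \int {\hbox{d}} \rho \, {\hbox{d}} \theta \; \delta(y) = 1$ as required, while the combination appearing in localized stress energies becomes $\delta(y)/\sqrt{g_2} = f_\epsilon ( \rho - \rho_b )/(2\pi B)$. Because $f_\epsilon$ is a function of proper distance, any such choice necessarily depends on the transverse metric, as claimed above.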
Much of the trick when matching with regularized delta-functions is to avoid questions that involve making assumptions about the detailed $g_{mn}$-dependence of the brane action. This is most awkward when calculating the brane's gravitational response, but we show below how to infer this response in a model-independent way that does not make ad-hoc assumptions about how $\delta(y)$ is regulated. \subsubsection*{Gauge-field matching} We start with the determination of the coupling $\zeta$ from the vortex's gauge-field response. To determine $\zeta$ we compute the contribution of $S_b$ to the gauge field equation, which becomes modified to \begin{equation} \partial_m \Bigl( \sqrt{-g} \; A^{mn} \Bigr) + \frac{\delta S_b}{\delta A_n} = \partial_m \Bigl[ \sqrt{-g} \left( A^{mn} + \zeta \, \epsilon^{mn} \, \frac{\delta(y)}{\sqrt{g_2}} \right) \Bigr] = 0 \,. \end{equation} This has solution \begin{equation} \label{Aeffeom} A_{\rho\theta} = \frac{QB}{W^d} - \zeta \, \epsilon_{\rho\theta} \, \frac{\delta(y)}{\sqrt{g_2}} = \frac{QB}{W^d} - \zeta \, \delta(y) \,, \end{equation} where $Q$ is an integration constant, and so --- when integrated over a transverse volume, $X_\varv$, completely containing the vortex --- gives the flux \begin{equation} \Phi_{\scriptscriptstyle A}(X_\varv) = \int\limits_{X_\varv} {\hbox{d}} A = Q \int\limits_{X_\varv} {\hbox{d}}^2 y \left( \frac{B}{W^d} \right) - \zeta \,. \end{equation} Comparing this to the vortex result in the full UV theory \begin{equation} \Phi_{\scriptscriptstyle A}(X_\varv) = \check \Phi_{\scriptscriptstyle A}(V) - \varepsilon \, \Phi_{\scriptscriptstyle Z}(V) = Q \int\limits_{X_\varv} {\hbox{d}}^2 y \left( \frac{B}{W^d} \right) + \frac{2\pi n \varepsilon}{e} \,, \end{equation} shows that $\zeta$ is given at the classical level by \begin{equation} \label{zetamatch} \zeta = -\frac{2\pi n \varepsilon}{e} \,. 
\end{equation} Notice that this argument does not make use of any detailed properties of $\delta(y)$ beyond its normalization and independence of $A_m$. \subsubsection*{Gauge-field back-reaction} Before repeating this argument to match the tension, $T$, and determine the gravitational response, we first pause to draw attention to an important subtlety. The subtlety arises because the presence of localized flux causes the gauge field to back-react in a way that contributes to the localized energy density, in a manner similar to the way the classical Coulomb field back-reacts to renormalize the mass of a point charged particle. To set up this discussion, notice that the effective lagrangian, \pref{dSeff}, can be regarded as the macroscopic contribution of the vortex part of the lagrangian, viewed as a function of the applied fields $A_m$ and $g_{\mu\nu}$. Consequently we expect the transverse average of \pref{Leffdelta} to give the same answer as the transverse average of the full lagrangian of the UV theory. Comparing the $A_m$-dependent and -independent terms of this average then suggests the identifications \begin{eqnarray} \label{Tzeta-UV} T \, W_b^d &=& \Bigl\langle L_{\rm kin} + V_b + L_{\rm gm} + L_{\scriptscriptstyle Z} \Bigr\rangle_\varv \nonumber\\ \hbox{and} \quad \frac{\zeta}{2} \, W_b^d \, \epsilon^{mn} A_{mn} &=& \Bigl\langle L_{\rm mix} \Bigr\rangle_\varv = \frac{\varepsilon}{2} \Bigl \langle Z^{mn} A_{mn} \Bigr \rangle_\varv \,, \end{eqnarray} where $W_b = W(\rho_b)$ is the warp factor evaluated at the brane position, and the factors $W_b^d$ come from the ratio $\sqrt{-\gamma}/\sqrt{- \check g}$. Now comes the main point. The existence of the localized piece in the solution, \pref{Aeffeom}, for $A_m$ has two related consequences in such a transverse average.
\begin{itemize} \item First, evaluating the localized-flux term at the solution to the $A_{mn}$ field equation, \pref{Aeffeom}, shows that the localized component of $A_m$ renormalizes the tension, % \begin{equation} \label{LbevalA} W_b^d \left( T + \frac{\zeta}{2} \, \epsilon^{mn}A_{mn} \right)_{\rho=\rho_b} = W_b^d \left[ T + \frac{\zeta \, Q}{W_b^d} - \zeta^2 \left( \frac{\delta(y)}{B} \right)_{\rho=\rho_b} \right] \,, \end{equation} % where this follows from taking $\delta(y)$ to be sufficiently peaked so that its integral can be treated like that of a Dirac delta-function. Notice that the last term in the last equality is singular as the vortex size goes to zero, requiring a regularization in order to be unambiguous. Such divergences are common for back-reacting objects with codimension-2 or higher, and are ultimately dealt with by renormalizing the action \pref{dSeff} even at the classical level \cite{ClassRenorm}. The $\zeta$-dependent part of this is to be compared with % \begin{equation} \Bigl\langle L_{\rm mix} \Bigr\rangle_\varv = -\frac{2\pi \varepsilon Q \, n}{e} - 2\varepsilon^2 \Bigl\langle L_{\scriptscriptstyle Z} \Bigr\rangle_\varv\,, \end{equation} % which uses \pref{AZcheckA} and \pref{Acheckeomsoln} to evaluate the integration over $L_{\rm mix}$, and shows that the result agrees with \pref{LbevalA}, both on the value of the term linear in $Q$ (once the matching value, \pref{zetamatch}, for $\zeta$ is used) and by providing an explicit regularization of the singular ${\cal O}(\varepsilon^2)$ term. 
\item The second way the localized term in \pref{Aeffeom} contributes is by introducing a localized piece into the Maxwell action, $L_{\scriptscriptstyle A}$, which naively was not part of the vortex % \begin{eqnarray} \Bigl \langle L_{\scriptscriptstyle A} \Bigr \rangle_\varv &=& \frac{Q^2}{2} \int\limits_{X_\varv} {\hbox{d}}^2 y \left( \frac{B}{W^d} \right) - W_b^d \left[ \frac { \zeta \, Q }{W_b^d} - \frac{\zeta^2}{2} \, \left( \frac{\delta(y)}{B} \right)_{\rho=\rho_b} \right] \nonumber\\ &=& \Bigl \langle \check L_{\scriptscriptstyle A} \Bigr \rangle_\varv - W_b^d \left[ \frac { \zeta \, Q }{W_b^d} - \frac{\zeta^2}{2} \, \left( \frac{\delta(y)}{B} \right)_{\rho=\rho_b} \right] \,. \end{eqnarray} % This exactly cancels the linear dependence on $Q$ in \pref{LbevalA}, and partially cancels the localized renormalization of the tension. \end{itemize} We see from this that the localized part of the gauge response to the brane action contributes a localized piece to the bulk action (and energy density) that combines with the direct brane action in precisely the same way as happens microscopically from the mixing from $A_m$ to $\check A_m$ (see, for example, \pref{Lggemixed}). This suggests another useful notion of brane lagrangian, defined as the total localized contribution when $Q$ is fixed (rather than $A_m$), leading to \begin{equation} \label{endofhighlight} \check L_b := \check T \, W_b^d := \Bigl \langle L_{\rm kin} + V_b + L_{\rm gm} + \check L_{\scriptscriptstyle Z} \Bigr \rangle_\varv = W_b^d \left[ T - \frac{\zeta^2}{2} \left( \frac{\delta(y)}{B} \right)_{\rho=\rho_b} \right] \,. \end{equation} We see that the tension renormalizations described above --- associated with the $[\delta(y)/B]_{\rho_b}$ terms --- are the macroscopic analogs of the renormalization $e^2 \to \hat e^2 = e^2/(1 - \varepsilon^2)$ that occurs with the transition from $L_{\scriptscriptstyle Z}$ to $\check L_{\scriptscriptstyle Z}$ in the microscopic vortex picture.
Whether $L_b$ or $\check L_b$ is of interest depends on the physical question being asked. $L_b$ arises in deriving the brane contribution to the $A_m$ field equations, as above. But because it is $\check L_b$ that contains all of the brane-localized contributions to the energy, it plays a more important role in the brane's gravitational response (as we now explore in more detail). \subsubsection*{On-brane stress energy} With the above definitions of $L_b$ and $\check L_b$ in hand we now turn to the determination of the brane's local gravitational response. To determine the tension, $T$ (or $\check T$), we compute the $(\mu\nu)$ component of the Einstein equations (which we can do unambiguously because we know $\delta(y)$ does not depend on $g_{\mu\nu}$). We can do so using either $L_b$ or $\check L_b$ to define the brane action. Using $L_b$ leads to the following stress energy \begin{equation} \label{branestressenergy} T^{\mu\nu}_{(b)} = \frac{2}{\sqrt{-g}} \, \left( \frac{\delta S_b}{\delta g_{\mu\nu}} \right) = - W_b^d \left( T + \frac{\zeta}{2} \, \epsilon^{mn}A_{mn} \right) \frac{\delta(y)}{\sqrt{g_2}} \; g^{\mu\nu} \,, \end{equation} and so $\varrho$ becomes $\varrho = \Lambda + L_{\scriptscriptstyle A} + \varrho_b$ with \begin{equation} \label{rhobdelta} \varrho_b = W^d_b \left( T + \frac{\zeta}{2} \, \epsilon^{mn}A_{mn} \right) \frac{\delta(y)}{\sqrt{g_2}} \,. 
\end{equation} Alternatively, using $\check L_b$ leads to the stress energy \begin{equation} \check T^{\mu\nu}_{(b)} = \frac{2}{\sqrt{-g}} \, \left( \frac{\delta \check S_b}{\delta g_{\mu\nu}} \right) = - \check T \, W_b^d \; \frac{\delta(y)}{\sqrt{g_2}} \; g^{\mu\nu} \,, \end{equation} and so $\varrho$ becomes $\varrho = \Lambda + \check L_{\scriptscriptstyle A} + \check \varrho_b$ with \begin{equation} \label{rhobdeltacheck} \check \varrho_b = \check T \, W_b^d \; \frac{\delta(y)}{\sqrt{g_2}} = W_b^d \left[ T - \frac{\zeta^2}{2} \left( \frac{\delta(y)}{B} \right)_{\rho=\rho_b} \right] \frac{\delta(y)}{\sqrt{g_2}} \,. \end{equation} In either case the {\em total} energy density is the same, \begin{equation} \label{rhobevalA} \left\langle \varrho \right\rangle_\varv = \Bigl \langle \Lambda + L_{\scriptscriptstyle A} \Bigr \rangle_{\scriptscriptstyle V} + W_b^d \left( T + \frac{\zeta}{2} \, \epsilon^{mn}A_{mn} \right)_{\rho=\rho_b} = \Bigl \langle \Lambda + \check L_{\scriptscriptstyle A} \Bigr \rangle_\varv + W_b^d \, \check T \,, \end{equation} which is the analog of the microscopic statement \pref{StressEnergyVBsplit} \begin{equation} \bigl\langle \varrho \bigr\rangle_\varv = \Bigl\langle \Lambda + L_{\scriptscriptstyle A} + L_{\rm kin} + L_{\rm gm} + V_b + L_{\scriptscriptstyle Z} + L_{\rm mix} \Bigr\rangle_\varv = \Bigl\langle \Lambda + \check L_{\scriptscriptstyle A} + L_{\rm kin} + L_{\rm gm} + V_b + \check L_{\scriptscriptstyle Z} \Bigr\rangle_\varv \,. \end{equation} The advantage of using \pref{rhobdeltacheck} rather than \pref{rhobdelta} is that $\check\varrho_b$ contains {\em all} of the brane-localized stress energy, unlike $\varrho_b$ which misses the localized energy hidden in $L_{\scriptscriptstyle A}$. \subsubsection*{IR metric boundary conditions} A second important step in understanding the effective theory is to learn how the effective action modifies the field equations. 
So we restate here the general way of relating brane properties to near-brane derivatives of bulk fields \cite{BraneToBC}. The idea is to integrate the bulk field equations (including the brane sources) over a small region not much larger than (but totally including) the brane. For instance for a bulk scalar field, $\Phi$, coupled to a brane one might have the field equation \begin{equation} \Box \Phi + J_{\scriptscriptstyle B} + j_b \, \delta(y) = 0 \,, \end{equation} where $J_{\scriptscriptstyle B}$ is the contribution of bulk fields that remains smooth near the brane position and $j_b$ is the localized brane source. Integrating this over a tiny volume surrounding the brane and taking its size to zero --- {\em i.e.} $\rho_\varv/r_{\scriptscriptstyle B} \to 0$ --- then gives \begin{equation} \lim_{\rho_\varv \to 0} \Bigl \langle \Box \Phi \Bigr \rangle_\varv = 2\pi \lim_{ \rho_\varv \to 0} B_\varv W_\varv^d \, \Phi_\varv' = - \lim_{\rho_\varv \to 0} \Bigl \langle J_{\scriptscriptstyle B} + j_b \, \delta (y) \Bigr\rangle_\varv = - j_b(\rho = \rho_b) \,, \end{equation} where the assumed smoothness of $J_{\scriptscriptstyle B}$ at the brane position ensures $\langle J_{\scriptscriptstyle B} \rangle_\varv \to 0$ in the limit $\rho_\varv \to 0$. The equality of the second and last terms of this expression gives the desired relation between the near-brane derivative of $\Phi$ and the properties $j_b$ of the brane action. 
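For the metrics used here, ${\hbox{d}} s^2 = W^2 \check g_{\mu\nu}\, {\hbox{d}} x^\mu {\hbox{d}} x^\nu + {\hbox{d}}\rho^2 + B^2 {\hbox{d}}\theta^2$, this works because the Laplacian of a radial profile is a total derivative, $\Box \Phi = (W^d B)^{-1}\bigl( W^d B \, \Phi' \bigr)'$, so its volume average collapses to the boundary term quoted above. A minimal symbolic check of this step, using arbitrary illustrative profile functions (not actual solutions of the field equations):

```python
import sympy as sp

rho, rho_v = sp.symbols('rho rho_v', positive=True)
d = 4  # illustrative choice of on-brane dimension

# Hypothetical smooth radial profiles, for illustration only:
W = 1 + rho**2 / 4            # warp factor
B = rho * (1 - rho**2 / 6)    # angular metric function, with B(0) = 0
Phi = 1 - rho**2              # radial scalar profile

# Laplacian of a radial function on W^2 g dx^2 + drho^2 + B^2 dtheta^2:
#   Box Phi = (W^d B)^(-1) d/drho ( W^d B Phi' )
measure = W**d * B
box_Phi = sp.diff(measure * sp.diff(Phi, rho), rho) / measure

# <Box Phi>_v : integral over the transverse volume (per unit brane volume)
lhs = 2 * sp.pi * sp.integrate(measure * box_Phi, (rho, 0, rho_v))

# Boundary term 2*pi*[W^d B Phi']_0^{rho_v}; the rho = 0 piece drops since B(0) = 0
rhs = 2 * sp.pi * (measure * sp.diff(Phi, rho)).subs(rho, rho_v)

assert sp.simplify(lhs - rhs) == 0
```

The same cancellation is what reduces the integrated Einstein equations below to the extrinsic-curvature boundary terms.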
Applying this logic to the Einstein equations, integrating over a tiny volume, $X_\varv$, completely enclosing a vortex gives \begin{equation} 0 = \left \langle \frac{g^{{\scriptscriptstyle M} {\scriptscriptstyle P}} }{\sqrt{-g}} \frac{\delta S}{ \delta g^{{\scriptscriptstyle N} {\scriptscriptstyle P}} } \right \rangle_\varv = \left \langle \frac{g^{{\scriptscriptstyle M} {\scriptscriptstyle P}} }{\sqrt{-g}} \frac{\delta S_{{\scriptscriptstyle E} {\scriptscriptstyle H}} }{ \delta g^{{\scriptscriptstyle N} {\scriptscriptstyle P}} } \right \rangle_\varv + \left \langle \frac{g^{{\scriptscriptstyle M} {\scriptscriptstyle P}} }{\sqrt{-g}} \frac{\delta S_{\scriptscriptstyle M}}{ \delta g^{{\scriptscriptstyle N} {\scriptscriptstyle P}} } \right \rangle_\varv \end{equation} where we have split the action into an Einstein-Hilbert part $S_{{\scriptscriptstyle E} {\scriptscriptstyle H}}$ and a matter part $S_{\scriptscriptstyle M}.$ This matter part can be further divided into a piece that is smooth at the brane position \begin{equation} \check S_{\scriptscriptstyle B} = - \int \mathrm{d}^{\scriptscriptstyle D} x \sqrt{-g} \left( \check L_{\scriptscriptstyle A} + \Lambda \right) \,, \end{equation} and one that contains {\em all} of the localized sources of stress energy, \begin{equation} \check S_b = - \int \mathrm{d}^{\scriptscriptstyle D} x \sqrt{-g} \left( \frac{ \delta(y)}{\sqrt{g_2}} \right) \; \check T = - \int \mathrm{d}^d x \sqrt{- \gamma} \; \check T \,. 
\end{equation} As above, for a sufficiently small volume, $X_\varv$, we need only keep the highest-derivative part of the Einstein-Hilbert term\footnote{Being careful to include the Gibbons-Hawking-York action \cite{GHY} on the boundary.}, since the remainder vanishes on integration in the limit $\rho_\varv \to 0.$ The $\check S_{\scriptscriptstyle B}$ term also vanishes in this limit, by construction, so the result becomes \begin{equation} \label{intermedmatching} 0 = \frac{1}{2\kappa^2} \int {\hbox{d}} \theta \left[ \sqrt{-g} \Bigl( K^i{}_j - K \, \delta^i{}_j \Bigr) \right]^{\rho_\varv}_0 + \sqrt{-\check g} \left\langle \frac{g^{i k} }{\sqrt{-g} } \frac{\delta \check S_b }{\delta g^{j k} }\right\rangle_\varv \, \quad \text{as} \quad \rho_\varv \to 0 \,, \end{equation} where $i$ and $j$ run over all coordinates except the radial direction, $\rho$, and $K^{ij}$ is the extrinsic curvature tensor for the surfaces of constant $\rho$. To proceed, we assume that the derivative of the brane action is also localized such that its integral can be replaced with a quantity evaluated at the brane position \begin{equation} \left\langle \frac{g^{{\scriptscriptstyle N} {\scriptscriptstyle P}} }{\sqrt{-g} } \frac{\delta \check S_b }{\delta g^{{\scriptscriptstyle M} {\scriptscriptstyle P}} }\right\rangle_\varv = \int\limits_{X_\varv} \mathrm{d}^2 y \, \left( \frac{g^{{\scriptscriptstyle N} {\scriptscriptstyle P}}}{\sqrt{- \check g} } \frac{\delta \check S_b}{\delta g^{{\scriptscriptstyle M} {\scriptscriptstyle P}}} \right) = \left( \frac{ g^{{\scriptscriptstyle N} {\scriptscriptstyle P} } }{\sqrt{- \check g} } \frac{ \delta \check S_b }{\delta g_b^{{\scriptscriptstyle M} {\scriptscriptstyle P}} } \right)_{\rho = \rho_b} \,.
\end{equation} The $b$ subscript in the functional derivative of the last term denotes that it is taken at the fixed point where $\delta(y)$ is localized, and so it contains no dependence on the bulk coordinates, and in particular no factors of $\delta(y)$. For example its $\mu\nu$ components read \begin{equation} \label{branemunuderiv} \frac{ \delta \check S_b }{\delta g^{\mu \nu}_b } = - \left. \frac{1}{2} \sqrt{-\gamma} \; \check T \, g_{\mu \nu} \right|_{\rho = \rho_b} \,. \end{equation} However, at this point we remain agnostic about how to calculate the off-brane component $\delta \check S_b / \delta g_{\theta \theta}$. Returning to the matching condition \pref{intermedmatching} we have the final result \begin{equation} \label{diffmatching} \lim_{\rho_\varv \to 0} \int {\hbox{d}} \theta \left[ \sqrt{-g} \Bigl( K^i{}_j - K \, \delta^i{}_j \Bigr) \right]^{\rho_\varv}_0 = - 2 \kappa^2 \left( g^{ik }\frac{ \delta \check S_b }{\delta g_b^{jk} } \right)_{\rho = \rho_b} \,, \end{equation} which can be explicitly evaluated for the geometries of interest. \subsubsection*{Brane stress-energies} We now turn to the determination of the off-brane components of the brane stress-energy. We can learn these directly by computing the left hand side of \pref{diffmatching} in the UV theory, before taking the limit $\rho_\varv \to 0$. We will first do this very explicitly for the $(\mu \nu)$ components of the brane stress-energy, and then proceed to deduce the off-brane components of the brane stress-energy. \medskip\noindent{\em The $(\mu \nu)$ stress-energy} \medskip\noindent For the metric ansatz ${\hbox{d}} s^2 = W^2(\rho) \, \check g_{\mu\nu} \, {\hbox{d}} x^\mu {\hbox{d}} x^\nu + {\hbox{d}} \rho^2 + B^2(\rho) \, {\hbox{d}} \theta^2$, the extrinsic curvature evaluates to $K_{ij} = \frac12 \, g_{ij}'$. This gives \begin{equation} K^{\mu\nu} = \frac{W'}{W} \, g^{\mu\nu} \qquad \hbox{and} \qquad K^{\theta\theta} = \frac{B'}{B} \, g^{\theta\theta} \,. 
\end{equation} The trace of the $(\mu\nu)$ components of the condition \pref{diffmatching} therefore evaluates to \begin{equation} \label{metricmatchingmunu} \lim_{\rho_\varv \to 0} \left\{ W_\varv^d B_\varv \left[ (1-d) \left( \frac{W'_\varv}{W_\varv} \right) - \frac{B'_\varv}{B_\varv} \right] + 1 \right\} = - \frac{\kappa^2/(\pi d)}{ \sqrt{-\check g} } \left( g^{\mu\nu} \, \frac{\delta \check S_b}{\delta g_b^{\mu\nu}} \right)_{\rho= \rho_b} = \frac{\kappa^2 \, W_b^d \, \check T}{2\pi} \,, \end{equation} for which the limit on the left-hand side can be evaluated using the limit $B_\varv \to 0$ as $\rho_\varv \to 0$. The result shows that it is the renormalized tension, $\check T$, that determines the defect angle just outside the vortex, \begin{equation} \label{defectmatching} 1 - \alpha = \frac{\kappa^2 \,W_b^d \, \check T}{2 \pi} \,. \end{equation} This is the macroscopic analog of \pref{BprmWdint}. \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{defectangle.pdf} \caption{A plot of defect angle matching in the region exterior but near to the vortex core. The solid (blue) lines represent the metric function $W^4 B'$ and the dotted (red) lines represent $1 - \kappa^2 \check T / 2 \pi$ computed independently for different values of $\varepsilon = \{-0.2, 0.2, 0.4, 0.6\}$ with the other parameters fixed at $d=4$, $\beta = 3$, $Q = 1.25 \times 10^{-4} \, ev^2$, $\Lambda = Q^2/2$, $\kappa v = 0.5$ and $\check R = 0$. The size of the defect angle, $B^\prime_\varv \approx \alpha$, matches very well with $1 - \kappa^2 \check T / 2 \pi$ at $\rho = \rho_\varv \approx 4 r_\varv$. The solutions for $W^4 B'$ overlap perfectly when $\varepsilon = \pm 0.2$, as indicated by the dashes in the line. This illustrates that the defect angle is controlled by $\check T$, and the linear dependence of the defect angle on $\varepsilon$ is cancelled.
} \label{fig:defect} \end{figure} \medskip\noindent{\em The $(\theta\theta)$ stress-energy} \medskip\noindent The $(\theta\theta)$ component of the metric matching condition, \pref{diffmatching}, evaluates to \begin{equation} \label{metricmatchingthth} \lim_{\rho_\varv \to 0} W^d_\varv B_\varv \left( \frac{W'_\varv}{W_\varv} \right) = \frac{\kappa^2/(\pi d)}{ \sqrt{-\check g} } \left( g^{\theta \theta} \, \frac{\delta \check S_b}{\delta g_b^{\theta \theta}} \right)_{\rho= \rho_b} \,, \end{equation} but at first sight this is less useful because the unknown $g_{mn}$ dependence of $\delta(y)$ precludes evaluating its right-hand side. This problem can be side-stepped by using the constraint, eq.~\pref{constraint}, evaluated at $\rho = \rho_\varv$ (just outside the brane or vortex) to evaluate $W_\varv'/W_\varv = {\cal O}(\rho_\varv/r_{\scriptscriptstyle B}^2)$ (and so also the left-hand side of \pref{metricmatchingthth}) in terms of the quantities $B_\varv'/B_\varv = 1/\rho_\varv + \cdots$, $\check R/W_\varv^2$ and ${\cal X}_{\scriptscriptstyle B}$. Once this is done we instead use the $(\theta\theta)$ matching condition to infer the $(\theta\theta)$ component of the vortex stress energy.
Solving the constraint, \pref{constraint}, for $W'/W$ at $\rho_\varv$ (just outside the vortex, where ${\cal Z} = 0$ and ${\cal X} = {\cal X}_{\scriptscriptstyle B} = \check {\cal X}_{\scriptscriptstyle B}$) gives \begin{eqnarray} (d-1) \left( \frac{W'_\varv}{W_\varv} \right) &=& - \frac{B'_\varv}{B_\varv} + \sqrt{ \left( \frac{B'_\varv}{B_\varv} \right)^2 - \left( 1 - \frac{1}{d} \right) \left( 2 \kappa^2 {\cal X}_{\scriptscriptstyle B}(\rho_\varv) + \frac{\check R}{W_\varv^2} \right)} \nonumber\\ &\simeq& - \frac{1}{2} \left( 1 - \frac{1}{d} \right) \rho_\varv \left( 2 \kappa^2 {\cal X}_{\scriptscriptstyle B}(\rho_\varv) + \frac{\check R}{W_\varv^2} \right) + \cdots \,, \end{eqnarray} where the root is chosen such that $W'_\varv/W_\varv$ vanishes if both $\check R$ and ${\cal X}_{\scriptscriptstyle B}(\rho_\varv)$ vanish. With this expression we see that $B_\varv W_\varv^d (W_\varv'/W_\varv) \to 0$ as $\rho_\varv \to 0$, and so \pref{metricmatchingthth} then shows that \begin{equation} \label{ththzero} \left( g^{\theta \theta} \, \frac{\delta \check S_b}{\delta g_b^{\theta \theta}} \right)_{\rho= \rho_b} = 0 \,, \end{equation} for any value of $T$ (or $\check T$) and $\zeta$. Notice that eq.~\pref{ththzero} is precisely what is needed to ensure $W_b' \to 0$ at the brane, as required by the Kasner equations \pref{branekasner} that govern the near-vortex limit of the bulk. Also notice that \pref{ththzero} would be counter-intuitive if instead one were to evaluate directly $\delta S_b/\delta g_{\theta\theta}$ by assuming $\delta(y)$ was metric independent and using the explicit metrics that appear within $\epsilon^{mn}A_{mn}$. What is missed by this type of naive calculation is the existence of the localized energy coming from the Maxwell action, $L_{\scriptscriptstyle A}$, and its cancellation of the terms linear in $\zeta$ when converting $S_b$ to $\check S_b$. 
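The small-$\rho_\varv$ expansion in the second line above can be checked symbolically. Writing $B'_\varv/B_\varv = 1/\rho_\varv$ and abbreviating $b := \left( 1 - 1/d \right)\left( 2\kappa^2 {\cal X}_{\scriptscriptstyle B} + \check R/W_\varv^2 \right)$, held fixed as $\rho_\varv \to 0$, a short sympy sketch:

```python
import sympy as sp

rho, b = sp.symbols('rho b', positive=True)

# Constraint solution just outside the vortex, with B'/B = 1/rho:
#   (d-1) W'/W = -1/rho + sqrt(1/rho^2 - b)
# For rho > 0 this equals (sqrt(1 - b*rho^2) - 1)/rho, which is easier to expand.
expr = (sp.sqrt(1 - b * rho**2) - 1) / rho

series = sp.series(expr, rho, 0, 4).removeO()
# Leading term reproduces the quoted expansion: (d-1) W'/W ~ -(b/2) rho + ...
assert sp.expand(series - (-b * rho / 2 - b**2 * rho**3 / 8)) == 0
```

In particular the leading term is linear in $\rho_\varv$, which is what makes $B_\varv W_\varv^d \, (W'_\varv/W_\varv)$ vanish in the limit.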
\medskip\noindent{\em The $(\rho\rho)$ stress-energy} \medskip\noindent Although the $(\rho \rho)$ component of the extrinsic curvature tensor is not strictly well-defined, we can still consider the $(\rho \rho)$ components of the boundary condition in the following form \begin{equation} 0 = \lim_{\rho_\varv \to 0} \left \langle \frac{g^{\rho \rho} }{\sqrt{-g}} \frac{\delta S_{{\scriptscriptstyle E} {\scriptscriptstyle H}} }{ \delta g^{\rho \rho} } \right \rangle_\varv + \left( \frac{ g^{\rho \rho} }{\sqrt{- \check g} } \frac{ \delta \check S_b }{\delta g_b^{\rho \rho} } \right)_{\rho = \rho_b} \,. \end{equation} By definition, we have \begin{equation} \frac{g^{\rho \rho} }{\sqrt{-g}} \frac{\delta S_{{\scriptscriptstyle E} {\scriptscriptstyle H}} }{ \delta g^{\rho \rho} } = -\frac{1}{2 \kappa^2}{\cal G}^\rho{}_\rho \,. \end{equation} As noted in \pref{constraint}, this component of the Einstein tensor contains only first derivatives of the metric. It follows that \begin{equation} \lim_{\rho_\varv \to 0} \left \langle \frac{g^{\rho \rho} }{\sqrt{-g}} \frac{\delta S_{{\scriptscriptstyle E} {\scriptscriptstyle H}} }{ \delta g^{\rho \rho} } \right \rangle_\varv = -\frac{1}{2 \kappa^2} \left \langle {\cal G}^\rho{}_\rho \right \rangle_\varv = 0 \end{equation} since metric functions and their first derivatives are assumed to be smooth. In this simple way, we once again use the Hamiltonian constraint to conclude that this off-brane component of the brane stress energy vanishes \begin{equation} \left( \frac{ g^{\rho \rho} }{\sqrt{- \check g} } \frac{ \delta \check S_b }{\delta g_b^{\rho \rho} } \right)_{\rho = \rho_b} = 0 \,. \end{equation} So both off-brane components of brane stress-energy vanish in the limit $\rho_\varv \to 0$, and from this we infer that their sums and differences also vanish: \begin{equation} \label{XZbmatching} {\cal X}_b = {\cal Z}_b = 0 \,.
\end{equation} These results are the analog for the effective theory of the KK-suppression of $\langle {\cal X} \rangle_\varv$ in the UV theory once $r_\varv \ll r_{\scriptscriptstyle B}$. As a consequence in the effective theory \begin{equation} {\cal Z} = 0 \qquad \hbox{and} \qquad {\cal X} = \check {\cal X}_{\scriptscriptstyle B} = \Lambda - \check L_{\scriptscriptstyle A} \,. \end{equation} \section{Compactification and interacting vortices} \label{section:interactions} We next turn to how several small vortices interact with one another and with their environment. In particular, if the flux in the transverse dimensions does not fall off quickly enough its gravitational field eventually dominates and drives $B(\rho)$ to zero for positive $\rho$, thereby pinching off and compactifying the two transverse dimensions. We explore in detail the situation of two small vortices situated at opposite sides of such a compact space. For this part of the discussion it is more convenient to use the effective description of the vortices as codimension-2 branes than to delve into their detailed vortex substructure, though we do this as well to see how the effective description captures the full theory's low-energy behaviour. As we saw above, in the effective limit the vortex properties are encoded in the near-brane derivatives of the bulk fields (through the defect angle and localized flux). So to discuss brane interactions it is useful to start with the general solution to the bulk field equations outside the vortices, since it is the trading of the integration constants of this solution for the near-brane boundary conditions that expresses how brane properties back-react onto their environs. 
\subsection{Integral relations} \label{subsec:integrals} Before delving into explicit solutions to the bulk field equations, it is worth first recording some exact results that can be obtained by applying the integrated forms of the field equations to the entire transverse space, and not just to a small region encompassing each vortex. In the UV theory these integrals simplify because all fields are everywhere smooth and so the integral over a total derivative vanishes. The same need not be true for the effective theory with point brane sources, since in principle fields can diverge at the brane locations. However we can then ask how the UV integral relations arise in the effective theory. For instance if eq.~\pref{intWval} is integrated over the entire compact transverse space then its left-hand side integrates to zero, leaving the following exact relation for $\check R$: \begin{equation} \label{intWvaltot} 0 = \Bigl\langle 2\kappa^2 {\cal X} \Bigr \rangle_{\rm tot} + \Bigl \langle W^{-2} \Bigr\rangle_{\rm tot} \check R = 2\kappa^2 \left( \langle {\cal X} \rangle_{\rm tot} + \frac{ \check R}{2\kappa^2_d} \right) \,. \end{equation} Here the last equality uses the relation between $\kappa^2$ and its $d$-dimensional counterpart. This shows that it is $\langle {\cal X} \rangle_{\rm tot}$ that ultimately determines the value of the on-brane curvature. Eq.~\pref{intWvaltot} is particularly powerful in the effective theory, for which we have seen that the branes satisfy $\check {\cal X}_b = 0$ and so ${\cal X} = \check {\cal X}_{\scriptscriptstyle B} = \Lambda - \check L_{\scriptscriptstyle A}$. 
In this case \pref{intWvaltot} shows us that it is really only through \begin{equation} \label{checkLAvac} \langle \check L_{\scriptscriptstyle A} \rangle_{\rm tot} = 2\pi Q^2 \int_{\rm tot} {\hbox{d}} \rho \,\left( \frac{B}{W^d} \right) \end{equation} that the brane properties determine the on-brane curvature, as they modify the functional form of $B$ and $W^d$ through boundary conditions, and $Q$ through flux quantization. A second exact integral relation comes from integrating the $(\theta\theta)$ component of the trace-reversed Einstein equation, eq.~\pref{XthetathetaeEinstein}, over the transverse dimensions. Again the left-hand side integrates to zero leaving the constraint \begin{equation} \left\langle \varrho - {\cal Z} - \left( 1 - \frac{2}{d} \right) {\cal X} \right \rangle_{\rm tot} = 0 \,. \end{equation} Combining this with \pref{intWvaltot} then implies \begin{equation} \label{tRvsvarrho} \check R = - \left( \frac{2d}{d-2} \right) \, \kappa_d^2 \Bigl \langle \varrho - {\cal Z} \Bigr \rangle_{\rm tot} \,. \end{equation} Notice that in $d$ dimensions Einstein's equations with a cosmological constant, $V_{\rm eff}$, have the form \begin{equation} \check R_{\mu\nu} - \frac12 \, \check R \, \check g_{\mu\nu} = \kappa_d^2 \, V_{\rm eff} \, \check g_{\mu\nu} \,, \end{equation} and so the scalar curvature satisfies \begin{equation} \label{tRvsVeff} \check R = - \left( \frac{2d}{d-2} \right) \, \kappa_d^2 V_{\rm eff} \,. \end{equation} Comparing this with eq.~\pref{tRvsvarrho} then gives a general expression for the effective $d$-dimensional cosmological constant \begin{equation} V_{\rm eff} = \Bigl \langle \varrho - {\cal Z} \Bigr \rangle_{\rm tot} \,. \end{equation} \subsection{General static bulk solutions} \label{subsec:Exactsolns} This section presents the general bulk solutions for two branes. 
We start with the simple rugby-ball geometries that interpolate between two branes sourcing identical defect angles and then continue to the general case of two different branes. The solutions we find are all static -- actually maximally symmetric in the $d$ Lorentzian on-brane directions -- and symmetric under axial rotations about the two brane sources. Rather than rewriting all of the field equations for the bulk region away from the branes, we note that these are easily obtained from the field equations of previous sections in the special case that $Z_{\scriptscriptstyle M} = 0$ and $\psi = v$. Notice that $Z_{\scriptscriptstyle M} = 0$ and $\psi = v$ already solve the $Z_{\scriptscriptstyle M}$ and $\psi$ field equations, so only the remaining equations need to be solved; this must be so, because we have replaced the vortex degrees of freedom with an effective brane description. These choices imply \begin{equation} L_{\rm kin} = L_\Psi = V_b = L_{\rm gm} = L_{\scriptscriptstyle Z} = L_{\rm mix} = 0 \quad\hbox{and so} \quad L_{\rm gge} = \check L_{\scriptscriptstyle A} = L_{\scriptscriptstyle A} = \frac12 \left( \frac{Q}{W^d} \right)^2 \,. \end{equation} As a consequence of these we know \begin{equation} {\cal Z} = 0 \,, \qquad {\cal X} = \check {\cal X}_{\scriptscriptstyle B} = \Lambda - \check L_{\scriptscriptstyle A} \qquad \hbox{and} \qquad \varrho = \check \varrho_{\scriptscriptstyle B} = \Lambda + \check L_{\scriptscriptstyle A}\,. \end{equation} \subsubsection*{Rugby-ball geometries} \label{section:Rugbyballgeom} Because ${\cal Z} = 0$ the solutions to the field equations can be (but need not be) locally maximally symmetric in the two transverse dimensions, rather than just axially symmetric there. For such solutions $W'$ must vanish and so the geometry is completely described by the constant scalar curvatures, $\check R$ and $R$.
The transverse dimensions are locally spherical, but with a defect angle at both poles corresponding to the removal of a wedge of the sphere. Explicitly, we have $B = \alpha \ell \sin(\rho/\ell)$, and the polar defect angle has size $\delta = 2\pi(1 - \alpha)$. The sphere's curvature and volume are \begin{equation} R = \frac{2B''}{B} = - \frac{2}{\ell^2} \qquad \hbox{and} \qquad {\cal V}_2 := 2\pi \int_0^{\pi\ell} {\hbox{d}} \rho \; B = 4 \pi \alpha \, \ell^2 \,, \end{equation} where $\ell$ is the `radius' of the sphere. The relevant bulk field equations are the two Einstein equations \begin{equation} \label{Rdeq2} \check R = - 2 \kappa^2 \left[ \Lambda - \frac12 \left( \frac{Q}{W^d} \right)^2 \right] \,, \end{equation} and \begin{equation} \label{R2eq2} -R = \frac{2}{\ell^2} = 2 \kappa^2 \left[ \frac{2\Lambda}{d} + \left( 1 - \frac{1}{d} \right) \left( \frac{Q}{W^d} \right)^2 \right] \,, \end{equation} with $Q$ fixed by flux quantization to be \begin{equation} \label{RBfluxQ} \frac{Q}{W^d} = \frac{{\cal N}}{2 g_{\scriptscriptstyle A} \alpha \, \ell^2} \qquad \hbox{where} \qquad {\cal N} := N - n_{\rm tot} \varepsilon \left( \frac{g_{\scriptscriptstyle A}}{e} \right) \,, \end{equation} where $n_{\rm tot} = n_+ + n_-$ is the sum of the flux quanta of the two vortices.\footnote{We take for simplicity the gauge coupling of the two vortices to be equal.
See Appendix \ref{AppFluxQuantization} for a discussion of flux quantization for the $Z_{\scriptscriptstyle M}$ and $A_{\scriptscriptstyle M}$ fields.} As shown in Appendix \ref{App:RugbyBalls}, the stable solution to these equations has compact transverse dimensions with radius \begin{equation} \frac{1}{\ell^2} = \frac{1}{r_{\scriptscriptstyle A}^2} \left( 1 + \sqrt{ 1 - \frac{r_{\scriptscriptstyle A}^2}{r_\Lambda^2} } \,\right) \,, \end{equation} where the two intrinsic length-scales of the problem are defined by \begin{equation} r^2_\Lambda := \frac{d}{4\kappa^2 \Lambda} \qquad \hbox{and} \qquad r_{\scriptscriptstyle A}^2(\alpha) := \frac12 \left( 1 - \frac{1}{d} \right) \left( \frac{{\cal N} \kappa}{g_{\scriptscriptstyle A} \alpha} \right)^2 \,. \end{equation} Clearly $\ell \simeq r_{\scriptscriptstyle A}/\sqrt2$ when $r_\Lambda \gg r_{\scriptscriptstyle A}$ and increases to $\ell = r_{\scriptscriptstyle A}$ when $r_\Lambda = r_{\scriptscriptstyle A}$. It is here that we first see why it is the combination ${\cal N} \kappa/g_{\scriptscriptstyle A}$ that sets the size of the extra dimensions. No solutions of the type we seek exist at all unless $r_\Lambda \ge r_{\scriptscriptstyle A}$, which (given the above definitions of $r_\Lambda$ and $r_{\scriptscriptstyle A}$) requires \begin{equation} \label{Lambdaxbound} \Lambda \le \frac{d^2}{2(d-1)} \left( \frac{\alpha \, g_{\scriptscriptstyle A}}{{\cal N} \kappa^2} \right)^2 \,.
\end{equation} Finally, the on-brane curvature is \begin{eqnarray} \label{r0elltildeR} \check R = - \frac{d}{2\, r_\Lambda^2} + \frac{d}{2(d-1)} \left( \frac{r_{\scriptscriptstyle A}^2}{\ell^4} \right) &=& - \frac{d^2}{2(d-1)r_\Lambda^2} + \left( \frac{d}{d-1} \right) \frac{1}{r_{\scriptscriptstyle A}^2} \left(1 + \sqrt{1 - \frac{r_{\scriptscriptstyle A}^2}{r_\Lambda^2} } \right) \nonumber\\ &=& \frac{d}{d-1} \left( - \frac{d}{2 r_\Lambda^2} + \frac{1}{\ell^2} \right) \,, \end{eqnarray} which shows \begin{equation} \check R \simeq \left( \frac{2d}{d-1} \right) \frac{1}{r_{\scriptscriptstyle A}^2} \qquad \hbox{has AdS sign when} \qquad r_\Lambda \gg r_{\scriptscriptstyle A} \,, \end{equation} but changes to dS sign \begin{equation} \check R \to - \frac{d(d-2)}{2(d-1)} \, \frac{1}{r_{\scriptscriptstyle A}^2} \qquad \hbox{when} \qquad r_\Lambda \to r_{\scriptscriptstyle A} \,. \end{equation} The on-brane curvature passes through zero when $\Lambda$ is adjusted to satisfy ${r_{\scriptscriptstyle A}^2}/{r_\Lambda^2} = {4(d-1)}/{d^2}$ (which is $\le 1$ for $d \ge 2$), and $\ell^2 = r_0^2 := (2/d) r_\Lambda^2$. \subsubsection*{Geometries for general brane pairs} Explicit closed-form solutions are also known where the branes at either end of the space have different properties. The difference between the two branes induces nontrivial warping and thereby breaks the maximal 2D symmetry of the transverse dimensions down to simple axial rotations.
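Before moving on, the rugby-ball relations above can be cross-checked numerically. Substituting the flux-quantization value of $Q/W^d$ into \pref{R2eq2} gives a quadratic in $1/\ell^2$ whose larger root is the quoted radius; the flat-brane tuning should then give $\ell^2 = (2/d)\, r_\Lambda^2$. A sketch using arbitrary illustrative parameter values (not a fit to anything physical):

```python
import math

# Illustrative parameter choices (hypothetical, for checking algebra only)
d, kappa, Lam, g_A, alpha, calN = 4, 0.5, 0.1, 1.0, 0.9, 3.0

r_Lam2 = d / (4 * kappa**2 * Lam)
r_A2 = 0.5 * (1 - 1/d) * (calN * kappa / (g_A * alpha))**2
assert r_Lam2 >= r_A2  # needed for a solution to exist at all

# Quoted stable solution for the rugby-ball radius
ell2 = r_A2 / (1 + math.sqrt(1 - r_A2 / r_Lam2))

# Check it solves  2/ell^2 = 2 kappa^2 [ 2 Lam/d + (1 - 1/d) (Q/W^d)^2 ]
# with  Q/W^d = calN / (2 g_A alpha ell^2)  fixed by flux quantization.
QW = calN / (2 * g_A * alpha * ell2)
lhs = 2 / ell2
rhs = 2 * kappa**2 * (2 * Lam / d + (1 - 1/d) * QW**2)
assert abs(lhs - rhs) < 1e-12 * lhs

# Flat on-brane tuning: r_A^2/r_Lam^2 = 4(d-1)/d^2  should give  ell^2 = (2/d) r_Lam^2
r_A2_flat = 4 * (d - 1) / d**2 * r_Lam2
ell2_flat = r_A2_flat / (1 + math.sqrt(1 - r_A2_flat / r_Lam2))
assert abs(ell2_flat - (2 / d) * r_Lam2) < 1e-12
```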
The resulting solutions can be found by double Wick-rotating a charged black hole solution in $D$ dimensions \cite{SolnsByRotation, MSYK}, leading to the metric \begin{eqnarray} {\hbox{d}} s_0^2 &=& W^2(\vartheta) \, \check g_{\mu\nu} \, {\hbox{d}} x^\mu {\hbox{d}} x^\nu + r_0^2\left(\frac{{\hbox{d}} \vartheta^2}{K(\vartheta)} + \alpha_0^2 \, K(\vartheta) \sin^2\vartheta \,{\hbox{d}} \theta^2\right) \nonumber\\ &=& W^2(\vartheta) \,\check g_{\mu\nu} \, {\hbox{d}} x^\mu {\hbox{d}} x^\nu + r^2(\vartheta)\Big({\hbox{d}} \vartheta^2 + \alpha^2(\vartheta) \, \sin^2\vartheta \,{\hbox{d}} \theta^2\Big) \,, \end{eqnarray} where \begin{equation} W(\vartheta) := W_0 \Bigl( 1 + \eta\,\cos\vartheta \Bigr) \,, \qquad r(\vartheta) := \frac{r_0}{\sqrt{K(\vartheta)}} \quad \hbox{and} \quad \alpha(\vartheta) := \alpha_0 \, K(\vartheta) \,, \end{equation} where $\eta$ is an integration constant and $r_0^{-2}: = 2\kappa^2\Lambda = (d/2)\, r_\Lambda^{-2}$. Notice that $r^2(\vartheta) \alpha(\vartheta) = r_0^2 \, \alpha_0$ for all $\vartheta$, and the vanishing of $g_{\theta\theta}$ implies the `radial' coordinate lies within the range $\vartheta_+ := 0 < \vartheta < \vartheta_- := \pi$. The geometry at the endpoints has a defect angle given by $\alpha_\pm = \alpha(\vartheta_\pm)$ and the derivative of the warp factor vanishes at both ends: ${\hbox{d}} W/{\hbox{d}} \vartheta \to 0$ as $\vartheta \to \vartheta_\pm$ (as required by the general Kasner arguments of earlier sections). In these coordinates the Maxwell field solves $\sqrt{-g} \; \check A^{\vartheta\theta} = Q$, which implies \begin{equation} \check A_{\vartheta\theta} = \frac{Q \, r_0^2 \, \alpha_0 \sin\vartheta}{W^d(\vartheta)} \,. \end{equation} Other properties of this metric --- including the explicit form for the function $K(\vartheta)$ --- are given in Appendix \ref{App:BeyondRugby}. 
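The quoted form of $\check A_{\vartheta\theta}$ follows from lowering the indices of $\check A^{\vartheta\theta} = Q/\sqrt{-g}$ with the transverse metric components $g_{\vartheta\vartheta} = r^2$ and $g_{\theta\theta} = r^2 \alpha^2 \sin^2\vartheta$, together with $r^2(\vartheta)\,\alpha(\vartheta) = r_0^2\,\alpha_0$. A symbolic sketch (taking the on-brane metric to have unit determinant, and treating $K$ as an unspecified positive function value):

```python
import sympy as sp

vt = sp.symbols('vartheta', positive=True)
Q, d = sp.symbols('Q d', positive=True)
r0, alpha0, W0, eta, K = sp.symbols('r_0 alpha_0 W_0 eta K', positive=True)

W = W0 * (1 + eta * sp.cos(vt))   # warp factor (eta < 1 assumed)
r = r0 / sp.sqrt(K)               # transverse radius at this value of vartheta
alpha = alpha0 * K                # local defect parameter

# Transverse metric components and full volume factor
g_vtvt = r**2
g_thth = r**2 * alpha**2 * sp.sin(vt)**2
sqrt_g = W**d * r**2 * alpha * sp.sin(vt)

# Solve sqrt(-g) * A^{vartheta theta} = Q, then lower both indices
A_upper = Q / sqrt_g
A_lower = g_vtvt * g_thth * A_upper

# Quoted result: A_{vartheta theta} = Q r_0^2 alpha_0 sin(vartheta) / W^d
assert sp.simplify(A_lower - Q * r0**2 * alpha0 * sp.sin(vt) / W**d) == 0
```

The $K$-dependence drops out entirely, which is why the flux profile depends on the warping only through $W^d$.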
All told, the solution is characterized by three independent integration constants, in terms of which all other quantities (such as $\check R$) can be computed. These constants can be taken to be $Q$ together with independent defect angles, $\alpha_+$ and $\alpha_-$, at the two poles. These three constants are themselves determined in terms of the source brane properties through the near-brane boundary conditions and the flux-quantization condition \begin{equation} \label{fqappwarp} \frac{{\cal N}}{ g_{\scriptscriptstyle A} \alpha_0 \, r_0^2} = Q \int_0^\pi \!{\hbox{d}}\vartheta \,\frac{\sin\vartheta} {W^d(\vartheta)} \,, \end{equation} where, as before, ${\cal N} = N - n_{\rm tot}\varepsilon g_{\scriptscriptstyle A}/e$ represents the vortex-modified flux-quantization integer. \subsubsection*{Near rugby-ball limit} \label{section:Nearrugby} Although the general expressions are more cumbersome, it is possible to give simple formulae for physical quantities in terms of $Q$ and $\alpha_\pm$ in the special case where the geometry is not too different from a rugby ball. Because nonzero $\eta$ quantifies the deviation from a rugby-ball solution, in this regime we may expand in powers of $\eta$. In this section we quote explicit expressions that hold at low order in this expansion. In the rugby-ball limit the functions $r(\vartheta)$, $\alpha(\vartheta)$ and $W(\vartheta)$ degenerate to constants, with $W(\vartheta) = W_0$ and $r(\vartheta) = \ell$ given explicitly in terms of $r_0^2 = (2/d) r_\Lambda^2$ and $\check R$ by eq.~\pref{r0elltildeR}. Since $r(\vartheta)^2 \, \alpha(\vartheta) = r_0^2 \, \alpha_0$ this implies $\alpha_0$ is related to the limiting rugby-ball defect angle, $\alpha$, by \begin{equation} \label{ralpharbdefs} \alpha = \alpha_0 \left[ 1 + \left( \frac{d-1}{d} \right) \check R \, r_0^2 \right] \,.
\end{equation} It happens that to linear order in $\eta$ the geometry near each pole takes the form \begin{equation} {\hbox{d}} s_{\pm}^2 \simeq W_0^2 (1 \pm 2 \,\eta) {\hbox{d}} s_4^2 + \ell_\pm^2 \Bigl[ {\hbox{d}}\vartheta^2 + \alpha_\pm^2 (\vartheta - \vartheta_\pm)^2 \, {\hbox{d}}\theta^2 \Bigr] \end{equation} where \begin{eqnarray} \Delta W &:=& W_+ - W_- \simeq 2 \, W_0 \, \eta + {\cal O}(\eta^2)\nonumber\\ \ell_\pm^2 &\simeq& \ell^2 \Bigl( 1 \pm {\cal C}_{\scriptscriptstyle H} \eta \Bigr) + {\cal O}\left[\eta(\vartheta-\vartheta_\pm)^2,\eta^2 \right] \,, \\ \alpha_\pm &\simeq& \alpha \Bigl( 1 \mp {\cal C}_{\scriptscriptstyle H} \eta \Bigr) + {\cal O}\left[\eta(\vartheta-\vartheta_\pm)^2,\eta^2 \right] \,, \end{eqnarray} with \begin{equation} {\cal C}_{\scriptscriptstyle H} := \frac{d - \frac23 + (d-1) \check R \, r_0^2}{1-(d-1) \check R \, r_0^2/d} \,. \end{equation} This shows that the apparent rugby-ball radius and defect angle seen by a near-brane observer at each pole begin to differ at linear order in $\eta$. To use these expressions to determine quantities in terms only of $Q$ and $\alpha_\pm$ requires knowledge of $\check R$, and the field equations imply this is given for small $\eta$ by \begin{equation} \label{tRsmalleta} \check R = - \frac{1}{r_0^2} \left[ 1 + \left( \frac{2d-3}3 \right) \eta^2 \right] + \kappa^2 Q^2 \Bigl[ 1 + (d-1) \eta^2 \Bigr] + {\cal O}(\eta^4) \,. \end{equation} To complete the story we solve for $\eta$ in terms of $\alpha_\pm$ using \begin{equation} \label{etavsalpha} \frac{\alpha_- - \alpha_+}{\alpha} = 2\,{\cal C}_{\scriptscriptstyle H} \eta\,, \end{equation} with $\alpha \simeq \frac12(\alpha_+ + \alpha_-)$, and use this to evaluate all other quantities.
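As a consistency check, the relation \pref{ralpharbdefs} can also be obtained in two lines: in the rugby-ball limit $r^2(\vartheta)\, \alpha(\vartheta) = r_0^2\, \alpha_0$ degenerates to $\ell^2 \alpha = r_0^2 \alpha_0$, while the last equality of \pref{r0elltildeR}, together with $r_0^2 = (2/d)\, r_\Lambda^2$, gives \begin{equation} \frac{1}{\ell^2} = \frac{1}{r_0^2} + \left( \frac{d-1}{d} \right) \check R \,, \qquad \hbox{and so} \qquad \alpha = \frac{\alpha_0 \, r_0^2}{\ell^2} = \alpha_0 \left[ 1 + \left( \frac{d-1}{d} \right) \check R \, r_0^2 \right] \,. \end{equation}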
For small $\eta$ the flux-quantization condition also simplifies, becoming \begin{equation} \label{fqappwarpsmeta} \frac{{\cal N}}{2 g_{\scriptscriptstyle A} \alpha_0 \, r_0^2} = \frac{{\cal N}}{2 g_{\scriptscriptstyle A} \alpha \, \ell^2} = \frac{Q}{2} \int_0^\pi \!{\hbox{d}}\vartheta \, \frac{\sin\vartheta}{W^d(\vartheta)} \simeq Q \left[ 1 + \frac{d(d+1)}6 \,\eta^2 + {\cal O}(\eta^4) \right] \,. \end{equation} \subsection{Relating bulk to vortex properties} \label{subsec:bulkvortexmatching} We see that the bulk solutions are determined by three parameters, $\alpha_\pm$ and $Q$. Earlier sections also show how these are related to the physical properties of the two source branes, since the near-brane defects are related to the renormalized brane tensions by \begin{equation} 1 - \alpha_\pm = \frac{\kappa^2 \,W_\pm^d \check T_\pm}{2\pi} \,, \end{equation} and $Q$ is determined in terms of brane properties by flux quantization, \pref{fqappwarp} (or, for small $\eta$, \pref{fqappwarpsmeta}). \medskip\noindent{\em Parameter counting} \medskip\noindent An important question is to count parameters to see if there are enough integration constants in the bulk solutions to interpolate between arbitrary choices for the two vortex sources. The source branes are characterized by a total of four physical choices: their tensions ({\em i.e.} defect angles) and localized flux quanta, $n_\pm$, to which we must add the overall flux quantum choice, $N$, for the bulk. But varying these only sweeps out a three-parameter set of bulk configurations because the flux choices ($n_\pm$ and $N$) only appear in the bulk geometry through flux quantization, and so only through the one combination, ${\cal N} = N - \varepsilon (n_+ + n_-)(g_{\scriptscriptstyle A}/e)$, that fixes $Q$. (Although they do not affect the geometry independent of ${\cal N}$, the $n_\pm$ do govern the Aharonov-Bohm phase acquired by test particles that move about the source vortices.)
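Since flux quantization is the one place the flux choices enter, it is worth verifying the $\eta^2$ coefficient in \pref{fqappwarpsmeta}. With $W_0$ scaled out and substituting $u = \cos\vartheta$, a sympy sketch (checking several values of $d$):

```python
import sympy as sp

eta, u = sp.symbols('eta u', positive=True)

# Angular flux integral, W_0 scaled out; substituting u = cos(vartheta):
#   (1/2) int_0^pi sin(vt) / (1 + eta cos(vt))^d dvt
#     = (1/2) int_{-1}^{1} (1 + eta*u)^(-d) du
for d in (2, 3, 4, 5, 6):
    integral = sp.integrate((1 + eta * u)**(-d), (u, -1, 1)) / 2
    expansion = sp.series(integral, eta, 0, 4).removeO()
    # Should reproduce the quoted expansion: 1 + d(d+1)/6 eta^2 + O(eta^4)
    assert sp.expand(expansion - (1 + sp.Rational(d * (d + 1), 6) * eta**2)) == 0
```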
Consequently the three free constants --- $Q$ and $\alpha_\pm$ --- are sufficient to describe the static gravitational field set up by any pair of vortices, and once the brane properties (and $N$) are specified then all geometrical properties are completely fixed. The rugby ball geometries correspond to the special cases where $\check T_+ = \check T_-$. This point of view, where the bulk dynamically relaxes in response to the presence of two brane sources, is complementary to our earlier perspective, which regarded integrating the field equations as an `evolution' in the radial direction (and so for which initial conditions at one brane completely specify the whole geometry --- and by so doing also fix the properties of the antipodal brane). They are equivalent because in the evolutionary point of view two of the integration constants to be chosen were $Q$ and $\check R$, which are completely arbitrary from the perspective of any one brane. Their choices dictate the form of the interpolating geometry and correspond to the two-parameter family of branes (labeled by ${\cal N}$ and $\alpha$) that could have been chosen to sit at the antipodal position. \subsubsection*{On-brane curvature response} Of particular interest is how the on-brane curvature, $\check R$, responds to different choices for brane properties.
In general this is given by \pref{tRvsvarrho}, in which we use the brane-vortex matching results --- \pref{rhobevalA} and \pref{XZbmatching} --- appropriate when the vortex size is negligibly small compared with the transverse KK scale, ${\cal Z}_b = {\cal X}_b = 0$ and $\varrho_b = W_b^d \, \check T_b \delta(y)/\sqrt{g_2}$, ensuring that \begin{equation} \langle {\cal Z} \rangle_{\rm tot} \simeq 0 \,, \qquad \langle {\cal X} \rangle_{\rm tot} \simeq \langle \Lambda - \check L_{\scriptscriptstyle A} \rangle_{\rm tot} \qquad \hbox{and} \qquad \langle \varrho \rangle_{\rm tot} = \sum_b W_b^d \, \check T_b + \langle \Lambda + \check L_{\scriptscriptstyle A} \rangle_{\rm tot} \,. \end{equation} With these results \pref{tRvsvarrho} shows that $\check R$ takes the value that would be expected in $d$ dimensions in the presence of a cosmological constant of size $V_{\rm eff} = \langle \varrho \rangle_{\rm tot} = \left( 1 - 2/d \right) \langle {\cal X} \rangle_{\rm tot}$, and so \begin{equation} \label{Veffbrane} V_{\rm eff} = \sum_b W_b^d \, \check T_b + \langle \Lambda + \check L_{\scriptscriptstyle A} \rangle_{\rm tot} = \left( 1 - \frac{2}{d} \right) \langle \Lambda - \check L_{\scriptscriptstyle A} \rangle_{\rm tot} \,. \end{equation} In general $\check R$ is not small. Since all quantities in $\varrho$ are positive (except perhaps for $\Lambda$), the resulting geometry is de Sitter-like unless cancelled by a sufficiently negative $\Lambda$. Notice also that the second equality implies \begin{equation} \sum_b W_b^d \check T_b = - \frac{2}{d} \, \langle \Lambda \rangle_{\rm tot} - 2\left( 1 - \frac{1}{d} \right) \langle \check L_{\scriptscriptstyle A} \rangle_{\rm tot} \,, \end{equation} is always true. This states that for codimension-2 systems like this the `probe' approximation is never a good one: that is, it is {\em never} a good approximation to neglect the bulk response (the right-hand side) relative to the source tensions (the left-hand side) themselves.
\medskip\noindent{\em Near-flat response} \medskip\noindent Of particular interest are near-flat solutions for which $\Lambda$ is initially adjusted to cancel the rest of $\langle \varrho \rangle_{\rm tot}$, after which brane properties are varied (without again readjusting $\Lambda$). One can ask how $\check R$ responds to this variation. To determine this response explicitly we use the near-rugby solution considered above, in the case where the unperturbed flat geometry is a rugby ball and for which the brane parameters are independently tweaked. To this end we take the initial unperturbed configuration to satisfy $W_0 = 1$ and \begin{equation} \Lambda = \Lambda_0:= \frac{Q_0^2}2 \quad \hbox{and} \quad \eta_0 =0 \quad\implies\quad \check R_0 = 0 \end{equation} and then introduce small perturbations through $\delta \alpha$ and $\delta {\cal N}$. From eq.~\pref{tRsmalleta}, we see immediately that \begin{equation} \delta \check R = \frac2{r_0^2} \,\frac{\delta Q}{Q_0} + {\cal O}(\eta^2) \end{equation} and --- from the flux quantization condition in eq.~\pref{fqappwarpsmeta} --- we see that the leading perturbations are \begin{equation} \frac{\delta Q}{Q_0} = \frac{\delta {\cal N}}{{\cal N}_0} -\frac{\delta \alpha_0}{\alpha_0} + {\cal O}(\eta^2) \,. \end{equation} Lastly, since it is $\alpha = \frac12(\alpha_+ + \alpha_-)$ (and not $\alpha_0$) that is determined by $\check T_\pm$, we must use the relation --- eq.~\pref{ralpharbdefs} --- to write \begin{equation} \frac{\delta \alpha_0}{\alpha_0} = \frac{\delta\alpha}{\alpha} - r_0^2\left(1-\frac1d\right)\delta \check R + {\cal O}(\eta^2) \,. 
\end{equation} Combining these formulae gives \begin{equation} \left(\frac{d-2}d\right)\delta \check R \simeq \frac2{r_0^2} \left(\frac{\delta \alpha}{\alpha} - \frac{\delta {\cal N}}{{\cal N}}\right) \,, \end{equation} to leading order, and so given perturbations of the form \begin{equation} \delta \alpha_\pm = -\frac{\kappa^2 \delta \check T_\pm}{2\pi} \,,\quad \delta {\cal N} = - \frac{g_{\scriptscriptstyle A}}{2\pi}\sum_{b\,\in \pm} \delta \xi_b \,, \end{equation} the corresponding change in $\check R$ has the form \begin{equation} \left( \frac{d-2}{d} \right) \delta \check R \simeq -\frac{1}{r_0^2} \sum_{b = \pm} \left( \frac{\kappa^2 \delta \check T_b}{2\pi \alpha} - \frac{g_{\scriptscriptstyle A}}{\pi{\cal N}} \,\delta \xi_b \right) \,. \end{equation} Comparing with \pref{tRvsVeff} --- and using $\kappa_d^{-2} = 4\pi \alpha r_0^2/\kappa^2$ and $\kappa^2 Q_0^2 r_0^2 = 1$ for the unperturbed flat rugby-ball geometry --- then shows that this curvature is what would have arisen from the $d$-dimensional vacuum energy \begin{equation} V_{\rm eff} = -\frac12\left(1-\frac 2d\right) \frac{\delta\check R}{\kappa_d^2} \simeq \sum_{b = \pm}\left( \delta \check T_b - Q_0 \, \delta \xi_b\right) \,. \end{equation} We see from this that when $\delta {\cal N} = 0$ the curvature obtained is precisely what would be expected in $d$ dimensions given the energy change $\sum_b \delta \check T_b$. \section{Discussion} \label{section:discussion} In this paper we investigated the gravitational properties of branes that carry localized flux of a bulk field, or BLF branes. As noted in the introduction, the treatment of gravitating BLF branes is not straightforward because the delta-like function used to represent their localization must depend on the proper distance away from the brane. Because of their particularly simple structure, this is not a problem for branes described only by their tension $\propto T$.
However, the presence of metric factors in the BLF term $\propto \epsilon^{mn} A_{mn}$ complicates any calculation of transverse components of the brane's stress energy. We resolved this ambiguity by constructing an explicit UV completion of BLF branes using Nielsen-Olesen vortices whose gauge sector mixes kinetically with a bulk gauge field. The gauge kinetic mixing, which is controlled by a dimensionless parameter $\varepsilon$, endows the bulk field with a non-zero flux in the localized region, even in the limit that this region is taken to be vanishingly small. This allows the UV theory to capture the effects of brane-localized flux. The main result is that, in the UV picture, the gauge kinetic mixing can be diagonalized, resulting in variables that clearly separate the localized sources from the bulk sources. In the diagonal basis, the energy associated with localized flux is always cancelled, and the canonical vortex gauge coupling is renormalized: $\hat e^2 = e^2 / (1 - \varepsilon^2)$. This allows us to identify the renormalized vortex tension as the quantity that controls the size of the defect angle in the geometry exterior to the vortex. We also find that the vortex relaxes to ensure that the average of the localized contributions to the transverse stress energy is controlled by the ratio between the size of the vortex and the characteristic bulk length scale, $r_\varv / r_{\scriptscriptstyle B}$. This informs our treatment of the IR theory with branes. We find that the delta-function treatment of the brane is particularly useful for calculating the flux of the bulk field, including its localized contributions, and a delta-function shift in the bulk gauge field strength can diagonalize the brane-localized flux term. This change of variables endows the action with a divergent term that we can interpret as a renormalization of the brane tension, in analogy with the $e \to \hat e$ renormalization of the gauge coupling.
We also show, without explicitly calculating them, that the transverse components of the brane stress energy must vanish. Rather, we use the Hamiltonian constraint and energy conservation to relate these stress energies to quantities which vanish as $r_\varv/r_{\scriptscriptstyle B} \to 0$, thereby circumventing any ambiguity in the metric dependence of the corresponding brane interactions. The techniques we employ here should be relevant to other brane couplings that contain metric factors. For example, there is a codimension-$k$ analogue of the BLF term that involves the Hodge dual of a $k$-form. Of particular interest is the case $k=1$, where the brane can couple to the derivative of a bulk scalar field $\phi$ as $S_b \propto \int \star \, \mathrm{d} \phi$, or to a bulk gauge field $A$ as $S_b \propto \int \star A$. We have also provided an explicit regularization of a $\delta(0)$ divergence. Such divergences are commonplace in treatments of brane physics, and usually deemed problematic. However, there is likely a similar renormalization story in these other cases. Lastly, we plan \cite{Companion} to also apply these techniques to supersymmetric brane-world models that aim to tackle the cosmological constant problem \cite{SLED}. The back-reaction of branes is a crucial ingredient of such models, and understanding the system in greater detail with an explicit UV completion will put these models on firmer ground and hopefully shed light on new angles from which to attack the CC problem. \section*{Acknowledgements} We thank Ana Achucarro, Asimina Arvanitaki, Savas Dimopoulos, Gregory Gabadadze, Ruth Gregory, Mark Hindmarsh, Stefan Hofmann, Florian Niedermann, Leo van Nierop, Massimo Porrati, Fernando Quevedo, Robert Schneider and Itay Yavin for useful discussions about self-tuning and UV issues associated with brane-localized flux.
The Abdus Salam International Centre for Theoretical Physics (ICTP), the Kavli Institute for Theoretical Physics (KITP), the Ludwig-Maximilians-Universit\"at, the Max-Planck-Institut Garching and the NYU Center for Cosmology and Particle Physics (CCPP) kindly supported and hosted various combinations of us while part of this work was done. This research was supported in part by funds from the Natural Sciences and Engineering Research Council (NSERC) of Canada, by a postdoctoral fellowship from the National Science Foundation of Belgium (FWO), and by the Belgian Federal Science Policy Office through the Inter-University Attraction Pole P7/37, the European Science Foundation through the Holograv Network, and the COST Action MP1210 `The String Theory Universe'. Research at the Perimeter Institute is supported in part by the Government of Canada through Industry Canada, and by the Province of Ontario through the Ministry of Research and Innovation (MRI). Work at KITP was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915.
\section{Introduction} \label{sec:intro} In hierarchical cold dark matter cosmologies, large structures form through the accretion of smaller structures \citep[e.g.,][]{1993MNRAS.261L...8W}. In particular, mergers and infall of halos play a fundamental role in determining the structure and dynamics of massive clusters of galaxies, where mergers can induce large--scale bulk motions with velocities of the order of $\sim 1000$ km s$^{-1}$ or larger. This results in complex hydrodynamic flows where most of the kinetic energy is quickly dissipated to heat by shocks, but some part may in principle also excite long-lasting turbulent gas motions. Numerical simulations of merging clusters \citep[e.g.,][]{1993A&A...272..137S,1997ApJS..109..307R,2001ApJ...561..621R, 2005astro.ph..5274T} provide a detailed description of the gas-dynamics during a merger event. It has been shown that infalling sub-clusters can generate laminar bulk flows through the primary cluster and inject turbulent eddies via Kelvin-Helmholtz (KH) instabilities at interfaces between the bulk flows and the primary cluster gas. Such eddies redistribute the energy of the merger through the cluster volume in a few turnover times, which corresponds to a time interval of order 1 Gyr. The largest eddies decay with time into more random and turbulent velocity fields, eventually developing a turbulent cascade with a spectrum of fluctuations expected to be close to a Kolmogorov spectrum. Observationally, spatially resolved gas pressure maps of the Coma cluster obtained from a mosaic of XMM--Newton observations have indeed revealed the signature of mildly supersonic turbulence, at least in the central regions of the cluster \citep{2004A&A...426..387S}.
It has also been suggested that the micro--calorimeters on board future X--ray satellites such as ASTRO-E2 should be able to detect the turbulent broadening of the lines of heavy ions in excess of the thermal broadening \citep{2003AstL...29..791I}, which would represent a direct measurement of cluster turbulence. Cluster turbulence could in principle store an appreciable fraction of the thermal energy of massive clusters, which would make it an important factor for understanding the structure of the ICM. Shear flows associated with cluster turbulence and the resulting dynamo processes could considerably amplify the magnetic field strength in the ICM \citep[e.g.,][]{1999A&A...348..351D,Dolag:2002}. In addition, magnetohydrodynamic waves can be efficiently injected in the ICM directly by shocks, by Kelvin-Helmholtz or Rayleigh-Taylor instabilities, or by the decay of turbulent eddies at larger scales. These waves, as well as shocks, can efficiently accelerate supra--thermal particles in the ICM to higher energies. Although there is still some debate concerning the detailed mechanism responsible for the origin of relativistic particles and magnetic fields in the ICM \citep[e.g.,][]{2003mecg.conf..349B}, the presence of relativistic electrons and of $\sim \mu$G--strength magnetic fields in the ICM is proven by non--thermal emission studied with radio observations and possibly by observations of hard X--ray emission \citep[e.g.,][for a review]{2002mpgc.book.....F,2003A&A...398..441F}. In addition, the occurrence of non--thermal phenomena is found to be related to the dynamical state and mass of the parent cluster \citep{1999NewA....4..141G,buote2001,2001A&A...378..408S,2002IAUS..199..133F}, which suggests a connection between cluster mergers and non--thermal activity. Despite this potentially significant relevance of turbulence for the ICM, quantitative studies have received comparatively little attention in hydrodynamical cluster simulations thus far.
One reason for this is that 3D turbulence is difficult to resolve in any numerical scheme, because these always introduce some finite numerical viscosity, effectively putting a limit on the Reynolds numbers that can still be adequately represented. In the Lagrangian SPH method, which has been widely employed for studies of cluster formation, an artificial viscosity is used to capture shocks. The original parameterisation of this viscosity \citep{1983JCoPh..52....374S} makes the scheme comparatively viscous; it smoothes out small-scale velocity fluctuations and viscously damps random gas motions well above the nominal resolution limit. This hampers the ability of the original SPH to develop fluid turbulence down to the smallest resolved scales. However, the numerical viscosity of SPH can in principle be reduced by using a more sophisticated parameterisation of the artificial viscosity. Ideally, the viscosity should only be present in a hydrodynamical shock, but otherwise it should be negligibly small. To come closer to this goal, \citet{1997JCoPh..136....41S} proposed a numerical scheme where the artificial viscosity is treated as an independent dynamical variable for each particle, with a source term triggered by shocks, and an evolutionary term that lets the viscosity decay in regions away from shocks. In this way, one can hope that shocks can still be captured properly, while in the bulk of the volume of a simulation, the effective viscosity is lower than in original SPH. We adopt this scheme and implement it in a cosmological simulation code. We then apply it to high-resolution simulations of galaxy cluster formation, allowing us to obtain a better representation of turbulent gas motions in SPH simulations of clusters. This also sheds new light on differences in the results of cosmological simulations obtained with different numerical techniques.
We demonstrate in Section~\ref{sec:tests} the robustness of our new low--viscosity scheme by applying it to several test problems. In Sections~\ref{sec:simulations}, \ref{sec:turbulence} and \ref{sec:cluster}, we introduce our set of cluster simulations, the algorithm to detect and measure turbulence, and the implications of the presence of turbulence for the structure and properties of galaxy clusters. In Section~\ref{sec:lines}, we consider the effects of turbulence on the line-width of narrow X-ray metal lines. Finally, in Section~\ref{sec:radio} we apply the results from our new simulations to models for the production of radio emission due to turbulent acceleration processes. We give our conclusions in Section~\ref{sec:conclusions}. \section{Simulation Method} \label{sec:code} The smoothed particle hydrodynamics method treats shocks with an artificial viscosity, which leads to a broadening of shocks and a relatively rapid vorticity decay. To overcome these problems, \citet{1997JCoPh..136....41S} proposed a new parameterisation of the artificial viscosity capable of reducing the viscosity in regions away from shocks, where it is not needed, while still being able to capture strong shocks reliably. We have implemented this method in the cosmological SPH code {\small GADGET-2} \citep{2005astro.ph..5010S}, and describe the relevant details in the following. 
In {\small GADGET-2}, the viscous force is implemented as \begin{equation} \frac{{\mathrm d}v_i}{{\mathrm d}t} = - \sum_{j=1}^{N}m_j\Pi_{ij}\nabla_i\bar{W}_{ij}, \end{equation} and the rate of entropy change due to viscosity is \begin{equation} \frac{{\mathrm d}A_i}{{\mathrm d}t} = - \frac{1}{2}\frac{\gamma-1}{\rho_i^{\gamma-1}} \sum_{j=1}^{N}m_j\Pi_{ij}v_{ij}\cdot\nabla_i\bar{W}_{ij}, \end{equation} where $A_i=(\gamma-1)u_i/\rho_i^{\gamma-1}$ is the entropic function of a particle of density $\rho_i$ and thermal energy $u_i$ per unit mass, and $\bar{W}_{ij}$ denotes the arithmetic mean of the two kernels $W_{ij}(h_i)$ and $W_{ij}(h_j)$. The usual parameterisation of the artificial viscosity \citep{1983JCoPh..52....374S,1995JCoPh.121..357B} for an interaction of two particles $i$ and $j$ includes terms to mimic a shear and bulk viscosity. For standard cosmological SPH simulations, it can be written as \begin{equation} \Pi_{ij}=\frac{-\alpha c_{ij}\mu_{ij}+\beta\mu_{ij}^2}{\rho_{ij}}f_{ij}, \label{eqn:visc} \end{equation} for $\vec{r}_{ij}\cdot\vec{v}_{ij}\le 0$ and $\Pi_{ij} = 0$ otherwise, i.e.~the pair-wise viscosity is only non-zero if the particles are approaching each other. Here \begin{equation} \mu_{ij}=\frac{h_{ij}\vec{v}_{ij}\cdot\vec{r}_{ij} }{\vec{r}_{ij}^2+\eta^2}, \end{equation} $c_{ij}$ is the arithmetic mean of the two sound speeds, $\rho_{ij}$ is the average of the densities, $h_{ij}$ is the arithmetic mean of the smoothing lengths, and $\vec{r}_{ij}=\vec{r}_i-\vec{r}_j$ and $\vec{v}_{ij}=\vec{v}_i-\vec{v}_j$ are the inter-particle distance and relative velocity, respectively.
We have also included a viscosity-limiter $f_{ij}$, which is often used to suppress the viscosity locally in regions of strong shear flows, as measured by \begin{equation} f_i=\frac{|\left<\vec{\nabla}\cdot\vec{v}\right>_i|}{|\left<\vec{\nabla}\cdot\vec{v}\right>_i| + |\left<\vec{\nabla}\times\vec{v}\right >_i|+\sigma_i}, \end{equation} which can help to avoid spurious angular momentum and vorticity transport in gas disks \citep{1996IAUS..171..259S}. Note however that the parameters describing the viscosity (with common choices $\alpha=0.75-1.0$, $\beta=2\alpha$, $\eta=0.01 h_{ij}$, and $\sigma_i=0.0001 c_i/h_i$) stay fixed in time here. This then defines the `original' viscosity scheme usually employed in cosmological SPH simulations. We refer to runs performed with this viscosity scheme as {\it ovisc} simulations. As a variant of the original parameterisation of the artificial viscosity, {\small GADGET-2} can use a formulation proposed by \cite{1997JCoPh..136....298S} based on an analogy with Riemann solutions of compressible gas dynamics. In this case, $\mu_{ij}$ is defined as \begin{equation} \mu_{ij}=\frac{\vec{v}_{ij}\cdot\vec{r}_{ij} }{|\vec{r}_{ij}|}, \end{equation} and one introduces a signal velocity $v_{ij}^{\rm sig}$, for example in the form \begin{equation} v_{ij}^{\rm sig} = c_i + c_j - 3\mu_{ij}. \end{equation} The resulting viscosity term then changes into \begin{equation} \Pi_{ij}=\frac{-0.5\alpha v_{ij}^{sig} \mu_{ij}}{\rho_{ij}}f_{ij}. \label{eqn:visc2} \end{equation} We have also performed simulations using this signal-velocity based artificial viscosity and found that it performs well in all test problems we examined so far, while in some cases it performed slightly better, in particular avoiding post-shock oscillations in a more robust way. We refer to simulations performed using this `signal velocity' based viscosity scheme as {\it svisc} simulations.
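To make the difference between the two parameterisations concrete, the pairwise terms above can be evaluated for a single pair of approaching particles. The following Python sketch is purely illustrative (it is not code from {\small GADGET-2}; the function names, unit values and the choice $f_{ij}=1$ are our own assumptions):

```python
import math

def pi_standard(v_ij, r_ij, h_ij, c_ij, rho_ij,
                alpha=0.75, beta=1.5, eta_fac=0.01):
    """Standard pairwise viscosity (eq. eqn:visc in the text), with f_ij = 1."""
    vr = sum(v * r for v, r in zip(v_ij, r_ij))
    if vr >= 0.0:                       # particles receding: no viscosity
        return 0.0
    r2 = sum(r * r for r in r_ij)
    mu = h_ij * vr / (r2 + (eta_fac * h_ij) ** 2)
    return (-alpha * c_ij * mu + beta * mu * mu) / rho_ij

def pi_signal(v_ij, r_ij, c_i, c_j, rho_ij, alpha=0.75):
    """Signal-velocity based viscosity (eq. eqn:visc2 in the text), f_ij = 1."""
    vr = sum(v * r for v, r in zip(v_ij, r_ij))
    if vr >= 0.0:
        return 0.0
    mu = vr / math.sqrt(sum(r * r for r in r_ij))
    v_sig = c_i + c_j - 3.0 * mu        # estimate of the signal velocity
    return -0.5 * alpha * v_sig * mu / rho_ij

# head-on approach at the sound speed; unit separation, smoothing and density
p_std = pi_standard(v_ij=(-1.0, 0.0, 0.0), r_ij=(1.0, 0.0, 0.0),
                    h_ij=1.0, c_ij=1.0, rho_ij=1.0)
p_sig = pi_signal(v_ij=(-1.0, 0.0, 0.0), r_ij=(1.0, 0.0, 0.0),
                  c_i=1.0, c_j=1.0, rho_ij=1.0)
```

For such a head-on approach both forms give a positive (dissipative) $\Pi_{ij}$ of comparable size; the signal-velocity form replaces the separate linear and quadratic terms by a single term built from $v^{\rm sig}_{ij}$.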
The idea proposed by \cite{1997JCoPh..136....41S} is to give every particle its own viscosity parameter $\alpha_i$, which is allowed to evolve with time according to \begin{equation} \frac{{\rm d}\alpha_i}{{\rm d}t}=-\frac{\alpha_i-\alpha_{\rm min}}{\tau}+S_i. \end{equation} This causes $\alpha_i$ to decay to a minimum value $\alpha_{\rm min}$ with an e-folding time $\tau$, while the source term $S_i$ is meant to make $\alpha_i$ rapidly grow when a particle approaches a shock. For the decay timescale, \cite{1997JCoPh..136....41S} proposed to use \begin{equation} \tau=h_{i}\,/\,(c_i\,l), \end{equation} where $h_i$ is the smoothing length, $c_{i}$ the sound speed and $l$ a free (dimensionless) parameter which determines on how many information crossing times the viscosity decays. For an ideal gas and a strong shock, this time scale can be related to a length scale $\delta=0.447/l$ (in units of the smoothing length $h_i$) on which the viscosity parameter decays behind the shock front. For the source term $S_i$, we follow \cite{1997JCoPh..136....41S} and adopt \begin{equation} S_i = S^* f_i\,\, \mathrm{max}\left(0,-\left<\vec{\nabla}\cdot\vec{v}\right>_i\right), \end{equation} where $\left<\vec{\nabla}\cdot\vec{v}\right>_i$ denotes the usual SPH estimate of the divergence around the particle $i$. Note that it would in principle be possible to use more sophisticated shock detection schemes here, but the simple criterion based on the convergence of the flow already works well in most cases. We refer to simulations carried out with this `low' viscosity scheme as {\it lvisc} runs. Usually we set $S^* = 0.75$ and choose $l=1$. We also restrict $\alpha_i$ to lie between $\alpha_{\rm min}=0.01$ and $\alpha_{\rm max}=0.75$. Choosing $\alpha_{\rm min}>0$ has the advantage that possible noise which might be present in the velocity field on scales below the smoothing length will still be damped with time.
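The decay-plus-source evolution of $\alpha_i$ can be illustrated with a small stand-alone Python sketch (a simplified scalar model integrated with explicit Euler steps; the function name and the toy numbers are our own assumptions, not simulation code):

```python
import numpy as np

def update_alpha(alpha, div_v, h, c, dt, l=1.0, S_star=0.75,
                 alpha_min=0.01, alpha_max=0.75, f=1.0):
    """One explicit Euler step of the per-particle viscosity parameter:
        d(alpha)/dt = -(alpha - alpha_min)/tau + S,
    with tau = h/(c*l) and source S = S_star * f * max(0, -div_v).
    All arguments may be arrays with one entry per particle."""
    tau = h / (c * l)                                   # decay e-folding time
    S = S_star * f * np.maximum(0.0, -div_v)            # shock-triggered source
    alpha = alpha + dt * (-(alpha - alpha_min) / tau + S)
    return np.clip(alpha, alpha_min, alpha_max)

# Toy history of one particle: a shock passage (strongly converging flow)
# followed by a quiescent phase with vanishing velocity divergence.
alpha = np.array([0.01])
h, c, dt = np.array([1.0]), np.array([1.0]), 0.05
for _ in range(40):                                     # div v = -5: ramp up
    alpha = update_alpha(alpha, np.array([-5.0]), h, c, dt)
alpha_shock = alpha.copy()                              # pinned at alpha_max
for _ in range(200):                                    # div v = 0: decay
    alpha = update_alpha(alpha, np.array([0.0]), h, c, dt)
```

A particle swept by a converging flow ramps up to $\alpha_{\rm max}$ within a few crossing times and then relaxes back towards $\alpha_{\rm min}$ once $\left<\vec{\nabla}\cdot\vec{v}\right>$ vanishes, mirroring the pre- and post-shock behaviour described in the tests below.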
Increasing $S^*$ can give a faster response of the artificial viscosity to the shock switch without inducing higher viscosity than necessary elsewhere. We also note that we replace $\alpha$ in equation \ref{eqn:visc} (and equation \ref{eqn:visc2}, respectively) by the arithmetic mean $\alpha_{ij}$ of the two interacting particles. Depending on the problem, we initialise $\alpha_i$ at the start of a simulation with $\alpha_{\rm max}$ if shocks are already present in the initial conditions, and with $\alpha_{\rm min}$ otherwise. While the approach to reduce numerical viscosity with a time-variable $\alpha_i$ works well with both basic parameterisations of the artificial viscosity, most of our cosmological simulations were carried out with the `original' parameterisation because the signal velocity variant became available in {\small GADGET-2} only recently. \begin{figure} \includegraphics[width=0.49\textwidth]{sod.eps} \caption{A standard shock tube problem \citep{1978JCoPh..27....1S} computed with the low--viscosity scheme with an individual, time-dependent viscosity. From top to bottom, we show the current value of the artificial viscosity parameter $\alpha_i$, density, velocity, pressure, and internal energy, averaged in bins with spacing equal to the SPH smoothing length of particles in the low density region. The analytic solution of the problem for the time $t=5.0$ is shown as a solid line.} \label{fig:sod} \end{figure} \begin{figure*} \includegraphics[width=1.0\textwidth]{bertschinger.eps} \caption{Profiles of velocity (left column), pressure (middle left column), entropy (middle right column) and viscosity constant $\alpha$ (right column) for the spherical collapse test at two different times (from top to bottom). The thin line marks the analytic solution, diamonds give the result obtained by the new SPH formulation for the time dependent viscosity.
The thick line is the analytic solution adaptively smoothed with the SPH kernel, using the smoothing length of the particles in each bin. The lengths of the horizontal lines plotted at each data point correspond to the smoothing lengths of the SPH particles at this position. } \label{fig:bert1} \end{figure*} \section{Test Simulations} \label{sec:tests} To verify that the low--viscosity formulation of SPH with its time-dependent artificial viscosity is still able to correctly capture strong shocks, we computed a number of test problems. We here report on a standard shock tube test, and a spherical collapse test, which both have direct relevance for the cosmological formation of halos. As a more advanced test for the ability of the code to follow vorticity generation, we investigate the problem of a strong shock striking an overdense cloud in a background medium. This test can also give hints as to whether turbulence is less strongly suppressed in the low--viscosity treatment of SPH than in the original formulation. \subsection{Shock-Tube Test} First, we computed a shock tube problem, which is a common test for hydrodynamical numerical schemes \citep{1978JCoPh..27....1S}. For definiteness, a tube is divided into two halves, having density $\rho_1=1$ and pressure $p_1=1$ on the left side, and $\rho_2=0.125$ and $p_2=0.1$ on the right side, respectively. As in Sod (1978), we assume an ideal gas with $\gamma=1.4$. To avoid oscillations in the post-shock region (note that a shock is present in the initial conditions) we initialise the viscosity values of the particles with $\alpha_{\rm max}=0.75$. We compute the test in 3D and make use of periodic boundary conditions. The initial particle grid is $5\times5\times50$ on one half, and $10\times10\times100$ on the other half, implying an equal particle mass for both sides.
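The analytic reference solution for these initial data follows from the exact Riemann problem. As an illustration, the central (`star') pressure can be obtained by bisection on the standard textbook pressure function; the sketch below is a generic construction of this kind (not part of our simulation code; names and tolerances are our own):

```python
import math

def sod_star_pressure(rho_l=1.0, p_l=1.0, rho_r=0.125, p_r=0.1,
                      u_l=0.0, u_r=0.0, gamma=1.4):
    """Star-region pressure of the Riemann problem, found by bisection on
    the standard pressure function (left rarefaction / right shock here)."""
    a_l = math.sqrt(gamma * p_l / rho_l)       # sound speeds
    a_r = math.sqrt(gamma * p_r / rho_r)

    def f_side(p, p_k, rho_k, a_k):
        if p > p_k:                            # shock branch
            big_a = 2.0 / ((gamma + 1.0) * rho_k)
            big_b = (gamma - 1.0) / (gamma + 1.0) * p_k
            return (p - p_k) * math.sqrt(big_a / (p + big_b))
        # rarefaction branch
        return 2.0 * a_k / (gamma - 1.0) * \
            ((p / p_k) ** ((gamma - 1.0) / (2.0 * gamma)) - 1.0)

    def f(p):
        return f_side(p, p_l, rho_l, a_l) + f_side(p, p_r, rho_r, a_r) + (u_r - u_l)

    lo, hi = 1e-8, max(p_l, p_r)               # f(lo) < 0 < f(hi) here
    for _ in range(100):                       # bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p_star = sod_star_pressure()
```

For $\gamma=1.4$ and the initial data above this converges to the well-known Sod value $p_* \approx 0.303$, from which the full post-shock state follows.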
In Figure \ref{fig:sod}, we show the state of the system at simulation time $t=5$ in terms of density, velocity, and internal energy with a binning which corresponds to the smoothing length for particles in the low density region. We also include the analytic expectation for comparison. In addition, we plot the values of the artificial viscosity parameter of the particles. Clearly visible is that the viscosity is close to $\alpha_{\rm min}$ everywhere, except in the region close to the shock. One can also see how the viscosity builds up to its maximum value in the pre-shock region and decays in the post shock region. We note that the final post-shock state of the gas agrees well with the theoretical expectation, and is indistinguishable from the case where the original viscosity parameterisation is used. \begin{figure*} \includegraphics[width=0.49\textwidth]{cloud_a_0010.eps} \includegraphics[width=0.49\textwidth]{cloud_a_nv_0010.eps}\\ \includegraphics[width=0.49\textwidth]{cloud_a_0050.eps} \includegraphics[width=0.49\textwidth]{cloud_a_nv_0050.eps}\\ \includegraphics[width=0.49\textwidth]{cloud_a_0090.eps} \includegraphics[width=0.49\textwidth]{cloud_a_nv_0090.eps}\\ \includegraphics[width=0.49\textwidth]{cloud_a_0130.eps} \includegraphics[width=0.49\textwidth]{cloud_a_nv_0130.eps}\\ \includegraphics[width=0.49\textwidth]{cloud_a_0170.eps} \includegraphics[width=0.49\textwidth]{cloud_a_nv_0170.eps}\\ \caption{Time evolution of the interaction of a strong shock wave with an overdense cloud. We show the projected gas density and compare simulations carried out with original SPH (left) and the low--viscosity formulation (right). 
The incident shock wave has Mach number 10, and the cloud is initially at pressure equilibrium in the ambient medium and has overdensity 5.} \label{fig:tube2d} \end{figure*} \subsection{Self-similar spherical collapse} A test arguably more relevant for cosmological structure formation is the self-similar, spherical collapse of a point perturbation in a homogeneous expanding background \citep{1985ApJS...58....1B}. This test is difficult for grid and SPH codes alike. The gas cloud collapses self-similarly, forming a very strong shock (which formally has infinite Mach number) at the accretion surface. While grid codes with shock capturing schemes can usually recover the sharp shock surface very well, the large dynamic range in the post-shock region with its singular density cusp, as well as the strict spherical symmetry of the problem, are challenging for mesh codes. On the other hand, Lagrangian SPH codes tend to have fewer problems with the central structure of the post-shock cloud, but they broaden the shock surface substantially, and typically show appreciable pre-shock entropy injection as a result of the artificial viscosity. We have computed the self-similar collapse test and compared the results for the new viscosity parameterisation with the analytic expectation. The very strong spherical shock of this problem is a particularly interesting test, because it allows us to check whether the low--viscosity formulation is still able to capture the strongest shocks possible. In Figure \ref{fig:bert1}, we show the structure of the shock at two consecutive times, scaled to the self-similar variables. In general, the SPH result recovers the analytic solution for the post-shock state very well, especially when the entropy profile is considered. However, the shock is substantially broadened, and some pre-heating in front of the shock is clearly visible. In the velocity field, some weak post-shock oscillations are noticeable.
We have also indicated the smoothing lengths of the SPH particles as horizontal error bars for each of the data points (the point at which the SPH kernel falls to zero is reached at twice this length). For comparison, we additionally over-plotted the analytic solution adaptively smoothed with the SPH kernel size at each bin. The panels of the right column in Figure \ref{fig:bert1} show the profile of the viscosity parameter, which was set to $\alpha_{\rm min}$ at the beginning of the simulation, as the initial conditions do not contain a shock. The viscosity parameter builds up immediately after starting the simulation as the strong shock forms. Later one can see how the viscosity parameter begins to decay towards $\alpha_{\rm min}$ in the inner part, how it builds up to $\alpha_{\rm max}$ towards the shock surface, and how a characteristic profile develops as the shock moves outward. In the post-shock region an intermediate viscosity value is maintained for some time due to some non-radial motions of gas particles in this region. \begin{table*} \caption{Main characteristics of the non-radiative galaxy cluster simulations. Column 1: identification label. Columns 2 and 3: mass of the dark matter ($M_{\rm DM}$) and gas ($M_{\rm gas}$) components inside the virial radius. Column 4: virial radius $R_{\rm v}$. Column 5: X-ray luminosity inside the virial radius $L_x$.
Columns 6 and 7: mass-weighted temperature ($T_{\rm MW}$) and spectroscopic-like temperature ($T_{\rm SL}$).} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{c}{Simulations} & \multicolumn{2}{c}{$M_{\rm DM}(h^{-1} 10^{14} M_{\odot})$} & \multicolumn{2}{c}{$M_{\rm GAS}(h^{-1} 10^{13} M_{\odot})$} & \multicolumn{2}{c}{$R_{\rm v}(h^{-1}$ kpc)} & \multicolumn{2}{c}{$L_x (10^{44} \,{\rm erg\,s^{-1}})$} & \multicolumn{2}{c}{$T_{\rm MW}$(keV)} & \multicolumn{2}{c}{$T_{\rm SL}$(keV)}\\ \hline & svisc & lvisc & svisc & lvisc & svisc & lvisc & svisc & lvisc & svisc & lvisc & svisc & lvisc \\ g1 &14.5 &14.5 &17.5 &17.0 &2360 &2355 &47.1 & 21.3& 7.2&7.1& 5.8 &5.6 \\ g8 &22.6 &22.4 &19.8 &19.8 &2712 &2705 &63.1 & 32.1& 9.3&9.1& 6.2 &5.7 \\ g51 &13.0 &13.0 &11.5 &11.5 &2255 &2251 &30.8 & 17.9& 6.4&6.3& 4.6 &4.7 \\ g72 &13.5 &13.4 &12.0 &11.9 &2286 &2280 &18.3 & 14.1& 5.8&5.8& 4.0 &4.0 \\ g676 & 1.1 & 1.0 & 0.95& 0.91& 983 & 972 & 3.2 & 1.4& 1.3&1.3& 1.6 &1.5 \\ g914 & 1.2 & 1.0 & 1.07& 0.91&1023 & 971 & 4.2 & 1.7& 1.4&1.3& 1.6 &1.7 \\ g1542 & 1.1 & 1.0 & 0.95& 0.90& 982 & 967 & 3.0 & 1.4& 1.3&1.2& 1.4 &1.5 \\ g3344 & 1.1 & 1.1 & 1.00& 0.96&1002 & 993 & 2.2 & 1.4& 1.4&1.3& 1.4 &1.5 \\ g6212 & 1.1 & 1.1 & 1.00& 1.01&1000 &1006 & 3.0 & 1.5& 1.3&1.3& 1.6 &1.7 \\ \hline \end{tabular} \label{tab:char} \end{center} \end{table*} \subsection{Shock-cloud interaction} To verify that the low--viscosity scheme also works in more complex hydrodynamical situations, we simulate a test problem where a strong shock strikes a gas cloud embedded at pressure equilibrium in a lower density environment. A recent discussion of this setup can be found in \cite{2002ApJ...576..832P} and references therein. SPH is able to reproduce the main features expected in this test problem reasonably well, like reverse and reflected shocks, back-flow, primary and secondary Mach stems, primary and secondary vortices, etc.~\citep[see][]{2005astro.ph..5010S}.
Our purpose here is to check whether the new scheme for a time-variable viscosity performs at least as well as the original approach. In Figure \ref{fig:tube2d}, we compare the time evolution of the projected gas density for the original viscosity scheme (left hand side) with the new low--viscosity scheme (right hand side). Overall, we find that the new scheme produces results quite similar to those of the original method. But there are also a number of details where the low--viscosity scheme appears to work better. One is the external reverse bow shock, which is resolved more sharply with the new scheme compared to the original one. This is consistent with our findings from the previous tests, where we could also notice that shocks tend to be resolved somewhat more sharply using the new scheme. We also note that instabilities along shear flows (e.g. the forming vortices or the back-flow) appear at an earlier time, as expected if the viscosity of the numerical scheme is lower. This should help to resolve turbulence better. In summary, the low--viscosity scheme appears to work without problems even in complex situations involving multiple shocks and vorticity generation, while it is still able to keep the advantage of a reduced viscosity in regions away from shocks. We can therefore expect this scheme to also work well in a proper environment of cosmological structure formation, and simulations should be able to benefit from the reduced viscosity characteristics of the scheme. \section{Cosmological Cluster Simulations} \label{sec:simulations} We have performed high-resolution hydrodynamical simulations of the formation of 9 galaxy clusters. The clusters span a mass-range from $10^{14}\,h^{-1}{\rm M}_{\odot}$ to $2.3\times 10^{15}h^{-1}{\rm M}_{\odot}$ and have originally been selected from a DM--only simulation \citep{YO01.1} with box-size $479\,h^{-1}$Mpc of a flat $\Lambda$CDM model with $\Omega_0=0.3$, $h=0.7$, $\sigma_8=0.9$ and $\Omega_{\rm b}=0.04$.
Using the `Zoomed Initial Conditions' (ZIC) technique \citep{1997MNRAS.286..865T}, we then re-simulated the clusters with higher mass and force resolution by populating their Lagrangian regions in the initial conditions with more particles, adding additional small-scale power appropriately. The selection of the initial region was carried out with an iterative process, involving several low resolution DM-only resimulations to optimise the simulated volume. The iterative cleaning process ensured that all of our clusters are free from contaminating boundary effects out to at least 3 - 5 virial radii. Gas was introduced in the high--resolution region by splitting each parent particle into a gas and a DM particle. The final mass--resolution of these simulations was $m_{\rm DM}=1.13\times 10^9\,h^{-1}{\rm M}_\odot$ and $m_{\rm gas}=1.7\times 10^8\,h^{-1}{\rm M}_\odot$ for dark matter and gas within the high--resolution region, respectively. The clusters were hence resolved with between $2\times10^5$ and $4\times10^6$ particles, depending on their final mass. For details on their properties see Table \ref{tab:char}. The gravitational softening length was $\epsilon=5.0\, h^{-1}$kpc (Plummer--equivalent), kept fixed in physical units at low redshift and switched to constant comoving softening of $\epsilon=30.0\, h^{-1}$kpc at $z\ge 5$. Additionally we re-simulated one of the smaller clusters ({\it g676}) with 6 times more particles (HR), decreasing the softening by a factor of two to $\epsilon=2.5\, h^{-1}$kpc. We computed three sets of simulations using non radiative gas dynamics, where each cluster was simulated three times with different prescriptions for the artificial viscosity. In our first set, we used the original formulation of artificial viscosity within SPH. In the second set, we used the parametrisation based on signal velocity, but with a fixed coefficient for the viscosity.
Finally, in our third set, we employed the time dependent viscosity scheme, which we expect to lead to lower residual numerical viscosity. Our simulations were all carried out with an extended version of {\small GADGET-2} \citep{2005astro.ph..5010S}, a new version of the parallel TreeSPH simulation code {\small GADGET} \citep{SP01.1}. We note that the formulation of SPH used in this code follows the `entropy-conserving' method proposed by \citet{2002MNRAS.333..649S}. \section{Identifying Turbulence} \label{sec:turbulence} \begin{figure} \includegraphics[width=0.5\textwidth]{figura6.eps} \caption{Mean local velocity dispersion for the central $500^{3}{\rm kpc}^{3}$ box as a function of the resolution adopted for the TSC--smoothing of the local mean field. Results are plotted for a low viscosity simulation.} \label{fig:disp_resol} \end{figure} In the idealized case of homogeneous and isotropic turbulence, the autocorrelation function of the velocity field of the fluid should not depend on the position (homogeneity) and it should only depend on the magnitude of the separation $\vec{r}$ between points (isotropy). The tensor of the correlation function of the velocities is thus given by \citep[e.g.][]{1998pfp..conf.....C}: \begin{equation} R_{ij}(r) = \left< v_{i}(\vec{x}) v_{j}(\vec{x}+\vec{r}) \right> \label{rij} \end{equation} \noindent where $\vec{x}$ is the position of a fluid particle. The 3D power spectral density of the turbulent field is given by \citep[e.g.][]{1998pfp..conf.....C}: \begin{equation} \Phi_{ij}(\vec{k})={1\over{(2\pi)^3}}\int R_{ij}(\vec {r}) \exp ( {\it i}\, \vec{k}\cdot\vec{r} )\, {\rm d}\vec{r}. \label{Phi_ij} \end{equation} \noindent The energy spectrum, $E(k)$, associated with the fluctuations of the velocity field is related to the diagonal parts of both the tensor of the correlation function, and that of the power spectral density.
This energy spectrum is given by \citep[e.g.][]{1998pfp..conf.....C}: \begin{equation} E(k) = 2 \pi k^2 \Phi_{ii}(k), \label{ek} \end{equation} \noindent and the total turbulent energy per unit mass is \begin{equation} u_{\rm turb}={1\over 2} \left< v^2 \right> = {1\over 2} R_{ii}(\vec{r}=0)= \int_0^{\infty} E(k)\,{\rm d}k, \label{energy_ek} \end{equation} where the summation convention over repeated indices is adopted. The real intracluster medium is however much more complex; in particular, it is neither homogeneous nor isotropic. The gravitational field induces density and temperature gradients in the ICM, and the continuous infall of substructures drives bulk motions through the ICM. These effects break both homogeneity and isotropy at some level, at least on the scale of the cluster, and thus demand a more complicated formalism to appropriately characterise the turbulent field. It is not the aim of the present paper to solve this problem completely. Instead we focus on a zero--order description of the energy stored in turbulence in the simulated boxes, and for this purpose the basic formalism described below should be sufficient. \begin{figure*} \includegraphics[width=0.49\textwidth]{74_new_tfract_v.eps} \includegraphics[width=0.49\textwidth]{74_tfract_v.eps}\\ \includegraphics[width=0.49\textwidth]{74_new_tfract_suny_v.eps} \includegraphics[width=0.49\textwidth]{74_tfract_suny_v.eps} \caption{Gas velocity field in a slice through the central Mpc of the cluster simulation {\it g72} after subtracting the {\em global} mean bulk velocity of the cluster. The panels on the left are for a run with the original viscosity of SPH, while the panels on the right show the result for the low--viscosity scheme. The underlying colour maps represent the turbulent kinetic energy content of particles, inferred using the local velocity method (upper row) or the standard velocity method (lower row). For the local velocity method a conservative $64^3$ grid is used in the TSC smoothing.
The cluster centre is just below the lower-left corner of the images. The vertical lines in the upper row show where the 1--dimensional profiles for the simulated radio--emission of Fig.~\ref{fig:radio1d} are taken.} \label{fig:vel_mean} \end{figure*} \begin{figure*} \includegraphics[width=0.49\textwidth]{74_new_tfract_disp.eps} \includegraphics[width=0.49\textwidth]{74_tfract_disp.eps} \caption{Same slice of the gas velocity field as in Figure~\ref{fig:vel_mean} of cluster {\it g72}, after subtracting the {\em local} mean velocity of the cluster. The panel on the left is for a run with the original viscosity of SPH, while the panel on the right shows the result for the low--viscosity scheme.} \label{fig:vel_local} \end{figure*} A crucial issue in describing turbulent fields in the ICM is the distinction between the large-scale coherent velocity field and small-scale `random' motions. Unfortunately, the definition of a suitable mean velocity field is not unambiguous because the involved scale of averaging introduces a certain degree of arbitrariness. Perhaps the simplest possible procedure is to take the mean velocity computed for the cluster volume (calculated, for example, within a sphere of radius $R_{\rm vir}$) as the coherent velocity field, and then to define the turbulent velocity component as a residual to this velocity. This simple approach (hereafter {\it standard} approach) has been widely employed in previous works \citep[e.g.,][]{1999rgm87conf..106N,2003AstL...29..783S}, and led to the identification of ICM turbulence in these studies. However, an obvious problem with this method is that such a global subtraction fails to distinguish a pure laminar bulk flow from a turbulent pattern of motion. Note that such large-scale laminar flows are quite common in cosmological simulations, where the growth of clusters causes frequent infalls and accretions of sub--halos. This infall of substructures is presumably one of the primary injection mechanisms of ICM turbulence.
To avoid this problem, a mean velocity field smoothed on scales smaller than the whole box can be used, and then the field of velocity fluctuations is defined by subtracting this mean--local velocity, $\overline{\vec{v}}(\vec{x})$, from the individual velocities $\vec{v}_{i}$ of each gas particle. We note that if the smoothing scale is chosen too small, one may risk losing large eddies in the system if they are present, but at least this procedure does not overestimate the level of turbulence. Following this second approach (hereafter {\it local--velocity} approach), we construct a mean local velocity field $\overline{\vec{v}}(\vec{x})$ on a uniform mesh by assigning the individual particles to a mesh with a {\it Triangular Shape Cloud} (TSC) window function. The mesh covers a region of 1.0 comoving Mpc on a side and typically has between $8^3$ and $64^3$ cells, which is coarse enough to avoid undersampling effects. The equivalent width of the TSC kernel is approximately 3 grid cells in each dimension, corresponding to a smoothing scale of $\approx 360-45$ kpc, respectively. As our analysis is restricted to the highest density region in the clusters, the scale for the TSC--smoothing is always larger than the SPH smoothing lengths of the gas particles, which typically span the range $7.5 - 15\, h^{-1}{\rm kpc}$ in the box we consider. We then evaluate the local velocity dispersion at the position $\vec x$ of each mesh cell over all particles $a$ in the cell by: \begin{equation} \sigma^2_{ij}(\vec x) \simeq \left< \left[v_{a,i} - \bar{v}_i(\vec x)\right]\left[v_{a,j} - \bar{v}_j(\vec x)\right] \right>_{\rm cell}, \label{sigmai} \end{equation} where $i$ and $j$ are the indices for the three spatial coordinates, and $\langle \rangle_{\rm cell}$ denotes the average over particles within each cell.
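The TSC assignment and the per-cell dispersion of equation (\ref{sigmai}) can be sketched as follows. This is a simplified, stand-alone Python illustration written for this text, not the actual analysis code: it assumes periodic wrapping at the mesh edges, and all function names are our own.

```python
import numpy as np

def tsc_weights(x):
    """1D TSC weights for coordinates x given in units of the cell size.
    Returns, per particle, the three neighbouring cell indices and the
    triangular-shaped-cloud kernel weights (which sum to 1)."""
    i = np.floor(x).astype(int)
    d = x - (i + 0.5)                              # offset from host-cell centre
    idx = np.stack([i - 1, i, i + 1])
    w = np.stack([0.5 * (0.5 - d) ** 2,            # left neighbour
                  0.75 - d ** 2,                   # host cell
                  0.5 * (0.5 + d) ** 2])           # right neighbour
    return idx, w

def local_velocity_field(pos, vel, ncell, boxsize):
    """TSC-smoothed mean velocity on an ncell^3 mesh, and the per-cell velocity
    dispersion (summed over the three components) of the hosted particles."""
    x = pos / boxsize * ncell                      # positions in cell units
    ix, wx = tsc_weights(x[:, 0])
    iy, wy = tsc_weights(x[:, 1])
    iz, wz = tsc_weights(x[:, 2])
    vbar = np.zeros((ncell, ncell, ncell, 3))
    wsum = np.zeros((ncell, ncell, ncell))
    for a in range(3):                             # 3x3x3 TSC stencil
        for b in range(3):
            for c in range(3):
                ww = wx[a] * wy[b] * wz[c]
                ii = (ix[a] % ncell, iy[b] % ncell, iz[c] % ncell)
                np.add.at(wsum, ii, ww)
                for k in range(3):
                    np.add.at(vbar[..., k], ii, ww * vel[:, k])
    m = wsum > 0
    vbar[m] /= wsum[m][:, None]
    # dispersion: average of |v - vbar|^2 over the particles in each host cell
    host = tuple(np.clip(np.floor(x[:, k]).astype(int), 0, ncell - 1)
                 for k in range(3))
    dv2 = np.sum((vel - vbar[host]) ** 2, axis=1)
    sig2 = np.zeros((ncell, ncell, ncell))
    cnt = np.zeros((ncell, ncell, ncell))
    np.add.at(sig2, host, dv2)
    np.add.at(cnt, host, 1.0)
    sig2[cnt > 0] /= cnt[cnt > 0]
    return vbar, sig2
```

The unbuffered `np.add.at` accumulation is essential here, since many particles deposit weight into the same cell.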
\noindent The diagonal part of the tensor of the correlation function of the field of velocity fluctuations at $r=0$ in the simulated box can then be approximated by \begin{equation} R_{ii}({r}=0)\simeq \left< \sigma^2_{ii}(\vec{x}) \right>_{\rm box} . \label{rii} \end{equation} \noindent Based on Equation (\ref{energy_ek}), we can then estimate the energy density of the turbulence in real space as \begin{equation} \rho(\vec{x}) \int E(k) \, {\rm d}k \sim {1\over 2} \rho(\vec{x}) \times \left\lbrace \begin{array}{lll} \left< \sigma^2_{ii}(\vec{x}) \right>_{\rm box} \, , \\ \\ \left< v^2_{i}(\vec{x}) \right>_{\rm box} \, , \end{array} \right. \label{ek_box} \end{equation} \noindent in the local--velocity and standard case, respectively. Here $\rho(\vec{x})$ is the gas density within the cells. The subtraction of a local velocity from the velocity distribution of the particles is expected to efficiently filter out the contribution from laminar bulk--flows with a scale $\geq$ 3 times the size of the cells used in the TSC smoothing. However, a large-scale turbulent velocity field component, if it exists, would also be suppressed, so that this procedure can be expected to reduce the turbulent velocity field to a certain degree. As shown in Figure~\ref{fig:disp_resol}, this depends on the resolution of the mesh used in the TSC assignment. Fig.~\ref{fig:disp_resol} shows that the increase of the turbulent velocity dispersion with the cell size is not dramatic for cell sizes larger than 100 kpc. We find (Vazza et al., in prep.) that a TSC smoothing with larger cell sizes would not efficiently filter out contributions from laminar bulk--motions. It can be tentatively concluded that the local velocity approach with a smoothing with $16^3-32^3$ cells in the central $(1.0\,{\rm Mpc})^{3}$ volume catches the bulk of the turbulent velocity field in the simulated box.
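The difference between the standard and local--velocity estimates can be illustrated with a toy configuration (entirely synthetic, and using plain per-cell averages instead of the TSC smoothing, for brevity): a laminar shear flow superposed on genuinely random motions.

```python
import numpy as np

# Toy illustration (synthetic data, our own setup): random motions with a 3D
# dispersion of sqrt(3)*30 ~ 52, plus a large-scale laminar shear flow.
# Subtracting only the global mean counts the shear flow as "turbulence";
# subtracting per-cell mean velocities filters it out.
rng = np.random.default_rng(1)
n, ncell, box = 200_000, 16, 1000.0
pos = rng.random((n, 3)) * box
vel = rng.normal(0.0, 30.0, (n, 3))              # true random motions
vel[:, 0] += 500.0 * (pos[:, 1] / box - 0.5)     # laminar shear flow in x

# "standard" approach: subtract the global mean velocity only
std2 = np.mean(np.sum((vel - vel.mean(axis=0)) ** 2, axis=1))

# "local-velocity" approach: subtract the mean velocity of each particle's cell
cell = np.minimum((pos / box * ncell).astype(int), ncell - 1)
flat = np.ravel_multi_index(tuple(cell.T), (ncell,) * 3)
cnt = np.bincount(flat, minlength=ncell ** 3)
vbar = np.zeros((ncell ** 3, 3))
for k in range(3):
    vbar[:, k] = np.bincount(flat, weights=vel[:, k], minlength=ncell ** 3)
vbar[cnt > 0] /= cnt[cnt > 0][:, None]
loc2 = np.mean(np.sum((vel - vbar[flat]) ** 2, axis=1))

# the local estimate recovers the input dispersion (~52), while the standard
# estimate is inflated by the shear flow to roughly three times that value
print(np.sqrt(std2), np.sqrt(loc2))
```

The residual overestimate of the local method comes only from the shear variation across a single cell, mirroring the mild cell-size dependence discussed in the text.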
Therefore, if not specified otherwise, all the numerical quantities given in the following are obtained using a TSC--assignment procedure based on $32^3$ cells. A more detailed discussion of this method and tests of the parameters involved is reported elsewhere (Vazza et al., in prep.). Figures \ref{fig:vel_mean} and \ref{fig:vel_local} give examples of the turbulent velocity field calculated with both the standard and local velocity methods, showing the same galaxy cluster in both cases, but in one case simulated with the signal--velocity variant of the viscosity, and in the other with the new time-dependent low--viscosity scheme. Note that we here selected a situation where a large (ca. 500 kpc long) laminar flow pattern can be easily identified close to the centre of one of our simulated clusters ({\it g72}). When the mean cluster velocity field is subtracted as in Figure~\ref{fig:vel_mean}, large residual bulk flow patterns remain visible, caused by a substructure moving through the cluster atmosphere. We colour-coded the turbulent kinetic energy of particles, $E_t(\vec{x}) \sim 1/2\, \rho(\vec{x}) \sigma_v(\vec{x})^2$, after subtracting the local mean velocity field (here smoothed onto a $64^{3}$ mesh) for the upper panels and after subtracting the {\em global} mean bulk velocity of the cluster for the lower panels. One can see that fluid instabilities of Kelvin-Helmholtz type are growing along the interfaces of the large laminar flow pattern, visible in the upper left panel. As expected, the strength of this turbulent velocity field is considerably larger in the simulation obtained with the new low--viscosity scheme, providing evidence that such instabilities are less strongly damped in this scheme. This can also be seen from the longer flow field lines in Figure~\ref{fig:vel_local}. Figure~\ref{fig:vel_mean} also visually confirms the differences in the two approaches of filtering the velocity field.
Whereas the local--velocity approach highlights the energy within the velocity structure along boundary layers, the energy within the large bulk motions is preferentially selected when only subtracting the {\em global} mean bulk velocity. The total cumulative kinetic energy in the random gas motions inside our mesh (centred on the cluster centre) reaches 5\%-30\% of the thermal energy for the simulations using the new, low--viscosity scheme, whereas it stays at much lower levels ($\approx$2\%-10\%) when the signal velocity parameterisation of the viscosity is used. If the original viscosity scheme is used, it is typically at even lower values ($\approx$1\%-5\%). In general, we find that more massive clusters tend to have a higher fraction of turbulent energy content. However, given that our simulations have fixed mass resolution, this trend could in principle simply reflect a numerical resolution effect. In order to get further information on this, we have re-simulated one of the smaller clusters ({\it g676}) with 6 times better mass resolution using the signal velocity parameterisation of the viscosity. At $z=0$, this cluster is then resolved by nearly as many particles as the massive clusters simulated with our standard resolution. We find that for this high-resolution simulation the level of turbulence ($\approx 3$\%) is increased compared with the normal resolution ($\approx 2$\%), but it stays below what we found for the low--viscosity scheme at our normal resolution ($\approx 5$\%). This confirms two expectations. First, the low viscosity scheme effectively increases the resolution at which SPH simulations can resolve small-scale velocity structure, which is otherwise already suppressed on scales larger than the smoothing length by spurious viscous damping effects due to the artificial viscosity.
Second, the amount of turbulence in the high resolution version of {\it g676} is still less than what we find with the same viscosity implementation in the larger systems, and even much smaller than what we find with the low--viscosity scheme in the large clusters. This tentatively suggests that the trend of a mass-dependence of the importance of turbulence is not caused by numerical effects. Note that with a fixed physical scale of $1\,{\rm Mpc}$ we are sampling different fractions of $R_{\rm vir}$ in clusters of different masses. However, if, in case of the less massive system, we restrict the sampling relative to $R_{\rm vir}$ to measure within comparable volumes, the fraction of turbulent energy content found in the small cluster increases roughly by a factor of two. We thereby still find a significant trend with mass when measuring turbulence within a fixed fraction of $R_{\rm vir}$. It should be mentioned, however, that unless the dissipation of turbulence on small scales is modelled correctly in a physically motivated way, the different formation time scales of systems with different masses can potentially also contribute to such a trend. In order to verify that our method for measuring the local velocity dispersion gives reasonable values, Figure~\ref{fig:p_prof} shows a radial profile of the volume-weighted, relative difference in thermal pressure between the signal velocity based and low--viscosity runs. Here we used an average over the three massive clusters ({\it g1}, {\it g51} and {\it g72}), which have comparable masses. The solid line shows the relative difference in radial shells and indicates that the turbulent pressure support can reach up to 50\% in the central part and drops to zero at approximately 0.2 $R_{\rm vir}$. The dashed line shows the cumulative difference, which over the total cluster volume contributes between 2\% and 5\% to the total pressure.
The diamonds mark the measurement inferred from the local velocity dispersion within centred boxes of various sizes. We also calculate the difference between the signal velocity based and low--viscosity runs using the mean values over the three clusters. Qualitatively, there is good agreement of the results obtained with this approach with the cumulative curve. This confirms that our method to infer the turbulent energy content from the local velocity dispersion of the gas is meaningful. Note that the temperature which is used to calculate the pressure is determined by strong shock heating. As different resimulations of the same object can lead to small (but in this context non-negligible) timing differences, this can introduce sizable variations in the calculated pressure, especially during merging events. We verified that these differences for individual clusters are significantly larger than the differences between the cumulative curve (dashed line) and the data points from the local velocity dispersion (diamonds). Therefore we can only say that the two methods agree well within their uncertainties. Finally, the inlay of Figure~\ref{fig:p_prof} gives the absolute turbulent pressure contribution inferred from the local velocity dispersion for the low--viscosity scheme and the two variants of the original viscosity, respectively. It seems that using the signal-velocity based viscosity in general already leads to more turbulence than the original viscosity, but the time-dependent treatment of the viscosity works even more efficiently. \begin{figure} \includegraphics[width=0.5\textwidth]{p_prof_vol.eps} \caption{Radial profile of the relative thermal pressure difference averaged over three nearly equally massive clusters ({\it g1}, {\it g51} and {\it g72}), comparing the signal velocity based and low--viscosity runs (lines). The dashed line is the cumulative difference, whereas the solid line marks the profile in radial shells.
The diamonds mark the difference in the turbulent energy support we inferred from the local velocity dispersion within several concentric cubes of different sizes ($l_{\rm cube}=2r$) for the same runs. This should be compared with the dashed line. The inlay shows the absolute value inferred from the local velocity dispersion from the different viscosity parameterisations, respectively.} \label{fig:p_prof} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{figura7.eps} \caption{The energy spectra of the standard velocity fluctuations ({\it upper curves}) and of the local velocity fluctuations ({\it lower curves}) of gas particles in the central $500^{3}{\rm kpc}^{3}$ region of a cluster simulated with the original recipe for the artificial viscosity, with signal--velocity and with the low viscosity implementation. Additionally a Kolmogorov slope (dot-dot-dashed) is drawn for comparison.} \label{fig:spect_1} \end{figure} Although we are using a formalism which is suitable only for isotropic and homogeneous turbulence, the study of the turbulent energy spectrum may provide some useful insight. In the local mean velocity approach, we can obtain the diagonal part of the turbulent energy spectrum using Equation~(\ref{Phi_ij}), with $R_{ii}$ approximated as \begin{equation} R_{ii}(r)= \left< \left[{v}_{a,i} - \overline{{v}}_i(\vec{x}_a) \right] \left[{v}_{b,i} - \overline{{v}}_i(\vec{x}_b) \right] \right>_{\rm box}, \label{rii_tsc} \end{equation} where $\overline{\vec{v}}(\vec{x}_a)$ is the TSC--mean velocity of the cell which contains the point $\vec{x}_a$, and the average is over all pairs $(a,b)$ in the box with a certain distance $r$. \noindent In the standard approach, we would here subtract the centre-of-mass velocity of the cluster instead. 
A major problem for estimating the correlation functions $R_{ii}(r)$ in this way, and with the energy spectrum calculated from SPH simulations (and in general from adaptive resolution approaches), is given by the non--uniform sampling of the point masses in the simulated box. To reduce this problem we focus on regions corresponding to the cores of galaxy clusters. Here the requirement of isotropic and homogeneous turbulence is hopefully better fulfilled. Also the density profile is relatively flat, such that the sampling with gas particles is not too non-uniform. In addition, we estimate the correlation function as an average of dozens of Monte--Carlo extractions of gas particles from the simulated output, where we picked one particle from each of the $(15.6\,{\rm kpc})^{3}$ cells in order to have a uniform, volume-weighted set of particles. In Figure~\ref{fig:spect_1}, we show examples of the energy spectra we obtained for the two approaches by Fourier-transforming the measurements of $R_{ii}(r)$. The energy spectra for the two methods of treating the mean velocities are reasonably similar in shape, but the spectrum calculated with the local mean velocity has a lower normalisation, independent of resolution. This is expected because the TSC smoothing filters out contributions from laminar motions, and may also damp the turbulent field at some level. Both spectra show a slope nearly as steep as a Kolmogorov spectrum (which has $E(k) \propto k^{-5/3}$) at intermediate scales, but exhibit a significant flattening at smaller scales (i.e. large $k$). The flattening at small scales could be caused by numerical effects inherent in the SPH technique, where an efficient energy transfer to small-scale turbulent cells on scales approaching the numerical resolution is prevented, and thus a complete cascade cannot develop.
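A minimal sketch of this pair-based estimate of $R_{ii}(r)$ and the subsequent transform to $E(k)$ might look as follows. This is our own simplified illustration: it draws random pairs rather than the uniform volume-weighted extractions described above, and for an isotropic field it replaces the full 3D integral of equation (\ref{Phi_ij}) by the equivalent spherical sine transform, $E(k) = (k/\pi)\int r\,\sin(kr)\,R_{ii}(r)\,{\rm d}r$.

```python
import numpy as np

def correlation_function(pos, dvel, rbins, npairs=100_000, rng=None):
    """Monte-Carlo estimate of R_ii(r): average over random particle pairs of
    the scalar product of their velocity fluctuations, binned in separation."""
    if rng is None:
        rng = np.random.default_rng(0)
    a = rng.integers(0, len(pos), npairs)
    b = rng.integers(0, len(pos), npairs)
    r = np.linalg.norm(pos[a] - pos[b], axis=1)
    prod = np.sum(dvel[a] * dvel[b], axis=1)       # summed over components i
    which = np.digitize(r, rbins) - 1
    R = np.zeros(len(rbins) - 1)
    for i in range(len(R)):
        sel = which == i
        if sel.any():
            R[i] = prod[sel].mean()
    return R

def energy_spectrum(rmid, R, k):
    """E(k) = 2 pi k^2 Phi_ii(k) for an isotropic field, via the spherical
    (sine) transform of R_ii(r), with a simple quadrature over the bins."""
    dr = np.gradient(rmid)
    return np.array([(kk / np.pi) * np.sum(rmid * np.sin(kk * rmid) * R * dr)
                     for kk in k])
```

For anisotropic or inhomogeneous fields this sine transform is only a zero-order characterisation, in the same spirit as the formalism adopted in the text.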
Additionally, the lack of numerical viscosity in the low--viscosity scheme can in principle lead to an increase of the noise level within the velocity field representation by the SPH particles on scales below the smoothing length. Such noise could in general contribute to the flattening at small scales. It is however not clear how to separate this noise component from a turbulent cascade reaching scales similar to or below the smoothing length. Therefore one focus of future investigations clearly has to be on this issue. It is however important to note that the largest turbulent energy content (especially at small scales) is always found in the clusters simulated with the low--viscosity scheme. This is particularly apparent in the energy spectra when the local velocity approach is used, and suggests that the energy spectrum obtained with the standard approach is significantly affected by laminar bulk--flows, which are not sensitive to a change in the parameterisation of the artificial viscosity. \begin{figure*} \includegraphics[width=0.49\textwidth]{g1_gas_new_Lx.eps} \includegraphics[width=0.49\textwidth]{g1_gas_nv_Lx.eps}\\ \includegraphics[width=0.49\textwidth]{g1_gas_new_Tsl.eps} \includegraphics[width=0.49\textwidth]{g1_gas_nv_Tsl.eps}\\ \caption{ Unsharp mask images of X-ray maps for one of the massive clusters ({\it g1}), comparing runs with the low--viscosity scheme (right panels) with the original SPH scheme (left panels). The top row gives maps of surface brightness, while the bottom one compares maps of the `spectroscopic like' temperature, both within 2 Mpc centred on the cluster. We can see evidence for an increased level of turbulent motions behind the infalling substructures, and a break-up of fluid interfaces for the reduced viscosity scheme is clearly visible. Also, there is a general increase of turbulence (appearing as lumpiness) towards the centre.
However, the most prominent signals in the map stem from the higher density or different temperature of substructures relative to their surroundings, or from shocks and contact discontinuities. For this reason, turbulence can be better identified in pressure maps (see Figure~\ref{fig:pmaps}). } \label{fig:lxtxmaps} \end{figure*} \begin{figure*} \includegraphics[width=0.49\textwidth]{g1_gas_new_SZ.eps} \includegraphics[width=0.49\textwidth]{g1_gas_nv_SZ.eps}\\ \includegraphics[width=0.49\textwidth]{g1_gas_new_Px.eps} \includegraphics[width=0.49\textwidth]{g1_gas_nv_Px.eps}\\ \caption{Unsharp mask images of pressure maps of one of the massive clusters ({\it g1}), comparing runs with the low--viscosity scheme (right panels) with the original SPH scheme (left panels). We also compare different methods for determining the pressure maps. The panels of the top row show Compton-$y$ maps (which can be associated with projected, thermal pressure maps), whereas the maps in the bottom row are pressure maps derived from X-ray surface brightness and spectroscopic temperature maps, see equations \ref{eqn:p1} and \ref{eqn:p2}. Both kinds of maps show an increase of structure (lumpiness) for the simulation which uses the reduced viscosity scheme (right panels) when compared with the original SPH viscosity scheme (left panels). The maps based on X-ray observables show a larger degree of lumpiness due to the gas around substructures, especially in the vicinity of infalling subgroups.} \label{fig:pmaps} \end{figure*} \section{Cluster Properties} \label{sec:cluster} Different levels of small-scale random gas motions within the ICM have only mild effects on global properties of clusters like mass or temperature, as evidenced by the measurements in Table~\ref{tab:char}. However, additional kinetic energy in turbulent motions changes the central density and entropy structure, which in turn has a sizable effect on the X-ray luminosity.
We investigate the resulting changes in cluster scaling relations and radial profiles in more detail within this section. \subsection{Maps} The presence of a significant turbulent energy component in the intra-cluster medium manifests itself in a modification of the balance between the gravitational potential and the gas pressure. There are in fact observational reports that claim to have detected such fluctuations in pressure maps derived from X-ray observations \citep{2004A&A...426..387S}. We here calculate artificial pressure maps $P_{\rm art}$ for our simulations, based on surface brightness maps ($L_x$) and spectroscopic-like \citep{2004MNRAS.354...10M} temperature ($T_{sl}$) maps. This allows artificial pressure maps to be estimated as \begin{equation} P_{\rm art}=n\,T_{sl}, \label{eqn:p1} \end{equation} where we defined \begin{equation} n=\left(L_x/\sqrt{T_{sl}}\right)^{\frac{1}{2}}. \label{eqn:p2} \end{equation} Figures~\ref{fig:lxtxmaps} and \ref{fig:pmaps} show a comparison of a number of cluster maps produced using an unsharp-mask technique of the form \begin{equation} \mathrm{Image}_{\mathrm{unsharp\;mask}} = \mathrm{Image}-\mathrm{Smoothed}(\mathrm{Image},\sigma), \end{equation} where a Gaussian smoothing with FWHM of $\sigma=200$ kpc was applied. We analyse maps of the X-ray surface brightness, spectroscopic-like temperature, `true' pressure maps (e.g.~based on Compton $y$) and artificial pressure maps constructed as described above. All maps show the central 2 Mpc of the cluster run {\it g1}, simulated with the low--viscosity scheme (right panels) compared with the original SPH scheme (left panels). Disregarding the large contribution by substructure in all the X-ray related maps (therefore also in the artificial pressure map), all types of maps show clear signs of turbulence. It is noticeable in both runs, but it has a much larger extent and intensity in the low--viscosity run.
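The unsharp-mask filtering and the artificial pressure construction of equations \ref{eqn:p1} and \ref{eqn:p2} can be sketched as follows. This is a minimal stand-alone version for illustration: the FFT-based periodic Gaussian smoothing, the reading of the quoted $\sigma=200$ kpc as the FWHM of the Gaussian, and the function names are our own assumptions.

```python
import numpy as np

def gaussian_smooth(image, sigma_pix):
    """Periodic Gaussian smoothing via FFT (a stand-in for the smoothing used
    in the unsharp-mask equation; sigma is given in pixel units)."""
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    kernel = np.exp(-2.0 * (np.pi * sigma_pix) ** 2 * (kx ** 2 + ky ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * kernel))

def unsharp_mask(image, fwhm, pixel_size):
    """Image minus its Gaussian-smoothed copy; fwhm and pixel_size in the same
    units (e.g. kpc). We interpret the sigma = 200 kpc of the text as a FWHM."""
    sigma_pix = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pixel_size
    return image - gaussian_smooth(image, sigma_pix)

def artificial_pressure(Lx, Tsl):
    """P_art = n * T_sl with n = (L_x / sqrt(T_sl))^(1/2)."""
    n = np.sqrt(Lx / np.sqrt(Tsl))
    return n * Tsl
```

Since the smoothing kernel preserves the image mean, the unsharp-mask image sums to zero, so only deviations from the local background (such as turbulent lumpiness) survive the filtering.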
Note in particular the turbulent motions (appearing as lumpiness in the unsharp-mask images) in the wake of infalling substructures, and the earlier break-up of fluid interfaces when the new, reduced viscosity scheme is used. Pressure maps (and therefore SZ maps) are arguably the most promising type of map when searching for observational imprints of turbulence. Apart from reflecting the large scale shape of the cluster, they are known to be relatively featureless, because most of the substructures in clusters are in pressure equilibrium with their local surroundings, making them in principle invisible in pressure maps. On the other hand, the contribution of the turbulent motion to the local pressure balance can be expected to leave visible fluctuations in the thermal pressure map. This can indeed be seen nicely in the pressure (e.g.~SZ) maps in Figure~\ref{fig:pmaps}. Note that the amplitudes of the turbulent fluctuations in the case of the low--viscosity run are larger and also spatially more extended in the core of the cluster. Artificial pressure maps constructed from the X-ray observables still show such fluctuations, but they are partially swamped by the signatures of the infalling substructures, making it difficult to quantify the amount of turbulence present in clusters using such X-ray based artificial pressure maps. The small displacements seen in the substructure positions between the two runs are due to small differences in their orbital phases. Besides the general problem of precisely synchronising cluster simulations in which different dynamical processes are involved, it is well known \citep[e.g.][]{2004MNRAS.350.1397T,2005astro.ph..4206P} that the interaction of the gas with its environment can significantly change the orbits of infalling substructure.
The different efficiencies in stripping the gas from the infalling substructure in the simulations with different viscosity prescription can therefore lead to small differences in the timing and orbits between the two simulations. \subsection{Scaling Relations} In Figure~\ref{fig:t_t}, we compare the mass-weighted temperature of our galaxy clusters for simulations with the original viscosity and for runs with the low--viscosity scheme. There are no significant changes. Comparing the X-ray luminosity instead, we find that it drops significantly by a factor of $\approx 2$ for clusters simulated with the low--viscosity scheme, as shown in Figure~\ref{fig:lx_lx}. This is quite interesting in the context of the long-standing problem of trying to reproduce the observed cluster scaling relations in simulations. In particular, since non radiative cluster simulations tend to produce an excess of X-ray luminosity, this effect would help. However, one has to keep in mind that the inclusion of additional physical processes like radiative cooling and feedback from star formation can have an even larger impact on the cluster luminosity, depending on cluster mass, so a definite assessment of the scaling relation issue has to await simulations that also include these effects. \begin{figure} \includegraphics[width=0.5\textwidth]{t_t_turb.eps} \caption{Comparison of the virial temperature of the 9 clusters when different parameterisations of the viscosity are employed. The solid line marks the one-to-one correspondence.} \label{fig:t_t} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{lx_lx_turb.eps} \caption{Comparison of the bolometric luminosity of the 9 clusters when different parameterisations of the viscosity are employed. The solid line marks the one-to-one correspondence. 
It is evident that clusters with a larger degree of turbulence have a lower luminosity.} \label{fig:lx_lx} \end{figure} \subsection{Radial profiles} The presence of turbulence manifests itself in an increase of the velocity dispersion of the cluster gas, especially towards the centre, while the dark matter velocity dispersion should be unaffected. In Figure~\ref{fig:vdisp}, we compare the velocity dispersion of gas and dark matter, averaged over the low- and high-mass clusters in our set. As expected, the velocity dispersion of the dark matter does not change in the low--viscosity simulations, where a larger degree of turbulence is present in the ICM. On the other hand, the central velocity dispersion of the gas increases, reaching of order $400\,{\rm km\,s^{-1}}$ for our massive clusters. As the gas is in pressure equilibrium with the unchanged gravitational potential, the hydrodynamic gas pressure will be correspondingly lower in the centre due to the presence of these random gas motions. In Figure~\ref{fig:temp_prof}, we show the mean cluster temperature profiles, which show only a very mild trend of increasing temperature in the central part of clusters when using the new, low--viscosity scheme. However, the central gas density drops significantly in the low--viscosity scheme, as shown in Figure~\ref{fig:dens_prof}. This change in density is restricted to the inner parts of the cluster, roughly to within $\sim 0.1\,R_{\rm vir}$, which may be taken as an indication of the size of the region where turbulent motions are important. Quite interestingly, the presence of turbulence also changes the entropy profiles. In Figure~\ref{fig:entr_prof}, we show the radial entropy profiles of our clusters, which in the case of the low--viscosity scheme exhibit an elevated level of entropy in the core, with a flattening similar to that inferred from X-ray observations.
It is remarkable that this central increase of entropy occurs despite the fact that the source of entropy generation, the artificial viscosity, is in principle less efficient in the low--viscosity scheme. There are two main possibilities that could explain this result. Either the low--viscosity scheme allows shocks to penetrate deeper into the core of the cluster and its progenitors such that more efficient entropy production in shocks occurs there, or alternatively, the reduced numerical viscosity changes the mixing processes of infalling material, allowing higher entropy material that falls in late to end up near the cluster centre. In order to investigate a possible change of the accretion behaviour, we traced back to high redshift all particles that end up at $z=0$ within 5\% of $R_{\rm vir}$ of the cluster centre. We find that most of the central material is located in the centres of progenitor halos at higher redshift, which is a well known result. However, in the simulations with the time-dependent, low--viscosity scheme, there is a clear increase of the number of particles which are not associated with the core of a halo at higher redshift. We illustrate this with the histograms shown in Figure~\ref{fig:part_hist}, which give the distribution of the distance to the nearest halo in units of $R_{\rm vir}$ of the halo. All particles at distances larger than 1 are not associated with any halo at the corresponding epoch. Compared to the low entropy material that is already bound in a dense core at this epoch, this diffuse gas is brought to much higher entropy by shocks. When it is later accreted onto the cluster and mixed into the core, it can then raise the entropy level observed there. We note that Eulerian hydrodynamics simulations also show a flattening of the entropy profile.
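The tracing analysis just described (selecting the particles that end up near the cluster centre at $z=0$ and measuring their distance to the nearest halo at high redshift, in units of that halo's $R_{\rm vir}$) could be sketched as follows; this is illustrative Python, and the function name and the use of a k-d tree are our own choices:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_halo_distance(part_pos, halo_pos, halo_rvir):
    """For each traced particle, the distance to the nearest halo centre
    in units of that halo's R_vir.  Values > 1 flag diffuse gas that is
    not associated with any halo at this epoch."""
    tree = cKDTree(halo_pos)
    dist, idx = tree.query(part_pos)
    return dist / np.asarray(halo_rvir)[idx]
```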
While the exact degree to which numerical and physical (turbulent) mixing contribute to producing this result is still a matter of debate, it is intriguing that a larger level of turbulence in the SPH calculations substantially alleviates the discrepancies in the results otherwise obtained with the two techniques \citep{1999ApJ...525..554F,2003astro.ph.12651O}. \begin{figure} \includegraphics[width=0.5\textwidth]{veldisp_pro.eps} \caption{Radial velocity dispersion profile for dark matter (black) and gas (blue) particles. The thick lines represent the average over the 4 massive clusters, whereas the thin lines give the average over the 5 low mass systems. The dashed lines are drawn from the original viscosity simulations, the solid lines from the low--viscosity simulations.\label{fig:vdisp}} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{dens_pro.eps} \caption{Radial gas density profile. The thick lines represent the average over the 4 massive clusters, whereas the thin lines give the average over the 5 low mass systems. The dashed lines are drawn from the original viscosity simulations, the solid lines from the low--viscosity simulations.\label{fig:dens_prof}} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{temp_pro.eps} \caption{Mass-weighted gas temperature profile. The thick lines represent the average over the 4 massive clusters, whereas the thin lines give the average over the 5 low mass systems. The dashed lines are drawn from the original viscosity simulations, the solid lines from the low--viscosity simulations.\label{fig:temp_prof}} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{entr_pro.eps} \caption{Radial entropy profiles of the ICM gas. Thin lines are individual profiles for the 9 clusters, thick lines are averages. 
The dashed lines are drawn from the original viscosity simulations, the solid lines from the low--viscosity simulations.\label{fig:entr_prof}} \end{figure} \section{Metal Lines} \label{sec:lines} Turbulent gas motions can lead to substantial Doppler broadening of X-ray metal lines, in excess of their intrinsic line widths. Given the exquisite spectral resolution of upcoming observational X-ray missions, this could be used to directly measure the degree of ICM turbulence \citep{2003AstL...29..783S} by measuring, e.g., the shape of the $6.702\,{\rm keV}$ iron emission line. One potential difficulty in this is that multiple large-scale bulk motions of substructure moving inside the galaxy cluster along the line of sight might dominate the signal. To get a handle on this, we estimate the line-of-sight emission of the $6.702\,{\rm keV}$ iron line within columns through the simulated clusters, where the column size was chosen to be $300\,h^{-1}{\rm kpc}$ on a side, which at the distance of the Coma cluster corresponds roughly to one arcmin, the formal resolution of {\small ASTRO-E2}. For simplicity, we assign every gas particle a constant iron abundance and an emissivity proportional to $n_e^2\times f(T_e) \times \Delta V$, where $n_e$ is the electron density, and $\Delta V\propto \rho^{-1}$ is the volume represented by the particle. As a further approximation we set the electron abundance equal to unity. We also neglect thermal broadening and other close lines (like the $6.685\,{\rm keV}$ iron line), given that the $6.702\,{\rm keV}$ iron line is clearly the strongest. In Figure~\ref{fig:feline}, we show the resulting distributions for several lines of sight, here distributed on a grid with $-500$, $-250$, $0$, $250$ and $500\,h^{-1}{\rm kpc}$ impact parameter in the $x$-direction, and $-250$, $0$ and $250\,h^{-1}{\rm kpc}$ impact parameter in the $y$-direction, respectively.
The different lines in each panel correspond to simulations with the signal-velocity based viscosity (dashed line) and with the time-dependent low--viscosity scheme (solid lines). Both results have been normalised to the total cluster luminosity, such that the integral under the curves corresponds to the fraction of the total luminosity. \begin{figure*} \includegraphics[width=1.0\textwidth]{acreat_hist.eps} \caption{ Distribution of the distance of particles to their nearest halo at high redshift. The particles selected here end up within 5\% of $R_{\rm vir}$ at $z=0$. The dashed lines are for the original viscosity scheme, while the solid lines mark the result for the low--viscosity simulations.} \label{fig:part_hist} \end{figure*} We note that this measurement is very sensitive to small timing differences between different simulations, and therefore a comparison of the same cluster run with different viscosity schemes should be carried out in a statistical fashion, even if some features look very similar in both runs. In general we confirm previous findings \citep[e.g.][]{2003AstL...29..791I} that large bulk motions can lead to spectral features which are several times wider than expected based on thermal broadening alone. Additional complexity is added by beam smearing effects, thermal broadening, and by the local turbulence in the ICM gas, such that an overall very complex line shape results. In our simulations with the low--viscosity scheme, where we have found an increased level of fluid turbulence, the final line shapes are indeed more washed out. However, the complexity of the final line shapes suggests that it will be very difficult to accurately measure the level of fluid turbulence with high resolution spectra of X-ray emission lines, primarily because of the confusing influence of large-scale bulk motions within galaxy clusters.
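The construction of such line-of-sight spectra can be sketched as follows (schematic Python; the temperature dependence $f(T)$ of the emissivity is left as a simple placeholder since its exact form is not specified above, and all names are ours):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def line_spectrum(v_los, n_e, T, dV, energy_bins, E0=6.702):
    """Emission-weighted histogram of Doppler-shifted line energies (keV).
    Each particle contributes a weight ~ n_e^2 * f(T) * dV; here
    f(T) = sqrt(T) is a placeholder emissivity.  Thermal broadening and
    nearby lines are neglected, as in the text."""
    w = n_e**2 * np.sqrt(T) * dV
    E = E0 * (1.0 + np.asarray(v_los) / C_KMS)
    hist, _ = np.histogram(E, bins=energy_bins, weights=w)
    return hist / w.sum()  # integral equals the fraction of total emission
```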
\begin{figure*} \includegraphics[width=1.0\textwidth]{feline_g72_1as_coma.eps} \caption{Distribution of the Doppler-shifted emission of the iron $6.702\,{\rm keV}$ line for 15 lines of sight through the cluster {\it g72}. Every panel corresponds to a column of side length $300\,h^{-1}{\rm kpc}$ through the virial region of the cluster. This roughly corresponds to one arcmin resolution (comparable to the ASTRO-E2 specifications) at the distance of the Coma cluster. The columns from left to right correspond to $-500$, $-250$, $0$, $250$, and $500\,h^{-1}{\rm kpc}$ impact parameter in the $x$-direction, and the rows correspond to $-250$, $0$, and $250\,h^{-1}{\rm kpc}$ impact parameter in the $y$-direction. The dashed lines give results for the original viscosity run, while the solid line is for the low--viscosity run. The thick bar in the center panel marks the expected energy resolution of 12 eV of the XRS instrument on-board ASTRO-E2.} \label{fig:feline} \end{figure*} \section{Application to Radio Halos} \label{sec:radio} One promising possibility to explain the extended radio emission on Mpc-scales observed in a growing number of galaxy clusters is to attribute it to electron acceleration by cluster turbulence \citep[e.g.,][]{1987A&A...182...21S,2001MNRAS.320..365B}. Having high resolution cluster simulations at hand, which thanks to the new viscosity scheme are able to develop significant turbulence within the ICM, it is of interest to explore this possibility here. Obviously, due to the uncertainties in the complex physical processes of dissipation of the turbulent energy -- which up to this point cannot be explicitly modelled in the simulations -- our analysis is limited to a check whether or not turbulent reacceleration can plausibly reproduce some of the main properties of radio halos.
In this scenario, the efficiency of electron acceleration depends on the energy density of magnetohydrodynamic waves (Alfv\'en waves, Fast Mode waves, \ldots), on their energy spectrum, and on the physical conditions in the ICM (i.e., density and temperature of the thermal plasma, strength of the magnetic field in the ICM, number density and spectrum of cosmic rays in the ICM). A number of approaches for studying the acceleration of relativistic electrons in the ICM have been successfully developed by focusing on the case of Alfv\'en waves \citep{2002ApJ...577..658O,2004MNRAS.350.1174B} and, more recently, on Fast Mode waves \citep{2005MNRAS.357.1313C}. It should be stressed, however, that analytical and/or semi--analytical computations are limited to very simple assumptions for the generation of turbulence in the ICM. Full numerical simulations represent an ideal complementary tool for a more general analysis, where the injection of turbulence into the cluster volume by hierarchical merging processes can be studied realistically. Low numerical viscosity and high spatial resolution are however a prerequisite for reliable estimates of turbulence. As we have seen earlier, previous SPH simulations based on original viscosity parameterisations have suppressed random gas motions quite strongly, but the low--viscosity scheme explored here does substantially better in this respect. In this Section, we carry out a first exploratory analysis of the efficiency of electron acceleration derived in the low--viscosity scheme, and we compare it to results obtained with an original SPH formulation. For definiteness, we assume that a fraction $\eta_t$ of the estimated energy content of the turbulent velocity fields in the cluster volume, measured by the local velocity dispersion (equation \ref{sigmai}, section \ref{sec:turbulence}), is in the form of Fast Mode waves.
We focus on these modes since relativistic electrons are mainly accelerated by coupling with large scale modes (e.g., $k^{-1} \geq$ kpc, $k$ being the wave number) whose energy density, under the above assumption, can hopefully be constrained with the numerical simulations in a reliable fashion. In addition, the damping and time evolution of Fast Modes basically depend only on the properties of the thermal plasma and are essentially not sensitive to the presence of cosmic ray protons in the ICM \citep{2005MNRAS.357.1313C}. Relativistic particles couple with Fast Modes via magnetic Landau damping. The necessary condition for Landau damping \citep{1968Ap&SS...2..171M,1979ApJ...230..373E} is $\omega - k_{\Vert} v_{\Vert}=0$, where $\omega$ is the frequency of the wave, $k_{\Vert}$ is the wavenumber projected along the magnetic field, and $v_{\Vert}=v\mu$ is the projected electron velocity. Note that in this case - in contrast to the Alfv\'enic case - particles may also interact with large scale modes. In the collisionless regime, it can be shown that the resulting acceleration rate in an isotropic plasma (modes' propagation and particle momenta) is given by \citep[e.g.,][]{2005MNRAS.357.1313C} \begin{equation} {{ {\rm d} p }\over{{\rm d} t}} \sim 180 \,{{ v_{\rm M}^2 }\over{c}} {{p}\over{B^2}} \int k {W}^{B}_{k} {\rm d}k, \label{dpdt} \end{equation} where $v_{\rm M}$ is the magneto--sonic velocity, and ${W}^{B}_{k}$ is the energy spectrum of the magnetic field fluctuations \citep[e.g.,][]{1973ApJ...184..251B,2005MNRAS.357.1313C}. We estimate the rate of injection of Fast Modes, $I^{FM}_k$, assuming that a fraction, $\eta_t$, of fluid turbulence is associated with these modes. We parameterise the injection rate assuming that turbulence is injected (and also dissipated) in galaxy clusters within a time of the order of a cluster--crossing time, $\tau_{\rm cross}$ \citep[see][for a more detailed discussion]{2005MNRAS.357.1313C,2005astro.ph..5144S}.
One then has: \begin{equation} \int I^{FM}_k \,{\rm d}k \sim \eta_t {{ E_t}\over{\tau_{\rm cross}}} \sim {1\over 2} \eta_t \rho_{\rm gas} \sigma_v^2 \tau_{\rm inject}^{-1} \label{inj} \end{equation} \noindent Here, $\tau_{\rm inject}$ is the time over which a merging substructure is able to inject turbulence in a given volume element in the main cluster. This can be estimated as the size of the subhalo divided by its infalling velocity. As the size of a halo is only a weak function of its mass, we approximate $\tau_{\rm inject}$ with a generic value of $\tau_{\rm inject}=0.5\,{\rm Gyr}$. This is only a very crude estimate and more generally one should think of an effective efficiency parameter $\eta_t^{\rm eff}=\eta_t/\tau_{\rm inject}$, which we set to $0.1/(0.5\,{\rm Gyr})$ as argued before. Note also that for estimating $\sigma_v^2$ we used a $64^3$ TSC-grid, which is a conservative estimate, as shown in Figure~\ref{fig:disp_resol}, and therefore equation~(\ref{inj}) should still reflect a lower limit. Following Cassano \& Brunetti (2005), the spectrum of the magnetic fluctuations associated with Fast Modes is computed under stationary conditions, taking into account the damping rate of these modes with thermal electrons, $\Gamma_k=\Gamma_o k$. One then has \begin{equation} W_k^B \sim {{B_o^2}\over{8 \pi}} {1\over{P_{\rm gas}}} {{I^{FM}_k}\over{\Gamma_o k}}.
\label{wbk} \end{equation} \noindent Thus the integral in Eqn.~(\ref{dpdt}) at each position of the grid can be readily estimated as \begin{eqnarray} \int k W_k^B\,{\rm d}k & \sim & {{B_o^2}\over{8 \pi}} {1\over{ \Gamma_o P_{\rm gas}}} \int I^{FM}_k\, {\rm d}k \\ & \sim & \eta_t^{\rm eff} {{ B^2({\bf x}) }\over{16 \pi}} {{\rho_{\rm gas}({\bf x}) \sigma_{ii}^2({\bf x}) }\over {P_{\rm gas}({\bf x})}} {{ 1 }\over{\Gamma_o}} \label{integral} \end{eqnarray} where $\Gamma_o$ depends on the temperature of the ICM (Cassano \& Brunetti 2005)\footnote{Note that under these assumptions the efficiency of the particle acceleration does not depend on the spectrum of the waves.}. In this Section we are primarily interested in determining the maximum energy of accelerated electrons, given the adopted energy density for Fast Modes. Under typical conditions in the ICM, the maximum energy of electrons is reached at energies where radiative losses balance the effect of the acceleration. The radiative synchrotron and inverse Compton losses with CMB photons are given by \citep[e.g.,][]{1999ApJ...520..529S} \begin{eqnarray} \lefteqn{ \left( {{{\rm d} p}\over{{\rm d} t}}\right)_{\rm rad}=\, -4.8\times10^{-4} p^2 \left[\left({{ B_{\mu G}}\over{ 3.2}}\right)^2{{\sin^2\theta}\over{2/3}} +(1+z)^4 \right] }{} \nonumber\\ & & {} \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ = -\frac{\beta\, p^{2}}{m_e\,c} \label{losses}, \end{eqnarray} where $B_{\mu G}$ is the magnetic field strength in $\mu G$, and $\theta$ is the pitch angle of the emitting electrons. If an efficient isotropisation of electron momenta can be assumed, it is possible to average over all possible pitch angles, so that $\left<\sin^2\theta\right> = 2/3$. In Figure~\ref{fig:radio1d}, we plot the maximum energy of the fast electrons obtained from Eqs.~(\ref{wbk}) and (\ref{losses}) along one line-of-sight through the cluster atmosphere.
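Under these assumptions, the maximum momentum follows from balancing the systematic acceleration rate of Eq.~(\ref{dpdt}), with the turbulent integral estimated through Eq.~(\ref{integral}), against the radiative losses of Eq.~(\ref{losses}). A schematic numerical sketch (in arbitrary consistent units; the names and the factorisation are ours) reads:

```python
import numpy as np

def acc_coefficient(eta_eff, rho_gas, sigma2, P_gas, Gamma_o, v_M, c):
    """Systematic acceleration rate chi, with dp/dt ~ chi * p: insert the
    estimate of int k W_k^B dk into the Fast-Mode acceleration rate.
    Note that the B^2 factors cancel, so chi does not depend on the
    magnetic field strength."""
    kW_over_B2 = eta_eff * rho_gas * sigma2 / (16.0 * np.pi * P_gas * Gamma_o)
    return 180.0 * v_M**2 / c * kW_over_B2

def p_max(chi, beta_loss, me_c=1.0):
    """Acceleration balances losses: chi * p = beta_loss * p^2 / (m_e c),
    hence p_max = chi * m_e c / beta_loss."""
    return chi * me_c / beta_loss
```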
The two different lines are for the same cluster, simulated with our two main schemes for the artificial viscosity. The two vertical lines in Figure~\ref{fig:vel_mean} indicate the positions of these cuts. When the new low--viscosity scheme is used, enough turbulence is resolved to maintain high energy electrons (and thus synchrotron radio emission) almost everywhere out to a distance of 1 Mpc from the cluster centre, whereas in the original formulation of SPH, turbulence is much more strongly suppressed, so that the maximum energy of the accelerated electrons remains a factor of about three below that in the low--viscosity case. The results reported in Figure~\ref{fig:radio1d} are obtained assuming a reference value of $\eta_t^{\rm eff}=\eta_t/\tau_{\rm inject}=0.1/(0.5\,{\rm Gyr})$. The average volume-weighted magnetic field strength in the considered cluster region is fixed at $0.5\,\mu G$, and a simple scaling from magnetic flux--freezing, $B\propto\rho^{2/3}$, is adopted in the calculations, resulting in a central magnetic field strength of $B_0=5.0\,\mu G$. It is worth noting that the maximum energy of the accelerated electrons, $\gamma_{\rm max}$, scales with the energy density of the turbulence (and with the fraction of the energy of this turbulence channelled into Fast Modes $\eta_t$, Eq.~\ref{wbk}). However, the synchrotron frequency emitted by these electrons scales with the square of the turbulent energy ($\gamma_{\rm max}^2$). Interestingly, with the parameter values adopted in Figure~\ref{fig:radio1d}, a maximum synchrotron frequency of order $10^2\,({\eta_t}/{0.1})^2$ MHz is obtained in a Mpc--sized cluster region, which implies diffuse synchrotron radiation up to GHz frequencies if a slightly larger value of $\eta_t$ is adopted\footnote{Note that \cite{2005MNRAS.357.1313C} required $\eta_t \sim 0.2-0.3$ in order to reproduce the observed statistics of radio halos.}.
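The adopted magnetic field model and the quoted frequency scaling can be summarised in a short sketch (plain Python; the normalisation values are the ones quoted above, while the function names are ours):

```python
def B_flux_freezing(rho, rho_central, B_central=5.0):
    """Flux-freezing scaling B ~ rho^(2/3), normalised so that the
    central value is B_central = 5 microgauss, as adopted in the text."""
    return B_central * (rho / rho_central) ** (2.0 / 3.0)

def nu_max(eta_t, nu_ref_mhz=100.0, eta_ref=0.1):
    """Maximum synchrotron frequency: gamma_max scales with the turbulent
    energy (hence with eta_t) and nu ~ gamma_max^2, so
    nu ~ 10^2 (eta_t / 0.1)^2 MHz."""
    return nu_ref_mhz * (eta_t / eta_ref) ** 2
```

With these scalings, $\eta_t \sim 0.3$ would push the maximum frequency close to the GHz range, consistent with the statement above.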
On the other hand, we note that essentially no significant radio emission would be predicted if we had used the simulations with the original SPH viscosity scheme. In real galaxy clusters, the level of turbulence which can form will also depend on the amount of physical viscosity present in the ICM gas (i.e.~on its Reynolds number), which in turn depends on the magnetic field topology and gas temperature. It will presumably still take a while before the simulations achieve sufficient resolution that the numerical viscosity is lower than this physical viscosity. In addition, the details of the conversion process of large--scale velocity fields into MHD modes are still poorly understood and well beyond the capabilities of presently available cosmological simulations. However, our results here show that a suitable modification of the artificial viscosity parameterisation within SPH can be of significant help in this respect, and it allows a first investigation of the role of turbulence for feeding non--thermal phenomena in galaxy clusters. \begin{figure} \includegraphics[width=0.5\textwidth]{1D_gamma_profile.eps} \caption{One-dimensional profile of the maximum energy of the electrons accelerated via the turbulent-magneto-sonic model, along the same vertical lines drawn in Figure~\ref{fig:vel_mean}. Dashed lines are for the original viscosity run, while solid lines are for the low--viscosity scheme. Here, a conservative $64^3$ grid is used in the TSC smoothing.} \label{fig:radio1d} \end{figure} \section{Conclusions} \label{sec:conclusions} We implemented a new parameterisation of the artificial viscosity of SPH in the parallel cosmological simulation code {\small GADGET-2}. Following a suggestion by \cite{1997JCoPh..136....41S}, this method amounts to an individual, time-dependent strength of the viscosity for each particle, which increases in the vicinity of shocks and decays after passing through a shock.
As a result, SPH should show much smaller numerical viscosity in regions away from strong shocks than original formulations. We applied this low--viscosity formulation of SPH to a number of test problems and to cosmological simulations of galaxy cluster formation, and compared the results to those obtained with the original SPH formulation. Our main results can be summarised as follows: \begin{itemize} \item The low--viscosity variant of SPH is able to capture strong shocks just as well as the original formulation, and in some cases we even obtained improved results due to a reduced broadening of shock fronts. In spherical accretion shocks, we also obtained slightly better results due to a reduction of pre-shock entropy generation. \item Using the low--viscosity scheme, simulated galaxy clusters developed significant levels of turbulent gas motions, driven by merger events and infall of substructure. We find that the kinetic energy associated with turbulent gas motion within the inner $\sim 1\,{\rm Mpc}$ of a $10^{15}\,h^{-1}{\rm M}_\odot$ galaxy cluster can be up to 30\% of the thermal energy content. This value can be even larger, reaching up to 50\% in the very central part of massive clusters. In clusters with smaller masses ($\sim 10^{14} \,h^{-1}{\rm M}_\odot$) we find a smaller turbulent energy content, reaching only 5\% within the central Mpc. Within a comparable fraction of the virial radius, the corresponding fraction is however still of order 10\%. These values are much larger than what is found when the original SPH viscosity is employed, which strongly suppresses turbulent gas motions. \item The presence of such an amount of turbulence has an imprint on global properties of galaxy clusters, most notably reducing the bolometric X-ray luminosity in non radiative simulations by a factor of $\approx 2$. However, the global, mass-weighted temperature does not change.
\item The temperature profiles of galaxy clusters are only mildly changed by the presence of turbulence, but we observe a strong decrease of density within the central region of galaxy clusters, where the turbulence is providing a significant contribution to the total pressure. Also the radial entropy profiles show a significant flattening towards the cluster centre. This makes them more similar to the observed profiles based on X-ray observations. Note however that radiative cooling -- which was not included in our simulations -- can also modify the profiles substantially. We find that the higher entropy in the centre found in the low--viscosity simulations is largely a result of the more efficient transport and mixing of infalling high-entropy material into the core of the cluster. We note that the elevated entropy levels found in our low--viscosity runs are more similar to the results found with Eulerian hydrodynamic codes than to those of the original SPH formulation. \item Turbulence in galaxy clusters broadens the shape of metal lines observable with high-resolution X-ray spectrographs like XRS on board {\small ASTRO-E2}. Depending on the strength of the turbulence and the dynamical state of the cluster, prominent features due to large-scale bulk motions may however get washed out and blended into a very complex line structure. In general it will therefore be difficult to isolate the signature of the turbulent broadening and to differentiate it unambiguously from the more prominent features of large scale bulk motions. \item Applying a model for accelerating relativistic electrons by ICM turbulence, we find that galaxy clusters simulated with the reduced viscosity scheme may develop sufficient turbulence to account for the radio emission that is observed in many galaxy clusters, provided that a non--negligible fraction of the turbulent energy in the real ICM is associated with Fast Modes.
\end{itemize} In summary, our results suggest that ICM turbulence might be an important ingredient in the physics of galaxy clusters. If present at the levels inferred from our low--viscosity simulations, it has a significant effect on the radial structure and on the scaling relations of galaxy clusters. We also note that the inferred reduction of the X-ray luminosity has a direct influence on the strength of radiative cooling flows. The more efficient mixing processes would also help to understand the nearly homogeneous metal content observed for large parts of the cluster interior. Finally, cluster turbulence may also play an important role for the dynamics of non-thermal processes in galaxy clusters. Although we observe a rather high level of turbulence in the very centre of our simulated galaxy clusters when we use the low--viscosity scheme, it is likely that we are still missing turbulence due to the remaining numerical viscosity of our hydrodynamical scheme, and due to the resolution limitations of our simulations, particularly in low density regions. This problem should in principle become less and less severe as the resolution of the simulations is increased in future calculations. However, given that there is some physical viscosity in real galaxy clusters which limits the Reynolds number of the ICM, it cannot be the goal to model the ICM as a pure ideal gas. Instead, future work should concentrate on accurately characterising this physical viscosity of the ICM, which could then be directly incorporated into the simulations by means of the Navier-Stokes equations. Our results suggest that the low--viscosity formulation of SPH should be of significant help in reducing the numerical viscosity of SPH simulations below the level of this physical viscosity, and the present generation of simulations may already be close to this regime.
\section*{acknowledgements} Many thanks to Volker Springel for providing {\small GADGET-2} and initial conditions for test problems. We acknowledge fruitful discussions with Stefano Borgani, and we thank Volker Springel and Torsten Ensslin for carefully reading the manuscript and for their fruitful suggestions to improve it. The simulations were carried out on the IBM-SP4 machine at the ``Centro Interuniversitario del Nord-Est per il Calcolo Elettronico'' (CINECA, Bologna), with CPU time assigned under an INAF-CINECA grant, on the IBM-SP3 at the Italian Centre of Excellence ``Science and Applications of Advanced Computational Paradigms'', Padova, and on the IBM-SP4 machine at the ``Rechenzentrum der Max-Planck-Gesellschaft'' in Garching. K.~D.~acknowledges support by a Marie Curie Fellowship of the European Community programme ``Human Potential'' under contract number MCFI-2001-01227. G.~B.~acknowledges partial support from MIUR through grant PRIN2004 and from INAF through grant D4/03/15. \bibliographystyle{mnras}
\section{Introduction}\label{sec-1} In this paper we are concerned with the time periodic problem for the following complex Ginzburg-Landau equation in a bounded domain \(\Omega \subset \mathbb{R}^N\) with smooth boundary \(\partial \Omega\):\vspace{-1mm} \begin{equation} \tag*{(CGL)} \left\{ \begin{aligned} &\partial_t u(t, x) \!-\! (\lambda \!+\! i\alpha)\Delta u \!+\! (\kappa \!+\! i\beta)|u|^{q-2}u \!-\! \gamma u \!=\! f(t, x) &&\hspace{-2mm}\mbox{in}\ (t, x) \in [0, T] \times \Omega,\\ &u(t, x) = 0&&\hspace{-2mm}\mbox{on}\ (t, x) \in [0, T] \times \partial\Omega,\\ &u(0, x) = u(T,x)&&\hspace{-2mm}\mbox{in}\ x \in \Omega, \end{aligned} \right. \end{equation} where \(\lambda, \kappa > 0\), \(\alpha, \beta, \gamma \in \mathbb{R}\) are parameters; \(i = \sqrt{-1}\) is the imaginary unit; \(f: \Omega \times [0, T] \rightarrow \mathbb{C}\) (\(T > 0\)) is a given external force. Our unknown function \(u:\overline{\Omega} \times [0,\infty) \rightarrow \mathbb{C}\) is complex valued. Under a suitable assumption on \(\lambda,\kappa,\alpha,\beta\), namely that they belong to the so-called CGL-region (see Figure \ref{CGLR}), we shall establish the existence of periodic solutions without any restriction on \(q\in(2,\infty)\), \(\gamma\in\mathbb{R}^1\) or the size of \(f\). As extreme cases, (CGL) corresponds to two well-known equations: semi-linear heat equations (when \(\alpha=\beta=0\)) and nonlinear Schr\"odinger equations (when \(\lambda=\kappa=0\)). Thus in general, we can expect that (CGL) exhibits features of both equations. Equation (CGL) was introduced by Ginzburg and Landau in 1950 \cite{GL1} as a mathematical model for superconductivity. Since then it has been revealed that many nonlinear partial differential equations arising in physics can be rewritten in the form of (CGL) as well (\cite{N1}). Mathematical studies of the initial boundary value problem for (CGL) have been pursued extensively by several authors.
The first treatment was due to Temam \cite{T1}, where global weak solutions were constructed by the Galerkin method. Levermore-Oliver \cite{LO1} constructed global weak solutions to (CGL) on the \(N\)-dimensional torus \(\mathbb{T}^N\) by an argument similar to that in the proof of Leray's existence theorem for global weak solutions of the Navier-Stokes equations. They also showed the existence of a unique global classical solution of (CGL) for \(u_0 \in {\rm C}^2(\mathbb{T}^N)\) under the conditions \(q \leq 2N/(N-2)_+\) and \(\left(\frac{\alpha}{\lambda}, \frac{\beta}{\kappa}\right) \in {\rm CGL}(c_q^{-1})\) (see Figure \ref{CGLR}). Ginibre-Velo \cite{GinibreJVeloG1996} showed the existence of global strong solutions for (CGL) in the whole space \(\mathbb{R}^N\) under the condition \(\left(\frac{\alpha}{\lambda}, \frac{\beta}{\kappa}\right) \in {\rm CGL}(c_q^{-1})\) without any upper bound on \(q\), with initial data taken from \(\mathrm{H}^1(\mathbb{R}^N)\cap\mathrm{L}^q(\mathbb{R}^N)\). Subsequently, Okazawa-Yokota \cite{OY1} treated (CGL) in the framework of the theory of maximal monotone operators in complex Hilbert spaces and proved global existence of solutions and a smoothing effect for bounded domains with \(\lambda,\kappa,\alpha,\beta\) belonging to the CGL-region. Another approach, based on the perturbation theory for subdifferential operators, was introduced in \cite{KOS1}, where the existence of global solutions together with some smoothing effects in general domains is discussed. The global dynamics of solutions of (CGL) is studied in Takeuchi-Asakawa-Yokota \cite{TAY1}, where the existence of a global attractor for bounded domains is shown. In this paper, we show the existence of time periodic solutions for (CGL) in bounded domains.
Here we regard (CGL) as a parabolic equation governed by the leading term \(-\lambda\Delta u+\kappa|u|^{q-2}u\), which can be described as the subdifferential operator of a certain convex functional, with a monotone perturbation \(-i\alpha\Delta u\) together with a non-monotone perturbation \(i\beta|u|^{q-2}u\). The time periodic problem for parabolic equations governed by subdifferential operators with non-monotone perturbations is already discussed in \^Otani \cite{O3}, which has a fairly wide applicability. This theory, however, cannot be applied directly to (CGL) because of the presence of the monotone term \(-i\alpha\Delta u\). To cope with this difficulty, we first introduce an auxiliary equation, namely (CGL) with the additional term \(\varepsilon|u|^{r-2}u\) (\(\varepsilon>0\), \(r>q\)), and show the existence of periodic solutions for this equation. Then letting \(\varepsilon\downarrow0\) with suitable a priori estimates, we obtain the desired periodic solutions to (CGL). This paper consists of four sections. In \S2, we fix some notations, prepare some preliminaries and state our main result. In \S3, we consider an auxiliary equation and show the existence of periodic solutions to the equation. Our main result is proved in \S4. \section{Notations and Preliminaries}\label{sec-2} In this section, we first fix some notations in order to formulate (CGL) as an evolution equation in a real product function space, based on the following identification: \[ \mathbb{C} \ni u_1 + iu_2 \mapsto (u_1, u_2)^{\rm T} \in \mathbb{R}^2.
\] Moreover we set: \[ \begin{aligned} &(U \cdot V)_{\mathbb{R}^2} := u_1 v_1 + u_2 v_2,\quad |U|=|U|_{\mathbb{R}^2}, \qquad U=(u_1, u_2)^{\rm T}, \ V=(v_1, v_2)^{\rm T} \in \mathbb{R}^2,\\[1mm] &\mathbb{L}^2(\Omega) :={\rm L}^2(\Omega) \times {\rm L}^2(\Omega),\quad (U, V)_{\mathbb{L}^2} := (u_1, v_1)_{{\rm L}^2} + (u_2, v_2)_{{\rm L}^2},\\[1mm] &\qquad U=(u_1, u_2)^{\rm T},\quad V=(v_1, v_2)^{\rm T} \in \mathbb{L}^2(\Omega),\\[1mm] &\mathbb{L}^r(\Omega) := {\rm L}^r(\Omega) \times {\rm L}^r(\Omega),\quad |U|_{\mathbb{L}^r}^r := |u_1|_{{\rm L}^r}^r + |u_2|_{{\rm L}^r}^r\quad\ U \in \mathbb{L}^r(\Omega)\ (1\leq r < \infty),\\[1mm] &\mathbb{H}^1_0(\Omega) := {\rm H}^1_0(\Omega) \times {\rm H}^1_0(\Omega),\ (U, V)_{\mathbb{H}^1_0} := (u_1, v_1)_{{\rm H}^1_0} + (u_2, v_2)_{{\rm H}^1_0}\ \ U, V \in \mathbb{H}^1_0(\Omega). \end{aligned} \] We use the differential symbols to indicate differential operators which act on each component of \({\mathbb{H}^1_0}(\Omega)\)-elements: \[ \begin{aligned} & D_i = \frac{\partial}{\partial x_i}: \mathbb{H}^1_0(\Omega) \rightarrow \mathbb{L}^2(\Omega),\\ &D_i U = (D_i u_1, D_i u_2)^{\rm T} \in \mathbb{L}^2(\Omega) \ (i=1, \cdots, N),\\[2mm] & \nabla = \left(\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_N}\right): \mathbb{H}^1_0(\Omega) \rightarrow ({\rm L}^2(\Omega))^{2 N},\\ &\nabla U=(\nabla u_1, \nabla u_2)^T \in ({\rm L}^2(\Omega))^{2 N}. 
\end{aligned} \] We further define, for \(U=(u_1, u_2)^{\rm T},\ V= (v_1, v_2)^{\rm T},\ W = (w_1, w_2)^{\rm T}\), \[ \begin{aligned} &U(x) \cdot \nabla V(x) := u_1(x) \nabla v_1(x) + u_2(x) \nabla v_2(x) \in \mathbb{R}^N,\\[2mm] &( U(x) \cdot \nabla V(x) ) W(x) := ( u_1(x) \nabla v_1(x)w_1(x) , u_2(x) \nabla v_2(x) w_2(x))^{\rm T} \in \mathbb{R}^{2N},\\[2mm] &(\nabla U(x) \cdot \nabla V(x)) := \nabla u_1(x) \cdot \nabla v_1(x) + \nabla u_2(x) \cdot \nabla v_2(x) \in \mathbb{R}^1,\\[2mm] &|\nabla U(x)| := \left(|\nabla u_1(x)|^2_{\mathbb{R}^N} + |\nabla u_2(x)|^2_{\mathbb{R}^N} \right)^{1/2}. \end{aligned} \] In addition, \(\mathcal{H}^S\) denotes the space of functions with values in \(\mathbb{L}^2(\Omega)\) defined on \([0, S]\) (\(S > 0\)), which is a Hilbert space with the following inner product and norm: \[ \begin{aligned} &\mathcal{H}^S := {\rm L}^2(0, S; \mathbb{L}^2(\Omega)) \ni U(t), V(t),\\ &\quad\mbox{with inner product:}\ (U, V)_{\mathcal{H}^S} = \int_0^S (U, V)_{\mathbb{L}^2} dt,\\ &\quad\mbox{and norm:}\ |U|_{\mathcal{H}^S}^2 = (U, U)_{\mathcal{H}^S}. \end{aligned} \] As a realization in \(\mathbb{R}^2\) of the imaginary unit \(i\) in \(\mathbb{C}\), we introduce the following matrix \(I\), which is a linear isometry on \(\mathbb{R}^2\): \[ I = \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}. \] We also write \(I\) for the realization of \(I\) in \(\mathbb{L}^2(\Omega)\), i.e., \(I U = ( - u_2, u_1 )^{\rm T}\) for all \(U = (u_1, u_2)^{\rm T} \in \mathbb{L}^2(\Omega)\). Then \(I\) satisfies the following properties (see \cite{KOS1}): \begin{enumerate} \item Skew-symmetric property: \begin{equation} \label{skew-symmetric_property} (IU \cdot V)_{\mathbb{R}^2} = -\ (U \cdot IV)_{\mathbb{R}^2}; \hspace{4mm} (IU \cdot U)_{\mathbb{R}^2} = 0 \hspace{4mm} \mbox{for each}\ U, V \in \mathbb{R}^2.
\end{equation} \item Commutative property with the differential operator \(D_i = \frac{\partial}{\partial x_i}\): \begin{equation} \label{commutative_property} I D_i = D_i I:\mathbb{H}^1_0 \rightarrow \mathbb{L}^2\ (i=1, \cdots, N). \end{equation} \end{enumerate} Let \({\rm H}\) be a Hilbert space and denote by \(\Phi({\rm H})\) the set of all lower semi-continuous convex functions \(\phi\) from \({\rm H}\) into \((-\infty, +\infty]\) such that the effective domain of \(\phi\), given by \({\rm D}(\phi) := \{u \in {\rm H}\mid \ \phi(u) < +\infty \}\), is not empty. Then for \(\phi \in \Phi({\rm H})\), the subdifferential of \(\phi\) at \(u \in {\rm D}(\phi)\) is defined by \[ \partial \phi(u) := \{w \in {\rm H}\mid (w, v - u)_{\rm H} \leq \phi(v)-\phi(u) \hspace{2mm} \mbox{for all}\ v \in {\rm H}\}, \] which is a possibly multivalued maximal monotone operator with domain\\ \({\rm D}(\partial \phi) = \{u \in {\rm H}\mid \partial\phi(u) \neq \emptyset\}\). However, for the discussion below, we only have to consider the case where \(\partial \phi\) is single valued. We define functionals \(\varphi, \ \psi_r:\mathbb{L}^2(\Omega) \rightarrow [0, +\infty]\) (\(r\geq2\)) by \begin{align} \label{varphi} &\varphi(U) := \left\{ \begin{aligned} &\frac{1}{2} \displaystyle\int_\Omega |\nabla U(x)|^2 dx &&\mbox{if}\ U \in \mathbb{H}^1_0(\Omega),\\[3mm] &+ \infty &&\mbox{if}\ U \in \mathbb{L}^2(\Omega)\setminus\mathbb{H}^1_0(\Omega), \end{aligned} \right. \\[2mm] \label{psi} &\psi_r(U) := \left\{ \begin{aligned} &\frac{1}{r} \displaystyle\int_\Omega |U(x)|_{\mathbb{R}^2}^r dx &&\mbox{if}\ U \in \mathbb{L}^r(\Omega) \cap \mathbb{L}^2(\Omega),\\[3mm] &+\infty &&\mbox{if}\ U \in \mathbb{L}^2(\Omega)\setminus\mathbb{L}^r(\Omega). \end{aligned} \right.
\end{align} Then it is easy to see that \(\varphi, \psi_r \in \Phi(\mathbb{L}^2(\Omega))\) and their subdifferentials are given by \begin{align} \label{delvaphi} &\begin{aligned}[t] &\partial \varphi(U)=-\Delta U\ \mbox{with} \ {\rm D}( \partial \varphi) = \mathbb{H}^1_0(\Omega)\cap\mathbb{H}^2(\Omega),\\[2mm] \end{aligned}\\ \label{delpsi} &\partial \psi_r(U) = |U|_{\mathbb{R}^2}^{r-2}U\ {\rm with} \ {\rm D}( \partial \psi_r) = \mathbb{L}^{2(r-1)}(\Omega) \cap \mathbb{L}^2(\Omega). \end{align} Furthermore, for any $\mu>0$, we can define the Yosida approximations \(\partial \varphi_\mu,\ \partial \psi_{r,\mu}\) of \(\partial \varphi,\ \partial \psi_r\) by \begin{align} \label{Yosida:varphi} &\partial \varphi_\mu(U) := \frac{1}{\mu}(U - J_\mu^{\partial \varphi}U) = \partial \varphi(J_\mu^{\partial \varphi} U), \quad J_\mu^{\partial \varphi} : = ( 1 + \mu\ \partial \varphi)^{-1}, \\[2mm] \label{Yosida:psi} &\partial \psi_{r,\mu}(U) := \frac{1}{\mu} (U - J_\mu^{\partial \psi_r} U) = \partial \psi_r( J_\mu^{\partial \psi_r} U ), \quad J_\mu^{\partial \psi_r} : = ( 1 + \mu\ \partial \psi_r)^{-1}. \end{align} The second identities hold since \(\partial\varphi\) and \(\partial\psi_r\) are single-valued.
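The representation \eqref{delpsi} can be checked directly from the definition of the subdifferential; the following short verification (our remark, standard in the literature) shows the inclusion \(|U|_{\mathbb{R}^2}^{r-2}U \in \partial\psi_r(U)\); maximality then upgrades the inclusion to equality.

```latex
% For U \in \mathbb{L}^{2(r-1)}(\Omega)\cap\mathbb{L}^2(\Omega) and any
% V \in \mathbb{L}^r(\Omega)\cap\mathbb{L}^2(\Omega), the pointwise Young
% inequality ab \le \tfrac{1}{r}a^r + \tfrac{r-1}{r}b^{r/(r-1)} yields
\begin{align*}
  \big(|U|_{\mathbb{R}^2}^{r-2}U \cdot (V - U)\big)_{\mathbb{R}^2}
  &\le |U|_{\mathbb{R}^2}^{r-1}|V|_{\mathbb{R}^2} - |U|_{\mathbb{R}^2}^{r}
   \le \tfrac{1}{r}|V|_{\mathbb{R}^2}^{r} - \tfrac{1}{r}|U|_{\mathbb{R}^2}^{r},
\end{align*}
% and integrating over \Omega gives
% (|U|_{\mathbb{R}^2}^{r-2}U, V-U)_{\mathbb{L}^2} \le \psi_r(V) - \psi_r(U).
```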
Then it is well known that \(\partial \varphi_\mu, \ \partial \psi_{r,\mu}\) are Lipschitz continuous on \(\mathbb{L}^2(\Omega)\) and satisfy the following properties (see \cite{B1}, \cite{B2}): \begin{align} \label{asd} \psi_r(J_\mu^{\partial\psi_r}U)&\leq\psi_{r,\mu}(U) \leq \psi_r(U),\\ \label{as} |\partial\psi_{r,\mu}(U)|_{\mathbb{L}^2}&=|\partial\psi_{r}(J_\mu^{\partial\psi_r}U)|_{\mathbb{L}^2}\leq|\partial\psi_r(U)|_{\mathbb{L}^2}\quad\forall\ U \in {\rm D}(\partial\psi_r),\ \forall \mu > 0, \end{align} where \(\psi_{r,\mu}\) is the Moreau-Yosida regularization of \(\psi_r\) given by the following formula: \[ \psi_{r,\mu}(U) = \inf_{V \in \mathbb{L}^2(\Omega)}\left\{ \frac{1}{2\mu}|U-V|_{\mathbb{L}^2}^2+\psi_r(V) \right\} =\frac{\mu}{2}|\partial\psi_{r,\mu}(U)|_{\mathbb{L}^2}^2+\psi_r(J_\mu^{\partial\psi_r}U)\geq0. \] Moreover, since \((\partial\psi_r(V),V)_{\mathbb{L}^2}=r\psi_r(V)\) for \(V \in {\rm D}(\partial\psi_r)\), it follows from \eqref{Yosida:psi} and \eqref{asd} that \begin{align}\label{asdf} (\partial\psi_{r,\mu}(U),U)_{\mathbb{L}^2} = r\psi_r(J_\mu^{\partial\psi_r}U) + \mu|\partial\psi_{r,\mu}(U)|_{\mathbb{L}^2}^2 = (r-2)\psi_r(J_\mu^{\partial\psi_r}U) + 2\psi_{r,\mu}(U) \leq r\psi_r(U). \end{align} Here for later use, we prepare some fundamental properties of \(I\) in connection with \(\partial \varphi,\ \partial \psi_r,\ \partial \varphi_\mu,\ \partial \psi_{r,\mu}\). \begin{Lem2}[(cf. \cite{KOS1} Lemma 2.1)] The following angle conditions hold.
\label{Lem:2.1} \begin{align} \label{orth:IU} &(\partial \varphi(U), I U)_{\mathbb{L}^2} = 0\quad \forall U \in {\rm D}(\partial \varphi),\quad (\partial \psi_r(U), I U)_{\mathbb{L}^2} = 0\quad \forall U \in {\rm D}(\partial \psi_r), \\[2mm] \label{orth:mu:IU} &(\partial \varphi_\mu(U), I U)_{\mathbb{L}^2} = 0,\quad (\partial \psi_{r,\mu}(U), I U)_{\mathbb{L}^2} = 0 \quad \forall U \in \mathbb{L}^2(\Omega), \\[2mm] \label{orth:Ipsi} &\begin{aligned} &(\partial \psi_q(U), I \partial \psi_r(U))_{\mathbb{L}^2} = 0,\\ &(\partial \psi_q(U), I \partial \psi_{r,\mu}(U))_{\mathbb{L}^2}=0\quad \forall U \in {\rm D}(\partial \psi_q) \cap {\rm D}(\partial \psi_r),\ \forall q,r \geq 2, \end{aligned}\\ \label{angle} &(\partial\varphi(U),\partial\psi_r(U))_{\mathbb{L}^2} \geq 0\quad \forall U \in {\rm D}(\partial\varphi)\cap{\rm D}(\partial\psi_r). \end{align} \end{Lem2} \begin{proof} We only give a proof of the second identity in \eqref{orth:Ipsi}; the remaining assertions are proved in \cite{KOS1}. Let \(W=J_\mu^{\partial\psi_r}U\), so that \(U = W + \mu\partial\psi_r(W) = (1+\mu|W|_{\mathbb{R}^2}^{r-2})W\) and \(\partial\psi_{r,\mu}(U) = \partial\psi_r(W) = |W|_{\mathbb{R}^2}^{r-2}W\). Since \((W \cdot IW)_{\mathbb{R}^2} = 0\) pointwise by \eqref{skew-symmetric_property}, we obtain \[ (\partial \psi_q(U), I \partial \psi_{r,\mu}(U))_{\mathbb{L}^2} = \int_\Omega |U|_{\mathbb{R}^2}^{q-2}\big(1+\mu|W|_{\mathbb{R}^2}^{r-2}\big)|W|_{\mathbb{R}^2}^{r-2}\,(W \cdot IW)_{\mathbb{R}^2}\,dx = 0. \] \end{proof} Thus (CGL) can be reduced to the following evolution equation: \[ \tag*{(ACGL)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U) \!+\! \alpha I \partial \varphi(U) \!+\! (\kappa+ \beta I) \partial \psi_q(U) \!-\! \gamma U \!=\! F(t),\quad t \in (0,T),\\ &U(0) =U(T), \end{aligned} \right. \] where \(f(t, x) = f_1(t, x) + i f_2(t, x)\) is identified with \(F(t) = (f_1(t, \cdot), f_2(t, \cdot))^{\rm T} \in \mathbb{L}^2(\Omega)\).
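As a sanity check on this reduction (a standard computation, recorded here for the reader's convenience), the identification \(u = u_1 + iu_2 \leftrightarrow U = (u_1, u_2)^{\rm T}\) turns multiplication by \(i\) into the action of \(I\); for instance,

```latex
% (\lambda + i\alpha)\Delta u
%   = (\lambda\Delta u_1 - \alpha\Delta u_2) + i(\alpha\Delta u_1 + \lambda\Delta u_2)
% corresponds to
\begin{align*}
  \begin{pmatrix} \lambda\Delta u_1 - \alpha\Delta u_2 \\
                  \alpha\Delta u_1 + \lambda\Delta u_2 \end{pmatrix}
  = (\lambda + \alpha I)\Delta U
  = -(\lambda + \alpha I)\,\partial\varphi(U),
\end{align*}
% and similarly (\kappa + i\beta)|u|^{q-2}u corresponds to
% (\kappa + \beta I)|U|_{\mathbb{R}^2}^{q-2}U = (\kappa + \beta I)\,\partial\psi_q(U),
% which produces exactly the terms appearing in (ACGL).
```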
In order to state our main results, we introduce the following region: \[ \begin{aligned} \label{CGLR} {\rm CGL}(r) &:= \left\{(x,y) \in \mathbb{R}^2 \mid xy \geq 0\ \mbox{or}\ \frac{|xy| - 1}{|x| + |y|} < r\right\}\\ &= {\rm S}_1(r) \cup {\rm S}_2(r) \cup\ {\rm S}_3(r) \cup {\rm S}_4(r), \end{aligned} \] where \({\rm S}_i(\cdot)\) (\(i=1,2,3,4\)) are given by \begin{equation} \label{CGLRS1-4} \begin{aligned} &{\rm S}_1(r) := \left\{(x, y) \in \mathbb{R}^2 \mid |x| \leq r\right\},\\ &{\rm S}_2(r) := \left\{(x, y) \in \mathbb{R}^2 \mid |y| \leq r\right\},\\ &{\rm S}_3(r) := \left\{(x, y) \in \mathbb{R}^2 \mid xy > 0\right\},\\ &{\rm S}_4(r) := \left\{(x, y) \in \mathbb{R}^2 \mid |1 + xy| < r |x - y|\right\}. \end{aligned} \end{equation} This region is frequently referred to as the CGL region. Also, we use the constant \(c_q \in [0,\infty)\) measuring the strength of the nonlinearity: \[ \label{cq} c_q := \frac{q - 2}{2\sqrt{q - 1}}. \] This constant \(c_q\) will play an important role in what follows, through the following inequality. \begin{Lem2}[(cf. \cite{KOS1} Lemma 4.1)] The following inequality holds for all \label{key_inequality} $U \in {\rm D}(\partial \varphi) \cap {\rm D}(\partial \psi_q)$: \begin{align} \label{key_inequality_1} &|(\partial \varphi(U), I \partial \psi_q(U))_{\mathbb{L}^2}| \leq c_{q}(\partial \varphi(U), \partial \psi_q(U))_{\mathbb{L}^2}. \end{align} \end{Lem2} In this paper we are concerned with periodic solutions of (ACGL) in the following sense: \begin{Def}[Periodic solutions] A function \(U \in {\rm C}([0,T];\mathbb{L}^2(\Omega))\) is a periodic solution of (ACGL) if the following conditions are satisfied: \begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi})} \item \(U(t) \in {\rm D}(\partial\varphi)\cap{\rm D}(\partial\psi_q)\) and \(U\) satisfies (ACGL) for a.e.
\(t\in(0,T)\), \item \(\frac{dU}{dt},\partial\varphi(U),\partial\psi_q(U)\in{\rm L}^2(0,T;\mathbb{L}^2(\Omega))\), \item \(\varphi(U(t))\) and \(\psi_q(U(t))\) are absolutely continuous on \([0,T]\), \item \(U(0)=U(T)\). \end{enumerate} \end{Def} We note that condition (iii) follows from (ii), and hence periodic solutions \(U\) belong to \({\rm C}([0,T];\mathbb{L}^q(\Omega)\cap\mathbb{H}^1_0(\Omega))\). Our main result can be stated as follows. \begin{Thm}[Existence of Periodic Solutions]\label{MTHM} Let \(\Omega \subset \mathbb{R}^N\) be a bounded domain of \({\rm C}^2\)-regular class. Let \((\frac{\alpha}{\lambda}, \frac{\beta}{\kappa}) \in {\rm CGL}(c_q^{-1})\) and \(\gamma \in \mathbb{R}\). Then for all \(F \in {\rm L}^2(0, T;\mathbb{L}^2(\Omega))\) with given \(T > 0\), there exists a periodic solution to (ACGL). \end{Thm} \section{Auxiliary Problems}\label{sec-4} In this section, we consider the following auxiliary equation: \[ \tag*{(AE)\(_\varepsilon\)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U)\!+\!\alpha I\partial\varphi(U)\!+\!\varepsilon\partial\psi_r(U)\!+\!(\kappa\!+\! \beta I) \partial \psi_q(U) \!-\! \gamma U \!=\! F(t),\ t \!\in\! (0,T),\\ &U(0) = U(T), \end{aligned} \right. \] with \(r > q\) and \(\varepsilon>0\). Then we have \begin{Prop} Let \(F \in \mathcal{H}^T\), \(\varepsilon>0\) and \(r > q > 2\).\label{GWP} Then there exists a periodic solution of (AE)\(_\varepsilon\). \end{Prop} In order to prove this proposition, for given \(h\in\mathcal{H}^T\) we consider the following Cauchy problem: \[ \tag*{(IVP)\(_\mu^h\)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U)\!+\!\varepsilon\partial\psi_r(U)\!+\!\kappa\partial \psi_q(U)\!+\!\alpha I\partial\varphi_\mu(U)\!+\! U \!-\!h(t) \!=\! F(t),\ t \!\in\! (0,T),\\ &U(0) = U_0, \end{aligned} \right. \] where \(U_0 \in \mathbb{L}^2(\Omega)\).
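As a brief aside before analyzing the auxiliary problems: the hypothesis \((\frac{\alpha}{\lambda}, \frac{\beta}{\kappa}) \in {\rm CGL}(c_q^{-1})\) of Theorem \ref{MTHM} is elementary to check for concrete parameter values. The following minimal sketch (our illustration, not part of the proof; the function names are ours) transcribes the first characterization of \({\rm CGL}(r)\) from Section 2 and the constant \(c_q\):

```python
import math

def c_q(q: float) -> float:
    """The constant c_q = (q - 2) / (2 * sqrt(q - 1)) from Section 2."""
    return (q - 2.0) / (2.0 * math.sqrt(q - 1.0))

def in_cgl_region(x: float, y: float, r: float) -> bool:
    """Membership in CGL(r) = {(x, y) : xy >= 0 or (|xy| - 1)/(|x| + |y|) < r}."""
    if x * y >= 0.0:
        return True
    return (abs(x * y) - 1.0) / (abs(x) + abs(y)) < r

# As q decreases to 2, c_q tends to 0, so r = c_q^{-1} tends to infinity
# and the admissible region exhausts the whole (x, y)-plane.
```

For instance, for \(q = 4\) one has \(c_4 = 1/\sqrt{3}\), and \((0.5, -5) \in {\rm CGL}(\sqrt{3})\) while \((10, -10) \notin {\rm CGL}(\sqrt{3})\).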
We claim that this equation has a unique solution \(U = U_\mu^h \in {\rm C}([0,T];\mathbb{L}^2(\Omega))\) satisfying the following regularity properties: \begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi})} \item $U \in {\rm W}^{1,2}_{\rm loc}((0,T);\mathbb{L}^2(\Omega))$, \item $U(t) \in {\rm D}(\partial \varphi) \cap {\rm D}(\partial \psi_q) \cap {\rm D}(\partial \psi_r)$ for a.e. $t \in (0,T)$ and $U$ satisfies (IVP)\(_\mu^h\) for a.e. $t \in (0,T)$, \item $\varphi(U(\cdot))$, \(\psi_q(U(\cdot))\), \(\psi_r(U(\cdot)) \in {\rm L}^1(0,T)\) and $t\varphi(U(t))$, \(t\psi_q(U(t))\), \(t\psi_r(U(t)) \in {\rm L}^\infty(0,T)\), \item $\sqrt{t} \frac{d}{dt}U(t)$, $\sqrt{t} \partial \varphi(U(t))$, $\sqrt{t} \partial \psi_q(U(t))$, $\sqrt{t} \partial \psi_r(U(t)) \in {\rm L}^2(0,T;\mathbb{L}^2(\Omega))$. \end{enumerate} Since \(\partial\varphi_\mu(U)\) is a Lipschitz perturbation, to ensure the above claim, we only have to check the following: \begin{Lem}\label{asta} The operator \(\lambda\partial\varphi + \varepsilon\partial\psi_r + \kappa\partial\psi_q\) is maximal monotone in \(\mathbb{L}^2(\Omega)\) and satisfies \begin{equation} \label{ast} \lambda\partial\varphi + \varepsilon\partial\psi_r + \kappa\partial\psi_q = \partial(\lambda\varphi+\varepsilon\psi_r+\kappa\psi_q). \end{equation} \end{Lem} Since \(\lambda\partial\varphi+\varepsilon\partial\psi_r+\kappa\partial\psi_q\) is monotone and \(\partial(\lambda\varphi+\varepsilon\psi_r+\kappa\psi_q)\subset\lambda\partial\varphi+\varepsilon\partial\psi_r+\kappa\partial\psi_q\), to show \eqref{ast}, it suffices to check that \(\lambda\partial\varphi+\varepsilon\partial\psi_r+\kappa\partial\psi_q\) is maximal monotone. For this purpose, we rely on the following proposition, as in the proof of Lemma 2.3 in \cite{KOS1}. \begin{Prop2}[(Br\'ezis \cite{B2}, Theorem 9)] \label{angler} Let \(B\) be maximal monotone in \({\rm H}\) and \(\phi \in \Phi({\rm H})\).
Suppose \begin{align} \phi((1+\mu B)^{-1}u) \leq \phi(u), \hspace{4mm} \forall \mu>0 \hspace{2mm} \forall u \in {\rm D}(\phi). \label{angle1} \end{align} Then \(\partial \phi + B\) is maximal monotone in \({\rm H}\). \end{Prop2} \begin{proof}[Proof of Lemma \ref{asta}] We note that \(\varepsilon\partial\psi_r+\kappa\partial \psi_q\) is obviously maximal monotone in \(\mathbb{L}^2(\Omega)\). First we show \((1 + \mu \{\varepsilon\partial\psi_r+\kappa\partial \psi_q\})^{-1}{\rm D}(\varphi) \subset {\rm D}(\varphi)\), where \({\rm D}(\varphi) = \mathbb{H}^1_0(\Omega)\). Let \(U \in \mathbb{C}^1_\mathrm{c}(\Omega) := {\rm C}^1_0(\Omega) \times {\rm C}^1_0(\Omega)\) and \(V:=(1+\mu\{\varepsilon\partial\psi_r+\kappa\partial \psi_q\})^{-1}U\), which implies \(V(x)+\mu \{\varepsilon|V(x)|^{r-2}_{\mathbb{R}^2}V(x)+\kappa|V(x)|_{\mathbb{R}^2}^{q-2}V(x)\}=U(x)\) for a.e. \(x \in \Omega\). Here define \(G:\mathbb{R}^2 \rightarrow \mathbb{R}^2\) by \(G : V \mapsto G(V) =V+\mu \{\varepsilon|V|^{r-2}_{\mathbb{R}^2}V+\kappa|V|_{\mathbb{R}^2}^{q-2}V\}\); then we get \(G(V(x))=U(x)\). Note that \(G\) is of class \({\rm C}^1\) and bijective from \(\mathbb{R}^2\) onto itself, and its Jacobian determinant is given by \[ \begin{aligned} &\det D G(V)\\ &= (1 + \mu \{\varepsilon|V|_{\mathbb{R}^2}^{r-2}+\kappa|V|_{\mathbb{R}^2}^{q-2}\}) (1 + \mu\{\varepsilon(r-1)|V|_{\mathbb{R}^2}^{r-2}+\kappa (q-1) |V|_{\mathbb{R}^2}^{q-2}\}) \neq 0\\ &\qquad \mbox{for each}\ V \in \mathbb{R}^2. \end{aligned} \] Applying the inverse function theorem, we have \(G^{-1} \in {\rm C}^1(\mathbb{R}^2;\mathbb{R}^2)\). Hence \(V(\cdot)=G^{-1}(U(\cdot)) \in \mathbb{C}^1_\mathrm{c}(\Omega),\) which implies \((1 + \mu\{\varepsilon\partial\psi_r+\kappa\partial \psi_q\})^{-1} \mathbb{C}^1_\mathrm{c}(\Omega) \subset \mathbb{C}^1_\mathrm{c}(\Omega)\). Now let \(U_n \in \mathbb{C}^1_\mathrm{c}(\Omega)\) and \(U_n \rightarrow U\) in \(\mathbb{H}^1_0(\Omega)\).
Then \(V_n := (1 + \mu \{\varepsilon\partial\psi_r+\kappa\partial \psi_q\})^{-1}U_n \in \mathbb{C}^1_\mathrm{c}(\Omega)\) satisfy \[ \begin{aligned} |V_n - V|_{\mathbb{L}^2} &= |(1 + \mu \{\varepsilon\partial\psi_r+\kappa\partial \psi_q\})^{-1}U_n - (1+\mu \{\varepsilon\partial\psi_r+\kappa\partial \psi_q\})^{-1}U|_{\mathbb{L}^2}\\ &\leq |U_n - U|_{\mathbb{L}^2} \rightarrow 0 \hspace{4mm} \mbox{as}\ n \rightarrow \infty, \end{aligned} \] whence it follows that \(V_n \rightarrow V\) in \(\mathbb{L}^2(\Omega)\). Also, differentiation of \(G(V_n(x))=U_n(x)\) gives \begin{equation} \label{adfjakfjdkaj} \begin{aligned} &(1+\mu\{\varepsilon|V_n(x)|_{\mathbb{R}^2}^{r-2}+\kappa|V_n(x)|_{\mathbb{R}^2}^{q-2}\})\nabla V_n(x)\\ &+ \mu\{\varepsilon(r-2)|V_n(x)|_{\mathbb{R}^2}^{r-4}+\kappa(q-2)|V_n(x)|_{\mathbb{R}^2}^{q-4}\} (V_n(x) \cdot \nabla V_n(x)) ~\! V_n(x) = \nabla U_n(x). \end{aligned} \end{equation} Multiplying \eqref{adfjakfjdkaj} by \(\nabla V_n(x)\), we easily get \(|\nabla V_n(x)|^2 \leq (\nabla U_n(x) \cdot \nabla V_n(x))\). Therefore, by the Cauchy-Schwarz inequality, we have \(\varphi(V_n) \leq \varphi(U_n) \rightarrow \varphi(U)\). Thus the boundedness of \(\{ |\nabla V_n| \}\) in \({\rm L}^2\) assures that \(V_n \to V\) weakly in \(\mathbb{H}^1_0(\Omega)\), and hence we have \((1+\mu\{\varepsilon\partial\psi_r+\kappa\partial \psi_q\})^{-1}{\rm D}(\varphi) \subset {\rm D}(\varphi)\). Furthermore, from the lower semi-continuity of the norm with respect to the weak topology, we derive \(\varphi(V) \leq \varphi(U)\). This is nothing but the desired inequality \eqref{angle1}. \end{proof} Thus by the standard argument of subdifferential operator theory (see Br\'ezis \cite{B1,B2}), we have a unique solution \(U=U_\mu^h\) of (IVP)\(_\mu^h\) satisfying (i)-(iv). Then by letting \(\mu\downarrow0\), we can easily show that \(U_\mu^h\) converges to the unique solution \(U^h\) (satisfying the regularity properties (i)-(iv)) of the following Cauchy problem (cf.
proof of Theorem 2 of \cite{KOS1}): \[ \tag*{(IVP)\(^h\)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U)\!+\!\varepsilon\partial\psi_r(U)\!+\!\kappa\partial \psi_q(U)\!+\!\alpha I\partial\varphi(U)\!+\!U\!-\!h(t) \!=\! F(t),\ t \!\in\! (0,T),\\ &U(0) = U_0. \end{aligned} \right. \] For all \(U_0, V_0 \in \mathbb{L}^2(\Omega)\) and the corresponding solutions \(U(t), V(t)\) of (IVP)\(^h\), by the monotonicity of \(\partial\varphi, I\partial\varphi, \partial\psi_r, \partial\psi_q\), we easily obtain \[ \frac{1}{2}\frac{d}{dt}|U(t)-V(t)|_{\mathbb{L}^2}^2+|U(t)-V(t)|_{\mathbb{L}^2}^2\leq 0, \] whence follows \[ |U(T) - V(T)|_{\mathbb{L}^2} \leq e^{-T}|U_0 - V_0|_{\mathbb{L}^2}. \] Then the Poincar\'e map \(\mathbb{L}^2(\Omega) \ni U_0 \mapsto U(T) \in \mathbb{L}^2(\Omega)\) becomes a strict contraction. Therefore its unique fixed point gives the unique periodic solution of the following problem: \[ \tag*{(AE)\(^h_\varepsilon\)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U)\!+\!\varepsilon\partial\psi_r(U)\!+\!\kappa\partial \psi_q(U)\!+\!\alpha I\partial\varphi(U)\!+\! U \!-\!h(t) \!=\! F(t),\ t \!\in\! (0,T),\\ &U(0) = U(T). \end{aligned} \right. \] Here we note that since \(U(T) \in {\rm D}(\varphi)\cap{\rm D}(\psi_q)\cap{\rm D}(\psi_r)\) by (iii), we automatically have \(U(0)\in {\rm D}(\varphi)\cap{\rm D}(\psi_q)\cap{\rm D}(\psi_r)\). We next define the mapping \[ \mathcal{F}: \mathcal{H}^T \supset {\rm B}_R \ni h \mapsto U_h \mapsto (\gamma + 1)U_h - \beta I\partial\psi_q(U_h) \in {\rm B}_R \subset \mathcal{H}^T, \] where \({\rm B}_R\) is the ball in \(\mathcal{H}^T\) centered at the origin with radius \(R > 0\), to be fixed later, and \(U_h\) is the unique periodic solution of (AE)\(_\varepsilon^h\) with given \(h \in {\rm B}_R\).
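We record the elementary identity behind this fixed-point scheme (our remark, filling in a step left implicit): (AE)\(_\varepsilon^h\) coincides with (AE)\(_\varepsilon\) precisely when \(h(t) = (\gamma+1)U(t) - \beta I\partial\psi_q(U(t))\), since then

```latex
\begin{align*}
  \kappa\,\partial\psi_q(U) + U - h
  = \kappa\,\partial\psi_q(U) + \beta I\,\partial\psi_q(U) - \gamma U
  = (\kappa + \beta I)\,\partial\psi_q(U) - \gamma U.
\end{align*}
% This is exactly the identity recovered at a fixed point of the mapping
% used in the fixed-point argument below.
```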
In order to ensure \(\mathcal{F}(h) \in {\rm B}_R\), we are going to establish a priori estimates for solutions \(U=U_h\) of (AE)\(_\varepsilon^h\). \begin{Lem} Let \(U = U_h\) be the periodic solution for (AE)\(_\varepsilon^h\). Then there exists a constant \(C\) depending only on \(|\Omega|\), \(T\), \(r\), \(\varepsilon\) and \(|F|_{\mathcal{H}^T}\) such that \begin{equation}\label{plm} \sup_{t\in[0,T]}|U(t)|_{\mathbb{L}^2} \leq C + C|h|_{\mathcal{H}^T}^{\frac{1}{r-1}} + C|h|_{\mathcal{H}^T}^{\frac{r}{2(r-1)}}. \end{equation} \end{Lem} \begin{proof} Multiplying (AE)\(_\varepsilon^h\) by the solution \(U\), we obtain \[ \begin{aligned} \frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + \varepsilon|\Omega|^{1 - \frac{r}{2}}|U|_{\mathbb{L}^2}^r &\leq \frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + 2\lambda\varphi(U) + r\varepsilon\psi_r(U) + q\kappa\psi_q(U) + |U|_{\mathbb{L}^2}^2\\ &\leq (|F|_{\mathbb{L}^2} + |h|_{\mathbb{L}^2})|U|_{\mathbb{L}^2}, \end{aligned} \] where we used \(|\Omega|^{1 - \frac{r}{2}}|U|_{\mathbb{L}^2}^r \leq r\psi_r(U)\). Moreover, by Young's inequality, we have \begin{equation} \label{wer} \begin{aligned} \frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + \frac{\varepsilon|\Omega|^{1 - \frac{r}{2}}}{2}|U|_{\mathbb{L}^2}^r \leq C_1 (|F|_{\mathbb{L}^2}^{\frac{r}{r-1}} + |h|_{\mathbb{L}^2}^{\frac{r}{r-1}}),&\\ C_1=\left(1-\frac{1}{r}\right)\left(\frac{r\varepsilon|\Omega|^{1-\frac{r}{2}}}{4}\right)^{-\frac{1}{r-1}}.& \end{aligned} \end{equation} Put \(m = \min_{0 \leq t \leq T}|U(t)|_{\mathbb{L}^2}\) and \(M = \max_{0 \leq t \leq T}|U(t)|_{\mathbb{L}^2}\).
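The passage from \eqref{wer} to the estimate of \(M\) below uses the periodicity of \(U\); we make this step explicit (the notation \(t_*, t^*\) is ours). Choosing \(t_*\) and \(t^*\) with \(|U(t_*)|_{\mathbb{L}^2}=m\) and \(|U(t^*)|_{\mathbb{L}^2}=M\), extending \(U\), \(F\), \(h\) \(T\)-periodically so that \(t_* \le t^* \le t_* + T\), and integrating \eqref{wer} over \((t_*, t^*)\) while discarding the nonnegative term, we get

```latex
\begin{align*}
  \frac{1}{2}M^2 - \frac{1}{2}m^2
  \le C_1 \int_{t_*}^{t^*}\big(|F|_{\mathbb{L}^2}^{\frac{r}{r-1}}
        + |h|_{\mathbb{L}^2}^{\frac{r}{r-1}}\big)\,dt
  \le C_1 \int_0^T\big(|F|_{\mathbb{L}^2}^{\frac{r}{r-1}}
        + |h|_{\mathbb{L}^2}^{\frac{r}{r-1}}\big)\,dt.
\end{align*}
```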
Then we have \[ \begin{aligned} M^2 &\leq m^2 + 2C_1 \int_0^T(|F|_{\mathbb{L}^2}^{\frac{r}{r-1}} + |h|_{\mathbb{L}^2}^{\frac{r}{r-1}})dt\\ &\leq m^2 + 2C_1 T^{\frac{r-2}{2(r-1)}} (|F|_{\mathcal{H}^T}^{\frac{r}{r-1}} + |h|_{\mathcal{H}^T}^{\frac{r}{r-1}}), \end{aligned} \] whence follows \begin{equation} \label{asda} M \leq m + \tilde{C}_1(|F|_{\mathcal{H}^T}^{\frac{r}{2(r-1)}} + |h|_{\mathcal{H}^T}^{\frac{r}{2(r-1)}}), \end{equation} where \[ \tilde{C}_1= \sqrt{2}C_1^{\frac{1}{2}}T^{\frac{r-2}{4(r-1)}}. \] On the other hand, integrating \eqref{wer} over \((0,T)\) and using the periodicity of \(U\), we obtain \[ \frac{1}{2}T\varepsilon|\Omega|^{1 - \frac{r}{2}}m^r \leq C_1T^{\frac{r-2}{2(r-1)}} (|F|_{\mathcal{H}^T}^{\frac{r}{r-1}} + |h|_{\mathcal{H}^T}^{\frac{r}{r-1}}), \] which implies \begin{equation} \label{lkj} m \leq C_2(|F|_{\mathcal{H}^T}^{\frac{1}{r-1}} + |h|_{\mathcal{H}^T}^{\frac{1}{r-1}}), \end{equation} where \[ C_2= \left(\frac{2C_1}{\varepsilon|\Omega|^{1 - \frac{r}{2}}}\right)^{\frac{1}{r}}\frac{1}{T^{\frac{1}{2(r-1)}}}. \] Combining \eqref{asda} with \eqref{lkj}, we have the desired inequality. \end{proof} We note that \(r > 2\) implies \(\frac{1}{r-1}<1\) and \(\frac{r}{2(r-1)} < 1\). \begin{Lem} Let \(U = U_h\) be the periodic solution for (AE)\(^h_\varepsilon\). Then there exists a constant \(C\) depending only on \(|\Omega|\), \(T\), \(r\), \(\lambda\), \(\alpha\), \(\varepsilon\) and \(|F|_{\mathcal{H}^T}\) such that \begin{equation}\label{zaqw} \sup_{t\in[0,T]}\varphi(U(t)) +\int_0^T|\partial\varphi(U(t))|_{\mathbb{L}^2}^2dt +\int_0^T|\partial\psi_r(U(t))|_{\mathbb{L}^2}^2dt \leq C + C|h|_{\mathcal{H}^T}^{2}.
\end{equation} \end{Lem} \begin{proof}\mbox{}\hspace{\parindent} Multiplying (AE)\(_\varepsilon^h\) by \(\partial\varphi(U)\) and using \eqref{skew-symmetric_property} and \eqref{angle}, we get \begin{equation}\label{zz} \frac{d}{dt}\varphi(U) + \frac{\lambda}{2}|\partial\varphi(U)|_{\mathbb{L}^2}^2 + 2\varphi(U)\leq \frac{1}{\lambda}(|F|_{\mathbb{L}^2}^2 + |h|_{\mathbb{L}^2}^2). \end{equation} Set \(m_1 = \min_{0 \leq t \leq T}\varphi(U)\) and \(M_1 = \max_{0 \leq t \leq T}\varphi(U)\). Then we have \[ M_1 \leq m_1 + \frac{1}{\lambda}(|F|_{\mathcal{H}^T}^2 + |h|_{\mathcal{H}^T}^2). \] On the other hand, integrating \eqref{zz} over \((0,T)\), we have \begin{equation}\label{zaq} \frac{\lambda}{2}\int_0^T|\partial\varphi(U)|_{\mathbb{L}^2}^2dt+ 2Tm_1 \leq \frac{1}{\lambda}(|F|_{\mathcal{H}^T}^2 + |h|_{\mathcal{H}^T}^2), \end{equation} whence follows \begin{equation}\label{za} M_1=\max_{t\in[0,T]}\varphi(U(t)) \leq \left(1 + \frac{1}{2T}\right)\frac{1}{\lambda}(|F|_{\mathcal{H}^T}^2 + |h|_{\mathcal{H}^T}^2). \end{equation} Next we multiply (AE)\(_\varepsilon^h\) by \(\partial\psi_r(U)\); then, in view of \((\partial\psi_r(U),\partial\psi_q(U))_{\mathbb{L}^2}\geq0\) and \eqref{angle}, we get \begin{equation}\label{zzz} \frac{d}{dt}\psi_r(U) + \frac{\varepsilon}{4}|\partial\psi_r(U)|_{\mathbb{L}^2}^2 \leq \frac{1}{\varepsilon}(\alpha^2|\partial\varphi(U)|_{\mathbb{L}^2}^2 + |F|_{\mathbb{L}^2}^2 + |h|_{\mathbb{L}^2}^2). \end{equation} Integrating \eqref{zzz} with respect to \(t\) over \((0,T)\), we obtain by \eqref{zaq} \[ \frac{\varepsilon}{4} |\partial\psi_r(U)|_{\mathcal{H}^T}^2= \frac{\varepsilon}{4}\int_0^T|\partial\psi_r(U)|^2_{\mathbb{L}^2}dt \leq \frac{1}{\varepsilon}\left(1 + \frac{2\alpha^2}{\lambda^2}\right) (|F|_{\mathcal{H}^T}^2 + |h|_{\mathcal{H}^T}^2).
\] \end{proof} \begin{proof}[Proof of Proposition \ref{GWP}] By the interpolation inequality, we find that for any \(\eta>0\) there exists \(C_\eta>0\) such that \begin{equation}\label{zaqws} |\partial\psi_q(U)|_{\mathbb{L}^2}^2 \leq |\partial\psi_r(U)|_{\mathbb{L}^2}^{2\frac{q-2}{r-2}}|U|_{\mathbb{L}^2}^{2\frac{r-q}{r-2}} \leq \eta|\partial\psi_r(U)|_{\mathbb{L}^2}^2 + C_\eta|U|_{\mathbb{L}^2}^2, \end{equation} where the first inequality follows from H\"older's inequality applied to \(|\partial\psi_q(U)|_{\mathbb{L}^2}^2=\int_\Omega|U|_{\mathbb{R}^2}^{2(q-1)}dx\) with exponents \(\frac{r-2}{q-2}\) and \(\frac{r-2}{r-q}\), and the second from Young's inequality. Hence, by virtue of \eqref{orth:IU}, \eqref{plm} and \eqref{zaqw}, we get \[ |\mathcal{F}(h)|_{\mathcal{H}^T}^2 \begin{aligned}[t] &= |\beta I \partial\psi_q(U) - (\gamma + 1) U|_{\mathcal{H}^T}^2\\ &= |\beta|^2|\partial\psi_q(U)|_{\mathcal{H}^T}^2 + |\gamma + 1|^2|U|_{\mathcal{H}^T}^2\\ &\leq \eta|\beta|^2\{C + C|h|_{\mathcal{H}^T}^{2}\} + (|\gamma + 1|^2+C_\eta|\beta|^2) T\left\{ C + C|h|_{\mathcal{H}^T}^{\frac{1}{r-1}} + C|h|_{\mathcal{H}^T}^{\frac{r}{2(r-1)}}\right\}^2. \end{aligned} \] Here we fix \(\eta\) such that \[ \eta = \frac{1}{2}|\beta|^{-2}C^{-1} \] and take a sufficiently large \(R\) such that \[ \eta|\beta|^2C + \frac{1}{2}R^2 + (|\gamma + 1|^2+C_\eta|\beta|^2) T\left\{ C + CR^{\frac{1}{r-1}} + CR^{\frac{r}{2(r-1)}}\right\}^2 \leq R^2; \] such an \(R\) exists since the exponents \(\frac{1}{r-1}\) and \(\frac{r}{2(r-1)}\) are less than \(1\). Thus we conclude that \(\mathcal{F}\) maps \({\rm B}_R\) into itself. Next we show that \(\mathcal{F}\) is continuous with respect to the weak topology of \(\mathcal{H}^T\). Let \(h_n \rightharpoonup h\) weakly in \({\rm B}_R \subset {\rm L}^2(0,T;\mathbb{L}^2(\Omega))\) and let \(U_n\) be the unique periodic solution of (AE)\(_\varepsilon^{h_n}\). The estimates \eqref{plm}, \eqref{zaqw} and the Rellich-Kondrachov theorem ensure that \(\{U_n(t)\}_{n\in\mathbb{N}}\) is precompact in \(\mathbb{L}^2(\Omega)\) for all \(t\in[0,T]\).
On the other hand, from estimates \eqref{zaqw}, \eqref{zaqws} and equation (AE)\(_\varepsilon^h\), we derive \begin{equation}\label{zaqwsx} \int_0^T\left|\frac{dU_n}{dt}(t)\right|^2dt\leq C \end{equation} for a suitable constant \(C\), whence it follows that \(\{U_n(t)\}_{n\in\mathbb{N}}\) forms an equi-continuous family in \(\mathrm{C}([0,T];\mathbb{L}^2(\Omega))\). Hence we can apply Ascoli's theorem to obtain a strongly convergent subsequence in \({\rm C}([0,T];\mathbb{L}^2(\Omega))\) (denoted again by \(\{U_n\}\)). Thus by virtue of \eqref{zaqw} and the demi-closedness of the operators \(\partial\varphi,\partial\psi_q,\partial\psi_r,\frac{d}{dt}\), we obtain \begin{align} U_n&\to U&&\mbox{strongly in}\ {\rm C}([0,T];\mathbb{L}^2(\Omega)),\\ \partial\varphi(U_n)&\rightharpoonup \partial\varphi(U)&&\mbox{weakly in}\ {\rm L}^2(0,T;\mathbb{L}^2(\Omega)),\\ \partial\psi_q(U_n)&\rightharpoonup \partial\psi_q(U)&&\mbox{weakly in}\ {\rm L}^2(0,T;\mathbb{L}^2(\Omega)),\\ \label{pl}\partial\psi_r(U_n)&\rightharpoonup \partial\psi_r(U)&&\mbox{weakly in}\ {\rm L}^2(0,T;\mathbb{L}^2(\Omega)),\\ \frac{dU_n}{dt}&\rightharpoonup \frac{dU}{dt}&&\mbox{weakly in}\ {\rm L}^2(0,T;\mathbb{L}^2(\Omega)). \end{align} Consequently, \(U\) satisfies \[ \tag*{(AE)\(_\varepsilon^h\)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U)\!+\!\varepsilon\partial\psi_r(U)\!+\!\kappa\partial \psi_q(U)\!+\!\alpha I\partial\varphi(U)\!+\! U \!-\!h(t) \!=\! F(t),\ t \!\in\! (0,T),\\ &U(0) = U(T), \end{aligned} \right. \] that is, \(U\) is the unique periodic solution of (AE)\(_\varepsilon^h\). Since the argument above does not depend on the choice of subsequences, we can conclude that \(\mathcal{F}\) is weakly continuous in \(\mathcal{H}^T={\rm L}^2(0, T;\mathbb{L}^2(\Omega))\). Therefore by Schauder's fixed point theorem, we obtain a fixed point of the mapping \(\mathcal{F}\), which gives the desired periodic solution of (AE)\(_\varepsilon\).
\end{proof} \section{Proof of Theorem \ref{MTHM}} In this section, we discuss the convergence of periodic solutions \(U_\varepsilon\) of (AE)\(_\varepsilon\) as \(\varepsilon\to0\) by establishing a priori estimates independent of \(\varepsilon\). \[ \tag*{(AE)\(_\varepsilon\)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U)\!+\!\alpha I\partial\varphi(U)\!+\!\varepsilon\partial\psi_r(U)\!+\!(\kappa\!+\! \beta I) \partial \psi_q(U) \!-\! \gamma U \!=\! F(t),\ t \!\in\! (0,T),\\ &U(0) = U(T), \end{aligned} \right. \] with \(r > q\) and \(\varepsilon>0\). To this end, we mainly rely on our key inequality \eqref{key_inequality_1} and repeat much the same arguments as those in our previous paper \cite{KOS1}. \begin{Lem}\label{1st_energy} Let \(U = U_\varepsilon\) be the periodic solution of (AE)\(_\varepsilon\). Then there exists a constant \(C\) depending only on \(|\Omega|\), \(T\), \(\lambda\), \(\kappa\), \(\gamma\), \(q\) and \(|F|_{\mathcal{H}^T}\) but not on \(\varepsilon\) such that \begin{equation}\label{1e} \sup_{t\in[0,T]}|U(t)|_{\mathbb{L}^2}^2 +\int_0^T\varphi(U(t))dt + \varepsilon\int_0^T\psi_r(U(t))dt + \int_0^T\psi_q(U(t))dt \leq C. \end{equation} \end{Lem} \begin{proof} Multiplying (AE)\(_\varepsilon\) by \(U\), we get by \eqref{orth:IU} \[ \begin{aligned} &\frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + \frac{\kappa}{2}|\Omega|^{1 - \frac{q}{2}}|U|_{\mathbb{L}^2}^q +2\lambda\varphi(U) + r\varepsilon\psi_r(U) + \frac{q\kappa}{2}\psi_q(U)-\gamma|U|_{\mathbb{L}^2}^2\\ &\leq \frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + 2\lambda\varphi(U) + r\varepsilon\psi_r(U) + q\kappa\psi_q(U)-\gamma|U|_{\mathbb{L}^2}^2\\ &\leq |F|_{\mathbb{L}^2}|U|_{\mathbb{L}^2}, \end{aligned} \] where we used \(|\Omega|^{1 - \frac{q}{2}}|U|_{\mathbb{L}^2}^q\leq q\psi_q(U)\).
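This last inequality can be checked directly; assuming, as the computation above suggests, the normalization \(\psi_q(U)=\frac{1}{q}\int_\Omega|U|^qdx\) and \(q>2\), H\"older's inequality with the exponent pair \(\frac{q}{2}\), \(\frac{q}{q-2}\) gives \[ |U|_{\mathbb{L}^2}^2=\int_\Omega|U|^2dx \leq\left(\int_\Omega|U|^qdx\right)^{\frac{2}{q}}|\Omega|^{1-\frac{2}{q}} =\left(q\psi_q(U)\right)^{\frac{2}{q}}|\Omega|^{1-\frac{2}{q}}, \] and raising both sides to the power \(\frac{q}{2}\) yields \(|\Omega|^{1 - \frac{q}{2}}|U|_{\mathbb{L}^2}^q\leq q\psi_q(U)\).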
Since there exists a constant \(C_3\) such that \[ \gamma|U|_{\mathbb{L}^2}^2\leq\frac{\kappa}{4}|\Omega|^{1-\frac{q}{2}}|U|_{\mathbb{L}^2}^q+C_3, \] we obtain \[ \begin{aligned} &\frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + \frac{\kappa|\Omega|^{1 - \frac{q}{2}}}{4}|U|_{\mathbb{L}^2}^q +2\lambda\varphi(U) + r\varepsilon\psi_r(U) + \frac{q\kappa}{2}\psi_q(U)\\ &\leq |F|_{\mathbb{L}^2}|U|_{\mathbb{L}^2}+C_3. \end{aligned} \] Moreover by Young's inequality, we have \begin{equation} \label{wers} \begin{aligned} &\frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + \frac{\kappa|\Omega|^{1 - \frac{q}{2}}}{8}|U|_{\mathbb{L}^2}^q +2\lambda\varphi(U) + r\varepsilon\psi_r(U) + \frac{q\kappa}{2}\psi_q(U)\\ &\leq \left(1-\frac{1}{q}\right) \left(\frac{q\kappa|\Omega|^{1-\frac{q}{2}}}{8}\right)^{-\frac{1}{q-1}} |F|_{\mathbb{L}^2}^{\frac{q}{q-1}}+C_3. \end{aligned} \end{equation} Put \(m = \min_{0 \leq t \leq T}|U(t)|_{\mathbb{L}^2}\) and \(M = \max_{0 \leq t \leq T}|U(t)|_{\mathbb{L}^2}\). Then we have \[ \begin{aligned} M^2 &\leq m^2 + 2\left(1-\frac{1}{q}\right) \left(\frac{q\kappa|\Omega|^{1-\frac{q}{2}}}{8}\right)^{-\frac{1}{q-1}} \int_0^T|F|_{\mathbb{L}^2}^{\frac{q}{q-1}}dt + 2C_3T\\ &\leq m^2 + 2\left(1-\frac{1}{q}\right) \left(\frac{q\kappa|\Omega|^{1-\frac{q}{2}}}{8}\right)^{-\frac{1}{q-1}} T^{\frac{q-2}{2(q-1)}} |F|_{\mathcal{H}^T}^{\frac{q}{q-1}} + 2C_3T, \end{aligned} \] whence follows \begin{equation} \label{asdas} M \leq m + \sqrt{2}\left(1-\frac{1}{q}\right)^{\frac{1}{2}} \left(\frac{q\kappa|\Omega|^{1-\frac{q}{2}}}{8}\right)^{-\frac{1}{2(q-1)}} T^{\frac{q-2}{4(q-1)}} |F|_{\mathcal{H}^T}^{\frac{q}{2(q-1)}} +\sqrt{2C_3T}. 
\end{equation} Consequently, integrating \eqref{wers} with respect to \(t\) over \((0,T)\), we obtain \[ \begin{aligned} &\frac{\kappa|\Omega|^{1 - \frac{q}{2}}}{8}Tm^q +2\lambda\int_0^T\varphi(U(t))dt + r\varepsilon\int_0^T\psi_r(U(t))dt + \frac{q\kappa}{2}\int_0^T\psi_q(U(t))dt\\ &\leq \left(\frac{q\kappa|\Omega|^{1-\frac{q}{2}}}{8}\right)^{-\frac{1}{q-1}} T^{\frac{q-2}{2(q-1)}} |F|_{\mathcal{H}^T}^{\frac{q}{q-1}} +C_3T, \end{aligned} \] which implies \begin{equation} \label{lkjs} m \leq \left[ \frac{8}{T\kappa|\Omega|^{1 - \frac{q}{2}}} \left\{ \left(\frac{q\kappa|\Omega|^{1-\frac{q}{2}}}{8}\right)^{-\frac{1}{q-1}} T^{\frac{q-2}{2(q-1)}} |F|_{\mathcal{H}^T}^{\frac{q}{q-1}} +C_3T \right\} \right]^{\frac{1}{q}}. \end{equation} Thus \eqref{1e} follows from \eqref{asdas} and \eqref{lkjs}. \end{proof} \begin{Lem}\label{2nd_energy} Let \(U = U_\varepsilon\) be the periodic solution of (AE)\(_\varepsilon\) and assume \((\frac{\alpha}{\lambda},\frac{\beta}{\kappa})\in{\rm CGL}(c_q^{-1})\). Then there exists a constant \(C\) depending only on \(|\Omega|\), \(T\), \(q\), \(\lambda,\kappa,\alpha,\beta,\gamma\) and \(|F|_{\mathcal{H}^T}\) but not on \(\varepsilon\) such that \begin{equation}\label{zaqwss} \begin{aligned} &\sup_{t \in [0,T]}\varphi(U(t)) +\sup_{t \in [0, T]}\psi_q(U(t)) +\sup_{t \in [0, T]}\varepsilon \psi_r(U(t))\\ &+\int_0^T|\partial \varphi(U(t))|_{\mathbb{L}^2}^2dt +\int_0^T|\partial \psi_q(U(t))|_{\mathbb{L}^2}^2dt\\ &+\int_0^T \left| \frac{dU(t)}{dt}\right|_{\mathbb{L}^2}^2 dt + \varepsilon^2\int_0^T|\partial\psi_r(U(t))|_{\mathbb{L}^2}^2dt \leq C. 
\end{aligned} \end{equation} \end{Lem} \begin{proof} Multiplication of (AE)\(_\varepsilon\) by $\partial \varphi(U)$ and $\partial \psi_q(U)$ together with \eqref{angle} and \eqref{skew-symmetric_property} gives \begin{align} \label{afdidfhafdkjftt} & \frac{d}{dt} \varphi(U) + \lambda |\partial \varphi(U)|_{\mathbb{L}^2}^2 + \kappa G + \beta B \leq 2 \gamma_+ \varphi(U) + (F, \partial \varphi(U))_{\mathbb{L}^2}, \\ \label{fsdksdfk} & \frac{d}{dt} \psi_q(U(t)) + \kappa |\partial \psi_q(U)|_{\mathbb{L}^2}^2 + \lambda G - \alpha B \leq q \gamma_+ \psi_q(U(t)) + (F, \partial \psi_q(U))_{\mathbb{L}^2}, \end{align} where $\gamma_+:=\max \{\gamma, 0\}$ and \begin{equation*} G:=(\partial \varphi(U), \partial \psi_q(U))_{\mathbb{L}^2}, \quad B:=(\partial \varphi(U), I \partial \psi_q(U))_{\mathbb{L}^2}. \end{equation*} We add \eqref{afdidfhafdkjftt}$\times \delta^2$ to \eqref{fsdksdfk} for some $\delta >0$ to get \begin{equation}\label{sfdsldflsdfsl} \begin{aligned} &\frac{d}{dt} \left\{ \delta^2 \varphi(U) + \psi_q(U) \right\} + \delta^2 \lambda |\partial \varphi(U)|_{\mathbb{L}^2}^2 + \kappa |\partial \psi_q(U)|_{\mathbb{L}^2}^2 \\ &+(\delta^2 \kappa+\lambda) G +(\delta^2 \beta - \alpha )B \\ & \leq \gamma_+\left\{ 2 \delta^2 \varphi(U) + q \psi_q(U) \right\} + (F, \delta^2 \partial \varphi(U) + \partial \psi_q(U))_{\mathbb{L}^2}. \end{aligned} \end{equation} Here we introduce another parameter \(\epsilon\in(0,\min\{\lambda, \kappa\})\).
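We shall also use the following consequence of Bessel's inequality, stated under the standing assumption (reflected in \eqref{skew-symmetric_property} and \eqref{orth:IU}) that \(I\) is a skew-symmetric isometry of \(\mathbb{L}^2(\Omega)\): whenever \(w:=\partial\psi_q(U)\neq0\), the functions \(w/|w|_{\mathbb{L}^2}\) and \(Iw/|w|_{\mathbb{L}^2}\) are orthonormal in \(\mathbb{L}^2(\Omega)\), so Bessel's inequality applied to \(\partial\varphi(U)\) gives \begin{equation*} G^2+B^2 = (\partial \varphi(U), w)_{\mathbb{L}^2}^2 + (\partial \varphi(U), I w)_{\mathbb{L}^2}^2 \leq |\partial \varphi(U)|_{\mathbb{L}^2}^2\, |\partial \psi_q(U)|_{\mathbb{L}^2}^2, \end{equation*} the case \(w=0\) being trivial.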
By the inequality of arithmetic and geometric means, and Bessel's inequality, we have \begin{equation} \begin{aligned} &\delta^2 \lambda |\partial \varphi(U)|_{\mathbb{L}^2}^2 + \kappa |\partial \psi_q(U)|_{\mathbb{L}^2}^2 \\ & =\epsilon \left\{ \delta^2 |\partial \varphi(U)|_{\mathbb{L}^2}^2 + |\partial \psi_q(U)|_{\mathbb{L}^2}^2 \right\} +(\lambda- \epsilon) \delta^2 |\partial \varphi(U)|_{\mathbb{L}^2}^2 +(\kappa - \epsilon) |\partial \psi_q(U)|_{\mathbb{L}^2}^2 \\ & \geq \epsilon \left\{ \delta^2 |\partial \varphi(U)|_{\mathbb{L}^2}^2 + |\partial \psi_q(U)|_{\mathbb{L}^2}^2 \right\} +2\sqrt{ (\lambda-\epsilon)(\kappa-\epsilon)\delta^2 |\partial \varphi(U)|_{\mathbb{L}^2}^2 |\partial \psi_q(U)|_{\mathbb{L}^2}^2 } \\ \label{fdfkskgkd} & \geq \epsilon \left\{ \delta^2 |\partial \varphi(U)|_{\mathbb{L}^2}^2 + |\partial \psi_q(U)|_{\mathbb{L}^2}^2 \right\} +2\sqrt{ (\lambda-\epsilon)(\kappa-\epsilon)\delta^2 (G^2+B^2) }. \end{aligned} \end{equation} We here recall the key inequality \eqref{key_inequality_1} \begin{align} \label{fadfks} G \geq c_q^{-1}|B|. \end{align} Hence \eqref{sfdsldflsdfsl}, \eqref{fdfkskgkd} and \eqref{fadfks} yield \begin{equation} \begin{aligned} &\frac{d}{dt} \left\{ \delta^2 \varphi(U) \!+\! \psi_q(U) \right\} \!+\!\epsilon \left\{ \delta^2 |\partial \varphi(U)|_{\mathbb{L}^2}^2 \!+\! |\partial \psi_q(U)|_{\mathbb{L}^2}^2 \right\} \!+\!J(\delta, \epsilon)|B| \\ \label{sfdsldflsksfdhsl} & \leq \gamma_+\left\{ 2 \delta^2 \varphi(U) \!+\!q\psi_q(U) \right\} \!+\! (F, \delta^2 \partial \varphi(U) \!+\! \partial \psi_q(U))_{\mathbb{L}^2}, \end{aligned} \end{equation} where \begin{align*} J(\delta, \epsilon):=2 \delta \sqrt{ (1+c_q^{-2})(\lambda-\epsilon)(\kappa-\epsilon)} + c_q^{-1}(\delta^2 \kappa+\lambda) -|\delta^2 \beta - \alpha|. \end{align*} Now we are going to show that $(\frac{\alpha}{\lambda}, \frac{\beta}{\kappa}) \in {\rm CGL}(c_q^{-1})$ assures $J(\delta, \epsilon) \geq 0$ for some $\delta$ and $\epsilon$.
By the continuity of $J(\delta, \cdot) : \epsilon \mapsto J(\delta, \epsilon)$ it suffices to show $J(\delta, 0) > 0$ for some $\delta$. When $\alpha \beta >0$, it is enough to take $\delta=\sqrt{\alpha / \beta}$. When $\alpha \beta \leq 0$, we have $|\delta^2 \beta - \alpha| =\delta^2 |\beta|+ |\alpha|$. Hence \begin{align*} J(\delta, 0) = (c_q^{-1}\kappa -|\beta|)\delta^2 +2 \delta \sqrt{(1+c_q^{-2})\lambda \kappa} +(c_q^{-1}\lambda-|\alpha|). \end{align*} Therefore if $|\beta|/ \kappa \leq c_q^{-1}$, we get $J(\delta, 0) > 0$ for sufficiently large $\delta >0$. If $c_q^{-1} < |\beta| / \kappa$, we find that it is enough to check that the discriminant is positive: \begin{align} D/4:=(1+c_q^{-2})\lambda \kappa -(c_q^{-1}\kappa -|\beta|) (c_q^{-1}\lambda-|\alpha|)>0. \end{align} Expanding the product, we have \begin{align*} D/4=\lambda \kappa + c_q^{-1}\left(\kappa|\alpha| + \lambda|\beta|\right) - |\alpha||\beta|, \end{align*} so that, dividing by \(\lambda\kappa>0\), \begin{align*} D/4>0 \Leftrightarrow \frac{|\alpha|}{\lambda}\frac{|\beta|}{\kappa}-1 <c_q^{-1}\left( \frac{|\alpha|}{\lambda}+\frac{|\beta|}{\kappa} \right). \end{align*} Hence the condition $(\frac{\alpha}{\lambda}, \frac{\beta}{\kappa}) \in {\rm CGL}(c_q^{-1})$ yields $D>0$, whence $J(\delta, 0)>0$ for some $\delta\neq0$. Now we take $\delta\neq0$ and $\epsilon>0$ such that $J(\delta, \epsilon) \geq 0$. Applying Young's inequality to \eqref{sfdsldflsksfdhsl}, we obtain \begin{equation} \label{qazs} \begin{aligned} &\frac{d}{dt} \left\{ \delta^2 \varphi(U) \!+\! \psi_q(U) \right\} \!+\!\frac{\epsilon}{2} \left\{ \delta^2 |\partial \varphi(U)|_{\mathbb{L}^2}^2 \!+\! |\partial \psi_q(U)|_{\mathbb{L}^2}^2 \right\} \!+\!J(\delta, \epsilon)|B| \\ & \leq \gamma_+\left\{ 2 \delta^2 \varphi(U) \!+\!q\psi_q(U) \right\} \!+\! \frac{1+\delta^2}{2\epsilon}|F|_{\mathbb{L}^2}^2. \end{aligned} \end{equation} We integrate \eqref{qazs} with respect to \(t\) over \((0,T)\) and then by Lemma \ref{1st_energy} we obtain \begin{equation}\label{plmj} \int_0^T|\partial \varphi(U(t))|_{\mathbb{L}^2}^2dt +\int_0^T|\partial \psi_q(U(t))|_{\mathbb{L}^2}^2dt\leq C.
\end{equation} Multiplying (AE)\(_\varepsilon\) by \(\varepsilon\partial\psi_r(U)\) and applying Young's inequality, we obtain by \eqref{orth:Ipsi} and \eqref{angle}, \begin{equation}\label{plmk} \varepsilon\frac{d}{dt}\psi_r(U) + \frac{\varepsilon^2}{2}|\partial\psi_r(U)|_{\mathbb{L}^2}^2 \leq r\gamma_+\varepsilon\psi_r(U) + \alpha^2|\partial\varphi(U)|_{\mathbb{L}^2}^2 + |F(t)|_{\mathbb{L}^2}^2. \end{equation} We integrate \eqref{plmk} with respect to \(t\) over \((0,T)\), then by \eqref{1e} and \eqref{plmj}, we obtain \begin{equation} \varepsilon^2\int_0^T|\partial \psi_r(U(t))|_{\mathbb{L}^2}^2dt\leq C. \end{equation} Hence by equation (AE)\(_\varepsilon\), we can estimate the time derivative of solutions, i.e., \begin{equation} \int_0^T\left|\frac{dU(t)}{dt}\right|_{\mathbb{L}^2}^2dt\leq C. \end{equation} Now we are going to derive a priori estimates for the first three terms in \eqref{zaqwss}. Applying \eqref{angle} and Young's inequality to \eqref{afdidfhafdkjftt} and \eqref{fsdksdfk}, we obtain \begin{align} \label{wsx} & \begin{aligned} &\frac{d}{dt} \varphi(U) + \frac{\lambda}{2} |\partial \varphi(U)|_{\mathbb{L}^2}^2 +\varphi(U)\\ & \leq (2 \gamma_+ +1)\varphi(U) + \frac{1}{\lambda}|F|_{\mathbb{L}^2}^2 + \frac{\beta^2}{\lambda}|\partial\psi_q(U)|_{\mathbb{L}^2}^2, \end{aligned}\\ \label{edc} & \begin{aligned} &\frac{d}{dt} \psi_q(U(t)) + \frac{\kappa}{2} |\partial \psi_q(U)|_{\mathbb{L}^2}^2 + \psi_q(U(t))\\ & \leq (q \gamma_++1) \psi_q(U(t)) + \frac{1}{\kappa}|F|_{\mathbb{L}^2}^2+ \frac{\alpha^2}{\kappa}|\partial\varphi(U)|_{\mathbb{L}^2}^2. \end{aligned} \end{align} Moreover we modify \eqref{plmk} as \begin{equation} \begin{aligned} &\varepsilon\frac{d}{dt}\psi_r(U) + \frac{\varepsilon^2}{2}|\partial\psi_r(U)|_{\mathbb{L}^2}^2+\varepsilon\psi_r(U)\\ &\leq (r\gamma_++1)\varepsilon\psi_r(U) + \alpha^2|\partial\varphi(U)|_{\mathbb{L}^2}^2 + |F(t)|_{\mathbb{L}^2}^2. 
\end{aligned} \end{equation} Since the arguments for deducing estimates concerning \(\sup_{t\in[0,T]}\psi_q(U(t))\) and \(\sup_{t\in[0,T]}\varepsilon\psi_r(U(t))\) are the same as those for \(\sup_{t\in[0,T]}\varphi(U(t))\), here we only show how to deduce the estimate for \(\sup_{t\in[0,T]}\varphi(U(t))\). Set \(m_1 = \min_{0 \leq t \leq T}\varphi(U(t)) = \varphi(U(t_1))\) and \(M_1 = \max_{0 \leq t \leq T}\varphi(U(t))=\varphi(U(t_2))\) with \(t_2\in(t_1,t_1+T]\). Then we integrate \eqref{wsx} with respect to \(t\) on \((t_1,t_2)\). Noting \eqref{1e} and \eqref{plmj}, we obtain \[ M_1 \leq m_1 + C. \] On the other hand, integrating \eqref{wsx} with respect to \(t\) over \((0,T)\), by \eqref{1e} and \eqref{plmj} we obtain \[ m_1T \leq C, \] whence the following estimate holds: \[ M_1 \leq \left(1+\frac{1}{T}\right)C, \] which gives the desired estimate. \end{proof} \begin{proof}[Proof of Theorem \ref{MTHM}] By \eqref{1e}, \eqref{zaqwss} and the Rellich--Kondrachov theorem, \(\{U_\varepsilon(t)\}_{\varepsilon>0}\) is precompact in \(\mathbb{L}^2(\Omega)\) for all \(t\in[0,T]\). Moreover the \(\mathrm{L}^2(0,T;\mathbb{L}^2(\Omega))\) estimate for \(\frac{dU}{dt}\) in \eqref{zaqwss} ensures that \(\{U_\varepsilon\}_{\varepsilon>0}\) is equi-continuous. Hence by Ascoli's theorem, there exists a subsequence \(\{U_n\}_{n\in\mathbb{N}}:=\{U_{\varepsilon_n}\}_{n\in\mathbb{N}}\) of \(\{U_\varepsilon\}_{\varepsilon>0}\) which converges strongly in \({\rm C}([0,T];\mathbb{L}^2(\Omega))\).
On the other hand, by \eqref{zaqwss} and the demi-closedness of the operators \(\partial\varphi,\partial\psi_q,\frac{d}{dt}\) we can extract a subsequence of \(\{U_n\}_{n\in\mathbb{N}}\) denoted again by \(\{U_n\}_{n\in\mathbb{N}}\) such that \begin{align}\label{rfv} U_n&\to U&&\mbox{strongly in}\ {\rm C}([0,T];\mathbb{L}^2(\Omega)),\\ \partial\varphi(U_n)&\rightharpoonup \partial\varphi(U)&&\mbox{weakly in}\ {\rm L}^2(0,T;\mathbb{L}^2(\Omega)),\\ \partial\psi_q(U_n)&\rightharpoonup \partial\psi_q(U)&&\mbox{weakly in}\ {\rm L}^2(0,T;\mathbb{L}^2(\Omega)),\\ \varepsilon_n\partial\psi_r(U_n)&\rightharpoonup g&&\mbox{weakly in}\ {\rm L}^2(0,T;\mathbb{L}^2(\Omega)),\\ \frac{dU_n}{dt}&\rightharpoonup \frac{dU}{dt}&&\mbox{weakly in}\ {\rm L}^2(0,T;\mathbb{L}^2(\Omega)), \end{align} where \(g \in {\rm L}^2(0,T;\mathbb{L}^2(\Omega))\). It remains to show that \(g=0\). Due to \eqref{zaqwss}, we get \[ \begin{aligned} |\varepsilon|U_\varepsilon|^{r - 2}U_\varepsilon|_{\mathbb{L}^{\frac{r}{r - 1}}}^{\frac{r}{r - 1}}= \int_\Omega\varepsilon^{\frac{r}{r - 1}}|U_\varepsilon|^rdx= \varepsilon^{\frac{1}{r - 1}}\varepsilon\int_\Omega|U_\varepsilon|^rdx \leq \varepsilon^{\frac{1}{r - 1}}rC \to 0&\\ \mbox{as}\ \varepsilon \to 0\quad\mbox{uniformly on}\ [0,T],& \end{aligned} \] which yields \begin{equation} \label{edelpsir-Uerr-1} \varepsilon_n|U_n|^{r - 2}U_n \rightarrow 0\quad\mbox{strongly in}\ \mathbb{L}^{\frac{r}{r - 1}}(\Omega)\quad\mbox{uniformly on}\ [0, T]. \end{equation} Hence \[ \varepsilon_n|U_n|^{r - 2}U_n \rightarrow 0=g\quad\mbox{in}\ \mathcal{D}'((0,T)\times\Omega). \] Therefore \(U\) satisfies the equation (ACGL) and the convergence \eqref{rfv} ensures that \(U(0)=U(T)\). \end{proof}
\section{Introduction} It is a widely held misconception that there can be no general, type independent theory for the existence and regularity of solutions to nonlinear PDEs. Arnold \cite{Arnold} ascribes this to the more complicated geometry of $\mathbb{R}^{n}$, as opposed to $\mathbb{R}$, which is relevant to ODEs alone. Evans \cite{Evans}, on the other hand, cites the wide variety of physical and probabilistic phenomena that are modelled with PDEs. There are, however, two general, type independent theories for the solutions of nonlinear PDEs. The Central Theory for PDEs, as developed by Neuberger \cite{Neuberger 1}, is based on a generalized method of steepest descent in suitably constructed Hilbert spaces. It delivers generalized solutions to nonlinear PDEs in a type independent way, although the method is \textit{not} universally applicable. However, it does yield spectacular numerical results. The Order Completion Method \cite{Obergugenberger and Rosinger}, on the other hand, yields the generalized solutions to arbitrary, continuous nonlinear PDEs of the form \begin{eqnarray} T\left(x,D\right)u\left(x\right)=f\left(x\right)\label{PDE} \end{eqnarray} where $\Omega\subseteq\mathbb{R}^{n}$ is open and nonempty, $f$ is continuous, and the PDE operator $T\left(x,D\right)$ is defined through some jointly continuous mapping \begin{eqnarray} F:\Omega\times\mathbb{R}^{K}\rightarrow \mathbb{R} \end{eqnarray} by \begin{eqnarray} T\left(x,D\right):u\left(x\right)\mapsto F\left(x,u\left(x\right),...,D^{\alpha}u\left(x\right),...\right) \end{eqnarray} The generalized solutions are obtained as elements of the Dedekind completion of certain spaces of functions, and may be assimilated with usual Hausdorff continuous, interval valued functions on the domain of the PDE operator \cite{Anguelov and Rosinger 1}.
Recently, see \cite{vdWalt5} through \cite{vdWalt7}, the Order Completion Method \cite{Obergugenberger and Rosinger} was \textit{reformulated} and \textit{enriched} by introducing suitable uniform convergence spaces, in the sense of \cite{Beattie}. In this new setting it is possible, for instance, to treat PDEs with additional smoothness, over and above the mere \textit{continuity} of the PDE operator, in a way that allows for a significantly higher degree of regularity of the solutions \cite{vdWalt6}. The aim of this paper is to show how the ideas developed in \cite{vdWalt5} through \cite{vdWalt7} may be applied to initial and / or boundary value problems. In this regard, we consider a family of nonlinear first order Cauchy problems. The generalized solutions are obtained as elements of the completion of a suitably constructed uniform convergence space. We note the relative ease and simplicity of the method presented here, compared to the usual linear function analytic methods. In this way we come to note another of the advantages of solving initial and/or boundary value problems for linear and nonlinear PDEs in this way. Namely, such problems are solved by precisely the same kind of constructions as the free problems. On the other hand, as is well known, this is not so when function analytic methods - in particular, involving distributions, their restrictions to lower dimensional manifolds, or the associated trace operators - are used for the solution of such problems. The paper is organized as follows. In Section 2 we introduce such definitions and results as are required in what follows. We omit the proofs, which can be found in \cite{vdWalt5} through \cite{vdWalt7}. Section 3 is concerned with the solutions of a class of nonlinear first order Cauchy problems. In Section 4 we discuss the possible interpretation of the generalized solutions obtained.
\section{Preliminaries} Let $\Omega$ be some open and nonempty subset of $\mathbb{R}^{n}$, and let $\overline{\mathbb{R}}$ denote the extended real line \begin{eqnarray} \overline{\mathbb{R}}=\mathbb{R}\cup\{\pm\infty\}\nonumber \end{eqnarray} A function $u:\Omega\rightarrow \overline{\mathbb{R}}$ belongs to $\mathcal{ML}^{m}_{0}\left(\Omega\right)$, for some integer $m$, whenever $u$ is normal lower semi-continuous, in the sense of Dilworth \cite{Dilworth}, and \begin{eqnarray} \begin{array}{ll} \exists & \Gamma_{u}\subset\Omega\mbox{ closed nowhere dense :} \\ & \begin{array}{ll} 1) & \textit{mes}\left(\Gamma_{u}\right)=0 \\ 2) & u\in\mathcal{C}^{m}\left(\Omega\setminus\Gamma_{u}\right) \\ \end{array}\\ \end{array}\label{ML0Def} \end{eqnarray} Here $\textit{mes}\left(\Gamma_{u}\right)$ denotes the Lebesgue measure of the set $\Gamma_{u}$. Recall \cite{Anguelov} that a function $u:\Omega\rightarrow \overline{\mathbb{R}}$ is normal lower semi-continuous whenever \begin{eqnarray} \begin{array}{ll} \forall & x\in\Omega\mbox{ :} \\ & I\left(S\left(u\right)\right)\left(x\right)=u\left(x\right) \\ \end{array}\label{NLSCDef} \end{eqnarray} where \begin{eqnarray} \begin{array}{ll} \forall & u:\Omega\rightarrow \overline{\mathbb{R}}\mbox{ :} \\ & \begin{array}{ll} 1) & I\left(u\right):\Omega\ni x\mapsto \sup\{\inf\{u\left(y\right)\mbox{ : }y\in B_{\delta}\left(x\right)\}\mbox{ : }\delta>0\}\in\mathbb{\overline{R}} \\ 2) & S\left(u\right):\Omega\ni x\mapsto \inf\{\sup\{u\left(y\right)\mbox{ : }y\in B_{\delta}\left(x\right)\}\mbox{ : }\delta>0\}\in\mathbb{\overline{R}} \\ \end{array} \\ \end{array}\nonumber \end{eqnarray} are the lower and upper Baire operators, respectively, see \cite{Anguelov} and \cite{Baire}. Note that each function $u\in\mathcal{ML}^{m}_{0}\left(\Omega\right)$ is measurable and nearly finite with respect to Lebesgue measure. In particular, the space $\mathcal{ML}^{m}_{0}\left(\Omega\right)$ contains $\mathcal{C}^{m}\left(\Omega\right)$.
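As a simple illustration of these notions, consider on $\Omega=\left(-1,1\right)$ the step function \begin{eqnarray} u\left(x\right)=\left\{\begin{array}{ll} 0 & \mbox{ if }x\leq 0 \\ 1 & \mbox{ if }x>0 \\ \end{array}\right.\nonumber \end{eqnarray} Here $S\left(u\right)\left(0\right)=1$ while $I\left(S\left(u\right)\right)\left(0\right)=0=u\left(0\right)$, and $I\left(S\left(u\right)\right)\left(x\right)=u\left(x\right)$ at every other point, so $u$ satisfies (\ref{NLSCDef}). Since, moreover, $u\in\mathcal{C}^{m}\left(\Omega\setminus\Gamma_{u}\right)$ for $\Gamma_{u}=\{0\}$, which is closed nowhere dense and of measure zero, we have $u\in\mathcal{ML}^{m}_{0}\left(\Omega\right)$ for every $m$. By contrast, redefining $u\left(0\right)=1$ destroys (\ref{NLSCDef}), since then $I\left(S\left(u\right)\right)\left(0\right)=0\neq u\left(0\right)$.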
In this regard, we note that the partial differential operators \begin{eqnarray} D^{\alpha}:\mathcal{C}^{m}\left(\Omega\right)\rightarrow \mathcal{C}^{0}\left(\Omega\right)\mbox{, }|\alpha|\leq m\nonumber \end{eqnarray} extend to mappings \begin{eqnarray} \mathcal{D}^{\alpha}:\mathcal{ML}^{m}_{0}\left(\Omega\right)\ni u \mapsto \left(I\circ S\right)\left(D^{\alpha}u\right)\in \mathcal{ML}^{0}_{0}\left(\Omega\right)\label{PDOps} \end{eqnarray} A convergence structure $\lambda_{a}$, in the sense of \cite{Beattie}, may be defined on $\mathcal{ML}^{0}_{0}\left(\Omega\right)$ as follows. \begin{definition}\label{CAEDef} For any $u\in \mathcal{ML}^{0}_{0}\left(\Omega\right)$, and any filter $\mathcal{F}$ on $\mathcal{ML}^{0}_{0}\left(\Omega\right)$, \begin{eqnarray} \mathcal{F}\in\lambda_{a}\left(u\right)\Leftrightarrow \left(\begin{array}{ll} \exists & E\subset\Omega\mbox{ :} \\ & \begin{array}{ll} a) & \textit{mes}\left(E\right)=0 \\ b) & x\in\Omega\setminus E \Rightarrow \mathcal{F}\left(x\right)\mbox{ converges to }u\left(x\right) \\ \end{array} \\ \end{array}\right)\nonumber \end{eqnarray} Here $\mathcal{F}\left(x\right)$ denotes the filter of real numbers given by \begin{eqnarray} \mathcal{F}\left(x\right)=[\{\{v\left(x\right)\mbox{ : }v\in F\}\mbox{ : }F\in\mathcal{F}\}] \end{eqnarray} \end{definition} That $\lambda_{a}$ does in fact constitute a convergence structure on $\mathcal{ML}^{0}_{0}\left(\Omega\right)$ follows by \cite[Example 1.1.2 (iii)]{Beattie}. Indeed, $\lambda_{a}$ is the almost everywhere convergence structure, which is Hausdorff. One may now introduce a \textit{complete} uniform convergence structure $\mathcal{J}_{a}$, in the sense of \cite{Beattie}, on $\mathcal{ML}^{0}_{0}\left(\Omega\right)$ in such a way that the induced convergence structure \cite[Definition 2.1.3]{Beattie} is $\lambda_{a}$.
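To relate Definition \ref{CAEDef} to the classical notion, consider a sequence $\left(u_{n}\right)$ in $\mathcal{ML}^{0}_{0}\left(\Omega\right)$ and its elementary filter \begin{eqnarray} \langle\left(u_{n}\right)\rangle=[\{\{u_{k}\mbox{ : }k\geq n\}\mbox{ : }n\in\mathbb{N}\}]\nonumber \end{eqnarray} Then $\langle\left(u_{n}\right)\rangle\in\lambda_{a}\left(u\right)$ precisely when there is a set $E\subset\Omega$ of measure zero such that $u_{n}\left(x\right)\rightarrow u\left(x\right)$ for every $x\in\Omega\setminus E$, that is, when $u_{n}$ converges to $u$ almost everywhere on $\Omega$. For instance, on $\Omega=\left(0,1\right)$ the functions $u_{n}\left(x\right)=x^{n}$ converge in this sense to $u=0$.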
\begin{definition}\label{UCAEDef} A filter $\mathcal{U}$ on $\mathcal{ML}^{0}_{0}\left(\Omega\right) \times \mathcal{ML}^{0}_{0}\left(\Omega\right)$ belongs to $\mathcal{J}_{a}$ whenever there exists $k\in\mathbb{N}$ such that \begin{eqnarray} \begin{array}{ll} \forall & i=1,...,k\mbox{ :} \\ \exists & u_{i}\in\mathcal{ML}^{0}_{0}\left(\Omega\right)\mbox{ :} \\ \exists & \mathcal{F}_{i}\mbox{ a filter on $\mathcal{ML}^{0}_{0}\left(\Omega\right)$ :} \\ & \begin{array}{ll} a) & \mathcal{F}_{i}\in\lambda_{a}\left(u_{i}\right) \\ b) & \left(\mathcal{F}_{1}\times\mathcal{F}_{1}\right)\cap...\cap\left(\mathcal{F}_{k}\times\mathcal{F}_{k}\right)\subseteq\mathcal{U} \\ \end{array} \\ \end{array}\nonumber \end{eqnarray} \end{definition} The uniform convergence structure $\mathcal{J}_{a}$ is referred to as the uniform convergence structure associated with the convergence structure $\lambda_{a}$, see \cite[Proposition 2.1.7]{Beattie}. We note that the concept of a convergence structure on a set $X$ is a generalization of that of a topology on $X$. With every topology $\tau$ on $X$ one may associate a convergence structure $\lambda_{\tau}$ on $X$ through \begin{eqnarray} \begin{array}{ll} \forall & x\in X\mbox{ :} \\ \forall & \mathcal{F}\mbox{ a filter on $X$ :} \\ & \mathcal{F}\in\lambda_{\tau}\left(x\right) \Leftrightarrow \mathcal{V}_{\tau}\left(x\right)\subseteq\mathcal{F} \\ \end{array}\nonumber \end{eqnarray} where $\mathcal{V}_{\tau}\left(x\right)$ denotes the filter of $\tau$-neighborhoods at $x$. However, not every convergence structure $\lambda$ on $X$ is induced by a topology in this way. Indeed, the convergence structure $\lambda_{a}$ specified above is one such example. A uniform convergence space is the generalization of a uniform space in the context of convergence spaces. The reader is referred to \cite{Beattie} for details concerning convergence spaces.
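The claim that $\lambda_{a}$ is not induced by a topology may be seen through the following standard sketch, included here only for orientation. On $\Omega=\left(0,1\right)$, for $n=2^{k}+j$ with $0\leq j<2^{k}$, let $u_{n}$ be the characteristic function of the interval $\left(j2^{-k},\left(j+1\right)2^{-k}\right)$, which belongs to $\mathcal{ML}^{0}_{0}\left(\Omega\right)$. For every $x$ outside the countable, hence null, set of dyadic rationals, $u_{n}\left(x\right)=1$ for infinitely many $n$ and $u_{n}\left(x\right)=0$ for infinitely many $n$, so $\left(u_{n}\right)$ does not converge almost everywhere to $0$. On the other hand, every subsequence of $\left(u_{n}\right)$ admits a further subsequence converging to $0$ almost everywhere: one may choose the indices so that the interval lengths are summable and apply the Borel--Cantelli lemma. If $\lambda_{a}$ were induced by a topology, this latter property would force $\left(u_{n}\right)$ itself to converge to $0$, a contradiction.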
\section{First Order Cauchy Problems} Let $\Omega=\left(-a,a\right)\times \left(-b,b\right)\subset\mathbb{R}^{2}$, for some $a,b>0$, be the domain of the independent variables $\left(x,y\right)$. We are given \begin{eqnarray} F:\overline{\Omega}\times\mathbb{R}^{2}\rightarrow \mathbb{R} \label{DefFunc} \end{eqnarray} and \begin{eqnarray} f:[-a,a]\rightarrow \mathbb{R}\label{ICFunc} \end{eqnarray} Here $F$ is jointly continuous in all of its variables, and $f$ is in $\mathcal{C}^{1}[-a,a]$. We consider the Cauchy problem \begin{eqnarray} D_{y}u\left(x,y\right)+F\left(x,y,u\left(x,y\right),D_{x}u\left(x,y\right)\right)= 0\mbox{, }\left(x,y\right)\in\Omega\label{Equation} \end{eqnarray} \begin{eqnarray} u\left(x,0\right)=f\left(x\right)\mbox{, }x\in\left(-a,a\right)\label{ICon} \end{eqnarray} Denote by $T:\mathcal{C}^{1}\left(\Omega\right) \rightarrow \mathcal{C}^{0}\left(\Omega\right)$ the nonlinear partial differential operator given by \begin{eqnarray} \begin{array}{ll} \forall & u\in\mathcal{C}^{1}\left(\Omega\right)\mbox{ :} \\ \forall & \left(x,y\right)\in\Omega\mbox{ :} \\ & Tu:\left(x,y\right)\mapsto D_{y}u\left(x,y\right)+F\left(x,y,u\left(x,y\right),D_{x}u\left(x,y\right)\right) \\ \end{array}\label{PDEOpClassical} \end{eqnarray} Note that the equation (\ref{Equation}) may have several classical solutions. Indeed, in the particular case when the operator $T$ is linear and homogeneous, there is at least one classical solution to (\ref{Equation}), namely the function which is everywhere equal to $0$. However, the presence of the initial condition (\ref{ICon}) may rule out some or all of the possible classical solutions. What is more, there is a well known \textit{physical} interest in \textit{nonclassical} or \textit{generalized} solutions to (\ref{Equation}) through (\ref{ICon}). Such solutions may, for instance, model shock waves in fluids.
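A classical illustration, included here only as an example, is obtained when the nonlinearity is $uD_{x}u$, which gives the Burgers type Cauchy problem \begin{eqnarray} D_{y}u\left(x,y\right)+u\left(x,y\right)D_{x}u\left(x,y\right)=0\mbox{, }u\left(x,0\right)=f\left(x\right)\nonumber \end{eqnarray} A classical solution is constant along each characteristic line $x=x_{0}+yf\left(x_{0}\right)$, so that $u\left(x_{0}+yf\left(x_{0}\right),y\right)=f\left(x_{0}\right)$. If $f'\left(x_{0}\right)<0$ for some $x_{0}$, characteristics issuing from nearby points intersect at a time $y$ of order $-1/f'\left(x_{0}\right)$, after which no classical solution can exist; the shock which then forms is precisely the kind of singularity that generalized solutions are intended to capture.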
In this regard, it is convenient to extend the PDE operator $T$ to $\mathcal{ML}^{1}_{0}\left(\Omega\right)$ through \begin{eqnarray} \begin{array}{ll} \forall & u\in\mathcal{ML}^{1}_{0}\left(\Omega\right)\mbox{ :} \\ \forall & \left(x,y\right)\in\Omega\mbox{ :} \\ & \mathcal{T}u:\left(x,y\right)\mapsto \left(I\circ S\right)\left(\mathcal{D}_{y}u+F\left(\cdot,u,\mathcal{D}_{x}u\right)\right)\left(x,y\right) \\ \end{array}\nonumber \end{eqnarray} As mentioned, the solution method for the Cauchy problem (\ref{Equation}) through (\ref{ICon}) uses exactly the same techniques that apply to the free problem \cite{vdWalt6}. However, in order to incorporate the additional condition (\ref{ICon}), we must adapt the method slightly. In this regard, we consider the space \begin{eqnarray} \mathcal{ML}^{1}_{0,0}\left(\Omega\right)=\{u\in \mathcal{ML}^{1}_{0}\left(\Omega\right)\mbox{ : }u\left(\cdot,0\right) \in\mathcal{C}^{1}[-a,a]\}\label{ML00Space} \end{eqnarray} and the mapping \begin{eqnarray} \mathcal{T}_{0}:\mathcal{ML}^{1}_{0,0}\left(\Omega\right)\ni u\mapsto \left(\mathcal{T}u,\mathcal{R}_{0}u\right)\in \mathcal{ML}^{0}_{0}\left(\Omega\right)\times \mathcal{C}^{1}[-a,a]\label{IConOp} \end{eqnarray} where \begin{eqnarray} \begin{array}{ll} \forall & u\in\mathcal{ML}^{1}_{0,0}\left(\Omega\right)\mbox{ :} \\ \forall & x\in [-a,a]\mbox{ :} \\ & \mathcal{R}_{0}u:x\mapsto u\left(x,0\right) \\ \end{array}\nonumber \end{eqnarray} That is, $\mathcal{R}_{0}$ assigns to each $u\in\mathcal{ML}^{1}_ {0,0}\left(\Omega\right)$ its restriction to $\{\left(x,y\right)\in\Omega\mbox{ : }y=0\}$. This amounts to a \textit{separation} of the initial value problem (\ref{ICon}) from the problem of satisfying the PDE (\ref{Equation}). For the sake of a more compact exposition, we will denote by $X$ the space $\mathcal{ML}^{1}_ {0,0}\left(\Omega\right)$ and by $Y$ the space $\mathcal{ML}^{0}_{0}\left(\Omega\right) \times\mathcal{C}^{1}[-a,a]$.
On $\mathcal{C}^{1}[-a,a]$ we consider the convergence structure $\lambda_{0}$, and with it the associated u.c.s. $\mathcal{J}_{0}$. \begin{definition} For any $f\in\mathcal{C}^{1}[-a,a]$, and any filter $\mathcal{F}$ on $\mathcal{C}^{1}[-a,a]$, \begin{eqnarray} \mathcal{F}\in\lambda_{0}\left(f\right) \Leftrightarrow [f]\subseteq \mathcal{F} \nonumber \end{eqnarray} Here $[x]$ denotes the filter generated by $x$. That is, \begin{eqnarray} [x]=\{F\subseteq\mathcal{C}^{1}[-a,a]\mbox{ : }x\in F\}\nonumber \end{eqnarray} \end{definition} The associated u.c.s. $\mathcal{J}_{0}$ on $\mathcal{C}^{1}[-a,a]$ consists of all filters $\mathcal{U}$ on $\mathcal{C}^{1}[-a,a]\times \mathcal{C}^{1}[-a,a]$ that satisfy \begin{eqnarray} \begin{array}{ll} \exists & k\in\mathbb{N}\mbox{ :} \\ & \left(\begin{array}{ll} \forall & i=1,...,k\mbox{ :} \\ \exists & f_{i}\in\mathcal{C}^{1}[-a,a]\mbox{ :} \\ \exists & \mathcal{F}_{i}\mbox{ a filter on $\mathcal{C}^{1}[-a,a]$ :} \\ & \begin{array}{ll} a) & \mathcal{F}_{i}\in\lambda_{0}\left(f_{i}\right) \\ b) & \left(\mathcal{F}_{1}\times\mathcal{F}_{1}\right)\cap...\cap\left(\mathcal{F}_{k}\times\mathcal{F}_{k}\right)\subseteq\mathcal{U} \\ \end{array} \\ \end{array}\right) \\ \end{array} \end{eqnarray} This u.c.s. is uniformly Hausdorff and complete. The space $Y$ carries the product u.c.s. $\mathcal{J}_{Y}$ with respect to the u.c.s's $\mathcal{J}_{a}$ on $\mathcal{ML}^{0}_{0}\left(\Omega\right)$ and $\mathcal{J}_{0}$ on $\mathcal{C}^{1}[-a,a]$. That is, for any filter $\mathcal{V}$ on $Y\times Y$ \begin{eqnarray} \mathcal{V}\in\mathcal{J}_{Y}\Leftrightarrow \left( \begin{array}{ll} a) & \left(\pi_{0}\times\pi_{0}\right)\left(\mathcal{V}\right)\in\mathcal{J}_{a} \\ b) & \left(\pi_{1}\times\pi_{1}\right)\left(\mathcal{V}\right)\in\mathcal{J}_{0} \\ \end{array}\right) \end{eqnarray} Here $\pi_{0}$ denotes the projection on $\mathcal{ML}^{0}_{0}\left(\Omega\right)$, and $\pi_{1}$ is the projection on $\mathcal{C}^{1}[-a,a]$.
With this u.c.s., the space $Y$ is uniformly Hausdorff and complete \cite[Proposition 2.3.3 (iii)]{Beattie}. On the space $X$ we introduce an equivalence relation $\sim_{\mathcal{T}_{0}}$ through \begin{eqnarray} \begin{array}{ll} \forall & u,v\in X\mbox{ :} \\ & u\sim_{\mathcal{T}_{0}}v\Leftrightarrow \mathcal{T}_{0}u=\mathcal{T}_{0}v \\ \end{array}\label{T0Eq} \end{eqnarray} The quotient space $X/\sim_{\mathcal{T}_{0}}$ is denoted $X_{\mathcal{T}_{0}}$. With the mapping $\mathcal{T}_{0}:X\rightarrow Y$ we may now associate an injective mapping $\widehat{\mathcal{T}}_{0}:X_{\mathcal{T}_{0}}\rightarrow Y$ so that the diagram\\ \\ \begin{math} \setlength{\unitlength}{1cm} \thicklines \begin{picture}(13,6) \put(1.9,5.4){$X$} \put(2.3,5.5){\vector(1,0){6.0}} \put(8.5,5.4){$Y$} \put(4.9,5.7){$\mathcal{T}_{0}$} \put(2.0,5.2){\vector(0,-1){3.5}} \put(2.4,1.4){\vector(1,0){6.0}} \put(1.9,1.3){$X_{\mathcal{T}_{0}}$} \put(8.5,1.3){$Y$} \put(1.4,3.4){$q_{\mathcal{T}_{0}}$} \put(8.8,3.4){$i_{Y}$} \put(8.6,5.2){\vector(0,-1){3.5}} \put(4.9,1.6){$\widehat{\mathcal{T}}_{0}$} \end{picture} \end{math}\\ commutes. Here $q_{\mathcal{T}_{0}}$ denotes the quotient mapping associated with the equivalence relation $\sim_{\mathcal{T}_{0}}$, and $i_{Y}$ is the identity mapping on $Y$. We now define a u.c.s. $\mathcal{J}_{\mathcal{T}_{0}}$ on $X_{\mathcal{T}_{0}}$ as the \textit{initial} u.c.s. \cite[Proposition 2.1.1 (i)]{Beattie} on $X_{\mathcal{T}_{0}}$ with respect to the mapping $\widehat{\mathcal{T}}_{0}:X_{\mathcal{T}_{0}}\rightarrow Y$. That is, \begin{eqnarray} \begin{array}{ll} \forall & \mathcal{U}\mbox{ a filter on $X_{\mathcal{T}_{0}}$ :} \\ & \mathcal{U}\in\mathcal{J}_{\mathcal{T}_{0}}\Leftrightarrow \left(\widehat{\mathcal{T}}_{0}\times \widehat{\mathcal{T}}_{0}\right)\left(\mathcal{U}\right)\in\mathcal{J}_{Y} \\ \end{array} \end{eqnarray} Since $\widehat{\mathcal{T}}_{0}$ is injective, the u.c.s. 
$\mathcal{J}_{\mathcal{T}_{0}}$ is uniformly Hausdorff, and $\widehat{\mathcal{T}}_{0}$ is actually a uniformly continuous embedding. Moreover, if $X_{\mathcal{T}_{0}}^{\sharp}$ denotes the completion of $X_{\mathcal{T}_{0}}$, then there exists a unique uniformly continuous embedding \begin{eqnarray} \widehat{\mathcal{T}}_{0}^{\sharp}: X_{\mathcal{T}_{0}}^{\sharp} \rightarrow Y\nonumber \end{eqnarray} such that the diagram\\ \\ \begin{math} \setlength{\unitlength}{1cm} \thicklines \begin{picture}(13,6) \put(2.2,5.4){$X_{\mathcal{T}_{0}}$} \put(2.8,5.5){\vector(1,0){5.8}} \put(8.8,5.4){$Y$} \put(5.2,5.7){$\widehat{\mathcal{T}}_{0}$} \put(2.5,5.2){\vector(0,-1){3.5}} \put(2.7,1.4){\vector(1,0){5.9}} \put(2.1,1.2){$X_{\mathcal{T}_{0}}^{\sharp}$} \put(8.8,1.3){$Y$} \put(1.6,3.4){$\iota_{X_{\mathcal{T}_{0}}}$} \put(9.1,3.4){$i_{Y}$} \put(8.9,5.2){\vector(0,-1){3.5}} \put(5.2,1.6){$\widehat{\mathcal{T}}_{0}^{\sharp}$} \end{picture} \end{math}\\ commutes. Here $\iota_{X_{\mathcal{T}_{0}}}$ denotes the uniformly continuous embedding associated with the completion $X_{\mathcal{T}_{0}}^{\sharp}$ of $X_{\mathcal{T}_{0}}$. A generalized solution to (\ref{Equation}) through (\ref{ICon}) is a solution $U^{\sharp}\in X_{\mathcal{T}_{0}}^{\sharp}$ of the equation \begin{eqnarray} \widehat{\mathcal{T}}_{0}^{\sharp}U^{\sharp}=\left(0,f\right)\label{GenEq} \end{eqnarray} The existence of generalized solutions is based on the following basic approximation result \cite[Section 8]{Obergugenberger and Rosinger}.
\begin{lemma}\label{Approx} We have \begin{eqnarray} \begin{array}{ll} \forall & \epsilon>0\mbox{ :} \\ \exists & \delta>0\mbox{ :} \\ \forall & \left(x_{0},y_{0}\right)\in\Omega\mbox{ :} \\ \exists & u=u_{\epsilon,x_{0},y_{0}}\in\mathcal{C}^{1}\left(\overline{\Omega}\right)\mbox{ :} \\ \forall & \left(x,y\right)\in\Omega\mbox{ :} \\ & \left(\begin{array}{l} |x-x_{0}|<\delta \\ |y-y_{0}|<\delta \\ \end{array}\right)\Rightarrow -\epsilon\leq Tu\left(x,y\right)\leq 0 \\ \end{array}\label{1stApprox} \end{eqnarray} Furthermore, we also have \begin{eqnarray} \begin{array}{ll} \forall & \epsilon>0\mbox{ :} \\ \exists & \delta>0\mbox{ :} \\ \forall & x_{0}\in[-a,a]\mbox{ :} \\ \exists & u=u_{\epsilon,x_{0}}\in\mathcal{C}^{1}\left(\overline{\Omega}\right)\mbox{ :} \\ & \begin{array}{ll} a) & \forall \mbox{ }\mbox{ } \left(x,y\right)\in\overline{\Omega}\mbox{ :} \\ & \mbox{ }\mbox{ }\mbox{ }\left(\begin{array}{l} |x-x_{0}|<\delta \\ |y|<\delta \\ \end{array}\right) \Rightarrow -\epsilon\leq Tu\left(x,y\right)\leq 0 \\ \end{array} \\ & \begin{array}{ll} b) & u\left(x,0\right)=f\left(x\right)\mbox{, }x\in [-a,a]\mbox{, }|x-x_{0}|<\delta \\ \end{array}\\ \end{array}\label{2ndApprox} \end{eqnarray} \end{lemma} As a consequence of the approximation result above, we now obtain the \textit{existence} and \textit{uniqueness} of generalized solutions to (\ref{Equation}) through (\ref{ICon}). In this regard, we introduce the concept of a finite initial adaptive $\delta$-tiling.
A finite initial adaptive $\delta$-tiling of $\Omega$ is any \textit{finite} collection $\mathcal{K}_{\delta}=\{K_{1},...,K_{\nu}\}$ of perfect, compact subsets of $\mathbb{R}^{2}$ with pairwise disjoint interiors such that \begin{eqnarray} \begin{array}{ll} \forall & K_{i}\in\mathcal{K}_{\delta}\mbox{ :} \\ & \left(x,y\right),\left(x_{0},y_{0}\right)\in K_{i}\Rightarrow \left(\begin{array}{l} |x-x_{0}|<\delta \\ |y-y_{0}|<\delta \\ \end{array}\right) \\ \end{array}\label{FIADTiling1} \end{eqnarray} and \begin{eqnarray} \{\left(x,0\right)\mbox{ : }-a\leq x\leq a\}\cap\left( \cup_{K_{i}\in\mathcal{K}_{\delta}}\partial K_{i}\right)\mbox{ is finite}\label{FIADTiling2} \end{eqnarray} where $\partial K_{i}$ denotes the boundary of $K_{i}$. For any $\delta>0$ there exists at least one finite initial adaptive $\delta$-tiling of $\Omega$, see for instance \cite[Section 8]{Obergugenberger and Rosinger}. \begin{theorem}\label{ExTheorem} For any $f\in\mathcal{C}^{1}[-a,a]$, there exists a unique $U^{\sharp}\in X_{\mathcal{T}_{0}}^{\sharp}$ that satisfies (\ref{GenEq}). \end{theorem} \begin{proof} For every $n\in\mathbb{N}$, set $\epsilon_{n}=1/n$.
Applying Lemma \ref{Approx}, we find $\delta_{n}>0$ such that \begin{eqnarray} \begin{array}{ll} \forall & \left(x_{0},y_{0}\right)\in\Omega\mbox{ :} \\ \exists & u=u_{n,x_{0},y_{0}}\in\mathcal{C}^{1}\left(\overline{\Omega}\right)\mbox{ :} \\ \forall & \left(x,y\right)\in\Omega\mbox{ :} \\ & \left(\begin{array}{l} |x-x_{0}|<\delta_{n} \\ |y-y_{0}|<\delta_{n} \\ \end{array}\right)\Rightarrow -\frac{\epsilon_{n}}{2}\leq Tu\left(x,y\right)\leq 0 \\ \end{array}\nonumber \end{eqnarray} and \begin{eqnarray} \begin{array}{ll} \forall & x_{0}\in[-a,a]\mbox{ :} \\ \exists & u=u_{n,x_{0}}\in\mathcal{C}^{1}\left(\overline{\Omega}\right)\mbox{ :} \\ & \begin{array}{ll} a) & \forall \mbox{ }\mbox{ } \left(x,y\right)\in\overline{\Omega}\mbox{ :} \\ & \mbox{ }\mbox{ }\mbox{ }\left(\begin{array}{l} |x-x_{0}|<\delta_{n} \\ |y|<\delta_{n} \\ \end{array}\right) \Rightarrow -\frac{\epsilon_{n}}{2}\leq Tu\left(x,y\right)\leq 0 \\ \end{array} \\ & \begin{array}{ll} b) & u\left(x,0\right)=f\left(x\right)\mbox{, }x\in [-a,a]\mbox{, }|x-x_{0}|<\delta_{n} \\ \end{array}\\ \end{array}\nonumber \end{eqnarray} Let $\mathcal{K}_{\delta_{n}}=\{K_{1},...,K_{\nu_{n}}\}$ be a finite initial adaptive $\delta_{n}$-tiling. If $K_{i}\in \mathcal{K}_{\delta_{n}}$, and \begin{eqnarray} K_{i}\cap\{\left(x,0\right)\mbox{ : }|x|\leq a\}=\emptyset\nonumber \end{eqnarray} then take any $\left(x_{0},y_{0}\right)\in\textrm{int}\left(K_{i}\right)$ and set \begin{eqnarray} u^{i}_{n}=u_{n,x_{0},y_{0}}\nonumber \end{eqnarray} Otherwise, select $\left(x_{0},0\right)\in \left([-a,a]\times \{0\}\right)\cap K_{i}$ and set \begin{eqnarray} u^{i}_{n}=u_{n,x_{0}} \nonumber \end{eqnarray} Consider the function $u_{n}\in\mathcal{ML}^{1}_{0,0} \left(\Omega\right)$ defined as \begin{eqnarray} u_{n}=\left(I\circ S\right)\left(\sum_{i=1}^{\nu_{n}}\chi_{i}u_{n}^{i}\right)\nonumber \end{eqnarray} where, for each $i$, $\chi_{i}$ is the indicator function of $\textrm{int}\left(K_{i}\right)$.
It is clear that \begin{eqnarray} \begin{array}{ll} \forall & \left(x,y\right)\in\Omega\mbox{ :} \\ & -\epsilon_{n}< \mathcal{T}u_{n}\left(x,y\right)\leq 0 \\ \end{array}\nonumber \end{eqnarray} and \begin{eqnarray} \begin{array}{ll} \forall & x\in [-a,a]\mbox{ :} \\ & \mathcal{R}_{0}u_{n}\left(x\right)=f\left(x\right) \end{array}\nonumber \end{eqnarray} Let $U_{n}$ denote the $\sim_{\mathcal{T}_{0}}$ equivalence class associated with $u_{n}$. Then the sequence $\left(\widehat{\mathcal{T}}_{0}U_{n}\right)$ converges to $\left(0,f\right)\in Y$. Since $\widehat{\mathcal{T}}_{0}$ is a uniformly continuous embedding, the sequence $\left(U_{n}\right)$ is a Cauchy sequence in $X_{\mathcal{T}_{0}}$, so that there exists $U^{\sharp}\in X_{\mathcal{T}_{0}}^{\sharp}$ that satisfies (\ref{GenEq}). Moreover, this solution is unique since $\widehat{\mathcal{T}}_{0}^{\sharp}$ is injective. \end{proof} \section{The Meaning of Generalized Solutions} Regarding the meaning of the \textit{existence} and \textit{uniqueness} of the generalized solution $U^{\sharp}\in X_{\mathcal{T}_{0}}^{\sharp}$ to (\ref{Equation}) through (\ref{ICon}), we recall the abstract construction of the completion of a uniform convergence space. Let $\left(Z,\mathcal{J}\right)$ be a Hausdorff uniform convergence space.
A filter $\mathcal{F}$ on $Z$ is a $\mathcal{J}$-Cauchy filter whenever \begin{eqnarray} \mathcal{F}\times\mathcal{F}\in\mathcal{J}\nonumber \end{eqnarray} If $\textbf{C}\left(Z\right)$ denotes the collection of all $\mathcal{J}$-Cauchy filters on $Z$, one may introduce an equivalence relation $\sim_{c}$ on $\textbf{C}\left(Z\right)$ through \begin{eqnarray} \begin{array}{ll} \forall & \mathcal{F},\mathcal{G}\in\textbf{C}\left(Z\right)\mbox{ :} \\ & \mathcal{F}\sim_{c}\mathcal{G}\Leftrightarrow \left(\begin{array}{ll} \exists & \mathcal{H}\in\textbf{C}\left(Z\right)\mbox{ :} \\ & \mathcal{H}\subseteq \mathcal{G}\cap\mathcal{F} \\ \end{array}\right) \\ \end{array} \end{eqnarray} The quotient space $Z^{\sharp}=\textbf{C}\left(Z\right)/\sim_{c}$ serves as the underlying set of the completion of $Z$. Note that, since $Z$ is Hausdorff, if the filters $\mathcal{F},\mathcal{G}\in\textbf{C}\left(Z\right)$ converge to $x\in Z$ and $z\in Z$, respectively, then \begin{eqnarray} \mathcal{F}\sim_{c}\mathcal{G}\Leftrightarrow x=z\nonumber \end{eqnarray} so that one obtains an injective mapping \begin{eqnarray} \iota_{Z}:Z\ni z\mapsto [\lambda\left(z\right)]\in Z^{\sharp}\nonumber \end{eqnarray} where $[\lambda\left(z\right)]$ denotes the equivalence class generated by the filters which converge to $z\in Z$. One may now equip $Z^{\sharp}$ with a u.c.s. $\mathcal{J}^{\sharp}$ in such a way that the mapping $\iota_{Z}$ is a uniformly continuous embedding, $Z^{\sharp}$ is complete, and $\iota_{Z}\left(Z\right)$ is dense in $Z^{\sharp}$. 
In the context of PDEs, and in particular the first order Cauchy problem (\ref{Equation}) through (\ref{ICon}), the generalized solution $U^{\sharp}\in X_{\mathcal{T}_{0}}^{\sharp}$ to (\ref{Equation}) through (\ref{ICon}) may be expressed as \begin{eqnarray} U^{\sharp}=\left\{\mathcal{F}\in\textbf{C}\left(X_{\mathcal{T}_{0}}\right)\begin{array}{|ll} a) & \pi_{0}\left(\widehat{\mathcal{T}}_{0}\left(\mathcal{F}\right)\right)\in\lambda_{a}\left(0\right) \\ & \\ b) & \pi_{1}\left(\widehat{\mathcal{T}}_{0}\left(\mathcal{F}\right)\right)\in\lambda_{0}\left(f\right) \\ \end{array}\right\} \end{eqnarray} Any classical solution $u\in\mathcal{C}^{1}\left(\Omega\right)$, and more generally any shockwave solution $u\in\mathcal{ML}^{1}_{0,0}\left(\Omega\right)=X$ to (\ref{Equation}) through (\ref{ICon}), is mapped to the generalized solution $U^{\sharp}$, as may be seen from the following commutative diagram.\\ \\ \begin{math} \setlength{\unitlength}{1cm} \thicklines \begin{picture}(13,6) \put(2.4,5.4){$X$} \put(2.8,5.5){\vector(1,0){5.8}} \put(8.8,5.4){$Y$} \put(5.2,5.7){$\mathcal{T}_{0}$} \put(2.5,5.2){\vector(0,-1){3.5}} \put(2.7,1.4){\vector(1,0){5.9}} \put(2.1,1.2){$X_{\mathcal{T}_{0}}$} \put(8.8,1.3){$Y$} \put(1.6,3.4){$q_{\mathcal{T}_{0}}$} \put(9.1,3.4){$i_{Y}$} \put(8.9,5.2){\vector(0,-1){3.5}} \put(5.2,1.6){$\widehat{\mathcal{T}}_{0}$} \put(2.4, 0.9){\vector(0,-1){3.5}} \put(8.9, 1.0){\vector(0,-1){3.5}} \put(2.1,-3.0){$X_{\mathcal{T}_{0}}^{\sharp}$} \put(8.8,-3.0){$Y$} \put(2.7,-2.9){\vector(1,0){5.9}} \put(5.2,-2.7){$\widehat{\mathcal{T}}_{0}^{\sharp}$} \put(1.6,-1.0){$\iota_{X_{\mathcal{T}_{0}}}$} \put(9.1,-1.0){$i_{Y}$} \end{picture} \end{math}\\ \\ \\ \\ \\ \\ \\ \\ Hence there is a \textit{consistency} between the generalized solution $U^{\sharp}\in X_{\mathcal{T}_{0}}^{\sharp}$ and any classical and shockwave solutions that may exist.
\section{Conclusion} We have shown how the ideas developed in \cite{vdWalt5} through \cite{vdWalt7} may be applied to solve initial and/or boundary value problems for nonlinear systems of PDEs. In this regard, we have established the existence and uniqueness of generalized solutions to a family of nonlinear first order Cauchy problems. The generalized solutions are seen to be consistent with the usual classical solutions, if the latter exist, as well as with shockwave solutions. It should be noted that the above method applies equally well to arbitrary nonlinear \textit{systems} of equations.
\section{Introduction} \subsection{Phase field PDE model of cell motility} This work is motivated by a 2D phase field model of crawling cell motility introduced in \cite{Zie12}. This model consists of a system of two PDEs for the phase field function and the orientation vector due to polymerization of actin filaments inside the cell. In addition it obeys a volume preservation constraint. In \cite{Ber14} this system was rewritten in the following simplified form suitable for asymptotic analysis so that all key features of its qualitative behavior are preserved. \\ Let $\Omega\subset \mathbb{R}^2$ be a smooth bounded domain. Then, consider the following phase field PDE model of cell motility, studied in \cite{Ber14}: \begin{equation} \frac{\partial \rho_\varepsilon}{\partial t}=\Delta \rho_\varepsilon -\frac{1}{\varepsilon^2}W^{\prime}(\rho_\varepsilon) - P_\varepsilon\cdot \nabla \rho_\varepsilon +\lambda_\varepsilon(t)\;\; \text{ in }\Omega, \label{eq1} \end{equation} \begin{equation} \frac{\partial P_\varepsilon}{\partial t}=\varepsilon\Delta P_\varepsilon -\frac{1}{\varepsilon}P_\varepsilon -\beta \nabla \rho_\varepsilon\;\; \text{ in } \Omega, \label{eq2} \end{equation} where \begin{equation*} \label{lagrange} \lambda_\varepsilon(t)=\frac{1}{|\Omega|}\int_\Omega\left(\frac{1}{\varepsilon^2}W^\prime(\rho_\varepsilon) + P_\varepsilon\cdot \nabla \rho_\varepsilon \right)\, dx, \end{equation*} is a {\em Lagrange multiplier} term responsible for total volume preservation of $\rho_\varepsilon$, and \begin{equation}\label{doublewell} W(\rho)=\frac{1}{4} \rho^2 (1-\rho)^2 \end{equation} is the Allen-Cahn (scalar Ginzburg-Landau) double equal well potential, and $\beta\geq 0$ is a physical parameter (see \cite{Zie12}).
\begin{figure}[H] \centering \includegraphics[width = .5\textwidth]{phasefield} \caption{Sketch of phase-field parameter $\rho_\varepsilon$}\label{figs:phasefield} \end{figure} In \eqref{eq1}-\eqref{eq2}, $\rho_\varepsilon\colon \Omega\rightarrow \mathbb{R}$ is the phase-field parameter that, roughly speaking, takes values 1 and 0 inside and outside, respectively, a subdomain $D(t)\subset \Omega$ occupied by the moving cell. These regions are separated by a thin ``interface layer'' of width $O(\varepsilon)$ around the boundary $\Gamma(t):=\partial D(t)$, where $\rho_\varepsilon(x,t)$ sharply transitions from 1 to 0. The vector valued function $P_\varepsilon \colon \Omega\rightarrow \mathbb{R}^2$ models the orientation vector due to polymerization of actin filaments inside the cell. On the boundary $\partial \Omega$ Neumann and Dirichlet boundary conditions respectively are imposed: $\partial_\nu \rho_\varepsilon=0$ and $P_\varepsilon=0$. It was shown in \cite{Ber14} that $\rho_\varepsilon(x,t)$ converges to a characteristic function $\chi_{D(t)}$ as $\varepsilon\to 0$, where $D(t)\subset \mathbb{R}^2$ . Namely, the phase-field parameter $0\leq \rho_\varepsilon(x,t)\leq 1$ is equal to $1$ when $x\in D(t)$ and equal to $0$ outside of $D(t)$. This is referred to as the sharp interface limit of $\rho_\varepsilon$ and we write $\Gamma(t):= \partial D(t)$. More precisely, given a closed non self-intersecting curve $\Gamma(0)\subset \mathbb{R}^2$, consider the initial profile \begin{equation*}\label{phaseIC} \rho_\varepsilon(x,0) = \theta_0\left(\frac{\operatorname{dist}(x,\Gamma(0))}{\varepsilon}\right) \end{equation*} where \begin{equation*}\label{steadysolution} \theta_0(z) = \frac{1}{2}\left(\tanh\left(\frac{z}{2\sqrt{2}}\right)+1\right) \end{equation*} is the standing wave solution of the Allen-Cahn equation, and $\operatorname{dist}(x,\Gamma(0))$ is the signed distance from the point $x$ to the curve $\Gamma(0)$. 
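One can check directly that $\theta_0$ is indeed a standing wave: it satisfies $\theta_0''=W'(\theta_0)$ with the double well potential \eqref{doublewell}. The following short script is an illustrative numerical check of this identity (not part of the analysis):

```python
import numpy as np

# Standing wave theta_0 and the derivative of the double-well potential
# W(r) = r^2 (1 - r)^2 / 4 from the text; W'(r) = r (1 - r)(1 - 2 r) / 2.
theta0 = lambda z: 0.5 * (np.tanh(z / (2.0 * np.sqrt(2.0))) + 1.0)
Wprime = lambda r: 0.5 * r * (1.0 - r) * (1.0 - 2.0 * r)

z = np.linspace(-10.0, 10.0, 4001)
h = z[1] - z[0]
th = theta0(z)
# centered second difference approximates theta_0''
d2 = (th[2:] - 2.0 * th[1:-1] + th[:-2]) / h**2
residual = np.max(np.abs(d2 - Wprime(th[1:-1])))
assert residual < 1e-4  # theta_0'' = W'(theta_0) up to discretization error
```

The residual is of the order of the $O(h^2)$ discretization error, confirming that $\theta_0$ connects the two wells $\rho=0$ and $\rho=1$.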
Then, $\rho_\varepsilon(x,t)$ has the asymptotic form \begin{equation*}\label{asymptoticform} \rho_\varepsilon(x,t) = \theta_0\left(\frac{\operatorname{dist}(x,\Gamma(t))}{\varepsilon}\right)+ O(\varepsilon), \end{equation*} where $\Gamma(t)$ is a curve which describes the boundary of the cell. \newline It was formally shown in \cite{Ber14} that the curves $\Gamma(t)$ obey the evolution equation \begin{equation}\label{interface0} V(s,t) = \kappa(s,t) + \Phi(V(s,t)) - \frac{1}{|\Gamma(t)|}\int_\Gamma \kappa(s',t) +\Phi(V(s',t)) ds', \end{equation} where $s$ is the arc length parametrization of the curve $\Gamma(t)$, $V(s,t)$ is the normal velocity of curve $\Gamma(t)$ w.r.t. inward normal at location $s$, $|\Gamma(t)|$ is the length of $\Gamma(t)$, $\kappa(s,t)$ is the signed curvature of $\Gamma(t)$ at location $s$, and $\Phi(\cdot)$ is a known smooth, non-linear function. \newline \begin{remark}\label{rem:betaequiv} In \cite{Ber14} it was shown that $\Phi(V)=\beta\Phi_0(V)$ where $\Phi_0(V)$ is given by the equation \begin{equation}\label{phi} \Phi_0(V) := \int_\mathbb{R} \psi(z; V)(\theta_0')^2 dz \end{equation} and $\psi(z)=\psi(z;V)$ is the unique solution of \begin{equation}\label{phi1} \psi''(z)+V\psi'(z)-\psi(z)+ \theta_0' =0, \end{equation} with $\psi(\pm \infty)=0$. \end{remark} The case $\beta=0$ in equations \eqref{eq1}-\eqref{eq2} leads to $\Phi\equiv 0$ in \eqref{interface0}, thus reducing to a mass preserving analogue of the Allen-Cahn equation. Properties of this equation were studied in \cite{Che10,Gol94}, and it was shown that the sharp interface limit as $\varepsilon\to 0$ recovers volume preserving mean curvature motion: $V=\kappa-\frac{1}{|\Gamma|}\int_{\Gamma} \kappa ds.$ Equations \eqref{eq1}-\eqref{eq2} are a singularly perturbed parabolic PDE system in two spatial dimensions. Its sharp interface limit given by \eqref{interface0} describes evolution of the curve $\Gamma(t)$ (the sharp interface). 
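The function $\Phi_0$ in Remark \ref{rem:betaequiv} can be evaluated numerically by solving the linear two-point problem \eqref{phi1} on a large truncated interval. The sketch below is illustrative only (the truncation length $L$ and grid size are ad hoc choices): it discretizes $\psi''+V\psi'-\psi=-\theta_0'$ with central differences and zero boundary values, then computes the integral \eqref{phi} by a Riemann sum.

```python
import numpy as np

# theta_0'(z) for the standing wave theta_0 of the text
theta0p = lambda z: (1.0 / (4.0 * np.sqrt(2.0))) / np.cosh(z / (2.0 * np.sqrt(2.0)))**2

def Phi0(V, L=20.0, N=1200):
    """Approximate Phi_0(V): solve psi'' + V psi' - psi = -theta_0'
    on [-L, L] with psi(+-L) = 0, then integrate psi * (theta_0')^2."""
    z = np.linspace(-L, L, N + 1)
    h = z[1] - z[0]
    n = N - 1                                    # interior unknowns
    A = np.zeros((n, n))
    i = np.arange(n)
    A[i, i] = -2.0 / h**2 - 1.0                  # psi'' and -psi terms
    A[i[:-1], i[:-1] + 1] = 1.0 / h**2 + V / (2.0 * h)
    A[i[1:], i[1:] - 1] = 1.0 / h**2 - V / (2.0 * h)
    psi = np.linalg.solve(A, -theta0p(z[1:-1]))
    return h * np.sum(psi * theta0p(z[1:-1])**2)

# For V = 0 the maximum principle gives psi > 0, hence Phi_0(0) > 0.
assert Phi0(0.0) > 0.0
```

Since $\theta_0'$ is even, the substitution $z\mapsto -z$ maps the problem for $V$ to the one for $-V$, and numerically one indeed observes $\Phi_0(V)=\Phi_0(-V)$ to high accuracy.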
Since $V(s,t)$ and $\kappa(s,t)$ are expressed via first and second derivatives of $\Gamma(t)(=\Gamma(s,t))$, equation \eqref{interface0} can be viewed as a second order PDE for $\Gamma(s,t)$. Since this PDE has spatial dimension one and does not contain a singular perturbation, qualitative and numerical analysis of \eqref{interface0} is much simpler than that of the system \eqref{eq1}-\eqref{eq2}. \begin{remark} It was observed in \cite{Ber14} that both the analysis and the behavior of solutions of system \eqref{eq1}-\eqref{eq2} crucially depend on the parameter $\beta$. Specifically, there is a critical value $\beta_{cr}$ such that for $\beta>\beta_{cr}$ complicated phenomena of non-uniqueness and hysteresis arise. This critical value is defined as the maximum $\beta$ for which $V-\beta\Phi_0(V)$ is a monotone function of $V$. While this supercritical regime is a subject of ongoing investigation, in this work we focus on providing a rigorous analysis of the subcritical regime $\beta<\beta_{cr}$. For equation \eqref{interface0} the latter regime corresponds to the case of a {\it monotone function} $V-\Phi(V)$. \end{remark} \subsection{Biological background: cell motility problem} In \cite{Zie12} a phase field model that describes crawling motion of keratocyte cells on substrates was introduced. Keratocyte cells are typically harvested from scales of fish (e.g., cichlids \cite{Ker08}) for {\em in vitro} experiments. Additionally, humans have keratocyte cells in their corneas. These cells are crucial during wound healing, e.g., after corrective laser surgery \cite{Moh03}. The biological mechanisms which give rise to keratocyte cell motion are complicated and they are an ongoing source of research. Assuming that a directional preference has been established, a keratocyte cell has the ability to maintain self-propagating cell motion via internal forces generated by the protein {\em actin}.
Actin monomers are polarized in such a way that several molecules may join and form filaments. These actin filaments form a dense and rigid network at the leading edge of the cell within the cytoskeleton, known as the lamellipod. The lamellipod forms adhesions to the substrate, and by a mechanism known as {\em actin treadmilling} the cell protrudes via formation of new actin filaments at the leading edge. We may now explain the heuristic idea behind the model. Roughly speaking, cell motility is determined by two (competing) mechanisms: surface tension and protrusion due to actin polymerization. The domain where $\rho_\varepsilon(x)\approx 1$ is occupied by the cell, and $P_\varepsilon$ represents the locally averaged orientation of the filament network. Surface tension enters the model \eqref{eq1}-\eqref{eq2} via the celebrated Allen-Cahn equation with double-well potential \eqref{doublewell}: \begin{equation*}\label{ACmotion} \frac{\partial \rho_\varepsilon}{\partial t} = \Delta \rho_\varepsilon -\frac{1}{\varepsilon^2} W'(\rho_\varepsilon). \end{equation*} In the sharp interface limit ($\varepsilon\to 0$), surface tension leads to the curvature driven motion of the interface. The actin polymerization enters the system \eqref{eq1}-\eqref{eq2} through the $-P_\varepsilon \cdot \nabla \rho_\varepsilon$ term. Indeed, recall \begin{equation*}\label{materialderivative} \frac{D \rho_\varepsilon}{Dt} = \frac{\partial \rho_\varepsilon}{\partial t} + P_\varepsilon \cdot \nabla \rho_\varepsilon \end{equation*} as the material derivative of $\rho_\varepsilon$ subject to the velocity field $P_\varepsilon$. Thus the term $-P_\varepsilon \cdot \nabla \rho_\varepsilon$ is an advective term generated by actin polymerization. \newline The last term of \eqref{eq1}, $\lambda_\varepsilon(t)$, is responsible for volume preservation, which is an important physical condition.
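The effect of $\lambda_\varepsilon(t)$ can already be seen at the discrete level. In the simplified case $\beta=0$, $P_\varepsilon\equiv 0$ (a sketch with ad hoc grid and time step, for illustration only), an explicit Euler step of \eqref{eq1} with a discrete Neumann Laplacian preserves the total volume $\int_\Omega\rho_\varepsilon\,dx$ up to round-off, because the Neumann Laplacian sums to zero and $\lambda_\varepsilon$ is by construction the mean of the remaining terms.

```python
import numpy as np

Wprime = lambda r: 0.5 * r * (1.0 - r) * (1.0 - 2.0 * r)  # W' for W = r^2(1-r)^2/4

def euler_step(rho, eps, dt, h):
    """One explicit Euler step of (eq1) with P = 0 and Neumann BC."""
    p = np.pad(rho, 1, mode='edge')            # ghost cells: zero normal flux
    lap = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
           - 4.0 * rho) / h**2
    lam = np.mean(Wprime(rho)) / eps**2        # Lagrange multiplier (P = 0 case)
    return rho + dt * (lap - Wprime(rho) / eps**2 + lam)

rng = np.random.default_rng(0)
rho = rng.random((64, 64))
rho1 = euler_step(rho, eps=0.1, dt=1e-5, h=1.0 / 64)
# total volume is conserved up to floating-point round-off
assert abs(rho1.sum() - rho.sum()) < 1e-8 * rho.size
```

The conservation holds exactly at the semi-discrete level: interior fluxes of the five-point Laplacian cancel pairwise, the Neumann ghost cells contribute zero flux, and subtracting the mean makes the reaction term sum to zero.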
The diffusion term $\varepsilon \Delta P_\varepsilon$ corresponds to diffusion of actin and does not significantly affect the dynamics of $\rho_\varepsilon$. The term $-\beta\nabla \rho_\varepsilon$ describes the creation of actin by polymerization, which leads to a protrusion force. It gives the rate of growth of polymerization of actin: $\frac{\partial P_\varepsilon}{\partial t} \sim -\beta \nabla \rho_\varepsilon$. The $\frac{1}{\varepsilon} P_\varepsilon$ term provides decay of $P_\varepsilon$ away from the interface, for example due to depolymerization. The system \eqref{eq1}-\eqref{eq2} is a slightly modified form of the model proposed in \cite{Zie12}. It preserves key features of the qualitative behavior yet is more convenient for mathematical analysis. \subsection{Overview of results and techniques}\label{sec:overview} A main goal of this work is to prove existence of a family of curves which evolve according to the equation \eqref{interface0} (that describes evolution of the sharp interface) and investigate their properties. The problem of mean curvature type motion was extensively studied by mathematicians from both PDE and geometry communities for several decades. A review of results on unconstrained motion by mean curvature can be found \cite{Bra78,GagHam86,Gra87}. Furthermore the viscosity solutions techniques have been efficiently applied in the PDE analysis of such problems. These techniques do not apply to mean curvature motion with volume preservation constraints \cite{Gag86,Che93,Ell97}, and the analysis becomes especially difficult in dimensions greater than two \cite{Esc98}. Note that existence in two dimensional mean curvature type motions were recently studied (e.g., \cite{Bon00,Che93, Ell97}) and appropriate techniques of regularization were developed. Recently, analogous issues resurfaced in the novel context of biological cell motility problems, after a phase-field model was introduced in \cite{Zie12}. 
The problem studied in the present work is two dimensional (motion of a curve on the plane). The distinguishing feature of this problem is that the velocity enters the evolution equation implicitly via a {\em non-linear} function $V-\Phi(V)$. Therefore, the time derivative in the corresponding PDE for the signed distance $u(\sigma,t)$ of the curve from a given reference curve also enters the PDE implicitly, which leads to challenges in establishing existence. The following outlines the basic steps of the analysis. First, we consider smooth ($H^2$) initial data and generalize the regularization idea from \cite{Che93} (see also \cite{Ell97}) for the implicit dependence described above. Here, the main difficulty is to establish existence on a time interval that does not depend on the regularization parameter $\varepsilon$. To this end, we derive $L^2$ (in time and space) a priori estimates independent of $\varepsilon$ for third order derivatives and uniform in time $L^2$ estimates for second order derivatives. These estimates allow us to show existence and to pass to the limit as $\varepsilon\to 0$, and they are derived by considering the equation for $u_\sigma=\frac{\partial u}{\partial \sigma}$. We use ``nice'' properties of this equation to obtain higher order estimates independent of $\varepsilon$ (which are not readily available for the equation for $u$). In particular, it turns out that the equation for $u_\sigma$ can be written as a quasilinear parabolic PDE in divergence form. For such equations quite remarkable classical results establish H\"older continuity of solutions even for discontinuous initial data \cite{Lad68}. This provides a lower bound on the possible blow-up time, which does depend on the $H^2$ norm of the initial data for $u$ in our problem. As a result, we establish existence on a time interval that depends on the $H^2$ norm of initial data.
Second, observe that experiments for cell motility show that the cell shape is not necessarily smooth. Therefore one needs to consider more realistic models where smoothness assumptions on initial conditions are relaxed. In particular, one should allow for corner-type singularities. To this end, we pass to generic $W^{1,\infty}$ initial curves. For the limiting equations for $u$ and $u_\sigma$, we show existence on a time interval that does not depend on the $H^2$ (and $H^1$ for $u_\sigma$) norms of initial conditions. This is necessary because these norms blow up for non-smooth functions from $W^{1,\infty}\setminus H^2$. The existence is proved by a maximum principle type argument, which is not available for the regularized equations that contain fourth order derivatives. It is also crucial to establish H\"older continuity results for $u_\sigma$, rewriting the equation for $u_\sigma$ as a quasilinear divergence form parabolic PDE. After proving short time existence we address the issue of global existence of such curves. The latter is important for the comparison of theoretical and experimental results on cell motility. We will present an exploratory study which combines analytical and numerical results for the long time behavior of the cell motility model. Analytically, we prove that, similarly to the classical curvature driven motion with volume preservation, traveling waves with nonzero velocity do not exist. Through numerical experiments, however, we observe a nontrivial (transient) net motion resulting from the non-linearity and asymmetry of the initial shape. This observation shows an essential difference from the classical area preserving curvature driven motion. Numerically solving \eqref{interface0} is a nontrivial task due to the non-linearity and non-locality in the formulation of the normal velocity. Classical methods such as level-set methods cannot be readily used here.
We introduce an efficient algorithm which separates the difficulties of non-linearity and non-locality and resolves them independently through an iterative scheme. The accuracy of the algorithm is validated by numerical convergence. Our numerical experiments show that both non-linearity and asymmetry of the initial shape (in the form of non-convexity) can introduce a drift of the center of mass of the cell. Increasing the effects of the non-linearity or increasing the asymmetry results in an increase in the drift distance. \section{Existence of solutions to the evolution equation (\ref{interface0})} \label{sec:existence} We study curves propagating via the evolution equation \eqref{interface0}. The case $\Phi\equiv 0$ corresponds to well-studied volume preserving curvature motion (see, e.g., \cite{Che93, Ell97,Esc98,Gag86}). We emphasize that the presence of $\Phi(V)$ results in an implicit, non-linear and non-local equation for $V$, which leads to challenges from an analytical and numerical standpoint. \newline The goal of this section is to prove the following: \begin{theorem}\label{thm1} Let $\Phi\in L^\infty(\mathbb{R})$ be a Lipschitz function satisfying \begin{equation}\label{phibound} \|\Phi'\|_{L^\infty(\mathbb{R})} < 1. \end{equation} Then, given $\Gamma_0\in W^{1,\infty}$, a closed and non self-intersecting curve on $\mathbb{R}^2$, there is a time $T=T(\Gamma_0)>0$ such that a family of curves $\Gamma(t)\in H^2$ exists for $t\in (0,T]$ which satisfies the evolution equation \eqref{interface0} with initial condition $\Gamma(0)=\Gamma_0$. \end{theorem} \begin{remark} The classes $W^{1,\infty}$ and $H^2$ above refer to curves which are parametrized by mappings from Sobolev spaces $W^{1,\infty}$ and $H^2$ correspondingly. \end{remark} \begin{remark} After time $T$ the curve could self-intersect, or a blow-up in the parametrization map (e.g., a cusp) could occur. \end{remark} We first prove the existence for smooth ($H^2$) initial data.
The main effort is to pass to non-smooth initial conditions (e.g., initial curves with corners), see discussion in Section \ref{sec:overview}. \noindent{Proof of Theorem \ref{thm1}:} The proof of Theorem \ref{thm1} is split into 4 steps. In Step 1 we present a PDE formulation of the evolution problem \eqref{interface0} and introduce its regularization by adding a higher order term with the small parameter $\varepsilon>0$. In Step 2 we prove a local in time existence result for the regularized problem. In Step 3 we establish a uniform time interval of existence for solutions of the regularized problem via a priori estimates. These estimates allow us to pass to the limit $\varepsilon\to 0$, which leads to existence for \eqref{interface0} for smooth initial data. Finally, Step 4 is devoted to the transition from $H^2$-smooth initial data to $W^{1,\infty}$ ones. A crucial role here is played by the derivation of $L^\infty$ bounds for the solution and its first derivative, independent of the $H^2$ norm of the initial data. \noindent {\bf Step 1.} {\em Parametrization and PDE forms of \eqref{interface0}}. Let $\tilde\Gamma_0$ be a $C^4$ smooth reference curve in a small neighborhood of $\Gamma_0$ and let $\tilde\Gamma_0$ be parametrized by arc length parameter $\sigma\in I$. Let $\kappa_0(\sigma)$ be the signed curvature of $\tilde\Gamma_0$ and $\nu(\sigma)$ be the inward pointing normal vector to $\tilde\Gamma_0$. Consider the tubular neighborhood \begin{equation*}\label{existball} U_{\delta_0} := \{x \in \mathbb{R}^2 \mid {\rm dist}(x,\tilde{\Gamma}_0)<2\delta_0\}.
\end{equation*} \begin{figure}[h] \includegraphics[width = .8\textwidth]{existenceball} \caption{Visualization of $\mathbb{B}_{\delta_0}$ and the relation between $d(t)$ and $\Gamma(t)$} \label{fig:existenceball} \end{figure} One can choose $\tilde \Gamma_0$ and $\delta_0$ so that the map \[ Y\colon I\times (-2\delta_0,2\delta_0) \rightarrow U_{\delta_0}, \; Y(\sigma,u) := \tilde\Gamma_0(\sigma)+u \nu(\sigma) \] is a $C^2$ diffeomorphism between $I\times (-2\delta_0,2\delta_0)$ and its image, and $\Gamma_0\subset Y(I\times (-2\delta_0,2\delta_0))$. Then $\Gamma_0$ can be parametrized by $\Gamma_0=\tilde \Gamma_0(\sigma)+u_0(\sigma)\nu(\sigma)$, $\sigma\in I$, for some periodic function $u_0(\sigma)$. Finally, we can assume that $\delta_0$ is sufficiently small so that \begin{equation}\label{smalldelta} \delta_0\|\kappa_0\|_{L^\infty} < 1, \end{equation} where $\kappa_0$ denotes the curvature of $\tilde \Gamma_0$. A continuous function $u: I\times [0,T]\rightarrow [-\delta_0,\delta_0]$, periodic in the $\sigma$ variable, describes a family of closed curves via the mapping \begin{equation}\label{equivalenceofD} \Gamma(\sigma,t) =\tilde \Gamma_0(\sigma)+u(\sigma,t)\nu(\sigma). \end{equation} That is, there is a well-defined correspondence between $\Gamma(\sigma,t)$ and $u(\sigma,t)$. Recall the Frenet--Serret formulas applied to $\tilde\Gamma_0$: \begin{equation}\label{dtau} \frac{d\tau}{d\sigma}(\sigma)= \kappa_0(\sigma) \nu(\sigma), \end{equation} \begin{equation}\label{dn} \frac{d\nu}{d\sigma}(\sigma) =-\kappa_0(\sigma)\tau(\sigma), \end{equation} where $\tau$ is the unit tangent vector. Using \eqref{equivalenceofD}-\eqref{dn} we express the normal velocity $V$ of $\Gamma$ as \begin{equation}\label{veqn} V = \frac{1-u\kappa_0}{S}u_t, \end{equation} where $S=S(u)= \sqrt{u^2_\sigma + (1-u\kappa_0)^2}$.
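For the reader's convenience we sketch the computation behind \eqref{veqn} (up to the orientation convention for the normal): differentiating \eqref{equivalenceofD} in $\sigma$ and using \eqref{dtau}-\eqref{dn},
\begin{equation*}
\Gamma_\sigma = (1-u\kappa_0)\,\tau + u_\sigma\,\nu, \qquad |\Gamma_\sigma| = S,
\end{equation*}
so the unit normal to $\Gamma(\cdot,t)$ is $N=\bigl((1-u\kappa_0)\nu-u_\sigma\tau\bigr)/S$, while $\Gamma_t=u_t\,\nu$; hence
\begin{equation*}
V=\Gamma_t\cdot N=\frac{1-u\kappa_0}{S}\,u_t .
\end{equation*}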
Also, in terms of $u$, the curvature of $\Gamma$ is given by \begin{equation}\label{curvature} \kappa(u) = \frac{1}{S^3} \Bigl((1-u \kappa_0) u_{\sigma\sigma} + 2\kappa_0 u_\sigma^2+(\kappa_0)_\sigma u_\sigma u + \kappa_0(1-u \kappa_0)^2\Bigr). \end{equation} Note that if $u$ is sufficiently smooth \cite{Doc76} one has $$ \int_I \kappa(u) Sd\sigma=\int_{\Gamma} \kappa ds =2\pi; $$ in particular, this holds for every $u\in H^2_{per}(I)$ such that $|u|\leq \delta_0$ on $I$. Hereafter, $H_{per}^k(I)$ denotes the Sobolev space of periodic functions on $I$ with square-integrable derivatives up to the $k$-th order and $W^{1,\infty}_{per}(I)$ denotes the space of periodic functions with the first derivative in $L^\infty(I)$. Combining \eqref{veqn} and \eqref{curvature}, we rewrite \eqref{interface0} as the following PDE for $u$: \begin{equation}\label{deqn} \begin{aligned} u_t -\frac{S}{1-u \kappa_0}\Phi\left(\frac{1-u\kappa_0}{S}u_t\right) &= \frac{S}{1-u \kappa_0}\kappa(u)\\ &-\frac{S}{(1-u \kappa_0)L[u]}\Bigl( \int_I \Phi\left(\frac{1-u \kappa_0}{S} u_t\right)Sd\sigma+2\pi\Bigr), \end{aligned} \end{equation} where $$ L[u]=\int_I S(u)d\sigma $$ is the total length of the curve. The initial condition $u(\sigma,0)=u_0(\sigma)$ corresponds to the initial profile $\Gamma(0)=\Gamma_0$. Since (\ref{deqn}) is not resolved with respect to the time derivative $u_t$, it is natural to resolve \eqref{interface0} with respect to $V$ and thereby convert (\ref{deqn}) into a parabolic-type PDE in which the time derivative $u_t$ is expressed as a function of $u$, $u_\sigma$, $u_{\sigma\sigma}$. The following lemma shows that this can be done: resolving \eqref{interface0} with respect to $V$ yields a locally Lipschitz continuous resolving map, provided that $\Phi \in L^\infty(\mathbb{R})$ and $\|\Phi^\prime\|_{L^\infty(\mathbb{R})}<1$.
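The contraction structure underlying this resolution can be illustrated numerically. The sketch below (with hypothetical sample data for $\kappa$, $S$ and $\Phi$; it is not the scheme used in our numerical experiments) solves $V=\Phi(V)+\kappa-\lambda$ by fixed-point iteration, which converges geometrically because $\|\Phi'\|_{L^\infty}<1$, and then tunes $\lambda$ by bisection so that the discrete analogue of $\int_I VS\,d\sigma=0$ holds:

```python
import numpy as np

def resolve_velocity(kappa, S, Phi, J, tol=1e-10):
    """Solve V = Phi(V) + kappa - lam together with sum(V * S) = 0
    on a uniform periodic grid, assuming sup|Phi'| = J < 1."""
    def solve_V(lam):
        # V -> Phi(V) + kappa - lam is a contraction with rate J < 1,
        # so the fixed-point iteration converges geometrically.
        V = np.zeros_like(kappa)
        for _ in range(500):
            V_new = Phi(V) + kappa - lam
            if np.max(np.abs(V_new - V)) < tol * (1.0 - J):
                break
            V = V_new
        return V_new

    def m(lam):
        # m is strictly decreasing in lam (V = Psi(kappa - lam), Psi' > 0),
        # so the zero-mean constraint m(lam) = 0 is solvable by bisection.
        return np.sum(solve_V(lam) * S)

    lo, hi = -1.0, 1.0
    while m(lo) < 0.0:          # expand bracket: m -> +inf as lam -> -inf
        lo *= 2.0
    while m(hi) > 0.0:          # and m -> -inf as lam -> +inf
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if m(mid) > 0.0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return solve_V(lam), lam

# Illustrative (hypothetical) data: Phi(V) = 0.5 * tanh(V), so J = 0.5 < 1.
sigma = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
kappa = 1.0 + 0.3 * np.cos(sigma)          # sample curvature values
S = 1.0 + 0.1 * np.sin(sigma) ** 2         # sample metric factor, S > 0
Phi = lambda V: 0.5 * np.tanh(V)
V, lam = resolve_velocity(kappa, S, Phi, J=0.5)
```

The returned pair $(V,\lambda)$ satisfies both relations of the resolved system up to the prescribed tolerance.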
It is convenient to rewrite \eqref{interface0} in the form \begin{equation}\label{lameq1} V = \kappa(u)+ \Phi(V) -\lambda,\quad \int_I V S(u)d\sigma=0, \end{equation} where both the normal velocity $V$ and the constant $\lambda$ are considered as unknowns. \begin{lemma} \label{lem1} Suppose that $\Phi\in L^\infty(\mathbb{R})$ and $\|\Phi' \|_{L^\infty(\mathbb{R})}<1$. Then for any $u(\sigma)\in H^2_{per}(I)$ such that $|u(\sigma)|\leq \delta_0$, there exists a unique solution $(V(\sigma), \lambda)\in L^2(I)\times \mathbb{R}$ of \eqref{lameq1}. Moreover, the resolving map $\mathcal{F}$ assigning to a given $u\in H^2_{per}(I)\cap \{u; \,|u|\leq\delta_0\}$ the solution $V=\mathcal{F}(u)\in L^2(I)$ of \eqref{lameq1} is locally Lipschitz continuous. \end{lemma} \begin{proof} Let $J:=\|\Phi' \|_{L^\infty(\mathbb{R})}$. Fix $\kappa\in C_{per}(I)$, a positive function $S\in C_{per}(I)$ and $\lambda\in \mathbb{R}$, and consider the equation \begin{equation}\label{lameq} V = \kappa+ \Phi(V) -\lambda. \end{equation} It is immediate that the unique solution of \eqref{lameq} is given by $V=\Psi(\kappa-\lambda)$, where $\Psi$ is the inverse of the map $V\mapsto V-\Phi(V)$. Note that $\frac{1}{1+J}\leq \Psi^\prime\leq \frac{1}{1-J}$, therefore $\Psi$ is a strictly increasing function and $\Psi(\kappa-\lambda)\to\pm\infty$ uniformly as $\lambda\to\mp \infty$. It follows that there exists a unique $\lambda\in \mathbb{R}$ such that $V=\Psi(\kappa-\lambda)$ is a solution of \eqref{lameq} satisfying \begin{equation} \int_I V S d\sigma=0. \label{DopCondLambda} \end{equation} Next we establish the Lipschitz continuity of the resolving map $(\kappa,S)\mapsto V\in L^2(I)$ as a function of $\kappa\in L^2(I)$ and $S\in L^\infty_{per}(I)$, $S\geq 1-\delta_0\|\kappa_0\|_{L^\infty(I)}>0$ (cf. \eqref{smalldelta}), still assuming that $\kappa,S\in C_{per}(I)$.
Multiply \eqref{lameq} by $VS$ and integrate over $I$ to find, with the help of the Cauchy-Schwarz inequality, \begin{equation*} \begin{aligned} \int_I V^2 Sd\sigma &= \int_I \kappa V Sd\sigma + \int_I \Phi(V) V Sd\sigma \\ &\leq \|S\|_{L^\infty(I)}\bigl(\|\kappa\|_{L^2(I)}+(|I|\|\Phi\|_{L^\infty(\mathbb{R})})^{1/2}\bigr)\Bigl( \int_I V^2 Sd\sigma\Bigr)^{1/2}. \end{aligned} \end{equation*} Recalling that $S \geq \omega:= 1-\delta_0\|\kappa_0\|_{L^\infty(I)}>0$, we then obtain \begin{equation}\label{Vbounded} \|V\|_{L^2(I)} \leq \frac{\|S\|_{L^\infty(I)}}{\omega^{1/2}}\bigl( \|\kappa\|_{L^2(I)}+(|I|\|\Phi\|_{L^\infty(\mathbb{R})})^{1/2} \bigr). \end{equation} To see that $\kappa\mapsto V$ (for fixed $S$) is Lipschitz continuous, consider solutions $V_1$, $V_2$ of \eqref{lameq}-\eqref{DopCondLambda} with $\kappa=\kappa_1$ and $\kappa=\kappa_2$. Subtract the equation for $V_2$ from that for $V_1$, multiply by $S(V_1-V_2)$ and integrate over $I$: $$ \int_I \bigl( (V_1-V_2)^2-(\Phi(V_1)-\Phi(V_2))(V_1-V_2)\bigr) Sd\sigma = \int_I (\kappa_1-\kappa_2) (V_1-V_2) Sd\sigma. $$ Then, since $|\Phi(V_1)-\Phi(V_2)|\leq J |V_1-V_2|$ we derive that $$ \|V_1-V_2\|_{L^2(I)}\leq \frac{\|S\|_{L^\infty(I)}}{\omega^{1/2}(1-J)} \|\kappa_1-\kappa_2\|_{L^2(I)}. $$ Next consider solutions of \eqref{lameq}-\eqref{DopCondLambda}, still denoted $V_1$ and $V_2$, which now correspond to $S=S_1$ and $S=S_2$ with the same $\kappa$. Subtract the equation for $V_2$ from that for $V_1$, multiply by $S_1(V_1-V_2)+V_2(S_1-S_2)$ and integrate over $I$ to find $$ \begin{aligned} \int_I \bigl( (V_1-V_2)^2-&(\Phi(V_1)-\Phi(V_2))(V_1-V_2)\bigr) S_1 d\sigma \\&= \int_I V_2(S_1-S_2)\bigl(\Phi(V_1)-\Phi(V_2) -(V_1-V_2)\bigr) d\sigma. \end{aligned} $$ Then applying the Cauchy-Schwarz inequality we derive \begin{equation*} \|V_1-V_2\|_{L^2(I)}\leq \frac{1+J}{\omega (1-J)} \|V_2\|_{L^2(I)}\|S_1-S_2\|_{L^\infty(I)}.
\end{equation*} Thanks to \eqref{Vbounded} this completes the proof of local Lipschitz continuity of the resolving map on the dense subset (of continuous functions) in $$\Theta=\Bigl\{(\kappa,S)\in L^2(I)\times \{S\in L^\infty_{per}(I); S\geq \omega\}\Bigr\}$$ and thus on the whole set $\Theta$. It remains to note that the map $u\mapsto (\kappa(u), S(u))$ is locally Lipschitz on $\{u\in H^2_{per}(I);\, |u|\leq \delta_0\ \text{on}\ I\}$, which completes the proof of the Lemma. \end{proof} \begin{remark}The parameter $\lambda\in \mathbb{R}$ with the property that the solution $V$ of \eqref{lameq} satisfies $\int_I V S d\sigma =0$ is easily seen to be \begin{equation} \lambda = \frac{1}{L[u]} \int_I (\kappa(u)+\Phi(V))S d\sigma= \frac{2\pi}{L[u]}+\frac{1}{L[u]}\int_I \Phi(V) S d\sigma. \end{equation} \end{remark} \begin{remark} Under the conditions of Lemma \ref{lem1}, if $\kappa(u)\in H_{per}^1(I)$ (i.e., $u\in H^3_{per}(I)$) then it holds that $V=\mathcal{F}(u)\in H_{per}^1(I)$. \end{remark} Equation \eqref{interface0} is equivalently rewritten in terms of the resolving operator $\mathcal{F}$ as \begin{equation}\label{deqn1} u_t = \frac{S(u)} {1- u \kappa_0}\mathcal{F}(u),\ \text{or}\ u_t = \mathcal{\tilde F}(u), \end{equation} where $\mathcal{\tilde F}(u):=S(u)\mathcal{F}(u)/(1- u \kappa_0)$. \noindent {\bf Step 2.} {\em Introduction and analysis of the regularized PDE.} We now introduce a small parameter regularization term in \eqref{deqn} which allows us to apply standard existence results. To this end, let $u^\varepsilon=u^\varepsilon(\sigma,t)$ solve the following regularization of equation \eqref{deqn1} for $0<\varepsilon\leq1$, \begin{equation}\label{evoMotion} u^\varepsilon_t + \varepsilon u^\varepsilon_{\sigma\sigma\sigma\sigma} = \mathcal{\tilde F}(u^\varepsilon), \end{equation} with $u^\varepsilon(\sigma,0)=u_0$.
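The regularizing term $\varepsilon\partial_\sigma^4$ makes the linearized problem uniformly parabolic, and for periodic data the operator $\partial_\sigma^4$ diagonalizes under the discrete Fourier transform. The following minimal sketch (a hypothetical discretization, not the scheme used in our numerical experiments) performs one implicit Euler step for $u_t+\varepsilon u_{\sigma\sigma\sigma\sigma}=g$ with periodic boundary conditions:

```python
import numpy as np

def implicit_euler_step(u, g, eps, dt, L=2.0 * np.pi):
    """One implicit Euler step for u_t + eps * u_ssss = g with periodic
    boundary conditions: (1 + dt*eps*k^4) * u_hat_new = u_hat + dt*g_hat."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    u_hat = np.fft.fft(u) + dt * np.fft.fft(g)
    u_hat /= 1.0 + dt * eps * k ** 4               # damps every mode, k=0 kept
    return np.real(np.fft.ifft(u_hat))

# Smoothing sanity check on rough (discontinuous) data with g = 0.
sigma = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u0 = np.sign(np.sin(sigma))
u1 = implicit_euler_step(u0, np.zeros_like(u0), eps=1e-2, dt=1e-2)
```

Every Fourier mode is divided by $1+\Delta t\,\varepsilon k^4\geq 1$, so the step is unconditionally stable, and the mean (the $k=0$ mode) is preserved whenever $g$ has zero mean.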
Define \begin{equation*}\label{regVelocity} V^\varepsilon := \frac{1-u^\varepsilon\kappa_0}{S^\varepsilon}\left(u_t^\varepsilon + \varepsilon u_{\sigma\sigma\sigma\sigma}^\varepsilon\right), \end{equation*} where $S^\varepsilon=S(u^\varepsilon)$. Since $V^\varepsilon = \mathcal{F}(u^\varepsilon)$, then by definition of the resolving map $\mathcal{F}$ we have that $u^\varepsilon$ satisfies the following equation: \begin{equation}\label{eqnfordve} \begin{aligned} u_t^\varepsilon +\varepsilon u_{\sigma\sigma\sigma\sigma}^\varepsilon - \frac{S^\varepsilon}{1-u^\varepsilon \kappa_0}\Phi(V^\varepsilon) &= \frac{S^\varepsilon}{1-u^\varepsilon \kappa_0}\kappa(u^\varepsilon)\\ &-\frac{S^\varepsilon}{(1-u^\varepsilon \kappa_0)L[u^\varepsilon]} \Bigl( \int_I \Phi(V^\varepsilon)S^\varepsilon d\sigma+2\pi\Bigr). \end{aligned} \end{equation} Hereafter we consider $H^2_{per}(I)$ equipped with the norm \begin{equation*} \|u\|^2_{H^2(I)}:= \|u\|_{L^2(I)}^2+\|u_{\sigma\sigma}\|_{L^2(I)}^2. \end{equation*} \begin{proposition}\label{prop1} Let $\Gamma_0$ and $\Phi(\cdot)$ satisfy the conditions of Theorem \ref{thm1}. Assume that $u_0\in H^2_{per}(I)$ and $\max_I|u_0|<\delta_0$. Then there exists a non-empty interval $[0,T^\varepsilon]$ such that a solution $u^\varepsilon$ of \eqref{eqnfordve} with initial data $u^\varepsilon(\sigma,0)=u_0(\sigma)$ exists and \begin{equation} \label{regularity} \begin{aligned} u^\varepsilon \in L^2(0,T^\varepsilon; H^4_{per}(I)) &\cap H^1(0,T^\varepsilon;L^2(I)) \cap L^\infty(0,T^\varepsilon; H^2_{per}(I))\\ &\text{and}\ \sup_{t\in[0,T^\varepsilon]} \|u^\varepsilon(t)\|_{L^\infty(I)}\leq \delta_0. \end{aligned} \end{equation} Furthermore, this solution can be extended to a bigger time interval $[0,T^\varepsilon+\Delta_t]$ so long as $\delta_0-\max_\sigma |u^\varepsilon(\sigma,T^\varepsilon)|\geq \alpha$ and $\|u^\varepsilon(T^\varepsilon)\|_{H^2(I)}\leq M$ for some $\alpha>0$ and $M<\infty$, where $\Delta_t$ depends on $\varepsilon$, $\alpha$ and $M$, $\Delta_t:=\Delta_t(\alpha, M,\varepsilon)>0$.
\end{proposition} \begin{proof} Choose $T^\varepsilon>0$ and $M>\|u_0\|_{H^2(I)}$, and introduce the set $$ K:=\{u: \|u\|_{H^2}\leq M,\, |u|\leq \delta_0\, \text{in} \ I\}. $$ Given $\tilde{u}\in L^\infty(0,T^\varepsilon;K)$, consider the following auxiliary problem \begin{equation}\label{linear} u_t + \varepsilon u_{\sigma\sigma\sigma\sigma}= \mathcal{\tilde F}(\tilde{u}) \end{equation} with $u(\sigma,0)=u_0(\sigma)$. Classical results (e.g., \cite{Lio72}) yield existence of a unique solution $u$ of \eqref{linear} which possesses the following regularity \begin{equation*} u\in L^2(0,T^\varepsilon; H^4_{per}(I)) \cap H^1(0,T^\varepsilon;L^2(I))\cap L^\infty(0,T^\varepsilon;H^2_{per}(I)). \end{equation*} That is, a resolving operator \begin{equation*} \mathcal{T}\colon L^\infty(0,T^\varepsilon;K)\to L^\infty(0,T^\varepsilon;H^2(I)) \end{equation*} which maps $\tilde{u}$ to the solution $u$ is well defined. Next we show that $\mathcal{T}$ is a contraction in $K$, provided that $T^\varepsilon$ is chosen sufficiently small. Consider $\tilde{u}_1,\tilde{u}_2\in L^\infty(0,T^\varepsilon; K)$ satisfying the initial condition $\tilde{u}_1(\sigma,0)=\tilde{u}_2(\sigma,0)=u_0(\sigma)$ and define $u_1:=\mathcal{T}(\tilde{u}_1)$, $u_2:=\mathcal{T}(\tilde{u}_2)$. Let $\bar{u}:=u_1-u_2$. Then multiply the equality $ \bar{u}_t + \varepsilon \bar{u}_{\sigma\sigma\sigma\sigma} = \mathcal{\tilde F}(\tilde{u}_1)-\mathcal{\tilde F}(\tilde{u}_2) $ by $(\bar{u}_{\sigma\sigma\sigma\sigma}+\bar{u})$ and integrate. Integrating by parts and using the Cauchy-Schwarz inequality yields \begin{align*} \frac{1}{2}\frac{d}{dt}\int_I (\bar{u}_{\sigma\sigma}^2+ \bar{u}^2)d\sigma&+\varepsilon \|\bar{u}_{\sigma\sigma\sigma\sigma}\|^2_{L^2(I)}+\varepsilon \|\bar{u}_{\sigma\sigma}\|^2_{L^2(I)} \\ &\leq \|\mathcal{\tilde F}(\tilde u_1)-\mathcal{\tilde F}(\tilde u_2)\|_{L^2(I)}(\|\bar{u}_{\sigma\sigma\sigma\sigma}\|_{L^2(I)}+\|\bar{u}\|_{L^2(I)}). 
\end{align*} Note that by Lemma \ref{lem1} the map $\mathcal{F}(u)$ with values in $L^2(I)$ is Lipschitz on $K$; since $\mathcal{\tilde F}(u)=S(u)\mathcal{F}(u)/(1- u \kappa_0)$ it is not hard to see that $\mathcal{\tilde F}(u)$ is also Lipschitz. Using this and applying Young's inequality to the right hand side we obtain \begin{equation} \label{contproof1} \frac{1}{2}\frac{d}{dt}\|\bar{u}\|^2_{H^2(I)}\leq C\|\tilde{u}_1-\tilde{u}_2\|_{H^2(I)}^2+\frac{1}{2}\|\bar{u}\|_{H^2(I)}^2 \end{equation} with a constant $C$ independent of $\tilde{u}_1$, $\tilde{u}_2$ and $T^\varepsilon$. Applying the Gr\"onwall inequality to \eqref{contproof1} we get \begin{equation} \label{contend} \sup_{0\leq t\leq T^\varepsilon} \|\mathcal{T}(\tilde{u}_1)-\mathcal{T}(\tilde{u}_2)\|^2_{H^2(I)} \leq 2(e^{T^\varepsilon}-1)C\|\tilde{u}_1-\tilde{u}_2\|^2_{L^\infty(0,T^\varepsilon;H^2(I))}. \end{equation} Similar arguments additionally yield the following bound for $u=\mathcal{T}(\tilde u)$, \begin{equation} \label{notFar} \sup_{0\leq t\leq T^\varepsilon}\|u(t)\|_{H^2(I)}^2 \leq (e^{T^\varepsilon}-1)C_1 +e^{T^\varepsilon} \|u_0\|_{H^2(I)}^2, \end{equation} with $C_1$ independent of $\tilde u\in L^\infty(0,T^\varepsilon;K)$. Choosing $T^\varepsilon$ sufficiently small we get that $\|u(t)\|_{H^2(I)}\leq M$ for $0<t<T^\varepsilon$. Finally, multiply \eqref{linear} by $(u-u_0)$ and integrate. After integrating by parts and using the fact that $\|u\|_{H^2(I)}\leq M$ we obtain $$ \sup_{0\leq t\leq T^\varepsilon}\|u(t)-u_0\|_{L^2(I)}^2\leq C_3(e^{T^\varepsilon}-1). $$ Then using the interpolation inequality $$ \|u-u_0\|^2_{C(I)}\leq C\|u-u_0\|_{H^2(I)}\|u-u_0\|_{L^2(I)} $$ we get the bound \begin{equation} \label{Linftyblizko} \sup_{0\leq t\leq T^\varepsilon}\|u(t)-u_0\|_{C(I)}^4\leq C_4\|u(t)-u_0\|_{L^2(I)}^2\leq C_5(e^{T^\varepsilon}-1).
\end{equation} Now by \eqref{contend} and \eqref{Linftyblizko} we see that, possibly after passing to a smaller $T^\varepsilon$, $\mathcal{T}$ maps $K$ into $K$ and is a contraction on $K$. \end{proof} \noindent{\bf Step 3.} {\em Regularized equation: a priori estimates, existence on a time interval independent of $\varepsilon$, and limit as $\varepsilon\to 0$}.\\ In this step we derive a priori estimates which imply existence of a solution of \eqref{eqnfordve} on a time interval independent of $\varepsilon$. These estimates are also used to pass to the $\varepsilon\to 0$ limit. \begin{lemma}\label{aprioriestH2uniform} Assume that $u_0\in H^2_{per}(I)$ and $\| u_0 \|_{L^\infty(I)}<\delta_0$. Let $u^\varepsilon$ solve \eqref{eqnfordve} on a time interval $[0,T^\varepsilon]$ with initial data $u^\varepsilon(0)=u_0$, and let $u^\varepsilon$ satisfy $|u^\varepsilon(\sigma,t)|\leq \delta_0$ on $I\times [0,T^\varepsilon]$. Then \begin{equation} \label{unifH2bound} \|u^\varepsilon_{\sigma\sigma}\|_{L^2(I)}^2\leq a(t), \end{equation} where $a(t)$ is the solution of \begin{equation} \label{supersol1a} \dot a=2P a^3+2Q, \quad a(0)=\|(u_0)_{\sigma\sigma}\|_{L^2}^2 \end{equation} (continued by $+\infty$ after the blow-up time), and $0<P<\infty$, $0<Q<\infty$ are independent of $\varepsilon$ and $u_0$. \end{lemma} \begin{proof} For brevity we adopt the notation $u:=u^\varepsilon,\; V:=V^\varepsilon,\; S:=S^\varepsilon$. Differentiate equation \eqref{eqnfordve} in $\sigma$ to find that \begin{equation}\label{apriori1} \begin{aligned} u_{\sigma t} + \varepsilon \frac{\partial^5 u}{\partial \sigma^5} &-\frac{S}{1-u \kappa_0} \Phi^\prime (V) V_\sigma = \frac{S}{1-u \kappa_0} (\kappa(u))_\sigma\\ &+ \Bigl(\frac{S}{1-u \kappa_0}\Bigr)_\sigma \Bigl(\kappa(u)+\Phi(V)-\frac{2\pi}{L[u]}- \frac{1}{L[u]} \int_I \Phi(V)S d\sigma\Bigr).
\end{aligned} \end{equation} Next we rewrite \eqref{eqnfordve} in the form $V = \Phi(V)+\kappa - \lambda$ to calculate $V_\sigma$, $$(1- \Phi'(V))V_\sigma = \kappa_\sigma$$ whence \[ V_\sigma = \frac{1}{1- \Phi^\prime(V)} \kappa_\sigma. \] Now we substitute this in \eqref{apriori1} to find that \begin{equation}\label{aprioriFinal} \begin{aligned} u_{\sigma t} + \varepsilon \frac{\partial^5 u}{\partial \sigma^5} &= \frac{u_{\sigma\sigma\sigma}}{S^2(1-\Phi^\prime(V))} -\frac{\Phi^\prime(V)+2}{(1-\Phi^\prime(V))S^4} u_\sigma u_{\sigma\sigma}^2 + A(\sigma,V, u,u_\sigma)u_{\sigma\sigma} \\ &+B(\sigma,V, u,u_\sigma)+ \Bigl(\frac{S}{1-u \kappa_0}\Bigr)_\sigma \Bigl(\Phi(V)-\frac{2\pi}{L[u]}- \frac{1}{L[u]} \int_I \Phi(V)S d\sigma\Bigr), \end{aligned} \end{equation} where $A(\sigma,V, u,p)$ and $B(\sigma,V,u,p)$ are bounded continuous functions on $I\times \mathbb{R}\times [-\delta_0,\delta_0] \times \mathbb{R}$. Multiply \eqref{aprioriFinal} by $u_{\sigma\sigma\sigma}$ and integrate over $I$, integrating by parts on the left hand side. We find after rearranging terms and setting $\gamma:=\sup (1-\Phi^\prime(V))$ ($0<\gamma<\infty$), \begin{equation} \label{TrudnayaOtsenka} \begin{aligned} \frac{1}{2}\frac{d}{dt}\|u_{\sigma\sigma}\|_{L^2(I)}^2 + \varepsilon \|u_{\sigma\sigma\sigma\sigma}\|_{L^2(I)}^2&+ \frac{1}{\gamma}\int_I |u_{\sigma\sigma\sigma}|^2\frac{d\sigma}{S^2}\leq C \int_I |u_{\sigma\sigma}|^2|u_{\sigma\sigma\sigma}|\frac{d\sigma}{S^3}\\ &+ C_1\bigl(\|u_{\sigma\sigma}\|_{L^2(I)}^3+1\bigr)\Bigl(\int_I |u_{\sigma\sigma\sigma}|^2\frac{d\sigma}{S^2}\Bigr)^{1/2}, \end{aligned} \end{equation} where we have also used the inequality $\|u_\sigma\|_{L^\infty(I)}^2\leq |\,I\,| \|u_{\sigma\sigma}\|^2_{L^2(I)}$ and estimated various terms with the help of the Cauchy-Schwarz inequality. 
Next we estimate the first term in the right hand side of \eqref{TrudnayaOtsenka} as follows \begin{equation} \label{PromezhOtsenka} \int_I |u_{\sigma\sigma}|^2|u_{\sigma\sigma\sigma}|\frac{d\sigma}{S^3}\leq C_\gamma \int_I |u_{\sigma\sigma}|^4\frac{d\sigma}{S^4} +\frac{1}{4C_\gamma} \int_I |u_{\sigma\sigma\sigma}|^2\frac{d\sigma}{S^2} \end{equation} and apply the following interpolation type inequality, whose proof is given in Lemma \ref{interplemma}: for all $u\in H^3_{per}(I)$ such that $|u|\leq \delta_0$ it holds that \begin{equation} \label{firstInterpolation} \int_I |u_{\sigma\sigma}|^4\frac{d\sigma}{S^4} \leq \mu \int_I |u_{\sigma\sigma\sigma}|^2 \frac {d\sigma}{S^2}+C_2\Bigl(\int u_{\sigma\sigma}^2d\sigma\Bigr)^3+C_3\quad \forall \mu>0, \end{equation} where $C_2$ and $C_3$ depend only on $\mu$. Now we use \eqref{PromezhOtsenka} and \eqref{firstInterpolation} in \eqref{TrudnayaOtsenka}, and estimate the last term of \eqref{TrudnayaOtsenka} with the help of Young's inequality to derive that \begin{equation} \label{FinalnayaOtsenka} \frac{1}{2}\frac{d}{dt}\|u_{\sigma\sigma}\|_{L^2(I)}^2 + \varepsilon \|u_{\sigma\sigma\sigma\sigma}\|_{L^2(I)}^2+ \frac{1}{4\gamma}\int_I |u_{\sigma\sigma\sigma}|^2\frac{d\sigma}{S^2}\leq P\|u_{\sigma\sigma}\|^6_{L^2(I)} +Q. \end{equation} Then by a standard comparison argument for ODEs $\|u_{\sigma\sigma}\|_{L^2(I)}^2\leq a(t)$, where $a$ solves \eqref{supersol1a}. The Lemma is proved. \end{proof} \begin{lemma}\label{interplemma} Assume that $u\in H^3_{per}(I)$ and $|u|\leq \delta_0$ on $I$. Then \eqref{firstInterpolation} holds with $C_2$ and $C_3$ depending on $\mu$ only. \end{lemma} \begin{proof} The following straightforward bounds will be used throughout the proof, \begin{equation} \label{AUX1S} \frac{1}{\sqrt{2}}(|u_\sigma|+(1-\delta_0\|\kappa_0\|_{L^\infty(I)}))\leq S(u) \leq |u_\sigma|+1, \end{equation} \begin{equation} \label{AUX2S} |(S(u))_\sigma|\leq \frac{1}{S} (|u_\sigma||u_{\sigma\sigma}|+C|u_\sigma|+C_1). 
\end{equation} Note that \begin{equation}\label{Otsenka_lem21} \int_I u_{\sigma\sigma}^4\frac{d\sigma}{S^4} \leq C \left\| u^2_{\sigma\sigma}/S\right\|_{L^\infty(I)} \int_I u_{\sigma\sigma}^2d\sigma, \end{equation} where $C>0$ is independent of $u$. Since $\int_I u_{\sigma\sigma} d\sigma=0$ we have \begin{equation} \label{Otsenka_lem22} \| u_{\sigma\sigma}^2/S\|_{L^\infty(I)} \leq \int_I|(u_{\sigma\sigma}^2/S)_\sigma|d\sigma. \end{equation} Next we use \eqref{AUX1S}, \eqref{AUX2S} to estimate the right hand side of \eqref{Otsenka_lem22} with the help of the Cauchy-Schwarz inequality, \begin{equation} \label{Otsenka_lem23} \begin{aligned} \int_I|(u_{\sigma\sigma}^2/S)_\sigma|d\sigma&\leq 2 \int_I\left|u_{\sigma\sigma\sigma}u_{\sigma\sigma}\right|\frac{d\sigma}{S} + C\left(\int_I |u_{\sigma\sigma}|^3 \frac{d\sigma}{S^2}+\int_I u_{\sigma\sigma}^2d\sigma\right) \\ &\leq 2 \left(\int_Iu^2_{\sigma\sigma\sigma}\frac{d\sigma}{S^2}\right)^{1/2} \left(\int_I u_{\sigma\sigma}^2 d\sigma \right)^{1/2} \\ &+ C\left(\int_Iu^4_{\sigma\sigma} \frac{d\sigma}{S^4}\right)^{1/2} \left(\int_I u_{\sigma\sigma}^2 d\sigma\right)^{1/2}+ C \int_I u_{\sigma\sigma}^2d\sigma. \end{aligned} \end{equation} Plugging \eqref{Otsenka_lem22}-\eqref{Otsenka_lem23} into \eqref{Otsenka_lem21} and using Young's inequality yields \begin{equation*} \begin{aligned} \int_I u_{\sigma\sigma}^4\frac{d\sigma}{S^4} \leq& 2 \left(\int_Iu^2_{\sigma\sigma\sigma} \frac{d\sigma}{S^2}\right)^{1/2} \left(\int_I u_{\sigma\sigma}^2d\sigma \right)^{3/2} \\ &+ \frac{1}{2}\int_I u^4_{\sigma\sigma}\frac{d\sigma}{S^4} + C\left(\int_I u_{\sigma\sigma}^2d\sigma \right)^{3}+C. \end{aligned} \end{equation*} Finally, applying Young's inequality once more we deduce \eqref{firstInterpolation}. \end{proof} Consider a time interval $[0,T^*]$ with $T^*>0$ slightly smaller than the blow-up time $T^{bu}$ of \eqref{supersol1a}. As a byproduct of Lemma \ref{aprioriestH2uniform} (cf.
\eqref{FinalnayaOtsenka}) we then have for any $0<T\leq \min\{T^\varepsilon,T^*\}$ \begin{equation} \label{byprod1} \sup_{t\in[0,T]}\|u_{\sigma\sigma}\|^2_{L^2(I)}+ \varepsilon \int_0^{T}\|u_{\sigma\sigma\sigma\sigma}\|_{L^2(I)}^2 dt+ \int_0^{T} \|u_{\sigma\sigma\sigma}\|^2_{L^2(I)}dt\leq C, \end{equation} where $u:=u^\varepsilon(\sigma,t)$ is the solution of \eqref{eqnfordve} described in Proposition \ref{prop1}, and $C$ is independent of $\varepsilon$ and $T$. In order to show that the solution of \eqref{eqnfordve} exists on a time interval independent of $\varepsilon$, it remains to obtain a uniform in $\varepsilon$ estimate on the growth of $\|u^\varepsilon-u_0\|_{C(I)}$ in time. Arguing as at the end of the proof of Proposition \ref{prop1} and using \eqref{byprod1} one can prove \begin{lemma} \label{afteraprioriestinC} Assume that $u_0\in H^2_{per}(I)$, $\| u_0 \|_{L^\infty(I)}<\delta_0$, and that the solution $u^\varepsilon$ of \eqref{eqnfordve} satisfies $|u^\varepsilon(\sigma,t)|\leq \delta_0$ on $I\times [0,T^\varepsilon]$. Then for all $0<t\leq \min\{T^\varepsilon, T^*\}$, \begin{equation} \label{uniformw12bound} \|u^\varepsilon-u_0\|_{C(I)}^4\leq C(e^{t}-1), \end{equation} where $C$ is independent of $\varepsilon$. In particular, there exists $0<T^{**}\leq T^*$, independent of $\varepsilon$, such that $\sup \bigl\{\|u^\varepsilon(t)\|_{L^\infty(I)};\, 0\leq t\leq \min\{T^\varepsilon,T^{**}\}\bigr\}< \delta_0$. \end{lemma} Combining Proposition \ref{prop1} with Lemma~\ref{aprioriestH2uniform} and Lemma~\ref{afteraprioriestinC} we see that the solution $u^\varepsilon$ of \eqref{eqnfordve} exists on the time interval $[0,T^{**}]$ and \eqref{byprod1} holds with $T=T^{**}$. Now, it is not hard to pass to the limit $\varepsilon\to 0$ in \eqref{eqnfordve}. Indeed, exploiting \eqref{byprod1} we see that, up to extracting a subsequence, $u^\varepsilon\rightharpoonup u$ weak star in $L^\infty(0,T^{**}; H^2_{per}(I))$ as $\varepsilon\to 0$.
Using \eqref{byprod1} in \eqref{evoMotion} we also conclude that the family $\{u^\varepsilon_t\}_{0<\varepsilon\leq 1}$ is bounded in $L^2(I\times [0,T^{**}])$. Combining the uniform estimates on $\|u^\varepsilon_{\sigma\sigma\sigma}\|_{L^2(I\times [0,T^{**}])}$ (from \eqref{byprod1}) and $\|u_t^\varepsilon\|_{L^2(I\times [0,T^{**}])}$ we deduce $u^\varepsilon\to u$ strongly in $L^2(0,T^{**}; H^2_{per}(I))\cap C(0,T^{**};H^1_{per}(I))$. Moreover, $u^\varepsilon_t+\varepsilon u^\varepsilon_{\sigma\sigma\sigma\sigma}\rightharpoonup u_t$ weak star in $L^\infty(0,T^{**}; L^2(I))$ and $u^\varepsilon_t+\varepsilon u^\varepsilon_{\sigma\sigma\sigma\sigma}\to u_t$ strongly in $L^2(0,T^{**}; L^2(I))$. Thus, in the limit we obtain a solution $u$ of \eqref{deqn}. That is, we have proved \begin{theorem}(Existence for smooth initial data)\label{thm_smooth_in_data} For any $u_0\in H^2_{per}(I)$ such that $\|u_0\|_{L^\infty(I)}<\delta_0$ there exists a solution $u$ of \eqref{deqn} on a time interval $[0,T]$, with $T>0$ depending on $u_0$. \end{theorem} \begin{remark} \label{Remark_universal} Note that the time interval in Theorem \ref{thm_smooth_in_data} can be chosen universally for all $u_0$ such that $\|u_0\|_{L^\infty(I)}\leq \alpha<\delta_0$ and $\|u_0\|_{H^2(I)}\leq M<\infty$, that is, $T=T(M,\alpha)>0$. \end{remark} \noindent{\bf Step 4.} {\em Passing to $W^{1,\infty}$ initial data.} \\ In this step we consider a solution $u$ of \eqref{deqn} granted by Theorem \ref{thm_smooth_in_data} and show that the requirement of $H^2$ smoothness of the initial data can be weakened to $W^{1,\infty}$. To this end we pass to the limit in \eqref{deqn} with an approximating sequence of smooth initial data. The following result establishes a bound on $\|u\|_{L^{\infty}(I)}$ independent of the $H^2$-norm of the initial data (unlike in Lemma \ref{afteraprioriestinC}) and also provides an estimate for $\|u\|_{W^{1,\infty}(I)}$.
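Both the $H^2$ estimate of Step 3 and the gradient bound stated below control the solution through scalar comparison ODEs, and the time up to which the comparison solution stays finite bounds the guaranteed existence interval. The sketch below (with illustrative, hypothetical constants $P$, $Q$ and initial value, not constants computed from the estimates) shows how the blow-up time of a comparison ODE of the form \eqref{supersol1a} can be estimated numerically:

```python
def comparison_bound(a0, P, Q, t_max=10.0, dt=1e-4, cap=1e4):
    """Integrate the comparison ODE  a' = 2*P*a^3 + 2*Q  by RK4 and stop
    once a exceeds `cap`; the returned time approximates the blow-up time,
    i.e. the horizon up to which the corresponding bound stays finite."""
    f = lambda a: 2.0 * P * a ** 3 + 2.0 * Q
    t, a = 0.0, a0
    while t < t_max and a < cap:
        k1 = f(a)
        k2 = f(a + 0.5 * dt * k1)
        k3 = f(a + 0.5 * dt * k2)
        k4 = f(a + dt * k3)
        a += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return t, a

# Hypothetical sample constants; the super-linear term forces finite-time
# blow-up, so the returned time is finite and strictly positive.
T_star, a_end = comparison_bound(a0=1.0, P=0.1, Q=0.5)
```

For these sample values the comparison solution blows up in finite time well before `t_max`, which is exactly the mechanism limiting the uniform existence interval in Step 3.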
\begin{lemma} \label{Lemma_various_uniformbounds} Let $u$ be a solution of \eqref{deqn} (with initial value $u_0$) on the interval $[0,T]$ satisfying $\|u(t)\|_{L^\infty(I)}\leq\delta_0$ for all $t<T$. Then \begin{equation} \label{Thefirstl_inf_bound} \|u(t)\|_{L^\infty(I)}\leq \|u_0\|_{L^\infty(I)}+Rt, \end{equation} where $R\geq 0 $ is a constant independent of $u_0$. Furthermore, the following inequality holds \begin{equation} \label{Thesecond_inf_bound} \|u_\sigma\|_{L^\infty(I)}\leq a_1(t), \end{equation} where $a_1(t)$ is the solution of \begin{equation} \label{supersol2a} \frac{d a_1}{d t}=P_1 a_1^2+Q_1 a_1+R_1, \quad a_1(0)=\|(u_0)_{\sigma}\|_{L^\infty(I)} \end{equation} (continued by $+\infty$ after the blow-up time) and $P_1$, $Q_1$, $R_1$ are positive constants independent of $u_0$. \end{lemma} \begin{proof} Both bounds \eqref{Thefirstl_inf_bound} and \eqref{Thesecond_inf_bound} are established using the maximum principle. Consider $g(\sigma,t)=g(t):=\|u_0\|_{L^\infty(I)}+Rt$, where $R>0$ is to be specified. Since $\Phi$ is bounded it holds that $ g_t-\Phi(g_t)\geq R-\sup\Phi $. Assuming that $u-g$ attains its maximum on $(0,t]\times I$, say at $(\sigma_0, t_0)$, we have $u_\sigma(\sigma_0,t_0)=0$, $u_{\sigma\sigma}(\sigma_0,t_0)\leq 0$, $g_t(\sigma_0,t_0)\leq u_t(\sigma_0,t_0)$ and $S/(1-u\kappa_0)=1$. Using \eqref{deqn} and the monotonicity of the function $V-\Phi(V)$ we also get, for $t=t_0$, $\sigma=\sigma_0$, $$ \begin{aligned} R-\sup\Phi \leq g_t-\Phi(g_t)& \leq u_t - \frac{S}{1-u \kappa_0}\Phi\Bigl(\frac{1-u \kappa_0}{S}u_t\Bigr) \\ &\leq\frac{\kappa_0}{1-u \kappa_0}-\frac{1}{L[u]}\Bigl( \int_I \Phi\left(\frac{1-u \kappa_0}{S} u_t\right)Sd\sigma+2\pi\Bigr). \end{aligned} $$ The right-hand side of this inequality is uniformly bounded for all functions $u$ satisfying $|u|\leq \delta_0$, and therefore $u-g$ cannot attain its maximum on $(0,t]\times I$ when $R$ is sufficiently large, hence $u(\sigma,t)\leq \|u_0\|_{L^\infty(I)}+Rt$.
Similarly one proves that $u(\sigma,t)\geq -\|u_0\|_{L^\infty(I)}-Rt$. To prove \eqref{Thesecond_inf_bound} we write down the equation obtained by differentiating \eqref{deqn} in $\sigma$ (cf. \eqref{aprioriFinal}), \begin{equation} \label{aprioriFinal1} \begin{aligned} u_{\sigma t} - \frac{u_{\sigma\sigma\sigma}}{S^2(1-\Phi^\prime(V))}&+ \frac{\Phi^\prime(V)+2}{(1-\Phi^\prime(V))S^4} u_\sigma u_{\sigma\sigma}^2 = Au_{\sigma\sigma} \\ &+B+ \Bigl(\frac{S}{1-u \kappa_0}\Bigr)_\sigma \Bigl(\Phi(V)-\frac{2\pi}{L[u]}- \frac{1}{L[u]} \int_I \Phi(V)S d\sigma\Bigr), \end{aligned} \end{equation} where we recall that $A(\sigma,V, u,p)$ and $B(\sigma,V,u,p)$ are bounded continuous functions on $I\times \mathbb{R}\times [-\delta_0,\delta_0] \times \mathbb{R}$. Consider the function $u_\sigma(\sigma,t)-a_1(t)$, with $a_1(t)$ satisfying $a_1(0)=\|u_\sigma(0)\|_{L^{\infty}(I)}$. If this function attains its maximum over $I\times (0,t]$ at a point $(\sigma_0, t_0)$ with $t_0>0$, we have at this point \begin{equation} \label{ContradictionPointwise} \begin{aligned} \frac{ d a_1}{d t}\leq u_{\sigma t}&\leq B+ \Bigl(\frac{S}{1-u \kappa_0}\Bigr)_\sigma \Bigl(\Phi(V)-\frac{2\pi}{L[u]}- \frac{1}{L[u]} \int_I \Phi(V)S d\sigma\Bigr)\\ &< C_1+C_2u_\sigma^2+C_3 u_\sigma, \end{aligned} \end{equation} where $C_1$, $C_2$, $C_3$ are some positive constants independent of $u$. We see now that for $P_1\geq C_2$, $Q_1\geq C_3$ and $R_1\geq C_1$, either $u_\sigma(\sigma_0,t_0)\leq a_1(t_0)$ or inequality \eqref{ContradictionPointwise} contradicts \eqref{supersol2a}. This yields that for all $\sigma\in I$ and all $t< \sup\{\tau; a_1(\tau)\ \text{is finite}\}$, $$ u_\sigma(\sigma,t)-a_1(t)\leq \max\{0,\max_{\sigma\in I} u_\sigma(\sigma,0)-a_1(0)\}= 0. $$ The lower bound for $u_\sigma(\sigma,t)$ is proved similarly. 
\end{proof} Lemma \ref{Lemma_various_uniformbounds} shows that for some $T_1>0$ the solution $u$ of \eqref{deqn} satisfies \begin{equation} \label{Linfsummary} \|u\|_{L^\infty(I)}<\delta_0 \ \text{and}\ \|u_\sigma\|_{L^\infty(I)}\leq M_1 , \end{equation} when $0<t\leq \min\{T,T_1\}$, where $T$ is the maximal time of existence of $u$. Moreover, $T_1$ and the constant $M_1$ in \eqref{Linfsummary} depend only on the choice of the constants $0<\alpha<\delta_0$ and $M>0$ in the bounds for the initial data $|u_0|\leq \alpha$ and $\|(u_0)_\sigma\|_{L^\infty(I)}\leq M$. We prove next that actually $T\geq T_1$. \begin{lemma} \label{Holder_continuity} Let $u$ solve \eqref{deqn} and let \eqref{Linfsummary} hold on $0\leq t\leq T_2$, where $0<T_2\leq T_1$. Then for any $\delta>0$ \begin{equation} \label{HolderBound} |u_\sigma(\sigma^\prime,t)-u_\sigma(\sigma^{\prime\prime},t)|\leq C_\delta |\sigma^\prime-\sigma^{\prime\prime}|^\vartheta \ \text{when} \ \delta\leq t\leq T_2, \end{equation} where $0<\vartheta<1$ and $C_\delta$ depend only on $\delta$ and the constant $M_1$ in \eqref{Linfsummary}. \end{lemma} \begin{proof} Recall from Lemma \ref{lem1} that $\Psi$ denotes the inverse function of $V-\Phi(V)$, and set \begin{equation*} \lambda(t):=\frac{1}{L[u]}\Bigl(2\pi+\int_I \Phi(V)S d\sigma\Bigr), \end{equation*} so that the solution $V$ of the equation $V=\Phi(V)+\kappa-\lambda$ is given by $\Psi(\kappa-\lambda)$.
This allows us to write the equation \eqref{aprioriFinal1} for $v:=u_\sigma$ in the form of a quasilinear parabolic equation, \begin{equation} \label{convenientForm} v_t=\bigl(a(v_\sigma,\sigma,t)\bigr)_\sigma-\frac{\lambda(t) u_\sigma}{S(u){(1-u \kappa_0)}} v_\sigma -\lambda(t)(u\kappa_0)_\sigma \frac{u_\sigma^2}{S(u)(1-u \kappa_0)^2}, \end{equation} where $$ a(p,\sigma,t)= \frac{S(u)}{1-u\kappa_0}\Bigl(\Phi\bigl(\Psi[\bar\kappa(p,\sigma,t)-\lambda(t)]\bigr)+\bar\kappa(p,\sigma,t)\Bigr) $$ with $$ \bar\kappa(p,\sigma,t)= \frac{1}{S^3(u)} \Bigl((1-u \kappa_0) p + 2\kappa_0 u_\sigma^2+(\kappa_0)_\sigma u_\sigma u + \kappa_0(1-u \kappa_0)^2\Bigr). $$ Note that setting $V:=\Psi[\bar\kappa(p,\sigma,t)-\lambda(t)]$ we have $$ \frac{\partial a}{\partial p}(p,\sigma, t)= \frac{S(u)}{1-u\kappa_0}\Bigl(\frac{\Phi^\prime(V)}{1-\Phi^\prime(V)}+1\Bigr) \frac{\partial \bar\kappa}{\partial p}(p,\sigma,t)=\frac{1}{S^2(u)(1-\Phi^\prime(V))}. $$ It follows that if \eqref{Linfsummary} holds then the quasilinear divergence form parabolic equation \eqref{convenientForm} satisfies all necessary conditions to apply classical results on H\"older continuity of bounded solutions, see e.g. \cite[Chapter V, Theorem 1.1]{Lad68}; this completes the proof. \end{proof} The property of H\"older continuity established in Lemma \ref{Holder_continuity} allows us to prove the following important result. \begin{lemma} \label{LemmaProGlobalGronwal} Let $u$ solve \eqref{deqn} and assume that \eqref{Linfsummary} holds. Then for any $\tau\geq \delta>0$, \begin{equation} \label{GronFinal} \|u_{\sigma\sigma}(t)\|^2_{L^2(I)}\leq (\|u_{\sigma\sigma}(\tau)\|^2_{L^2(I)}+1)e^{P_2 (t-\tau)} \quad \text{when}\quad t\geq \tau, \end{equation} where $P_2$ depends only on $\delta$ and the constant $M_1$ in \eqref{Linfsummary} (the latter constant depends not on the $H^2$ norm of $u_0$ but on its norm in the space $W^{1,\infty}(I)$).
\end{lemma} \begin{proof} Introduce smooth cutoff functions $\phi_n(\sigma)$ satisfying \begin{equation} \begin{cases} \phi_n(\sigma)=1 \ \text{on} \ [1/n,2/n]\\ 0\leq \phi_n(\sigma)\leq 1 \ \text{on} \ [0,1/n]\cup[2/n,3/n]\\ \phi_n(\sigma)=0 \ \text{otherwise}, \end{cases} \quad \text{and}\quad |(\phi_n)_\sigma|\leq Cn. \end{equation} Consider $t\geq \delta$. Multiply \eqref{aprioriFinal1} by $-\bigl( \phi_n^2 u_{\sigma\sigma}\bigr)_\sigma$ and integrate in $\sigma$, integrating by parts in the first term. We obtain \begin{equation} \label{Schitaem} \begin{aligned} \frac{1}{2}\frac{d}{dt} \int_I \phi_n^2u_{\sigma\sigma}^2 d\sigma+ \frac{1}{\gamma^\prime}\int_I \phi_n^2u_{\sigma\sigma\sigma}^2 d\sigma & \leq C\int_I \phi_n^2(u_{\sigma\sigma}^2+|u_{\sigma\sigma}|)|u_{\sigma\sigma\sigma}| d\sigma\\ &+ Cn\int_{I} \phi_n\bigl(|u_{\sigma\sigma\sigma}||u_{\sigma\sigma}|+|u_{\sigma\sigma}|^2+|u_{\sigma\sigma}|^3\bigr) d\sigma, \end{aligned} \end{equation} where $\gamma ^\prime >0$ and $C$ are independent of $u$ and $n$. Applying the Cauchy-Schwarz and Young's inequalities to various terms in the right hand side of \eqref{Schitaem} leads to \begin{equation} \label{Schitaem1} \frac{1}{2}\frac{d}{dt} \int_I \phi_n^2u_{\sigma\sigma}^2 d\sigma+ \frac{1}{2\gamma^\prime}\int_I \phi_n^2u_{\sigma\sigma\sigma}^2 d\sigma \leq C_1 \int_I \phi_n^2u_{\sigma\sigma}^4 d\sigma +C_2 n^2 \int_{{\rm s}(\phi_n)}(|u_{\sigma\sigma}|^2+1) d\sigma , \end{equation} where $C_1$, $C_2$ are independent of $u$ and $n$ and ${\rm s}(\phi_n)$ denotes the support of $\phi_n$. 
Next we apply the following interpolation type inequality (see, e.g., \cite{Lad68}, Chapter II, Lemma 5.4) to the first term in the right hand side of \eqref{Schitaem1}: \begin{equation} \label{interpolLadyzh} \begin{aligned} \int_I \phi_n^2u_{\sigma\sigma}^4 d\sigma\leq C_3\bigl(&\sup\{|u_\sigma(\sigma^\prime)-u_\sigma(\sigma^{\prime\prime})|;\, \sigma^\prime,\sigma^{\prime\prime}\in {\rm s}(\phi_n)\}\bigr)^2\\ & \times \Bigl(\int_I \phi_n^2u_{\sigma\sigma\sigma}^2 d\sigma+\int_{{\rm s}(\phi_n)}|u_{\sigma\sigma}|^2|(\phi_n)_\sigma|^2 d\sigma \Bigr). \end{aligned} \end{equation} Now we use \eqref{HolderBound} in \eqref{interpolLadyzh} to bound $\sup\{|u_\sigma(\sigma^\prime)-u_\sigma(\sigma^{\prime\prime})|;\, \sigma^\prime,\sigma^{\prime\prime}\in {\rm s}(\phi_n)\}$ by $C_\delta(1/n)^\vartheta$ and choose $n$ so large that $C_1C_3 C_\delta^2(1/n)^{2\vartheta}\leq 1/(4\gamma^\prime)$; then \eqref{Schitaem1} becomes \begin{equation} \label{Schitaem2} \frac{1}{2}\frac{d}{dt} \int_I \phi_n^2u_{\sigma\sigma}^2 d\sigma+ \frac{1}{4\gamma^\prime}\int_I \phi_n^2u_{\sigma\sigma\sigma}^2 d\sigma \leq C_4 n^2 \int_{{\rm s}(\phi_n)}(|u_{\sigma\sigma}|^2+1) d\sigma. \end{equation} It is clear that we can replace $\phi_n(\sigma)$ in \eqref{Schitaem2} by its translations $\phi_n(\sigma+k/n)$, $k\in \mathbb{Z}$; then, taking the sum of the obtained inequalities, we derive \begin{equation} \label{Schitaem3} \frac{1}{2}\frac{d}{dt} \int_I \overline{\phi}_n u_{\sigma\sigma}^2 d\sigma+ \frac{1}{4\gamma^\prime}\int_I \overline \phi_n u_{\sigma\sigma\sigma}^2 d\sigma \leq C_5 \Bigl(\int_{I}|u_{\sigma\sigma}|^2 d\sigma +1\Bigr), \end{equation} where $C_5$ is independent of $u$, and $ \overline{\phi}_n= \sum_k \phi_n^2(\sigma+k/n)$. Note that $1\leq \overline{\phi}_n\leq 3$; therefore, applying Gr\"onwall's inequality to \eqref{Schitaem3} we obtain \eqref{GronFinal}.
\end{proof} \begin{corollary} \label{CorBoundFor3der} Assume that a solution $u$ of \eqref{deqn} exists on $[0,T]$ for some $T>0$, and \eqref{Linfsummary} holds. Then, given an arbitrary positive $\delta<T$, we have \begin{equation} \label{u_sss_norm} \int_\tau^T\int_I |u_{\sigma\sigma\sigma}|^2d\sigma d t\leq \tilde{C}_\delta \|u_{\sigma\sigma}(\tau)\|^2_{L^2(I)}, \end{equation} for all $\delta\leq\tau\leq T$, where $\tilde{C}_\delta$ depends only on $\delta$. \end{corollary} \begin{proof} The bound follows by integrating \eqref{Schitaem3} in time and using \eqref{GronFinal}. \end{proof} Using Lemma \ref{LemmaProGlobalGronwal} and Lemma \ref{Lemma_various_uniformbounds}, taking into account also Remark \ref{Remark_universal}, we see that the solutions in Theorem \ref{thm_smooth_in_data} exist on a common interval $[0,T]$ with $T=T(\alpha, M)$, provided that $\|u_0\|_{L^\infty(I)}\leq \alpha<1$ and $\|(u_0)_\sigma\|_{L^\infty(I)}\leq M$. In the following lemma we establish an integral bound for $\|u_{\sigma\sigma}\|_{L^2(I)}$. \begin{lemma} \label{LemmaIntegralBound} Let $u$ be a solution of \eqref{deqn} on $[0,T]$ satisfying \eqref{Linfsummary}. Then \begin{equation} \label{IntegralBound} \|u_{\sigma\sigma}\|^2_{L^2(I\times [0,T])} \leq C, \end{equation} where $C$ depends only on the constants in \eqref{Linfsummary}. \end{lemma} \begin{proof} To obtain \eqref{IntegralBound} one multiplies \eqref{deqn} by $u_{\sigma\sigma}$ and integrates in $\sigma$, integrating by parts in the first term. Then, applying the Cauchy-Schwarz and Young's inequalities and integrating in $t$, one derives \eqref{IntegralBound}; the details are left to the reader. \end{proof} Various estimates obtained in Lemmas \ref{Lemma_various_uniformbounds}, \ref{LemmaProGlobalGronwal}, and Corollary \ref{CorBoundFor3der} as well as Lemma \ref{LemmaIntegralBound} make it possible to pass to general initial data $u_0 \in W^{1,\infty}_{per}(I)$. Indeed, assume $\alpha:=\|u_0\|_{L^\infty(I)}<1$ and let $M:=\|(u_0)_\sigma \|_{L^\infty(I)}$.
Construct a sequence $u_0^k \rightharpoonup u_0$ converging weak-star in $W^{1,\infty}_{per}(I)$ as $k\to\infty$, where $u_0^k\in H^2_{per}(I)$. This can be done in a standard way by taking convolutions with mollifiers so that $\|u^k_0\|_{L^\infty(I)}\leq \alpha$ and $\|(u_0^k)_\sigma\|_{L^\infty(I)}\leq M$. Let $u^k(\sigma,t)$ be the solutions of \eqref{deqn} corresponding to the initial data $u^k_0(\sigma)$. We know that all these solutions exist on a common time interval $[0,T]$ and that we can choose $T>0$ such that \eqref{Linfsummary} holds with constants independent of $k$. By Lemma \ref{LemmaIntegralBound} the sequence $u^k(\sigma, t)$ is bounded in $L^2(0,T; H^2(I))$. Therefore, up to a subsequence, $u^k(\sigma,t)$ converges weakly to some function $u(\sigma,t)\in L^2(0,T; H^2(I))$. Using \eqref{deqn} we conclude that $u^k_t(\sigma,t)$ converge to $u_t(\sigma,t)$ weakly in $L^2(0,T; L^2(I))$. It follows, in particular, that $u(\sigma,0)=u_0(\sigma)$. Next, let $\delta>0$ be sufficiently small. It follows from Lemma \ref{LemmaIntegralBound} that there exists $\tau\in [\delta,2\delta]$ such that $\|u_{\sigma\sigma}^k(\tau)\|_{L^2(I)}\leq C/\delta$. Then by Lemma \ref{LemmaProGlobalGronwal} and Corollary \ref{CorBoundFor3der} the norms of $u^k$ in $L^\infty(2\delta,T; H^2_{per}(I))$ and $L^2(2\delta,T; H^3_{per}(I))$ are uniformly bounded. Thus $u^k(\sigma, t)$ converge to $u(\sigma,t)$ strongly in $L^2(2\delta,T; H^2_{per}(I))$. This in turn implies that $u^k_t(\sigma,t)$ converge to $u_t(\sigma,t)$ strongly in $L^2(2\delta,T; L^2(I))$. Therefore the function $u(\sigma,t)$ solves \eqref{deqn} on $[2\delta,T]$. Since $\delta>0$ can be chosen arbitrarily small, Theorem \ref{thm1} is completely proved. \section{Non-existence of traveling wave solutions} \label{sec:nonexistence} The following result proves that smooth ($H^2$) non-trivial traveling wave solutions of \eqref{interface0} do not exist.
The idea of the proof is to write equations of motion of the front and back parts of a curve, which is supposed to be a traveling wave solution, using a Cartesian parametrization. Next we show that it is not possible to form a closed, $H^2$-smooth curve from these two parts. We note that every traveling wave curve is always $H^2$ smooth since it is the same profile for all times up to translations. \begin{theorem}\label{thm2} Let $\Phi$ satisfy the conditions of Theorem \ref{thm1}. If $\Gamma(\sigma,t)$ is a family of closed curves which are a traveling wave solution of \eqref{interface0}, that is $\Gamma(\sigma,t)=\Gamma(\sigma,0)+vt$, then $v=0$ and $\Gamma(\sigma,0)$ is a circle. \end{theorem} \begin{proof} It is clear that if $v=0$ then a circle is the unique solution of \eqref{interface0}. Let $\Gamma(\sigma,t)$ be a traveling wave solution of \eqref{interface0} with non-zero velocity $v$. By Theorem \ref{thm1}, $\Gamma(\cdot,t)$ is smooth ($H^2$) for all $t>0$. By rotation and translation, we may assume without loss of generality that $v_x=0$, $v_y=c$ with $c>0$ and that $\Gamma(\sigma,t)$ is contained in the upper half plane for all $t\geq 0$. Let $\Gamma(\sigma_0,0)$ be a point of $\Gamma(\sigma,0)$ which is closest to the $x$-axis. Without loss of generality we assume that $\Gamma(\sigma_0,0)=0$. Locally, we can represent $\Gamma(\sigma,t)$ as a graph over the $x$-axis, $y=y(x) + ct$. Observe that the normal velocity is given by \begin{equation} V =\frac{c}{\sqrt{1+(y'(x))^2}} \end{equation} and the curvature $\kappa$ is expressed as \begin{equation} \kappa(x)= \frac{y''(x)}{(1+(y'(x))^2)^{3/2}}. \end{equation} Adopting the notation \begin{equation} f_\lambda^c(z) := \left(\frac{c}{\sqrt{1+z^2}}-\Phi\left(\frac{c}{\sqrt{1+z^2}}\right)+\lambda\right)(1+z^2)^{3/2}, \end{equation} and substituting the above expressions for $V$ and $\kappa$ into the relation $V=\Phi(V)+\kappa-\lambda$, it follows that $y$ solves the equation \begin{equation}\label{phieqn} y'' = f_\lambda^c(y') \end{equation} where, by construction, \begin{equation} y(0)=y'(0)=0.
\end{equation} Observe that \eqref{phieqn} is a second order equation for $y$ which depends only on $y'$. Thus, we may equivalently study $w:=y'$ which solves \begin{equation}\label{weqn} w' = f_\lambda^c(w),\;\;\;\; w(0)=0. \end{equation} Note that \eqref{weqn} is uniquely solvable on its interval of existence by Lipschitz continuity of $f_\lambda^c$. Further, the definition of $f_\lambda^c$ guarantees that $w$ has reflectional symmetry over the $y$-axis. If \eqref{weqn} has a global solution, $w$, then it is immediate that $y(x) := \int_0^x w(s)ds$ cannot describe part of a closed curve. As such we restrict to the case where $w$ has finite blow-up. Assume the solution $w_B$ of \eqref{weqn} has finite blow-up, $w_B(x)\to +\infty$ as $x\to x^*_B$ for some $0<x^*_B<\infty$. Then $y_B(x):=\int_0^x w_B(s)ds$ has a vertical tangent vector at $\pm x^*_B$. To extend the solution beyond the point $x^*_B$, we consider $w_F$, the solution of \eqref{weqn} with right hand side $f^{-c}_{\lambda}$. As above we assume that $w_F$ has a finite blow-up at $x^*_F>0$. Defining $y_F(x):=\int_0^x w_F(s)ds$, we have the following natural transformation, \begin{equation} \hat{y}_F(x) := -y_F(x-(x^*_B-x^*_F))+y_B(x^*_B)+y_B(x^*_F). \end{equation} Note that gluing $\hat{y}_F$ to $y_B$ forms an $H^2$ smooth curve at the point $(x^*_B,y_B(x^*_B))$ if and only if $w_F(x)\to +\infty$ as $x\to x^*_F$. We claim that this is the unique, smooth extension of $y_B$ at $x^*_B$. To that end, consider the rotated coordinate system $(x,y)\mapsto (y,-x)$. In this frame, the traveling wave moves with velocity $v_x=c$, $v_y=0$, and can be locally represented as the graph $x=x(y)+ct$, $x(y)$ solving \begin{equation}\label{rotatedeqn} x'' = g_\lambda^c(x') \end{equation} with \begin{equation} g_\lambda^c(z):= \left(\frac{-cz}{\sqrt{1+z^2}}-\Phi\left(\frac{-cz}{\sqrt{1+z^2}}\right)+\lambda\right)(1+z^2)^{3/2}.
\end{equation} As before, $g_\lambda^c$ is Lipschitz and so solutions of \eqref{rotatedeqn} are unique, establishing the claim. To complete the proof, we prove that $x^*_F>x^*_B$, which guarantees that the graphs of $y_B(x)$ and $\hat{y}_F(x)$ cannot smoothly meet at $-x^*_B$. Due to the monotonicity of $V-\Phi(V)$ we have that for any $w$, $f^{c}_\lambda(w)>f^{-c}_\lambda(w)$. Thus $w'_B> w'_F$ for any fixed $w$. Since $w_B(0)=w_F(0)$, we deduce that $w_B(x)> w_F(x)$ for all $x > 0$. It follows that $x^*_F\geq x^*_B$. Let $x_2\in(0, x^*_B)$ and observe by continuity of $w_B$ that there exists $x_1\in (0,x_2)$ such that $w_B(x_1)=w_F(x_2)$. Consider the solution $\tilde{w}$ of \begin{equation} \tilde{w}' = f^c_\lambda(\tilde{w})\;\;\; \tilde{w}(x_2)=w_F(x_2). \end{equation} Note that $\tilde{w}(x) = w_B(x-(x_2-x_1))$ and so the blow-up of $\tilde{w}(x)$ occurs at $x_B^*+x_2-x_1$. Since $\tilde{w}(x)\geq w_F(x)$ for all $x\in (x_2,x_B^*+x_2-x_1)$ it follows that $x^*_F\geq x_B^*+x_2-x_1>x_B^*$. \end{proof} \begin{figure}[H] \centering \includegraphics[width=.5\textwidth]{nonexistencepic.pdf} \caption{A closed curve cannot be a traveling wave solution} \end{figure} \section{Numerical simulation; comparison with volume preserving curvature motion} \label{sec:numerics} The preceding Section \ref{sec:existence} proves short time existence of curves propagating via \eqref{interface0}, and Section \ref{sec:nonexistence} shows the nonexistence of traveling wave solutions. In this section we will numerically solve \eqref{interface0} and for that purpose we will introduce a new splitting algorithm. Using this algorithm, we will be able to study the long time behavior of the cell motion by numerical experiments; in particular, we find that both non-linearity and asymmetry of the initial shape will result in a net motion of the cell.
Specifically we numerically solve the equation \eqref{interface0} written so that the dependence on $\beta$ is explicit ($\beta \Phi_0(V)=\Phi(V)$, see Remark \ref{rem:betaequiv}): \begin{equation}\label{linearinterface} V(s,t) = \kappa(s,t) + \beta \Phi_0(V(s,t)) - \frac{1}{|\Gamma(t)|}\int (\kappa(s',t)+\beta\Phi_0(V(s',t)))ds'. \end{equation} We propose an algorithm and use it to compare curves evolving by \eqref{linearinterface} with $0<\beta<\beta_{cr}$, with curves evolving by volume preserving curvature motion ($\beta=0$). For simplicity, we assume that $\Phi_0(V)$ defined via \eqref{phi} has a Gaussian shape $ \Phi_1(V) := e^{-|V|^2} $ (see Figure \ref{phicomparison}), which agrees well with the actual graph of $\Phi_0$. This significantly decreases computational time in numerical simulations. \begin{figure}[h!] \centering \includegraphics[width=.5\textwidth]{phicomparison1} \caption{Graph of $\Phi_0$}\label{phicomparison} \end{figure} \subsection{Algorithm to solve (\ref{linearinterface})} In the case $\beta=0$ (corresponding to volume preserving curvature motion), efficient techniques such as level-set methods \cite{Osh88,Sme03} and diffusion generated motion methods \cite{Mer93,Ruu03} can be used to accurately simulate the evolution of curves by \eqref{linearinterface}. There is no straightforward way to implement these methods when $\beta> 0$ since $V$ enters equation \eqref{linearinterface} implicitly. Moreover, due to the non-local term, a naive discrete approximation of \eqref{linearinterface} leads to a system of non-linear equations. Instead of using a non-linear root solver, we introduce a {\em splitting scheme} which resolves the two main computational difficulties (non-linearity and volume preservation) separately.
In particular, we decouple the system by solving the $N$ local equations \begin{equation}\label{localversion} V_i = \kappa_i +\beta\Phi_1(V_i)-C, \end{equation} where $C$ is a constant representing the volume preservation constraint which must be determined. For $\beta<\beta_{cr}$, \eqref{localversion} can be solved using an iterative method which (experimentally) converges quickly. The volume constraint can be enforced by properly changing the value of $C$. We recall the following standard notation. Let $p_i = (x_i,y_i)$, $i=1,\dots, N$ be a discretization of a curve. Then $h:=1/N$ is the grid spacing and \begin{equation} Dp_i := \frac{p_{i+1}-p_{i-1}}{2h}\text{ and } D^2p_i := \frac{p_{i+1}-2p_i+p_{i-1}}{h^2} \end{equation} are the second-order centered approximations of the first and second derivatives, respectively. Additionally, $(a,b)^\perp = (-b,a)$. We introduce the following algorithm for a numerical solution of \eqref{linearinterface}.\\ \begin{algorithm} To solve \eqref{linearinterface} up to time $T>0$ given the initial shape $\Gamma(0)$. \begin{enumerate} \item[{\em Step 1:}] (Initialization) Given a closed curve $\Gamma(0)$, discretize it by $N$ points $p^0_i=(x^0_i,y^0_i)$. Use the shoelace formula to calculate the area of $\Gamma(0)$: \begin{equation}\label{eqn:shoelace} A^o = \frac{1}{2} \left| \sum_{i=1}^{N-1} x^0_i y^0_{i+1} + x^0_N y^0_1 - \sum_{i=1}^{N-1} x^0_{i+1}y^0_i - x^0_1 y^0_N \right|. \end{equation} Set time $t:=0$, time step $\Delta t>0$, and the auxiliary parameter $C:=0$. \item[{\em Step 2:}] (time evolution) If $t<T$, calculate the curvature at each point, $\kappa_i$, using the formula \begin{equation} \kappa_i = \frac{\operatorname{det}(Dp^t_i,D^2 p^t_i)}{\|Dp^t_i\|^3}, \end{equation} where $\|\cdot\|$ is the standard Euclidean norm.
Use an iterative method to solve \begin{equation}\label{discreteV} V^t_i = \kappa_i+\beta \Phi_1(V^t_i)-C \end{equation} to within a fixed tolerance $\varepsilon>0$. Define the temporary curve \begin{displaymath} p^{temp}_i := p^t_i+V^t_i \nu_i\Delta t, \end{displaymath} where $\nu_i=(D p_i^t)^\perp/\|Dp_i^t\|$ is the inward-pointing normal vector. Calculate the area of the temporary curve $A^{temp}$ using the shoelace formula \eqref{eqn:shoelace} and compute the discrepancy \begin{displaymath} \Delta A:=(A^{temp}-A^{o})\cdot (A^o)^{-1}. \end{displaymath} If $|\Delta A|$ is larger than a fixed tolerance $\varepsilon$, adjust $C \mapsto C +\Delta A$ and return to solve \eqref{discreteV} with the updated $C$. Otherwise define $p_i^{t+\Delta t} := p_i^{temp}$ and \begin{displaymath} \Gamma(t+\Delta t):=\{p_i^{t+\Delta t}\}. \end{displaymath} Set $t:=t+\Delta t$; if $t<T$, iterate Step 2; else, stop. \end{enumerate} \end{algorithm} In practice, we additionally reparametrize the curve by arc length after a fixed number of time steps in order to prevent regions with a high density of points $p_i$, which could lead to numerical instability and blow-up. \begin{remark}\label{iterativealg} In Step 2, for $\beta\leq 1$, the right hand side of \eqref{discreteV} is contractive and thus we may guarantee convergence of the iterative solver to the solution of \eqref{discreteV}. \end{remark} We implement the above algorithm in C++ and visualize the data using Scilab. We choose the time step $\Delta t$ and spatial discretization step $\displaystyle h = \frac{1}{N}$ so that \begin{equation*} \frac{\Delta t}{h^2} \leq \frac{1}{2}, \end{equation*} in order to ensure convergence. Further, we take an error tolerance \begin{equation*} \varepsilon = .0001 \end{equation*} in Step 2 for both the iteration in \eqref{discreteV} and the iteration in $C$.
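The building blocks of Step 2 (the shoelace area, the discrete curvature $\kappa_i$, and the fixed-point iteration for \eqref{discreteV}) can be summarized in a few lines. The following Python fragment is only an illustrative sketch under the Gaussian choice $\Phi_1(V)=e^{-|V|^2}$; the helper names are ours, and it is not the C++ implementation used for the experiments.

```python
import math

def shoelace_area(pts):
    # Shoelace formula for the area of a closed polygon (Step 1).
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def curvature(pts, h):
    # kappa_i = det(Dp_i, D^2p_i) / ||Dp_i||^3 with the
    # second-order centered differences Dp_i, D^2p_i.
    n = len(pts)
    kappa = []
    for i in range(n):
        xm, ym = pts[i - 1]          # p_{i-1} (wraps around at i = 0)
        x0, y0 = pts[i]
        xp, yp = pts[(i + 1) % n]    # p_{i+1}
        dx, dy = (xp - xm) / (2 * h), (yp - ym) / (2 * h)
        ddx, ddy = (xp - 2 * x0 + xm) / h ** 2, (yp - 2 * y0 + ym) / h ** 2
        kappa.append((dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5)
    return kappa

def solve_V(kappa, beta, C, eps=1e-8, max_iter=500):
    # Fixed-point iteration for V_i = kappa_i + beta*Phi1(V_i) - C
    # with Phi1(V) = exp(-V^2); for beta <= 1 the map is contractive
    # (cf. Remark on the iterative solver above).
    V = list(kappa)
    for _ in range(max_iter):
        V_new = [k + beta * math.exp(-v * v) - C for k, v in zip(kappa, V)]
        if max(abs(a - b) for a, b in zip(V_new, V)) < eps:
            return V_new
        V = V_new
    return V
```

For instance, on a circle of radius $R$ discretized by $N$ points, \texttt{curvature} returns values close to $1/R$, and for $\beta=0$ the iteration reduces to $V_i=\kappa_i-C$ after a single step.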
\subsection{Convergence of numerical algorithm} To validate results, we check convergence of the numerical scheme in the following way: taking a fixed initial curve $\Gamma(0)$, we discretize it with varying numbers of points, $N=2^m$ for $m=5,\dots,8$, and fix a final time sufficiently large so that $\Gamma(k\Delta t)$ reaches a steady-state circle. Since there is no analytic solution to \eqref{interface0}, there is no absolute measure of the error. Rather, we define the error between successive approximations $N$ and $2N$. \newline To this end, we calculate the center location, $C_N=(C_N^1,C_N^2)$, of each steady-state circle as the arithmetic mean of the data points. Define the error between circles as \begin{equation*} err_N = \|C_N - C_{2N}\|_{\ell^2}. \end{equation*} Then, the convergence rate can be expressed as \begin{equation*} \rho := \lim_{N\to \infty} \rho_N = \lim_{N\to \infty} \log_2 \frac{err_N}{err_{2N}}. \end{equation*} We record our results in Table \ref{tab:1}, as well as in Figure \ref{circleconvzoom}. Note that $\rho_N\approx 1$; that is, our numerical method is first-order. \begin{table}[h] \centering \begin{tabular}{| c | l | c|} \hline $N$ & $err_N$ & $\rho_N$ \\ \hline 32 & .2381754 & .9141615 \\ \hline 64 & .1263883 & .9776830 \\ \hline 128 & .0641793 & 1.0019052 \\ \hline 256 & .0320473 & - \\ \hline \end{tabular} \vspace*{3mm} \caption{Table for convergence of numerical simulations}\label{tab:1} \end{table} \begin{figure}[h!] \centering \includegraphics[width = .7\linewidth]{circleconvzoom} \caption{Convergence of numerical simulation for decreasing mesh size $h=1/N$ (zoomed in)}\label{circleconvzoom} \end{figure} \subsection{Numerical experiments for the long time behavior of cell motion} We next present two numerical observations for the subcritical case $\beta<\beta_{cr}$.
\\ \noindent{\bf 1.} {\em If a curve globally exists in time, then it tends to a circle.} That is, in the subcritical $\beta$ regime, curvature motion dominates non-linearity due to $\Phi(V)$. This is natural, since for small $\beta$ the equation \eqref{linearinterface} can be viewed as a perturbation of volume preserving curvature motion, and it has been proved (under certain hypotheses) that curves evolving via volume preserving curvature motion converge to circles \cite{Esc98}. \newline In contrast, the second observation distinguishes the evolution of \eqref{linearinterface} from volume preserving curvature motion.\\ \noindent{\bf 2. } {\em There exist curves whose centers of mass exhibit net motion on a finite time interval (transient motion).}\\ The key issue in cell motility is (persistent) net motion of the cell. Although Theorem \ref{thm2} implies that no non-trivial traveling wave solution of \eqref{interface0} exists, observation 2 implies that curves propagating via \eqref{linearinterface} may still experience a transient net motion compared to the evolution of curves propagating via volume preserving curvature motion. We investigate this transient motion quantitatively with respect to the non-linear parameter $\beta$ and the initial geometry of the curve. \subsubsection{Quantitative investigation of observation 2} Given an initial curve $\Gamma(0)$ discretized into $N$ points $p_i^0$ and given $0\leq \beta<\beta_{cr}$, we let $\Gamma_\beta(k\Delta t):=\{p_i^k\}$ be the curve at time $k\Delta t$, propagating by \eqref{linearinterface}. In particular, $\Gamma_0(k\Delta t)$ corresponds to the evolution of the curve by volume preserving curvature motion. Our prototypical initial curve $\Gamma(0)$ is parametrized by four ellipses and is sketched in Figure \ref{fig:initialcurves}. \begin{figure}[h!] 
\centering \includegraphics[width=.4\linewidth]{initialcurve.pdf} \caption{Initial curve}\label{fig:initialcurves} \end{figure} \begin{align} \label{paramstart}(4\cos(\theta),3\sin(\theta)) &\text{ for } -\frac{\pi}{2}\leq \theta\leq \frac{\pi}{2} \\ (2\cos(\theta),\frac{3}{4}\sin(\theta)+\frac{9}{4}) &\text{ for } \frac{\pi}{2}\leq \theta\leq \frac{3\pi}{2} \\ \label{zetaeqn} (\zeta \cos(\theta),\frac{3}{2} \sin(\theta)) &\text{ for } -\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}\\ \label{paramend} (2\cos(\theta), \frac{3}{4}\sin(\theta)-\frac{9}{4}) &\text{ for } \frac{\pi}{2}\leq \theta \leq \frac{3\pi}{2}. \end{align} The parameter $\zeta$ determines the depth of the non-convex well and is used as our measure of initial asymmetry of the curve. \begin{figure}[h!] \centering \includegraphics[scale=.3]{final.pdf} \caption{Overall drift of curve with $\beta=1$ (blue, solid) compared to $\beta=0$ (green, dashed), starting from initial data \eqref{paramstart}-\eqref{paramend} with $\zeta=2$}\label{shift} \end{figure} To study the effect of $\beta$ and asymmetry on the overall motion of the curve, we measure the total transient motion by the following notion of the {\em drift of $\Gamma_\beta$}. First fix $\Gamma(0)$. Note that observation 1 implies that for sufficiently large time $k\Delta t$, $\Gamma_\beta(k\Delta t)$ and $\Gamma_0(k\Delta t)$ will both be steady state circles. Define the {\em drift of $\Gamma_\beta$} to be the distance between the centers of these two circles. \begin{remark} Note that this definition is used in order to account for numerical errors which may accumulate over time. Numerical drift of the center of mass of the curve caused by errors/approximations is offset by ``calibrating'' to the $\beta=0$ case. 
\end{remark} We consider the following two numerical tests: \begin{enumerate} \item {\em Dependence of drift on $\beta$:} Starting with the initial profile \eqref{paramstart}-\eqref{paramend} with $\zeta=1$, compute the drift of $\Gamma_{\beta}$ for various values of $\beta$. \item {\em Dependence of drift on initial asymmetry:} Starting with the initial curve \eqref{paramstart}-\eqref{paramend} with $\beta=1$, compute the drift of $\Gamma_\beta$ for various values of $\zeta$. \end{enumerate} Taking $T=20$ is sufficient for simulations to reach circular steady state. We observe that drift increases with respect to $\zeta$ and increases linearly with respect to $\beta$. These data are recorded in Figure \ref{betatest}. \begin{figure}[h!] \centering \includegraphics[width = .45\textwidth]{betatest} \includegraphics[width = .45\textwidth]{asymtest} \caption{Dependence of drift on the parameters $\beta$ and $\zeta$}\label{betatest} \end{figure} \medskip \bibliographystyle{siam}
\section{Introduction} In this paper we are concerned with wave equations associated to some Liouville-type problems on compact surfaces arising in mathematical physics: the sinh-Gordon equation \eqref{w-sg} and some general Toda systems \eqref{w-toda}. The first wave equation we consider is \begin{equation} \label{w-sg} \partial_t^2u-\Delta_gu=\rho_1\left(\frac{e^u}{\int_{M}e^u}-\frac{1}{|M|}\right) -\rho_2\left(\frac{e^{-u}}{\int_{M}e^{-u}}-\frac{1}{|M|}\right) \quad \mbox{on } M, \end{equation} with $u:\R^+\times M\to \R$, where $(M,g)$ is a compact Riemann surface with total area $|M|$ and metric $g$, $\Delta_g$ is the Laplace-Beltrami operator and $\rho_1, \rho_2$ are two real parameters. Nonlinear evolution equations have been extensively studied in the literature due to their many applications in physics, biology, chemistry, geometry and so on. In particular, the sinh-Gordon model \eqref{w-sg} has been applied to a wide class of mathematical physics problems such as quantum field theories, non commutative field theories, fluid dynamics, kink dynamics, solid state physics, nonlinear optics and we refer to \cite{a, cm, ch, cho, mmr, nat, w, zbp} and the references therein. \medskip The stationary equation related to \eqref{w-sg} is the following sinh-Gordon equation: \begin{equation} \label{sg} -\Delta_gu=\rho_1\left(\frac{e^u}{\int_{M}e^u}-\frac{1}{|M|}\right) -\rho_2\left(\frac{e^{-u}}{\int_{M}e^{-u}}-\frac{1}{|M|}\right). \end{equation} In mathematical physics the latter equation describes the mean field equation of the equilibrium turbulence with arbitrarily signed vortices, see \cite{jm, pl}. For more discussions concerning the physical background we refer for example to \cite{c,l,mp,n,os} and the references therein. On the other hand, the case $\rho_{1}=\rho_{2}$ has a close relationship with constant mean curvature surfaces, see \cite{w1,w2}.
Observe that for $\rho_{2}=0$ equation \eqref{sg} reduces to the following well-known mean field equation: \begin{equation} \label{mf} -\Delta_g u=\rho\left(\frac{e^u}{\int_{M}e^u}-\frac{1}{|M|}\right), \end{equation} which has been extensively studied in the literature since it is related to the prescribed Gaussian curvature problem \cite{ba, sch} and Euler flows \cite{cag, kies}. There are by now many results concerning \eqref{mf} and we refer to the survey \cite{tar}. On the other hand, the wave equation associated to \eqref{mf} for $M=\S^2$, that is \begin{equation} \label{w-mf} \partial_t^2u-\Delta_g u=\rho\left(\frac{e^u}{\int_{\S^2}e^u}-\frac{1}{4\pi}\right) \quad \mbox{on } \S^2, \end{equation} was recently considered in \cite{cy}, where the authors obtained some existence results and a first blow up criterion. Let us focus for a moment on the blow up analysis. They showed that in the critical case $\rho=8\pi$ for the finite time blow up solutions to \eqref{w-mf} there exist a sequence $t_k\to T_0^{-}<+\infty$ and a point $x_1\in \S^2$ such that for any $\varepsilon>0,$ \begin{equation} \label{mf-cr} \lim_{k\to+\infty}\frac{\int_{B(x_{1},\varepsilon)}e^{u(t_k,\cdot)}}{\int_{\S^2}e^{u(t_k,\cdot)}}\geq 1-\varepsilon, \end{equation} i.e. the measure $e^{u(t_k)}$ (after normalization) concentrates around one point on $\S^2$ (i.e. it resembles a one bubble). On the other hand, for the general super-critical case $\rho>8\pi$ the blow up analysis is not carried out and we are missing the blow up criteria. One of our aims is to substantially refine the latter analysis and to give general blow up criteria, see Corollary \ref{cr1.1}. \medskip Let us return now to the sinh-Gordon equation \eqref{sg} and its associated wave equation \eqref{w-sg}.
In the last decades the analysis concerning \eqref{mf} was generalized to treat the sinh-Gordon equation \eqref{sg} and we refer to \cite{ajy, jwy, jwy2, jwyz, os} for blow up analysis, to \cite{gjm} for uniqueness aspects and to \cite{bjmr, j1, j2, j3} for what concerns existence results. On the other hand, for what concerns the wave equation associated to \eqref{sg}, i.e. \eqref{w-sg}, there are few results mainly focusing on traveling wave solutions, see for example \cite{a, fll, nat, w, zbp}. One of our aims is to develop the analysis for \eqref{w-sg} in the spirit of \cite{cy} and to refine it with some new arguments. More precisely, by exploiting the variational analysis derived for equation \eqref{sg}, see in particular \cite{bjmr}, we will prove global existence in time for \eqref{w-sg} for the sub-critical case and we will give general blow up criteria for the super-critical and critical case. The sub/super-critical case refers to the sharp constant of the associated Moser-Trudinger inequality, as it will be clear in the sequel. \medskip Before stating the results, let us fix some notation. Given $T>0$ and a metric space $X$ we will denote $C([0,T];X)$ by $C_T(X)$. $C^k_T(X)$ and $L^k_T(X)$, $k\geq1$, are defined in an analogous way. When we are taking time derivative for $t\in[0,T]$ we are implicitly taking right (resp. left) derivative at the endpoint $t=0$ (resp. $t=T$). When it will be clear from the context we will simply write $H^1, L^2$ to denote $H^1(M), L^2(M)$, respectively, and $$ \|u\|_{H^1(M)}^2=\|\n u\|_{L^2(M)}^2+\|u\|_{L^2(M)}^2. $$ \medskip Our first main result is to show that the initial value problem for \eqref{w-sg} is locally well-posed in $H^1\times L^2$. \begin{theorem} \label{th.local-sg} Let $\rho_1,\rho_2\in\mathbb{R}$. Then, for any $(u_0,u_1)\in H^1(M)\times L^2(M)$ such that $\int_{M}u_1=0$, there exist $T=T(\rho_1, \rho_2, \|u_0\|_{H^1}, \|u_1\|_{L^2})>0$ and a unique, stable solution, i.e.
depending continuously on $(u_0,u_1)$, $$ u:[0,T]\times M\to\mathbb{R}, \quad u\in C_T(H^1)\cap C_T^1(L^2), $$ of \eqref{w-sg} with initial data $$ \left\{ \begin{array}{l} u(0,\cdot)=u_0, \\ \partial_t u(0,\cdot)=u_1. \end{array} \right. $$ Furthermore \begin{align} \label{u-const} \int_{M}u(t,\cdot)=\int_{M}u_0 \quad \mbox{for all } t\in[0,T]. \end{align} \end{theorem} \medskip \begin{remark} The assumption that the initial datum $u_1$ has zero average guarantees that the average \eqref{u-const} of the solution $u(t,\cdot)$ to \eqref{w-sg} is preserved in time. A consequence of the latter property is that the energy $E(u(t,\cdot))$ given in \eqref{energy-sg} is preserved in time as well, which will then be crucially used in the sequel, see the discussion later on. \end{remark} The proof is based on a fixed point argument and the standard Moser-Trudinger inequality \eqref{mt}, see section \ref{sec:proof}. Once the local existence is established we address the existence of a global solution to \eqref{w-sg}. Indeed, by exploiting an energy argument jointly with the Moser-Trudinger inequality associated to \eqref{sg}, see \eqref{mt-sg}, we deduce our second main result. \begin{theorem} \label{th.global-sg} Suppose $\rho_1,\rho_2<8\pi.$ Then, for any $(u_0,u_1)\in H^1(M)\times L^2(M)$ such that $\int_{M}u_1=0$, there exists a unique global solution $u\in C(\R^+;H^1)\cap C^1(\R^+;L^2)$ of \eqref{w-sg} with initial data $(u_0,u_1)$. \end{theorem} The latter case $\rho_1,\rho_2<8\pi$ is referred to as the sub-critical case in relation to the sharp constant $8\pi$ in the Moser-Trudinger inequality \eqref{mt-sg}. The critical and super-critical cases, in which $\rho_i\geq8\pi$ for some $i$, are subtler since the solutions to \eqref{sg} might blow up.
However, by exploiting the recent analysis concerning \eqref{sg}, see in particular \cite{bjmr}, based on improved versions of the Moser-Trudinger inequality, see Proposition \ref{prop-sg-mt}, we are able to give quite general blow up criteria for \eqref{w-sg}. Our third main result is the following. \begin{theorem} \label{th.con-sg} Suppose $\rho_i\geq8\pi$ for some $i$. Let $(u_0,u_1)\in H^1(M)\times L^2(M)$ be such that $\int_{M}u_1=0$ and let $u$ be the solution of \eqref{w-sg} obtained in Theorem \ref{th.local-sg}. Suppose that $u$ exists in $[0,T_0)$ for some $T_0<+\infty$ and it cannot be extended beyond $T_0$. Then, there exists a sequence $t_k\to T_0^{-}$ such that \begin{align*} \lim_{k\to+\infty}\|\nabla u(t_k,\cdot)\|_{L^2}=+\infty, \quad \lim_{k\to+\infty}\max\left\{\int_{M}e^{u(t_k,\cdot)}, \int_{M}e^{-u(t_k,\cdot)}\right\} =+\infty. \end{align*} Furthermore, if $\rho_1\in[8m_1\pi,8(m_1+1)\pi)$ and $\rho_2\in[8m_2\pi,8(m_2+1)\pi)$ for some $m_1,m_2\in\mathbb{N}$, then, there exist points $\{x_1,\dots,x_{m_1}\}\subset M$ such that for any $\varepsilon>0,$ either $$ \lim_{k\to+\infty}\frac{\int_{\bigcup_{l=1}^{m_1}B(x_{l},\varepsilon)}e^{u(t_k,\cdot)}}{\int_{M}e^{u(t_k,\cdot)}}\geq 1-\varepsilon, $$ or there exist points $\{y_1,\dots,y_{m_2}\}\subset M$ such that for any $\varepsilon>0,$ $$ \lim_{k\to+\infty}\frac{\int_{\bigcup_{l=1}^{m_2}B(y_{l},\varepsilon)}e^{-u(t_k,\cdot)}}{\int_{M}e^{-u(t_k,\cdot)}}\geq 1-\varepsilon. $$ \end{theorem} The latter result shows that, once the two parameters $\rho_1, \rho_2$ are fixed in a critical or super-critical regime, the finite time blow up of the solutions to \eqref{w-sg} yields the following alternative: either the measure $e^{u(t_k)}$ (after normalization) concentrates around (at most) $m_1$ points on $M$ (i.e. it resembles an $m_1$-bubble) or $e^{-u(t_k)}$ concentrates around $m_2$ points on $M$.
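The concentration phenomenon described above can be visualised numerically on the standard planar Liouville bubble $e^{U_\lambda(x)}=8\lambda^2/(1+\lambda^2|x|^2)^2$, whose total mass over $\R^2$ equals $8\pi$ for every $\lambda$. The following minimal sketch in pure Python (the bubble profile is the classical one; the helper name \texttt{bubble\_mass} is ours and the example is only an illustration, not part of the proofs) shows that, as $\lambda\to\infty$, the normalised measure places almost all of its mass in any fixed ball around the blow up point:

```python
import math

def bubble_mass(lam, eps, n=20000):
    """Integrate e^{U_lam} = 8 lam^2 / (1 + lam^2 r^2)^2 over the ball
    B(0, eps) in R^2, using a radial midpoint rule."""
    h = eps / n
    total = 0.0
    for k in range(n):
        r = (k + 0.5) * h
        total += 8 * lam**2 / (1 + lam**2 * r**2) ** 2 * 2 * math.pi * r * h
    return total

# total mass of the bubble over R^2 is 8*pi, independently of lam
full = 8 * math.pi

for lam in (1.0, 10.0, 100.0):
    frac = bubble_mass(lam, 0.1) / full
    print(f"lambda = {lam:6.1f}:  mass fraction in B(0, 0.1) = {frac:.4f}")
```

The printed fraction equals $\lambda^2\varepsilon^2/(1+\lambda^2\varepsilon^2)$ and tends to $1$ as $\lambda\to\infty$: the normalised measure $e^{U_\lambda}/\int e^{U_\lambda}$ concentrates at a single point, which is exactly the behaviour detected by the blow up criteria.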
We point out that this is new for the mean field equation \eqref{w-mf} as well and generalizes the previous blow up criterion \eqref{mf-cr} obtained in \cite{cy} for $\rho=8\pi$. More precisely, the general blow up criteria for the super-critical mean field equation are the following. \begin{corollary} \label{cr1.1} Suppose $\rho\in[8m\pi,8(m+1)\pi)$ for some $m\in\mathbb{N}$, $m\geq1$. Let $(u_0,u_1)\in H^1(M)\times L^2(M)$ be such that $\int_{M}u_1=0$, and let $u$ be a solution of \eqref{w-mf}, where $\S^2$ is replaced by a compact surface $M$. Suppose that $u$ exists in $[0,T_0)$ for some $T_0<+\infty$ and it cannot be extended beyond $T_0$. Then, there exist a sequence $t_k\to T_0^{-}$ and $m$ points $\{p_1,\dots,p_m\}\subset M$ such that for any $\varepsilon>0$, $$\lim_{k\to\infty}\frac{\int_{\bigcup_{l=1}^{m}B(p_{l},\varepsilon)}e^{u(t_k,\cdot)}}{\int_{M}e^{u(t_k,\cdot)}} \geq1-\varepsilon.$$ \end{corollary} Finally, it is worth pointing out some possible generalizations of the results so far. \begin{remark} We may consider the following more general weighted problem $$ \partial_t^2u-\Delta_gu=\rho_1\left(\frac{h_1e^u}{\int_{M}h_1e^u}-\frac{1}{|M|}\right) -\rho_2\left(\frac{h_2e^{-u}}{\int_{M}h_2e^{-u}}-\frac{1}{|M|}\right), $$ where $h_i=h_i(x)$ are two smooth functions such that $\frac 1C \leq h_i \leq C$ on $M$, $i=1,2$, for some $C>0$. It is easy to check that Theorems \ref{th.local-sg}, \ref{th.global-sg} and \ref{th.con-sg} extend to this case as well. The same argument applies also to the Toda system \eqref{w-toda}. On the other hand, motivated by several applications in mathematical physics \cite{a, ons, ss} we may consider the following asymmetric sinh-Gordon wave equation $$ \partial_t^2u-\Delta_gu=\rho_1\left(\frac{e^u}{\int_{M}e^u}-\frac{1}{|M|}\right) -\rho_2\left(\frac{e^{-au}}{\int_{M}e^{-au}}-\frac{1}{|M|}\right), $$ with $a>0$.
For $a=2$, which corresponds to the Tzitz\'eica equation, we can exploit the detailed analysis in \cite{jy} to derive Theorems \ref{th.local-sg}, \ref{th.global-sg} and \ref{th.con-sg} for this case as well (with suitable modifications according to the associated Moser-Trudinger inequality). On the other hand, for general $a>0$ the complete analysis is still missing and we can rely for example on \cite{j4} to get at least the existence results of Theorems \ref{th.local-sg}, \ref{th.global-sg}. \end{remark} \ We next consider the wave equation associated to some general Toda system, \begin{equation} \label{w-toda} \p_t^2u_i-\Delta_gu_i=\sum_{j=1}^na_{ij}\,\rho_j\left(\frac{e^{u_j}}{\int_{M}e^{u_j}}-\frac{1}{|M|}\right) \quad \mbox{on } M, \quad i=1,\dots,n, \end{equation} where $\rho_i$, $i=1,\dots,n$ are real parameters and $A_n=(a_{ij})_{n\times n}$ is the following rank $n$ Cartan matrix for $SU(n+1)$: \begin{equation} \label{matr} {A}_n=\left(\begin{matrix} 2 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & \cdots & 0 \\ \vdots & \vdots &\vdots & \ddots & \vdots\\ 0 & \cdots & -1 & 2 & -1 \\ 0 & \cdots & 0 & -1 & 2 \end{matrix}\right). \end{equation} The stationary equation related to \eqref{w-toda} is the following Toda system: \begin{equation} \label{toda} -\Delta_gu_i=\sum_{j=1}^na_{ij}\,\rho_j\left(\frac{e^{u_j}}{\int_{M}e^{u_j}}-\frac{1}{|M|}\right), \quad i=1,\dots,n, \end{equation} which has been extensively studied since it has several applications both in mathematical physics and in geometry, for example non-abelian Chern-Simons theory \cite{d1, tar2, y} and holomorphic curves in $\mathbb{C}\mathbb{P}^n$ \cite{bw, cal, lwy-c}. There are by now many results concerning Toda-type systems, in particular regarding existence of solutions \cite{bjmr, jkm, mr}, blow up analysis \cite{jlw, lwy} and classification issues \cite{lwy-c}.
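The matrix $A_n$ in \eqref{matr} is symmetric and positive definite; by Sylvester's criterion this is equivalent to the positivity of its leading principal minors, which for this tridiagonal matrix satisfy the recurrence $d_k=2d_{k-1}-d_{k-2}$ and hence equal $d_k=k+1$. A quick sanity check in pure Python (the helper names \texttt{cartan} and \texttt{det} are ours; this is only an illustration, not part of the paper):

```python
from fractions import Fraction

def cartan(n):
    """The SU(n+1) Cartan matrix A_n: 2 on the diagonal, -1 next to it."""
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(n)] for i in range(n)]

def det(M):
    """Exact determinant via Gaussian elimination over the rationals.
    No pivoting is needed for A_n: its pivots are (k+2)/(k+1) > 0."""
    A = [[Fraction(x) for x in row] for row in M]
    d = Fraction(1)
    for k in range(len(A)):
        if A[k][k] == 0:
            return Fraction(0)
        d *= A[k][k]
        for i in range(k + 1, len(A)):
            f = A[i][k] / A[k][k]
            for j in range(k, len(A)):
                A[i][j] -= f * A[k][j]
    return d

n = 6
A = cartan(n)
minors = [int(det([row[:k] for row in A[:k]])) for k in range(1, n + 1)]
print(minors)  # [2, 3, 4, 5, 6, 7]: d_k = k + 1 > 0, so A_6 is positive definite
```

In particular $\det A_n=n+1\neq0$, consistent with the statement that $A_n$ has full rank $n$.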
\medskip On the other hand, only partial results concerning the wave equation associated to the Toda system \eqref{w-toda} were obtained in \cite{cy}, which we recall here. First, the local well-posedness of \eqref{w-toda}, analogous to Theorem \ref{th.local-sg}, is derived for a general $n\times n$ symmetric matrix $A_n$. Second, by assuming $A_n$ to be a positive definite symmetric matrix with non-negative entries the authors were able to deduce a global existence result by exploiting a Moser-Trudinger-type inequality suitable for this setting, see \cite{sw}. However, no results are available either for mixed positive and negative entries of the matrix $A_n$ (which are relevant in mathematical physics and in geometry, see for example the above Toda system) or for blow up criteria. Our aim is to complete the latter analysis. \medskip Before stating the results, let us fix some notation for the system setting. We denote the product space by $(H^1(M))^n=H^1(M)\times\dots\times H^1(M)$. To simplify the notation, for an element $(u_1,\dots,u_n)\in (H^1(M))^n$ we will rather write $\u\in H^1(M)$, with $\u: M\to\R^n$, $\u=(u_1,\dots,u_n)$. With a little abuse of notation we will write $\int_{M} \u$ when we want to consider the integral of each component $u_i$, $i=1,\dots,n$. \medskip Since the local well-posedness of \eqref{w-toda} is already known from \cite{cy}, our first result concerns the global existence in time. \begin{theorem} \label{th.global-toda} Suppose $\rho_i<4\pi$ for all $i=1,\dots,n.$ Then, for any $(\u_0,\u_1)\in H^1(M)\times L^2(M)$ such that $\int_{M}\u_1=0$, there exists a unique global solution $$ \u:\R^+\times M\to\mathbb{R}^n, \quad \u\in C(\R^+;H^1)\cap C^1(\R^+;L^2), $$ of \eqref{w-toda} with initial data $$ \left\{ \begin{array}{l} \u(0,\cdot)=\u_0, \\ \partial_t \u(0,\cdot)=\u_1. \end{array} \right.
$$ \end{theorem} The latter result follows by an energy argument and a Moser-Trudinger-type inequality for systems as obtained in \cite{jw}. On the other hand, when $\rho_i\geq4\pi$ for some $i$ the Moser-Trudinger inequality does not give any control and the solutions of \eqref{toda} might blow up. In the latter case, by exploiting improved versions of the Moser-Trudinger inequality for the system recently derived in \cite{b} we are able to give the following general blow up criteria. \begin{theorem} \label{th.blowup-toda} Suppose $\rho_i\geq4\pi$ for some $i$. Let $(\u_0,\u_1)\in H^1(M)\times L^2(M)$ be such that $\int_{M}\u_1=0$, and let $\u$ be the solution of \eqref{w-toda}. Suppose that $\u$ exists in $[0,T_0)$ for some $T_0<+\infty$ and it cannot be extended beyond $T_0$. Then, there exists a sequence $t_k\to T_0^{-}$ such that \begin{align*} \lim_{k\to+\infty} \max_j \|\nabla u_j(t_k,\cdot)\|_{L^2}=+\infty, \quad \lim_{k\to+\infty}\max_j\int_{M}e^{u_j(t_k,\cdot)}=+\infty. \end{align*} Furthermore, if $\rho_i\in[4m_i\pi,4(m_i+1)\pi)$ for some $m_i\in\mathbb{N}$, $i=1,\dots,n$, then there exist at least one index $j\in\{1,\dots,n\}$ and $m_j$ points $\{x_{j,1},\dots,x_{j,m_j}\}\subset M$ such that for any $\varepsilon>0$, $$\lim_{k\to\infty}\frac{\int_{\bigcup_{l=1}^{m_j}B(x_{j,l},\varepsilon)}e^{u_j(t_k,\cdot)}}{\int_{M}e^{u_j(t_k,\cdot)}} \geq1-\varepsilon.$$ \end{theorem} Therefore, for the finite time blow up solutions to \eqref{w-toda} there exists at least one component $u_j$ such that the measure $e^{u_j(t_k)}$ (after normalization) concentrates around (at most) $m_j$ points on $M$. One can compare this result with the one for the sinh-Gordon equation \eqref{w-sg} or the mean field equation \eqref{w-mf}, see Theorem \ref{th.con-sg} and Corollary \ref{cr1.1}, respectively. \medskip Finally, we have the following possible generalization of the system \eqref{w-toda}.
\begin{remark} We point out that since the improved versions of the Moser-Trudinger inequality in \cite{b} hold for general symmetric, positive definite matrices $A_n=(a_{ij})_{n\times n}$ with non-positive entries outside the diagonal, we can derive similar existence results and blow up criteria as in Theorems \ref{th.global-toda}, \ref{th.blowup-toda}, respectively, for this general class of matrices as well. In particular, after some simple transformations (see for example the introduction in \cite{ajy}) we may treat the following Cartan matrices: $$ B_n=\left(\begin{matrix} 2 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & \cdots & 0 \\ \vdots & \vdots &\vdots & \ddots & \vdots\\ 0 & \cdots & -1 & 2 & -2 \\ 0 & \cdots & 0 & -1 & 2 \end{matrix}\right), \quad {C}_n=\left(\begin{matrix} 2 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & \cdots & 0 \\ \vdots & \vdots &\vdots & \ddots & \vdots\\ 0 & \cdots & -1 & 2 & -1 \\ 0 & \cdots & 0 & -2 & 2 \end{matrix}\right), \vspace{0.2cm} $$ $$ {G}_2=\left(\begin{matrix} 2&-1\\ -3&2 \end{matrix}\right), $$ which are relevant in mathematical physics, see for example \cite{d1}. To simplify the presentation we give the details just for the matrix $A_n$ in \eqref{matr}. \end{remark} \medskip The paper is organized as follows. In section \ref{sec:prelim} we collect some useful results and in section \ref{sec:proof} we prove the main results of this paper: local well-posedness, global existence and blow up criteria. \medskip \section{Preliminaries} \label{sec:prelim} In this section we collect some useful results concerning the stationary sinh-Gordon equation \eqref{sg}, Toda system \eqref{toda} and the solutions of wave equations which will be used in the proof of the main results in the next section. \medskip In the sequel the symbol $\ov u$ will denote the average of $u$, that is $$ \ov u= \fint_{M} u=\frac{1}{|M|}\int_{M} u. 
$$ Let us start by recalling the well-known Moser-Trudinger inequality \begin{equation} \label{mt} 8\pi \log\int_{M} e^{u-\ov u} \leq \frac 12 \int_{M} |\n u|^2 + C_{(M,g)}\,, \quad u\in H^1(M). \end{equation} For the sinh-Gordon equation \eqref{sg}, a similar sharp inequality was obtained in \cite{os}, \begin{equation} \label{mt-sg} 8\pi \left(\log\int_{M} e^{u-\ov u} + \log\int_{M} e^{-u+\ov u}\right) \leq \frac 12 \int_{M} |\n u|^2 + C_{(M,g)}\,,\quad u\in H^1(M). \end{equation} We now recall some of the main features concerning the variational analysis of the sinh-Gordon equation \eqref{sg} recently derived in \cite{bjmr}, which will be exploited later on. First of all, letting $\rho_1, \rho_2\in\R$, the associated Euler-Lagrange functional for equation \eqref{sg} is given by $J_{\rho_1,\rho_2}:H^1(M)\to \R$, \begin{equation} \label{functional-sg} J_{\rho_1,\rho_2}(u)=\frac12\int_{M}|\nabla u|^2-\rho_1\log\int_{M}e^{u-\overline{u}} -\rho_2\log\int_{M}e^{-u+\overline{u}}. \end{equation} Observe that if $\rho_1, \rho_2\leq8\pi$, by \eqref{mt-sg} we readily have $$ J_{\rho_1,\rho_2}(u)\geq -C, $$ for any $u\in H^1(M)$, where $C>0$ is a constant independent of $u$. On the other hand, as soon as $\rho_i>8\pi$ for some $i=1,2$ the functional $J_{\rho_1,\rho_2}$ is unbounded from below. To treat the latter super-critical case one needs improved versions of the Moser-Trudinger inequality \eqref{mt-sg}, which roughly assert that the more the measures $e^u, e^{-u}$ are spread over the surface, the bigger the constant on the left hand side of \eqref{mt-sg} becomes. More precisely, we have the following result. \begin{proposition} \emph{(\cite{bjmr})} \label{prop-sg-mt} Let $\d, \th>0$, $k,l\in\N$ and $\{\o_{1,i},\o_{2,j}\}_{i\in\{1,\dots,k\},j\in\{1,\dots,l\}}\subset M$ be such that \begin{align*} & d(\o_{1,i},\o_{1,i'})\ge\d,\quad \forall \, i,i'\in\{1,\dots,k\}, \, i\ne i', \\ & d(\o_{2,j},\o_{2,j'})\ge\d,\quad \forall \, j,j' \in\{1,\dots,l\}, \, j\ne j'.
\end{align*} Then, for any $\e>0$ there exists $C=C\left(\e,\d,\th,k,l,M\right)$ such that if $u\in H^1(M)$ satisfies \begin{align*} \int_{\o_{1,i}} e^{u} \ge \th\int_{M} e^{u}, \,\, \forall i\in\{1,\dots,k\}, \qquad \int_{\o_{2,j}} e^{-u} \ge \th\int_{M} e^{-u}, \,\, \forall j\in\{1,\dots,l\}, \end{align*} it follows that $$ 8k\pi\log\int_{M} e^{u-\ov{u}}+8l\pi\log\int_{M} e^{-u+\ov u}\leq \frac{1+\e}{2}\int_{M} |\n u|^2\,dV_g+C. $$ \end{proposition} From the latter result one can deduce that if $J_{\rho_1,\rho_2}(u)$ attains large negative values, then at least one of the two measures $e^u, e^{-u}$ has to concentrate around some points of the surface. \begin{proposition} \emph{(\cite{bjmr})} \label{prop-sg} Suppose $\rho_i\in(8m_i\pi,8(m_i+1)\pi)$ for some $m_i\in\N$, $i=1,2$ ($m_i\geq 1$ for some $i=1,2$). Then, for any $\varepsilon, r>0$ there exists $L=L(\varepsilon,r)\gg1$ such that for any $u\in H^1(M)$ with $J_{\rho_1,\rho_2}(u)\leq-L$, there are either some $m_1$ points $\{x_1,\dots,x_{m_1}\}\subset M$ such that $$ \frac{\int_{\cup_{l=1}^{m_1}B_r(x_l)}e^{u}}{\int_{M}e^u}\geq 1-\varepsilon, $$ or some $m_2$ points $\{y_1,\dots,y_{m_2}\}\subset M$ such that $$ \frac{\int_{\cup_{l=1}^{m_2}B_r(y_l)}e^{-u}}{\int_{M}e^{-u}}\geq 1-\varepsilon. $$ \end{proposition} We next briefly recall some variational aspects of the stationary Toda system \eqref{toda}. Recall the matrix $A_n$ in \eqref{matr} and the notation of $\u$ introduced before Theorem~\ref{th.global-toda} and write $\rho=(\rho_1,\dots,\rho_n)$. The associated functional for the system \eqref{toda} is given by $J_{\rho}:H^1(M)\to \R$, \begin{equation} \label{functional-toda} J_{\rho}(\u)=\frac12\int_{M}\sum_{i,j=1}^n a^{ij}\langle\nabla u_i,\nabla u_j\rangle -\sum_{i=1}^n\rho_i\log\int_{M}e^{u_i-\overline{u}_i}, \end{equation} where $(a^{ij})_{n\times n}$ is the inverse matrix $A_n^{-1}$ of $A_n$.
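For concreteness, the coefficients $a^{ij}$ of $A_n^{-1}$ entering \eqref{functional-toda} are known explicitly: $a^{ij}=\min(i,j)-\frac{ij}{n+1}$, a classical formula for the inverse of the $SU(n+1)$ Cartan matrix. This formula is not stated in the paper; the sketch below, with our own helper name, checks it against an exact Gauss-Jordan inversion:

```python
from fractions import Fraction

def cartan_inverse(n):
    """Invert the SU(n+1) Cartan matrix A_n exactly by Gauss-Jordan
    elimination over the rationals (no pivoting needed: A_n is
    positive definite, so every pivot is positive)."""
    A = [[Fraction(2) if i == j else Fraction(-1) if abs(i - j) == 1 else Fraction(0)
          for j in range(n)] for i in range(n)]
    B = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # identity
    for k in range(n):
        p = A[k][k]
        A[k] = [x / p for x in A[k]]
        B[k] = [x / p for x in B[k]]
        for i in range(n):
            if i != k and A[i][k] != 0:
                f = A[i][k]
                A[i] = [a - f * b for a, b in zip(A[i], A[k])]
                B[i] = [a - f * b for a, b in zip(B[i], B[k])]
    return B

n = 5
inv = cartan_inverse(n)
# closed form a^{ij} = min(i,j) - ij/(n+1), with indices i,j = 1,...,n
closed = [[Fraction(min(i, j)) - Fraction(i * j, n + 1)
           for j in range(1, n + 1)] for i in range(1, n + 1)]
assert inv == closed  # the two computations agree exactly
```

In particular all the entries $a^{ij}$ are positive, so the kinetic part of $J_\rho$ couples the gradients of the components $u_i$ with positive weights.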
A Moser-Trudinger inequality for \eqref{functional-toda} was obtained in \cite{jw}, which asserts that \begin{equation} \label{mt-toda} J_{\rho}(\u)\geq C, \end{equation} for any $\u\in H^1(M)$, where $C$ is a constant independent of $\u$, if and only if $\rho_i\leq 4\pi$ for any $i=1,\dots,n.$ In particular, if $\rho_i>4\pi$ for some $i=1,\dots,n$ the functional $J_\rho$ is unbounded from below. As for the sinh-Gordon equation \eqref{sg} we have improved versions of the Moser-Trudinger inequality \eqref{mt-toda} recently derived in \cite{b} (see also \cite{bjmr}) which yield concentration of the measures $e^{u_j}$ whenever $J_{\rho}(\u)$ is large negative. \begin{proposition} \emph{(\cite{b, bjmr})} \label{prop-toda} Suppose $\rho_i\in(4m_i\pi,4(m_i+1)\pi)$ for some $m_i\in\N,~i=1,\dots,n$ ($m_i\geq1$ for some $i=1,\dots,n$). Then, for any $\varepsilon, r>0$ there exists $L=L(\varepsilon,r)\gg 1$ such that for any $\u\in H^1(M)$ with $J_{\rho}(\u)\leq-L$, there exists at least one index $j\in\{1,\dots,n\}$ and $m_j$ points $\{x_1,\dots,x_{m_j}\}\subset M$ such that $$ \frac{\int_{\cup_{l=1}^{m_j}B_r(x_l)}e^{u_j}}{\int_{M}e^{u_j}}\geq 1-\varepsilon. $$ \end{proposition} \medskip Finally, let us state a standard result concerning the wave equation, that is the Duhamel principle. Let us first recall that every function in $L^2(M)$ can be decomposed as a convergent sum of eigenfunctions of the Laplacian $\Delta_g$ on $M$. Then, one can define the operators $\cos(\sqrt{-\Delta_g})$ and $\frac{\sin(\sqrt{-\Delta_g})}{\sqrt{-\Delta_g}}$ acting on $L^2(M)$ using the spectral theory. Consider now the initial value problem \begin{equation} \label{wave-eq} \left\{\begin{array}{l} \partial_t^2v-\Delta_gv=f(t,x),\vspace{0.2cm}\\ v(0,\cdot)=u_0,\quad \partial_tv(0,\cdot)=u_1, \end{array}\right. \end{equation} on $[0,+\infty)\times M$. Recall the notation of $C_T(X)$ and $\u$ before Theorems \ref{th.local-sg} and \ref{th.global-toda}, respectively. 
Then, the following Duhamel formula holds true. \begin{proposition} \label{pr2.du} Let $T>0$, $(u_0,u_1)\in H^1(M)\times L^2(M)$ and let $f\in L^1_T(L^2(M))$. Then, \eqref{wave-eq} has a unique solution $$ v:[0,T)\times M\to\mathbb{R}, \quad v\in C_T(H^1)\cap C_T^1(L^2), $$ given by \begin{align} \label{2.wave-ex} v(t,x)=\cos\left(t\sqrt{-\Delta_g}\right)u_0+\frac{\sin(t\sqrt{-\Delta_g})}{\sqrt{-\Delta_g}}\,u_1+ \int_0^t\frac{\sin\bigr((t-s)\sqrt{-\Delta_g}\bigr)}{\sqrt{-\Delta_g}}\,f(s)\,\mathrm{d}s. \end{align} Furthermore, it holds \begin{equation} \|v\|_{C_T(H^1)}+\|\partial_tv\|_{C_T(L^2)}\leq 2\left(\|u_0\|_{H^1}+\|u_1\|_{L^2}+\|f\|_{L_T^1(L^2)}\right). \end{equation} The same results hold as well if $u_0,u_1,$ and $f(t,\cdot)$ are replaced by $\u_0, \u_1$ and $\mathbf{f}(t,\cdot)$, respectively. \end{proposition} \medskip \section{Proof of the main results} \label{sec:proof} In this section we derive the main results of the paper, that is local well-posedness, global existence and blow up criteria for the wave sinh-Gordon equation \eqref{w-sg}, see Theorems \ref{th.local-sg}, \ref{th.global-sg} and \ref{th.con-sg}, respectively. Since the proofs of global existence and blow up criteria for the wave Toda system \eqref{w-toda} (Theorems \ref{th.global-toda} and \ref{th.blowup-toda}) are obtained by similar arguments, we will present full details for what concerns the wave sinh-Gordon equation and point out the differences in the two arguments, where necessary. \subsection{Local and global existence.} We start by proving the local well-posedness of the wave sinh-Gordon equation \eqref{w-sg}. The proof is mainly based on a fixed point argument and the Moser-Trudinger inequality \eqref{mt}. \medskip \noindent {\em Proof of Theorem \ref{th.local-sg}.} Let $(u_0,u_1)\in H^1(M)\times L^2(M)$ be such that $\int_{M}u_1=0$. Take $T>0$ to be fixed later on. 
We set \begin{align} \label{R} R=3\left(\|u_0\|_{H^1}+\|u_1\|_{L^2}\right),\quad I=\fint_{ M}u_0=\frac{1}{|M|}\int_{M}u_0, \end{align} and we introduce the space $B_T$ given by \begin{align*} B_T=\left\{u\in C_T(H^1(M))\cap C_T^1(L^2(M)) \,:\, \|u\|_*\leq R,~ \fint_{M}u(s,\cdot)=I \, \mbox{ for all } s\in[0,T]\right\}, \end{align*} where \begin{align*} \|u\|_*=\|u\|_{C_T(H^1)}+\|\partial_tu\|_{C_T(L^2)}. \end{align*} For $u\in B_T$ we consider the initial value problem \begin{equation} \label{w-Bt} \left\{\begin{array}{l} \partial_t^2v-\Delta_gv=f(s,x)=\rho_1\left(\frac{e^u}{\int_{M}e^u}-\frac{1}{|M|}\right) -\rho_2\left(\frac{e^{-u}}{\int_{M}e^{-u}}-\frac{1}{|M|}\right), \vspace{0.2cm}\\ v(0,\cdot)=u_0,\quad \partial_tv(0,\cdot)=u_1, \end{array}\right. \end{equation} on $[0,T]\times M$. Applying Proposition \ref{pr2.du} we deduce the existence of a unique solution of \eqref{w-Bt}. \medskip \noindent \textbf{Step 1.} We aim to show that $v\in B_{T}$ if $T$ is taken sufficiently small. Indeed, still by Proposition \ref{pr2.du} we have \begin{equation} \label{3.ine} \begin{aligned} \|v\|_*\leq~&2\left(\|u_0\|_{H^1}+\|u_1\|_{L^2}\right) +2|\rho_1|\int_0^T\left\|\left(\frac{e^u}{\int_{M}e^u}-\frac{1}{|M|}\right)\right\|_{L^2}\mathrm{d}s\\ &+2|\rho_2|\int_0^T\left\|\left(\frac{e^{-u}}{\int_{M}e^{-u}}-\frac{1}{|M|}\right)\right\|_{L^2}\mathrm{d}s. \end{aligned} \end{equation} Since $u\in B_T$ we have $\fint_{M}u(s,\cdot)=I$ for all $s\in[0,T]$ and therefore, by the Jensen inequality, \begin{align*} \fint_{M}e^{u}\geq e^{\fint_{M}u}=e^{I}\quad\mbox{and}\quad \fint_{M}e^{-u}\geq e^{-\fint_{M}u}=e^{-I}. \end{align*} Therefore, we can bound the last two terms on the right hand side of \eqref{3.ine} by \begin{align*} CT(|\rho_1|+|\rho_2|)+CT|\rho_1|e^{-I}\max_{s\in[0,T]}\|e^{u(s,\cdot)}\|_{L^2}+CT|\rho_2|e^{I}\max_{s\in[0,T]}\|e^{-u(s,\cdot)}\|_{L^2}, \end{align*} for some $C>0$.
On the other hand, recalling the Moser-Trudinger inequality \eqref{mt}, we have for $s\in[0,T]$ \begin{equation} \label{exp} \begin{aligned} \|e^{u(s,\cdot)}\|_{L^2}^2=~&\int_{M}e^{2u(s,\cdot)}=\int_{M}e^{2(u(s,\cdot)-\overline{u})}e^{2I} \\ \leq~&C\exp\left(\frac{1}{4\pi}\int_{M}|\nabla u(s,\cdot)|^2\right)e^{2I} \leq Ce^{2I}e^{\frac{1}{4\pi}R^2}, \end{aligned} \end{equation} for some $C>0$, where we used $\|u\|_*\leq R$. Similarly, we have $$ \|e^{-u(s,\cdot)}\|_{L^2}^2\leq Ce^{-2I}e^{\frac{1}{4\pi}R^2}. $$ Hence, recalling the definition of $R$ in \eqref{R}, by \eqref{3.ine} and the above estimates we conclude \begin{align*} \|v\|_* &\leq 2\left(\|u_0\|_{H^1}+\|u_1\|_{L^2}\right)+CT(|\rho_1|+|\rho_2|)+CT(|\rho_1|+|\rho_2|)e^{\frac{1}{8\pi}R^2} \\ & = \frac23 R+CT(|\rho_1|+|\rho_2|)+CT(|\rho_1|+|\rho_2|)e^{\frac{1}{8\pi}R^2}. \end{align*} Therefore, if $T>0$ is taken sufficiently small, $T=T(\rho_1,\rho_2,\|u_0\|_{H^1},\|u_1\|_{L^2})$, then $\|v\|_*\leq R$. \medskip Moreover, observe that if we integrate both sides of \eqref{w-Bt} on $M$ we get $$ \partial_t^2\ov{v}(s)=0, \quad \mbox{for all } s\in[0,T] $$ and hence, $$ \partial_t\ov{v}(s)=\partial_t\ov{v}(0)=\ov{u}_1=0, \quad \mbox{for all } s\in[0,T]. $$ It follows that $$ \fint_{M}v(s,\cdot)=\fint_{M}v(0,\cdot)=\fint_{M}u_0=I \quad \mbox{for all } s\in[0,T]. $$ Thus, for this choice of $T$ we conclude that $v\in B_T$. \medskip Therefore, we can define a map $$ \mathcal{F}:B_T\to B_T, \quad v=\mathcal{F}(u). $$ \noindent \textbf{Step 2.} We next prove that by taking a smaller $T$ if necessary, $\mathcal{F}$ is a contraction. Indeed, let $u_1,u_2\in B_T$ be such that $v_i=\mathcal{F}(u_i), \,i=1,2$. Then, $v=v_1-v_2$ satisfies \begin{equation*} \begin{cases} \p_t^2v-\Delta_gv=\rho_1\left(\frac{e^{u_1}}{\int_{M}e^{u_1}}-\frac{e^{u_2}}{\int_{M}e^{u_2}}\right) -\rho_2\left(\frac{e^{-u_1}}{\int_{M}e^{-u_1}}-\frac{e^{-u_2}}{\int_{M}e^{-u_2}}\right),\\ v(0,\cdot)=0,\quad \p_tv(0,\cdot)=0.
\end{cases} \end{equation*} Hence, by Proposition \ref{pr2.du} we have \begin{equation} \label{contraction-estimate} \begin{aligned} \|v\|_*\leq~&2|\rho_1|\int_0^T\left\|\left(\frac{e^{u_1}}{\int_{ M}e^{u_1}} -\frac{e^{u_2}}{\int_{ M}e^{u_2}}\right)\right\|_{L^2}\mathrm{d}s\\ &+2|\rho_2|\int_0^T\left\|\left(\frac{e^{-u_1}}{\int_{ M}e^{-u_1}} -\frac{e^{-u_2}}{\int_{ M}e^{-u_2}}\right)\right\|_{L^2}\mathrm{d}s. \end{aligned} \end{equation} For $s\in[0,T]$, we use the following decomposition, \begin{equation} \label{estimate-1} \left\|\left(\frac{e^{u_1(s,\cdot)}}{\int_{ M}e^{u_1(s,\cdot)}} -\frac{e^{u_2(s,\cdot)}}{\int_{ M}e^{u_2(s,\cdot)}}\right)\right\|_{L^2} \leq \left\|\frac{e^{u_1}-e^{u_2}}{\int_{ M}e^{u_1}}\right\|_{L^2} +\left\|\frac{e^{u_2}\left(\int_{ M}e^{u_1}-\int_{ M}e^{u_2}\right)} {(\int_{ M}e^{u_1})(\int_{ M}e^{u_2})}\right\|_{L^2}. \end{equation} Reasoning as before, the first term in the right hand side of the latter estimate is bounded by \begin{align} &Ce^{-I}\left\|(u_1(s,\cdot)-u_2(s,\cdot))(e^{u_1(s,\cdot)}+e^{u_2(s,\cdot)})\right\|_{L^2} \nonumber\\ &\leq C e^{-I}\|u_1(s,\cdot)-u_2(s,\cdot)\|_{L^4}\left(\|e^{u_1(s,\cdot)}\|_{L^4}+\|e^{u_2(s,\cdot)}\|_{L^4}\right), \label{eq.rhs1} \end{align} for some $C>0$, where we used the H\"older inequality. Moreover, we have \begin{equation*} \begin{aligned} \|e^{u_i(s,\cdot)}\|_{L^4}^4=~&\int_{ M}e^{4(u_i(s,\cdot)-\overline{u}_i(s))}e^{4I} \leq Ce^{4I}\exp\left(\frac{1}{\pi}\int_{ M}|\nabla u_i(s,\cdot)|^2\right)\\ \leq~& Ce^{4I}e^{\frac{1}{\pi}R^2}, \quad i=1,2, \end{aligned} \end{equation*} for some $C>0$. Using the latter estimate for the second term in \eqref{eq.rhs1} and the Sobolev inequality for the first term, we can bound \eqref{eq.rhs1} by \begin{align*} Ce^{\frac{1}{4\pi}R^2}\|u_1-u_2\|_{H^1} \end{align*} and hence \begin{equation} \label{term1} \left\|\frac{e^{u_1}-e^{u_2}}{\int_{ M}e^{u_1}}\right\|_{L^2} \leq Ce^{\frac{1}{4\pi}R^2}\|u_1-u_2\|_{H^1}.
\end{equation} On the other hand, by using \eqref{exp}, the second term in \eqref{estimate-1} is bounded by \begin{equation*} \begin{aligned} &Ce^{-2 I}\|e^{u_2(s,\cdot)}\|_{L^2}\int_{ M}|u_1(s,\cdot)-u_2(s,\cdot)|\left(e^{u_1(s,\cdot)}+e^{u_2(s,\cdot)}\right)\\ &\leq Ce^{-2I}\|e^{u_2(s,\cdot)}\|_{L^2} \left(\|e^{u_1(s,\cdot)}\|_{L^2}+\|e^{u_2(s,\cdot)}\|_{L^2}\right)\|u_1(s,\cdot)-u_2(s,\cdot)\|_{L^2}\\ &\leq Ce^{\frac{1}{4\pi}R^2}\|u_1(s,\cdot)-u_2(s,\cdot)\|_{H^1}, \end{aligned} \end{equation*} for some $C>0$, where in the last step we used the Sobolev inequality. \medskip In conclusion, we have \begin{align*} \left\|\left(\frac{e^{u_1(s,\cdot)}}{\int_{ M}e^{u_1(s,\cdot)}} -\frac{e^{u_2(s,\cdot)}}{\int_{ M}e^{u_2(s,\cdot)}}\right)\right\|_{L^2} \leq Ce^{\frac{1}{4\pi}R^2}\|u_1(s,\cdot)-u_2(s,\cdot)\|_{H^1}. \end{align*} Similarly, \begin{align*} \left\|\left(\frac{e^{-u_1(s,\cdot)}}{\int_{ M}e^{-u_1(s,\cdot)}} -\frac{e^{-u_2(s,\cdot)}}{\int_{ M}e^{-u_2(s,\cdot)}}\right)\right\|_{L^2} \leq Ce^{\frac{1}{4\pi}R^2}\|u_1(s,\cdot)-u_2(s,\cdot)\|_{H^1}. \end{align*} Finally, by the latter estimate, \eqref{term1} and by \eqref{estimate-1}, \eqref{contraction-estimate}, we conclude that \begin{align*} \|v\|_* & \leq CT(|\rho_1|+|\rho_2|)e^{\frac{1}{4\pi}R^2}\max_{s\in[0,T]}\|u_1(s,\cdot)-u_2(s,\cdot)\|_{H^1} \\ & \leq CT(|\rho_1|+|\rho_2|)e^{\frac{1}{4\pi}R^2}\|u_1-u_2\|_*\,. \end{align*} Therefore, if $T>0$ is taken sufficiently small, $T=T(\rho_1,\rho_2,\|u_0\|_{H^1},\|u_1\|_{L^2})$, then $\mathcal{F}$ is a contraction map. The latter fact yields the existence of a unique fixed point for $\mathcal{F}$, which solves \eqref{w-sg} with initial conditions $(u_0, u_1)$. \medskip The same arguments, with suitable adaptations, show that the solution depends continuously on the initial data $(u_0,u_1)$, so that the initial value problem \eqref{w-sg} is locally well-posed; we omit the details. The proof is completed.
\hfill $\square$ \ We next prove that if the two parameters in \eqref{w-sg} are taken in a sub-critical regime, then there exists a global solution to the initial value problem associated to \eqref{w-sg}. To this end we will exploit an energy argument jointly with the Moser-Trudinger inequality related to \eqref{sg}, see \eqref{mt-sg}. For a solution $u(t,x)$ to \eqref{w-sg} we define its energy as \begin{equation} \label{energy-sg} E(u(t,\cdot))=\frac12\int_{ M}(|\partial_tu|^2+|\nabla u|^2)-\rho_1\log\int_{ M}e^{u-\overline{u}} -\rho_2\log\int_{ M}e^{-u+\overline{u}}, \end{equation} for $t\in[0,T]$. We point out that $$ E(u(t,\cdot))=\frac12\int_{ M}|\partial_tu|^2 +J_{\rho_1,\rho_2}(u(t,\cdot)), $$ where $J_{\rho_1,\rho_2}$ is the functional introduced in \eqref{functional-sg}. We first show that the latter energy is conserved in time along the solution $u$. \begin{lemma} \label{lem-const} Let $\rho_1,\rho_2\in\mathbb{R}$ and let $(u_0,u_1)\in H^1( M)\times L^2( M)$ be such that $\int_{ M}u_1=0$. Let $ u\in C_T(H^1)\cap C_T^1(L^2)$, for some $T>0$, be a solution to \eqref{w-sg} with initial data $(u_0, u_1)$ and let $E(u)$ be defined in \eqref{energy-sg}. Then, it holds $$ E(u(t,\cdot))=E(u(0,\cdot)) \quad \mbox{for all } t\in[0,T]. $$ \end{lemma} \begin{proof} We will show that $$ \partial_tE(u(t,\cdot))=0 \quad \mbox{for all } t\in[0,T]. $$ We have \begin{equation} \label{diff-energy} \begin{aligned} \partial_tE(u(t,\cdot))=\int_{ M}(\p_tu)(\p_t^2u)+\int_{ M}\langle\nabla\p_tu,\nabla u\rangle-\rho_1\frac{\int_{ M}e^{u}\p_tu}{\int_{ M}e^{u}}+\rho_2\frac{\int_{ M}e^{-u}\p_tu}{\int_{ M}e^{-u}}. 
\end{aligned} \end{equation} After integration by parts, the first two terms in the right hand side of the latter equation give \begin{equation*} \begin{aligned} \int_{ M}(\p_tu)(\p_t^2u-\Delta_gu)=\int_{ M}\p_tu\left(\rho_1\left(\frac{e^u}{\int_{ M}e^u}-\frac{1}{|M|}\right) -\rho_2\left(\frac{e^{-u}}{\int_{ M}e^{-u}}-\frac{1}{|M|}\right)\right), \end{aligned} \end{equation*} where we have used the fact that $u$ satisfies \eqref{w-sg}. Plugging the latter equation into \eqref{diff-energy} we readily have $$ \partial_tE(u(t,\cdot))=\frac{\rho_2-\rho_1}{|M|}\int_{ M}\p_tu=\frac{\rho_2-\rho_1}{|M|}\,\p_t\left(\int_{ M}u\right)=0 \quad \mbox{for all } t\in[0,T], $$ since $\int_{ M}u(t,\cdot)=\int_{ M}u_0$ for all $t\in[0,T]$, see Theorem \ref{th.local-sg}. This concludes the proof. \end{proof} \ We can now prove the global existence result for \eqref{w-sg} in the sub-critical regime $\rho_1,\rho_2<8\pi$. \medskip \noindent {\em Proof of Theorem \ref{th.global-sg}.} Suppose $\rho_1, \rho_2<8\pi$. Let $(u_0,u_1)\in H^1( M)\times L^2( M)$ be such that $\int_{ M}u_1=0$ and let $u$ be the solution to \eqref{w-sg} with initial data $(u_0,u_1)$ obtained in Theorem \ref{th.local-sg}. Suppose that $u$ exists in $[0,T_0)$. With a little abuse of notation $C([0,T_0);H^1)$ will be denoted here still by $C_{T_0}(H^1)$. Analogously we will use the notation $C_{T_0}^1(L^2)$. We have that $u\in C_{T_0}(H^1)\cap C_{T_0}^1(L^2)$ satisfies \begin{equation*} \p_t^2u-\Delta_gu=\rho_1\left(\frac{e^u}{\int_{ M}e^u}-\frac{1}{|M|}\right)-\rho_2\left(\frac{e^{-u}}{\int_{ M}e^{-u}}-\frac{1}{|M|}\right) \ \ \mbox{on}~[0,T_0)\times M. \end{equation*} We claim that \begin{equation} \label{a-priori} \|u\|_{C_{T_0}(H^1)}+\|\partial_t u\|_{C_{T_0}(L^2)}\leq C, \end{equation} for some $C>0$ depending only on $\rho_1,\rho_2$ and $(u_0,u_1)$.
Once the claim is proven we can extend the solution $u$ for a fixed amount of time starting at any $t\in[0,T_0)$, which in particular implies that the solution $u$ can be extended beyond time $T_0$. Repeating the argument we can extend $u$ for any time and obtain a global solution as desired. \medskip Now we shall prove \eqref{a-priori}. We start by recalling that the energy $E(u(t,\cdot))$ in \eqref{energy-sg} is conserved in time, see Lemma \ref{lem-const}, that is, \begin{equation} \label{const} E(u(t,\cdot))=E(u(0,\cdot)) \quad \mbox{for all } t\in[0,T_0). \end{equation} Suppose first $\rho_1,\rho_2\in(0,8\pi)$. By the Moser-Trudinger inequality \eqref{mt-sg} we have \begin{equation*} 8\pi\left(\log\int_{ M}e^{u(t,\cdot)-\overline{u}(t)}+\log\int_{ M}e^{-u(t,\cdot)+\overline{u}(t)}\right) \leq\frac{1}{2}\int_{ M}|\nabla u(t,\cdot)|^2+C, \quad t\in[0,T_0), \end{equation*} where $C>0$ is independent of $u(t,\cdot)$. Observe moreover that by the Jensen inequality it holds \begin{equation} \label{jensen} \log\int_{ M}e^{u(t,\cdot)-\overline{u}(t)}\geq0, \quad \log\int_{ M}e^{-u(t,\cdot)+\overline{u}(t)}\geq0 , \quad t\in[0,T_0). \end{equation} Therefore, letting $\rho=\max\{\rho_1,\rho_2\}$ we have \begin{align} E(u(t,\cdot))&\geq \frac12\int_{ M}(|\partial_tu(t,\cdot)|^2+|\nabla u(t,\cdot)|^2) \nonumber\\ & \quad -\rho\left(\log\int_{ M}e^{u(t,\cdot)-\overline{u}(t)} +\log\int_{ M}e^{-u(t,\cdot)+\overline{u}(t)}\right) \nonumber\\ & \geq \frac12\int_{ M}(|\partial_tu(t,\cdot)|^2+|\nabla u(t,\cdot)|^2)-\frac{\rho}{16\pi}\int_{ M} |\nabla u(t,\cdot)|^2 - C\rho, \label{es1} \end{align} for $t\in[0,T_0)$, where $C>0$ is independent of $u(t,\cdot)$.
Finally, since $\rho<8\pi$ and by using \eqref{const} we deduce \begin{equation*} \begin{aligned} &\frac12\left(1-\frac{\rho}{8\pi}\right)\left( \|\p_tu(t,\cdot)\|_{L^2}^2+\|\nabla u(t,\cdot)\|_{L^2}^2\right) \\ &\quad \leq \frac12\int_{ M}\left(|\partial_tu(t,\cdot)|^2+\left(1-\frac{\rho}{8\pi}\right)|\nabla u(t,\cdot)|^2\right) \\ &\quad \leq E(u(t,\cdot))+C\rho=E(u(0,\cdot))+C\rho, \end{aligned} \end{equation*} where $C>0$ is independent of $u(t,\cdot)$. On the other hand, to estimate $\|u(t,\cdot)\|_{L^2}$ we recall that $\int_{ M}u(t,\cdot)=\int_{ M}u_0$ for all $t\in[0,T_0)$, see Theorem \ref{th.local-sg}, and use the Poincar\'e inequality to get \begin{align*} \|u(t,\cdot)\|_{L^2} & \leq \|u(t,\cdot)-\ov{u}(t)\|_{L^2} + \|\ov{u}(t)\|_{L^2} \leq C\|\nabla u(t,\cdot)\|_{L^2} + C|\ov{u}(t)| \\ & = C\|\nabla u(t,\cdot)\|_{L^2} + C|\ov{u}_0|, \end{align*} where $C>0$ is independent of $u(t,\cdot)$. By the latter estimate and \eqref{es1} we readily have \eqref{a-priori}. \medskip Suppose now that one of $\rho_1,\rho_2$ is not positive. Suppose without loss of generality $\rho_1\leq 0$. Then, recalling \eqref{jensen} and by using the standard Moser-Trudinger inequality \eqref{mt} (applied to $-u$) we have \begin{equation*} \begin{aligned} E(u(t,\cdot))\geq~&\frac12\int_{ M}(|\partial_tu(t,\cdot)|^2+|\nabla u(t,\cdot)|^2)-\rho_2\log\int_{ M}e^{-u(t,\cdot)+\overline{u}(t)}\\ \geq~&\frac12\int_{ M}\left(|\partial_tu(t,\cdot)|^2+\left(1-\frac{\rho_2}{8\pi}\right)|\nabla u(t,\cdot)|^2\right)-C\rho_2. \end{aligned} \end{equation*} Reasoning as before one can get \eqref{a-priori}. \medskip Finally, suppose $\rho_1,\rho_2\leq0$. In this case, we readily have $$E(u(0,\cdot))=E(u(t,\cdot))\geq\frac12\int_{ M}(|\partial_tu(t,\cdot)|^2+|\nabla u(t,\cdot)|^2),$$ which yields \eqref{a-priori}. The proof is completed.
\hfill $\square$ \medskip \begin{remark} \label{rem-toda} For what concerns the wave equation associated to the Toda system \eqref{w-toda} we can carry out a similar argument to deduce the global existence result in Theorem~\ref{th.global-toda}. Indeed, for a solution $\u=(u_1,\dots,u_n)$ to \eqref{w-toda} we define its energy as \begin{equation*} E(\u(t,\cdot))= \frac12\int_{M}\sum_{i,j=1}^n a^{ij}\left( (\partial_tu_i)(\partial_tu_j) + \langle\nabla u_i,\nabla u_j\rangle \right) -\sum_{i=1}^n\rho_i\log\int_{ M}e^{u_i-\overline{u}_i}, \end{equation*} where $(a^{ij})_{n\times n}$ is the inverse matrix $A_n^{-1}$ of $A_n$. Analogous computations as in Lemma~\ref{lem-const} show that the latter energy is conserved in time, i.e. $$ E(\u(t,\cdot))=E(\u(0,\cdot)) \quad \mbox{for all } t\in[0,T]. $$ To prove the global existence in Theorem~\ref{th.global-toda} for $\rho_i<4\pi$, $i=1,\dots,n$, one can then follow the argument of Theorem \ref{th.global-sg} jointly with the Moser-Trudinger inequality associated to the Toda system \eqref{toda}, see \eqref{mt-toda}. \end{remark} \bigskip \subsection{Blow up criteria.} We next consider the critical/super-critical case in which $\rho_i\geq8\pi$ for some $i$. The fact that the solutions to \eqref{sg} might blow up makes the problem more delicate. By exploiting the analysis introduced in \cite{bjmr}, in particular the improved version of the Moser-Trudinger inequality in Proposition \ref{prop-sg-mt} and the concentration property in Proposition \ref{prop-sg}, we derive the following general blow up criteria for \eqref{w-sg}. We stress this is new for the wave mean field equation \eqref{w-mf} as well. \medskip \noindent {\em Proof of Theorem \ref{th.con-sg}.} Suppose $\rho_i\geq8\pi$ for some $i$. Let $(u_0,u_1)\in H^1( M)\times L^2( M)$ be such that $\int_{ M}u_1=0$ and let $u$ be the solution of \eqref{w-sg} obtained in Theorem \ref{th.local-sg}. 
Suppose that $u$ exists in $[0,T_0)$ for some $T_0<+\infty$ and it cannot be extended beyond $T_0$. Then, we claim that there exists a sequence $t_k\to T_0^-$ such that either \begin{equation} \label{one-infty} \lim_{k\to\infty}\int_{ M}e^{u(t_k,\cdot)}=+\infty \quad \mbox{or} \quad \lim_{k\to\infty}\int_{ M}e^{-u(t_k,\cdot)}=+\infty. \end{equation} Indeed, suppose this is not the case. Recall the definition of $E(u)$ in \eqref{energy-sg} and the fact that it is conserved in time \eqref{const}. Recall moreover that $\int_{ M}u(t,\cdot)=\int_{ M}u_0$ for all $t\in[0,T_0)$, see Theorem \ref{th.local-sg}. Then, we would have \begin{align*} & \frac12\int_{ M}(|\partial_tu(t,\cdot)|^2+|\nabla u(t,\cdot)|^2) \\ & \quad = E(u(t,\cdot)) +\rho_1\log\int_{ M}e^{u(t,\cdot)-\overline{u}(t)} +\rho_2\log\int_{ M}e^{-u(t,\cdot)+\overline{u}(t)} \\ & \quad \leq E(u(t,\cdot)) +(\rho_2-\rho_1)\overline{u}(t) +C \\ & \quad = E(u(0,\cdot)) +(\rho_2-\rho_1)\overline{u}(0) +C \\ & \quad \leq C \quad \mbox{for all } t\in[0,T_0), \end{align*} for some $C>0$ depending only on $\rho_1,\rho_2$ and $(u_0,u_1)$. Thus, we can extend the solution $u$ beyond time $T_0$, contradicting the maximality of $T_0$. We conclude that \eqref{one-infty} holds true. Now, since $\overline{u}(t)$ is constant in time, the Moser-Trudinger inequality \eqref{mt} yields \begin{equation} \lim_{k\to\infty}\|\nabla u(t_k,\cdot)\|_{L^2}=+\infty. \end{equation} This concludes the first part of Theorem \ref{th.con-sg}. \medskip Finally, suppose $\rho_1\in[8m_1\pi,8(m_1+1)\pi)$ and $\rho_2\in[8m_2\pi,8(m_2+1)\pi)$ for some $m_1,m_2\in\mathbb{N}$, and let $t_k$ be the sequence defined above. Next we take $\tilde{\rho}_i>\rho_i$ such that $\tilde{\rho}_i\in(8m_i\pi,8(m_i+1)\pi),~i=1,2$, and consider the following functional as in \eqref{functional-sg}, \begin{align*} J_{\tilde{\rho}_1,\tilde{\rho}_2}(u)=\frac12\int_{ M}|\nabla u|^2-\tilde{\rho}_1\log\int_{ M}e^{u-\overline{u}}-\tilde{\rho}_2\log\int_{ M}e^{-u+\overline{u}}.
\end{align*} Since $\tilde{\rho}_i>\rho_i,~i=1,2$ and since $E(u(t,\cdot))$, $\overline{u}(t)$ are preserved in time, we have \begin{equation*} \begin{aligned} J_{\tilde{\rho}_1,\tilde{\rho}_2}(u(t_k,\cdot)) &= E(u(t_k,\cdot))-\frac12\int_{ M}|\partial_tu(t_k,\cdot)|^2 \\ &\quad -(\tilde\rho_1-\rho_1)\log\int_{ M}e^{u(t_k,\cdot)-\overline{u}(t_k)}-(\tilde\rho_2-\rho_2)\log\int_{ M}e^{-u(t_k,\cdot)+\overline{u}(t_k)} \\ & \leq E(u(0,\cdot))+(\tilde\rho_1-\rho_1)\ov u(0)-(\tilde\rho_2-\rho_2)\ov u(0) \\ & \quad -(\tilde\rho_1-\rho_1)\log\int_{ M}e^{u(t_k,\cdot)}-(\tilde\rho_2-\rho_2)\log\int_{ M}e^{-u(t_k,\cdot)}\to-\infty, \end{aligned} \end{equation*} for $k\to+\infty$, where we used \eqref{one-infty}. Then, by the concentration property in Proposition \ref{prop-sg} applied to the functional $J_{\tilde{\rho}_1,\tilde{\rho}_2}$, for any $\varepsilon>0$ and $r>0$ we can find either some $m_1$ points $\{x_1,\dots,x_{m_1}\}\subset M$ such that, up to a subsequence, $$ \lim_{k\to+\infty}\frac{\int_{\cup_{l=1}^{m_1}B_r(x_l)}e^{u(t_k,\cdot)}}{\int_{ M}e^{u(t_k,\cdot)}}\geq 1-\varepsilon, $$ or some $m_2$ points $\{y_1,\dots,y_{m_2}\}\subset M$ such that $$ \lim_{k\to+\infty}\frac{\int_{\cup_{l=1}^{m_2}B_r(y_l)}e^{-u(t_k,\cdot)}}{\int_{ M}e^{-u(t_k,\cdot)}}\geq 1-\varepsilon. $$ This finishes the last part of Theorem \ref{th.con-sg}. \hfill $\square$ \medskip \begin{remark} The general blow up criteria in Theorem \ref{th.blowup-toda} for the wave equation associated to the Toda system \eqref{w-toda} in the critical/super-critical regime $\rho_i\geq 4\pi$ are obtained similarly. More precisely, one has to exploit the conservation of the energy of solutions to \eqref{w-toda}, see Remark \ref{rem-toda}, and the concentration property for the Toda system \eqref{toda} in Proposition \ref{prop-toda}. \end{remark} \ \begin{center} \textbf{Acknowledgements} \end{center} The authors would like to thank Prof. P.L. Yung for the comments concerning the topic of this paper.
The research of the first author is supported by a grant of the Thousand Youth Talents Plan of China. The research of the second author is partially supported by the PRIN12 project: \emph{Variational and Perturbative Aspects of Nonlinear Differential Problems} and the FIRB project: \emph{Analysis and Beyond}.
\section{Introduction} \indent In perturbation theory, the advantage of a coordinate space description as a way of studying the divergences arising in Feynman diagrams has been discussed by many authors ({\it e.g.}\ \cite{Col,Freed}). In the continuum, the analytic expression for the scalar field propagator in position space is well-known. On the lattice, however, the standard representation for the scalar propagator involves integrals over Bessel functions and has proved to be very difficult to analyse in the continuum limit.\\ \indent Several attempts have been made to derive a suitable expansion for the lattice scalar propagator in the limit where the lattice spacing goes to zero. In particular, we mention the analysis performed by L\"{u}scher and Weisz for the massless propagator \cite{Lus} as well as the research carried out by Burgio, Caracciolo and Pelissetto in \cite{Pel}. In the first of these studies, the $x$-dependence of the massless propagator is asymptotically derived. In the second, the $m$-dependence is obtained for the propagator at $x = 0$. The propagator at any other point $x \neq 0$ is then expressed, through a set of recursion relations, in terms of the value which its massless counterpart assumes on an element of the unit hypercube ({\it i.e.\/}, $x_{\mu}$ either 0 or 1). In the present paper we wish to tackle the most general case of both $m$ and $x$ non-vanishing. \\ \indent The procedure that we adopt attacks the question directly at its core. We derive an asymptotic expansion of the modified Bessel function of the first kind which appears in the expression of the lattice propagator. The order and argument of this Bessel function go to infinity at different rates as the lattice spacing decreases to zero. We have found no tabulated asymptotic expansion for this particular case.
Expansions for the modified Bessel function of the first kind, $I_{\nu}(x)$, are, indeed, available for the cases where either the argument $x$ or the order $\nu$ becomes large, or for the case where both $x$ and $\nu$ grow to infinity keeping the value of their respective ratio constant (approximation by tangents). Unfortunately, none of the cases just mentioned characterizes the modified Bessel function at hand, and we have had to develop, as a result, a new asymptotic expansion.\\ \noindent As a perturbative application of the technique developed in the analysis of the continuum limit expansion of the lattice scalar propagator, in the closing section of this paper we consider the mass renormalization of the discrete $\lambda \phi^{4}$ theory in coordinate space. \\ \indent We introduce now briefly the notations that will be assumed in the following sections. Throughout this paper, we shall work on an $n$-dimensional lattice of finite spacing $a$. The convention \begin{equation} x^{p} = \sum_{\mu = 1}^{n} x_{\mu}^{p}, \; \; \; \; \mbox{with $p$ an integer} \; , \label{eq:notation} \end{equation} \noindent will also be used. \\ \noindent The Euclidean free scalar propagators in an $n$-dimensional configuration space will be denoted by $\Delta^{\mathrm{C}}(x;n)$ and $\Delta^{\mathrm{L}}(x; n)$ with \begin{equation} \Delta^{\mathrm{C}}(x; n) = \int_{- \infty}^{+ \infty} \frac{d^{n}p}{(2 \pi)^n} \frac{e^{\imath p x}}{m^2 + p^2} \; , \label{eq:defcon} \end{equation} \begin{equation} \Delta^{\mathrm{L}}(x; n) = \int_{- \frac{\pi}{a}}^{+ \frac{\pi}{a}} \frac{d^{n}p}{(2 \pi)^n} \frac{e^{\imath p x}}{m^2 + \hat{p}^2} \; , \label{eq:deflat} \end{equation} \noindent referring to the propagator evaluated in the continuum and on the lattice, respectively. Note that in eq. (\ref{eq:deflat}) we have introduced the short-hand notation \begin{equation} \hat{p}^2 = \frac{4}{a^2} \sum_{\mu = 1}^{n} \sin^{2} \left( \frac{p_{\mu} a}{2} \right).
\label{eq:shorthand} \end{equation} \section{The propagator on the lattice} \indent It is a well-known result that the continuum scalar field propagator in a 4-dimensional configuration space can be written in terms of the modified Bessel function $\mathrm{K}_{1}$. The technique consists in using Schwinger's representation for the propagator in momentum space. Re-expressing, then, the integral over the four-momentum as a product of four one-dimensional integrals and completing the square in the resulting exponential, we can finally perform the integration and obtain our final formula. Adopting the notational convention given by eq. (\ref{eq:notation}), we write the $n$-dimensional expression as follows \begin{eqnarray} \Delta^{\mathrm{C}}(x; n) & = & \int_{- \infty}^{+ \infty} \frac{d^{n}p}{(2 \pi)^n} \frac{e^{\imath p x}}{m^2 + p^2} \nonumber \\ & = & \int_{0}^{\infty} d\alpha e^{- m^{2} \alpha} \prod_{\mu = 1}^{n} \int_{- \infty}^{+ \infty} \frac{dp_{\mu}}{(2 \pi)} \exp \left \{- p_{\mu}^2 \alpha + \imath p_{\mu} x_{\mu} \right \} \nonumber \\ & = & (2 \pi)^{- n/2} \left[ \frac{(x^2)^{1/2}}{m} \right]^{1 - n/2} \mathrm{K}_{1 - n/2}\left[ m (x^2)^{1/2} \right]. \label{eq:npropcon} \end{eqnarray} \indent The derivation of the standard representation for the lattice propagator in configuration space is carried out in a very similar fashion.
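As a numerical cross-check of eq. (\ref{eq:npropcon}) (a sketch which is not part of the derivation; it assumes NumPy and SciPy are available), the Bessel-function formula can be compared with the elementary closed forms of the momentum integral in eq. (\ref{eq:defcon}) for $n = 1$, where the propagator reduces to $e^{-m|x|}/2m$, and $n = 3$, where it reduces to the Yukawa form $e^{-m|x|}/4\pi|x|$:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind K_rho

def prop_cont(x2, m, n):
    """Eq. (npropcon): (2 pi)^(-n/2) (sqrt(x^2)/m)^(1 - n/2) K_{1 - n/2}(m sqrt(x^2))."""
    r = np.sqrt(x2)
    return (2 * np.pi) ** (-n / 2) * (r / m) ** (1 - n / 2) * kv(1 - n / 2, m * r)

m, r = 0.7, 2.3
# n = 1: the momentum integral is elementary, exp(-m|x|)/(2m)
assert abs(prop_cont(r**2, m, 1) - np.exp(-m * r) / (2 * m)) < 1e-12
# n = 3: the Yukawa potential exp(-m|x|)/(4 pi |x|)
assert abs(prop_cont(r**2, m, 3) - np.exp(-m * r) / (4 * np.pi * r)) < 1e-12
```

Both checks ultimately rest on the identities $\mathrm{K}_{-\rho} = \mathrm{K}_{\rho}$ and $\mathrm{K}_{1/2}(z) = \sqrt{\pi/2z}\, e^{-z}$.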
Indeed, we have \begin{eqnarray} \Delta^{\mathrm{L}}(x; n) & = & \int_{- \frac{\pi}{a}}^{+ \frac{\pi}{a}} \frac{d^{n}p}{(2 \pi)^n} \frac{e^{\imath p x}} {m^2 + \hat{p}^2} \nonumber \\ & = & \int_{0}^{\infty} d\alpha e^{- m^2 \alpha} \left \{ \prod_{\mu = 1}^{n} e^{- \frac{2 \alpha}{a^2}} \left( \frac{1}{a} \right) \int_{- \pi}^{\pi} \frac{d\vartheta_{\mu}}{(2 \pi)} e^{\frac{2 \alpha}{a^2} \cos\vartheta_{\mu}} \cos \left[ \left( \frac{x_{\mu}}{a} \right) \vartheta_{\mu} \right] \right \} \nonumber \\ & = & \int_{0}^{\infty} d\alpha e^{- m^2 \alpha} \left \{\prod_{\mu = 1}^n e^{- \frac{2 \alpha}{a^2}} \left(\frac{1}{a} \right) \mathrm{I}_{\frac{x_{\mu}}{a}} \left(\frac{2 \alpha}{a^2}\right) \right \} \label{eq:nproplat} \end{eqnarray} \noindent with $\mathrm{I}_{\frac{x_{\mu}}{a}} \left(\frac{2 \alpha}{a^2}\right)$ corresponding to the modified Bessel function of the first kind. \\ \indent Unfortunately, the integral appearing in eq. (\ref{eq:nproplat}) cannot be trivially solved. As a consequence, we are not able in this case to express the propagator in closed form. What we really wish to do here, though, is to show that in the continuum limit, {\it i.e. \/} when $a \rightarrow 0$, $\Delta^{\mathrm{L}}(x; n)$ is given by the sum of $\Delta^{\mathrm{C}}(x; n)$ plus a series of correction terms depending on increasing powers of the lattice spacing $a$. \\ \indent As already mentioned in the introduction, the most direct way to proceed in order to achieve our goal is trying to derive the asymptotic expansion for the modified Bessel function $\mathrm{I}_{\frac{x_{\mu}}{a}}(\frac{2 \alpha}{a^2})$ as $a \rightarrow 0$. \\ \noindent The strategy to adopt for the actual derivation of the expansion is determined by the fact that the order and the argument of the modified Bessel function become large at different rates. The standard techniques of {\em global \/} analysis ({\it e.g.}\ the steepest descents method) are not of much use in this case.
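Although eq. (\ref{eq:nproplat}) cannot be integrated in closed form, it can be checked numerically (an illustrative sketch, not part of the text's argument; it assumes SciPy and sets $n = 1$, $a = 1$ for simplicity) by comparing the $\alpha$-representation against a direct evaluation of the Brillouin-zone integral in eq. (\ref{eq:deflat}):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive  # scaled Bessel: ive(nu, z) = exp(-z) I_nu(z) for z > 0

m, x, a = 0.5, 3, 1.0  # x/a must be an integer (lattice site)

# momentum-space definition, eq. (deflat) with n = 1 (the imaginary part vanishes)
direct, _ = quad(lambda p: np.cos(p * x) / (m**2 + 4 / a**2 * np.sin(p * a / 2) ** 2),
                 -np.pi / a, np.pi / a)
direct /= 2 * np.pi

# alpha-representation, eq. (nproplat): exp(-2 al/a^2) I_{x/a}(2 al/a^2) = ive(x/a, 2 al/a^2)
bessel, _ = quad(lambda al: np.exp(-m**2 * al) * ive(x / a, 2 * al / a**2) / a,
                 0, np.inf, limit=200)

assert abs(direct - bessel) < 1e-6
```

The exponentially scaled `ive` is used so that the integrand stays well-behaved even when the Bessel argument $2\alpha/a^2$ becomes large.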
As a consequence, we are forced to choose here a {\em local \/} analysis approach. This implies beginning our study by examining the differential equation satisfied by the modified Bessel function at hand. With the purpose of determining uniquely the asymptotic behaviour of the solution, we shall also impose in the end the condition that the series representation derived reproduce (through its leading term) the continuum result $\Delta^{\mathrm{C}}(x; n)$. \\ \indent We commence by setting \begin{equation} \beta \equiv \frac{2 \alpha}{x_{\mu}^{2}} \;\;\; \mbox{and} \;\;\; \nu \equiv \frac{x_{\mu}}{a} \; \; \; , \label{eq:set} \end{equation} \noindent {\it i.e. \/} $\, \mathrm{I}_{x_{\mu}/a}(2 \alpha/ a^{2}) \rightarrow \mathrm{I}_{\nu}(\nu^{2} \beta)$. \\ \noindent Thus, we wish to find an expansion for $\mathrm{I}_{\nu}(\nu^{2} \beta)$ as $\nu \rightarrow \infty$. We observe now that $\mathrm{I}_{\nu}(\nu^{2} \beta)$ satisfies a differential equation of the form: $x^{2} \frac{d^{2}y}{dx^2} + x \frac{dy}{dx} - (x^{2} + \nu^{2}) y = 0$. Hence, performing the change of variable \begin{equation} x \rightarrow \nu^{2} \beta \label{eq:changex} \end{equation} \noindent we obtain \begin{equation} \frac{\partial^{2} \mathrm{I}_{\nu}}{\partial \beta^2}(\nu^{2} \beta) + \frac{1}{\beta} \frac{\partial \mathrm{I}_{\nu}}{\partial \beta}(\nu^{2} \beta) - \nu^{2} \left[ \frac{1}{\beta^2} + \nu^{2} \right] \mathrm{I}_{\nu}(\nu^{2} \beta) = 0. \label{eq:besdif} \end{equation} \noindent Eq. (\ref{eq:besdif}) simplifies slightly if we make the substitution \begin{equation} \mathrm{I}_{\nu}(\nu^{2} \beta) = \frac{\mathrm{C}}{\sqrt{\beta}} \mathrm{Y}(\beta, \nu). \label{eq:besiny} \end{equation} \noindent $\mathrm{C}$ represents here the free parameter whose value shall be fixed later on according to the prescription made at the beginning of this paragraph. \\ \indent By using eq.
(\ref{eq:besiny}), we now get a new differential equation for $\mathrm{Y}(\beta, \nu)$, namely \begin{equation} \frac{\partial^{2} \mathrm{Y}(\beta, \nu)}{\partial \beta^2} - \left[ \nu^{4} + \frac{ \nu^{2} - \frac{1}{4}}{\beta^{2}} \right] \mathrm{Y}(\beta, \nu) = 0. \label{eq:ydif} \end{equation} \indent We consider at this stage the limit $\nu \rightarrow \infty$. We now need to make some kind of assumption on the form of the leading term governing the expansion of the solution to eq. (\ref{eq:ydif}) as $\nu$ becomes large. With this purpose, we consider a substitution originally suggested by Carlini (1817), Liouville (1837) and Green (1837) and whose underlying idea is that the controlling factor in the asymptotic expansion of a function is usually in the form of an exponential. Hence, we assume the asymptotic relation \begin{equation} \mathrm{Y}(\beta, \nu) \sim e^{\nu^{2} \beta + \int w(\beta, \nu) d\beta}. \label{eq:carlini} \end{equation} \indent Actually, the form which we have used here for $\mathrm{Y}(\beta, \nu)$ differs slightly from the standard Carlini substitution since it implicitly assumes that not only the leading term, but also all the subsequent terms in the expansion are expressible as exponentials. The reason which led us to adopt eq. (\ref{eq:carlini}) as a viable option is essentially twofold: on one hand, it is dictated by the necessity of recovering the well-known expression for the free scalar propagator in the continuum; on the other, the idea arose in analogy with the method adopted in Watson \cite{Wat} and originally due to Meissel (1892) in order to deduce the asymptotic expansion for the Bessel function of the first kind, $\mathrm{J}_{\nu}(\nu z)$ with $\nu$ large. \\ \indent Carrying out the substitution in the differential equation for $\mathrm{Y}(\beta, \nu)$, we end up with a first-order differential equation of Riccati type for the new function $w(\beta, \nu)$, {\it i.e.
\/} \begin{equation} \frac{\partial w(\beta, \nu)}{\partial \beta} + w^{2}(\beta, \nu) + 2 \nu^{2} w(\beta, \nu) - \frac{\nu^{2} - \frac{1}{4}}{\beta^{2}} \sim 0. \label{eq:wdif} \end{equation} \noindent Our aim, at this point, is to look for a suitable series representation for the solution to this equation. With this purpose, we make the following observations. First of all, if we suppose that $2 \nu^{2} w(\beta, \nu) - \nu^{2}/ \beta^{2} \sim 0$ represents a good approximation to eq. (\ref{eq:wdif}) when $\nu \rightarrow \infty$, then it is straightforward to derive the leading behaviour of $w(\beta, \nu)$ as $(2 \beta^2)^{-1}$. We note that this gives exactly the continuum result $\Delta^{\mathrm{C}}(x; n)$ once the overall solution to eq. (\ref{eq:besdif}) is normalized by fixing the arbitrary parameter $\mathrm{C}$ to be equal to $(2 \pi \nu^2)^{- 1/2}$. Secondly, we observe that all the non-leading terms in the expansion of $w(\beta, \nu)$ should feature only even powers of $\nu$, since only even powers of this variable appear in eq. (\ref{eq:wdif}). \\ \noindent As a result of the remarks just made, we assume that $w(\beta, \nu)$ admits an asymptotic series representation for $\nu$ large of the type \begin{equation} w(\beta, \nu) = \sum_{n = 0}^{\infty} a_{n}(\beta) \nu^{- 2n}. \label{eq:wseries} \end{equation} \noindent This is consistent with the approximation that we considered when we deduced the form of the leading term of the solution. Substituting now this formula into eq. (\ref{eq:wdif}), we get \begin{equation} 2 \sum_{n = - 1}^{\infty} a_{n + 1}(\beta) \nu^{- 2n} \sim \frac{\nu^{2} - \frac{1}{4}}{\beta^{2}} - \sum_{n = 0}^{\infty} \left \{ \left[ \sum_{m = 0}^{n} a_{m}(\beta) a_{n - m}(\beta) \right] + \frac{\partial}{\partial \beta} a_{n}(\beta) \right \} \nu^{-2n}. 
\label{eq:wdifference} \end{equation} \indent Matching, order by order, the coefficients corresponding to the same power of $\nu$, we are now able to deduce the set of relations defining the coefficients $a_{n}(\beta)$. We have \begin{eqnarray} a_{0}(\beta) & = & \frac{1}{2 \beta^2} \; \; \; \; \; ; \; \; \; \; \; a_{1}(\beta) = - \frac{1}{8 \beta^{2}} + \frac{1}{2 \beta^{3}} - \frac{1}{8 \beta^{4}} \; \; \; \; \; ; \label{eq:a0a1} \\ a_{n + 1}(\beta) & = & - \frac{1}{2} \left \{ \sum_{m = 0}^{n} a_{m}(\beta) a_{n - m}(\beta) + \frac{\partial}{\partial \beta} a_{n}(\beta) \right \} \; \; \; \; \; n \geq 1. \label{eq:anrec} \end{eqnarray} \noindent The coefficients $a_{n}(\beta)$ can be, at this stage, computed iteratively leading to \begin{eqnarray} \lefteqn{w(\beta, \nu) = \frac{1}{2 \beta^{2}} - \left[ \frac{1}{8 \beta^{2}} - \frac{1}{2 \beta^{3}} + \frac{1}{8 \beta^{4}} \right] \nu^{- 2}} \nonumber \\ & & + \left[ - \frac{1}{8 \beta^{3}} + \frac{13}{16 \beta^{4}} - \frac{1}{2 \beta^{5}} + \frac{1}{16 \beta^{6}} \right] \nu^{- 4} \nonumber \\ & & + \left[ - \frac{25}{128 \beta^{4}} + \frac{7}{4 \beta^{5}} - \frac{115}{64 \beta^{6}} + \frac{1}{2 \beta^{7}} - \frac{5}{128 \beta^{8}} \right] \nu^{- 6} + {\mathrm{O}}(\nu^{- 8}). \label{eq:wexp} \end{eqnarray} \indent The asymptotic expansion of $\mathrm{I}_{\nu}(\nu^{2} \beta)$ as $\nu \rightarrow \infty$ can be now easily derived. The result reads as follows \begin{eqnarray} \lefteqn{\mathrm{I}_{\nu}(\nu^{2} \beta) \sim _{\nu \rightarrow \infty} (2 \pi \nu^{2} \beta)^{- \frac{1}{2}} \exp \left( \nu^{2} \beta - \frac{1}{2 \beta} \right)} \nonumber \\ & & \times \left \{ 1 + \left[ \frac{1}{8 \beta} - \frac{1}{4 \beta^2} + \frac{1}{24 \beta^3} \right] \nu^{- 2} \nonumber \right. 
\\ & & \; \; \; \; \; + \left[ \frac{9}{128 \beta^2} - \frac{29}{96 \beta^3} + \frac{31}{192 \beta^4} - \frac{11}{480 \beta^5} + \frac{1}{1152 \beta^6} \right] \nu^{- 4} \nonumber \\ & & \; \; \; \; \; + \left[ \frac{75}{1024 \beta^3} - \frac{751}{1536 \beta^4} + \frac{1381}{3072 \beta^5} - \frac{1513}{11520 \beta^6} + \frac{4943}{322560 \beta^7} \right. \nonumber \\ & & \; \; \; \; \; \; \; - \left. \left. \frac{17}{23040 \beta^8} + \frac{1}{82944 \beta^9} \right] \nu^{- 6} + {\mathrm{O}}(\nu^{- 8}) \right \}. \label{eq:besexp} \end{eqnarray} \noindent Dividing this equation by $\mathrm{I}_{\nu}(\nu^2 \beta)$ and plotting the result against $\beta$ for a fixed (large) value of $\nu$, we see a fluctuation of the ratio around unity. This reproduces exactly the behaviour which ought to be expected of the ratio of two functions asymptotic to each other. Furthermore, we observe that the fluctuation around 1 is a feature of the whole positive $\beta$-axis, therefore implicitly suggesting the validity of our expansion even for large values of $\beta$. \begin{figure}[h] \epsfxsize = 10cm \epsfbox{finalfig.ps} \caption {Behaviour of the ratio of the asymptotic expansion of $\mathrm{I}_{\nu}(\nu^2 \beta)$ to $\mathrm{I}_{\nu}(\nu^2 \beta)$ as a function of $\beta$ with $\nu$ equal to 100.} \label{fig:ratio} \end{figure} \indent Recalling the expressions of $\beta$ and $\nu$ in terms of $\alpha$, $x_{\mu}$ and $a$, we are now able to derive the formula for the $n$-dimensional propagator up to sixth order in the lattice spacing. In fact, we can do more than that. We consider at this point a generalisation of $\Delta^{\mathrm{L}}(x; n)$ by introducing an arbitrary exponent $q$ in the denominator of eq.
(\ref{eq:nproplat}); that is, we now look at \begin{eqnarray} \Delta^{\mathrm{L}}(x; n; q) & = & \int_{- \frac{\pi}{a}}^{+ \frac{\pi}{a}} \frac{d^{n}p}{(2 \pi)^n} \frac{e^{\imath p x}} {\left \{ m^2 + \hat{p}^2 \right \}^{q}} \nonumber \\ & = & \frac{1}{\Gamma(q)} \int_{0}^{\infty} d\alpha \alpha^{q - 1} e^{- m^2 \alpha} \left \{\prod_{\mu = 1}^n e^{- \frac{2 \alpha}{a^2}} \left(\frac{1}{a} \right) \mathrm{I}_{\frac{x_{\mu}}{a}} \left(\frac{2 \alpha}{a^2}\right) \right \}. \label{eq:general} \end{eqnarray} \noindent The main reason for considering this generalised quantity rests in the fact that eq. (\ref{eq:general}) represents a key element in the expression of general one-loop lattice integrals with bosonic propagators and zero external momenta \cite{Pel}. \\ \indent Having reached a formula for the asymptotic expansion of the modified Bessel function as the lattice spacing vanishes, the study of the continuum limit of $\Delta^{\mathrm{L}}(x; n; q)$ is not technically more difficult than the analysis of the same limit for the $n$- dimensional lattice propagator $\Delta^{\mathrm{L}}(x; n) \equiv \Delta^{\mathrm{L}}(x; n; 1)$. Indeed, we simply have now to substitute eq. (\ref{eq:besexp}) into eq. (\ref{eq:general}) and carry out the product over the dimensional index $\mu$. The resulting $\alpha$-integrals are all well-defined and finite. We can, therefore, proceed to their evaluation and obtain \begin{eqnarray} \Delta^{\mathrm{L}}(x; n; q) & \sim_{a \rightarrow 0} & \frac{(4 \pi)^{- n/2}}{\Gamma(q)} \left \{ \Delta^{\mathrm{L}}_{0}(x; n; q) + a^2 \Delta^{\mathrm{L}}_{2}(x; n; q) \right. \nonumber \\ & & + \left. a^4 \Delta^{\mathrm{L}}_{4}(x; n; q) + a^6 \Delta^{\mathrm{L}}_{6}(x; n; q) + {\mathrm{O}} (a^8) \right \}. 
\label{eq:genexp} \end{eqnarray} \noindent The full expression of each of the coefficients $\Delta^{\mathrm{L}}_{2i}(x; n; q)$ ($i = 0, 1, 2, 3$) is given in terms of the new function ${\mathrm{P}}_{\rho}(m; x)$ defined as \begin{equation} {\mathrm{P}}_{\rho}(m; x) = \left[ \frac{2 m}{(x^2)^{\frac{1}{2}}} \right]^{\rho} \mathrm{K}_{\rho} \left[ m \left( x^2 \right)^{\frac{1}{2}} \right] \label{eq:deffunp} \end{equation} \noindent with $\mathrm{K}_{\rho} \left[ m \left( x^2 \right)^{\frac{1}{2}} \right]$ representing, as usual, the modified Bessel function of the second kind and $\rho$ a real number. Therefore, we have \begin{eqnarray} \lefteqn{\Delta^{\mathrm{L}}_{0}(x; n; q) = 2 {\mathrm{P}}_{\frac{n}{2} - q}(m; x)} \label{eq:zeroord} \\ \lefteqn{\Delta^{\mathrm{L}}_{2}(x; n; q) = \frac{n}{8} {\mathrm{P}}_{1 + \frac{n}{2} - q}(m; x) - \frac{x^2}{8} {\mathrm{P}}_{2 + \frac{n}{2} - q}(m; x) + \frac{x^4}{96} {\mathrm{P}}_{3 + \frac{n}{2} - q}(m; x)} \label{eq:secondord} \\ \lefteqn{\Delta^{\mathrm{L}}_{4}(x; n; q) = \frac{n (n + 8)}{256} {\mathrm{P}}_{2 + \frac{n}{2} - q}(m; x) - \left[ \left( \frac{n}{128} + \frac{13}{192} \right) x^2\right] {\mathrm{P}}_{3 + \frac{n}{2} - q}(m; x)} \nonumber \\ & + & \left[\frac{n + 24}{1536} x^4 + \frac{(x^2)^2}{256} \right] {\mathrm{P}}_{4 + \frac{n}{2} - q}(m; x) \nonumber \\ & - & \left[\frac{x^6}{1280} + \frac{x^2 x^4}{1536} \right] {\mathrm{P}}_{5 + \frac{n}{2} - q}(m; x) + \frac{(x^4)^2}{36864} {\mathrm{P}}_{6 + \frac{n}{2} - q}(m; x) \label{eq:fourthord} \\ \lefteqn{\Delta^{\mathrm{L}}_{6}(x; n; q) = \left[ \frac{n (n - 1) (n - 2)}{12288} + \frac{3 n (3 n + 22)}{4096} \right] {\mathrm{P}}_{3 + \frac{n}{2} - q}(m; x)} \nonumber \\ & - & \left[ \frac{(n - 1) (n - 2)}{4096} + \frac{85 n + 666}{12288} \right] x^2 {\mathrm{P}}_{4 + \frac{n}{2} - q}(m; x) \nonumber \\ & + & \left[ \frac{(n - 1) (n - 2) + 59 n + 1102}{49152} x^4 + \frac{3 n + 52}{12288} (x^2)^2 \right] {\mathrm{P}}_{5 + \frac{n}{2} - q}(m; x) \nonumber \\ & - & 
\left[\frac{3 n + 160}{61440} x^6 + \frac{(x^2)^3}{12288} + \frac{3 n + 98}{73728} x^2 x^4 \right] {\mathrm{P}}_{6 + \frac{n}{2} - q}(m; x) \nonumber \\ & + & \left[\frac{5}{57344} x^8 + \frac{x^4 (x^2)^2}{49152} + \frac{n + 48}{589824} (x^4)^2 + \frac{x^2 x^6}{20480} \right] {\mathrm{P}}_{7 + \frac{n}{2} - q}(m; x) \nonumber \\ & - & \left[\frac{x^2 (x^4)^2}{589824} + \frac{x^4 x^6}{245760} \right] {\mathrm{P}}_{8 + \frac{n}{2} - q}(m; x) + \frac{(x^4)^3}{21233664} {\mathrm{P}}_{9 + \frac{n}{2} - q}(m; x). \label{eq:sixthord} \end{eqnarray} \indent The expansion obtained for $\Delta^{\mathrm{L}}(x; n; q)$ clearly shows how the finite corrections introduced by formulating the theory on a lattice can be analytically expressed by a series of increasing (even) powers of the lattice spacing {\em a \/} with coefficients given by analytic functions of the mass and space coordinate times a modified Bessel function of the second kind of increasing order $\rho$. \\ \indent We intend now to demonstrate how eq. (\ref{eq:genexp}) is in perfect agreement with both the studies performed in \cite{Lus} and \cite{Pel}. With this aim, we analyse $\Delta^{\mathrm{L}}(x; n; q)$ in the limit $m (x^2)^{1/2} \rightarrow 0$. Given the functional dependence of eq. (\ref{eq:genexp}) on ${\mathrm{P}}_{\rho}(m; x)$ and given the definition in eq. (\ref{eq:deffunp}), this translates into considering the appropriate expansion for the Bessel function $\mathrm{K}_{\rho}\left[ m (x^2)^{\frac{1}{2}} \right]$. We wish to recall at this point that the series representation of the modified Bessel function of the second kind assumes different forms depending on whether the order $\rho$ is a non-integral real number ($\rho_{\mathrm{re}}$) or an integer ($\rho_{\mathrm{in}}$).
In particular, for $m (x^2)^{1/2}$ small and $\rho = \rho_{\mathrm{re}}$ we have \begin{equation} {\mathrm{P}}_{\rho_{\mathrm{re}}}(m; x) \sim \frac{\pi}{2} \frac{1}{\sin \rho_{\mathrm{re}} \pi} \left \{ \frac{2^{2 \rho_{\mathrm{re}}}} {\Gamma(1 - \rho_{\mathrm{re}})} \frac{1}{(x^2)^{\rho_{\mathrm{re}}}} - \frac{m^{2 \rho_{\mathrm{re}}}} {\Gamma(1 + \rho_{\mathrm{re}})} \right \} \label{eq:preal} \end{equation} \noindent while, for $m (x^2)^{1/2}$ still small and $\rho = \rho_{\mathrm{in}}$ ($\rho_{\mathrm{in}} \neq 0$) \footnote{For $\rho_{\mathrm{in}} = 0$ the correct expansion reads ${\mathrm{P}}_{0}(m ; x) \sim \psi(1) - \ln \left[ \frac{m (x^2)^{1/2}}{2} \right]$.}, we find \begin{eqnarray} \lefteqn{{\mathrm{P}}_{\rho_{\mathrm{in}}}(m; x) \sim \frac{2^{2 \rho_{\mathrm{in}} - 1}}{(x^2)^{\rho_{\mathrm{in}}}} \Gamma(\rho_{\mathrm{in}})} \nonumber \\ & + & (- 1)^{\rho_{\mathrm{in}} + 1} \frac{m^{2 \rho_{\mathrm{in}}}}{\Gamma(1 + \rho_{\mathrm{in}})} \left \{ \ln \left[\frac{m (x^2)^{1/2}}{2} \right] - \frac{1}{2} \psi(1) - \frac{1}{2} \psi(1 + \rho_{\mathrm{in}}) \right \} \label{eq:pinteger} \end{eqnarray} \noindent with $\psi$ denoting the $\psi$-function \cite{Ryz}. \\ \noindent Using the relation $\Gamma(1 - \rho_{\mathrm{re}}) \Gamma(\rho_{\mathrm{re}}) = \pi/ \sin \rho_{\mathrm{re}} \pi$, we find that ${\mathrm{P}}_{\rho_{\mathrm{re}}}(m; x)$ and ${\mathrm{P}}_{\rho_{\mathrm{in}}}(m; x)$ assume the same functional form $2^{2 \rho - 1} \Gamma(\rho)/(x^2)^{\rho}$ ($\rho > 0$) as $m \rightarrow 0$. The limits $m \rightarrow 0$ and $\rho_{\mathrm{re}} \rightarrow \rho_{\mathrm{in}}$ are, therefore, uniform.
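The small-mass behaviour and the claimed uniformity can be illustrated numerically (a sketch assuming SciPy; the common limiting form $2^{2\rho - 1}\Gamma(\rho)/(x^2)^{\rho}$ is read off from the leading term of eq. (\ref{eq:pinteger})):

```python
import numpy as np
from scipy.special import kv, gamma

def P(rho, m, x2):
    """Eq. (deffunp): P_rho(m; x) = (2m/sqrt(x^2))^rho K_rho(m sqrt(x^2))."""
    r = np.sqrt(x2)
    return (2 * m / r) ** rho * kv(rho, m * r)

x2 = 1.7
for rho in (1.5, 2.0):  # one non-integral and one integral order
    limit = 2 ** (2 * rho - 1) * gamma(rho) / x2**rho
    dev = [abs(P(rho, m, x2) / limit - 1) for m in (1e-3, 1e-4)]
    # the relative deviation shrinks with m, for integral and non-integral rho alike
    assert dev[1] < dev[0] < 1e-2
```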
Finally, setting $n=4$ and $q=1$, we find that the massless limit of our continuum expansion for $\Delta^{\mathrm{L}}(x; n; q)$ reproduces exactly the expansion obtained by L\"{u}scher and Weisz for the massless 4-dimensional propagator \cite{Lus}.\\ \noindent The limit $x \rightarrow 0$ proves to be more difficult to analyse. The short-distance behaviour is, indeed, singular and the limit not uniform in this case. Note that this observation matches the analogous remark made in \cite{Col} about the behaviour of the dimensionally regularised propagator. Observe also that the logarithmic mass-behaviour described in \cite{Pel} by Burgio, Caracciolo and Pelissetto is recovered, in our formulation, for integral values of $\rho$. \section{The tadpole diagram in $\lambda \phi^{4}$ theory} \indent As a direct implementation of the result obtained for the continuum expansion of the scalar propagator, we now wish to derive the one-loop renormalization mass counterterm of $\lambda \phi^{4}$ theory through the study of the lattice tadpole diagram. \begin{figure}[h] \begin{center} \begin{fmfchar*}(40,25) \fmfleft{i} \fmfv{label=$x$, label.angle=-90,label.dist=50}{i} \fmfright{o} \fmfv{label=$y$, label.angle=-90,label.dist=50}{o} \fmf{plain}{i,v,v,o} \fmfdot{v} \fmfv{label=$z$,label.angle=-90,label.dist=50}{v} \end{fmfchar*} \label{fig:tadpole} \end{center} \caption{Self-energy tadpole diagram in $\lambda \phi^{4}$ theory.} \end{figure} \noindent In an $n$-dimensional continuum space, the contribution to the full propagator associated with the graph in Fig. 2 is well-known and proportional to the integral $\int d^{n}z \Delta^{\mathrm{C}}(x - z) \Delta^{\mathrm{C}}(0) \Delta^{\mathrm{C}}(z - y)$. The ultraviolet behaviour of the diagram is entirely due to the divergence in $\Delta^{\mathrm{C}}(0)$ \cite{Col}.
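On the lattice, the divergent factor $\Delta^{\mathrm{C}}(0)$ is replaced by $\Delta^{\mathrm{L}}(0; n)$, which is finite at any fixed spacing and diverges only as $a \rightarrow 0$. The sketch below (an illustration assuming SciPy, with $n = 2$, where the growth is logarithmic) evaluates eq. (\ref{eq:nproplat}) at $x = 0$ for decreasing $a$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive  # ive(0, z) = exp(-z) I_0(z)

def prop_lat_origin(m, a, n=2):
    """Delta^L(0; n), eq. (nproplat): each factor exp(-2 al/a^2) I_0(2 al/a^2)/a is ive(0, 2 al/a^2)/a."""
    val, _ = quad(lambda al: np.exp(-m**2 * al) * (ive(0, 2 * al / a**2) / a) ** n,
                  0, np.inf, limit=400)
    return val

m = 1.0
vals = [prop_lat_origin(m, a) for a in (1.0, 0.5, 0.25)]
# finite for every fixed spacing, but growing monotonically as the cutoff is removed
assert np.all(np.isfinite(vals)) and vals[0] < vals[1] < vals[2]
```

Rescaling the integration variable shows that decreasing $a$ at fixed $m$ is equivalent to decreasing the dimensionless mass $ma$, which makes the growth of $\Delta^{\mathrm{L}}(0; 2)$ manifest.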
\\ \indent In terms of our notational conventions, the lattice version of the tadpole graph in $n$-dimensions is immediately written as \begin{equation} \mathrm{M}^{\mathrm{L}}_{\mathrm{tad}} = - \mu^{4 - n} \left(\frac{\lambda}{2}\right) a^{n} \sum_{z} \Delta^{\mathrm{L}}(x - z; n) \Delta^{\mathrm{L}}(z - y; n) \Delta^{\mathrm{L}}(0; n) \; \; , \label{eq:tadlat} \end{equation} \noindent with $\lambda$ the coupling constant and $\mu^{4 - n}$ a multiplicative dimensional factor introduced to preserve the dimensional correctness of the theory. Note that, in eq. (\ref{eq:tadlat}), the lattice propagators have replaced their continuous counterparts and the summation over the lattice sites has taken the place of the continuum integral. \\ \indent Our present goal is to examine eq. (\ref{eq:tadlat}) as $a \rightarrow 0$. Using the asymptotic expansion of the propagator as derived in eq. (\ref{eq:genexp}), it is straightforward to obtain up to fourth order in the lattice spacing \begin{eqnarray} & & \mathrm{M}^{\mathrm{L}}_{\mathrm{tad}} \sim \int d^{n}z \left \{ f(x, y, z) \Delta^{\mathrm{L}}_{0}(0; n) + a^2 \left[ g(x, y, z) \Delta^{\mathrm{L}}_{0}(0; n) + f(x, y, z) \Delta^{\mathrm{L}}_{2}(0; n) \right] + \right. \nonumber \\ & & \; \; \; \; \; \; \left. a^4 \left[ h(x, y, z) \Delta^{\mathrm{L}}_{0}(0; n) + g(x, y, z) \Delta^{\mathrm{L}}_{2}(0; n) + f(x, y, z) \Delta^{\mathrm{L}}_{4}(0; n) \right] + \ldots \right \} \; , \label{eq:tadexp} \end{eqnarray} \noindent with $f(x, y, z)$, $g(x, y, z)$ and $h(x, y, z)$ denoting functions which are associated with products of the type $\Delta^{\mathrm{L}}_{0} \Delta^{\mathrm{L}}_{0}$, $\Delta^{\mathrm{L}}_{2} \Delta^{\mathrm{L}}_{0}$ and $\Delta^{\mathrm{L}}_{4} \Delta^{\mathrm{L}}_{0}$ plus $\Delta^{\mathrm{L}}_{2} \Delta^{\mathrm{L}}_{2}$, respectively. 
\\ \noindent We observe that the $n$-dimensional coefficients which appear in the expressions of $f$, $g$ and $h$ are exclusively evaluated at $x - z$ and $z - y$ and hence correspond to integrable singularities. As a result, the singularities in $\mathrm{M}^{\mathrm{L}}_{\mathrm{tad}}$ are fundamentally generated by the poles in $\Delta^{\mathrm{L}}_{0}(0; n)$, $\Delta^{\mathrm{L}}_{2}(0; n)$ and $\Delta^{\mathrm{L}}_{4}(0; n)$ only. It is now of paramount importance to remark that each of the latter quantities scales, implicitly, with the lattice spacing $a$. This scaling needs to be made explicit in the analysis by considering the mass as physical, {\it i.e.\/} $m_{\mathrm{R}} = m a$, and by recalling that, in a discrete formulation, the space coordinate is also defined in terms of the lattice spacing. \\ \noindent At this stage, the isolation of the $\mathrm{UV}$-divergences can take place along the lines of the method introduced by Collins~\cite{Col} for the study of the continuum case. Indeed, performing a {\em point-splitting\/} of the tadpole $z$-vertex through the introduction of an arbitrary variable $\varepsilon_{\mathrm{R}} = \varepsilon / a$, such that $\varepsilon_{\mathrm{R}} \rightarrow 0$ ($a$ fixed), the investigation of the divergent behaviour in each of the $\Delta^{\mathrm{L}}_{2 i}(0; n)$ ($i = 0, 1, 2$) now translates into examining the divergences in $\Delta^{\mathrm{L}}_{2 i}(\varepsilon_{\mathrm{R}}; m_{\mathrm{R}}; n)$ \footnote{The explicit presence of the mass in the argument of the propagator coefficients has been included, in this case, simply to recall that it is $m_{\mathrm{R}}$ rather than the bare $m$ which now figures in the calculation.} for vanishing $\varepsilon_{\mathrm{R}}$ and $n$ fixed and non-integral. We note that each of the series expansions obtained splits, naturally, into two sub-series, with the poles in $\varepsilon_{\mathrm{R}}$ all contained in the first.
In terms of $\varepsilon_{\mathrm{R}}$, the second sub-series is, in fact, analytic. Thus, focusing on the singular contribution, we take the limit $n \rightarrow 4$ and extract the infinities in $\varepsilon_{\mathrm{R}}$ through the singularities of the $\Gamma$-function which appears in the series. The procedure leads to a renormalization mass counterterm of the type \begin{eqnarray} & & \delta_{\mathrm{L}}m^{2}_{\mathrm{R}} = \frac{1}{a^2} \left \{ \frac{m^{2}_{\mathrm{R}}}{8 \pi^{2}} \frac{\mu^{(n - 4)}}{(n - 4)} \right \} + a^{2} \left \{ \left(\frac{1}{a^{2}}\right)^{2} \left[ \frac{m^{2}_{\mathrm{R}}}{8 \pi^{2}} \frac{\mu^{(n - 4)}}{(n - 4)} - \frac{m^{4}_{\mathrm{R}}}{64 \pi^{2}} \frac{\mu^{(n - 4)}}{(n - 4)} \right] \right \} + \nonumber \\ & & a^{4} \left \{ \left(\frac{1}{a^{2}}\right)^{3} \left[\frac{m^{2}_{\mathrm{R}}}{8 \pi^{2}} \frac{\mu^{(n - 4)}}{(n - 4)} - \frac{m^{4}_{\mathrm{R}}}{64 \pi^{2}} \frac{\mu^{(n - 4)}}{(n - 4)} + \frac{m^{6}_{\mathrm{R}}}{1536 \pi^{2}} \frac{\mu^{(n - 4)}}{(n - 4)}\right] \right\} + \ldots . \label{eq:countn} \end{eqnarray} \noindent As expected, the counterterm evaluated through the study of the lattice continuum limit in $n$-dimensions contains a divergence in $n$ for $n \rightarrow 4$ as well as a quadratic divergence for $a \rightarrow 0$. However, multiplying eq. (\ref{eq:countn}) by $a^{2}$, the second order pole in $a$ disappears completely leaving the equation, now expressed in lattice units, finite (for $n \neq 4$). \\ \indent The lattice derivation of the mass counterterm can also be performed directly in four dimensions. Due to the important differences existing between the expansions in eq. (\ref{eq:preal}) and eq. (\ref{eq:pinteger}), a few changes now take place in the investigation, though.
The isolation of the infinities is no longer accomplished through the extraction of the poles in the $\Gamma$-functions, but resides ultimately in the observation that the divergences in the lattice spacing mirror exactly the analogous divergent behaviour in $\varepsilon_{\mathrm{R}}$. Operationally, the latter remark produces a four-dimensional lattice counterterm of the type \begin{eqnarray} & & \delta^{4}_{\mathrm{L}}m^{2}_{\mathrm{R}} = \frac{1}{4 \pi^{2}} \left \{ \left(\frac{1}{a^{2}}\right) + \frac{m^{2}_{\mathrm{R}}}{4 a^{2}} \log(m^{2}_{\mathrm{R}}) - \frac{m^{2}_{\mathrm{R}}}{4 a^{2}} \left[ 4 + \psi(1) + \psi(2)\right] + \right. \nonumber \\ & & a^{2} \left \{ \left(\frac{1}{a^{2}}\right) + \frac{m^{2}_{\mathrm{R}}}{4 a^{2}} \log(m^{2}_{\mathrm{R}}) - \frac{m^{2}_{\mathrm{R}}}{4 a^{2}} \left[ 4 + \psi(1) + \psi(2)\right] + \right. \nonumber \\ & & \; \; \left. \left(\frac{1}{4 \pi}\right) \left \{ \frac{4}{(a^{2})^{2}} - \frac{1}{8} \left(\frac{m^{2}_{\mathrm{R}}}{a^{2}}\right)^{2} \left[ \log\left(\frac{m^{2}_{\mathrm{R}}}{4}\right) - \psi(1) - \psi(3) \right] \right \} \right \} + \nonumber \\ & & a^{4} \left \{ \left(\frac{1}{a^{2}}\right) + \frac{m^{2}_{\mathrm{R}}}{4 a^{2}} \log(m^{2}_{\mathrm{R}}) - \frac{m^{2}_{\mathrm{R}}}{4 a^{2}} \left[ 4 + \psi(1) + \psi(2)\right] + \right. \nonumber \\ & & \; \; \left(\frac{1}{4 \pi}\right) \left \{ \frac{4}{(a^{2})^{2}} - \frac{1}{8} \left(\frac{m^{2}_{\mathrm{R}}}{a^{2}}\right)^{2} \left[ \log\left(\frac{m^{2}_{\mathrm{R}}}{4}\right) - \psi(1) - \psi(3) \right] \right \} + \nonumber \\ & & \; \; \left. \left.
\left(\frac{1}{4 \pi}\right) \left \{ \frac{12}{(a^{2})^{3}} + \frac{3}{64} \left(\frac{m^{2}_{\mathrm{R}}}{a^{2}}\right)^{3} \left[ \log\left(\frac{m^{2}_{\mathrm{R}}}{4}\right) - \psi(1) - \psi(4) \right] \right \} \right \} + \ldots \right\} \label{eq:count4} \end{eqnarray} \noindent We notice that, in agreement with standard results \cite{Mun}, in this case both a quadratic and a logarithmic counterterm are needed at every order in the calculation. Nevertheless, in terms of lattice units, eq.~(\ref{eq:count4}) is, again, finite. \section{Conclusions} \indent In the present work, we derived an asymptotic expansion for the modified Bessel function $\mathrm{I}_{\nu}(\nu^{2} \beta)$ as $\nu \rightarrow \infty$. The expansion obtained was of vital importance to analytically evaluate the continuum expansion of both the lattice scalar propagator in an $n$-dimensional configuration space and its related generalised quantity $\Delta^{\mathrm{L}}(x; n; q)$. The study of the small $m (x^2)^{1/2}$-behaviour of $\Delta^{\mathrm{L}}(x; n; q)$ in the limit $a \rightarrow 0$ was shown to involve only the standard series expansion of modified Bessel functions of either real ($\rho_{\mathrm{re}}$) or integer ($\rho_{\mathrm{in}}$) order. The uniformity of the limits $m \rightarrow 0$ and $\rho_{\mathrm{re}} \rightarrow \rho_{\mathrm{in}}$ was observed and the L\"{u}scher and Weisz expansion for the massless propagator in four dimensions recovered. The result obtained in \cite{Pel} for the mass-dependence was also reproduced for integral values of $\rho$. Finally, as a perturbative application of the results obtained, the one-loop mass counterterm in $\lambda \phi^{4}$ lattice theory was evaluated for both the $n$- and $4$-dimensional cases. \ack{Beatrice Paladini wishes to thank both the European Human and Capital Mobility Program and Hitachi Dublin Laboratory for their financial support towards the completion of this work.}
\section{Introduction} Since their introduction by Drinfeld~\cite{Drinfeld:quasibialgebras,Drinfeld:quasibialgebras1} quantum doubles of Hopf algebras, in particular of group algebras, have been the subject of some attention, mostly in connection with conformal f\/ield theory, representation theory, quantum integrability and topological quantum computation~\cite{CGR:modulardata,DiPaRo:double,Finch-et-al, KacTodorov,KoornEtAl,Ostrik:modulecategoryfordoubles,MThompson:quantumcomputing, Witherspoon}. Actually the possibility of associating an ``exotic Fourier transform'' (to become the $S$ matrix of a~quantum double) with any f\/inite group was introduced by Lusztig already in~\cite{Lusztig}, see also~\cite{Lusztig:exotic}. The purpose of the present paper is two-fold. As the Drinfeld doubles of specif\/ic f\/inite groups may be determined quite explicitly, at least when their order and class number (number of irreps) are not too big, our f\/irst purpose is to invite the reader on a~tour through a~selection of examples taken from the list of f\/inite subgroups of SU(2) and SU(3). This is clearly an arbitrary choice, which can be justif\/ied by the interest of those subgroups in the construction of the so-called orbifold models in CFT, and also because their modular data and fusion rules have been determined~\cite{Coq-12}. Accordingly the reader will f\/ind here a~selected number of data, tables and graphs (in particular fusion graphs), and we hope that our discussion of the various examples will bring some better understanding in the study of quantum doubles of f\/inite groups. More data are available on request and/or on a~dedicated web site\footnote{See \url{http://www.cpt.univ-mrs.fr/~coque/quantumdoubles/comments.html}.}.
\looseness=-1 Secondly, we want to use these data to explore further an issue that has remained somewhat elusive so far: in a~recent paper~\cite{RCJBZ:sumrules}, we uncovered identities satisf\/ied by sums of tensor or fusion multiplicities under complex conjugation of representations. This is recalled in detail in Section~\ref{section23} below. These identities were proved for simple Lie algebras of f\/inite or af\/f\/ine type as resulting from a~case by case analysis, but no conceptually simple interpretation of that result was proposed. In contrast, these identities were found to fail in a~certain number of f\/inite groups. It was suggested~\cite{ChSchweigert} that it would be interesting to test these identities on Drinfeld doubles, as they share with af\/f\/ine Lie algebras the property of being modular tensor categories, in contradistinction with f\/inite groups. We shall see below that it turns out that our identities are generally not satisf\/ied by Drinfeld doubles, which indicates that the modular property is not the decisive factor. \looseness=-1 The paper is organized as follows. Section~\ref{section2} recalls a~few facts on orbifolds and Drinfeld doubles; it also displays the expressions of modular $S$ and $T$ matrix given in the literature~\cite{Coq-12, CGR:modulardata} and reviews the symmetry properties of $S$, to be used in the following. Section~\ref{subsection24} presents the sum rules of the fusion coef\/f\/icients and of the $S$ matrix, to be investigated below. In Section~\ref{section3}, we found it useful to collect a~number of ``well known'' facts on f\/inite groups, that we use in the sequel. Finally in Section~\ref{section4}, we come to the explicit case of f\/inite subgroups of~SU(2) and~SU(3). For each of these two Lie groups, a~short recapitulation of what is known on their f\/inite subgroups is given, before a~case by case analysis of their Drinfeld doubles. 
The discussion is exhaustive for SU(2), whereas for SU(3) we content ourselves with a~detailed discussion of the ``exceptional'' subgroups and a~few comments on some subgroups of inf\/inite series. Our results are gathered in Tables~\ref{ssgrSU2} and~\ref{ssgrSU3}. Finally Appendices collect additional comments on f\/inite groups, explicit modular matrices for the particular case of the Drinfeld double of the Klein group $\Sigma_{168}$, and \dots\ {a surprise picture}. \section{Orbifolds, doubles and modular data}\label{section2} \subsection{Remarks about orbifolds models} In CFT, a~general orbifold model is specif\/ied by a~pair $({\Gamma},k)$ where ${\Gamma}$ is a~Lie group and $k$ is a~positive integer, together with a~f\/inite subgroup $G$ of ${\Gamma}$. When $k=0$, orbifold models (called holomorphic) are actually specif\/ied by a~pair $(G,\omega)$ where $G$ is a~f\/inite group and $\omega$ is a~cocycle belonging to $H^3(G,{\rm U}(1))$. The group $G$ can be for instance a~subgroup of some given Lie group~${\Gamma}$ (but the latter plays no role in the construction when $k=0$). General orbifold models (with $k\neq0$) are discussed in~\cite{KacTodorov}. These data determine a~f\/inite collection of ``primary f\/ields'', or ``simple objects'', or irreps, for short. It also determines a~fusion ring linearly spanned by the irreps. Those are the so-called chiral data on which one builds a~CFT, here an orbifold theory. In BCFT, one has further to specify the kind of ``boundary'' one considers. A specif\/ic boundary type determines a~module (also called nimrep) over the fusion ring~\cite{BPPZ, Ca}. Using the vocabulary of category theory, one may say that an orbifold model def\/ines a~fusion category ${\mathcal A}$ whose simple objects are the primary f\/ields $\sigma_n$, labelled by $n$ running from $1$ to~$r_A$. The fusion ring is generated linearly by the primary f\/ields $\sigma_n$. 
The ring structure $\sigma_m \sigma_n=\sum\limits_p N_{mn}^{\phantom{m}p}\sigma_p$ is specif\/ied by non-negative integers $N_{mn}^{\phantom{m}p}$. The category is modular: we have~$S$ and~$T$ matrices, of dimensions $r_A\times r_A$, representing the generators of the group ${\rm SL}(2,\mathbb{Z})$; they obey $S^2=(ST)^3=C$, $C^2=\ensuremath{\,\,\mathrm{l\!\!\!1}}$. The matrix $C$ is called the conjugation matrix. {$S$ and $T$ are unitary, $S$ is symmetric.} The fusion coef\/f\/icients $N_{mn}^{\phantom{m}p}$ are given by the Verlinde formula~\cite{Verlinde}{\samepage \begin{gather} \label{Verl} N_{mn}^{\phantom{m}p}=\sum_{\ell}\frac{S_{m\ell}S_{n\ell}S_{p\ell}^{*}}{S_{1\ell}}. \end{gather} In the case $k=0$ (level $0$) the Lie group ${\Gamma}$ plays no role, and we set ${\mathcal A}={\mathcal A}(G,\omega)$.} A BCFT def\/ines (or is def\/ined by) a~module-category ${\mathcal E}$ over ${\mathcal A}$. Its simple objects (called boundary states, in BCFT), labelled by $a$ from $1$ to $r_E$ are denoted $\tau_a$. They generate an abelian group which is a~module over the ring of ${\mathcal A}$. Explicitly, $\sigma_m\,\tau_a=\sum\limits_b F_{m,a}^{\phantom{m,}b}\tau_b$. The constants $F_{m,a}^{\phantom{m,}b}$ are also non-negative integers (whence the acronym nimreps). A BCFT (a choice of ${\mathcal E}$) is associated with a~symmetric matrix $Z$, of dimensions $r_A\times r_A$, also with non-negative integer coef\/f\/icients, that commutes with $S$ and $T$. For this reason $Z$ is called the modular invariant matrix. It is normalized by the condition $Z_{1,1}=1$. For the particular choice ${\mathcal E}={\mathcal A}$, the boundary states coincide with the primary f\/ields, $r_E=r_A$, and the modular invariant matrix $Z=\ensuremath{\,\,\mathrm{l\!\!\!1}}$ is the unit matrix. The construction of $Z$ from the BCFT/module category ${\mathcal E}$ or vice versa remains in practice a~matter of art \dots. 
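As a minimal illustration of the Verlinde formula~\eqref{Verl}, one can take the smallest Drinfeld double $D(\mathbb{Z}_2)$, whose $S$ matrix reduces to $S_{(a,j)(b,k)}=\frac{1}{2}(-1)^{ak+bj}$ with $a,j,b,k\in\{0,1\}$; here the pairs $(a,j)$ label the four irreps, anticipating the description of Section~\ref{formulaForS}. The sketch below is ours and only meant as a~numerical check:

```python
import itertools

import numpy as np

# The four irreps of D(Z_2) are pairs (a, j): a labels the conjugacy class,
# j the character of its centralizer (the whole group, Z_2 being abelian).
labels = list(itertools.product(range(2), repeat=2))
S = np.array([[0.5 * (-1) ** (a * k + b * j) for (b, k) in labels]
              for (a, j) in labels])

# Verlinde formula: N_{mn}^p = sum_l S_{ml} S_{nl} S*_{pl} / S_{1l},
# where "1" is the unit irrep (a, j) = (0, 0).
one = labels.index((0, 0))
N = np.einsum('ml,nl,pl,l->mnp', S, S, S.conj(), 1.0 / S[one])
```

Here $S$ comes out symmetric and unitary, and the fusion coefficients are non-negative integers realising the group law of $\mathbb{Z}_2\times\mathbb{Z}_2$.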
In general, from any f\/inite group $G$, one can build ${\mathcal A}(G)={\mathcal A}(G,0)$ by the so-called Drinfeld double construction \cite{Drinfeld:quasibialgebras,Drinfeld:quasibialgebras1}: it is the representation category of a~Hopf algebra {$D(G)$} called the Drinfeld double of $G$, or the (untwisted) quantum double of $G$. More generally, from any f\/inite group $G$, together with a~cocycle\footnote{In $H_3(G,\mathbb{Z})\cong H^3(G,\mathbb{C}^\times)\cong H^3(G,{\rm U}(1))$.} $\omega$, one can build a~fusion category ${\mathcal A}(G,\omega)$ by a~method called the twisted Drinfeld double construction. The genuine Hopf algebra $D(G)$ is replaced by a~quasi-Hopf algebra $D_\omega(G)$. The latter is a~quasi-bialgebra, not a~bialgebra, because the coproduct is not co-associative. \begin{remark} One may often use various {methods} to build the same category ${\mathcal A}$, up to equivalence. The Hopf algebra $D(G)$ and the twisted Hopf algebras $D_\omega(G)$ have been used in~\cite{DiPaRo:double} to build~${\mathcal A}(G)$ and~${\mathcal A}(G,\omega)$, but other constructions should be possible. \end{remark} \looseness=-1 According to~\cite{Ostrik:modulecategoryfordoubles} the indecomposable\footnote{Not equivalent to a~direct sum.} nimreps ${\mathcal E}$ of ${\mathcal A}(G,\omega)$ or in other words the indecomposable module-categories ${\mathcal E}$ over ${\mathcal A}(G,\omega)$ are parametrized by the conjugacy classes of pairs $(K,\psi)$ where $K\subset G\times G$ is a~subgroup, $\psi$ a~cohomology class in $H^2(K,\mathbb{C}^\times)$, and $K$ is such that the natural extension\footnote{$\tilde\omega=p_1^\star\omega-p_2^\star\omega$, where $p_i$ are projections $G\times G\to G$: $(g_1,g_2)\mapsto g_i$.} $\tilde\omega$ of the cohomology class $\omega$ to $H^3(G\times G,\mathbb{C}^\times)$ is trivial on $K$. Such subgroups $K$ of $G\times G$ are called admissible for $(G,\omega)$. 
This latter freedom, that usually (not always) changes the modular invariant partition function but not the modular data, was called ``discrete torsion'' in~\cite{Vafa:discretetorsion}, and in~\cite{CGR:modulardata}. It is clear that any subgroup $K$ of $G\times G$ is admissible for $\omega=0$. In what follows we shall only consider holomorphic orbifolds, and moreover often assume that the cocycle $\omega$ is trivial (in other words we shall consider ``untwisted holomorphic orbifolds''). For this reason, we shall write ``Drinfeld double'' instead of ``quantum double'' in the following. Moreover, we shall not discuss boundary states, BCFT, nimreps, and the like~$\ldots$. Nevertheless, we believe that it was not useless to remind the reader of the above facts concerning module-categories associated with orbifold models, in order to better understand ``where we stand''! So, what matters for us in this paper is mostly the (modular) fusion category ${\mathcal A}(G)$ associated with the choice of a~f\/inite group $G$. It will actually be enough to know how matrices~$S$ and~$T$ are constructed from some f\/inite group data (see formulae~\eqref{ST-formulas} below). The multiplicative structure (the fusion ring coef\/f\/icients $N_{mn}^{\phantom{m}p}$) can be obtained from $S$ via Verlinde equations~\cite{Verlinde}. In particular the Drinfeld double construction (twisted or not), {which may be used to obtain the general formulae in Section~\ref{formulaForS} will not be explicitly used in the sequel}. In what follows we shall only consider fusion rings obtained from Drinfeld doubles of f\/inite groups, and we therefore drop the general notation ${\mathcal A}(G)$ and write $D(G)$ instead. \subsection{General properties of Drinfeld doubles} \label{formulaForS} \begin{itemize}\itemsep=0pt \item We shall call ``rank'' $r$ the total number of irreps of $D(G)$. 
As the irreps of $D(G)$ are labelled by pairs $({[c]},\sigma_{c})$, where ${[c]}$ is a~conjugacy class of the group $G$ and $\sigma_{c}$ an irrep of the centralizer in $G$ of (any representative ${c}$ of) ${[c]}$,\footnote{Two elements of the same conjugacy class have isomorphic centralizers.} we group together in ``blocks'' those associated with the same conjugacy class and centralizer. For each example we shall list the number $N_c$ of elements (i.e.\ of irreps $\sigma_{c}$) in each block~$c$. Their sum is thus equal to the rank $r$. We call ``classical'' those irreps of the Drinfeld double that correspond to the f\/irst block called ``classical block'', associated with the trivial conjugation class (the centralizer of the identity being $G$ itself, these irreps can be identif\/ied with the irreps of the group~$G$). Their number is equal to the number of conjugacy classes of $G$, that we call the class number. \item Quantum dimensions of irreps, for fusion models built from Lie groups at f\/inite levels (WZW theories), are usually not integers, but quantum dimensions of irreps of doubles of f\/inite groups are always integers. When those irreps are classical, their quantum dimensions coincide with the dimensions of the corresponding irreps of the group. \item If $\chi=(c,\sigma_c)$ is an irrep of $D(G)$, its quantum dimension is $\mu(\chi)$, and the global dimension of $D(G)$ is def\/ined as $\vert D(G)\vert=\sum \mu(\chi)^2$. In the case of Drinfeld doubles, where the cocycle is trivial, we have $\vert D(G)\vert=|G|^2$, where $|G|$ is the order of $G$. \item For each of the examples that we consider later, we shall also give the integer $d_{\mathcal B}=\sum\limits_m(\sum\limits_{n,p}N_{mn}^{\phantom{m}p})^2$ whose interpretation as the dimension of a~weak Hopf algebra ${\mathcal B}$ (or double triangle algebra~\cite{Ocneanu:Fields}) will not be discussed in this paper, see also~\cite{CoquereauxIsasiSchieber:TQFT, Hayashi, Ostrik, PetkovaZuber:cells}. 
\item In writing the $S$, $T$ and fusion matrices, we sort the irreps as follows. First of all we sort the conjugacy classes according to the increasing order $p$ (in the sense $g^p=1$) of its representatives. For instance the conjugacy class of the identity (for which $p=1$) always appears f\/irst. Whenever two classes have the same $p$, their relative order is arbitrarily chosen. Finally, for a~given conjugacy class, the irreps of the associated centralizer are ordered (not totally) according to their increasing classical dimension. \item Formulae for $S$ and $T$: We copy from~\cite{CGR:modulardata} the following expressions for the untwisted case. More general expressions that are valid for any twist can be found in the same reference, where they are used to explicitly determine the corresponding $S$ and $T$ matrices in several cases, in particular for the odd dihedral groups (any twist). As recalled above, there is a~one to one correspondence between irreps of $D(G)$ and pairs $([c],\sigma_c)$, where $[c]$ is a~conjugacy class of $G$, and $\sigma_c$ denotes an irrep of the centralizer $C_G(c)$ of (any representative $c$ of) class $[c]$ in $G$. Then \begin{gather} \nonumber S_{([c],\sigma_c)([d],\sigma_d)}=\frac{1}{|C_G(c)||C_G(d)|}\sum_{\scriptstyle{g\in G} \atop\scriptstyle{c g d g^{-1}=g d g^{-1}c}}\chi_{\sigma_c}\big(g d g^{-1}\big)^{*}\chi_{\sigma_d}\big(g^{-1}cg\big)^{*} \\ \label{ST-formulas} \phantom{S_{([c],\sigma_c)([d],\sigma_d)}} =\frac{1}{|G|}\sum_{g\in[c],h\in[d]\cap C_G(c)}\chi_{\sigma_c}\big(x h x^{-1}\big)^{*}\chi_{\sigma_d}\big(ygy^{-1}\big)^{*}, \\ T_{([c],\sigma_c)([d],\sigma_d)}=\delta_{cd}\delta_{\sigma_c\sigma_d}\frac{\chi_{\sigma_c}(c)} {\chi_{\sigma_c}(e)}, \nonumber \end{gather} where $x$ and $y$ are arbitrary solutions of $g=x^{-1}cx$ and $h=y^{-1}dy$. In practice, it is more convenient to use a~variant of~\eqref{ST-formulas}~\cite{Coq-12}. 
Let ${\mathcal T}_c=\{c_i\}$ (resp.\ ${\mathcal T}_d=\{d_j\}$) be a~system of coset representatives for the {left} classes of~$G/C_G(c)$ (resp.\ a system of coset representatives for the left classes of~$G/C_G(d)$), then \begin{gather} \label{S-alt} S_{([c],\sigma_c)([d],\sigma_d)}=\frac{1}{\vert G\vert} \sum_{c_i,d_j\atop g_{ij}=c_i d_j^{-1}} \!\!\!\!\!\!\! {\vphantom{\sum}}' \; \chi_{{\sigma_c}}\big(g_{ij}d g_{ij}^{-1}\big)^{*}\chi_{{\sigma_d}}\big(g_{ij}^{-1}cg_{ij}\big)^{*}, \end{gather} where the primed sum runs over pairs of $c_i\in{\mathcal T}_c$, $d_j\in{\mathcal T}_d$ that obey $[d_j^{-1}d d_j,c_i^{-1}c c_i]=1$; here $[{}\;,\,{}]$ is the commutator def\/ined as $[a,b]=a^{-1}b^{-1}ab$. This reformulation of~\eqref{ST-formulas}, also used implicitly in~\cite{MThompson:quantumcomputing}, is handy because sets of coset representatives are provided by GAP~\cite{GAP}. \end{itemize} \subsection[Symmetries of the $S$ matrix]{Symmetries of the $\boldsymbol{S}$ matrix}\label{section23} \begin{itemize}\itemsep=0pt \item The most conspicuous property of the $S$-matrix in~\eqref{ST-formulas} or~\eqref{S-alt} is its symmetry: \begin{gather*} {S_{([c],\sigma_c)([d],\sigma_d)}=S_{([d],\sigma_d)([c],\sigma_c)}.} \end{gather*} Compare with the case of an ordinary f\/inite group $G$, for which the tensor product multiplicities are given by \begin{gather}\label{tensor-group} N_{rs}^{\phantom{r}t}=\sum_c\frac{\hat\chi_c^{\phantom{c}r}\hat\chi_c^{\phantom{c}s}\hat\chi_c^{\phantom{c}t*}}{\hat\chi_c^{\phantom{c}1}}, \end{gather} where $\hat\chi_c^{\phantom{c}r}=\sqrt{\frac{|c|}{|G|}}\chi_c^{\phantom{c}r}$ is the normalized character of irrep $r$ in class $c$, an expression which looks like Verlinde formula~\eqref{Verl}. In that case, however, there is no reason that the diagonalizing matrix $\hat\chi$ of multiplicities be symmetric, and it is generically not. In contrast in a~Drinfeld double, that matrix, called now $S$, is symmetric.
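This symmetry, together with unitarity and $S^4=\ensuremath{\,\,\mathrm{l\!\!\!1}}$, can be verif\/ied by implementing~\eqref{ST-formulas} directly on a~small nonabelian example. The sketch below does this for $D(S_3)$ (rank $3+2+3=8$), with the characters of the three centralizers $S_3$, $\mathbb{Z}_2$ and $\mathbb{Z}_3$ hard-coded; it is an independent toy check of ours, not the GAP-based computation used for the tables below:

```python
import cmath
import itertools

import numpy as np

# S_3 as permutations of {0, 1, 2}; composition (p*q)(i) = p(q(i)).
G = list(itertools.permutations(range(3)))
e = (0, 1, 2)
mul = lambda p, q: tuple(p[i] for i in q)
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
centralizer = lambda c: [g for g in G if mul(g, c) == mul(c, g)]

# One representative per conjugacy class: identity, a transposition, a 3-cycle.
reps = [e, (1, 0, 2), (1, 2, 0)]

def chars(c):
    """Irreducible characters of C_G(c), as dicts element -> value."""
    C = centralizer(c)
    if c == e:                      # C_G(e) = S_3: trivial, sign, 2-dim standard
        fix = lambda g: sum(g[i] == i for i in range(3))
        return [{g: 1 for g in C},
                {g: (1 if fix(g) != 1 else -1) for g in C},
                {g: fix(g) - 1 for g in C}]
    # Otherwise C_G(c) is cyclic, generated by c itself.
    power, g = {}, e
    for m in range(len(C)):
        power[g] = m
        g = mul(c, g)
    w = cmath.exp(2j * cmath.pi / len(C))
    return [{h: w ** (k * power[h]) for h in C} for k in range(len(C))]

irreps = [(c, s) for c in reps for s in chars(c)]    # 3 + 2 + 3 = 8 irreps

def S_entry(c, sig, d, tau):
    """First line of (2): sum over g such that g d g^{-1} commutes with c."""
    total = 0
    for g in G:
        gdg = mul(mul(g, d), inv(g))
        if mul(c, gdg) == mul(gdg, c):
            total += (sig[gdg] * tau[mul(mul(inv(g), c), g)]).conjugate()
    return total / (len(centralizer(c)) * len(centralizer(d)))

S = np.array([[S_entry(c, s, d, t) for (d, t) in irreps] for (c, s) in irreps])
```

One recovers $S_{11}=1/|G|=1/6$, and Verlinde's formula applied to this $S$ yields non-negative integer fusion coefficients, as it must.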
In other words, there is not only an equal number of classes and irreps in a~double, there is also a~canonical bijection between them. The origin of that symmetry may be found in a~CFT interpretation~\cite{DiPaRo:double}, or alternatively, may be derived directly~\cite{KoornEtAl}. \item The $S$-matrix has other properties that are basic for our purpose: \begin{itemize}\itemsep=0pt \item it is unitary, $S.S^\dagger=I$; \item its square $S^2=C$ satisf\/ies $C.C=I$, i.e.~$S^4=I$. As recalled above, this is, with $(S.T)^3=C$, one of the basic relations satisf\/ied by generators of the modular group. Since $S^{*}=S^\dagger=S^3=S.C=C.S$, the matrix $C$ is the conjugation matrix, $C_{ij}=\delta_{i\bar\jmath}$. \end{itemize} \item As just mentioned, under complex conjugation, $S$ transforms as \begin{gather*} S_{ij}^{*}=S_{\bar\imath j}=S_{i\bar\jmath}, \end{gather*} where $\bar\imath$ refers to the complex conjugate irrep of $i$; in the case of the double, where $i$ stands for $([c],\sigma_c)$, $\bar\imath$ stands for $\overline{([c],\sigma_c)}=([c^{-1}],\overline{\sigma_c})$. This follows from the formulae in~\eqref{ST-formulas} and~\cite{CGR:modulardata}. By Verlinde formula this implies that \begin{gather} \label{conj-fusion} N_{\bar\imath}=N_i^T. \end{gather} Thus, for tensor product (fusion), complex conjugation amounts to transposition, a~pro\-perty also enjoyed by~\eqref{tensor-group}. Moreover, $N_{\bar\imath}=C.N_i.C$. \item Other symmetries of the $S$ matrix of the double are associated with the existence of units in the fusion ring. An invertible element in a~ring is usually called a~unit. A fusion ring is a~$\mathbb{Z}_+$ ring, i.e., it comes with a~$\mathbb{Z}_+$ basis (the irreps), and in such a~context, one calls units those irreps that are invertible (in the context of CFT, units are generally called {\it simple currents}).
Therefore if $u$ is a~unit, hence an irrep such that $N_u$ is invertible, necessarily $N_u^{-1}=N_{\bar u}=N_u^T$, $N_u$ is an orthogonal matrix and $\det N_u=\pm1$. In view of~\eqref{conj-fusion}, $N_u$ is an orthogonal integer-valued matrix, hence a~permutation matrix, $(N_u)_i^{\phantom{i}j}=\delta_{J_u(i),j}$, where $J_u$ is a~permutation. \item In the following we denote \begin{gather} \label{def-phi} \phi_i(\ell)=\frac{S_{i\ell}}{S_{1\ell}} \end{gather} the eigenvalues of the $N_i$ fusion matrix. Note that for a~unit $u$, $\phi_u(\ell)$ is a~root of unity. Moreover, as $\phi_u(1)$ is also a~quantum dimension, hence a~positive number, this must necessarily be equal to~1. \item The existence of units entails the existence of symmetries of the {{\it fusion graphs}, also called {\it representation graphs} in the literature, namely the graphs\footnote{All the graphs given in this paper, as well as many calculations involving fusion matrices, have been obtained with the symbolic package Mathematica~\cite{Mathematica}, interfaced with GAP~\cite{GAP}.} whose adjacency matrices are the fusion matrices.} Each permutation $J_u$, for $u$ a~unit, acts on irreps in such a~way that \begin{gather*} \forall\, i \quad N_i=N_u N_{\bar u}N_i=N_u N_i N_{\bar u}\ \Rightarrow \ N_{ij}^{\phantom{i}k}=N_{iJ_u(j)}^{\phantom{i}J_u(k)}, \end{gather*} hence may be regarded as an automorphism of the fusion rules and a~symmetry of the fusion graphs: on the fusion graph of any $N_i$, there is an edge from $j$ to $k$ if\/f there is one from $J_u(j)$ to $J_u(k)$. A particular case of such automorphisms is provided by the automorphisms of weight systems in af\/f\/ine algebras used in~\cite{RCJBZ:sumrules}. \item As all irreps of the double, units are labelled by pairs $([c],\psi)$, but here the class $[c]$ is the center $Z(G)$ of $G$, its centralizer is $G$ itself, and $\psi$ is a~1-dimensional irrep of $G$. 
Indeed, for Drinfeld doubles, the quantum dimension $S_{([c],\sigma_c)([e],1)}/S_{([e],1)([e],1)}$ of an irrep $j=([c],\sigma_c)$ is equal to $\vert[c]\vert\times \dim(\sigma_c)$, but for a~unit $([c],\psi=\sigma_c)$, the quantum dimension is equal to~$1$ (see above after~\eqref{def-phi}), and $c$ is central ($\vert[c]\vert=1$), so $\psi$ is of degree $1$. The set of the latter is given by the `abelianization' $G/G'$ of $G$, with $G'$ the commutator subgroup of~$G$. Thus the group of units is isomorphic to $Z(G)\times G/G'$. \end{itemize} \subsection[Sum rules for the $S$ matrix]{Sum rules for the $\boldsymbol{S}$ matrix}\label{subsection24} Let $N_{ij}^{\phantom{i}k}$ stand for the multiplicity of irrep $k$ in $i\otimes j$ and let $\bar\imath$ refer to the complex conjugate irrep of $i$. According to~\cite{RCJBZ:sumrules}, both in the case of semi-simple Lie groups and in the case of fusion categories def\/ined by a~pair ${(\Gamma,k)}$ (WZW models), we have \begin{gather} \label{sumrule} \forall\, i,j \qquad {\sum_k N_{ij}^{\phantom{i}k}{=}\sum_k N_{\bar\imath j}^{\phantom{i}k}. } \end{gather} or equivalently \begin{gather}\label{sumrule'} \forall\, j,k \qquad \sum_i N_{ij}^{\phantom{i}k}{=}\sum_i N_{i j}^{\phantom{i}\bar k}.\tag{$7'$} \end{gather} In the case of WZW models, where the category is modular, we have shown the above property to be equivalent to the following: if an irrep $j$ is of complex type, then \begin{gather} \label{sumruleS} \Sigma_j:=\sum_i S_{ij}=0, \end{gather} and we shall say below that the irrep $j$ has a~vanishing $\Sigma$. Actually we have shown in~\cite{RCJBZ:sumrules} that the last property also holds when $j$ is of quaternionic type. Def\/ining the charge conjugation matrix $C=S^2$ and the path matrix $X=\sum\limits_i N_i$, it is a~standard fact that $C.X.C=X$. Property \eqref{sumrule'} reads instead \begin{gather} \label{XeqXC} X=X.C=C.X. 
\end{gather} The f\/irst natural question is to ask whether property~\eqref{XeqXC} holds for f\/inite groups. As noticed in~\cite{RCJBZ:sumrules}, the answer is {in general} negative (although it holds {in many cases}). To probe equation~\eqref{sumrule}, we have to look at groups possessing complex representations. In the case of ${\rm SU}(2)$ subgroups, equation~\eqref{sumrule} holds, and this was easy to check since only the cyclic and binary tetrahedral subgroups have complex representations. It was {then} natural to look at subgroups of ${\rm SU}(3)$ and we found that {\eqref{sumrule}} holds true for most subgroups of ${\rm SU}(3)$ but fails for some subgroups like $F=\Sigma_{72\times3}$ or $L=\Sigma_{360\times3}$. The second property~\eqref{sumruleS} does not make sense for a~f\/inite group since there is no invertible $S$ matrix, and Verlinde formula cannot be used. The next natural question\footnote{We thank Ch.~Schweigert for raising that issue.} is to ask if the above properties~\eqref{sumrule}, \eqref{sumruleS} hold for Drinfeld doubles of f\/inite groups. As we shall see in a~forthcoming section, the answer is again negative. Let us now prove the following \begin{proposition}\label{proposition1} For a~Drinfeld double: $($equation~\eqref{sumrule}$)$ $\Leftrightarrow$ $\forall\, j\ne\bar j$, $\Sigma_j=0$. \end{proposition} Our proof follows closely the steps of the proof of a~similar statement in~\cite{RCJBZ:sumrules} in the case of f\/inite dimensional or af\/f\/ine Lie algebras, although here neither property is necessarily valid. \begin{itemize}\itemsep=0pt \item~\eqref{sumruleS} $\Rightarrow$~\eqref{sumrule}.
Suppose that only self-conjugate irreps have a~non-vanishing $\Sigma$ and use~\eqref{Verl} to write \begin{gather*} \sum_k N_{ij}^{\phantom{i}k}=\sum_\ell\frac{S_{i\ell}S_{j\ell}\sum_k S^{*}_{k\ell}}{S_{1\ell}}=\sum_{\ell=\bar\ell} \frac{S_{i\ell}S_{j\ell}\sum_k S^{*}_{k\ell}}{S_{1\ell}} =\sum_k\sum_{\ell=\bar\ell} \frac{S_{\bar\imath\ell}S_{j\ell}S^{*}_{k\ell}}{S_{1\ell}}=\sum_k N_{\bar\imath j}^{\phantom{\bar\imath}k}. \end{gather*} \item \eqref{sumrule} $\Rightarrow$~\eqref{sumruleS}. Suppose that $\sum\limits_i N_{ij}^{\phantom{i}k}=\sum\limits_i N_{i j}^{\phantom{i}\bar k}$ for all $i$, $j$, $k$. Use again~\eqref{Verl} and~\eqref{sumrule} to write \begin{gather*} \left(\sum_i S_{i\ell}\right)S_{j\ell}=\sum_k\sum_i N_{ij}^{\phantom{i}k}S_{k\ell}S_{1\ell}=\sum_k\sum_i N_{ij} ^{\phantom{i}\bar k}S_{k\ell}S_{1\ell} \\ \phantom{\left(\sum_i S_{i\ell}\right)S_{j\ell}} =\sum_k\sum_i\! N_{ij}^{\phantom{i}k}S_{\bar k\ell}S_{1\ell}=\sum_k\sum_i\! N_{ij} ^{\phantom{i}k}S_{k\bar\ell}S_{1\bar\ell}=\sum_i\! S_{i\bar\ell}S_{j\bar\ell}= \left(\sum_i S_{i\ell}\right) \! S_{j\bar\ell}, \end{gather*} from which we conclude that if $\sum\limits_i S_{i\ell}\ne0$, then $S_{j\ell}=S_{j\bar\ell}$, which cannot hold for all~$j$ unless $\ell=\bar\ell$ (remember that $S_{j\ell}/S_{1j}$ and $S_{j\bar\ell}/S_{1j}$ are the eigenvalues of two fusion matrices~$N_\ell$ and $N_{\bar\ell}$ which are dif\/ferent if $\ell\ne\bar\ell$). Thus, assuming~\eqref{sumrule} (which is not always granted), if $\ell\ne\bar\ell$, $\sum\limits_i S_{i\ell}=0$. \end{itemize} \begin{proposition}\label{proposition2} In any modular tensor category, the complex conjugation is such that properties~\eqref{sumrule} and~\eqref{sumruleS} are simultaneously true or false. \end{proposition} \begin{remark} As proved in~\cite{RCJBZ:sumrules}, property~\eqref{sumrule} holds in the case of Lie groups, and for af\/f\/ine Lie algebras at level $k$ (WZW models), both properties~\eqref{sumrule} and~\eqref{sumruleS} hold.
In the case of Drinfeld doubles of f\/inite groups, it is not always so. \end{remark} Whenever the fusion/tensor ring has units, we may state the following \begin{proposition}\label{proposition3} Consider an irrep $j$ such that there exists a~unit $u$ with \begin{gather*} \phi_u(j)=\frac{S_{uj}}{S_{1j}}\ne1. \end{gather*} Then~\eqref{sumruleS} holds true: $\sum\limits_i S_{ij}=0$. \end{proposition} We write simply, using the fact that $N_u=J_u$ is a~permutation \begin{gather*} \nonumber \sum_i S_{ij}=\sum_i S_{J_u (i)j}=\phi_u(j)\sum_i S_{ij} \end{gather*} and $\phi_u(j)\ne1\Rightarrow\sum\limits_i S_{ij}=0$. One f\/inds, however, cases of complex irreps $j$ for which all units $u$ give $\phi_u(j)=1$ and Proposition~\ref{proposition3} cannot be used. In the examples (see below), we shall encounter the two possibilities: \begin{itemize}\itemsep=0pt \item Complex irreps (with all $\phi_u(j)=1$) such that $\Sigma_j\ne0$, hence counter-examples to pro\-per\-ty~\eqref{sumruleS}. \item Vanishing $\Sigma_{j}$ (cf.~\eqref{sumruleS}) for complex, quaternionic and even real irreps $j$ for which all \mbox{$\phi_u(j)=1$}. We call such cases ``accidental cancellations'', for lack of a~better understanding. \end{itemize} \section{Finite group considerations}\label{section3} \subsection{About representations, faithfulness, and embeddings}\label{subsection31} Before embarking on the study of Drinfeld doubles for f\/inite subgroups of ${\rm SU}(2)$ and ${\rm SU}(3)$, we need to introduce some terminology and remind the reader of a~few properties that belong to the folklore of f\/inite group theory but that we shall need in the sequel. Any faithful unitary $n$-dimensional (linear) representation of a~f\/inite group $G$ on a~complex vector space def\/ines an embedding of $G$ into ${\rm U}(n)$. An extra condition (the determinant of the representative group elements should be $1$) is required in order for $G$ to appear as a~subgroup of ${\rm SU}(n)$.
Let us assume, from now on, that $G$ is a~subgroup of ${\rm SU}(n)$. When the chosen $n$-dimensional representation def\/ining the embedding is irreducible, $G$ itself is called irreducible with respect to ${\rm SU}(n)$. When $n>2$, we call {\it embedding representations} with respect to ${\rm SU}(n)$, those irreps that are $n$-dimensional, irreducible and faithful. The type of a~representation can be real, complex, or quaternionic. As the fundamental (and natural) representation of ${\rm SU}(2)$ is quaternionic, we adopt in that case a~slightly more restrictive def\/inition. For a~f\/inite group~$G$, isomorphic with an irreducible subgroup of ${\rm SU}(2)$, we call {\it embedding representations} with respect to ${\rm SU}(2)$, those irreps that are $2$-dimensional, irreducible, faithful and of quaternionic type. More details can be found in Appendix~\ref{appendixA}. At times, we shall need the following notion. A f\/inite subgroup $G$ of a~Lie group is called Lie primitive (we shall just write ``primitive'') if it is not included in a~proper closed Lie subgroup. Although irreducible with respect to their embedding into ${\rm SU}(3)$, some of the subgroups that we shall consider later are primitive, others are not. More details can be found in Appendix~\ref{appendixB}. The fundamental representation of dimension $3$ of ${\rm SU}(3)$, or its conjugate, is usually called the natural (or def\/ining) representation, and it is faithful. In this paper we shall mostly, but not always, consider subgroups that are both irreducible and primitive, and the given notion of embedding representation is appropriate (it may be non-unique, see below). However, in some cases, the previous notion of embedding representation should be amended. This is in particular so for the cyclic subgroups of ${\rm SU}(2)$, where no irreducible, faithful, $2$-dimensional representation of quaternionic type exists.
Notice that for some ${\rm SU}(3)$ subgroups, there are cases where the embedding representation~-- as def\/ined previously~-- is not of complex type, for the reason that no such representation exists: see below the examples of $\Delta(3\times2^2)$, $\Delta(6\times2^2)$, and $\Sigma(60)$, the tetrahedral, octahedral, and icosahedral subgroups of ${\rm SO}(3)\subset{\rm SU}(3)$, which have no {\it complex} 3-dimensional irrep. They are not primitive subgroups of ${\rm SU}(3)$. In the present paper we are not so much interested in fusion graphs associated with f\/inite groups $G$, rather we are interested in their Drinfeld doubles $D(G)$. There is one fusion graph for each irrep of $D(G)$ and it is of course out of the question to draw all of them in this paper. For this reason, we are facing the problem of which representation to select. As recalled above, irreps of~$D(G)$ are labeled by pairs (a conjugacy class of~$G$ and an irrep of the corresponding centralizer). One special conjugacy class is the class of the neutral element $\{e\}$ of $G$, which has only one element, its centralizer being $G$ itself. As we are mostly interested in irreducible embeddings that def\/ine f\/inite subgroups~$G$ of~${\rm SU}(n)$ for $n=2$ or~$3$, the selected representation of~$D(G)$ will {(with a~few exceptions, see later)} be of the type $(\{e\},\rho)$ with $\rho$ chosen among the irreducible faithful representations of~$G$ of the appropriate dimension $n$. We shall call ``embedding irrep'' of the Drinfeld double of $G$, any pair $(\{e\},\rho)$ where $\rho$ is an embedding representation for $G$, with the above meaning.
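The counting of irreps of $D(G)$ implied by this labelling is easy to automate. The following sketch (ours, not part of the paper's computations) does the count for $G=S_3$, whose Drinfeld double is well known to have $3+2+3=8$ irreps; the group is represented by permutation tuples, and the number of irreps of each centralizer is obtained as its number of conjugacy classes.

```python
from itertools import permutations

# Count the irreps of the Drinfeld double D(G) for G = S_3, using the
# labelling recalled above: one irrep of D(G) for each pair
# (conjugacy class of G, irrep of the centralizer of a representative).
# Since the number of irreps of a finite group equals its number of
# conjugacy classes, no character theory is needed for the count.

def compose(p, q):
    # (p o q)(i) = p[q[i]] for permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugacy_classes(H):
    # conjugacy classes of the group (or subgroup) given as a list
    classes, seen = [], set()
    for g in H:
        if g in seen:
            continue
        orbit = {compose(compose(h, g), inverse(h)) for h in H}
        classes.append(orbit)
        seen |= orbit
    return classes

G = list(permutations(range(3)))           # S_3 as permutation tuples
classes = conjugacy_classes(G)
rank = 0
for c in classes:
    a = next(iter(c))                      # class representative
    centralizer = [h for h in G if compose(h, a) == compose(a, h)]
    rank += len(conjugacy_classes(centralizer))

print(len(classes), rank)                  # 3 classes; D(S_3) has rank 8
```

The three terms contributing to the rank are the class numbers of the centralizers $S_3$, $\mathbb{Z}_2$ and $\mathbb{Z}_3$, i.e., $3+2+3=8$.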
\subsection{About faithfulness, and connectedness of fusion graphs} Faithfulness of the selected embedding representation (of~$G$) can be associated with several concepts and observations related to connectedness properties of the associated fusion graph of~$G$ or of~$D(G)$: the former is connected whereas the latter appears to have a~number of connected components equal to the class number of~$G$. Fundamental representations of a~simple complex Lie group or of a~real compact Lie group can be def\/ined as those irreps whose highest weight is a~fundamental weight. These irreps generate by fusion (tensor product) all irreps. Still for Lie groups, and more generally, we may ask for which irreducible representation $\rho$, if any, fundamental or not, can we obtain each irreducible representation as a~subrepresentation of a~tensor power $\rho^{\otimes k}$, for some~$k$. This is a~classical problem and the answer is that $\rho$ should be faithful~\cite{Huang}; in the same reference it is shown that, except when $G={\rm Spin}(4n)$, there exist faithful irreducible representations for all the simple compact Lie groups. In the theory of f\/inite groups, there is no such notion as being fundamental, for an irreducible representation. However, one can still ask the same question as above, and the result turns out to be the same (Burnside, as cited in~\cite{Curtis-Reiner}): if $G$ is a~f\/inite group and $\rho$ is a~faithful representation of~$G$, then every irreducible representation of $G$ is contained in some tensor power of~$\rho$. In other words, the fusion graph associated with a~faithful representation of a~f\/inite group is connected, since taking tensor powers of this representation amounts to following paths on its fusion graph, and all the irreps appear as vertices. Let $H$ be a~subgroup of the f\/inite group $G$ and let $\rho$ be a~faithful representation of~$G$. 
Then~$\rho_H$, the restriction of $\rho$ to~$H$, may not be irreducible, even if $\rho$ is, but it is clearly faithful: its kernel, a~subgroup of~$H$, is of course trivial since the kernel of $\rho$ was already trivial in the f\/irst place. Therefore every irreducible representation of~$H$ is contained in some tensor power of~$\rho_H$. Writing $\rho_H a =\sum\limits_b F_{\rho,a}^{\phantom{\rho,}b} b$, where $a$, $b$, $\ldots$ are irreps of $H$, def\/ines a~matrix $F_\rho$ which is the adjacency matrix of a~graph. This (fusion) graph is connected, for the same reason as before. Notice that $\rho_H$ itself may not appear among its vertices since it may be non irreducible. As mentioned previously every representation $\rho$ of $G$ determines a~representation $(e,\rho)$ of~$D(G)$. The representation rings for the group $G$ and for the algebra~$D(G)$ are of course dif\/ferent, the fusion coef\/f\/icients of the former being obtained from its character table, those of the latter from the modular $S$-matrix and the Verlinde formula, but the former can be consi\-de\-red as a~subring of the latter. Since irreps of the double fall naturally into blocks indexed by conjugacy classes, we expect that the fusion graph of an embedding irrep of~$D(G)$ will have several connected components, one for each conjugacy class, i.e., a~number of components equal to the number of classes of~$G$, i.e., to the class number. This graph property is actually expected for all the irreps of $D(G)$ stemming from $(z,\rho)$, with $z\in Z(G)$ and a~faithful irreducible representation $\rho$ of $G$. 
Indeed, the usual character table of $G$ can be read from the $S$ matrix in the following way: extract from $S$ the submatrix made of its f\/irst $\ell$ rows (the ``classical irreps''~$r$), in the latter keep only the f\/irst column of each of the $\ell$ blocks (corresponding to dif\/ferent classes~$c$ of~$G$) and f\/inally multiply these columns by ${|G|/|c|}$, resp.\ $\sqrt{{|G|/|c|}}$; this yields the matrix $\chi^{\phantom{c}r}_c$, resp.\ $\hat\chi^{\phantom{c}r}_c$ def\/ined in~\eqref{tensor-group}. A similar construction applies to the character tables pertaining to the dif\/ferent centralizers of conjugacy classes, which may also be extracted from the $S$-matrix~-- the latter is much more than a~simple book-keeping device for the character tables of the dif\/ferent centralizers of conjugacy classes since it couples these dif\/ferent blocks in a~non-trivial way. On the basis of all examples that we have been considering, and in view of the above discussion, we conjectured {\it The fusion graph of an embedding irrep of $D(G)$ has $\ell$ connected components, with $\ell$, the class number, equal to the number of irreps or of conjugacy classes of $G$.} A formal proof of this property, that we shall not give\footnote{Note added: As noticed by an anonymous referee to whom we are deeply indebted, such a~proof actually follows from simple considerations making use of results of~\cite{Cib, DGNO}, in particular of the formula $(e,\rho)\otimes(a,\delta)=(a,\rho\downarrow^G_{C_G(a)}\otimes \delta)$ where $\rho$ is an irrep of $G$ and $\delta$ an irrep of the centralizer $C_G(a)$ of $a\in G$.}, can exploit, in the language of fusion categories, the relation between the representation rings of $G$ and $D(G)$, in particular a~generalization (see for instance~\cite{Witherspoon}) of the mechanism of induction and restriction. 
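The link between faithfulness and connectedness of the fusion graph can be checked in a few lines in the simplest situation. The sketch below (ours, an illustration only) uses the fusion rule $\chi_j\chi_k=\chi_{j+k \bmod n}$ of the cyclic group $\mathbb{Z}_n$, for which the character $\chi_k$ is faithful exactly when $\gcd(n,k)=1$; Burnside's criterion quoted above then reduces to an elementary divisibility statement.

```python
from math import gcd

# For Z_n the irreps are the characters chi_k, with fusion rule
# chi_j x chi_k = chi_{j+k mod n}, so the fusion graph of chi_k has an
# edge j -> j+k (mod n).  Taking tensor powers of chi_k amounts to
# walking this graph starting from the trivial irrep chi_0.

def fusion_component(n, k, start=0):
    # vertices of the connected component of `start` in the fusion
    # graph of chi_k, obtained by repeatedly tensoring with chi_k
    comp, j = set(), start
    while j not in comp:
        comp.add(j)
        j = (j + k) % n
    return comp

n = 6
for k in range(n):
    connected = len(fusion_component(n, k)) == n
    faithful = gcd(n, k) == 1      # chi_k faithful <=> k invertible mod n
    assert connected == faithful   # Burnside: connected <=> faithful
print("connectedness <=> faithfulness verified for Z_6")
```

The component of the trivial irrep is the subgroup generated by $k$ in $\mathbb{Z}_n$, of size $n/\gcd(n,k)$, which exhausts all irreps precisely in the faithful case.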
Notice that an embedding fusion graph for the group~$G$ (the fusion graph of an embedding representation) can be obtained by selecting the connected component of~$(e,1)$ in the corresponding embedding graph of its double. Since it describes a~faithful representation, the number of vertices of this connected component is also equal to the class number of~$G$. \subsection{Additional remarks} In general, a~f\/inite group $G$ may have more than one irreducible faithful representation of given dimension $n$ (up to conjugation, of course); for instance a~representation of complex type will always appear together with its complex conjugate in the table of characters, but there are groups that possess several conjugate pairs of inequivalent faithful irreps of complex type, with the same dimension, and they may also possess self-dual (i.e., real or quaternionic) faithful irreps, also of the same dimension, see the example of $\Sigma_{168}\times\mathbb{Z}_3$ below. There are even f\/inite groups that have more than one pair of faithful irreducible representations of complex type, with dif\/ferent dimensions, for instance the ${\rm SU}(3)$ subgroup $\Sigma_{168}\times\mathbb{Z}_3$ possesses faithful irreps of complex type in dimensions $3$, $6$, $7$, $8$. With the exception of cyclic groups, all the f\/inite subgroups of ${\rm SU}(2)$ have at least one $2$-dimensional irreducible faithful representation of quaternionic type. \begin{figure}[htp]\centering \includegraphics[scale=0.45]{CoquereauxZuber-Fig1a} \hspace{4.0cm} \includegraphics[scale=0.5]{CoquereauxZuber-Fig1b} \\ \includegraphics[scale=0.5]{CoquereauxZuber-Fig1c} \hspace{5mm} \includegraphics[scale=0.55]{CoquereauxZuber-Fig1d} \caption{Fusion graphs of the classical irreps of dimension $2$ for the binary dihedral $\widehat{D}_2$, $\widehat{D}_3$, $\widehat{D}_4$, $\widehat{D}_5$.
The graphs of the faithful irreps have a~number of connected components equal to the class number (resp.\ 5, 6, 7, 8).}\label{fig:D2D3D4D5} \end{figure} The smallest binary dihedral group (or quaternion group) $\hat D_2$ has only one such irrep, and it is quaternionic, but higher $\hat D_n$'s have $2$-dimensional irreps that may be real and non-faithful or quaternionic and faithful; as explained above, we shall call embedding representations with respect to ${\rm SU}(2)$ only those that are faithful. One can check on Fig.~\ref{fig:D2D3D4D5}, which gives for the binary dihedral groups all the fusion graphs associated with $2$-dimensional representations of the classical block of the Drinfeld double, that, as expected, only the faithful ones have a~number of components equal to the number of classical irreps. The binary tetrahedral group $\hat T$ has three $2$-dimensional irreps and they are faithful: only one (that we label 4) is quaternionic, whereas those labelled 5 and 6 are complex (and conjugated). The fusion graph associated with $N_4$, that we call embedding representation with respect to~${\rm SU}(2)$, is the af\/f\/ine $E_6$ graph (by McKay correspondence, see below); the fusion graphs associated with $N_5$ or $N_6$ are also connected graphs (these representations are faithful!), but they have rather dif\/ferent features. The binary octahedral group also has three $2$-dimensional irreps, but one ($N_3$) is real and not faithful; the other two, $N_4$ and $N_5$, that we call embedding irreps, are both faithful and quaternionic. The binary icosahedral group has two $2$-dimensional irreps ($N_2$, $N_3$); they are both faithful and quaternionic.
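The McKay mechanism behind these classical-block graphs can be reproduced directly from character tables. As an illustration (ours; the character table of the quaternion group $Q_8=\widehat{D}_2$ is hard-coded below), the McKay/fusion graph of its quaternionic $2$-dimensional irrep comes out as the af\/f\/ine $D_4$ star: the $2$-dimensional node linked once to each of the four $1$-dimensional ones.

```python
from fractions import Fraction

# McKay graph of Q8 from its character table.  Classes
# {1},{-1},{+-i},{+-j},{+-k} have sizes 1,1,2,2,2; the rows below are
# the four 1-dimensional irreps followed by the 2-dimensional
# (embedding) irrep.  All character values are real here.
sizes = [1, 1, 2, 2, 2]
chi = [
    [1,  1,  1,  1,  1],
    [1,  1,  1, -1, -1],
    [1,  1, -1,  1, -1],
    [1,  1, -1, -1,  1],
    [2, -2,  0,  0,  0],   # the faithful quaternionic 2-dim irrep
]
order = sum(sizes)          # |Q8| = 8
rho = chi[4]                # tensoring representation

def mult(a, b):
    # multiplicity of irrep b in rho (x) a:
    # (1/|G|) * sum over classes of |c| * chi_rho * chi_a * chi_b
    return sum(Fraction(s * rho[c] * chi[a][c] * chi[b][c], order)
               for c, s in enumerate(sizes))

A = [[int(mult(a, b)) for b in range(5)] for a in range(5)]
# The 2-dim node (index 4) is linked once to each 1-dim node and to
# nothing else: the affine D_4 diagram, as the McKay correspondence
# predicts for \hat{D}_2.
print(A)
```

The same multiplicity formula, applied to the other binary polyhedral groups, produces the af\/f\/ine $D$ and $E$ diagrams quoted in the text.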
\begin{figure}[htp]\centering \includegraphics[width=6.5cm]{CoquereauxZuber-Fig2} \caption{Fusion graph $N_6$ of the subgroup $\Sigma_{168}\times\mathbb{Z}_3$.} \label{fig:Sigma168xZ3} \end{figure} The case of ${\rm SU}(3)$ is a~bit more subtle because irreducible subgroups like $\Sigma_{60}$ may be asso\-ciated with an imprimitive embedding. Here we just illustrate with one example the importance of the faithfulness requirement: the group $\Sigma_{168}\times\mathbb{Z}_3$ has $18$ irreps, six of them, actually three pairs of complex conjugates, labelled $4$, $5$, $6$, $7$, $8$, $9$ are of dimension $3$, but the irreps of the pair $(4,5)$ are not faithful whereas the two pairs $(6,7)$ and $(8,9)$ are. As it happens, the fusion graphs associated with the faithful representations (namely those labeled $6$, $7$, $8$, $9$, $11$, $12$, $14$, $15$, $17$, $18$) are connected; this is not so for the others, in particular for the $3$-dimensional irreps labelled $4$ and $5$. So the natural (or embedding) irreps, with respect to ${\rm SU}(3)$, are $6$, $7$, $8$, $9$ and we may choose to draw the fusion graph of $N_6$ for instance (see Fig.~\ref{fig:Sigma168xZ3}). The Drinfeld double of this group {$\Sigma_{168}\times\mathbb{Z}_3$} has $288$ irreps, its f\/irst block has $18$ irreps, as expected, that we label as for the f\/inite group itself, and again, we shall give the fusion graph of $N_6$. This graph is no longer connected but its number of connected components is equal to the number of blocks (also the number of conjugacy classes, or of irreps of $G$ itself, namely $18$), see below Fig.~\ref{fig:Sigma168x3_6}. These features are shared by $N_6$, $N_7$, $N_8$, $N_9$ but not by $N_4$ or $N_5$. The conjugacy class determined by every central element of $G$ contains this element only, and the centralizer of a~central element of $G$ is $G$ itself, therefore the irreps of $G$ should ap\-pear~$|Z(G)|$ times in the list of irreps of $D(G)$, where $Z(G)$ is the centre of $G$.
For example, the group $\Sigma_{36\times3}$ has four pairs of $3$-dimensional complex conjugated irreps (every one of them can be considered as an embedding irrep), and its center is $\mathbb{Z}_3$, so we expect that $3\times(4\times2)=24$ irreps, among the $168$ irreps of the double, will have similar properties, in particular the same quantum dimensions (namely~$3$, since, as we know, quantum dimensions of irreps of Drinfeld doubles are integers), and also isomorphic fusion graphs (i.e., forgetting the labeling of vertices). \section{Drinfeld doubles (examples)}\label{section4} \looseness=-1 In the following, we review a~certain number of f\/inite subgroups of ${\rm SU}(2)$ and ${\rm SU}(3)$, giving for each its essential\footnote{For ``famous'' groups of relatively small order, one can retrieve a~good amount of information from~\cite{grouppropssite}.} data, the order of the group $G$, its name in GAP nomenclature, class number, and for its Drinfeld double $D(G)$, the rank (number of irreps), the numbers $N_c$ of irreps in each block, and the quantum dimensions. In order to shorten the often long lists of quantum dimensions, we write $n_s$ when $n$ is repeated $s$ times in a~list\footnote{The subindex $s$ therefore does not refer to an $s$-integer (all our quantum dimensions are integers in this paper!), and $s$ does not denote a~multiplicity in the usual sense.}. We also give the label(s) of the embedding representation(s), the fusion graph of which is then displayed. On the connected component of the identity representation (i.e., the ``classical block'') one recognizes the corresponding fusion graph of the original group $G$. Moreover one checks that the total number of connected components of any of these embedding fusion graphs equals the class number of $G$, as conjectured in Section~\ref{subsection31}.
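For an abelian group all of this data can be generated from the explicit modular matrices of the double. The sketch below (ours) uses the standard $S$-matrix of $D(\mathbb{Z}_n)$, $S_{(a,b),(c,d)}=\frac{1}{n}\,\omega^{-(ad+bc)}$ with $\omega={\rm e}^{2{\rm i}\pi/n}$ (a formula from the finite-group modular data literature; the sign in the exponent is convention dependent), and checks unitarity, the integrality of the Verlinde fusion coef\/f\/icients, and the fact, used below for the cyclic doubles, that $\Sigma_j=\sum_i S_{ij}$ vanishes for every non-trivial irrep; the rank is $n^2$, in agreement with the value $36=6^2$ listed below for $\mathbb{Z}_6$.

```python
import cmath

# Modular data of D(Z_n): irreps are pairs (a, b), a labelling the
# conjugacy class and b the character of its centralizer (the whole
# group), with S_{(a,b),(c,d)} = (1/n) * omega^{-(a d + b c)}.
n = 3
w = cmath.exp(2j * cmath.pi / n)
labels = [(a, b) for a in range(n) for b in range(n)]
r = n * n                                   # rank of D(Z_n)
S = [[w ** (-(a * d + b * c)) / n for (c, d) in labels]
     for (a, b) in labels]

# S is unitary
for i in range(r):
    for j in range(r):
        v = sum(S[i][l] * S[j][l].conjugate() for l in range(r))
        assert abs(v - (1 if i == j else 0)) < 1e-9

def fusion(i, j, k):
    # Verlinde formula: N_{ij}^k = sum_l S_il S_jl S*_kl / S_1l
    # (index 0 is the trivial irrep (0,0))
    return sum(S[i][l] * S[j][l] * S[k][l].conjugate() / S[0][l]
               for l in range(r)).real

for i in range(r):
    for j in range(r):
        for k in range(r):
            v = fusion(i, j, k)
            assert abs(v - round(v)) < 1e-9  # integer fusion coefficients

Sigma = [sum(S[i][j] for i in range(r)) for j in range(r)]
assert abs(Sigma[0] - n) < 1e-9              # trivial irrep: Sigma = n
assert all(abs(s) < 1e-9 for s in Sigma[1:]) # all others vanish
print("D(Z_3): rank", r, "- unitarity, integrality, Sigma checks passed")
```

With this convention the fusion ring of $D(\mathbb{Z}_n)$ is group-like, $(a,b)\otimes(c,d)=(a+c,b+d)$, which is what the integrality check recovers numerically.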
\subsection[Drinfeld doubles of f\/inite subgroups of ${\rm SU}(2)$]{Drinfeld doubles of f\/inite subgroups of $\boldsymbol{{\rm SU}(2)}$} \subsubsection[Remarks about f\/inite subgroups of ${\rm SU}(2)$]{Remarks about f\/inite subgroups of $\boldsymbol{{\rm SU}(2)}$} \label{fin-sbgps-SU2} {\bf General.} ${\rm Spin}(3)={\rm SU}(2)$ is the (universal) double cover of ${\rm SO}(3)$. $\mathbb{Z}_2$ is the center of ${\rm SU}(2)$. With every subgroup $\Gamma$ of ${\rm SO}(3)$ is associated its binary cover $\widehat{\Gamma}$, a~subgroup of ${\rm SU}(2)$. Then, of course, $\Gamma\cong\widehat{\Gamma}/\mathbb{Z}_2$. Finite subgroups of ${\rm SU}(2)$ are of the following kinds: cyclic, binary dihedral, binary tetrahedral, binary octahedral, binary icosahedral. The fusion graphs presented below refer to the fusion matrix of the embedding representation, unless stated otherwise. {\bf Cyclic groups.} \begin{itemize}\itemsep=0pt \item $\mathbb{Z}_2$ cannot be a~subgroup of $\mathbb{Z}_q$ when $q$ is odd (the order of a~subgroup should divide the order of the group!) but it is a~subgroup of $\mathbb{Z}_q$ when $q$ is even ($q=2p$), and $\mathbb{Z}_p=\mathbb{Z}_q/\mathbb{Z}_2$. \item Cyclic groups $\mathbb{Z}_q$, for all $q\in\mathbb{N}$, are subgroups of ${\rm SO}(3)$ and also subgroups of ${\rm SU}(2)$. \item When $q$ is even ($q=2p$), we may consider the subgroup $\mathbb{Z}_q$ of ${\rm SU}(2)$ as the binary group corresponding to the subgroup $\mathbb{Z}_p$ of ${\rm SO}(3)$ ({and} this $p$ can be even or odd). \item When $q$ is odd, $\mathbb{Z}_q$ is a~subgroup of ${\rm SU}(2)$, but not the binary cover of a~subgroup of ${\rm SO}(3)$. \item The homology or cohomology groups $H_2(\mathbb{Z}_q,\mathbb{Z})\cong H^2(\mathbb{Z}_q,\mathbb{C}^\times)\cong H^2(\mathbb{Z}_q,U(1))$ are trivial (``the Schur multiplier is trivial''). Hence any Schur cover of $\mathbb{Z}_q$ is equal to itself.
Nevertheless, the cyclic group of order $2p$ can be considered as an extension, by $\mathbb{Z}_2$, of a~cyclic group of order $p$. {$\mathbb{Z}_{2p}$} is the binary cover of $\mathbb{Z}_{p}$ but it is not a~Schur cover of the latter. \end{itemize} {\bf Dihedral groups and their binary covers.} \begin{itemize}\itemsep=0pt \item Dihedral groups $D_n$, of order $2n$ are, for all $n\in\mathbb{N}$, subgroups of ${\rm SO}(3)$. \item The smallest one, $D_1$, of order $2$, is isomorphic {to} $\mathbb{Z}_2$ and is usually not considered as a~dihedral group. \item In the context of the study of the covering ${\rm SU}(2)\to{\rm SO}(3)$, all of these groups $D_n$ can be covered by subgroups {$\widehat{D}_n$} of ${\rm SU}(2)$ of order $4n$, called binary dihedral groups (they are also called dicyclic groups). \item The Schur multiplier of dihedral groups $H_2(D_{n},\mathbb{Z})\cong H^2(D_{n},\mathbb{C}^\times)\cong H^2(D_{n},U(1))$ is trivial when $n$ is odd, and is $\mathbb{Z}_2$ when $n$ is even. Nevertheless, in both cases (even or odd) one may consider the corresponding binary dihedral groups (of order $4n$) that are subgroups of ${\rm SU}(2)$. \end{itemize} {\bf The tetrahedral group $\boldsymbol{T}$ and its binary cover $\boldsymbol{\widehat{T}\cong{\rm SL}(2,3)}$.} The tetrahedral group is $T=\widehat{T}/\mathbb{Z}_2\cong A_4$ (alternating group on four elements). {\bf The cubic (or octahedral) group $\boldsymbol{O}$ and its binary cover $\boldsymbol{\widehat{O}}$.} The octahedral group is $O=\widehat{O}/\mathbb{Z}_2\cong S_4$ (symmetric group on four elements) and its Schur multiplier is $\mathbb{Z}_2$. Warning: $\widehat{O}$ is {\it not} isomorphic {to} ${\rm GL}(2,3)$, although this wrong statement can be found in the literature. It can be realized in ${\rm GL}(2,9)$, as the matrix subgroup generated by $a=((-1,1),(0,-1))$ and $b=((-u,-u),(-u,0))$, where $u$, obeying $u^2=-1$, is an element added to $F_3$ to generate $F_9$. 
We thank~\cite{OlegCubicF9} for this information. If $w$ generates $F_9$ (so $w^9={w}$), we take $u=w^2$. To our knowledge, this is the smallest realization of $\widehat{O}$ as a~matrix group, and we used it to calculate the Drinfeld double of the binary octahedral group. Using GAP nomenclature, $\widehat{O}$ can be recognized as SmallGroup(48,28). {\bf The icosahedral group $\boldsymbol{I}$ and its binary cover $\boldsymbol{\widehat{I}\cong{\rm SL}(2,5)}$.} The icosahedral group is $I\cong A_5$ (alternating group on f\/ive elements), the smallest non-abelian simple group, and its Schur multiplier is $\mathbb{Z}_2$. {\bf Remarks about f\/inite subgroups of $\boldsymbol{{\rm SO}(3)}$ and $\boldsymbol{{\rm SU}(2)}$ (continuation).} \begin{itemize}\itemsep=0pt \item Dihedral groups $D_{n}$, of order $2n$, with $n$ odd, and cyclic groups (all of them) are the only subgroups of ${\rm SO}(3)$ that have trivial Schur multiplier. \item The polyhedral groups are subgroups of ${\rm SO}(3)$. The binary polyhedral groups are subgroups of ${\rm SU}(2)$. The so-called ``full polyhedral groups'' (that we do not use in this paper) are subgroups of O(3) and should not be confused with the binary polyhedral groups. Notice however that the full tetrahedral group is isomorphic to the octahedral group (both being isomorphic to $S_4$). \item The Schur multipliers of the exceptional polyhedral groups (tetrahedral, octahedral and icosahedral) are non-trivial (they are equal to $\mathbb{Z}_2$) and, for them, the binary cover can be used as a~Schur cover (however a f\/inite group may have several non-isomorphic Schur covers, see the remarks in Appendix~\ref{appendixC}). The same is true, when $n$ is even, for the dihedral groups~$D_n$. \item All discrete f\/inite subgroups $G$ of ${\rm SU}(2)$ have trivial second cohomology $H^2(G,\mathbb{C}^\times)=1$ and thus trivial Schur multiplier.
\end{itemize} {\bf About the fusion graphs of the doubles.} \begin{itemize}\itemsep=0pt \item We shall focus our attention on the embedding irreps (or one of them if there are several) as def\/ined at the beginning of this section and on its fusion matrix and graph. Its label will be called ``embedding label'' in the Tables below, and its fusion graph called the embedding graph. \item In all these embedding graphs, the connected part relative to the ``classical'' representations is isomorphic to an af\/f\/ine Dynkin diagram of type $A_{\cdot}^{(1)}$, $D_{\cdot}^{(1)}$, $E_6^{(1)}$, $E_7^{(1)}$, $E_8^{(1)}$. This is of course nothing else than a~manifestation of the celebrated McKay correspondence in this context~\cite{JMK}. \item Another comment is that in Drinfeld doubles of binary groups, the classes of the identity $I$ and of its opposite $-I$ have the same centralizer, viz.\ the group itself. To any embedding representation of the double (in the block of $I$) there is an associated irreducible in the block of~$-I$. We found it useful to draw the graphs of these two irreducibles together in dif\/ferent colors in several cases, see below Figs.~\ref{fig:E6sim},~\ref{fig:edge},~\ref{fig:edgeI}. \end{itemize} \subsubsection{Drinfeld doubles of the (binary) cyclic subgroups} We consider the example of $\mathbb{Z}_6$. Order of the group: $6$ {GAP nomenclature: SmallGroup(6,2)} Class number: $\ell=6$ Rank (def\/ined as the number of irreps of $D(G)$): ${r}=36$ Numbers $N_c$ of irreps of $D(G)$ in each block = ${6,6,6,6,6,6}$ Quantum dimensions: $(1_6;1_6;1_6;1_6;1_6;1_6)$ in which we use a~shorthand notation: $p_s$ indicates that there are $s$ irreps of dimension $p$; dif\/ferent blocks are separated by semi-colons. $d_{\mathcal B}={{2^6}{3^6}}$ Embedding labels: 4$\oplus$6.
\looseness=-1 As explained above, in such an abelian group in which irreps are one-dimensional, the 2-dimensional embedding representation is the direct sum of two irreps: it is reducible as a~complex representation, irreducible as a~real one. See the fusion graph of $N_4$ on Fig.~\ref{fig:Z6}. The ``embedding graph'' associated with $N_4+N_6$ looks the same, but with unoriented edges between vertices. \begin{figure}[htp]\centering \includegraphics[width=6.4cm]{CoquereauxZuber-Fig3} \caption{Fusion graph $N_4$ of the Drinfeld double of the cyclic group $\mathbb{Z}_6$.} \label{fig:Z6} \end{figure} Take $G=\mathbb{Z}_p$, with $p$ odd. This is not a~binary cover, but it is nevertheless a~subgroup of~${\rm SU}(2)$. Only the trivial representation is of real type. All others are complex. Observation: Only the trivial representation has non-vanishing $\Sigma$. In particular, all complex representations of the double have vanishing $\Sigma$. The sum rule~\eqref{sumrule} holds. Take $G=\mathbb{Z}_p$, with $p$ even (so $G$ is a~binary cover). There are several representations of real type, all the others being complex. Observation: Only the trivial representation has non-vanishing $\Sigma$. In particular, all complex representations of the double have vanishing $\Sigma$. The sum rule~\eqref{sumrule} holds. \subsubsection{Drinfeld doubles of the binary dihedral subgroups} We consider the example of $\widehat{D}_5$. Order of the group: $20$ {GAP nomenclature: SmallGroup(20,1)} Class number: $\ell=8$ Rank: ${r}=64$ $ N_c={8,8,4,4,10,10,10,10} $ Quantum dimensions: $(1_4,2_4;1_4,2_4;5_4;5_4;2_{10};2_{10};2_{10};2_{10})$ $ d_{\mathcal B}={{2^8}{4363^1}} $ {Embedding labels:} {$5,7$}. See the fusion graph of $N_5$ on Fig.~\ref{fig:D5}. That of $N_7$ looks very similar, up to a~permutation of labels.
\begin{figure}[htp]\centering \includegraphics[width=7.2cm]{CoquereauxZuber-Fig4} \caption{Fundamental fusion graph of $N_5$ in the Drinfeld double of the binary dihedral $\widehat{D}_5$.} \label{fig:D5} \end{figure} The f\/irst $n+3$ irreps of the Drinfeld double of the binary dihedral group $\widehat{D}_n$ are classical (they can be identif\/ied with the irreps of $\widehat{D}_n$). They are of dimensions $1$ or $2$. The sum of their squared dimensions is $4n$. Among them, $n-1$ are of dimension $2$ (the others are of dimension $1$). We draw below (Fig.~\ref{fig:D2D3D4D5}) the fusion graphs associated with these irreps of dimension~$2$. $\widehat{D}_2$: ${N_c=}{5,5,4,4,4}$. Quantum dimensions: $(1_4,2_1;1_4,2_1;2_4;2_4;2_4)$. $\widehat{D}_3$: ${N_c=}{6,6,6,4,4,6}$. Quantum dimensions: $(1_4,2_2;1_4,2_2;2_6;3_4;3_4;2_6)$. $\widehat{D}_4$: $N_c={7,7,8,4,4,8,8}$. Quantum dimensions: $(1_4,2_3;1_4,2_3;2_8;4_4;4_4;2_8;2_8)$. $\widehat{D}_5$: $N_c={8,8,4,4,10,10,10,10}$. Quantum dimensions: $(1_4,2_4;1_4,2_4;5_4;5_4;2_{10};2_{10};2_{10};2_{10})$. We analyse the case of $\widehat{D}_5$; the other tested cases are similar. The 12 irreps labeled {3, 4, 11, 12, 17, 18, 19, 20, 21, 22, 23, 24} on Fig.~\ref{fig:D5} are complex. The others ($64-12=52$) are self-conjugate, with 28 being real and 24 quaternionic. The sum rule~\eqref{sumrule} holds. Note: The sum $\Sigma_j$ vanishes for 50 irreps: the 12 irreps that are complex, but also for 38 others that are self-conjugate, namely all the 24 quaternionic and 14 real. On the other hand, $\Sigma_j$ does not vanish for the 14 real irreps {1, 2, 6, 8, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43}. All these vanishings of $\Sigma$ follow from the existence of some unit, as in Proposition~\ref{proposition3}. Thus there are no accidental cancellations in that case. See Table~\ref{ssgrSU2} for a~summary. \begin{table}[h] \centering \caption{Data and status of the sumrules~\eqref{sumrule} and~\eqref{sumruleS} for Drinfeld doubles of some subgroups of ${\rm SU}(2)$.
In each box, $n \checkmark$ means the number of irreps which satisfy the sumrule in question, the sign $\checkmark$ alone meaning ``all of them'', $n A$ gives the number of ``accidental'' vanishings, not due to the existence of a~unit as in Proposition~\ref{proposition3}.}\label{ssgrSU2} \vspace{1mm} $ \noindent\renewcommand{\arraystretch}{1.2} \begin{array}{|c||c||c|c|c|c|c|c|c|} \hline {\mbox{ name}} & {\eqref{sumrule}\atop{\rm before\atop doubling}}&r&\#i,\;i\ne\bar\imath,\;\forall\, j\atop\sum\limits_k N_{ij}^k\buildrel{?}\over=\sum\limits_k N_{\bar\imath j}^k&{\#\;\hbox{complex}\atop\#\;\sum\limits_j S_{i j}=0}&{\#\;\hbox{quatern.}\atop\#\sum\limits_j S_{i j}=0}&{\#\;\hbox{real}\atop\#\sum\limits_j S_{i j}=0}&\#\hbox{units} \\ [4pt] \hline \hline \mathbb{Z}_5&\checkmark&25&\checkmark&24\atop24\checkmark\;0A&0&1\atop0&25 \\[0pt] \hline \mathbb{Z}_6&\checkmark&36&\checkmark&32\atop32\checkmark\;0A&0&4\atop3\checkmark\;0A&36 \\[0pt] \hline \hline \widehat{D}_2&\checkmark&22&\checkmark&0&8\atop8\checkmark\;0A&14\atop6\checkmark\;0A&8 \\[0pt] \hline \widehat{D}_3&\checkmark&32&\checkmark&12\atop12\,\checkmark\;0A&{8}\atop{8}\checkmark\;0A&12\atop6\checkmark\;0A&8 \\[2pt] \hline \widehat{D}_4&\checkmark&46&\checkmark&0&20\atop20\checkmark\;0A&26\atop12\checkmark\;0A&8 \\[0pt] \hline \widehat{D}_5&\checkmark&64&\checkmark&12\atop12\checkmark\;0A&24\atop24\checkmark\;0A&28\atop14\checkmark\;0A&8 \\[0pt] \hline \hline \widehat{T}&\checkmark&42&\checkmark&32\atop32\,\checkmark\;4A&{4}\atop{4}\checkmark\;0A&6\atop2\checkmark\;2A&6 \\[2pt] \hline \widehat{O}&\checkmark&56&\checkmark&0&26\atop26\checkmark\;0A&30\atop13\checkmark\;3A&4 \\[1pt] \hline \widehat{I}&\checkmark&74&\checkmark&0&36\atop36\checkmark\;0A&38\atop16\checkmark\;16A&{2} \\[3pt] \hline \hline \end{array} $ \end{table} \subsubsection{Drinfeld double of the binary tetrahedral} \hspace*{5mm}Order of the group: $24$ {GAP nomenclature: SmallGroup(24,3)}.
{Alternate name: SL(2,3)} Class number: {$\ell=7$} Rank: ${r}=42$ $N_c=7$, $7$, $6$, $6$, $4$, $6$, $6$ Quantum dimensions: $(1_3,2_3,3_1;1_3,2_3,3_1;4_6;4_6;6_4;4_6;4_6)$ $d_{\mathcal B}={{2^5}{3^1}{13^1}{599^1}}$ Embedding label: 4. See Fig.~\ref{fig:E6}. \begin{figure}[htp]\centering \includegraphics[width=8.1cm]{CoquereauxZuber-Fig5} \caption{Fusion graph $N_4$ of the Drinfeld double of the binary tetrahedral.} \label{fig:E6} \end{figure} The graph of $N_4$ is displayed on Fig.~\ref{fig:E6}. There is a~similar graph for the fusion matrix $N_{11}$. See Fig.~\ref{fig:E6sim} for a~joint plot of both. \begin{figure}[htp]\centering \includegraphics[width=9.2cm]{CoquereauxZuber-Fig6} \caption{Fusion graphs $N_4$ and $N_{11}$ of the double of the binary tetrahedral (simultaneous plot).} \label{fig:E6sim} \end{figure} The 10 irreps labeled {1, 4, 7, 8, 11, 14, 27, 28, 29, 30} on Fig.~\ref{fig:E6} are self-conjugate. The others ($42-10=32$) are complex. The sum rule~\eqref{sumrule} holds. {\bf Note.} The sum $\Sigma_j$ vanishes for 38 irreps: the 32 that are complex, but also for 6 others, including 2 real and the 4 quaternionic. In other words $\Sigma_j$ does not vanish for the 4 real irreps~{1,~7, 27,~29}. In 4 complex and 2 real cases, the vanishing of $\Sigma$ is accidental. \subsubsection{Drinfeld double of the binary octahedral} \hspace*{5mm}Order of the group: $48$ {GAP nomenclature: SmallGroup(48,28)} Class number: {$\ell=8$} Rank: ${r}=56$ $N_c=8$, $8$, $6$, $8$, $4$, $6$, $8$, $8$ Quantum dimensions: {$(1_2,2_3,3_2,4_1;1_2,2_3,3_2,4_1;8_6;6_8;12_4;8_6;6_8;6_8)$} $d_{\mathcal B}={{2^7}{37447^1}}$ Embedding labels: 4, 5. See Fig.~\ref{fig:E7}. There is a~similar graph for the fusion matrix $N_{12}$ (and also for $N_{5}$ and $N_{13}$). See Fig.~\ref{fig:edge} for a~simultaneous plot of both $N_4$ and $N_{12}$. The 56 irreps of the Drinfeld double are self-conjugate, 26 are quaternionic, 30 are real. The sum rule~\eqref{sumrule} holds trivially. 
Note: $\Sigma_j$ vanishes nevertheless for 39 irreps, all the quater\-nio\-nic~(26) and~13 real. In three real cases, the vanishing is accidental. \begin{figure}[th!]\centering \includegraphics[width=8.3cm]{CoquereauxZuber-Fig7} \caption{Fusion graph $N_4$ of the Drinfeld double of the binary octahedral.} \label{fig:E7} \end{figure} \begin{figure}[th!]\centering \includegraphics[width=10.9cm]{CoquereauxZuber-Fig8} \caption{Fusion graphs $N_4$ and $N_{12}$ of the double of the binary octahedral (simultaneous plot).} \label{fig:edge} \end{figure} \subsubsection{Drinfeld double of the binary icosahedral} \hspace*{5mm}Order of the group: $120$ Class number: {$\ell=9$} {GAP nomenclature: SmallGroup(120,5).} {Alternate name: SL(2,5)} Rank: ${r}=74$ $N_c=9$, $9$, $6$, $4$, $10$, $10$, $6$, $10$, $10$ Quantum dimensions: {$(1,2_2,3_2,4_2,5,6;1,2_2,3_2,4_2,5,6;20_6;30_4;12_{10};12_{10};20_6;12_{10};12_{10})$} $d_{\mathcal B}={{2^5}{61^1}{89^1}{263^1}}$ Embedding labels: 2 and 3. See Fig.~\ref{fig:E8}. There is a~similar graph for the fusion matrix $N_{11}$ (and also for $N_{3}$ and $N_{12}$). The 74 irreps of the Drinfeld double are self-conjugate: 38 are real, 36 are quaternionic. The sum rule~\eqref{sumrule} holds trivially. Note: $\Sigma_j$ vanishes nevertheless for 52 irreps, (16 real and all the 36 quaternionic ones). In the 16 real cases, the vanishing is accidental. All the data concerning sumrules~\eqref{sumrule}, \eqref{sumruleS} for doubles of subgroups of SU(2) have been gathered in Table~\ref{ssgrSU2}. 
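As a quick cross-check of the data listed for the three binary polyhedral doubles, one can verify two standard identities: the rank $r$ equals $\sum_c N_c$, and the quantum dimensions of the double $D(G)$ of a finite group $G$ satisfy $\sum_i d_i^2=\vert G\vert^2$. The following sketch (in Python; the variable names are ours, the data are transcribed from the text) performs both checks.

```python
# Cross-checks on the listed data for the doubles of the binary polyhedral
# groups (orders 24, 48, 120).  Variable names are ours; the identities
# r = sum(N_c) and sum_i d_i^2 = |G|^2 are standard for D(G).

data = {
    24: ([7, 7, 6, 6, 4, 6, 6],
         [(1, 3), (2, 3), (3, 1)] * 2
         + [(4, 6), (4, 6), (6, 4), (4, 6), (4, 6)]),
    48: ([8, 8, 6, 8, 4, 6, 8, 8],
         [(1, 2), (2, 3), (3, 2), (4, 1)] * 2
         + [(8, 6), (6, 8), (12, 4), (8, 6), (6, 8), (6, 8)]),
    120: ([9, 9, 6, 4, 10, 10, 6, 10, 10],
          [(1, 1), (2, 2), (3, 2), (4, 2), (5, 1), (6, 1)] * 2
          + [(20, 6), (30, 4), (12, 10), (12, 10), (20, 6), (12, 10), (12, 10)]),
}
for order, (N_c, qdims) in data.items():          # qdims = (dim, multiplicity)
    assert sum(N_c) == sum(m for _, m in qdims)   # rank r of the double
    assert sum(m * d * d for d, m in qdims) == order ** 2
```

The recovered ranks are $42$, $56$ and $74$, as quoted above.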
\begin{figure}[th!]\centering \includegraphics[width=8.6cm]{CoquereauxZuber-Fig9} \caption{Fusion graph $N_2$ of the Drinfeld double of the binary icosahedral.} \label{fig:E8} \end{figure} \begin{figure}[th!]\centering \includegraphics[width=8.0cm]{CoquereauxZuber-Fig10} \caption{Fusion graphs $N_2$ and $N_{11}$ of the double of the binary icosahedral (simultaneous plot).} \label{fig:edgeI} \end{figure} \subsection[Drinfeld doubles of f\/inite subgroups of ${\rm SU}(3)$]{Drinfeld doubles of f\/inite subgroups of $\boldsymbol{{\rm SU}(3)}$} \subsubsection[Remarks about f\/inite subgroups of ${\rm SU}(3)$]{Remarks about f\/inite subgroups of $\boldsymbol{{\rm SU}(3)}$} {\bf About the classif\/ication.} The classif\/ication of the discrete f\/inite groups of ${\rm SU}(3)$ is supposed to be well-known, since it goes back to 1917, see~\cite{Blichfeldt}. It has been however the object of some confusion in the more recent literature. We recall its main lines below and proceed to the analysis of several Drinfeld doubles. Historically, these subgroups were organized into twelve families\footnote{Or, sometimes, into only $10$ families, since $H^\star\sim H\times\mathbb{Z}_3$ and $I^\star\sim I\times\mathbb{Z}_3$ are trivially related to $H$ and~$I$.}: four series called~$A$,~$B$, $C$,~$D$ and eight ``exceptional'' (precisely because they do not come into series) called $E$, $F$, $G$, $H$, $I$,~$J$, and $H^\star$, $I^\star$. However, those called $H$ and $H^\star$ are not primitive, so that the primitive exceptional are only the six types $E$, $F$, $G$, $I$, $I^\star$, $J$ (or only f\/ive if one decides to forget about~$I^\star$). 
Members of the types $A$, $B$, $C$, $D$ are never primitive: as described in~\cite{Yu}, all the~$A$, and some of the $B$, $C$ and $D$ are subgroups of a~Lie subgroup of ${\rm SU}(3)$ isomorphic with ${\rm SU}(2)$, the other~$B$ are in ${\rm U}(2)\sim({\rm SU}(2)\times{\rm U}(1))\vert\mathbb{Z}_2$ whereas the other~$C$ and $D$, as well as~$H$ and $H^\star$ are in a~subgroup generated by ${\rm SO}(3)\subset{\rm SU}(3)$ and the center of ${\rm SU}(3)$. A construction of all these groups with explicit generators and relations can be found in~\cite{Roan}. Another way of organizing these subgroups (again into $12$ families, but not quite the same) can be found in~\cite{HuJungCai}, and a~new proposal, including only four general types, was recently made and described in~\cite{Yu}. A number of authors have entered the game in the past, or very recently, often with new notations and classif\/ications, and presenting many explicit results that can be found in the following references (with hopefully not too many omissions)~\cite{FFK,HananyEtAl:discretetorsion,Fly,Ludl1,Ludl, ParattuWinterger,Yau-Yu}. There is no apparent consensus about the way one should classify these subgroups, not to mention the notations to denote them (!), but everybody agrees about what they are, and in particular everybody agrees about the list of exceptional ones. We shall refrain from entering a~dogmatic taxonomic discussion since our purpose is mostly to discuss a~few examples of Drinfeld doubles associated with the f\/inite subgroups of ${\rm SU}(3)$, and although we shall consider all the exceptional cases, we shall be happy with selecting only a~few examples taken in the inf\/inite series. Nevertheless, for the purpose of the description, we need to present our own notations. 
We organize the list of subgroups of ${\rm SU}(3)$ as follows: With notations of the previous section, we have the subgroups $\mathbb{Z}_m\times\mathbb{Z}_n$ and $\langle \mathbb{Z}_m,\widehat{D}_n\rangle $, $\langle \mathbb{Z}_m,\widehat{T}\rangle $, $\langle \mathbb{Z}_m,\widehat{O}\rangle $, $\langle \mathbb{Z}_m,\widehat{I}\rangle $ whose origin can be traced back to the fact that ${\rm U}(2)\sim({\rm U}(1)\times{\rm SU}(2))\vert\mathbb{Z}_2$ is a~subgroup of ${\rm SU}(3)$. Here, by $\langle G_1,G_2\rangle $ we mean the group generated by the elements of the ${\rm SU}(3)$ subgroups $G_1$ and $G_2$ (this is not, in general, a~direct product of $G_1$ and $G_2$). The orders of the above subgroups are respectively: $m\times n$, $m\times2n$, $m\times24$, $m\times48$, $m\times120$ if $m$ is odd, and $m\times n$, $m/2\times2n$, $m/2\times24$, $m/2\times48$, $m/2\times120$ if~$m$ is even. Then we have the ordinary dihedral $D_n$ (of order $2n$) as well as the subgroups $\langle \mathbb{Z}_m,D_n\rangle $, of order $m\times2n$ if $m$ or $n$ is odd, and of order $m/2\times2n$ if $m$ and $n$ are both even. We have two series of subgroups, called $\Delta_{3n^2}$ {$=(\mathbb{Z}_n\times\mathbb{Z}_n)\rtimes\mathbb{Z}_3$}, and $\Delta_{6n^2}$ {$=(\mathbb{Z}_n\times\mathbb{Z}_n)\rtimes S_3$}, for all positive integers $n$, that appear as ${\rm SU}(3)$ analogues of the binary dihedral groups (their orders appear as indices and the sign $\rtimes$ denotes a~semi-direct product). The $\Delta_{3n^2}$ and $\Delta_{6n^2}$ may themselves have subgroups that are not of that kind. For instance, for specif\/ic values of $p$ and $q$, with $p\neq q$, we have subgroups of the type $(\mathbb{Z}_p\times\mathbb{Z}_q)\rtimes\mathbb{Z}_3$ or $(\mathbb{Z}_p\times\mathbb{Z}_q)\rtimes S_3$, but we shall not discuss them further with the exception of Frobenius subgroups (see next entry) for the reason that such a~digression would lie beyond the scope of our paper. 
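The parity rules for the orders just stated can be encoded directly; here is a short sketch (in Python; the function names are ours, for illustration only).

```python
# Orders of the SU(3) subgroups <Z_m, G>, following the parity rules
# stated above.  Function names are illustrative, not from the paper.

def order_with_binary(m, g_order):
    """<Z_m, G>, with G a binary subgroup of SU(2) of order g_order
    (2n, 24, 48 or 120): m * g_order if m is odd, (m/2) * g_order if even."""
    return m * g_order if m % 2 else (m // 2) * g_order

def order_with_dihedral(m, n):
    """<Z_m, D_n>, with D_n the ordinary dihedral group of order 2n:
    (m/2) * 2n only when m and n are both even, m * 2n otherwise."""
    return (m // 2) * 2 * n if m % 2 == 0 and n % 2 == 0 else m * 2 * n

assert order_with_binary(3, 24) == 72    # <Z_3, T^>, m odd
assert order_with_binary(4, 24) == 48    # <Z_4, T^>, m even
assert order_with_dihedral(3, 4) == 24   # m odd
assert order_with_dihedral(2, 4) == 8    # m, n both even
```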
Moreover the construction of a~semi-direct product also involves a~{twisting} morphism (that is not explicit in the previous notation) and, for this reason, it may happen that two groups built as semi-direct products from the same components are nevertheless non-isomorphic. The structure of the smallest ${\rm SU}(3)$ subgroups that cannot be written as direct product with cyclic groups, that do not belong to the previous types, that are neither exceptional (see below) nor of the Frobenius type (see next item) can be investigated from GAP. One f\/inds $(\mathbb{Z}_9\times\mathbb{Z}_3)\rtimes\mathbb{Z}_3$, $(\mathbb{Z}_{14}\times\mathbb{Z}_2)\rtimes\mathbb{Z}_3$, $\mathbb{Z}_{49}\rtimes\mathbb{Z}_3$, $(\mathbb{Z}_{26}\times\mathbb{Z}_2)\rtimes\mathbb{Z}_3$, $(\mathbb{Z}_{21}\times\mathbb{Z}_3)\rtimes\mathbb{Z}_3$, $(\mathbb{Z}_{38}\times\mathbb{Z}_2)\rtimes\mathbb{Z}_3$, $(\mathbb{Z}_{91}\rtimes\mathbb{Z}_3)^\prime$, $(\mathbb{Z}_{91}\rtimes\mathbb{Z}_3)^{\prime\prime}$, $(\mathbb{Z}_{18}\times\mathbb{Z}_6)\rtimes\mathbb{Z}_3$, $(\mathbb{Z}_{28}\times\mathbb{Z}_4)\rtimes\mathbb{Z}_3$,~$\ldots$, {see the tables displayed in~\cite{ParattuWinterger}} or~\cite{Ludl512}. To illustrate a~previous remark, notice that the {above} list includes two non-isomorphic subgroups of the type $\mathbb{Z}_{91}\rtimes\mathbb{Z}_3$, recognized as SmallGroup(273,3) and SmallGroup(273,4), that dif\/fer by the choice of the twisting morphism. The Frobenius subgroups. They are of the type $F_{3m}=\mathbb{Z}_m\rtimes\mathbb{Z}_3$, but $m$ should be prime of the type $6p+1$, i.e., $m=7$, $13$, $19$, $31$, $\ldots$. Their order is therefore $3m=21$, $39$, $57$, $93$, $\ldots$. These subgroups are themselves subgroups of the $\Delta_{3n^2}$ family, but they share common features and are somehow important for us. Indeed we checked that the property~\eqref{sumrule} fails systematically for them, and~\eqref{sumrule}, \eqref{sumruleS} fails systematically for their Drinfeld double. 
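The admissible values of $m$, and hence the orders $3m=21$, $39$, $57$, $93, \ldots$, are easy to enumerate; a small sketch (in Python; names are ours):

```python
# Frobenius subgroups F_3m = Z_m x| Z_3 require m prime and of the form
# 6p + 1; this reproduces the values m = 7, 13, 19, 31, ... quoted above.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

ms = [m for m in range(2, 100) if m % 6 == 1 and is_prime(m)]
orders = [3 * m for m in ms]
assert ms[:4] == [7, 13, 19, 31]
assert orders[:4] == [21, 39, 57, 93]
```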
We shall explicitly describe the Drinfeld double of $F_{21}$. The latter group was recently used for particle physics phenomenological purposes in~\cite{RamondEtAl}. The above subgroups exhaust the inf\/inite series $A$, $B$, $C$, $D$ of~\cite{Blichfeldt}: for instance, the abelian subgroups $\mathbb{Z}_m\times\mathbb{Z}_n$ correspond to the diagonal matrices (the $A$ type). However, we prefer to describe a~subgroup explicitly by its structure, for instance as it is given by GAP, so that a~direct correspondence with groups def\/ined in terms of generators and relations (information that we did not recall anyway) should not be expected to be obvious. We are now left with the eight exceptional subgroups. We sort them by their order and call them $\Sigma_{60}$, $\Sigma_{36\times3}$, $\Sigma_{168}$, $\Sigma_{60}\times\mathbb{Z}_3$, $\Sigma_{72\times3}$, $\Sigma_{168}\times\mathbb{Z}_3$, $\Sigma_{216\times3}$, $\Sigma_{360\times3}$. They correspond respectively to the groups $H$, $E$, $I$, $H^\star$, $F$, $I^\star$, $G$, $J$, in this order, of the Blichfeldt classif\/ication. The structure of these groups will be recalled later (see in particular Table~\ref{infossgrSU3}). Six of them are ternary covers of exceptional groups in ${\rm SU}(3)/\mathbb{Z}_3$ called $\Sigma_{36}$, $\Sigma_{60}$, $\Sigma_{72}$, $\Sigma_{168}$, $\Sigma_{216}$ and $\Sigma_{360}$. There is a~subtlety: the subgroups $\Sigma_{60}$ and $\Sigma_{168}$ of ${\rm SU}(3)/\mathbb{Z}_3$ are also subgroups of ${\rm SU}(3)$; their ternary covers in ${\rm SU}(3)$ are isomorphic to direct products by $\mathbb{Z}_3$ (this explains our notation); the other ternary covers are not direct products by $\mathbb{Z}_3$. Thus both $\Sigma_{60}$ and $\Sigma_{168}$, together with $\Sigma_{60}\times\mathbb{Z}_3$ and $\Sigma_{168}\times\mathbb{Z}_3$, indeed appear in the f\/inal list.
Remember that only six exceptional subgroups~-- not the same ``six'' as before~-- def\/ine primitive inclusions in ${\rm SU}(3)$, since $\Sigma_{60}$ (the icosahedral subgroup of ${\rm SO}(3)$) and $\Sigma_{60}\times\mathbb{Z}_3$ don't. Some people use the notation $\Sigma_p$, where $p$ is the order, to denote all these exceptional subgroups, but this notation blurs the above distinction, and moreover it becomes ambiguous for $p=216$ since $\Sigma_{216}$ (a subgroup of ${\rm SU}(3)/\mathbb{Z}_3$) is not isomorphic to $\Sigma_{72\times3}$ (a subgroup of ${\rm SU}(3)$). \begin{table}[t] \centering \caption{Extra information on some subgroups of SU(3). Here, the integers $n$ always refer to cyclic groups $\mathbb{Z}_n$, so that, for instance, $2\times2$ means $\mathbb{Z}_2\times\mathbb{Z}_2$. Unless specif\/ied otherwise, the chains of symbols in the second and penultimate columns should be left parenthesized; for instance $3\times3\rtimes3\rtimes Q_8\rtimes3$ means $((((\mathbb{Z}_3\times\mathbb{Z}_3)\rtimes\mathbb{Z}_3)\rtimes Q_8)\rtimes\mathbb{Z}_3)$. Here $Q_8\cong {\rm Dic}_2\cong\widehat{D}_2$ is the quaternion group. {$A_n$} and $S_n$ are the alternating and symmetric groups. The ``dot'' in $3.A_6$ denotes a~triple cover of $A_6$. {$Z(G)$} denotes the center of $G$, $G^\prime$ its commutator subgroup, and $G/G^\prime$ its abelianization. The integer $\vert Z(G)\vert\vert G/G^\prime\vert$ gives the number of units in the fusion ring. {${\rm Aut}(G)$} is the group of automorphisms of $G$. {${\rm Out}(G)$} is the quotient ${\rm Aut}(G)/{\rm Inn}(G)$, where ${\rm Inn}(G)$ is the group of inner automorphisms, isomorphic with $G/Z(G)$. 
Finally $M(G)$ is the Schur multiplier of~$G$.}\label{infossgrSU3} \vspace{1mm} $ \renewcommand{\arraystretch}{1.2} \begin{array}{|@{\,\,}c@{\,\,}||@{\,\,}c@{\,\,}||@{\,\,}c@{\,\,}|@{\,\,}c@{\,\,}|@{\,\,}c@{\,\,}|@{\,\,}c@{\,\,}|@{\,\,}c@{\,\,}|} \hline {\mbox{name}} & \mbox{structure} & Z(G) & G/G^\prime & {\rm Out}(G) & {{\rm Aut}(G)} & M(G) \\ \hline \hline T=\Delta(3\times 2^2) & A_4 &1 & 3 & 2 & S_4 &2 \\ \hline O=\Delta(6\times 2^2) &S_4 &1 & 2 & 1 & S_4 &2 \\ \hline F_{21} & 7\rtimes 3 &1 &3 &2 & 7\rtimes 3 \rtimes 2 & 1 \\ \hline \hline I=\Sigma_{60} &A_5 &1 &1 &2 & S_5 & 2 \\ \hline \Sigma_{36\times 3} &3\times3\rtimes 3\rtimes 4 &3 &4 &2 \times 2 & 3 \rtimes 3 \rtimes 8 \rtimes 2 &1 \\ \hline \Sigma_{168} &{\rm SL}(3,2) &1 & 1& 2& {\rm SL}(3,2) \rtimes 2 & 2 \\ \hline \Sigma_{60}\times \Bbb{Z}_3 &{\rm GL}(2,4) &3 & 3 &2\times2 & 2 \times S_5 & 2 \\ \hline \Sigma_{72\times 3} &3\times 3\rtimes 3\rtimes Q_8 &3 &2 \times 2 &S_3 & 3 \times 3 \rtimes Q_8 \rtimes 3 \rtimes 2 &1 \\ \hline \Sigma_{168}\times \Bbb{Z}_3 &3\times {\rm SL}(3,2) & 3&3 &2 \times 2 & 2 \times ({\rm SL}(3,2) \rtimes 2) &2 \\ \hline \Sigma_{216\times 3} &3\times 3\rtimes 3\rtimes Q_8\rtimes 3 & 3&3 & 6& 3 \times (3 \times 3 \rtimes Q_8 \rtimes 3 \rtimes 2) &1 \\ \hline \Sigma_{360\times 3} &3. A_6 & 3&1 &2\times 2 &A_6\rtimes 2 \rtimes 2 & 2 \\ \hline \end{array} $ \end{table} {\bf Miscellaneous remarks.} In the literature, one can f\/ind several families of generators for the above groups; very often these generators are given as elements of ${\rm GL}(3)$, of ${\rm SL}(3)$, or of~${\rm U}(3)$, not of~${\rm SU}(3)$. Such f\/indings do not imply any contradiction with the above classif\/ication since the latter is only given up to isomorphism. 
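The caption of Table~\ref{infossgrSU3} notes that $\vert Z(G)\vert\,\vert G/G^\prime\vert$ gives the number of units in the fusion ring. A minimal check of this count against a few of the tabulated doubles (in Python; data transcribed from the tables, the dictionary name is ours):

```python
# Number of units of the fusion ring of D(G) as |Z(G)| * |G/G'|,
# checked against the entries quoted in the tables for a few groups.

groups = {                     # name: (|Z(G)|, |G/G'|, units reported)
    "F_21":        (1, 3, 3),
    "Sigma_60":    (1, 1, 1),
    "Sigma_36x3":  (3, 4, 12),
    "Sigma_72x3":  (3, 4, 12),
    "Sigma_216x3": (3, 3, 9),
}
for name, (center, abelianization, units) in groups.items():
    assert center * abelianization == units
```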
Because of the embeddings of Lie groups ${\rm SU}(2)\subset{\rm SU}(3)$ and ${\rm SO}(3)\subset{\rm SU}(3)$, all the f\/inite subgroups of ${\rm SU}(2)$ are f\/inite subgroups of ${\rm SU}(3)$ and all the f\/inite subgroups of ${\rm SO}(3)$ are subgroups of ${\rm SU}(3)$ as well. In particular all polyhedral groups and all binary polyhedral groups are subgroups of ${\rm SU}(3)$. Using the fact that $T\cong\Delta(3\times2^2)$, $O\cong\Delta(6\times2^2)$, {and ${I}\cong A_5\cong\Sigma_{60}$}, the reader can recognize all of them in the above list. Several calculations whose results are given below rely on some group theoretical information (for instance the determination of conjugacy classes and centralisers) that was taken from the {GAP} smallgroup library. We shall now review a~certain number of f\/inite subgroups of ${\rm SU}(3)$ and their Drinfeld doubles. Like in the case of ${\rm SU}(2)$, we list for each of them a~certain number of data, and display one of their embedding fusion graphs. These graphs become fairly involved for large subgroups, and we label their vertices only for the smallest groups, while for the larger ones, only the connected component of the identity representation is labelled. Data on the way sum rules~\eqref{sumrule} and~\eqref{sumruleS} are or are not satisf\/ied, and the numbers of ``accidental'' vanishings, are gathered in Table~\ref{ssgrSU3}. \subsubsection[Drinfeld double of $F_{21}=\mathbb{Z}_7\rtimes\mathbb{Z}_3$]{Drinfeld double of $\boldsymbol{F_{21}=\mathbb{Z}_7\rtimes\mathbb{Z}_3}$} \hspace*{5mm}Order of the group: $21$ GAP nomenclature: $F_{21}={\rm SmallGroup}(21,1)$ Class number: $\ell=5$ Classical dimensions: {1, 1, 1, 3, 3} Rank: ${r}=25$ $N_c=5$, $3$, $3$, $7$, $7$ Quantum dimensions: $(1_3,3_2;7_3;7_3;3_7;3_7)$ $d_{\mathcal B}={{5^1}{11^1}{23^1}{137^1}}$ Embedding labels: 4 and 5; see Fig.~\ref{fig:F21} for the fusion graph of $N_4$.
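For $F_{21}$ these numbers are internally consistent: the $N_c$ sum to the rank $r=25$, and the quantum dimensions of $D(F_{21})$ square-sum to $\vert F_{21}\vert^2=441$. A sketch (in Python; names are ours):

```python
# Consistency of the D(F_21) data listed above (|G| = 21, r = 25).

N_c = [5, 3, 3, 7, 7]
qdims = [(1, 3), (3, 2),          # classical block (1_3, 3_2)
         (7, 3), (7, 3),          # 7_3 ; 7_3
         (3, 7), (3, 7)]          # 3_7 ; 3_7   (dim, multiplicity)
assert sum(N_c) == 25
assert sum(m for _, m in qdims) == 25                # rank r again
assert sum(m * d * d for d, m in qdims) == 21 ** 2   # = 441
```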
\begin{figure}[th!]\centering \includegraphics[width=7.2cm]{CoquereauxZuber-Fig11} \vspace{-1mm} \caption{Fusion graph $N_4$ of the Drinfeld double of the group $F_{21}$.} \label{fig:F21} \end{figure} \subsubsection[Drinfeld double of $\Sigma_{60}$]{Drinfeld double of $\boldsymbol{\Sigma_{60}}$} Remember that $\Sigma_{60}$ is both a~subgroup of ${\rm SU}(3)/\mathbb{Z}_3$ and a~subgroup of ${\rm SU}(3)$. Also, recall that this group is isomorphic to the icosahedral group $I$ of Section~\ref{fin-sbgps-SU2}. Thus the 5 classical irreps of its double identify with the zero-``bi-ality'' irreps of the {\it binary} icosahedral group $\widehat{I}$. Order of the group: $60$ GAP nomenclature: ${\rm SmallGroup}(60,5)$. Alternate names: $A_5$, ${\rm SL}(2,4)$, I (Icosahedral). Class number: $\ell=5$ Classical dimensions: {1, 3, 3, 4, 5} Rank: ${r}=22$ $N_c={5,4,3,5,5}$ Quantum dimensions: $(1,3_2,4,5;15_4;20_3;12_5;12_5)$ $d_{\mathcal B}={{2^1}{5^1}{11^1}{10853^1}}$ Embedding labels: 2 and 3; see Fig.~\ref{fig:Sigma60} for the fusion graph of $N_2$. All the representations of $\Sigma_{60}$ and of its double are real.
\begin{figure}[th!]\centering \includegraphics[width=7.2cm]{CoquereauxZuber-Fig12} \vspace{-1mm} \caption{Fusion graph $N_2$ of the Drinfeld double of the group $\Sigma_{60}$.} \label{fig:Sigma60} \end{figure} \subsubsection[Drinfeld double of $\Sigma_{36\times3}$]{Drinfeld double of $\boldsymbol{\Sigma_{36\times3}}$} \hspace*{5mm}Order of the group: $108$ GAP nomenclature: $\Sigma_{36\times3}={\rm SmallGroup}(108,15)$ Class number: $\ell=14$ Classical dimensions: ${1,1,1,1,3,3,3,3,3,3,3,3,4,4}$ Rank: ${{r}}=168$ $N_c=14$, $12$, $14$, $14$, $9$, $9$, $12$, $12$, $12$, $12$, $12$, $12$, $12$, $12$ Quantum dimensions:~$(1_4,3_8,4_2;9_{12};1_4,3_8,4_2;1_4,3_8,4_2;12_{9};12_{9};9_{12};9_{12};9_{12};9_{12};9_{12};$ \hspace*{38mm}$9_{12};9_{12};9_{12})$ $d_{\mathcal B}={{2^3}{3^5}{124477^1}}$ Embedding labels: 5 and 6; see Fig.~\ref{fig:Sigma108} for the fusion graph $N_5$. \begin{figure}[htp]\centering \includegraphics[width=4.7cm]{CoquereauxZuber-Fig13a} \includegraphics[width=14.5cm]{CoquereauxZuber-Fig13b} \caption{Fusion graph $N_5$ of the Drinfeld double of the group {$\Sigma_{36\times3}$}. Only the f\/irst connected component has been displayed with the labels of the vertices (irreps).} \label{fig:Sigma108} \end{figure} For illustration, let us analyze in some detail the fusion graph of the quantum double of the group $G=\Sigma_{36\times3}$, see Fig.~\ref{fig:Sigma108}. The class number of $G$ is $14$. {$G$} has eight $3$-dimensional embedding representations labelled from $5$ to $12$. They all give fusion graphs sharing the same overall features. We have chosen to display $N_5$. The fusion graph of $G$ itself (the ``classical graph'') appears on the top. It is itself connected (it would not be so if we had chosen for instance $N_{13}$ or $N_{14}$, which correspond to $4$-dimensional non-faithful representations). It has $14$~vertices. The fusion graph of the Drinfeld double~$D(G)$ has $14$ connected components.
As the center of~$G$ is~$\mathbb{Z}_3$, the classical graph appears three times in the fusion graph of~$D(G)$. The group~$G$ has $14$ conjugacy classes, but their stabilizers fall into only three types (up to isomorphisms): $G$~itself for the center ($3$ times), the group $\mathbb{Z}_3\times\mathbb{Z}_3$, which appears twice, and the cyclic group $\mathbb{Z}_{12}$, which appears nine times. The corresponding three kinds of connected components appear on Fig.~\ref{fig:Sigma108}. Apart from~$G$ itself, these stabilizers are abelian, so that the number of their irreps (number of vertices of the corresponding connected components) is given by their order, respectively~$9$ and~$12$. The unity of the fusion ring (trivial representation) is labelled~$1$. This ring has twelve units. Four ($1,2,3,4$), the ``classical units'', appear as the endpoints of the classical graph. The other eight ($4+4$) are the corresponding endpoints on the two other copies of $G$. The units $1$, $2$ are of real type, the units~$3$,~$4$ are of complex type (and actually conjugate). The eight embedding representations $5\ldots12$ are connected to the classical units. They appear in conjugate pairs $(5,6)$, $(7,8)$ connected respectively to $1$ and $2$, and $(9,10)$, $(11,12)$ connected to $3$ and $4$. Notice that a~conjugate pair is attached to the same unit when this unit is real but to complex conjugate units when the unit is complex. Of course, the edges of the graph are oriented since $5$ is not equivalent to its complex conjugate; the fusion graph $N_6$ can be obtained from the given graph, $N_5$, by reversing the arrows.
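The count of connected components just described also reproduces the rank $r=168$: each conjugacy class contributes as many irreps as its stabilizer has. A sketch (in Python; names are ours):

```python
# Rank of D(G) for G = Sigma_{36x3}, from the stabilizer types above:
# G itself (14 irreps) for the 3 central classes, Z_3 x Z_3 (9 irreps)
# for 2 classes, and Z_12 (12 irreps) for the remaining 9 classes.

contributions = [(3, 14), (2, 9), (9, 12)]   # (number of classes, irreps each)
assert sum(n for n, _ in contributions) == 14        # class number of G
rank = sum(n * k for n, k in contributions)
assert rank == 168
```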
The action of units clearly induces geometrical symmetries on the given fusion graph, nevertheless one expects that the fusion graphs associated with $5$, $6$, $7$, $8$ on the one hand, or with $9$, $10$, $11$, $12$ on the other, or with any of the corresponding vertices belonging to the three copies of the classical graph, although sharing the same overall features, will look slightly dif\/ferent, and the reader can check (for instance by drawing the graph $N_9$) that it is indeed so. (As a~side remark, the classical graph of $\Sigma_{36\times3}$, drawn dif\/ferently (see~\cite[Fig.~14]{DiFrancescoZuber}) leads after amputation of some vertices and edges to the fusion graph ${\mathcal E^{(8)}}$, the star-shaped exceptional module of ${\rm SU}(3)$ at level~$5$). The exponent of~$G$~\cite{CGR:modulardata} is $m=12$, it is equal to the order of the modular matrix~$T$, like for all Drinfeld doubles, and the entries of~$S$ and~$T$ (these are $168\times168$ matrices with entries labelled by the vertices of the Fig.~\ref{fig:Sigma108}) lie in the cyclotomic f\/ield $\mathbb{Q}(\xi)$ where $\xi=\exp(2i\pi/m)$. We did not discuss Galois automorphisms in this paper, but let us mention nevertheless that there is also a~Galois group acting by permutation on vertices, it is isomorphic to the multiplicative group~$\mathbb{Z}_{m}^\times$ of integers coprime to~$m$,~\cite{CGR:modulardata, Gannon:modular}. \subsubsection[Drinfeld double of $\Sigma_{168}$]{Drinfeld double of $\boldsymbol{\Sigma_{168}}$} Remember that $\Sigma_{168}$ is both a~subgroup of ${\rm SU}(3)/\mathbb{Z}_3$ and a~subgroup of ${\rm SU}(3)$. The group $\Sigma_{168}$ is the second smallest simple non-abelian group (the smallest being the usual icosahedral group ${I}\cong A_5$ of course!). It is {often} called the Klein group, or the smallest Hurwitz group. GAP nomenclature: $\Sigma_{168}={\rm SmallGroup}(168,42)$. Alternate names: ${\rm SL}(3,2)\cong{\rm PSL}(3,2)\cong{\rm GL}(3,2)\cong{\rm PSL}(2,7)$. 
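Since the exponent is $m=12$ here, the Galois group just mentioned is $\mathbb{Z}_{12}^\times$, which is easy to list explicitly (a sketch in Python):

```python
# The multiplicative group (Z_12)^x of integers coprime to m = 12,
# which acts by permutation on the vertices (irreps of the double).

import math

m = 12
units = [k for k in range(1, m) if math.gcd(k, m) == 1]
assert units == [1, 5, 7, 11]     # a group of order 4 under multiplication mod 12
```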
Order of the group: $168$ Class number: $\ell=6$ Classical dimensions: {1, 3, 3, 6, 7, 8} Rank: ${r}=32$ $N_c=6$, $5$, $3$, $4$, $7$, $7$ Quantum dimensions: $(1,3_2,6,7,8;21_4,42;56_3;42_4;24_7;24_7)$ $d_{\mathcal B}={{2^2}\,{4126561^1}}$ Embedding labels: 2 and 3. See Fig.~\ref{fig:Sigma168} for the fusion graph $N_2$. The other embedding fusion graph $N_3$ is obtained from $N_2$ (Fig.~\ref{fig:Sigma168}) by reversing the arrows. For illustration, we shall give explicitly the $S$ matrix of this Drinfeld double in Appendix~\ref{appendixD}. \begin{figure}[th!]\centering \includegraphics[width=11.0cm]{CoquereauxZuber-Fig14} \caption{Fusion graph $N_2$ of the Drinfeld double of the Hurwitz group $\Sigma_{168}$.} \label{fig:Sigma168} \end{figure} \subsubsection[Drinfeld double of $\Sigma_{60}\times\mathbb{Z}_3$]{Drinfeld double of $\boldsymbol{\Sigma_{60}\times\mathbb{Z}_3}$} \hspace*{5mm}Order of the group: $180$ GAP nomenclature: $\Sigma_{60}\times\mathbb{Z}_3={\rm SmallGroup}(180,19)$. Alternate name: ${\rm GL}(2,4)$. Class number: $\ell=15$ Classical dimensions: {1, 1, 1, 3, 3, 3, 3, 3, 3, 4, 4, 4, 5, 5, 5} {Rank: ${r}=198$} $N_c=15$, $12$, $15$, $15$, $9$, $9$, $9$, $15$, $15$, $12$, $12$, $15$, $15$, $15$, $15$ Quantum dimensions: $(1_3,3_6,4_3,5_3;15_{12};1_3,3_6,4_3,5_3;1_3,3_6,4_3,5_3;20_{9};20_{9};20_{9};12_{15};12_{15};$ \hspace*{38mm}$15_{12};15_{12};12_{15};12_{15};12_{15};12_{15})$ $d_{\mathcal B}={{2^1}{3^6}{5^1}{11^1}{10853^1}}$ Embedding labels: {6 and its conjugate 9, or 7 and its conjugate 8}. See Fig.~\ref{fig:Sigma60xZ3} for the fusion graph of {$N_6$}.
\begin{figure}[th!]\centering \includegraphics[width=4.5cm]{CoquereauxZuber-Fig15a} \includegraphics[width=14.5cm]{CoquereauxZuber-Fig15b} \caption{Fusion graph $N_6$ of the Drinfeld double of the group $\Sigma_{60}\times\mathbb{Z}_3$.} \label{fig:Sigma60xZ3} \end{figure} \subsubsection[Drinfeld double of $\Sigma_{72\times3}$]{Drinfeld double of $\boldsymbol{\Sigma_{72\times3}}$} \hspace*{5mm}Order of the group: $216$ GAP nomenclature: $\Sigma_{72\times3}={\rm SmallGroup}(216,88)$ Class number: $\ell=16$ Classical dimensions: {1, 1, 1, 1, 2, 3, 3, 3, 3, 3, 3, 3, 3, 6, 6, 8} Rank: $r=210$ $N_c=16$, $15$, $16$, $16$, $9$, $12$, $12$, $12$, $15$, $15$, $12$, $12$, $12$, $12$, $12$, $12$ Quantum dimensions: $(1_4,2,3_8,6_2,8;9_{12},18_3;1_4,2,3_8,6_2,8;1_4,2,3_8,6_2,8;24_9;$ \hspace*{38mm}$18_{12};18_{12};18_{12};9_{12},18_{3};9_{12},18_{3};18_{12};18_{12};18_{12};18_{12};18_{12};18_{12})$ $d_{\mathcal B}={{2^2}{3^3}{23^1}{59^1}{8941^1}}$ Embedding labels: 6, \dots, 13. \begin{figure}[th!]\centering \includegraphics[width=5.5cm]{CoquereauxZuber-Fig16a} \includegraphics[width=14.2cm]{CoquereauxZuber-Fig16b} \caption{Fusion graph $N_{{6}}$ of the Drinfeld double of the group $\Sigma_{72\times3}$.} \end{figure} \subsubsection[Drinfeld double of $\Sigma_{168}\times\mathbb{Z}_3$]{Drinfeld double of $\boldsymbol{\Sigma_{168}\times\mathbb{Z}_3}$} \hspace*{5mm}Order of the group: 504 GAP nomenclature: $\Sigma_{168}\times\mathbb{Z}_3={\rm SmallGroup}(504,157)$. 
Class number: $\ell=18$ Classical dimensions: $1$, $1$, $1$, $3$, $3$, $3$, $3$, $3$, $3$, $6$, $6$, $6$, $7$, $7$, $7$, $8$, $8$, $8$ Rank: $r=288$ $N_c=18$, $15$, $18$, $18$, $9$, $9$, $9$, $12$, $15$, $15$, $21$, $21$, $12$, $12$, $21$, $21$, $21$, $21$ Quantum dimensions: $(1_3,3_6,6_3,7_3,8_3;21_{12},42_3;1_3,3_6,6_3,7_3,8_3;1_3,3_6,6_3,7_3,8_3;56_9;56_9;$ \hspace*{38mm}$42_{12};21_{12},42_3;21_{12},42_3;24_{21};24_{21};42_{12};42_{12};24_{21};24_{21};24_{21};24_{21})$ $d_{\mathcal B}={{2^2}{3^6}{4126561^1}}$ Embedding labels: 6, 7 and their conjugates 8, 9. See the graph of irrep 6 on Fig.~\ref{fig:Sigma168x3_6}. \begin{figure}[htp]\centering \includegraphics[width=4.5cm]{CoquereauxZuber-Fig17a} \includegraphics[width=14.5cm]{CoquereauxZuber-Fig17b} \caption{Fusion graph $N_6$ of the Drinfeld double of the group $\Sigma_{168}\times\mathbb{Z}_3$.} \label{fig:Sigma168x3_6} \end{figure} \subsubsection[Drinfeld double of $\Sigma_{216\times3}$]{Drinfeld double of $\boldsymbol{\Sigma_{216\times3}}$} The group $\Sigma_{216\times3}$ is called the Hessian group, but this name also refers to $\Sigma_{216}$. Order of the group: $648$ GAP nomenclature: $\Sigma_{216\times3}={\rm SmallGroup}(648,532)$ Class number: $\ell=24$ Classical dimensions: {1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 6, 6, 6, 6, 6, 6, 8, 8, 8, 9, 9} Rank: ${r}=486$ $N_c=\{24,21,24,24,27,9,9,12,21,21,27,27,27,27,27,27,12,12, 18,18,18,18,18,18\}$ Quantum dimensions: $(1_3,2_3,3_7,6_6,8_3,9_2;9_9,18_9,27_3;1_3,2_3,3_7,6_6,8_3,9_2;1_3,2_3,3_7,6_6,8_3,9_2;$ \hspace*{38mm}$24_{27};\!72_{9};\!72_{9};\!54_{12};\!9_9,18_9,27_3;\!9_9,18_9,27_3;\!12_{18},24_{9};\!12_{18},24_{9};\!12_{18},24_{9};$ \hspace*{38mm}$12_{18},24_{9};\!12_{18},24_{9};\!12_{18},24_{9};\!54_{12};\!54_{12};\!36_{18};\!36_{18};\!36_{18};\!36_{18};\!36_{18};\!36_{18})$ $d_{\mathcal B}={{2^2}{3^6}{13^1}{787^1}{1481^1}}$ Embedding labels: (pairwise conjugates) 8, 10, 11 and 9, 13, 12.
\begin{figure}[htp]\centering \includegraphics[width=4.5cm]{CoquereauxZuber-Fig18a} \includegraphics[width=14.5cm]{CoquereauxZuber-Fig18b} \caption{Fusion graph $N_8$ of the Drinfeld double of the group $\Sigma_{216\times3}$.} \end{figure} \subsubsection[Drinfeld double of $\Sigma_{360\times3}$]{Drinfeld double of $\boldsymbol{\Sigma_{360\times3}}$} The group $\Sigma_{360\times3}=\Sigma_{1080}$ (not always realized as a~subgroup of ${\rm SU}(3)$) is sometimes called the Valentiner group but this name also refers to $\Sigma_{360}$. Order of the group: $1080$ GAP nomenclature: $\Sigma_{360\times3}={\rm SmallGroup}(1080,260)$. {Alternate names: $3.A_6$.} Class number: $\ell=17$ Classical dimensions: {1, 3, 3, 3, 3, 5, 5, 6, 6, 8, 8, 9, 9, 9, 10, 15, 15} Rank: $r=240$ $N_c={17,15,17,17,9,9,12,15,15,15,15,12,12,15,15,15,15}$ Quantum dimensions: $(1,3_4,5_2,6_2,8_2,9_3,10,15_2;45_{12},90_{3};1,3_4,5_2,6_2,8_2,9_3,10,15_2;$ \hspace*{38mm}$1,3_4,5_2,6_2,8_2,9_3,10,15_2;120_9;120_9;90_{12};72_{15};72_{15};45_{12},90_3;$ \hspace*{38mm}$45_{12},90_3;90_{12};90_{12};72_{15};72_{15};72_{15};72_{15})$ $d_{\mathcal B}={{2^1}{3^3}{734267099^1}}$ Embedding labels: 2 and 4. See the graph of $N_2$ on Fig.~\ref{fig:Sigma1080}. \begin{figure}[htp]\centering \includegraphics[width=6.5cm]{CoquereauxZuber-Fig19a} \includegraphics[width=14.5cm]{CoquereauxZuber-Fig19b} \caption{Fusion graph $N_2$ of the Drinfeld double of the group $\Sigma_{360\times3}$.} \label{fig:Sigma1080} \end{figure} {\bf Note.} All our results about these subgroups of ${\rm SU}(3)$, the violations of sumrules~\eqref{sumrule},~\eqref{XeqXC} and~\eqref{sumruleS} and the ``accidental'' vanishings have been collected in Table~\ref{ssgrSU3}. \begin{table}[t]\centering \caption{Data and status of the sumrules~\eqref{sumrule} and~\eqref{sumruleS} for Drinfeld doubles of some subgroups of ${\rm SU}(3)$. 
The meaning of symbols is like in Table~\ref{ssgrSU2}.} \label{ssgrSU3} \vspace{1mm} $ \hskip5mm \noindent\renewcommand{\arraystretch}{1.2} \begin{array} {|c||c||c|c|c|c|c|c|} \hline {\mbox{ name}} & {\eqref{sumrule} \atop {\rm before\atop doubling}}&r & \# i,\;i \neq {\bar i},\;\forall\, j\atop \sum\limits_k N_{ij}^k\buildrel{?}\over =\sum\limits_k N_{\bar \imath j}^k & { \# \;\hbox{complex} \atop \# \;\sum\limits_j S_{ i j} =0 } & { \# \;\hbox{quatern.} \atop \# \sum\limits_j S_{ i j} =0} & { \# \;\hbox{real} \atop \# \sum\limits_j S_{ i j} =0} & \# \hbox{units} \\[4pt] \hline \hline \Delta_{3\times 2^2}=T & \checkmark &14& 2 &8\atop 6\checkmark\;0 A & 0&6\atop 0 &3 \\[0pt] \hline \Delta_{6\times 2^2}=O &\checkmark&21& \checkmark &0 & 0 & 21\atop 9\checkmark\;1 A&2 \\[0pt] \hline F_{21} &{\times}&25 &2 &24 \atop 6 \checkmark\;0 A & 0 & 1\atop 0 & 3 \\[0pt] \hline \hline \Sigma_{60}=I & \checkmark &22 &\checkmark & 0 &0&22\atop 4\checkmark \;0A & 1 \\[0pt] \hline \Sigma_{36\times 3} &\checkmark &168 &\checkmark&156\atop 156\,\checkmark\;14A& 0 &12 \atop 3\checkmark \;1 A & 12 \\[0pt] \hline \Sigma_{168} &\checkmark& 32 &2&16\atop 14\,\checkmark\;14 A & 0&16\atop 0 &1 \\[2pt] \hline \Sigma_{60}\times \Bbb{Z}_3 &\checkmark &198 & 176 & 176\atop 176\,\checkmark\;0A & 0& 22\atop 4\checkmark \;4 A & 9 \\[0pt] \hline \Sigma_{72\times 3} &\times &210 & 46 &184\atop 162\,\checkmark \qquad 0 A &8\atop 6\,\checkmark\;0 A&18\atop 6 \checkmark\;0 A &12 \\[0pt] \hline \Sigma_{168}\times \Bbb{Z}_3 &\checkmark&288 & 146& 272 \atop 270 \, \checkmark\;14 A &0&16\atop 0 & 9 \\[2pt] \hline \Sigma_{216\times 3} &\checkmark& 486 & 472 & 472\atop 472 \,\checkmark\;58A & 4 \atop 4\checkmark\;4 A \,\checkmark &10 \atop 2\checkmark \;2A&9 \\[1pt] \hline \Sigma_{360\times 3} &\times&240 &52 &208\atop 176\,\checkmark\;20A & 0 &32\atop 0 &3 \\[3pt] \hline \hline \end{array} $ \end{table} \subsection{Drinfeld doubles of other f\/inite groups (examples)} We decided, in this paper, to discuss 
Drinfeld doubles of f\/inite subgroups of ${\rm SU}(2)$ and of ${\rm SU}(3)$. But this overgroup plays no role and it could have been quite justif\/ied to organize our results in a~dif\/ferent manner. Some f\/inite subgroups of ${\rm SO}(3)$ have been already encountered as subgroups of ${\rm SU}(3)$, see above the cases of $T\cong\Delta_{3\times2^2}$, $O\cong\Delta_{6\times2^2}$ and $I\cong\Sigma_{60}$. We have also looked at the eight Mathieu groups, see for instance~\cite{grouppropssite}, $M_9$, $M_{10}$, $M_{11}$, $M_{12}$, $M_{21}$, $M_{22}$, $M_{23}$, $M_{24}$. Some ($M_9, M_{10}, M_{12}, M_{21}$) satisfy the sumrule~\eqref{sumrule} before doubling, the others don't, and we were unable to understand the reason behind this. \vspace{-1mm} \section{Conclusion and outlook} Along this guided tour of Drinfeld doubles of f\/inite subgroups of ${\rm SU}(2)$ and ${\rm SU}(3)$ we have put the emphasis on the discussion of the ``sum rules'' discovered in~\cite{RCJBZ:sumrules}: we have found that they are not generally satisf\/ied, just like in the case of f\/inite groups. In that respect the modularity of the fusion category of Drinfeld doubles is of no help. But we also found that, like in simple or af\/f\/ine Lie algebras, the two properties~\eqref{sumrule} and~\eqref{sumruleS} are equivalent, in the sense that they are simultaneously satisf\/ied for all $i$ and $j$ or not, a~property that has no equivalent for f\/inite groups, see Proposition~\ref{proposition1}. We found that certain conditions, like the existence of units, may grant the vanishing of some of these sum rules (Proposition~\ref{proposition3}). But a~certain number of observed vanishing sum rules remains with no obvious explanation, whence the name ``accidental'' that we gave them. The bottom line is that at this stage, we have no precise criterion to decide for which group $G$ and for which irrep of $D(G)$ the sum rule~\eqref{sumruleS} is satisf\/ied. Clearly these points should deserve more attention. 
The paper contains many f\/igures. One of the curious properties of the fusion graph of an ``embedding'' representation of $D(G)$~-- the appropriate generalization of the concept of fundamental faithful representation in this context~-- is that it contains as many connected components as there are irreps or classes in $G$. Finally we want to mention another open issue that was not explicitly discussed in the present paper. It turns out that fusion graphs of doubles of f\/inite subgroups of ${\rm SU}(2)$ have lots of similarities with those appearing in the discussion of rational $c=1$ conformal f\/ield theories obtained by orbifolding ${\rm SU}(2)$ by these f\/inite subgroups~\cite{CappDAppo,DijkgraafEtAl, Ginsparg, Vafa:discretetorsion}. The precise relation, however, remains elusive. We hope to return to this point in the future.
\section{Introduction} Density functional theory (DFT)~\cite{HK, KS} has provided valuable information on the electronic structure of a wide range of materials over several decades. However, it is now well known that popular approximations in DFT such as the local density approximation (LDA) and the generalized gradient approximation (GGA) lack accuracy for strongly correlated systems. One possible strategy to go beyond them is to develop highly accurate DFT functionals. Such attempts have improved the situation to some extent, but the exact form of the exchange-correlation energy functional is out of reach owing to its non-trivial nature. Another strategy is to employ a promising theoretical framework independent of DFT: wave function theory (WFT), such as quantum chemical methods~\cite{Szabo, FCIQMC} and several flavors of novel quantum Monte Carlo (QMC) methods~\cite{QMCreview, FCIQMC}. WFT explicitly handles a many-body wave function, which enables systematic improvement of accuracy. Although this strategy works well for molecular systems, it incurs expensive computational costs for solids as a result of such an explicit treatment of many-body problems. From this viewpoint, the transcorrelated (TC) method~\cite{BoysHandy, Handy, Ten-no1, Umezawa}, one of the WFTs, has been attracting much attention because of its fascinating advantages at practical computational cost. The TC method makes use of a single-particle picture under an effective Hamiltonian explicitly represented with the Jastrow correlation factor. For molecular systems, some recent works have shown the high potential of the TC method~\cite{Ten-no1,Ten-no2,Ten-no3}. Optimization of the wave function with the help of Monte Carlo integration~\cite{Umezawa,LuoTC,LuoVTC} and the development of the canonical TC theory~\cite{CanonicalTC} are also remarkable recent innovations.
For solid-state calculations, the TC method has been shown to reproduce accurate band structures~\cite{Sakuma, TCaccel, TCjfo} and optical absorption spectra~\cite{TCCIS}. The TC method and a related theory have also been successfully applied to the Hubbard model~\cite{TCHubbard,LieAlgebra}. It is noteworthy that the TC method has a close relationship with some QMC methods in the sense that both methods employ the Jastrow correlation factor. As of now, the TC method has been applied only to weakly correlated systems in solid-state calculations. To apply the TC method to strongly correlated solids, it is necessary to employ an efficient basis set for representing the one-electron orbitals in the TC method. In the former studies~\cite{Sakuma, TCaccel}, the TC orbitals were expanded with the LDA orbitals. Whereas a small number of LDA orbitals is required for convergence in weakly correlated systems, this number will inevitably increase when one handles localized states, which are not well described by LDA. As a matter of fact, we shall see that an enormously large number of LDA orbitals is sometimes required, and then the computational requirements become very expensive. This problem should be resolved for further advances in the TC method, especially for application to strongly correlated systems. Note that because the effective Hamiltonian in the TC method is non-Hermitian, whether efficient convergence is achieved in the TC method, just as in other conventional methods, is a non-trivial problem. In this study, we develop an iterative diagonalization scheme for solving a one-body self-consistent-field (SCF) equation in the TC method using a plane-wave basis set. We find that the subspace dimension needed to represent the TC orbitals drastically decreases compared with the former LDA-basis approach. This new development enhances the applicability of the TC method to various systems.
As a test, we apply our new algorithm to some simple $sp$-electron systems with deep core states (i.e., a part of the deep core states are not just included in pseudopotentials but explicitly treated in a many-body wave function) and clarify how the core electrons affect the TC band structures. The treatment of core electrons has recently been recognized as an important issue to be taken care of. In the {\it GW} method~\cite{GW1,GW2,GW3}, it has been pointed out that an error arising from the pseudopotential is not negligible in some cases~\cite{GWpp1,GWpp2,GWpp3,GWpp4,GWpp5,GWpp6,GWpp7,GWpp8}. The same situation was reported also in QMC calculations~\cite{QMCreview,QMCpp1,QMCpp2}. In the TC method, such an effect has not been investigated, but elucidating the impact of core electrons on the band structure is an important and unavoidable problem. In addition, because the TC method employs a many-body wave function common to some QMC methods, such an investigation will provide valuable information also for the QMC community. We find that an explicit treatment of core states improves the position of deep valence states, whereas the band structures in the upper energy region do not exhibit large changes, which means that our choice of the Jastrow factor can provide consistent electronic structures over a wide energy range whether or not the core electrons are explicitly treated. Our findings in this study encourage application of the TC method to strongly correlated materials with $d$ electrons, where an explicit treatment of semi-core $sp$ electrons with the same principal quantum number will be necessary~\cite{GWpp8,QMCpp1}. The present paper is organized as follows. In Sec.~\ref{sec:2}, we give a brief introduction to the basic features of the TC method. In Sec.~\ref{sec:3}, we present a block-Davidson algorithm to solve a one-body SCF equation in the TC method using a plane-wave basis set.
In Sec.~\ref{sec:4}, we demonstrate how efficiently our new algorithm works, and using this algorithm, we compare the band structures calculated with core electrons explicitly treated in many-body wave functions and those just included in pseudopotentials for some simple $sp$-electron systems. Section~\ref{sec:5} summarizes this study. \section{Transcorrelated method}\label{sec:2} Since a detailed theoretical framework of the TC method was presented in previous papers~\cite{BoysHandy, Ten-no1, Umezawa}, we give only a brief review here. In electronic structure calculations, our objective is to solve the Schr{\" o}dinger equation \begin{equation} \mathcal{H}\Psi = E\Psi, \end{equation} for the many-body Hamiltonian $\mathcal{H}$ under the external potential $v_{\mathrm{ext}}(x)$, \begin{equation} \mathcal{H}=\sum_{i=1}^{N} \left( -\frac{1}{2}\nabla_i ^2 + v_{\mathrm{ext}}(x_i) \right) + \frac{1}{2} \sum_{i=1}^{N}\sum_{j=1(\neq i)}^N \frac{1}{|\mathbf{r}_i-\mathbf{r}_j|},\label{eq:Hamil} \end{equation} where $x=(\mathbf{r},\sigma)$ denotes a set of spatial and spin coordinates associated with an electron. First, we formally factorize the many-body wave function $\Psi$ as $\Psi=F\Phi$, where $F$ is the Jastrow factor \begin{equation} F=\mathrm{exp}(-\sum_{i,j(\neq i)=1}^N u(x_i,x_j)), \end{equation} and $\Phi$ is defined as $\Psi/F$. Next, we perform a similarity transformation of the Hamiltonian as \begin{equation} \mathcal{H}\Psi = E\Psi \Leftrightarrow \mathcal{H}_{\mathrm{TC}}\Phi = E \Phi \ \ \ (\mathcal{H}_{\mathrm{TC}} =F^{-1}\mathcal{H}F). \end{equation} In this way, the electron correlation described with the Jastrow factor is incorporated into the similarity-transformed Hamiltonian $\mathcal{H}_{\mathrm{TC}}$, which we call the TC Hamiltonian hereafter.
The Jastrow function $u$ is chosen in the following simple form:~\cite{TCjfo,QMCreview} \begin{equation} u(x,x')=\frac{A}{|\mathbf{r}-\mathbf{r'}|} \left( 1-\mathrm{exp}\left( -\frac{|\mathbf{r}-\mathbf{r'}|}{C_{\sigma,\sigma'} } \right) \right) , \label{eq:Jastrow} \end{equation} where \begin{align} A&=\sqrt{\frac{V}{4\pi N}}\times \sqrt{1-\frac{1}{\varepsilon}}, \label{eq:JastrowA} \\ C_{\sigma, \sigma'} &= \sqrt{2A}\ (\sigma=\sigma'), \sqrt{A}\ (\sigma\neq\sigma'), \end{align} with $N$, $V$, and $\varepsilon$ being the number of electrons in the simulation cell, the volume of the simulation cell, and the static dielectric constant, respectively. Core electrons effectively included in pseudopotentials are not counted in the definition of $N$, which should be the same as that in Eq.~(\ref{eq:Hamil}). The choice of $\varepsilon$ in this study is described in Sec.~\ref{sec:4A}. The asymptotic behavior of this function is determined so as to reproduce the screened electron-electron interaction in solids, $1/(\varepsilon r)$, in the long-range limit~\cite{BohmPines, TCjfo} and to satisfy the cusp condition~\cite{cusp,cusp2} in the short-range limit. To satisfy the exact cusp condition for singlet and triplet pairs of electrons, one should adopt the operator representation of the Jastrow function, which introduces a troublesome non-terminating series of the effective interaction in the TC Hamiltonian~\cite{Ten-nocusp}. Thus we use the approximate cusp conditions in the same manner as in QMC studies~\cite{QMCreview}. Because this Jastrow function captures both the short- and long-range correlations, a mean-field approximation for the TC Hamiltonian is expected to work well.
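The two limits of the Jastrow function quoted above can be checked numerically. The following is a minimal Python sketch, not part of any actual implementation; the helper name \texttt{jastrow\_u} and the value $A=1$ are illustrative assumptions, since in practice $A$ is fixed by Eq.~(\ref{eq:JastrowA}).

```python
import math

def jastrow_u(r, A, same_spin):
    # Jastrow function u(r) = (A/r) * (1 - exp(-r/C)),
    # with C = sqrt(2A) for parallel spins and sqrt(A) otherwise.
    C = math.sqrt(2.0 * A) if same_spin else math.sqrt(A)
    return (A / r) * (1.0 - math.exp(-r / C))

A = 1.0  # illustrative value; in practice fixed by N, V, and epsilon

# Long-range limit: u(r) -> A/r, the scale of the screened interaction.
print(abs(jastrow_u(50.0, A, same_spin=False) - A / 50.0))  # ~ 0

# Short-range limit: u stays finite at coalescence, u(0+) = A/C.
print(jastrow_u(1e-8, A, same_spin=False))  # ~ A/sqrt(A)  = 1
print(jastrow_u(1e-8, A, same_spin=True))   # ~ A/sqrt(2A) = 1/sqrt(2)
```

The spin dependence of $C_{\sigma,\sigma'}$ only changes the short-range part; at long range both spin channels behave as $A/r$.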
Here we approximate $\Phi$ to be a single Slater determinant consisting of one-electron orbitals: $\Phi=\mathrm{det}[ \phi_i(x_j) ]$, and then a one-body SCF equation is derived: \begin{align} \left( -\frac{1}{2}\nabla_1^2 +v_{\mathrm{ext}}(x_1) \right) \phi_i (x_1)\notag \\ + \sum_{j=1}^N \int \mathrm{d}x_2\ \phi_j^*(x_2) v_{\mathrm{2body}}(x_1,x_2) \mathrm{det} \left[ \begin{array}{rrr} \phi_i(x_1) & \phi_i(x_2) \\ \phi_j(x_1) & \phi_j(x_2) \\ \end{array} \right] \notag \\ - \frac{1}{2}\sum_{j=1}^N \sum_{k=1}^N \int \mathrm{d}x_2 \mathrm{d}x_3\ \phi_j^*(x_2)\phi_k^*(x_3)v_{\mathrm{3body}}(x_1,x_2,x_3) \notag \\ \times \mathrm{det} \left[ \begin{array}{rrr} \phi_i(x_1) & \phi_i(x_2) & \phi_i(x_3) \\ \phi_j(x_1) & \phi_j(x_2) & \phi_j(x_3) \\ \phi_k(x_1) & \phi_k(x_2) & \phi_k(x_3) \end{array} \right] = \sum_{j=1}^N \epsilon_{ij} \phi_j(x_1), \label{eq:SCF} \end{align} where $v_{\mathrm{2body}}(x_1,x_2)$ and $v_{\mathrm{3body}}(x_1,x_2,x_3)$ are the effective interactions in the TC Hamiltonian defined as \begin{align} v_{\mathrm{2body}}(x_1,x_2)\notag\\ \equiv \frac{1}{|\mathbf{r}_1-\mathbf{r}_2|}+\frac{1}{2}\big(\nabla_1^2 u(x_1,x_2)+\nabla_2^2 u(x_1,x_2)\notag \\ -(\nabla_1 u(x_1,x_2))^2-(\nabla_2 u(x_1,x_2))^2\big) \notag \\ + \nabla_1 u(x_1,x_2)\cdot \nabla_1 + \nabla_2 u(x_1,x_2)\cdot \nabla_2, \end{align} and \begin{align} v_{\mathrm{3body}}(x_1,x_2,x_3)\notag\\ \equiv\nabla_1 u(x_1,x_2)\cdot \nabla_1 u(x_1,x_3) + \nabla_2 u(x_2,x_1) \cdot \nabla_2 u(x_2,x_3) \notag \\ + \nabla_3 u(x_3,x_1) \cdot \nabla_3 u(x_3,x_2). \end{align} By solving Eq.~(\ref{eq:SCF}), one can optimize the TC one-electron orbitals. This procedure costs just the same order as the uncorrelated Hartree-Fock (HF) method with the help of an efficient algorithm~\cite{TCaccel}.
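For the radial Jastrow function of Eq.~(\ref{eq:Jastrow}), both gradient terms in $v_{\mathrm{2body}}$ coincide, so its local (multiplicative) part reduces to $1/r + \nabla^2 u - (u')^2$. The following Python sketch, with hand-coded analytic derivatives and the illustrative value $A=1$, shows how the choice of $C_{\sigma,\sigma'}$ tames the Coulomb singularity; the derivative-operator terms $\nabla u\cdot\nabla$ of $v_{\mathrm{2body}}$ are deliberately omitted here.

```python
import math

def local_v2body(r, A, same_spin):
    # Local part 1/r + laplacian(u) - (u')^2 for the radial Jastrow
    # u(r) = (A/r)(1 - exp(-r/C)); the grad(u).grad operator terms
    # of v_2body are not included in this sketch.
    C = math.sqrt(2.0 * A) if same_spin else math.sqrt(A)
    E = math.exp(-r / C)
    u1 = -A * (1.0 - E) / r**2 + A * E / (C * r)                   # u'(r)
    u2 = 2*A*(1.0-E)/r**3 - 2*A*E/(C*r**2) - A*E/(C**2*r)          # u''(r)
    lap = u2 + 2.0 * u1 / r                                        # radial Laplacian
    return 1.0 / r + lap - u1**2

A, r = 1.0, 1e-4  # illustrative value of A; small interelectron distance
# Antiparallel spins (C = sqrt(A)): the 1/r divergence of the bare
# Coulomb term is cancelled and the local interaction stays finite.
print(local_v2body(r, A, same_spin=False))
# Parallel spins (C = sqrt(2A)): half of the divergence remains,
# r * v -> 1/2, consistent with the approximate cusp conditions.
print(r * local_v2body(r, A, same_spin=True))
```

This numerically illustrates why $C_{\sigma\neq\sigma'}=\sqrt{A}$ removes the antiparallel-spin singularity, while the parallel-spin channel keeps half of it, to be compensated by antisymmetry.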
By construction, $\Phi$ can be systematically improved over a single Slater determinant~\cite{Ten-no1,Ten-no2,Ten-no3,TCCIS,TCMP2}, which is an important advantage of the TC method, but in this study, we focus on the case where $\Phi$ is taken to be a single Slater determinant. We note that the non-Hermiticity of the TC Hamiltonian originating from the non-unitarity of the Jastrow factor is essential in the description of the electron correlation effects. It is obvious that a purely imaginary Jastrow function $u(x_i, x_j)$ that makes the Jastrow factor exp[$-\sum_{i,j(\neq i)} u(x_i, x_j)$] unitary cannot fulfill the cusp condition and the long-range asymptotic behavior mentioned before. Although some previous studies adopted approximations for the effective interaction in the TC Hamiltonian to restore Hermiticity for molecular systems with a small number of electrons~\cite{LuoVTC,CanonicalTC}, it is unclear whether such approximations are valid for general molecular and periodic systems. Therefore, in this study, we explicitly handle the non-Hermiticity of the TC Hamiltonian without introducing additional approximations. We also mention the bi-orthogonal formulation of the TC method, which we call the BiTC method. The BiTC method was applied to molecules~\cite{Ten-no2} and recently also to solids~\cite{TCMP2}. A detailed description of the BiTC method can be found in these papers. In the BiTC method, we use left and right Slater determinants consisting of different one-electron orbitals: $X=\mathrm{det}[\chi_i(x_j)]$ and $\Phi=\mathrm{det}[\phi_i(x_j)]$, respectively, with the bi-orthogonal condition $\langle \chi_i | \phi_j \rangle = \delta_{i,j}$ and the normalization condition $\langle \phi_i | \phi_i \rangle = 1$. Then the one-body SCF equation becomes slightly different from Eq.~(\ref{eq:SCF}) in the sense that the `bra' orbitals $\phi^*(x)$ are replaced with $\chi^*(x)$.
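The bi-orthogonal conditions $\langle \chi_i | \phi_j \rangle = \delta_{ij}$ and $\langle \phi_i | \phi_i \rangle = 1$ can be enforced by a simple linear transformation. The following Python sketch (the name \texttt{biorthonormalize} is illustrative, and a small dense matrix stands in for the non-Hermitian Hamiltonian) assumes the left-right overlap matrix is invertible.

```python
import numpy as np

def biorthonormalize(X, Phi):
    # Columns of X are left vectors chi_i, columns of Phi are right
    # vectors phi_i. Rescale them so that <chi_i|phi_j> = delta_ij and
    # <phi_i|phi_i> = 1, assuming X^H Phi is invertible.
    Phi = Phi / np.linalg.norm(Phi, axis=0)   # normalize right vectors
    S = X.conj().T @ Phi                      # left-right overlap matrix
    X = X @ np.linalg.inv(S).conj().T         # now X^H Phi = identity
    return X, Phi

# Left and right eigenvectors of a small non-Hermitian matrix
rng = np.random.default_rng(0)
H = np.diag([1.0, 2.0, 3.0, 4.0]) + 0.1 * rng.standard_normal((4, 4))
_, Phi = np.linalg.eig(H)
_, X = np.linalg.eig(H.conj().T)  # left eigenvectors of H
X, Phi = biorthonormalize(X, Phi)
print(np.allclose(X.conj().T @ Phi, np.eye(4)))  # True
```

Note that, unlike an orthonormal set, the individual left vectors are not normalized; only the mutual overlaps are fixed, mirroring the BiTC conditions above.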
Because the similarity transformation of the Hamiltonian introduces non-Hermiticity, such a formulation yields a different result from the ordinary TC method. In this study, we investigate both the TC and BiTC methods. \section{Block-Davidson algorithm for plane-wave calculation}\label{sec:3} In the former studies of the TC method applied to solid-state calculations, we used LDA orbitals as basis functions to expand the TC orbitals in solving the SCF equation for an efficient reduction of computational cost~\cite{Sakuma}. This prescription works well if a moderate number of LDA orbitals is required for convergence, as in previous studies. However, this is not necessarily the case when we deal with systems where the TC and LDA orbitals are expected to exhibit sizable differences, e.g., strongly correlated systems. To overcome this problem, we develop an iterative diagonalization scheme using a plane-wave basis set. Because the TC Hamiltonian is non-Hermitian owing to the similarity transformation, some standard methods such as the conjugate gradient (CG) method~\cite{CG} do not work, and so we adopted the block-Davidson algorithm~\cite{Davidson1,Davidson2}. Whereas the block-Davidson algorithm has been successfully applied to other conventional methods such as DFT, some modifications described below are necessary to adapt it to the TC method. Figure~\ref{fig:Algo} presents the flow of our calculation. Here we define the TC-Fock operator $F[\phi]$ by requiring that $F[\phi]\phi_i(x)$ equal the left-hand side of Eq.~(\ref{eq:SCF}). Our algorithm consists of a double loop. In the inner loop, labeled with $p$, the subspace dimension is gradually increased. In the outer loop, labeled with $q$, the TC-Fock operator is diagonalized in that subspace and convergence is checked. Both indices start from $1$. A detailed description of the algorithm is presented below.
\subsection{Inner loop: subspace extension} First, we begin with the initial trial vectors $\{ v_1^{(1,q)}, v_2^{(1,q)}, \dots, v_n^{(1,q)} \}$ and eigenvalues $\{ \epsilon_1^{(q)},\epsilon_2^{(q)},\dots,\epsilon_n^{(q)} \}$ of the TC-Fock operator, i.e. the initial estimates of the TC orbitals $\phi$ and the eigenvalues of the $\epsilon_{ij}$ matrix in Eq.~(\ref{eq:SCF}), at each $k$-point $\mathbf{k}$ where $n$ is the number of bands to be calculated here. These initial estimates are obtained in the previous $(q-1)$-th outer loop, where $\langle v_i^{(1,q)} | v_j^{(1,q)} \rangle = \delta_{ij}$ ($1\leq i, j \leq n$) is satisfied. In the first loop, i.e. $q=1$, we set LDA orbitals and their energies as initial guesses. Next, we calculate $\{ \tilde{v}_1^{(p+1,q)}, \tilde{v}_2^{(p+1,q)},\dots, \tilde{v}_n^{(p+1,q)} \} \equiv \{ A_1^{(p,q)}v_1^{(p,q)}, A_2^{(p,q)}v_2^{(p,q)},\dots, A_n^{(p,q)}v_n^{(p,q)} \}$ where \begin{equation} A_m^{(p,q)}v_m^{(p,q)}(\mathbf{G})\equiv T_m^{(p,q)}(\mathbf{G})(F[v^{(1,\tilde{q})}]-\epsilon_m^{(q)})v_m^{(p,q)}(\mathbf{G}) \end{equation} for the wave vector $\mathbf{G}$ and the preconditioner $T_m^{(p,q)}(\mathbf{G})$. $\tilde{q}$ is defined later. The preconditioner is introduced to reduce numerical errors by filtering high-energy plane waves. Diagonal elements of the Fock operator in terms of a plane-wave basis set are usually used in the preconditioner, but their evaluation requires sizable computational effort for the TC method owing to the existence of the three-body terms. Thus we tried two types of the preconditioner: (i) using diagonal elements of the kinetic energy \begin{equation} T_m^{(p,q)}(\mathbf{G})\equiv \left( \frac{1}{2}|\mathbf{k}+\mathbf{G}|^2 -\epsilon_m^{(q)} \right)^{-1}, \end{equation} and (ii) an empirical form proposed by M. C. 
Payne {\it et al.}~\cite{Payne_precon} in the context of the CG method \begin{equation} T_m^{(p,q)}(\mathbf{G})\equiv \frac{27+18x+12x^2+8x^3}{27+18x+12x^2+8x^3+16x^4}, \end{equation} where \begin{equation} x = \frac{1}{2}|\mathbf{k}+\mathbf{G}|^2 \left( \sum_{\mathbf{G'}} \frac{1}{2}|\mathbf{k}+\mathbf{G'}|^2 |v_m^{(p,q)}(\mathbf{G'})|^2 \right)^{-1}. \end{equation} We found that both schemes sufficiently improve the convergence of calculations. Here we used the latter one, but no large difference is observed for calculations in this study. Then we perform the Gram-Schmidt orthonormalization for $\{ \tilde{v}_1^{(p+1,q)}, \tilde{v}_2^{(p+1,q)},\dots, \tilde{v}_n^{(p+1,q)} \}$ and obtain $\{ v_1^{(p+1,q)}, v_2^{(p+1,q)}, \dots, v_n^{(p+1,q)} \}$ where $v_m^{(p+1,q)}$ is orthogonalized with $v_i^{(j,q)}$ ($1\leq i \leq n$, $1\leq j \leq p$) and $v_i^{(p+1,q)}$ ($1\leq i \leq m-1$). In other words, the Gram-Schmidt orthonormalization is performed for $n(p+1)$ vectors, $\{ \{v_1^{(1,q)}, v_2^{(1,q)}, \dots, v_n^{(1,q)}\}, \allowbreak \{ v_1^{(2,q)}, v_2^{(2,q)}, \dots, v_n^{(2,q)}\}, \allowbreak \dots, \{v_1^{(p,q)}, v_2^{(p,q)}, \dots, v_n^{(p,q)}\}, \allowbreak \{\tilde{v}_1^{(p+1,q)}, \tilde{v}_2^{(p+1,q)}, \dots, \tilde{v}_n^{(p+1,q)}\} \}$, and only $\tilde{v}$ are changed. The subspace for diagonalization is now spanned with these $n(p+1)$ vectors. Then we return to the beginning of the $p$ loop and continue to expand the subspace dimension up to $np_{\mathrm{max}}$. \subsection{Outer loop: diagonalization and convergence check} After the end of the inner $p$ loop, we evaluate the matrix elements of $F[v^{(1,\tilde{q})}]$ in the subspace spanned with $v_i^{(j,q)}$ ($1\leq i \leq n$, $1\leq j \leq p_{\mathrm{max}}$), and then diagonalize it. 
The eigenvectors and eigenvalues obtained here are used as the initial guesses for the next $q$ loop: $\{ v_1^{(1,q+1)}, v_2^{(1,q+1)}, \dots, v_n^{(1,q+1)} \}$ and $\{ \epsilon_1^{(q+1)}, \epsilon_2^{(q+1)}, \dots,\epsilon_n^{(q+1)} \}$, where only the $n$ eigenvectors and eigenvalues with the lowest eigenvalues are retained. Trial vectors $v$ included in $F[v^{(1,\tilde{q})}]$ are updated every $N_{\mathrm{update}}$ loops of $q$, i.e., $\tilde{q}$ is defined as (the maximum multiple of $N_{\mathrm{update}}$) $+1$ not exceeding $q$. At the same time as such an update, we check convergence of $\tilde{E}^{(q)}\equiv \sum_i f(\epsilon_i^{(q)})\epsilon_i^{(q)}$, where $f(\epsilon)$ is the occupation number, instead of the somewhat expensive evaluation of the total energy in the TC method. $f(\epsilon)$ is always 0 or 1 in this study but can be a fractional number for metallic systems. If we store the matrix elements of the one-, two-, and three-body terms in the Fock matrix separately, the total energy can be efficiently evaluated, and a future implementation along this line is possible, but the present convergence criterion does not cause any problems in this study. For band structure calculations after a self-consistent determination of the occupied orbitals\cite{nonSCFnote}, we instead check convergence of $\sum_i \epsilon_i^{(q)}$. If convergence is achieved, we evaluate the total energy once for output and the calculation ends. Otherwise, the $q$ loop is continued. In principle, other convergence criteria are also applicable, e.g., that with respect to the electron density. All updates, evaluation of matrix elements, generation of new basis vectors, Gram-Schmidt orthonormalization, and diagonalization in our algorithm are performed simultaneously with respect to the $k$-points. \subsection{Other issues} We comment on the choice of the parameters. In this study, we fixed $p_{\mathrm{max}}$ to be 2, i.e., the maximum value of the subspace dimension is $2n$.
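The double loop described above can be summarized by the following toy Python sketch, in which a small dense non-Hermitian matrix stands in for the TC-Fock operator. This is an illustrative assumption throughout: a clamped diagonal preconditioner replaces types (i) and (ii), QR factorization plays the role of the Gram-Schmidt step, and all names and parameter values are our own.

```python
import numpy as np

def block_davidson(H, n, p_max=2, n_outer=100, tol=1e-10):
    # Toy double loop: the inner loop extends the subspace to n*p_max
    # vectors with preconditioned residuals; the outer loop solves the
    # projected non-Hermitian eigenproblem and checks convergence of
    # the n lowest eigenvalues.
    N = H.shape[0]
    V = np.eye(N, n)                        # initial trial vectors
    d = np.diag(H).real
    eps = np.sort(d)[:n]
    for q in range(n_outer):
        W = V.copy()
        for p in range(p_max - 1):          # inner loop: extend subspace
            R = H @ V - V * eps             # residuals (F - eps_m) v_m
            denom = d[:, None] - eps[None, :]
            denom = np.where(np.abs(denom) < 1e-2, 1e-2, denom)
            W = np.hstack([W, R / denom])   # simple diagonal preconditioner
        W, _ = np.linalg.qr(W)              # Gram-Schmidt-like step
        w, S = np.linalg.eig(W.conj().T @ H @ W)  # non-Hermitian subspace
        idx = np.argsort(w.real)[:n]        # keep the n lowest eigenpairs
        eps_new = w[idx].real
        V, _ = np.linalg.qr((W @ S[:, idx]).real)
        if np.max(np.abs(eps_new - eps)) < tol:
            return eps_new, V
        eps = eps_new
    return eps, V
```

In the actual plane-wave implementation the operator is of course never stored as a dense matrix; only its action on trial vectors is needed, which is what makes the scheme affordable.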
We also fixed $N_{\mathrm{update}}$ to be 3 for SCF calculations. We verified that these choices work well both for accuracy and efficiency in this study. For the BiTC method, we employ essentially the same algorithm, but the orthonormalization condition is replaced with the bi-orthonormalization conditions: $\langle w_i^{(1,q)} | v_j^{(1,q)} \rangle = \delta_{ij}$ and $\langle v_i^{(1,q)} | v_i^{(1,q)} \rangle = 1$ ($1\leq i, j \leq n$), where $w$ are the left trial vectors~\cite{Davidson_nonsym}. Note that the left and right eigenvectors are obtained simultaneously in the subspace diagonalization. To improve the speed of convergence, we also employed Pulay's scheme for density mixing~\cite{Pulay1,Pulay2}. The density mixing is performed when $v$ in $F[v]$ are updated. \begin{figure} \begin{center} \includegraphics[scale=.3]{Algorithm_last.pdf} \caption{Flow of the iterative diagonalization scheme for solving the SCF equation in the TC method using a plane-wave basis set.} \label{fig:Algo} \end{center} \end{figure} \section{Results}\label{sec:4} \subsection{Computational details}\label{sec:4A} Norm-conserving Troullier-Martins pseudopotentials~\cite{TM} in the Kleinman-Bylander form~\cite{KB} constructed in LDA~\cite{PZ81} calculations were used in this study. For Li, C, and Si atoms, we used two types of pseudopotentials for each: all-electron or [He]-core for Li and C, and [He]- or [Ne]-core for Si. In this study, the 1$s$ core states of Si are always included in the pseudopotential and not explicitly treated. Even in `all-electron' calculations, we used small core radii (0.9 Bohr for the H $s$ state, 0.4 and 0.8 Bohr for the Li $s$ and $p$ states, and 0.2 Bohr for the C $s$ and $p$ states, respectively) for the non-local projection. In this study, we call calculations using smaller-core pseudopotentials for each atomic species and those using larger-core ones `with-core' and `without-core' calculations, respectively.
We call the occupied states in `without-core' calculations valence states hereafter. Singularities of the electron-electron Coulomb repulsion and the Jastrow function in the $k$-space were handled with a method proposed by Gygi and Baldereschi~\cite{GygiBaldereschi} using an auxiliary function of the same form as Ref.~[\onlinecite{auxfunc}]. Static dielectric constants calculated in our previous study~\cite{TCjfo} using an RPA formula were adopted in the determination of the Jastrow parameter $A$ (Eq.~(\ref{eq:JastrowA})). Here we used the same value of the static dielectric constant independent of the treatment of core states for a fair comparison between with-core and without-core calculations. Note that the value of $N$ (see Eqs.~(\ref{eq:Hamil}) and (\ref{eq:JastrowA})) is by definition different between with-core and without-core calculations. Here, consistency of $N$ between Eq.~(\ref{eq:Hamil}) and Eq.~(\ref{eq:JastrowA}) is required to describe a screening effect through the three-body terms in the TC Hamiltonian in the way presented in our previous study~\cite{TCjfo}. Experimental lattice parameters used in this study were also the same as those in Ref.~[\onlinecite{TCjfo}]. LDA calculations were performed with the \textsc{tapp} code~\cite{tapp1,tapp2} to make an initial guess for the TC orbitals. Subsequent TC calculations were performed with the \textsc{tc}{\small ++} code~\cite{Sakuma,TCaccel}. We also used LDA orbitals as basis functions for the TC orbitals in Sec.~\ref{sec:4B} to compare their performance with that of calculations using a plane-wave basis set. These calculations are called LDA-basis and plane-wave-basis calculations hereafter. \subsection{Efficiency of plane-wave-basis calculation}\label{sec:4B} To see how efficiently our new algorithm works, we performed band structure calculations for bulk silicon with core states.
This is because a regular setup, bulk silicon without core states, requires only a small subspace dimension both for LDA-basis and plane-wave-basis calculations and is not appropriate as a test case here. In Figure~\ref{fig:bandconv}, we present the convergence behavior of the band structure for bulk silicon with core states in terms of the number of bands $n$, where the subspace dimension for diagonalization is $n$ and $np_{\mathrm{max}}=2n$ for LDA-basis and plane-wave-basis calculations, respectively (see Section~\ref{sec:3}). The band gap between the highest valence and lowest conduction bands at the $\Gamma$ point, that between the highest valence band at the $\Gamma$ point and the lowest conduction band at the $X$ point, and the valence bandwidth at the $\Gamma$ point are shown here. The cutoff energy of plane waves was 256 Ry, and a $2\times 2\times 2$ $k$-point mesh was used throughout this subsection. Convergence of the total energy is also shown in Figure~\ref{fig:Econv}. In these figures, we can see that LDA-basis calculations require an enormously large subspace dimension. Even when we used 800 LDA bands, the direct gap at the $\Gamma$ point still exhibits an error of about 0.8 eV compared with the well-converged value obtained in the plane-wave-basis calculation. The total energy also shows much slower convergence in LDA-basis calculations than in plane-wave-basis calculations. Because one needs to take such a huge number of LDA bands even for bulk silicon when one explicitly takes the core states into account, LDA-basis calculations for more complex materials such as strongly correlated systems seem to be intractable both in computation time and memory requirement. On the other hand, plane-wave-basis calculations require a moderate number of bands to achieve sufficient convergence, especially for the band structure calculations.
\begin{figure} \begin{center} \includegraphics[width=8 cm]{band.pdf} \caption{Convergence of the band structure for bulk silicon with core states in terms of the number of bands is presented for LDA-basis and plane-wave-basis calculations. Lines show the values corresponding to the rightmost data points for plane-wave-basis calculations as guides to the eyes.} \label{fig:bandconv} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8 cm]{totE.pdf} \caption{Convergence of the total energy for bulk silicon with core states in terms of the number of bands is presented for LDA-basis and plane-wave-basis calculations. A line shows the value corresponding to the rightmost data point for plane-wave-basis calculations as a guide to the eyes.} \label{fig:Econv} \end{center} \end{figure} Figure~\ref{fig:banditer} presents the convergence behavior of the band structure with respect to the number of iterations for LDA-basis and plane-wave-basis calculations. The LDA-basis and plane-wave-basis calculations shown here were performed with the number of bands $n$ being 800 and 60, respectively, where the convergence errors in calculations with the two basis sets are comparable using these values of $n$, as we have seen in Figs.~\ref{fig:bandconv} and \ref{fig:Econv}. The number of iterations here denotes that of the outer loop described in Section~\ref{sec:3} for plane-wave-basis calculations. Since we did not calculate the total energy at each iteration in plane-wave-basis calculations, as noted in Section~\ref{sec:3}, we only show data for the band structure. We verified that about 5 iterations are sufficient to obtain convergence within an error of 0.1 eV for both basis sets in this case. As for the computation time, one iteration takes about 60 and 22 hours without parallelization on our workstation for LDA-basis and plane-wave-basis calculations, respectively.
Note that the LDA-basis calculation with the number of bands $n=800$ still does not achieve sufficient convergence, as mentioned in the previous paragraph, which means that, to achieve the same level of accuracy, LDA-basis calculations need much longer computation time and larger memory requirements than plane-wave-basis calculations. \begin{figure} \begin{center} \includegraphics[width=8 cm]{banditer.pdf} \caption{Convergence of the band structure for bulk silicon in terms of the number of iterations is presented for LDA-basis ($n=800$) and plane-wave-basis ($n=60$) calculations.} \label{fig:banditer} \end{center} \end{figure} \subsection{Application to band structure calculations with core states} As a test of our new algorithm, we investigated the band structures with core states using a plane-wave basis set. Because accurate calculation of the total energy with an explicit treatment of core electrons requires sizable computational effort, we concentrate on the band structures in this study. In Figure~\ref{fig:LiH}, we present the band structures of LiH calculated with the TC and BiTC methods using a $6\times 6\times 6$ $k$-mesh and $n=20$. We used 49 and 169 Ry for the cutoff energy of plane waves in without-core and with-core calculations, respectively. There is almost no difference between the TC and BiTC band structures for this material. We can see that the position of the valence band is affected by the inclusion of core electrons, which improves the agreement of the calculated band gap with the experimental one, as presented in Table~\ref{table:band}~\cite{noteLiH}. Band structures in the upper energy region are almost unchanged between with-core and without-core calculations. \begin{figure} \begin{center} \includegraphics[width=8.5 cm]{LiH_band.pdf} \caption{(a) TC and (b) BiTC band structures of LiH with (red solid lines) and without (blue broken lines) an explicit treatment of the Li-1$s$ core states.
Conduction band bottoms for both band dispersions are set to zero in the energy scale.} \label{fig:LiH} \end{center} \end{figure} We also calculated the band structures of $\beta$-SiC as shown in Figure~\ref{fig:SiC} using a $4\times 4\times 4$ $k$-mesh. Cutoff energies of 121 and 900 Ry and $n=30$ and $40$ were used for without-core and with-core calculations, respectively. We again see that an explicit inclusion of core electrons makes the position of the deepest valence band a bit shallower. A similar trend is found also for Si, where the overestimated valence bandwidth is improved by an inclusion of core states, as presented in Table~\ref{table:band}. This feature may be relevant to the observed overestimation of the valence bandwidth for bulk silicon calculated with the state-of-the-art diffusion Monte Carlo (DMC) method~\cite{SiDMC}. In the upper energy region, the band structures of $\beta$-SiC are again almost unchanged but exhibit some differences between with-core and without-core calculations. The difference in the BiTC band structures can be interpreted as mere shifts for each angular momentum. The TC band structures, on the other hand, show changes that are not constant shifts. Note that these slight changes are natural because our Jastrow factor is different between with-core and without-core calculations, as mentioned before. Furthermore, the LDA core states used to construct our LDA pseudopotentials should be different from those in the TC and BiTC methods, which can, in principle, produce complicated changes in the band structures. In addition, the deep core states explicitly included in the many-body wave function are expected to dominate the wave-function optimization, which can affect the obtained eigenvalues in the upper energy region. It is noteworthy that, nevertheless, the BiTC band structures are always improved by an inclusion of deep core states, as shown in Table~\ref{table:band}, for the materials we calculated here.
The position of the deepest valence bands is always improved for both the TC and BiTC methods. Further investigation of different materials is an important future issue. It is also an important future issue to construct (Bi)TC pseudopotentials that are free from the aforementioned pseudopotential errors evaluated in our analysis. Such development is a nontrivial task because the Jastrow factor differs between the with-core and without-core calculations for solids and the atomic calculations required in constructing the pseudopotentials, but the similarity between the band structures obtained in with-core and without-core calculations is an encouraging observation for this purpose. \begin{figure} \begin{center} \includegraphics[width=8.5 cm]{SiC_band.pdf} \caption{(a) TC and (b) BiTC band structures of $\beta$-SiC with (red solid lines) and without (blue broken lines) an explicit treatment of the C-1$s$ and Si-2$s$2$p$ core states. Valence-band tops for both band dispersions are set to zero in the energy scale.} \label{fig:SiC} \end{center} \end{figure} \begin{table*} \begin{center} \begin{tabular}{c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{LDA} & \multicolumn{2}{c}{TC} & \multicolumn{2}{c}{BiTC} & Expt.\\ & with core & N & Y & N & Y & N & Y & -\\ \hline LiH & Band gap & 2.6 & 2.6 & 6.6 & 6.0 & 6.6 & 5.9 & 5.0$^a$\\ & Valence bandwidth & 5.5 & 5.3 & 6.8 & 6.5 & 6.8 & 6.5 & 6.3$\pm$1.1$^b$\\ $\beta$-SiC & Indirect band gap & 1.3 & 1.3 & 3.1 & 3.6 & 3.9 & 3.6 & 2.4$^c$ \\ & Direct band gap & 6.3 & 6.3 & 9.3 & 8.9 & 9.4 & 8.7 & 6.0$^d$ \\ & Valence bandwidth & 15.3 & 15.3 & 19.6 & 18.5 & 19.3 & 18.6 & - \\ Si & Indirect band gap & 0.5 & 0.5 & 2.0 & 2.2 & 2.2 & 1.8 & 1.17$^e$\\ & Direct band gap & 2.6 & 2.5 & 4.6 & 4.5 & 4.6 & 3.6 & 3.40, 3.05$^f$ \\ & Valence bandwidth & 11.9 & 11.9 & 15.1 & 13.5 & 15.1 & 14.2 & 12.5$\pm$0.6$^f$\\ \hline \hline \end{tabular} \end{center} \caption{\label{table:band} Band energies calculated with LDA, TC, and BiTC methods for LiH,
$\beta$-SiC, and Si. All values are in eV. $^a$ Reference \onlinecite{LiHbandExpt}. $^b$ Reference \onlinecite{LiHbwdthExpt}. $^c$ Reference \onlinecite{SiCbandExpt}. $^d$ Reference \onlinecite{SiCdbandExpt}. $^e$ Reference \onlinecite{SibandExpt}. $^f$ From the compilation given in Reference \onlinecite{SidbandExpt}.} \end{table*} \section{Summary}\label{sec:5} In this study, we developed an iterative diagonalization scheme for solving the SCF equation of the TC method using a plane-wave basis set. We made use of the block-Davidson algorithm and verified that our new scheme effectively reduces the computational requirements in both memory and time. The influence of core states in TC calculations was also investigated. We found that an explicit treatment of core states improves the position of deep valence states, whereas the band structures in the upper energy region do not exhibit large changes, which means that our choice of the Jastrow factor can provide consistent electronic structures over a wide energy range whether the core electrons are explicitly treated or not. Our study opens the way to further applications of the TC method to strongly correlated systems. \section*{Acknowledgments} This study was supported by a Grant-in-Aid for Young Scientists (B) (Number 15K17724) from the Japan Society for the Promotion of Science, the MEXT Element Strategy Initiative to Form Core Research Center, and the Computational Materials Science Initiative, Japan.
\section*{Introduction} The symmetry of the order parameter is one of the most important questions for the high temperature superconductors (HTSC). This issue is especially interesting as a function of doping level. Electronic Raman scattering (ELRS) plays a special role in addressing this problem \cite{Abr73,Kle84,Dev95,Dev97,Car96,Carb96,Weng97}. The symmetry properties of the order parameter can be determined by investigating the anisotropy of the scattering cross section for the different symmetry components. Different scattering components originate from different areas of the Fermi surface (FS). The ratio of one scattering component to another reflects the changes of the FS topology with doping\cite{Chen97}. There are several theoretical attempts \cite{Dev95,Dev97,Car96,Carb96,Weng97} to describe the electronic Raman scattering in HTSC at $T<T_c$, but there is still no consensus concerning the exact mechanism of the scattering. In the optimally doped HTSC the electronic Raman scattering from single crystals in the superconducting state reveals several common features \cite {Dev95,Dev97,Car96,Carb96,Weng97,Hackl88,Coop88,Chen93,Stauf92,Hof94,Chen94,Nem93,Gasp97}. The superconducting transition manifests itself in a redistribution of the ELRS continuum into a broad peak (pair-breaking peak), the intensity and frequency position $\Omega$ of which differ for the different symmetry components. For the optimally doped samples, one has $\Omega(B_{1g}) > \Omega(B_{2g}) > \Omega(A_{1g})$ \cite {Dev95,Dev97,Car96,Carb96,Weng97,Hackl88,Coop88,Chen93,Stauf92,Hof94,Chen94,Nem93,Gasp97}. The scattering on the low frequency side of the pair-breaking peak does not reveal additional peaks or a cut-off, which would be an indication of an anisotropic s-wave component. In contrast, a power-law decrease of the scattering intensity toward zero frequency shift is observed.
In the B$_{1g}$ scattering component this power law is close to an $\omega^3$-dependence, while in the A$_{1g}$ and B$_{2g}$ scattering components a linear-in-$\omega$ decrease is observed. The above mentioned features were first described by Devereaux et al.\cite{Dev95} in the framework of a d-wave order parameter, i.e. using the gap function $\Delta(\vec{k})=\Delta_{max}\cos2\phi$, where $\phi$ is the angle between $\vec{k}$ and the Cu-O bond direction within the CuO$_2$ plane. The general description of the Raman scattering cross section follows from the fluctuation-dissipation theorem. For the case of nonresonant scattering the Raman scattering cross section is given by the Raman response function $\chi_{\gamma\gamma}(\vec{q},\omega)$: \begin{equation} \frac{\partial^2\sigma}{\partial\omega\partial\Omega}\propto \left[1+n(\omega)\right]\Im \mbox{m}\,\chi_{\gamma\gamma}(\vec{q},\omega), \end{equation} where $n(\omega)=1/(\exp(\omega/T)-1)$ is the Bose factor and $\omega=\omega_I-\omega_S$ is the Stokes Raman shift, with $\omega_I$ ($\omega_S$) the frequency of the incident (scattered) photon. The Raman response function due to the breaking of Cooper pairs in a superconductor, including the Coulomb repulsion, can be written as \cite{Abr73,Kle84}: \begin{equation}\label{rares} \chi_{\gamma\gamma}(\vec{q},\omega)=\langle\gamma^2_{\vec{k}} \lambda_{\vec{k}}\rangle -\frac{\langle\gamma_{\vec{k}}\lambda_{\vec{k}}\rangle^2} {\langle\lambda_{\vec{k}}\rangle}, \end{equation} where $\gamma_{\vec{k}}$ is the Raman vertex, which describes the strength of the corresponding Raman transition, $\lambda_{\vec{k}}$ is the Tsuneto function\cite{Tsun60}, and the brackets $\langle\cdots\rangle$ denote the average over the momentum $\vec{k}$ on the Fermi surface.
The Tsuneto function is determined as: \begin{equation}\label{Tsuneto} \lambda({\vec{k},i\omega})= \frac{\Delta(\vec{k})^2}{E(\vec{k})} \tanh{\left[\frac{E(\vec{k})}{2T}\right]} \left[\frac{1}{2E(\vec{k})+i\omega} +\frac{1}{2E(\vec{k})-i\omega}\right], \end{equation} with excitation energy $E^2(\vec{k})=\xi^2(\vec{k})+\Delta^2(\vec{k})$, conduction band $\xi(\vec{k})=\epsilon(\vec{k})-\mu$, chemical potential $\mu$, and superconducting energy gap $\Delta(\vec{k})$. An important consequence of Eqs.~\ref{rares} and \ref{Tsuneto} is that the Raman response is proportional to the square of the superconducting order parameter; therefore, as already mentioned in Ref.\onlinecite{Car96}, Raman scattering is not sensitive to the sign of the order parameter. In the case of nonresonant scattering one can describe the Raman vertex through the curvature of the energy bands $\epsilon(\vec{k})$: \begin{equation} \gamma_{\vec{k}}=\sum_{\alpha \beta}e_\alpha^I \frac{\partial^2\epsilon(\vec{k})}{\partial k_{\alpha}\partial k_{\beta}} e^S_{\beta}, \end{equation} where $\vec{e}^I(\vec{e}^S)$ is the polarization vector of the incident (scattered) photon, and $\alpha$ and $\beta$ are summation indices which correspond to the different projections of $\vec{k}$. If one assumes\cite{Dev95} that the Raman vertex does not depend on the frequency of the incident photon, one can use symmetry considerations to evaluate the corresponding Raman scattering components. In such a case the Raman vertex can be described in terms of Brillouin zone (BZ) or Fermi surface harmonics\cite{Dev95} $\Phi_L(\vec{k})$, which transform according to the point group transformations of the crystal. \begin{equation} \gamma_{\vec{k}}(\omega_i,\omega_s)=\sum_L\gamma_L(\omega_i,\omega_s) \Phi_L(\vec{k}).
\end{equation} For tetragonal symmetry one gets the following form of the Raman vertices\cite{Dev95}: \begin{equation} \gamma_{B_{1g}} \propto \cos2\phi\qquad \gamma_{B_{2g}} \propto \sin2\phi\qquad \gamma_{A_{1g}} \propto 1+\cos4\phi. \end{equation} Let us analyze the Raman response (Eq.\ref{rares}). For simplicity we have drawn in Fig.1 the corresponding polar plots of the functions contained in each of the two terms of the Raman response. The first term is the "bare" Raman response, which reflects the attractive interaction in the Cooper pair, whereas the second term ("screening") is due to the Coulomb repulsion. Let us start with the "screening" term. This term is proportional to the squared FS average of the product of the Raman vertex $\gamma$ and the Tsuneto function $\lambda$. The Tsuneto function in turn is proportional to the square of the gap function. Following Devereaux and Einzel\cite{Dev95} we assume a d-wave gap in the form of $\Delta(\vec{k})=\Delta_{max}\cos2\phi$, which has B$_{1g}$ symmetry. When squared it becomes totally symmetric (A$_{1g}$). Therefore the averaged product of the Raman vertex and the Tsuneto function will be nonzero only if the vertex function is totally symmetric. This is not the case for the B$_{1g}$ ($\gamma\sim\cos2\phi$) and B$_{2g}$ ($\gamma\sim\sin2\phi$) Raman vertices, but only for the A$_{1g}$ ($\gamma\sim1+\cos4\phi$), as seen in Fig.1. Therefore the A$_{1g}$ scattering component is the only component strongly affected or "screened" by the long range Coulomb interaction \cite{Dev95,Dev97,Car96,Carb96,Weng97}. Let us now look at the bare Raman response. This term is proportional to the FS average of the product of the squared Raman vertex $\gamma^2$ and the Tsuneto function $\lambda$ ($\propto\Delta(\vec{k})^2$). Both $\gamma^2$ and $\lambda$ are totally symmetric. One sees from Fig.1 that the maxima and nodes of the squared B$_{1g}$ Raman vertex coincide with those of the squared d-wave gap.
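This symmetry argument can be verified by direct angular integration. The following sketch (not part of the original analysis; all prefactors are set to unity and the Tsuneto function is represented by its angular factor $\Delta(\phi)^2=\cos^2 2\phi$) evaluates the screening averages $\langle\gamma\,\Delta^2\rangle$ for the three vertices:

```python
import numpy as np

# Angular averages <gamma(phi) * Delta(phi)^2> entering the "screening" term
# of Eq. (2), with a d-wave gap Delta(phi) = cos(2*phi) (Delta_max = 1).
phi = np.linspace(0.0, 2.0 * np.pi, 100001)
gap_sq = np.cos(2.0 * phi) ** 2  # Delta(phi)^2 is totally symmetric (A1g)

vertices = {
    "B1g": np.cos(2.0 * phi),
    "B2g": np.sin(2.0 * phi),
    "A1g": 1.0 + np.cos(4.0 * phi),
}

# Average over the angle phi (trapezoidal rule over one full period)
averages = {name: np.trapz(g * gap_sq, phi) / (2.0 * np.pi)
            for name, g in vertices.items()}
for name, value in averages.items():
    print(f"<gamma_{name} * Delta^2> = {value:+.4f}")
```

Only the A$_{1g}$ average survives (it evaluates to 3/4 in these units), so only the A$_{1g}$ component is screened, in line with the Fig.1 argument.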
This leads to the highest relative peak position for the B$_{1g}$ scattering component and an $\omega^3$-dependence of the low frequency scattering. In contrast, the maxima of the B$_{2g}$ Raman vertex coincide with the nodes of the squared d-wave order parameter, resulting in a lower relative peak position and a linear-in-$\omega$ low frequency dependence for this component. The A$_{1g}$ scattering component is the only one which is screened. The "screening" term shifts the peak position of the A$_{1g}$ scattering component to a frequency smaller than that of the B$_{1g}$. Because of the "screening" term, one could expect the A$_{1g}$ ELRS peak to be the weakest one\cite{Car96,Carb96,Weng97}. Nevertheless, in all optimally doped HTSC (YBCO\cite{Hackl88,Coop88,Chen93}, Bi-2212\cite{Stauf92}, Tl-2223\cite{Hof94}, La-214\cite{Chen94}, Tl-2201\cite{Nem93,Gasp97}) the relative intensity of the A$_{1g}$ ELRS peak is strong and comparable to that of the B$_{1g}$ peak. This contradicts existing LDA-based calculations of the electronic Raman scattering cross-section\cite{Car96}. However, resonance effects\cite{Blum96,Sher97} may alter these calculations. This picture qualitatively describes the experimental results for all optimally doped HTSC. The only exception is the $n$-type superconductor (Nd,Ce)-214, which demonstrates a behavior consistent with an s-wave type of order parameter\cite{Hacl95}. For overdoped or underdoped samples the above mentioned universality of the experimental results does not hold anymore. For instance, C. Kendziora et al.\cite{Kend96} reported for overdoped Tl$_2$Ba$_2$CuO$_{6+\delta}$ (Tl-2201) a similar peak position for the different symmetry components of the electronic Raman scattering. The authors pointed out that the gap does not scale with T$_c$, but rather decreases with increasing doping, yielding $2\Delta_0/k_BT_c=3.9$. This led them to suggest that in the overdoped Tl-2201 the order parameter has s-wave symmetry.
One should note, however, that existing calculations of the ELRS peak positions (especially for the A$_{1g}$ scattering component \cite{Dev95,Dev97,Car96,Carb96,Weng97}) strongly depend on the chosen electronic structure and gap function parameters. For the optimally doped Tl-2201 the difference between the peak positions of the B$_{1g}$ and B$_{2g}$ components is only about 10$\%$ \cite{Gasp97}. One can estimate the expected difference between the corresponding peak positions for strongly overdoped Tl-2201 by scaling the peak position of the B$_{1g}$ scattering component in optimally doped Tl-2201 ($\approx\mbox{430~cm}^{-1}$) to that reported for the strongly overdoped Tl-2201 ($\approx\mbox{80-100~cm}^{-1}$). Such an estimate places the B$_{2g}$ peak of the strongly overdoped crystal only 8-10~cm$^{-1}$ below the B$_{1g}$ peak, which is actually within the experimental error. Therefore the coincidence of the peak positions cannot prove s-wave pairing. According to Devereaux et al.\cite{Dev95}, the low frequency power-law behavior of the ELRS intensity is more "robust" with respect to changes of the FS topology upon overdoping or underdoping. In particular, the $\omega^3$-law for the low frequency scattering in the B$_{1g}$ scattering component and the $\omega$-law for the A$_{1g}$ and B$_{2g}$ scattering components should not change with doping in a d-wave superconductor. Unfortunately, the ELRS peaks in strongly overdoped Tl-2201 have their maxima at rather low frequencies, which makes it difficult to determine their low-frequency tails precisely. Additionally, the low frequency scattering in the A$_{1g}$ component is easily obscured by Rayleigh scattering. In order to test the low frequency behavior in overdoped Tl-2201 it is therefore necessary to investigate moderately overdoped samples with a pair-breaking peak at not too low a frequency.
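The peak-position scaling estimate above amounts to a one-line calculation. The sketch below is purely illustrative, with the peak positions and the $\approx$10\% B$_{1g}$-B$_{2g}$ fractional separation taken from the numbers quoted in the text:

```python
# Expected B1g-B2g peak separation in strongly overdoped Tl-2201, assuming the
# ~10% fractional separation measured in the optimally doped crystal (B1g at
# ~430 cm^-1) carries over unchanged.
fractional_separation = 0.10
for b1g_peak in (80.0, 100.0):  # reported strongly overdoped B1g peak range, cm^-1
    separation = fractional_separation * b1g_peak
    print(f"B1g peak at {b1g_peak:.0f} cm^-1 -> expected separation ~{separation:.0f} cm^-1")
```

The resulting 8-10~cm$^{-1}$ separation is comparable to the experimental uncertainty, which is why coincident peak positions alone cannot discriminate between the pairing symmetries.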
In addition to the scattering in the superconducting state, the normal state scattering provides important information about carrier dynamics. Raman scattering in the normal state in channel $L$, assuming a single impurity-scattering lifetime $\tau$, can be described by a Lorentzian: \begin{equation}\label{norm} \Im\mbox{m}\chi_L(\omega, T>T_c)=2N_F{\gamma^2_L}\frac{\omega\tau} {(\omega\tau)^2+1}, \end{equation} where $\Gamma=1/\tau$ is the scattering rate, $\gamma_L$ is the Raman vertex, and N$_F$ is the carrier density of states at the Fermi level\cite{Zav90,Kost92}. Generally speaking, $\tau$ is a function of the scattering channel $L$ and the momentum $\vec{k}$\cite{Mis94}. $\Im\mbox{m}\chi_L(\omega, T>T_c)$ has a peak at the frequency $\omega=1/\tau$, and the spectrum falls off as $1/\omega$. Using this fact one can analyze Raman spectra in the normal state and determine how the scattering rates change with doping. Hackl et al.\cite{Hacl96} fitted their data for Bi-2212 using Eq.\ref{norm} and a frequency dependence of $\Gamma$ given by the nested Fermi liquid model\cite{Viros92}. The scattering rates at T$\approx$100~K were found to be $\Gamma(B_{1g})\approx$ 600~cm$^{-1}$, $\Gamma(B_{2g})\approx$ 170~cm$^{-1}$ for the nearly optimally doped Bi-2212 and $\Gamma(B_{1g})\approx$ 160~cm$^{-1}$, $\Gamma(B_{2g})\approx$ 120~cm$^{-1}$ for overdoped Bi-2212\cite{Hacl96}. In this paper we present electronic Raman scattering experiments on moderately overdoped Tl$_2$Ba$_2$CuO$_{6+\delta}$ with T$_c$=56~K. These are compared with measurements on optimally doped (T$_c$=80~K) and strongly overdoped (T$_c$=30~K) crystals. We show that, similarly to optimally doped Tl-2201, moderately overdoped Tl-2201 samples show an $\omega^3$ low frequency behavior of the B$_{1g}$ scattering component and a linear low frequency behavior of the B$_{2g}$ scattering component. These power laws are consistent with a d-wave symmetry of the order parameter.
Additionally, we will discuss the changes of the relative intensities of the pair-breaking peaks in the A$_{1g}$ and B$_{1g}$ scattering components with doping, as well as the electronic Raman scattering in the normal state. \section*{Experimental} We investigated the electronic Raman scattering in the single-CuO$_2$-layered compound Tl-2201, which has a single-sheeted Fermi surface \cite{Tat93}. Therefore the inter-valley scattering due to a multi-sheeted Fermi surface\cite{Car96}, invoked to explain the unexpectedly large A$_{1g}$ scattering intensity, does not play a role. Our samples were rectangular platelets of size $2\times2\times0.15$~mm. The moderately overdoped and strongly overdoped crystals of Tl-2201 were characterized by a SQUID magnetometer; T$_c$ was found to be 56$\pm$2~K and 30$\pm$2~K, respectively. The orientation of the crystals was controlled by X-ray diffraction. The Raman measurements were performed in quasi-backscattering geometry. Raman scattering was excited using an Ar$^+$-ion laser. The laser beam, with a power of 3~mW, was focused into a spot of 50~$\mu$m diameter. The laser-induced heating was estimated by increasing the laser power at a fixed temperature (5~K) and comparing the dependence of the ELRS B$_{1g}$-peak intensity on laser power with the temperature dependence of the intensity of this peak measured at fixed laser power (3~mW). The estimated additional heating was about 12.5$\pm$2.5~K (all data are plotted with respect to the estimated temperature). In order to analyze pure scattering geometries we extracted the A$_{1g}$ scattering component from the X'X' (A$_{1g}$+B$_{2g}$) and XY (B$_{2g}$) scattering geometries. The X' and Y' axes are rotated by 45$^{\circ}$ with respect to the X and Y axes. The X- and Y-axes are parallel to the Cu-O bonds in the CuO$_2$ plane of the Tl-2201 unit cell.
After subtraction of the dark counts of the detector, the spectra were corrected for the Bose factor in order to obtain the imaginary part of the Raman response function. In order to analyze the low frequency behavior of the B$_{1g}$ scattering component in moderately overdoped Tl-2201 with T$_c$=56~K, we performed measurements in superfluid He (T=1.8~K). This offers several advantages: because of the huge thermal conductivity of superfluid helium, there is no overheating of the sample by the laser radiation. The absence of overheating allows us to determine precisely the real temperature of the excited volume. At T=1.8~K the Bose factor is negligible down to at least 10~cm$^{-1}$, so down to 10~cm$^{-1}$ we actually measure the imaginary part of the Raman response function. \section*{Results and discussion} The Raman spectrum of Tl-2201 shows several phonons and a broad electronic continuum. The superconducting transition leads to a redistribution of the continuum into a broad peak. In Figs.2-4 we show the B$_{1g}$, A$_{1g}$ and B$_{2g}$ scattering components of the Raman scattering for $T \ll T_c$ (solid line) and $T > T_c$ (dashed line) for the Tl-2201 single crystals with T$_c$=80~K (Fig.2), T$_c$=56~K (Fig.3), and T$_c$=30~K (Fig.4). In order to emphasize the redistribution of the scattering intensity in the superconducting state compared to the normal state, we plot not only the Bose-factor-corrected raw spectra (Figs.2, 3 and 4, upper panel), but also the difference between the spectra well below T$_c$ and the spectra above T$_c$ (Figs.2, 3 and 4, lower panel). The positions of the ELRS peaks in the superconducting state for the different scattering components as a function of doping are summarized in Table I. It is generally accepted that the B$_{1g}$ scattering component reflects much of the properties of the superconducting density of states\cite{Carb96}.
Therefore it is reasonable to analyze the intensities of the other components relative to the B$_{1g}$ scattering component. There are several differences between the optimally doped and overdoped crystals. i) If one identifies the peak in the B$_{1g}$ ELRS component as 2$\Delta_0$, one obtains the reduced gap value 2$\Delta_0/k_BT_c \approx 7.8$ for the optimally doped crystal, while in the overdoped crystals 2$\Delta_0/k_BT_c$ is close to 3 (see Table I). ii) For the optimally doped crystal the peak positions of the B$_{2g}$ and A$_{1g}$ scattering components are lower than that of the B$_{1g}$ (see Fig.2 and Table I). In the overdoped crystals the B$_{2g}$ component peaks at a frequency very close to that of the B$_{1g}$ scattering component (see Figs.3, 4 and Table I), although its peak position is still about 10$\pm$2\% lower (similar to the optimally doped Tl-2201, see Table I). The A$_{1g}$ peak position is close to that of the B$_{1g}$ peak as well, although an exact determination of the pair-breaking peak position for the A$_{1g}$ scattering component is difficult due to the A$_{1g}$ phonon at 127~cm$^{-1}$ in moderately overdoped Tl-2201 (see Fig.3) or due to the superimposed Rayleigh scattering in strongly overdoped Tl-2201 (see Fig.4). iii) The most drastic changes of the relative ELRS peak intensity with doping are seen in the A$_{1g}$ scattering component. For the optimally doped crystal we observe a strong peak, comparable in intensity to that of the B$_{1g}$ component (see Figs.2a and b, lower panel). In contrast, for the two overdoped crystals (Figs.3, 4 a and b, lower panel) the relative intensity of the ELRS peak in the A$_{1g}$ scattering component is weak. iv) In contrast to the A$_{1g}$ scattering component, the intensity of the B$_{1g}$ scattering component is stronger in the moderately overdoped sample (Fig. 3a) compared to the optimally doped one (Fig. 2a).
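Item i) follows from a simple unit conversion. The sketch below (a numerical check, not from the original paper; 1~cm$^{-1}$ corresponds to $hc/k_B\approx1.44$~K) converts the optimally doped B$_{1g}$ peak position quoted earlier ($\approx$430~cm$^{-1}$) into a reduced gap:

```python
# Reduced gap 2*Delta_0/(kB*Tc) from the B1g pair-breaking peak position,
# identifying the peak frequency with 2*Delta_0.
CM_INV_TO_K = 1.4388   # hc/kB in K*cm (second radiation constant)
peak_position = 430.0  # B1g peak in optimally doped Tl-2201, cm^-1
tc = 80.0              # critical temperature, K

reduced_gap = peak_position * CM_INV_TO_K / tc
print(f"2*Delta0/(kB*Tc) = {reduced_gap:.1f}")
```

This gives $\approx$7.7, matching the quoted value of $\approx$7.8 to within the rounding of the peak position.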
For the strongly overdoped sample an exact determination of the relative intensity of the pair-breaking peak is difficult in all scattering components. The pair-breaking peak is at too low a frequency ($\approx$60~cm$^{-1}$), so its intensity is very sensitive to the Bose-factor correction, which in turn depends on the uncertainty in the estimated temperature. Additionally, Rayleigh scattering and impurity induced scattering\cite{Dev95} may obscure the evaluated difference between the corresponding spectra below and above T$_c$. According to Devereaux et al.\cite{Dev95}, the $\omega^3$-law for the low frequency scattering in the B$_{1g}$ scattering component and the $\omega$-law for the A$_{1g}$ and B$_{2g}$ scattering components should not change with doping in d-wave superconductors. In order to check these power laws for the moderately overdoped Tl-2201 we have performed measurements in superfluid helium (T=1.8~K). To illustrate the low frequency behavior of the imaginary part of the Raman response function in the B$_{1g}$ and B$_{2g}$ scattering components on the same frequency scale, we have scaled the Raman shift by the corresponding peak position, as shown in Fig. 5a. Fitting the low frequency scattering in the B$_{1g}$ scattering component with an $\omega^n$ function yields exponents $n$=2.9 and 3.5 for the optimally doped and moderately overdoped Tl-2201, respectively. An even better fit to the low frequency scattering intensity in moderately overdoped Tl-2201 was obtained with a linear term added to the $\omega^n$ function, similar to the case of overdoped Bi-2212\cite{Hacl96}. The appearance of such a crossover from a linear to a power-law dependence in the B$_{1g}$ scattering component indicates the presence of impurities\cite{Dev95}. For the B$_{2g}$ scattering component one can easily fit the low frequency scattering of the optimally doped to overdoped samples with a linear-in-$\omega$ law, as shown in Fig.5b.
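The power-law fitting procedure described above can be sketched as follows. The data here are synthetic, generated with $n=3$ purely to illustrate the fit; the actual fits were of course performed on the measured spectra:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the low-frequency tail with A * omega^n, as done for the B1g component.
rng = np.random.default_rng(0)
omega = np.linspace(5.0, 60.0, 40)  # Raman shift, cm^-1
# Synthetic omega^3 tail with 2% multiplicative noise
data = 1e-4 * omega**3 * (1.0 + 0.02 * rng.standard_normal(omega.size))

def power_law(w, amplitude, exponent):
    return amplitude * w**exponent

popt, _ = curve_fit(power_law, omega, data, p0=(1e-4, 2.5))
print(f"fitted exponent n = {popt[1]:.2f}")  # recovers n close to 3 by construction
```

A linear impurity term can be added to `power_law` in the same way when fitting spectra with a linear-to-power-law crossover.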
Unfortunately, in the T$_c$=30~K crystal the expected ELRS peak is too close to zero frequency to allow a definite conclusion about its low frequency behavior. The observed power laws (Fig.5) lead to the conclusion that even overdoped Tl-2201 has a d-wave symmetry of the order parameter. Let us now discuss the temperature-induced spectral changes in the overdoped crystal. A detailed temperature dependence for the Tl-2201 (T$_c$=56~K) sample is shown for the B$_{1g}$ component in Fig.6. With increasing temperature the intensity of the pair-breaking peak decreases and its position shifts toward lower frequency. This dependence differs slightly from that predicted by BCS theory, as shown in the inset of Fig.6, i.e. the gap opens more abruptly. At the same time the intensity of the pair-breaking peak decreases nearly linearly with increasing temperature (see inset in Fig.7), whereas the intensity of the low frequency scattering (at, for instance, $\approx$50~cm$^{-1}$) increases. At a temperature close to T$_c$ both intensities match. From these data one can determine the ratio of the superconducting response to the normal state response in the static limit ("static ratio"), i.e. when $\omega\rightarrow 0$, and compare it with calculations of this ratio in the presence of impurities\cite{Dev97}. From such a comparison we found for the moderately overdoped Tl-2201 a scattering rate of $\Gamma/\Delta(0)\approx0.5$, which leads to $\Gamma\approx$60~cm$^{-1}$. In the normal state spectra (we discuss the imaginary part of the Raman response function) one sees an increase of the intensity towards zero frequency with a broad peak at $\approx$50~cm$^{-1}$ (Figs.3 and 7). This peak is more pronounced in the B$_{1g}$ scattering component and can be attributed to impurity induced scattering. According to Eq.\ref{norm} the frequency of the peak corresponds to the scattering rate $\Gamma=1/\tau$ of the normal state\cite{Zav90,Kost92}.
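The identification of the broad normal-state peak with $\Gamma=1/\tau$ can be illustrated numerically. A sketch (not from the original paper) using the Lorentzian form of Eq.~(\ref{norm}) with the $\Gamma\approx50$~cm$^{-1}$ found for moderately overdoped Tl-2201:

```python
import numpy as np

# Normal-state Raman response of Eq. (7):
# Im chi is proportional to omega*tau / ((omega*tau)^2 + 1).
# The maximum sits at omega = 1/tau = Gamma, which is how Gamma is read off.
gamma_rate = 50.0                       # cm^-1, moderately overdoped Tl-2201
tau = 1.0 / gamma_rate
omega = np.linspace(0.5, 400.0, 80000)  # Raman shift, cm^-1
im_chi = omega * tau / ((omega * tau) ** 2 + 1.0)

peak_omega = omega[np.argmax(im_chi)]
print(f"peak of Im chi at ~{peak_omega:.1f} cm^-1 (Gamma = {gamma_rate:.0f} cm^-1)")
```

The peak indeed falls at $\omega=\Gamma$, and the $1/\omega$ tail at high frequency reproduces the slow fall-off of the continuum.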
The position of the peak depends strongly on doping: it is roughly 35 or 50~cm$^{-1}$ for strongly and moderately overdoped Tl-2201, respectively. There is practically no anisotropy of the peak position between the B$_{1g}$ and B$_{2g}$ scattering components. Note that the scattering rates calculated from the peak positions are very close to that evaluated from the "static ratio" and significantly smaller than those found by Hackl et al.\cite{Hacl96} using a frequency dependence of $\Gamma$ given by the nested Fermi liquid model. Scattering rates may also be determined using the frequency dependent conductivity from infrared measurements. One finds for many HTSC scattering rates $1/\tau$ of about 100-200~cm$^{-1}$ at T$\approx$100~K\cite{Tim92}. Additionally and very surprisingly, the scattering rates decrease with increasing overdoping\cite{Tim96}. From our Raman measurements we found scattering rates $\Gamma=1/\tau$=35 or 50~cm$^{-1}$ for strongly and moderately overdoped Tl-2201, not too far from the infrared data, and a similar decrease of $\Gamma$ with increasing overdoping. We would like to sum up the effects of overdoping that are also partly observed in other HTSC: In the nearly optimally doped regime (YBCO\cite{Hackl88,Coop88,Chen93}, Bi-2212\cite{Stauf92}, Tl-2223\cite{Hof94}, La-214\cite{Chen94}, Tl-2201\cite{Nem93,Gasp97}) the ELRS peak positions scale with T$_c$ for all scattering components. The B$_{1g}$ scattering component is most sensitive to changes of T$_c$. The relative intensity of the ELRS A$_{1g}$ peak is stronger than or at least comparable to that of the B$_{1g}$ component. The relative intensity of the B$_{2g}$ peak is always the weakest. For the overdoped crystals (Tl-2201, Bi-2212)\cite{Kend96,Hacl96} the peak position of the B$_{1g}$ scattering component decreases faster than T$_c$, so that 2$\Delta_0/k_BT_c$ decreases with overdoping from 7.4 to $\approx$3 (data of this paper), or from 8 to 5 in Bi-2212\cite{Hacl96}.
The relative intensity of the A$_{1g}$ ELRS peak compared to the B$_{1g}$ decreases as the system becomes more overdoped\cite{Blum97}. This is an important point concerning the influence of Fermi surface topology changes on the Raman scattering and will be discussed further below. We will now discuss some reasons which may explain the shift of the B$_{1g}$ peak position with doping. The decrease of the B$_{1g}$ ELRS peak position and of 2$\Delta_0/k_BT_c$ with doping is connected to the fact that the crossing of the Fermi surface with the Brillouin zone boundary moves away from the (0,$\pm\pi$), ($\pm\pi$,0) points with doping. Therefore the FS average $\langle\gamma^2_{\vec{k}}\lambda_{\vec{k}}\rangle$ of the Raman vertex with the Tsuneto function in Eq.\ref{rares} gives a $\Delta_0$ smaller than $\Delta_{max}$. A detailed discussion of this point is given in the work of Branch and Carbotte \cite{Carb96}. In the case of optimal doping it is supposed that the Fermi level is close to the van Hove singularity (vHs), so that the FS pinches at the (0,$\pm\pi$), ($\pm\pi$,0) points of the BZ\cite{Nov95}, leading to $\Delta_0\approx\Delta_{max}$. Now let us turn to the decrease of the A$_{1g}$ vs. B$_{1g}$ intensity of the ELRS with doping. In contrast to the B$_{1g}$ and B$_{2g}$, the A$_{1g}$ scattering component is affected by the screening term. We suppose that the "screening" itself is connected with the FS anisotropy, which is in turn affected by the van Hove singularity. In optimally doped crystals the vHs is close to the Fermi level (FL), leading to a strongly anisotropic FS. By overdoping, the FL moves away from the vHs. This leads to a more isotropic FS with larger "screening". Therefore the increase of "screening" with doping is a plausible explanation for the observed decrease of the A$_{1g}$ scattering component with doping. This suggestion has a consequence for the intensity of the B$_{1g}$ scattering component.
Namely, the "screening" term for the A$_{1g}$ scattering component has the same symmetry as the bare term for the B$_{1g}$ scattering component (see Fig.1). If we suppose that the "screening" increases, the B$_{1g}$ response should also increase. This is in agreement with our results (see Figs. 2a and 3a, lower panel). In conclusion, we have presented measurements of the electronic Raman scattering on optimally doped as well as moderately and strongly overdoped Tl-2201 single crystals. A strong decrease of the A$_{1g}$ scattering intensity with increasing overdoping has been observed. We attribute this effect to changes of the FS topology related to the existence of a van Hove singularity, and we propose investigations of other overdoped HTSC in order to check this idea. Our measurements of the low frequency behavior of the electronic Raman scattering in optimally doped and moderately overdoped Tl-2201 confirmed a d-wave symmetry of the order parameter, in contrast to earlier reports \cite{Kend96}. The scattering rates we have evaluated from the normal-state Raman spectra, as well as their decrease with overdoping, are consistent with existing infrared data. \section*{Acknowledgments} This work was supported by DFG through SFB 341, BMBF FKZ 13N7329 and INTAS grants 94-3562 and 96-410. One of us (L.V.G.) acknowledges support from the Alexander von Humboldt Foundation.
\section{Introduction} A cataclysmic variable (CV) is a binary star system that features mass transfer via Roche lobe overflow onto a white dwarf (WD). As the material has too much angular momentum to accrete directly onto the WD, it instead forms an accretion disk around the WD. A thorough review of CVs can be found in \cite{Warner95} and \cite{Hellier01}. Also known as DQ Herculis stars, intermediate polars (IPs) are a subset of CVs in which the WD has a comparatively weak magnetic field and rotates asynchronously \citep[for a review, see][]{Patterson94}. In most well-studied IPs, the magnetic field disrupts the inner accretion disk, forcing the gas to travel along magnetic field lines until it impacts the WD near its magnetic poles; these systems are commonly said to be ``disk-fed.'' In principle, the magnetic field can disrupt the disk altogether, resulting in ``diskless'' or ``stream-fed'' accretion. The simulations shown in Fig.~3 of \citet{norton04} provide an excellent visualization of stream-fed accretion flows compared to their disk-fed counterparts. However, in practice, long-term diskless accretion is rare amongst known IPs. V2400~Oph \citep{buckley95, buckley97} is the best-known example of a persistently diskless IP, and Paloma \citep{schwarz, joshi} is a strong candidate as well. A few IPs show evidence of an accretion disk at some epochs while being diskless at others, with FO~Aqr being one proposed example \citep{Littlefield20}. In some IPs, the stream can overflow the disk until it impacts the magnetosphere, creating a hybrid of diskless and disk-fed accretion. The subject of this study, YY Draconis (hereafter, YY Dra),\footnote{YY Dra is also known as DO Draconis (DO Dra) due to an ambiguity in identification, as explained in detail by \citet{patterson87}.
In our Appendix, we discuss new evidence that YY~Dra is the correct identifier.} is an IP with a lengthy observational history following its identification by \citet{patterson82} as the optical counterpart of the X-ray source 3A\ 1148+719. Its status as an IP was established through optical and X-ray studies by \citet{Patterson92} and \citet{patterson93}, respectively. The optical data from \citet{Patterson92} showed wavelength-dependent periods of 275~s and 265~s, each of which was identified as the second harmonic of a pair of fundamental frequencies of variation (550~s and 529~s respectively). The subsequent X-ray study by \citet{patterson93} found an X-ray counterpart to the 265~s signal, which the authors attributed to the rotation of the WD. They argued the two accretion regions contribute almost equally to the light curve, causing the fundamental WD spin frequency ($\omega$ = 163 cycles d$^{-1}$) to have a low amplitude compared to its second harmonic. Their identification of $\omega$\ meant that the 550~s signal is the spin-orbit beat period ($\omega$-$\Omega$ = 157 cycles d$^{-1}$). The dominance of the harmonics is consistent with equatorial accretion regions that contribute nearly equally to the light curve \citep{szkody02}. A spectroscopic study by \cite{Matteo91} established the system's orbital period to be 3.96~h and estimated the orbital inclination to be $i = 42^{\circ}\pm5^{\circ}$. \citet{haswell} and \citet{joshi_yy_dra} refined the orbital period, and the \citet{joshi_yy_dra} measurement of 0.16537424(2)~d ($\Omega$ = 6.061 cycles d$^{-1}$) is the most precise value in the previous literature. The Gaia Early Data Release 3 (EDR3) distance of YY Dra is 196.6 $\pm$ 1.1 pc \citep{BJ21}. \section{Observations} \subsection{\textit{TESS}} The Transiting Exoplanet Survey Satellite (\textit{TESS}) observed YY Dra from 2019 December 25 until 2020 February 28 in Sectors 20 and 21. 
During these 65 days, it observed the system at a two-minute cadence across a broad bandpass ($\sim$600-1,000~nm) that covers the red and near-infrared spectrum, including the $R$ and $I$ bands. For a deeper description of \textit{TESS}, see \citet{Ricker2015}. Using the Python package {\tt lightkurve} \citep{Lightkurve20}, we downloaded the \textit{TESS}\ light curve, which has nearly continuous coverage with the exception of three $\sim$2-day gaps due to data downlinks. The flux measurements from the \textit{TESS}\ pipeline come in two forms: simple-aperture photometry (SAP) and Pre-search Data Conditioning simple-aperture photometry (PDCSAP). The latter is a processed version of the former that attempts to remove systematic trends and the effects of blending. Although the elimination of these problems is desirable in principle, \citet{littlefield21} argued that, in the case of the IP TX~Col, the PDCSAP processing removed an outburst that was present in the SAP light curve. With this issue in mind, we plotted the \textit{TESS}\ PDCSAP and SAP fluxes as a function of the simultaneous Zwicky Transient Facility (ZTF) \citep{Smith19} $g$ flux, repeating the procedure separately for each sector. Our objective was to determine which version of the light curve correlated better with the contemporaneous ground-based data. Following the exclusion of a single discrepant ZTF measurement, we determined that the SAP flux was much more strongly correlated with the ZTF flux than was the PDCSAP flux. The difference was especially pronounced during Sector 21, when the best-fit linear regressions had coefficients of determination of $r^{2}_{SAP} = 0.92$ and $r^{2}_{PDC} = 0.24$ for the SAP and PDCSAP light curves, respectively. These statistics indicate that 92\% of the variation in the SAP light curve but only 24\% of the variation in the PDCSAP data can be explained by the variations in the ZTF flux. During Sector 20, we found that $r^{2}_{SAP} = 0.26$ and $r^{2}_{PDC} = 0.02$.
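The SAP-versus-PDCSAP check described above reduces to regressing each \textit{TESS}\ product on the contemporaneous ZTF fluxes and comparing the coefficients of determination. A minimal sketch with synthetic data (the flux values, noise levels, and variable names are invented for illustration; for a one-variable linear fit, $r^{2}$ equals the squared Pearson correlation):

```python
import numpy as np

def r_squared(reference_flux, pipeline_flux):
    """Coefficient of determination of a simple linear fit of one flux series on another.

    For a one-variable linear regression, r^2 equals the squared Pearson
    correlation coefficient, so np.corrcoef suffices.
    """
    return float(np.corrcoef(reference_flux, pipeline_flux)[0, 1] ** 2)

# Synthetic stand-ins for the contemporaneous ZTF g fluxes and the two TESS products.
rng = np.random.default_rng(1)
ztf = rng.uniform(1.0, 2.0, 40)                              # reference fluxes
sap = 3.0 * ztf + 0.5 + 0.05 * rng.normal(size=ztf.size)     # tracks the reference
pdcsap = 1.0 + 0.3 * rng.normal(size=ztf.size)               # largely decorrelated

print(f"r2_SAP = {r_squared(ztf, sap):.2f}, r2_PDC = {r_squared(ztf, pdcsap):.2f}")
```

In this toy setup, the series constructed from the reference yields $r^{2}$ near unity while the decorrelated one yields $r^{2}$ near zero, mirroring the sense of the Sector 21 comparison.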
The higher values of $r^2$ during Sector 21 are probably because the range in brightness was much greater than in Sector 20. A visual inspection of the residuals from the linear fits confirmed that the low value of $r^{2}_{PDC}$ in both sectors was attributable to a weak correlation as opposed to a strong, but non-linear, relationship. Consequently, we elected to use the SAP light curve in our analysis. The top panel of Figure~\ref{fig:3DLC} shows the full \textit{TESS}\ light curve, while Figure~\ref{fig:High.Low.PhasedLC} shows selected segments to give a sense of the data quality. Throughout this paper, including Figure~\ref{fig:3DLC}, we use the Barycentric \textit{TESS}\ Julian Date (BTJD) convention to express time, which is defined as BTJD = BJD $-$ 2457000, where BJD is the Barycentric Julian Date in Barycentric Dynamical Time. \begin{figure*} \centering \includegraphics[width=\textwidth]{LC.pdf} \caption{{\bf Top Panel:} \textit{TESS}\ light curve of YY Dra, showing the SAP flux from the mission's pipeline. The deep low state described in the text is apparent at BTJD=1888. {\bf Bottom Three Panels:} Slices of YY Dra's two-dimensional power spectrum near frequencies of interest, following the subtraction of a smoothed version of the light curve to suppress red noise. All periodic variation ceases during the deep low state except for the 2$\Omega$\ signal, which we attribute to ellipsoidal variations by the donor star. The same intensity color map is used for all three panels.} \label{fig:3DLC} \end{figure*} \subsection{Survey Photometry} We compared the \textit{TESS}\ light curve to a long-term light curve (Figure~\ref{fig:DiffLC}) assembled from the Harvard Digital Access to a Sky-Century at Harvard project \citep[DASCH;][]{Grindlay09}, the All-Sky Automated Survey for Supernovae \citep[ASAS-SN; ][]{Shappee14,Kochanek17}, and the ZTF. 
The DASCH light curve used the default quality flags of the project's pipeline, and its magnitudes are approximately equal to the Johnson $B$ band. We used only $g$ magnitudes from ASAS-SN and ZTF. \begin{figure*} \centering \includegraphics[width=\textwidth]{High.Low.PhasedLC.pdf} \caption{Zoomed section of both a high and low state in YY Dra's \textit{TESS}\ light curve. {\bf Top:} Two-day portion in the \textit{TESS}\ light curve during the slow brightening at the beginning of the observation. This is representative of the first three weeks of the \textit{TESS}\ light curve. {\bf Middle:} 1.2~d deep low state during which accretion ceased. During this low state, the only periodic variability is from ellipsoidal variations. {\bf Bottom:} YY Dra's orbital light curve during the deep low state between BTJD = 1888-1889, phased according to Eq.~\ref{ephemeris}. The red stars represent bins with a width of 0.085. The data are repeated for clarity.} \label{fig:High.Low.PhasedLC} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{DiffLC.pdf} \caption{YY Dra light curves from ASAS-SN, ZTF, and DASCH. {\bf Top:} Overlaid ASAS-SN (blue circles) and ZTF (green triangles) light curves, with the deep low state observed by \textit{TESS}\ indicated by the cyan shaded region. The ASAS-SN light curve is relatively uniform for the first few years. However, multiple intermittent low states begin to appear around 2018-2019. The observed low states reach a consistent minimum magnitude. {\bf Bottom-left:} The DASCH light curve, with detections shown as circles and nondetections as dashes. As explained in the text, nondetections with limits brighter than magnitude 15 have been excluded. The quiescent magnitude is comparable to that observed by ZTF. While there are numerous outbursts throughout the observational period, no low states comparable to those in the ASAS-SN and ZTF data are present.
{\bf Bottom-right:} Enlargement of the ZTF observations obtained during the \textit{TESS}\ light curve. The faintest datum ($g=17.37\pm0.12$) was obtained during the deep low state, when accretion was negligible. } \label{fig:DiffLC} \end{figure*} \section{Analysis} \begin{figure*} \centering \includegraphics[width=\textwidth]{2PwrSpectrum.pdf} \caption{ {\bf Top:} Power spectrum of YY Dra's entire \textit{TESS}\ light curve, with clearly detected $\Omega$, 2$\Omega$, $\omega$-$\Omega$, 2($\omega$-$\Omega)$, and 2$\omega$\ frequencies. The spin frequency $\omega$\ was below the noise level, but its expected position is labeled in red. The comparatively high intensity of $\omega$-$\Omega$\ and its harmonics relative to $\omega$\ and 2$\omega$\ is characteristic of stream-fed accretion in the \citet{Ferrario99} models. {\bf Bottom:} Power spectrum during the 28~h low state shown in Fig.~\ref{fig:High.Low.PhasedLC}. The only remaining periodic variation occurs at 2$\Omega$\ and is attributable to ellipsoidal variations. } \label{fig:PwrSpectrum} \end{figure*} \subsection{\textit{TESS}\ light curve} During the \textit{TESS}\ observation, YY Dra was comparatively bright for the first 20 days ($g\sim15.2$; Fig.~\ref{fig:DiffLC}), showing a slow, rising trend in the \textit{TESS}\ data. However, for the remainder of the light curve, it generally exhibited a gradual fade, punctuated by three obvious dips (Figure~\ref{fig:3DLC}, top~panel). The slow fade began near BTJD=1865, leading into the first of the three dips at BTJD = 1868. Although YY~Dra quickly recovered from this dip, the overall fading trend continued, and another, deeper dip took place at BTJD = 1878. The second dip consisted of a relatively rapid fade over $\sim2.5$~d followed by a recovery within 3.5~d, lasting about a week in total. The third dip is the most remarkable feature of the light curve.
Unlike the first two dips, its deepest portion is flat-bottomed, lasts for just 1.2~d, and was observed by ZTF to be $g=17.37\pm0.12$, approximately 2~mag fainter than the beginning of the \textit{TESS}\ light curve. The middle panel of Figure~\ref{fig:High.Low.PhasedLC} shows a 3~d section of the light curve centered on this dip. The deepest part of the low state shows smooth, sinusoidal variation, the nature of which we explore in Sec.~\ref{sec:2Dpower}. This low state began with a rapid 33\% drop in brightness lasting eight hours at BTJD = 1888, and it ended with a burst that lasted for 16~h. After a rapid fade from the burst, YY~Dra began a slow recovery, but it never returned to its starting brightness during the \textit{TESS}\ observation. In the following subsections, we use power spectral analysis to gain insight into these behaviors. \subsection{1D Power Spectrum} \label{sec:1Dpower} We computed a Lomb-Scargle power spectrum \citep{LOMB,SCARGLE} to examine YY~Dra's periodic variations between 0.67 cycles~d$^{-1}$ and 360 cycles~d$^{-1}$. The $\Omega$, 2$\Omega$, 2($\omega$-$\Omega)$, and 2$\omega$\ frequencies from \citet{haswell} are easily identified in this power spectrum (Figure~\ref{fig:PwrSpectrum}). Because of the relatively short \textit{TESS}\ baseline, the measured periods from the power spectrum are statistically indistinguishable from those in \citet{haswell}. The power spectrum of an IP contains important information about the system's structure \citep{warner86, Ferrario99, murray}. For example, \citet{Ferrario99} computed theoretical optical power spectra for stream-fed and disk-fed IPs. They predicted that the spin period should tend to be very apparent during disk-fed accretion because the accretion disk encircles the entire white dwarf, providing each magnetic pole with a uniform reservoir of matter from which to accrete, regardless of the WD's rotational phase.
In stream-fed accretion, however, the white dwarf accretes from a fixed region in the binary rest frame: a single, focused point where the stream from the donor star encounters the magnetic field. In sharp contrast to disk-fed accretion, the accretion rate onto any one pole will vary with the WD's rotation. In YY Dra, stream-fed accretion could explain why there is a very conspicuous $\omega$-$\Omega$\ and 2($\omega$-$\Omega)$\ and a rather inconspicuous $\omega$. The \citet{Ferrario99} models predict that stream-fed accretion will tend to shift power into $\omega$-$\Omega$\ and its harmonics, and in YY~Dra, the amplitudes of $\omega$-$\Omega$\ and 2($\omega$-$\Omega)$\ are much higher than $\omega$\ and 2$\omega$. This distribution of power is not accounted for in the \citet{Ferrario99} disk-fed models, but it is consistent with their stream-fed models, particularly at lower inclinations. Furthermore, the abrupt disappearance of $\Omega$\ during a 28~h cessation of accretion during the \textit{TESS}\ observation (Sec.~\ref{sec:2Dpower}) reveals that this frequency is produced by accretion (as opposed to the changing aspect of the donor star), which is consistent with a prediction by \citet{Ferrario99} that stream-fed accretion will often shift power into $\Omega$. Although we believe that the \citet{Ferrario99} stream-fed accretion models offer the best explanation of the observed power spectrum, we also considered two other scenarios to account for YY~Dra's comparatively strong $\omega$-$\Omega$\ and 2($\omega$-$\Omega)$\ signals. The first was proposed by \citet{warner86}, who argued that stationary structures in the binary frame can reprocess X-rays into the optical at a frequency of $\omega$-$\Omega$; we presume that this effect could also produce a signal at 2($\omega$-$\Omega)$\ if both poles participate in this process. 
However, one challenge with applying this scenario to YY~Dra is that at its inclination of $i=42^{\circ}$ \citep{Matteo91}, we would expect at least a modest amplitude modulation of the reprocessed optical radiation across the binary orbit, shifting power into the upper and lower orbital sidebands of both $\omega$-$\Omega$\ and 2($\omega$-$\Omega)$ \citep{warner86}. This predicted distribution of power is inconsistent with the observed power spectrum in Fig.~\ref{fig:PwrSpectrum} and suggests that the region in which the optical $\omega$-$\Omega$\ and 2($\omega$-$\Omega)$\ signals originate is uniformly visible across the orbit. While not absolutely conclusive, this reasoning disfavors reprocessed X-rays as the origin of $\omega$-$\Omega$\ and 2($\omega$-$\Omega)$\ in the \textit{TESS}\ data. Another alternative explanation that we considered involves the possibility of spiral structure within the accretion disk. \citet{murray} computed theoretical power spectra for IPs in which spiral arms in the disk extend far enough inward to reach the disk's inner rim. Similar to stream-fed accretion, this scenario would break the azimuthal symmetry of the accretion flow at the magnetospheric boundary, producing power at $\omega$-$\Omega$\ and/or 2($\omega$-$\Omega)$. However, we disfavor this interpretation for two reasons. First, it seems unlikely that if a disk were present, the power at $\omega$-$\Omega$\ and 2($\omega$-$\Omega)$\ could dwarf that at $\omega$\ and 2$\omega$. Second, we would not expect spiral disk structure to be present during quiescence in a CV with YY~Dra's 4~h orbital period. While spiral structure has been detected in IP~Peg ($P_{orb}$ = 3.8~h) during its outbursts \citep{steeghs97}, the physical conditions in the disk during outburst---particularly its viscosity---are very different than in quiescence, and YY~Dra's faintness during the \textit{TESS}\ observation rules out the possibility that the system possessed an outbursting disk. 
We therefore conclude that YY Dra was predominantly stream-fed during these observations. \subsection{2D Power Spectrum} \label{sec:2Dpower} Figure~\ref{fig:3DLC} presents the evolution of the power spectrum over time and offers insight into the stability of YY~Dra's periodic variations as well as their dependence on the brightness of YY Dra. Throughout the observation, $\Omega$\ and 2$\Omega$\ were consistently the largest-amplitude signals in the two-dimensional power spectrum. Of the frequencies related to the WD's rotation, 2($\omega$-$\Omega)$\ consistently had the highest amplitude, while $\omega$-$\Omega$, $\omega$, and 2$\omega$\ were not reliably detected above the noise in 0.5~day segments of the light curve. The amplitude of 2($\omega$-$\Omega)$\ declines in the second half of the light curve, coinciding with YY~Dra's gradual fade. During the 28~h-long, flat-bottomed low state at BTJD=1888, no periodic variability is present, except for 2$\Omega$, and even the flickering appears to stop. We attribute this behavior to a cessation of accretion during which only ellipsoidal variations from the companion star are present in the light curve, as seen in other CVs during zero-accretion states \citep[e.g. KR~Aur; ][]{kr_aur}. The absence of accretion during this 28~h low state distinguishes it from the low state detected by \citet{Breus17}, during which YY~Dra showed evidence of both ellipsoidal variations and additional variability at the orbital frequency, resulting in a double-humped orbital profile with unequal maxima; had accretion ceased altogether during the \citet{Breus17} observations, we would expect the maxima to have been equal. While episodes of negligible accretion have been identified in other magnetic CVs, our review of the literature suggests that states of insignificant accretion are extremely rare in IPs, as we discuss in Sec.~\ref{sec:comparison}.
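A trailed (two-dimensional) power spectrum of the kind shown in Figure~\ref{fig:3DLC} can be built by computing a periodogram in short sliding windows. The sketch below is only an illustration of the idea, not the pipeline used here: it replaces the Lomb-Scargle periodogram with an equivalent least-squares sinusoid fit, and the synthetic light curve, window length, and frequency grid are all invented for the demonstration. The injected signal sits near 2($\omega$-$\Omega)$ $\approx318$ cycles~d$^{-1}$ and vanishes halfway through, mimicking periodic variability disappearing during a low state.

```python
import numpy as np

def ls_power(t, y, freqs):
    """Least-squares sinusoid periodogram; power = fitted amplitude squared.

    For each trial frequency we fit y ~ a*sin + b*cos and record a^2 + b^2,
    which behaves like a Lomb-Scargle power for this demonstration.
    """
    y = y - y.mean()
    power = np.empty(freqs.size)
    for i, f in enumerate(freqs):
        phase = 2 * np.pi * f * t
        design = np.column_stack([np.sin(phase), np.cos(phase)])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        power[i] = coef[0] ** 2 + coef[1] ** 2
    return power

def trailed_spectrum(t, y, freqs, window=0.5, step=0.5):
    """One periodogram per sliding window; rows are windows, columns frequencies."""
    rows, start = [], t.min()
    while start + window <= t.max():
        sel = (t >= start) & (t < start + window)
        rows.append(ls_power(t[sel], y[sel], freqs))
        start += step
    return np.array(rows)

# Toy light curve: 2-minute cadence over 10 days; a 318 cycles/day signal
# (amplitude and noise level are arbitrary) that dies after day 5.
rng = np.random.default_rng(2)
t = np.arange(0, 10, 2 / 1440)
y = 0.02 * rng.normal(size=t.size)
on = t < 5.0
y[on] += 0.05 * np.sin(2 * np.pi * 318.0 * t[on])

freqs = np.linspace(300.0, 340.0, 200)   # cycles per day
spec = trailed_spectrum(t, y, freqs)
```

Early rows of {\tt spec} peak at the injected frequency, while rows after day 5 show only noise, which is the signature read off the trailed spectra in this section.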
Lastly, we unsuccessfully searched the two-dimensional power spectrum for evidence of transient periodic oscillations (TPOs), a phenomenon identified in YY~Dra by \citet{andronov08}. The TPOs in that study consisted of a mysterious signal near 86~cycles~d$^{-1}$ whose frequency gradually decreased to 40~cycles~d$^{-1}$ over the course of three nights. Because this phenomenon in the \citet{andronov08} dataset occurred at an elevated accretion rate ($\sim1$~mag brighter than during the \textit{TESS}\ observation), the lack of TPOs in the \textit{TESS}\ data suggests that they are sensitive to the accretion rate. \subsection{Long-Term Light Curve} The DASCH light curve spans from 1902 to 1986 and is by far the most sparsely sampled of the three datasets in Figure~\ref{fig:DiffLC}. However, it offers a unique look at YY~Dra in the decades prior to its discovery. The DASCH data show no low states as deep as the one observed by both \textit{TESS}\ and ZTF, although one observation (at magnitude $B=16.7$ in 1936) comes fairly close. The lack of deep low states could be attributable in part to the relatively shallow limiting magnitude of many of the photographic plates. Indeed, Figure~\ref{fig:DiffLC} excludes 6,165 non-detections with limiting magnitudes brighter than 15, a threshold selected because it is too shallow to reach YY Dra's quiescent brightness. Even the non-detections with comparatively deep limits cannot meaningfully distinguish between normal quiescent variability and a low state. Additionally, there are no observations during the early 1950s and late 1960s, so any low states during that interval would have gone undetected. Finally, while low states are elusive in the century-long light curve of YY~Dra, the DASCH data do emphasize the bright, $>5$~mag outbursts of YY~Dra, a distinguishing property noted by previous studies \citep[e.g.,][]{szkody02}. 
While the DASCH data provide a partial glimpse of YY~Dra's behavior across the 20th century, the ASAS-SN and ZTF survey photometry provide significantly denser observational coverage during the past decade. The ASAS-SN and ZTF $g$ light curves, both of which overlap the \textit{TESS}\ observations, are in excellent agreement with one another (Fig.~\ref{fig:DiffLC}, top panel). Although ASAS-SN has a longer baseline by six years, the ZTF data fill in some of the gaps in the ASAS-SN coverage. In 2018, YY~Dra was regularly in a low state, dropping below $g\sim17$, and while the exact duration of the low states is not clear, some of them could have lasted for months. The ZTF data are especially critical because they include an observation of YY~Dra during its deep low state (Figure~\ref{fig:DiffLC}). This single datum therefore establishes the magnitude of YY~Dra when its accretion rate was negligible: $g = 17.37\pm0.12$. In the absence of any accretion, the optical luminosity of YY~Dra should be composed of the stellar photospheres of the WD and the companion star. As such, the ZTF measurement should represent the minimum magnitude at which YY Dra can appear, at least until the WD has cooled significantly. Inspecting Figure~\ref{fig:DiffLC}, it is evident that YY~Dra spent much of 2018 in or near such a state.\footnote{\citet{covington} examined archival observations of YY~Dra from 2018 and identified a 3~mag fade in Swift ultraviolet UVW- and UVM-band observations that coincided with a non-detection of YY~Dra in X-ray observations. Likewise, \citet{shaw2020} found no X-ray emission in a 55.4~ks NuSTAR observation of YY~Dra in July 2018. These findings provide independent support for a cessation of accretion onto the WD in 2018. } \subsection{Updated Orbital Period} The minima of the ellipsoidal variations in Figure~\ref{fig:High.Low.PhasedLC} are almost perfectly equal, making it difficult to identify which one corresponds to inferior conjunction of the donor star.
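Phasing data to a linear ephemeris, and tracking how much phase error the ephemeris accumulates, is a short calculation. The sketch below uses the \citet{joshi_yy_dra} period and the \citet{haswell} epoch of inferior conjunction at the values quoted in this section; the function and variable names are ours, and the error propagation is the standard linear one.

```python
# Period (Joshi et al.) and epoch of inferior conjunction (Haswell), as quoted
# in the text; the "(2)" denotes an uncertainty of 2e-8 d in the last digit.
P = 0.16537424        # orbital period [d]
P_ERR = 2e-8          # period uncertainty [d]
T0 = 2446863.4383     # epoch of inferior conjunction [BJD_TDB]

def orbital_phase(bjd):
    """Orbital phase in [0, 1), with phase 0 at inferior conjunction."""
    return ((bjd - T0) / P) % 1.0

# Accumulated phase uncertainty at the deep low state (BTJD 1888 -> BJD 2458888):
bjd_low = 2457000.0 + 1888.0
cycles = (bjd_low - T0) / P
phase_err = cycles * P_ERR / P    # sigma_phase = (dT / P) * (sigma_P / P)
print(f"~{cycles:.0f} cycles elapsed; accumulated phase error = +/-{phase_err:.3f}")
```

After roughly 72,700 elapsed cycles this gives an accumulated phase error of about $\pm0.009$ cycles, reproducing the figure used in the argument below.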
However, the data can be reliably phased to the binary orbit using the \citet{joshi_yy_dra} orbital period and the \citet{haswell} epoch of inferior conjunction.\footnote{ We converted the epoch of the \citet{haswell} to Barycentric Julian Date in Barycentric Dynamical Time using routines in {\tt astropy}.} Because of the excellent precision of the \citet{joshi_yy_dra} orbital period (0.16537424(2)~d), this trial ephemeris has an accumulated phase error of only $\pm0.009$ at the time of the low state near BTJD=1888, so when the data are phased to it, the photometric minimum corresponding to inferior conjunction should be observed within $\pm0.009$~orbital cycles of phase 0.0. The phased data agree with this prediction. When phased to the aforementioned trial ephemeris, the closest minimum to phase 0.0 occurred at a nominal orbital phase of $\sim-0.02$. While this suggests that the \citet{joshi_yy_dra} uncertainty might have been slightly underestimated, the phasing of the deep-low-state light curve strongly supports our inference that ellipsoidal variations are responsible for the observed signal at 2$\Omega$. Refining the orbital period slightly to 0.16537420(2)~d corrects the phase shift from the \citet{joshi_yy_dra} orbital period. Adopting this revised orbital period, we offer an orbital ephemeris of \begin{equation} T_{conj}[BJD] = 2446863.4383(20) + 0.16537420(2)\times\ E, \label{ephemeris} \end{equation} where $E$ is the integer cycle count. \section{Discussion} \subsection{Comparison to Low States in Other IPs} \label{sec:comparison} \citet{Garnavich88} searched for low states in three IPs in the Harvard plate collection: V1223 Sgr, FO Aqr, and AO Psc. They found that V1223~Sgr experienced a nearly decade-long low state whose depth was typically $\sim$1~mag but briefly exceeded 4~mag. Additionally, their analysis of AO~Psc revealed a single $\sim$1~mag low state, lasting for at most several weeks. 
However, considering the difference in these time scales compared to YY~Dra's 28~h low state as well as the absence of contemporaneous time series photometry for the low states in the Harvard plates, it is difficult to compare the \citet{Garnavich88} low states against the low state in the \textit{TESS}\ light curve. More recently, low-accretion states have been detected in several confirmed or suspected IPs, including V1323~Her \citep{V1323Her}, FO Aqr \citep{Littlefield16, Kennedy17}, DW Cnc \citep{Montero20}, Swift J0746.3-1608 \citep{Bernardini19}, J183221.56-162724.25 \citep[J1832;][]{beuermann}, and YY Dra itself \citep{Breus17}, though as we noted in Sec.~\ref{sec:2Dpower}, the previously observed YY~Dra low states do not show evidence of a cessation of accretion. Additionally, \citet{covington} have identified low states in V1223~Sgr, V1025~Cen, V515 And, and RX~J2133.7+5107, and they conclude that low states in IPs are more common than previously realized. Detailed observations of these low states have revealed significant changes in the accretion geometry. FO~Aqr is the best example of this, as optical \citep{Littlefield16, Littlefield20}, X-ray \citep{Kennedy17}, and spectroscopic \citep{Kennedy20} studies have all agreed that the system's mode of accretion changes at reduced mass-transfer rates. This is also true of DW~Cnc and V515~And \citep{covington}. With just one exception, these studies have not detected a total interruption of accretion during low states. Of the other IPs that have been observed in low states, only in J1832 has the accretion rate been seen to decline to negligible levels. Similar to YY~Dra, J1832 shows no variability related to the WD's rotation during its deep low states \citep{beuermann}, but J1832 has the added advantage of being a deeply eclipsing system.
\citet{beuermann} showed that when J1832 enters a deep low state, its optical eclipse depth becomes imperceptibly shallow, suggesting that there is no remaining accretion luminosity and hence no accretion. Due to gaps in ground-based observational coverage of J1832, it is unclear how long J1832's deep low states last or how long it takes for the system to fall into one. In contrast, the \textit{TESS}\ light curve of YY~Dra firmly establishes the duration of the stoppage of accretion (28~h) as well as the length of the abrupt transitions into and out of that episode of negligible accretion (10~h and 2~h, respectively). Interestingly, J1832 appears to be a persistently diskless IP \citep{beuermann}, and we have argued that YY~Dra's power spectrum suggests that it too was diskless when its accretion ceased. This might be an important clue as to why the accretion rates of other IPs have not been observed to fall to negligible levels during their low states. A Keplerian disk is depleted at its viscous timescale, which can be days or even weeks, but in a diskless geometry, the lifetime of the accretion flow is much shorter (of order the free fall time between the system's L1 point and the WD), so a cessation of mass transfer by the secondary would quickly result in a corresponding stoppage of accretion onto the WD. More broadly, when looking at low states in other IPs, we see that they tend to last substantially longer (anywhere from months to years) than YY~Dra's 28~h deep low state. The low states in FO Aqr lasted 5-6 months in 2016, 4-5 months in 2017, and only 2-3 months in 2018 \citep{Littlefield20}. Additionally, V1025 Cen \citep{covington, littlefield22} declined 2.5 magnitudes over the course of 4 years and remained in that low state for a little under 2 years. DW~Cnc has significant gaps in the data, but it has shown a nearly 3~year low state with a depth of 2.3 mag \citep{covington}. 
The lengths of these low states are substantially longer than those of YY Dra, underscoring the remarkable brevity of YY~Dra's deep low state. Finally, \citet{scaringi17, scaringi21} examined low states of MV~Lyr and TW~Pic observed by \textit{Kepler} and \textit{TESS}, respectively. Although both systems are now suspected to be extremely low-field IPs, neither had been known to be magnetic. \citet{scaringi17, scaringi21} identified magnetically gated accretion episodes via the \citet{st93} instability, resulting from a magnetic field strength so low that it produces observable effects (magnetically gated bursts) only during epochs of low accretion. In both MV~Lyr and TW~Pic, the low states tended to last for a number of days or even weeks, although one low state in TW~Pic lasted for just over $\sim1$~d. However, unlike the deep low state of YY~Dra, TW~Pic continued to accrete during this interval. In conclusion, there is no reported precedent for a $\sim$day-long cessation in accretion in a confirmed IP. During a future episode of near-zero accretion, it should be possible to use optical spectra to search for Zeeman-split photospheric lines from the exposed WD in order to directly measure its magnetic-field strength. \subsection{Nature of the Low State} \citet{livio94} and \citet{hessman} have theorized that star spots can pass over the inner hemisphere of a star and shrink the height of the companion star's atmosphere, causing it to temporarily underfill its Roche lobe. Large star spots will distort the light curve of the companion's ellipsoidal variations. When star spots are present, the shape of the ellipsoidal light curve would be distorted with a dip at certain orbital phases. If the star spots disappear, the dip should go away as well. To search for evidence of such a change, we split the deep low state of the \textit{TESS}\ light curve into two halves and phased both according to Equation~\ref{ephemeris}. 
However, there are no statistically significant changes between the first and second halves of the deep low state. This result is perhaps unsurprising, given that each half of the deep low state encompasses only $\sim$3.5 binary orbits, which is too short a baseline to significantly boost the signal-to-noise ratio through phase-averaging the data. However, more precise measurements during a future cessation of accretion might be able to test more closely for changes in the ellipsoidal light curve. A major benefit of performing such an analysis in a non-accreting IP (as opposed to a non-accreting polar) is that in a polar, the equality of $\omega$\ and $\Omega$\ makes it difficult to disentangle the secondary's ellipsoidal variations from other sources of variability at the orbital frequency, such as accretion or thermal emission from hot spots on the WD surface \citep[e.g., as observed for the low-state polar LSQ1725-64;][]{fuchs}. Although the \textit{TESS}\ data therefore cannot test the starspot model, they do offer a helpful clue as to when mass transfer from the donor (as distinguished from the instantaneous accretion onto the WD) stopped. In a disk-fed geometry, a disk will dissipate at its viscous timescale, which can be several weeks (or longer) at low mass-transfer rates. In such a system, a disk can therefore be present even if it is not being replenished with mass from the donor star, and the only observable sign that the secondary is not transferring mass would be the disappearance of the hotspot where the accretion stream impacts the rim of the disk. Had YY~Dra been accreting from a disk during the \textit{TESS}\ observation, we would have expected to detect the hotspot as a contribution to $\Omega$\ and perhaps its harmonics in the power spectrum, and its disappearance would have been observable in the 2D power spectrum. No such changes are observed.
However, in a stream-fed geometry, there is only a short lag between a cessation of mass loss by the companion and the resulting interruption of accretion onto the WD. Finally, it is worth noting that the flare at the end of the deep low state bears at least some resemblance to the behavior of the apparently non-magnetic VY~Scl system KR~Aur during its deep low states, when, as in YY~Dra, ellipsoidal variability dominates the light curve \citep{kr_aur}. However, in KR~Aur, the flaring episodes appear to be short, lasting for tens of minutes as opposed to the 0.7~d observed in YY~Dra. Moreover, one of the longer low-state flares in KR~Aur showed evidence of a quasi-periodicity, while the flare in YY~Dra showed no significant periodicity other than the underlying ellipsoidal variations. \section{Conclusion} During its \textit{TESS}\ observation, YY Dra had a unique low state during which accretion turned off altogether for 28~h, leaving behind only ellipsoidal variability in the light curve. Our analysis suggests that both before and after the low state, accretion in YY~Dra was stream-fed, similar to the only other IP that has been detected in a state of negligible accretion. Long-term survey photometry reveals that YY~Dra has experienced episodes of negligible accretion relatively frequently during recent years, which raises the enticing prospect of directly measuring the field strength of the WD via the Zeeman effect during a future low state. \acknowledgements We thank the anonymous referee for an expeditious and helpful report. PS and CL acknowledge support from NSF grant AST-1514737. M.R.K. acknowledges funding from the Irish Research Council in the form of a Government of Ireland Postdoctoral Fellowship (GOIPD/2021/670: Invisible Monsters). \software{astropy \citep{astropy:2013,Astropy2018}, lightkurve \citep{Lightkurve20}, matplotlib \citep{Hunter07}}
\section{Additional tables and figures} \input{notation} \begin{figure*}[h] \centering \includegraphics[width=\columnwidth]{misc/mr-scatter.pdf} \includegraphics[width=\columnwidth]{misc/mr-rate.pdf} \caption{Scatter plot and change ratio for MR dataset.} \label{fig:mr} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=\columnwidth]{misc/yelp-scatter.pdf} \includegraphics[width=\columnwidth]{misc/yelp-rate.pdf} \caption{Scatter plot and change ratio for Yelp dataset.} \label{fig:yelp} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=\columnwidth]{misc/imdb-scatter.pdf} \includegraphics[width=\columnwidth]{misc/imdb-rate.pdf} \caption{Scatter plot and change ratio for IMDB dataset.} \label{fig:imdb} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=\columnwidth]{misc/mnli-scatter.pdf} \includegraphics[width=\columnwidth]{misc/mnli-rate.pdf} \caption{Scatter plot and change ratio for MNLI dataset.} \label{fig:mnli} \end{figure*} \begin{table*}[t] \centering \small \begin{tabular}{p{0.98\textwidth}} \toprule \textbf{Ori (Tech):} Gates and Ballmer get pay raises . Bill Gates and Steve Ballmer each received total compensation of \$ 901 , 667 in Microsoft Corp . ' s 2004 fiscal year , up 4 . 4 percent from \$ 863 , 447 one year ago .\\ \textbf{Adv (Business):} Gates and Ballmer pay one million . Bill Gates and Steve Ballmer both received dividends of \$ 495 , 80 each in Microsoft Corp . ' s 2004 fiscal year , up 2 . 5 percent from \$ 351 , 681 a year earlier .\\\midrule \textbf{Ori (Tech):} Viruses keep on growing . Most IT Managers won ' t question the importance of security , but this priority has been sliding between the third and fourth most important focus for companies .\\ \textbf{Adv (Business):} Guaranteed investing ? . 
Most IT Managers don ' t question the importance of security , but this priority has been sliding between the third and fourth most important focus for companies .\\\midrule \textbf{Ori (Tech):} Czech Republic ' s Cell Operators Fined ( AP ) . AP - All three cell phone operators in the Czech Republic were fined a total of \$ 1 . 7 million for breaching competition rules , officials said Thursday .\\ \textbf{Adv (World):} Czech Republic ' s Mobile Operators Fined ( AP ) . AP - Three large mobile phone operators in the Czech Republic are fined an average of \$ 3 . 2 billion for breached banking rules , officials said Saturday .\\\midrule \textbf{Ori (Sport):} Japanese baseball players set to strike . Japanese baseball players will strike for the first time if owners proceed with a proposed merger of two teams , the players ' union said Monday .\\ \textbf{Adv (World):} Japan baseball players go on strike . Baseball players go on strike for the first time in two years , with one member of the strike , the players ' union said Monday .\\\midrule \textbf{Ori (Sport):} Football Association charges Bolton striker over spitting incident . Bolton striker El - Hadji Diouf was cited for improper conduct by the Football Association on Monday after spitting in the face of an opponent .\\ \textbf{Adv (World):} Football . : Bolton cited on spitting charge . Bolton striker El - Hadji Diouf was accused of improper conduct by the Football Association on Tuesday for spitting in the face of the striker .\\\midrule \textbf{Ori (Sport):} Sox lose Kapler to Japan . Outfielder Gabe Kapler became the first player to leave the World Series champion Boston Red Sox , agreeing to a one - year contract with the Yomiuri Giants in Tokyo .\\ \textbf{Adv (World):} Sox take Katum to Japan . Outfielder Gabe Kapler became the first player to major the World Series champion Boston Red Sox by agreeing to a one - year contract with the Yomiuri Giants . . 
.\\\midrule \textbf{Ori (Business):} Colgate - Palmolive Announces Job Cuts . Toothpaste maker Colgate - Palmolive said today it is cutting 4 , 400 jobs and closing a third of its 78 factories around the world . The group , which makes products such as Colgate\\ \textbf{Adv (Tech):} Colgate - Palmolive Announces Job Cuts . Miheran software maker Colgate - Palmolive said that it is cutting 1 , 500 jobs and shutting a number of about 39 factories around the world . The group , which makes products such as Colgate\\\bottomrule \end{tabular} \caption{More examples on AG dataset.} \end{table*} \begin{table*}[t] \centering \small \begin{tabular}{p{0.98\textwidth}} \toprule \textbf{Ori (negative):} the things this movie tries to get the audience to buy just won ' t fly with most intelligent viewers .\\ \textbf{Adv (positive):} this rather intelligent movie tries to buy the audience out . it doesn ' t try to get viewers out .\\\midrule \textbf{Ori (negative):} not at all clear what it ' s trying to say and even if it were i doubt it would be all that interesting .\\ \textbf{Adv (positive):} not at all clear what it ' s trying to say , but if it had to say it would . just as interesting .\\\midrule \textbf{Ori (negative):} what starts off as a potentially incredibly twisting mystery becomes simply a monster chase film .\\ \textbf{Adv (positive):} what starts out as a twisting and potentially mystery becomes quite a monster chase film .\\\midrule \textbf{Ori (negative):} it ' s probably not easy to make such a worthless film . .
.\\ \textbf{Adv (positive):} it ' s not too easy to make up for worthless film , though .\\\midrule \textbf{Ori (positive):} even better than the first one !\\ \textbf{Adv (negative):} even better , the first two !\\\midrule \textbf{Ori (positive):} not only a coming - of - age story and cautionary parable , but also a perfectly rendered period piece .\\ \textbf{Adv (negative):} not only a coming - of - age story and cautionary parable , but also a period - rendered piece .\\\midrule \textbf{Ori (positive):} a well - done film of a self - reflexive , philosophical nature .\\ \textbf{Adv (negative):} a well - made film of a self - reflexive but philosophical nature .\\\bottomrule \end{tabular} \caption{Examples on the MR dataset.} \end{table*} \section{Similar sentence sampler} With the sentence-level threat model\xspace, one naive attack is to first sample sentences $\mathbf{u}$ from the GPT2 model, then check whether $\mathbf{u}$ is in $\Delta(\mathbf{x})$ and whether it changes the prediction. This method is inefficient because the sampled sentences will not belong to $\Delta(\mathbf{x})$ in most cases. In this section, we introduce our \texttt{RewritingSampler}\xspace, which samples from a distribution that is more likely to yield sentences belonging to $\Delta(\mathbf{x})$. \subsection{Distribution over sentences} Instead of sampling from the GPT2 language model, we sample from a distribution over sentences conditioned on the original sentence. We define the distribution as \begin{equation} p(\mathbf{u}|\mathbf{x}) \propto p_{\text{GPT2}}(\mathbf{u}) \mathbf{1}[\text{cos\_sim}\big(R(\mathbf{u}),R(\mathbf{x})\big)\geq\epsilon] \label{eq:prob} \end{equation} where $p_{\text{GPT2}}(\cdot)$ is the probability of a sentence under the GPT2 model. This conditional distribution has several properties: \begin{itemize} \item Sampling from this distribution can cover all the sentences in $\Delta(\mathbf{x})$, i.e.
$p(\mathbf{u}|\mathbf{x}) > 0, \forall \mathbf{u}\in\Delta(\mathbf{x}).$ \item Any sentence sampled from this distribution satisfies the semantic constraint. \item Sentences with better grammatical quality are more likely to be sampled, so we are likely to obtain a sentence in $\Delta(\mathbf{x}).$ \end{itemize} \subsection{Gibbs sampling} We further approximate Eq.~\eqref{eq:prob} and use Gibbs sampling to sample sentences from the approximated distribution. We derive the conditional distribution for each word and make approximations as follows: \begin{align} &p(u_i|\mathbf{u}_{-i}, \mathbf{x}) \nonumber\\ \propto& p(\mathbf{u}|\mathbf{x})\nonumber\\ \propto& p_{\text{GPT2}}(\mathbf{u})\mathbf{1}[\text{cos\_sim}\big(R(\mathbf{u}),R(\mathbf{x})\big)\geq\epsilon]\nonumber\\ =&p_{\text{GPT2}}(\mathbf{u}_{-i}) p_{\text{GPT2}}(u_i|\mathbf{u}_{-i})\mathbf{1}[\ldots]\nonumber\\ \propto& p_{\text{GPT2}}(u_i|\mathbf{u}_{-i})\mathbf{1}[\ldots]\nonumber\\ \approx& p_{\text{BERT}}(u_i|\mathbf{u}_{-i}) \times \nonumber\\ &\exp \big[-\kappa \max\big(\sigma - \text{cos\_sim}(R(\mathbf{u}), R(\mathbf{x})), 0\big)\big], \label{eq:gibbs} \end{align} where $\mathbf{u}_{-i}$ denotes the sentence without the $i$-th word. In Eq.~\eqref{eq:gibbs}, we make two approximations. First, we approximate $p_{\text{GPT2}}(u_i|\mathbf{u}_{-i})$ by $p_{\text{BERT}}(u_i|\mathbf{u}_{-i})$, because the latter is easier to compute, and we assume that the probability distributions modeled by BERT and GPT2 are similar. Second, we approximate the semantic constraint $\mathbf{1}[\text{cos\_sim}\big(R(\mathbf{u}),R(\mathbf{x})\big)\geq\epsilon]$ with a \textit{soft semantic constraint} \begin{equation} \exp \big[-\kappa \max\big(\sigma - \text{cos\_sim}\big(R(\mathbf{u}),R(\mathbf{x})\big), 0\big)\big], \label{eq:soft} \end{equation} where $\sigma$ is the soft semantic threshold.\footnote{The soft semantic threshold $\sigma$ can be different from $\epsilon$ in the threat model.
} The soft semantic constraint equals $1$ if the semantic similarity is greater than $\sigma$, and takes a smaller positive value if the similarity is less than $\sigma$. The soft constraint allows the sampler to slightly cross the soft semantic threshold during sampling. $\kappa$ is a parameter that defines the penalty for crossing the soft semantic threshold: when $\kappa=0$, the constraint is ignored, and when $\kappa=\infty$, the constraint is enforced rigorously. The soft semantic constraint also prevents the sampler from getting stuck. In some cases, replacing any single word in a sentence would make the new sentence semantically dissimilar to the original one; with the hard semantic constraint the sampler would simply repeat the original sentence, whereas the soft constraint overcomes this issue. We use Eq.~\eqref{eq:gibbs} to run Gibbs sampling. We start with the original sentence, and the sampler runs several iterations. Each iteration contains $l$ steps; in the $i$-th step, the $i$-th word is replaced by a new word sampled from the distribution in Eq.~\eqref{eq:gibbs}. \begin{figure}[htb] \centering \includegraphics[width=1.0\columnwidth]{misc/wordpiece.pdf} \caption{Demonstration of learning word-piece embeddings. In this example, the corpus contains the word `NLP', which is tokenized to the two word-pieces `NL' and `\#\#P'.} \label{fig:wp} \end{figure} \section{Introduction} Recently, a number of researchers have studied adversarial attacks in depth, aiming to improve the robustness of deep learning models, especially in image classification \citep{Goodfellow2014ExplainingAH, carlini2017towards, papernot2016limitations}. Generally, these attacks \textit{slightly perturb} an image in such a way that an image classifier ends up misclassifying it. By adding adversarial images into a training set, the classifier can learn from them, and resist similar attacks after deployment \cite{madry2017towards}. These adversarial images are not the first type of auxiliary data to be added to training sets. Data augmentation was introduced much earlier than adversarial training, and has since become standard for training image classifiers~\citep{he2016identity}. Data augmentation expands the training set by making \textit{global} changes to images, such as scaling, rotating, and clipping, so that the classifier can defend against such changes. As such, data augmentation and adversarial training are complementary to each other, and can be used concurrently to improve the robustness of an image classifier.
\begin{table}[t] \centering \begin{tabular}{p{0.95\columnwidth}} \toprule \textbf{Original sentence (World):} \\ Turkey is put on track for EU membership . \\ \textbf{Sentence-level modification (Business):} \\ \red{EU} \red{puts} \red{Turkey} on track for \red{full} membership .\\\bottomrule \end{tabular} \caption{Adversarial example with sentence-level rewriting. \texttt{RewritingSampler}\xspace changes the sentence from passive voice to active voice by replacing 4 words. None of the word substitutions (Turkey $\rightarrow$ EU, is $\rightarrow$ puts, put $\rightarrow$ Turkey, EU $\rightarrow$ full) have similar meanings, but the meaning of the sentence doesn't change. } \vspace{-1ex} \label{tab:eg} \end{table} The global changes made by data augmentation and the tiny changes made by adversarial training motivated us to consider whether both of these attack types also exist for natural language models. Recent works \citep{liang2017deep, samanta2018generating, papernot2016crafting, Jin2019IsBR} have shown that word- or character-level attacks can generate adversarial examples. These attacks use edit distance as a threat model. This threat model considers an attacker that can make up to a certain number of word or character substitutions, such that each new word has a similar semantic meaning to the original one. These methods can only make \textit{tiny} changes to a sentence, and after an attack, the original sentence and adversarial sentence look very similar. After surveying the literature, we found that little work has been done regarding \textit{sentence-level} changes. In this paper, we explore the problem of an adversarial attack with sentence-level modifications. For example, the attacker could change a sentence from passive to active voice while keeping the sentence's meaning unchanged, as shown in Table~\ref{tab:eg}. One challenge inherent to this problem is defining a proper threat model. An edit distance threat model prefers word-level modifications.
While a sentence-level modification can preserve meaning, it can lead to a large edit distance. To overcome this issue, we propose a sentence-level threat model\xspace, where we use the sum of word embeddings to constrain the semantic similarity, and we use a GPT2 language model \citep{radford2019language} to constrain the grammatical quality. The other challenge involves effectively rewriting sentences. Although humans can rephrase a sentence in multiple ways, it is hard to generate these modifications using an algorithm. We solve the problem under a conditional BERT sampling framework (\texttt{CBS}\xspace) \citep{Devlin2019BERTPO}. \noindent\textbf{Our contributions are summarized as follows:} \begin{itemize} \item We propose \texttt{CBS}\xspace, a flexible framework to conditionally sample sentences from a BERT language model. \item We design \texttt{RewritingSampler}\xspace, an instance of \texttt{CBS}\xspace, which can rewrite a sentence while retaining its meaning. It can be used to attack text classifiers. \item We propose a sentence-level threat model\xspace for natural language classifiers. It allows sentence-level modifications, and is adjustable in semantic similarity and grammatical quality. \item We evaluate \texttt{RewritingSampler}\xspace on 6 datasets. We show that existing text classifiers are not robust against sentence rewriting. With the same semantic similarity and grammatical quality constraints, our method achieves a better success rate than existing word-level attacking methods. \end{itemize} \section{Related Work} Pre-trained language models such as GPT2~\citep{radford2019language}, BERT~\citep{Devlin2019BERTPO}, RoBERTa~\citep{liu2019roberta} and XLNet~\citep{yang2019xlnet} are currently popular in NLP. These models first learn from a large corpus without supervision.
After pretraining, they can quickly adapt to downstream tasks via supervised fine-tuning, and can achieve state-of-the-art performance on several benchmarks \citep{wang2018glue, wang2019superglue}. The first adversarial attacks on deep learning models appeared in the computer vision domain \citep{Goodfellow2014ExplainingAH}. Attacking methods fall into two categories: black-box attacks and white-box attacks. In black-box attacks, the attackers can only access the prediction of a classifier, while in white-box attacks, the attackers know the architecture of the classifier and can query for gradients. \citet{Jin2019IsBR} show that state-of-the-art text classification models can be easily attacked by slightly perturbing the text. They propose TextFooler, a black-box attacking method, which heuristically replaces words with synonyms. There are also white-box attacks that use the fast gradient sign method (FGSM) \cite{Goodfellow2014ExplainingAH} and other gradient-based methods \citep{liang2017deep, samanta2018generating, papernot2016crafting}. \citet{cheng2019robust} use a gradient-based method to create adversarial examples and improve the robustness of a neural machine translation model. \citet{zhang2020adversarial} give a thorough survey of adversarial attack methods for natural language models. \citet{wong2017dancin} and \citet{vijayaraghavan2019generating} use generative adversarial networks and reinforcement learning to generate adversarial sentences, but these methods have quality issues such as mode collapse. \citet{oren2019distributionally} study the robustness of a classifier when adapting it to texts from other domains. \citet{jia2019certified} propose a certified robust classifier. Our \texttt{RewritingSampler}\xspace method is similar to that of \citet{wang2019bert}, which is an unconditional Gibbs sampling algorithm on a BERT model.
\section{Conditional BERT Sampling Framework} In this section, we introduce our conditional BERT sampling (\texttt{CBS}\xspace) framework, a flexible framework that samples sentences from a BERT language model conditioned on some criteria. Figure~\ref{fig:overview} shows the framework. The framework starts with a seed sentence $\mathbf{u}^{(0)}=\{u_1^{(0)},\ldots, u_l^{(0)}\}$ and iteratively samples and replaces words in it for $N$ iterations. The $i$-th iteration performs the following steps to generate $\mathbf{u}^{(i)}$: \begin{itemize} \item Randomly pick a position $k^{(i)}$. For any $k'\neq k^{(i)}$, $u_{k'}^{(i)} = u_{k'}^{(i-1)}$. \item Replace the $k^{(i)}$-th position in $\mathbf{u}^{(i-1)}$ with the special mask token of the BERT language model, and compute the \textit{language model word distribution} over the BERT vocabulary as $p_{\text{lm}}$. \item Depending on the criteria we want to satisfy, we design an \textit{enforcing word distribution} over the BERT vocabulary as $p_{\text{enforce}}$. \item The \textit{proposal word distribution} is $p_{\text{proposal}} = p_{\text{lm}} \times p_{\text{enforce}}$. (In our implementation, we sum log-probabilities for numerical convenience.) We then sample a candidate word $z$ from $p_{\text{proposal}}$. \item We use a decision function $h(\cdot)$ to decide whether to use the proposed word $z$ or retain the word from the previous iteration, $u_{k^{(i)}}^{(i-1)}$. \end{itemize} After $N$ iterations, we use $\mathbf{u}^{(N)}$ as the sampling output. For efficiency, we can use minibatches to sample sentences in parallel. The advantage of this framework is its flexibility: by properly setting the enforcing word distribution and the decision function, \texttt{CBS}\xspace can sample sentences satisfying many different criteria. \section{Rewriting Sampling Method} In this section, we introduce \texttt{RewritingSampler}\xspace, an instance of the \texttt{CBS}\xspace framework.
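The generic \texttt{CBS}\xspace loop just described can be sketched in Python. This is a toy sketch under stated assumptions: the distributions are supplied as callables over a small integer vocabulary, standing in for the BERT-based $p_{\text{lm}}$ and the criterion-specific $p_{\text{enforce}}$; the function names are illustrative, not the authors' implementation.

```python
import numpy as np

def cbs_sample(seed_tokens, p_lm, p_enforce, decide, n_iters, rng=None):
    """Sketch of the CBS loop: each iteration masks one position,
    multiplies the language-model and enforcing word distributions,
    samples a candidate word, and lets a decision function accept or
    reject it.

    p_lm(u, k)      -> unnormalized distribution over the vocabulary
                       for position k (stand-in for BERT's p_lm)
    p_enforce(u, k) -> unnormalized enforcing distribution
    decide(u, k, z) -> True to accept the proposed word z
    """
    rng = rng or np.random.default_rng(0)
    u = list(seed_tokens)
    for _ in range(n_iters):
        k = int(rng.integers(len(u)))            # random position
        proposal = p_lm(u, k) * p_enforce(u, k)  # elementwise product
        proposal = proposal / proposal.sum()     # normalize
        z = int(rng.choice(len(proposal), p=proposal))
        if decide(u, k, z):
            u[k] = z                             # accept the proposal
    return u
```

Plugging in a uniform $p_{\text{lm}}$ and an enforcing distribution that zeroes out a forbidden token, for instance, yields samples that respect the constraint.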
The objective of \texttt{RewritingSampler}\xspace is to rewrite a sentence while retaining its meaning. Specifically, given a sentence $\mathbf{x}=\{x_1,\ldots, x_l\}$, we set the seed sentence $\mathbf{u}^{(0)}=\mathbf{x}$. After $N$ iterations of updates, $\mathbf{u}^{(N)}$ still has the same meaning as $\mathbf{x}$, but it is a different sentence. In \texttt{RewritingSampler}\xspace, we use word embedding similarity to compute a semantic enforcing distribution, and the decision function always accepts the proposed word. \subsection{Semantic Enforcing Distribution} The semantic enforcing distribution encourages the sampled sentence to have a GloVe \cite{pennington2014glove} representation similar to that of the original sentence. GloVe word embeddings can effectively capture the semantic meaning of a sentence: \citet{reimers2019sentence} show that representing the meaning of a sentence by the average of its GloVe word embeddings achieves competitive performance on various tasks. We use GloVe embeddings to derive the enforcing distribution because of their efficacy and simplicity. Let \begin{equation} R(\mathbf{u})=\sum_{i=1}^{l} E(u_i)\mathbf{1}[u_i\notin \text{stopwords}] \label{eq:sentemb} \end{equation} be the semantic representation of a sentence given by the sum of word embeddings, where $E(u_i)$ is the GloVe word embedding of $u_i$, and $\mathbf{1}[\cdot]$ is the indicator function. When replacing the $k$-th word in $\mathbf{u}$, the unnormalized enforcing distribution is defined as \begin{equation} p_{\text{enforce}}(z) = \exp\big(-\kappa \max(0, \sigma - \text{similarity})\big), \end{equation} where \begin{equation} \text{similarity} =\text{cos\_sim}\big(R(\mathbf{u}_{-k}, z), R(\mathbf{x})\big), \label{eq:sim} \end{equation} and $R(\mathbf{u}_{-k}, z)$ denotes the representation of $\mathbf{u}$ with its $k$-th word replaced by $z$. $\sigma$ is the similarity threshold. If replacing $u_k$ with $z$ creates a sentence with similarity higher than the threshold, the enforcing probability for $z$ is high; otherwise, the probability also depends on the smoothing parameter $\kappa$.
Larger $\kappa$ means more rigorous enforcement of the similarity constraint. \subsection{Word piece embeddings}\label{sec:wordpiece} $p_\text{lm}(\cdot)$ operates on a 30k-word-piece BERT vocabulary, while $p_\text{enforce}(\cdot)$ operates on a 400k-word GloVe vocabulary. We build our method at the word-piece level, meaning that we tokenize the sentence into BERT word-pieces at the beginning, and in each step we sample and replace one word-piece. This design leads to an efficiency issue when computing $R(\mathbf{u})$ in $p_\text{enforce}(\cdot)$. To explain this issue and our solution, we first clarify the notation in this subsection. $\mathbf{x}=\{x_1,\ldots, x_l\}$ and $\mathbf{u}=\{u_1,\ldots, u_l\}$ are the original sentence and the sampled sentence, respectively. Each sentence has $l$ word-pieces, and $x_i$ and $u_i$ are word-pieces in the BERT vocabulary. The computation of $p_\text{lm}(z|\mathbf{u}_{-i})$ is simple: we replace $u_i$ with the special mask token and call the BERT model, obtaining the probabilities for all 30k word-pieces with only one BERT call. The difficulty comes from the semantic enforcing distribution, in particular $R(\mathbf{u})$. We need to compute embeddings for 30k different sentences. These sentences are very similar, differing only in the $i$-th word-piece. However, the $i$-th word-piece can concatenate with neighboring word-pieces to form words, so we would have to process the 30k sentences one by one. For each sentence, we would need to convert word-pieces to words and then look up embeddings from the embedding table. Looping, conversion, and lookup are inefficient operations. If we have word-piece embeddings $E'\in\mathbb{R}^{30k\times d}$ such that $R(\mathbf{u})\approx\sum_{k=1}^{l} E'(u_k)$, where $d$ is the dimension of the GloVe embeddings, we can avoid these operations. We first compute a vector $c=\sum_{1\leq k\leq l, k\neq i} E'(u_k)$, then compute the 30k representations by adding $c$ to each row of $E'$.
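As a rough NumPy illustration of this vectorized trick (with toy dimensions in place of the 30k-word-piece vocabulary, and ignoring the stopword filtering in $R(\cdot)$), the candidate similarities for every possible replacement can be computed with a single matrix operation; the names here are illustrative, not the authors' implementation:

```python
import numpy as np

def candidate_similarities(Ep, u_ids, i, r_x):
    """Cosine similarity between R(x) and R(u) for every possible
    replacement of the i-th word-piece, computed in a vectorized way.

    Ep   : (V, d) word-piece embedding table E'
    u_ids: word-piece ids of the current sentence u
    i    : position being resampled
    r_x  : (d,) representation R(x) of the original sentence
    """
    # sum of embeddings of all word-pieces except position i
    c = Ep[u_ids].sum(axis=0) - Ep[u_ids[i]]
    reps = c[None, :] + Ep                    # (V, d): R(u) per candidate
    num = reps @ r_x
    den = np.linalg.norm(reps, axis=1) * np.linalg.norm(r_x) + 1e-12
    return num / den                          # (V,) cosine similarities
```

This avoids looping over the vocabulary: the per-candidate representations are formed by one broadcasted addition.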
This improvement can significantly speed up the method. For this reason, we train word-piece embeddings. The word-piece embeddings are trained such that the sum of a word's word-piece embeddings is close to its word embedding. For example, the word `hyperparameter' is tokenized to the two word-pieces `hyper' and `\#\#parameter'; then $E'(\text{hyper})+E'(\text{\#\#parameter})\approx E(\text{hyperparameter}).$ We train the word-piece embeddings as follows. Let $\mathbf{w}=\{w_1, \ldots, w_N\}$ be the concatenation of all sentences in the corpus tokenized by words. Let $E(\mathbf{w})\in\mathbb{R}^{N\times d}$ be the word embeddings for all words. Let $E'\in\mathbb{R}^{30k\times d}$ be the word-piece embeddings. Let $T(\mathbf{w})\in\mathbb{R}^{N\times30k}$ be an indicator for the word-piece tokenization. \[ T(\mathbf{w})_{i,j}=\begin{cases} 1 & \text{if $w_i$ is tokenized to}\\ &\text{the $j$-th word-piece},\\ 0 & \text{otherwise.} \end{cases} \] We train the word-piece embeddings $E'$ by minimizing the absolute error \begin{equation} \min_{E'} ||E(\mathbf{w})-T(\mathbf{w})E'||_1, \label{eq:wp} \end{equation} where $||\cdot||_1$ sums up the absolute values of all entries in a matrix (i.e., the entrywise matrix norm). Figure~\ref{fig:wp} illustrates the three matrices in Eq.~\eqref{eq:wp}. We optimize Eq.~\eqref{eq:wp} using stochastic gradient descent: in each step, we sample 5000 words from $\mathbf{w}$, then update $E'$ accordingly. \subsection{Other details} \noindent\textbf{Fine-tuning BERT for attacking purpose:} For a dataset with $C$ classes, we fine-tune $C$ different BERT language models. For the $i$-th language model, we exclude training data from the $i$-th class, so that the sampled sentences are more likely to be misclassified. \noindent\textbf{Blocked sampling:} Blocked sampling replaces more than one word in one step, which helps overcome highly correlated words. For example, `i am' is a pair of highly correlated words. If we want to sample an alternative word for `i', we probably get the same word `i'.
Using 2-blocked sampling, we can replace these two words together in one step, and overcome the issue. \noindent\textbf{Fix entity names:} Changing entity names can drastically change the meaning of a sentence, and we observe that it is easier for the sampler to remove an entity name than to add a new one. For this reason, we prevent our \texttt{RewritingSampler}\xspace from deleting entity names from a sentence. We use the stanza \cite{qi2020stanza} package to recognize entity names in the data. In any particular step, if the current word is also the last occurrence of the entity name in the sentence, we skip this step to ensure that the entity name still appears in the sentence. Although \texttt{RewritingSampler}\xspace cannot delete entity names, it is allowed to move them to other positions in a sentence. For example, if an entity name appears once at the beginning of the sentence, \texttt{RewritingSampler}\xspace can generate the same entity name at another sentence position in one iteration, then remove the entity name from the beginning in later iterations. \noindent\textbf{Multi-round sampling and dynamic constraints:} To get high-quality adversarial sentences, we run the sampler multiple times. We start with a high $\sigma$, and reduce it if no adversarial example is found within a few rounds. \section{Sentence-Level Threat Model} When attackers go after a text classifier $f(\cdot)$, they start with a natural sentence of $l$ words $\mathbf{x}=\{x_1,\ldots, x_l\}$, and the true label $y$. Their objective is to find a sentence $\mathbf{u}$ that can trigger an incorrect prediction $f(\mathbf{u})\neq y$. The set of sentences from which $\mathbf{u}$ is chosen is specified by a threat model. In this section, we discuss existing threat models and propose our sentence-level threat model\xspace. We are aware of two existing threat models in the literature.
In $k$-word-substitution~\citep{jia2019certified}, the attacker can substitute at most $k$ words of the original sentence, and each new word must be similar to the original word under cosine similarity: \begin{align*} \Delta_{k}(\mathbf{x}) = \big\{\mathbf{u}\big|&\sum_{i=1}^{l} \mathbf{1}[x_i\neq u_i]\leq k \wedge\\ & \text{cos\_sim}\big(E(x_i), E(u_i)\big)\geq\epsilon\;\,\forall i\big\}, \end{align*} where $\mathbf{1}[\cdot]$ is the indicator function, $E(\cdot)$ outputs the embedding of a word, and $\text{cos\_sim}(\cdot)$ computes the cosine similarity. $k$ and $\epsilon$ are parameters of the threat model. This threat model bounds the number of word changes in a sentence, and therefore does not support sentence-level modifications. In similarity-based distance \citep{Jin2019IsBR}, the attacker can choose any sentence that is close under a neural-network-based similarity measure: \[ \Delta_{\text{sim}}(\mathbf{x}) = \big\{\mathbf{u}\big|\text{cos\_sim}\big(H(\mathbf{x}), H(\mathbf{u})\big)\geq\epsilon\big\}, \] where $H(\cdot)$ is a universal sentence encoder (USE) \citep{cer2018universal}. Using a neural network to define a threat model makes the threat model hard to analyze. We propose our sentence-level threat model\xspace to overcome these issues. We use a neural network to check the grammatical quality of a sentence, and we use word embeddings to measure the semantic distance between two sentences. The threat model is defined as \begin{align} \Delta(\mathbf{x}) = \{\mathbf{u}\big|&\text{ppl}(\mathbf{u})\leq\lambda\,\text{ppl}(\mathbf{x}) \wedge\nonumber\\ &\text{cos\_sim}\big(R(\mathbf{u}),R(\mathbf{x})\big) \geq \epsilon\}, \label{eq:delta} \end{align} where $\text{ppl}(\cdot)$ is the perplexity of a sentence, $R(\cdot)$ is the representation of a sentence defined in Eq.~\eqref{eq:sentemb}, and $\lambda$ and $\epsilon$ are adjustable criteria for the threat model.
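A minimal sketch of checking membership in $\Delta(\mathbf{x})$, assuming $\text{ppl}$ and the representation $R$ are supplied externally (e.g., GPT2 perplexity and summed GloVe embeddings); the function name is illustrative, not part of the authors' code:

```python
import numpy as np

def in_threat_model(u, x, ppl, R, lam, eps):
    """Sentence-level threat model check: u is admissible iff
    ppl(u) <= lam * ppl(x) and cos_sim(R(u), R(x)) >= eps.
    ppl and R are supplied externally (e.g., GPT2 perplexity and the
    sum of GloVe word embeddings)."""
    ru, rx = R(u), R(x)
    cos = float(ru @ rx / (np.linalg.norm(ru) * np.linalg.norm(rx)))
    return bool(ppl(u) <= lam * ppl(x) and cos >= eps)
```

The two criteria are checked independently, mirroring the independent control of grammatical quality ($\lambda$) and semantic similarity ($\epsilon$).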
Perplexity is the inverse probability of a sentence normalized by the number of words \begin{equation} \text{ppl}(\mathbf{x}) = p(\mathbf{x})^{-\frac{1}{l}}= \big[\prod_{i=1}^{l}p(x_i|x_1,\ldots, x_{i-1})\big]^{-\frac{1}{l}}. \label{eq:ppl} \end{equation} Thus a sentence with correct grammar has low perplexity. We use the GPT2 language model \cite{radford2019language} to compute Eq.~\eqref{eq:ppl} because of its high quality and computational ease. The language model is used to measure the \textbf{grammatical quality} of the adversarial sentence, and so is independent of the classifier we are trying to attack. $\lambda$ is the criterion for grammatical quality. A smaller $\lambda$ enforces better quality. Our threat model not only captures two important properties -- grammatical quality and semantic similarity -- it also allows for the independent control of these two properties through two adjustable criteria, and for flexible rewriting of the sentence. \section{Experiment} In this section, we show the results of experiments on 4 text classification datasets, and 2 natural language inference (NLI) datasets. For classification datasets, the attacker's objective is to modify the sentence and get it misclassified. For NLI datasets, a classifier is trained to infer the relation of a premise and a hypothesis among three options: neutral, entailment and contradiction. The attacker should rewrite the hypothesis to change the classifier's output. \subsection{Experimental settings} \textbf{Baseline:} We compare our method with TextFooler~\citep{Jin2019IsBR}, a recent black-box attack method. TextFooler is different from our \texttt{RewritingSampler}\xspace because it only makes word-level changes. \noindent\textbf{Datasets:} We use four text classification datasets \textbf{AG}'s News~\citep{zhang2015character}, Movie Reviews (\textbf{MR})~\citep{pang2005seeing}, \textbf{Yelp} Reviews~\citep{zhang2015character}, and \textbf{IMDB} Movie Reviews~\citep{maas2011learning}.
We use two NLI datasets, Stanford Natural Language Inference (\textbf{SNLI}) \citep{bowman2015large}, and Multi-Genre Natural Language Inference\footnote{MNLI has matched and mismatched test sets.} (\textbf{MNLI}) \citep{williams2017broad}. Dataset details are shown in Table~\ref{tab:dataset}. \begin{table}[htb] \small \centering \begin{tabular}{cccccc} \toprule \textbf{Name} & \textbf{Type} & \textbf{\#C} & \textbf{Cased} & \textbf{Train/Test} & \textbf{Len} \\ \midrule AG & Topic & 4 & Y & 120k/7.6k & 54 \\ MR & Senti & 2 & N & 9k/1k & 24\\ Yelp & Senti & 2 & Y & 160k/38k & 182\\ IMDB & Senti & 2 & Y & 25k/25k & 305\\\midrule SNLI & NLI & 3 & Y & 570k/10k & 15/8\\ MNLI & NLI & 3 & Y & 433k/10k & 28/11\\ \bottomrule \end{tabular} \caption{Dataset details. Topic, Senti, and NLI mean topic classification, sentiment classification and natural language inference respectively. \#C means number of classes. Cased means whether the dataset is cased or uncased. Len is the number of BERT word-pieces in a sentence. For the NLI datasets, the two numbers in Len represent the length for the premise and the hypothesis respectively.} \label{tab:dataset} \end{table} \noindent\textbf{Classifier:} For all datasets, we use the BERT-base classifier \citep{Devlin2019BERTPO} (\#layers=12, hidden\_size=768). We use an uncased model on MR,\footnote{MR does not have a cased version.} and use cased models for other datasets. We train the classifier on 20k batches (5k batches on MR), with batch size 32. We use the AdamW optimizer~\citep{loshchilov2017decoupled} and learning rate 0.00002. \noindent\textbf{Evaluation Metric:} We evaluate the efficacy of the attack method using the after-attack accuracy. Lower accuracy means the attacker is more effective in finding adversarial examples. We use two threat model setups, $(\lambda=2, \epsilon=0.95)$ and $(\lambda=5, \epsilon=0.90)$.
None of the attack methods ensure that the output satisfies a threat model, so after we execute TextFooler and \texttt{RewritingSampler}\xspace, we filter adversarial sentences using the threat model. Only those adversarial sentences that satisfy the threat model criteria are considered to be successful attacks. The first threat model is more rigorous than the second one, so fewer sentences can be considered successful attacks under the first threat model, and the after-attack accuracy will be higher there. Note that when we filter adversarial examples using a threat model, we use the GloVe \textit{word} embeddings to compute the similarity. Word-piece embeddings are only used in the \texttt{RewritingSampler}\xspace attacking method. \noindent\textbf{Language model for \texttt{RewritingSampler}\xspace}: We use a BERT-based language model (\#layers=12, hidden\_size=768). For each dataset, we fine-tune the language model for 20k batches on the training set (5k batches on the MR dataset), with batch size 32 and learning rate 0.0001. We follow the masking strategy in \citet{Devlin2019BERTPO} with a 20\% masking rate. \begin{figure*}[t] \centering \vspace{-1em} \includegraphics[width=.9\columnwidth]{misc/agnes-scatter.pdf} \includegraphics[width=.9\columnwidth]{misc/agnews-scatter-use.pdf} \vspace{-1em} \caption{Scatter plot of adversarial examples found by \texttt{RewritingSampler}\xspace and TextFooler on the AG dataset. Each `x' or `.' in the figure represents one pair of original and adversarial sentences. The x-axis is the perplexity ratio $\text{ppl}(\mathbf{u})/\text{ppl}(\mathbf{x})$. The y-axis is the cosine similarity between sentence pairs using GloVe representations (left) and universal sentence encoder representations (right).} \label{fig:scatter_ag} \end{figure*} \subsection{Results on AG dataset} For each sentence, we run \texttt{RewritingSampler}\xspace for $2$ rounds, and each round contains $50$ iterations.
Overall, we generate $100$ neighboring sentences for each original sentence. In the first round, we set the soft semantic constraint $\sigma=0.98$ to find sentences with very high similarity. If there is a sentence that can change the classification within the $50$ iterations of the first round, we skip the second round. Otherwise, we reduce the constraint to $\sigma=0.95$ in the second round to increase the chance of finding an adversarial sentence. We set $\kappa=1000$. We try blocked-sampling with three different block sizes ($\text{block}=1, 3, 5$). Table \ref{tab:res_ag} shows our experimental results on the AG dataset. We observe that our method outperforms TextFooler on both threat models. Using $\text{block}=1$ gives the best performance on the more rigorous threat model, while a larger block size gives a better performance on the less rigorous threat model. \texttt{RewritingSampler}\xspace can rewrite a sentence differently with a larger block size, but it is hard to rewrite a sentence very differently while keeping the similarity as high as $0.95$. Figure~\ref{fig:ratio_block} shows the change rate of adversarial sentences. A larger block size leads to a larger change rate. \begin{table}[htb] \centering \small \begin{tabular}{ccc} \toprule & $\lambda=2$ & $\lambda=5$ \\ \textbf{Model} & $\epsilon=0.95$ & $\epsilon=0.9$\\\midrule TextFooler & 84.0 & 66.7 \\ \texttt{RewritingSampler}\xspace (block=1) & \textbf{76.8} & 55.6 \\ \texttt{RewritingSampler}\xspace (block=3) & 78.7 & 49.6 \\ \texttt{RewritingSampler}\xspace (block=5) & 81.5 & \textbf{49.3} \\ \bottomrule \end{tabular} \caption{Results on AG Dataset. We report the accuracy (\%) of the cased BERT classifier after attack. } \label{tab:res_ag} \end{table} In Figure~\ref{fig:scatter_ag} (left) we compare adversarial examples found by \texttt{RewritingSampler}\xspace (block=1) with TextFooler.
The marginal distribution of the perplexity ratio shows that our method generates sentences with a higher grammatical quality. Our method achieves a lower semantic similarity than TextFooler because TextFooler makes local synonym substitutions, while \texttt{RewritingSampler}\xspace makes global changes to the sentence. It is difficult to rewrite a sentence while keeping the same sentence representation. In \citet{Jin2019IsBR}, the similarity between sentences is measured by USE \citep{cer2018universal}. So we also measure the similarity using USE for comparison, shown in Figure~\ref{fig:scatter_ag} (right). Figure~\ref{fig:ratio-ag} compares the word-piece change rate between TextFooler and our method. Our method changes many more words because it is allowed to make sentence-level changes. \begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{misc/agnews-rate-block.pdf} \vspace{-1em} \caption{Comparison of word-piece change rates for different block sizes on the AG dataset. The dashed vertical line shows the average change rate. } \label{fig:ratio_block} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.8\columnwidth]{misc/agnews-rate.pdf} \vspace{-1em} \caption{Change rate on the AG's News dataset. } \label{fig:ratio-ag} \end{figure} Table~\ref{tab:agcase} shows some adversarial examples found by \texttt{RewritingSampler}\xspace. We find that our method can rewrite the sentence while preserving a similar meaning.
\begin{table*}[t] \centering \begin{minipage}{\columnwidth} \centering \small \begin{tabular}{lcc} \toprule & TextFooler & \texttt{RewritingSampler}\xspace \\\midrule \textbf{MR} (Acc 86) & &\\ $\epsilon=0.95,\lambda=2$ & \textbf{60.1} & 60.7 \\ $\epsilon=0.90,\lambda=5$ & 38.7 & \textbf{26.9}\\\midrule \textbf{Yelp} (Acc 97.0) & &\\ $\epsilon=0.95,\lambda=2$ & 49.1 & \textbf{23.5} \\ $\epsilon=0.90,\lambda=5$ & 20.3 & \textbf{16.6} \\\midrule \textbf{IMDB} (Acc 90.9) & &\\ $\epsilon=0.95,\lambda=2$ & 26.7 & \textbf{22.2} \\ $\epsilon=0.90,\lambda=5$ & 15.0 & \textbf{14.2} \\\bottomrule \end{tabular} \caption{Results on sentiment analysis datasets.} \label{tab:res_senti} \end{minipage} \begin{minipage}{\columnwidth} \centering \small \begin{tabular}{lcc} \toprule & TextFooler & \texttt{RewritingSampler}\xspace \\\midrule \textbf{SNLI} (Acc 89.4) & &\\ $\epsilon=0.90,\lambda=2$ & 78.4 & \textbf{48.4} \\ $\epsilon=0.80,\lambda=5$ & 49.2 & \textbf{40.8} \\\midrule \multicolumn{2}{l}{\textbf{MNLI-matched} (Acc 85.1)} &\\ $\epsilon=0.90,\lambda=2$ & 64.6 & \textbf{37.0}\\ $\epsilon=0.80,\lambda=5$ & 36.9 & \textbf{24.2}\\\midrule \multicolumn{2}{l}{\textbf{MNLI-mismatched} (Acc 83.7)} &\\ $\epsilon=0.90,\lambda=2$ & 61.7 & \textbf{33.3} \\ $\epsilon=0.80,\lambda=5$ & 31.6 & \textbf{19.1} \\\bottomrule \end{tabular} \caption{Results on NLI datasets.} \label{tab:res_nli} \end{minipage} \end{table*} \subsection{Results on sentiment datasets} On all three datasets, we also run \texttt{RewritingSampler}\xspace 2 rounds with $\sigma=0.98$ and $\sigma=0.95$. We set $\kappa=1000$. On MR, we run $50$ iterations each round. Because Yelp and IMDB have much longer sentences, we run 3 iterations in each round for efficiency. To compensate for the reduction in iterations, we save intermediate sentences every $10$ steps, and use the intermediate sentences to attack the model. Table~\ref{tab:res_senti} shows the results. Our method gets better performance in most cases.
\subsection{Results on NLI datasets} On both NLI datasets, we also run \texttt{RewritingSampler}\xspace 2 rounds with $\sigma=0.95$ and $\sigma=0.90$. We set $\kappa=1000$. We run $10$ iterations each round. We use two threat models, $(\epsilon=0.9, \lambda=2)$ and $(\epsilon=0.8, \lambda=5)$. We use a smaller $\epsilon$ because adversarial sentences found by TextFooler have lower similarity. Table~\ref{tab:res_nli} shows the results. Our method consistently and significantly outperforms our baseline. Figure~\ref{fig:snli_scatter} shows a scatter plot of the adversarial sentences found by \texttt{RewritingSampler}\xspace on SNLI. \texttt{RewritingSampler}\xspace is superior in both semantic similarity and grammatical quality. \begin{figure}[htb] \centering \includegraphics[width=0.9\columnwidth]{misc/snli-scatter.pdf} \vspace{-1em} \caption{Scatter plot of adversarial examples found by \texttt{RewritingSampler}\xspace and TextFooler on SNLI dataset. Each `x' or `.' in the figure represents one pair of original and adversarial sentences. The x-axis is the perplexity ratio $\text{ppl}(\mathbf{u})/\text{ppl}(\mathbf{x})$. The y-axis is the cosine similarity between GloVe representations.} \label{fig:snli_scatter} \end{figure} \section{Discussion} Our \texttt{RewritingSampler}\xspace is reasonably efficient. The most time-consuming portion involves computing $p_\text{BERT}(u_i|\mathbf{u}_{-i})$ because the complexity of BERT is $O(l^2)$, where $l$ is the number of word-pieces in a sentence. As it runs the BERT model once for each step, the complexity of one iteration of sampling is $O(l^3)$. On an RTX 2080 GPU, we can finish 3 iterations of sampling for a 50-word-piece sentence per second. Our method can outperform TextFooler for two reasons. First, the sampling procedure is capable of generating more diverse sentences. TextFooler only makes word substitutions, so the diversity of its sentences is low.
Second, in TextFooler, the criterion for choosing a word substitution is the words' semantic similarity. It uses ad-hoc filtering to ensure grammatical correctness. In contrast, \texttt{RewritingSampler}\xspace jointly considers the semantic similarity and grammatical quality, so it can make a better choice when sampling words. \section{Conclusion} In this paper, we explore the problem of generating adversarial examples with sentence-level modifications. We propose a Gibbs sampling method that can effectively rewrite a sentence with good grammatical quality and high semantic similarity. Experimental results show that our method achieves a higher attack success rate than other methods. In the future, we will explore the use of this \texttt{RewritingSampler}\xspace for data augmentation.
\section{Introduction} Peakon solutions to non-linear two-dimensional $(x,t)$ integrable fluid equations have been shown to themselves exhibit integrable dynamics in several interesting cases. They take the generic form \begin{equation} \label{eq:peakonphi} \varphi(x,t) = \sum_{i=1}^n p_i(t)\, e^{-\vert x-q_i(t) \vert } \end{equation} and their dynamics for $(p_i,q_i)$ is deduced from a reduction of the 1+1 fluid equations for $\varphi(x,t)$. Integrability of peakons entails the existence of a Poisson structure for $(p_i,q_i)$ deduced from the original Poisson structure of the fluid fields, including $\varphi(x,t)$; the existence of a Hamiltonian $h(p_i,q_i)$ and Poisson-commuting higher Hamiltonians, a priori deduced by reduction of the continuous Hamiltonians to peakon solutions for the integrable 1+1 dynamical system. In a number of cases, the dynamics is expressed in terms of a Lax equation: \begin{equation} \label{eq:lax} \dot L = [ L,M ] \end{equation} where $L,M$ are $(p,q)$-dependent matrices and \eqref{eq:lax} contains all equations for $p_i(t)$, $q_i(t)$ obtained from plugging \eqref{eq:peakonphi} into the 1+1 integrable equation. The Lax matrix naturally yields candidate conserved Hamiltonians $h^{(k)} = \mathop{\rm tr}\nolimits(L^k)$. Poisson-commutation of $h^{(k)}$ is equivalent \cite{BV} to the existence of an $r$-matrix formulation of the Lax matrix Poisson brackets: \begin{equation} \label{eq:PBL1L2} \{ L_1 , L_2 \} = \sum \{ L_{ij} , L_{kl} \}\, e_{ij} \otimes e_{kl} = [ r_{12} , L_1 ] - [ r_{21} , L_2 ]. \end{equation} The $r$-matrix itself may depend on the dynamical variables \cite{STS}. A simple example of such a dynamical $r$-matrix is given by the reformulation of the well-known ``quadratic'' $r$-matrix structure, extensively studied \cite{Skl1982,Skl1983,MF}.
We recall the form of this quadratic structure: \begin{equation} \label{eq:PBFM} \{ L_1 , L_2 \} = a_{12} L_1 L_2 - L_1 L_2 d_{12} + L_1 b_{12} L_2 - L_2 c_{12} L_1 \end{equation} where $a_{12} = -a_{21}$, $d_{12} = -d_{21}$, $b_{12} = c_{21}$ to ensure antisymmetry of the Poisson bracket. When the \textsl{regularity condition}\footnote{The name 'regularity' will be motivated in section \ref{sect:twist}.} \begin{equation}\label{eq:trace0} a_{12}-c_{12}=d_{12}-b_{12} \end{equation} is fulfilled, \eqref{eq:PBFM} is indeed identified with \eqref{eq:PBL1L2} by setting $r_{12} = {\textstyle{\frac{1}{2}}}(a_{12}L_2+L_2a_{12})-L_2c_{12}$. Hence when the regularity condition \eqref{eq:trace0} is fulfilled, the quantities $\mathop{\rm tr}\nolimits L^k$ mutually Poisson-commute, ensuring the integrability of the peakon models. The case $a=d$, $b=c=0$ was first characterized by E.~Sklyanin \cite{Skl1979}; $a=d$, $b=c$ yields the so-called classical reflection algebra \cite{Skl1982,Skl1983}. We consider here the three integrable peakon equations discussed in e.g. \cite{AR2016}, for which the key features (Poisson structure, integrability, Lax formulation) have been established: 1. The Camassa--Holm equation \cite{CH,CHH}. Poisson structure for peakons is given in \cite{dGHH}, Lax formulation in \cite{RB}, although the Poisson structure here is not the one in \cite{dGHH}. We shall comment on and relate the two structures in Section 2. 2. The Degasperis--Procesi equation \cite{DGP,dGPro3}. Poisson structure for peakons is also given in \cite{dGHH}. Lax formulation is given in \cite{dGPro3}, also commented on in \cite{HH}. 3. The Novikov equation \cite{Novi1}. Poisson structure for peakons is given in \cite{HW}. Lax formulation is given in \cite{HLS}.
Note that a fourth peakon-bearing integrable equation was identified (the so-called modified Camassa--Holm equation \cite{Fuchs,Fokas}), but peakon integrability properties are obstructed by the higher non-linearity of the modified Camassa--Holm equation, precluding the consistent reduction of Poisson brackets and Hamiltonians to peakon variables \cite{AK}. We will establish in these three cases the existence of a quadratic $r$-matrix structure \eqref{eq:PBFM}. We will show that the four parametrizing matrices $a$, $b$, $c$, $d$ are equal or closely connected to the Toda $A_n$ $r$-matrix \cite{OT}. This close connection can be understood in the Camassa--Holm and Novikov cases, via an identification between the Lax matrix of Camassa--Holm and the well-known Toda molecule Lax matrix \cite{FO}. In addition, the construction of the Novikov Lax matrix as $L^{\text{Nov}} = TL^{\text{CH}}$, where $T = \sum_{i,j=1}^n \big(1+\mathop{\rm sgn}\nolimits(q_i-q_j)\big) e_{ij}$, relates the two $r$-matrices by a twist structure. The occurrence of $a_{12}$ in the Camassa--Holm context however also requires an understanding of the Camassa--Holm peakons Poisson bracket in \cite{dGHH} as a second Poisson bracket in the sense of Magri \cite{Magri,OR}, where the first Poisson bracket is the canonical structure $\{q_i,p_j\}=\delta_{ij}$, to be detailed in Section 2. Each following section is now devoted to one particular model, resp. Camassa--Holm (Section 2), Degasperis--Procesi (Section 3), and Novikov (Section 4). We conclude with some comments and open questions.
\section{Camassa--Holm peakons\label{sect2}} The Camassa--Holm shallow-water equation reads \cite{CH,CHH} \begin{equation} u_t - u_{xxt} + 3 uu_x = 2u_x u_{xx} + uu_{xxx} \end{equation} The $n$-peakon solutions take the form \begin{equation} u(x,t) = \sum_{i=1}^n p_i(t)\, e^{-|x-q_i(t)|} \end{equation} yielding a dynamical system for $p_i,q_i$: \begin{equation} \dot{q}_i = \sum_{j=1}^n p_j \, e^{-|q_i-q_j|} \,, \qquad \dot{p}_i = \sum_{j=1}^n p_ip_j \,{\mathfrak s}_{ij}\, e^{-|q_i-q_j|} \,. \end{equation} This discrete dynamical system is described by a Hamiltonian \begin{equation} H = \frac12\,\sum_{i,j=1}^n {p_ip_j} \,e^{ -| q_i-q_j| } \end{equation} such that \begin{equation} \dot{f}=\{ f\,,\,H \} \,, \end{equation} with the canonical Poisson structure: \begin{equation}\label{eq:PB-can} \{ p_i,p_j \} = \{ q_i,q_j \} = 0 \,, \qquad \{ q_i,p_j \} = \delta_{ij} \,. \end{equation} The same dynamics is in fact also triggered \cite{dGHH} by the reduced Camassa--Holm Hamiltonian: \begin{equation} H = \sum_{i} p_i \end{equation} with the reduced Camassa--Holm Poisson structure (which is dynamical and ``non-local''): \begin{equation}\label{eq:PBCH} \begin{aligned} \{ {p}_i,{p}_j \} &= {\mathfrak s}_{ij}\,{p}_i {p}_j e^{-|{q}_i-{q}_j| } \,, \\ \{ {q}_i,{p}_j \} &= {p}_j e^{-|{q}_i-{q}_j|} \,, \\ \{ {q}_i,{q}_j \} &= {\mathfrak s}_{ij} \big( 1-e^{-|{q}_i-{q}_j| }\big) \,, \end{aligned} \end{equation} where ${\mathfrak s}_{ij}=\mathop{\rm sgn}\nolimits({q}_i-{q}_j)$. It is also encoded in the Lax formulation \cite{CF,CH} \begin{equation} \frac{dL}{dt} = [L,M] \end{equation} with \begin{equation} \label{eq:LCH} L = \sum_{i,j=1}^n L_{ij} e_{ij} \,, \qquad L_{ij} = \sqrt{p_ip_j} \,e^{-{\textstyle{\frac{1}{2}}} | q_i-q_j | } \,. \end{equation} \subsection{The linear Poisson structure} We summarize here the results obtained in \cite{RB}. 
The Poisson structure \eqref{eq:PB-can} endows the Lax matrix \eqref{eq:LCH} with a linear $r$-matrix structure \begin{equation}\label{eq:rlin-CH} \{L_1\,,\, L_2\} = [r_{12}\,,\, L_1] -[r_{21}\,,\, L_2]\,,\quad\mbox{with}\quad r_{12}=a_{12}-b_{12}\,. \end{equation} In \eqref{eq:rlin-CH}, $a_{12}$ is the $A_{n-1}$ Toda $r$-matrix \begin{equation}\label{eq:Toda} a_{12}=\frac14\,\sum_{i,j=1}^n {\mathfrak s}_{ij}\, e_{ij}\otimes e_{ji}=-a_{21} \qquad \text{and} \qquad b_{12}=-a_{12}^{t_2} \,, \end{equation} with the convention $\mathop{\rm sgn}\nolimits(0) = 0$, and $e_{ij}$ is the $n\times n$ elementary matrix with 1 at position $(i,j)$ and 0 elsewhere. The connection of the Lax matrix with the Toda $r$-matrix structure was already pointed out in \cite{RB}. The $r$-matrix structure \eqref{eq:rlin-CH} is indeed identified with the same structure occurring in the so-called Toda lattice models \cite{OR}. One can add that the $r$-matrix structure for the Toda lattice in \cite{RB} and the peakon dynamics in \eqref{eq:rlin-CH} is directly identified with the well-known $r$-matrix structure for Toda molecule models \cite{FO}. Indeed, both Toda lattice and peakon Lax matrices endowed with the canonical Poisson structure \eqref{eq:PB-can} are representations of the abstract $A_{n-1}$ Toda molecule structure \begin{equation} L = \sum_i x_i\,h_i +\sum_{\alpha\in\Delta_+} x_\alpha (e_{\alpha}+e_{-\alpha})\,, \end{equation} with $\{x_\alpha\,,\,x_\beta\} = x_{\alpha+\beta}$ and $\{h_i\,,\,x_\alpha\} = \alpha(i)\,x_{\alpha}$. In the highly degenerate case of the Toda Lax matrix (where $x_\alpha=0$ for non-simple roots $\alpha$), it is directly checked that $a$ and $s$ yield the same contribution to \eqref{eq:rlin-CH}, implying that the Toda Lax matrix has an $r$-matrix structure parametrized by $a_{12}$ solely, as is well known \cite{OT}.
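The linear structure \eqref{eq:rlin-CH} can be verified symbolically for $n=2$; the following sketch (not part of the original computation, and assuming the sympy library and the ordering $q_1>q_2$, so that ${\mathfrak s}_{12}=+1$) builds the canonical brackets \eqref{eq:PB-can} of the Lax entries \eqref{eq:LCH} and compares them with $[r_{12},L_1]-[r_{21},L_2]$:

```python
import sympy as sp

def check_ch_linear():
    p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2', positive=True)
    E = sp.exp(-(q1 - q2) / 2)              # assume q1 > q2, so sgn = +1
    L = sp.Matrix([[p1, sp.sqrt(p1 * p2) * E],
                   [sp.sqrt(p1 * p2) * E, p2]])

    def pb(f, g):                           # canonical {q_i, p_j} = delta_ij
        return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
                   for q, p in [(q1, p1), (q2, p2)])

    def e(i, j):
        m = sp.zeros(2, 2); m[i, j] = 1; return m

    def kron(A, B):                         # tensor product on 2x2 blocks
        return sp.Matrix(4, 4, lambda r, c: A[r // 2, c // 2] * B[r % 2, c % 2])

    a = (kron(e(0, 1), e(1, 0)) - kron(e(1, 0), e(0, 1))) / 4   # Toda r-matrix
    b = -(kron(e(0, 1), e(0, 1)) - kron(e(1, 0), e(1, 0))) / 4  # b = -a^{t2}
    r12, r21 = a - b, -a - b                # space swap: a -> -a, b -> b
    L1, L2 = kron(L, sp.eye(2)), kron(sp.eye(2), L)
    lhs = sp.zeros(4, 4)                    # {L_1, L_2} entry by entry
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    lhs += pb(L[i, j], L[k, l]) * kron(e(i, j), e(k, l))
    rhs = (r12 * L1 - L1 * r12) - (r21 * L2 - L2 * r21)
    return (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(4, 4)
```

The same scaffolding extends to larger $n$ at the price of enumerating the sign factors ${\mathfrak s}_{ij}$ for a chosen ordering of the $q_i$.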
\subsection{The quadratic Poisson structure} The new result which we shall elaborate on now is stated as: \begin{prop}\label{prop:CHQ} The Poisson structure \eqref{eq:PBCH} endows the Lax matrix \eqref{eq:LCH} with a quadratic $r$-matrix structure: \begin{equation} \label{eq:LLQuad} \{ L_1 , L_2 \} = [ a_{12} , L_1 L_2 ] - L_2 b_{12} L_1 + L_1 b_{12} L_2, \end{equation} where $a_{12}$ and $b_{12}$ are given in \eqref{eq:Toda}. \end{prop} \textbf{Proof:}~ Direct check by computing the Poisson bracket $\{ L_{ij},L_{kl}\}$ on the left-hand side and the right-hand side. The antisymmetry of the Poisson structure, explicitly realized by \eqref{eq:LLQuad}, allows one to eliminate ``mirror displays'', i.e. $(ij,kl) \leftrightarrow (kl,ij)$. The invariance of the Poisson structure \eqref{eq:LLQuad} under each operation $t_1$ and $t_2$ is due to the symmetry $L^t = L$ of \eqref{eq:LCH}, the identification of $b_{12} = -a_{12}^{t_2}$, and the antisymmetry $a_{12}^{t_1t_2} = -a_{12}$. This allows one to eliminate transposed displays $(ij,kl) \leftrightarrow (ji,kl) \leftrightarrow (ji,lk) \leftrightarrow (ij,kl)$ and to check only a limited number of cases (indeed 13 cases). {\hfill \rule{5pt}{5pt}}\\ Remark that the form \eqref{eq:LLQuad} ensures that the regularity condition \eqref{eq:trace0} is trivially obeyed. The Poisson structure \eqref{eq:PBCH}, identified as a second Poisson structure in the sense of Magri \cite{Magri}, yields the natural quadratization \eqref{eq:LLQuad} of the $r$-matrix structure \eqref{eq:rlin-CH}. This is consistent with the fact that \eqref{eq:PBCH} is obtained by reduction to peakon variables of the second Poisson structure of {Camassa--Holm}, built in \cite{dGHH}, while \eqref{eq:PB-can} is obtained by reduction of the first Camassa--Holm Poisson structure.
The reduction procedure (from fields to peakon variables) and the recursion construction (\textit{\`a la} Magri, see \cite{OR}) are therefore compatible in this case, and the compatibility extends to the $r$-matrix structures of the reduced variables. Such a consistency at the $r$-matrix level is not an absolute rule. For instance, the first and second Poisson structures for the Calogero--Moser model yield $r$-matrix structures, ``linear'' \cite{ABT} and ``quadratic'' \cite{AR}, but with different $r$-matrices. \subsection{The Yang--Baxter relations} \paragraph{Quadratic structure.} As is known from general principles \cite{MF}, quadratic Poisson $r$-matrix structures obey consistency quadratic equations of Yang--Baxter type to ensure the Jacobi identity of the Poisson brackets. In the case of the original Camassa--Holm pair $(a,b)$ in \eqref{eq:Toda}, the skew-symmetric element $a_{12}$ obeys the modified Yang--Baxter equation: \begin{equation} \label{eq:modYB} [ a_{12},a_{13} ] + [ a_{12},a_{23} ] + [ a_{13},a_{23} ] = \frac1{16}\Big(\Omega_{123} - \Omega_{123}^{t_1t_2t_3}\Big) \,, \ \text{ with }\ \Omega_{123}=\sum_{i,j,k=1}^n e_{ij}\otimes e_{jk}\otimes e_{ki} \,. \end{equation} The symmetric element $b_{12}$ obeys an adjoint-modified Yang--Baxter equation directly obtained from transposing \eqref{eq:modYB} over space 3: \begin{equation} \label{eq:modYBadj} [ a_{12},b_{13} ] + [ a_{12},b_{23} ] + [ b_{13},b_{23} ] = \frac1{16}\Big(- \Omega_{123}^{t_3} + \Omega_{123}^{t_1t_2}\Big) \,. \end{equation} Cancellation of a suitable combination of \eqref{eq:modYB} and \eqref{eq:modYBadj} with all permutations added, together with the symmetry of the Camassa--Holm Lax matrix, then allows one to check explicitly the Jacobi identity for $L^{CH}$ and the Poisson structure \eqref{eq:LLQuad}. \\ \paragraph{Linear structure.} The Jacobi identity for the linear Poisson structure \eqref{eq:rlin-CH} also follows from \eqref{eq:modYB} and \eqref{eq:modYBadj}.
Associativity of the linear Poisson bracket is equivalent to the cyclic relation: \begin{equation} \label{eq:Jacobi-lin} [[ r_{12},r_{13} ] + [ r_{12},r_{23} ] + [ r_{32},r_{13} ], L_1] + cyclic = 0\,, \end{equation} where \textit{cyclic} stands for the sum over cyclic permutations of $(1,2,3)$. The Yang--Baxter ``kernel'' $[ r_{12},r_{13} ] + [ r_{12},r_{23} ] + [ r_{32},r_{13} ]$ must now be evaluated. In many models, it is known to be equal to $0$ (classical Yang--Baxter equation) or to a combination of the cubic Casimir operators $\Omega_{123}$ and $\Omega_{123}^{t_1t_2t_3} $ (modified Yang--Baxter equation). If either of these two sufficient conditions holds, \eqref{eq:Jacobi-lin} is then trivial. However, in the Camassa--Holm case, the situation is more involved. Indeed from \eqref{eq:modYB} and \eqref{eq:modYBadj}, and denoting the Casimir term $C_{123} \equiv \Omega_{123} - \Omega_{123}^{t_1t_2t_3}$, one has: \begin{equation} \label{eq:Jacobi-lin2} [ r_{12},r_{13} ] + [ r_{12},r_{23} ] + [ r_{32},r_{13} ] = C_{123} + C_{123}^{t_3} + C_{123}^{t_2} - C_{123}^{t_1} \end{equation} which is neither a cubic Casimir nor even cyclically symmetric. Realization of \eqref{eq:Jacobi-lin} indeed follows from explicit direct cancellation of the first (factorizing) Casimir term in \eqref{eq:Jacobi-lin2} under commutation with $L_1 + L_2 + L_3$ and cross-cancellation of the remaining 9 terms, using in addition the invariance of $L$ under transposition. We have here a textbook example of an $r$-matrix parametrizing a Poisson structure for a Lax matrix without obeying one of the canonical classical Yang--Baxter equations.
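Proposition~\ref{prop:CHQ} can likewise be verified symbolically for $n=2$. The sketch below (not part of the original computation; it assumes the sympy library and the ordering $q_1>q_2$) encodes the brackets \eqref{eq:PBCH} as a Poisson tensor on $(q_1,q_2,p_1,p_2)$ and compares $\{L_1,L_2\}$ with the right-hand side of \eqref{eq:LLQuad}:

```python
import sympy as sp

def check_ch_quadratic():
    p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2', positive=True)
    E = sp.exp(-(q1 - q2) / 2)              # assume q1 > q2; e^{-|q1-q2|} = E^2
    L = sp.Matrix([[p1, sp.sqrt(p1 * p2) * E],
                   [sp.sqrt(p1 * p2) * E, p2]])
    z = [q1, q2, p1, p2]
    J = sp.zeros(4, 4)                      # Poisson tensor from eq (PBCH)
    J[0, 1] = 1 - E**2                      # {q1,q2}
    J[0, 2], J[0, 3] = p1, p2 * E**2        # {q1,p1}, {q1,p2}
    J[1, 2], J[1, 3] = p1 * E**2, p2        # {q2,p1}, {q2,p2}
    J[2, 3] = p1 * p2 * E**2                # {p1,p2}
    J = J - J.T

    def pb(f, g):
        return sum(sp.diff(f, z[m]) * J[m, w] * sp.diff(g, z[w])
                   for m in range(4) for w in range(4))

    def e(i, j):
        m = sp.zeros(2, 2); m[i, j] = 1; return m

    def kron(A, B):
        return sp.Matrix(4, 4, lambda r, c: A[r // 2, c // 2] * B[r % 2, c % 2])

    a = (kron(e(0, 1), e(1, 0)) - kron(e(1, 0), e(0, 1))) / 4   # Toda r-matrix
    b = -(kron(e(0, 1), e(0, 1)) - kron(e(1, 0), e(1, 0))) / 4  # b = -a^{t2}
    L1, L2 = kron(L, sp.eye(2)), kron(sp.eye(2), L)
    lhs = sp.zeros(4, 4)
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    lhs += pb(L[i, j], L[k, l]) * kron(e(i, j), e(k, l))
    rhs = a * L1 * L2 - L1 * L2 * a - L2 * b * L1 + L1 * b * L2
    return (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(4, 4)
```

The check confirms, entry by entry, the quadratic structure \eqref{eq:LLQuad} in the simplest non-trivial case.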
\section{Degasperis--Procesi peakons}\label{sect3} This integrable shallow-water equation reads \cite{dGPro3} \begin{equation} u_t - u_{xxt} + 4 uu_x = 3 u_x u_{xx} + u u_{xxx} \end{equation} Note that, together with the Camassa--Holm equation, it is a particular case of the so-called $b$-equations: \begin{equation} u_t - u_{xxt} + (\beta+1) uu_x = \beta u_x u_{xx} + u u_{xxx} \end{equation} for which integrability properties are established for $\beta=2$ (Camassa--Holm) and $\beta=3$ (Degasperis--Procesi), by an asymptotic integrability approach \cite{dGPro3}. This approach fails at $\beta=4$. \subsection{The quadratic Poisson structure} For $\beta=3$, $n$-peakon solutions are parametrized as \begin{equation} u(x,t) = {\textstyle{\frac{1}{2}}} \sum_{j=1}^n p_j(t)\, e^{-|x-q_j(t)| } \,, \end{equation} yielding a dynamical system: \begin{equation} \label{eq:dynGP} \begin{split} \dot{p}_j &= 2 \sum_{k=1}^n p_j p_k \,{\mathfrak s}_{jk}\, e^{ -|q_j-q_k|} \\ \dot{q}_j &= \sum_{k=1}^n p_k\, e^{-|q_j-q_k| } \,. \end{split} \end{equation} Note the extra factor 2 in $\dot{p}_j$ compared with the Camassa--Holm equation. \\ The Lax matrix is now given by \begin{equation} \label{eq:LaxGP} L_{ij} = \sqrt{p_ip_j} \, \big( T_{ij} - {\mathfrak s}_{ij}\,e^{-|q_i-q_j|} \big) \,, \end{equation} with \begin{equation} \label{eq:defT} T_{ij} = 1+{\mathfrak s}_{ij}\,, \quad i,j=1,\dots,n\,. \end{equation} The dynamical equations \eqref{eq:dynGP} derive from the Hamiltonian $H = \mathop{\rm tr}\nolimits(L)$ and the Poisson structure obtained by reduction from the canonical Poisson structure of Degasperis--Procesi: \begin{align} \{ p_i,p_j \} &= 2{p}_i {p}_j {\mathfrak s}_{ij}\, e^{-|{q}_i-{q}_j|} \,, \nonumber \\ \label{eq:PBDGP} \{ q_i,p_j \} &= {p}_j e^{-|{q}_i-{q}_j|} \,, \\ \{ q_i,q_j \} &= {\textstyle{\frac{1}{2}}} {\mathfrak s}_{ij}\, \big( 1-e^{-|{q}_i-{q}_j|}\big) \,. 
\nonumber \end{align} Again one notes the non-trivial normalization of the Poisson brackets in \eqref{eq:PBDGP} compared with \eqref{eq:PBCH}, which will have a very significant effect on the $r$-matrix issues. Let us note that the Hamiltonian associated with a time evolution $\dot{f}=\{ f\,,\,H \}$ consistent with the dynamics \eqref{eq:dynGP} is in fact the conserved quantity noted $P$ in \cite{dGPro3}. The Hamiltonian $H$ in \cite{dGPro3} is $\mathop{\rm tr}\nolimits L^2$. Let us now state the key result of this section. \begin{prop}\label{prop:DGP} The Poisson structure \eqref{eq:PBDGP} endows the Lax matrix $L$ given in \eqref{eq:LaxGP} with a quadratic $r$-matrix structure: \begin{equation} \label{eq:LLDGP} \{ L_1 , L_2 \} = [ a'_{12} , L_1 L_2 ] - L_2 b'_{12} L_1 + L_1 b'_{12} L_2 \,, \end{equation} where (we recall our convention that $\mathop{\rm sgn}\nolimits(0)=0$) \begin{align} a'_{12} &= \frac12\sum_{i,j} {\mathfrak s}_{ij}\, e_{ij} \otimes e_{ji}=2\,a_{12}\, , \\ b'_{12} &= -\frac12\sum_{i,j} {\mathfrak s}_{ij}\, e_{ij} \otimes e_{ij} -\frac12\cQ_{12} \,,\quad \text{with}\quad \cQ_{12}= \sum_{i,j=1}^n e_{ij} \otimes e_{ij}. \end{align} \end{prop} \textbf{Proof:}~By direct check of the left-hand side and the right-hand side of \eqref{eq:LLDGP}. Since the Lax matrix $L$ is neither symmetric nor antisymmetric, many more cases of inequivalent index displays occur. More precisely, 12 four-index, 18 three-index and 5 two-index cases must be checked. {\hfill \rule{5pt}{5pt}}\\ The regularity condition \eqref{eq:trace0} is again trivially fulfilled. \subsection{The Yang--Baxter equations} The matrix $a'_{12}$ in the Degasperis--Procesi model is essentially the linear Toda $r$-matrix as in the Camassa--Holm model.
On the contrary, the $b'$ component of the quadratic structure \eqref{eq:LLDGP} must differ from the $b$ component in the quadratic Camassa--Holm bracket \eqref{eq:LLQuad} by an extra term proportional to $\cQ_{12} ={\cal P}_{12}^{t_1}={\cal P}_{12}^{t_2}=\cQ_{21}$, where ${\cal P}_{12}= \sum_{i,j=1}^n e_{ij} \otimes e_{ji}$ is the permutation operator between space 1 and space 2. For future use, we also note the property \begin{equation} {\cal P}_{12}^2={\mathbb I}_n\otimes{\mathbb I}_n \quad\text{and}\quad {\cal P}_{12} \, M_1 \, M'_2\,{\cal P}_{12} = M'_1 \, M_2 \,, \end{equation} \begin{equation}\label{eq:propQ} M_1\,\cQ_{12} = M^t_2\,\cQ_{12} \quad\text{and}\quad \cQ_{12}\,M_1=\cQ_{12}\,M^t_2 \end{equation} which holds for any $n\times n$ matrices $M$ and $M'$. \begin{rmk} Since the Lax matrix $L^{CH}$ of Camassa--Holm peakons is a symmetric $c$-number matrix, $L=L^t$, one checks that \begin{equation} {\cal P}_{12}\, L_1\, L_2 = L_2\, L_1\, {\cal P}_{12} \quad\text{and}\quad L_1\, {\cal P}_{12}^{t_1}\, L_2 = L_2\, {\cal P}_{12}^{t_1}\, L_1\,. \end{equation} Hence the $(a',b')$ pair of $r$-matrices yielding the quadratic Poisson structure for Degasperis--Procesi peakons yields the quadratic Poisson structure for Camassa--Holm peakons with a pair $(a,b+\frac14\cQ)$, since the extra contribution $L_2 \,\cQ_{12}\, L_1 - L_1\, \cQ_{12}\, L_2$ cancels out. In this case, we will call this pair an alternative presentation for Camassa--Holm peakons. \end{rmk} The Degasperis--Procesi $(a',b')$ pair obeys a set of classical Yang--Baxter equations which is simpler than that of the Camassa--Holm pair $(a,b)$.
The skew-symmetric element $a'_{12}$ still obeys a modified Yang--Baxter equation \begin{equation} [ a'_{12},a'_{13} ] + [ a'_{12},a'_{23} ] + [ a'_{13},a'_{23} ] = \frac1{4}\Big(\Omega_{123} - \Omega_{123}^{t_1t_2t_3}\Big) \,, \end{equation} but the symmetric element $b'$ obeys an adjoint-modified Yang--Baxter equation with zero right-hand side: \begin{equation} \label{eq:YBb} [ a'_{12},b'_{13} ] + [ a'_{12},b'_{23} ] + [ b'_{13},b'_{23} ] =0 \,. \end{equation} \begin{rmk} A term proportional to ${\cal P}_{12}$ can be added to $a'_{12}$, leading to a matrix $\wt a'_{12}=a'_{12}+\frac12{\cal P}_{12}$. This term is optional: it changes neither the Poisson brackets nor the regularity condition. If added, it allows the relation $b'_{12}=-(\wt a'_{12})^{t_2}$, which already occurred for the Camassa--Holm model. However, such a term breaks the antisymmetry relation $a'_{21}=-a'_{12}$, which has deep consequences at the level of the Yang--Baxter equations. Indeed, the form of the left-hand side in \eqref{eq:modYB} heavily relies on this antisymmetry property of $a'_{12}$. In fact, if one computes ``naively'' $[ \wt a'_{12},\wt a'_{13} ] + [ \wt a'_{12},\wt a'_{23} ] + [ \wt a'_{13},\wt a'_{23} ]$, one finds exactly zero and could be tempted to associate it with a Yang--Baxter equation with zero right-hand side. Yet, the ``genuine'' Yang--Baxter equation, i.e. the relation ensuring the associativity of the Poisson brackets, plugs the $\Omega$-term back into the game, leading \textit{in fine} again to a modified Yang--Baxter equation. \end{rmk} \subsection{Search for a linear Poisson structure} Contrary to the Camassa--Holm peakon case, the canonical Poisson bracket \eqref{eq:PB-can} is not compatible with the soliton-derived Poisson bracket structure \eqref{eq:PBDGP}.
Indeed, the linear pencil $\{\cdot\,,\,\cdot\}_{\text{can}} + \lambda \{\cdot\,,\,\cdot\}_{DP}$ (``can'' is for canonical and DP for Degasperis--Procesi) does not obey the Jacobi identity, due to extra non-cancelling contributions from the non-trivially \emph{scaled} $\{p,p\}$ and $\{q,q\}$ brackets in \eqref{eq:PBDGP}. The statement is consistent with the fact, pointed out in \cite{dGHH}, that no second local Poisson structure exists in the Degasperis--Procesi case, contrary to the Camassa--Holm case (where it is denoted $B_1$ in \cite{dGHH}). Consistently with the absence of a second local Poisson structure for the soliton Degasperis--Procesi equation yielding a ``linear'' $r$-matrix structure for the Lax matrix, one observes that the associated linear $r$-matrix structure naively defined by $r_{12}=a'_{12}+b'_{12}$ does not yield consistent Poisson brackets for the variables in the Lax matrix. If one indeed sets \begin{equation} \{ L_1 , L_2 \} = [ a'_{12}+b'_{12} , L_1 ] - [ a'_{12}+b'_{12} , L_2 ] \,, \end{equation} the Poisson brackets for individual coordinates of $L$ are inconsistent, due to the antisymmetric part in \eqref{eq:LaxGP}, contrary to the Camassa--Holm case where $L_{ij}=L_{ji}$. We also checked by software calculations (at least for $n$ running from 2 to 7) that there is no non-trivial linear combination $r'_{12}=x\,a'_{12}+y\,b'_{12}$ such that the relation $\{L_1\,,\,L_2\}=[r'_{12}\,,\,L_1]-[r'_{21}\,,\,L_2]$, with $L$ given in \eqref{eq:LaxGP}, yields a consistent Poisson structure for the $(p_i,q_j)$ variables. The peakon Lax matrix \eqref{eq:LaxGP} therefore realizes an interesting example of a non-dynamical quadratic $(a',b')$ Poisson structure with no directly associated linear $r$-matrix structure. The exact form, or even the existence, of such a linear $r$-matrix structure for Degasperis--Procesi peakons remains an open question.
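To close this section, two structural properties of the Degasperis--Procesi pair stated above, the antisymmetry $a'_{21}=-a'_{12}$ and the adjoint Yang--Baxter equation \eqref{eq:YBb}, can be confirmed numerically. A minimal sketch, with the tensor legs embedded via Kronecker products and the Weyl chamber fixed by an explicit choice of positions $q_i$ (both conventions are assumptions of the snippet):

```python
import numpy as np

def e(i, j, n):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def dp_pair(n, q):
    """a'_12 and b'_12 of the Degasperis-Procesi structure,
    with s_ij = sgn(q_i - q_j), sgn(0) = 0; note b' = -1/2 sum_ij (s_ij + 1) e_ij x e_ij."""
    s = np.sign(np.subtract.outer(q, q))
    a = 0.5 * sum(s[i, j] * np.kron(e(i, j, n), e(j, i, n))
                  for i in range(n) for j in range(n))
    b = -0.5 * sum((s[i, j] + 1.0) * np.kron(e(i, j, n), e(i, j, n))
                   for i in range(n) for j in range(n))
    return a, b

def comm(x, y):
    return x @ y - y @ x

for n, q in [(2, [1.0, 0.0]), (3, [0.3, 2.0, -1.2])]:
    a, b = dp_pair(n, np.array(q))
    I = np.eye(n)
    P = sum(np.kron(e(i, j, n), e(j, i, n)) for i in range(n) for j in range(n))
    P23 = np.kron(I, P)                    # permutation of tensor legs 2 and 3
    A12, A23 = np.kron(a, I), np.kron(I, a)
    A13 = P23 @ A12 @ P23
    B13, B23 = P23 @ np.kron(b, I) @ P23, np.kron(I, b)
    assert np.allclose(P @ a @ P, -a)      # antisymmetry a'_21 = -a'_12
    lhs = comm(A12, B13) + comm(A12, B23) + comm(B13, B23)
    assert np.allclose(lhs, 0.0)           # adjoint YBE with zero right-hand side
print("adjoint Yang-Baxter equation verified for n = 2, 3")
```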
\section{Novikov peakons} The Novikov shallow-wave equation reads \begin{equation} u_t - u_{xxt} + 4 u^2u_x = 3 uu_x u_{xx} + u^2 u_{xxx}, \end{equation} showing now a cubic non-linearity instead of the quadratic one of Camassa--Holm or Degasperis--Procesi. Originally proposed by Novikov \cite{MN} as an integrable partial differential equation, it was later shown \cite{HW} to have integrable peakons: \begin{equation} u(x,t) = \sum_{i=1}^n p_i(t)\, e^{-|x-q_i(t)|}. \end{equation} \subsection{The quadratic Poisson structure} The complete integrability structure was established in \cite{HLS}. The dynamical system for $p_i,q_i$ reads \begin{equation} \label{eq:dynNov} \begin{split} \dot{p}_i &= p_i \sum_{j,k=1}^n {\mathfrak s}_{ij}\,p_j p_k \, e^{-|q_i-q_j|-|q_i-q_k|} \\ \dot{q}_i &= \sum_{j,k=1}^n p_j p_k\, e^{-|q_i-q_j|-|q_i-q_k| } \,, \end{split} \end{equation} still with the notation ${\mathfrak s}_{ij}=\mathop{\rm sgn}\nolimits(q_i-q_j)$. They constitute a Hamiltonian system where the Poisson structure takes the following form: \begin{equation}\label{eq:PBNov} \begin{aligned} \{ p_i,p_j \} &= {\mathfrak s}_{ij}\,p_i p_j\, e^{-2|q_i-q_j| } \,, \\ \{ q_i,p_j \} &= p_j \,e^{-2|q_i-q_j| } \,, \\ \{ q_i,q_j \} &= {\mathfrak s}_{ij}\, \big( 1-e^{-2|q_i-q_j| }\big) \,. \end{aligned} \end{equation} The conserved Hamiltonians are obtained as traces of powers of a Lax matrix \begin{equation} L = TPEP \end{equation} where \begin{equation} T_{ij} = 1+{\mathfrak s}_{ij}, \qquad P_{ij} = p_i \delta_{ij}, \qquad E_{ij} = e^{-|q_i-q_j| }. \end{equation} In other words, the time evolution \eqref{eq:dynNov} is described by the Hamilton equation $\dot f = \{f\,,\,H\}$, with $H=\frac12\mathop{\rm tr}\nolimits L$ and the PB \eqref{eq:PBNov}.
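As an independent sanity check, one can integrate the system \eqref{eq:dynNov} numerically and monitor the conservation of $\mathop{\rm tr}\nolimits L$ and $\mathop{\rm tr}\nolimits L^2$ along the flow. The sketch below uses a fourth-order Runge--Kutta step; the initial data, step size and tolerances are arbitrary choices of the snippet:

```python
import numpy as np

def novikov_rhs(p, q):
    """Right-hand side of the Novikov peakon system:
    pdot_i = p_i (sum_j s_ij p_j E_ij)(sum_k p_k E_ik),  qdot_i = (sum_j p_j E_ij)^2,
    with E_ij = exp(-|q_i - q_j|) and s_ij = sgn(q_i - q_j)."""
    E = np.exp(-np.abs(np.subtract.outer(q, q)))
    s = np.sign(np.subtract.outer(q, q))
    w = E @ p
    return p * ((s * E) @ p) * w, w * w

def lax(p, q):
    """Novikov Lax matrix L = T P E P."""
    s = np.sign(np.subtract.outer(q, q))
    E = np.exp(-np.abs(np.subtract.outer(q, q)))
    P = np.diag(p)
    return (1.0 + s) @ P @ E @ P

def rk4_step(p, q, dt):
    k1p, k1q = novikov_rhs(p, q)
    k2p, k2q = novikov_rhs(p + 0.5 * dt * k1p, q + 0.5 * dt * k1q)
    k3p, k3q = novikov_rhs(p + 0.5 * dt * k2p, q + 0.5 * dt * k2q)
    k4p, k4q = novikov_rhs(p + dt * k3p, q + dt * k3q)
    return (p + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p),
            q + dt / 6 * (k1q + 2 * k2q + 2 * k3q + k4q))

p = np.array([1.0, 0.8, 0.5])
q = np.array([2.0, 0.5, -1.5])     # well-separated peakons: the ordering is preserved
L0 = lax(p, q)
h1_0, h2_0 = np.trace(L0), np.trace(L0 @ L0)
for _ in range(200):
    p, q = rk4_step(p, q, 1e-3)
L1 = lax(p, q)
assert abs(np.trace(L1) - h1_0) < 1e-6       # tr L conserved
assert abs(np.trace(L1 @ L1) - h2_0) < 1e-6  # tr L^2 conserved
print("tr L and tr L^2 conserved along the flow")
```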
Redefining now \begin{equation}\label{eq:RenNov} \bar{q}_j = 2q_j \qquad \text{and} \qquad \bar{p}_j = p_j^2, \end{equation} yields a Poisson structure \begin{equation}\label{eq:PBNov2} \begin{aligned} \{ \bar{p}_i,\bar{p}_j \} &= 4{\mathfrak s}_{ij}\,\bar{p}_i \bar{p}_j e^{ -|\bar{q}_i-\bar{q}_j|} \,, \\ \{ \bar{q}_i,\bar{p}_j \} &= 4\bar{p}_j e^{ -|\bar{q}_i-\bar{q}_j| } \,, \\ \{ \bar{q}_i,\bar{q}_j \} &= 4{\mathfrak s}_{ij}\, \big( 1-e^{ -|\bar{q}_i-\bar{q}_j|}\big) \,, \end{aligned} \end{equation} identical to the Camassa--Holm peakon structure \eqref{eq:PBCH} up to a factor 4. The Lax matrix now reads \begin{equation} \label{eq:LaxNov} L_{ij} = \sum_{k=1}^n T_{ik} \sqrt{\bar{p}_k\,\bar{p}_j} \, e^{ -{\textstyle{\frac{1}{2}}}|\bar{q}_j-\bar{q}_k| } \end{equation} which is exactly identified with $T\,L^{CH}$. Hence, the Novikov peakons are in fact described by a Lax matrix simply twisted from the Camassa--Holm Lax matrix ($L \to TL$) and an identical Poisson bracket, a fact seemingly overlooked in \cite{HLS}. The $r$-matrix structure immediately follows, but several inequivalent structures are identified due to the gauge covariance pointed out in section \ref{sect3}: \begin{prop} The Poisson structure \eqref{eq:PBNov} endows the Lax matrix \eqref{eq:LaxNov} with a set of quadratic $r$-matrix structures \begin{equation}\label{eq:L1L2Nov} \{ L_1,L_2 \} = a''_{12} L_1L_2 - L_1L_2 d''_{12} + L_1b''_{12}L_2 - L_2c''_{12}L_1 \,, \end{equation} where \begin{equation} \label{eq:twistabcd} \begin{aligned} & a''_{12} = 4\, T_1\,T_2\, a_{12}\, T_1^{-1}\,T_2^{-1}, \quad && d''_{12} = 4\,a_{12}\,, \\ & b''_{12} = T_2 \, \big( -4a_{12}^{t_2}- \cQ_{12}\big)\, T_2^{-1}, \qquad &&c''_{12} = b''_{21}\,, \end{aligned} \end{equation} and $a_{12}$ is given in \eqref{eq:Toda}. \end{prop} The proof follows trivially from section 2 and the gauge covariance of section 3. The regularity condition \eqref{eq:trace0} is fulfilled by this Poisson structure.
Although less trivial than in the previous two cases, this property will be proved in the next section. \subsection{The Yang--Baxter equations} The Yang--Baxter equations for \eqref{eq:twistabcd} follow immediately by suitable conjugations by $T$ of the Yang--Baxter equations for the alternative form of the Degasperis--Procesi structure matrices. Precisely, from the redefinitions in \eqref{eq:twistabcd} the Yang--Baxter equations for $a''$, $b''$, $c''$ and $d''$ read \begin{equation} \label{eq:ybe-Novi} \begin{aligned} & [ a''_{12},a''_{13} ] + [ a''_{12},a''_{23} ] + [ a''_{13},a''_{23} ] = \Omega_{123} - \Omega_{123}^{t_1t_2t_3} \,,\\ & [ d''_{12},d''_{13} ] + [ d''_{12},d''_{23} ] + [ d''_{13},d''_{23} ] = \Omega_{123} - \Omega_{123}^{t_1t_2t_3} \,,\\ &[ a''_{12},b''_{13} ] + [ a''_{12},b''_{23} ] + [ b''_{13},b''_{23} ] =0 \,,\\ &[ d''_{12},c''_{13} ] + [ d''_{12},c''_{23} ] + [ c''_{13},c''_{23} ] =0 \,, \end{aligned} \end{equation} where in writing the right-hand side of the relation for $a''$, we have used the property that $\Omega_{123} - \Omega_{123}^{t_1t_2t_3}$ commutes with any product of the form $M_1\,M_2\,M_3$ for any matrix $M$ ($M=T$ for the present calculation). Note that the adjoint Yang--Baxter equations for $b''$ and $c''$ remain with zero right-hand side, even though the conjugations by $T$ depend on the matrices (e.g. $a''$ or $b''$) one considers. \paragraph{Linear structure.} As for the Degasperis--Procesi case, we looked for a linear combination $r''_{12}= x\,a''_{12}+y\,b''_{12}+z\,c''_{12}+t\,d''_{12}$ such that the Poisson structure $\{L_1\,,\,L_2\}=[r''_{12}\,,\,L_1]-[r''_{21}\,,\,L_2]$, with $L$ given in \eqref{eq:LaxNov}, yields a consistent Poisson structure for the $(p_i,q_j)$ variables. For $n>2$, the only solution is given by $r''_{12}= x\,(a''_{12}+b''_{12}-c''_{12}-d''_{12})$, which is identically zero due to the regularity relation \eqref{eq:trace0}.
The calculation was done using symbolic computation software for $n$ running from 3 to 5. Hence, in the generic case, we conjecture that there is no non-trivial linear structure, at least none directly associated with the quadratic one. Again, the existence (and the exact form) of such a linear $r$-matrix structure for general Novikov peakons remains an open question. In the particular case $n=2$, there is indeed a solution, given by \begin{equation} r''_{12}= \frac12\Big(\,a''_{12}-b''_{12}-c''_{12}+d''_{12}\Big)=\,a''_{12}-c''_{12} =\begin{pmatrix} 1 &0 &0 &0 \\ 2& 0& -1 &0 \\ 0 &1 &0 &0 \\ 2 &0 &-2 &1\end{pmatrix}\,. \end{equation} This $r$-matrix obeys the modified Yang--Baxter relation \begin{equation} [ r''_{12},r''_{13} ] + [ r''_{12},r''_{23} ] + [ r''_{32},r''_{13} ] = \Omega_{123} - \Omega_{123}^{t_1t_2t_3} \end{equation} and leads to PB of the form \begin{equation} \begin{aligned} \{\bar p_1\,,\,\bar p_2\}&=-4\,{\mathfrak s}_{12}\sqrt{\bar p_1\bar p_2}\,e^{-\frac12|\bar q_1-\bar q_2|} \,,\\ \{\bar q_1-\bar q_2\,,\,\bar p_1\}&=4\,\sqrt{\frac{\bar p_1}{\bar p_2}}\,e^{-\frac12|\bar q_1-\bar q_2|}+4 \,,\\ \{\bar q_1-\bar q_2\,,\,\bar p_2\}&=-4\,\sqrt{\frac{\bar p_2}{\bar p_1}}\,e^{-\frac12|\bar q_1-\bar q_2|}-4 \,. \end{aligned} \end{equation} Indeed, since the combination $\bar q_1+\bar q_2$ does not appear in the expression of $L$, one can realize a consistent associative Poisson bracket by setting $\{\bar q_1+\bar q_2\,,\,X\}=0$ for all $X$. \subsection{Dual presentation for Novikov peakons} Let us remark that the form of the Novikov Lax matrix $L^N=T\,L^{CH}$ suggests a dual presentation for the Novikov peakons. Indeed, one can introduce the Lax matrix $\wt L^N=L^{CH}\,T$, which explicitly takes the form \begin{equation} \wt L_{ij} = \sum_{k=1}^n \sqrt{\bar{p}_k\,\bar{p}_i}\, e^{-{\textstyle{\frac{1}{2}}}|\bar{q}_i-\bar{q}_k| }\,T_{kj} \,.
\end{equation} In that case, the PB \eqref{eq:PBNov} still have a quadratic structure of the form \eqref{eq:L1L2Nov}, but now with \begin{equation} \begin{aligned} & \wt a''_{12} = 4\,a_{12}, \qquad && \wt d''_{12} = 4\, T^{-1}_1\,T^{-1}_2\, a_{12}\, T_1\,T_2, \\ & \wt b''_{12} = T_1^{-1} \, \big( -4a_{12}^{t_2}- \cQ_{12}\big)\, T_1, \qquad &&\wt c''_{12} = \wt b''_{21}. \end{aligned} \end{equation} It is easy to see that these matrices still obey precisely the same Yang--Baxter relations \eqref{eq:ybe-Novi}. The Hamiltonians one constructs using $\wt L^N$ are exactly the same as for $L^N$. Thus, we get a dual presentation of exactly the same model with the same Hamiltonians. This property extends to the calculation presented in the next section. \section{Non-trivial boundary terms for peakons \label{sect:twist}} It is known that other sets of Hamiltonians can be defined for each solution $\gamma$ of the dual classical reflection equation \begin{equation}\label{eq:class-RE} \gamma_1 \gamma_2 a_{12} - d_{12} \gamma_1 \gamma_2 + \gamma_2 b_{12} \gamma_1 - \gamma_1 c_{12} \gamma_2=0\,. \end{equation} The Hamiltonians take the form $\mathop{\rm tr}\nolimits\big((\gamma L)^k\big)$. The solution $\gamma={\mathbb I}_n$ exists whenever the condition \eqref{eq:trace0} holds, motivating its designation as the \textsl{regularity condition}. Note that in the Freidel--Maillet approach \cite{MF}, the immediate correspondence between solutions $\gamma$ of the dual equation and solutions $\gamma^{-1}$ of the direct equation (r.h.s. of \eqref{eq:PBFM} = 0) is used to yield an equivalent form of the commuting Hamiltonians. We shall now propose classes of invertible solutions to the classical reflection equation \eqref{eq:class-RE} for each of the three peakon cases.
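The solutions discussed in the remainder of this section (the unit matrix, the matrix $T$, its diagonal dressings $D\gamma D$, and the even-$n$ matrix $S^{id}$ in the first Weyl chamber) can be tested directly against \eqref{eq:class-RE}. A minimal numerical sketch, assuming our Kronecker-product conventions for the tensor legs:

```python
import numpy as np

def e(i, j, n):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def dp_matrices(n, q):
    """(a, b, c, d) of the DP quadratic structure: a = d = a'_12, b = b'_12,
    c = b'_21 (equal to b'_12, since e_ij x e_ij is invariant under leg exchange)."""
    s = np.sign(np.subtract.outer(q, q))
    a = 0.5 * sum(s[i, j] * np.kron(e(i, j, n), e(j, i, n))
                  for i in range(n) for j in range(n))
    b = -0.5 * sum((s[i, j] + 1.0) * np.kron(e(i, j, n), e(i, j, n))
                   for i in range(n) for j in range(n))
    return a, b, b, a

def reflection_residual(g, n, q):
    a, b, c, d = dp_matrices(n, q)
    g1, g2 = np.kron(g, np.eye(n)), np.kron(np.eye(n), g)
    return g1 @ g2 @ a - d @ g1 @ g2 + g2 @ b @ g1 - g1 @ c @ g2

for n in (2, 3, 4):
    q = np.arange(n, dtype=float)          # first Weyl chamber: q_i > q_j <=> i > j
    s = np.sign(np.subtract.outer(q, q))
    T = 1.0 + s
    D = np.diag(1.0 + np.arange(n))        # an arbitrary diagonal dressing
    candidates = [np.eye(n), T, D @ T @ D]
    if n % 2 == 0:                          # the solution S^id exists for n even
        candidates.append(sum(e(2 * i, 2 * i + 1, n) - e(2 * i + 1, 2 * i, n)
                              for i in range(n // 2)))
    for g in candidates:
        assert np.allclose(reflection_residual(g, n, q), 0.0)
print("reflection-equation solutions verified for n = 2, 3, 4")
```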
Let us first emphasize that any solution $\gamma$ of \eqref{eq:class-RE}, where $a'_{12}=2a_{12}$, $b'_{12}$, $c'_{12}=b'_{21}$ and $d'_{12}=a'_{12}$ are the matrices associated with the Degasperis--Procesi peakons, or with the alternative presentation of Camassa--Holm peakons, corresponds to a solution $\gamma'=\gamma T^{-1}$ of the reflection equation associated with the Novikov peakons, since the structure matrices are related as: $b_{12}'' = 2T_2\,b'_{12}\,T_2^{-1}$, $c''_{12}=b''_{21}$, $a''_{12}=2T_1\,T_2\,a'_{12}\,T_1^{-1}\,T_2^{-1}$ and $d''_{12}=2a'_{12}$. Hence the Degasperis--Procesi case provides solutions for the two other peakon models. \begin{lemma}\label{lem:diag} If $\gamma$ is a solution of the Degasperis--Procesi reflection equation \eqref{eq:class-RE}, then for any diagonal matrix $D$, the matrix $D\gamma D$ is also a solution of the reflection equation. \end{lemma} \textbf{Proof:}~ The structure matrices $a'_{12}$ and $b'_{12}$ of \eqref{eq:class-RE} for the Degasperis--Procesi peakons obey: \begin{align} a'_{12}\,D_1D_2=D_1D_2\,a'_{12} \qquad \text{and}\qquad D_1\,b'_{12}\,D_2=D_2\,b'_{12}\,D_1\,, \end{align} where we also used the property \eqref{eq:propQ}. Now, multiplying \eqref{eq:class-RE} by $D_1D_2$ on the right and/or on the left, and using the above properties of $a'_{12}$ and $b'_{12}$, leads to the desired result. {\hfill \rule{5pt}{5pt}}\\ Note that the transformation $\gamma\,\to\,D\gamma D$ is equivalent to a canonical redefinition $p_i\,\to\, d^2_i\,p_i$. \begin{prop}\label{prop:twist} For Degasperis--Procesi peakons, and for $n$ arbitrary, we have two fundamental solutions: \\ \null\qquad(i) the unit matrix ${\mathbb I}_n$, \\ \null\qquad(ii) the matrix $T$ introduced in \eqref{eq:defT}.\\ Moreover, when $n$ is even, we have an additional solution whose explicit form depends on the Weyl chamber we consider for the variables $q_i$.
In the first Weyl chamber, where $q_i>q_j\ \Leftrightarrow\ i>j$, it takes the form \begin{equation} (iii)\qquad S^{id}=\sum_{i=0}^{\frac{n}2-1} \big(e_{2i+1,2i+2} -e_{2i+2,2i+1}\big)={\mathbb I}_{n/2}\otimes\, \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \end{equation} In any other Weyl chamber defined by a permutation $\sigma$ such that $q_i>q_j\ \Leftrightarrow\ \sigma(i)>\sigma(j)$, the solution $S^\sigma$ takes the form \begin{equation} S^{\sigma}=\sum_{i=0}^{\frac{n}2-1} \big(e_{\sigma(2i+1),\sigma(2i+2)} -e_{\sigma(2i+2),\sigma(2i+1)}\big). \end{equation} Using lemma \ref{lem:diag}, this leads to 2 (resp. 3) classes of solutions for $n$ odd (resp. even). All these solutions are also valid for the alternative presentation of Camassa--Holm, and (once multiplied on the right by $T^{-1}$) for the Novikov model. The solutions (i) and (iii) are also valid for the original Camassa--Holm model. \end{prop} \textbf{Proof:}~ $(i)$ The unit matrix is trivially a solution since \eqref{eq:class-RE} is then the regularity condition. \\ $(ii)$ The reflection equation for $T$ projected on a generic element $e_{ij} \otimes e_{kl}$ explicitly contains the indices $i,j,k,l$, and possibly two summation indices corresponding to the products by $\gamma_1$ and $\gamma_2$. Since the entries of the matrices depend on the indices only through the sign function $\mathop{\rm sgn}\nolimits(r-s)$, it is sufficient to check the relations for small values of $n$. We verified them using symbolic calculation software for $n$ running from 2 to 8. \\ $(iii)$ Similarly, the reflection equation for $S$ needs only to be checked for small values of $n$. We verified it using symbolic calculation software for $n$ running from 2 to 8. {\hfill \rule{5pt}{5pt}}\\ \begin{rmk}\label{lem:transp-inv} A solution $\gamma$ to the reflection equation also leads to solutions of the form $\gamma^t$ and $\gamma^{-1}$ for Camassa--Holm peakons.
However, the classes of solutions they lead to fall within the ones already presented in proposition \ref{prop:twist}. \end{rmk} Note that dressing $T$ by the diagonal matrix $D=\mathop{\rm diag}\nolimits((-1)^i)$ yields $T^{-1}$, which is therefore also a solution to the Degasperis--Procesi reflection equation. Moreover, this proves that the unit matrix is also a solution of the reflection equation \eqref{eq:class-RE} in the Novikov case, establishing the property mentioned in the previous section that the regularity condition is fulfilled by the Novikov $r$-matrix structure. \section{Hamiltonians} Now that the quadratic $r$-matrix structures have been defined, we are in a position to compute higher Hamiltonians for each of the three classes of peakons. These Hamiltonians are PB-commuting with respect to the Poisson brackets \eqref{eq:PBCH}, \eqref{eq:PBDGP} or \eqref{eq:PBNov}, depending on the peakon model, that is to say on the Lax matrices \eqref{eq:LCH}, \eqref{eq:LaxGP} or \eqref{eq:LaxNov}. We also provide some cases of Hamiltonians with non-trivial boundary terms. \subsection{Camassa--Holm Hamiltonians} In addition to the peakon Hamiltonian $H_{CH}=\mathop{\rm tr}\nolimits L=\sum_i p_i$, we get for instance \begin{equation}\label{eq:Hn-CH} \begin{aligned} H^{(1)}_{CH}&=\mathop{\rm tr}\nolimits L=\sum_i p_i\,,\\ H^{(2)}_{CH}&=\mathop{\rm tr}\nolimits L^2=\sum_{i,j} p_ip_j \,e^{-|q_i-q_j|} \,,\\ H^{(3)}_{CH}&= \mathop{\rm tr}\nolimits L^3=\sum_{i,j,k} p_ip_jp_k \,e^{-\frac12|q_i-q_j|}\,e^{-\frac12|q_j-q_k|}\,e^{-\frac12|q_k-q_i|}\,. \end{aligned} \end{equation} We recognize in $H^{(1)}_{CH}$ and $H^{(2)}_{CH}$ the usual Camassa--Holm Hamiltonians, as computed e.g. in \cite{dGHH}.
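For $n=2$, and in the Weyl chamber $q_1>q_2$, the involutivity of these Hamiltonians can be verified symbolically, using the Camassa--Holm bracket obtained from \eqref{eq:PBNov2} by dropping the bars and the overall factor 4:

```python
import sympy as sp

p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2', positive=True)
E = sp.exp(-(q1 - q2))      # chamber q1 > q2: |q1 - q2| = q1 - q2 and s_12 = +1
x = [q1, q2, p1, p2]

# coordinate brackets of the Camassa-Holm peakon structure
J = sp.zeros(4, 4)
J[0, 1] = 1 - E             # {q1, q2} = s_12 (1 - e^{-|q1-q2|})
J[0, 2] = p1                # {q1, p1}
J[0, 3] = p2 * E            # {q1, p2}
J[1, 2] = p1 * E            # {q2, p1}
J[1, 3] = p2                # {q2, p2}
J[2, 3] = p1 * p2 * E       # {p1, p2} = s_12 p1 p2 e^{-|q1-q2|}
J = J - J.T

def pb(f, g):
    """Poisson bracket of two functions of (q1, q2, p1, p2) via the chain rule."""
    return sp.expand(sum(sp.diff(f, x[a]) * sp.diff(g, x[b]) * J[a, b]
                         for a in range(4) for b in range(4)))

H1 = p1 + p2
H2 = p1**2 + p2**2 + 2 * p1 * p2 * E
H3 = p1**3 + p2**3 + 3 * E * p1 * p2 * (p1 + p2)

assert sp.simplify(pb(H1, H2)) == 0
assert sp.simplify(pb(H1, H3)) == 0
assert sp.simplify(pb(H2, H3)) == 0
print("H1, H2, H3 are in involution (n = 2, chamber q1 > q2)")
```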
\paragraph{Diagonal boundary term.} If one chooses $\gamma=D$ as a diagonal solution to the reflection equation, we get another series of PB-commuting Hamiltonians: \begin{equation} \begin{aligned} \mathop{\rm tr}\nolimits(DL)&=\sum_i d_i\,p_i\,,\\ \mathop{\rm tr}\nolimits \big((DL)^2\big)&=\sum_{i} d_i^2\,p_i^2 +2\sum_{i<j} d_i\,d_j\,p_ip_j \,e^{-|q_i-q_j|} \,,\\ \mathop{\rm tr}\nolimits \big((DL)^3\big)&=\sum_{i,j,k} d_i\,d_j\,d_k\,p_ip_jp_k \,e^{-\frac12|q_i-q_j|}\,e^{-\frac12|q_j-q_k|}\,e^{-\frac12|q_k-q_i|}. \end{aligned} \end{equation} One gets a ``deformed'' version of the Camassa--Holm Hamiltonians, with deformation parameters $d_i$. \paragraph{$T$-boundary term.} Choosing now $\gamma=DTD$ as a solution to the reflection equation, we get: \begin{equation} \begin{aligned} \mathop{\rm tr}\nolimits(\gamma\,L)&=\sum_{i} d_i^2\,p_i +2\sum_{i<j} d_id_j\,\sqrt{p_ip_j} \,e^{-\frac12|q_i-q_j|} \,,\\ \mathop{\rm tr}\nolimits \big((\gamma\,L)^2\big)&=\sum_{i} d_i^4\,p_i^2+3 \sum_{i\neq j} d^2_i\,d^2_j\,p_ip_j \,e^{-|q_i-q_j|} +4 \sum_{i\neq j} d^3_i\,d_j\,\sqrt{p_i^3p_j} \,e^{-\frac12|q_i-q_j|} \\ &+6\sum_{\atopn{i, j,k}{\text{all }\neq}} d_i^2d_jd_k\,p_i\sqrt{p_jp_k} \,e^{-\frac12|q_i-q_j|}\,e^{-\frac12|q_i-q_k|}\\ &+\sum_{\atopn{i, j,k,l}{\text{all }\neq}} (1+{\mathfrak s}_{jk}{\mathfrak s}_{l i})\,d_id_jd_kd_l\,\sqrt{p_ip_jp_kp_l} \,e^{-\frac12|q_i-q_j|}\,e^{-\frac12|q_k-q_l|} \,.\\ \end{aligned} \end{equation} Note that since the alternative presentation of Camassa--Holm peakons describes the same Poisson structure, the above Hamiltonians are also valid when using the presentation of section \ref{sect2}. \paragraph{$S$-boundary term for $n$ even.} Since, for Camassa--Holm peakons, the Lax matrix $L$ is symmetric while $S^\sigma$ is antisymmetric, we get in any Weyl chamber \begin{equation} \mathop{\rm tr}\nolimits\big((S^\sigma L)^{2m+1}\big)=0\,,\quad\forall m\,.
\end{equation} As an example of a non-vanishing Hamiltonian, we have for $\gamma=DS^\sigma D$ (in the Weyl chamber defined by $\sigma$): \begin{equation} \mathop{\rm tr}\nolimits\big((\gamma L)^{2}\big)=2\sum_{\ell=0}^{n/2-1} d_{\ell'}d_{\ell'+1}\,p_{\sigma(\ell')}\,p_{\sigma(\ell'+1)}\, \Big( e^{-|q_{\sigma(\ell'+1)}-q_{\sigma(\ell')}|}- 1\Big)\,, \end{equation} where $\ell'=2\ell+1$. \subsection{Degasperis--Procesi Hamiltonians} \paragraph{Diagonal boundary term.} Considering immediately the case with a diagonal matrix $\gamma$, we have \begin{equation}\label{eq:Hn-DP} \begin{aligned} \mathop{\rm tr}\nolimits (\gamma L)&=\sum_i d_i\,p_i\,,\\ \mathop{\rm tr}\nolimits\big( (\gamma L)^2\big)&=\sum_{i} d_i^2\,p_i^2+\sum_{i<j} d_i\,d_j\,p_ip_j \,\Big(2-e^{-|q_i-q_j|}\Big)e^{-|q_i-q_j|} \,,\\ \mathop{\rm tr}\nolimits\big( (\gamma L)^3\big)&= \sum_{i,j,k} d_id_jd_k\,p_ip_jp_k \, \Big( -3\,e^{-|q_i-q_k|-|q_j-q_k|} + 4 \,e^{- \frac12 (|q_i-q_k|+|q_j-q_k|+|q_j-q_i|)} \Big)\,. \end{aligned} \end{equation} The ``usual'' Hamiltonians $\mathop{\rm tr}\nolimits L^m$ are recovered by setting $d_i=1$, $\forall i$. \paragraph{$T$-boundary term.} In any Weyl chamber, we get \begin{equation} \begin{aligned} \mathop{\rm tr}\nolimits (\gamma T \gamma L)&=\sum_{i} d_i\,p_i+\sum_{i\neq j} d_i\,d_j\,\sqrt{p_i\,p_j}\,e^{-|q_i-q_j|}\,, \\ \mathop{\rm tr}\nolimits \big((\gamma T \gamma L)^2\big)&=\Big(\sum_{i, j} d_i\,d_j\,\sqrt{p_i\,p_j}\Big)^2 +\sum_{i\neq j} d^2_i\,d^2_j\,p_i\,p_j\,\big(1-e^{-|q_i-q_j|}\big)^2 \\ &-4\sum_{i\neq j} d_i^2\,d_j\,p_i\,\sqrt{p_j}\big(1-e^{-|q_i-q_j|}\big)\Big(\sum_k d_k\sqrt{p_k}\Big)\\ &-8\sum_{q_j<q_i<q_k} d_id_jd_k\,\sqrt{p_ip_jp_k}\big(1-e^{-|q_j-q_k|}\big)\Big(\sum_l d_l\sqrt{p_l}\Big)\\ &+2\sum_{\atopn{i,j,k}{\text{all }\neq}} d_i^2\,d_jd_k\,p_i\sqrt{p_jp_k}\, \big(1-e^{-|q_i-q_j|}\big)\big(1- e^{- |q_i-q_k|} \big)\\ &+8\sum_{q_i<q_j<q_k<q_l}d_id_jd_kd_l\,\sqrt{p_ip_jp_kp_l}\, \big(1-e^{-|q_i-q_l|}\big)\big(1- e^{- |q_j-q_k|} \big) \, .
\end{aligned} \end{equation} \paragraph{$S$-boundary term for $n$ even.} In the Weyl chamber characterized by $\sigma$, we get: \begin{equation} \begin{aligned} \mathop{\rm tr}\nolimits(S^\sigma L)&=2\sum_{\ell=0}^{\frac n2-1}\,d_{\ell+1}\,\sqrt{p_{\sigma(\ell')}\,p_{\sigma(\ell'+1)}}\Big(1-e^{-|q_{\sigma(\ell')}-q_{\sigma(\ell'+1)}|}\Big)\,,\\ \mathop{\rm tr}\nolimits\big((S^\sigma L)^2\big)&=2\sum_{\ell=0}^{\frac n2-1}\,d_{\ell+1}^2\,p_{\sigma(\ell')}\,p_{\sigma(\ell'+1)}\Big(1-e^{-|q_{\sigma(\ell')}-q_{\sigma(\ell'+1)}|}\Big)^2\\ &+ 2\sum_{\ell=0}^{\frac n2-1}\sum_{\atopn{j=0}{j\neq\ell}}^{\frac n2-1}\,d_{\ell+1}d_{j+1}\,\sqrt{p_{\sigma(\ell')}\,p_{\sigma(\ell'+1)}p_{\sigma(j')}\,p_{\sigma(j'+1)}}\\ &\qquad\times\Big(2e^{-|q_{\sigma(\ell'+1)}-q_{\sigma(j')}|} -e^{-|q_{\sigma(\ell'+1)}-q_{\sigma(j'+1)}|}-e^{-|q_{\sigma(\ell')}-q_{\sigma(j')}|}\Big)\end{aligned} \end{equation} where we set $\ell'=2\ell+1$ and ${j}'=2j+1$ to keep the expressions compact. \subsection{Novikov Hamiltonians} We recall that the Novikov and Camassa--Holm Lax matrices are related by $L^{Nov}=T\,L^{CH}$ and that the solutions of the dual reflection equation \eqref{eq:class-RE} are related dually: $\gamma^{Nov}=\gamma^{CH}\,T^{-1}$. The basic combination $\gamma L$ entering the Hamiltonians for the Novikov peakons therefore yields the same object as for the Camassa--Holm case. Hence Camassa--Holm and Novikov peakons share identical forms of commuting Hamiltonians, a consistent consequence of their sharing the same Poisson structure \eqref{eq:PBNov2} after the renormalization \eqref{eq:RenNov}. \section{Conclusion} Having established the quadratic Poisson structures for three integrable peakon models, in every case based on the Toda molecule $r$-matrix and its partial transposition, we are left with many issues that remain open or have arisen in the course of our approach. Amongst them we should mention the problem of finding compatible linear Poisson structures and their underlying $r$-matrix structures.
Only solved for the Camassa--Holm peakons, this question is particularly subtle in the case of Degasperis--Procesi peakons, as follows from the discussion in Section~3.3. Another question left here for further studies is the extension of integrability properties beyond the exact peakon dynamics, along the lines of the work in \cite{RB} regarding Camassa--Holm peakons, which is (not surprisingly) based on the linear (and canonical) Poisson structure, as yet unidentified in the other cases. The difficulty is to disentangle, whenever only a quadratic structure is available, the peakon potential in the Lax matrix from the dynamical weight function in the $n$-body Poisson brackets (the so-called $G$ function in \cite{dGHH}). Excitingly, the quadratic structures we have found open the path to a quantized version of peakon models, in the form of $ABCD$ algebras, following the lines developed in \cite{MF}. We hope to come back to this point in a future work. Finally, in this same context of extended integrable peakons, the still-open problem of fully understanding the unavoidably dynamical $r$-matrix structure remains a challenge.
\section{Introduction} Quantum correlations are those correlations which are not present in classical systems, and in bipartite quantum systems are associated with the presence of quantum discord~\cite{nielsen-book-02,ollivier-prl-02,modi-rmp-12}. Such correlations in a bipartite mixed state can go beyond quantum entanglement and therefore can be present even if the state is separable~\cite{ferraro-pra-10}. The threshold between classical and quantum correlations was investigated in linear-optical systems by observing the emergence of quantum discord~\cite{karo-pra-13}. Quantum discord was experimentally measured in systems such as NMR, which are described by a deviation density matrix~\cite{pinto-pra-10,maziero-bjp-13,passante-pra-11}. Further, environment-induced sudden transitions in quantum discord dynamics and their preservation were investigated using NMR~\cite{auccaise-prl-11-2,harpreet-discord}. It has been shown that even with very low (or no) entanglement, quantum information processing can still be performed using nonclassical correlations as characterized by the presence of quantum discord~\cite{datta-pra-05,fahmy-pra-08}. However, computing and measuring quantum discord typically involves complicated numerical optimization; furthermore, it has been shown that computing quantum discord is NP-hard~\cite{huang-njp-14}. It is hence of prime interest to find other means, such as witnesses, to detect the presence of quantum correlations~\cite{saitoh-qip-11}. While there have been several experimental implementations of entanglement witnesses~\cite{rahimi-jpamg-06,rahimi-pra-07,filgueiras-qip-11}, there have been fewer proposals to witness nonclassicality. A nonlinear classicality witness was constructed for a class of two-qubit systems~\cite{maziero-ijqi-12}, experimentally implemented using NMR~\cite{auccaise-prl-11,pinto-ptrsa-12}, and estimated in a linear-optics system via statistics from a single measurement~\cite{aguilar-prl-12}.
It is to be noted that, as the state space for classically correlated systems is not convex, a witness for nonclassicality is more complicated to construct than a witness for entanglement and is necessarily nonlinear~\cite{saitoh-pra-08}. In this work we report the experimental detection of nonclassicality through a recently proposed positive map method~\cite{rahimi-pra-10}. The map is able to witness nonclassical correlations going beyond entanglement, in a mixed state of a bipartite quantum system. The method requires much less experimental resources as compared to measurement of discord using full state tomography and therefore is an attractive alternative to demonstrating the nonclassicality of a separable state. The map implementation involves two-qubit gates and single-qubit polarization measurements and can be achieved in a single experimental run using NMR. We perform experiments on a two-qubit separable (non-entangled) state which contains nonclassical correlations. Further, the state was allowed to freely evolve in time under natural NMR decohering channels, and the amount of nonclassicality present was evaluated at each time point by calculating the map value. We compared our results using the positive map witness with those obtained by computing the quantum discord via full state tomography, and obtained a good match. However, beyond a certain time, the map was not able to detect nonclassicality, although the quantum discord measure indicated that nonclassicality was present in the state. This indicates that while the positive map nonclassicality witness is easy to implement experimentally in a single experiment and is a good indicator of nonclassicality in a separable state, it is not able to characterize nonclassicality completely. In our case this is typified by the presence of a small amount of quantum discord when the state has almost decohered or when the amount of nonclassicality present is small.
The material in this paper is organized as follows:~Section~\ref{mapvalue} contains a brief description of the construction of the positive map to detect nonclassicality, followed by details of the experimental NMR implementation in Section~\ref{expt}. The map value dynamics with time is contained in Section~\ref{mv-dyn}, while the comparison of the results of the positive map method with those obtained by measuring quantum discord via full quantum state tomography is described in Section~\ref{qd-dyn}. Section~\ref{concl} contains some concluding remarks. \section{Experimentally detecting nonclassical correlations} \label{fullexptl} \subsection{Constructing the nonclassicality witness map} \label{mapvalue} For pure quantum states of a bipartite quantum system, which are represented by one-dimensional projectors $\vert \psi\rangle\langle \psi\vert$ in a tensor product Hilbert space ${\cal H}_A \otimes {\cal H}_B$, the only type of quantum correlation is entanglement~\cite{guhne-pr-09,oppenheim-prl-02}. However, for mixed states the situation is more complex, and quantum correlations can be present even if the state is separable, {\it i.\,e.,} if it is a classical mixture of product states given by \begin{equation} \rho_{sep}=\sum_i{w_i\rho_i^A\otimes\rho_i^B} \label{sep} \end{equation} where $w_i$ are positive weights and $\rho_i^A,\rho_i^B$ are pure states in Hilbert spaces ${\cal H}_A$ and ${\cal H}_B$ respectively~\cite{peres-prl-96}. A separable state is called a properly classically correlated state (PCC) if it can be written in the form~\cite{horodecki-rmp-09} \begin{equation} \rho_{\rm PCC}=\sum_{i,j}{p_{ij}\vert e_i\rangle^{A}\langle e_i\vert\otimes\vert e_j\rangle^{B}\langle e_j\vert} \label{pccform} \end{equation} where $p_{ij}$ is a joint probability distribution and $\vert e_i\rangle^{A}$ and $\vert e_j\rangle^{B}$ are local orthogonal eigenbases in local spaces ${\cal H}_A$ and ${\cal H}_B$ respectively.
A state that cannot be written in the form given by Equation~(\ref{pccform}) is called a nonclassically correlated (NCC) state. An NCC state can be entangled or separable. The correlations in NCC states over and above those in PCC states are due to the fact that the eigenbases for the subsystems may not be orthogonal, {\it i.\,e.,} the basis vectors are in a superposition~\cite{vedral-found-10}. A typical example of a bipartite two-qubit NCC state is \begin{equation} \sigma=\frac{1}{2}\left[\vert00\rangle\langle 00\vert+\vert1+\rangle\langle1+\vert\right] \label{sigma} \end{equation} with $\vert + \rangle = \frac{1}{\sqrt{2}} \left(\vert 0 \rangle+\vert 1 \rangle\right)$. In this case the state has no product eigenbasis, since the subsystem-B states $\vert 0 \rangle$ and $\vert+\rangle$ are not orthogonal to each other. The state is separable (not entangled) as it can be written in the form given by Equation~(\ref{sep}); however, since it is an NCC state, it has non-trivial quantum correlations and has non-zero quantum discord. Pinning down the nonclassical nature of such a state with minimal experimental effort, and without actually computing quantum discord, is hence desirable. It has been shown that such nonclassicality witnesses can be constructed using a positive map~\cite{rahimi-pra-10}. The map $\mathcal{W}$ over the state space ${\cal H}={\cal H}_A \otimes {\cal H}_B$ takes a state to a real number \begin{equation} \mathcal{W}:{\cal H}\longrightarrow\mathcal{R} \label{map} \end{equation} This map is a nonclassicality witness map, {\it i.\,e.,} it is capable of detecting NCC states in the state space ${\cal H}$ if and only if~\cite{rahimi-pra-10}: \begin{itemize} \item[(a)] For every bipartite state $\rho_{PCC}$ having a product eigenbasis, $\mathcal{W}:\rho_{PCC}\geq0$. \item[(b)] There exists at least one bipartite state $\rho_{NCC}$ (having no product eigenbasis) such that $\mathcal{W}:\rho_{NCC}<0$.
\end{itemize} A specific non-linear nonclassicality witness map, proposed in Ref.~\cite{rahimi-pra-10}, is defined in terms of expectation values of positive Hermitian operators $\hat{A}_1$, $\hat{A}_2,\ldots,\hat{A}_m$: \begin{equation} \mathcal{W} : \rho\rightarrow c-\left(Tr(\rho\hat{A}_1)\right) \left(Tr(\rho\hat{A}_2)\right)\ldots\left(Tr(\rho\hat{A}_m)\right) \end{equation} where $c \ge 0$ is a real number. For the case of two-qubit systems, using the operators $A_1=\vert00\rangle\langle 00\vert$ and $A_2=\vert 1+\rangle\langle 1+\vert$ we obtain a nonclassicality witness map for the state in Equation~(\ref{sigma}) as: \begin{equation} \mathcal{W}_{\sigma}:\rho\rightarrow c-\left(Tr(\rho\vert00\rangle\langle 00\vert)\right)\left(Tr(\rho\vert1+\rangle\langle 1+\vert)\right) \label{sigmamap} \end{equation} The value of the constant $c$ in the above witness map has to be optimized such that for any PCC state $\rho$ having a product eigenbasis, the condition $\mathcal{W}_{\sigma}(\rho) \ge 0$ holds; the optimized value of $c$ turns out to be $c_{\rm opt}=0.182138$. The map given by Equation~(\ref{sigmamap}) does indeed witness the nonclassical nature of the state $\sigma$: the product $\left(Tr(\rho\vert00\rangle\langle 00\vert)\right)\left(Tr(\rho\vert 1+\rangle\langle 1+\vert)\right)$ for $\rho\equiv\sigma$ has the value $0.25$, which exceeds $c_{\rm opt}$ and hence makes the map negative, confirming that the state $\sigma$ is an NCC state~\cite{rahimi-pra-10}. The real number to which a nonclassicality map sends a state is termed its map value (MV); a negative MV signals the nonclassical nature of the state. \subsection{NMR experimental system} \label{expt} We implemented the nonclassicality witness map $\mathcal{W}_\sigma$ on an NMR sample of ${}^{13}$C-enriched chloroform dissolved in acetone-D6; the ${}^{1}$H and ${}^{13}$C nuclear spins were used to encode the two qubits (see Fig.~\ref{molecule} for experimental parameters).
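As a quick numerical cross-check of the witness map for the state $\sigma$ (an illustrative sketch, not part of the original experiment; the operators and the constant $c_{\rm opt}$ are those quoted in the text):

```python
import numpy as np

# Single-qubit basis states and |+>
ket0, ket1 = np.eye(2)
ketp = (ket0 + ket1) / np.sqrt(2)

def proj(v):
    """Projector |v><v| for a state vector v."""
    return np.outer(v, v.conj())

# Two-qubit projectors |00><00| and |1+><1+|
P00 = np.kron(proj(ket0), proj(ket0))
P1p = np.kron(proj(ket1), proj(ketp))

# NCC state sigma = (|00><00| + |1+><1+|)/2
sigma = 0.5 * (P00 + P1p)

c_opt = 0.182138  # optimized constant quoted in the text

def map_value(rho):
    """W_sigma(rho) = c_opt - Tr(rho |00><00|) Tr(rho |1+><1+|)."""
    return c_opt - np.trace(rho @ P00).real * np.trace(rho @ P1p).real

print(map_value(sigma))          # negative: sigma is witnessed as NCC
print(map_value(np.eye(4) / 4))  # positive for the maximally mixed (PCC) state
```

Both traces equal $1/2$ for $\rho\equiv\sigma$, so the map value is $0.182138-0.25=-0.067862$.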
\begin{figure}[t] \includegraphics[angle=0,scale=1.0]{molecule.pdf} \caption{(a) Pictorial representation of ${}^{13}$C labeled chloroform with the two qubits encoded as nuclear spins of ${}^{1}$H and ${}^{13}$C; system parameters including chemical shifts $\nu_i$, scalar coupling strength $J$ (in Hz) and relaxation times T$_{1}$ and T$_{2}$ (in seconds) are tabulated alongside. (b) Thermal equilibrium NMR spectra of ${}^{1}$H (Qubit 1) and ${}^{13}$C (Qubit 2) after a $\frac{\pi}{2}$ readout pulse. (c) NMR spectra of ${}^{1}$H and ${}^{13}$C for the $\sigma$ NCC state. Each transition in the spectra is labeled with the logical state ($\vert0\rangle$ or $\vert 1\rangle$) of the ``passive qubit'' (not undergoing any transition).} \label{molecule} \end{figure} Unitary operations were implemented by specially crafted transverse radio frequency pulses of suitable amplitude, phase and duration. A sequence of spin-selective pulses interspersed with tailored free evolution periods was used to prepare the system in an NCC state as follows: \begin{eqnarray*} &&I_{1z}+I_{2z} \stackrel{(\pi/2)^1_x}{\longrightarrow} -I_{1y}+I_{2z} \stackrel{Sp. Av.}{\longrightarrow} I_{2z} \stackrel{(\pi/2)^2_y}{\longrightarrow} I_{2x} \stackrel{\frac{1}{4J}}{\longrightarrow} \nonumber\\ && \quad\quad\quad\quad \frac{I_{2x}+2I_{1z}I_{2y}}{\sqrt{2}} \stackrel{(\pi/2)^2_x}{\longrightarrow} \frac{I_{2x}+2I_{1z}I_{2z}}{\sqrt{2}} \stackrel{(-\pi/4)^2_y}{\longrightarrow} \nonumber \\ && \quad\quad\quad\quad \quad\quad\quad \quad \frac{\left(I_{2z}+I_{2x}+2I_{1z}I_{2z}-2I_{1z}I_{2x}\right)}{2} \end{eqnarray*} \begin{figure}[h] \includegraphics[angle=0,scale=1]{ckt+seq.pdf} \caption{(a) Quantum circuit to create and detect an NCC state. (b) NMR pulse sequence to create the NCC state and then detect it using controlled-Hadamard and CNOT gates. Unfilled rectangles depict $\frac{\pi}{2}$ pulses, grey-shaded rectangles depict $\pi$ pulses and filled rectangles depict $\frac{\pi}{4}$ pulses, respectively.
Phases are written above each pulse, with a bar over a phase indicating a negative phase. The evolution period was set to $\tau_{12}=\frac{1}{4J}$ (refer to the main text for details of the delay $\tau$).} \label{ckt} \end{figure} The quantum circuit to implement the nonclassicality witness map is shown in Fig.~\ref{ckt}(a). The first module represents NCC state preparation using the pulses as already described. The circuit to capture nonclassicality of the prepared state consists of a controlled-Hadamard (CH) gate, followed by measurement on both qubits, a controlled-NOT (CNOT) gate and finally detection on `Qubit 2'. The CH gate is analogous to a CNOT gate, with a Hadamard gate being implemented on the target qubit if the control qubit is in the state $\vert 1 \rangle$ and a `no-operation' if the control qubit is in the state $\vert 0 \rangle$. The NMR pulse sequence corresponding to the quantum circuit is depicted in Fig.~\ref{ckt}(b). The set of pulses grouped under the label `State prep.' converts the thermal equilibrium state to the desired NCC state. A dephasing $z$-gradient is applied on the gradient channel to kill undesired coherences. After a delay $\tau$ followed by the pulse sequence to implement the CH gate, the magnetizations of both qubits were measured with $\frac{\pi}{2}$ readout pulses (not shown in the figure). In the last part of the detection circuit a CNOT gate is applied, followed by a magnetization measurement of `Qubit 2'; the scalar coupling time interval was set to $\tau_{12}=\frac{1}{4J}$ where $J$ is the strength of the scalar coupling between the qubits. Refocusing pulses were used during all $J$-evolution periods to compensate for unwanted chemical shift evolution during the selective pulses. State fidelity was computed using the Uhlmann-Jozsa measure~\cite{uhlmann-rpmp-76,jozsa-jmo-94}, and the NCC state was prepared with a fidelity of $0.97 \pm 0.02$.
To detect the nonclassicality in the prepared NCC state via the map $\mathcal{W}_\sigma$, the expectation values of the operators $\vert00\rangle\langle 00\vert$ and $\vert 1+\rangle\langle 1+\vert$ are required. Reworking the map brings it to the following form~\cite{rahimi-pra-10} \begin{eqnarray*} \mathcal{W}_{\sigma}:\rho\rightarrow c_{\rm opt}-\frac{1}{16}&&\left(1+\langle Z_1\rangle+\langle Z_2\rangle+\langle Z_2'\rangle\right)\times \nonumber \\ &&\left(1-\langle Z_1\rangle+\langle Z_2\rangle-\langle Z_2'\rangle\right) \end{eqnarray*} where $\langle Z_1\rangle$ and $\langle Z_2\rangle$ are the polarizations of `Qubit 1' and `Qubit 2' after a CH gate on the input state $\rho$, while $\langle Z_2'\rangle$ is the polarization of `Qubit 2' after a CNOT gate. The theoretically expected normalized values of $\langle Z_1\rangle$, $\langle Z_2\rangle$ and $\langle Z_2'\rangle$ for the state $\rho\equiv\sigma$ are $0$, $1$ and $0$ respectively. The corresponding map value (MV) is $-0.067862<0$; as desired, the map witnesses the presence of nonclassicality. The experimentally computed MV for the prepared NCC state turns out to be $-0.0406 \pm 0.0056$, confirming that the map is able to witness the nonclassicality present in the state. \subsection{Map Value Dynamics} \label{mv-dyn} The prepared NCC state was allowed to evolve freely in time and the MV calculated at each time point, in order to characterize the decoherence dynamics of the nonclassicality witness map. A negative MV is expected as long as the state remains NCC. We measured the MV at time points which were integral multiples of $\frac{2}{J}$, {\it i.\,e.,} $\frac{2n}{J}$ (with $n$ = 0, 1, 3, 5, 7, 9, 11, 13, 15, 20, 25, 30, 35, 40, 45 and 50), in order to avoid experimental errors due to $J$-evolution. The results of MV dynamics as a function of time are shown in Fig.~\ref{MV}(a).
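The polarization form of the map can be cross-checked against the operator form; a minimal sketch assuming the ideal theoretical polarizations quoted in the text:

```python
c_opt = 0.182138  # optimized constant quoted in the text

def map_value_from_polarizations(z1, z2, z2p):
    """W_sigma = c_opt - (1/16)(1 + z1 + z2 + z2')(1 - z1 + z2 - z2')."""
    return c_opt - (1.0 + z1 + z2 + z2p) * (1.0 - z1 + z2 - z2p) / 16.0

# Ideal NCC state: <Z1> = 0, <Z2> = 1, <Z2'> = 0 gives MV = c_opt - 4/16 < 0
print(map_value_from_polarizations(0.0, 1.0, 0.0))

# Fully depolarized qubits give MV = c_opt - 1/16 > 0: no nonclassicality
print(map_value_from_polarizations(0.0, 0.0, 0.0))
```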
The standard NMR decoherence mechanisms, characterized by the spin-spin relaxation time T$_2$ and the spin-lattice relaxation time T$_1$, cause dephasing among the energy eigenstates and energy exchange between the spins and their environment, respectively. As seen from Fig.~\ref{MV}(a), the MV remains negative (indicating the state is NCC) for up to 120 ms, which is approximately the ${}^{1}$H transverse relaxation time. The MV was also calculated directly from the state reconstructed via full state tomography~\cite{leskowitz-pra-04} at each time point; the results, shown in Fig.~\ref{MV}(b), are in good agreement with the direct experimental MV measurements. The state fidelity was also computed at the different time points and the results are shown in Fig.~\ref{fid}. The red squares represent fidelity w.r.t. the theoretical NCC state. \begin{figure}[h] \includegraphics[angle=0,scale=1]{MV.pdf} \caption{(a) Experimental map value (in $\times10^{-2}$ units) plotted as a function of time. (b) Map value (in $\times10^{-2}$ units) directly calculated from the tomographically reconstructed state at each time point.} \label{MV} \end{figure} \begin{figure}[h] \includegraphics[angle=0,scale=1]{Fidlt.pdf} \caption{(Color online) Time evolution of state fidelity. The red squares represent fidelity of the experimentally prepared NCC state w.r.t. the theoretical NCC state.} \label{fid} \end{figure} \subsection{Quantum Discord Dynamics} \label{qd-dyn} We also compared the map value evaluation of nonclassicality with the standard measure of nonclassicality, namely quantum discord~\cite{ollivier-prl-02,luo-pra-08}. The state was reconstructed by performing full quantum state tomography and the quantum discord measure was computed from the experimental data.
Quantum mutual information can be quantified by the equations: \begin{eqnarray} I(\rho_{AB})&=&S(\rho_A)+S(\rho_B)-S(\rho_{AB}) \nonumber \\ J_A(\rho_{AB})&=&S(\rho_B)-S(\rho_B\vert \rho_A) \label{mutual} \end{eqnarray} where $S(\rho_B\vert\rho_A)$ is the conditional von Neumann entropy of subsystem $B$ when $A$ has already been measured. Quantum discord is defined as the minimum difference between the two formulations of mutual information in Equation~(\ref{mutual}): \begin{equation} D_A(\rho_{AB})=S(\rho_A)-S(\rho_{AB})+S(\rho_B\vert\lbrace\Pi^A_j\rbrace) \label{discord} \end{equation} Quantum discord hence depends on the projectors $\lbrace\Pi^A_j\rbrace$. The state of the system, after the outcome corresponding to the projector $\Pi^A_j$ has been detected, is \begin{equation} \tilde{\rho}_{AB}\vert \Pi^A_j=\frac{\left(\Pi^A_j\otimes I_B\right)\rho_{AB}\left(\Pi^A_j\otimes I_B\right)}{p_j} \label{B} \end{equation} with probability $p_j=Tr\left((\Pi^A_j\otimes I_B)\rho_{AB}(\Pi^A_j\otimes I_B)\right)$; $I_B$ is the identity operator on subsystem B. The state of subsystem B after this measurement is \begin{equation} \rho_B\vert \Pi^A_j=Tr_A\left(\tilde{\rho}_{AB}\vert \Pi^A_j\right) \label{A} \end{equation} $S\left(\rho_B\vert\Pi^A_j\right)$ is the missing information about B given the outcome $\Pi^A_j$. The expression \begin{equation} S(\rho_B\vert\lbrace\Pi^A_j\rbrace)=\sum_{j}{p_jS \left(\rho_B\vert\Pi^A_j\right)} \label{cond-entropy} \end{equation} is the conditional entropy appearing in Eqn.~(\ref{discord}). In order to capture the true quantumness of the correlation one needs to perform an optimization over all sets of von Neumann type measurements represented by the projectors $\lbrace\Pi^A_j\rbrace$.
We define two orthogonal vectors (for spin-half quantum subsystems), characterized by two real parameters $\theta$ and $\phi$, on the Bloch sphere as: \begin{eqnarray} &&\cos{\theta}\vert 0\rangle+e^{\iota\phi}\sin{\theta}\vert 1\rangle \nonumber \\ &&e^{-\iota\phi}\sin{\theta}\vert 0\rangle-\cos{\theta}\vert 1\rangle \end{eqnarray} These vectors can be used to construct the projectors $\Pi^{A,B}_{1,2}$, which are then used to find the state of $B$ after an arbitrary measurement is made on subsystem $A$. The definition of conditional entropy (Equation~(\ref{cond-entropy})) can be used to obtain an expression which is parameterized by $\theta$ and $\phi$ for a given state $\rho_{AB}$. This expression is finally minimized by varying $\theta$ and $\phi$ and the result fed back into Equation~(\ref{discord}), which yields a measure of quantum discord independent of the basis chosen for the measurement of the subsystem. To compare the detection via the positive map method with the standard quantum discord measure, we let the state evolve for a time $\tau$, reconstructed the experimentally prepared state via full quantum state tomography, and calculated the quantum discord at all time points where the MV was determined experimentally (the results are shown in Fig.~\ref{QD}). At $\tau$ = 0 s, a non-zero QD confirms that the state is NCC and verifies the results given by the MV. As the state evolves with time, the quantum discord parameter starts decreasing rapidly, in accordance with the increasing MV. Beyond 120 ms, while the MV becomes positive and hence fails to detect nonclassicality, the discord parameter remains non-zero, indicating the presence of some amount of nonclassicality (although by this time the state fidelity has decreased to 0.7). However, the value of quantum discord is very close to zero and in fact cannot be distinguished from contributions due to noise.
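The minimization over $\theta$ and $\phi$ described above can be sketched numerically. The following brute-force grid search is an illustrative sketch, not the code used for the experimental analysis; note that for the state $\sigma$ the side on which the measurement is made matters, so the projective measurement is taken here on subsystem B, whose conditional states $\vert 0\rangle$ and $\vert +\rangle$ are non-orthogonal:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def ptrace(rho, keep):
    """Partial trace of a two-qubit density matrix; keep=0 -> rho_A, keep=1 -> rho_B."""
    r = rho.reshape(2, 2, 2, 2)
    return np.einsum('ijik->jk', r) if keep == 1 else np.einsum('ijkj->ik', r)

proj = lambda v: np.outer(v, v.conj())
ket0, ket1 = np.eye(2)
ketp = (ket0 + ket1) / np.sqrt(2)

# NCC state sigma = (|00><00| + |1+><1+|)/2
sigma = 0.5 * (np.kron(proj(ket0), proj(ket0)) + np.kron(proj(ket1), proj(ketp)))

# Brute-force minimization of the conditional entropy over projective
# measurements parameterized by (theta, phi) on subsystem B
best_ce = np.inf
for theta in np.linspace(0.0, np.pi, 73):
    for phi in np.linspace(0.0, 2.0 * np.pi, 73):
        b1 = np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])
        b2 = np.array([np.exp(-1j * phi) * np.sin(theta), -np.cos(theta)])
        ce = 0.0
        for b in (b1, b2):
            M = np.kron(np.eye(2), proj(b))
            post = M @ sigma @ M
            p = np.trace(post).real
            if p > 1e-9:
                ce += p * vn_entropy(ptrace(post, 0) / p)
        best_ce = min(best_ce, ce)

# D_B = S(rho_B) - S(rho_AB) + min conditional entropy
discord = vn_entropy(ptrace(sigma, 1)) - vn_entropy(sigma) + best_ce
print(discord)  # non-zero: sigma carries nonclassical correlations
```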
One can hence conclude that the positive map suffices to detect nonclassicality when decoherence processes have not set in and while the fidelity of the prepared state is good. Once the state has decohered, however, a measure such as quantum discord has to be used to verify whether the degraded state retains some amount of nonclassical correlations. \begin{figure}[h] \includegraphics[angle=0,scale=1]{QD.pdf} \caption{(Color online) Time evolution of quantum discord (characterizing total quantum correlations present in the state) for the NCC state.} \label{QD} \end{figure} \section{Conclusions} \label{concl} In this work we experimentally detected nonclassical correlations in a separable two-qubit quantum state, using a nonlinear positive map as a nonclassicality witness. The witness is able to detect nonclassicality in a single-shot experiment, and its advantage lies in using far fewer experimental resources as compared to quantifying nonclassicality by measuring discord via full quantum state tomography. It will be interesting to construct and utilize this map in higher-dimensional quantum systems and for more than two qubits, where it is more difficult to distinguish between classical and quantum correlations. It has been posited that quantum correlations captured by quantum discord, which go beyond quantum entanglement and can thus be present even in separable states, are responsible for achieving computational speedup in quantum algorithms. It is hence important, from the point of view of quantum information processing, to confirm the presence of such correlations in a quantum state without expending too much experimental effort; our work is a step forward in this direction. \begin{acknowledgments} All experiments were performed on a Bruker Avance-III 600 MHz FT-NMR spectrometer at the NMR Research Facility at IISER Mohali. Arvind acknowledges funding from DST India under Grant No.~EMR/2014/000297.
KD acknowledges funding from DST India under Grant No.~EMR/2015/000556. \end{acknowledgments}
\section{Introduction} The strong magnetic fields measured in many spiral galaxies, $B\sim 2\times 10^{-6}$ G \cite{B}, are conjectured to be produced primordially; proposed mechanisms include fluctuations during an inflationary universe \cite{Turner} or at the GUT scale \cite{EO}, and plasma turbulence during the electroweak transition \cite{Hogan,LMcL,olinto1} or in the quark-gluon hadronization transition \cite{qcd,olinto}. The production and later diffusion of magnetic fields depend crucially on the electrical conductivity, $\sigma_{el}$, of the matter in the universe; typically, over the age of the universe, $t$, fields on length scales smaller than $L\sim (t/4\pi\sigma_{el})^{1/2}$ are damped. In this paper we calculate $\sigma_{el}$ in detail below and above the electroweak transition scale. The electrical conductivity was estimated in \cite{Turner} in the relaxation time approximation as $\sigma_{el}\sim n\alpha\tau_{el}/m$ with $m\sim T$ and relaxation time $\tau_{el}\sim 1/(\alpha^2T)$, where $\alpha = e^2/4\pi$. In Refs. \cite{HK} and \cite{Enqvist} the relaxation time was corrected with the Coulomb logarithm. A deeper understanding of the screening properties in QED and QCD plasmas in recent years has made it possible to calculate a number of transport coefficients including viscosities, diffusion coefficients, momentum stopping times, etc., exactly in the weak coupling limit \cite{BP,tran,eta}; see also \cite{GT}. However, the calculation of processes that are sensitive to very singular forward scatterings remains problematic. For example, the calculated color diffusion and conductivity \cite{color}, even with dynamical screening included, remain infrared divergent due to color exchange in forward scatterings.
Also the quark and gluon damping rates at non-zero momenta calculated by resumming ring diagrams exhibit infrared divergences \cite{damp} whose resolution requires more careful analysis including higher order diagrams as, e.g., in the Bloch-Nordsieck calculation of Ref. \cite{Blaizot}. Charge exchange through $W^\pm$ bosons, a process similar to gluon color exchange in QCD, is important in forward scatterings at temperatures above the $W$ mass, $M_W$. While, as we show, such processes lead to negligible transport by left-handed charged leptons, they do not prevent transport by right-handed charged leptons. As a consequence, electrical conduction at temperatures above the electroweak transition is large, and does not inhibit generation of magnetic fields \cite{Enqvist}; the observed magnetic fields in galaxies could then be generated earlier than the electroweak transition \cite{Hogan,LMcL} and have survived until today. More generally, we find that the electrical conductivity is sufficiently large that it does not lead to destruction of large-scale magnetic flux over timescales of the expansion of the universe. In Sec. II we calculate the electrical conductivity for $T\ll M_W$. In this regime the dominant momentum transfer processes in electrical transport are electrodynamic, and one can ignore weak interactions. We turn then in Sec. III to the very early universe, $T\gg T_c$, where the $W^\pm$ are effectively massless and their effects on electrical conduction must be taken into account. \section{Electrical conductivities in high temperature QED} We first calculate the electrical conductivity in the electroweak symmetry-broken phase at temperatures well below the electroweak boson mass scale, $T\ll M_W$. As we argue below, charged leptons $\ell=e^-,\mu^-,\tau^-$ and anti-leptons $\bar{\ell}=e^+,\mu^+,\tau^+$ dominate the currents in the regime in which $m_{\ell}\ll T$.
In the broken-symmetry phase weak interactions between charged particles, which are generally smaller by a factor $\sim(T/M_W)^4$ compared with photon-exchange processes, can be ignored. The primary effect of strong interactions is to limit the drift of strongly interacting particles, and we need consider only electromagnetic interactions between charged leptons and quarks. Transport processes are most simply described by the Boltzmann kinetic equation for the distribution functions, $n_i({\bf p},{\bf r},t)$, of particle species $i$, of charge $e$, \begin{eqnarray} \left(\frac{\partial}{\partial t} + {\bf v}_1\cdot \nabla_{\bf r}+ e{\bf E}\cdot\nabla_1 \right) n_1 = -2\pi \nu_2\sum_{234}&|M_{12\to 34}|^2 [n_1n_2(1\pm n_3)(1\pm n_4)-n_3n_4(1\pm n_1)(1\pm n_2)] \nonumber \\ &\times\delta_{{\bf p}_1+{\bf p}_2, {\bf p}_3+{\bf p}_4} \delta(\epsilon_1 +\epsilon_2 -\epsilon_3 -\epsilon_4 ), \label{BE} \end{eqnarray} where ${\bf E}$ is the external electric field driving the charges, and the right side describes the two-particle collisions $12\leftrightarrow 34$ slowing them down. The $\pm$ signs refer to bosons and fermions. The sums are over momenta ${\bf p}_2$, ${\bf p}_3$, and ${\bf p}_4$, and the statistical factor $\nu_2$ accounts for the number of leptons and spin projections that scatter with particle 1. Massless lepton-lepton or antilepton-antilepton scattering conserves the electrical current, and affects the conductivity only in higher order. In lowest order we need to take into account only lepton-antilepton scattering. The square of the matrix element for scattering of a charged lepton from initial momentum ${\bf p}_1$ to final momentum ${\bf p}_3$ with an antilepton from ${\bf p}_2$ to ${\bf p}_4$ is \begin{eqnarray} |M_{12\to 34}|^2=2e^4(s^2+u^2)/t^2/ (16\epsilon_1\epsilon_2\epsilon_3\epsilon_4), \label{matrix} \end{eqnarray} where $s,t,u$ are the usual Mandelstam variables formed from the incoming and outgoing momenta. 
To solve the kinetic equation for a weak driving field, we write $n_i = n_i^0 + \Phi_i$, where $n^0_i=(\exp(\epsilon_i/T)\pm1)^{-1}$ is the global equilibrium distribution (chemical potentials are taken to vanish in the early universe), and linearize Eq. (\ref{BE}) in the deviations $\Phi_i$ of the distribution functions from equilibrium (see Refs. \cite{BPbook}, \cite{BHP} and \cite{eta} for details); thus Eq. (\ref{BE}) reduces to \begin{eqnarray} &e{\bf E}&\cdot {\bf v}_1 \frac{\partial n_1}{\partial\epsilon_1} = \, -2\pi \nu_2\sum_{234} |M_{12\to 34}|^2 n^0_1n^0_2(1\pm n^0_3)(1\pm n^0_4) \left(\Phi_1+\Phi_2-\Phi_3-\Phi_4\right) \delta_{{\bf p}_1+{\bf p}_2, {\bf p}_3+{\bf p}_4} \delta(\epsilon_1 +\epsilon_2 -\epsilon_3 -\epsilon_4 ). \label{BE2} \end{eqnarray} We set the stage by considering the simple situation of conduction by massless charged leptons (component 1) and antileptons (component 2) limited by their scattering together; later we include effects of quarks. The electric field drives the charged leptons and antileptons into flow with steady but opposite fluid velocities ${\bf u}_1=-{\bf u}_2$ in the center-of-mass system (see Fig. 1). Assuming that collisions keep the drifting components in local thermodynamic equilibrium, we approximate the quasiparticle distribution functions by those of relativistic particles in equilibrium with a local flow velocity ${\bf u}$, \begin{eqnarray} n_i({\bf p}_i) =\frac{1}{\exp[(\epsilon_i-{\bf u}_i\cdot{\bf p}_i)/T]\mp 1}; \label{n} \end{eqnarray} the deviation $\Phi_i$ is thus \begin{eqnarray} \Phi_i = -{\bf u}_i\cdot {\bf p}_i \frac{\partial n_i}{\partial\epsilon_i}. 
\label{Phi} \end{eqnarray} The ansatz (\ref{n}) for the distribution function is an excellent approximation; a more elaborate variational calculation which we have carried out (see \cite{eta} for analogous calculations for the viscosity) gives almost the same distribution function, and a corresponding electrical conductivity only 1.5\% smaller than our result, Eq. (\ref{ec}) below. Equation (\ref{Phi}) gives the total electric current from lepton-antilepton pairs with $N_\ell$ effectively massless species present at temperature $T$: \begin{eqnarray} {\bf j}_{\ell\bar{\ell}} = e n_{\rm ch}{\bf u}_1 = e{\bf u}_1 \frac{3\zeta(3)}{\pi^2}N_\ell T^3, \end{eqnarray} where $n_{ch}$ is the number density of electrically charged leptons plus antileptons. Note that since photon exchange does not transfer charge (as do $W^\pm$), particle 3 in Eq. (\ref{BE}) has the same charge as particle 1, and particle 4 the same as 2, and ${\bf u}_3={\bf u}_1$ and ${\bf u}_4={\bf u}_2=-{\bf u}_1$. To calculate $\sigma_{el}$ we multiply Eq. (\ref{BE2}) by $\nu_1{\bf v}_1$ and sum over ${\bf p}_1$, to find an equation of the form: \begin{eqnarray} -e{\bf E}N_\ell T^2/18 = -\xi {\bf u_1} \label{Xi} \end{eqnarray} where $\xi$ results from the right side of (\ref{BE2}). Since QED interactions are dominated by very singular forward scattering arising from massless photon exchange, the sum on the right side in $\xi$ diverges unless we include Debye screening of longitudinal interactions and dynamical screening of transverse interactions due to Landau damping. This is done by including (in the Coulomb gauge) the Coulomb self-energy, $\Pi_L$, and the transverse photon self-energy, $\Pi_T$, in the longitudinal and transverse electromagnetic propagators. 
The inverse of the longitudinal propagator becomes $q^2+\Pi_L(\omega,q)$, and the inverse of the transverse propagator, $t=\omega^2-q^2$, becomes $\omega^2-q^2-\Pi_T(\omega,q)$, where $\omega$ and $q$ are the energy and momentum transferred by the electromagnetic field in the scattering. The quantity $\xi$ can be calculated to leading logarithmic order in the coupling constant by expanding for small momentum transfers (see Ref. \cite{eta} for details). Small momentum transfer processes are screened by $\Pi_L\sim q_D^2=4\pi\alpha N_l T^2/3$, and $\Pi_T\sim i\pi q_D^2\omega/ 4q$. (Large momentum transfers, $q\raisebox{-.5ex}{$\stackrel{>}{\sim}$} \langle p\rangle \sim 3T$, are cut off by the distribution functions.) The resulting integrals, $\int q^2d^2q/|q^2+\Pi_{L,T}|^2$, give characteristic logarithms, $\ln(T/q_D)$, and we find \begin{eqnarray} \xi = \frac{2\ln2}{9\pi}N_{\ell}^2\alpha^2\ln(C/\alpha N_l) T^4. \label{Xi2} \end{eqnarray} The constant $C\sim 1$ in the logarithm gives the next-to-leading order terms (see \cite{eta} for the calculation of second-order contributions to the viscosity). The electrical conductivity for charged leptons is thus \cite{tran,BHP} \begin{eqnarray} \sigma_{el}^{(\ell\bar{\ell})} &\equiv& j_{\ell\bar{\ell}}/E =\, \frac{3\zeta(3)}{\ln2} \frac{T}{\alpha\ln(1/\alpha N_l)}\,, \quad m_e\ll T\ll T_{QGP}. \label{ec} \end{eqnarray} Note that the number of lepton species drops out except in the logarithm. The above calculation taking only electrons as massless leptons ($N_l=1$) gives a first approximation to the electrical conductivity in the temperature range $m_e\ll T\;\ll\; T_{QGP}$, below the hadronization transition, $T_{QGP}\sim 150$ MeV, at which hadronic matter undergoes a transition to a quark-gluon plasma. Thermal pions and muons in fact also reduce the conductivity by scattering electrons, but they do not become significant current carriers because their masses are close to $T_{QGP}$.
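A rough numerical illustration of Eq. (\ref{ec}) in natural units (a sketch at leading-logarithmic accuracy; the constant under the logarithm is set to one, consistent with the text):

```python
import math

alpha = 1.0 / 137.036       # fine structure constant
zeta3 = 1.2020569031595943  # Riemann zeta(3)

def sigma_el_leptons(T, N_l=1):
    """Leading-log lepton conductivity: sigma ~ (3 zeta(3)/ln 2) T/(alpha ln(1/(alpha N_l)))."""
    return (3.0 * zeta3 / math.log(2.0)) * T / (alpha * math.log(1.0 / (alpha * N_l)))

# In units of the temperature: sigma_el/T is O(10^2), i.e. the early-universe
# plasma is an excellent electrical conductor
print(sigma_el_leptons(1.0))
```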
For temperatures $T > T_{QGP}$, the matter consists of leptons and deconfined quarks. The quarks themselves contribute very little to the current, since strong interactions limit their drift velocity. The quark drift velocity can be estimated by replacing $\alpha$ by the strong interaction fine structure constant, $\alpha_s$, in $\xi$, Eqs. (\ref{Xi}) and (\ref{Xi2}), which yields ${\bf u}_q\sim {\bf u}_\ell (\alpha^2\ln\alpha^{-1})/(\alpha_s^2\ln\alpha_s^{-1})$. Even though quarks do not contribute significantly to the currents, they are effective scatterers, and thus modify the conductivity (an effect ignored in the recent numerical analysis of Ref. \cite{Ahonen}). To calculate the quark contribution to the lepton conductivity, we note that the collision term between leptons (1,3) and quarks (2,4) includes the following additional factors compared with lepton-lepton scattering: a factor 1/2, because the quark velocity, ${\bf u}_2$, is essentially zero, a factor 3 from colors, and a factor 2 because the charged leptons collide on both $q$ and $\bar{q}$; finally we must sum over flavors with a weight $Q_q^2$, where $Q_qe$ is the charge of quark flavor $q=u,d,s,c,b,t$. We must also divide by the number of leptons, $N_l$, to take into account the number of quark scatterings relative to lepton scatterings. Including $\ell q$ and $\ell \bar q$ collisions in the collision term, we find the total electrical conductivity of the early universe \cite{tran,BHP}: \begin{eqnarray} \sigma_{el} = \frac{N_l}{N_l+ 3 \sum_q^{N_q} Q^2_q}\sigma_{el}^{\ell\bar{\ell}} =\frac{N_l}{N_l+ 3 \sum_q^{N_q} Q^2_q} \frac{3\zeta(3)}{\ln2} \frac{T}{\alpha\ln(1/\alpha N_l)}, \quad T_{QGP}\ll\; T\;\ll\; M_W. \label{eq} \end{eqnarray} The charged lepton and quark numbers $N_l$ and $N_q$ count only the species present in the plasma at a given temperature, i.e., those with masses $m_i\raisebox{-.5ex}{$\stackrel{<}{\sim}$}T$. Figure 2 illustrates the conductivities (\ref{ec},\ref{eq}).
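The quark-scattering suppression factor in Eq. (\ref{eq}) depends on which species are thermally present; a sketch evaluating it for the fully populated plasma of three charged leptons and six quark flavors (an illustrative choice of species content):

```python
import math

alpha = 1.0 / 137.036
zeta3 = 1.2020569031595943

def suppression_factor(N_l, quark_charges):
    """Prefactor N_l / (N_l + 3 sum_q Q_q^2) of the total conductivity."""
    return N_l / (N_l + 3.0 * sum(Q * Q for Q in quark_charges))

def sigma_el_total(T, N_l, quark_charges):
    lepton_part = (3.0 * zeta3 / math.log(2.0)) * T / (alpha * math.log(1.0 / (alpha * N_l)))
    return suppression_factor(N_l, quark_charges) * lepton_part

# u, c, t carry Q = 2/3; d, s, b carry Q = -1/3
charges = [2.0 / 3.0, -1.0 / 3.0] * 3

print(suppression_factor(3, charges))   # 3/8 when all six flavors are present
print(sigma_el_total(1.0, 3, charges))  # still O(10^2) T: conduction stays large
```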
For simplicity this figure assumes that the quarks and leptons make their appearance abruptly when $T$ rises above $m_i$; in reality they are gradually produced thermally as $T$ approaches their masses. Since a range of particle energies in the neighborhood of the temperature is included, possible resonance effects in scatterings are smeared out. We will not attempt to calculate the electrical conductivity in the range $M_W\raisebox{-.5ex}{$\stackrel{<}{\sim}$} T \raisebox{-.5ex}{$\stackrel{<}{\sim}$} T_c$ below the critical temperature. Recent lattice calculations \cite{Kajantie} predict a relatively sharp transition at a temperature $T_c\sim 100$ GeV from the symmetry-broken phase at low temperatures, $T\ll T_c$, to the symmetric phase at $T\gg T_c$. The transition is sensitive, however, to the unknown Higgs mass, with a first order transition predicted only for Higgs masses below $\sim 90$ GeV. The calculations of the conductivity are technically more involved when masses are comparable to the temperature. Furthermore, one must begin to include contributions of the $W^\pm$ to currents and scatterings, as the thermal suppression of their density decreases near the transition.
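The thermal suppression of the $W^\pm$ density mentioned above can be illustrated with the nonrelativistic equilibrium density (a rough sketch valid for $T\ll M_W$; degeneracy factors and the overall normalization are omitted, and the quoted $W$ mass is an assumed illustrative value):

```python
import math

M_W = 80.4  # W boson mass in GeV (assumed value for illustration)

def w_density_suppression(T):
    """Relative equilibrium density n_W/T^3 ~ (M_W/(2 pi T))^(3/2) exp(-M_W/T)."""
    return (M_W / (2.0 * math.pi * T)) ** 1.5 * math.exp(-M_W / T)

for T in (10.0, 30.0, 80.0):  # temperatures in GeV
    print(T, w_density_suppression(T))
```

The suppression disappears rapidly as $T$ approaches $M_W$, which is why the $W^\pm$ must be included in the currents and scatterings near the transition.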
\section{The symmetry-restored phase} To calculate the conductivity well above the electroweak transition, $T\gg T_c$, where the electroweak symmetries are fully restored, we describe the electroweak interactions by the standard model Weinberg-Salam Lagrangian with minimal Higgs: \begin{eqnarray} {\cal L}_{MSM} &=& -\frac{1}{4}{\bf W}_{\mu\nu}\cdot {\bf W}^{\mu\nu} -\frac{1}{4}B_{\mu\nu}B^{\mu\nu} + \bar{L}\gamma^\mu\left(i\partial_\mu- \frac{g}{2}\mbox{\boldmath $\tau$} \cdot{\bf W}_\mu -\frac{g'}{2}YB_\mu\right)L + \bar{R}\gamma^\mu\left(i\partial_\mu-\frac{g'}{2}YB_\mu\right)R \nonumber\\ &+& \left| \left(i\partial_\mu- \frac{g}{2}\mbox{\boldmath$\tau$}\cdot{\bf W}_\mu -\frac{g'}{2}YB_\mu\right)\phi \right|^2 - \mu^2\phi^\dagger\phi -\lambda(\phi^\dagger\phi)^2 - G_1\bar{L}\phi R+iG_2\bar{L}\tau_2\phi^*R + h.c. \, . \label{EW} \end{eqnarray} Here $L$ denotes left-handed doublet and $R$ right-handed singlet leptons or quarks, $e=g\sin\theta_W=g'\cos\theta_W$ and electrical charge $Q=T_3+Y/2$. The last terms provide masses for leptons and quarks in the low temperature phase, $T<T_c$, where the Higgs field has a non-vanishing vacuum expectation value $\langle\phi\rangle=(0,v)/\sqrt{2}$; at zero temperature $v^2=-\mu^2/\lambda=1/(G_F\sqrt{2})=4M_W^2/g^2=(246{\rm GeV})^2$. At temperatures below $T_c$, where $\mu^2<0$, the Higgs mechanism naturally selects the representation $W^\pm$, $Z^0$, and $\gamma$ of the four intermediate vector bosons. At temperatures above the transition -- where $\langle\phi\rangle$ vanishes for a sharp transition, or tends to zero for a crossover -- we consider driving the system with external vector potentials $A^a=B,W^\pm,W^3$, which give rise to corresponding ``electric'' fields ${\bf E}_a$, where \begin{eqnarray} E_i^a &\equiv& F_{i0}^a = \partial_i A_0^a - \partial_0 A_i^a, \quad A^a=B \label{Bdef} \\ &\equiv& F_{i0}^a = \partial_i A_0^a - \partial_0 A_i^a -g \epsilon_{abc}A_i^bA_0^c, \quad A^a=W^1,W^2,W^3. 
\label{Wdef} \end{eqnarray} One can equivalently drive the system with the electromagnetic and weak fields derived from $A$, $Z^0$, and $W^\pm$, as when $T\ll T_c$, or any other rotated combination of these. We consider here only the weak field limit and ignore the nonlinear driving terms in Eq. (\ref{Wdef}). The self-couplings between gauge bosons are important, however, in the scattering processes in the plasma determining the conductivity, as we discuss below.\footnote{Pure SU(2) gauge fields undergo a confining first order phase transition \cite{SU2} at a critical temperature, $T_c$, where the fields are intrinsically strongly interacting. In an academic universe without Higgs fields the electroweak phase transition is non-existent. If, in this case, we run the temperature from far above $M_W$ to low temperatures the electroweak running coupling constants diverge at a very small temperature $\sim 10^{-23}$GeV, signalling that the interactions of the fields have become strong, and that one is in the neighborhood of the SU(2) phase transition. For our purposes we are therefore safe in ignoring such non-linear effects and the SU(2) phase transition. However, the nature of electrical conduction in the confined state by charged ``mesons" formed, e.g., by $e^+\nu_e$ or $u\bar d$, remains an interesting problem in principle. We thank Eduardo Fradkin for calling this issue to our attention.} The electroweak fields $A^b$ act on the matter to generate currents $J_a$ of the various particles present in the plasma, such as left and right-handed leptons and their antiparticles, and quarks, vector bosons, and Higgs bosons. The Higgs and vector boson contributions are, as we shall see, negligible. Therefore the significant terms in the currents are \begin{eqnarray} J_B^\mu &=& \frac{g'}{2} (\bar{L}\gamma^\mu Y L +\bar{R}\gamma^\mu Y R) \\ J_{W^i}^\mu &=& \frac{g}{2} \bar{L}\gamma^\mu \tau_iL. 
\label{J} \end{eqnarray} We define the conductivity tensor $\sigma_{ab}$ in general by \begin{eqnarray} {\bf J}_a = \sigma_{ab} {\bf E}^b. \label{sdef} \end{eqnarray} Equation (\ref{sdef}) with the equations of motion in a gauge with $\partial^\mu A_\mu^a = 0$ yields the weak field flux diffusion equation for the transverse component of the fields, as in QED, \begin{eqnarray} (\partial^2_t-\nabla^2){\bf A}_a = \sigma_{ab}\partial_t{\bf A}_b , \end{eqnarray} describing the decay of weak fields in terms of the conductivity. The electroweak $U(1)\times SU(2)$ symmetry implies that the conductivity tensor, $\sigma_{ab}$, in the high temperature phase is diagonal in the representation $a,b=B,W^1,W^2,W^3$, as can be seen directly from the (weak field) Kubo formula \begin{eqnarray} \sigma_{ab}=- \lim_{\omega\to 0} \lim_{k\to 0} \frac{1}{\omega} {\rm Im}\, \langle J_aJ_b\rangle_{\rm irr}, \end{eqnarray} which relates the conductivity to (one-boson irreducible) current-current correlation functions.\footnote{Since the linearized currents are proportional to the Pauli spin matrices $\tau^a$ for the $W^a$ ($a$=1,2,3) fields and the identity matrix $\tau_0=1$ for the $B$ field, one finds explicitly in one-loop order that $\sigma_{ab}\propto Tr\{\tau_a\tau_b\}=2\delta_{ab}$ ($a$=0,1,2,3). Including the dominant interactions in the current-current loop is basically equivalent to solving the Boltzmann equation, which produces no off-diagonal elements.} The construction of the conductivity in terms of the Kubo formula assures that the conductivity and hence the related entropy production in electrical conduction are positive. Then \begin{eqnarray} \sigma = \left( \begin{array}{cccc} \sigma_{BB} & 0 & 0 & 0 \\ 0 & \sigma_{WW} & 0 & 0 \\ 0 & 0 & \sigma_{WW} & 0 \\ 0 & 0 & 0 & \sigma_{WW} \end{array} \right) \,.
\end{eqnarray} Due to the isospin symmetry of the $W$-interactions, the conductivities $\sigma_{W^iW^i}$ are the same, $\equiv\sigma_{WW}$, but differ from the $B$-field conductivity, $\sigma_{BB}$. The calculation of the conductivities $\sigma_{BB}$ and $\sigma_{WW}$ in the weak field limit parallels that done for $T\ll T_c$. The main difference is that weak interactions are no longer suppressed by a factor $(T/M_W)^4$ and the exchange of electroweak vector bosons must be included. The conductivity, $\sigma_{BB}$, for the abelian gauge field $B$ can be calculated similarly to the electrical conductivity at $T\ll T_c$. Taking into account the fact that both left-handed neutrinos and charged leptons couple to the $B$-field with the same sign, and that they scatter the same way, their flow velocities are equal. Consequently, in the scatterings $12\leftrightarrow 34$, ${\bf u}_1 ={\bf u}_3$ and ${\bf u}_2={\bf u}_4$, whether or not the interaction is by charge exchange. The situation is thus similar to the electrodynamic case. Although the quarks and $W^\pm$ are charged, their drifts in the presence of an electric field do not significantly contribute to the electrical conductivity. Charge flow of the quarks is stopped by strong interactions, while flows of the $W^\pm$ are similarly stopped by $W^+ + W^- \to Z^0$, via the triple boson coupling. Charged Higgs bosons are likewise stopped via $W^\pm\phi^\dagger\phi$ couplings. These particles do, however, affect the conductivity by scattering leptons. The lepton and quark mass terms in the Weinberg-Salam Lagrangian provide masses only when the Higgs field has a non-zero expectation value. For $T\gg T_c$ the quarks and leptons have thermal masses, which, for the longitudinal (electric) degrees of freedom, are of order the plasma frequency, $m_{pl}\sim gT$, and likely of order $m_{mag}\sim g^2T$ \cite{Linde} for the transverse (magnetic) mass.
These small masses give rise to spin-flip interactions changing the helicity; such interactions are, however, suppressed by factors of $m/T$, and can therefore be neglected here. The mass terms also provide a small coupling, $G_l=\sqrt{2}m_l/v$, between the Higgs and leptons, proportional to the ratio of the lepton mass to the vacuum expectation value, which leads to a negligibly small contribution to the conductivity. Even the coupling of $\tau$ leptons to the Higgs is a factor $G_\tau^2/e^2\sim 10^{-3}$ smaller than their coupling to the $B$ field. Charge transfer scatterings via Higgs exchange are more singular than scatterings via $B$ exchange, and are enhanced by a factor $1/e^2$; nonetheless such processes are negligible. These considerations imply that the $B$ current consists primarily of right-handed $e^\pm$, $\mu^\pm$ and $\tau^\pm$, interacting only through exchange of uncharged vector bosons $B$, or equivalently $\gamma$ and $Z^0$. Because the left-handed leptons interact through ${\bf W}$ as well as through $B$, they give only a minor contribution to the current. They are, however, effective scatterers of right-handed leptons. The resulting conductivity is \begin{eqnarray} \sigma_{BB} &=& \frac{1}{2}\frac{N_l\cos^2\theta_W} {[\frac{1}{8}(Y_R^2+2Y_L^2)N_l + \frac{1}{8}\sum_q^{N_q} (Y^2_{q_R}+Y^2_{q_L})]} \sigma_{el}^{(\ell\bar{\ell})} = \frac{9}{19} \cos^2\theta_W\sigma_{el}^{(\ell\bar{\ell})} \,,\quad T\gg T_c \,, \label{sTT} \end{eqnarray} where the $Y_{q_{R,L}}$ are the right and left-handed quark hypercharges, and the $Y_{R,L}$ are the right and left-handed charged lepton hypercharges.
The terms entering the prefactor are: i) a factor 1/2 because only the right-handed leptons contribute significantly to the conductivity; ii) a net factor of $\cos^2\theta_W$ because the $B$ field coupling to right-handed leptons and the current $J_B$ contain factors of $g'=e/\cos\theta_W$, while the square of the matrix element contains a factor $(e/\cos\theta_W)^4$; iii) a factor $(Y_R^2+2Y_L^2)/8 = 3/4$ in the scatterings of the right-handed charged leptons with right and left-handed leptons; and iv) a factor $\sum_q^{N_q}(Y^2_{q_R}+Y^2_{q_L})/8 =11/12$ in the scatterings of the right-handed charged leptons with right and left-handed quarks. The factor 9/19 holds in the limit that the temperature is much greater than the top quark mass, $m_t$; excluding the top quark for $T_c<T<m_t$ gives 108/211 instead. Applying a $W^3$ field to the electroweak plasma drives the charged leptons and neutrinos oppositely since they couple through $g\tau_3W_3$. In this case, exchanges of $W^\pm$ dominate the interactions as charge is transferred in the singular forward scatterings, so that ${\bf u}_3={\bf u}_2=-{\bf u}_1$. The collision term is then weighted by a factor $({\bf p}_1+{\bf p}_2)$ instead of a factor $({\bf p}_1-{\bf p}_2)={\bf q}$ and one ends up with an integral $\int p^2dq^2/(q^2+q_D^2)^2\simeq T^2/q_D^2\sim\alpha^{-1}$ for the longitudinal part of the interaction. For the transverse part of the interaction one encounters a logarithmic singularity; while Landau damping is not sufficient to screen the interaction, a magnetic mass, $m_{mag}$, will provide an infrared cutoff. Besides the logarithms, the factor $\alpha^{-1}$ remains and we expect that \begin{eqnarray} \sigma_{WW} \sim \alpha\, \sigma_{BB}. \end{eqnarray} This effect of $W^\pm$ exchange is analogous to the way gluon exchange in QCD gives strong stopping and reduces the ``color conductivity'' significantly \cite{color}; similar effects are seen in spin diffusion in Fermi liquids \cite{BPbook}.
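The numerical prefactors quoted above follow from simple hypercharge arithmetic. The following Python sketch is an illustrative cross-check, not part of the paper; it assumes the standard-model hypercharge assignments ($Q=T_3+Y/2$): $Y_L=-1$, $Y_R=-2$ for leptons, and $Y_{q_L}=1/3$, $Y_{u_R}=4/3$, $Y_{d_R}=-2/3$ for quarks, with $N_l=3$ lepton generations. It reproduces the factors $3/4$ and $11/12$ and the resulting prefactors $9/19$ and $108/211$:

```python
from fractions import Fraction as F

# Standard-model hypercharges (Q = T3 + Y/2)
Y_L, Y_R = F(-1), F(-2)                         # lepton doublet, right-handed charged lepton
Y_qL, Y_uR, Y_dR = F(1, 3), F(4, 3), F(-2, 3)   # quark doublet, u_R, d_R

N_l = 3  # lepton generations

# iii) lepton-lepton scattering weight
lep = (Y_R**2 + 2 * Y_L**2) / 8                 # = 3/4

# iv) lepton-quark scattering weight, summed over quark flavors
def quark_sum(n_up, n_down):
    return (n_up * (Y_uR**2 + Y_qL**2) + n_down * (Y_dR**2 + Y_qL**2)) / 8

qrk_all = quark_sum(3, 3)                       # all six flavors -> 11/12
qrk_no_top = quark_sum(2, 3)                    # top excluded, for T_c < T < m_t

def prefactor(qrk):
    # Eq. (sTT) without the overall cos^2(theta_W)
    return F(1, 2) * N_l / (lep * N_l + qrk)

print(lep, qrk_all, prefactor(qrk_all), prefactor(qrk_no_top))
# 3/4 11/12 9/19 108/211
```

Exact rational arithmetic (`fractions.Fraction`) is used so the quoted fractions come out identically rather than as floats.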
The electrical conductivity is found from $\sigma_{BB}$ and $\sigma_{WW}$ by rotating the $B$ and $W^3$ fields and currents by the Weinberg angle; using Eq. (\ref{sdef}) we obtain \begin{eqnarray} \left(\begin{array}{c} J_A \\ J_{Z^0} \end{array} \right) &=& {\cal R}(\theta_W)\sigma {\cal R}(-\theta_W) \left(\begin{array}{c} A \\ Z^0 \end{array} \right) \nonumber \\ &=& \left(\begin{array}{cc}\sigma_{BB}\cos^2\theta_W+\sigma_{WW}\sin^2\theta_W & \quad(\sigma_{BB}-\sigma_{WW})\cos\theta_W\sin\theta_W \\ (\sigma_{WW}-\sigma_{BB})\cos\theta_W\sin\theta_W & \quad \sigma_{BB}\sin^2\theta_W +\sigma_{WW}\cos^2\theta_W \end{array} \right) \left(\begin{array}{c} A \\ Z^0 \end{array} \right). \label{sigmarot} \end{eqnarray} Thus the electrical conductivity is given by \begin{eqnarray} \sigma_{AA} = \sigma_{BB}\cos^2\theta_W + \sigma_{WW}\sin^2\theta_W; \end{eqnarray} $\sigma_{el}/T$ above the electroweak transition differs from that below mainly by a factor $\sim\cos^4\theta_W\simeq 0.6$. In the wide temperature range we are considering, the coupling constants in fact run as $\alpha_i(Q)=\alpha_i(\mu)+b_i\ln(Q/\mu)$ where the coefficients $b_i$ are found by renormalization group calculations \cite{HM,Wilczek}. In a high temperature plasma typical momentum transfers $Q$ are of order $q_D\sim eT$. The exact value employed for $Q$ is not important, as the couplings only increase logarithmically with temperature. In the temperature range 1 to 10$^6$ GeV, $\alpha^{-1}$ varies from 130 to 123 and $\sin^2\theta_W$ from 0.21 to 0.28. \section{Summary and Outlook} We have calculated the electrical and electroweak conductivities in the early universe over a wide range of temperatures. Typically, $\sigma_{el}\simeq T/\alpha^2\ln(1/\alpha)$, where the logarithmic dependence on the coupling constant arises from Debye and dynamical screening of small momentum-transfer interactions.
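As a rough numerical orientation for the parametric estimate $\sigma_{el}\simeq T/\alpha^2\ln(1/\alpha)$, the following illustrative evaluation (not from the paper; $O(1)$ prefactors are dropped) uses the running values of $\alpha^{-1}$ quoted above:

```python
import math

def sigma_el_over_T(inv_alpha):
    """Parametric estimate sigma_el/T ~ 1/(alpha^2 ln(1/alpha)); O(1) factors dropped."""
    alpha = 1.0 / inv_alpha
    return 1.0 / (alpha ** 2 * math.log(1.0 / alpha))

# alpha^{-1} runs from about 130 down to 123 between T ~ 1 and 10^6 GeV
for inv_alpha in (123, 130):
    print(inv_alpha, sigma_el_over_T(inv_alpha))
```

The estimate shows $\sigma_{el}/T$ is of order a few thousand throughout the temperature range considered, so the conclusions on flux diffusion are insensitive to the precise value of the running coupling.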
In the quark-gluon plasma, at $T\gg T_{QGP}\sim 150$ MeV, the additional stopping on quarks reduces the electrical conductivity from that in the hadronic phase. In the electroweak symmetry-restored phase, $T\gg T_c$, interactions between leptons and $W^\pm$ and $Z^0$ bosons reduce the conductivity further. The electrical conductivity does not vanish (as one might have imagined to result from singular unscreened $W^\pm$-exchanges), and is larger than previous estimates, within an order of magnitude. The current is carried mainly by right-handed leptons since they interact only through exchange of $\gamma$ and $Z^0$. From the above analysis we can infer the qualitative behavior of other transport coefficients. The characteristic electrical relaxation time, $\tau_{el}\sim (\alpha^2\ln(1/\alpha)T)^{-1}$, defined from $\sigma\simeq e^2 n\tau_{el}/T$, is a typical ``transport time'' which determines relaxation of transport processes when charges are involved. Right-handed leptons interact through $Z^0$ exchanges only, whereas left-handed leptons may change into neutrinos by $W^\pm$ exchanges as well. Since $Z^0$ exchange is similar to photon exchange when $T\gg T_c$, the characteristic relaxation time is similar to that for electrical conduction, $\tau_{\nu}\sim (\alpha^2\ln(1/\alpha)T)^{-1}$ (except for the dependence on the Weinberg angle). Thus the viscosity is $\eta\sim T^4\tau_\nu \sim T^3/(\alpha^2\ln(1/\alpha))$. For $T\ll M_W$ the neutrino interaction is suppressed by a factor $(T/M_W)^4$; in this regime neutrinos have the longest mean free paths and dominate the viscosity \cite{BHP}. The electrical conductivity of the plasma in the early universe is sufficiently large that large-scale magnetic flux present in this period does not diffuse significantly over timescales of the expansion of the universe. The time for magnetic flux to diffuse on a distance scale $L$ is $\tau_{diff} \sim \sigma_{el} L^2$.
Since the expansion timescale $t_{exp}$ is $\sim 1/(t_{\rm Planck}T^2)$, where $t_{\rm Planck} \sim 10^{-43}$ s is the Planck time, one readily finds that \begin{eqnarray} \frac{\tau_{diff}}{t_{exp}} \sim \alpha x^2 \frac{\tau_{el}}{t_{\rm Planck}} \gg 1, \end{eqnarray} where $x = L/ct_{exp}$ is the diffusion length scale in units of the distance to the horizon. As described in Refs. \cite{Enqvist} and \cite{LMcL}, sufficiently large domains with magnetic fields in the early universe would survive to produce the primordial magnetic fields observed today. We are grateful to L. McLerran, C.J. Pethick, J. Popp and B. Svetitsky for discussions. This work was supported in part by DOE Grant DE-AC03-76SF00098 and NSF Grant PHY 94-21309.
\section{Multiple-time data acquisition} \subsection{Sensitivity analysis}\label{app:multi_SA} We first give the sensitivity analysis for finite $|\Theta|$. The results are basically the same as the ones for the one-time data acquisition mechanism except that we do not give a lower bound for $\alpha$. \begin{theorem}\label{thm:multi_sensitive} When $|\Theta|$ is finite, if $f$ is strictly convex, then Mechanism~\ref{alg:multi} is sensitive in the first $T-1$ rounds if either of the following two conditions holds: \begin{enumerate} \item[(1)] $\forall i$, $Q_{-i}$ has rank $|\Theta|$. \item[(2)] $\forall i, \sum_{i'\neq i}(rank_k(G_{i'})-1)\cdot N_{i'}+1\ge |\Theta|$. \end{enumerate} \end{theorem} When $\Theta \subseteq \mathbb{R}^m$ is a continuous space, the results are entirely similar to the ones for Mechanism~\ref{alg:single} but with slightly different proofs. Suppose the data analyst uses a model from the exponential family so that the prior and all the posteriors of $\bth$ can be written in the form in Lemma~\ref{lem:multi_comp}. The sensitivity of the mechanism will depend on the normalization term $g(\nu, \overline{\bm{\tau}})$ (or equivalently, the partition function) of the pdf. Define \begin{align} \label{eqn:h} h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)= \frac{g(\nu_i, \overline{\bm{\tau}}_i) }{g(\nu_i +\nu_{-i}-\nu_0, \frac{\nu_{i} \overline{\bm{\tau}}_{i} + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i +\nu_{-i}-\nu_0} ) }, \end{align} then we have the following necessary and sufficient conditions for the sensitivity of the mechanism.
\begin{theorem} \label{thm:multi_sensitive_cont} When $\Theta \subseteq \mathbb{R}^m$, if the data analyst uses a model in the exponential family and a strictly convex $f$, then Mechanism~\ref{alg:multi} is sensitive in the first $T-1$ rounds if and only if for any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, we have $ \Pr_{D_{-i}} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0. $ \end{theorem} See Section~\ref{sec:single} for interpretations of this theorem. \subsection{Missing proofs}\label{app:multi_proofs} The following are the proofs of our results. \paragraph{Proof of Theorem~\ref{thm:multi_main}.} It is easy to verify that the mechanism is IR, budget feasible and symmetric. We prove the truthfulness as follows. Consider the payment for day $t$. At day $t$, data provider $i$ reports a dataset $\tilde{D}_i^{(t)}$. Assuming that all other data providers truthfully report $D^{(t)}_{-i}$, data provider $i$'s expected payment is determined by his expected score \begin{align} &\mathbb{E}_{(D_{-i}^{(t)}, D_{-i}^{(t+1)})|D_i^{(t)}} [s_i] \notag\\ = &\mathbb{E}_{D_{-i}^{(t+1)}} f'\left(\frac{1}{PMI(\tilde{D}_i^{(t)}, D_{-i}^{(t+1)})}\right) - \mathbb{E}_{D_{-i}^{(t)}|D_i^{(t)}}f^*\left( f'\left( \frac{1}{PMI(\tilde{D}_i^{(t)}, D_{-i}^{(t)}) }\right)\right). \end{align} The first expectation is taken over the marginal distribution $p( D_{-i}^{(t+1)})$ without conditioning on $ D_i^{(t)}$ because $D^{(t+1)}$ is independent from $D^{(t)}$, so we have $p( D_{-i}^{(t+1)} | D_i^{(t)}) = p( D_{-i}^{(t+1)})$. Since the underlying distributions for different days are the same, we drop the superscripts for simplicity in the rest of the proof, so the expected score is written as \begin{align} \label{eqn:e_payment} \mathbb{E}_{D_{-i}}\ f'\left(\frac{1}{PMI(\tilde{D}_i, D_{-i})}\right) - \mathbb{E}_{D_{-i}|D_i} \ f^*\left( f'\left( \frac{1}{PMI(\tilde{D}_i, D_{-i}) }\right)\right).
\end{align} We then use Lemma~\ref{lem11} to get an upper bound of the expected score~\eqref{eqn:e_payment} and show that truthfully reporting $D_i$ achieves the upper bound. We apply Lemma~\ref{lem11} to two distributions of $D_{-i}$, the distribution of $D_{-i}$ conditioning on the observed $D_i$, $p( D_{-i} | D_i)$, and the marginal distribution $p( D_{-i})$. Then we get \begin{align} \label{eqn:f_div_multi} D_f( p( D_{-i} | D_i), p( D_{-i})) \ge \sup_{g \in \mathcal{G}} \mathbb{E}_{D_{-i}}[g(D_{-i})] - \mathbb{E}_{D_{-i}|D_i} [f^*(g(D_{-i}))], \end{align} where $f$ is the given convex function, $\mathcal{G}$ is the set of all real-valued functions of $D_{-i}$. The supremum is achieved, and only achieved, at functions $g$ with \begin{align} \label{eqn:f_div_max} g(D_{-i}) = f'\left( \frac{p( D_{-i})}{p(D_{-i} | D_i)} \right) \text{ for all } D_{-i} \text{ with } p(D_{-i} | D_i)>0. \end{align} For a dataset $\tilde{D}_i$, define the function $$ g_{\tilde{D}_i}(D_{-i}) = f'\left(\frac{1}{PMI(\tilde{D}_i, D_{-i})}\right). $$ Then \eqref{eqn:f_div_multi} gives an upper bound of the expected score~\eqref{eqn:e_payment} as \begin{align*} &D_f( p( D_{-i} | D_i), p( D_{-i})) \\ \ge & \ \mathbb{E}_{D_{-i}}\big[g_{\tilde{D}_i}(D_{-i})\big] - \mathbb{E}_{D_{-i}|D_i} \big[f^*\big(g_{\tilde{D}_i}(D_{-i})\big) \big]\\ = & \ \mathbb{E}_{D_{-i}}\left[f'\left(\frac{1}{PMI(\tilde{D}_i, D_{-i})}\right)\right] - \mathbb{E}_{D_{-i}|D_i} \left[f^*\left(f'\left(\frac{1}{PMI(\tilde{D}_i, D_{-i})}\right)\right)\right]\\ = & \ \eqref{eqn:e_payment}. \end{align*} By~\eqref{eqn:f_div_max}, the upper bound is achieved only when \begin{align*} g_{\tilde{D}_i}(D_{-i}) = f'\left( \frac{p( D_{-i})}{p(D_{-i} | D_i)} \right) \text{ for all } D_{-i} \text{ with } p(D_{-i} | D_i)>0, \end{align*} that is \begin{align} \label{eqn:opt_cond} f'\left(\frac{1}{PMI(\tilde{D}_i, D_{-i})}\right) = f'\left( \frac{p( D_{-i})}{p(D_{-i} | D_i)} \right) \text{ for all } D_{-i} \text{ with } p(D_{-i} | D_i)>0.
\end{align} Then it is easy to prove the truthfulness. Truthfully reporting $D_i$ achieves~\eqref{eqn:opt_cond} because by Lemma~\ref{lem12}, for all $D_i$ and $D_{-i}$, \begin{align*} PMI(D_i, D_{-i}) = \frac{p(D_i, D_{-i})}{p(D_i) p(D_{-i})} = \frac{p(D_{-i}|D_i)}{p(D_{-i})}. \end{align*} Again, let $\bm{Q_{-i}}$ be a $(\Pi_{j\in [n], j\neq i} |\mathcal{D}_j|^{N_j}) \times |\Theta|$ matrix with elements equal to $p(\bth|D_{-i})$ and let $G_i$ be the $|\mathcal{D}_i|\times|\Theta|$ data generating matrix with elements equal to $p(\bth|d_i)$. Then we have the following sufficient conditions for the mechanism's sensitivity. \paragraph{Proof of Theorem~\ref{thm:multi_sensitive}.} We now prove the sensitivity. For discrete, finite $\Theta$, we prove that when $f$ is strictly convex and $\bm{Q}_{-i}$ has rank $|\Theta|$, the mechanism is sensitive. When $f$ is strictly convex, $f'$ is a strictly increasing function. Let $\tilde{\q}_i = p(\bth|\tilde{D}_{i})$. Then according to the definition of $PMI(\cdot)$, condition~\eqref{eqn:opt_cond} is equivalent to \begin{align} \label{eqn:opt_cond_strict} PMI(\tilde{D}_i, D_{-i}) = \sum_{\bth \in \Theta} \frac{\tilde{\q}_i\cdot p(\bth|D_{-i})}{p(\bth)} = \frac{p( D_{-i} | D_i)}{p(D_{-i})} \ \text{ for all } D_{-i} \text{ with } p(D_{-i} | D_i)>0. \end{align} We show that when matrix $\bm{Q}_{-i}$ has rank $|\Theta|$, $\tilde{\q}_i = p(\bth|D_i)$ is the only solution of~\eqref{eqn:opt_cond_strict}, which means that the payment rule is sensitive. Suppose $p(\bth |D_i)$ and $\bm{q} = p(\bth |\tilde{D}_i)$ are both solutions of~\eqref{eqn:opt_cond_strict}; then we should have \begin{align*} p(D_{-i}|\tilde{D}_i) = p(D_{-i}|D_i) \text{ for all } D_{-i} \text{ with } p(D_{-i} | D_i)>0.
\end{align*} In addition, because $$ \sum_{D_{-i}} p(D_{-i}|\tilde{D}_i) = 1 = \sum_{D_{-i}} p(D_{-i}|D_i) $$ and $p(D_{-i}|\tilde{D}_i)\ge 0$, we must also have $p(D_{-i}|\tilde{D}_i) = 0$ for all $D_{-i}$ with $p(D_{-i} | D_i)=0$. Therefore we have $$ PMI(\tilde{D}_i, D_{-i}) = PMI(D_i, D_{-i}) \text{ for all } D_{-i}. $$ Since $PMI(\cdot)$ can be written as, \begin{align*} \label{eqn:opt_cond_strict2} PMI(\tilde{D}_i, D_{-i}) = \sum_{\bth \in \Theta} \frac{p(\bth|\tilde{D}_{i}) p(\bth|D_{-i})}{p(\bth)} = (\bm{Q}_{-i} \bm{\Lambda} \tilde{\q}_i)_{D_{-i}} \end{align*} where $\bm{\Lambda}$ is the $|\Theta| \times |\Theta|$ diagonal matrix with $1/p(\bth)$ on the diagonal. So we have $$ \bm{Q}_{-i} \bm{\Lambda} p(\bth |D_i) = \bm{Q}_{-i} \bm{\Lambda} \bm{q} \quad \Longrightarrow \quad \bm{Q}_{-i} \bm{\Lambda} (p(\bth |D_i) - \bm{q}) = 0. $$ Since $\bm{Q}_{-i} \bm{\Lambda}$ must have rank $|\Theta|$, which means that the columns of $\bm{Q}_{-i} \bm{\Lambda}$ are linearly independent, we must have $$ p(\bth |D_i) - \bm{q} = 0, $$ which completes our proof of sensitivity for finite-size $\Theta$. The proof of condition (2) is the same as the proof of Theorem~\ref{thm_sensitive_app} condition (2). \paragraph{Proof of Theorem~\ref{thm:multi_sensitive_cont}.} When $\Theta \subseteq \mathbb{R}^m$ and a model in the exponential family is used, we prove that when $f$ is strictly convex, the mechanism will be sensitive if and only if for any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, \begin{eqnarray}\label{eqn:app_multi_sensi} \Pr_{D_{-i}} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0. 
\end{eqnarray} We first show that the above condition is equivalent to the condition that for any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, \begin{eqnarray} \label{eqn:app_multi_sen} \Pr_{D_{-i}|D_i} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0, \end{eqnarray} where $D_{-i}$ is drawn from $p(D_{-i}|D_i)$ rather than $p(D_{-i})$. This is because, by conditional independence of the datasets, for any event $\mathcal{E}$, we have $$ \Pr_{D_{-i}|D_i}[\mathcal{E}] = \int_{\bth\in \Theta} p(\bth|D_i) \Pr_{D_{-i}|\bth}[\mathcal{E}] \, d\bth $$ and $$ \Pr_{D_{-i}}[\mathcal{E}] = \int_{\bth\in \Theta} p(\bth) \Pr_{D_{-i}|\bth}[\mathcal{E}] \, d\bth. $$ Since both $p(\bth)$ and $p(\bth|D_i)$ are always positive because they are in the exponential family, it should hold that $$ \Pr_{D_{-i}|D_i}[\mathcal{E}] > 0 \ \Longleftrightarrow \ \Pr_{D_{-i}}[\mathcal{E}] > 0. $$ Therefore \eqref{eqn:app_multi_sensi} is equivalent to \eqref{eqn:app_multi_sen}, and we only need to show that the mechanism is sensitive if and only if \eqref{eqn:app_multi_sen} holds. Let $\tilde{\q}_i = p(\bth|\tilde{D}_i)$. We then again apply Lemma~\ref{lem11}. By Lemma~\ref{lem11} and the strict convexity of $f$, $\tilde{\q}_i$ achieves the supremum if and only if $$ PMI(\tilde{D}_i, D_{-i}) = \frac{p( D_{-i} | D_i)}{p(D_{-i})} \ \text{ for all } D_{-i} \text{ with } p(D_{-i} | D_i)>0. $$ By the definition of $PMI$ and Lemma~\ref{lem12}, the above condition is equivalent to \begin{eqnarray} \label{eqn:multi_exp_cond} \int_{\bth\in \Theta} \frac{\tilde{\q}_i(\bth)p(\bth|D_{-i})}{p(\bth)} \,d\bth = \int_{\bth\in \Theta} \frac{p(\bth|D_i)p(\bth|D_{-i})}{p(\bth)}\, d\bth \ \text{ for all } D_{-i} \text{ with } p(D_{-i} | D_i)>0.
\end{eqnarray} When we use a (canonical) model in the exponential family, the prior $p(\bth)$ and the posteriors $p(\bth|D_i), p(\bth|D_{-i})$ can be represented in the standard form~\eqref{eqn:exp_fam_prior}, \begin{eqnarray*} & p(\bth) = \mathcal{P}(\bth| \nu_0, \overline{\bm{\tau}}_0),\\ & p(\bth|D_i) = \mathcal{P}\big(\bth| \nu_i, \overline{\bm{\tau}}_i\big),\\ & p(\bth|D_{-i}) = \mathcal{P}\big(\bth| \nu_{-i}, \overline{\bm{\tau}}_{-i}\big),\\ & \tilde{\q}_i = \mathcal{P}\big(\bth| \nu_{i}', \overline{\bm{\tau}}_{i}'\big), \end{eqnarray*} where $\nu_0, \overline{\bm{\tau}}_0$ are the parameters for the prior $p(\bth)$, $\nu_i, \overline{\bm{\tau}}_i$ are the parameters for the posterior $p(\bth|D_i)$, $\nu_{-i}, \overline{\bm{\tau}}_{-i}$ are the parameters for the posterior $p(\bth|D_{-i})$, and $ \nu_{i}', \overline{\bm{\tau}}_{i}'$ are the parameters for $\tilde{\q}_i$. Then by Lemma~\ref{lem:exp_int}, the condition that $\tilde{\q}_i$ achieves the supremum~\eqref{eqn:multi_exp_cond} is equivalent to \begin{eqnarray} \frac{g(\nu_i', \overline{\bm{\tau}}_i') }{ g(\nu_i' +\nu_{-i}-\nu_0, \frac{\nu_{i}' \overline{\bm{\tau}}_{i}' + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i' +\nu_{-i}-\nu_0} )} = \frac{g(\nu_i, \overline{\bm{\tau}}_i) }{ g(\nu_i +\nu_{-i}-\nu_0, \frac{\nu_{i} \overline{\bm{\tau}}_{i} + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i +\nu_{-i}-\nu_0} )}, \end{eqnarray} which, by our definition of $h(\cdot)$, is just $$ h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') = h_{D_{-i}}(\nu_{i}, \overline{\bm{\tau}}_{i}), \quad \text{ for all } D_{-i} \text{ with } p(D_{-i} | D_i)>0. $$ Now we are ready to prove Theorem~\ref{thm:multi_sensitive_cont}.
Since \eqref{eqn:app_multi_sensi} is equivalent to \eqref{eqn:app_multi_sen}, we only need to show that the mechanism is sensitive if and only if for all $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, \begin{eqnarray*} \Pr_{D_{-i}|D_i} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0. \end{eqnarray*} If the above condition holds, then $\tilde{\q}_i$ with parameters $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ should have a non-zero loss in the expected score~\eqref{eqn:e_payment} compared to the optimal solution $p(\bth|D_i)$ with parameters $(\nu_i, \overline{\bm{\tau}}_i)$, which means that the mechanism is sensitive. For the other direction, if the condition does not hold, i.e., there exists $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ with $$ \Pr_{D_{-i}|D_i} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] = 0, $$ then reporting $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ will give the same expected score as truthfully reporting $(\nu_i, \overline{\bm{\tau}}_i)$, which means that the mechanism is not sensitive. \section{Mathematical Background} \label{app:math} Our mechanisms are built with some important mathematical tools. First, in probability theory, an $f$-divergence is a function that measures the difference between two probability distributions. \begin{definition}[$f$-divergence] Given a convex function $f$ with $f(1) = 0$, for two distributions over $\Omega$, $p,q \in \Delta \Omega$, define the $f$-divergence of $p$ and $q$ to be $$ D_f(p,q) = \int_{\omega \in \Omega} p(\omega) f\left( \frac{q(\omega) }{ p(\omega) } \right) \, d\omega. $$ \end{definition} In duality theory, the convex conjugate of a function is defined as follows.
\begin{definition}[Convex conjugate] For any function $f: \mathbb{R} \to \mathbb{R}$, define the convex conjugate function of $f$ as $$ f^*(y) = \sup_x xy - f(x). $$ \end{definition} Then the following inequality (\cite{nguyen2010estimating, kong2018water}) holds. \begin{lemma}[Lemma 1 in \cite{nguyen2010estimating}] For any differentiable convex function $f$ with $f(1) = 0$, any two distributions over $\Omega$, $p,q \in \Delta \Omega$, let $\mathcal{G}$ be the set of all functions from $\Omega$ to $\mathbb{R}$, then we have $$ D_f(p,q) \ge \sup_{g \in \mathcal{G}} \int_{\omega \in \Omega} g(\omega) q(\omega) - f^*(g(\omega)) p(\omega)\, d\omega = \sup_{g \in \mathcal{G}} \mathbb{E}_q g - \mathbb{E}_p f^*(g). $$ A function $g$ achieves equality if and only if $ g(\omega) \in \partial f\left(\frac{q(\omega)}{p(\omega)}\right)$ for all $\omega$ with $p(\omega)>0$, where $\partial f\big(\frac{q(\omega)}{p(\omega)}\big)$ represents the subdifferential of $f$ at point $q(\omega)/p(\omega)$. \end{lemma} The $f$-mutual information is a measure of the mutual dependence of two random variables, defined as the $f$-divergence between their joint distribution and the product of their marginal distributions. \begin{definition}[Kronecker product] Consider two matrices $\bm{A}\in \mathbb{R}^{m\times n}$ and $\bm{B}\in \mathbb{R}^{p\times q}$. The Kronecker product of $\bm{A}$ and $\bm{B}$, denoted as $\bm{A}\otimes \bm{B}$, is defined as the following $pm\times qn$ matrix: \begin{equation*} \bm{A}\otimes \bm{B} = \begin{bmatrix} a_{11}\bm{B} &\cdots &a_{1n} \bm{B}\\ \vdots&\ddots&\vdots\\ a_{m1}\bm{B} &\cdots &a_{mn} \bm{B} \end{bmatrix}. \end{equation*} \end{definition} \begin{definition}[$f$-mutual information and pointwise MI] Let $(X,Y)$ be a pair of random variables with values over the space $\mathcal{X} \times \mathcal{Y}$.
If their joint distribution is $p_{X,Y}$ and marginal distributions are $p_X$ and $p_Y$, then given a convex function $f$ with $f(1)=0$, the $f$-mutual information between $X$ and $Y$ is $$ I_f(X;Y) = D_f(p_{X,Y}, p_X \otimes p_Y) = \int_{x\in \mathcal{X},y\in \mathcal{Y}} p_{X,Y}(x,y) f\left( \frac{p_X(x)\cdot p_Y(y)}{p_{X,Y}(x,y)} \right). $$ We define the function $K(x,y)$ as the reciprocal of the ratio inside $f$, $$ K(x,y) = \frac{p_{X,Y}(x,y)}{p_X(x)\cdot p_Y(y)}. $$ \end{definition} If two random variables are independent conditioning on another random variable, we have the following formula for the function $K$. \begin{lemma} \label{lem12} When random variables $X, Y$ are independent conditioning on $\bth$, for any pair of $(x,y) \in \mathcal{X} \times \mathcal{Y}$, we have $$ K(x,y) = \sum_{\bth \in \Theta} \frac{p(\bth|x)p(\bth|y)}{p(\bth)} $$ if $|\Theta|$ is finite, and $$ K(x,y) = \int_{\bth \in \Theta} \frac{p(\bth|x)p(\bth|y)}{p(\bth)} \,d \bth $$ if $\Theta\subseteq \mathbb{R}^m$. \end{lemma} \begin{proof} We only prove the second equation for $\Theta\subseteq \mathbb{R}^m$ as the proof for finite $\Theta$ is entirely similar. \begin{align*} K(x,y) & = \frac{p(x,y)}{p(x)\cdot p(y)}\\ & = \frac{\int_{\bth\in\Theta}p(x|\bth)p(y|\bth)p(\bth)\, d\bth}{p(x)\cdot p(y)}\\ & = \int_{\bth\in\Theta}\frac{p(\bth|x)p(\bth|y)}{p(\bth)} \, d\bth, \end{align*} where the last equality uses Bayes' Law. \end{proof} \begin{definition}[Exponential family~\cite{murphy2012machine}] A probability density function or probability mass function $p(\x|\bth)$, for $\x = (x_1, \dots, x_n) \in \mathcal{X}^n$ and $\bth \in \Theta \subseteq \mathbb{R}^m$ is said to be in the \emph{exponential family} in canonical form if it is of the form \begin{equation} \label{eqn:exp_fam_prob} p(\x|\bth) = h(\x) \exp \left[\bth^T \bphi(\x) - A(\bth) \right] \end{equation} where $A(\bth) = \log \int_{\mathcal{X}^n} h(\x) \exp\left[\bth^T \bphi(\x) \right] \, d\x$.
The conjugate prior with parameters $\nu_0, \overline{\bm{\tau}}_0$ for $\bth$ has the form \begin{equation} \label{eqn:exp_fam_prior} p(\bth) = \mathcal{P}(\bth| \nu_0, \overline{\bm{\tau}}_0) = g(\nu_0, \overline{\bm{\tau}}_0) \exp\left[ \nu_0 \bth^T \overline{\bm{\tau}}_0 - \nu_0 A(\bth ) \right]. \end{equation} Let $\overline{\bm{s}} = \frac{1}{n} \sum_{i=1}^n \bm{\phi}(x_i) $. Then the posterior of $\bth$ is of the form \begin{align*} p(\bth|\x) & \propto \exp \left[ \bth^T (\nu_0\overline{\bm{\tau}}_0 + n \overline{\bm{s}}) - (\nu_0 + n) A(\bth) \right]\\ & = \mathcal{P}\big(\bth| \nu_0 + n, \frac{\nu_0\overline{\bm{\tau}}_0 + n \overline{\bm{s}}}{\nu_0 + n}\big), \end{align*} where $\mathcal{P}\big(\bth| \nu_0 + n, \frac{\nu_0\overline{\bm{\tau}}_0 + n \overline{\bm{s}}}{\nu_0 + n}\big)$ is the conjugate prior with parameters $\nu_0 + n$ and $\frac{\nu_0\overline{\bm{\tau}}_0 + n \overline{\bm{s}}}{\nu_0 + n}$. \end{definition} \begin{lemma} \label{lem:exp_int} Let $\bth$ be the parameters of a pdf in the exponential family. Let $\mathcal{P}(\bth|\nu, \overline{\bm{\tau}}) = g(\nu, \overline{\bm{\tau}}) \exp\left[ \nu \bth^T \overline{\bm{\tau}} - \nu A(\bth ) \right]$ denote the conjugate prior for $\bth$ with parameters $\nu, \overline{\bm{\tau}}$. For any three distributions of $\bth$, \begin{eqnarray*} p_1(\bth) = \mathcal{P}(\bth|\nu_1, \overline{\bm{\tau}}_1),\\ p_2(\bth) = \mathcal{P}(\bth|\nu_2, \overline{\bm{\tau}}_2),\\ p_0(\bth) = \mathcal{P}(\bth|\nu_0, \overline{\bm{\tau}}_0), \end{eqnarray*} we have $$ \int_{\bth \in \Theta} \frac{p_1(\bth) p_2(\bth)}{p_0(\bth)} \, d\bth = \frac{g(\nu_1, \overline{\bm{\tau}}_1) g(\nu_{2}, \overline{\bm{\tau}}_{2})}{g(\nu_0, \overline{\bm{\tau}}_0) g(\nu_1 +\nu_{2}-\nu_0, \frac{\nu_{1} \overline{\bm{\tau}}_{1} + \nu_{2} \overline{\bm{\tau}}_{2} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_1 +\nu_{2}-\nu_0} )}. 
$$ \end{lemma} \begin{proof} To compute the integral, we first write $p_1(\bth)$, $p_2(\bth)$ and $p_0(\bth)$ in full, \begin{eqnarray*} p_1(\bth) = \mathcal{P}(\bth|\nu_1, \overline{\bm{\tau}}_1) = g(\nu_1, \overline{\bm{\tau}}_1) \exp\left[ \nu_1 \bth^T \overline{\bm{\tau}}_1 - \nu_1 A(\bth ) \right],\\ p_2(\bth) = \mathcal{P}(\bth|\nu_2, \overline{\bm{\tau}}_2) = g(\nu_2, \overline{\bm{\tau}}_2) \exp\left[ \nu_2 \bth^T \overline{\bm{\tau}}_2 - \nu_2 A(\bth ) \right],\\ p_0(\bth) = \mathcal{P}(\bth|\nu_0, \overline{\bm{\tau}}_0) = g(\nu_0, \overline{\bm{\tau}}_0) \exp\left[ \nu_0 \bth^T \overline{\bm{\tau}}_0 - \nu_0 A(\bth ) \right]. \end{eqnarray*} Then the integral equals \begin{align*} &\int_{\bth \in \Theta} \frac{p_1(\bth) p_2(\bth)}{p_0(\bth)} \, d\bth \\ =& \int_{\bth \in \Theta} \frac{g(\nu_1, \overline{\bm{\tau}}_1) \exp\left[ \nu_1 \bth^T \overline{\bm{\tau}}_1 - \nu_1 A(\bth ) \right] g(\nu_2, \overline{\bm{\tau}}_2) \exp\left[ \nu_2 \bth^T \overline{\bm{\tau}}_2 - \nu_2 A(\bth ) \right]}{g(\nu_0, \overline{\bm{\tau}}_0) \exp\left[ \nu_0 \bth^T \overline{\bm{\tau}}_0 - \nu_0 A(\bth ) \right]}\, d\bth \\ = & \frac{g(\nu_1, \overline{\bm{\tau}}_1) g(\nu_{2}, \overline{\bm{\tau}}_{2})}{g(\nu_0, \overline{\bm{\tau}}_0)} \int_{\bth \in \Theta} \exp \left[ \bth^T (\nu_{1} \overline{\bm{\tau}}_{1} + \nu_{2} \overline{\bm{\tau}}_{2} - \nu_{0}\overline{\bm{\tau}}_{0}) - A(\bth)(\nu_1 +\nu_{2}-\nu_0) \right] \, d\bth \\ = & \frac{g(\nu_1, \overline{\bm{\tau}}_1) g(\nu_{2}, \overline{\bm{\tau}}_{2})}{g(\nu_0, \overline{\bm{\tau}}_0) }\cdot \frac{1}{g(\nu_1 +\nu_{2}-\nu_0, \frac{\nu_{1} \overline{\bm{\tau}}_{1} + \nu_{2} \overline{\bm{\tau}}_{2} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_1 +\nu_{2}-\nu_0} )}.
\end{align*} The last equality holds because $$ g\left(\nu_1 +\nu_{2}-\nu_0, \frac{\nu_{1} \overline{\bm{\tau}}_{1} + \nu_{2} \overline{\bm{\tau}}_{2} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_1 +\nu_{2}-\nu_0} \right)\exp \left[ \bth^T (\nu_{1} \overline{\bm{\tau}}_{1} + \nu_{2} \overline{\bm{\tau}}_{2} - \nu_{0}\overline{\bm{\tau}}_{0}) - A(\bth)(\nu_1 +\nu_{2}-\nu_0) \right] $$ is the pdf $$\mathcal{P}\left(\bth\,\middle|\,\nu_1 +\nu_{2}-\nu_0,\frac{\nu_{1} \overline{\bm{\tau}}_{1} + \nu_{2} \overline{\bm{\tau}}_{2} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_1 +\nu_{2}-\nu_0}\right)$$ and thus integrates to $1$ over $\bth$. \end{proof} \section{Proofs in \cite{kong2018water}} \subsection{Multi-task common ground mechanism} Suppose Alice holds a private dataset $X_A$ and Bob holds a private dataset $X_B$. Assume that the datasets and the ground truth $Y\in \mathcal{Y}$ are jointly drawn from a distribution $p(X_A, X_B, Y)$. We want to elicit predictions of the ground truth $Y$ from Alice and Bob, and the goal is to design a reward rule so that truthful reporting (Alice reports $p(Y|X_A)$ and Bob reports $p(Y|X_B)$) is a (strict) equilibrium. \begin{definition} Let $\tilde{p}_A$ be Alice's prediction, and let $\tilde{p}_B$ be Bob's prediction. A reward rule $R(\tilde{p}_A, \tilde{p}_B):\Delta \mathcal{Y}\times \Delta \mathcal{Y} \to \mathbb{R}$ takes their predictions as inputs and decides a reward for both of them. A reward rule is (strictly) truthful if reporting $p(Y|X_A)$ and $p(Y|X_B)$ is a (strict) equilibrium, i.e., $$ \mathbb{E}_{X_B|x_A} \big[R(p(Y|x_A), p(Y|X_B))\big] \ge \mathbb{E}_{X_B|x_A}\big[R(\tilde{p}_A, p(Y|X_B))\big], \quad \forall x_A, \tilde{p}_A, $$ $$ \mathbb{E}_{X_A|x_B} \big[R(p(Y|X_A), p(Y|x_B))\big] \ge \mathbb{E}_{X_A|x_B}\big[R(p(Y|X_A), \tilde{p}_B)\big], \quad \forall x_B, \tilde{p}_B. $$ \end{definition} Suppose now we have multiple i.i.d. prediction tasks $$ (X_A^{(1)}, X_B^{(1)}, Y^{(1)}), \dots, (X_A^{(n)}, X_B^{(n)}, Y^{(n)})\sim p(X_A, X_B, Y).
$$ We give the prediction tasks to the data holders in random order and ask them to submit predictions. Let the predictions for the $i$-th task be $\tilde{p}^{(i)}_A, \tilde{p}^{(i)}_B$. \begin{theorem} Suppose $n \ge 2$, the prediction tasks are i.i.d., $X_A$ and $X_B$ are conditionally independent given $Y$, and the designer knows $Y$'s prior distribution $p(Y)$. Let $f:\mathbb{R} \to \mathbb{R}$ be a differentiable convex function with strictly increasing derivative $f'$. Define a reward function $r(\tilde{p}_A, \tilde{p}_B)$ as the value of $f'$ at the point $\sum_{y} \frac{\tilde{p}_A(y) \tilde{p}_B(y)}{p(y)}$, $$ r(\tilde{p}_A, \tilde{p}_B) = f'\left(\sum_{y} \frac{\tilde{p}_A(y) \tilde{p}_B(y)}{p(y)} \right). $$ The following reward rule is truthful: (1) for each $i \in [n]$, reward Alice and Bob the amount of agreement on task $i$, i.e., $$ r(\tilde{p}^{(i)}_A, \tilde{p}^{(i)}_B); $$ (2) for each pair of $i,j \in [n]$ with $i\neq j$, penalize Alice and Bob by the amount of agreement on this pair of tasks, i.e., $$ f^*(r(\tilde{p}^{(i)}_A, \tilde{p}^{(j)}_B)); $$ so the total reward is $$ R(\tilde{p}_A, \tilde{p}_B) = \frac{1}{n} \sum_{i=1}^n r(\tilde{p}^{(i)}_A, \tilde{p}^{(i)}_B) - \frac{1}{n(n-1)} \sum_{i,j: i\neq j} f^*(r(\tilde{p}^{(i)}_A, \tilde{p}^{(j)}_B)). $$ \end{theorem} \begin{proof} \begin{lemma}\label{lem13} For two random variables $X$ and $Y$ and a function $g(x,y):\Sigma_X\times \Sigma_Y\rightarrow \mathbb{R}$, $$\mathbb{E}_{U_{X,Y}}\, g(X=x,Y=y)-\mathbb{E}_{V_{X,Y}}\,{f^*(g(X=x,Y=y))}\le MI^f(X;Y),$$ and the equality holds only when $g\in \partial f(K(X=x,Y=y))$. \end{lemma} This lemma follows from Lemma~\ref{lem11} by letting $p=U_{X,Y}$ and $q=V_{X,Y}$. Then we prove that under this payment rule, the mechanism is truthful.
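Before the formal argument, the equilibrium property can be sanity-checked numerically. The sketch below uses a hypothetical binary model (the prior `p_y` and likelihoods `lik_a`, `lik_b` are made-up numbers) and the particular choice $f(x)=x\log x$, so that $f'(x)=\log x+1$ and $f^*(y)=e^{y-1}$; it verifies that, when Bob reports truthfully, Alice's expected reward $\mathbb{E}_{U}[r]-\mathbb{E}_{V}[f^*(r)]$ is maximized by reporting her true posterior.

```python
import numpy as np

# Hypothetical binary model: Y ~ p_y; X_A, X_B conditionally independent given Y.
p_y = np.array([0.4, 0.6])
lik_a = np.array([[0.8, 0.2],      # p(x_a | y), rows y, columns x_a
                  [0.3, 0.7]])
lik_b = np.array([[0.9, 0.1],      # p(x_b | y)
                  [0.4, 0.6]])

joint = np.einsum('y,ya,yb->ab', p_y, lik_a, lik_b)   # p(x_a, x_b)
p_xa, p_xb = joint.sum(axis=1), joint.sum(axis=0)
post_a = (p_y[:, None] * lik_a) / p_xa                # p(y | x_a), columns x_a
post_b = (p_y[:, None] * lik_b) / p_xb                # p(y | x_b), columns x_b

def r(qa, qb):
    # r = f'(sum_y qa(y) qb(y) / p(y)) for f(x) = x log x, i.e. log(.) + 1
    return np.log(np.sum(qa * qb / p_y)) + 1.0

def alice_score(xa, q):
    # Bob truthful: same-task reward minus cross-task penalty f*(r) = exp(r - 1)
    same = sum(joint[xa, xb] / p_xa[xa] * r(q, post_b[:, xb]) for xb in range(2))
    cross = sum(p_xb[xb] * np.exp(r(q, post_b[:, xb]) - 1.0) for xb in range(2))
    return same - cross

for xa in range(2):
    best = alice_score(xa, post_a[:, xa])             # truthful report p(y | x_a)
    for t in np.linspace(0.01, 0.99, 99):             # grid of alternative reports
        assert alice_score(xa, np.array([t, 1.0 - t])) <= best + 1e-9
```

The grid search over reports is crude, but it illustrates the first-order condition $g=f'(K)$ from the lemma.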
For the first term, where Alice's and Bob's predictions refer to the same task, the expected value of $\frac{1}{n} \sum_{i=1}^n r(\tilde{p}^{(i)}_A, \tilde{p}^{(i)}_B)$ is identical to $\mathbb{E}_{U_{X_A,X_B}}\, r(\tilde{p}_A(X_A=x_a),\tilde{p}_B(X_B=x_b))$; for the second term, where the two predictions refer to distinct (hence independent) tasks, the expected value is identical to $\mathbb{E}_{V_{X_A,X_B}}\, f^*(r(\tilde{p}_A(X_A=x_a),\tilde{p}_B(X_B=x_b)))$. Thus, by the lemma above, the maximum expected reward is achieved when $r(\tilde{p}_A(X_A=x_a),\tilde{p}_B(X_B=x_b))=f'(K(x_a,x_b))$. \\ So we only need to prove that truthful reporting yields $r(\tilde{p}_A,\tilde{p}_B) = f'(K(X_A=x_a,X_B=x_b))$. Indeed, \begin{align*} K(x_a,x_b) &=\frac {Pr(x_a,x_b)}{Pr(x_a)\cdot Pr(x_b)}\\ &= \frac {\sum_y {Pr(x_a,x_b\vert y)\cdot Pr(y)}}{Pr(x_a)\cdot Pr(x_b)}\\ &= \sum_y{\frac {{Pr(x_a\vert y)Pr(x_b\vert y)\cdot Pr(y)}}{Pr(x_a)\cdot Pr(x_b)}}\\ &= \sum_y{\frac {{Pr(x_a\vert y)Pr(x_b\vert y)\cdot Pr(y)^2}}{Pr(x_a)\cdot Pr(x_b)\cdot Pr(y)}}\\ &= \sum_y{\frac {Pr(y,x_a)Pr(y,x_b)}{Pr(x_a)\cdot Pr(x_b)\cdot Pr(y)}}\\ &= \sum_y{\frac {Pr(y\vert x_a)Pr(y\vert x_b)}{Pr(y)}}, \end{align*} where the third equality uses the conditional independence of $X_A$ and $X_B$ given $Y$. Thus, under truthful reporting, $$f'\left(\sum_{y} \frac{\tilde{p}_A(y) \tilde{p}_B(y)}{p(y)} \right)=f'(K(x_a,x_b)).$$ Since the maximum in Lemma \ref{lem13} is taken over all functions $g$, reporting the true posterior $p_A(Y=y\vert X_A=x_a)$ yields the largest expected reward, and the mechanism is truthful. \end{proof} \subsection{Single-task common ground mechanism} \begin{theorem} Suppose $X_A$ and $X_B$ are conditionally independent given $Y$, and the designer knows $Y$'s prior distribution $p(Y)$. The following reward rule is truthful: $$ R(\tilde{p}_A, \tilde{p}_B) = \log \sum_y \frac{\tilde{p}_A(y) \tilde{p}_B(y)}{p(y)}.
$$ \end{theorem} \begin{proof} Suppose that Bob honestly reports his posterior, $\tilde{p}_B(y)=p_B(y)$. Then Alice's expected revenue is \begin{align*} Rev_A&=\sum_{x_a,x_b}Pr(x_a,x_b)\log \sum_y \frac{\tilde{p}_A(y) p_B(y)}{p(y)}\\ &=\sum_{x_a,x_b}Pr(x_a,x_b)\log \left(\sum_y \frac{\tilde{p}_A(y) Pr(y,x_b)}{p(y)}\right)-\sum_{x_a,x_b}Pr(x_a,x_b)\log Pr(x_b)\\ &= \sum_{x_a,x_b}Pr(x_a,x_b)\log \sum_y \frac{\tilde{p}_A(y) Pr(y,x_b)}{p(y)}-C\\ &= \sum_{x_a,x_b}Pr(x_a)\cdot Pr(x_b|x_a)\log \sum_y {\tilde{p}_A(y) Pr(x_b|y)}-C, \end{align*} where $C=\sum_{x_a,x_b}Pr(x_a,x_b)\log Pr(x_b)$ does not depend on Alice's report. Define $\tilde{Pr}(x_b|x_a)=\sum_y {\tilde{p}_A(y) Pr(x_b|y)}$. Since $\sum_{x_b}\tilde{Pr}(x_b|x_a)=\sum_{y} {\tilde{p}_A(y)}\sum_{x_b}Pr(x_b|y)=\sum_{y} {\tilde{p}_A(y)}=1$, the function $\tilde{Pr}(\cdot|x_a)$ is a probability distribution over $x_b$. By Gibbs' inequality, $$Rev_A= \sum_{x_a,x_b}Pr(x_a)\cdot Pr(x_b|x_a)\log \tilde{Pr}(x_b|x_a)-C\le \sum_{x_a,x_b}Pr(x_a)\cdot Pr(x_b|x_a)\log Pr(x_b|x_a)-C,$$ with equality exactly when $\tilde{Pr}(x_b|x_a)=Pr(x_b|x_a)$ for all $x_b$ with $Pr(x_a,x_b)>0$. When $\tilde{p}_A(y)=Pr(y|x_a)$, we have \begin{align*} \sum_y{\frac{\tilde{p}_A(y)\cdot Pr(y,x_b)}{Pr(y)}}&=\sum_y{\frac{Pr(y|x_a)\cdot Pr(y,x_b)}{Pr(y)}}\\ &= \sum_y{\frac{Pr(y,x_a)\cdot Pr(y,x_b)}{Pr(y)\cdot Pr(x_a)}}\\ &=\sum_y{\frac{Pr(x_a|y)\cdot Pr(x_b|y)}{ Pr(x_a)}}\cdot Pr(y)\\ &= \sum_y{\frac{Pr(x_a,x_b|y)\cdot Pr(y)}{ Pr(x_a)}}\\ &=\frac{Pr(x_a,x_b)}{Pr(x_a)}\\ &=Pr(x_b|x_a), \end{align*} that is, truthful reporting attains $\tilde{Pr}(x_b|x_a)=Pr(x_b|x_a)$. Hence Alice's expected revenue is maximized when $\tilde{p}_A(y)= Pr(y|x_a)$, and the mechanism is truthful. \end{proof} \end{comment} \section{Missing proof for Lemma~\ref{lem:sens_def}} \label{app:sens_def} \begin{lemma}[Lemma~\ref{lem:sens_def}] When $D_1, \dots, D_n$ are conditionally independent given $\bth$, for any $(D_1,\dots,D_n)$ and $(\tilde{D}_1, \dots, \tilde{D}_n)$, if $ p(\bth|D_i)=p(\bth|\tilde{D}_i)\ \forall i$, then $p(\bth|D_1,\dots,D_n)=p(\bth|\tilde{D}_1, \dots, \tilde{D}_n)$.
\end{lemma} \begin{proof} Suppose $p(\bth|D_i)=p(\bth|\tilde{D}_i)$ for all $i$. Then we have \begin{align*} p(\bth|D_1,D_2,\cdots,D_n)&=\frac{p(D_1,D_2,\cdots,D_n,\bth)}{p(D_1,D_2,\cdots,D_n)}\\ &=\frac{p(D_1,D_2,\cdots,D_n|\bth)\cdot p(\bth)}{p(D_1,D_2,\cdots,D_n)}\\ &=\frac{p(D_1|\bth)\cdot p(D_2|\bth)\cdots p(D_n|\bth)\cdot p(\bth)}{p(D_1,D_2,\cdots,D_n)}\\ &=\frac{p(D_1,\bth)\cdot p(D_2,\bth)\cdots p(D_n,\bth)\cdot p(\bth)}{p(D_1,D_2,\cdots,D_n)\cdot p^n(\bth)}\\ &=\frac{p(\bth|D_1)\cdot p(\bth|D_2)\cdots p(\bth|D_n)\cdot p(D_1)\cdot p(D_2)\cdot \cdots p(D_n)}{ p(D_1,D_2,\cdots,D_n)\cdot p^{n-1}(\bth)}\\ &\propto \frac{p(\bth|D_1)\cdot p(\bth|D_2)\cdots p(\bth|D_n)}{p^{n-1}(\bth)}, \end{align*} where the third equality uses the conditional independence of $D_1,\dots,D_n$ given $\bth$. Similarly, we have $$ p(\bth|\tilde{D}_1,\tilde{D}_2,\cdots,\tilde{D}_n) \propto \frac{p(\bth|\tilde{D}_1)\cdot p(\bth|\tilde{D}_2)\cdots p(\bth|\tilde{D}_n)}{p^{n-1}(\bth)}. $$ Since the analyst obtains each posterior by normalizing these terms, and the terms agree by assumption, we have $$p(\bth|D_1,D_2,\cdots,D_n)=p(\bth|\tilde{D}_1,\tilde{D}_2,\cdots,\tilde{D}_n).$$ \end{proof} \section{One-time data acquisition} \label{app:single} \subsection{An example of applying peer prediction} \label{app:single_trivial} The mechanism is as follows.\\ \begin{algorithm}[H] \SetAlgorithmName{Mechanism}{mechanism}{List of Mechanisms} \begin{algorithmic} \STATE(1) Ask all data providers to report their datasets $\tilde{D}_1, \dots, \tilde{D}_n$. \STATE(2) For every possible $D_{-i}$, calculate the probability $p(D_{-i}|\tilde{D}_{i})$ from the reported $\tilde{D}_i$ and the known likelihoods $p(D_i|\bth)$. \STATE(3) The Brier score for agent $i$ is $s_i=1-\frac{1}{|D_{-i}|}\sum_{D_{-i}}(p(D_{-i}|\tilde{D}_i)-\mathbb{I}[D_{-i}=\tilde{D}_{-i}])^2$,\\ where $\mathbb{I}[D_{-i}=\tilde{D}_{-i}]=1$ if $D_{-i}$ is the same as the reported $\tilde{D}_{-i}$ and 0 otherwise. \STATE(4) The final payment for agent $i$ is $r_i=\frac{B\cdot s_i}{n}$.
\end{algorithmic} \caption{One-time data collecting mechanism using the Brier score.} \label{alg:single_brier} \end{algorithm} This payment function is essentially the mean squared error of the reported distribution on $D_{-i}$. It is based on the Brier score, which was first proposed in~\cite{brier1950verification} and is a well-known bounded proper scoring rule. The payments of the mechanism are always bounded between $0$ and $1$. \begin{theorem} Mechanism~\ref{alg:single_brier} is IR, truthful, budget feasible, and symmetric. \end{theorem} \begin{proof} The symmetry property is easy to verify. Moreover, since the payment for each agent lies in the interval $[0,1]$, the mechanism is budget feasible and IR. We only need to prove truthfulness. Suppose that all agents except $i$ report truthfully. Agent $i$ has true dataset $D_i$ and reports $\tilde{D}_i$. Since in this setting the analyst is able to calculate $p(D_{-i}|D_i)$, if the agent received $s_i$ as his payment, then from agent $i$'s perspective his expected revenue would be \begin{align*} Rev_i'&=\sum_{D_{-i}}p(D_{-i}|D_{i})\cdot \left(1-\sum_{D_{-i}'} (p(D_{-i}'|\tilde{D}_i)-\mathbb{I}[D_{-i}'=D_{-i}])^2 \right)\\ &= -\sum_{D_{-i}}p(D_{-i}|D_{i})\left(\sum_{D_{-i}'}\left( p(D_{-i}'|\tilde{D}_{i})^2 \right)-2 p(D_{-i}|\tilde{D}_i)\right)\\ &=\sum_{D_{-i}}\left(-{p(D_{-i}|\tilde{D}_i)}^2+2p(D_{-i}|\tilde{D}_i)p(D_{-i}|D_{i})\right). \end{align*} Since the function $-x^2+2ax$ is maximized at $x=a$, the revenue $Rev_i'$ is maximized when $p(D_{-i}|\tilde{D}_i)=p(D_{-i}|D_i)$ for all $D_{-i}$. Since the real payment $r_i$ is a linear transformation of $s_i$ and the coefficients are independent of the reported datasets, reporting a dataset with the true posterior still maximizes the agent's revenue, and the mechanism is truthful.
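The maximization step above can also be checked numerically. The following minimal sketch uses a hypothetical three-outcome posterior (an illustration, not part of the proof) and confirms that reporting the true distribution over $D_{-i}$ maximizes the expected Brier payment.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_brier(true_post, reported):
    # E over D_{-i} ~ true_post of (1 - sum_j (reported_j - I[j = D_{-i}])^2)
    return sum(p * (1 - np.sum((reported - np.eye(len(reported))[k]) ** 2))
               for k, p in enumerate(true_post))

true_post = np.array([0.5, 0.3, 0.2])          # hypothetical p(D_{-i} | D_i)
truthful = expected_brier(true_post, true_post)
for _ in range(1000):                          # random alternative reports
    alt = rng.dirichlet(np.ones(3))
    assert expected_brier(true_post, alt) <= truthful + 1e-12
```

Algebraically this is the identity $2\langle p,r\rangle-\Vert r\Vert^2=\Vert p\Vert^2-\Vert p-r\Vert^2$, maximized at $r=p$.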
\end{proof} \subsection{Bounding log-PMI: discrete case} \label{app:single_LR} In this section, we give a method to compute bounds on the $\log$-PMI score when $|\Theta|$ is finite. First we give the upper bound of the PMI. For any $i$ and $D_i\in \mathbb{D}_i(D_{-i})$, we have \begin{align*} PMI(D_i, D_{-i})&\le \max_{i,D_{-i}',D_i'\in \mathbb{D}_i(D_{-i}')}\{ PMI(D_i', D_{-i}')\}\\ &=\max_{i,D_{-i}',D_i'\in \mathbb{D}_i(D_{-i}')}\left\{ \sum_{\bth\in \Theta} \frac{p(\bth|D_i')p(\bth|D_{-i}')}{p(\bth)} \right\}\\ &\le \max_{i,D_i'}\left\{ \sum_{\bth\in \Theta} \frac{p(\bth|D_i')}{\min_{\bth}\{p(\bth)\}} \right\}\\ &\le {\frac{1}{\min_{\bth}\{p(\bth)\}}}. \end{align*} The second inequality holds because $p(\bth|D_{-i}')\le 1$ for every $\bth$ when $\Theta$ is finite, and the last inequality holds because $\sum_\bth p(\bth|D_i')=1$. Since we have assumed that $p(\bth)$ is positive, the term $\frac{1}{\min_{\bth}\{p(\bth)\}}$ can be computed and is finite. Thus we let $R$ be $\log \left({\frac{1}{\min_{\bth}\{p(\bth)\}}}\right)$. Next we calculate a lower bound of the score. For any $i$, $D_{-i}$ and $D_i\in \mathbb{D}_i(D_{-i})$, we have \begin{align}\label{eqn:app_bound_LR} PMI(D_i, D_{-i}) =\sum_{\bth\in \Theta} \frac{p(\bth|D_i)p(\bth|D_{-i})}{p(\bth)} \ge \sum_{\bth\in \Theta} p(\bth|D_i)p(\bth|D_{-i}). \end{align} \begin{claim} \label{clm:app_bound_LR} Let $D = \{d^{(1)}, \dots, d^{(N)}\}$ be a dataset with $N$ data points that are i.i.d. conditioning on $\bth$. Let $\mathcal{D}$ be the support of the data points $d$. Define $$ T = \frac{\max_{\bth\in \Theta} p(\bth)}{\min_{\bth\in \Theta} p(\bth)}, \quad U(\mathcal{D}) = \max_{\bth\in \Theta, d\in \mathcal{D}}\ p(\bth|d)\left/\min_{\bth\in \Theta, d\in \mathcal{D}:p(\bth|d)>0} \ p(\bth|d)\right. . $$ Then we have $$ \frac{\max_{\bth\in \Theta} \ p(\bth|D)}{\min_{\bth: p(\bth|D)>0} p(\bth|D)} \le U(\mathcal{D})^N\cdot T^{N-1}.
$$ \end{claim} \begin{proof} By Lemma~\ref{lem:sens_def}, we have $$ p(\bth|D) \propto \frac{\prod_j p(\bth|d^{(j)})}{p(\bth)^{N-1}}. $$ For a fixed $D$, each factor $p(\bth|d^{(j)})$ varies over $\bth$ (among its non-zero values) by a factor of at most $U(\mathcal{D})$, and each factor $\frac{1}{p(\bth)}$ varies by a factor of at most $T$; normalization does not change the ratio of the maximum to the minimum non-zero value, so it must hold that $$ \frac{\max_{\bth\in \Theta} \ p(\bth|D)}{\min_{\bth: p(\bth|D)>0} p(\bth|D)} \le U(\mathcal{D})^N\cdot T^{N-1}. $$ \end{proof} \begin{claim} \label{clm:app_bound_multi} For any two datasets $D_i$ and $D_j$ with $N_i$ and $N_j$ data points respectively, let $\mathcal{D}_i$ be the support of the data points in $D_i$ and let $\mathcal{D}_j$ be the support of the data points in $D_j$. Then $$ \frac{\max_{\bth\in \Theta} \ p(\bth|D_i,D_j)}{\min_{\bth: p(\bth|D_i, D_j)>0} p(\bth|D_i, D_j)} \le U(\mathcal{D}_i)^{N_i}\cdot U(\mathcal{D}_j)^{N_j} \cdot T^{N_i + N_j -1}. $$ \end{claim} \begin{proof} Again by Lemma~\ref{lem:sens_def}, we have $$ p(\bth|D_i, D_j) \propto \frac{p(\bth|D_i)p(\bth|D_j)}{p(\bth)}. $$ Combining this with Claim~\ref{clm:app_bound_LR}, we prove the statement. \end{proof} Then for any $D_i$, since $\sum_{\bth \in \Theta} p(\bth|D_i) = 1$, by Claim~\ref{clm:app_bound_LR}, $$ \min_{\bth: p(\bth|D_i)>0} p(\bth|D_i) \ge \frac{1}{1 + |\Theta| \cdot U(\mathcal{D}_i)^{N_i}\cdot T^{N_i-1}}\triangleq \eta(\mathcal{D}_i,N_i). $$ And for any $D_{-i}$, since $\sum_{\bth \in \Theta} p(\bth|D_{-i}) = 1$, by Claim~\ref{clm:app_bound_multi}, $$ \min_{\bth: p(\bth|D_{-i})>0} p(\bth|D_{-i}) \ge \frac{1}{1 + |\Theta| \cdot \Pi_{j\neq i} U(\mathcal{D}_j)^{N_j}\cdot T^{\sum_{j\neq i} N_j-1}}\triangleq \eta(\mathcal{D}_{-i},N_{-i}). $$ Finally, for any $i$, $D_{-i}$, and $D_i\in \mathbb{D}_i(D_{-i})$, according to~\eqref{eqn:app_bound_LR}, \begin{align*} PMI(D_i, D_{-i}) \ge \sum_{\bth\in \Theta} p(\bth|D_i)p(\bth|D_{-i}) \ge \eta(\mathcal{D}_i,N_i)\cdot \eta(\mathcal{D}_{-i},N_{-i}). \end{align*} The last inequality holds because $D_i\in \mathbb{D}_i(D_{-i})$, so there must exist $\bth \in \Theta$ such that both $p(\bth|D_i)$ and $p(\bth|D_{-i})$ are non-zero.
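These bounds are easy to check numerically. The sketch below uses a hypothetical model with three parameter values and binary data points, and verifies that the PMI falls between the lower bound $\sum_\bth p(\bth|D_i)p(\bth|D_{-i})$ and the upper bound $1/\min_\bth p(\bth)$.

```python
import numpy as np

prior = np.array([0.2, 0.3, 0.5])                  # hypothetical p(theta), all positive
lik = np.array([[0.1, 0.9],                        # hypothetical p(d | theta), d in {0, 1}
                [0.5, 0.5],
                [0.8, 0.2]])

def posterior(points):
    # p(theta | D) for i.i.d. data points, by Bayes' rule
    w = prior * np.prod(lik[:, points], axis=1)
    return w / w.sum()

def pmi(post_i, post_mi):
    # PMI = sum_theta p(theta|D_i) p(theta|D_{-i}) / p(theta)
    return np.sum(post_i * post_mi / prior)

p_i, p_mi = posterior([0, 1, 1]), posterior([1, 1])
upper = 1.0 / prior.min()                          # so R = log(1 / min_theta p(theta))
lower = np.sum(p_i * p_mi)                         # inequality (eqn:app_bound_LR)
assert lower <= pmi(p_i, p_mi) <= upper
```

The same computation, with the $\eta$ terms in place of `lower`, yields the polynomially computable interval $[L,R]$ used by the mechanism.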
Both $\eta(\mathcal{D}_i,N_i)$ and $\eta(\mathcal{D}_{-i},N_{-i})$ can be computed in polynomial time. Taking the minimum over $i$, we obtain the lower bound for the PMI, and hence the bound $L$ on the $\log$-PMI score. \subsection{Bounding log-PMI: continuous case}\label{app:gaussian} Consider estimating the mean $\mu$ of a univariate Gaussian $\mathcal{N}(x|\mu, \sigma^2)$ with known variance $\sigma^2$. Let $D = \{x_1, \dots, x_N\}$ be the dataset and denote the mean by $\overline{x} = \frac{1}{N} \sum_{j} x_j$. We use the Gaussian conjugate prior, $$ \mu \sim \mathcal{N}(\mu | \mu_0, \sigma_0^2). $$ Then according to~\cite{murphy2007conjugate}, the posterior of $\mu$ is equal to $$ p(\mu|D) = \mathcal{N}(\mu | \mu_N, \sigma_N^2), $$ where the posterior precision $$ \frac{1}{\sigma_N^2} = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2} $$ only depends on the number of data points.
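A quick numeric check of this precision formula and of the PMI interval derived in the next subsection, using the closed-form Gaussian PMI expression; the values of $\sigma_0$, $\sigma$, and $N_{\max}$ below are arbitrary:

```python
import numpy as np

sigma0, sigma = 2.0, 1.0        # hypothetical prior and noise standard deviations
def precision(n):               # posterior precision 1/sigma_N^2 after n points
    return 1.0 / sigma0**2 + n / sigma**2

def pmi(n_i, n_mi):             # PMI(D_i, D_{-i}) for the conjugate Gaussian model
    return (np.sqrt(precision(n_i)) * np.sqrt(precision(n_mi))
            / (np.sqrt(1.0 / sigma0**2) * np.sqrt(precision(n_i + n_mi))))

n_max = 50
lo = (1 + n_max * sigma0**2 / sigma**2) ** -0.5    # claimed lower endpoint
hi = 1 + n_max * sigma0**2 / sigma**2              # claimed upper endpoint
for n_i in range(1, 25):
    for n_mi in range(1, 25):                      # n_i + n_mi <= n_max
        assert lo <= pmi(n_i, n_mi) <= hi
```

Note that the PMI here depends only on the dataset sizes, not on the data values, which is what drives the sensitivity discussion below.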
By Lemma~\ref{lem:multi_comp}, we know that the payment function for the exponential family is of the form $$PMI(D_i, D_{-i}) = \frac{g(\nu_i, \overline{\bm{\tau}}_i) g(\nu_{-i}, \overline{\bm{\tau}}_{-i})}{g(\nu_0, \overline{\bm{\tau}}_0) g(\nu_i +\nu_{-i}-\nu_0, \frac{\nu_{i} \overline{\bm{\tau}}_{i} + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i +\nu_{-i}-\nu_0} )}.$$ The normalization term of a Gaussian with variance $\sigma^2$ is $\frac{1}{\sqrt{2\pi\sigma^2}}$, and the constant $\frac{1}{\sqrt{2\pi}}$ cancels in the ratio, so we have \begin{align*} PMI(D_i, D_{-i}) = \frac{ \sqrt{\frac{1}{\sigma_0^2} + \frac{N_i}{\sigma^2} } \sqrt{\frac{1}{\sigma_0^2} + \frac{N_{-i}}{\sigma^2} }}{\sqrt{\frac{1}{\sigma_0^2}}\sqrt{\frac{1}{\sigma_0^2} + \frac{N_i + N_{-i}}{\sigma^2} }}. \end{align*} When the total number of data points has an upper bound $N_{\max}$, each of the square-root terms is bounded in the interval $$ \left[ \frac{1}{\sigma_0},\ \sqrt{\frac{1}{\sigma_0^2} + \frac{N_{\max}}{\sigma^2} } \ \right]. $$ Therefore $PMI(D_i, D_{-i})$ is bounded in the interval $$ \left[\left(1+N_{\max}\sigma_0^2/\sigma^2\right)^{-1/2},\ 1+N_{\max}\sigma_0^2/\sigma^2 \right]. $$ \subsection{Sensitivity analysis for the exponential family}\label{app:multi_exp_sensitive} Consider again the setting of Appendix~\ref{app:gaussian}: we estimate the mean $\mu$ of a univariate Gaussian $\mathcal{N}(x|\mu, \sigma^2)$ with known variance $\sigma^2$ under the conjugate prior $\mu \sim \mathcal{N}(\mu | \mu_0, \sigma_0^2)$, so that by~\cite{murphy2007conjugate} the posterior is $p(\mu|D) = \mathcal{N}(\mu | \mu_N, \sigma_N^2)$ with precision $$ \frac{1}{\sigma_N^2} = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2}, $$ which only depends on the number of data points.
Since the normalization term $\frac{1}{\sqrt{2\pi\sigma^2}}$ of Gaussian distributions only depends on the variance, the function $h(\cdot)$ defined in~\eqref{eqn:h}, \begin{align*} h_{D_{-i}}(N_i, \overline{x}_i) &= \frac{g(\nu_i, \overline{\bm{\tau}}_i) }{g(\nu_i +\nu_{-i}-\nu_0, \frac{\nu_{i} \overline{\bm{\tau}}_{i} + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i +\nu_{-i}-\nu_0} ) }\\ & =\sqrt{\frac{1}{\sigma_0^2} + \frac{N_i}{\sigma^2} }\left/\sqrt{\frac{1}{\sigma_0^2} + \frac{N_i + N_{-i}}{\sigma^2} }\right. , \end{align*} changes only if the number of data points $N_i$ changes, which means that the mechanism is sensitive to replication and withholding, but not necessarily to other types of manipulation. Now suppose we are estimating the mean $\mu$ of a Bernoulli distribution $Ber(x|\mu)$. Let $D = \{x_1, \dots, x_N\}$ be the data points. Denote by $\alpha = \sum_{i} x_i$ the number of ones and by $\beta = \sum_i (1- x_i)$ the number of zeros. The conjugate prior is the Beta distribution, $$ p(\mu) = \text{Beta}(\mu|\alpha_0, \beta_0) = \frac{1}{B(\alpha_0, \beta_0)} \mu^{\alpha_0-1}(1-\mu)^{\beta_0-1}, $$ where $B(\alpha_0, \beta_0)$ is the Beta function, which for integer parameters equals $$ B(\alpha_0, \beta_0) = \frac{(\alpha_0 -1)! (\beta_0 -1)!}{(\alpha_0 + \beta_0 -1)!}. $$ The posterior of $\mu$ is equal to $$ p(\mu|D) = \text{Beta}(\mu|\alpha_0 + \alpha, \beta_0 + \beta). $$ Then we have \begin{align*} h_{D_{-i}}(\alpha, \beta) & = \frac{B(\alpha_0 + \alpha_i + \alpha_{-i}, \beta_0 + \beta_i + \beta_{-i})}{B(\alpha_0 + \alpha_i, \beta_0 + \beta_i)}\\ & = \frac{(\alpha_0 + \alpha_i + \alpha_{-i} -1)! (\beta_0 + \beta_i + \beta_{-i} -1)! (\alpha_0 + \beta_0 + N_i-1)!}{(\alpha_0+\alpha_i-1)!(\beta_0+\beta_i-1)!(\alpha_0 + \beta_0 + N_i + N_{-i} -1)!}. \end{align*} Define $A_i = \alpha_0 + \alpha_i -1$ and $B_i = \beta_0 + \beta_i -1$. Since $N_i=\alpha_i+\beta_i$ and $N_{-i}=\alpha_{-i}+\beta_{-i}$, we have $$ h_{D_{-i}}(\alpha, \beta)=h_{\alpha_{-i},\beta_{-i}}(A_i, B_i)=\frac{(A_i+\alpha_{-i})!(B_i+\beta_{-i})!(A_i+B_i+1)!}{A_i!B_i!(A_i+B_i+\alpha_{-i}+\beta_{-i}+1)!}. $$ Now we prove that for any two different pairs $(A_i, B_i)$ and $(A_i', B_i')$, there always exists a pair $(\alpha_{-i}',\beta_{-i}')$ among the four pairs $(\alpha_{-i}, \beta_{-i})$, $(\alpha_{-i}+1, \beta_{-i})$, $(\alpha_{-i}, \beta_{-i}+1)$, $(\alpha_{-i}+1, \beta_{-i}+1)$ such that $h_{\alpha_{-i}',\beta_{-i}'}(A_i, B_i)\neq h_{\alpha_{-i}',\beta_{-i}'}(A_i', B_i')$. Suppose this does not hold. Then there exist two pairs $(A_i,B_i)\neq(A_i',B_i')$ such that for each $(\alpha_{-i}',\beta_{-i}')$ among the four pairs, $h_{\alpha_{-i}',\beta_{-i}'}(A_i, B_i)= h_{\alpha_{-i}',\beta_{-i}'}(A_i', B_i')$. Comparing the two cases $(\alpha_{-i}',\beta_{-i}')=(\alpha_{-i}, \beta_{-i})$ and $(\alpha_{-i}+1, \beta_{-i})$, we can derive that \begin{align*} \frac{h_{\alpha_{-i}+1,\beta_{-i}}(A_i, B_i)}{h_{\alpha_{-i},\beta_{-i}}(A_i, B_i)}&=\frac{h_{\alpha_{-i}+1,\beta_{-i}}(A_i', B_i')}{h_{\alpha_{-i},\beta_{-i}}(A_i', B_i')},\\ \frac{A_i+\alpha_{-i}+1}{A_i+B_i+\alpha_{-i}+\beta_{-i}+2}&=\frac{A_i'+\alpha_{-i}+1}{A_i'+B_i'+\alpha_{-i}+\beta_{-i}+2}, \end{align*} which cross-multiplies to $$ (A_i+B_i-A_i'-B_i')(\alpha_{-i}+1)+(A_i'-A_i)(\alpha_{-i}+\beta_{-i}+2)+A_i'B_i-A_iB_i'=0.$$ Replacing $\beta_{-i}$ with $\beta_{-i}+1$, we get $$ (A_i+B_i-A_i'-B_i')(\alpha_{-i}+1)+(A_i'-A_i)(\alpha_{-i}+\beta_{-i}+3)+A_i'B_i-A_iB_i'=0.$$ Subtracting the former equation from the latter, we get $A_i'-A_i=0$. Symmetrically, comparing the cases $(\alpha_{-i}',\beta_{-i}')=(\alpha_{-i}, \beta_{-i})$ and $(\alpha_{-i}, \beta_{-i}+1)$ and replacing $\alpha_{-i}$ with $\alpha_{-i}+1$, we get $B_{i}'-B_i=0$ and thus $(A_i,B_i)=(A_i',B_i')$.
This contradicts the assumption that $(A_i,B_i)\neq (A_i',B_i')$. Therefore, for any two different reports in the Bernoulli setting, at least one of the four values $(\alpha_{-i}, \beta_{-i})$, $(\alpha_{-i}+1, \beta_{-i})$, $(\alpha_{-i}, \beta_{-i}+1)$, $(\alpha_{-i}+1, \beta_{-i}+1)$ of the others' data distinguishes them, so the agent has a strict incentive to report his true posterior. \subsection{Missing proofs} \label{app:single_proofs} \subsubsection{Proof for Theorem~\ref{thm_truthful} and Theorem~\ref{thm_sensitive}} \label{app:single_alpha} \begin{theorem}[Theorem~\ref{thm_truthful}] Mechanism~\ref{alg:single} is IR, truthful, budget feasible, and symmetric. \end{theorem} \newcommand{\bm{\mathcal{D}}}{\bm{\mathcal{D}}} We suppose that the dataset space of agent $i$ is $\bm{\mathcal{D}}_i$. We first give the definitions of several matrices. These matrices are essential for our proofs, but they are unknown to the data analyst. Since the dataset $D_i$ consists of $N_i$ i.i.d. data points drawn according to the data generating matrix $G_i$, we define the prediction matrix $P_i$ of agent $i$ to be a matrix with $|\bm{\mathcal{D}}_i|=|\mathcal{D}|^{N_i}$ rows and $|\Theta|$ columns. Each column corresponds to a $\bth\in \Theta$ and each row corresponds to a possible dataset $D_i\in\bm{\mathcal{D}}_i$; the matrix element in the column corresponding to $\bth$ and the row corresponding to $D_i$ is $p(D_i|\bth)$. Intuitively, the columns of this matrix are the distributions of agent $i$'s dataset conditioned on the parameter $\bth$. Similarly, we define the out-prediction matrix $P_{-i}$ of agent $i$ to be a matrix with $\prod_{j\neq i} |\bm{\mathcal{D}}_j|$ rows and $|\Theta|$ columns. Each column corresponds to a $\bth\in \Theta$ and each row corresponds to a possible dataset $D_{-i}\in \bm{\mathcal{D}}_{-i}$. The element corresponding to $D_{-i}$ and $\bth$ is $p(D_{-i}|\bth)$. In the proof, we also give a lower bound on the sensitiveness coefficient $\alpha$ in terms of these out-prediction matrices.
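For intuition, such a prediction matrix can be assembled directly from single-point likelihoods $p(d|\bth)$. The sketch below (hypothetical likelihoods, a binary data domain, three parameter values) builds the matrix for a dataset of three i.i.d. points and computes its smallest singular value, the quantity $e_i$ appearing in the theorem below.

```python
import itertools
import numpy as np

lik = np.array([[0.1, 0.9],            # hypothetical p(d | theta), rows theta
                [0.6, 0.4],
                [0.8, 0.2]]).T         # transpose: rows d, columns theta

def prediction_matrix(n_points):
    # rows: datasets in D^{n_points}; columns: theta; entry p(D | theta)
    rows = [np.prod(lik[list(d), :], axis=0)
            for d in itertools.product(range(lik.shape[0]), repeat=n_points)]
    return np.array(rows)

P = prediction_matrix(3)               # plays the role of P_{-i} for one other agent
assert np.allclose(P.sum(axis=0), 1.0) # each column is a distribution over datasets
e = np.linalg.svd(P, compute_uv=False).min()
assert e > 0                           # full column rank here, so the smallest
                                       # singular value is positive
```

With these particular (distinct) likelihood columns the matrix has rank $|\Theta|=3$, illustrating the rank condition of the theorem.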
\begin{theorem}[Theorem~\ref{thm_sensitive}] \label{thm_sensitive_app} Mechanism~\ref{alg:single} is sensitive if either condition holds: \begin{itemize} \item [1.] $\forall i$, $Q_{-i}$ has rank $|\Theta|$. \item [2.] $\forall i, \sum_{i'\neq i}{(rank_k(G_{i'})-1)\cdot N_{i'}}+1\ge |\Theta|$. \end{itemize} Moreover, it is $e_i\cdot\frac{B}{n (R-L)}$-sensitive for agent $i$, where $e_{i}$ is the smallest singular value of the matrix $P_{-i}$. \end{theorem} \begin{proof} First, it is easy to verify that the mechanism is budget feasible because $s_i$ is bounded between $L$ and $R$. Let agent $i$'s expected revenue from Mechanism~\ref{alg:single} be $Rev_i$. Then we have $$ Rev_i=\frac{B}{n}\cdot \left(\frac{\sum_{D_{-i}:\, D_i\in \mathbb{D}_i(D_{-i})} p(D_{-i}|D_i) \cdot \log PMI(\tilde{D}_i,D_{-i})-L}{R-L}\right). $$ We consider another revenue $Rev_i'\triangleq\sum_{D_{-i}} p(D_{-i}|D_i)\cdot \log\left(\sum_\bth \frac{p(\bth|\tilde{D}_i)\cdot p(\bth|D_{-i})}{p(\bth)}\right)$, adopting the convention $0\cdot\log 0=0$. Then we have \begin{align*} Rev_i'&=\sum_{D_{-i}} p(D_{-i}|D_i)\cdot \log\left(\sum_\bth \frac{p(\bth|\tilde{D}_i)\cdot p(\bth|D_{-i})}{p(\bth)}\right)\\ &=\sum_{D_{-i}:\, D_i\in \mathbb{D}_i(D_{-i})} p(D_{-i}|D_i)\cdot \log PMI(\tilde{D}_i,D_{-i}) \\ &\quad +\sum_{D_{-i}:\, D_i\notin \mathbb{D}_i(D_{-i})} p(D_{-i}|D_i)\cdot \log PMI(\tilde{D}_i,D_{-i}) \\ &=\sum_{D_{-i}:\, D_i\in \mathbb{D}_i(D_{-i})} p(D_{-i}|D_i)\cdot \log PMI(\tilde{D}_i,D_{-i}) +\sum_{D_{-i}:\, D_i\notin \mathbb{D}_i(D_{-i})} 0\cdot \log 0\\ &= \sum_{D_{-i}:\, D_i\in \mathbb{D}_i(D_{-i})} p(D_{-i}|D_i)\cdot \log PMI(\tilde{D}_i,D_{-i})\\ &=Rev_i\cdot \frac{n}{B}\cdot (R-L) + L. \end{align*} $Rev_i'$ is a linear transformation of $Rev_i$. The coefficients $L$, $R$, and $\frac{n}{B}$ do not depend on $\tilde{D}_i$, and the ratio $\frac{n}{B}\cdot (R-L)$ is positive. Therefore, the optimal report $\tilde{D}_i$ for $Rev_i$ is the same as that for $Rev_i'$.
If a payment rule with revenue $Rev_i'$ is $e_{i}$-sensitive for agent $i$, then Mechanism~\ref{alg:single} is $e_i\cdot\frac{B}{n\cdot (R-L)}$-sensitive. Thus, in the following we prove that the true dataset $D_i$ maximizes the revenue $Rev_i'$, and that the payment rule with revenue $Rev_i'$ is sensitive for agent $i$. \begin{align*} Rev_i'&=\sum_{D_{-i}} p(D_{-i}|D_i)\cdot \log\left(\sum_\bth \frac{p(\bth|\tilde{D}_i)\cdot p(\bth|D_{-i})}{p(\bth)}\right)\\ &= \sum_{D_{-i}} p(D_{-i}|D_i)\cdot \log \left(\sum_\bth \frac{p(\bth|\tilde{D}_i)\cdot p(\bth,D_{-i})}{p(\bth)}\right)-\sum_{D_{-i}} p(D_{-i}|D_i)\cdot \log\left( p(D_{-i}) \right)\\ &= \sum_{D_{-i}} p(D_{-i}|D_i)\cdot \log \left(\sum_\bth \frac{ p(\bth|\tilde{D}_i)\cdot p(\bth,D_{-i})}{p(\bth)}\right)-C. \end{align*} Since the term $C=\sum_{D_{-i}} p(D_{-i}|D_i)\cdot \log\left( p(D_{-i}) \right)$ does not depend on $\tilde{D}_i$, agent $i$ can only affect the term $\sum_{D_{-i}} p(D_{-i}|D_i)\cdot \log \left(\sum_\bth \frac{ p(\bth|\tilde{D}_i)\cdot p(\bth,D_{-i})}{p(\bth)}\right)$. Since \begin{align*} \sum_{D_{-i},\bth} \frac{ p(\bth|\tilde{D}_i)\cdot p(\bth,D_{-i})}{p(\bth)} &= \sum_{\bth}\frac{1}{p(\bth)}\left( \sum_{D_{-i}} p(\bth|\tilde{D}_i)\cdot p(\bth,D_{-i})\right)\\ &= \sum_{\bth}\frac{1}{p(\bth)}\left( p(\bth|\tilde{D}_i)\cdot p(\bth)\right)\\ &= \sum_{\bth}p(\bth|\tilde{D}_i)\\ &=1, \end{align*} we can view the term $\sum_\bth\frac{ p(\bth|\tilde{D}_i)\cdot p(\bth,D_{-i})}{p(\bth)}$ as a probability distribution on the variable $D_{-i}$. Since it depends on $\tilde{D}_i$, we denote it by $\tilde{p}(D_{-i}|\tilde{D}_i)$.
If we fix a distribution $p(\sigma)$, then the distribution $q(\sigma)$ that maximizes $\sum_\sigma p(\sigma)\log q(\sigma)$ is $q=p$ (this still holds under the convention $0\cdot \log 0=0$). When agent $i$ reports truthfully, \begin{align*} \sum_\bth \frac{ p(\bth|D_i)\cdot p(\bth,D_{-i})}{p(\bth)} &=\sum_\bth\frac{ p(D_i,\bth)\cdot p(D_{-i},\bth)}{p(D_i)\cdot p(\bth)} \\ &=\sum_\bth\frac{ p(D_i|\bth)\cdot p(D_{-i},\bth)}{p(D_i)}\\ &=\sum_\bth\frac{ p(D_i|\bth)\cdot p(D_{-i}|\bth)\cdot p(\bth)}{p(D_i)}\\ &=\sum_\bth\frac{ p(D_i,D_{-i},\bth)}{p(D_i)}\\ &=p(D_{-i}|D_{i}), \end{align*} where the second-to-last equality uses the conditional independence of $D_i$ and $D_{-i}$ given $\bth$. Hence $\tilde{p}(D_{-i}|D_i)=p(D_{-i}|D_i)$, so the data provider can always maximize $Rev_i'$ by truthfully reporting $D_i$. This proves the truthfulness of the mechanism. Next, we relate the sensitivity of the mechanism to the out-prediction matrices. When agent $i$ reports $\tilde{D}_i$, the revenue loss relative to truthful reporting is \begin{align*} \Delta_{Rev_i'} &= \sum_{D_{-i}} p(D_{-i}|D_{i})\log p(D_{-i}|D_i)- \sum_{D_{-i}} p(D_{-i}|D_{i})\log \tilde{p}(D_{-i}|\tilde{D}_i)\\ &= \sum_{D_{-i}} p(D_{-i}|D_{i}) \log \frac{p(D_{-i}|D_i)}{\tilde{p}(D_{-i}|\tilde{D}_i)}\\ &=D_{KL}(p\Vert \tilde{p})\\ &\ge \sum_{D_{-i}} | p(D_{-i}|D_{i})-\tilde{p}(D_{-i}|\tilde{D}_i)|^2, \end{align*} where the last step follows from Pinsker's inequality, $D_{KL}(p\Vert \tilde{p})\ge \frac{1}{2}\Vert p-\tilde{p}\Vert_1^2$, together with $\Vert x\Vert_1^2\ge 2\Vert x\Vert_2^2$ for any vector $x$ whose entries sum to zero. Let $\Delta_i \triangleq p(\bth|D_i)-p(\bth|\tilde{D}_i)$ be the posterior difference vector (a $|\Theta|$-dimensional vector). Then we have \begin{align*} \Delta_{Rev_i'} \ge \sum_{D_{-i}} | p(D_{-i}|D_{i})-\tilde{p}(D_{-i}|\tilde{D}_i)|^2 &= \sum_{D_{-i}} \left(\sum_\bth \left(p(\bth|D_i)-p(\bth|\tilde{D}_i)\right)\cdot p(D_{-i}|\bth)\right)^2\\ &=\Vert P_{-i} \Delta_i \Vert ^2.
\end{align*} Since $e_i$ is the smallest singular value of $P_{-i}$, the matrix $P_{-i}^T P_{-i}-e_i^2 I$ is positive semi-definite, and we have \begin{align*} \Vert P_{-i} \Delta_i \Vert ^2&=\Delta_i^T P_{-i}^T P_{-i} \Delta_i\\ &= \Delta_i^T ( P_{-i}^T P_{-i}-e_i^2 I) \Delta_i + e_i^2\, \Delta_i^T \Delta_i \\ &\ge e_i^2\, \Delta_i^T \Delta_i\\ &= e_i^2\, \Vert \Delta_i\Vert^2. \end{align*} Hence the payment rule with revenue $Rev_i'$ is sensitive for agent $i$ whenever $e_i>0$, with a revenue loss of at least $e_i^2\Vert\Delta_i\Vert^2$ for a misreport with posterior difference $\Delta_i$. If every $P_{-i}$ has rank $|\Theta|$, then all singular values of $P_{-i}$ are positive, so $e_i>0$ for all $i$. We have thus proven that if every $P_{-i}$ has rank $|\Theta|$, the mechanism is sensitive. Since $p(\bth|D_{-i})=p(D_{-i}|\bth)\cdot \frac{p(\bth)}{p(D_{-i})}$, we have the matrix equation $$ Q_{-i} = \Lambda^{D_{-i}^{-1}} \cdot P_{-i}\cdot \Lambda^{\bth}, $$ where $\Lambda^{D_{-i}^{-1}}=\begin{bmatrix} \frac{1}{p(D_{-i}^1)}& & &\\ & \frac{1}{p(D_{-i}^2)}& &\\ & & \ddots &\\ & & & \frac{1}{p(D_{-i}^{|\bm{\mathcal{D}}_{-i}|})} \end{bmatrix}$ and $\Lambda^{\bth}=\begin{bmatrix} p(\bth_1)& & &\\ & p(\bth_2)& &\\ & & \ddots &\\ & & & p(\bth_{|\Theta|}) \end{bmatrix}.$\\ Here $p(D_{-i}^j)$ is the probability that the other agents jointly hold the datasets $D_{-i}^j$, and $p(\bth_k)$ is the prior probability of the parameter value $\bth_k$. Both diagonal matrices are well-defined and full-rank. Thus $P_{-i}$ has the same rank as $Q_{-i}$, which proves the first condition. The second sufficient condition follows from \cite{sidiropoulos2000uniqueness} together with condition 1. We first define a matrix $G_i'$ of the same size as $G_i$ whose entries are $p(d_i|\bth)$ rather than $p(\bth|d_i)$.
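The rank argument above (multiplying by invertible diagonal matrices preserves rank) can be checked numerically; the following is a minimal numpy sketch with illustrative sizes and random entries, not a matrix from any concrete model:

```python
import numpy as np

rng = np.random.default_rng(2)

# A stand-in for P_{-i}: rows indexed by realizations of D_{-i},
# columns by theta (sizes are illustrative).
P = rng.random((6, 4))

# Invertible diagonal matrices playing the roles of the two Lambda factors.
D_left = np.diag(rng.uniform(0.1, 1.0, size=6) ** -1)
D_right = np.diag(rng.uniform(0.1, 1.0, size=4))

Q = D_left @ P @ D_right

# Multiplying by invertible diagonal matrices preserves rank,
# so Q_{-i} and P_{-i} have the same rank.
assert np.linalg.matrix_rank(Q) == np.linalg.matrix_rank(P)
```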
For every $i'\in[n]$, the prediction matrix $P_{i'}$ is the columnwise Kronecker product (defined in the lemma below, taken from \cite{sidiropoulos2000uniqueness}) of the $N_{i'}$ data generating matrices. By this lemma, each columnwise Kronecker product with a data generating matrix $G'_{i'}$ increases the k-rank by at least $rank_k(G'_{i'})-1$, until it reaches the cap $|\Theta|$. \begin{lemma} Consider two matrices $\bm{A}=[\bm{a}_1,\bm{a}_2,\cdots,\bm{a}_F]\in \mathbb{R}^{I\times F},\bm{B}=[\bm{b}_1,\bm{b}_2,\cdots,\bm{b}_F]\in \mathbb{R}^{J\times F}$, and let $\bm{A}\odot_c\bm{B}$ be the columnwise Kronecker product of $\bm{A}$ and $\bm{B}$ defined as $$ \bm{A}\odot_c\bm{B}\triangleq\left[\bm{a}_1\otimes\bm{b}_1,\bm{a}_2\otimes\bm{b}_2,\cdots, \bm{a}_F\otimes\bm{b}_F\right], $$ where $\otimes$ stands for the Kronecker product. It holds that $$ rank_k(\bm{A}\odot_c\bm{B})\ge \min\{rank_k(\bm{A})+rank_k(\bm{B})-1,F\}. $$ \end{lemma} Therefore $rank_k(P_{i'})\ge \min\{N_{i'}\cdot (rank_k(G'_{i'})-1)+1,\,|\Theta|\}$. We then need to calculate the k-rank of the out-prediction matrix of each agent $i$ and verify whether it equals $|\Theta|$. The out-prediction matrix of agent $i$ is the columnwise Kronecker product of all the other agents' prediction matrices, so by the same lower bound, $rank_k(P_{-i})\ge \min\{\sum_{i'\neq i}{(rank_k(G'_{i'})-1)\cdot N_{i'}}+1,\,|\Theta|\}$. Hence condition 2 implies that every $P_{-i}$ has k-rank, and therefore rank, $|\Theta|$, so by the first part of the proof Mechanism~\ref{alg:single} is sensitive.
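The columnwise Kronecker product and the k-rank bound in the lemma can be checked numerically on small random matrices. The sketch below (numpy; the brute-force k-rank computation is exponential in the number of columns and only feasible for tiny sizes) is illustrative, not part of the proof:

```python
import numpy as np
from itertools import combinations

def khatri_rao(A, B):
    """Columnwise Kronecker product: column f is kron(a_f, b_f)."""
    assert A.shape[1] == B.shape[1]
    return np.stack([np.kron(A[:, f], B[:, f]) for f in range(A.shape[1])],
                    axis=1)

def k_rank(M):
    """Largest k such that every set of k columns of M is linearly
    independent (brute force over all column subsets)."""
    F = M.shape[1]
    k = 0
    for r in range(1, F + 1):
        if all(np.linalg.matrix_rank(M[:, list(c)]) == r
               for c in combinations(range(F), r)):
            k = r
        else:
            break
    return k

rng = np.random.default_rng(3)
A = rng.random((3, 4))  # generic random matrices: k-rank = min(rows, cols)
B = rng.random((3, 4))

# Lemma: rank_k(A odot_c B) >= min(rank_k(A) + rank_k(B) - 1, F).
lhs = k_rank(khatri_rao(A, B))
rhs = min(k_rank(A) + k_rank(B) - 1, A.shape[1])
assert lhs >= rhs
```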
\end{proof} \subsubsection{Missing Proof for Theorem~\ref{thm:single_sensitive_cont}} When $\Theta \subseteq \mathbb{R}^m$ and a model in the exponential family is used, we prove that the mechanism will be sensitive if and only if for any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, \begin{eqnarray}\label{eqn:app_single_sensi} \Pr_{D_{-i}} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0. \end{eqnarray} We first show that the above condition is equivalent to requiring that for any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, \begin{eqnarray} \label{eqn:app_single_sen} \Pr_{D_{-i}|D_i} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0, \end{eqnarray} where $D_{-i}$ is drawn from $p(D_{-i}|D_i)$ rather than $p(D_{-i})$. This is because, by conditional independence of the datasets, for any event $\mathcal{E}$, we have $$ \Pr_{D_{-i}|D_i}[\mathcal{E}] = \int_{\bth\in \Theta} p(\bth|D_i) \Pr_{D_{-i}|\bth}[\mathcal{E}] \, d\bth $$ and $$ \Pr_{D_{-i}}[\mathcal{E}] = \int_{\bth\in \Theta} p(\bth) \Pr_{D_{-i}|\bth}[\mathcal{E}] \, d\bth. $$ Since both $p(\bth)$ and $p(\bth|D_i)$ are strictly positive (they are exponential-family densities), it holds that $$ \Pr_{D_{-i}|D_i}[\mathcal{E}] > 0 \ \Longleftrightarrow \ \Pr_{D_{-i}}[\mathcal{E}] > 0. $$ Therefore \eqref{eqn:app_single_sensi} is equivalent to \eqref{eqn:app_single_sen}, and we only need to show that the mechanism is sensitive if and only if \eqref{eqn:app_single_sen} holds.
When we're using a (canonical) model in exponential family, the prior $p(\bth)$ and the posteriors $p(\bth|D_i), p(\bth|D_{-i})$ can be represented in the standard form~\eqref{eqn:exp_fam_prior}, \begin{eqnarray*} & p(\bth) = \mathcal{P}(\bth| \nu_0, \overline{\bm{\tau}}_0),\\ & p(\bth|D_i) = \mathcal{P}\big(\bth| \nu_i, \overline{\bm{\tau}}_i\big),\\ & p(\bth|D_{-i}) = \mathcal{P}\big(\bth| \nu_{-i}, \overline{\bm{\tau}}_{-i}\big),\\ & p(\bth|\tilde{D}_i) = \mathcal{P}\big(\bth| \nu_{i}', \overline{\bm{\tau}}_{i}'\big), \end{eqnarray*} where $\nu_0, \overline{\bm{\tau}}_0$ are the parameters for the prior $p(\bth)$, $\nu_i, \overline{\bm{\tau}}_i$ are the parameters for the posterior $p(\bth|D_i)$, $\nu_{-i}, \overline{\bm{\tau}}_{-i}$ are the parameters for the posterior $p(\bth|D_{-i})$, and $ \nu_{i}', \overline{\bm{\tau}}_{i}'$ are the parameters for $p(\bth|\tilde{D}_i)$. From the proof for Theorem~\ref{thm_truthful}, we know that the difference between the expected score of reporting $D_i$ and the expected score of reporting $\tilde{D}_i \neq D_i$ is equal to $$ \Delta_{Rev} = D_{KL}(p(D_{-i}|D_i)\Vert p(D_{-i}|\tilde{D}_i)). $$ Therefore if $p(D_{-i}|\tilde{D}_i)$ differs from $p(D_{-i}|D_i)$ with non-zero probability, that is, \begin{eqnarray} \label{eqn:app_single_sc} \Pr_{D_{-i}|D_i}[ p(D_{-i}|D_i) \neq p(D_{-i}|\tilde{D}_i)] > 0, \end{eqnarray} then $\Delta_{Rev} > 0$. By Lemma~\ref{lem12} and Lemma~\ref{lem:exp_int}, $$ p(D_{-i}|D_i) = \int_{\bth \in \Theta} \frac{p(\bth|D_i)p(\bth|D_{-i})}{p(\bth)}\, d\bth = \frac{g(\nu_i, \overline{\bm{\tau}}_i) g(\nu_{-i}, \overline{\bm{\tau}}_{-i})}{g(\nu_0, \overline{\bm{\tau}}_0) g(\nu_i +\nu_{-i}-\nu_0, \frac{\nu_{i} \overline{\bm{\tau}}_{i} + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i +\nu_{-i}-\nu_0} )}. 
$$ $$ p(D_{-i}|\tilde{D}_i) = \int_{\bth \in \Theta} \frac{p(\bth|\tilde{D}_i)p(\bth|D_{-i})}{p(\bth)}\, d\bth = \frac{g(\nu_i', \overline{\bm{\tau}}_i') g(\nu_{-i}, \overline{\bm{\tau}}_{-i})}{g(\nu_0, \overline{\bm{\tau}}_0) g(\nu_i' +\nu_{-i}-\nu_0, \frac{\nu_{i}' \overline{\bm{\tau}}_{i}' + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i' +\nu_{-i}-\nu_0} )}. $$ Therefore \eqref{eqn:app_single_sc} is equivalent to $$ \Pr_{D_{-i}|D_i}[h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i) \neq h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i')] > 0. $$ Therefore if for all $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, we have $$ \Pr_{D_{-i}|D_i}[h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i) \neq h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i')] > 0, $$ then reporting any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ will lead to a strictly lower expected score, which means the mechanism is sensitive. To prove the other direction, if the above condition does not hold, i.e., there exists $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ with $$ \Pr_{D_{-i}|D_i} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] = 0, $$ then reporting $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ will give the same expected score as truthfully reporting $(\nu_i, \overline{\bm{\tau}}_i)$, which means that the mechanism is not sensitive. \begin{comment} \section{Multiple-time data collecting} \label{app:multi} \paragraph{Proof of Theorem~\ref{thm:multi_main}.} It is easy to verify that the mechanism is IR, budget feasible and symmetric. We prove the truthfulness as follows. Let's look at the payment for day $t$. At day $t$, data provider $i$ reports a dataset $\tilde{D}_i^{(t)}$. 
Assuming that all other data providers truthfully report $D^{(t)}_{-i}$, data provider $i$'s expected payment is decided by his expected score \begin{align} \label{eqn:e_payment} &\mathbb{E}_{D_{-i}^{(t)}, D_{-i}^{(t+1)}|D_i^{(t)}} [s_i] \notag\\ = &\mathbb{E}_{D_{-i}^{(t)}|D_i^{(t)}} f'\left( v(\tilde{\q}_i ,\ p(\bth|D_{-i}^{(t)})) \right) - \mathbb{E}_{D_{-i}^{(t+1)}}f^* \left( f'(v(\tilde{\q}_i ,\ p(\bth|D_{-i}^{(t+1)})))\right). \end{align} The second expectation is taken over the marginal distribution $p( D_{-i}^{(t+1)})$ without conditioning on $ D_i^{(t)}$ because $D^{(t+1)}$ is independent of $D^{(t)}$, so we have $p( D_{-i}^{(t+1)} | D_i^{(t)}) = p( D_{-i}^{(t+1)})$. We then use Lemma~\ref{lem11} to get an upper bound of the expected score~\eqref{eqn:e_payment} and show that truthfully reporting $D_i$ achieves the upper bound. We apply Lemma~\ref{lem11} to two distributions of $D_{-i}$: the distribution of $D_{-i}$ conditioned on $D_i$, $p( D_{-i} | D_i)$, and the marginal distribution $p( D_{-i})$. Then we have \begin{align} \label{eqn:f_div_multi} D_f( p( D_{-i} | D_i), p( D_{-i})) \ge \sup_{g \in \mathcal{G}} \mathbb{E}_{D_{-i}|D_i}[g(D_{-i})] - \mathbb{E}_{D_{-i}} [f^*(g(D_{-i}))], \end{align} where $f$ is the given convex function and $\mathcal{G}$ is the set of all real-valued functions of $D_{-i}$. The supremum is achieved, and only achieved, at the function $g$ with \begin{align} \label{eqn:f_div_max} g(D_{-i}) = f'\left( \frac{p( D_{-i} | D_i)}{p(D_{-i})} \right) \text{ for all } D_{-i}. \end{align} Consider function $g_{\tilde{\q}_i}(D_{-i}) = f'(v(\tilde{\q}_i, p(\bth|D_{-i})))$.
Then \eqref{eqn:f_div_multi} gives an upper bound of the expected score~\eqref{eqn:e_payment} as \begin{align*} D_f( p( D_{-i} | D_i), p( D_{-i})) &\ge \mathbb{E}_{D_{-i}|D_i}[g_{\tilde{\q}_i}(D_{-i})] - \mathbb{E}_{D_{-i}} f^*(g_{\tilde{\q}_i}(D_{-i}))\\ & = \mathbb{E}_{D_{-i}|D_i}[f'(v(\tilde{\q}_i, p(\bth|D_{-i})))] - \mathbb{E}_{D_{-i}} [f^*(f'(v(\tilde{\q}_i, p(\bth|D_{-i}))))]. \end{align*} By~\eqref{eqn:f_div_max}, the upper bound is achieved only when \begin{align*} g_{\tilde{\q}_i}(D_{-i}) = f'\left( \frac{p( D_{-i} | D_i)}{p(D_{-i})} \right) \text{ for all } D_{-i}, \end{align*} that is \begin{align} \label{eqn:opt_cond} f'(v(\tilde{\q}_i, p(\bth|D_{-i}))) = f'\left( \frac{p( D_{-i} | D_i)}{p(D_{-i})} \right) \text{ for all } D_{-i}. \end{align} Then it is easy to prove the truthfulness. Truthfully reporting $D_i$ achieves~\eqref{eqn:opt_cond} because by Lemma~\ref{lem12}, for all $D_i$ and $D_{-i}$, \begin{align*} v(p(\bth|D_i), p(\bth|D_{-i})) = \frac{p(D_i, D_{-i})}{p(D_i) p(D_{-i})} = \frac{p(D_{-i}|D_i)}{p(D_{-i})}. \end{align*} \paragraph{Proof of Theorem~\ref{thm:multi_sensitive}.} We then prove the sensitivity. For discrete and finite-size $\Theta$, we prove that when $f$ is strictly convex and $\bm{Q}_{-i}$ has rank $|\Theta|$, the mechanism is sensitive. When $f$ is strictly convex, $f'$ is a strictly increasing function. Then condition~\eqref{eqn:opt_cond} is equivalent to \begin{align} \label{eqn:opt_cond_strict} v(\tilde{\q}_i, p(\bth|D_{-i})) = \frac{p( D_{-i} | D_i)}{p(D_{-i})} \ \text{ for all } D_{-i}. \end{align} We show that when matrix $\bm{Q}_{-i}$ has rank $|\Theta|$, $\tilde{\q}_i = p(\bth|D_i)$ is the only solution of~\eqref{eqn:opt_cond_strict}, which means that the payment rule is sensitive. 
By definition of $v(\cdot)$, $$ v(\tilde{\q}_i, p(\bth|D_{-i})) = \sum_{\bth \in \Theta} \frac{\tilde{q}_i(\bth) p(\bth|D_{-i})}{p(\bth)} = (\bm{Q}_{-i} \bm{\Lambda} \tilde{\q}_i)_{D_{-i}} $$ where $\bm{\Lambda}$ is the $|\Theta| \times |\Theta|$ diagonal matrix with $1/p(\bth)$ on the diagonal. Then if $\tilde{\q}_i =p(\bth |D_i)$ and $\tilde{\q}_i =\q$ are both solutions of~\eqref{eqn:opt_cond_strict}, we must have $$ \bm{Q}_{-i} \bm{\Lambda} p(\bth |D_i) = \bm{Q}_{-i} \bm{\Lambda} \bm{q} \quad \Longrightarrow \quad \bm{Q}_{-i} \bm{\Lambda} (p(\bth |D_i) - \bm{q}) = 0. $$ Since $\bm{Q}_{-i} \bm{\Lambda}$ must have rank $|\Theta|$, which means that the columns of $\bm{Q}_{-i} \bm{\Lambda}$ are linearly independent, we must have $$ p(\bth |D_i) - \bm{q} = 0, $$ which completes our proof of sensitivity for finite-size $\Theta$. \paragraph{Proof of Theorem~\ref{thm:multi_sensitive_cont}.} When $\Theta \subseteq \mathbb{R}^m$ and a model in the exponential family is used, we prove that when $f$ is strictly convex, the mechanism will be sensitive if and only if for any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, \begin{eqnarray}\label{eqn:app_multi_sensi} \Pr_{D_{-i}} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0. \end{eqnarray} We first show that the above condition is equivalent to that for any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, \begin{eqnarray} \label{eqn:app_multi_sen} \Pr_{D_{-i}|D_i} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0, \end{eqnarray} where $D_{-i}$ is drawn from $p(D_{-i}|D_i)$ but not $p(D_{-i})$. 
This is because, by conditional independence of the datasets, for any event $\mathcal{E}$, we have $$ \Pr_{D_{-i}|D_i}[\mathcal{E}] = \int_{\bth\in \Theta} p(\bth|D_i) \Pr_{D_{-i}|\bth}[\mathcal{E}] \, d\bth $$ and $$ \Pr_{D_{-i}}[\mathcal{E}] = \int_{\bth\in \Theta} p(\bth) \Pr_{D_{-i}|\bth}[\mathcal{E}] \, d\bth. $$ Since both $p(\bth)$ and $p(\bth|D_i)$ are always positive because they are in exponential family, it should hold that $$ \Pr_{D_{-i}|D_i}[\mathcal{E}] > 0 \ \Longleftrightarrow \ \Pr_{D_{-i}}[\mathcal{E}] > 0. $$ Therefore \eqref{eqn:app_multi_sensi} is equivalent to \eqref{eqn:app_multi_sen}, and we only need to show that the mechanism is sensitive if and only if \eqref{eqn:app_multi_sen} holds. We then again apply Lemma~\ref{lem11}. By Lemma~\ref{lem11} and the strict convexity of $f$, $\tilde{\q}_i$ achieves the supremum if and only if $$ v(\tilde{\q}_i, p(\bth|D_{-i})) = \frac{p(D_{-i}|D_i)}{p(D_{-i})} \text{ for all } D_{-i}. $$ By the definition of $v$ and Lemma~\ref{lem12}, the above condition is equivalent to \begin{eqnarray} \label{eqn:multi_exp_cond} \int_{\bth\in \Theta} \frac{\tilde{\q}_i(\bth)p(\bth|D_{-i})}{p(\bth)} \,d\bth = \int_{\bth\in \Theta} \frac{p(\bth|D_i)p(\bth|D_{-i})}{p(\bth)}\, d\bth \quad \text{ for all } D_{-i}. 
\end{eqnarray} When we're using a (canonical) model in exponential family, the prior $p(\bth)$ and the posteriors $p(\bth|D_i), p(\bth|D_{-i})$ can be represented in the standard form~\eqref{eqn:exp_fam_prior}, \begin{eqnarray*} & p(\bth) = \mathcal{P}(\bth| \nu_0, \overline{\bm{\tau}}_0),\\ & p(\bth|D_i) = \mathcal{P}\big(\bth| \nu_i, \overline{\bm{\tau}}_i\big),\\ & p(\bth|D_{-i}) = \mathcal{P}\big(\bth| \nu_{-i}, \overline{\bm{\tau}}_{-i}\big),\\ & \tilde{\q}_i = \mathcal{P}\big(\bth| \nu_{i}', \overline{\bm{\tau}}_{i}'\big) \end{eqnarray*} where $\nu_0, \overline{\bm{\tau}}_0$ are the parameters for the prior $p(\bth)$, $\nu_i, \overline{\bm{\tau}}_i$ are the parameters for the posterior $p(\bth|D_i)$, $\nu_{-i}, \overline{\bm{\tau}}_{-i}$ are the parameters for the posterior $p(\bth|D_{-i})$, and $ \nu_{i}', \overline{\bm{\tau}}_{i}'$ are the parameters for $\tilde{\q}_i$. Then by Lemma~\ref{lem:exp_int}, the condition that $\tilde{\q}_i$ achieves the supremum~\eqref{eqn:multi_exp_cond} is equivalent to \begin{eqnarray} \frac{g(\nu_i', \overline{\bm{\tau}}_i') }{ g(\nu_i' +\nu_{-i}-\nu_0, \frac{\nu_{i}' \overline{\bm{\tau}}_{i}' + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i' +\nu_{-i}-\nu_0} )} = \frac{g(\nu_i, \overline{\bm{\tau}}_i) }{ g(\nu_i +\nu_{-i}-\nu_0, \frac{\nu_{i} \overline{\bm{\tau}}_{i} + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i +\nu_{-i}-\nu_0} )}, \ \text{ for all }D_{-i}. \end{eqnarray} which, by our definition of $h(\cdot)$, is just $$ h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') = h_{D_{-i}}(\nu_{i}, \overline{\bm{\tau}}_{i}), \quad \text{ for all } D_{-i}. $$ Now we are ready to prove Theorem~\ref{thm:multi_sensitive_cont}. 
Since \eqref{eqn:app_multi_sensi} is equivalent to \eqref{eqn:app_multi_sen}, we only need to show that the mechanism is sensitive if and only if for all $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, \begin{eqnarray*} \Pr_{D_{-i}|D_i} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0. \end{eqnarray*} If the above condition holds, then $\tilde{\q}_i$ with parameters $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ should have a non-zero loss in the expected score~\eqref{eqn:e_payment} compared to the optimal solution $p(\bth|D_i)$ with parameters $(\nu_i, \overline{\bm{\tau}}_i)$, which means that the mechanism is sensitive. For the other direction, if the condition does not hold, i.e., there exists $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ with $$ \Pr_{D_{-i}|D_i} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] = 0, $$ then reporting $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$ will give the same expected score as truthfully reporting $(\nu_i, \overline{\bm{\tau}}_i)$, which means that the mechanism is not sensitive. \paragraph{Proof of Theorem~\ref{thm:multi-only-if}} In the proof of Theorem~\ref{thm:multi_sensitive}, we have proven that to strictly maximize the payment, agent $i$ needs to report $\tilde{q}_i$ such that \begin{equation} v(\tilde{\q}_i, p(\bth|D_{-i})) = (\bm{Q}_{-i} \bm{\Lambda} \tilde{\q}_i)_{D_{-i}}=\frac{p( D_{-i} | D_i)}{p(D_{-i})} \ \text{ for all } D_{-i}.\label{eqn:multi_oif} \end{equation} Obviously, $\tilde{\q}_i=p(\bth|D_i)$ is a solution to~\eqref{eqn:multi_oif}. If the rank of $\bm{Q}_{-i}$ is smaller than $|\Theta|$, then the rank of the matrix $\bm{Q}_{-i} \bm{\Lambda}$ is also smaller than $|\Theta|$. We append a row of all ones to the matrix $\bm{Q}_{-i} \bm{\Lambda}$ to obtain a new matrix $\bm{Q}'$.
The rank of $\bm{Q}'$ is still less than $|\Theta|$, so there is at least one non-zero solution $\bm{\Delta}$ to the equation $\bm{Q'}\bm{\Delta}=0$. Multiplying $\bm{\Delta}$ by a small coefficient and adding it to the original solution $p(\bth|D_i)$ yields a new solution $\q^*$ to the equation. Since the posterior $p(\bth|D_{i})$ is positive and all the posteriors are admissible, the agent can achieve the maximal payment by reporting a dataset whose posterior is $\q^*$. The mechanism is not sensitive. \end{comment} \section{Discussion} Our work leaves some immediate open questions. Our one-time data acquisition mechanism requires a lower bound and an upper bound of PMI, which may be difficult to find for some models in the exponential family. Can we find a mechanism that works for any data distribution, just as our multi-time data acquisition mechanism does? Another interesting direction is to design stronger mechanisms with stronger sensitivity guarantees. Finally, our method incentivizes truthful reporting, but it is not guaranteed that datasets that give more accurate posteriors will receive higher payments (in expectation). It would be desirable if the mechanism could have this property as well. An observation of our mechanism also raises an important issue of adversarial attacks for data acquisition. Our mechanism guarantees that the data holders who have some data will truthfully report the data at the equilibrium. But an adversary without any real data can submit some fake data and get a positive payoff. The situation is even worse if the adversary can create multiple fake accounts to submit data. This is tied to the individual rationality guarantee that we placed on our mechanism, which requires the payments to always be non-negative and which is generally considered an important property for incentivizing desirable behavior of strategic agents.
However, this observation suggests that future work needs to carefully explore the tradeoff between guarantees for strategic agents and guarantees for adversarial agents in data acquisition problems. \newpage \section*{Broader Impact} The results in this work are mainly theoretical. They contribute to the ongoing efforts on encouraging data sharing, ensuring data quality, and distributing the value generated from data back to data contributors. \begin{ack} This work is supported by the National Science Foundation under grants CCF-1718549 and IIS-2007887. \end{ack} \section{Introduction} Data has been the fuel of the success of machine learning and data science, which is becoming a major driving force for technological and economic growth. An important question is how to acquire high-quality data to enable learning and analysis when data are private possessions of data providers. Naively, we could issue a constant payment to data providers in exchange for their data. But data providers can then report more or less data than they actually have, or even misreport the values of their data, without affecting their received payments. Alternatively, if we have a test dataset, we could reward data providers according to how well the model trained on their reported data performs on the test data. However, if the test dataset is biased, this could potentially incentivize data providers to bias their reported data toward the test set, which will limit the value of the acquired data for other learning or analysis tasks. Moreover, a test dataset may not even be available in many settings. In this work, we explore the design of reward mechanisms for acquiring high-quality data from multiple data providers when the data buyer does not have access to a test dataset. The ultimate goal is that, with the designed mechanisms, strategic data providers will find that truthfully reporting their possessed datasets is their best action and that manipulation will lead to lower expected rewards.
To make the mechanisms practical, we also require our mechanisms to always have non-negative and bounded payments, so that data providers will find it beneficial to participate (a.k.a. individual rationality) and the data buyer can afford the payments. In a Bayesian paradigm where data are generated independently conditioned on some unknown parameters, we design mechanisms for two settings: (1) data are acquired only once, and (2) data are acquired repeatedly and each day's data are independent of the previous days' data. For both settings, our mechanisms guarantee that truthfully reporting the datasets is always an equilibrium. For some models of data distributions, data providers in our mechanisms receive strictly lower rewards in expectation if their reported dataset leads to an inaccurate prediction of the underlying parameters, a property we call \emph{sensitivity}.\footnote{This means that a data provider can report a different dataset without changing his reward as long as the dataset leads to the same prediction for the underlying parameters as his true dataset.} While sensitivity does not strictly discourage manipulations of datasets that do not change the prediction of the parameters, it is a significant step toward achieving strict incentives for truthfully reporting one's dataset, an ideal goal, especially because finding a manipulation that does not affect the prediction of the parameters can be difficult. Our mechanisms guarantee IR and budget feasibility for certain underlying distributions in the first setting and for any underlying distribution in the second setting. Our mechanisms are built upon recent developments~\cite{kong2019information, kong2018water} in the peer prediction literature.
The insight is that if we reward a data provider with the mutual information \cite{kong2019information} between his data and other providers' data, then by the data processing inequality, if other providers report their data truthfully, this data provider can only decrease the mutual information, and hence his reward, by manipulating his dataset. We extend the peer prediction method developed by \cite{kong2018water} to the data acquisition setting, and further guarantee IR and budget feasibility. One of our major technical contributions is the explicit sensitivity guarantee of the peer-prediction style mechanisms, which is absent in previous work. \section{Related Work} The problem of purchasing data from people has been investigated with different focuses, e.g. privacy concerns~\cite{GR11,FL12, GLRS14, NVX14, CIL15,waggoner2015market}, effort and cost of data providers~\cite{Roth2012,CDP15,Yiling15,Chen2018,zheng2017active,chen2019prior}, and reward allocation~\cite{ghorbani2019data,agarwal2019marketplace}. Our work is the first to consider rewarding data without (good) test data that can be used to evaluate the quality of reported data. Similar to our setting, \cite{ghorbani2019data,agarwal2019marketplace} consider paying multiple data providers in a machine learning task. They use a test set to assess the contribution of subsets of data and then propose a fair measurement of the value of each data point in the dataset, based on the Shapley value from game theory. Neither of these works formally considers the incentive compatibility of the payment allocation. \cite{waggoner2015market} proposes a market framework that purchases hypotheses for a machine learning problem when the data is distributed among multiple agents. Again, they assume that the market has access to some true samples and that the participants are paid their incremental contributions evaluated on these true samples.
Besides, there is a small literature (see~\cite{fang2007putting} and subsequent work) on aggregating datasets using scoring rules that also considers signal distributions in exponential families. The main techniques of this work come from the literature on \emph{peer prediction}~\cite{miller2005eliciting,prelec2004bayesian,dasgupta2013crowdsourced,frongillo2017geometric,shnayder2016informed,kong2019information,kong2018water,liu2018surrogate, kong2020dominantly}. Peer prediction is the problem of information elicitation without verification. The participants receive correlated signals of an unknown ground truth and the goal is to elicit the true signals from the participants. In our problem, a dataset can be viewed as a signal of the ground truth. What makes our problem more challenging than the standard peer prediction problem is that (1) the signal space is much larger and (2) the correlation between signals is more complicated. Standard peer prediction mechanisms either require full knowledge of the underlying signal distribution, or make assumptions on the signal distribution that are not applicable to our problem. \cite{kong2018water} applies the peer prediction method to the co-training problem, in which two participants are asked to submit forecasts of latent labels in a machine learning problem. Our work is built upon the main insights of \cite{kong2018water}. We discuss the differences between our model and theirs in the model section, and show how their techniques are applied in the result sections. Our work is also related to multi-view learning (see~\cite{xu2013survey} for a survey), but we focus on data acquisition rather than the machine learning methods used on the (multi-view) data. \section{Model} A data analyst wants to gather data for some future statistical estimation or machine learning tasks. There are $n$ data providers.
The $i$-th data provider holds a dataset $D_i$ consisting of $N_i$ data points $d_i^{(1)}, \dots, d_i^{(N_i)}$ with support $\mathcal{D}_i$. The data generation follows a standard Bayesian process. For each dataset $D_i$, the data points $d_i^{(j)} \in D_i$ are i.i.d. samples conditioned on some unknown parameters $\bth \in \Theta$. Let $p(\bth, D_1, \dots, D_n)$ be the joint distribution of $\bth$ and the $n$ data providers' datasets. We consider two types of spaces for $\Theta$ in this paper: (1) $\bth$ has finite support, i.e., $|\Theta| = m$ is finite, and (2) $\bth$ has continuous support, i.e., $\bth \in \mathbb{R}^m$ and $\Theta \subseteq \mathbb{R}^m$. For the case of continuous support, to alleviate computational issues, we consider a widely used class of distributions, an \emph{exponential family}. The data analyst's goal is to incentivize the data providers to give their true datasets with a budget $B$. She needs to design a payment rule $r_i(\tilde{D}_1, \dots, \tilde{D}_n)$ for $i\in[n]$ that decides how much to pay data provider $i$ according to all the reported datasets $\tilde{D}_1, \dots, \tilde{D}_n$. The payment rule should ideally incentivize truthful reporting, that is, $\tilde{D}_i = D_i$ for all $i$. Before we formally define the desirable properties of a payment rule, we note that the analyst has to leverage the correlation between people's data to distinguish a misreported dataset from a true dataset, because all she has access to is the reported datasets. To make the problem tractable, we thus make the following assumption about the data correlation: the parameters $\bth$ contain all the mutual information between the datasets. More formally, the datasets are independent conditioned on $\bth$. \begin{assumption} $D_1, \dots, D_n$ are independent conditioned on $\bth$, $$ p(D_1, \dots, D_n | \bth) = p(D_1|\bth) \cdots p(D_n|\bth).
$$ \end{assumption} This is certainly not an assumption that holds for arbitrarily chosen parameters $\bth$ and arbitrary datasets. One can easily find cases where the datasets are correlated through parameters other than $\bth$. So the data analyst needs to carefully decide what to include in $\bth$ and $D_i$, by either expanding $\bth$ to include all relevant parameters or reducing the content of $D_i$ to exclude all redundant data entries that can cause extra correlations. \begin{example} \label{exm:linear} Consider the linear regression model where provider $i$'s data points $d_i^{(j)} = (\z_i^{(j)}, y_i^{(j)})$ consist of a feature vector $\z_i^{(j)}$ and a label $y_i^{(j)}$. We have a linear model $$ y_i^{(j)} = \bth^T \z_i^{(j)} + \varepsilon_i^{(j)}. $$ Then the datasets $D_1, \dots, D_n$ will be independent conditioned on $\bth$ as long as (1) different data providers draw their feature vectors independently, i.e., $\z_1^{(j_1)}, \dots, \z_n^{(j_n)}$ are independent for all $j_1\in [N_1], \dots, j_n\in[N_n]$, and (2) the noises are independent. \end{example} We further assume that the data analyst has some insight about the data generation process. \begin{assumption} The data analyst possesses a commonly accepted prior $p(\bth)$ and a commonly accepted model of the data generating process, so that she can compute the posterior $p(\bth| D_i),\, \forall i, D_i$. \end{assumption} When $|\Theta|$ is finite, $p(\bth| D_i)$ can be computed as a function of $ p(\bth|d_i)$ using the method in Appendix~\ref{app:sens_def}. For a model in the exponential family, $p(\bth|D_i)$ can be computed as in Definition~\ref{def:exp_conj}. Note that we do not always require the data analyst to know the whole distribution $p(D_i | \bth)$; it suffices for the data analyst to have the necessary information to compute $p(\bth | D_i)$. \begin{example} \label{exm:linear_reg} Consider the linear regression model in Example~\ref{exm:linear}.
We use $\z_i$ to represent all the features in $D_i$ and use $\y_i$ to represent all the labels in $D_i$. If the features $\z_i$ are independent of $\bth$, the data analyst does not need to know the distribution of $\z_i$. It suffices to know $p(\y_i | \z_i, \bth)$ and $p(\bth)$ to obtain $p(\bth|D_i)$ because \begin{align*} p(\bth|(\z_i, \y_i))& \propto p((\z_i,\y_i)|\bth) p(\bth) = p(\y_i|\z_i, \bth) p(\z_i|\bth) p(\bth) = p(\y_i|\z_i, \bth) p(\z_i) p(\bth) \\ & \propto p(\y_i|\z_i, \bth) p(\bth). \end{align*} \end{example} Finally, we assume that the identities of the providers can be verified. \begin{assumption} The data analyst can verify the data providers' identities, so one data provider can only submit one dataset and get one payment. \end{assumption} We now formally introduce some desirable properties of a payment rule. We say that a payment rule is \emph{truthful} if reporting true datasets is a weak equilibrium, that is, when the others report their true datasets, it is also (weakly) optimal for a provider to report his true dataset (based on his own belief). \begin{definition}[Truthfulness] Let $D_{-i}$ be the datasets of all providers except $i$. A payment rule $\mathbf{r}(D_1, \dots, D_n)$ is truthful if: for any (commonly accepted model of) underlying distribution $p(\bth, D_1, \dots, D_n)$, for every data provider $i$ and any realization of his dataset $D_i$, when all other data providers truthfully report $D_{-i}$, truthfully reporting $D_i$ leads to the highest expected payment, where the expectation is taken over the distribution of $D_{-i}$ conditioned on $D_i$, i.e., $$ \mathbb{E}_{D_{-i}\sim p(D_{-i}|D_i)}[r_i(D_i, D_{-i})] \ge \mathbb{E}_{D_{-i}\sim p(D_{-i}|D_i)}[r_i(D_i', D_{-i})], \quad \forall i, D_i, D_i'. $$ \end{definition} Note that this definition does not require the agents to actually know the conditional distribution and to be able to evaluate the expectation themselves.
It is a guarantee that no matter what the underlying distribution is, truthful reporting is an equilibrium. Because truthfulness is defined as a weak equilibrium, it does not necessarily discourage misreporting.\footnote{A constant payment rule is just a trivial truthful payment rule.} What it ensures is that the mechanism does not encourage misreporting.\footnote{Using a fixed test set may encourage misreporting.} So, we want a stronger guarantee than truthfulness. We thus define \emph{sensitivity}: the expected payment should be strictly lower when the reported data does not give the accurate prediction of $\bth$. \begin{definition}[Sensitivity] A payment rule $\mathbf{r}(D_1, \dots, D_n)$ is sensitive if for any (commonly accepted model of) underlying distribution $p(\bth, D_1, \dots, D_n)$, for any provider $i$ and any realization of his dataset $D_i$, when all other providers $j\neq i$ report $\tilde{D}_{j}(D_j)$ with accurate posterior $p(\bth|\tilde{D}_{j}(D_j)) = p(\bth|D_{j})$, we have (1) truthfully reporting $D_i$ leads to the highest expected payment $$ \mathbb{E}_{D_{-i}\sim p(D_{-i}|D_i)}[r_i(D_i, \tilde{D}_{-i}(D_{-i}))] \ge \mathbb{E}_{D_{-i}\sim p(D_{-i}|D_i)}[r_i(D_i', \tilde{D}_{-i}(D_{-i}))], \ \forall D_i' $$ and (2) reporting a dataset $D_i'$ with inaccurate posterior $p(\bth|D_i') \neq p(\bth|D_i)$ is strictly worse than reporting a dataset $\tilde{D}_i$ with accurate posterior $p(\bth|\tilde{D}_i) = p(\bth|D_i)$, $$ \mathbb{E}_{D_{-i}\sim p(D_{-i}|D_i)}[r_i(\tilde{D}_i, \tilde{D}_{-i}(D_{-i}))] > \mathbb{E}_{D_{-i}\sim p(D_{-i}|D_i)}[r_i(D_i', \tilde{D}_{-i}(D_{-i}))]. $$ Furthermore, letting $\Delta_i = p(\bth|D_i') - p(\bth|D_i)$, a payment rule is $\alpha$-sensitive for agent $i$ if $$ \mathbb{E}_{D_{-i}\sim p(D_{-i}|D_i)}[r_i(D_i, \tilde{D}_{-i}(D_{-i}))] - \mathbb{E}_{D_{-i}\sim p(D_{-i}|D_i)}[r_i(D_i', \tilde{D}_{-i}(D_{-i}))] \ge \alpha \Vert \Delta_i \Vert, $$ for all $D_i, D_i'$ and reports $\tilde{D}_{-i}(D_{-i})$ that give the
accurate posteriors. \end{definition} Our definition of sensitivity guarantees that at an equilibrium, the reported datasets must give the correct posteriors $p(\bth|\tilde{D}_i) = p(\bth|D_i)$. We can further show that at an equilibrium, the analyst will get the accurate posterior $p(\bth|D_1, \dots, D_n)$. \begin{lemma} \label{lem:sens_def} When $D_1, \dots, D_n$ are independent conditioned on $\bth$, for any $(D_1,\dots,D_n)$ and $(\tilde{D}_1, \dots, \tilde{D}_n)$, if $ p(\bth|D_i)=p(\bth|\tilde{D}_i)\ \forall i$, then $p(\bth|D_1,\dots,D_n)=p(\bth|\tilde{D}_1, \dots, \tilde{D}_n)$. \end{lemma} A more ideal property would be that the expected payment is strictly lower for any dataset $D_i' \neq D_i$. Mechanisms that satisfy sensitivity can be viewed as an important step toward this ideal goal, as the only possible payment-maximizing manipulations are to report a dataset $\tilde{D}_i$ that has the correct posterior $p(\bth|\tilde{D}_i) = p(\bth|D_i)$. Arguably, finding such a manipulation can be challenging.
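Lemma~\ref{lem:sens_def} rests on the Bayes-rule identity $p(\bth|D_1,\dots,D_n) \propto \prod_i p(\bth|D_i)\,/\,p(\bth)^{n-1}$, which holds under conditional independence and shows that the joint posterior is a function of the individual posteriors only. The following Python sketch checks this identity on a small randomly generated finite model (the model itself is our own toy illustration, not one from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

m, k = 3, 4  # |Theta| and the number of possible datasets per provider
prior = rng.dirichlet(np.ones(m))                            # p(theta)
lik = [rng.dirichlet(np.ones(k), size=m) for _ in range(2)]  # lik[i][theta, d] = p(D_i = d | theta)

def posterior(i, d):
    """p(theta | D_i = d) by Bayes' rule."""
    w = prior * lik[i][:, d]
    return w / w.sum()

def joint_posterior(d1, d2):
    """p(theta | D_1 = d1, D_2 = d2), using conditional independence."""
    w = prior * lik[0][:, d1] * lik[1][:, d2]
    return w / w.sum()

# p(theta | D_1, D_2) is proportional to p(theta|D_1) p(theta|D_2) / p(theta),
# so reports with the same individual posteriors yield the same joint posterior.
for d1 in range(k):
    for d2 in range(k):
        w = posterior(0, d1) * posterior(1, d2) / prior
        assert np.allclose(w / w.sum(), joint_posterior(d1, d2))
```

The same factorization is what allows the analyst to recover $p(\bth|D_1,\dots,D_n)$ from accurate individual posteriors alone.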
Sensitivity guarantees the accurate prediction of $\bth$ at an equilibrium. Second, we want a fair payment rule that is indifferent to data providers' identities. \begin{definition}[Symmetry] A payment rule $r$ is symmetric if for all permutations of $n$ elements $\pi(\cdot)$, $ r_i(D_1, \dots, D_n) = r_{\pi(i)}(D_{\pi(1)}, \dots, D_{\pi(n)}) $ for all $i$. \end{definition} Third, we want non-negative payments, and the total payment should not exceed the budget. \begin{definition}[Individual rationality and budget feasibility] A payment rule $r$ is individually rational if $ r_i(D_1, \dots, D_n) \ge 0, \quad \forall i, D_1, \dots, D_n. $ A payment rule $r$ is budget feasible if $ \sum_{i=1}^n r_i(D_1, \dots, D_n) \le B$, $\forall D_1, \dots, D_n. $ \end{definition} We will consider two acquisition settings in this paper: \textbf{One-time data acquisition.} The data analyst collects data in one batch. In this case, our problem is very similar to the single-task forecast elicitation in~\cite{kong2018water}. But our model considers budget feasibility and individual rationality (IR), whereas they only consider the truthfulness of the mechanism. \textbf{Multiple-time data acquisition.} The data analyst repeatedly collects data for $T\ge 2$ days. On day~$t$, $(\bth^{(t)}, D_1^{(t)}, \dots, D_n^{(t)})$ is drawn independently from the same distribution $p(\bth, D_1, \dots, D_n)$. The analyst has a budget $B^{(t)}$ and wants to know the posterior of $\bth^{(t)}$, $p(\bth^{(t)}|D_1^{(t)}, \dots, D_n^{(t)})$. In this case, our setting differs from the multi-task forecast elicitation in~\cite{kong2018water} because providers can decide their strategies on a day based on all the observed historical data before that day.\footnote{This is not to say that the providers will update their prior for $\bth^{(t)}$ using the data on the first $t-1$ days.
Because we assume that $\bth^{(t)}$ is independent of $\bth^{(1)}, \dots, \bth^{(t-1)}$, the data on the first $t-1$ days contain no information about $\bth^{(t)}$. We use the same prior $p(\bth)$ throughout all $T$ days. What it means is that when the analyst decides the payment for day $t$ not only based on the report on day $t$ but also on the historical reports, the providers may also use different strategies for different historical reports.} The multi-task forecast elicitation in~\cite{kong2018water} asks the agents to submit forecasts of latent labels in multiple similar independent tasks. It is assumed that the agent's forecast strategy for one task only depends on his information about that task but not the information about other tasks. \section{Multiple-time Data Acquisition}\label{sec:multi} Now we consider the case when the data analyst needs to repeatedly collect data for the same task. On day $t$, the analyst has a budget $B^{(t)}$ and a new ensemble $(\bth^{(t)}, D_1^{(t)}, \dots, D_n^{(t)})$ is drawn from the same distribution $p(\bth, D_1, \dots, D_n)$, independent of the previous data. Again we assume that the data generating distribution $p(D_i|\bth)$ may be unknown, but the analyst is able to compute $p(\bth | D_i)$ after seeing the data (see Example~\ref{exm:linear_reg}). The data analyst can use the one-time purchasing mechanism (Section~\ref{sec:single}) at each round. But we show that if the data analyst can give the payment one day after the data is reported, a broader class of mechanisms can be applied to guarantee the desirable properties, which ensures bounded payments without any assumptions on the underlying distribution. Our method is based on the \emph{$f$-mutual information gain} in~\cite{kong2018water} for multi-task forecast elicitation. The payment function in \cite{kong2018water} has a minor error.
We correct the payment function in this work.\footnote{The term $PMI(\cdot)$ in the payment function of \cite{kong2018water} should actually be $1/PMI(\cdot)$. This is because when \cite{kong2018water} cited Lemma 1 in \cite{nguyen2010estimating}, $q(\cdot)/p(\cdot)$ is mistakenly replaced by $p(\cdot)/q(\cdot)$.} Our mechanism (Mechanism~\ref{alg:multi}) works as follows. On day $t$, the data providers are first asked to report their data for day $t$. Then for each provider $i$, we use the other providers' reported data on day $t-1$ and day $t$ to evaluate provider $i$'s reported data on day $t-1$. A score $s_i$ will be computed for each provider's $\tilde{D}_i^{(t-1)}$. The score $s_i$ is defined in the same way as the \emph{$f$-mutual information gain} in~\cite{kong2018water}, which is specified by a differentiable convex function $f:\mathbb{R}\to\mathbb{R}$ and its convex conjugate $f^*$ (Definition~\ref{def:convex_conjugate}), \begin{align} \label{eqn:multi_score} s_i = f'\left(\frac{1}{PMI(\tilde{D}_i^{(t-1)}, \tilde{D}_{-i}^{(t)})}\right) - f^*\left( f'\left( \frac{1}{PMI(\tilde{D}_i^{(t-1)}, \tilde{D}_{-i}^{(t-1)}) }\right)\right). \text{\footnotemark} \end{align} \footnotetext{Here we assume that $PMI(\cdot)$ is non-zero. For $PMI(\cdot)=0$, we can just do the same as in the one-time acquisition mechanism.} The score is defined in this particular way because it can guarantee truthfulness according to Lemma~\ref{lem11}. It can be proved that when the agents truthfully report $D_i$, the expectation of $s_i$ will reach the supremum in Lemma~\ref{lem11}, and will then be equal to the $f$-mutual information of $D_i^{(t-1)}$ and $D_{-i}^{(t-1)}$. We can further prove that if data provider $i$ reports a dataset $\tilde{D}_i^{(t-1)}$ that leads to a different posterior $p(\bth|\tilde{D}_i^{(t-1)})\neq p(\bth|D_i^{(t-1)})$, the expectation of $s_i$ will deviate from the supremum in Lemma~\ref{lem11} and thus get lower. 
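For concreteness, the following sketch evaluates the score in~\eqref{eqn:multi_score} for one standard choice of $f$, the softplus function $f(x)=\ln(1+e^x)$, whose derivative is the logistic function and whose convex conjugate is $f^*(y)=y\ln y+(1-y)\ln(1-y)$; the two PMI values below are arbitrary placeholders, not quantities from the paper.

```python
import math

def f_prime(x):
    """Derivative of f(x) = ln(1 + e^x): the logistic function, valued in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def f_star(y):
    """Convex conjugate of softplus, bounded on (0, 1)."""
    return y * math.log(y) + (1.0 - y) * math.log(1.0 - y)

def score(pmi_cross, pmi_same):
    """f-mutual information gain: compares provider i's day t-1 report against
    the others' day t report (pmi_cross) and their day t-1 report (pmi_same)."""
    return f_prime(1.0 / pmi_cross) - f_star(f_prime(1.0 / pmi_same))

# For any positive PMI values, f'(1/PMI) lies in (1/2, 1) and f* on that
# interval lies in (-ln 2, 0), so the score is always in (1/2, 1 + ln 2).
s = score(pmi_cross=2.0, pmi_same=1.5)
assert 0.5 < s < 1.0 + math.log(2.0)
```

Boundedness of the score is exactly what makes the normalization step of the mechanism possible.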
According to the definition~\eqref{eqn:multi_score}, if we carefully choose the convex function $f$ to be a differentiable convex function with a bounded derivative $f' \in [0, U]$ and with the convex conjugate $f^*$ bounded on $[0,U]$, then the scores $s_1, \dots, s_n$ will always be bounded. We can then normalize $s_1, \dots, s_n$ so that the payments are non-negative and the total payment does not exceed $B^{(t-1)}$. Here we give one possible choice of $f'$ that can guarantee bounded scores: the Logistic function $\frac{1}{1+e^{-x}}$. \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $f(x)$ & $f'(x)$ & range of $f'(x)$ & $f^*(x)$ & range of $f^*(x)$ \\ \hline $\ln(1 + e^x)$ & $\frac{1}{1+e^{-x}}$ & $[\frac{1}{2}, 1)$ on $\mathbb{R}^{\ge 0}$ & $x \ln x + (1-x) \ln (1-x)$ & $ [-\ln 2, 0]$ on $[\frac{1}{2},1)$\\ \hline \end{tabular} \end{center} Finally, if day $t$ is the last day, we adopt the one-time mechanism to pay for day $t$'s data as well. \begin{algorithm}[!h] \SetAlgorithmName{Mechanism}{mechanism}{List of Mechanisms} \begin{algorithmic} \STATE Given a differentiable convex function $f$ with $f'\in[0, U]$ and $f^*$ bounded on $[0,U]$ \FOR {$t = 1,\dots,T$} \STATE(1) On day $t$, ask all data providers to report their datasets $\tilde{D}_1^{(t)}, \dots, \tilde{D}_n^{(t)}$. \STATE(2) If $t$ is the last day $t=T$, use the payment rule of Mechanism~\ref{alg:single} to pay for day $T$'s data or just give each data provider $B^{(T)}/n$. \STATE (3) If $t>1$, give the payments for day $t-1$ as follows. First compute all the scores $s_i$ as in~\eqref{eqn:multi_score}. Then normalize the scores so that the total payment is no more than $B^{(t-1)}$. Let the range of the scores be $[L,R]$. Assign payments \begin{center} $ r_i(\tilde{D}_1^{(t-1)}, \dots, \tilde{D}_n^{(t-1)}) = \frac{B^{(t-1)}}{n}\cdot\frac{s_i - L}{R-L}. 
$ \end{center} \ENDFOR \end{algorithmic} \caption{Multi-time data collecting mechanism.} \label{alg:multi} \end{algorithm} Our first result is that Mechanism~\ref{alg:multi} guarantees all the basic properties of a desirable mechanism. \begin{theorem} \label{thm:multi_main} Given any differentiable convex function $f$ that has (1) a bounded derivative $f' \in [0, U]$ and (2) the convex conjugate $f^*$ bounded on $[0,U]$, Mechanism~\ref{alg:multi} is IR, budget feasible, truthful and symmetric in all $T$ rounds. \end{theorem} If we choose computable $f'$ and $f^*$ (e.g., $f'$ equal to the Logistic function), the payments will also be computable for finite-size $\Theta$ and for models in the exponential family (Lemma~\ref{lem:multi_comp}). If we use a strictly convex function $f$ with $f'>0$, then Mechanism~\ref{alg:multi} has essentially the same sensitivity guarantee as Mechanism~\ref{alg:single} in the first $T-1$ rounds. We defer the sensitivity analysis to Appendix~\ref{app:multi_SA}. The missing proofs in this section can be found in Appendix~\ref{app:multi_proofs}. \section{Preliminaries} In this section, we introduce some necessary background for developing our mechanisms. We first give the definition of {\em exponential family} distributions. Our designed mechanism will leverage the idea of mutual information between reported datasets to incentivize truthful reporting.
\subsection{Exponential Family} \begin{definition}[Exponential family~\cite{murphy2012machine}] A likelihood function $p(\x|\bth)$, for $\x = (x_1, \dots, x_n) \in \mathcal{X}^n$ and $\bth \in \Theta \subseteq \mathbb{R}^m$, is said to be in the \emph{exponential family} in canonical form if it is of the form \begin{equation} \label{eqn:exp_fam_prob} p(\x|\bth) = \frac{1}{Z(\bth)} h(\x) \exp \left[\bth^T \bphi(\x) \right] \quad\text{ or }\quad p(\x|\bth) = h(\x) \exp \left[\bth^T \bphi(\x) - A(\bth) \right]. \end{equation} Here $\bphi(\x) \in \mathbb{R}^m$ is called a vector of \emph{sufficient statistics}, $Z(\bth) = \int_{\mathcal{X}^n} h(\x) \exp\left[\bth^T \bphi(\x) \right] d\x$ is called the \emph{partition function}, and $A(\bth) = \ln Z(\bth)$ is called the \emph{log partition function}. \end{definition} In Bayesian probability theory, if the posterior distributions $p(\bth|\x)$ are in the same probability distribution family as the prior probability distribution $p(\bth)$, the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function. \begin{definition}[Conjugate prior for the exponential family~\cite{murphy2012machine}] \label{def:exp_conj} For a likelihood function in the exponential family $p(\x|\bth) = h(\x) \exp \left[\bth^T \bphi(\x) - A(\bth) \right]$, the conjugate prior for $\bth$ with parameters $\nu_0, \overline{\bm{\tau}}_0$ is of the form \begin{equation} \label{eqn:exp_fam_prior} p(\bth) = \mathcal{P}(\bth| \nu_0, \overline{\bm{\tau}}_0) = g(\nu_0, \overline{\bm{\tau}}_0) \exp\left[ \nu_0 \bth^T \overline{\bm{\tau}}_0 - \nu_0 A(\bth ) \right]. \end{equation} Let $\overline{\bm{s}} = \frac{1}{n} \sum_{i=1}^n \bphi(x_i) $.
Then the posterior of $\bth$ can be represented in the same form as the prior \begin{align*} p(\bth|\x) \propto \exp \left[ \bth^T (\nu_0\overline{\bm{\tau}}_0 + n \overline{\bm{s}}) - (\nu_0 + n) A(\bth) \right] = \mathcal{P}\big(\bth| \nu_0 + n, \frac{\nu_0\overline{\bm{\tau}}_0 + n \overline{\bm{s}}}{\nu_0 + n}\big), \end{align*} where $\mathcal{P}\big(\bth| \nu_0 + n, \frac{\nu_0\overline{\bm{\tau}}_0 + n \overline{\bm{s}}}{\nu_0 + n}\big)$ is the conjugate prior with parameters $\nu_0 + n$ and $\frac{\nu_0\overline{\bm{\tau}}_0 + n \overline{\bm{s}}}{\nu_0 + n}$. \end{definition} Many commonly used distributions belong to the exponential family: Gaussian, multinoulli, multinomial, geometric, etc. Due to the space limit, we introduce only the definitions and refer the readers who are not familiar with the exponential family to~\cite{murphy2012machine} for more details. \subsection{Mutual Information} We will use the \emph{point-wise mutual information} and the \emph{$f$-mutual information gain} defined in~\cite{kong2018water}. We introduce this notion of mutual information in the context of our problem. \begin{definition}[Point-wise mutual information] \label{def:PMI} We define the point-wise mutual information between two datasets $D_1$ and $D_2$ to be \begin{eqnarray} PMI(D_1, D_2) = \int_{\bth\in \Theta} \frac{p(\bth|D_1)p(\bth|D_2)}{p(\bth)} \, d\bth. \end{eqnarray} \end{definition} For the finite case, we define $PMI(D_1, D_2) = \sum_{\bth\in \Theta} \frac{p(\bth|D_1)p(\bth|D_2)}{p(\bth)}$. When $|\Theta|$ is finite or a model in the exponential family is used, the PMI will be computable. \begin{lemma} \label{lem:multi_comp} When $|\Theta|$ is finite, $PMI(\cdot)$ can be computed in $O(|\Theta|)$ time.
If a model in the exponential family is used, so that the prior and all the posteriors of $\bth$ can be written in the form \begin{eqnarray*} &p(\bth) = \mathcal{P}(\bth|\nu_0, \overline{\bm{\tau}}_0) = g(\nu_0, \overline{\bm{\tau}}_0) \exp\left[ \nu_0 \bth^T \overline{\bm{\tau}}_0 - \nu_0 A(\bth ) \right], \end{eqnarray*} $p(\bth|D_i) = \mathcal{P}(\bth|\nu_i, \overline{\bm{\tau}}_i)$ and $p(\bth|D_{-i}) = \mathcal{P}(\bth|\nu_{-i}, \overline{\bm{\tau}}_{-i})$, then the point-wise mutual information can be computed as $$ PMI(D_i, D_{-i}) = \frac{g(\nu_i, \overline{\bm{\tau}}_i) g(\nu_{-i}, \overline{\bm{\tau}}_{-i})}{g(\nu_0, \overline{\bm{\tau}}_0) g(\nu_i +\nu_{-i}-\nu_0, \frac{\nu_{i} \overline{\bm{\tau}}_{i} + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i +\nu_{-i}-\nu_0} )}. $$ \end{lemma} For single-task forecast elicitation, \cite{kong2018water} proposes a truthful payment rule. \begin{definition}[$\log$-PMI payment~\cite{kong2018water}] Suppose there are two data providers reporting $\tilde{D}_A$ and $\tilde{D}_B$, respectively. Then the $\log$-PMI rule pays them $r_A = r_B= \log(PMI(\tilde{D}_A, \tilde{D}_B))$. \end{definition} \begin{proposition} \label{prop:mutual_info} When the $\log$-PMI rule is used, the expected payment equals the mutual information between $\tilde{D}_A$ and $\tilde{D}_B$, where the expectation is taken over the distribution of $\tilde{D}_A$ and $\tilde{D}_B$. \end{proposition} We now give the definition of the $f$-mutual information. An $f$-divergence is a function that measures the difference between two probability distributions. \begin{definition}[$f$-divergence] Given a convex function $f$ with $f(1) = 0$, for two distributions over $\Omega$, $p,q \in \Delta \Omega$, define the $f$-divergence of $p$ and $q$ to be $ D_f(p,q) = \int_{\omega \in \Omega} p(\omega) f\left( \frac{q(\omega) }{ p(\omega) } \right) d\omega.
$ \end{definition} The \emph{$f$-mutual information} of two random variables is a measure of their mutual dependence, defined as the $f$-divergence between their joint distribution and the product of their marginal distributions. In duality theory, the convex conjugate of a function is defined as follows. \begin{definition}[Convex conjugate] \label{def:convex_conjugate} For any function $f: \mathbb{R} \to \mathbb{R}$, define the convex conjugate function of $f$ as $ f^*(y) = \sup_x xy - f(x). $ \end{definition} The following inequality (\cite{nguyen2010estimating, kong2018water}) will be used in our proof. \begin{lemma}[Lemma 1 in \cite{nguyen2010estimating}] \label{lem11} For any differentiable convex function $f$ with $f(1) = 0$ and any two distributions over $\Omega$, $p,q \in \Delta \Omega$, let $\mathcal{G}$ be the set of all functions from $\Omega$ to $\mathbb{R}$; then we have $$ D_f(p,q) \ge \sup_{g \in \mathcal{G}} \int_{\omega \in \Omega} g(\omega) q(\omega) - f^*(g(\omega)) p(\omega)\, d\omega = \sup_{g \in \mathcal{G}} \mathbb{E}_q g - \mathbb{E}_p f^*(g). $$ A function $g$ achieves equality if and only if $ g(\omega) \in \partial f\left(\frac{q(\omega)}{p(\omega)}\right) $ for all $\omega$ with $p(\omega)>0$, where $\partial f\big(\frac{q(\omega)}{p(\omega)}\big)$ represents the subdifferential of $f$ at point $q(\omega)/p(\omega)$. \end{lemma} \section{One-time Data Acquisition}\label{sec:single} In this section, we apply \cite{kong2018water}'s $\log$-PMI payment rule to our one-time data acquisition problem. The $\log$-PMI payment rule ensures truthfulness, but its payment can be negative, unbounded, or even ill-defined. So we mainly focus on the mechanism's sensitivity, budget feasibility and IR. To guarantee budget feasibility and IR, our mechanism requires a lower bound and an upper bound of PMI, which may be difficult to find for some models in the exponential family.
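To make the quantities of this section concrete, the following sketch computes $PMI$, the $\log$-PMI scores, and the bounds $L$ and $R$ for a toy finite-$\Theta$ model, and then normalizes the scores into payments in $[0, B/n]$; the model and all parameter values are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

m, k, n, B = 3, 4, 2, 10.0   # |Theta|, datasets per provider, providers, budget
prior = rng.dirichlet(np.ones(m))
lik = [rng.dirichlet(np.ones(k), size=m).T for _ in range(n)]  # lik[i][d, theta] = p(d | theta)

def post(i, d):              # p(theta | D_i = d)
    w = prior * lik[i][d]
    return w / w.sum()

def pmi(d1, d2):             # PMI(D_1, D_2) = sum_theta p(theta|D_1) p(theta|D_2) / p(theta)
    return float(np.sum(post(0, d1) * post(1, d2) / prior))

# Bounds on the log-PMI score over all dataset pairs with positive PMI
# (here every PMI is positive, and generically L < R).
scores = [np.log(pmi(d1, d2)) for d1 in range(k) for d2 in range(k) if pmi(d1, d2) > 0]
L, R = min(scores), max(scores)

def payment(s):              # normalized payment: non-negative and at most B/n
    return (B / n) * (s - L) / (R - L)

assert all(0.0 <= payment(s) <= B / n for s in scores)
```

The normalization makes the payments individually rational and budget feasible by construction, at the cost of knowing the bounds $L$ and $R$ in advance.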
If the analyst knows the distribution $p(D_i|\bth)$, then she will be able to compute $p(D_{-i}|D_i)=\sum_\bth p(D_{-i}|\bth) p(\bth|D_i)$. In this case, we can employ peer prediction mechanisms~\cite{miller2005eliciting} to design payments and guarantee truthfulness. In Appendix~\ref{app:single_trivial}, we give an example of such mechanisms. In this work we do not assume that $p(D_i|\bth)$ is known (see Example~\ref{exm:linear_reg}). When $p(D_i | \bth)$ is unknown but the analyst can compute $p(\bth | D_i)$, our idea is to use the $\log$-PMI payment rule in~\cite{kong2018water} and then add a normalization step to ensure budget feasibility and IR. However, the $\log$-PMI is ill-defined if $PMI = 0$. To avoid this, for each possible $D_{-i}$, we define the set $\mathbb{D}_i(D_{-i})=\{D_i| PMI(D_i, D_{-i})>0\}$, and the $\log$-PMI will only be computed for $\tilde{D}_i \in \mathbb{D}_i(\tilde{D}_{-i})$. The normalization step will require an upper bound $R$ and a lower bound $L$ of the $\log$-PMI payment.\footnote{WLOG, we can assume that $L<R$ here, because $L=R$ implies that all agents' datasets are independent.} If $|\Theta|$ is finite, we can find a lower bound and an upper bound in polynomial time, which we prove in Appendix~\ref{app:single_LR}. When a model in the exponential family is used, it is more difficult to find $L$ and $R$. By Lemma~\ref{lem:multi_comp}, if the $g$ function is bounded, we will be able to bound the payment. For example, if we are estimating the mean of a univariate Gaussian with known variance, $L$ and $R$ will be bounded if the number of data points is bounded. Details can be found in Appendix~\ref{app:gaussian}. Our mechanism works as follows. \begin{algorithm}[H] \SetAlgorithmName{Mechanism}{mechanism}{List of Mechanisms} \begin{algorithmic} \STATE(1) Ask all data providers to report their datasets $\tilde{D}_1, \dots, \tilde{D}_n$.
\STATE(2) If data provider $i$'s reported dataset $\tilde{D}_i \in \mathbb{D}_i(\tilde{D}_{-i})$, we compute a score for his dataset $s_i=\log PMI(\tilde{D}_i, \tilde{D}_{-i})$. \STATE(3) The final payment for data provider $i$ is: \begin{align*} r_i(\tilde{D}_1, \dots, \tilde{D}_n)=\left\{\begin{array}{ll} \frac{B}{n}\cdot \frac{s_i-L}{R-L} & \text{ if } \tilde{D}_i\in \mathbb{D}_i(\tilde{D}_{-i}) \\ 0 & \text{ otherwise.} \end{array} \right. \end{align*} \end{algorithmic} \caption{One-time data collecting mechanism.} \label{alg:single} \end{algorithm} \begin{theorem}\label{thm_truthful} Mechanism~\ref{alg:single} is IR, truthful, budget feasible, and symmetric. \end{theorem} Note that by Proposition~\ref{prop:mutual_info}, the expected payment for a data provider is decided by the mutual information between his data and other people's data. The payments are efficiently computable for finite-size $\Theta$ and for models in the exponential family (Lemma~\ref{lem:multi_comp}). Next, we discuss the sensitivity. In~\cite{kong2018water}, checking whether the mechanism will be sensitive requires knowing whether a system of linear equations (which has an exponential size in our problem) has a unique solution. So it is not clear how likely the mechanisms are to be sensitive. In our data acquisition setting, we are able to give much stronger and more explicit guarantees. This kind of stronger guarantee is possible because of the special structure of the reports (or signals): each dataset consists of i.i.d. samples. We first define some notation. When $|\Theta|$ is finite, let $\bm{Q_{-i}}$ be a $(\Pi_{j\in [n], j\neq i} |\mathcal{D}_j|^{N_j})\times |\Theta|$ matrix that represents the conditional distribution of $\bth$ conditioned on each realization of $D_{-i}$. The element in row $D_{-i}$ and column $\bth$ is equal to $ p(\bth | D_{-i}). $ We also define the data generating matrix $G_i$ with $|\mathcal{D}_i|$ rows and $|\Theta|$ columns.
Each row corresponds to a possible data point $d_i\in \mathcal{D}_i$ in the dataset and each column corresponds to a $\bth\in \Theta$. The element in the row corresponding to data point $d_i$ and the column $\bth$ is $p(\bth|d_i)$. We give a sufficient condition for the mechanism to be sensitive. \begin{theorem}\label{thm_sensitive} When $|\Theta|$ is finite, Mechanism~\ref{alg:single} is sensitive if for all $i$, $Q_{-i}$ has rank $|\Theta|$. \end{theorem} Since the size of $Q_{-i}$ can be exponentially large, it may be computationally infeasible to check the rank of $Q_{-i}$. We thus give a simpler condition that only uses $G_i$, which has a polynomial size. \begin{definition} The Kruskal rank (or $k$-rank) of a matrix $M$, denoted by $rank_k(M)$, is the maximal number $r$ such that any set of $r$ columns of $M$ is linearly independent. \end{definition} \begin{corollary}\label{coro_sensitive} When $|\Theta|$ is finite, Mechanism~\ref{alg:single} is sensitive if for all $i$, $\sum_{j\neq i}\left(rank_k(G_{j})-1\right)\cdot N_{j}+1\ge |\Theta|$, where $N_j$ is the number of data points in $D_j$. \end{corollary} In Appendix~\ref{app:single_alpha}, we also give a lower bound for $\alpha$ so that Mechanism~\ref{alg:single} is $\alpha$-sensitive. Our sensitivity results (Theorem~\ref{thm_sensitive} and Corollary~\ref{coro_sensitive}) basically mean that when there is enough correlation between other people's data $D_{-i}$ and $\bth$, the mechanism will be sensitive. Corollary~\ref{coro_sensitive} quantifies the correlation using the $k$-rank of the data generating matrix. It is arguably not difficult to have enough correlation: a naive relaxation of Corollary~\ref{coro_sensitive} says that assuming different $\bth$ lead to different data distributions (so that $rank_k(G_j)\ge 2$), the mechanism will be sensitive if, for any provider $i$, the total number of other people's data points is at least $|\Theta| -1$.
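The condition of Corollary~\ref{coro_sensitive} is easy to check computationally. The sketch below implements the Kruskal rank by brute force and verifies the condition for a randomly generated toy instance (matrices, sizes, and the provider index are our own illustrative choices; generic matrices with distinct columns have $k$-rank at least $2$).

```python
import numpy as np
from itertools import combinations

def kruskal_rank(M):
    """Largest r such that every set of r columns of M is linearly independent."""
    r = 0
    for size in range(1, M.shape[1] + 1):
        if all(np.linalg.matrix_rank(M[:, list(c)]) == size
               for c in combinations(range(M.shape[1]), size)):
            r = size
        else:
            break
    return r

rng = np.random.default_rng(2)
m = 3                                 # |Theta|
n_providers = 3
# Toy data-generating matrices G_j: row d_j holds the posterior p(theta | d_j).
G = [rng.dirichlet(np.ones(m), size=4) for _ in range(n_providers)]
N = [2, 2, 2]                         # number of data points per provider

# Sufficient condition of the corollary, checked for provider i = 0:
i = 0
lhs = sum((kruskal_rank(G[j]) - 1) * N[j] for j in range(n_providers) if j != i) + 1
assert lhs >= m                       # the toy instance satisfies the condition
```

The brute-force check is exponential in the number of columns, which is fine for small $|\Theta|$ as here.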
When $\Theta \subseteq \mathbb{R}^m$, it becomes more difficult to guarantee sensitivity. Suppose the data analyst uses a model from the exponential family so that the prior and all the posteriors of $\bth$ can be written in the form in Lemma~\ref{lem:multi_comp}. The sensitivity of the mechanism will depend on the normalization term $g(\nu, \overline{\bm{\tau}})$ (or equivalently, the partition function) of the pdf. More specifically, define \begin{align} \label{eqn:h} h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)= \frac{g(\nu_i, \overline{\bm{\tau}}_i) }{g(\nu_i +\nu_{-i}-\nu_0, \frac{\nu_{i} \overline{\bm{\tau}}_{i} + \nu_{-i} \overline{\bm{\tau}}_{-i} - \nu_{0}\overline{\bm{\tau}}_{0}}{\nu_i +\nu_{-i}-\nu_0} ) }; \end{align} then we have the following necessary and sufficient condition for the sensitivity of the mechanism. \begin{theorem} \label{thm:single_sensitive_cont} When $\Theta \subseteq \mathbb{R}^m$, if the data analyst uses a model in the exponential family, then Mechanism~\ref{alg:single} is sensitive if and only if for any $(\nu_i', \overline{\bm{\tau}}_i') \neq (\nu_i, \overline{\bm{\tau}}_i)$, we have $ \Pr_{D_{-i}} [h_{D_{-i}}(\nu_i', \overline{\bm{\tau}}_i') \neq h_{D_{-i}}(\nu_i, \overline{\bm{\tau}}_i)] > 0. $ \end{theorem} The theorem basically means that the mechanism will be sensitive if any pair of reports that lead to different posteriors of $\bth$ can be distinguished by $h_{D_{-i}}(\cdot)$ with non-zero probability. However, for different models in the exponential family, this is not always true. For example, if we estimate the mean $\mu$ of a univariate Gaussian with a known variance and the Gaussian conjugate prior is used, then the normalization term only depends on the variance but not the mean, so in this case $h(\cdot)$ can only detect changes in the variance, which means that the mechanism will be sensitive to replication and withholding, but not necessarily to other types of manipulations.
But if we estimate the mean of a Bernoulli distribution whose conjugate prior is the Beta distribution, then the partition function will be the Beta function, which can detect different posteriors and thus the mechanism will be sensitive. See Appendix~\ref{app:multi_exp_sensitive} for more details. The missing proofs can be found in Appendix~\ref{app:single_proofs}.
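The closed form of Lemma~\ref{lem:multi_comp} and the role of the normalizer $g$ can be checked numerically. The sketch below does so for a one-dimensional natural-parameter Bernoulli model ($A(\bth)=\ln(1+e^{\bth})$, whose conjugate prior is a Beta distribution after a change of variables), comparing the direct integral definition of $PMI$ with the ratio of normalizers; all parameter values are our own toy choices, and quadrature on a grid stands in for the exact integrals.

```python
import numpy as np

theta = np.linspace(-30.0, 30.0, 20001)       # quadrature grid for Theta = R

def A(t):
    return np.logaddexp(0.0, t)               # log(1 + e^t), numerically stable

def integrate(y):
    """Trapezoidal rule on the uniform theta grid."""
    dt = theta[1] - theta[0]
    return float(dt * (y.sum() - 0.5 * (y[0] + y[-1])))

def g(nu, tau):
    """Normalizer of the conjugate prior P(theta | nu, tau)."""
    return 1.0 / integrate(np.exp(nu * tau * theta - nu * A(theta)))

def density(nu, tau):
    return g(nu, tau) * np.exp(nu * tau * theta - nu * A(theta))

nu0, tau0 = 2.0, 0.5                          # prior parameters
nu_i, tau_i = 5.0, 0.4                        # posterior parameters given D_i
nu_mi, tau_mi = 7.0, 0.6                      # posterior parameters given D_{-i}

# Direct integral definition of PMI(D_i, D_{-i}) ...
direct = integrate(density(nu_i, tau_i) * density(nu_mi, tau_mi) / density(nu0, tau0))

# ... versus the closed form of the lemma: a ratio of normalizers.
nu_s = nu_i + nu_mi - nu0
tau_s = (nu_i * tau_i + nu_mi * tau_mi - nu0 * tau0) / nu_s
closed = g(nu_i, tau_i) * g(nu_mi, tau_mi) / (g(nu0, tau0) * g(nu_s, tau_s))

assert np.isclose(direct, closed, rtol=1e-6)
```

Because $g$ here genuinely varies with both parameters, $h_{D_{-i}}(\cdot)$ can distinguish different posteriors, in line with the Bernoulli discussion above.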
\section{Introduction} \subsection{Motivation} Random graphs are arguably the most studied objects at the interface of combinatorics and probability theory. One aspect of their study consists in analyzing a uniform random graph of large size $n$ in a prescribed family, \emph{e.g.} perfect graphs \cite{RandomPerfect}, planar graphs \cite{NoyPlanarICM}, graphs embeddable in a surface of given genus \cite{RandomGraphsSurface}, graphs in subcritical classes \cite{SubcriticalClasses}, hereditary classes \cite{GraphonsEntropy} or addable classes \cite{ConjectureAddable,ProofConjectureAddable}. The present paper focuses on uniform random \emph{cographs} (both in the labeled and unlabeled settings). \medskip Cographs were introduced in the seventies by several authors independently, see \emph{e.g.}~\cite{Seinsche} and further references on the Wikipedia page~\cite{Wikipedia}. They enjoy several equivalent characterizations. Among others, cographs are \begin{itemize} \item the graphs avoiding $P_4$ (the path with four vertices) as an induced subgraph; \item the graphs which can be constructed from graphs with one vertex by taking disjoint unions and joins; \item the graphs whose modular decomposition does not involve any prime graph; \item the inversion graphs of separable permutations. \end{itemize} Cographs have been extensively studied in the algorithmic literature. They are recognizable in linear time~\cite{CorneilLinear,Habib,Bretscher} and many computationally hard problems on general graphs are solvable in polynomial time when restricted to cographs; see \cite{corneil} and several subsequent works citing this article. In these works, as well as in the present paper, a key ingredient is the encoding of cographs by some trees, called \emph{cotrees}. These cotrees witness the construction of cographs using disjoint unions and joins (mentioned in the second item above). To our knowledge, cographs have however not been studied from a probabilistic perspective so far. 
Our motivation for the study of random cographs comes from our previous work~\cite{Nous1,Nous2,SubsClosedRandomTrees,Nous3} which exhibits a Brownian limiting object for separable permutations (and various other permutation classes). The first main result of this paper (\cref{th:MainTheorem}) is the description of a Brownian limit for cographs. Although cographs are the inversion graphs of separable permutations, this result is not a consequence of the previous one on permutations: indeed the inversion graph mapping is not injective, hence a uniform cograph is not the inversion graph of a uniform separable permutation. Our convergence result holds in the space of \emph{graphons}. Graphon convergence has been introduced in~\cite{IntroducingGraphons} and has since then been a major topic of interest in graph combinatorics -- see \cite{LovaszBook} for a broad perspective on the field. The question of studying graphon limits of uniform random graphs (either labeled or unlabeled) in a given class is raised by Janson in \cite{JansonGraphLimits} (see Remark 1.6 there). Some general results have been recently obtained for hereditary\footnote{A class of graphs is hereditary if any induced subgraph of a graph in the class is in the class as well.} classes in \cite{GraphonsEntropy}. However, these results (in particular Theorem 3 in \cite{GraphonsEntropy}) do not apply to cographs, since the class of cographs contains $e^{o(n^2)}$ graphs of size $n$. The graphon limit of cographs found here, which we call the \emph{Brownian cographon}, is constructed from a Brownian excursion. By analogy with the realm of permutations~\cite{Nous2, SubsClosedRandomTrees}, we expect that the Brownian cographon (or a one-parameter deformation of it) is a universal limiting object for uniform random graphs in classes of graphs which are small\footnote{A class of labeled (resp. unlabeled) graphs is \emph{small} when its exponential (resp.
ordinary) generating series has positive radius of convergence.} and closed under the substitution operation at the core of the modular decomposition. \medskip \subsection{Main results}~ From now on, for every $n\geq 1$, we let $\bm G_n$ and $\bm G_n^u$ be uniform random labeled and unlabeled cographs of size $n$, respectively. It is classical (see also \cref{defi:Graphe->Graphon} below) to associate with any graph a graphon, and we denote by $W_{\bm G_n}$ and $W_{\bm G_n^u}$ the graphons associated with $\bm G_n$ and $\bm G_n^u$. We note that the graphons associated with a labeled graph and its unlabeled version are the same. However, $W_{\bm G_n}$ and $W_{\bm G_n^u}$ have different distributions, since the number of possible labelings of an unlabeled cograph of a given size varies (see \cref{fig:all_cographs_4} p.\pageref{fig:all_cographs_4} for an illustration). \begin{theorem} \label{th:MainTheorem} We have the following convergences in distribution as $n$ tends to $+\infty$: $$ W_{\bm G_n} \to W^{1/2}, \qquad W_{\bm G_n^u} \to W^{1/2}, $$ where $W^{1/2}$ is the Brownian cographon introduced below in \cref{def:BrownianCographon}. \end{theorem} Roughly speaking, the graphon convergence is the convergence of the rescaled adjacency matrix with an unusual metric, the {\em cut metric}, see \cref{subsec:space_graphons}. To illustrate \cref{th:MainTheorem}, we show on \cref{fig:simu} the adjacency matrix of a large random uniform labeled cograph. Entries $1$ in the matrix are represented as black dots, entries $0$ as white dots. It was obtained by using the encoding of cographs by cotrees and sampling a large uniform cotree using Boltzmann sampling \cite{Boltzmann} of the equation \eqref{eq:SerieS}, p. \pageref{eq:SerieS}. Note that the order of vertices on the axes of \cref{fig:simu} is not the order of labels but is given by the depth-first search of the associated cotree.
The fractal aspect of the figure -- the appearance of nested squares at various scales -- is consistent with the convergence to a Brownian limiting object, since the Brownian excursion enjoys some self-similarity properties. \begin{figure}[htbp] \[\includegraphics[height=5cm]{largecograph}\] \caption{The adjacency matrix of a uniform labeled random cograph of size 4482. \label{fig:simu}} \end{figure} \medskip We now present further results. It is well-known that the graphon convergence entails the convergence of many graph statistics, like subgraph densities, the spectrum of the adjacency matrix, the normalized degree distribution (see \cite{LovaszBook} and \cref{sec:Graphons} below). Hence, \cref{th:MainTheorem} implies that these statistics have the same limit in the labeled and unlabeled cases, and that this limit may (at least in principle) be described in terms of the Brownian cographon. Among these, the degree distribution of the Brownian cographon (or to be precise, its \emph{intensity}\footnote{The degree distribution of a graphon is a measure, and therefore that of the Brownian cographon is a \emph{random} measure. Following Kallenberg \cite[Chapter 2]{RandomMeasures}, we call the \emph{intensity} of a random measure $\bm \mu$ the (deterministic) measure $I[\bm \mu]$ defined by $I[\bm \mu](A) = \mathbb{E}[\bm \mu(A)]$ for all measurable sets $A$. In other words, we consider here the ``averaged'' degree distribution of the Brownian cographon, where we average over all realizations of the Brownian cographon.}) is surprisingly nice: it is simply the Lebesgue measure on $[0,1]$. We therefore have the following result, where we denote by $\deg_G(v)$ the degree of a vertex $v$ in a graph $G$. \begin{theorem} \label{th:DegreeRandomVertex} For every $n\geq 1$, let $\bm v$ and $\bm v^u$ be uniform random vertices in $\bm G_n$ and $\bm G_n^u$, respectively.
We have the following convergences in distribution as $n$ tends to $+\infty$: \[\tfrac1n \deg_{\bm G_n}(\bm v) \to U, \qquad \tfrac1n \deg_{\bm G_n^u}(\bm v^u) \to U,\] where $U$ is a uniform random variable in $[0,1]$. \end{theorem} \medskip On the other hand, other graph statistics are not continuous for the graphon topology, and therefore can have different limits in the labeled and unlabeled cases. We illustrate this phenomenon with the \emph{vertex connectivity} $\kappa$ (defined as the minimum number of vertices whose removal disconnects the graph). Our third result is the following. \begin{theorem}\label{thm:DegreeConnectivityIntro} There exist different probability distributions $(\pi_j)_{j\geq 1}$ and $(\pi_j^u)_{j\geq 1}$ such that, for every fixed $j \ge 1$, as $n$ tends to $+\infty$, we have \begin{equation} \mathbb{P}(\kappa(\bm G_n)=j) \to\pi_j, \qquad \mathbb{P}(\kappa(\bm G^u_n)=j) \to \pi^u_j. \end{equation} \end{theorem} Formulas for $\pi_j$ and $\pi_j^u$ are given in \cref{thm:DegreeConnectivity}. \begin{remark} A part of these results (\cref{th:MainTheorem}) has been independently derived in \cite{Benedikt} during the preparation of this paper. The proof method is however different. \end{remark} \subsection{Proof strategy} We first discuss the proof of \cref{th:MainTheorem}. For any graphs $g$ and $G$ of size $k$ and $n$ respectively, we denote by ${\sf Dens}(g,G)$ the number of copies of $g$ in $G$ as induced subgraphs normalized by $n^k$. Equivalently, let $\vec{V}^k$ be a $k$-tuple of i.i.d. uniform random vertices in $G$, then ${\sf Dens}(g,G)= \mathbb{P}(\SubGraph(\vec{V}^k,G)=g)$, where $\SubGraph(I,G)$ is the subgraph of $G$ induced by the vertices of $I$. (All subgraphs in this article are induced subgraphs, and we sometimes omit the word ``induced''.) 
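The probabilistic definition of ${\sf Dens}(g,G)$ can be turned directly into a (naive) computation: enumerate all $n^k$ vertex $k$-tuples and compare the induced subgraph with $g$ up to isomorphism. The sketch below is our own illustration (names and representation conventions are ours), practical only for small $k$ and $n$.

```python
from itertools import combinations, permutations, product

def induced_edges(tup, adj):
    """Edges of the subgraph induced by a vertex tuple (positions 0..k-1).

    Repeated vertices yield unconnected copies, as in the text."""
    k = len(tup)
    return {frozenset((i, j)) for i, j in combinations(range(k), 2)
            if tup[i] != tup[j] and tup[j] in adj[tup[i]]}

def dens(g_edges, k, adj):
    """Exact Dens(g, G) = P(SubGraph(V^k, G) = g) over all n^k tuples.

    g is given, up to isomorphism, by its edge set on {0, ..., k-1};
    adj maps each vertex of G to the set of its neighbours."""
    g_edges = {frozenset(e) for e in g_edges}
    vertices = list(adj)
    hits = 0
    for tup in product(vertices, repeat=k):
        e = induced_edges(tup, adj)
        # unlabeled comparison: try every relabeling of g
        if any({frozenset((p[i], p[j])) for i, j in map(tuple, g_edges)} == e
               for p in permutations(range(k))):
            hits += 1
    return hits / len(vertices) ** k
```

For the triangle $K_3$ and $g$ a single edge ($k=2$), the $6$ tuples of distinct vertices all induce an edge, giving density $6/9 = 2/3$.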
%
By a theorem of Diaconis and Janson~\cite[Theorem 3.1]{DiaconisJanson}, the graphon convergence of any sequence of random graphs $({\bm H}_n)$ is characterized by the convergence of $\mathbb{E}[{\sf Dens}(g,{\bm H}_n)]$ for all graphs $g$. In the case of $\bm G_n$ (the uniform random labeled cograph of size $n$), for any graph $g$ of size $k$, we have \[ \mathbb{E}[{\sf Dens}(g,{\bm G}_n)] = \frac{\left|\left\{ (G,I) : \, {G=(V,E) \text{ labeled cograph of size } n, \atop I \in V^k \text{ and } \SubGraph(I,G)=g} \right\}\right|} {|\{ G \text{ labeled cograph of size } n \}| \cdot n^k}, \] and a similar formula holds in the unlabeled case. In both the labeled and unlabeled cases, the asymptotic behavior of the denominator follows from the encoding of cographs as cotrees, a standard application of the symbolic method of Flajolet and Sedgewick~\cite{Violet} and singularity analysis (see \cref{prop:Asympt_S_Seven,prop:Asympt_U}). The same methods can be used to determine the asymptotic behavior of the numerator, counting cotrees with marked leaves inducing a given subtree. This requires more involved combinatorial decompositions, which are performed in \cref{sec:proofLabeled,sec:unlabeled}. We note that we already used a similar proof strategy in the framework of permutations in~\cite{Nous2}. The adaptation to the case of {\em labeled} cographs does not present major difficulties. The {\em unlabeled} case is however more subtle, since we have to take care of symmetries when marking leaves in cotrees (see the discussion in \cref{ssec:unlabeled_to_labeled} for details). We overcome this difficulty using the $n!$-to-$1$ mapping that maps a pair $(G,a)$ (where $G$ is a labeled cograph and $a$ an automorphism of $G$) to the unlabeled version of $G$. We then make combinatorial decompositions of such pairs $(G,a)$ with marked vertices inducing a given subgraph (or more precisely, of the associated cotrees, with marked leaves inducing a given subtree).
Our analysis shows that symmetries have a negligible influence on the asymptotic behavior of the counting series. This is similar -- though we have a different and more combinatorial presentation -- to the techniques developed in the papers~\cite{PanaStufler,GittenbergerPolya}, devoted to the convergence of unordered unlabeled trees to the \emph{Brownian Continuum Random Tree}. \medskip With \cref{th:MainTheorem} in our hands, proving \cref{th:DegreeRandomVertex} amounts to proving that the intensity of the degree distribution of the Brownian cographon is the Lebesgue measure on $[0,1]$. Rather than working in the continuous setting, we exhibit a discrete approximation $\bm G_n^b$ of the Brownian cographon, which has the remarkable property that the degree of a uniform random vertex in $\bm G_n^b$ is {\em exactly} distributed as a uniform random variable in $\{0,1,\dots,n-1\}$. The latter is proved by purely combinatorial arguments (see \cref{prop:IntensityW12}). \medskip To prove \cref{thm:DegreeConnectivityIntro}, we start with a simple combinatorial lemma, which relates the vertex connectivity of a connected cograph to the sizes of the subtrees attached to the root of its cotree. Based on that, we can again use the symbolic method and singularity analysis as in the proof of \cref{th:MainTheorem}. \subsection{Outline of the paper} \cref{sec:cotrees} explains the standard encoding of cographs by cotrees and the relation between taking induced subgraphs and subtrees. \cref{sec:Graphons} presents the necessary material on graphons; results stated there are quoted from the literature, except the continuity of the degree distribution, for which we could not find a reference. \cref{sec:BrownianCographon} introduces the limit object of \cref{th:MainTheorem}, namely the Brownian cographon. It is also proved that the intensity of its degree distribution is uniform (which is the key ingredient for \cref{th:DegreeRandomVertex}).
\cref{th:MainTheorem,th:DegreeRandomVertex} are proved in \cref{sec:proofLabeled} for the labeled case and in \cref{sec:unlabeled} for the unlabeled case. Finally, \cref{thm:DegreeConnectivityIntro} is proved in \cref{sec:DegreeConnectivity}. \section{Cographs, cotrees and induced subgraphs} \label{sec:cotrees} \subsection{Terminology and notation for graphs} All graphs considered in this paper are \emph{simple} (\emph{i.e.} without multiple edges, nor loops) and not directed. A {\em labeled graph} $G$ is a pair $(V,E)$, where $V$ is its vertex set (consisting of distinguishable vertices, each identified by its label) and $E$ is its edge set. Two labeled graphs $(V,E)$ and $(V',E')$ are isomorphic if there exists a bijection from $V$ to $V'$ which maps $E$ to $E'$. Equivalence classes of labeled graphs for the above relation are {\em unlabeled graphs}. Throughout this paper, the \emph{size} of a graph is its number of vertices. Note that there are finitely many unlabeled graphs with $n$ vertices, so that the uniform random unlabeled graph of size $n$ is well defined. For labeled graphs, there are finitely many graphs with any given vertex set $V$. Hence, to consider a uniform random labeled graph of size $n$, we need to fix a vertex set $V$ of size $n$. The properties we are interested in do not depend on the choice of this vertex set, so that we can choose $V$ arbitrarily, usually $V=\{1,\dots,n\}$. As a consequence, considering a subset (say $\mathcal{C}$) of the set of all graphs, we can similarly define the \emph{uniform random unlabeled graph of size $n$ in $\mathcal{C}$} (resp. the uniform random labeled graph with vertex set $\{1,\dots,n\}$ in $\mathcal{C}$ -- which we simply denote by \emph{uniform random labeled graph of size $n$ in $\mathcal{C}$}). The restricted family of graphs considered in this paper is that of \emph{cographs}. \subsection{Cographs and cotrees} Let $G=(V,E)$ and $G'=(V',E')$ be labeled graphs with disjoint vertex sets. 
We define their \emph{disjoint union} as the graph $(V \uplus V', E \uplus E')$ (the symbol $\uplus$ denoting as usual the disjoint union of two sets). We also define their \emph{join} as the graph $(V \uplus V', E \uplus E' \uplus (V \times V'))$: namely, we take copies of $G$ and $G'$, and add all edges from a vertex of $G$ to a vertex of $G'$. Both definitions readily extend to more than two graphs (adding edges between any two vertices originating from different graphs in the case of the join operation). \begin{definition} A \emph{labeled cograph} is a labeled graph that can be generated from single-vertex graphs by applying join and disjoint union operations. An \emph{unlabeled cograph} is the underlying unlabeled graph of a labeled cograph. \end{definition} It is classical to encode cographs by their \emph{cotrees}. \begin{definition} \label{def:cotree} A labeled \emph{cotree} of size $n$ is a rooted tree $t$ with $n$ leaves labeled from $1$ to $n$ such that \begin{itemize} \item $t$ is not plane (\emph{i.e.} the children of every internal node are not ordered);%
\item every internal node has at least two children; \item every internal node in $t$ is decorated with a $\mathtt{0}$ or a $\mathtt{1}$. \end{itemize} An \emph{unlabeled cotree} of size $n$ is a labeled cotree of size $n$ where we forget the labels on the leaves. \end{definition} \begin{remark} In the literature, cotrees are usually required to satisfy the property that decorations $\mathtt{0}$ and $\mathtt{1}$ should alternate along each branch from the root to a leaf. In several proofs, our work also needs to consider trees in which this alternation assumption is relaxed, hence our choice of diverging from the usual terminology. Cotrees which do satisfy this alternation property are called \emph{canonical cotrees} in this paper (see \cref{def:canonical_cotree}).
\end{remark} For an unlabeled cotree $t$, we denote by $\cograph(t)$ the unlabeled graph defined recursively as follows (see an illustration in \cref{fig:ex_cotree}): \begin{itemize} \item If $t$ consists of a single leaf, then $\cograph(t)$ is the graph with a single vertex. \item Otherwise, the root of $t$ has decoration $\mathtt{0}$ or $\mathtt{1}$ and has subtrees $t_1$, \dots, $t_d$ attached to it ($d \ge 2$). Then, if the root has decoration $\mathtt{0}$, we let $\cograph(t)$ be the {\em disjoint union} of $\cograph(t_1)$, \dots, $\cograph(t_d)$. Otherwise, when the root has decoration $\mathtt{1}$, we let $\cograph(t)$ be the {\em join} of $\cograph(t_1)$, \dots, $\cograph(t_d)$. \end{itemize} Note that the above construction naturally entails a one-to-one correspondence between the leaves of the cotree $t$ and the vertices of its associated graph $\cograph(t)$. Therefore, it maps the size of a cotree to the size of the associated graph. Another consequence is that we can extend the above construction to a \emph{labeled} cotree $t$, and obtain a \emph{labeled} graph (also denoted $\cograph(t)$), with vertex set $\{1,\dots,n\}$: each vertex of $\cograph(t)$ receives the label of the corresponding leaf of $t$. By construction, for all cotrees $t$, the graph $\cograph(t)$ is a cograph. Conversely, each cograph can be obtained in this way, albeit not from a unique tree $t$. It is however possible to find a canonical cotree representing a cograph $G$. This tree was first described in~\cite{corneil}. The presentation of~\cite{corneil}, although equivalent, is however a little bit different, since cographs are generated using exclusively ``complemented unions'' instead of disjoint unions and joins. The presentation we adopt has since been used in many algorithmic papers, see \emph{e.g.}~\cite{Habib,Bretscher}. \begin{definition}\label{def:canonical_cotree} A cotree is \emph{canonical} if every child of a node decorated by $\mathtt{0}$ (resp. 
$\mathtt{1}$) is either decorated by $\mathtt{1}$ (resp. $\mathtt{0}$) or a leaf. \end{definition} \begin{proposition}\label{prop:canonical_cotree} Let $G$ be a labeled (resp. unlabeled) cograph. Then there exists a unique labeled (resp. unlabeled) canonical cotree $t$ such that $\cograph(t)=G$. \end{proposition} Examples of cographs and their canonical cotrees are given in \cref{fig:ex_cotree,fig:all_cographs_4}. From a cograph $G$, the canonical cotree $t$ such that $\cograph(t)=G$ is recursively built as follows. If $G$ consists of a single vertex, $t$ is the unique cotree with a single leaf. If $G$ has at least two vertices, we distinguish cases depending on whether $G$ is connected or not. \begin{itemize} \item If $G$ is not connected, the root of $t$ is decorated with $\mathtt{0}$ and the subtrees attached to it are the canonical cotrees associated with the connected components of $G$. \item If $G$ is connected, the root of $t$ is decorated with $\mathtt{1}$ and the subtrees attached to it are the canonical cotrees associated with the induced subgraphs of $G$ whose vertex sets are those of the connected components of $\bar{G}$, where $\bar{G}$ is the complement of $G$ (the graph on the same vertices with complement edge set). \end{itemize} Important properties of cographs which justify the correctness of the above construction are the following: cographs are stable under taking induced subgraphs and complements, and a cograph $G$ of size at least two is not connected exactly when its complement $\bar{G}$ is connected. \begin{figure}[htbp] \begin{center} \includegraphics[width=8cm]{Cograph_CanonicalTree.pdf} \caption{Left: A labeled canonical cotree $t$ with $8$ leaves. Right: The associated labeled cograph $\cograph(t)$ of size $8$.
} \label{fig:ex_cotree} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \begin{tabular}[t]{| >{\centering\arraybackslash}m{40mm} >{\centering\arraybackslash}m{20mm} >{\centering\arraybackslash}m{20mm}>{\centering\arraybackslash}m{20mm}>{\centering\arraybackslash}m{20mm}>{\centering\arraybackslash}m{20mm} |} \hline & & & & & \\ Unlabeled canonical cotree & \includegraphics[width=18mm]{1234_arbre.pdf} & \includegraphics[width=18mm]{2134_arbre.pdf} & \includegraphics[width=18mm]{1432_arbre.pdf} & \includegraphics[width=18mm]{3124_arbre.pdf} & \includegraphics[width=18mm]{2143_arbre.pdf}\\ Corresponding unlabeled cograph & \includegraphics[width=18mm]{1234_graphe.pdf} & \includegraphics[width=18mm]{2134_graphe.pdf} & \includegraphics[width=18mm]{1432_graphe.pdf} & \includegraphics[width=18mm]{3124_graphe.pdf} & \includegraphics[width=18mm]{2143_graphe.pdf}\\ Number of associated labeled cographs & $1$ & $6$ & $4$ & $12$ & $3$ \\ & & & & & \\ \hline & & & & & \\ Unlabeled canonical cotree & \includegraphics[width=18mm]{4321_arbre.pdf} & \includegraphics[width=18mm]{3421_arbre.pdf} & \includegraphics[width=18mm]{2341_arbre.pdf} & \includegraphics[width=18mm]{3241_arbre.pdf} & \includegraphics[width=18mm]{3412_arbre.pdf}\\ Corresponding unlabeled cograph & \includegraphics[width=18mm]{4321_graphe.pdf} & \includegraphics[width=18mm]{3421_graphe.pdf} & \includegraphics[width=18mm]{2341_graphe.pdf} & \includegraphics[width=18mm]{3241_graphe.pdf} & \includegraphics[width=18mm]{3412_graphe.pdf}\\ Number of associated labeled cographs & $1$ & $6$ & $4$ & $12$ & $3$ \\ & & & & & \\ \hline \end{tabular} \caption{All unlabeled cographs of size $4$ with their corresponding (unlabeled) canonical cotrees and their number of distinct labelings.} \label{fig:all_cographs_4} \end{center} \end{figure} \subsection{Subgraphs and induced trees} Let $G$ be a graph of size $n$ (which may or not be labeled), and let $I=(v_1,\dots,v_k)$ be a $k$-tuple of vertices of $G$. 
Recall that the \emph{subgraph of $G$ induced by $I$}, which we denote by $\SubGraph(I,G)$, is the graph with vertex set $\{v_1,\dots,v_k\}$ and which contains the edge $\{v_i,v_j\}$ if and only if $\{v_i,v_j\}$ is an edge of $G$. In case of repetitions of vertices in $I$, we take as many copies of each vertex as the number of times it appears in $I$ and do not connect copies of the same vertex. We always regard $\SubGraph(I,G)$ as an unlabeled graph. In the case of cographs, the (induced) subgraph operation can also be realized on the cotrees, through {\em induced trees}, which we now present. We start with a preliminary definition. \begin{definition}[First common ancestor]\label{dfn:common_ancestor} Let $t$ be a rooted tree, and $u$ and $v$ be two nodes (internal nodes or leaves) of $t$. The \emph{first common ancestor} of $u$ and $v$ is the node furthest away from the root $\varnothing$ that appears on both paths from $\varnothing$ to $u$ and from $\varnothing$ to $v$ in $t$. \end{definition} For any cograph $G$, and any vertices $i$ and $j$ of $G$, %
the following simple observation allows one to read off, in any cotree encoding $G$, whether $\{i,j\}$ is an edge of $G$. \begin{observation}\label{obs:caract_edge-label1} Let $i \neq j$ be two leaves of a cotree $t$ and $G=\cograph(t)$. We also denote by $i$ and $j$ the corresponding vertices in $G$. Let $v$ be the first common ancestor of $i$ and $j$ in $t$. Then $\{i,j\}$ is an edge of $G$ if and only if $v$ has decoration $\mathtt{1}$ in $t$. \end{observation} \begin{definition}[Induced cotree]\label{dfn:induced_cotree} Let $t$ be a cotree (which may or may not be labeled), and $I=(\ell_1,\dots,\ell_k)$ a $k$-tuple of distinct leaves of $t$, which we call the \emph{marked leaves} of $t$. The \emph{tree induced by $(t,I)$}, denoted $t_I$, is the \textbf{always labeled} cotree of size $k$ defined as follows.
The tree structure of $t_I$ is given by \begin{itemize} \item the leaves of $t_I$ are the marked leaves of $t$;%
\item the internal nodes of $t_I$ are the nodes of $t$ that are first common ancestors of two (or more) marked leaves; \item the ancestor-descendant relation in $t_I$ is inherited from the one in $t$; \item the decoration of an internal node $v$ of $t_I$ is inherited from the one in $t$; \item for each $i\leq k$, the leaf of $t_I$ corresponding to leaf $\ell_i$ in $t$ is labeled $i$ in $t_I$. \end{itemize} \end{definition} We insist on the fact that we \textbf{always} define the induced cotree $t_I$ as a \textbf{labeled} cotree, regardless of whether the original cotree $t$ is labeled or not. The labeling of the induced cotree is related to the order of the marked leaves $I$ (and not to their labels in the case where $t$ is labeled); this echoes the choice of $I$ as a $k$-tuple (\emph{i.e.} an {\em ordered} collection) of distinct leaves. A detailed example of the induced cotree construction is given in \cref{fig:SsArbreInduit}. \begin{figure}[htbp] \begin{center} \includegraphics[width=12cm]{SsArbreInduit.pdf} \end{center} \caption{On the left: A cotree $t$ of size $n=26$, where leaves are indicated both by $\circ$ and $\bullet$. We also fix a $9$-tuple $I=(\ell_1,\dots,\ell_9)$ of marked leaves (indicated by $\bullet$). In green, we indicate the internal nodes of $t$ which are first common ancestors of these $9$ marked leaves. On the right: The labeled cotree $t_I$ induced by the $9$ marked leaves.} \label{fig:SsArbreInduit} \end{figure} \begin{proposition}\label{prop:diagramme_commutatif} Let $t$ be a cotree and $G=\cograph(t)$ the associated cograph. Let $I$ be a $k$-tuple of distinct leaves in $t$, which identifies a $k$-tuple of distinct vertices in $G$. Then, as unlabeled graphs, we have $\SubGraph(I,G) = \cograph(t_I)$.
\end{proposition} \begin{proof} This follows immediately from \cref{obs:caract_edge-label1} and the fact that the induced cotree construction (\cref{dfn:induced_cotree}) preserves first common ancestors and their decorations. \end{proof} \begin{remark} The reader might be surprised by the fact that we choose induced subtrees to be labeled, while induced subgraphs are not. The reasons are the following. The labeling of induced subtrees avoids symmetry problems when decomposing cotrees with marked leaves inducing a given subtree, namely in \cref{th:SerieT_t0,th:V_t_0}. On the other hand, the theory of graphons is suited to consider unlabeled subgraphs. \end{remark} \section{Graphons} \label{sec:Graphons} Graphons are continuum limit objects for sequences of graphs. We present here the theory relevant to our work. We recall basic notions from the literature, following mainly Lovász' book \cite{LovaszBook}, then we recall results of Diaconis and Janson \cite{DiaconisJanson} regarding convergence of random graphs to random graphons. Finally, we prove a continuity result for the degree distribution with respect to graphon convergence. The theory of graphons classically deals with unlabeled graphs. Consequently, unless specified otherwise, all graphs in this section are considered unlabeled. When considering labeled graphs, graphon convergence is to be understood as the convergence of their unlabeled versions. \subsection{The space of graphons}\label{subsec:space_graphons} \begin{definition}\label{defi:Graphon} A graphon is an equivalence class of symmetric functions $[0,1]^2 \to [0,1]$, under the equivalence relation $\sim$, where $w \sim u$ if there exists an invertible measurable and Lebesgue measure preserving function $\phi:[0,1] \to [0,1]$ such that $w(\phi(x),\phi(y)) = u(x,y)$ for almost every $x,y\in[0,1]$. \end{definition} Intuitively, a graphon is a continuous analogue of the adjacency matrix of a graph, viewed up to relabelings of its continuous vertex set. 
\begin{definition}\label{defi:Graphe->Graphon} The graphon $W_G$ associated to a labeled graph $G$ with $n$ vertices (labeled from $1$ to $n$) is the equivalence class of the function $w_G:[0,1]^2\to [0,1]$ where \[w_G(x,y) = A_{\lceil nx\rceil,\lceil ny\rceil} \in \{0,1\}\] and $A$ is the adjacency matrix of the graph $G$. \end{definition} Since any relabeling of the vertex set of $G$ gives the same graphon $W_G$, the above definition immediately extends to unlabeled graphs. We now define the so-called {\em cut metric}, first on functions, and then on graphons. We note that it is different from the usual metrics on spaces of functions ($L^1$, supremum norms, \ldots), see \cite[Chapter 8]{LovaszBook} for details. For a real-valued symmetric function $w$ on $[0,1]^2$, its {\em cut norm} is defined as \[ \| w \|_\Box = \sup_{S,T \subseteq [0,1]} \left| \int_{S \times T} w(x,y) dx dy \right|. \] Identifying as usual functions equal almost everywhere, this is indeed a norm. It induces the following {\em cut distance} on the space of graphons \[ \delta_{\Box}(W,W') = \inf_{w \in W,w' \in W'} \| w -w'\|_\Box.\] While symmetry and the triangle inequality are immediate, this ``distance'' $\delta_{\Box}$ {\em does not} separate points, {\em i.e.} there exist different graphons at distance zero. Call $\widetilde{\mathcal W_0}$ the space of graphons, quotiented by the equivalence relation $W \equiv W'$ if $\delta_{\Box}(W,W') =0$. This is a metric space with distance $\delta_{\Box}$. This definition is justified by the following deep result, see, {\em e.g.}, \cite[Theorem 9.23]{LovaszBook}. \begin{theorem} \label{thm:Graphons_Compact} The metric space $(\widetilde{\mathcal W_0},\delta_{\Box})$ is compact. \end{theorem} In the sequel, we think of graphons as elements in $\widetilde{\mathcal W_0}$ and convergences of graphons are to be understood with respect to the distance $\delta_{\Box}$.
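As a sanity check of \cref{defi:Graphe->Graphon}, the step-function representative $w_G$ is easy to evaluate from the adjacency matrix. The short sketch below is our illustration of the formula $w_G(x,y) = A_{\lceil nx\rceil,\lceil ny\rceil}$ (the function name is ours, and we adopt the harmless convention of mapping the Lebesgue-null set $\{x=0\}$ to index $1$).

```python
from math import ceil

def w_G(A, x, y):
    """Evaluate the step-function representative of the graphon of a graph.

    A is the n x n adjacency matrix (a list of 0/1 lists, 0-indexed);
    the formula uses the 1-based indices i = ceil(n*x), j = ceil(n*y)."""
    n = len(A)
    i = max(ceil(n * x), 1)  # x = 0 is a null set; send it to index 1
    j = max(ceil(n * y), 1)
    return A[i - 1][j - 1]
```

For the single-edge graph on two vertices, $w_G$ equals $1$ exactly on the two off-diagonal quarter squares of $[0,1]^2$.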
\medskip \subsection{Subgraph densities and samples} ~ An important feature of graphons is that one can extend the notion of density of a given subgraph $g$ in a graph $G$ to density in a graphon $W$. Moreover, convergence of graphons turns out to be equivalent to convergence of all subgraph densities. To present this, we start by recalling from the introduction the definition of subgraph densities in graphs. We recall that, if $I$ is a tuple of vertices of $G$, then we write $\SubGraph(I,G)$ for the induced subgraph of $G$ on vertex set $I$. %
\begin{definition}[Density of subgraphs] The density of an (unlabeled) graph $g$ of size $k$ in a graph $G$ of size $n$ (which may or may not be labeled) is defined as follows: let $\vec{V}^k$ be a $k$-tuple of i.i.d. uniform random vertices in $G$, then \begin{align*} {\sf Dens}(g,G) = \mathbb{P}(\SubGraph(\vec{V}^k,G)=g). \end{align*} \end{definition} We now extend this to graphons. Consider a graphon $W$ and one of its representatives $w$. We denote by $\Sample_k(W)$ the unlabeled random graph built as follows: $\Sample_k(W)$ has vertex set $\{v_1,v_2,\dots,v_k\}$ and, letting $\vec{X}^k=(X_1,\dots,X_k)$ be i.i.d. uniform random variables in $[0,1]$, we connect vertices $v_i$ and $v_j$ with probability $w(X_i,X_j)$ (these events being independent, conditionally on $(X_1,\dots,X_k)$). Since the $X_i$'s are independent and uniform in $[0,1]$, the distribution of this random graph is the same if we replace $w$ by a function $w' \sim w$ in the sense of \cref{defi:Graphon}. It turns out that this distribution also stays the same if we replace $w$ by a function $w'$ such that $\|w-w'\|_\Box=0$ (this can be seen as a consequence of \cref{Thm:GraphonsCv_Densities} below), so that the construction is well-defined on graphons.
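The sampling procedure $\Sample_k(W)$ translates directly into a few lines of code, given a representative $w$; the sketch below (our illustration, with our naming) returns the sampled edge set on vertices $0,\dots,k-1$.

```python
import random

def sample_k(w, k, rng=random):
    """W-random graph Sample_k(W): draw X_1, ..., X_k i.i.d. uniform on
    [0, 1], then connect i and j independently with probability
    w(X_i, X_j), conditionally on the X's."""
    xs = [rng.random() for _ in range(k)]
    edges = set()
    for i in range(k):
        for j in range(i + 1, k):
            if rng.random() < w(xs[i], xs[j]):
                edges.add(frozenset((i, j)))
    return edges
```

With the constant graphon $w \equiv p$, this is exactly the Erd\H{o}s--R\'enyi graph $G(k,p)$; the extreme cases $p = 0$ and $p = 1$ give the empty and complete graphs.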
\begin{definition} %
The density of a graph $g=(V,E)$ of size $k$ in a graphon $W$ is \begin{align*} {\sf Dens}(g,W) &= \mathbb{P}(\Sample_k(W)=g)\\ &= \int_{[0,1]^k} \prod_{\{v,v'\} \in E} w(x_v,x_{v'}) \prod_{\{v,v'\} \notin E} (1-w(x_v,x_{v'})) \, \prod_{v \in V} dx_v, \end{align*} where, in the second expression, we choose an arbitrary representative $w$ in $W$. \end{definition} This definition extends that of the density of subgraphs in (finite) graphs in the following sense. For all finite graphs $g$ and $G$, denoting by $\vec{V}^k$ a $k$-tuple of i.i.d. uniform random vertices in $G$, $$ {\sf Dens}(g,W_G) = \mathbb{P}(\Sample_k(W_G)=g) = \mathbb{P}(\SubGraph(\vec{V}^k,G)=g) = {\sf Dens}(g,G). $$ The following theorem is a prominent result in the theory of graphons, see {\em e.g.} \cite[Theorem 11.5]{LovaszBook}. \begin{theorem} \label{Thm:GraphonsCv_Densities} Let $W_n$ (for all $n \geq 0$) and $W$ be graphons. Then the following are equivalent: \begin{enumerate}%
\item [(a)] $(W_n)$ converges to $W$ (for the distance $\delta_{\Box}$); \item [(b)] for any fixed graph $g$, we have ${\sf Dens}(g,W_n) \to {\sf Dens}(g,W)$. \end{enumerate} \end{theorem} Classically, when $(H_n)$ is a sequence of graphs, we say that $(H_n)$ converges to a graphon $W$ when $(W_{H_n})$ converges to $W$. \medskip \subsection{Random graphons} ~ We now discuss convergence of a sequence of random graphs $\bm H_n$ (equivalently, of the associated random graphons $W_{\bm H_n}$) towards a possibly random limiting graphon $\bm W$. In this context, the densities ${\sf Dens}(g,\bm H_n)$ are random variables. This was studied in \cite{DiaconisJanson} and it turns out that it is enough to consider the expectations $\mathbb{E}\big[{\sf Dens}(g,\bm H_n)\big]$ and $\mathbb{E}\big[{\sf Dens}(g,\bm W)\big]$ to extend \cref{Thm:GraphonsCv_Densities} to this random setting.
Note first that $\mathbb{E}\big[{\sf Dens}(g,\bm H_n)\big] = \mathbb{P}(\SubGraph(\vec{V}^k,\bm H_n) = g)$, where both $\bm H_n$ and $\vec{V}^k$ are random, and that similarly $\mathbb{E}\big[{\sf Dens}(g,\bm W)\big]= \mathbb{P}(\Sample_k(\bm W)=g)$, where the randomness comes both from $\bm W$ and the operation $\Sample_k$. A first result states that the distributions of random graphons are characterized by expected subgraph densities. \begin{proposition}[Corollary 3.3 of \cite{DiaconisJanson}]\label{Prop:CaracterisationLoiGraphon} Let ${\bf W},{\bf W'}$ be two random graphons, seen as random variables in $\widetilde{\mathcal W_0}$. The following are equivalent: \begin{itemize} \item ${\bf W}\stackrel{d}{=}{\bf W'}$; \item for every finite graph $g$, $\mathbb{E}[{\sf Dens}(g,{\bf W})] = \mathbb{E}[{\sf Dens}(g,{\bf W'})]$; \item for every $k\geq 1$, $\Sample_k(\bm W) \stackrel d = \Sample_k(\bm W')$. \end{itemize} \end{proposition} The next result, which is essentially \cite[Theorem 3.1]{DiaconisJanson}, characterizes the convergence in distribution of random graphs to random graphons. \begin{theorem} \label{th:Portmanteau} For any $n$, let $\bm H_n$ be a random graph of size $n$. Denote by $W_{\bm H_n}$ the graphon associated to ${\bm H_n}$ by \cref{defi:Graphe->Graphon}. The following assertions are equivalent: \begin{enumerate}% \item [(a)] The sequence of random graphons $(W_{\bm H_n})_n$ converges in distribution to some random graphon $\bm W$. \item [(b)] The random infinite vector $\big({\sf Dens}(g,\bm H_n)\big)_{g\text{ finite graph}}$ converges in distribution in the product topology to some random infinite vector $(\bm \Lambda_g)_{g\text{ finite graph}}$. \item [(c)]For every finite graph $g$, there is a constant $\Delta_g \in [0,1]$ such that \[\mathbb{E}[{\sf Dens}(g,\bm H_n)] \xrightarrow{n\to\infty} \Delta_g.\] \item [(d)] For every $k\geq 1$, denote by $\vec{V'}^k=(V'_1,\dots,V'_k)$ a uniform $k$-tuple of {\em distinct} vertices of $\bm H_n$. 
Then the induced graph $\SubGraph(\vec{V'}^k,\bm H_n)$ converges in distribution to some random graph $\bm g_k$. % \end{enumerate} Whenever these assertions are verified, we have \begin{equation} \label{eq:Lambda_Density} (\bm \Lambda_g)_{g\text{ finite graphs}} \stackrel d = ({\sf Dens}(g,\bm W))_{g\text{ finite graphs}}, \end{equation} and, for every graph $g$ of size $k$, \begin{equation} \Delta_g = \mathbb{E}[\bm \Lambda_g] = \mathbb{E}[{\sf Dens}(g,\bm W)]= \mathbb{P}(\bm g_k = g). \label{eq:Delta_Esper} \end{equation} \end{theorem} Using the identity $\mathbb{E}\big[{\sf Dens}(g,\bm W)\big]= \mathbb{P}(\Sample_k(\bm W)=g)$, we note that \cref{eq:Delta_Esper} implies that, for all $k \ge 1$, we have \begin{equation} \Sample_k(\bm W) \stackrel d = \bm g_k. \label{eq:SampleK_gK} \end{equation} \begin{proof} The equivalence of the first three items, \cref{eq:Lambda_Density} and the first two equalities in \cref{eq:Delta_Esper} are all proved in \cite{DiaconisJanson}; see Theorem 3.1 there. Thus, we only prove (c) $\Leftrightarrow$ (d) and the related equality $\mathbb{P}(\bm g_k = g)=\Delta_g$. For any graphs $g,G$ of respective sizes $k\leq n$, we define their \emph{injective density} $ {\sf Dens}^{\text{inj}}(g,G) = \mathbb{P}(\SubGraph(\vec{V'}^k,G)=g)$, where $\vec{V'}^k$ is a uniform $k$-tuple of {\em distinct} vertices of $G$. As explained in~\cite{DiaconisJanson} (and standard in the graphon literature), Assertion (c) is equivalent, for the same limits $(\Delta_g)$, to its analogue with injective densities, which is: for every graph $g$, \begin{equation} \mathbb{E}[{\sf Dens}^{\text{inj}}(g,\bm H_n)] \xrightarrow{n\to\infty} \Delta_g.
\label{eq:conv_Inj_Dens} \end{equation} Moreover, we note that, if $(\bm H_n)$ is a sequence of random graphs, then, for any graph $g$ of size $k$, \begin{equation} \label{eq:esper_Dens_Inj} \mathbb{E} \big[ {\sf Dens}^{\text{inj}}(g,\bm H_n) \big] = \mathbb{P}(\SubGraph(\vec{V'}^k,\bm H_n)=g), \end{equation} where both $\vec{V'}^k$ and $\bm H_n$ are random. Since $\SubGraph(\vec{V'}^k,\bm H_n)$ takes values in a finite set, its convergence in distribution (Assertion (d)) is equivalent to the convergence of its point probabilities, {\em i.e.} of the right-hand side of \cref{eq:esper_Dens_Inj}. Recalling \cref{eq:conv_Inj_Dens}, this proves the equivalence of Assertions (c) and (d). Furthermore, when these assertions hold, we have \[\mathbb{P}(\bm g_k = g)= \lim_{n \to \infty} \mathbb{P}\big[ \SubGraph(\vec{V'}^k,\bm H_n)=g \big] = \lim_{n \to \infty} \mathbb{E} \big[ {\sf Dens}^{\text{inj}}(g,\bm H_n) \big] = \Delta_g,\] as wanted. \end{proof} We finally collect an immediate corollary. \begin{lemma}\label{lem:sample_converges_to_W} If $\bm W$ is a random graphon, then $W_{\Sample_n(\bm W)}$ converges in distribution to $\bm W$ as $n\to\infty$. \end{lemma} \begin{proof} Recall that $\Sample_n(\bm W)$ is the random graph on vertex set $\{v_1,\cdots,v_n\}$ obtained by taking $X_1$, \ldots, $X_n$ i.i.d. uniform in $[0,1]$ and joining $v_i$ and $v_j$ with probability $w(X_i,X_j)$ where $w$ is a representative of $\bm W$. Fix $k$ in $\{1,\cdots,n\}$. As in the previous theorem, let $\vec{V'}^k=(v_{h_1},\cdots,v_{h_k})$ be a uniform random $k$-tuple of distinct vertices of $\Sample_n(\bm W)$. Then $\SubGraph(\vec{V'}^k,\Sample_n(\bm W))$ is the random graph on vertex set $\{v_{h_1},\cdots,v_{h_k}\}$ obtained by taking $X_{h_1}$, \ldots, $X_{h_k}$ i.i.d. uniform in $[0,1]$ and joining $v_{h_i}$ and $v_{h_j}$ with probability $w(X_{h_i},X_{h_j})$. Up to renaming $v_{h_i}$ as $v_i$ and $X_{h_i}$ as $X_i$, this matches the construction of $\Sample_k(\bm W)$.
Therefore we have the following equality in distribution of random unlabeled graphs: \[\SubGraph(\vec{V'}^k,\Sample_n(\bm W)) \stackrel d= \Sample_k(\bm W).\] Thus, Assertion (d) of \cref{th:Portmanteau} is fulfilled for the graph sequence $(\Sample_n(\bm W))_n$ and for $\bm g_k \stackrel d = \Sample_k(\bm W)$. Therefore Assertion (a) holds and the graphon sequence $(W_{\Sample_n(\bm W)})_n$ has a limit in distribution $\bm W'$, which satisfies, for all $k$ (see \cref{eq:SampleK_gK}): \[\Sample_k(\bm W') \stackrel d = \bm g_k \stackrel d = \Sample_k(\bm W). \] From \cref{Prop:CaracterisationLoiGraphon}, we have $\bm W \stackrel d= \bm W'$, concluding the proof of the lemma. \end{proof} \subsection{Graphons and degree distribution} In this section, we consider the degree distribution of a graphon, as introduced by Diaconis, Holmes and Janson in \cite[Section 4]{diaconis2008threshold}. This defines a continuous functional from the space of graphons to that of probability measures on $[0,1]$ (equipped with the weak topology) \cite[Theorem 4.2]{diaconis2008threshold}. For completeness, we provide a proof of this simple fact here. Our approach is different from that of Diaconis, Holmes and Janson: we prove that the functional is 2-Lipschitz with respect to natural metrics, while they express moments of the degree distribution in terms of subgraph densities to prove the continuity. At the end of the section, we also discuss the degree distribution of random graphs and random graphons. This is a preparation for \cref{ssec:degdistrib} where we shall study the degree distribution of random cographs and of the Brownian cographon. We also note that other works \cite{BickelChenLevina,DegreeGraphonsDelmas} study the degree distributions of specific random graph models and their convergence to that of their graphon limit.
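For a finite graph, the objects of this subsection are elementary to compute: the degree distribution of $W_G$ (defined formally below) is the empirical measure of the rescaled degrees $\deg_G(v)/n$, and for two empirical measures on $[0,1]$ with the same number of atoms, the Wasserstein distance reduces to sorting. A minimal sketch, with function names of our own choosing:

```python
def rescaled_degrees(adj):
    """Sorted rescaled degree sequence deg(v)/n of a graph given by its
    adjacency matrix; the empirical measure of these values is the degree
    distribution of the associated graphon W_G."""
    n = len(adj)
    return sorted(sum(row) / n for row in adj)

def wasserstein_1d(xs, ys):
    """Wasserstein distance between two empirical measures on [0,1] with
    the same number of atoms: mean absolute difference of sorted samples."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

For instance, for the cycle $C_4$ all rescaled degrees equal $1/2$, for the complete graph $K_4$ they all equal $3/4$, so the Wasserstein distance between the two degree distributions is $1/4$.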
The {\em degree distribution} of a graphon $W$ is the measure $D_{W}$ on $[0,1]$ defined as follows (see \cite[Theorem 4.4]{diaconis2008threshold}): for every continuous bounded function $f : [0,1] \to \mathbb{R} $, we have \[ \int_{[0,1]}f(x) D_{W}(dx) = \int_{[0,1]} f\left( \int_{[0,1]} w(u,v)dv \right) du,\] where $w$ is, as usual, an arbitrary representative of $W$ (the resulting measure does not depend on the chosen representative). For the graphon $W_G$ associated to a graph $G$ of size $n$, the measure $D_{W_G}$ is simply the empirical distribution of the rescaled degrees: \[D_{W_G} = \frac 1 {n}\sum_{v\in G} \delta_{\deg_G(v)/n}\] where $\delta_u$ is the Dirac measure concentrated at $u$. The next lemma implies that graphon convergence entails weak convergence of degree distributions. To state a more precise result, it is useful to endow the space $\mathcal M_1([0,1])$ of Borel probability measures on $[0,1]$ with the so-called Wasserstein metric (see {\em e.g.} \cite[Section 1.2]{Ross_Survey_Stein}), defined as \[d_{\mathrm{Wass}}(\mu,\nu)= \sup_f \left|\int_{[0,1]}f(x) \mu(dx) - \int_{[0,1]}f(x) \nu(dx)\right|, \] where the supremum runs over all $1$-Lipschitz functions $f$ from $[0,1]$ to $\mathbb R$. We recall that this distance metrizes weak convergence (see \emph{e.g.} \cite[Sec. 8.3]{Bogachev}). \begin{lemma}\label{lem:cv-degree-distribution} The map $W\mapsto D_W$ from $(\widetilde{\mathcal W_0},\delta_{\Box})$ to $(\mathcal M_1([0,1]),d_{\mathrm{Wass}})$ is 2-Lipschitz. Consequently, if $(W_n)$ converges to $W$ in $\widetilde{\mathcal W_0}$, then the sequence of associated measures $(D_{W_n})$ converges weakly to $D_W$. \end{lemma} \begin{proof} Let $W$ and $W'$ be graphons with representatives $w$ and $w'$. Let $f:[0,1] \to \mathbb R$ be 1-Lipschitz.
We have \begin{align*} \left|\int_{[0,1]}f(x) D_{W}(dx) - \int_{[0,1]}f(x) D_{W'}(dx)\right| &= \left|\int_{[0,1]}\left[f\left( \int_{[0,1]} w(u,v)dv \right) - f\left( \int_{[0,1]} w'(u,v)dv \right)\right]du\right| \\ &\leq \int_{[0,1]} \left| \int_{[0,1]} (w(u,v)- w'(u,v))dv\right|du\\ &= \int_{S} \int_{[0,1]} (w(u,v)- w'(u,v))dvdu -\int_{[0,1]\setminus S} \int_{[0,1]} (w(u,v)- w'(u,v))dvdu, \end{align*} where $S = \left\{u\in[0,1] : \int_{[0,1]} (w(u,v)- w'(u,v))dv \geq 0 \right\}$. But, from the definition of $\lVert \cdot \rVert_\Box$, each of the two summands has modulus bounded by $\|w-w'\|_\Box$. Taking the supremum over all $1$-Lipschitz functions $f$, we get \[d_{\mathrm{Wass}}(D_W, D_{W'}) \le 2\|w-w'\|_\Box, \] which ends the proof by definition of $\delta_{\Box}$, since the choice of representatives $w,w'$ was arbitrary. \end{proof} Remark that when $\bm W$ is a random graphon, $D_{\bm W}$ is a random measure. We recall, see {\em e.g.} \cite[Lemma 2.4]{RandomMeasures}, that given a random measure $\bm \mu$ on some space $B$, its {\em intensity measure} $I[\bm \mu]$ is the deterministic measure on $B$ defined by $I[\bm \mu](A) = \mathbb{E}[\bm \mu(A)]$ for any measurable subset $A$ of $B$. To get an intuition of what $I[D_{\bm W}]$ is for a random graphon $\bm W$, it is useful to consider the case where $\bm W=W_{\bm G}$ is the graphon associated with a random graph $\bm G$ of size $n$. In this case, for any measurable subset $A$ of $[0,1]$, \[ D_{W_{\bm G}}(A)= \mathbb{P}(\, \tfrac1n \deg_{\bm G}(\bm v) \in A\ |\ \bm G),\] where $\bm v$ is a uniform random vertex in $\bm G$. Therefore \[I[D_{W_{\bm G}}](A)=\mathbb{E}\big[D_{W_{\bm G}}(A) \big] = \mathbb{P}(\, \tfrac1n \deg_{\bm G}(\bm v) \in A),\] so that $I[D_{W_{\bm G}}]$ is the law of the normalized degree of a uniform random vertex $\bm v$ in the random graph $\bm G$. We sum up the results of this section into the following proposition.
\begin{proposition}\label{prop:degree} Let $\bm H_n$ be a random graph of size $n$ for every $n$, and $\bm W$ be a random graphon, such that $W_{\bm H_n} \xrightarrow[n\to\infty]{d} \bm W$. Then we have the following convergence in distribution of random measures: \begin{equation*} \frac 1 n \sum_{v\in \bm H_n} \delta_{\deg_{\bm H_n} (v)/n} \xrightarrow[n\to\infty]{d} D_{\bm W}. \end{equation*} Furthermore, denoting by $\bm v_n$ a uniform random vertex in $\bm H_n$ and by $\bm Z$ a random variable with law $I[D_{\bm W}]$, \begin{equation*} \tfrac 1 n \deg_{\bm H_n} (\bm v_n) \xrightarrow[n\to\infty]{d} \bm Z . \end{equation*} \end{proposition} \begin{proof} From \cref{lem:cv-degree-distribution}, we immediately obtain $D_{W_{\bm H_n}} \stackrel d\to D_{\bm W}$, which is, by definition of $D_{W_G}$, exactly the first of the stated convergences. The second one follows from the first, combining Lemma 4.8 and Theorem 4.11 of \cite{RandomMeasures}\footnote{Theorem 4.11 tells us that if random measures $(\bm \xi_n)$ converge in distribution to $\bm \xi$ then, for any compactly supported continuous function $f$, we have $\bm \xi_n f \xrightarrow[n\to\infty]{d}\bm \xi f$. But since those variables are bounded (by $\|f\|_\infty$), this convergence also holds in $L^1$, \emph{i.e.} $\bm \xi_n \stackrel{L^1}{\to} \bm \xi$ in the notation of \cite{RandomMeasures}. By Lemma 4.8, this implies the convergence of the corresponding intensity measures.}. \end{proof} \section{The Brownian cographon} \label{sec:BrownianCographon} \subsection{Construction} Let $ \scalebox{1.1}{$\mathfrak{e}$}$ denote a Brownian excursion of length one. We start by recalling a technical result on the local minima of $ \scalebox{1.1}{$\mathfrak{e}$}$: the first two assertions below are well-known; we refer to \cite[Lemma 2.3]{MickaelConstruction} for the last one. \begin{lemma} With probability one, the following assertions hold.
First, all local minima of $ \scalebox{1.1}{$\mathfrak{e}$}$ are strict, and hence form a countable set. Moreover, the values of $ \scalebox{1.1}{$\mathfrak{e}$}$ at two distinct local minima are different. Finally, there exists an enumeration $(b_i)_i$ of the local minima of $ \scalebox{1.1}{$\mathfrak{e}$}$, such that for every $i\in \mathbb N, x,y\in[0,1]$, the event $\{b_i \in (x,y), \scalebox{1.1}{$\mathfrak{e}$}(b_i) = \min_{[x,y]} \scalebox{1.1}{$\mathfrak{e}$}\}$ is measurable. \end{lemma} Let $\bm S^p = (\bm s_1,\ldots)$ be a sequence of i.i.d. random variables in $\{\mathtt{0},\mathtt{1}\}$, independent of $ \scalebox{1.1}{$\mathfrak{e}$}$, with $\mathbb{P}(\bm s_1 = \mathtt{0}) = p$ (in the sequel, we simply speak of i.i.d. decorations of bias $p$). We call $( \scalebox{1.1}{$\mathfrak{e}$},\bm S^p)$ a \textit{decorated Brownian excursion}, thinking of the decoration $\bm s_i$ as attached to the local minimum $b_i$. For $x,y\in[0,1]$, we define $\Dec(x,y; \scalebox{1.1}{$\mathfrak{e}$},\bm S^p)$ to be the decoration of the minimum of $ \scalebox{1.1}{$\mathfrak{e}$}$ on the interval $[x,y]$ (or $[y,x]$ if $y\le x$; we shall not repeat this precision below). If this minimum is not unique or is attained at $x$ or $y$, and is therefore not a local minimum, $\Dec(x,y; \scalebox{1.1}{$\mathfrak{e}$},\bm S^p)$ is ill-defined and we take the convention $\Dec(x,y; \scalebox{1.1}{$\mathfrak{e}$},\bm S^p)=\mathtt{0}$. Note however that, for uniform random $x$ and $y$, this happens with probability $0$, so that the object constructed in \cref{def:BrownianCographon} below is independent of this convention.
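The map $\Dec$ has a straightforward discrete analogue, which may help intuition: replace the excursion by a finite sequence of heights and attach decorations to positions. The following sketch (our own toy code, not from the paper) implements the same convention, returning $\mathtt{0}$ when the minimum on the interval is not unique or is attained at an endpoint:

```python
def dec(e, decorations, x, y):
    """Discrete analogue of Dec(x, y; e, S): the decoration attached to
    the position of the minimum of the excursion e on the interval [x, y].
    Convention: return 0 if the minimum is not unique or is attained at an
    endpoint of the interval (hence is not a local minimum)."""
    if y < x:
        x, y = y, x
    window = e[x:y + 1]
    m = min(window)
    positions = [x + i for i, h in enumerate(window) if h == m]
    if len(positions) != 1 or positions[0] in (x, y):
        return 0
    return decorations.get(positions[0], 0)
```

For instance, for $e=(0,1,2,1,2,1,0)$ with decoration $\mathtt{1}$ attached to position $3$, the interval $[2,4]$ has a unique interior minimum at position $3$, so its decoration is returned; on $[1,3]$ the minimum is attained at both endpoints, so the convention yields $\mathtt{0}$. The symmetric function $(x,y)\mapsto \mathrm{dec}(e,\cdot,x,y)$ then plays the role of $\bm w^p$ below.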
\begin{definition}\label{def:BrownianCographon} The Brownian cographon $\bm W^p$ of parameter $p$ is the equivalence class of the random function\footnote{Of course, in the image set of $\bm w^p$, the \emph{real values} $0$ and $1$ correspond to the \emph{decorations} $\mathtt{0}$ and $\mathtt{1}$ respectively.} $$ \begin{array}{ r c c c} \bm w^p: & [0,1]^2 &\to& \{0,1\};\\ & (x,y) & \mapsto & \Dec(x,y; \scalebox{1.1}{$\mathfrak{e}$},\bm S^p). \end{array} $$ \end{definition} In most of this article, we are interested in the case $p=1/2$; in particular, as claimed in \cref{th:MainTheorem}, $\bm W^{1/2}$ is the limit of uniform random (labeled or unlabeled) cographs, justifying its name. \subsection{Sampling from the Brownian cographon} We now compute the distribution of the random graph $\Sample_k(\bm W^p)$. \begin{proposition} \label{prop:CaracterisationBrownianCographon} If $\bm W^p$ is the Brownian cographon of parameter $p$, then for every $k\geq 2$, $\Sample_k(\bm W^p)$ is distributed like the unlabeled version of ${\sf Cograph}(\bm b^p_k)$, where the cotree $\bm b^p_k$ is a uniform labeled binary tree with $k$ leaves equipped with i.i.d. decorations of bias $p$. \end{proposition} Let us note that $\bm b^p_k$ is not necessarily a canonical cotree. \begin{proof} We use a classical construction (see \cite[Section 2.5]{LeGall}) which associates to an excursion $e$ and real numbers $x_1,\cdots,x_k$ a plane tree, denoted $\mathrm{Tree}(e;x_1,\ldots,x_k)$, which has the following properties: \begin{itemize} \item its leaves are labeled with $1,\cdots,k$ and correspond to $x_1, \ldots, x_k$ respectively; \item its internal nodes correspond to the local minima of $e$ on intervals $[x_i,x_j]$; \item the first common ancestor of the leaves $i$ and $j$ corresponds to the local minimum of $e$ on $[x_i,x_j]$. \end{itemize} The tree $\mathrm{Tree}(e;x_1,\ldots,x_k)$ is well-defined with probability $1$ when $e$ is a Brownian excursion and $x_1,\cdots,x_k$ i.i.d.
uniform random variables in $[0,1]$. Moreover, in this setting, it has the distribution of a uniform random plane and labeled binary tree with $k$ leaves \cite[Theorem 2.11]{LeGall}. Forgetting the plane structure, it is still uniform among binary trees with $k$ labeled leaves, because the number of plane embeddings of a \emph{labeled binary tree} depends only on its size. We now let $( \scalebox{1.1}{$\mathfrak{e}$},\bm S)$ be a decorated Brownian excursion, and $X_1, \ldots, X_k$ denote a sequence of i.i.d. uniform random variables in $[0,1]$, independent from $( \scalebox{1.1}{$\mathfrak{e}$},\bm S)$. We make use of the decorations $\bm S$ of the local minima of $ \scalebox{1.1}{$\mathfrak{e}$}$ to turn $\mathrm{Tree}( \scalebox{1.1}{$\mathfrak{e}$};X_1,\ldots,X_k)$ into a cotree. Namely, since its internal nodes correspond to local minima of $ \scalebox{1.1}{$\mathfrak{e}$}$, we can simply report these decorations on the tree, and we get a decorated tree $\mathrm{Tree}_{\mathtt{0}/\mathtt{1}}( \scalebox{1.1}{$\mathfrak{e}$},\bm S;X_1,\ldots,X_k)$. When the decorations in $\bm S$ are i.i.d. of bias $p$, then $\mathrm{Tree}_{\mathtt{0}/\mathtt{1}}( \scalebox{1.1}{$\mathfrak{e}$},\bm S;X_1,\ldots,X_k)$ is a uniform labeled binary tree with $k$ leaves, equipped with i.i.d. decorations of bias $p$. Finally, recall that $\Sample_k(\bm W^p)$ is built by considering $X_1,\ldots,X_k$ i.i.d. uniform in $[0,1]$ and connecting vertices $v_i$ and $v_j$ if and only if $\bm w^p(X_i,X_j)=1$ (since a representative $\bm w^p$ of $\bm W^p$ takes values in $\{0,1\}$, there is no extra randomness here). By definition of $\bm w^p$, $\bm w^p(X_i,X_j)=1$ means that the decoration of the minimum of $ \scalebox{1.1}{$\mathfrak{e}$}$ on $[X_i, X_j]$ is $\mathtt{1}$.
But, by construction of $\mathrm{Tree}_{\mathtt{0}/\mathtt{1}}( \scalebox{1.1}{$\mathfrak{e}$},\bm S;X_1,\ldots,X_k)$, this decoration is that of the first common ancestor of the leaves $i$ and $j$ in $\mathrm{Tree}_{\mathtt{0}/\mathtt{1}}( \scalebox{1.1}{$\mathfrak{e}$},\bm S;X_1,\ldots,X_k)$. So it is equal to $\mathtt{1}$ if and only if $i$ and $j$ are connected in the associated cograph (see \cref{obs:caract_edge-label1}). Summing up, we get the equality of unlabeled random graphs \begin{equation*} \Sample_k(\bm W^p) = {\sf Cograph} \big( \mathrm{Tree}_{\mathtt{0}/\mathtt{1}}( \scalebox{1.1}{$\mathfrak{e}$},\bm S;X_1,\ldots,X_k) \big), \label{eq:Sample_Wp} \end{equation*} ending the proof of the proposition. \end{proof} \subsection{Criterion of convergence to the Brownian cographon} The results obtained so far yield a very simple criterion for convergence to the Brownian cographon. For simplicity, and since this is the only case we need in the present paper, we state it only in the case $p=1/2$. \begin{lemma}\label{lem:factorisation} Let $\bm t^{(n)}$ be a random cotree of size $n$ for every $n$ (which may be labeled or not). For $n\geq k\geq 1$, denote by $\bm t^{(n)}_k$ the subtree of $\bm t^{(n)}$ induced by a uniform $k$-tuple of distinct leaves. Suppose that for every $k$ and for every labeled binary cotree $t_0$ with $k$ leaves, \begin{equation}\mathbb{P}(\bm t^{(n)}_k=t_0) \xrightarrow[n\to\infty]{} \frac{(k-1)!}{(2k-2)!}.\label{eq:factorisation}\end{equation} Then $W_{{\sf Cograph}(\bm t^{(n)})}$ converges as a graphon to $\bm W^{1/2}$. \end{lemma} \begin{proof} We first remark that $\frac{(k-1)!}{(2k-2)!} = \frac 1 {\mathrm{card}(\mathcal C_k)}$, where $\mathcal C_k$ is the set of labeled binary cotrees with $k$ leaves. Indeed the number of plane labeled binary trees with $k$ leaves is given by $k!\,\mathrm{Cat}_{k-1}$ where $\mathrm{Cat}_{k-1}=\frac{1}{k}\binom{2k-2}{k-1}$ is the $(k-1)$-th Catalan number.
Decorations on internal nodes induce a multiplication by a factor $2^{k-1}$, while considering non-plane trees yields a division by the same factor in order to avoid symmetries. Therefore $\mathrm{card}(\mathcal{C}_k)=k!\,\mathrm{Cat}_{k-1} = \frac{(2k-2)!}{(k-1)!}$. Consequently, \cref{eq:factorisation} states that $\bm t^{(n)}_k$ converges in distribution to a uniform element of $\mathcal C_k$. Moreover, a uniform element of $\mathcal C_k$ is distributed as $\bm b^{1/2}_k$ where $\bm b^{1/2}_k$ is a uniform labeled binary tree with $k$ leaves equipped with i.i.d. decorations of bias ${1/2}$. % Hence, as $n$ tends to $+\infty$, we have the following convergence of random labeled graphs of size $k$, \[{\sf Cograph}(\bm t^{(n)}_k) \stackrel d \rightarrow {\sf Cograph}(\bm b^{1/2}_k).\] Forgetting the labeling, the left-hand side is $\SubGraph(\vec{V'}^k,{\sf Cograph}(\bm t^{(n)}))$, where $\vec{V'}^k$ is a uniform tuple of $k$ distinct vertices of ${\sf Cograph}(\bm t^{(n)})$; see the definition of $\bm t^{(n)}_k$ in the statement of the lemma and \cref{prop:diagramme_commutatif}. Moreover, thanks to \cref{prop:CaracterisationBrownianCographon}, forgetting again the labeling, the right-hand side has the same distribution as $\Sample_k(\bm W^{1/2})$. This proves the lemma, using \cref{th:Portmanteau} (namely, the implication $(d) \Rightarrow (a)$, and \cref{eq:SampleK_gK} together with \cref{Prop:CaracterisationLoiGraphon} to identify the limit in item (a) with $\bm W^{1/2}$). \end{proof} \subsection{The degree distribution of the Brownian cographon}\label{ssec:degdistrib} In this section we are interested in the degree distribution $D_{\bm W^p}$ of the Brownian cographon. It turns out that, in the special case $p=1/2$, the intensity $I[D_{\bm W^{1/2}}]$ is particularly simple. \begin{proposition}\label{prop:IntensityW12} $I[D_{\bm W^{1/2}}] \stackrel{d}{=} U$, where $U$ is the Lebesgue measure on $[0,1]$.
\end{proposition} \begin{proof} Rather than working in the continuous setting, we exhibit a discrete approximation $\bm G_n^b$ of the Brownian cographon, which has the remarkable property that the degree of a uniform random vertex $\bm v_n$ in $\bm G_n^b$ is {\em exactly} distributed as a uniform random variable in $\{0,1,\cdots,n-1\}$. To construct $\bm G_n^b$, we let $\bm b_n$ be a uniform $\mathtt{0}/\mathtt{1}$-decorated plane labeled binary tree with $n$ leaves. Forgetting the plane structure, it is still uniform among labeled binary cotrees with $n$ leaves. Set $\bm G_n^b={\sf Cograph}(\bm b_n)$. From \cref{prop:CaracterisationBrownianCographon}, $\bm G_n^b$ has the same distribution as $\Sample_n(\bm W^{1/2})$, so that $W_{\bm G_n^b}$ converges in distribution to $\bm W^{1/2}$ (\cref{lem:sample_converges_to_W}). Consider a uniform random vertex $\bm v_n$ in $\bm G_n^b$. Thanks to \cref{prop:degree}, $\mathrm{Law}\big(\tfrac 1 n \deg_{\bm G_n^b}(\bm v_n)\big)$ converges to $I[D_{\bm W^{1/2}}]$. Proving the following claim will therefore conclude the proof of the proposition. \medskip {\em Claim.} The law of $\deg(\bm v_n)$ in $\bm G_n^b$ is the uniform law on $\{0,1,\cdots,n-1\}$. \smallskip {\em Proof of the claim.} We start by defining two operations for deterministic $\mathtt{0}/\mathtt{1}$-decorated plane labeled binary trees $b$. \begin{itemize} \item First, we consider a (seemingly unnatural\footnote{This order is actually very natural if we interpret $b$ as the separation tree of a separable permutation (see \cite{Nous1} for the definition). It is simply the value order on the elements of the permutation.}) order on the leaves of $b$. To compare two leaves $\ell$ and $r$, we look at their first common ancestor $u$ and assume w.l.o.g. that $\ell$ and $r$ are descendants of its left and right children, respectively. If $u$ has decoration $\mathtt{0}$, we declare $\ell$ to be smaller than $r$; if it has decoration $\mathtt{1}$, then $r$ is smaller than $\ell$.
It is easy to check that this defines a total order on the leaves of $b$ (if we flip the left and right subtrees of internal nodes with decoration $\mathtt{1}$, this is simply the left-to-right depth-first order of the leaves). We write $\rank_b(\ell)$ for the rank of a leaf $\ell$ in this order. \item Second, we define an involution $\Phi$ on the set of $\mathtt{0}/\mathtt{1}$-decorated plane labeled binary trees $b$ with a distinguished leaf $\ell$. We keep the undecorated structure of the tree, and simply flip the decorations of all the ancestors of $\ell$ which have $\ell$ as a descendant {\em of their right child}. This gives a new decorated plane labeled binary tree $b'$ and we set $\Phi(b,\ell)=(b',\ell)$. \end{itemize} Consider $b$ as above, with two distinguished leaves $\ell$ and $\tilde{\ell}$. The corresponding vertices $v$ and $\tilde{v}$ in $G={\sf Cograph}(b)$ are connected if and only if the first common ancestor $u$ of $\ell$ and $\tilde{\ell}$ in $b$ has decoration $\mathtt{1}$. Setting $\Phi(b,\ell)=(b',\ell)$, this happens in two cases: \begin{itemize} \item either $\ell$ is a descendant of the left child of $u$, and $u$ has decoration $\mathtt{1}$ in $b'$; \item or $\ell$ is a descendant of the right child of $u$, and $u$ has decoration $\mathtt{0}$ in $b'$; \end{itemize} This corresponds exactly to $\ell$ being bigger than $\tilde{\ell}$ in the order associated to $b'$. Consequently, $\deg_G(v)$ is the number of leaves smaller than $\ell$ in that order, \emph{i.e.} \begin{equation} \deg_G(v) =\rank_{b'}(\ell)-1. \label{eq:deg_rank} \end{equation} Recall now that $\bm G_n^b={\sf Cograph}(\bm b_n)$, where $\bm b_n$ is a uniform $\mathtt{0}/\mathtt{1}$-decorated plane labeled binary tree with $n$ leaves. The uniform random vertex $\bm v_n$ in $\bm G_n^b$ corresponds to a uniform random leaf $\bm \ell_n$ in $\bm b_n$. Set $(\bm b'_n,\bm \ell_n)=\Phi(\bm b_n,\bm \ell_n)$. 
Since $\Phi$ is an involution, $(\bm b'_n,\bm \ell_n)$ is a uniform $\mathtt{0}/\mathtt{1}$-decorated plane labeled binary tree of size $n$ with a uniform random leaf $\bm \ell_n$. Conditionally on $\bm b'_n$, the rank $\rank_{\bm b'_n}(\bm \ell_n)$ is a uniform random variable in $\{1,\cdots,n\}$. The same holds taking $\bm b'_n$ at random, and, using \cref{eq:deg_rank}, we conclude that $\deg_{\bm G_n^b}(\bm v_n)$ is a uniform random variable in $\{0,\cdots,n-1\}$. This proves the claim, and hence the proposition. \end{proof} \begin{remark} It seems likely that this result can also be proved by working only in the continuous setting. In particular, using a result of Bertoin and Pitman \cite[Theorem 3.2]{BertoinPitman}, the degree $D(x)=\int_y \bm W^{1/2}(x,y) dy$ of a uniform random $x$ in $[0,1]$ in the Brownian cographon corresponds to the cumulative length of half of the excursions of a Brownian bridge. \end{remark} \section{Convergence of labeled cographs to the Brownian cographon}\label{sec:proofLabeled} In this section, we are interested in labeled cographs with $n$ vertices, which are in one-to-one correspondence with labeled canonical cotrees with $n$ leaves (\cref{prop:canonical_cotree}). To study these objects, we use the framework of labeled combinatorial classes, as presented in the seminal book of Flajolet and Sedgewick \cite[Chapter II]{Violet}. In this framework, an object of size $n$ has $n$ atoms, which are labeled bijectively with integers from $1$ to $n$. For us, the atoms are simply the leaves of the trees, which is consistent with \cref{def:cotree}. We will also consider (co)trees with marked leaves and, here, more care is needed. Indeed, in some instances, those marked leaves have a label (and thus should be seen as atoms and counted in the size of the objects), while, in other instances, they do not have a label (and are therefore not counted in the size of the object).
To make the distinction, we will refer to marked leaves of the latter type (\emph{i.e.} without labels) as {\em blossoms} and reserve {\em marked leaves} for those carrying labels. \subsection{Enumeration of labeled canonical cotrees} Let $\mathcal{L}$ be the family of non-plane labeled rooted trees in which internal nodes have at least two children. % For $n\geq 1$, let $\ell_n$ be the number of trees with $n$ leaves in $\mathcal{L}$. Let $L(z)$ denote the corresponding exponential generating function: $$ L(z)=\sum_{n\geq 1} \frac{\ell_n}{n!}z^n. $$ \begin{proposition}\label{prop:SeriesS} The series $L(z)$ is the unique formal power series without constant term that is a solution of \begin{equation}\label{eq:SerieS} L(z)=z+\exp(L(z))-1-L(z). \end{equation} \end{proposition} \begin{proof}% (This series is treated in \cite[Example VII.12 p.472]{Violet}.) A tree in $\mathcal{L}$ consists of \begin{itemize} \item either a single leaf (counted by $z$); \item or a root to which is attached an unordered sequence of at least two trees of $\mathcal{L}$ (counted by $\sum_{k\geq 2}L^k/k!= e^L-1-L$). \end{itemize} This justifies that $L(z)$ satisfies \cref{eq:SerieS}. The uniqueness is straightforward, since \cref{eq:SerieS} determines for every $n$ the coefficient of $z^n$ in $L(z)$ from those of $z^k$ for $k<n$. \end{proof} Computing the first coefficients, we find $$ L(z)=z+\frac{z^2}{2!}+4\frac{z^3}{3!}+26\frac{z^4}{4!}+236\frac{z^5}{5!}+2752\frac{z^6}{6!}+\mathcal{O}(z^7). $$ These numbers correspond to the fourth Schr\"oder problem (see \href{http://oeis.org/A000311}{Sequence A000311} in \cite{SloaneSchroder}).\medskip Let $m_n$ be the number of labeled canonical cotrees with $n$ leaves. We have $m_1=1$ and $m_n=2\,\ell_n$ for $n\geq 2$.
Indeed, to each tree of $\mathcal{L}$ containing internal nodes (\emph{i.e.}, with at least two leaves) there correspond two canonical cotrees: one with the root decorated by $\mathtt{0}$ and one with the root decorated by $\mathtt{1}$ (the other decorations are then determined by the alternation condition). The exponential generating series $M(z)=\sum_{n\geq 1} \frac{m_n}{n!}z^n$ of labeled canonical cotrees (or equivalently of labeled cographs) thus satisfies $M(z)=2L(z)-z$. Combining this with \cref{prop:SeriesS}, we have \begin{equation}\label{eq:Lien_T_expS} M(z)=\exp(L(z))-1. \end{equation} It is standard (and easy to see) that the series $$ L'(z)=\sum_{n\geq 1} \frac{\ell_n}{(n-1)!}z^{n-1} \quad \text{ and } \quad L^\bullet(z)=zL'(z)=\sum_{n\geq 1} \frac{\ell_n}{(n-1)!}z^{n} $$ count trees of $\mathcal{L}$ with a blossom or a marked leaf, respectively. In the subsequent analysis, we need to consider the generating function $L^\text{even}$ (resp. $L^\text{odd}$) counting trees of $\mathcal{L}$ having a blossom at even (resp. odd) distance from the root. Obviously, $L^\text{even}+L^\text{odd} =L'$. \begin{proposition}\label{prop:SeriesSeven} We have the following identities: \begin{align} L^\text{even}&=\frac{1}{e^L(2-e^L)}, \label{eq:SevenExplicite}\\ L^\text{odd}&=\frac{e^L-1}{e^L(2-e^L)}. \label{eq:SoddExplicite} \end{align} \end{proposition} \begin{proof}% We first claim that \begin{equation}\label{eq:SevenSoddImplicite} \begin{cases} L^\text{even}&=1+ (e^L-1)L^\text{odd},\\ L^\text{odd}&= (e^L-1)L^\text{even}. \end{cases} \end{equation} We prove the first identity; the second one is proved similarly.
A tree counted by $L^\text{even}$ is \begin{itemize} \item either reduced to a blossom (therefore the tree has size $0$, \emph{i.e.} is counted by $1$); \item or has a root to which are attached \begin{itemize} \item a tree with a blossom at odd height (counted by $L^\text{odd}$), and \item an unordered sequence of at least one unmarked tree (counted by $\sum_{k\geq 1}L^k/k!= e^L-1$). \end{itemize} \end{itemize} We obtain the proposition by solving \cref{eq:SevenSoddImplicite}. \end{proof} \smallskip \subsection{Enumeration of canonical cotrees with marked leaves inducing a given cotree} For a labeled (not necessarily canonical) cotree $t_0$ of size $k$, we consider the family $\mathcal{M}_{t_0}$ of tuples $\left(t;\ell_1,\dots,\ell_k\right)$ where \begin{itemize} \item $t$ is a labeled canonical cotree; \item $(\ell_1,\dots,\ell_k)$ is a $k$-tuple of distinct leaves of $t$; \item the subtree of $t$ induced by $(\ell_1,\dots,\ell_k)$ is $t_0$. \end{itemize} We denote by $M_{t_0}$ the associated exponential generating function. \begin{theorem}\label{th:SerieT_t0} Let $t_0$ be a labeled cotree with $k$ leaves. Denote by $n_v$ its number of internal nodes, by $n_=$ its number of edges of the form $\mathtt{0}-\mathtt{0}$ or $\mathtt{1}-\mathtt{1}$, and by $n_{\neq}$ its number of edges of the form $\mathtt{0}-\mathtt{1}$ or $\mathtt{1}-\mathtt{0}$. We have the identity \begin{equation}\label{Eq:ComptageLabeled} M_{t_0} = (L') (\exp(L))^{n_v} (L^\bullet)^k (L^\text{odd})^{n_=} (L^\text{even})^{n_{\neq}}. \end{equation} \end{theorem} \begin{figure}[htbp] \begin{center} \includegraphics[width=10cm]{SqueletteStyliseCographe.pdf} \caption{On the left: a (non-canonical) labeled cotree $t_0$ of size $5$.
On the right: a schematic view of a canonical cotree in $\mathcal{M}_{t_0}$.} \label{fig:SqueletteStyliseCographe} \end{center} \end{figure} \begin{proof}% \emph{(Main notations of the proof are summarized in \cref{fig:SqueletteStyliseCographe}.)} Let $\left(t;\ell_1,\dots,\ell_k\right) \in \mathcal{M}_{t_0}$. There is a correspondence between the nodes of $t_0$ and some nodes of $t$, mapping leaves to marked leaves and internal nodes to first common ancestors of marked leaves. These first common ancestors of marked leaves in $t$ will be referred to as {\em branching nodes} below. In order to prove \cref{Eq:ComptageLabeled} we will decompose each such $t$ into subtrees, called {\em pieces}, of five different types: \emph{pink, blue, yellow, green} and \emph{gray} (see the color coding\footnote{We apologize to the readers to whom only a black-and-white version was available.} in \cref{fig:SqueletteStyliseCographe}). Our decomposition has the following property: to reconstruct an element of $\mathcal{M}_{t_0}$, we can choose each piece independently in a set depending on its color only, so that the generating series of $\mathcal{M}_{t_0}$ can be written as a product of the generating series of the pieces. In this decomposition, there is exactly one gray piece obtained by pruning $t$ at the node $r$ of $t$ corresponding to the root of $t_0$. In this piece, $r$ is replaced by a blossom. We note that, by definition of the induced cotree, the decoration of $r$ has to match that of the root of $t_0$. Since decorations in canonical cotrees must alternate, this determines all decorations in the gray piece. Possible choices for the gray piece are therefore counted by the same series as undecorated trees with a blossom, that is $L'$. For the rest of the decomposition, we consider branching nodes $v$ of $t$ (including $r$), and look at all children $w$ of such nodes $v$.
\begin{itemize} \item If such a node $w$ has exactly one descendant (possibly, $w$ itself) which is a marked leaf, we build a piece, colored yellow, by taking the fringe subtree rooted at $w$. Yellow pieces are labeled canonical cotrees with one marked leaf. However, the decoration within the yellow piece is forced by the alternation of decorations in $t$ and by the decoration of the parent $v$ of $w$, which has to match the decoration of the corresponding node in $t_0$ (see \cref{fig:SqueletteStyliseCographe}). So the generating function for yellow pieces is $L^\bullet$. Of course, we have a yellow piece for each marked leaf of $t$, \emph{i.e.} for each leaf of $t_0$. \item If a child $w$ of a branching node in $t$ has at least two marked leaves among its descendants, it must also have a descendant (possibly equal to $w$) that is a branching node. We define $v'$ as the branching node descending from $w$ (possibly equal to it) which is the closest to $w$. This implies that the node of $t_0$ corresponding to $v'$ (denoted $v'_0$) is a child of the one corresponding to $v$ (denoted $v_0$). We build a piece rooted at $w$, which corresponds to the edge $(v_0,v'_0)$ of $t_0$. This piece is the fringe subtree rooted at $w$ pruned at $v'$, {\em i.e.} where $v'$ is replaced by a blossom. We color it blue if the blossom is at odd distance from $w$, pink otherwise. The generating functions for blue and pink pieces are therefore $L^{\text{odd}}$ and $L^{\text{even}}$, respectively (since again all decorations in the piece are determined by that of $v_0$). Because of the alternation of decoration, the piece is blue if and only if $w$ and $v'$ have different decorations in $t$, which happens if and only if $v$ and $v'$ (or equivalently, $v_0$ and $v'_0$) have the same decoration.
We therefore have a blue piece for each internal edge of $t_0$ whose extremities have the same decoration, and a pink piece for each internal edge of $t_0$ whose extremities have different decorations. \item All other nodes $w$ have no marked leaf among their descendants. We group all such nodes $w$ that are siblings to build a single green piece, attached to their common parent $v$. Namely, for each branching node $v$, we consider all its children $w$ having no marked leaf as a descendant (possibly, there are none), and we define the green piece attached to $v$ as the (possibly empty) forest of fringe subtrees of $t$ rooted at these nodes $w$. Green pieces are forests, \emph{i.e.} unordered sets of labeled canonical cotrees. The decoration within the green piece is forced by the alternation of decorations in $t$ and by the decoration of $v$, which as before has to match the decoration of the corresponding node in $t_0$. Therefore, choosing a green piece amounts to choosing an unordered set of undecorated trees in $\mathcal L$. We conclude that possible choices for each green piece are counted by $e^L$. Finally, we recall that there is one (possibly empty) green piece for each branching node of $t$, \emph{i.e.} for each internal node of $t_0$. \end{itemize} Since $t_0$ is a labeled cotree, leaves / internal nodes / edges of $t_0$ can be ordered in a canonical way. Since yellow (resp. green, resp. blue and pink) pieces in the above decomposition are indexed by leaves (resp. internal nodes, resp. edges) of $t_0$, they can be ordered in a canonical way as well. Moreover, the correspondence between marked trees $(t;\ell_1,\cdots,\ell_k)$ in $\mathcal{M}_{t_0}$ and tuples of colored pieces is one-to-one. This completes the proof of \cref{Eq:ComptageLabeled}. \end{proof} \subsection{Asymptotic analysis} Following Flajolet and Sedgewick \cite[p.
389]{Violet}, we say that a power series is $\Delta$-analytic if it is analytic in some $\Delta$-domain $\Delta(\phi,\rho)$, where $\rho$ is its radius of convergence. This is a technical hypothesis, which enables one to apply the transfer theorem \cite[Corollary VI.1 p. 392]{Violet}; all series in this paper are $\Delta$-analytic.% \begin{proposition}\label{prop:Asympt_S_Seven} The series $L(z)$ has radius of convergence $\rho=2\log(2)-1$ and is $\Delta$-analytic. Moreover, the series $L$ is convergent at $z=\rho$ and we have \begin{equation}\label{eq:Asympt_S} L(z)\underset{z\to \rho}{=}\log(2)-\sqrt{\rho}\sqrt{1-\tfrac{z}{\rho}} +\mathcal{O}(1-\tfrac{z}{\rho}). \end{equation} \end{proposition} \begin{proof}% Using \cref{prop:SeriesS}, \cref{prop:Asympt_S_Seven} is a direct application of \cite[Theorem 1]{MathildeMarniCyril}. \end{proof} It follows from \cref{prop:Asympt_S_Seven} that $L'$, $\exp(L)$, $L^{\text{even}}$ and $L^{\text{odd}}$ also have radius of convergence $\rho=2\log(2)-1$, are all $\Delta$-analytic and that their behaviors near $\rho$ are \begin{equation} L'(z) \underset{z\to \rho}{\sim} \frac{1}{2\sqrt{\rho}}\left(1-\tfrac{z}{\rho}\right)^{-1/2} \label{eq:asymLprime}; \qquad \exp(L(z)) \underset{z\to \rho}{\sim} 2; % \end{equation} \begin{equation} L^{\text{even}}(z) \underset{z\to \rho}{\sim} \frac{1}{4\sqrt{\rho}}\left(1-\tfrac{z}{\rho}\right)^{-1/2}; % \qquad L^{\text{odd}}(z) \underset{z\to \rho}{\sim} \frac{1}{4\sqrt{\rho}}\left(1-\tfrac{z}{\rho}\right)^{-1/2}. \label{eq:asymLodd} \end{equation} Indeed, the first estimate follows from \cref{eq:Asympt_S} by singular differentiation \cite[Thm. VI.8 p. 419]{Violet}, while the third and fourth ones are simple computations using \cref{eq:SevenExplicite} and \cref{eq:SoddExplicite}. \medskip \subsection{Distribution of induced subtrees of uniform cotrees} We take a uniform labeled canonical cotree $\bm t^{(n)}$ with $n$ leaves.
We also choose uniformly at random a $k$-tuple $(\bm\ell_1,\cdots,\bm\ell_k)$ of distinct leaves of $\bm t^{(n)}$. Equivalently, $(\bm t^{(n)};\bm\ell_1,\cdots,\bm\ell_k)$ is chosen uniformly at random among labeled canonical cotrees of size $n$ with $k$ marked leaves. We denote by $\mathbf{t}^{(n)}_k$ the labeled cotree induced by the $k$ marked leaves. \begin{proposition}\label{prop:proba_arbre} Let $k\geq 2$, and let $t_0$ be a labeled binary cotree with $k$ leaves. Then \begin{equation}\label{eq:proba_asymptotique_t0} \mathbb{P} (\mathbf{t}^{(n)}_k = t_0) \underset{n\to+\infty}{\to} \displaystyle{\frac{(k-1)!}{(2k-2)!}}. % \end{equation} \end{proposition} \begin{proof} We fix a labeled binary cotree $t_0$ with $k$ leaves. From the definitions of $\mathbf{t}^{(n)}_k$, $M$ and $M_{t_0}$ we have for $n \geq k$ (we use the standard notation $[z^n]A(z)$ for the $n$-th coefficient of a power series $A$) \begin{equation} \mathbb{P}(\mathbf{t}^{(n)}_k=t_0) = \frac{n! [z^{n}] M_{t_0}(z)}{n \cdots (n-k+1)\, n! [z^n] M(z)}. \label{eq:probat0_quotient} \end{equation} Indeed, the denominator counts the total number of labeled canonical cotrees $(t;\ell_1,\cdots,\ell_k)$ of size $n$ with $k$ marked leaves. The numerator counts those tuples for which $(\ell_1,\cdots,\ell_k)$ induces the subtree $t_0$. The quotient is therefore the desired probability. % % % By \cref{th:SerieT_t0}, and using the notation introduced therein, we have $$ M_{t_0}= (L') (\exp(L))^{n_v} (L^\bullet)^k (L^\text{odd})^{n_=} (L^\text{even})^{n_{\neq}}. $$ Since $t_0$ is binary, we have $n_v=k-1$ and $n_= +n_{\neq}=k-2$. We now consider the asymptotics around $z=\rho$.
Using \cref{eq:asymLprime} and \eqref{eq:asymLodd} and recalling that $L^\bullet(z)=zL'(z)$, we get \begin{align*} M_{t_0}(z) &\underset{z\to \rho}{\sim} \rho^k \left(\frac{1}{2\sqrt{\rho}}\left(1-\tfrac{z}{\rho}\right)^{-1/2}\right)^{k+1} 2^{k-1} \left(\frac{1}{4\sqrt{\rho}}\left(1-\tfrac{z}{\rho}\right)^{-1/2}\right)^{k-2}\\ &\underset{z\to \rho}{\sim} \frac{{\rho}^{1/2}}{2^{2k-2}} \left(1-\tfrac{z}{\rho}\right)^{-(k-1/2)}. \end{align*} By the transfer theorem (\cite[Corollary VI.1 p.392]{Violet}) we obtain $$ [z^{n}] M_{t_0}(z) \underset{n\to +\infty}{\sim} \frac{{\rho}^{1/2}}{2^{2k-2}\rho^{n}} \frac{n^{k-3/2}}{\Gamma(k-1/2)} = \frac{(k-1)!}{\sqrt{\pi}(2k-2)!} \frac{n^{k-3/2}}{\rho^{n-1/2}}. $$ Applying again the transfer theorem to $M(z)=2L(z)-z$ whose asymptotics is given in \cref{eq:Asympt_S}, we also have $$ n(n-1)\dots (n-k+1) [z^n] M(z) \underset{n\to +\infty}{\sim} n^k (-2\sqrt{\rho})\frac{n^{-3/2}}{\rho^n \Gamma(-1/2)} \sim \frac{n^{k-3/2}}{\rho^{n-1/2} \sqrt{\pi}}. $$ Finally, $ \mathbb{P}(\mathbf{t}^{(n)}_k=t_0) \to \frac{(k-1)!}{(2k-2)!}. $ \end{proof} \subsection{Proof of \cref{th:MainTheorem,th:DegreeRandomVertex} in the labeled case}\label{SousSec:PreuveEtiquete} \ Since labeled canonical cotrees and labeled cographs are in bijection, ${\sf Cograph}(\bm t^{(n)})$ is a uniform labeled cograph of size $n$, \emph{i.e.} is equal to $\bm G_n$ in distribution. Thus \cref{th:MainTheorem} follows from \cref{lem:factorisation} and \cref{prop:proba_arbre}. \cref{th:DegreeRandomVertex} is now a consequence of \cref{th:MainTheorem}, combined with \cref{prop:degree,prop:IntensityW12}. \section{Convergence of unlabeled cographs to the Brownian cographon} \label{sec:unlabeled} \subsection{Reducing unlabeled canonical cotrees to labeled objects} \label{ssec:unlabeled_to_labeled} In this section, we are interested in unlabeled cographs. They are in one-to-one correspondence with unlabeled canonical cotrees. 
We denote by $\overline{\mathcal V}$ the class of unlabeled canonical cotrees and by $\overline{\mathcal U}$ the class of rooted non-plane unlabeled trees with no unary nodes, counted by the number of leaves. If $V$ and $U$ are their respective ordinary generating functions, then clearly, $V(z) = 2U(z)-z$. The class $\overline{\mathcal U}$ may be counted using the multiset construction and the P\'olya exponential \cite[Thm. I.1]{Violet}: a tree of $\overline{\mathcal U}$ is either a single leaf or a multiset of cardinality at least $2$ of trees of $\overline{\mathcal U}$, yielding the following equation: \begin{equation} \label{eq:U} U(z) = z + \exp\left( \sum_{r\geq 1} \frac 1 r\ U(z^r) \right) -1 - U(z). \end{equation} As in the labeled case, we want to count the number of pairs $(t,I)$ where $t$ is a cotree of $\overline{\mathcal V}$ with $n$ leaves, and $I$ is a $k$-tuple of leaves of $t$ (considered labeled by the order in which they appear in the tuple), such that the subtree induced by $I$ in $t$ is a given labeled cotree $t_0$. \medskip To that end, we would need to refine \cref{eq:U} to count trees with marked leaves inducing a given subtree, in the same spirit as in \cref{th:SerieT_t0}. There is however a major difficulty here, which we now explain. There are two ways of looking at tuples of marked leaves in unlabeled trees. \begin{itemize} \item We consider pairs $(t,I)$, where $t$ is a labeled tree and $I$ a $k$-tuple of leaves of $t$. Then we look at orbits $\overline{(t,I)}$ of such pairs under the natural relabeling action. \item Or we first consider orbits $\overline{t}$ of labeled trees under the relabeling action, {\em i.e.} unlabeled trees. For each such orbit we fix a representative and consider pairs $(\overline{t},I)$, where $I$ is a $k$-tuple of leaves of the representative of $\overline{t}$.
\end{itemize} In the second model, every unlabeled tree has exactly $\binom{n}{k}$ marked versions, which is not the case in the first model\footnote{\emph{E.g.}, the tree with three leaves all attached to the root, two of which are marked, has only one marked version in the first model.}. Consequently, if we take an element uniformly at random in the second model, the underlying unlabeled tree is a uniform unlabeled tree, while this property does not hold in the first model. Our goal is to study uniform random unlabeled cographs of size $n$, where we next choose a uniform random $k$-tuple of leaves. This corresponds exactly to the second model. The problem is that combinatorial decompositions of unlabeled combinatorial classes are suited to study the first model (unlabeled objects are orbits of labeled objects under relabeling). In particular, \cref{th:SerieT_t0} has an easy analogue for counting unlabeled trees with marked leaves inducing a given labeled cotree in the first sense, but not in the second sense. To overcome this difficulty, we consider the following labeled combinatorial class: \[ {\mathcal U} = \{ (t,a) : t\in \mathcal L, a \text{ a root-preserving automorphism of }t\}\] where $\mathcal{L}$ is the family of non-plane labeled rooted trees in which internal nodes have at least two children, studied in \cref{sec:proofLabeled}. We define the size of an element $(t,a)$ of ${\mathcal U}$ as the number of leaves of $t$. This set is relevant because of the following easy but key observation. % \begin{proposition}% Let $\Phi$ denote the operation of forgetting both the labels and the automorphism. Then, $\Phi({\mathcal U})= \overline {\mathcal U}$ and every $t\in \overline {\mathcal U}$ of size $n$ has exactly $n!$ preimages by $\Phi$. 
As a result, the ordinary generating series $U\!$ of $\,\overline {\mathcal U}$ equals the exponential generating function of ${\mathcal U}$ and the image by $\Phi$ of a uniform random element of size $n$ in ${\mathcal U}$ is a uniform random element of size $n$ in $ \overline {\mathcal U}$. \end{proposition} \begin{proof} The number of preimages of $t\in \overline {\mathcal U}$ is the number of automorphisms of $t$ times the number of distinct labelings of $t$, which equals $n!$ by the orbit-stabilizer theorem. The other claims follow immediately. \end{proof} Working with ${\mathcal U}$ instead of $ \overline {\mathcal U}$ solves the issue raised above concerning marking, since we have labeled objects. However, the additional structure (the automorphism) has to be taken into account in combinatorial decompositions; this turns out to be tractable (at least asymptotically). \subsection{Combinatorial decomposition of $\mathcal{U}$} We first describe a method for decomposing pairs $(t,a)$ in $\mathcal U$ at the root of $t$, which explains \emph{combinatorially} why the exponential generating function $U$ of $\mathcal U$ satisfies \cref{eq:U}. This combinatorial interpretation of \cref{eq:U} is necessary for the refinement with marked leaves done in the next section. Let $(t,a)\in {\mathcal U}$. Then $t$ is a non-plane rooted labeled tree with no unary nodes and $a$ is one of its root-preserving automorphisms. Assuming $t$ is of size at least two, we denote by $v_1,\ldots, v_d$ the children of the root, and $t_1,\ldots, t_d$ the fringe subtrees rooted at these nodes, respectively. Because $a$ is a root-preserving automorphism, it preserves the set of children of the root, hence there exists a permutation $\pi\in \Sn_d$ such that $a(v_i) = v_{\pi(i)}$ for all $1\leq i \leq d$. Moreover, we have necessarily $a(t_i) = t_{\pi(i)}$ for all $1\leq i \leq d$. Let $\pi = \prod_{s=1}^p c_s$ be the decomposition of $\pi$ into disjoint cycles, including cycles of length one.
Let $c_s = (i_1,\ldots,i_r)$ be one of them. We consider the forest $t(c_s)$ formed by the trees $t_{i_1},\ldots,t_{i_r}$. Then the pair $(t(c_s),a_{|t(c_s)})$ lies in the class $\mathcal C_r$ of pairs $(f,a)$, where $f$ is a forest of $r$ trees and $a$ an automorphism of $f$ acting cyclically on the components of $f$. The tree $t$ can be recovered by adding a root to $\biguplus_{s=1}^p t(c_s)$. Moreover, $a$ is clearly determined by $(a_{|t(c_s)})_{1\leq s\leq p}$. So we can recover $(t,a)$ knowing $(t(c_s),a_{|t(c_s)})_{1\leq s\leq p}$. Recall that the cycles $c_s$ indexing the latter vector are the cycles of the permutation $\pi$, which has size at least $2$ (the root of $t$ has degree at least $2$). Since permutations $\pi$ are sets of cycles, we get the following decomposition of $\mathcal U$ (using as usual $\mathcal Z$ for the atomic class, representing here the single tree with one leaf): \begin{equation} \label{eq:Dec_U_Cr} \mathcal U= \mathcal Z \, \uplus \, \textsc{Set}_{\ge 1}\Big(\biguplus_{r\geq 1} \,\mathcal C_r\Big) \setminus \mathcal C_1. \end{equation} We then relate $\mathcal C_r$ to $\mathcal U$ to turn \cref{eq:Dec_U_Cr} into a recursive equation. Let $(f,a)$ be an element of $\mathcal C_r$, and $t$ be one of the components of $f$. We write $f=\{t_1,\cdots,t_r\}$ such that $t_1=t$ and $a$ acts on these components by $t_1 \overset a \to t_2 \overset a \to \cdots \overset a \to t_r \overset a \to t_1$ (this numbering of the components of $f$ is uniquely determined by $t$). We then encode $(f,a)$ by a unique tree $\widehat t$ isomorphic to $t_1$, with multiple labelings, \emph{i.e.} each leaf $\widehat v\in \widehat t$, corresponding to $v\in t_1$, is labeled by $(v,a(v), a^2(v),\ldots, a^{r-1}(v))$. Finally, $a^r$ induces an automorphism of $\widehat t$.
Consequently, $(\widehat t, a^r)$ is an element of the combinatorial class $\mathcal U \circ \mathcal Z^r$, \emph{i.e.} an element of $\mathcal U$ % where each atom (here, each leaf of the tree) carries a vector of $r$ labels; the size of an element of $\mathcal U \circ \mathcal Z^r$ is the total number of labels, \emph{i.e.} $r$ times the number of leaves of $\widehat t$. The forest $f$ and its marked component $t$ are trivially recovered from $(\widehat t, a^r)$. Since a forest automorphism is determined by its values on leaves, we can recover $a$ as well. This construction defines a size-preserving bijection between triples $(f,a,t)$, where $(f,a)$ is in $\mathcal C_r$ and $t$ one of the components of $f$, and elements of $\mathcal U \circ \mathcal Z^r$. Forgetting the marked component $t$, this defines an $r$-to-$1$ size-preserving correspondence from $\mathcal U \circ \mathcal Z^r$ to $\mathcal C_r$. Together with \cref{eq:Dec_U_Cr}, this gives the desired combinatorial explanation of the fact that the exponential generating function of $\mathcal U$ satisfies \cref{eq:U}. \medskip We now introduce the combinatorial class $\mathcal D$ of pairs $(t,a)$ in $\mathcal U$ with size $\geq 2$ such that no child of the root is fixed by the automorphism. This means that there is no cycle of size $1$ in the above decomposition of $\pi$ into cycles. Therefore, the exponential generating function of $\mathcal D$ satisfies \begin{equation} \label{eq:D} D(z) = \exp\left( \sum_{r\geq 2} \frac 1 r\ U(z^r) \right) -1 .% \end{equation} Note that introducing the series $D$ is classical when applying the method of singularity analysis on unlabeled unrooted structures (aka P\'olya structures), see, {\em e.g.}, \cite[p 476]{Violet}. However, interpreting it combinatorially with objects of $\mathcal D$ is not standard, but necessary for our purpose. In the sequel, for $k\geq 0$, we write $\exp_{\geq k}(z) = \sum_{j\geq k} \frac {z^j}{j!}$.
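Though not needed for the proofs, \cref{eq:U} and \cref{eq:D} are easy to check numerically. The following Python sketch (ours, for illustration only; all names are ours) computes the first coefficients of $U$ by a fixed-point iteration with exact rational arithmetic, recovers the classical counting sequence $1,1,2,5,12,33,\dots$ of unlabeled series-reduced rooted trees by number of leaves (together with $V=2U-z$ for unlabeled cographs), and verifies the rewriting $U = z + \exp_{\geq 2}(U)+ D\exp(U)$ termwise.

```python
from fractions import Fraction

N = 10  # all series are truncated modulo z^(N+1)

def series_exp(a):
    """exp(a) for a truncated power series with a[0] = 0, via b' = a'b."""
    b = [Fraction(0)] * (N + 1)
    b[0] = Fraction(1)
    for n in range(1, N + 1):
        b[n] = sum(k * a[k] * b[n - k] for k in range(1, n + 1)) / n
    return b

def series_mul(a, b):
    c = [Fraction(0)] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

def polya_sum(u, rmin):
    """sum_{r >= rmin} u(z^r)/r, truncated."""
    s = [Fraction(0)] * (N + 1)
    for r in range(rmin, N + 1):
        for i in range(1, N // r + 1):
            s[i * r] += u[i] / r
    return s

# Fixed point of U = z + exp(sum_{r>=1} U(z^r)/r) - 1 - U: each pass
# determines one further coefficient, so N+1 passes give exact values.
U = [Fraction(0)] * (N + 1)
for _ in range(N + 1):
    e = series_exp(polya_sum(U, 1))
    U = [Fraction(i == 1) - Fraction(i == 0) + e[i] - U[i] for i in range(N + 1)]

# u_n = number of unlabeled series-reduced rooted trees with n leaves
assert U[1:9] == [1, 1, 2, 5, 12, 33, 90, 261]
# v_n = coefficients of V = 2U - z (unlabeled canonical cotrees/cographs)
V = [2 * U[n] - Fraction(n == 1) for n in range(N + 1)]
assert V[1:9] == [1, 2, 4, 10, 24, 66, 180, 522]

# Termwise check of U = z + exp_{>=2}(U) + D*exp(U)
D = series_exp(polya_sum(U, 2))
D[0] -= 1
expU = series_exp(U)
exp_ge2_U = [expU[i] - Fraction(i == 0) - U[i] for i in range(N + 1)]
DE = series_mul(D, expU)
assert U == [Fraction(i == 1) + exp_ge2_U[i] + DE[i] for i in range(N + 1)]
```

Exact \texttt{Fraction} arithmetic avoids any floating-point ambiguity in the comparisons; the choice of $N$ is arbitrary.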
Algebraic manipulations from \cref{eq:U} allow us to rewrite the equation for $U$ as \begin{equation} \label{eq:U2} U = z + \exp_{\geq 2}(U)+ D\exp(U). \end{equation} Moreover, \cref{eq:U2} has a combinatorial interpretation. Indeed, pairs $(t,a)$ in $\mathcal U$ of size at least $2$ can be split into two families as follows. \begin{itemize} \item The first family consists of pairs $(t,a)$, for which all children of the root are fixed by the automorphism $a$; adapting the above combinatorial argument, we see that the generating series of this family is $\exp_{\geq 2}(U)$ (recall that the root has at least 2 children). \item The second family consists of pairs $(t,a)$, where some children of the root are moved by the automorphism $a$. Taking the root, its children moved by $a$ and their descendants gives a tree $t_1$ such that $(t_1,a_{|t_1})$ is in $\mathcal D$. Each child $c$ of the root fixed by $a$ with its descendants forms a tree $t_c$ such that $(t_c,a_{|t_c})$ is in $\mathcal U$. We have a (possibly empty) unordered set of such children. Therefore, elements in this second family are described as pairs consisting of an element of $\mathcal D$ and a (possibly empty) unordered set of elements of $\mathcal U$, so that the generating series of this second family is $D\exp(U)$. \end{itemize} Bringing the two cases together, we obtain a combinatorial interpretation of \cref{eq:U2}. Again, this combinatorial interpretation will be important later, when refining with marked leaves. \medskip We can now turn to defining the combinatorial classes that will appear in our decomposition. Similarly to the case of labeled cographs, we will need to consider objects of $\mathcal U$ (recall that these are \emph{labeled} objects) where some leaves are marked. Here again, we need to distinguish between marked leaves carrying a label (contributing to the size of the objects) and marked leaves not carrying any label (not counted in the size).
We keep the terminology of our section on labeled cographs: marked leaves of the latter type (\emph{i.e.} without labels) are called \emph{blossoms}, and we reserve the term {\em marked leaves} for those carrying labels. We let $\mathcal{U}^\bullet$ (resp. $\mathcal U'$) be the combinatorial class of pairs $(t,a)$ in $\mathcal U$ with a marked leaf (resp. blossom) in $t$. Their exponential generating functions are respectively $zU'(z)$ and $U'(z)$ (the derivative of $U(z)$). We also define $\mathcal U^\star \subset \mathcal U'$ as the class of pairs $(t,a)$ in $\mathcal U$ with a blossom in $t$ {\em which is fixed by $a$}. Finally, we decompose $\mathcal U^\star$ as $\mathcal U^\star = \mathcal U^{\mathrm{even}} \uplus \mathcal U^{\mathrm{odd}}$, according to the parity of the distance from the root to the blossom. We denote by $U^\star$, $U^{\mathrm{even}}$ and $U^{\mathrm{odd}}$ the exponential generating functions of these classes, respectively. \begin{proposition} We have the following equations: \begin{align} & \hspace*{1cm} U^\star \, \, = \ 1 + U^\star\exp_{\geq 1}(U)+ U^\star D\exp(U), \label{eq:Ustar}\\ & \begin{cases} U^{\mathrm{even}} &= \ 1 + U^{\mathrm{odd}}\exp_{\geq 1}(U)+ U^{\mathrm{odd}} D\exp(U),\\ U^{\mathrm{odd}} &= \ U^{\mathrm{even}}\exp_{\geq 1}(U)+ U^{\mathrm{even}} D\exp(U). \label{eq:Uevenodd} \end{cases} \end{align} \end{proposition} \begin{proof} Note that if a blossom is required to be fixed by the automorphism, then all of its ancestors are also fixed by the automorphism. Then the equation for $U^\star$ is obtained by the same decomposition as for \cref{eq:U2}, imposing that the blossom belongs to a subtree attached to a child of the root which is fixed by the automorphism. The other two equations follow immediately.
\end{proof} \subsection{Enumeration of canonical cotrees with marked leaves inducing a given cotree} We first define ${\mathcal V}$ as the class of pairs $(t,a)$, where $t$ is a labeled canonical cotree and $a$ a root-preserving automorphism of $t$. As for ${\mathcal U}$ and $ \overline {\mathcal U}$, we have an $n!$-to-$1$ map from ${\mathcal V}$ to $ \overline {\mathcal V}$ and $V$ can be seen either as the ordinary generating function of $ \overline {\mathcal V}$ or as the exponential generating function of $\mathcal V$.% We would like to find a combinatorial decomposition of pairs in $\mathcal V$ with marked leaves inducing a given cotree. It turns out that it is simpler and sufficient for us to work with a smaller class, which we now define. \begin{definition} Let $t_0$ be a labeled cotree of size $k$. Let $\mathcal V_{t_0}$ be the class of tuples $(t,a;\ell_1,\dots,\ell_k)$, where $(t,a)$ is in $\mathcal V$ and $\ell_1$, \dots, $\ell_k$ are distinct leaves of $t$ (referred to as {\em marked leaves}) such that \begin{itemize} \item the marked leaves induce the subtree $t_0$; \item the following nodes are fixed by $a$: all first common ancestors of the marked leaves, and their children leading to a marked leaf. \end{itemize} \label{Def:V_t_0} \end{definition} We note that, because of the second item in the above definition, not all tuples $(t,a;\ell_1,\dots,\ell_k)$ (where $(t,a)$ is in $\mathcal V$ and $\ell_1$, \dots, $\ell_k$ are leaves of $t$) belong to some $\mathcal V_{t_0}$. However, we will see below (as a consequence of \cref{prop:proba_arbre_unlabeled}) that asymptotically almost all tuples $(t,a;\ell_1,\dots,\ell_k)$ do belong to some $\mathcal V_{t_0}$ (even if we restrict to binary cotrees $t_0$, as in the previous section). Let $V_{t_0}$ be the exponential generating series of $\mathcal V_{t_0}$; it is given by the following result.
\begin{theorem}\label{th:V_t_0} Let $t_0$ be a labeled cotree with $k$ leaves, $n_v$ internal nodes, $n_=$ edges of the form $\mathtt{0}-\mathtt{0}$ or $\mathtt{1}-\mathtt{1}$, $n_{\neq}$ edges of the form $\mathtt{0}-\mathtt{1}$ or $\mathtt{1}-\mathtt{0}$. We have the identity $$ V_{t_0}=(U^\star)(2U+1-z)^{n_v} (U^\bullet)^k (U^\mathrm{odd})^{n_=} (U^\mathrm{even})^{n_{\neq}}. $$ \end{theorem} \begin{proof} Let $(t,a;\ell_1,...,\ell_k)$ be an element of $\mathcal V_{t_0}$. % % % The tree $t$ with its marked leaves $\ell_1,...,\ell_k$ can be decomposed in a unique way as in the proof of \cref{th:SerieT_t0} into pieces: pink trees, blue trees, yellow trees, gray trees and green forests. If a node of $t$ is fixed by the automorphism $a$, then the set of its descendants is stable under $a$. Therefore, the second item of \cref{Def:V_t_0} ensures that each colored piece in the decomposition of $t$ is stable under $a$, so that $a$ can be decomposed uniquely into a collection of automorphisms, one for each colored piece. Consequently, from now on, we think of pieces as trees/forests \emph{with an automorphism}. As in \cref{th:SerieT_t0}, each piece can be chosen independently in a set depending on its color. Moreover, since $t_0$ is labeled, the pieces can be ordered in a canonical way, so that the series $V_{t_0}$ is the product of the generating series of the pieces. \begin{itemize} \item The gray subtree is a tree with an automorphism and a blossom which is fixed by the automorphism (because of the second item of \cref{Def:V_t_0}). As in \cref{th:SerieT_t0}, the decoration is forced by the context, so that we can consider the gray subtree as not decorated. The possible choices for the gray subtree are therefore counted by $U^\star$.
\item The possible choices for each green forest (and its automorphism) are counted by $1+U+(U-z)$: the first term corresponds to the empty green piece, the second one to exactly one tree in the green forest, and the third one to a set of at least two green trees (which can be seen as a non-trivial tree in $\mathcal U$ by adding a root). \item The possible choices for each yellow piece are counted by $U^\bullet$, since these trees have a marked leaf which is not necessarily fixed by the automorphism. \item The possible choices for each pink piece are counted by $U^{\text{even}}$: the blossom must be at even distance from the root of the piece (for the same reason as in \cref{th:SerieT_t0}) and must be fixed by the automorphism (because of the second item of \cref{Def:V_t_0}). \item Similarly, the possible choices for each blue piece are counted by $U^{\text{odd}}$. \end{itemize} Bringing everything together gives the formula in the theorem. \end{proof} \subsection{Asymptotic analysis} Let $\rho$ be the radius of convergence of $U$. It is easily seen that we have $0<\rho<1$, see, {\em e.g.}, \cite{GenitriniPolya}, where the numerical approximation $\rho \approx 0.2808$ is given. \begin{proposition}\label{prop:Asympt_U} The series $U,U',U^\star,U^\mathrm{even}, U^\mathrm{odd}$ all have the same radius of convergence $\rho$, are $\Delta$-analytic and admit the following expansions around $\rho$: \begin{equation*} U(z)\underset{z\to \rho}{=}\frac{1+\rho}{2}-\beta\sqrt{\rho-z} +o(\sqrt{\rho - z}), \qquad U'(z)\underset{z\to \rho}{\sim}\frac {\beta}{2\sqrt{\rho-z}}, \end{equation*} \begin{equation*} 2 U^\mathrm{even}(z) \sim 2U^\mathrm{odd}(z) \sim U^\star(z)\underset{z\to \rho}{\sim}\frac {1}{2\beta\sqrt{\rho-z}}, \end{equation*} for some constant $\beta>0$. \end{proposition} To prove the proposition, we need the following lemma, which is standard in the analysis of P\'olya structures. 
\begin{lemma}\label{lem:Rayon_U} The radius of convergence of $D$ is $\sqrt{\rho}>\rho$. \end{lemma} \begin{proof} Since $U$ has no constant term, for every $x \geq 1$ and $0<z<1$ we have $U(z^x)\leq U(z)z^{x-1}$. Hence for $0<t<\rho$, \[D(\sqrt{t}) = \exp_{\geq 1}\left( \sum_{r\geq 2} \frac 1 r\ U(t^{r/2}) \right) \leq \exp\left( \sum_{r\geq 2} U(t)t^{r/2-1} \right)\leq \exp\left(U(t) \frac 1{1-\sqrt{t}}\right) <\infty.\] This implies that the radius of convergence of $D$ is at least $\sqrt{\rho}$. Looking at \cref{eq:D}, we see that $D$ termwise dominates $\tfrac 12 U(z^2)$, whose radius of convergence is $\sqrt{\rho}$. Therefore, the radius of convergence of $D$ is exactly $\sqrt{\rho}$. \end{proof} \begin{proof}[Proof of \cref{prop:Asympt_U}] Set $F(z,u) = z+ \exp_{\geq 2}(u) + D(z)\exp(u)$. Then $U$ satisfies the equation $U = F(z,U)$, which is the setting of \cite[Theorem A.6]{Nous3}\footnote{We warn the reader that the function $U$ appearing in \cite[Theorem A.6]{Nous3} is unrelated to the quantity $U(z)$ in the present article (which corresponds instead to $Y(z)$ in \cite[Theorem A.6]{Nous3}).} (in the one dimensional case, which is then just a convenient rewriting of \cite[Theorem VII.3]{Violet}). The only non-trivial hypothesis to check is the analyticity of $F$ at $(\rho,U(\rho))$. This holds because $\exp$ has infinite radius of convergence, while $D$ has radius of convergence $\sqrt{\rho}>\rho$ from \cref{lem:Rayon_U}. From items vi) and vii) of \cite[Theorem A.6]{Nous3}, we have that $U$ and $(1-\partial_uF(z,U(z)))^{-1}$ have radius of convergence $\rho$, are $\Delta$-analytic and that $\partial_uF(\rho,U(\rho))=1$.
Moreover, \begin{equation*} U(z)\underset{z\to \rho}{=}U(\rho)-\frac {\beta}{\zeta}\sqrt{\rho-z} +o(\sqrt{\rho - z}), \qquad U'(z)\underset{z\to \rho}{\sim}\frac {\beta}{2\zeta\sqrt{\rho-z}} \end{equation*} \begin{equation*} (1-\partial_uF(z,U(z)))^{-1} \underset{z\to \rho}{\sim}\frac {1}{2\beta\zeta\sqrt{\rho-z}}, \end{equation*} where $\beta = \sqrt{\partial_zF(\rho,U(\rho))}$ and $\zeta = \sqrt{\tfrac 12 \partial^2_uF(\rho,U(\rho))}$. We have $\partial_uF(z,u) = \exp_{\geq 1}(u) + D(z)\exp(u) = F(z,u) + u - z$. Hence $\partial_uF(z,U(z)) = 2U(z) - z$. Recalling that $\partial_uF(\rho,U(\rho))=1$, we get $U(\rho) = \frac{1+\rho}{2}$. In addition, $\partial^2_uF(z,u) = \exp(u) + D(z)\exp(u) = \partial_uF(z,u) + 1$. Therefore, $\partial^2_uF(\rho,U(\rho)) = 2$ and $\zeta = 1$. The asymptotics of $U$ and $U'$ follow. Regarding $U^\star$, \cref{eq:Ustar} implies that $U^\star = ({1-\partial_uF(z,U(z))})^{-1}$. Similarly, solving the system of equations \eqref{eq:Uevenodd} we get % $U^{\mathrm{even}} = (1 - (\partial_uF(z,U(z)))^2)^{-1}$ and $U^{\mathrm{odd}} = \partial_uF(z,U(z))U^{\mathrm{even}}$. By the daffodil lemma \cite[Lemma IV.1, p.266]{Violet}, we have $|\partial_uF(z,U(z))|<1$ for $|z| \le \rho$ and $z \ne \rho$. In particular, $\partial_uF(z,U(z))$ avoids the values $1$ and $-1$ for such $z$. Therefore $U^\star$, $U^{\mathrm{even}}$ and $U^{\mathrm{odd}}$ are $\Delta$-analytic. The asymptotics of $U^\star$ follows from the above results. Finally, since $\partial_uF(\rho,U(\rho)) =1$, we have $U^{\mathrm{even}}\sim U^\mathrm{odd}$ when $z$ tends to $\rho$. Since $U^\star = U^{\mathrm{even}}+ U^\mathrm{odd}$, their asymptotics follow. \end{proof} \subsection{Distribution of induced subtrees of uniform cotrees} We take a uniform unlabeled canonical cotree $\bm t^{(n)}$ with $n$ leaves, \emph{i.e.} a uniform element of size $n$ in $\overline{\mathcal V}$. We also choose uniformly at random a $k$-tuple of distinct leaves of $\bm t^{(n)}$.
We denote by $\mathbf{t}^{(n)}_k$ the labeled cotree induced by the $k$ marked leaves. \begin{proposition}\label{prop:proba_arbre_unlabeled} Let $k\geq 2$, and let $t_0$ be a labeled binary cotree with $k$ leaves. Then \begin{equation}\label{eq:proba_asymptotique_t0_unlabeled} \mathbb{P} (\mathbf{t}^{(n)}_k = t_0) \underset{n\to+\infty}{\to} \displaystyle{\frac{(k-1)!}{(2k-2)!}}. \end{equation} \end{proposition} \begin{proof} We take a uniform random pair $(\bm T^{(n)},\bm a)$ of $\mathcal V$ of size $n$ with a $k$-tuple of distinct leaves of $\bm T^{(n)}$, also chosen uniformly. We denote by $\mathbf{T}^{(n)}_k$ the cotree induced by the $k$ marked leaves. Since the forgetting map from $\mathcal V$ to $\overline{\mathcal V}$ is $n!$-to-$1$, $\mathbf{T}^{(n)}_k$ is distributed as $\mathbf{t}^{(n)}_k$. Hence, similarly to \cref{eq:probat0_quotient}, we have \[ \mathbb{P}(\mathbf{t}^{(n)}_k=t_0) = \mathbb{P}(\mathbf{T}^{(n)}_k=t_0) \geq \frac{n![z^{n}] V_{t_0}(z)}{n(n-1) \dots (n-k+1) \, n! [z^n] V(z)}.\] The inequality comes from the fact that $\mathcal{V}_{t_0}$ does not consist of all pairs in $\mathcal V$ with a $k$-tuple of marked leaves inducing $t_0$, but only of some of them (see the additional constraint in the second item of \cref{Def:V_t_0}). From \cref{th:V_t_0}, we have $$ V_{t_0}=(U^\star)(2U+1-z)^{n_v} (U^\bullet)^k (U^\mathrm{odd})^{n_=} (U^\mathrm{even})^{n_{\neq}}. $$ Recalling that $U^\bullet(z) = zU'(z)$, we use the asymptotics for $U,U',U^\star,U^\mathrm{even}, U^\mathrm{odd}$ (given in \cref{prop:Asympt_U}) and furthermore the equalities $n_v=k-1$ and $n_= +n_{\neq}=k-2$ (which hold since $t_0$ is binary) to obtain \begin{align*} V_{t_0}(z) &\underset{z\to \rho}{\sim} \frac {1}{2\beta} 2^{k-1} \left(\frac{\beta}{2} \cdot \rho\right)^k \left(\frac {1}{4\beta}\right)^{k-2} (\rho-z)^{-(k-1/2)}\\ &\underset{z\to \rho}{\sim} \frac {\beta \rho^k}{2^{2k-2}} (\rho-z)^{-(k-1/2)} = \frac {\beta\sqrt{\rho}}{2^{2k-2}} (1-\tfrac z \rho)^{-(k-1/2)}.
\end{align*} By the transfer theorem (\cite[Corollary VI.1 p.392]{Violet}) we have $$ [z^{n}] V_{t_0}(z) \underset{n\to +\infty}{\sim} \frac{\beta\sqrt{\rho}}{2^{2k-2} \rho^{n}} \frac{n^{k-3/2}}{\Gamma(k-1/2)} = \beta \frac{(k-1)!}{\sqrt{\pi}(2k-2)!} \frac{n^{k-3/2}}{\rho^{n-1/2}}. $$ Besides, using $V(z)=2U(z)-z$, \cref{prop:Asympt_U}, and the transfer theorem as above, we have $$ n(n-1)\dots (n-k+1) [z^n] V(z) \underset{n\to +\infty}{\sim} n^k (-2\beta\sqrt{\rho})\frac{n^{-3/2}}{\rho^n \Gamma(-1/2)} \sim {\beta} \frac{n^{k-3/2}}{\rho^{n-1/2} \sqrt{\pi}}. $$ Finally, $ \liminf_{n\to\infty}\mathbb{P}(\mathbf{t}^{(n)}_k=t_0) \geq \frac{(k-1)!}{(2k-2)!} $. To conclude, recall (as seen in the proof of \cref{lem:factorisation}) that summing the right-hand side over all labeled binary cotrees $t_0$ of size $k$ gives $1$, from which the proposition follows. \end{proof} \subsection{Proof of \cref{th:MainTheorem,th:DegreeRandomVertex} in the unlabeled case}\label{SousSec:PreuveNonEtiquete} The argument is identical to the labeled case. Recall that $\bm t^{(n)}$ is a uniform unlabeled canonical cotree of size $n$, so that ${\sf Cograph}(\bm t^{(n)})$ is a uniform unlabeled cograph of size $n$, \emph{i.e.} has the same distribution as $\bm G_n^u$. Thus \cref{th:MainTheorem} follows from \cref{lem:factorisation} and \cref{prop:proba_arbre_unlabeled}, and \cref{th:DegreeRandomVertex} is then a consequence of \cref{th:MainTheorem,prop:degree,prop:IntensityW12}. \section{Vertex connectivity} \label{sec:DegreeConnectivity} A connected graph $G$ is said to be $k$-connected if it does not contain a set of $k - 1$ vertices whose removal disconnects the graph. The \emph{vertex connectivity} $\kappa(G)$ is defined as the largest $k$ such that $G$ is $k$-connected. Throughout this section, $\bm G_n$ (resp. $\bm G^u_n$) is a uniform random labeled (resp. unlabeled) cograph of size $n$, {\em conditioned to be connected}.
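The definition of $\kappa$ can be made concrete with a small brute-force computation. The sketch below searches for the smallest disconnecting vertex set; the example graph, built as the join of three arbitrary small parts, is a hypothetical illustration rather than one of the paper's figures.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """DFS connectivity test on the subgraph induced by `vertices`."""
    vs = list(vertices)
    if not vs:
        return True
    seen, stack = {vs[0]}, [vs[0]]
    while stack:
        x = stack.pop()
        for y in vs:
            if y not in seen and ((x, y) in edges or (y, x) in edges):
                seen.add(y)
                stack.append(y)
    return len(seen) == len(vs)

def kappa(n, edges):
    """Smallest k such that removing some k vertices disconnects the graph
    (brute force; for a complete graph it returns n-1 by convention)."""
    for k in range(n - 1):
        for cut in combinations(range(n), k):
            if not is_connected(set(range(n)) - set(cut), edges):
                return k
    return n - 1

# Example: join of the parts {0,1}, {2}, {3,4}, where {0,1} and {3,4}
# each carry no internal edges (so each part is disconnected or a vertex).
comps = [{0, 1}, {2}, {3, 4}]
edges = {(x, y) for A, B in combinations(comps, 2) for x in A for y in B}
print(kappa(5, edges))   # -> 3
```

Here removing the three vertices outside the largest part $\{0,1\}$ disconnects the graph, while removing any two vertices does not, so $\kappa = 3 = 5 - 2$.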
The aim of this section is to prove that the random variable $\kappa(\bm G_n)$ (resp. $\kappa(\bm G^u_n)$) converges in distribution to a non-trivial random variable (without renormalizing). The limiting distributions in the labeled and unlabeled cases are different. \medskip A cograph $G$ (of size at least $2$) is connected if and only if the root of its canonical cotree is decorated by $\mathtt{1}$. (This implies that in both cases a uniform cograph of size $n$ is connected with probability $1/2$ for every $n\geq 2$.) Therefore, any connected cograph $G$ (of size at least $2$) can be uniquely decomposed as the join of $F_1,\dots, F_k$ where each $F_i$ is either a disconnected cograph or a one-vertex graph. Moreover, the cographs $F_i$ are those whose canonical cotrees are the fringe subtrees attached to the root of the canonical cotree of $G$. Throughout this section, we refer to the $F_i$'s as the \emph{components} of $G$. The following lemma, illustrated by \cref{fig:vertex_connectivity}, gives a simple characterization of $\kappa(G)$ when $G$ is a cograph. \begin{lemma}\label{lem:CalculerKappa} Let $G$ be a connected cograph which is not a complete graph. Let $F_1,\dots, F_k$ be the components of $G$. It holds that $$ \kappa(G)= |G| - \max_{1\leq i \leq k} \{|F_i|\}. $$ \end{lemma} \begin{proof} We reorder the components such that $|F_{1}|=\max_i|F_{i}|$. Because $G$ is not a complete graph, $F_{1}$ is not a one-vertex graph (otherwise all components would be one-vertex graphs and $G$ would be complete), and therefore is disconnected. Let us denote by $v_1,\dots,v_r$ the vertices of $F_{2}\cup F_{3} \cup \dots \cup F_{k}$. We have to prove that $\kappa(G)=r$. \noindent{\bf Proof of $\kappa(G)\leq r$.} If we remove all vertices $v_1,\dots,v_r$ then we are left with $F_1$ which is disconnected.\\ \noindent{\bf Proof of $\kappa(G)\geq r$.} If we remove only $r-1$ vertices then there remains at least one $v_j$ among $v_1,\dots,v_r$. Let us denote by $F_i$ the component of $v_j$.
There also remains at least one vertex $v \notin F_i$ (otherwise $|F_i|$ would be larger than $|F_1|$). Consequently, $v$ and $v_j$ are connected by an edge, and every remaining vertex is connected to $v_j$ (when not in $F_i$) or to $v$ (when not in the component containing $v$), so that $G$ remains connected. Therefore we must remove at least $r$ vertices to disconnect $G$. \end{proof} \begin{figure}[htbp] \begin{center} \includegraphics[width=11cm]{ExplicationConnectivityDegree.pdf} \caption{A connected cograph and the corresponding cotree. The vertex connectivity of this graph is $|F_{2}|+|F_{3}|+|F_{4}|=2+2+1=5$. \label{fig:vertex_connectivity}} \end{center} \end{figure} \begin{theorem}\label{thm:DegreeConnectivity} Let $M(z)$ (resp. $V(z)$) be the exponential (resp. ordinary) generating series of labeled (resp. unlabeled) cographs. Their respective radii of convergence are $\rho=2\log(2)-1$ and $\rho_u \approx 0.2808$. For $j \geq 1$, set $$ \pi_j= \rho^j [z^{j}]M(z), \qquad \pi_j^u = \rho_u^j [z^{j}] V(z). $$ Then $(\pi_j)_{j\geq 1}$ and $(\pi_j^u)_{j\geq 1}$ are probability distributions and, for every fixed $j \ge 1$, \begin{equation} \label{eq:limit_pi_l} \mathbb{P}(\kappa(\bm G_n)=j) \underset{n\to +\infty}{\to}\pi_j, \qquad \mathbb{P}(\kappa(\bm G^u_n)=j) \underset{n\to +\infty}{\to}\pi^u_j. \end{equation} \end{theorem} \begin{remark} Readers acquainted with Boltzmann samplers may note that $(\pi_j)_{j\geq 1}$ and $(\pi_j^u)_{j\geq 1}$ are distributions of sizes of Boltzmann-distributed random labeled and unlabeled cographs, respectively. The Boltzmann parameters are chosen to be the radii of convergence. We do not have a direct explanation of this fact. \end{remark} \begin{proof} Recall from \cref{sec:proofLabeled,sec:unlabeled} that $M(z)=2L(z)-z$ and $V(z)=2U(z)-z$. It follows from \cref{prop:Asympt_S_Seven,prop:Asympt_U} that $\rho=2\log(2)-1$ and $\rho_u \approx 0.2808$ are their respective radii of convergence. We first prove that $(\pi_j)$ (resp.
$(\pi^u_j)$) sum to one: \begin{align*} \sum_{j \geq 1} \pi_j &= \sum_{j\geq 1} \rho^j [z^j]M(z)=M(\rho)=2L(\rho)-\rho=1,\\ \sum_{j \geq 1} \pi^u_j &= \sum_{j\geq 1} \rho_u^j [z^j]V(z)=V(\rho_u)=2U(\rho_u)-\rho_u = 1, \end{align*} using \cref{prop:Asympt_S_Seven,prop:Asympt_U} for the last equalities. \smallskip For the remainder of the proof, we fix $j \geq 1$. In the labeled case, let $\bm T_n$ be the canonical cotree of $\bm G_n$. Since $\bm G_n$ is conditioned to be connected, $\bm T_n$ is a uniform labeled canonical cotree of size $n$ conditioned to have root decoration $\mathtt{1}$. Forgetting the decoration, we can see it as a uniform random element of size $n$ in $\mathcal L$. Let $n > 2j$. As the components of $\bm G_n$ correspond to the subtrees attached to the root of $\bm T_n$, using \cref{lem:CalculerKappa} we have ${\kappa}(\bm G_n)=j$ if and only if $\bm T_n$ is composed of a tree of $\mathcal{L}$ of size $n-j$ and $k\geq 1$ trees of $\mathcal{L}$ of total size $j$, all attached to the root. Since $n> 2j$, the fringe subtree of size $n- j$ is uniquely defined, and there is only one such decomposition. Therefore, for every fixed $j \ge 1$ and $n>2j$, we have \[ \mathbb{P}({\kappa}(\bm G_n)=j) =\frac{[z^{n-j}]L(z) \, [z^{j}] \! \left(e^{L(z)}-1\right)}{[z^{n}]L(z)}. \] From \cref{prop:Asympt_S_Seven}, the series $L(z)$ has radius of convergence $\rho$, is $\Delta$-analytic and has a singular expansion amenable to singularity analysis. Thus, the transfer theorem ensures that $\frac{[z^{n-j}]L(z)}{[z^{n}]L(z)}$ tends to $\rho^j$, so that $$ \mathbb{P}({\kappa}(\bm G_n)=j) \underset{n\to +\infty}{\to} \rho^j \, [z^{j}] \! \left(e^{L(z)}-1\right) = \pi_j, $$ where we used $M(z)=e^{L(z)}-1$ (see \cref{eq:Lien_T_expS}). In the unlabeled case, let $\bm T_n^u$ be the canonical cotree of $\bm G_n^u$. As in the labeled case, forgetting the decoration, it is a uniform element of $\overline{\mathcal U}$ of size $n$. Let $n >2j$.
We have ${\kappa}(\bm G_n^u)=j$ if and only if $\bm T_n^u$ has a fringe subtree of size $n-j$ at the root. Let us count the number of trees of $\overline{\mathcal U}$ of size $n$ that have a fringe subtree of size $n-j$ at the root. Since $n-j>n/2$, there must be exactly one such fringe subtree, and there are $[z^{n-j}]U(z)$ choices for it. Removing it, the rest of the tree contains $j$ leaves, and is either a tree of $\overline{\mathcal U}$ of size $\geq 2$ (if the root still has degree at least 2), or a tree formed by a root and a single tree of $\overline{\mathcal U}$ attached to it. So the number of choices for the rest is $[z^{j}](2U(z)-z)$. We deduce that for $j\geq 1$ and $n>2j$, \[ \mathbb{P}({\kappa}(\bm G_n^u)=j) = \frac{[z^{n-j}]U(z) \, [z^{j}](2U(z)-z)}{[z^{n}]U(z)}. \] From \cref{prop:Asympt_U}, the series $U(z)$ has radius of convergence $\rho_u$, is $\Delta$-analytic and has a singular expansion amenable to singularity analysis. The transfer theorem ensures that $\frac{[z^{n-j}]U(z)}{[z^{n}]U(z)}$ tends to $\rho_u^j$, so that \[ \mathbb{P}({\kappa}(\bm G_n^u)=j) \underset{n\to +\infty}{\to} \rho_u^j \, [z^{j}] (2U(z)-z)= \pi^u_j, \] where we used $V(z) = 2U(z)-z$. \end{proof} \begin{remark} In the labeled case, we could have used \cref{lem:CalculerKappa} and local limit results for trees instead of the generating series approach above. Indeed, the canonical cotree of $\bm G_n$ (without its decorations) is distributed as a Galton-Watson tree with an appropriate offspring distribution conditioned on having $n$ leaves. Such conditioned Galton-Watson trees converge in the local sense near the root towards a Kesten's tree \cite[Section 2.3.13]{AD15}. Since Kesten's trees have a unique infinite path from the root, this convergence implies the convergence (without renormalization) of the sizes of all components of $\bm G_n$ but the largest one.
Therefore the sum $\kappa(\bm G_n)$ of these sizes also converges (without renormalization); the limit can be computed (at least in principle) using the description of Kesten's trees. In the unlabeled case, the canonical cotree of $\bm G_n^u$ (without its decorations) belongs to the family of random {\em P\'olya} trees. Such trees are {\em not} conditioned Galton-Watson trees. For scaling limits, it has been proven that they can be approximated by conditioned Galton-Watson trees and hence converge under suitable conditions to the Brownian Continuum Random Tree \cite{PanaStufler}, but we are not aware of any local limit result for such trees. \end{remark} \subsection*{Acknowledgments} MB is partially supported by the Swiss National Science Foundation, under grant numbers 200021\_172536 and PCEFP2\_186872.
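As a numerical check of the labeled distribution $(\pi_j)$ in \cref{thm:DegreeConnectivity}: combining $M(z)=2L(z)-z$ with $M(z)=e^{L(z)}-1$ gives the functional equation $e^{L}=1+2L-z$, whose coefficients can be extracted order by order. The sketch below (the truncation order $N$ is a convenience choice) recovers the labeled cograph counts $1, 2, 8, 52, \dots$ and shows the partial sums of the $\pi_j$ approaching $1$ from below.

```python
import math

rho = 2 * math.log(2) - 1      # radius of convergence of M(z), ~0.386294
N = 2000                       # truncation order

# Work with the rescaled series Lt(w) = L(rho*w), whose coefficients stay
# bounded; the functional equation becomes  e^Lt = 1 + 2*Lt - rho*w.
a = [0.0] * (N + 1)            # a[n] = [w^n] Lt(w)
e = [1.0] + [0.0] * N          # e[n] = [w^n] exp(Lt(w))

for n in range(1, N + 1):
    # exp(Lt)' = exp(Lt)*Lt' gives e[n] = (1/n) sum_{k=1}^n k*a[k]*e[n-k];
    # omitting the (still unknown) k = n term yields c = e[n] - a[n].
    c = sum(k * a[k] * e[n - k] for k in range(1, n)) / n
    # Matching [w^n] in e^Lt = 1 + 2*Lt - rho*w:  c + a[n] = 2*a[n] - rho*[n=1]
    a[n] = c + (rho if n == 1 else 0.0)
    e[n] = c + a[n]

pi = [0.0] * (N + 1)           # pi[j] = rho^j [z^j] M(z)
for j in range(1, N + 1):
    pi[j] = 2 * a[j]
pi[1] -= rho

# n! [z^n] M(z) must count labeled cographs: 1, 2, 8, 52, ...
counts = [round(math.factorial(n) * pi[n] / rho ** n) for n in range(1, 5)]
print(counts, sum(pi))
```

The counts match the fact that every graph on at most three vertices is a cograph ($1$, $2$, $8$) while exactly the $12$ labeled copies of $P_4$ are excluded on four vertices ($2^6-12=52$); the partial sum of the $\pi_j$ stays below $1$, consistently with $\pi_j$ decaying like $j^{-3/2}$.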
\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex} {2.3ex plus .2ex}{\large\bf}} \def\@startsection{subsection}{2}{\z@}{2.3ex plus .2ex{\@startsection{subsection}{2}{\z@}{2.3ex plus .2ex} {2.3ex plus .2ex}{\bf}} \newcommand\Appendix[1]{\def\Alph{section}}{Appendix \Alph{section}} \@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{\label{#1}}\def\Alph{section}}{\Alph{section}}} \begin{document} \begin{titlepage} \samepage{ \setcounter{page}{0} \rightline{\tt hep-th/0602286} \rightline{February 2006} \vfill \begin{center} {\Large \bf Statistics on the Heterotic Landscape: \\ {\bf Gauge Groups and Cosmological Constants of Four-Dimensional Heterotic Strings}\\} \vfill \vspace{.10in} {\large Keith R. Dienes\footnote{ E-mail address: dienes@physics.arizona.edu} \\} \vspace{.10in} {\it Department of Physics, University of Arizona, Tucson, AZ 85721 USA\\} \end{center} \vfill \begin{abstract} {\rm Recent developments in string theory have reinforced the notion that the space of stable supersymmetric and non-supersymmetric string vacua fills out a ``landscape'' whose features are largely unknown. It is then hoped that progress in extracting phenomenological predictions from string theory --- such as correlations between gauge groups, matter representations, potential values of the cosmological constant, and so forth --- can be achieved through statistical studies of these vacua. To date, most of the efforts in these directions have focused on Type~I vacua. In this note, we present the first results of a statistical study of the {\it heterotic}\/ landscape, focusing on more than $10^5$ explicit non-supersymmetric tachyon-free heterotic string vacua and their associated gauge groups and one-loop cosmological constants. Although this study has several important limitations, we find a number of intriguing features which may be relevant for the heterotic landscape as a whole. 
These features include different probabilities and correlations for different possible gauge groups as functions of the number of orbifold twists. We also find a vast degeneracy amongst non-supersymmetric string models, leading to a severe reduction in the number of realizable values of the cosmological constant as compared with naive expectations. Finally, we also find strong correlations between cosmological constants and gauge groups which suggest that heterotic string models with extremely small cosmological constants are overwhelmingly more likely to exhibit the Standard-Model gauge group at the string scale than any of its grand-unified extensions. In all cases, heterotic worldsheet symmetries such as modular invariance provide important constraints that do not appear in corresponding studies of Type~I vacua. } \end{abstract} \vfill \smallskip} \end{titlepage} \setcounter{footnote}{0} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def{\textstyle{1\over 2}}{{\textstyle{1\over 2}}} \def{\cal O}{{\cal O}} \def{\cal E}{{\cal E}} \def{\cal T}{{\cal T}} \def{\cal M}{{\cal M}} \def{\cal F}{{\cal F}} \def{\cal Y}{{\cal Y}} \def{\cal V}{{\cal V}} \def{\cal N}{{\cal N}} \def{\overline{\i}}{{\overline{\i}}} \def{ \overline{q} }{{\overline{q}}} \def{\tilde m}{{\tilde m}} \def{ \hat a }{{\hat a}} \def{\tilde n}{{\tilde n}} \def\rep#1{{\bf {#1}}} \def{\it i.e.}\/{{\it i.e.}\/} \def{\it e.g.}\/{{\it e.g.}\/} \def{{\rm Str}\,}{{{\rm Str}\,}} \def{\bf 1}{{\bf 1}} \def{\vartheta_i}{{\vartheta_i}} \def{\vartheta_j}{{\vartheta_j}} \def{\vartheta_k}{{\vartheta_k}} \def\overline{\vartheta_i}{\overline{\vartheta_i}} \def\overline{\vartheta_j}{\overline{\vartheta_j}} \def\overline{\vartheta_k}{\overline{\vartheta_k}} \def{\overline{\eta}}{{\overline{\eta}}} \def{ {{{\rm d}^2\tau}\over{\tautwo^2} }}{{ {{{\rm d}^2\tau}\over{\tautwo^2} }}} \def{ \overline{q} }{{ \overline{q} }} \def{ 
\hat a }{{ \hat a }} \newcommand{\newcommand}{\newcommand} \newcommand{\gsim}{\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$}} \newcommand{\lsim}{\lower.7ex\hbox{$\;\stackrel{\textstyle<}{\sim}\;$}} \hyphenation{su-per-sym-met-ric non-su-per-sym-met-ric} \hyphenation{space-time-super-sym-met-ric} \hyphenation{mod-u-lar mod-u-lar--in-var-i-ant} \def\,\vrule height1.5ex width.4pt depth0pt{\,\vrule height1.5ex width.4pt depth0pt} \def\relax\hbox{$\inbar\kern-.3em{\rm C}$}{\relax\hbox{$\,\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm C}$}} \def\relax\hbox{$\inbar\kern-.3em{\rm Q}$}{\relax\hbox{$\,\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm Q}$}} \def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}} \font\cmss=cmss10 \font\cmsss=cmss10 at 7pt \def\IZ{\relax\ifmmode\mathchoice {\hbox{\cmss Z\kern-.4em Z}}{\hbox{\cmss Z\kern-.4em Z}} {\lower.9pt\hbox{\cmsss Z\kern-.4em Z}} {\lower1.2pt\hbox{\cmsss Z\kern-.4em Z}}\else{\cmss Z\kern-.4em Z}\fi} \long\def\@caption#1[#2]#3{\par\addcontentsline{\csname ext@#1\endcsname}{#1}{\protect\numberline{\csname the#1\endcsname}{\ignorespaces #2}}\begingroup \small \@parboxrestore \@makecaption{\csname fnum@#1\endcsname}{\ignorespaces #3}\par \endgroup} \catcode`@=12 \input epsf \@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{Introduction} \setcounter{footnote}{0} One of the most serious problems faced by practitioners of string phenomenology is the multitude of possible, self-consistent string vacua. That there exist large numbers of potential string solutions has been known since the earliest days of string theory; these result from the large numbers of possible ways in which one may choose an appropriate compactification manifold (or orbifold), an appropriate set of background fields and fluxes, and appropriate expectation values for the plethora of additional moduli to which string theories generically give rise. 
Although historically these string solutions were not completely stabilized, it was tacitly anticipated for many years that some unknown vacuum stabilization mechanism would ultimately lead to a unique vacuum state. Unfortunately, recent developments suggest that there continue to exist huge numbers of self-consistent string solutions ({\it i.e.}\/, string ``models'' or ``vacua'') even after stabilization. Thus, a picture emerges in which there exist huge numbers of possible string vacua, all potentially stable (or sufficiently metastable), with apparently no dynamical principle to select amongst them. Indeed, each of these potential vacua can be viewed as sitting at the local minimum of a complex terrain of possible string solutions dominated by hills and valleys. This terrain has come to be known as the ``string-theory landscape''~\cite{landscape}. The existence of such a landscape has tremendous practical significance because the specific low-energy phenomenology that can be expected to emerge from string theory depends critically on the particular choice of vacuum state. Detailed quantities such as particle masses and mixings, and even more general quantities and structures such as the choice of gauge group, number of chiral particle generations, magnitude of the supersymmetry-breaking scale, and even the cosmological constant can be expected to vary significantly from one vacuum solution to the next. Thus, in the absence of some sort of vacuum selection principle, it is natural to tackle a secondary but perhaps more tractable question concerning whether there might exist generic string-derived {\it correlations}\/ between different phenomenological features. In this way, one can still hope to extract phenomenological predictions from string theory. This idea has triggered a recent surge of activity concerning the {\it statistical}\/ properties of the landscape~[2--13].
Investigations along these lines have focused on diverse phenomenological issues, including the value of the supersymmetry-breaking scale~\cite{douglas,SUSYbreaking}, the value of the cosmological constant~\cite{Weinberg,BP,lambdapapers}, the preferred rank of the corresponding gauge groups, the prevalence of the Standard-Model gauge group, and possible numbers of chiral generations~\cite{douglas,blumenhagen,schellekens}. Discussions of the landscape have also led to various theoretical paradigm shifts, ranging from alternative landscape-based notions of naturalness~\cite{SUSYbreaking,splitSUSY} and novel cosmological inflationary scenarios~\cite{Weinberg,BP,lambdapapers} to the use of anthropic arguments to constrain the set of viable string vacua~\cite{Weinberg,lambdapapers,anthropics}. There have even been proposals for field-theoretic analogues of the string-theory landscape~\cite{fieldtheory} as well as discussions concerning whether there truly exist effective field theories that can describe it~\cite{banks}. Collectively, these developments have even given birth to a large, ambitious, organized effort dubbed the ``String Vacuum Project (SVP)''~\cite{SVP}, one of whose purposes is to map out the properties of this landscape of string vacua. It is envisioned that this will happen not only through direct enumeration/construction of viable string vacua, but also through planned large-scale statistical studies across the landscape as a whole. Unfortunately, although there have been many abstract theoretical discussions of such vacua and their statistical properties, there have been relatively few direct statistical examinations of actual string vacua. Despite considerable effort, relatively little actual data has been gleaned from direct studies of the string landscape and the vacua which populate it.
This is because, in spite of recent progress, the construction and analysis of completely stable string vacua remains a rather complicated affair~\cite{fluxes,constructions}. Surveying whole classes of such vacua and doing a proper statistical analysis thus remains a formidable task. There are exceptions, however. For example, one recent computer analysis examined millions of supersymmetric intersecting D-brane models on a particular orientifold background~\cite{blumenhagen}. Although the models which were constructed for such analyses are not completely stable (since they continue to have flat directions), the analysis reported in Ref.~\cite{blumenhagen} examined important questions such as the statistical occurrences of various gauge groups, chirality, numbers of generations, and so forth. A similar statistical study focusing on Gepner-type orientifolds exhibiting chiral supersymmetric Standard-Model spectra was performed in Ref.~\cite{schellekens}. By means of such studies, a number of interesting statistical correlations were uncovered. To date, however, there has been almost no discussion of the {\it heterotic}\/ landscape. This is somewhat ironic, especially since perturbative heterotic strings were the framework in which most of the original work in string phenomenology was performed in the late 1980's and early 1990's. In this paper, we shall present the results of the first statistical study of the heterotic string landscape. Thus, in some sense, this work can be viewed as providing a heterotic analogue of the work reported in Refs.~\cite{blumenhagen,schellekens}. In this paper, we shall focus on a sample of approximately $1.2\times 10^5$ distinct four-dimensional perturbative heterotic string models, all randomly generated, and we shall analyze statistical information concerning their gauge groups and one-loop cosmological constants. 
As we shall see, the statistical properties of perturbative heterotic strings are substantially different from those of Type~I strings. This is already apparent at the level of gauge groups: while the gauge groups of Type~I strings are constrained only by allowed D-brane configurations and anomaly-cancellation constraints, those of perturbative heterotic strings necessarily have a maximum rank. Moreover, as we shall repeatedly see, modular invariance shall also prove to play an important role in constraining the features of the heterotic landscape. This too is a feature that is lacking for the Type~I landscape. On the other hand, there will be certain similarities. For example, one of our results will concern a probability for randomly obtaining the Standard-Model gauge group from perturbative heterotic strings. Surprisingly, this probability shall be very close to what is obtained for Type~I strings. For various technical and historical reasons, our statistical study will necessarily have certain limitations. These will be discussed more completely below and in Sect.~2. However, three limitations are critical and deserve immediate mention. First, as mentioned, our sample size is relatively small, consisting of only $\sim 10^5$ distinct models. However, although this number is minuscule compared with the numbers of string models that are currently quoted in most landscape discussions, we believe that the statistical results we shall obtain have already achieved saturation --- {\it i.e.}\/, we do not believe that they will change as more models are added. We shall discuss this feature in more detail in Sect.~2. Second, for historical reasons to be discussed below, our statistical study in this paper shall be limited to only two phenomenological properties of these models: their low-energy gauge groups, and their one-loop vacuum amplitudes (cosmological constants). Nevertheless, as we shall see, this represents a considerable wealth of data.
Further studies are currently underway to investigate other properties of these models and their resulting spacetime spectra, and we hope to report those results in a later publication. Perhaps most importantly, however, all of the models we shall be analyzing are non-supersymmetric. Therefore, even though they are all tachyon-free, they have non-zero dilaton tadpoles and thus are not stable beyond tree level. Indeed, the models we shall be examining can be viewed as four-dimensional analogues of the $SO(16)\times SO(16)$ heterotic string in ten dimensions~\cite{SOsixteen}. Such models certainly satisfy all of the necessary string self-consistency constraints --- they have worldsheet conformal/superconformal invariance, they have one-loop and multi-loop modular-invariant amplitudes, they exhibit proper spin-statistics relations, and they contain physically sensible GSO projections and orbifold twists. However, they are not stable beyond tree level. Clearly, such models do not represent the sorts of truly stable vacua that we would ideally like to be studying. Again invoking landscape imagery, such models do not sit at local minima in the landscape --- they sit on hillsides and mountain passes, valleys and even mountaintops. Thus, in this paper, we shall in some sense be surveying the entire {\it profile}\/ of the landscape rather than merely the properties of its local minima. Indeed, we can call this a ``raindrop'' study: we shall let the rain fall randomly over the perturbative heterotic landscape and collect statistical data where each raindrop hits the surface. Clearly this is different in spirit from a study in which our attention is restricted to the locations of the puddles which remain after the rain has stopped and the sun comes out. Despite these limitations, we believe that such a study can be of considerable value. 
First, such models do represent valid string solutions at tree level, and it is therefore important to understand their properties as a first step towards understanding the full phenomenology of non-supersymmetric strings and their contributions to the overall architecture of the landscape. Indeed, since no stable perturbative non-supersymmetric heterotic strings have yet been constructed, our study represents the current state of the art in the statistical analysis of perturbative non-supersymmetric heterotic strings. Second, as we shall discuss further in Sect.~2, the models we shall be examining range from the extremely simple, involving a single set of sectors, to the extraordinarily complex, involving many convoluted layers of overlapping orbifold twists and Wilson lines. In all cases, these sets of orbifold twists and Wilson lines were randomly generated, yet each satisfies all necessary self-consistency constraints. These models thus exhibit an unusual degree of intricacy and complexity, just as we expect for models which might eventually exhibit low-energy phenomenologies resembling that of the real world. Third, an important question for any landscape study is to understand the phenomenological roles played by supersymmetry and by the need for vacuum stability. However, the only way in which we might develop an understanding of the statistical significance of the effects that spacetime supersymmetry might have on other phenomenological properties (such as gauge groups, numbers of chiral generations, {\it etc}\/.) is to already have the results of a study of strings in which supersymmetry is absent. But most importantly, we know as an experimental fact that the low-energy world is non-supersymmetric. Therefore, if we believe that perturbative heterotic strings are relevant to its description, it behooves us to understand the properties of non-supersymmetric strings.
Although no such strings have yet been found which are stable beyond tree level, analyses of these unstable vacua may prove useful in pointing the way towards their eventual constructions. Indeed, as we shall see, some of our results shall suggest some of the likely phenomenological properties that such strings might ultimately have. This paper is organized as follows. In Sect.~2, we shall provide an overview of the models that we will be analyzing in this paper. We shall also discuss, in more detail, the limitations and methodologies of our statistical study. In Sect.~3, we shall then provide a warm-up discussion that focuses on the better-known properties of the {\it ten}\/-dimensional heterotic landscape. We will then turn our attention to heterotic strings in four dimensions for the remainder of the paper. In Sect.~4, we shall focus on the gauge groups of such strings, and in Sect.~5 we shall focus on their one-loop vacuum energies (cosmological constants). Finally, in Sect.~6, we shall analyze the statistical {\it correlations}\/ between the gauge groups and cosmological constants. A short concluding section will then outline some future directions. Note that even though these string models are unstable beyond tree level, we shall use the terms ``string models'' and ``string vacua'' interchangeably in this paper to refer to these non-supersymmetric, tachyon-free string solutions. \bigskip \noindent{\it Historical note} \medskip This paper has a somewhat unusual provenance. Therefore, before beginning, we provide a brief historical note. In the late 1980's, soon after the development of the free-fermionic construction~\cite{KLT}, a number of string theorists undertook various computer-automated randomized searches through the space of perturbative four-dimensional heterotic string models.
The most detailed and extensive of such searches was described in Ref.~\cite{Senechal}; to the best of our knowledge, this represents the earliest automated search through the space of heterotic string models. Soon afterwards, other searches were also performed (see, {\it e.g.}\/, Ref.~\cite{PRL}). At that time, the goals of such studies were to find string models with certain favorable phenomenological properties. In other words, these investigations were viewed as searches rather than as broad statistical studies. One such search at that time~\cite{PRL} was aimed at finding four-dimensional perturbative non-supersymmetric tachyon-free heterotic string models which nevertheless have zero one-loop cosmological constants. Inspired by Atkin-Lehner symmetry and its possible extensions~\cite{Moore}, we conducted a search using the techniques (and indeed some of the software) first described in Ref.~\cite{Senechal}. At that time, our interest was focused purely on the values of the cosmological constant. However, along the way, the corresponding gauge groups of these models were also determined and recorded. In this paper, we shall report on the results of a new, comprehensive, statistical analysis of this ``data'' which was originally collected in the late 1980's. As a consequence of the limited scope of our original search, our statistical analysis here shall be focused on non-supersymmetric tachyon-free models. Likewise, in this paper we shall concentrate on only the two phenomenological properties of such models (gauge groups and cosmological constants) for which such data already existed.
As mentioned above, a more exhaustive statistical study using modern software and a significantly larger data set is currently underway: this will include both supersymmetric and non-supersymmetric heterotic string models, and will involve many additional properties of the physical spectra of the associated models (including their gauge groups, numbers of generations, chirality properties, and so forth). However, the study described in this paper shall be limited to the data set that was generated as part of the investigations of Ref.~\cite{PRL}. Although this data was generated over fifteen years ago, we point out that almost all of the statistical results of this paper were obtained recently and have not been published or reported elsewhere in the string literature. \section{The string vacua examined} \setcounter{footnote}{0} In this section we shall describe the class of string vacua which are included in our statistical analysis. Each of these vacua represents a weakly coupled critical heterotic string compactified to, or otherwise constructed directly in, four large (flat) spacetime dimensions. In general, such a string may be described in terms of its left- and right-moving worldsheet conformal field theories (CFT's); in four dimensions, in addition to the spacetime coordinates and their right-moving worldsheet superpartners, these internal CFT's must have central charges $(c_R,c_L)=(9,22)$ in order to enforce worldsheet conformal anomaly cancellation. While the left-moving internal CFT must merely exhibit conformal invariance, the right-moving internal CFT must actually exhibit superconformal invariance. While any CFT's with these central charges may be considered, in this paper we shall focus on those string models for which these internal worldsheet CFT's may be taken to consist of tensor products of free, non-interacting, complex (chiral) bosonic or fermionic fields.
This is a huge class of models which has been discussed and analyzed in many different ways in the string literature. On the one hand, taking these worldsheet fields as fermionic leads to the so-called ``free-fermionic'' construction~\cite{KLT} which will be our primary tool throughout this paper. In the language of this construction, different models are achieved by varying (or ``twisting'') the boundary conditions of these fermions around the two non-contractible loops of the worldsheet torus while simultaneously varying the phases according to which the contributions of each such spin-structure sector are summed in producing the one-loop partition function. However, alternative but equivalent languages for constructing such models exist. For example, we may bosonize these worldsheet fermions and construct ``Narain'' models~\cite{Narain,Lerche} in which the resulting complex worldsheet bosons are compactified on internal lattices of appropriate dimensionality with appropriate self-duality properties. Furthermore, many of these models have additional geometric realizations as orbifold compactifications with randomly chosen Wilson lines; in general, the process of orbifolding is quite complicated in these models, involving many sequential layers of projections and twists. Note that all of these constructions generally overlap to a large degree, and all are capable of producing models in which the corresponding gauge groups and particle contents are quite intricate. Nevertheless, in all cases, we must ensure that all required self-consistency constraints are satisfied. These include modular invariance, physically sensible GSO projections, proper spin-statistics identifications, and so forth. Thus, each of these vacua represents a fully self-consistent string solution at tree level. 
In order to efficiently survey the space of such non-supersymmetric four-dimensional string-theoretic vacua, we implemented a computer search based on the free-fermionic spin-structure construction, as originally developed in Ref.~\cite{KLT}. Recall that in this light-cone gauge construction, each of the six compactified bosonic spacetime coordinates is fermionized to become two left-moving and two right-moving internal free real fermions, and consequently our four-dimensional heterotic strings consist of the following fields on the worldsheet: 20 right-moving free real fermions (the eight original supersymmetric partners of the eight transverse bosonic coordinates of the ten-dimensional string, along with twelve additional internal fermions resulting from compactification); 44 left-moving free real fermions (the original 32 in ten dimensions plus the additional twelve resulting from compactification); and of course the two transverse bosonic (coordinate) fields $X^\mu$. Of these 20 right-moving real fermions, only two (the supersymmetric partners of the two remaining transverse coordinates) carry Lorentz indices. In our analysis, we restricted our attention to those models for which our real fermions can always be uniformly paired to form complex fermions, and therefore it was possible to specify the boundary conditions (or spin-structures) of these real fermions in terms of the complex fermions directly. We also restricted our attention to cases in which the worldsheet fermions exhibited either antiperiodic (Neveu-Schwarz) or periodic (Ramond) boundary conditions. Of course, in order to build a self-consistent string model in this framework, these boundary conditions must satisfy tight constraints. 
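The field content enumerated above can be checked against the $(c_R,c_L)=(9,22)$ anomaly-cancellation condition quoted in Sect.~2. The following sketch is plain arithmetic bookkeeping, using the standard free-CFT value $c=\frac{1}{2}$ per free real fermion:

```python
# Central-charge bookkeeping for the four-dimensional free-fermionic
# heterotic string in light-cone gauge.  Each free real fermion carries
# central charge c = 1/2 (a standard free-CFT value).
C_REAL_FERMION = 0.5

# Right-movers: 20 real fermions in total, of which 2 are the
# superpartners of the transverse coordinates; the remaining 18
# internal fermions build up c_R.
c_R = (20 - 2) * C_REAL_FERMION

# Left-movers: 44 real fermions = 32 from ten dimensions plus
# 6 x 2 = 12 from fermionizing the six compactified coordinates.
c_L = (32 + 6 * 2) * C_REAL_FERMION

# Matches the internal central charges (c_R, c_L) = (9, 22) required
# for worldsheet conformal anomaly cancellation.
assert (c_R, c_L) == (9.0, 22.0)
```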
These constraints are necessary in order to ensure that the one-loop partition function is modular invariant and that the resulting Fock space of states can be interpreted as arising from a physically sensible projection from the space of all worldsheet states onto the subspace of physical states with proper spacetime spin-statistics. Thus, within a given string model, it is necessary to sum over appropriate sets of untwisted and twisted sectors with different boundary conditions and projection phases. Our statistical analysis consisted of an examination of $123,573$ distinct vacua, each randomly generated through the free-fermionic construction. (In equivalent orbifold language, each vacuum was constructed from randomly chosen sets of orbifold twists and Wilson lines, subject to the constraints described above.) Details of this study are similar to those of the earlier study described in Ref.~\cite{Senechal}, and made use of model-generating software borrowed from that earlier study. Essentially, each set of boundary conditions was chosen randomly in each sector, subject only to the required self-consistency constraints. However, in our statistical sampling, we placed no limits on the complexity of the orbifold twisting ({\it i.e.}\/, on the number of basis vectors in the free-fermionic language). Thus, our statistical analysis included models of arbitrary intricacy and sophistication. As discussed above, for the purpose of this search, we demanded that supersymmetry be broken without introducing tachyons. Thus, these vacua are all non-supersymmetric but tachyon-free, and can be considered as four-dimensional analogues of the ten-dimensional $SO(16)\times SO(16)$ heterotic string~\cite{SOsixteen} which is also non-supersymmetric but tachyon-free. As a result, these models all have non-vanishing but finite one-loop cosmological constants/vacuum energies $\Lambda$, and we shall examine these values of $\Lambda$ in Sect.~5. 
However, other than demanding that supersymmetry be broken in a tachyon-free manner, we placed no requirements on other possible phenomenological properties of these vacua such as the possible gauge groups, numbers of chiral generations, or other aspects of the particle content. We did, however, require that our string construction begin with a supersymmetric theory in which the supersymmetry is broken only through subsequent orbifold twists. (In the language of the free-fermionic construction, this is tantamount to demanding that our fermionic boundary conditions include a superpartner sector, typically denoted ${\bf W}_1$ or ${\bf V}_1$.) This is to be distinguished from a potentially more general class of models in which supersymmetry does not appear at any stage of the construction. This is merely a technical detail in our construction, and we do not believe that this ultimately affects our results. Because of the tremendous redundancy inherent in the free-fermionic construction, string vacua were judged to be distinct based on their spacetime characteristics --- {\it i.e.}\/, their low-energy gauge groups and massless particle content. Thus, as a minimum condition, distinct string vacua necessarily exhibit different massless spacetime spectra.\footnote{ As a result of conformal invariance and modular invariance (both of which simultaneously relate states at all mass levels), it is extremely difficult for two string models to share the same massless spectrum (as well as the same off-shell tachyonic structure) and yet differ in their massive spectra. Thus, for all practical purposes, our requirement that two models must have different massless spectra is not likely to eliminate potential models whose spectra might differ only at the massive level.} As we shall discuss further below, such a requirement about the distinctness of the spacetime spectrum must be an important component of any statistical study of string models.
Since the same string model may have a plethora of different worldsheet realizations, one cannot verify that one is accurately surveying the space of distinct, independent string models based on their worldsheet realizations alone. This ``redundancy'' issue becomes increasingly pressing as larger and larger numbers of models are considered. Clearly, this class of string models is not all-encompassing. By its very nature, the free-fermionic construction reaches only certain specific points in the full space of self-consistent string models. For example, since each worldsheet fermion is nothing but a worldsheet boson compactified at a specific radius, a larger (infinite) class of models can immediately be realized through a bosonic formulation by varying these radii away from their free-fermionic values. However, this larger class of models will typically have only abelian gauge groups and consequently uninteresting particle representations. Indeed, the free-fermionic points typically represent precisely those points at which additional (non-Cartan) gauge-boson states become massless, thereby enhancing the gauge symmetries to become non-abelian. Thus, the free-fermionic construction naturally leads to precisely the set of models which are likely to be of direct phenomenological relevance. Similarly, it is possible to go beyond the class of free-field string models altogether, and consider models built from more complicated worldsheet CFT's ({\it e.g.}\/, Gepner models). We may even transcend the realm of critical string theories, and consider non-critical strings and/or strings with non-trivial background fields. Likewise, we may consider heterotic strings beyond the usual perturbative limit. However, although such models may well give rise to phenomenologies very different from those that emerge in free-field constructions, their spectra are typically very difficult to analyze and are thus not amenable to an automated statistical investigation. 
Finally, even within the specific construction we are employing in this paper, we may drop our requirement that our models be non-supersymmetric, and consider models with varying degrees of unbroken supersymmetry. This will be done in future work. We should also point out that, strictly speaking, the class of models we are considering is finite in size. Because of the tight worldsheet self-consistency constraints arising from modular invariance and the requirement of physically sensible GSO projections, there are only a finite number of distinct boundary-condition vectors and GSO phases which may be chosen in our construction as long as we restrict our attention to complex worldsheet fermions with only periodic (Ramond) or antiperiodic (Neveu-Schwarz) boundary conditions. For example, in four dimensions there are a maximum of only $32$ boundary-condition vectors which can possibly be linearly independent, even before we impose other dot-product modular-invariance constraints. This is, nevertheless, a very broad and general class of theories. Indeed, models which have been constructed using such techniques span almost the entire spectrum of closed-string models, including MSSM-like models, models with and without extra exotic matter, and so forth. Moreover, worldsheet bosonic and fermionic constructions can produce models which have an intricacy and complexity which is hard to duplicate purely through geometric considerations --- indeed, these are often models for which no geometric compactification space is readily apparent. It is for this reason that while most of our geometric insights about string models have historically come from Calabi-Yau and general orbifold analyses, much of the serious work on realistic closed-string model-building over the past two decades has been through the more algebraic bosonic or fermionic formulations. It is therefore within this class of string models that our analysis will be focused.
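The bound of $32$ linearly independent boundary-condition vectors can be made concrete: with only periodic or antiperiodic boundary conditions on $32$ complex fermions (the $20+44=64$ real fermions, complexified), each basis vector may be encoded as a length-$32$ vector over $GF(2)$, and no more than $32$ such vectors can be linearly independent. A minimal rank computation over $GF(2)$ (illustrative only; the actual construction imposes the additional dot-product modular-invariance constraints, which are not modeled here):

```python
# Rank computation over GF(2) via Gaussian elimination on bitmasks.
import random

NUM_COMPLEX_FERMIONS = 32   # (20 + 44) real fermions / 2

def gf2_rank(rows):
    """Rank over GF(2) of a list of integer bitmasks."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot          # pivot position
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

# Small sanity check: {01, 10, 11} spans a 2-dimensional space over GF(2).
assert gf2_rank([0b01, 0b10, 0b11]) == 2

# However many random candidate vectors we draw, the rank is capped at 32.
random.seed(0)
vecs = [random.getrandbits(NUM_COMPLEX_FERMIONS) for _ in range(200)]
assert gf2_rank(vecs) <= NUM_COMPLEX_FERMIONS
```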
Moreover, as we shall see, this set of models is still sufficiently large to enable various striking statistical correlations to appear. Finally, we provide some general comments about the statistical analysis we will be performing and the interpretation of our results. As with any statistical landscape study, it is important to consider whether the properties we shall find are rigorously true for the landscape as a whole, or are merely artifacts of having considered only a finite statistical sample of models or a sample which is itself not representative of the landscape at large because it is statistically biased or skewed in some way. Clearly, without detailed analytical knowledge of the entire landscape of models in the category under investigation, one can never definitively answer this question. Thus, in each case, it is necessary to judge which properties or statistical correlations are likely to exist because of some deeper, identifiable string consistency constraint, and which are not. In this paper, we shall try to indicate in every circumstance what we believe are the appropriate causes of each statistical correlation we find. The issue concerning the finite size of our sample is particularly relevant in our case, since we will be examining the properties of only $\sim 10^5$ distinct models in this paper. Although this is certainly a large number of string models on an absolute scale, this number is extremely small compared with the current estimated size of the entire string landscape. However, one way to judge the underlying validity of a particular statistical correlation is to test whether it persists without significant modification as our sample size increases. If so, then it is likely that we have already reached the ``continuum limit'' (borrowing a phrase from our lattice gauge theory colleagues) as far as the particular statistical correlation is concerned. 
This can be verified by testing the numerical stability of a given statistical correlation as more and more string models are added to our sample set, and can be checked {\it a posteriori}\/ by examining whether the correlation persists even if the final sample set is partitioned or subdivided. All correlations that we will present in this paper are stable in the ``continuum limit'' unless otherwise indicated. Finally, we point out that all correlations in this paper will ultimately depend on a particular assumed measure across the landscape. For example, when we plot a correlation between two quantities, the averaging for these quantities is calculated across all models in our data set, with each physically distinct string model weighted equally. However, we expect that such averages would change significantly if models were weighted in a different manner. For example, as we shall see, many of our results would be altered if we were to weight our probabilities equally across the set of distinct gauge groups rather than across the set of distinct string models. This sensitivity to the underlying string landscape measure is, of course, well known. In this paper, we shall employ a measure in which each distinct string model is weighted equally across our sample set. \section{A preliminary example:\\ The ten-dimensional heterotic landscape} Before plunging into the four-dimensional case of interest, let us first consider the ``landscape'' of {\it ten}\/-dimensional heterotic string models. Recall that in ten dimensions, such models have maximal gauge-group rank $16$, corresponding to sixteen left-moving worldsheet bosons (or complex fermions). It turns out that we can examine the resulting ``landscape'' of such models by arranging them in the form of a family ``tree''.
First, at the root of the tree, we have what is literally the simplest ten-dimensional heterotic string model we can construct: this is the supersymmetric $SO(32)$ heterotic string model in which our worldsheet fermionic fields all have identical boundary conditions in each spin-structure sector of the theory. Indeed, the internal $SO(32)$ rotational invariance amongst these fermions is nothing but the spacetime gauge group of the resulting model. Starting from this model, there are then a number of ways in which we may ``twist'' the boundary conditions of these fields (or, in orbifold language, mod out by discrete symmetries). First, we might seek to twist the boundary conditions of these sixteen complex fermions into two blocks of eight complex fermions each. If we do this in a way that also breaks spacetime supersymmetry, we obtain a non-supersymmetric, tachyon-free model with gauge group $SO(16)\times SO(16)$; in orbifold language, we have essentially chosen a SUSY-breaking orbifold which projects out the non-Cartan gauge bosons in the coset $SO(32)/[SO(16)\times SO(16)]$. However, if we try to do this in a way which simultaneously preserves spacetime supersymmetry, we find that we cannot obtain $SO(16)\times SO(16)$; instead, modular invariance requires that our SUSY-preserving orbifold twist simultaneously come with a twisted sector which supplies new gauge-boson states, enhancing this gauge group to $E_8\times E_8$. This produces the well-known $E_8\times E_8$ heterotic string. The $SO(16)\times SO(16)$ and $E_8\times E_8$ heterotic strings may thus be taken to sit on the second branch of our family tree. Starting from these three heterotic strings, we may then perform subsequent orbifold twists and thereby generate additional models.
For example, we may act with other configurations of $\IZ_2$ twists on the supersymmetric $SO(32)$ string model: the three other possible self-consistent models that can be obtained this way are the non-supersymmetric $SO(32)$ heterotic string model, the $SO(8)\times SO(24)$ string model, and a heterotic string model with gauge group $U(16)=SU(16)\times U(1)$. All have physical (on-shell) tachyons in their spectrum. Likewise, we may perform various $\IZ_2$ orbifolds of the $E_8\times E_8$ string model: self-consistent choices produce additional non-supersymmetric, tachyonic models with gauge groups $SO(16)\times E_8$ and $(E_7)^2\times SU(2)^2$. Finally, we may also orbifold the $E_8\times E_8$ model by a discrete symmetry (outer automorphism) which exchanges the two $E_8$ gauge groups, producing a final non-supersymmetric, tachyonic (rank-reduced) model with a single $E_8$ gauge group realized at affine level $k=2$~\cite{KLTclassification}. By its very nature, this last model is beyond the class of models with complex worldsheet fields that we will be considering, since modding out by the outer automorphism cannot be achieved on the worldsheet except through the use of {\it real}\/ worldsheet fermions, or by employing non-abelian orbifold techniques. In this manner, we have therefore generated the nine self-consistent heterotic string models in ten dimensions which are known to completely fill out the ten-dimensional heterotic ``landscape''~\cite{KLTclassification}. However, the description we have provided above represents only one possible route towards reaching these nine models; other routes along different branches of the tree are possible. For example, the non-supersymmetric, tachyon-free $SO(16)\times SO(16)$ heterotic string can be realized either as a $\IZ_2$ orbifold of the supersymmetric $SO(32)$ string or as a different $\IZ_2$ orbifold of the $E_8\times E_8$ string. 
Thus, rather than a direct tree of ancestors and descendants, what we really have are deeply interlocking webs of orbifold relations. A more potent example of this fact is provided by the single-$E_8$ heterotic string model. This model can be constructed through several entirely different constructions: as a free-fermionic model involving necessarily real fermions; as an abelian orbifold of the $E_8\times E_8$ heterotic string model in which the discrete symmetry is taken to be an outer automorphism (exchange) of the two $E_8$ gauge symmetries; and as a {\it non}\/-abelian orbifold model in which the non-abelian discrete group is $D_4$~\cite{nonabelian}. Moreover, as noted above, even within a given construction numerous unrelated combinations of parameter choices can yield exactly the same string model. These sorts of redundancy issues become increasingly relevant as larger and larger sets of models are generated and analyzed, and must be addressed in order to allow efficient progress in the task of enumerating models. One way to categorize different branches of the ``tree'' of models is according to the total number of irreducible gauge-group factors that these models contain. As we have seen, the ten-dimensional heterotic landscape contains exactly three models with only one irreducible gauge group: these are the $SO(32)$ models, both supersymmetric and non-supersymmetric, and the single-$E_8$ model. By contrast, there are five models with two gauge-group factors: these are the models with gauge groups $E_8\times E_8$, $SO(16)\times SO(16)$, $SO(24)\times SO(8)$, $SU(16)\times U(1)$, and $SO(16)\times E_8$. Finally, there is one model with four gauge-group factors: this is the $(E_7)^2\times SU(2)^2$ model. Note that no models with any other number of gauge-group factors appear in ten dimensions.
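The bookkeeping of the previous paragraph can be tabulated directly; the following sketch simply re-counts the nine ten-dimensional models by their number of irreducible gauge-group factors, as quoted above:

```python
# Re-counting the nine ten-dimensional heterotic models by their number
# of irreducible gauge-group factors, as quoted in the text.
from collections import Counter

ten_d_models = {
    "SUSY SO(32)":        1,
    "non-SUSY SO(32)":    1,
    "single E8 (k = 2)":  1,
    "E8 x E8":            2,
    "SO(16) x SO(16)":    2,
    "SO(24) x SO(8)":     2,
    "SU(16) x U(1)":      2,
    "SO(16) x E8":        2,
    "(E7)^2 x SU(2)^2":   4,
}

tally = Counter(ten_d_models.values())
assert tally == {1: 3, 2: 5, 4: 1}   # three, five, and one model(s)
assert sum(tally.values()) == 9      # the full 10D heterotic landscape
```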
Alternatively, we may classify our models into groups depending on their spacetime supersymmetry properties: there are two models with unbroken spacetime supersymmetry, one with broken supersymmetry but without tachyons, and six with both broken supersymmetry and tachyons. Clearly, this ten-dimensional heterotic ``landscape'' is very restricted, consisting of only nine discrete models. Nevertheless, many of the features we shall find for the four-dimensional heterotic landscape are already present here: \begin{itemize} \item First, we observe that not all gauge groups can be realized. For example, we do not find any ten-dimensional heterotic string models with gauge group $SO(20)\times SO(12)$, even though this gauge group would have the appropriate total rank $16$. We also find no models with three gauge-group factors, even though we have models with one, two, and four such factors. Indeed, of all possible gauge groups with total rank $16$ that may be constructed from the simply-laced factors $SO(2n)$, $SU(n)$ and $E_{6,7,8}$, we see that only eight distinct combinations emerge from the complete set of self-consistent string models. Likewise, if we allow for the possibility of a broader class of models which incorporate rank-cutting, then we must also allow for the possibility that our gauge group can be composed of factors which also include the non-simply laced gauge groups $SO(2n+1)$, $Sp(2n)$, $F_4$, and $G_2$. However, even from this broader set, only one additional gauge group (a single $E_8$) is actually realized. \item Second, we see that certain phenomenological features are {\it correlated} in such strings. For example, although there exist models with gauge groups $SO(16)\times SO(16)$ and $E_8\times E_8$, the first gauge group is possible only in the non-supersymmetric case, while the second is possible only in the supersymmetric case. 
These two features (the presence/absence of spacetime supersymmetry and the emergence of different possible gauge groups) are features that would be completely disconnected in quantum field theory, and thus represent intrinsically {\it stringy correlations}\/. As such, these may be taken to represent statistical predictions from string theory, manifestations of the deeper worldsheet self-consistency constraints that string theory imposes. \item Third, we have seen that a given string model can be realized in many different ways on the worldsheet, none of which is necessarily special or preferred. This is part of the huge redundancy of string constructions that we discussed in Sect.~2. Thus, all of the string models that we shall discuss will be defined to be physically distinct on the basis of their {\it spacetime}\/ properties ({\it e.g.}\/, on the basis of differing spacetime gauge groups or particle content). \item Fourth, we see that our ten-dimensional ``landscape'' contains models with varying amounts of supersymmetry: in ten dimensions, we found ${\cal N}=1$ supersymmetric models, non-supersymmetric tachyon-free models, and non-supersymmetric models with tachyons. These are also features which will survive into the heterotic landscapes in lower dimensions, where larger numbers of unbroken supersymmetries are also possible. Since other phenomenological properties of these models may be correlated with their degrees of supersymmetry, it is undoubtedly useful to separate models according to this primary feature before undertaking further statistical analyses. \item Finally, we observe that a heterotic string model with the single rank-eight gauge group $E_8$ is already present in the ten-dimensional heterotic landscape. This is a striking illustration of the fact that not all string models can be realized through orbifold techniques of the sort we will be utilizing, and that our landscape studies will necessarily be limited in both class and scope. 
\end{itemize} \section{Gauge groups: Statistical results} \setcounter{footnote}{0} We now turn our attention to our main interest, the landscape of heterotic string models in four dimensions. As we discussed in Sect.~2, the string models we are examining are free-field models ({\it i.e.}\/, models in which our worldsheet fields are free and non-interacting). As such, the gauge sector of such four-dimensional heterotic string models can be described by even self-dual\footnote{ For strings with spacetime supersymmetry, modular invariance requires these gauge lattices to be even and self-dual. In other cases, the self-duality properties actually apply to the full lattice corresponding to the internal gauge group as well as the spacetime Lorentz group.} Lorentzian lattices of dimensionality $(6,22)$, as is directly evident in a bosonic (Narain) construction~\cite{Narain,Lerche}. [This is the four-dimensional analogue of the sixteen-dimensional lattice that underlies the $SO(32)$ or $E_8\times E_8$ heterotic string models in ten dimensions; the remaining $(6,6)$ components arise internally through compactification from ten to four dimensions.] In general, the right-moving (worldsheet supersymmetric) six-dimensional components of these gauge lattices correspond at best to very small right-moving gauge groups composed of products of $U(1)$'s, or $SU(2)$'s realized at affine level $k=2$. We shall therefore disregard these right-moving gauge groups and focus exclusively on the left-moving gauge groups of these models. Moreover, because we are focusing on free-field constructions involving only complex bosonic or fermionic worldsheet fields, the possibility of rank-cutting is not available in these models. Consequently, these models all have left-moving simply-laced gauge groups with total rank 22, realized at affine level $k=1$.
As we shall see, the twin requirements of modular invariance and physically sensible projections impose powerful self-consistency constraints on such models and their possible gauge groups. Such constraints are not present for open strings, and they are ultimately responsible for most of the statistical features we shall observe. Moreover, as discussed previously, in this paper we shall restrict our attention to models which are non-supersymmetric but tachyon-free. Such models are therefore stable at tree level, but have finite, non-zero one-loop vacuum energies (one-loop cosmological constants). In this section, we shall focus on statistical properties of the gauge groups of these models. We believe that these properties are largely independent of the fact that our models are non-supersymmetric. In Sect.~5, we shall then focus on the statistical distributions of the values of their cosmological constants, and in Sect.~6 we shall discuss correlations between the gauge groups and the cosmological constants. In comparison with the situation for heterotic strings in ten dimensions, the four-dimensional situation is vastly more complex, with literally billions and billions of distinct, self-consistent heterotic string models exhibiting arbitrary degrees of intricacy and complexity. These models are generated randomly, with increasingly many randomly chosen twists and overlapping orbifold projections. Each time a self-consistent model is obtained, it is compared with all other models that have already been obtained. It is deemed to represent a new model only if it has a massless spacetime spectrum which differs from those of all models previously obtained. Because of the tremendous worldsheet redundancy inherent in the free-fermionic approach, it becomes increasingly more difficult (both algorithmically and in terms of computer time) to find a ``new'' model as more and more models are constructed and stored.
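The deduplication step described above can be sketched schematically. In this illustrative fragment (the model representation and helper names are hypothetical, not the actual software of Ref.~\cite{Senechal}), each candidate model is reduced to a canonical, hashable form of its massless spacetime spectrum, and only models with previously unseen spectra are retained, so worldsheet-level redundancy becomes invisible:

```python
# Sketch of the deduplication criterion: a candidate model counts as
# "new" only if its massless spacetime spectrum differs from that of
# every model already stored.  (Illustrative only: the dictionary-based
# model format below is a placeholder assumption.)

def canonical_spectrum(model):
    # e.g. gauge group as a sorted tuple of factor names, plus sorted
    # (representation, multiplicity) pairs of massless states
    return (tuple(sorted(model["gauge_factors"])),
            tuple(sorted(model["massless_states"].items())))

def collect_distinct(candidates):
    seen, distinct = set(), []
    for model in candidates:
        key = canonical_spectrum(model)
        if key not in seen:          # compare against all stored models
            seen.add(key)
            distinct.append(model)
    return distinct

# Toy usage: two worldsheet realizations of the same spacetime spectrum
# collapse to a single distinct model.
m1 = {"gauge_factors": ["SO(16)", "SO(16)"],
      "massless_states": {"(16,16)": 1}}
m2 = {"gauge_factors": ["SO(16)", "SO(16)"],
      "massless_states": {"(16,16)": 1}}
assert len(collect_distinct([m1, m2])) == 1
```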
Nevertheless, through random choices for all possible twists and GSO projection phases, we have generated a set of more than $10^5$ distinct four-dimensional heterotic string models which we believe provide a statistically representative sample of the heterotic landscape. Indeed, in many cases we believe that our sample is essentially complete, especially for relatively simple models involving relatively few orbifold twists. Just as for ten-dimensional heterotic strings, we can visualize a ``tree'' structure, grouping our models on higher and higher branches of the tree in terms of their numbers of gauge-group factors. For this purpose, we shall decompose our gauge groups into irreducible factors; {\it e.g.}\/, $SO(4)\sim SU(2)\times SU(2)$ will be considered to have two factors. We shall generally let $f$ denote the number of irreducible gauge-group factors. Clearly, as $f$ increases, our gauge group of total rank 22 becomes increasingly ``shattered'' into smaller and smaller pieces. The following is a description of some salient features of the tree that emerges from our study. Again we emphasize that our focus is restricted to models which are non-supersymmetric but tachyon-free. \begin{itemize} \item {$f=1$}:\/ We find that there is only one heterotic string model whose gauge group contains only one factor: this is an $SO(44)$ string model which functions as the ``root'' of our subsequent tree. This is a model in which all left-moving worldsheet fermions have identical boundary conditions, but in which all tachyons are GSO-projected out of the spectrum. This model is the four-dimensional analogue of the ten-dimensional $SO(32)$ heterotic string model. \item {$f=2$}:\/ On the next branch, we find $34$ distinct string models whose gauge groups contain two simple factors. As might be expected, in all cases these gauge groups are of the form $SO(44-n)\times SO(n)$ for $n=8,12,16,20$. 
Each of these models is constructed from the above $SO(44)$ string model by implementing a single twist. Modular invariance, together with the requirement of worldsheet superconformal invariance and a single-valued worldsheet supercurrent, is ultimately responsible for restricting the possible twists to those with $n= 8,12,16,20$. Note that $n=36,32,28,24$ are implicitly included in this set, yielding the same gauge groups as those with $n= 8,12,16,20$ respectively. Finally, note that cases with $n=4$ (or equivalently $n=40$) are not absent; they are instead listed amongst the $f=3$ models since $SO(4)\sim SU(2)\times SU(2)$. \item {$f=3$}:\/ On the next branch, we find that $186$ distinct models emerge, exhibiting eight distinct gauge groups. Notably, this branch contains the first instance of an exceptional group. The eight relevant gauge groups, each with total rank 22, are: $SO(28)\times SO(8)^2$, $SO(24)\times SO(12)\times SO(8)$, $SO(20)\times SO(16)\times SO(8)$, $SO(20)\times SO(12)^2$, $SO(16)^2\times SO(12)$, $SO(40)\times SU(2)^2$, $E_8\times SO(20)\times SO(8)$, and $E_8\times SO(16)\times SO(12)$. \item $f=4$: This level gives rise to $34$ distinct gauge groups, and is notable for the first appearance of $E_7$ as well as the first appearance of non-trivial $SU(n)$ groups with $n=4,8,12$. This is also the first level at which gauge groups begin to contain $U(1)$ factors. This correlation between $SU(n)$ and $U(1)$ gauge-group factors will be discussed further below. Note that other `SU' groups are still excluded, presumably on the basis of modular-invariance constraints. \item $f=5$: This is the first level at which $E_6$ appears, as well as gauge groups $SU(n)$ with $n=6,10,14$. We find $49$ distinct gauge groups at this level. \end{itemize} As we continue towards larger values of $f$, our gauge groups become increasingly ``shattered'' into more and more irreducible factors.
As a result, it becomes more and more difficult to find models with relatively large gauge-group factors. Thus, as $f$ increases, we begin to witness the disappearance of relatively large groups and a predominance of relatively small groups. \begin{itemize} \item $f=6$: At this level, $SU(14)$ disappears while $SU(7)$ makes its first appearance. We find $70$ distinct gauge groups at this level. \item $f=7$: We find $75$ distinct gauge groups at this level. Only one of these gauge groups contains an $E_8$ factor. This is also the first level at which an $SU(5)$ gauge-group factor appears. \item $f=8$: We find $89$ distinct gauge groups at this level. There are no $E_8$ gauge-group factors at this level, and we do not find an $E_8$ gauge-group factor again at any higher level. This is also the first level at which an $SU(3)$ factor appears. Thus, assuming these properties persist for the full heterotic landscape, we obtain an interesting stringy correlation: no string models in this class can have a gauge group containing $E_8\times SU(3)$. These sorts of constraints emerge from modular invariance --- {\it i.e.}\/, our inability to construct a self-dual 22-dimensional lattice with these two factors. Indeed, such a gauge group would have been possible based on all other considerations ({\it e.g.}\/, CFT central charges, total rank constraints, {\it etc.}\/). \item $f=9$: Here we find that our string models give rise to $86$ distinct gauge groups. This level also witnesses the permanent disappearance of $SU(12)$. \end{itemize} This tree ends, of course, at $f=22$, where our gauge groups contain only $U(1)$ and $SU(2)$ factors, each with rank 1. It is also interesting to trace the properties of this tree backwards from the $f=22$ endpoint. \begin{itemize} \item $f=22$: Here we find only $16$ gauge groups, all of the form $U(1)^n\times SU(2)^{22-n}$ for all values $0\leq n\leq 22$ except for $n=1,2,3,5,7,9,11$.
Clearly no larger gauge-group factors are possible at this ``maximally shattered'' endpoint. \item $f=21$: Moving backwards one level, we find $10$ distinct gauge groups, all of the form $U(1)^n\times SU(2)^{20-n}\times SU(3)$ for $n=9,11,12,13,14,15,16,17,18,19$. Note that at this level, an $SU(3)$ factor must {\it always}\/ exist in the total gauge group since there are no simply-laced irreducible rank-two groups other than $SU(3)$. \item $f=20$: Moving backwards again, we find $24$ distinct gauge groups at this level. Each of these models contains either $SU(3)^2$ or $SU(4)\sim SO(6)$. \item $f=19$: Moving backwards one further level, we find $37$ distinct gauge groups, each of which contains either $SU(3)^3$ or $SU(3)\times SU(4)$ or $SU(5)$ or $SO(8)$. \end{itemize} Clearly, this process continues for all branches of our ``tree'' over the range $1\leq f\leq 22$. Combining these results from all branches, we find a total of $1301$ distinct gauge groups from amongst over $120,000$ distinct models. As we discussed at the end of Sect.~2, it is important in such a statistical landscape study to consider whether the properties we find are rigorously true or are merely artifacts of having considered a finite statistical sample of models or a sample which is itself not representative of the landscape at large. One way to do this is to determine which properties or statistical correlations are likely to persist because of some deeper string consistency constraint, and which are likely to reflect merely a finite sample size. For example, the fact that the $f=21,22$ levels give rise to gauge groups of the forms listed above with only particular values of $n$ is likely to reflect the finite size of our statistical sample. 
Clearly, it is extremely difficult to randomly find a sequence of overlapping sequential orbifold twists which breaks the gauge group down to such forms for any arbitrary values of $n$, all without introducing tachyons, so it is entirely possible that our random search through the space of models has simply not happened to find them all. Thus, such restrictions on the values of $n$ in these cases are not likely to be particularly meaningful. However, the broader facts that $E_8$ gauge-group factors are not realized beyond a certain critical value of $f$, and that $SU(3)$ gauge-group factors are realized only beyond a different critical value of $f$, are likely to be properties that relate directly back to the required self-duality of the underlying charge lattices. Such properties are therefore likely to be statistically meaningful and representative of the perturbative heterotic landscape as a whole. Of course, these critical values of $f$ should be viewed as approximate, as statistical results describing probability distributions. Nevertheless, the relative appearance and disappearance of different gauge-group factors are likely to be meaningful. Throughout this paper, we shall focus on only those statistical correlations which we believe represent the perturbative heterotic landscape at large. Indeed, these correlations are stable in the ``continuum limit'' (in the sense defined at the end of Sect.~2). Having outlined the basic tree structure of our heterotic mini-landscape, let us now examine its overall statistics and correlations. The first question we might ask pertains to the {\it overall distribution}\/ of models across bins with different values of $f$. Just how ``shattered'' are the typical gauge groups in our heterotic landscape?
\begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot1.eps} } \caption{The absolute probabilities of obtaining distinct four-dimensional heterotic string models as a function of the degree to which their gauge groups are ``shattered'' into separate irreducible factors, stretching from a unique model with the irreducible rank-22 gauge group $SO(44)$ to models with only rank-one $U(1)$ and $SU(2)$ gauge-group factors. The total value of the points (the ``area under the curve'') is 1. As the number of gauge-group factors increases, the behavior of the probability distribution bifurcates according to whether this number is even or odd. Indeed, as this number approaches its upper limit $22$, models with even numbers of gauge-group factors become approximately ten times more numerous than those with odd numbers of gauge-group factors.} \label{Fig1} \end{figure} The results are shown in Fig.~\ref{Fig1}, where we plot the absolute probabilities of obtaining distinct four-dimensional heterotic string models as a function of $f$, the number of factors in their spacetime gauge groups. These probabilities are calculated by dividing the total number of distinct models that we have found for each value of $f$ by the total number of models we have generated. (Note that we plot relative probabilities rather than raw numbers of models because it is only these relative probabilities which are stable in the continuum limit discussed above. Indeed, we have explicitly verified that restricting our sample size in any random manner does not significantly affect the overall shape of the curves in Fig.~\ref{Fig1}.) The average number of gauge-group factors in our sample set is $\langle f\rangle \approx 13.75$. It is easy to understand the properties of these curves. For $f=1$, we have only one string model, with gauge group $SO(44)$. 
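The normalization just described is simple bookkeeping; the following sketch uses toy counts (hypothetical values, standing in for the actual per-$f$ tallies behind Fig.~\ref{Fig1}):

```python
from collections import Counter

# toy counts of distinct models per number f of gauge-group factors
# (hypothetical values; the true counts underlie Fig. 1)
models_per_f = Counter({1: 1, 2: 34, 3: 186, 12: 9000, 14: 15000, 22: 1500})

total = sum(models_per_f.values())
prob = {f: m / total for f, m in models_per_f.items()}  # absolute probabilities
mean_f = sum(f * p for f, p in prob.items())            # analogue of <f>

assert abs(sum(prob.values()) - 1.0) < 1e-12   # "area under the curve" is 1
```

Only these normalized probabilities, not the raw counts, are expected to remain stable as the sample size grows.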
However, as $f$ increases beyond $1$, the models grow in complexity, each with increasingly intricate patterns of overlapping orbifold twists and Wilson lines, and consequently the number of such distinct models grows considerably. For $f\gsim 14$, we find that the behavior of this probability as a function of $f$ bifurcates according to whether $f$ is even or odd. Indeed, as $f\to 22$, we find that models with even numbers of gauge-group factors become approximately ten times more numerous than those with odd numbers of gauge-group factors. Of course, this behavior might be an artifact of our statistical sampling methodology for randomly generating string models. However, we believe that this is actually a reflection of the underlying modular-invariance constraints that impose severe self-duality restrictions on the charge lattices corresponding to these models. \begin{figure}[t] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot2.eps} } \caption{The number of distinct gauge groups realized from heterotic string models with $f$ gauge-group factors, plotted as a function of $f$. Only $1301$ distinct gauge groups are realized from $\sim 10^5$ distinct heterotic string models.} \label{Fig2} \end{figure} Fig.~\ref{Fig1} essentially represents the total number of distinct {\it models}\/ found as a function of $f$. However, we can also examine the number of distinct {\it gauge groups}\/ found as a function of $f$. This data appears in Fig.~\ref{Fig2}. Once again, although we expect the raw number of distinct gauge groups for each $f$ to continue to grow as our sample size increases, we do not expect the overall shape of these curves to change significantly. For small values of $f$, the number of distinct realizable gauge groups is relatively small, as discussed earlier. For example, for $f=1$ we have a single realizable gauge group $SO(44)$, while for $f=2$ we have the four groups $SO(44-n)\times SO(n)$ for $n=8,12,16,20$.
Clearly, as $f$ increases, the number of potential gauge group combinations increases significantly, reaching a maximum for $f\approx 12$. Beyond this, however, the relative paucity of Lie groups with small rank becomes the dominant limiting factor, ultimately leading to very few distinct realizable gauge groups as $f\to 22$. Since we have found that $N\approx 1.2\times 10^5$ distinct heterotic string models yield only $1301$ distinct gauge groups, this number of models yields an average gauge-group multiplicity factor $\approx 95$. As we shall discuss later ({\it c.f.}\/ Fig.~\ref{Fig10}), we expect that this multiplicity will only increase as more models are added to our sample set. However, it is also interesting to examine how this average multiplicity is distributed across the set of distinct gauge groups. This can be calculated by dividing the absolute probabilities of obtaining distinct heterotic string models (plotted as a function of $f$ in Fig.~\ref{Fig1}) by the number of distinct gauge groups (plotted as a function of $f$ in Fig.~\ref{Fig2}). The resulting average multiplicity factor, distributed as a function of $f$, is shown in Fig.~\ref{Fig3}. As we see, this average redundancy factor is relatively small for small $f$, but grows dramatically as $f$ increases. This makes sense: as the heterotic gauge group accrues more factors, there are more combinations of allowed representations for our matter content, thereby leading to more possibilities for distinct models with distinct massless spectra. \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot3.eps} } \caption{Average gauge-group multiplicity (defined as the number of distinct heterotic string models divided by the number of distinct gauge groups), plotted as a function of $f$, the number of gauge-group factors in the total gauge group. As the number of factors increases, we see that there are indeed more ways of producing a distinct string model with a given gauge group. 
Note that the greatest multiplicities occur for models with relatively large, {\it even}\/ numbers of gauge-group factors.} \label{Fig3} \end{figure} \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot4.eps} } \caption{ The composition of heterotic gauge groups, showing the average contributions to the total allowed rank from $SO(2n\geq 6)$ factors (denoted `SO'), $SU(n\geq 3)$ factors (denoted `SU'), exceptional group factors $E_{6,7,8}$ (denoted `E'), and rank-one factors $U(1)$ and $SU(2)$ (denoted `I'). In each case, these contributions are plotted as functions of the number of gauge-group factors in the string model and averaged over all string models found with that number of factors. In the case of $SU(4)\sim SO(6)$ factors, the corresponding rank contribution is apportioned equally between the `SO' and `SU' categories. The total of all four lines is 22, as required.} \label{Fig4} \end{figure} Another important statistical question we may consider concerns the relative abundances of `SO', `SU', and exceptional gauge groups. Regardless of the value of $f$, the total rank of the full gauge group is $22$; thus, the interesting question concerns how this total rank is apportioned amongst these different classes of Lie groups. This information is shown in Fig.~\ref{Fig4}. For the purposes of this plot, contributions from $SU(4) \sim SO(6)$ factors, when they occur, are equally shared between `SO' and `SU' categories. The total of all four lines is 22, as required. Once again, we observe several important features which are consistent with our previous discussions. First, we see that all of the allowed rank is found to reside in `SO' groups for $f=1,2$; for $f=1$, this is because the unique realizable gauge group in such models is $SO(44)$, while for $f=2$, this is because the only realizable gauge-group breaking in such models is of the form $SO(44)\to SO(44-n)\times SO(n)$ for $n=8,12,16,20$. 
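The rank bookkeeping used in Fig.~\ref{Fig4} can be made concrete with a small helper (a hypothetical sketch, not the authors' code, implementing the stated conventions, including the equal SO/SU split for $SU(4)\sim SO(6)$):

```python
def rank_contributions(factors):
    """Apportion a model's total rank 22 among the four categories of Fig. 4.
    `factors` is a list of (family, n) pairs, e.g. ("SO", 16) for SO(16),
    ("SU", 3) for SU(3), ("E", 8) for E8, ("U1", 1) for a U(1) factor."""
    contrib = {"SO": 0.0, "SU": 0.0, "E": 0.0, "I": 0.0}
    for family, n in factors:
        if family == "U1" or (family == "SU" and n == 2):
            contrib["I"] += 1                  # rank-one factors: U(1), SU(2)
        elif family == "E":
            contrib["E"] += n                  # E_n has rank n for n = 6, 7, 8
        elif (family == "SO" and n == 6) or (family == "SU" and n == 4):
            contrib["SO"] += 1.5               # SU(4) ~ SO(6): its rank 3 is
            contrib["SU"] += 1.5               # shared equally between SO, SU
        elif family == "SO":
            contrib["SO"] += n // 2            # SO(2k) has rank k
        else:
            contrib["SU"] += n - 1             # SU(n) has rank n - 1
    return contrib

# e.g. the f = 3 group E8 x SO(16) x SO(12): ranks 8 + 8 + 6 = 22
c = rank_contributions([("E", 8), ("SO", 16), ("SO", 12)])
assert sum(c.values()) == 22
```

Averaging these per-model contributions over all models with a given $f$ reproduces the four curves of Fig.~\ref{Fig4}, which must sum to 22 for every $f$.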
We also observe that the `E' groups do not contribute any net rank unless $f\geq 3$, while the `SU' groups do not contribute any net rank unless $f\geq 4$. It is worth noting that the exceptional groups $E_{6,7,8}$ are exceedingly rare for all values of $f$, especially given that their share of the total rank in a given string model must be at least six whenever they appear. As $f$ grows increasingly large, however, the bulk of the rank is to be found in $U(1)$ and $SU(2)$ gauge factors. Of course, for $f=21$, the `SU' groups have an average rank which is exactly equal to $2$. This reflects the fact that each of the realizable gauge groups for $f=21$ necessarily contains a single $SU(3)$ factor, as previously discussed. Across all of our models, we find that \begin{itemize} \item {\bf 85.79\%} of our heterotic string models contain $SO(2n\geq 6)$ factors; amongst these models, the average number of such factors is $\approx 2.5$. \item {\bf 74.35\%} of our heterotic string models contain $SU(n\geq 3)$ factors; amongst these models, the average number of such factors is $\approx 2.05$. \item {\bf 0.57\%} of our heterotic string models contain $E_{6,7,8}$ factors; amongst these models, the average number of such factors is $\approx 1.01$. \item {\bf 99.30\%} of our heterotic string models contain $U(1)$ or $SU(2)$ factors; amongst these models, the average number of such factors is $\approx 13.04$. \end{itemize} In the above statistics, an $SU(4)\sim SO(6)$ factor is considered to be a member of whichever category (`SU' or `SO') is under discussion. Note that these statistics are calculated across distinct heterotic string models. However, as we have seen, there is a tremendous redundancy of gauge groups amongst these string models, with only 1301 distinct gauge groups appearing for these $\approx 1.2\times 10^5$ models. 
Evaluated across the set of distinct {\it gauge groups}\/ which emerge from these string models (or equivalently, employing a different measure which assigns statistical weights to models according to the distinctness of their gauge groups), these results change somewhat: \begin{itemize} \item {\bf 88.55\%} of our heterotic gauge groups contain $SO(2n\geq 6)$ factors; amongst these groups, the average number of such factors is $\approx 2.30$. \item {\bf 76.79\%} of our heterotic gauge groups contain $SU(n\geq 3)$ factors; amongst these groups, the average number of such factors is $\approx 2.39$. \item {\bf 8.38\%} of our heterotic gauge groups contain $E_{6,7,8}$ factors; amongst these groups, the average number of such factors is $\approx 1.06$. \item {\bf 97.62\%} of our heterotic gauge groups contain $U(1)$ or $SU(2)$ factors; amongst these groups, the average number of such factors is $\approx 8.83$. \end{itemize} Note that the biggest relative change occurs for the exceptional groups, with over $8\%$ of our gauge groups containing exceptional factors. Thus, we see that while exceptional gauge-group factors appear somewhat frequently within the set of allowed distinct heterotic gauge groups, the gauge groups containing exceptional factors emerge relatively rarely from our underlying heterotic string models. \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot5.eps} } \caption{The probability that a given $SO(2n)$ or $SU(n+1)$ gauge-group factor appears at least once in the gauge group of a randomly chosen heterotic string model, plotted as a function of the rank $n$ of the factor. While the `SU' curve (solid line) is plotted for all ranks $\geq 1$, the `SO' curve (dashed line) is only plotted for ranks $\geq 3$ since $SO(2)\sim U(1)$ and $SO(4)\sim SU(2)^2$. These curves necessarily share a common point for rank 3, where $SU(4)\sim SO(6)$. 
} \label{Fig5} \end{figure} Of course, this information does not indicate the probabilities of individual `SO' or `SU' groups. Such individual probabilities are shown in Fig.~\ref{Fig5}, where we indicate the probabilities of individual `SO' or `SU' groups as functions of the ranks of these groups. We observe that for all ranks~$\geq 3$, the `SU' groups are significantly less common than the corresponding `SO' groups. This pattern exists even for ranks up to $22$, where the probabilities for `SU' groups continue to be significantly less than those of their corresponding `SO' counterparts. This helps to explain why, despite the information itemized above, the `SO' groups are able to make a significantly larger contribution to the total rank than do the `SU' groups, as illustrated in Fig.~\ref{Fig4}. It is natural to wonder why the `SO' groups tend to dominate over the `SU' groups in this way, especially since ordinary quantum field theory would lead to no such preferences. Of course, it is entirely possible that these results may indicate some sort of bias in our sample of free-field string models. However, in a heterotic string framework, we recall that gauge symmetries ultimately have their origins as internal symmetries amongst worldsheet fields. Indeed, within the free-field constructions we are examining, `SO' groups tend to be the most natural since they represent rotational symmetries amongst identical worldsheet fields. By contrast, `SU' groups are necessarily more difficult to construct, especially as the rank of the `SU' group becomes relatively large. We can illustrate this fact directly in the case of free-field worldsheet constructions by considering the relevant charge lattices for the gauge-boson states. These charges are nothing but the weights of the adjoint representations of these gauge groups, where each direction of the charge lattice corresponds to a different worldsheet field.
It is then straightforward to consider how the different gauge groups are embedded in such a lattice, {\it i.e.}\/, how these gauge-boson states can be represented in terms of the underlying string degrees of freedom. For example, in a string formulation based upon complex worldsheet bosons $\phi_\ell$ or fermions $\psi_\ell$, each lattice direction $\hat e_\ell$ --- and consequently each generator $U_\ell$ --- corresponds to a different worldsheet boson or fermion: $U_\ell\equiv i\partial \phi_\ell=\overline{\psi}_\ell \psi_\ell$. Given such a construction, we simply need to understand how the simple roots of each gauge group are oriented with respect to these lattice directions. Disregarding irrelevant overall lattice permutations and inversions, the `SO' groups have a natural lattice embedding. For any group $SO(2n)$, the roots $\lbrace \vec \alpha\rbrace$ can be represented in an $n$-dimensional lattice as $\lbrace \pm \hat e_i \pm \hat e_j\rbrace$, with the simple roots given by $\vec\alpha_i = \hat e_i - \hat e_{i+1}$ for $1\leq i\leq n-1$, and $\vec \alpha_n = \hat e_{n-1}+ \hat e_{n}$. As we see, all coefficients for these embeddings are integers, which means that these charge vectors can be easily realized through excitations of Neveu-Schwarz worldsheet fermions. By contrast, the group $SU(n)$ contains roots with necessarily non-integer coefficients if embedded in an $(n-1)$-dimensional lattice [as appropriate for the rank of $SU(n)$]. For example, $SU(3)$ has two simple roots whose relative angle is $2\pi /3$, ensuring that no two-dimensional orthogonal coordinate system can be found with respect to which both roots have integer coordinates. In free-field string constructions, this problem is circumvented by embedding our $SU(n)$ groups into an $n$-dimensional lattice rather than an $(n-1)$-dimensional lattice. One can then represent the $n-1$ simple roots as $\vec \alpha_i = \hat e_i - \hat e_{i+1}$ for $1\leq i\leq n-1$, using only integer coefficients. 
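The embedding argument above can be checked numerically (a quick sketch; $SO(8)$ and $SU(3)$ are merely illustrative choices):

```python
from itertools import product

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# SO(2n) (here n = 4, i.e. SO(8)): simple roots e_i - e_{i+1} for i < n,
# plus e_{n-1} + e_n -- all with integer coordinates in an n-dim lattice
n = 4
e = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
so_roots = [tuple(a - b for a, b in zip(e[i], e[i + 1])) for i in range(n - 1)]
so_roots.append(tuple(a + b for a, b in zip(e[n - 2], e[n - 1])))
assert all(dot(r, r) == 2 for r in so_roots)

# SU(3): two simple roots of norm^2 = 2 at relative angle 2*pi/3, i.e.
# inner product -1.  Over Z^2 the only norm^2 = 2 vectors are (+-1, +-1),
# whose mutual inner products are 0 or +-2 -- no integer 2D embedding:
vecs = [v for v in product((-1, 0, 1), repeat=2) if dot(v, v) == 2]
dots = {dot(a, b) for a in vecs for b in vecs}
assert -1 not in dots

# the free-field fix: embed SU(3) in a 3-dim lattice with integer roots
su3 = [(1, -1, 0), (0, 1, -1)]
assert dot(*su3) == -1 and all(dot(r, r) == 2 for r in su3)
```

Both simple roots in the three-dimensional $SU(3)$ embedding are orthogonal to the trace direction $(1,1,1)$, which is precisely the origin of the accompanying $U(1)$ factor discussed below.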
However, this requires the use of an {\it extra}\/ lattice direction in the construction --- {\it i.e.}\/, this requires the coordinated participation of an additional worldsheet degree of freedom. Indeed, the $SU(n)$ groups are realized non-trivially only along diagonal hyperplanes within a higher-dimensional charge lattice. Such groups are consequently more difficult to achieve than their $SO(2n)$ cousins. This also explains why the appearance of $SU(n)$ gauge groups is strongly correlated with the appearance of $U(1)$ factors in such free-field string models. As the above embedding indicates, in order to realize $SU(n)$ what we are really doing is first realizing $U(n)\equiv SU(n)\times U(1)$ in an $n$-dimensional lattice. In this $n$-dimensional lattice, the $U(1)$ group factor amounts to the trace of the $U(n)$ symmetry, and corresponds to the lattice direction ${\vec E}\equiv \sum_{\ell=1}^n \hat e_\ell$. The $(n-1)$-dimensional hyperplane orthogonal to ${\vec E}$ then corresponds to the $SU(n)$ gauge group. Thus, in such free-field string models, we see that the appearance of $SU(n)$ gauge groups is naturally correlated with the appearance of $U(1)$ gauge groups. Indeed, within our statistical sample of heterotic string models, we find that \begin{itemize} \item {\bf 99.81\% of all heterotic string models which contain one or more $SU(n)$ factors also exhibit an equal or greater number of $U(1)$ factors.} [In the remaining 0.19\% of models, one or more of these $U(1)$ factors is absorbed to become part of another non-abelian group.] By contrast, the same can be said for only $74.62\%$ of models with $SO(2n\geq 6)$ factors and only $61.07\%$ of models with $E_{6,7,8}$ factors. Given that the average number of $U(1)$ factors per model across our entire statistical sample is $\approx 6.75$, these results for the `SO' and `E' groups are essentially random and do not reflect any underlying correlations. 
\end{itemize} Note that these last statements only apply to $SU(n)$ gauge-group factors with $n=3$ or $n\geq 5$; the special case $SU(4)$ shares a root system with the orthogonal group $SO(6)$ and consequently does not require such an embedding. For the purposes of realizing the Standard Model from heterotic strings, we may also be interested in the relative probabilities of achieving $SU(3)$, $SU(2)$, and $U(1)$ gauge-group factors individually. This information is shown in Fig.~\ref{Fig6}(a). Moreover, in Fig.~\ref{Fig6}(b), we show the joint probability of {\it simultaneously}\/ obtaining at least one of each of these factors within a given string model, producing the combined factor $G_{\rm SM}\equiv SU(3)\times SU(2)\times U(1)$. Indeed, averaging across all our heterotic string models, we find that \begin{itemize} \item {\bf 10.64\%} of our heterotic string models contain $SU(3)$ factors; amongst these models, the average number of such factors is $\approx 1.88$. \item {\bf 95.06\%} of our heterotic string models contain $SU(2)$ factors; amongst these models, the average number of such factors is $\approx 6.85$. \item {\bf 90.80\%} of our heterotic string models contain $U(1)$ factors; amongst these models, the average number of such factors is $\approx 4.40$. \end{itemize} \begin{figure}[ht] \centerline{ \epsfxsize 3.1 truein \epsfbox {plot6a.eps} \epsfxsize 3.1 truein \epsfbox {plot6b.eps} } \caption{(a) (left) The distribution of total rank amongst $U(1)$, $SU(2)$, and $SU(3)$ gauge-group factors as a function of the total number of gauge-group factors. The sum of the $U(1)$ and $SU(2)$ lines reproduces the `I' line in Fig.~\protect\ref{Fig4}, while the $SU(3)$ line is a subset of the `SU' line in Fig.~\protect\ref{Fig4}. (b) (right) Absolute joint probability of obtaining a string model with the Standard-Model gauge group $G_{\rm SM}\equiv SU(3)\times SU(2)\times U(1)$ as a function of the total number of gauge-group factors in the string model. 
Although off-scale and therefore not shown on this plot, the probability of realizing $G_{\rm SM}$ actually hits 1 for $f=21$. } \label{Fig6} \end{figure} Note that these overall probabilities for $SU(3)$ and $SU(2)$ factors are consistent with those shown in Fig.~\ref{Fig5}. By contrast, across the set of allowed {\it gauge groups}\/, we find that \begin{itemize} \item {\bf 23.98\%} of our heterotic gauge groups contain $SU(3)$ factors; amongst these groups, the average number of such factors is $\approx 2.05$. \item {\bf 73.87\%} of our heterotic gauge groups contain $SU(2)$ factors; amongst these groups, the average number of such factors is $\approx 5.66$. \item {\bf 91.47\%} of our heterotic gauge groups contain $U(1)$ factors; amongst these groups, the average number of such factors is $\approx 5.10$. \end{itemize} We see that the biggest relative change occurs for $SU(3)$ gauge-group factors: although such factors appear within almost $24\%$ of the allowed gauge groups, these gauge groups emerge from underlying string models only half as frequently as we would have expected. This is why only $10\%$ of our distinct heterotic string models contain $SU(3)$ gauge-group factors. At first glance, it may seem that these results for $SU(2)$ and $U(1)$ factors conflict with the results in Fig.~\ref{Fig6}(a). However, the total number of models containing at least one gauge-group factor of a given type is dependent not only on the average rank contributed by a given class of gauge group as a function of $f$ [as shown in Fig.~\ref{Fig6}(a)], but also on the overall number of models as a function of $f$ [as shown in Fig.~\ref{Fig1}]. Thus, these plots provide independent information corresponding to different ways of correlating and presenting statistics for the same data set. 
As we see from Fig.~\ref{Fig6}(a), $SU(3)$ gauge factors do not statistically appear in our heterotic string models until the overall gauge group has been ``shattered'' into at least eight irreducible factors. Moreover, as we have seen, the net probabilities of $SU(2)$ and $U(1)$ factors peak only when there are relatively large numbers of factors. Consequently, we observe from the joint probabilities in Fig.~\ref{Fig6}(b) that the entire Standard-Model gauge group does not statistically appear until our overall gauge group has been shattered into at least $10$ gauge-group factors. This precludes the appearance of gauge groups such as $G_{\rm SM}\times SO(36)$, $G_{\rm SM}\times E_6\times SO(24)$, and so forth --- all of which would have been allowed on the basis of rank and central charge constraints. Once again, it is the constraint of the self-duality of the complete charge lattice --- {\it i.e.}\/, the modular invariance of the underlying string model --- which is ultimately the origin of such correlations. These results also agree with what has been found in several explicit (supersymmetric) semi-realistic perturbative heterotic string models with Standard-Model gauge groups~\cite{Faraggimodels}. Note from Fig.~\ref{Fig6}(b) that the probability of obtaining the Standard-Model gauge group actually hits $100\%$ for $f=21$, and drops to zero for $f=22$. Both features are easy to explain in terms of correlations we have already seen. For $f=21$, our gauge groups are {\it required}\/ to contain an $SU(3)$ factor since there are no simply-laced irreducible rank-two groups other than $SU(3)$. [This is also why the $SU(3)$ factors always contribute exactly two units of rank to the overall rank for $f=21$, as indicated in Fig.~\ref{Fig6}(a).] For $f=22$, by contrast, no $SU(3)$ factors can possibly appear. 
Another important issue for string model-building concerns cross-correlations between {\it different}\/ gauge groups --- {\it i.e.}\/, the joint probabilities that two different gauge groups appear simultaneously within a single heterotic string model. For example, while one gauge-group factor may correspond to our observable sector, the other factor may correspond to a hidden sector. Likewise, for model-building purposes, we might also be interested in probabilities that involve the entire Standard-Model gauge group $G_{\rm SM}\equiv SU(3)\times SU(2)\times U(1)$ or the entire Pati-Salam gauge group $G_{\rm PS}\equiv SU(4)\times SU(2)^2$. \begin{table} \centerline{ \begin{tabular}{||r|| r|r|r|r|r|r|r|r|r|r||r|r||} \hline \hline ~ & $U_1$~ & $SU_2$ & $SU_3$ & $SU_4$ & $SU_5$ & $SU_{>5}$ & $SO_8$ & $SO_{10}$ & $SO_{>10}$ & $E_{6,7,8}$ & SM~ & PS~ \\ \hline \hline $U_1$ & 87.13& 86.56& 10.64& 65.83& 2.41& 8.20& 32.17& 14.72& 8.90& 0.35& 10.05& 61.48 \\ \hline $SU_2$ & ~ & 94.05& 10.05& 62.80& 2.14& 7.75& 37.29& 13.33& 12.80& 0.47& 9.81& 54.31 \\ \hline $SU_3$ & ~ & ~ & 7.75& 5.61& 0.89& 0.28& 1.44& 0.35& 0.06& $10^{-5}$ & 7.19& 5.04 \\ \hline $SU_4$ & ~ & ~ & ~ & 35.94& 1.43& 5.82& 24.41& 11.15& 6.53& 0.22& 5.18& 33.29 \\ \hline $SU_5$ & ~ & ~ & ~ & ~ & 0.28& 0.09& 0.46& 0.14& 0.02& 0 & 0.73& 1.21 \\ \hline $SU_{>5}$ & ~ & ~ & ~ & ~ & ~ & 0.59& 3.30& 1.65& 1.03& 0.06& 0.25& 4.87 \\ \hline $SO_8$ & ~ & ~ & ~ & ~ & ~ & ~ & 12.68& 6.43& 8.66& 0.30& 1.19& 22.02 \\ \hline $SO_{10}$ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 2.04& 2.57& 0.13& 0.25& 9.44 \\ \hline $SO_{>10}$ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 3.03& 0.25& 0.03& 5.25 \\ \hline $E_{6,7,8}$ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 0.01& 0& 0.13 \\ \hline \hline SM & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 7.12& 3.86 \\ \hline PS & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & ~ & 26.86 \\ \hline \hline total:& 90.80& 95.06& 10.64& 66.53& 2.41& 8.20& 40.17& 15.17& 14.94& 0.57& 10.05& 62.05\\ \hline \hline \end{tabular} } 
\caption{Percentages of four-dimensional heterotic string models which exhibit various combinations of gauge groups. Columns/rows labeled as $SU_{>5}$, $SO_{>10}$, and $E_{6,7,8}$ indicate {\it any}\/ gauge group in those respective categories [{\it e.g.}\/, $SU_{>5}$ indicates any gauge group $SU(n)$ with $n>5$]. Off-diagonal entries show the percentage of models whose gauge groups simultaneously contain the factors associated with the corresponding rows/columns, while diagonal entries show the percentage of models which meet the corresponding criteria at least {\it twice}\/. For example, $7.75\%$ of models contain an $SU(2)\times SU(n)$ factor with any $n>5$, while $35.94\%$ of models contain at least two $SU(4)$ factors. `SM' and `PS' respectively indicate the Standard-Model gauge group $G_{\rm SM}\equiv SU(3)\times SU(2)\times U(1)$ and the Pati-Salam gauge group $G_{\rm PS}\equiv SU(4)\times SU(2)^2$. Thus, only $0.13\%$ of models contain $G_{\rm PS}$ together with an exceptional group, while $33.29\%$ of models contain at least $G_{\rm PS}\times SU(2)=SU(4)^2\times SU(2)^2$ and $26.86\%$ of models contain at least $G_{\rm PS}^2= SU(4)^2\times SU(2)^4$. A zero entry indicates that no string model with the required properties was found, whereas the entry $10^{-5}$ indicates the existence of a single string model with the given properties. Entries along the `total' row indicate the total percentages of models which have the corresponding gauge-group factor, regardless of what other gauge groups may appear; note that this is {\it not}\/ merely the sum of the joint probabilities along a row/column since these joint probabilities are generally not exclusive. For example, although $86.56\%$ of models have both an $SU(2)$ factor and a $U(1)$ factor and $94.05\%$ of models contain $SU(2)^2$, the total percentage of models containing an $SU(2)$ factor is only slightly higher at $95.06\%$, as claimed earlier.
Note that nearly every string model which contains an $SU(n\geq 3)$ gauge-group factor for $n\not=4$ also contains a $U(1)$ gauge-group factor, as discussed earlier; thus, the joint and individual probabilities are essentially equal in this case. Also note that only $10.05\%$ of heterotic string models contain the Standard-Model gauge group, regardless of other correlations; moreover, when this gauge group appears, it almost always comes with an additional $SU(2)$, $SU(3)$, or $SU(4)$ factor. } \label{table1} \end{table} This information is collected in Table~\ref{table1}. It is easy to read a wealth of information from this table. For example, this table provides further confirmation of our previous claim that nearly all heterotic string models which contain an $SU(n\geq 3)$ factor for $n\not=4$ also contain a corresponding $U(1)$ factor. [Recall that $SU(4)$ is a special case: since $SU(4)\sim SO(6)$, the roots of $SU(4)$ have integer coordinates in a standard lattice embedding.] Likewise, we see from this table that \begin{itemize} \item {\bf The total probability of obtaining the Standard-Model gauge group across our entire sample set is only 10.05\%, regardless of what other gauge-group factors are present.} \end{itemize} This is similar to what is found for Type~I strings~\cite{blumenhagen}, and agrees with the sum of the data points shown in Fig.~\ref{Fig6}(b) after they are weighted by the results shown in Fig.~\ref{Fig1}. Since we have seen that only $10.64\%$ of our heterotic string models contain at least one $SU(3)$ gauge factor [as compared with $95.06\%$ for $SU(2)$ and $90.80\%$ for $U(1)$], we conclude that {\it the relative scarcity of $SU(3)$ factors is the dominant feature involved in suppressing the net probability of appearance of the Standard-Model gauge group}. Indeed, the relative scarcity of $SU(3)$ gauge-group factors is amply illustrated in this table. 
It is also apparent that among the most popular GUT groups $SU(5)$, $SO(10)$, and $E_6$, the choice $SO(10)$ is most often realized across this sample set. This again reflects the relative difficulty of realizing `SU' and `E' groups. Indeed, \begin{itemize} \item {\bf The total probability of obtaining a potential GUT group of the form $SO(2n\geq 10)$ across our entire sample set is 24.5\%, regardless of what other gauge-group factors are present. For $SU(n\geq 5)$ and $E_{6,7,8}$ GUT groups, by contrast, these probabilities fall to 7.7\% and 0.2\% respectively.} \end{itemize} Once again, we point out that these numbers are independent of those in Table~\ref{table1}, since the sets of models exhibiting $SO(10)$ versus $SO(2n>10)$ factors [or exhibiting $SU(5)$ versus $SU(n>5)$ factors] are not mutually exclusive. Note that for the purposes of this tally, we are considering the $SO(4n\geq 12)$ groups as potential GUT groups even though they do not have complex representations; after all, these groups may dynamically break at lower energies to the smaller $SO(4n-2\geq 10)$ groups which do. Moreover, although we are referring to such groups as ``GUT'' groups, we do not mean to imply that string models realizing these groups are necessarily suitable candidates for realizing various grand-unification scenarios. We only mean to indicate that these gauge groups are those that are found to have unification possibilities in terms of the quantum numbers of their irreducible representations. In order for a string model to realize a complete unification scenario, it must also give rise to the GUT Higgs field which is necessary for breaking the GUT group down to that of the Standard Model. 
For each of the potential GUT groups we are discussing, the smallest Higgs field that can accomplish this breaking must transform in the adjoint representation of the GUT group, and string unitarity constraints imply that such Higgs fields cannot appear in the massless string spectrum unless these GUT groups are realized at affine level $k>1$. This can only occur in models which also exhibit rank-cutting~\cite{rankcutting,Prep}, and as discussed in Sect.~2, this is beyond the class of models we are examining. Nevertheless, there are many ways in which such models may incorporate alternative unification scenarios. For example, such GUT groups may be broken by field-theoretic means ({\it e.g.}\/, through effective or composite Higgs fields which emerge by running to lower energies and which therefore do not correspond to elementary string states at the string scale). Thus far, we have paid close attention to the {\it ranks}\/ of the gauge groups. There is, however, another important property of these groups, namely their {\it orders}\/ or dimensions ({\it i.e.}\/, the number of gauge bosons in the massless spectrum of the corresponding string model). This will be particularly relevant in Sect.~6 when we examine correlations between gauge groups and one-loop vacuum amplitudes (cosmological constants). Although we find $1301$ gauge groups for our $\sim 10^5$ models, it turns out that these gauge groups have only $95$ distinct orders. These stretch all the way from $22$ [for $U(1)^{22}$] to $946$ [for $SO(44)$]. Note that the $22$ gauge bosons for $U(1)^{22}$ are nothing but the Cartan generators at the origin of our 22-dimensional charge lattice, with all higher orders signalling the appearance of additional non-Cartan generators which enhance these generators to form larger, non-abelian Lie groups. 
In Fig.~\ref{Fig7}(a), we show the average orders of our string gauge groups as a function of the number $f$ of gauge-group factors, where the average is calculated over all string models sharing a fixed $f$. Within the set of models with fixed $f$, the orders of the corresponding gauge groups can vary wildly. However, the average exhibits a relatively smooth behavior, as seen in Fig.~\ref{Fig7}(a). \begin{figure}[ht] \centerline{ \epsfxsize 3.1 truein \epsfbox {plot7a.eps} \epsfxsize 3.1 truein \epsfbox {plot7b.eps} } \caption{(a) (left) The orders (dimensions) of the gauge groups ({\it i.e.}\/, the number of gauge bosons) averaged over all heterotic string models with a fixed number $f$ of gauge-group factors, plotted as a function of $f$. The monotonic shape of this curve indicates that on average --- and despite the contributions from twisted sectors --- the net effect of breaking gauge groups into smaller irreducible factors is to project non-Cartan gauge bosons out of the massless string spectrum. The $f=1$ point with order $946$ is off-scale and hence not shown. (b) (right) Same data, plotted versus the average rank per gauge-group factor in such models, defined as $22/f$. On this plot, the extra point for $f=1$ with order $946$ would correspond to $\langle {\rm rank}\rangle=22$ and thus continues the asymptotically linear behavior.} \label{Fig7} \end{figure} It is easy to understand the shape of this curve. Ordinarily, we expect the order of a Lie gauge-group factor to scale as the square of its rank $r$: \begin{equation} {\rm order}~\sim~ p \, r^2~~~~~~~~ {\rm for}~~r\gg 1~ \label{orderrank} \end{equation} where the proportionality constant for large $r$ is \begin{equation} p~=~\cases{ 1 & for $SU(r+1)$\cr 2 & for $SO(2r)$\cr \approx 2.17 & for $E_6$\cr \approx 2.71 & for $E_7$\cr \approx 3.88 & for $E_8$~.\cr} \label{pvalues} \end{equation} (For the $E$ groups, these values of $p$ are merely the corresponding orders divided by the squares of the corresponding ranks.)
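These leading coefficients can be checked directly against the standard dimension formulas $\dim SU(n)=n^2-1$ and $\dim SO(n)=n(n-1)/2$; the following short Python sketch is purely illustrative:

```python
# Leading-order coefficients p = order / rank^2 from the exact dimension
# formulas of the simply-laced groups (an illustrative check of Eq. (pvalues)).
def su_dim(n):
    return n * n - 1           # dim SU(n) = n^2 - 1

def so_dim(n):
    return n * (n - 1) // 2    # dim SO(n) = n(n-1)/2

r = 50                                    # any large rank
print(su_dim(r + 1) / r**2)               # 1.04, approaching p = 1
print(so_dim(2 * r) / r**2)               # 1.98, approaching p = 2
print(78 / 6**2, 133 / 7**2, 248 / 8**2)  # E6, E7, E8: 2.17, 2.71, 3.88
```

For the exceptional groups the "asymptotic" coefficient is just the exact ratio order/rank$^2$, since their ranks are fixed.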
Thus, for the total gauge group of a given string model, we expect the total order to scale as \begin{equation} {\rm order}~\sim~ \langle p \rangle \cdot \langle r^2\rangle \cdot \langle \#~{\rm of~factors}\rangle \label{orderrank2} \end{equation} However, letting $f=\langle \#~{\rm of~factors}\rangle$, we see that for our heterotic string models, $\langle r\rangle = 22/f$. We thus find that \begin{equation} {\rm order}~\sim~ (22)^2\,\langle p \rangle \,{1\over \langle \#~{\rm of~factors}\rangle} ~\sim~ 22 \,\langle p\rangle\, \langle {\rm rank}\rangle~ \label{orderrank3} \end{equation} where we are neglecting all terms which are subleading in the average rank. In Fig.~\ref{Fig7}(b), we have plotted the same data as in Fig.~\ref{Fig7}(a), but as a function of $\langle {\rm rank}\rangle \equiv 22/f$. We see that our expectations of roughly linear behavior are indeed realized for large values of $\langle {\rm rank}\rangle$, with an approximate numerical value for $\langle p \rangle$ very close to $2$. Given Eq.~(\ref{pvalues}), this value for $\langle p \rangle$ reflects the dominance of the `SO' groups, with the contributions from `SU' groups tending to cancel the contributions of the larger but rarer `E' groups. For smaller values of $\langle {\rm rank}\rangle$, however, we observe a definite curvature to the plot in Fig.~\ref{Fig7}(b). This reflects the contributions of the subleading terms that we have omitted from Eq.~(\ref{orderrank}) and from our implicit identification of $\langle r^2\rangle\sim\langle r \rangle^2$ in passing from Eq.~(\ref{orderrank2}) to Eq.~(\ref{orderrank3}). Fig.~\ref{Fig7}(a) demonstrates that as we ``shatter'' our gauge groups ({\it e.g.}\/, through orbifold twists), the net effect is to project non-Cartan gauge bosons out of the string spectrum. While this is to be expected, we emphasize that this need not always happen in a string context. 
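As a cross-check of this scaling, one can tabulate the exact total orders of a few rank-22 gauge groups that actually occur in this model set (the groups and their orders are those quoted below in the discussion of Fig.~\ref{Fig8}), and compare them against the leading estimate of Eq.~(\ref{orderrank3}) with $\langle p\rangle=2$. A sketch in Python, with each $U(1)$ factor contributing a single Cartan generator:

```python
# Exact orders of explicit rank-22 gauge groups versus the leading
# estimate 22 <p> <rank> of Eq. (orderrank3), taking <p> = 2.
def su(n): return n * n - 1        # dim SU(n)
def so(n): return n * (n - 1) // 2 # dim SO(n)

decompositions = [                 # (group, f, exact order)
    ("SO(44)",                  1,  so(44)),                 # 946
    ("SO(36) x SO(8)",          2,  so(36) + so(8)),         # 658
    ("SO(40) x SU(2)^2",        3,  so(40) + 2 * su(2)),     # 786
    ("E8 x SO(20) x SO(8)",     3,  248 + so(20) + so(8)),   # 466
    ("SO(30) x SU(4)^2 x U(1)", 4,  so(30) + 2 * su(4) + 1), # 466
    ("U(1)^22",                 22, 22),                     # Cartan only
]
for name, f, order in decompositions:
    estimate = 22 * 2 * (22 / f)   # 22 <p> <rank>, with <rank> = 22/f
    print(f"{name:26s} f={f:2d}  order={order:3d}  estimate={estimate:6.0f}")
```

The estimate tracks the exact orders only roughly at small $f$ [{\it e.g.}\/, it gives $968$ for $f=1$ versus the exact $\dim SO(44)=946$], reflecting the subleading terms and the $\langle r^2\rangle\sim\langle r\rangle^2$ identification discussed above; note also that the three order-466 groups with $f=2,3,4$ illustrate how distinct factorizations can share a single order.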
Because of the constraints coming from modular invariance and anomaly cancellation, performing an orbifold projection in one sector requires that we introduce a corresponding ``twisted'' sector which can potentially give rise to new non-Cartan gauge bosons that replace the previous ones. The most famous example of this phenomenon occurs in ten dimensions: starting from the supersymmetric $SO(32)$ heterotic string (with $496$ gauge bosons), we might attempt an orbifold twist to project down to the gauge group $SO(16)\times SO(16)$ (which would have had only $240$ gauge bosons). However, if we also wish to preserve spacetime supersymmetry, we are forced to introduce a twisted sector which provides exactly $256$ extra gauge bosons to replace those that were lost, thereby enhancing the resulting $SO(16)\times SO(16)$ gauge group back up to $E_8\times E_8$ (which again has $496$ gauge bosons). As evident from Fig.~\ref{Fig7}(a), there are several places on the curve at which increasing the number of gauge-group factors by one unit does not appear to significantly decrease the average order; indeed, this phenomenon of extra gauge bosons emerging from twisted sectors is extremely common across our entire set of heterotic string models. However, we see from Fig.~\ref{Fig7}(a) that {\it on average}\/, more gauge-group factors imply a diminished order, as anticipated in Eq.~(\ref{orderrank3}). For later purposes, it will also be useful for us to evaluate the ``inverse'' map which gives the average number of gauge-group factors as a function of the total order. Since our models give rise to $95$ distinct orders, it is more effective to provide this map in the form of a histogram. The result is shown in Fig.~\ref{Fig8}. Note that because this ``inverse'' function is binned according to orders rather than averaged ranks, models are distributed differently across the data set and thus Fig.~\ref{Fig8} actually contains independent information relative to Fig.~\ref{Fig7}.
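The gauge-boson bookkeeping in the ten-dimensional example above can be verified directly from the group dimensions; a minimal sketch, using $\dim SO(n)=n(n-1)/2$, $\dim E_8=248$, and the fact that the twisted sector supplies the two $128$-dimensional spinor representations of $SO(16)$:

```python
def so_dim(n):
    return n * (n - 1) // 2        # dim SO(n)

assert so_dim(32) == 496           # SO(32): 496 gauge bosons
untwisted = 2 * so_dim(16)         # SO(16) x SO(16): 240 gauge bosons
twisted = 2 * 128                  # two spinorial 128's from the twisted sector
assert untwisted == 240 and twisted == 256
assert untwisted + twisted == 2 * 248   # E8 x E8: back up to 496
```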
However, the shape of the resulting curve in Fig.~\ref{Fig8} is indeed independent of bin {\it size}\/, as necessary for statistical significance. \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot8.eps} } \caption{Histogram illustrating the ``inverse'' of Fig.~\protect\ref{Fig7}. This plot shows the number of gauge-group factors, averaged over all heterotic string models with a given gauge-group order (dimension).} \label{Fig8} \end{figure} Once again, many of the features in Fig.~\ref{Fig8} can be directly traced back to the properties of our original model ``tree''. At the right end of the histogram, for example, we see the contribution from the $SO(44)$ model (for which $f=1$), with order $946$. This is the model with the highest order. After this, with order $786$, are two models with gauge group $SO(40)\times SU(2)^2$, followed by 12 models with gauge group $SO(36)\times SO(8)$ at order $658$. This is why the histogram respectively shows exactly $\langle f\rangle=3$ and $\langle f\rangle =2$ at these orders. This pattern continues as the orders descend, except that we start having multiple gauge groups contributing with the same order. For example, at order $466$, there are twelve models with three distinct rank-22 simply-laced gauge groups: five models with gauge group $SO(24)\times SO(20)$, two models with gauge group $E_8\times SO(20)\times SO(8)$, and five models with gauge group $SO(30)\times SU(4)^2\times U(1)$. Combined, this yields $\langle f\rangle=3$, as shown in Fig.~\ref{Fig8} for order $466$. Finally, at the extreme left edge of Fig.~\ref{Fig8}, we see the contributions from models with $f=22$, which necessarily have orders $\leq 66$. \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot9.eps} } \caption{Histogram illustrating the absolute probabilities of obtaining distinct four-dimensional heterotic string models as a function of the orders of their gauge groups. 
The total probability from all bins (the ``area under the curve'') is 1; models with orders exceeding $300$ are relatively rare.} \label{Fig9} \end{figure} We can also plot the absolute probabilities of obtaining distinct four-dimensional string models as a function of the orders of their gauge groups. This would be the analogue of Fig.~\ref{Fig1}, but with probabilities distributed as functions of orders rather than numbers of gauge-group factors. The result is shown in Fig.~\ref{Fig9}. As we see from Fig.~\ref{Fig9}, models having orders exceeding $200$ are relatively rare. \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot10.eps} } \caption{The number of gauge groups obtained as a function of the number of distinct heterotic string models examined. While the total number of models examined is insufficient to calculate a precise shape for this curve, one possibility is that this curve will eventually saturate at a maximum number of possible gauge groups.} \label{Fig10} \end{figure} As a final topic, we have already noticed that our data set of $\gsim 10^5$ distinct models has yielded only $1301$ different gauge groups. The gauge groups are therefore hugely degenerate across our models, and it becomes increasingly difficult to find new heterotic string models exhibiting gauge groups which have not been seen before. In Fig.~\ref{Fig10}, we show the number of gauge groups that we obtained as a function of the number of distinct heterotic string models we examined. Clearly, although the total number of models examined is insufficient to calculate a precise shape for this curve, one possibility is that this curve will eventually saturate at a maximum number of possible gauge groups which is relatively small. This illustrates the tightness of the modular-invariance constraints in restricting the set of possible allowed gauge groups.
\section{Cosmological constants: Statistical results} \setcounter{footnote}{0} We now turn to calculations of the vacuum energy densities (or cosmological constants) corresponding to these heterotic string vacua. Since the tree-level contributions to these cosmological constants all vanish as a result of conformal invariance, we shall focus exclusively on their one-loop contributions. These one-loop cosmological constants $\lambda$ may be expressed in terms of the one-loop zero-point functions $\Lambda$, defined as \begin{equation} \Lambda ~\equiv~ \int_{\cal F} {d^2 \tau\over ({\rm Im} \,\tau)^2} \, Z(\tau)~. \label{Lambdadef} \end{equation} Here $Z(\tau)$ is the one-loop partition function of the tree-level string spectrum of the model in question (after GSO projections have been implemented); $\tau\equiv \tau_1+i \tau_2$ is the one-loop toroidal complex parameter, with $\tau_i\in\relax{\rm I\kern-.18em R}$; and ${\cal F}\equiv \lbrace \tau: |{\rm Re}\,\tau|\leq {\textstyle{1\over 2}}, {\rm Im}\,\tau>0, |\tau|\geq 1\rbrace$ is the fundamental domain of the modular group. Because the string models under consideration are non-supersymmetric but tachyon-free, $\Lambda$ is guaranteed to be finite and in principle non-zero. The corresponding one-loop vacuum energy density (cosmological constant) $\lambda$ is then defined as $\lambda\equiv -{\textstyle{1\over 2}} {\cal M}^4\Lambda$, where ${\cal M}\equiv M_{\rm string}/(2\pi)$ is the reduced string scale. Although $\Lambda$ and $\lambda$ have opposite signs, with $\Lambda$ being dimensionless, we shall occasionally refer to $\Lambda$ as the cosmological constant in cases where the overall sign of $\Lambda$ is not important. Of course, just as with the ten-dimensional $SO(16)\times SO(16)$ string, the presence of a non-zero $\Lambda$ indicates that these string models are unstable beyond tree level.
Thus, as discussed in the Introduction, these vacua are generically not situated at local minima within our ``landscape'', and can be expected to become unstable as the string coupling is turned on. Nevertheless, we shall investigate the values of these amplitudes for a number of reasons. First, the amplitude defined in Eq.~(\ref{Lambdadef}) represents possibly the simplest one-loop amplitude that can be calculated for such models; as such, it represents a generic quantity whose behavior might hold lessons for more complicated amplitudes. For example, more general $n$-point amplitudes are related to this amplitude through differentiations; a well-known example of this is provided by string threshold corrections~\cite{Kaplunovsky}, which are described by a similar modular integration with a slightly altered (differentiated) integrand. Second, by evaluating and analyzing such string-theoretic expressions, we can gain insight into the extent to which results from effective supergravity calculations might hold in a full string context. Indeed, we shall be able to judge exactly how significant a role the massive string states might play in altering our field-theoretic expectations based on considerations of only the massless states. Third, when we eventually combine this information with our gauge-group statistics in Sect.~6, we shall be able to determine the extent to which gauge groups and the magnitudes of such scattering amplitudes might be correlated in string theory. But finally and most importantly, we shall investigate this amplitude because it relates directly back to fundamental questions of supersymmetry breaking and vacuum stability. Indeed, if we can find models for which $\Lambda$ is nearly zero, we will have found good approximations to stable vacua with broken supersymmetry. We shall also discover other interesting features, such as unexpected one-loop degeneracies in the space of non-supersymmetric string models. 
All of this may represent important information concerning the properties of the landscape of {\it non}\/-supersymmetric strings. In general, the one-loop partition function $Z(\tau)$ which appears in Eq.~(\ref{Lambdadef}) is defined as the trace over the full Fock space of string states: \begin{equation} Z(\tau)~\equiv~ {\rm Tr}\, (-1)^F\, \overline{q}^{H_R}\, q^{H_L}~. \label{Zdef} \end{equation} Here $F$ is the spacetime fermion number, $(H_R,H_L)$ are the right- and left-moving worldsheet Hamiltonians, and $q\equiv \exp(2\pi i\tau)$. Thus spacetime bosonic states contribute positively to $Z(\tau)$, while fermionic states contribute negatively. In general, the trace in Eq.~(\ref{Zdef}) may be evaluated in terms of the characters $\chi_i$ and $\overline{\chi}_j$ of the left- and right-moving conformal field theories on the string worldsheet, \begin{equation} Z(\tau) ~=~ \tau_2^{-1}\, \sum_{i,j}\, \overline{\chi}_j (\overline{\tau}) \, N_{j i} \, \chi_i(\tau)~, \label{Zchi} \end{equation} where the coefficients $N_{ij}$ describe the manner in which the left- and right-moving CFT's are stitched together and thereby reflect the particular GSO projections inherent in the given string model. The $\tau_2^{-1}$ prefactor in Eq.~(\ref{Zchi}) represents the contribution to the trace in Eq.~(\ref{Zdef}) from the continuous spectrum of states corresponding to the uncompactified spacetime dimensions. Since the partition function $Z(\tau)$ represents a trace over the string Fock space as in Eq.~(\ref{Zdef}), it encodes the information about the net degeneracies of string states at each mass level in the theory. 
Specifically, expanding $Z(\tau)$ as a double-power series in $(q,{ \overline{q} })$, we obtain an expression of the form \begin{equation} Z(\tau) ~=~ \tau_2^{-1} \, \sum_{mn}\, b_{m n} \,{ \overline{q} }^{m} \,q^{n}~ \label{bare} \end{equation} where $(m,n)$ represent the possible eigenvalues of the right- and left-moving worldsheet Hamiltonians $(H_R,H_L)$, and where $b_{mn}$ represents the net number of bosonic minus fermionic states (spacetime degrees of freedom) which actually have those eigenvalues and satisfy the GSO constraints. Modular invariance requires that $m-n\in \IZ$ for all $b_{mn}\not=0$; a state is said to be ``on-shell'' or ``level-matched'' if $m=n$, and corresponds to a spacetime state with mass ${\cal M}_n = 2\sqrt{n}M_{\rm string}$. Thus, states for which $m+n\geq 0 $ are massive and/or massless, while states with $m+n <0$ are tachyonic. By contrast, states with $m-n\in\IZ\not=0$ are considered to be ``off-shell'': they contribute to one-loop amplitudes such as $\Lambda$ with a dependence on $|m-n|$, but do not correspond to physical states in spacetime. Substituting Eq.~(\ref{bare}) into Eq.~(\ref{Lambdadef}), we have \begin{eqnarray} \Lambda &=& \sum_{m,n} \, b_{mn} \,\int_{\cal F} {d^2\tau\over \tau_2^2}\,\tau_2^{-1} \,{ \overline{q} }^m q^n\nonumber\\ &=& \sum_{m,n} \, b_{mn} \,\int_{\cal F} {d^2\tau\over \tau_2^3}\, \exp\left\lbrack -2\pi(m+n) \tau_2 \right\rbrack \, \cos\lbrack 2\pi(m-n) \tau_1 \rbrack ~. \label{format} \end{eqnarray} (Note that since ${{\cal F}}$ is symmetric under $\tau_1\to-\tau_1$, only the cosine term survives in passing to the second line.) Thus, we see that the contributions from different regions of $\cal F$ will depend critically on the values of $m$ and $n$. The contribution to $\Lambda$ from the $\tau_2<1$ region is always finite and non-zero for all values of $m$ and $n$. 
However, given that $m-n\in\IZ$, we see that the contribution from the $\tau_2>1$ region is zero if $m\not= n$, non-zero and finite if $m=n>0$, and infinite if $m=n<0$. For heterotic strings, our worldsheet vacuum energies are bounded by $m\geq -1/2$ and $n\geq -1$. Moreover, all of the models we are considering in this paper are tachyon-free, with $b_{mn}=0$ for all $m=n<0$. As a result, in such models, each contribution to $\Lambda$ is finite and non-zero. By contrast, note that a supersymmetric string model would have $b_{mn}=0$ for all $(m,n)$, leading to $\Lambda=0$. Non-zero values of $\Lambda$ are thus a signature for broken spacetime supersymmetry. For simplicity, we can change variables from $(m,n)$ to $(s\equiv m+n,d\equiv |n-m|)$. Indeed, since $\Lambda\in \relax{\rm I\kern-.18em R}$, we can always take $d\geq 0$ and define \begin{equation} a_{sd} ~\equiv~ b_{(s-d)/2 , (s+d)/2} ~+~ b_{(s+d)/2 , (s-d)/2} ~. \label{adef} \end{equation} We can thus rewrite Eq.~(\ref{format}) in the form \begin{equation} \Lambda ~=~ \sum_{s,d} \,a_{sd}\, I_{sd}~~~~~~~ {\rm where}~~~~ I_{sd}~\equiv~ \int_{\cal F} {d^2\tau\over \tau_2^3}\, \exp( -2\pi s \tau_2 ) \, \cos( 2\pi d \tau_1 ) ~. \label{Idef} \end{equation} For heterotic strings, this summation is over all $s\geq -1$ and $|d|=0,1,...,[s]+2$ where $[x]$ signifies the greatest integer $\leq x$. Of course, only those states with $d=0$ are on-shell, but $\Lambda$ receives contributions from off-shell states with non-zero $d$ as well. In general, the values of $s$ which appear in a given string model depend on a number of factors, most notably the conformal dimensions of the worldsheet fields (which in turn depend on various internal compactification radii); however, for the class of models considered in this paper, we have $s\in\IZ/2$. The numerical values of $I_{sd}$ from the lowest-lying string states with $s\leq 1.5$ are listed in Table~\ref{integraltable}. 
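These integrals are straightforward to evaluate numerically. Since the $\tau_1$ integral over each horizontal slice of ${\cal F}$ can be performed in closed form [for integer $d\geq 1$ it gives $-\sin(2\pi d x)/(\pi d)$ on the slices with $\tau_2<1$, where $x\equiv\sqrt{1-\tau_2^2}$, and vanishes on the full strips with $\tau_2\geq 1$], Eq.~(\ref{Idef}) reduces to a one-dimensional integral over $\tau_2$. The following minimal Python sketch (the grid size and $\tau_2$ cutoff are our own choices) reproduces the entries of Table~\ref{integraltable}:

```python
import math

def I_sd(s, d, n=200_000, t2_max=60.0):
    """Evaluate I_{sd} of Eq. (Idef) by the midpoint rule, after doing the
    tau_1 integral on each horizontal slice of F analytically."""
    lo = math.sqrt(3.0) / 2.0            # lowest point of F
    h = (t2_max - lo) / n
    total = 0.0
    for i in range(n):
        t2 = lo + (i + 0.5) * h
        if t2 < 1.0:                     # slice cut by the arc |tau| = 1
            x = math.sqrt(1.0 - t2 * t2) # tau_1 runs over x <= |tau_1| <= 1/2
            if d == 0:
                w = 1.0 - 2.0 * x
            else:
                w = -math.sin(2.0 * math.pi * d * x) / (math.pi * d)
        else:                            # full strip |tau_1| <= 1/2
            w = 1.0 if d == 0 else 0.0   # cos integrates to zero for d != 0
        if w != 0.0:                     # skip (also avoids overflow for s < 0)
            total += math.exp(-2.0 * math.pi * s * t2) / t2**3 * w * h
    return total

print(I_sd(0, 0))    # ~  0.5493
print(I_sd(0, 1))    # ~ -0.0315
print(I_sd(-1, 1))   # ~ -12.19  (the off-shell tachyon contribution)
```

With this grid the three calls agree with the tabulated values $0.549306$, $-0.031524$, and $-12.192319$ up to the $\tau_2$ truncation error, and the ratio $-I_{-1,1}/I_{0,0}\approx 22.2$ is precisely the critical value of $a_{00}$ discussed below. Note that for $s=-1$ only the region $\tau_2<1$ contributes, since the $\tau_1$ integral vanishes on the full strips for $d\neq 0$; this is why the off-shell tachyon yields a finite result.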
\begin{table}[ht] \centerline{ \begin{tabular}{||r|r|r||r|r|r||} \hline \hline $s$~ & $d$ & $I_{sd}$~~~~~ & $s$~ & $d$ & $I_{sd}$~~~~~ \\ \hline \hline $-1.0$ &$ 1 $&$ {\tt -12.192319 } $ & $ 1.0 $&$ 0 $&$ {\tt 0.000330 }$\\ $ -0.5 $&$ 1 $&$ {\tt -0.617138 } $ & $ 1.0 $&$ 1 $&$ {\tt -0.000085 } $\\ $ 0.0 $&$ 0 $&$ {\tt 0.549306 }$ & $ 1.0 $&$ 2 $&$ {\tt 0.000035 }$\\ $ 0.0 $&$ 1 $&$ {\tt -0.031524 } $ & $ 1.0 $&$ 3 $&$ {\tt -0.000018 }$\\ $ 0.0 $&$ 2 $&$ {\tt 0.009896 }$ & $ 1.5 $&$ 0 $&$ {\tt 0.000013} $\\ $ 0.5 $&$ 0 $&$ {\tt 0.009997 }$ & $ 1.5 $&$ 1 $&$ {\tt -0.000004} $\\ $ 0.5 $&$ 1 $&$ {\tt -0.001626 }$ & $ 1.5 $&$ 2 $&$ {\tt 0.000002} $\\ $ 0.5 $&$ 2 $&$ {\tt 0.000587 }$ & $ 1.5 $&$ 3 $&$ {\tt -0.000001} $\\ \hline \hline \end{tabular} } \caption{Contributions to the one-loop string vacuum amplitude $I_{sd}$ defined in Eq.~(\protect\ref{Idef}) from the lowest-lying string states with $s\leq 1.5$.} \label{integraltable} \end{table} We immediately observe several important features. First, although the coefficients $a_{sd}$ tend to experience an exponential (Hagedorn) growth $a_{sd}\sim e^{\sqrt{s}}$, we see that the values of $I_{sd}$ generally decrease exponentially with $s$, {\it i.e.}\/, $I_{sd}\sim e^{-s}$. This is then sufficient to overcome the Hagedorn growth in the numbers of states, leading to a convergent value of $\Lambda$. However, because of the balancing between these two effects, it is generally necessary to determine the complete particle spectrum of each vacuum state up to a relatively high ({\it e.g.}\/, fifth or sixth) mass level, with $s\approx 5$ or $6$, in order to accurately calculate the full cosmological constant $\Lambda$ for each string model. This has been done for all results quoted below. Second, we observe that the contributions $I_{s,0}$ from all on-shell states are positive, {\it i.e.}\/, $I_{s,0}>0$ for all $s$. 
Thus, as anticipated on field-theoretic grounds, on-shell bosonic states contribute positively to $\Lambda$, while on-shell fermionic states contribute negatively. (Note that this is reversed for the actual cosmological constant $\lambda$, which differs from $\Lambda$ by an overall sign.) However, we more generally observe that $I_{sd}>0$ $(<0)$ for even (odd) $d$. Thus, the first group of off-shell states with $d=1$ tend to contribute oppositely to the corresponding on-shell states with $d=0$, with this behavior partially compensated by the second group of off-shell states with $d=2$, and so forth. This, along with the fact~\cite{missusy} that the coefficients $a_{sd}$ necessarily exhibit a regular oscillation in their overall signs as a function of $s$, also aids in the convergence properties of $\Lambda$. Finally, we observe that by far the single largest contributions to the vacuum amplitude $\Lambda$ actually come from states with $(s,d)=(-1,1)$, or $(m,n)=(0,-1)$. These states are off-shell tachyons. At first glance one might suspect that it would be possible to project such states out of the spectrum (just as one does with the on-shell tachyons), but it turns out that this is impossible: {\it all heterotic string models necessarily have off-shell tachyons with $(m,n)=(0,-1)$}. These are ``proto-graviton'' states emerging in the Neveu-Schwarz sector: \begin{equation} \hbox{proto-graviton:}~~~~~~~~~~~~ \tilde b_{-1/2}^\mu |0\rangle_R ~\otimes~ ~|0\rangle_L~ \label{protograviton} \end{equation} where $\tilde b_{-1/2}^\mu$ represents the excitation of the right-moving worldsheet Neveu-Schwarz fermion $\tilde \psi^\mu$. Since the Neveu-Schwarz heterotic string ground state has vacuum energies $(H_R,H_L)=(-1/2,-1)$, we see that the ``proto-graviton'' state in Eq.~(\ref{protograviton}) has worldsheet energies $(H_R,H_L)=(m,n)=(0,-1)$; indeed, this is nothing but the graviton state without its left-moving oscillator excitation. 
However, note that regardless of the particular GSO projections, the graviton state must always appear in the string spectrum. Since GSO projections are insensitive to the oscillator excitations, this implies that the proto-graviton must also necessarily appear in the string spectrum. By itself, of course, this argument does not prove that we must necessarily have $a_{-1,1}\not=0$. However, it is easy to see that the only state which could possibly cancel the contribution from the (bosonic) proto-graviton in the Neveu-Schwarz sector is a (fermionic) proto-gravitino in the Ramond sector: \begin{equation} \hbox{proto-gravitino:}~~~~~~~~~~~~ \lbrace \tilde b_{0}\rbrace^\alpha |0\rangle_R ~\otimes~ ~ |0\rangle_L~. \label{protogravitino} \end{equation} Here $\lbrace \tilde b_{0}\rbrace^\alpha$ schematically indicates the Ramond zero-mode combinations which collectively give rise to the spacetime Lorentz spinor index $\alpha$. However, if the gravitino itself is projected out of the spectrum (producing the non-supersymmetric string model), then that same GSO projection must simultaneously project out the proto-gravitino state. In other words, while all heterotic strings contain a proto-graviton state, only those with spacetime supersymmetry will contain a compensating proto-gravitino state. Thus, all non-supersymmetric heterotic string models must have an uncancelled $a_{-1,1}> 0$. This fact has important implications for the overall sign of $\Lambda$. {\it A priori}\/, we might have expected that whether $\Lambda$ is positive or negative would be decided primarily by the net numbers of massless, on-shell bosonic and fermionic states in the string model --- {\it i.e.}\/, by the sign of $a_{00}$. However, we now see that because $a_{-1,1}>0$ and $I_{-1,1}<0$, there is already a built-in bias towards negative values of $\Lambda$ for heterotic strings. 
Indeed, each off-shell tachyon can be viewed as providing an initial negative offset for $\Lambda$ of magnitude $I_{-1,1}\approx -12.19$, so that there is an approximate critical value for $a_{00}$, given by \begin{equation} a_{00} \biggl |_{\rm critical} ~\approx~-{I_{-1,1}\over I_{0,0}}~\approx~ 22.2~, \end{equation} which is needed just to balance each off-shell tachyon and obtain a vanishing $\Lambda$. Of course, even this estimate is low, as it ignores the contributions of off-shell {\it massless}\/ states which, like the off-shell tachyon, again provide negative contributions for bosons and positive contributions for fermions. The lesson from this discussion, then, is clear: \begin{itemize} \item {\bf In string theory, contributions from the infinite towers of string states, both on-shell and off-shell, are critical for determining not only the magnitude but also the overall sign of the one-loop cosmological constant.} Examination of the massless string spectrum ({\it e.g.}\/, through a low-energy effective field-theory analysis) is insufficient. \end{itemize} We now turn to the values of $\Lambda$ that are obtained across our set of heterotic string models. For each model, we evaluated the on- and off-shell degeneracies $a_{sd}$ to at least the fifth level ($s\approx 5$), and then tabulated the corresponding value of $\Lambda$ as in Eq.~(\ref{Idef}). A histogram illustrating the resulting distribution of cosmological constants is shown in Fig.~\ref{Fig11}. \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot11.eps} } \caption{Histogram showing calculated values of the one-loop amplitude $\Lambda$ defined in Eq.~(\protect\ref{Lambdadef}) across our sample of $ N \gsim 10^5$ tachyon-free perturbative heterotic string vacua with string-scale supersymmetry breaking. Both positive and negative values of $\Lambda$ are obtained, with over $73\%$ of models having positive values. 
The smallest $|\Lambda|$-value found is $\Lambda\approx 0.0187$, which appears for eight distinct models. (This figure adapted from Ref.~\cite{thesis}.)} \label{Fig11} \end{figure} As can be seen, both positive and negative values of $\Lambda$ are obtained, and indeed the bulk of these models have values of $\Lambda$ centered near zero. In fact, despite the contributions of the off-shell tachyons, it is evident from Fig.~\ref{Fig11} that there is a preference for positive values of $\Lambda$, with just over $73\%$ of our models having $\Lambda>0$. However, we obtained no model with $\Lambda=0$; indeed, the closest value we obtained for any model is $\Lambda \approx 0.0187$, which appeared for eight distinct models. Given that we examined more than $10^5$ distinct heterotic string models, it is natural to wonder why no smaller values of $\Lambda$ were found. This question becomes all the more pressing in light of recent expectations~\cite{BP} that the set of cosmological constant values should be approximately randomly distributed, with relative spacings and a smallest overall value that diminish as additional models are considered. It is easy to see why this does not happen, however: {\it just as for gauge groups, it turns out that there is a tremendous degeneracy in the space of string models, with many distinct heterotic string models sharing exactly the same value of $\Lambda$.} Again, we stress that these are distinct models with distinct gauge groups and particle content. Nevertheless, such models may give rise to exactly the same one-loop cosmological constant! The primary means by which two models can have the same cosmological constant is by having the same set of state degeneracies $\lbrace a_{sd}\rbrace$. This can happen in a number of ways. 
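As a rough numerical illustration of the built-in bias discussed above (not part of the original analysis), one can reproduce the critical balance between the off-shell tachyon and the on-shell massless states. The value $I_{-1,1}\approx -12.19$ is quoted in the text; the value $I_{0,0}\approx 0.549$ is merely inferred here from the quoted critical ratio $a_{00}\approx 22.2$ and should be regarded as an assumption of this sketch (the actual entries appear in Table~\ref{integraltable}).

```python
# Hypothetical sketch of the tachyon/massless balance described above.
# I_{-1,1} ~ -12.19 is quoted in the text; I_{0,0} ~ 0.549 is INFERRED
# from the quoted critical ratio a_00 ~ 22.2, not taken from the tables.
I_tachyon = -12.19    # I_{-1,1}: one off-shell proto-graviton state
I_massless = 0.549    # I_{0,0}: one net on-shell massless boson (assumed)

# Net number of massless bosons needed to offset one off-shell tachyon:
a00_critical = -I_tachyon / I_massless
assert abs(a00_critical - 22.2) < 0.05

# Truncated two-term version of Lambda = sum_{s,d} a_{sd} I_{sd}:
def lambda_truncated(a00, a_tachyon=1):
    return a_tachyon * I_tachyon + a00 * I_massless

# Lambda changes sign precisely as a_00 crosses its critical value:
assert lambda_truncated(a00=22) < 0 < lambda_truncated(a00=23)
```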
Recall that these degeneracies represent only the net numbers of bosonic minus fermionic degrees of freedom; thus it is possible for two models to have different numbers of bosons and fermions separately, but to have a common difference between these numbers. Secondly, it is possible for two models to have partition functions $Z_1(\tau)$ and $Z_2(\tau)$ which differ by a purely imaginary function of $\tau$; in this case, such models will once again share a common set of state degeneracies $\lbrace a_{sd}\rbrace$ although their values of $b_{mn}$ will differ. Finally, it is possible for $Z_1(\tau)$ and $Z_2(\tau)$ to differ when expressed in terms of conformal field-theory characters (or in terms of Jacobi theta functions $\vartheta_i$), but with this difference proportional to the vanishing Jacobi factor \begin{equation} J ~\equiv~ {1\over \eta^4} \left(\vartheta_3^4 - \vartheta_4^4 - \vartheta_2^4\right) ~=~0~ \end{equation} where $\eta$ and $\vartheta_i$ respectively represent the Dedekind eta-function and Jacobi theta-functions, defined as \begin{eqnarray} \eta(q) ~\equiv& q^{1/24}~ \displaystyle\prod_{n=1}^\infty ~(1-q^n)&=~ \sum_{n=-\infty}^\infty ~(-1)^n\, q^{3(n-1/6)^2/2}\nonumber\\ \vartheta_2(q)~\equiv& 2 q^{1/8} \displaystyle\prod_{n=1}^\infty (1+q^n)^2 (1-q^n)&=~ 2\sum_{n=0}^\infty q^{(n+1/2)^2/2} \nonumber\\ \vartheta_3(q)~\equiv& \displaystyle\prod_{n=1}^\infty (1+q^{n-1/2})^2 (1-q^n) &=~ 1+ 2\sum_{n=1}^\infty q^{n^2/2} \nonumber\\ \vartheta_4(q) ~\equiv& \displaystyle\prod_{n=1}^\infty (1-q^{n-1/2})^2 (1-q^n) &=~ 1+ 2\sum_{n=1}^\infty (-1)^n q^{n^2/2} ~. \label{thetadefs} \end{eqnarray} Such partition functions $Z_1(\tau)$ and $Z_2(\tau)$ differing by $J$ will then have identical $(q,{ \overline{q} })$ power-series expansions, once again leading to identical degeneracies $\lbrace a_{sd}\rbrace$ and identical values of $\Lambda$.
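The vanishing of $J$ rests on the classical Jacobi identity $\vartheta_3^4-\vartheta_4^4-\vartheta_2^4=0$, which can be verified numerically from the product representations in Eq.~(\ref{thetadefs}); the sketch below truncates the infinite products at a finite order and drops the $1/\eta^4$ prefactor, which is irrelevant for the vanishing.

```python
# Numerical check of the Jacobi identity theta3^4 - theta4^4 - theta2^4 = 0,
# using the product representations quoted above, truncated at n <= N.
def thetas(q, N=60):
    t2 = 2 * q**0.125          # prefactor 2 q^{1/8} of theta_2
    t3 = 1.0
    t4 = 1.0
    for n in range(1, N + 1):
        t2 *= (1 + q**n)**2 * (1 - q**n)
        t3 *= (1 + q**(n - 0.5))**2 * (1 - q**n)
        t4 *= (1 - q**(n - 0.5))**2 * (1 - q**n)
    return t2, t3, t4

for q in (0.05, 0.1, 0.3):
    t2, t3, t4 = thetas(q)
    assert abs(t3**4 - t4**4 - t2**4) < 1e-10
```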
In Fig.~\ref{Fig12}, we have plotted the actual numbers of distinct degeneracy sets $\lbrace a_{sd}\rbrace$ found (and therefore the number of distinct cosmological constants $\Lambda$ obtained) as a function of the number of distinct models examined. It is clear from these results that this cosmological-constant redundancy is quite severe, with only $4303$ different values of $\Lambda$ emerging from over $1.2\times 10^5$ models! This represents a redundancy factor of approximately $28$, and it is clear from Fig.~\ref{Fig12} that this factor tends to grow larger and larger as the number of examined models increases. Thus, we see that \begin{itemize} \item {\bf More string models does not necessarily imply more values of $\Lambda$.} Indeed, many different string models with entirely different spacetime phenomenologies (different gauge groups, matter representations, hidden sectors, and so forth) exhibit identical values of $\Lambda$. \end{itemize} \begin{figure}[ht] \centerline{ \epsfxsize 3.1 truein \epsfbox {plot12a.eps} \epsfxsize 3.1 truein \epsfbox {plot12b.eps} } \caption{ Unexpected degeneracies in the space of non-supersymmetric string vacua. As evident from these figures, there is a tremendous degeneracy according to which many distinct non-supersymmetric heterotic string models with different gauge groups and particle contents nevertheless exhibit exactly the {\it same}\/ numbers of bosonic and fermionic states and therefore have identical one-loop cosmological constants. (a) (left) Expected versus actual numbers of cosmological constants obtained for the first fifteen thousand models. (b) (right) Continuation of this plot as more models are examined. While the number of models examined is insufficient to calculate a precise shape for this curve, one possibility is that this curve will eventually saturate at a maximum number of allowed cosmological constants, as discussed in the text. 
(Right figure adapted from Ref.~\cite{thesis}.)} \label{Fig12} \end{figure} In fact, the shape of the curve in Fig.~\ref{Fig12}(b) might lead us to conclude that there may be a finite and relatively small number of self-consistent matrices $\lbrace a_{sd}\rbrace$ which our models may be capable of exhibiting. If this were the case, then we would expect the number of such matrices $\lbrace a_{sd}\rbrace$ already seen, $\Sigma$, to have a dependence on the total number of models examined, $t$, of the form \begin{equation} \Sigma(t) ~=~ N_0\,\left( 1~-~ e^{-t/t_0}\right)~, \label{namenlosetwo} \end{equation} where $N_0$ is this total number of matrices $\lbrace a_{sd}\rbrace$ and $t_0$, the ``time constant'', is a parameter characterizing the scale of the redundancy. Fitting the curve in Fig.~\ref{Fig12}(b) to Eq.~(\ref{namenlosetwo}), we find that values of $N_0\sim 5500$ and $t_0 \sim 70\,000$ seem to be indicated. (One cannot be more precise, since we have clearly not examined a sufficient number of models to observe saturation.) Of course, this sort of analysis assumes that our models uniformly span the space of allowed $\lbrace a_{sd}\rbrace $ matrices (and also that our model set uniformly spans the space of models). As if this redundancy were not enough, it turns out that there is a further redundancy beyond that illustrated in Fig.~\ref{Fig12}. In Fig.~\ref{Fig12}, note that we are actually plotting the numbers of distinct sets of degeneracy matrices $\lbrace a_{sd}\rbrace$, since identical matrices necessarily imply identical values of $\Lambda$. 
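As a quick consistency check on these fitted values (an illustration only, not a re-fit of the underlying data), one can evaluate the ansatz of Eq.~(\ref{namenlosetwo}) at the $\sim 1.2\times 10^5$ models actually examined; it should then roughly reproduce the $4303$ distinct degeneracy sets observed.

```python
import math

# Saturation ansatz Sigma(t) = N0 * (1 - exp(-t/t0)) with the fitted
# values quoted in the text: N0 ~ 5500 and t0 ~ 70000.
N0, t0 = 5500.0, 70000.0

def sigma(t):
    return N0 * (1.0 - math.exp(-t / t0))

# At t ~ 1.2e5 examined models, the ansatz gives roughly 4.5e3 distinct
# degeneracy sets, within about 5% of the 4303 actually observed:
assert abs(sigma(1.2e5) - 4303) / 4303 < 0.1
```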
However, it turns out that there are situations in which even {\it different}\/ values of $\lbrace a_{sd}\rbrace$ can lead to identical values of $\Lambda$!~ Clearly, such an occurrence would be highly non-trivial, requiring that two sets of integers $\lbrace a^{(1)}_{sd}\rbrace$ and $\lbrace a^{(2)}_{sd}\rbrace$ differ by non-zero integer coefficients $c_{sd}\equiv a^{(1)}_{sd}-a^{(2)}_{sd}$ for which $\sum_{sd} c_{sd} I_{sd} =0$. At first glance, given the values of $I_{sd}$ tabulated in Table~\ref{integraltable}, it may seem that no such integer coefficients $c_{sd}$ could possibly exist. Remarkably, however, it was shown in Ref.~\cite{PRL} that there exists a function \begin{eqnarray} Q &\equiv & {1\over 128 \,\tau_2}\, {1\over \overline{\eta}^{12}\, \eta^{24}} \, \sum_{\scriptstyle i,j,k=2 \atop \scriptstyle i\not= j\not= k}^4 \, |{\vartheta_i}|^4 \, \Biggl\lbrace \, {\vartheta_i}^4 {\vartheta_j}^4 {\vartheta_k}^4 \,\biggl\lbrack \,2\, |{\vartheta_j} {\vartheta_k}|^8 - {\vartheta_j}^8 \overline{\vartheta_k}^{8} - \overline{\vartheta_j}^{8} {\vartheta_k}^8 \biggr\rbrack \nonumber\\ &&~~~~~~~~~~~~~~ +~ {\vartheta_i}^{12} \,\biggl[ \, 4 \,{\vartheta_i}^8 \overline{\vartheta_j}^{4} \overline{\vartheta_k}^{4} + (-1)^i~13 \,|{\vartheta_j} {\vartheta_k}|^8 \biggr] \, \Biggr\rbrace~ \label{Qdef} \end{eqnarray} which, although non-zero, has the property that \begin{equation} \int_{\cal F} {d^2 \tau\over ({\rm Im} \,\tau)^2} \,Q~=~0~ \label{ident} \end{equation} as the result of an Atkin-Lehner symmetry~\cite{Moore}. Power-expanding the expression $Q$ in Eq.~(\ref{Qdef}) using Eq.~(\ref{thetadefs}) then yields a set of integer coefficients $c_{sd}$ for which $\sum_{sd} c_{sd}I_{sd}=0$ as a consequence of Eq.~(\ref{ident}). 
Thus, even though neither of the partition functions $Z_1(\tau)$ and $Z_2(\tau)$ of two randomly chosen models exhibits its own Atkin-Lehner symmetry (consistent with an Atkin-Lehner ``no-go'' theorem~\cite{Balog}), it is possible that their {\it difference}\/ might nevertheless exhibit such a symmetry. If so, then such models are ``twins'', once again sharing the same value of $\Lambda$. As originally reported in Ref.~\cite{PRL}, this additional type of twinning redundancy turns out to be pervasive throughout the space of heterotic string models, leading to a further $\sim 15\%$ reduction in the number of distinct values of $\Lambda$. Indeed, we find not only twins, but also ``triplets'' and ``quadruplets'' --- groups of models whose degeneracies $a^{(i)}_{sd}$ differ sequentially through repeated additions of such coefficients $c_{sd}$. In fact, our $4303$ different sets $\lbrace a_{sd}\rbrace$ which emerge from our $\sim 10^5$ models can be categorized as $3111$ ``singlets'', $500$ groupings of ``twins'', $60$ groupings of ``triplets'', and $3$ groupings of ``quadruplets''. [Note that indeed $3111+2(500)+3(60)+4(3)=4303.$] Thus, the number of distinct cosmological constants emerging from our $\sim 10^5$ models is not actually $4303$, but only $3111+500+60+3= 3674$, which represents an additional $14.6\%$ reduction. At first glance, since there are relatively few groupings of twins, triplets, and quadruplets, it might seem that this additional reduction is not overly severe. However, this fails to take into account the fact that our previous redundancy may be (and in fact is) statistically clustered around these sets. Indeed, across our entire set of $10^5$ distinct string models, we find that \begin{itemize} \item $30.7\%$ are ``singlets''; $48.2\%$ are members of a ``twin'' grouping; $21.0\%$ are members of a ``triplet'' grouping; and $0.1\%$ are members of a ``quadruplet'' grouping.
\end{itemize} Thus, we see that this twinning phenomenon is responsible for a massive degeneracy across the space of non-supersymmetric heterotic string vacua.\footnote{ Indeed, reviving the ``raindrop'' analogy introduced in the Introduction, we see that the rain falls mainly on the plane.} Note that this degeneracy may be of considerable relevance for various solutions of the cosmological-constant problem. For example, one proposal in Ref.~\cite{kane} imagines a large set of degenerate vacua which combine to form a ``band'' of states whose lowest state has a significantly suppressed vacuum energy. However, the primary ingredient in this formulation is the existence of a large set of degenerate, non-supersymmetric vacua. This is not unlike what we are seeing in this framework. Of course, there still remains the outstanding question concerning how transitions between these vacua can arise, as would be needed in order to generate the required ``band'' structure. In all cases, modular invariance is the primary factor that underlies these degeneracies. Despite the vast set of possible heterotic string spectra, there are only so many modular-invariant functions $Z(\tau)$ which can serve as the partition functions for self-consistent string models. It is this modular-invariance constraint which ultimately limits the possibilities for the degeneracy coefficients $\lbrace a_{sd}\rbrace$, and likewise it is modular invariance (along with Atkin-Lehner symmetries) which leads to identities such as Eq.~(\ref{ident}) which only further enhance this tendency towards degeneracy. Needless to say, our analysis in this section has been limited to one-loop cosmological constants. It is therefore natural to ask whether such degeneracies might be expected to persist at higher orders. 
Of course, although modular invariance is a one-loop phenomenon, there exist multi-loop generalizations of modular invariance; likewise it has been speculated that there also exist multi-loop extensions of Atkin-Lehner symmetry~\cite{Moore}. Indeed, modular invariance is nothing but the reflection of the underlying finiteness of the string amplitudes, and we expect this finiteness to persist to any order in the string perturbation expansion. It is therefore possible that degeneracies such as these will persist to higher orders as well. In any case, this analysis dramatically illustrates that many of our na\"\i ve expectations concerning the values of string amplitudes such as the cosmological constant and the distributions of these values may turn out to be grossly inaccurate when confronted with the results of explicit string calculations. The fact that string theory not only provides infinite towers of states but then tightly constrains the properties of these towers through modular symmetries --- even when spacetime supersymmetry is broken --- clearly transcends our na\"\i ve field-theoretic expectations. It is therefore natural to expect that such properties will continue to play a significant role in future statistical studies of the heterotic landscape, even if/when stable non-supersymmetric heterotic string models are eventually constructed. \section{Gauge groups and cosmological constants:\\ Statistical correlations} \setcounter{footnote}{0} We now turn to {\it correlations}\/ between our gauge groups and cosmological constants. To what extent does the gauge group of a heterotic string model influence the magnitude of its cosmological constant, and vice versa? Note that in field theory, these quantities are completely unrelated --- the gauge group is intrinsically non-gravitational, whereas the cosmological constant is of primarily gravitational relevance.
However, in string theory, we can expect correlations to occur. To begin the discussion, let us again construct a ``tree'' according to which our string models are grouped according to their gauge groups. While in Sect.~4 we grouped our models on ``branches'' according to their numbers of gauge-group factors, in this section, for pedagogical purposes, we shall instead group our models into ``clusters'' according to their orders. \begin{itemize} \item {\sl order=946:}\/ As we know, there is only one distinct string model with this order, with gauge group $SO(44)$. The corresponding cosmological constant is $\Lambda\approx 7800.08\equiv \Lambda_1$. This is the largest value of $\Lambda$ for any string model, and it appears for the $SO(44)$ string model only. \item {\sl order=786:}\/ This cluster contains two distinct models, both of which have gauge group $SO(40)\times SU(2)^2$. However, while one of these models has $\Lambda\approx 3340.08$, the other has a cosmological constant exactly equal to $\Lambda_1/2$! \item {\sl order=658:}\/ This cluster contains 12 distinct models, all with gauge group $SO(36)\times SO(8)$. Remarkably, once again, all have cosmological constants $\Lambda=\Lambda_1/2$, even though they differ in their particle representations and content. This is a reflection of the huge cosmological-constant redundancies discussed in Sect.~5. \item {\sl order=642:}\/ This cluster contains only one model, with gauge group $SO(36)\times SU(2)^4$ and cosmological constant $\Lambda\approx 3620.06\equiv \Lambda_2$. \item {\sl order=578:}\/ Here we have one model with gauge group $SO(34)\times SU(4)\times U(1)^2$. Remarkably, this model has cosmological constant given by $\Lambda=\Lambda_1/4$! \end{itemize} These kinds of redundancies and scaling patterns persist as we continue to implement twists that project out gauge bosons from our string models. 
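The cluster ``orders'' used above are simply the dimensions of the corresponding gauge groups, summed over their irreducible factors; they can be verified directly from $\dim U(1)=1$, $\dim SU(n)=n^2-1$, and $\dim SO(n)=n(n-1)/2$.

```python
# Verify the cluster "orders" (gauge-group dimensions) quoted above.
def dim_SO(n): return n * (n - 1) // 2
def dim_SU(n): return n * n - 1
dim_U1 = 1

assert dim_SO(44) == 946                           # SO(44)
assert dim_SO(40) + 2 * dim_SU(2) == 786           # SO(40) x SU(2)^2
assert dim_SO(36) + dim_SO(8) == 658               # SO(36) x SO(8)
assert dim_SO(36) + 4 * dim_SU(2) == 642           # SO(36) x SU(2)^4
assert dim_SO(34) + dim_SU(4) + 2 * dim_U1 == 578  # SO(34) x SU(4) x U(1)^2
```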
For example, we find \begin{itemize} \item {\sl order=530:}\/ Here we have 10 distinct string models: nine have gauge group $SO(32)\times SO(8)\times SU(2)^2$, while one has gauge group $E_8\times SO(24)\times SU(2)^2$. Two of the models in the first group have $\Lambda=\Lambda_2$, while the remaining eight models in this cluster exhibit five new values of $\Lambda$ between them. \item {\sl order=514:}\/ Here we have two distinct string models, both with gauge group $SO(32)\times SU(2)^6$: one has $\Lambda=\Lambda_1/2$, while the other has $\Lambda=\Lambda_1/4$. \item {\sl order=466:}\/ Here we have 12 distinct string models: five with gauge group $SO(24)\times SO(20)$, two with gauge group $E_8\times SO(20)\times SO(8)$, and five with gauge group $SO(30)\times SU(4)^2\times U(1)$. All of the models in the last group have $\Lambda=\Lambda_1/4$, while two of the five in the first group have $\Lambda=\Lambda_1/2$. This is also the first cluster in which models with $\Lambda<0$ appear. \end{itemize} Indeed, as we proceed towards models with smaller orders, we generate not only new values of the cosmological constant, but also cosmological-constant values which are merely rescaled from previous values by factors of 2, 4, 8, and so forth. For example, the original maximum value $\Lambda_1$ which appears only for the $SO(44)$ string model has many ``descendants'': there are $21$ distinct models with $\Lambda=\Lambda_1/2$; $61$ distinct models with $\Lambda=\Lambda_1/4$; $106$ distinct models with $\Lambda=\Lambda_1/8$; and so forth. Ultimately, these rescalings are related to the fact that our models are constructed through successive $\IZ_2$ twists. Although there are only limited numbers of modular-invariant partition functions $Z(\tau)$, these functions may be rescaled without breaking the modular invariance. However, we emphasize that {\it this rescaling of the partition function does not represent a trivial overall rescaling of the associated particle spectrum}.
In each model, for example, there can only be one distinct gravity multiplet; likewise, the string vacuum states are necessarily unique. Thus, it is somewhat remarkable that two models with completely different particle spectra can nevertheless give rise to rescaled versions of the same partition function and cosmological constant. Having described the characteristics of our cosmological-constant tree, let us now turn to its overall statistics and correlations. For each branch of the tree, we can investigate the values of the corresponding cosmological constants, averaged over all models on that branch. If we organize our branches according to the numbers of gauge-group factors as in Sect.~4, we then find the results shown in Figs.~\ref{Fig13}(a) and \ref{Fig13}(b). Alternatively, we can also cluster our models according to the orders of their gauge groups, as described above, and calculate average cosmological constants as a function of these orders. We then find the results shown in Fig.~\ref{Fig13}(c). \begin{figure}[ht] \centerline{ \epsfxsize 3.1 truein \epsfbox {plot13a.eps} \epsfxsize 3.1 truein \epsfbox {plot13b.eps} } \centerline{ \epsfxsize 3.4 truein \epsfbox {plot13c.eps} } \caption{Correlations between cosmological constants and gauge groups. (a) (upper left) Average values of the cosmological constants obtained as a function of the number $f$ of gauge-group factors in the corresponding string models. Note that all cosmological constant averages are positive even though approximately 1/4 of individual string models have $\Lambda<0$, as shown in Fig.~\protect\ref{Fig11}. For plotting purposes, we show data only for values $f\geq 3$. (b) (upper right) Same data plotted versus the average rank per gauge group factor, defined as $22/f$. (c) (lower figure) Histogram showing the average values of $\Lambda$ as a function of the order (dimension) of the corresponding gauge group.
Note that in every populated bin, we find $\langle \Lambda\rangle >0$ even though most bins contain at least some string models with $\Lambda<0$. Thus, in this figure, bins with $\langle \Lambda\rangle=0$ are to be interpreted as empty rather than as bins for which $\Lambda>0$ models exactly balance against $\Lambda<0$ models.} \label{Fig13} \end{figure} Clearly, we see from Fig.~\ref{Fig13} that there is a strong and dramatic correlation between gauge groups and cosmological constants: \begin{itemize} \item {\bf Models with highly shattered gauge groups and many irreducible gauge-group factors tend to have smaller cosmological constants, while those with larger non-abelian gauge groups tend to have larger cosmological constants.} Indeed, we see from Fig.~\ref{Fig13}(b) that the average cosmological constant grows approximately linearly with the average rank of the gauge-group factors in the corresponding model. \end{itemize} It is easy to understand this result. As we shatter our gauge groups into smaller irreducible factors, the average size of each {\it representation}\/ of the gauge group also becomes smaller. (For example, we have already seen this behavior in Fig.~\ref{Fig8} for gauge bosons in the adjoint representation.) Therefore, we expect the individual tallies of bosonic and fermionic states at each string mass level to become smaller as the total gauge group is increasingly shattered. If these individual numbers of bosons and fermions become smaller, then we statistically expect the magnitudes of their differences $\lbrace a_{sd}\rbrace$ in Eq.~(\ref{Idef}) to become smaller as well. We therefore expect the cosmological constant to be correlated with the degree to which the gauge group of the heterotic string model is shattered, as found in Fig.~\ref{Fig13}. 
The fact that the average cosmological constant grows approximately linearly with the average rank of the gauge-group factors [as shown in Fig.~\ref{Fig13}(b)] then suggests that on average, this cosmological constant scaling is dominated by vector representations of the gauge groups (whose dimensions grow linearly with the rank of the gauge group). Such representations indeed tend to dominate the string spectrum at the massless level, since larger representations are often excluded on unitarity grounds for gauge groups with affine levels $k=1$. \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot14.eps} } \caption{The effects of non-abelianity: Average values of $\Lambda$ for 12$\,$000 heterotic string models with gauge groups of the form $U(1)^n\times SU(2)^{22-n}$, plotted as a function of $n$. Since varying $n$ does not change the average rank of each gauge-group factor, this correlation is statistically independent of the correlations shown in Fig.~\protect\ref{Fig13}.} \label{Fig14} \end{figure} Even {\it within}\/ a fixed degree of shattering ({\it i.e.}\/, even within a fixed average rank per gauge group), one may ask whether the degree to which the gauge group is abelian or non-abelian may play a role. In order to pose this question in a way that is independent of the correlations shown in Fig.~\ref{Fig13}, we can restrict our attention to those heterotic string models in our sample set for which $f=22$ [{\it i.e.}\/, models with gauge groups of the form $U(1)^n\times SU(2)^{22-n}$]. It turns out that there are approximately $12\,000$ models in this class. Varying $n$ does not change the average rank of each gauge-group factor, and thus any correlation between the cosmological constant and $n$ is statistically independent of the correlations shown in Fig.~\ref{Fig13}. The results are shown in Fig.~\ref{Fig14}, averaged over the $12\,000$ relevant string models.
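The claimed statistical independence is easy to see explicitly: for gauge groups of the form $U(1)^n\times SU(2)^{22-n}$, every factor has rank $1$, so the total rank and the average rank per factor are fixed at $22$ and $1$ respectively for all $n$, while the order $66-2n$ still varies.

```python
# For U(1)^n x SU(2)^(22-n): the rank is independent of n, the order is not.
for n in range(23):
    rank = n * 1 + (22 - n) * 1        # rank U(1) = rank SU(2) = 1
    order = n * 1 + (22 - n) * 3       # dim U(1) = 1, dim SU(2) = 3
    assert rank == 22
    assert order == 66 - 2 * n
```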
Once again, we see that ``bigger'' (in this case, non-abelian) groups lead to larger average values of the cosmological constant. The roots of this behavior are the same as those sketched above. We can also investigate how $\Lambda$ statistically depends on cross-correlations of gauge groups. Recall that in Table~\ref{table1}, we indicated the percentages of four-dimensional heterotic string models which exhibit various combinations of gauge groups. In Table~\ref{tablelam}, we indicate the average values of $\Lambda$ for those corresponding sets of models. We see, once again, that the correlation between average values of $\Lambda$ and the ``size'' of our gauge groups is striking. For example, looking along the diagonal in Table~\ref{tablelam}, we observe that the average values of $\Lambda$ for models containing gauge groups of the form $G\times G$ always monotonically increase as $G$ changes from $SU(3)$ to $SU(4)$ to $SU(5)$ to $SU(n>5)$; likewise, the same behavior is observed as $G$ varies from $SO(8)$ to $SO(10)$ to $SO(2n>10)$. Indeed, as a statistical collection, we see from Table~\ref{tablelam} that the models with the largest average values of $\Lambda$ are those with at least two exceptional factors. 
\begin{table} \centerline{ \begin{tabular}{||r|| r|r|r|r|r|r|r|r|r|r||r|r||} \hline \hline ~ & $U_1$~ & $SU_2$ & $SU_3$ & $SU_4$ & $SU_5$ & $SU_{>5}$ & $SO_8$ & $SO_{10}$ & $SO_{>10}$ & $E_{6,7,8}$ & SM~ & PS~ \\ \hline \hline $U_1$ & 104.6& 104.6& 83.2& 112.9& 110.7& 162.3& 131.2& 172.1& 238.8& 342.2& 80.8& 107.6 \\ \hline $SU_2$ & ~& 120.7& 80.8& 109.1& 106.6& 157.1& 155.5& 167.9& 282.6& 442.5& 80.4& 103.9 \\ \hline $SU_3$ & ~& ~& 85.9& 90.9& 113.3& 136.1& 117.6& 162.8& 193.5& 220.2& 83.0& 88.3 \\ \hline $SU_4$ & ~& ~& ~& 115.2& 115.0& 150.9& 129.1& 166.7& 235.3& 314.2& 88.9& 110.5 \\ \hline $SU_5$ & ~& ~& ~& ~& 135.9& 156.3& 128.1& 191.6& 199.2& ---~~ & 107.7& 110.3 \\ \hline $SU_{>5}$ & ~& ~& ~& ~& ~& 200.9& 156.4& 203.2& 274.5& 370.7& 133.5& 142.8 \\ \hline $SO_8$ & ~& ~& ~& ~& ~& ~& 192.7& 167.5& 301.6& 442.8& 115.3& 123.3 \\ \hline $SO_{10}$ & ~& ~& ~& ~& ~& ~& ~& 207.8& 253.4& 289.3& 166.0& 159.6 \\ \hline $SO_{>10}$ & ~& ~& ~& ~& ~& ~& ~& ~& 417.4& 582.8& 190.8& 220.0 \\ \hline $E_{6,7,8}$ & ~& ~& ~& ~& ~& ~& ~& ~& ~& 1165.9& 220.2& 272.3 \\ \hline \hline SM & ~& ~& ~& ~& ~& ~& ~& ~& ~& ~& 82.5& 85.5 \\ \hline PS & ~& ~& ~& ~& ~& ~& ~& ~& ~& ~& ~& 104.9 \\ \hline \hline total: & 108.8& 121.4& 83.2& 113.8& 110.7& 162.2& 163.0& 173.0& 298.5& 440.2& 80.8& 108.3 \\ \hline \hline \end{tabular} } \caption{Average values of $\Lambda$ for the four-dimensional heterotic string models which exhibit various combinations of gauge groups. This table follows the same organization and notational conventions as Table~\protect\ref{table1}. Interestingly, scanning across the bottom row of this table, we see that those string models which contain at least the Standard-Model gauge group have the smallest average values of $\Lambda$.
} \label{tablelam} \end{table} Conversely, scanning across the bottom row of Table~\ref{tablelam}, we observe that the average value of $\Lambda$ is minimized for models which contain at least the Standard-Model gauge group $SU(3)\times SU(2)\times U(1)$ among their factors. Given our previous observations about the correlation between the average value of $\Lambda$ and the size of the corresponding gauge groups, it may seem surprising at first glance that models in which we demand only a single $U(1)$ or $SU(2)$ gauge group do not have an even smaller average value of $\Lambda$. However, models for which we require only a single $U(1)$ or $SU(2)$ factor have room for potentially larger gauge-group factors in their complete gauge groups than do models which simultaneously exhibit the factors $U(1)\times SU(2)\times SU(3)\equiv G_{\rm SM}$. Thus, on average, demanding the appearance of the entire Standard-Model gauge group is more effective in minimizing the resulting average value of $\Lambda$ than demanding a single $U(1)$ or $SU(2)$ factor alone. In fact, we see from Table~\ref{tablelam} that demanding $G_{\rm SM}\times U(1)$ is even more effective in reducing the average size of $\Lambda$, while demanding a completely shattered gauge group of the form $U(1)^n\times SU(2)^{22-n}$ produces averages which are even lower, as shown in Fig.~\ref{Fig14}. In this discussion, it is important to stress that we are dealing with statistical {\it averages}\/: individual gauge groups and cosmological constants can vary significantly from model to model. In other words, even though we have been plotting average values of $\Lambda$ in Figs.~\ref{Fig13} and \ref{Fig14}, there may be significant standard deviations in these plots. 
In order to understand the origin of these standard deviations, let us consider the ``inverse'' of Fig.~\ref{Fig13}(a) which can be constructed by binning our heterotic string models according to their values of $\Lambda$ and then plotting the average value of $f$, the number of gauge-group factors, for the models in each bin. The result is shown in Fig.~\ref{Fig15}, where we have plotted not only the average values of $f$ but also the corresponding standard deviations. Once again, we see that smaller values of $|\Lambda|$ are clearly correlated with increasingly shattered gauge groups. However, while a particular value of $\Lambda$ is directly correlated with a contiguous, relatively small range for $f$, the reverse is not true: a particular value of $f$ is correlated with {\it two}\/ distinct ranges for $\Lambda$ of opposite signs. Indeed, even the central magnitudes for $\Lambda$ in these two ranges are unequal because of the asymmetry of the data in Fig.~\ref{Fig15}, with the ``ascending'' portion of the curve having a steeper slope than the descending portion. As a result of this asymmetry, and as a result of the different numbers of string models which populate these two regions, the total value $\langle \Lambda\rangle$ averaged across these two regions does not cancel, but instead follows the curve in Fig.~\ref{Fig13}(a). Thus, while the curves in Figs.~\ref{Fig13} and \ref{Fig14} technically have large standard deviations, we have not shown these standard deviations because they do not reflect large uncertainties in the corresponding allowed values of $\Lambda$. Rather, they merely reflect the fact that the allowed values of $\Lambda$ come from two disjoint but relatively well-focused regions. 
\begin{figure}[t] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot15.eps} } \caption{The ``inverse'' of Fig.~\protect\ref{Fig13}(a): Here we have binned our heterotic string models according to their values of $\Lambda$ and then plotted $\langle f \rangle$, the average value of the number of gauge-group factors, for the models in each bin. The error bars delimit the range $\langle f\rangle \pm \sigma$ where $\sigma$ are the corresponding standard deviations. We see that while a particular value of $\Lambda$ restricts $f$ to a fairly narrow range, a particular value of $f$ only focuses $\Lambda$ to lie within two separate ranges of different central magnitudes $|\Lambda|$ and opposite signs.} \label{Fig15} \end{figure} Of course, as we approach the ``top'' of the curve in Fig.~\ref{Fig15} near $|\Lambda|\approx 0$, these two distinct regions merge together. However, even in this limit, it turns out that the sizes of the standard deviations depend on which physical quantity in the comparison is treated as the independent variable and held fixed. For example, if we restrict our attention to heterotic string models containing a gauge group of the form $U(1)^n\times SU(2)^{22-n}$ (essentially holding $f$, the number of gauge-group factors, fixed at $f=22$), we still find corresponding values of $\Lambda$ populating the rather wide range $-400\lsim \Lambda\lsim 500$. In other words, holding $f$ fixed does relatively little to focus $\Lambda$. By contrast, we have already remarked in Sect.~5 that across our entire sample of $\sim 10^5$ models, the smallest value of $|\Lambda|$ that we find is $\Lambda\approx 0.0187$. This value emerges for nine models, eight of which share the same state degeneracies $\lbrace a_{sd}\rbrace$ and one of which is their ``twin'' (as defined at the end of Sect.~5). 
If we take $\Lambda$ as the independent variable and hold $\Lambda\approx 0.0187$ fixed (which represents only one very narrow slice within the bins shown in Fig.~\ref{Fig15}), we then find that essentially {\it all}\/ of the corresponding gauge groups are extremely ``small'': four are of the form $U(1)^n\times SU(2)^{22-n}$ with $n=12, 13, 15, 17$, and the only two others are $SU(3)\times SU(2)^4\times U(1)^{16}$ and $SU(3)^2\times SU(2)^3\times U(1)^{15}$. In other words, models with $\Lambda\approx 0.0187$ have $\langle f\rangle \approx 21.67$, with only a very small standard deviation. Indeed, amongst all distinct models with $|\Lambda|\leq 0.04$, we find none with gauge-group factors of rank exceeding $5$; only $8.2\%$ of such models have an individual gauge-group factor of rank exceeding $3$ and only $1.6\%$ have an individual gauge-group factor of rank exceeding $4$. None were found that exhibited any larger gauge-group factors. Thus, we see that keeping $\Lambda$ small goes a long way towards keeping the corresponding gauge-group factors small. It is, of course, dangerous to extrapolate from these observations of $\sim 10^5$ models in order to make claims concerning a full string landscape which may consist of $\sim 10^{500}$ models or more. Nevertheless, as we discussed at the end of Sect.~2, we have verified that all of these statistical correlations appear to have reached the ``continuum limit'' (meaning that they appear to be numerically stable as more and more string models are added to our sample set). Indeed, although the precise minimum value of $|\Lambda|$ is likely to continue to decline as more and more models are examined, the correlation between small values of $|\Lambda|$ and small gauge groups is likely to persist.
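The rank statistics quoted above amount to filtering on $|\Lambda|$ and tallying the largest factor rank per model; a toy sketch with made-up entries:

```python
# Toy stand-in for the model sample: each entry holds a cosmological
# constant and the ranks of the model's gauge-group factors.
models = [
    {"Lambda": 0.0187, "ranks": [1] * 12 + [2] * 10},  # U(1)^12 x SU(2)^10-like
    {"Lambda": 0.031,  "ranks": [2, 2, 2, 1, 1, 4]},
    {"Lambda": -0.020, "ranks": [3, 2, 1, 1]},
    {"Lambda": 150.0,  "ranks": [8, 6, 4]},            # large |Lambda|, large factors
]

# Restrict to models with |Lambda| <= 0.04, then compute the fraction
# whose largest individual gauge-group factor exceeds a given rank.
small = [m for m in models if abs(m["Lambda"]) <= 0.04]
frac_rank_gt3 = sum(max(m["ranks"]) > 3 for m in small) / len(small)
print(frac_rank_gt3)  # 1/3 of the toy small-|Lambda| models exceed rank 3
```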
Needless to say, it is impossible to estimate how large our set of heterotic string models must become before we randomly find a model with $|\Lambda|\approx 10^{-120}$; indeed, if the curve in Fig.~\ref{Fig12} truly saturates at a finite value, such models may not even exist. However, if we assume (as in Ref.~\cite{BP}) that such models exist --- providing what would truly be a stable string ``vacuum'' --- then it seems overwhelmingly likely that \begin{itemize} \item {\bf Perturbative heterotic string vacua with observationally acceptable cosmological constants can be expected to have extremely small gauge-group factors, with $U(1)$, $SU(2)$, and $SU(3)$ overwhelmingly favored relative to larger groups such as $SU(5)$, $SO(10)$, or $E_{6,7,8}$. Thus, for such string vacua, the Standard-Model gauge group is much more likely to be realized at the string scale than any of its grand-unified extensions.} \end{itemize} As always, such a claim is subject to a number of additional assumptions: we are limited to perturbative heterotic string vacua, we are examining only one-loop string amplitudes, and so forth. Nevertheless, we find this type of correlation to be extremely intriguing, and feel that it is likely to hold for higher-loop contributions to the vacuum amplitude as well. Note, in particular, that the critical ingredient in this claim is the assumption of small cosmological constant. Otherwise, statistically weighting all of our string models equally without regard for their cosmological constants, we already found in Sect.~4 that the Standard Model is relatively {\it disfavored}\/, appearing only $10\%$ of the time, while the $SO(2n\geq 10)$ GUT groups appear with the much greater frequency $24.5\%$. Thus, it is the requirement of a small cosmological constant which is responsible for redistributing these probabilities in such a dramatic fashion. 
There is, however, another possible way to interpret this correlation between the magnitudes of $\Lambda$ and the sizes of gauge groups. As we have seen, smaller values of $\Lambda$ tend to emerge as the gauge group becomes increasingly shattered. However, as we know, there is a fundamental limit to how shattered our gauge groups can become: $f$ simply cannot exceed $22$, {\it i.e.}\/, there is no possible gauge-group factor with rank less than 1. Thus, the correlation between $\Lambda$ and average gauge-group rank may imply that there is likewise a minimum possible value for $\Lambda$. If so, it is extremely unlikely that perturbative heterotic string models will be found in which $\Lambda$ is orders of magnitude less than the values we have already seen. \begin{figure}[ht] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot16.eps} } \caption{The probability that a randomly chosen heterotic string model has a negative value of $\Lambda$ (\protect{\it i.e.}\/, a positive value of the vacuum energy density $\lambda$), plotted as a function of the number of gauge-group factors in the total gauge group of the model. We see that we do not have a significant probability of obtaining models with $\Lambda<0$ until our gauge group is ``shattered'' into at least four or five factors; this probability then remains roughly independent of the number of factors as further shattering occurs.} \label{Fig16} \end{figure} Another important characteristic of such string models is the {\it sign}\/ of $\Lambda$. For example, whether the vacuum energy $\lambda= -{\textstyle{1\over 2}} {\cal M}^4 \Lambda$ is positive or negative can determine whether the corresponding spacetime is de Sitter (dS) or anti-de Sitter (AdS). In Fig.~\ref{Fig16}, we show the probability that a randomly chosen heterotic string model has a negative value of $\Lambda$, plotted as a function of the number of gauge-group factors in the total gauge group of the model.
For small numbers of factors, the corresponding models all tend to have very large positive values of $\Lambda$. Indeed, as indicated in Fig.~\ref{Fig16}, we do not accrue a significant probability of obtaining models with $\Lambda<0$ until our gauge group is ``shattered'' into at least four or five factors. The probability of obtaining negative values of $\Lambda$ then saturates near $\approx 1/4$, remaining roughly independent of the number of gauge-group factors as further shattering occurs. Thus, we see that regardless of the value of $f$, the ``ascending'' portion of Fig.~\ref{Fig15} is populated by only a quarter as many models as populate the ``descending'' portion. Since the overwhelming majority of models have relatively large numbers of gauge-group factors (as indicated in Fig.~\ref{Fig1}), we see that on average, approximately 1/4 of our models have negative values of $\Lambda$. This is consistent with the histogram in Fig.~\ref{Fig11}. \begin{figure}[h] \centerline{ \epsfxsize 4.0 truein \epsfbox {plot17.eps} } \caption{The degeneracy of values of the cosmological constant relative to gauge groups. We plot the probability that a given value of $\Lambda$ (chosen from amongst the total set of obtained values) will emerge from a model with $f$ gauge-group factors, as a function of $f$ for even and odd values of $f$. This probability is equivalently defined as the number of distinct $\Lambda$ values obtained from models with $f$ gauge-group factors, divided by the total number of $\Lambda$ values found across all values of $f$. 
The resemblance of these curves to those in Fig.~\protect\ref{Fig1} indicates that the vast degeneracy of cosmological-constant values is spread approximately uniformly across models with different numbers of gauge-group factors.} \label{Fig17} \end{figure} Finally, we can investigate how the vast redundancy in the values of the cosmological constant is correlated with the corresponding gauge groups and the degree to which they are shattered. In Fig.~\ref{Fig17}, we plot the number of distinct $\Lambda$ values obtained from models with $f$ gauge-group factors, divided by the total number of $\Lambda$ values found across all values of $f$. Note that the sum of the probabilities plotted in Fig.~\ref{Fig17} exceeds one. This is because a given value of $\Lambda$ may emerge from models with many different values of $f$ --- {\it i.e.}\/, the sets of values of $\Lambda$ for each value of $f$ are not exclusive. It turns out that this is a huge effect, especially as $f$ becomes relatively large. The fact that the curves in Fig.~\ref{Fig1} and Fig.~\ref{Fig17} have similar shapes as functions of $f$ implies that the vast degeneracy of cosmological-constant values is spread approximately uniformly across models with different numbers of gauge-group factors. Moreover, as indicated in Fig.~\ref{Fig12}(b), this degeneracy factor itself tends to decrease as more and more models are examined, leading to a possible saturation of distinct $\Lambda$ values, as discussed above. \section{Discussion} \setcounter{footnote}{0} In this paper, we have investigated the statistical properties of a fairly large class of perturbative, four-dimensional, non-supersymmetric, tachyon-free heterotic string models. We focused on their gauge groups, their one-loop cosmological constants, and the statistical correlations that emerge between these otherwise disconnected quantities.
Clearly, as stated in the Introduction, much more work remains to be done, even within this class of models. For example, it would be of immediate interest to examine other aspects of the full particle spectra of these models and obtain statistical information concerning Standard-Model embeddings, spacetime chirality, numbers of generations, and $U(1)$ hypercharge assignments, as well as cross-correlations between these quantities. This would be analogous to what has been done for Type~I orientifold models in Refs.~\cite{blumenhagen,schellekens}. One could also imagine looking at the gauge couplings and their runnings, along with their threshold corrections~\cite{Kaplunovsky}, to see whether it is likely that unification occurs given low-energy precision data~\cite{Prep,faraggi}. It would also be interesting to examine the properties of the cosmological constant {\it beyond}\/ one-loop order, with an eye towards understanding to what extent the unexpected degeneracies we have found persist. An analysis of other string amplitudes and correlation functions is clearly also of interest, particularly as they relate to Yukawa couplings and other phenomenological features. Indeed, a more sophisticated analysis examining all of these features with a significantly larger sample set of models is currently underway~\cite{next}. Needless to say, another option is to expand the class of heterotic string models under investigation. While the models examined in this paper are all non-supersymmetric, it is also important to repeat much of this work for four-dimensional heterotic models with ${\cal N}=1$, ${\cal N}=2$, and ${\cal N}=4$ supersymmetry. There are two distinct reasons why this is an important undertaking. First, because they are supersymmetric, such models are commonly believed to be more relevant to particle physics in addressing issues of gauge coupling unification and the gauge hierarchy problem. 
But secondly, at a more mathematical level, such models have increased stability properties relative to the non-supersymmetric models we have been examining here. Thus, by examining the statistical properties of such models and comparing them with the statistical properties of supersymmetric models, we can determine the extent to which supersymmetry has an effect on these other phenomenological features. Such analyses are also underway~\cite{next}. Indeed, in many cases these perturbative supersymmetric heterotic strings are dual to other strings ({\it e.g.}\/, Type~I orientifold models) whose statistical properties are also being analyzed. Thus, analysis of the perturbative heterotic landscape, both supersymmetric and non-supersymmetric, will enable {\it statistical}\/ tests of duality symmetries across the entire string landscape. Beyond this, of course, there are many broader classes of closed string models which may be examined --- some of these are discussed in Sect.~2. Indeed, of great interest are completely {\it stable}\/ non-supersymmetric models. As discussed in Ref.~\cite{heretic}, such models could potentially provide {\it non}\/-supersymmetric solutions not only for the cosmological-constant problem, but also for the apparent gauge hierarchy problem. Such models could therefore provide a framework for an alternative, non-supersymmetric approach towards string phenomenology~\cite{heretic}. However, given that no entirely stable non-supersymmetric perturbative heterotic string models have yet been constructed, the analysis of this paper represents the current ``state of the art'' as far as non-supersymmetric perturbative heterotic string model-building is concerned. As mentioned in the Introduction, this work may be viewed as part of a larger ``string vacuum project'' whose goal is to map out the properties of the landscape of string vacua. 
It therefore seems appropriate to close with two warnings concerning the uses and abuses of such large-scale statistical studies as a method of learning about the properties of the landscape. The first warning concerns what may be called the ``lamppost'' effect --- the danger of restricting one's attention to only those portions of the landscape where one has control over calculational techniques. (This has been compared to searching for a small object in the darkness of night: the missing object may be elsewhere, but the region under the lamppost may represent the only location where the search can be conducted at all.) For example, our analysis in this paper has been restricted to string models exploiting ``free-field'' constructions (such as string constructions using bosonic lattices or free-fermionic formalisms). While this class of string models is very broad and lends itself naturally to a computer-automated search and analysis, it is entirely possible that the models with the most interesting phenomenologies are beyond such a class. Indeed, it is very easy to imagine that different constructions will have different strengths and weaknesses as far as their low-energy phenomenologies are concerned, and that one type of construction may easily realize features that another cannot accommodate. By contrast, the second danger can be called the ``G\"odel effect'' --- the danger that no matter how many conditions (or input ``priors'') one demands for a phenomenologically realistic string model, there will always be another observable for which the set of realistic models will make differing predictions. Therefore, such an observable will remain beyond our statistical ability to predict. (This is reminiscent of the ``G\"odel incompleteness theorem'' which states that in any axiomatic system, there is always another statement which, although true, cannot be deduced purely from the axioms.)
Given that the full string landscape is very large, consisting of perhaps $10^{500}$ distinct models or more, the G\"odel effect may represent a very real danger. Thus, since one can never be truly sure of having examined a sufficiently sizable portion of the landscape, it is likewise never absolutely clear whether we can be truly free of such G\"odel-type ambiguities when attempting to make string predictions. Of course, implicit in each of these effects is the belief that one actually knows what one is looking for --- that we know {\it which}\/ theory of particle physics should be embedded directly into the string framework and viewed as emerging from a particular string vacuum. However, it is possible that nature might pass through many layers of effective field theories at higher and higher energy scales before reaching an ultimate string-theory embedding. In such cases, the potential constraints on a viable string vacuum are undoubtedly weaker. Nevertheless, we believe that there are many valid purposes for such statistical studies of actual string models. First, as we have seen at various points in this paper, it is only by examining actual string models --- and not effective supergravity solutions --- that many surprising features come to light. Indeed, one overwhelming lesson that might be taken from the analysis in this paper is that the string landscape is a very rich place, full of unanticipated properties and characteristics that emerge only from direct analysis of concrete string models. Second, through their direct enumeration, we gain valuable experience in the construction and analysis of phenomenologically viable models. This is, in some sense, a direct test of string theory as a phenomenological theory of physics. 
For example, it is clear from the results of this paper that obtaining the Standard-Model gauge group is a fairly non-trivial task within free-field constructions based on $\IZ_2$ periodic/antiperiodic orbifold twists; as we have seen in Fig.~\ref{Fig6}(b), one must induce a significant amount of gauge-group shattering before a sizable population of models with the Standard-Model gauge group emerges. This could not have been anticipated on the basis of low-energy effective field theories alone, and is ultimately a reflection of worldsheet model-building constraints. Such knowledge and experience are extremely valuable for string model-builders, and can serve as useful guideposts. Third, as string phenomenologists, we must ultimately come to terms with the landscape. Given that such large numbers of string vacua exist, it is imperative that string theorists learn about these vacua and the space of resulting possibilities. Indeed, the first step in any scientific examination of a large data set is that of enumeration and classification; this has been true in branches of science ranging from astrophysics and botany to zoology. It is no different here. But finally, we are justified in interpreting observed statistical correlations as general landscape features to the extent that we can attribute such correlations to the existence of underlying string-theoretic consistency constraints. Indeed, when the constraint operates only within a single class of strings, then the corresponding statistical correlation is likely to hold across only that restricted portion of the landscape. For example, in cases where we were able to interpret our statistical correlations about gauge groups and cosmological constants as resulting from deeper constraints such as conformal and modular invariance, we expect these correlations to hold across the entire space of perturbative closed-string vacua. 
As such, we may then claim to have extracted true phenomenological predictions from string theory. This is especially true when the string-correlated quantities would have been completely disconnected in quantum field theory. Thus, it is our belief that such statistical landscape studies have their place, particularly when the results of such studies are interpreted correctly and in the proper context. As such, we hope that this initial study of the perturbative heterotic landscape may represent one small step in this direction. \section*{Acknowledgments} \setcounter{footnote}{0} Portions of this work were originally presented at the String Phenomenology Workshop at the Perimeter Institute in April 2005, at the Munich String Phenomenology Conference in August 2005, and at the Ohio State String Landscape Conference in October 2005. This work is supported in part by the National Science Foundation under Grant PHY/0301998, by the Department of Energy under Grant~DE-FG02-04ER-41298, and by a Research Innovation Award from Research Corporation. I am happy to thank R.~Blumenhagen, M.~Douglas, S.~Giddings, J.~Kumar, G.~Shiu, S.~Thomas, H.~Tye, and especially M.~Lennek for discussions. I am also particularly grateful to D.~S\'en\'echal for use of the computer programs~\cite{Senechal} which were employed fifteen years ago~\cite{PRL} to generate these string models and to determine their gauge groups. While some of this data was briefly mentioned in Ref.~\cite{PRL}, the bulk of the data remained untouched. Because of its prohibitively huge size (over 4 megabytes in 1990!), this data was safely offloaded for posterity onto a standard nine-track magnetic computer tape.
I am therefore also extremely grateful to M.~Eklund and P.~Goisman for their heroic efforts in 2005 to locate the only remaining tape drive within hundreds of miles capable of reading such a fifteen-year-old computer artifact, and to S.~Sorenson for having maintained such a tape reader in his electronic antiquities collection and using it to resurrect the data on which this statistical analysis was based. \bigskip \vfill\eject \bibliographystyle{unsrt}
\section*{Introduction} \hspace{\parindent} Tailoring the field response of magnetic mesospins using a hierarchy of energy scales was originally demonstrated by Cowburn and Welland\cite{Cowburn_IOP_1999,Cowburn2000Sci}. In their case, the hierarchy was obtained from a distribution in the size and shape of small magnetic elements, which facilitated the design of magnetic cellular automata, with potential uses in information processing and non-volatile information storage\cite{Cowburn2000Sci,Imre_Science_2006}. Even though these structures were thermally inactive, their results highlighted the importance of the hierarchy of energies on the observed magnetic order and global response of the system. Since then, the exploration of the magnetic properties of nano-arrays has expanded dramatically\cite{HeydermanStamps_review_2013,Nisoli_Kapaklis_NatPhys2017,Rougemaille_review_2019} and now includes arrays designed to exhibit geometrical\cite{Wang2006Nat,Canals_2016_NatComm,Perrin2016Nat,Ostman_NatPhys_2018,Farhan_2019_SciAdv} and topological frustration\cite{Chern2013PRL, Morrison2013NJP, Gilbert_Shakti_NatPhys2014, Gilbert_tetris_2015}. In addition to tailoring the lattice and geometry of the elements, the materials from which the arrays are now fabricated can be chosen to allow thermal fluctuations at suitable temperatures\cite{Sloetjes_arXiv_2020}. Such approaches have already enabled the investigation of spontaneous ordering, dynamics and phase transitions on the mesoscale \cite{Morgan2010NatPhys, Kapaklis2012NJP, Arnalds2012APL, Farhan2013NatPhys, Kapaklis2014NatNano, Anghinolfi_2015_NatComm, SciRep_Relaxation_2016, Ostman_NatPhys_2018, ShaktiME, Sendetskyi_2019_PRB, Pohlit_2020_Susceptibility_PRB, Leo_2020arXiv, Mellado_arXiv_2020}.
Currently, only a couple of lattices incorporating multiple element sizes have been investigated.\cite{Chern2013PRL,Gilbert_Shakti_NatPhys2014,Ostman_NatPhys_2018, ShaktiME} In \"{O}stman et al.\cite{Ostman_NatPhys_2018}, small circular elements were placed between the islands within the vertex and used as interaction modifiers to change the overall energy landscape of the lattice, but their actual magnetic state/orientation was not determined. The first investigations of the Shakti lattice\cite{Chern2013PRL,Gilbert_Shakti_NatPhys2014} ignored the different sizes of the elements within the array, but a distinct ordering around the long elements was nevertheless shown by Stopfel et al.\cite{ShaktiME}. However, the highly degenerate ground state, together with the high symmetry of the Shakti lattice, masks possible longer range correlations and ordering. Thus, new lattice geometries are needed to probe the effect of energy hierarchy on the ordering within artificial spin ice structures, especially on length-scales beyond the nearest neighbour. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Fig1_new2} \caption{{\bf Illustration of the Saint George lattices}. {\bf a} The Saint George (SG) lattice with its two different island ({\it mesospin}) sizes. {\bf b} The modified Saint George (mSG) lattice has only one island size. In the Saint George lattice geometry there are two different horizontal chains, coloured green and yellow. These horizontal chains are connected by vertical islands. The green island chains are referred to as long-mesospin chains in the main text, as they consist of the long islands in the SG lattice and have twice the number of islands in the mSG. The yellow island chains are identical in the SG and mSG lattices.} \label{fig:SG} \end{figure} In this communication, we address the effect on the magnetic order of having one or two activation energies influencing the magnetic correlations among Ising-like mesospins.
Experimentally we do this using mesospins within the Saint George (SG) and modified Saint George (mSG) structures, defined and illustrated in Fig.~\ref{fig:SG}. The SG and mSG structures are closely related to the Shakti lattices described by Chern et al.\cite{Chern2013PRL}, and are constructed in a similar fashion to that introduced by Morrison et al.\cite{Morrison2013NJP}: removing and merging elements, whilst using the square artificial spin ice lattice as a starting base. The SG lattice is characterised by its horizontal Ising-like mesospin chains\cite{Ising1925ZfP, Ostman_Ising_JPCM_2018}, which are connected by short vertical islands. There are two horizontal chain types: one composed of long-mesospins and the other of short-mesospins. Both chain types are coupled via short vertical mesospins, which creates an axial anisotropy in the lattice, as can be seen in Fig.~\ref{fig:SG}. The SG lattice has two activation energies, one for the long- and another for the short-mesospins. All islands forming the mSG lattice have the same size and therefore the elements have one and the same intrinsic activation energy. The ratio of the activation energies within the SG lattice design can give rise to two distinct scenarios: (1) The two energy-scales are well separated. This results in a freezing of the long-mesospins, while the short-mesospins remain thermally active. Lowering the temperature further results in a freezing of the short-mesospins, influenced by a static magnetic field originating from the frozen long-islands. (2) The energy-scales of the short- and long-mesospins are close or overlapping. This results in an interplay between the differently sized elements during the freezing process. Both scenarios would give rise to an emergent magnetic order, but the correlations between the magnetic mesospins should be different for the two cases.
When the activation energies are very different, one may naively expect the long-mesospins to behave as an independent array that, due to their shape anisotropy and close separation, acts as an Ising chain. We therefore analyse the magnetic correlations of the elements in SG and mSG structures, comparing the results after freezing the mesospins. \begin{figure}[h!] \centering \includegraphics[width=0.9\columnwidth]{PEEM} \caption{{\bf Photoemission electron microscopy images using the x-ray magnetic circular dichroism effect to visualize the magnetization direction.} Combined measurements of {\bf a} the SG lattice and {\bf b} the mSG lattice. The magnetization direction of each individual island can be determined by its colour. Black indicates mesospins pointing left or up, while white indicates mesospins pointing right or down.} \label{fig:PEEM} \end{figure} \section*{Results} \hspace{\parindent} The experimental investigations of the magnetic states of the elements are based on photoemission electron microscopy (PEEM-XMCD). The results for the SG and mSG lattices are illustrated in Fig.~\ref{fig:PEEM}. While the magnetization direction of each mesospin can be readily identified, it is difficult to see any obvious correlations in and/or between the SG and mSG lattices from a visual inspection of Fig.~\ref{fig:PEEM}. Exploring the possible effect of different energy hierarchies therefore requires statistical analysis of the images. To determine the possible influence of the energy-scales on the observed magnetic ordering, we present an analysis focusing on increasing length-scales, starting with the magnetic ordering of the long-mesospin chains. \subsection*{Long-mesospin chains} \hspace{\parindent} In Fig.~\ref{fig:SGLongChains} we show the correlation between the magnetisation direction of neighbouring long islands, after cooling the sample from room temperature to \SI{65}{K}.
The correlation in the magnetization direction of neighbouring islands is calculated using: \begin{equation} \label{eq:correlation} \centering C^n = \dfrac{\sum_k^N m_k \cdot m_{n_k}}{N}. \end{equation} \noindent Here $N$ is the total number of long-mesospins, $m_k$ the magnetization direction of the reference mesospin and $m_{n_k}$ the magnetization direction of the $n^{th}$ neighbour of mesospin $k$. In this formalism, $m_k$ and $m_{n_k}$ can be $\pm 1$ depending on their magnetization direction (black or white in Fig.~\ref{fig:PEEM}). To determine the correlations in the mSG lattice, the same approach using Equation~\eqref{eq:correlation} was applied, but due to the different local starting conditions (identified in Fig.~\ref{fig:SGLongChains}b with $\star$~or~$+$) the correlations were determined separately for each condition. For the long-mesospins, the correlation of the first neighbour is close to zero, indicating a random arrangement of the long-mesospins. This is in contrast to the ferromagnetic order which would have been naively expected\cite{Ostman_Ising_JPCM_2018}, calling for an analysis of the vertex states. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{SGmSG_longchains} \caption{{\bf Magnetic order of long-mesospin chains in the SG and mSG lattice. a} Correlation between neighbouring long-mesospins in the SG lattice shows an almost random alignment for the first mesospin neighbour; the slight negative correlation value indicates a preference towards anti-ferromagnetic order. {\bf b} The same mesospin chains are also investigated in the mSG, where two short-mesospins resemble one long-mesospin.
Depending on the starting element ($\star$~or~$+$), the first mesospin neighbour has a positive value ($\star$), and therefore shows a preference for ferromagnetic alignment, or a negative value ($+$), and therefore a slight preference for anti-ferromagnetic alignment.} \label{fig:SGLongChains} \end{figure} \subsection*{Vertex abundances} \hspace{\parindent} \begin{figure*}[t!] \centering \includegraphics[width=\columnwidth]{SGstats+random.pdf} \caption{{\bf Vertex statistics for the SG and mSG lattice}. {\bf a} The degeneracy corrected vertex abundance for the fourfold coordinated vertices in the SG (blue) and mSG (red) lattice. {\bf b} The degeneracy corrected vertex abundance for the threefold coordinated vertices in the SG (blue) and mSG (red) lattice. The gray bars illustrate the vertex abundance for a random distribution, as would be obtained if the mesospins did not interact. The vertex-type energies correspond to their notation, with type-I$_{4(3)}$ being the lowest energy and therefore the ground state for the fourfold (threefold) coordinated vertex.} \label{fig:Stats} \end{figure*} Statistical analysis of the vertex configurations of the mesospins allows for some insight into the interactions and the degree of order in our arrays. The larger number of low energy vertex configurations, as seen in Fig.~\ref{fig:Stats}, clearly shows that the mesospins are interacting in both SG and mSG lattices. However, the number of higher excitations in both the fourfold and threefold vertices is a manifestation of thermal activity during the freezing process. These high energy vertex states (type-II$_4$-IV$_4$ and type-II$_3$-III$_3$, see Fig.~\ref{fig:Stats} for vertex type illustration) have also been recorded in earlier works using the same magnetic material.\cite{ShaktiME} Whilst the distribution of vertex states is not random, it is still not possible to identify significant differences between the SG and mSG lattices.
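As a minimal numerical sketch of the pairwise correlator of Eq.~\eqref{eq:correlation}, assuming for illustration a simple periodic chain of $\pm 1$ mesospins rather than the actual lattice geometry:

```python
import numpy as np

def correlation(spins, n):
    """n-th neighbour correlation C^n = (1/N) * sum_k m_k * m_{k+n}
    for a periodic chain of Ising mesospins m_k = +/-1."""
    m = np.asarray(spins)
    return float(np.mean(m * np.roll(m, -n)))

# A perfectly antiferromagnetic chain: nearest neighbours anti-align,
# next-nearest neighbours align.
afm = [(-1) ** k for k in range(10)]
print(correlation(afm, 1))  # -1.0
print(correlation(afm, 2))  # 1.0
```

A value near zero, as found for the first-neighbour correlation of the long-mesospin chains, signals an essentially random arrangement.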
Therefore, we conclude that on the length-scale of the fourfold and threefold coordinated vertices no significant difference between the SG and mSG lattices can be identified, and that there is no measurable influence of the different energy-scales on the magnetic ordering. So far we have investigated the correlations of the magnetization direction of neighbouring mesospins as well as the correlations of three and four interacting mesospins at a vertex. The next length-scale at which the anisotropy impacts the emergent behaviour is, therefore, across several interacting mesospins and vertices. \subsection*{Flux closure loops} \hspace{\parindent} \begin{figure*}[t!] \centering \includegraphics[width=\columnwidth]{Chirality_new2} \caption{{\bf Mesospin circulation in the SG and mSG lattice}. The circulation of 5-mesospin loops in the SG lattice (blue), compared to the circulation of 6-mesospin loops in the mSG lattice (red), is shown in the left graph. The circulation of the loops was normalized to +1 (-1) for a total clockwise (anti-clockwise) alignment of all mesospins of a loop. This normalization results in a change of the loop circulation by $\pm$0.4 for each mesospin flipped in the SG lattice and by $\pm\frac{1}{3}$ in the mSG lattice. The right graph illustrates the degeneracy-corrected abundance of flux closure loops and higher energy states for both the SG and mSG.} \label{fig:chirality} \end{figure*} Flux closure loops are a way to minimize the energy of the lattice by reducing the residual stray fields. The flux closure loops in the SG and mSG lattice are defined as follows: the smallest flux closure loop in the SG lattice consists of one long- and four short-mesospins, and in the mSG of six short-mesospins, as depicted graphically in Fig.~\ref{fig:chirality}. The lowest energy fully flux closed loops are defined as having a circulation of $+1$ for a clockwise and $-1$ for an anti-clockwise flux closure.
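The circulation normalization can be sketched in a few lines (an illustration of our own, not part of the analysis pipeline): for a loop of $n$ mesospins, each flip changes the normalised circulation by $2/n$, reproducing the quoted steps of $\pm0.4$ for the 5-mesospin SG loops and $\pm\frac{1}{3}$ for the 6-mesospin mSG loops.

```python
def loop_circulation(s):
    """Normalised circulation of a flux closure loop.
    s_i = +1 if mesospin i points clockwise along the loop, -1 otherwise;
    a fully closed loop gives +1 (clockwise) or -1 (anti-clockwise)."""
    return sum(s) / len(s)

# SG loop: 5 mesospins, so one flipped mesospin changes the
# circulation by 2/5 = 0.4 (here from 1.0 down to 0.6)
sg = [+1, +1, +1, +1, +1]
sg[0] = -1
print(loop_circulation(sg))   # 0.6

# mSG loop: 6 mesospins, so one flip changes it by 2/6 = 1/3
msg = [+1] * 6
msg[0] = -1
print(loop_circulation(msg))  # 2/3
```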
Energetically, the most favorable circulations will be the fully closed loops with a circulation of $\pm1$. For higher energy states, where one or more islands are reversed, the normalised circulation is modified by $\pm$0.4 for each mesospin flipped in the SG lattice and by $\pm\frac{1}{3}$ in the mSG lattice. In Fig.~\ref{fig:chirality}, we compare the abundance of full and partial flux closure for the SG and the mSG lattices. The low energy flux closure loops, with values of $\pm1$, have the same abundance in the SG and the mSG lattice. The higher energy states, with one or more reversed mesospins, at first glance appear more abundant in the SG lattice, but combining all flux closure loops (circulation values $\pm1$) into one value and summing up all remaining higher energy states into another value, it becomes clear that there are no significant differences between the flux closure loops in the SG and the mSG lattice (right graph in Fig.~\ref{fig:chirality}). Judging from this loop circulation evaluation in Fig.~\ref{fig:chirality} and the vertex statistics in Fig.~\ref{fig:Stats}, we can assume that the SG and mSG lattices are similarly ordered and that there are no major differences between them originating from the different energy-scales. Thus far, we have only observed ordering and correlation at the short-range level, such as the vertex configurations and the smallest possible flux closure loops. To identify the next length-scale, we now turn our attention to the collective ground state of the SG lattice. \subsection*{Ground state manifold} \hspace{\parindent} The analysis presented above, focusing on the short-range scale, clearly indicates that the energy-scales of the two sizes overlap and drive an elaborate ordering mechanism. We can therefore simplify the evaluation of the ground state manifold by only minimizing the vertex energies, not taking into account the two different activation energies for the long- and short-mesospins.
This assumption results in the ground state manifold which we present in Fig.~\ref{fig:groundstate} for one of the possible ground state configurations for the SG and the mSG lattice. All fourfold coordinated vertices in the SG ground state manifold are in their energetically lowest state, the type-I$_4$ state (see Fig.~\ref{fig:Stats} for state notations). The geometrical configuration of the SG lattice leads to a vertex frustration at the threefold coordinated vertices, which is present at all temperatures. This vertex frustration results in 50\% of all threefold coordinated vertices being in their lowest energy state (type-I$_3$), with the remaining 50\% in their first excited state (type-II$_3$). The SG lattice geometry also yields a high degree of degeneracy, as the arrangement of the ground state unit-cells (illustrated in Fig.~\ref{fig:groundstate}) can be achieved in numerous ways. In addition, the central mesospin inside these ground state unit-cells defines the position of the type-II$_3$ excitation and is energetically degenerate. In a real sample, in which the dynamics of the freezing process itself also play a role, excitations and localised disorder are inevitably locked into the system, which prevents long-range ground states from forming. This has been observed in the vertex statistics presented for these (Fig.~\ref{fig:Stats}) and similar artificial arrays, such as the Shakti lattice\cite{ShaktiME}. \begin{figure*}[t!] \centering \includegraphics[width=\columnwidth]{SaintGeorges2} \caption{{\bf Proposed ground state configuration in the SG (left) and mSG lattice (right)}. This ground state configuration takes into account only the vertex energies, while ignoring the different activation energies for short- and long-mesospins. All fourfold-coordinated vertices are in their lowest energy state.
This configuration leads to a frustration among the threefold-coordinated vertices, as 50\% of them are in their lowest energy state but the other 50\% are in their first excited state, equivalent to the Shakti lattice\cite{Chern2013PRL}. The proposed ground state creates flux closure loops, as unit-cells of the ground state manifold. The four different flux closure loops are coloured orange or green, depending on their sense of flux circulation (sense of rotation). The border mesospins, 8~in the SG and 10~in the mSG lattice, are head-to-tail aligned, with either a clockwise or anti-clockwise sense of rotation. The non-border mesospin is frustrated and located in the center of the flux closure loop. The magnetic orientation of this island has no impact on the overall energy of the entire lattice, but it defines the position of the excited threefold-coordinated vertex.} \label{fig:groundstate} \end{figure*} All of the analyses until now have taken place at the short range, amongst two, three, four, five or six mesospins. Looking in more detail at the ground state unit-cell, it becomes evident that flux closure loops consisting of two long- and six short-mesospins for the SG lattice and ten short-mesospins for the mSG lattice should also be considered when discussing the overall ordering of the lattice geometry. Furthermore, these ground state unit-cells with their 8 or 10 mesospins represent an intermediate length-scale within the SG lattice. As the ground state unit-cells can be arranged in multiple ways throughout the lattice geometry, the flux closure loops cannot easily be investigated in the same fashion as we did with the circulation values in Fig.~\ref{fig:chirality}. It is possible, however, to investigate and compare directly their abundances in the lattices. \subsection*{Magnetic ordering on an intermediate length-scale} \begin{figure*}[t!]
\centering \includegraphics[width=\columnwidth]{GSmaps2} \caption{{\bf Mapping of ground state unit-cells in the SG and mSG lattice. a} The ground state unit-cells introduced in Fig.~\ref{fig:groundstate} are mapped out experimentally in our arrays. There is a clear difference in the quantity of ground state unit-cells found in the SG and the mSG lattice. A 22\% ground state realization is observed in the SG lattice, with a 10\% realization in the mSG lattice. {\bf b} Conditional arrangement of the ground state unit-cells in the SG and mSG lattice, where two long-mesospins from subsequent long-mesospin chains are anti-ferromagnetically aligned to each other. In this conditional mesospin arrangement we compare the abundance of the ground state unit-cells in the SG and mSG lattice and see clear evidence of a stronger ordering in the SG lattice.} \label{fig:GSmaps} \end{figure*} \hspace{\parindent} In pursuit of this magnetic ordering on an intermediate length-scale, we turn our attention towards the ground state unit-cells in our measured arrays (Fig.~\ref{fig:PEEM}). In this way, we are able to detect the first major differences between the SG and the mSG lattices, by simply counting the number of ground state unit-cells present in our arrays. Fig.~\ref{fig:GSmaps}a illustrates the distribution of these ground state unit-cells in the measured lattices; this representation clearly highlights that more ground state unit-cells are found in the SG lattice than in the mSG lattice. Evaluating these ground state unit-cells quantitatively, we find 22\% ground state realization for the SG lattice but only 10\% ground state realization for the mSG lattice. This difference is attributed to the interplay of the two overlapping energy-scales, where the long-mesospins influence the ordering of the short-mesospins on length-scales beyond that of nearest neighbours.
The highly susceptible short-mesospin matrix acts as an interaction enhancer and propagates the influence of the long-mesospins into the intermediate length-scale, which is reflected in the result presented in Fig.~\ref{fig:GSmaps} and the related ground state abundances. Accounting for the number of elements present in a ground state unit-cell, one can argue that the retrieved values are just a reflection of the degrees of freedom, which are reduced in the SG lattice when compared with the mSG lattice. To eliminate this bias, we also investigated a conditional arrangement of mesospins wherein the two short-mesospins representing the long-mesospin in the mSG lattice have to be ferromagnetically aligned to each other, see Fig.~\ref{fig:GSmaps}b. In this way we restrict the degrees of freedom in this subset of mesospins in the mSG to be the same as that in the SG lattice. Searching for arrangements where two long-mesospins of subsequent long island chains are anti-ferromagnetically aligned to each other, we observed a total of 2139 and 1411 instances for the SG and mSG, respectively. The different abundance of this conditional arrangement in the SG and mSG manifests the influence of the degrees of freedom. By investigating these conditional arrangements for the ground state unit-cells we are comparing a subset of mesospins (six short-mesospins) which is identical in the SG and mSG lattice. If these six mesospins were randomly arranged we would expect an abundance of $\frac{1}{2^6} = 1.6 \%$. Investigating the conditional arrangements for ground state unit-cells, an abundance of $9.8 (\pm0.7) \%$ for the SG and $7.2 (\pm0.7) \%$ for the mSG is found. These values are clearly distinguishable from a random arrangement, considering their uncertainty, and can therefore be attributed to the interaction energies in these lattices. It is striking to see a difference between the SG and mSG lattice only at this length-scale.
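The random baseline quoted above can be checked directly: six independent Ising mesospins reproduce one specific configuration with probability $1/2^6$. The following sketch (our own illustration; the significance comparison is not part of the original analysis) restates the arithmetic:

```python
from fractions import Fraction

# Random baseline: six independent Ising mesospins match one specific
# ground-state unit-cell configuration with probability 1/2^6.
p_random = Fraction(1, 2**6)
print(float(p_random) * 100)  # 1.5625, i.e. the ~1.6 % quoted in the text

# Observed conditional abundances (value, uncertainty), in percent.
observed = {"SG": (9.8, 0.7), "mSG": (7.2, 0.7)}
for lattice, (value, err) in observed.items():
    n_sigma = (value - float(p_random) * 100) / err
    print(f"{lattice}: {n_sigma:.1f} standard deviations above random")
```

Both lattices sit many standard deviations above the non-interacting baseline, consistent with the attribution of the measured abundances to the interaction energies.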
The different abundance of ground state unit-cells in the conditional mesospin arrangements can only be explained by the difference in activation energy for the short- and long-mesospins. The conditional mesospin arrangement therefore shows the direct influence of this activation energy on the intermediate-range ordering in mesoscopic spin systems. \section*{Discussion} \hspace{\parindent} The analysis of the results of this study indicates that the long-mesospins act as ordering seeds, around which the short-mesospins preferably align themselves, which is in agreement with previous studies of the Shakti lattice\cite{ShaktiME}. In contrast to the latter case though, here the short-mesospins affect the character of the magnetic order amongst the long-mesospins, through an apparently stronger interaction between the two distinct mesospin energy-scales, arising from the lattice geometry. These fluctuating Ising-like short-mesospins can be seen as interaction modifiers for the long-mesospin chains, in a similar way as recently presented by \"Ostman {\it et al.} \cite{Ostman_NatPhys_2018}, where the placement of magnetic discs in the center of the square artificial spin ice vertex alters the energy-landscape and hence the magnetic coupling between the Ising-like mesospins. In the case of the SG lattice, the short-mesospins are placed in between the long-mesospins but with a horizontal offset. During the cooling process the long-mesospins strongly interact with the highly susceptible fluctuating short-mesospin matrix, modifying the normally ferromagnetic correlations between the long-mesospins towards an anti-ferromagnetic alignment. As such, the interplay between the energy-hierarchy and the topology needs to be considered during the cooling process, as the resulting magnetic order cannot be understood by simply following a strict separation between the short- and long-mesospin ordering.
A domination of the energy-hierarchy over the topological influence would predominantly favour the formation of ferromagnetic chains along the long-mesospin chains, which we do not observe in these samples. Focusing on the short-range order, we observed that there are no significant differences between the SG and the mSG lattice, neither at the length-scale of the vertex abundances nor at that of the smallest flux closure loops. These observations hint that the two energy-scales have no impact on the magnetic ordering, but turning our attention to the intermediate length-scale we see a significantly lower ground state realization in the mSG lattice, which is independent of the degrees of freedom. We therefore conclude that the two-step freezing process in the SG lattice allows the mesospin system to achieve twice as much ground state realization as in the one-step freezing system of the mSG lattice. This can be explained as an intermediate-range ordering originating from the long-mesospins, while the short-range order is dominated by the short-mesospins. The intermediate length-scale order is improved in the SG lattice by the long-mesospins interacting with the short-mesospin matrix during the freezing process, which is finally expressed in the higher degree of order on this length-scale. The short-range order, in contrast, is dominated purely by the activation energy of the short-mesospins, reflected in the indistinguishable magnetic ordering of the SG and mSG at short range. Our study highlights the importance of the energy-scales, in combination with the topology, for the collective magnetic order in artificial ferroic systems.
A full understanding of this interplay between energy-scales and topology makes it possible to use artificial magnetic nanostructures with differently sized\cite{ShaktiME} and shaped elements\cite{Arnalds2014APL, Ostman_NatPhys_2018}, in order to design systems where emergence and frustration can be studied in a systematic way at the mesoscale\cite{Anderson:1972dn, Nisoli_Kapaklis_NatPhys2017}. Furthermore, this work calls for the development and study of appropriate models accounting for the presence of multiple energy-scales\cite{Wilson:1979wn}, in settings where geometry and/or topology have a significance. This approach may also have long-term importance for the design of arrays with enhanced magnetic reconfigurability, which can be utilized for instance in magnonics\cite{Gliga_PRL2013, Bhat:2018dj, Ciuciulkaite:2019km, Gypens_2018}, exploiting the dependence of their dynamic magnetization spectra on their micromagnetic states\cite{Jungfleisch:2016fa, Lendinez_review_2020, Sloetjes_arXiv_2020}. \section*{Methods} \subsection*{Sample preparation} \hspace{\parindent} The magnetic nano-structures were fabricated by a post-growth patterning process on a thin film of $\delta$-doped Palladium (Iron)\cite{Parnaste2007}. $\delta$-doped Palladium (Iron) is in fact a tri-layer of Palladium (Pd) -- Iron (Fe) -- Palladium (Pd), where the Curie-temperature, $T_{\rm{C}}$, and the thermally active temperature range of the mesospins are defined by the thickness of the Iron layer and the size of the mesospins\cite{ShaktiME,Arnalds2014APL}. In the present study the nominal Fe thickness is 2.0 monolayers, characterised by a Curie-temperature of $T_{\rm{C}}=400$~K\cite{Parnaste2007,Papaioannou2010JPCM} for the continuous films. The post-patterning was carried out at the Center for Functional Nanomaterials (CFN), Brookhaven National Laboratory in Upton, New York.
All structures investigated in the present study, as well as those reported in Stopfel et al.\cite{ShaktiME}, were fabricated on the same substrate from the same $\delta$-doped Pd(Fe) film, ensuring identical intrinsic material properties as well as the same thermal history for all the investigated structures while performing magnetic imaging. The lengths of the stadium-shaped short- and long-mesospins were 450 nm and 1050 nm respectively, while the width was kept at 150 nm for both. The lattice spacing between two parallel neighbouring short-elements was chosen to be 600 nm. For more patterning details see also Stopfel et al.\cite{ShaktiME}. \subsection*{Thermal protocol} \hspace{\parindent} The thermal protocol involves gradual cooling from a superparamagnetic state of the patterned elements towards a completely arrested state at the lowest temperatures. Similar to previous studies on the Shakti lattice\cite{ShaktiME}, and unlike the investigations on square artificial spin ice\cite{Kapaklis2012NJP,Kapaklis2014NatNano}, the SG lattice exhibits two distinctive thermally active regimes, caused by the different sizes, and therefore different intrinsic activation energies, of the lattice building elements. Below the Curie temperature, $T_{\rm{C}}$, the elements are magnetic and considered as mesospins. The temperature-dependent magnetostatic coupling between the elements influences the activation energy of the elements and therefore biases one of the two magnetization directions in the lattice geometry, depending on the states of the adjacent mesospins. \subsection*{Determination of the magnetization direction} \hspace{\parindent} The magnetic state of each mesospin was determined by Photo Emission Electron Microscopy (PEEM) using the X-ray Magnetic Circular Dichroism (XMCD) contrast. The experimental studies were performed at the 11.0.1 PEEM3 beamline of the Advanced Light Source, in Berkeley, California, USA.
The islands were oriented 45$\degree$ with respect to the incoming X-ray beam, to identify the magnetization direction of all elements in the lattices. Multiple PEEM-XMCD images were acquired and stitched together to create one extended image of more than 5000 mesospins. The PEEM-XMCD images were taken at 65~K, far below the Curie-temperature and the blocking temperatures of the short- and long-mesospins. The PEEM-XMCD images were obtained with a sampling time of $t_{s} = 360~s$, which defines the time window linked to the observable thermal stability of the mesospins: At temperatures far below the blocking temperature of the elements, all magnetic states are stable (frozen) and can be imaged for long periods of time with no difference in the measurement. At the other extreme (higher temperatures), when the magnetization reversal times of the mesospins are much smaller than $t_{s}$ and the elements change their magnetic orientation during the duration of the measurement, no magnetic contrast can be obtained. \section*{Acknowledgements} \footnotesize The authors acknowledge support from the Knut and Alice Wallenberg Foundation project `{\it Harnessing light and spins through plasmons at the nanoscale}' (2015.0060), the Swedish Foundation for International Cooperation in Research and Higher Education (Project No. KO2016-6889) and the Swedish Research Council (Project No. 2019-03581). The patterning was performed at the Center for Functional Nanomaterials, Brookhaven National Laboratory, supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-SC0012704. This research used resources of the Advanced Light Source, which is a DOE Office of Science User Facility under contract No. DE-AC02-05CH11231. U.B.A. acknowledges funding from the Icelandic Research Fund grants Nr. 207111 and 152483 and the University of Iceland Research Fund. The authors would also like to express their gratitude to Dr.
Erik \"Ostman for assistance with the PEEM-XMCD measurements and for conversations, Dr. David Greving for help with the electron-beam lithography, as well as to Dr. Cristiano Nisoli (Los Alamos National Laboratory, U.S.A.) and Dr. Ioan-Augustin Chioar (Yale University, U.S.A.) for fruitful discussions and valuable feedback. The excellent on-site support by Dr. Andreas Scholl and Dr. Rajesh V. Chopdekar, at the 11.0.1 PEEM3 beamline of the Advanced Light Source, in Berkeley, California, USA, is also gratefully acknowledged. \section*{Author contributions} \footnotesize H.S. and V.K. designed the experiment. H.S. carried out the thin film deposition and analysed the measured data. H.S. and A.S. performed the electron-beam lithography. H.S., U.B.A. and V.K. carried out the PEEM-XMCD measurements. H.S., T.P.A.H., B.H. and V.K. wrote the manuscript, while all authors commented on the manuscript.
{ "attr-fineweb-edu": 1.641602, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbcI4c3aisKzQywJs
\section{Introduction} In many physical situations, the behavior of some extended manifold is determined by the competition between its internal elasticity and its interaction with an external random potential, which at all reasonable time scales can be treated as quenched. A number of examples of such a kind include domain walls in magnetic materials, vortices and vortex lattices in type-II superconductors, dislocations and phase boundaries in crystals, as well as some types of biological objects. It is expected that the presence of a quenched disorder makes at least some aspects of the behavior of such systems analogous to those of other systems with quenched disorder, in particular, spin glasses. Following Ref. \onlinecite{KZ}, in the case when the internal dimension of an elastic manifold is equal to 1 (that is, an object interacting with a random potential is an elastic string), the systems of such a kind are traditionally discussed under the generic name of a directed polymer in a random medium. In such a case the problem turns out to be formally equivalent \cite{HHF} to the problem of a stochastic growth described by the Kardar-Parisi-Zhang (KPZ) equation \cite{KPZ}, which in its turn can be reduced to the Burgers equation \cite{Burgers} with random force (see Refs. \onlinecite{Kardar-R} and \onlinecite{HHZ} for reviews). The investigation of $P_L(F)$, the free-energy distribution function for a directed polymer (of a large length $L$) in a random potential, was initiated by Kardar \cite{Kardar}, who proposed an asymptotically exact method for the calculation of the moments $Z_n\equiv \overline{Z^n}$ of the distribution of the partition function $Z$ in a $(1+1)\,$-dimensional system (a string confined to a plane) with a $\delta$-correlated random potential and made an attempt at expressing the moments of $P_L(F)$ in terms of $Z_n$. Although soon after that, Medina and Kardar \cite{MK} (see also Refs.
\onlinecite{Kardar-R} and \onlinecite{DIGKB}) realized that the implementation of the latter task is impossible, the knowledge of $Z_n$ allowed Zhang \cite{Zhang} to find the form of the tail of $P_L(F)$ at large negative $F$. Two attempts at generalizing the approach of Ref. \onlinecite{Zhang} to other dimensions were undertaken by Zhang \cite{Zhang90} and Kolomeisky \cite{Kolom}. Quite recently, it was understood \cite{KK07,KK08} that the method of Ref. \onlinecite{Zhang} allows one to study only the most distant part of the tail (the far-left tail), where $P_L(F)$ is not obliged to have the universal form \makebox{$P_L(F)=P_*(F/F_*)/F_*$} (with $F_*\propto L^\omega$) it is supposed to achieve in the thermodynamic limit, $L\rightarrow\infty$. For $(1+1)\,$-dimensional systems the full form of the universal distribution function is known from the ingenious exact solution of the polynuclear growth (PNG) model by Pr\"{a}hofer and Spohn \cite{PS}. However, there is hardly any hope of generalizing this approach to other dimensions or to other forms of random potential distribution. One more essential step in the investigation of different regimes in the behavior of $P_L(F)$ in systems of different dimensions has been made recently \cite{KK07,KK08} on the basis of the optimal fluctuation approach. The original version of this method was introduced in the 1960s for the investigation of the deepest part of the tail of the density of states of quantum particles localized in a quenched random potential \cite{HL,ZL,L67}. Its generalization to the Burgers problem has been constructed in Refs. \onlinecite{GM} and \onlinecite{BFKL}, but for the quantities which in terms of the directed polymer problem are of no direct interest, in contrast to the distribution function $P_L(F)$ studied in Refs. \onlinecite{KK07} and \onlinecite{KK08}. Another accomplishment of Refs.
\onlinecite{KK07} and \onlinecite{KK08} consists in extending the optimal fluctuation approach to the region of the universal behavior of $P_L(F)$, where the form of this distribution function is determined by an effective action with scale-dependent renormalized parameters and does not depend on how the system is described at microscopic scales. In the current work, the results of Refs. \onlinecite{KK07} and \onlinecite{KK08} describing the behavior of $P_L(F)$ at the largest positive fluctuations of the free energy $F$ (where they are not described by the universal distribution function) are rederived at a much more quantitative level by explicitly finding the form of the optimal fluctuation which is achieved in the limit of large $F$. This allows us not only to verify the conjectures used earlier for finding the scaling behavior of $S(F)\equiv -\ln[P_L(F)]$ in the corresponding regime, but also to establish the exact value of the numerical coefficient entering the expression for $S(F)$. For brevity, we call the part of the right tail of $P_L(F)$ studied below the far-right tail. The outline of the paper is as follows. In Sec. \ref{II}, we formulate the continuous model which is traditionally used for the quantitative description of a directed polymer in a random medium and recall how it is related to the KPZ and Burgers problems. Sec. \ref{OFA} briefly describes the saddle-point problem which has to be solved for finding the form of the optimal fluctuation of a random potential leading to a given value of $F$. In Sec. \ref{ExSol}, we construct the exact solution of the saddle-point equations introduced in Sec. \ref{OFA} for the case when the displacement of the considered elastic string is restricted to a plane (or, in other terms, the transverse dimension of the system $d$ is equal to 1).
We do this for sufficiently large positive fluctuations of $F$, when the form of the solution becomes basically independent of temperature $T$ and therefore can be found by setting $T$ to zero. However, the solution constructed in Sec. \ref{ExSol} turns out not to be compatible with the required boundary conditions. Sec. \ref{FreeIC} is devoted to describing how this solution has to be modified to become compatible with the free initial condition, and in Sec. \ref{FixedIC} the same problem is solved for the fixed initial condition. In both cases, we find the asymptotically exact (including a numerical coefficient) expression for $S(F)$ in the limit of large $F$. In Sec. \ref{d>1}, the results of the two previous sections are generalized for the case of an arbitrary $d$ from the interval \makebox{$0<d<2$}, whereas the concluding Sec. \ref{concl} is devoted to summarizing the results. \section{Model \label{II}} In the main part of this work, our attention is focused on an elastic string whose motion is confined to a plane. The coordinate along the average direction of the string is denoted $t$ for the reasons which will become evident a few lines below and $x$ is the string's displacement in the perpendicular direction. Such a string can be described by the Hamiltonian \begin{equation} \label{H} H\{x(t)\} =\int_{0}^{L}dt \left\{\frac{J}{2}\left[\frac{d{ x}(t)}{dt}\right]^2 +V[t,{ x}(t)]\right\} \;, \end{equation} where the first term describes the elastic energy and the second one the interaction with a random potential $V(t,x)$, with $L$ being the total length of the string along the $t$ axis. Note that the form of the first term in Eq. (\ref{H}) relies on the smallness of the angle between the string and its preferred direction.
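At zero temperature, the free energy of the string reduces to the minimum of Eq.~(\ref{H}) over string configurations, and on a lattice this minimum can be found with a simple transfer-matrix recursion. The following sketch is our own illustration, not part of the analysis in this paper; unit spacings and a Gaussian $\delta$-correlated site potential are assumptions made for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
J, L, W = 1.0, 50, 41               # stiffness, length, number of x sites
V = rng.normal(size=(L, W))         # quenched random potential V(t, x)

x = np.arange(W)
elastic = 0.5 * J * (x[:, None] - x[None, :]) ** 2  # J/2 (x - x')^2, dt = 1

# Zero-temperature transfer matrix:
# E(t, x) = min_{x'} [E(t-1, x') + elastic(x, x')] + V(t, x)
E = V[0].copy()                     # free initial condition at t = 0
for t in range(1, L):
    E = np.min(E[None, :] + elastic, axis=1) + V[t]

print(E.min())                      # ground-state energy over the endpoint
```

Pinning the endpoint, as done for the fixed condition at $t=L$, corresponds to reading off `E` at a single site instead of taking the final minimum.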
The partition function of a string which starts at $t=0$ and ends at the point $(t,{x})$ is then given by the functional integral which has exactly the same form as the Euclidean functional integral describing the motion of a quantum-mechanical particle (whose mass is given \makebox{by $J$)} in a time-dependent random potential $V(t,{ x})$ (with $t$ playing the role of imaginary time and temperature $T$ of Planck's constant $\hbar$). As a consequence, the evolution of this partition function with the increase in $t$ is governed \cite{HHF} by the imaginary-time Schr\"{o}dinger equation \begin{equation} \label{dz/dt} -T{\dot z} =\left[-\frac{T^2}{2J}\nabla^2+V(t,{ x})\right]z(t,{ x})\,. \end{equation} Here and below, a dot denotes differentiation with respect to $t$ and $\nabla$ differentiation with respect to $x$. Naturally, $z(t,{ x})$ depends also on the initial condition at $t=0$. In particular, the fixed initial condition, \makebox{${ x}(t=0)={ x}_0$}, corresponds to $z(0,{ x})=\delta({ x}-{ x}_0)$, whereas the free initial condition (which implies the absence of any restrictions \cite{KK08,DIGKB} on $x$ at $t=0$) to \begin{equation} \label{FBC} z(0,{ x})=1\,. \end{equation} Below, the solution of the problem is found for both these types of initial condition. It follows from Eq. (\ref{dz/dt}) that the evolution of the free energy corresponding to $z(t,{ x})$, \begin{equation} \label{} f(t,{ x})=-T\ln\left[z(t,{ x})\right]\,, \end{equation} with the increase in $t$ is governed \cite{HHF} by the KPZ equation \cite{KPZ} \begin{equation} \label{KPZ} {\dot f}+\frac{1}{2J}(\nabla f)^2-\nu \nabla^2 f = V(t,{ x})\,, \end{equation} with the inverted sign of $f$, where $t$ plays the role of time and $\nu\equiv{T}/{2J}$ of viscosity. On the other hand, the differentiation of Eq.
(\ref{KPZ}) with respect to ${ x}$ allows one to establish the equivalence \cite{HHF} between the directed polymer problem and the Burgers equation \cite{Burgers} with random potential force \begin{equation} \label{Burg} {\dot { u}}+u\nabla { u} -\nu\nabla^2{ u}={J}^{-1}\nabla V(t,{ x})\,, \end{equation} where $u(t,x)=\nabla f(t,x)/J$ plays the role of velocity. Note that in terms of the KPZ problem, the free initial condition (\ref{FBC}) corresponds to starting the growth from a flat interface, \makebox{$f(0,{ x})=\mbox{const}$,} and in terms of the Burgers problem, to starting the evolution from a liquid at rest, ${ u}(0,{ x})=0$. To simplify the analytical treatment, the statistics of the random potential $V(t,{ x})$ is usually assumed to be Gaussian with \begin{equation} \label{VV} \overline{V(t,{ x})}=0\,,~~~ \overline{V(t,{ x})V(t',{ x}')}=U(t-t',{ x}-{ x}')\,, \end{equation} where an overbar denotes the average with respect to disorder. Although the analysis below is focused exclusively on the case of purely $\delta$-functional correlations, \begin{equation} \label{U(t,x)} U(t-t',{ x}-{ x}')=U_0\delta(t-t')\delta(x-x')\,, \end{equation} the results we obtain are applicable also in situations when the correlations of $V(t,x)$ are characterized by a finite correlation radius $\xi$ because in the considered regime, the characteristic size of the optimal fluctuation grows with the increase in $L$ and therefore for large-enough $L$, the finiteness of $\xi$ is of no importance and an expression for $U(t-t',{ x}-{ x}')$ can be safely replaced by the right-hand side of Eq. (\ref{U(t,x)}) with \begin{equation} \label{} U_0=\int_{-\infty}^{+\infty}dt\int_{-\infty}^{+\infty}dx\, U(t,{ x})\;. \end{equation} \section{Optimal fluctuation approach\label{OFA}} We want to find the probability of a large positive fluctuation of the free energy of a string which at $t=L$ is fixed at some point $x=x_L$.
It is clear that in the case of free initial condition, the result cannot depend on $x_L$, so, to simplify notation, we assume below $x_L=0$ and analyze fluctuations of $F=f(L,0)-f(0,0)$. As in other situations \cite{HL,ZL,L67}, the probability of a sufficiently large fluctuation of $F$ is determined by the most probable fluctuation of a random potential $V(t,x)$ among those leading to the given value of $F$. In turn, the most probable fluctuation of $V(t,x)$ can be found \cite{FKLM} by looking for the extremum of the Martin-Siggia-Rose action \cite{MSR,dD,Jans} corresponding to the KPZ problem, \begin{eqnarray} S\{f,{V}\} & = & \frac{1}{U_0}\int\limits_0^L \! dt\int_{-\infty}^{+\infty}\! dx \left\{-\frac{1}{2}{V}^2+ \right. \nonumber\\ && +\left. {V}\left[\dot f+\frac{1}{2J}\left(\nabla f\right)^2- \nu\nabla^2f\right] \right\}\;, \label{S(f,V)} \end{eqnarray} both with respect to $f\equiv f(t,x)$ and to a random potential realization $V\equiv V(t,x)$. The form of Eq. (\ref{S(f,V)}) ensures that its variation with respect to $V(t,x)$ reproduces the KPZ equation (\ref{KPZ}), whose substitution back into Eq. (\ref{S(f,V)}) reduces it to the expression \begin{equation} \label{S(V)} S\{V\}=\frac{1}{2U_0}\int_{0}^{L}dt \int_{-\infty}^{+\infty}\!\!dx\, V^2(t,x)\,, \end{equation} which determines the probability of a given realization of a random potential, \makebox{${\cal P}\{V\}\propto\exp(-S\{V\})$.} On the other hand, variation of Eq. (\ref{S(f,V)}) with respect to $f(t,x)$ shows that the time evolution of the optimal fluctuation of a random potential is governed by the equation \cite{Fogedb} \begin{eqnarray} \dot {V}+\frac{1}{J}\nabla\left({V}\nabla f\right)+\nu\nabla^2{V} & =& 0\,, \label{peur-mu} \end{eqnarray} whose form implies that the integral of ${V}(t,x)$ over $dx$ is a conserved quantity. Our aim consists in finding the solution of Eqs.
(\ref{KPZ}) and (\ref{peur-mu}) satisfying the condition \begin{equation} \label{gusl-f} f(L,0)-f(0,0)=F\,, \end{equation} as well as an appropriate initial condition at $t=0$. The application of this procedure corresponds to calculating the full functional integral determining $P_L(F)$ with the help of the saddle-point approximation. In the framework of this approximation, the condition (\ref{gusl-f}) [which can formally be imposed by including the corresponding $\delta$-functional factor into the functional integral determining $P_L(F)$] leads to the appearance of the condition on $V(t,x)$ at $t=L$ \cite{FKLM}, \begin{equation} \label{gusl-mu} {V}(L,x)=\lambda\delta(x)\,, \end{equation} where, however, the value of $\lambda$ should be chosen to ensure the fulfillment of condition (\ref{gusl-f}). The conditions for the applicability of the saddle-point approximation for the analysis of the far-right tail of $P_L(F)$ are given by $S\gg 1$ and $F\gg JU_0^2L/T^4$. The origin of the former inequality is evident, whereas the fulfillment of the latter one makes it possible to neglect the renormalization of the parameters of the system by small-scale fluctuations \cite{cond}. We also assume that $F \gg T$, which ensures that the characteristic length scale of the optimal fluctuation is sufficiently large to neglect the presence of viscous terms in Eqs. (\ref{KPZ}) and (\ref{peur-mu}) \cite{cond}. This allows us to replace Eqs. (\ref{KPZ}) and (\ref{peur-mu}) by \begin{subequations} \label{df&dmu/dt} \begin{eqnarray} \dot f+\frac{1}{2J}\left(\nabla f\right)^2 & = & {V},\quad \label{df/dt}\\ \dot {V}+\frac{1}{J}\nabla\left({V}\nabla f\right) & = & 0\,, \label{dmu/dt} \end{eqnarray} \end{subequations} which formally corresponds to considering the original (directed polymer) problem at zero temperature, $T=0$, where the free energy of a string is reduced to its ground state energy.
In accordance with that, in the $T=0$ limit $f(L,0)$ is given by the minimum of Hamiltonian (\ref{H}) over all string configurations $x(t)$ which at $t=0$ satisfy a chosen initial condition and at $t=L$ end up at $x(L)=0$. Exactly like Eq. (\ref{peur-mu}), Eq. (\ref{dmu/dt}) implies that ${V}(t,x)$ behaves like the density of a conserved quantity, but takes into account only the nondissipative component of the flow of ${V}$ given by ${V} u$, where \begin{equation} \label{} u\equiv u(t,{ x})\equiv{J}^{-1}\nabla f(t,{ x}) \end{equation} plays the role of velocity. Naturally, for $\nu=0$ the time evolution of $u$ is governed by the nondissipative version of the force-driven Burgers equation (\ref{Burg}), \begin{equation} \label{du/dt} \dot u+u\nabla u=\nabla{V}/J\;. \end{equation} \section{Exact solution of the saddle-point equations \label{ExSol}} It is clear from symmetry that in the optimal fluctuation we are looking for, both $f(t,x)$ and ${V}(t,x)$ have to be even functions of $x$. After expanding them at $x=0$ in a Taylor series, it is easy to verify that an exact solution of Eqs. (\ref{df&dmu/dt}) can be constructed by keeping in each of these expansions only the first two terms: \begin{subequations} \label{f&mu} \begin{eqnarray} f(t,x) & = & J\left[A(t)-B(t)x^2\right]\,, \label{f} \\ {V}(t,x) & = & J\left[C(t)-D(t)x^2\right]\,. \label{mu} \end{eqnarray} \end{subequations} Substitution of Eqs. (\ref{f&mu}) into Eqs. (\ref{df&dmu/dt}) then gives a closed system of four equations, \begin{subequations} \label{a&b} \begin{eqnarray} \dot{A} & = & C\,, \\ \dot{B} & = & 2B^2+D\,, \label{eq-b} \end{eqnarray} \end{subequations} \vspace*{-7mm} \begin{subequations} \label{c&d} \begin{eqnarray} \dot{C} & = & 2BC\,,~~~~~ \\ \dot{D} & = & 6BD\,, \label{eq-d} \end{eqnarray} \end{subequations} which determines the evolution of the coefficients $A$, $B$, $C$ and $D$ with the increase in $t$. It is easy to see that with the help of Eq.
(\ref{eq-d}), $D(t)$ can be expressed in terms of $B(t)$, which allows one to transform Eq. (\ref{eq-b}) into a closed equation for $B(t)$, \begin{equation} \label{eqn-b} \dot B=2B^2+D(t_0)\exp\left[6\int_{t_0}^{t}dt'B(t')\right]\;. \end{equation} After making the replacement \begin{equation} \label{b} B(t)=-\frac{\dot{\phi}}{2\phi}\;, \end{equation} Eq. (\ref{eqn-b}) is reduced to an equation of Newton's type, \begin{equation} \label{Neut} \ddot{\phi}+\frac{\alpha}{\phi^2}=0\;, \end{equation} where $\alpha\equiv 2D(t)\phi^3(t)$ is an integral of motion which does not depend on $t$. Eq. (\ref{Neut}) can be easily integrated, which allows one to ascertain that its general solution can be written as \makebox{$\phi(t)=\phi_0\Phi[(t-t_0)/L_*]$}, where $t_0$ and $\phi_0\equiv\phi(t_0)$ are arbitrary constants, \begin{equation} \label{L*} L_*=\frac{\pi}{4[D(t_0)]^{1/2}}\, \end{equation} plays the role of the characteristic time scale, and $\Phi(\eta)$ is an even function of its argument implicitly defined in the interval $-1\leq\eta\leq 1$ by the equation \begin{equation} \label{Phi} \sqrt{\Phi(1-\Phi)}+\arccos\sqrt{\Phi}=\frac{\pi}{2}|\eta|\;. \end{equation} With the increase of $|\eta|$ from $0$ to $1$, $\Phi(\eta)$ monotonically decreases from $1$ to $0$. In particular, on approaching \makebox{$\eta=\pm 1$}, the behavior of $\Phi(\eta)$ is given by \begin{equation} \label{Phi(1)} \Phi(\eta)\approx[(3\pi/4)(1-|\eta|)]^{2/3}\,. \end{equation} Since it is clear from the form of Eq. (\ref{b}) that the constant $\phi_0$ drops out from the expression for $B(t)$, one can, without loss of generality, set $\phi_0=1$ and \begin{equation} \label{phi(t)} \phi(t)=\Phi\left(\frac{t-t_0}{L_*}\right)\,. \end{equation} The functions $A(t)$, $B(t)$, $C(t)$ and $D(t)$ can then be expressed in terms of $\phi\equiv\phi(t)$ as \begin{subequations} \label{a-d} \begin{eqnarray} A(t) & = & A_0+\mbox{sign}(t-t_0)\left.\frac{C_0}{D_0^{1/2}}\right.
\,\arccos\sqrt{\phi}\;, \label{a(t)}\\ \label{b(t)} B(t) & = & \mbox{sign}(t-t_0)\left[D_0(1-\phi)/{\phi^3}\right]^{1/2}\;, \\ C(t) & = & {C_0}/{\phi}\;, \label{c(t)}\\ D(t) & = & {D_0}/{\phi^3}\;, \label{d(t)} \end{eqnarray} \end{subequations} where $A_0=A(t_0)$, $C_0=C(t_0)$ and $D_0=D(t_0)$. Thus we have found an exact solution of Eqs. (\ref{df&dmu/dt}) in which $f(t,x)$ is maximal at $x=0$ (for $t>t_0$) and the value of $f(t,0)$ monotonically grows with the increase in $t$. However, the optimal fluctuation also has to satisfy particular boundary conditions. The modifications of the solution (\ref{phi(t)})-(\ref{a-d}) compatible with two different types of initial conditions, free and fixed, are constructed in Secs. \ref{FreeIC} and \ref{FixedIC}, respectively. \section{Free initial condition \label{FreeIC}} When the initial end point of a polymer (at $t=0$) is not fixed (that is, is free to fluctuate), the boundary condition at $t=0$ can be written as $z(0,x)=1$ or \begin{equation} \label{} f(0,x)=0\,. \end{equation} Apparently, this condition is compatible with Eq. (\ref{f}) and in terms of functions $A(t)$ and $B(t)$ corresponds to \begin{equation} \label{ab=0} A(0)=0\,,~~B(0)=0\,, \end{equation} so that $\dot\phi(0)=0$ and $t_0=0$. However, it is clear that the solution described by Eqs. (\ref{phi(t)})-(\ref{ab=0}) cannot be the optimal one because it does not respect condition (\ref{gusl-mu}) which has to be fulfilled at $t=L$. Moreover, this solution corresponds to an infinite action, and the divergence of the action comes from the regions where the potential ${V}(t,x)$ is negative, which evidently cannot be helpful for the creation of a large positive fluctuation of $f(t,0)$. From the form of Eqs. (\ref{H}) and (\ref{S(V)}), it is clear that any region where \makebox{$V(t,x)<0$} cannot increase the energy of a string but makes a positive contribution to the action. Therefore, in a truly optimal fluctuation with $F>0$, the potential $V(t,x)$ should be either positive or zero.
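The closed-form solution (\ref{a-d}) can be checked numerically. The sketch below (an illustrative verification, not part of the derivation; the constants $C_0$ and $D_0$ are chosen arbitrarily) inverts Eq. (\ref{Phi}) by bisection and confirms by finite differences that $A$, $B$, $C$, $D$ satisfy the system (\ref{a&b}), (\ref{c&d}) at $t>t_0$.

```python
import math

def Phi(eta, tol=1e-14):
    """Invert sqrt(Phi(1-Phi)) + arccos(sqrt(Phi)) = (pi/2)|eta| by bisection."""
    target = (math.pi / 2) * abs(eta)
    lo, hi = 0.0, 1.0          # the left-hand side decreases from pi/2 to 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lhs = math.sqrt(mid * (1 - mid)) + math.acos(math.sqrt(mid))
        lo, hi = (lo, mid) if lhs < target else (mid, hi)
    return (lo + hi) / 2

# illustrative constants (t0 = 0, phi0 = 1); L_* follows from Eq. (L*)
t0, A0, C0, D0 = 0.0, 0.0, 0.3, 0.7
L_star = math.pi / (4 * math.sqrt(D0))

def phi(t): return Phi((t - t0) / L_star)

# closed-form coefficients for t > t0, Eqs. (a(t))-(d(t))
def A(t): return A0 + (C0 / math.sqrt(D0)) * math.acos(math.sqrt(phi(t)))
def B(t):
    p = phi(t)
    return math.sqrt(D0 * (1 - p) / p ** 3)
def C(t): return C0 / phi(t)
def D(t): return D0 / phi(t) ** 3

def deriv(g, t, h=1e-6): return (g(t + h) - g(t - h)) / (2 * h)

# residuals of dot A = C, dot B = 2B^2 + D, dot C = 2BC, dot D = 6BD
def residuals(t):
    return (deriv(A, t) - C(t),
            deriv(B, t) - (2 * B(t) ** 2 + D(t)),
            deriv(C, t) - 2 * B(t) * C(t),
            deriv(D, t) - 6 * B(t) * D(t))

max_res = max(abs(r) for t in (0.3 * L_star, 0.6 * L_star) for r in residuals(t))
```

All four residuals vanish within finite-difference accuracy, so Eqs. (\ref{a-d}) indeed solve Eqs. (\ref{a&b}) and (\ref{c&d}).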
In particular, since just the elastic energy of any configuration $x(t)$ which somewhere crosses or touches the line \begin{equation} \label{} x_*(t)=\left[{2F(L-t)}/{J}\right]^{1/2} \end{equation} and at $t=L$ ends up at $x(L)=0$ is already larger than $F$, there is absolutely no reason for $V(t,x)$ to be nonzero at least for $|x|>x_{\rm *}(t)$. It turns out that the exact solution of the saddle-point equations (\ref{df&dmu/dt}) in which potential $V(t,x)$ satisfies boundary condition (\ref{gusl-mu}) and constraint $V(t,x)\geq 0$ can be constructed on the basis of the solution found in Sec. \ref{ExSol} just by cutting the dependences (\ref{f&mu}) at the points \begin{equation} \label{x*} x=\pm X_{}(t)\,,~~ X_{}(t)\equiv\left[\frac{C(t)}{D(t)}\right]^{1/2}\hspace*{-4mm} =\left(\frac{C_0}{D_0}\right)^{1/2}\hspace*{-3mm}\phi(t)\,, \end{equation} where ${V}(t,x)$ is equal to zero, and replacing them at \makebox{$|x|>X_{}(t)$} by a more trivial solution of the same equations with \makebox{${V}(t,x)\equiv 0$} which at $x=\pm X_{}(t)$ has the same values of $f(t,x)$ and $u(t,x)$ as the solution at $|x|\leq X_{}$. Such a replacement can be done because the flow of ${V}$ through the moving point $x=X_{}(t)$ in both solutions is equal to zero. In accordance with that, the integral of ${V}(t,x)$ over the interval $-X_{}(t)<x<X_{}(t)$ does not depend on $t$. It is clear from Eq. (\ref{x*}) that $X_{}(t)$ is maximal at $t=t_0$ and at $t>t_0$ monotonically decreases with the increase of $t$. The form of $f(t,x)$ at $|x|>X_{}(t)$ is then given by \begin{equation} \label{f1} f(t,x)=f[t,X_{}(t)]+J\int_{X_{}(t)}^{|x|}dx'\,u_0(t,x')\,, \end{equation} where $u_0(t,x)$ is the solution of Eq. (\ref{du/dt}) with zero right-hand side in the region $x>X(t)$ which at $x=X(t)$ satisfies boundary condition \begin{equation} \label{BounCond} u_0[t,X_{}(t)]=v(t)\,. \end{equation} In Eq. (\ref{BounCond}), we have taken into account that in the solution constructed in Sec. 
\ref{ExSol} \makebox{$u[t,X_{}(t)]= -2B(t)X_{}(t)$} coincides with \begin{equation} \label{v} v(t) = \frac{dX_{}}{dt} =\left(\frac{C_0}{D_0}\right)^{1/2}\hspace*{-1.3mm}\dot\phi =-2\sqrt{C_0(\phi^{-1}-1)}\;, \end{equation} the velocity of the point $x=X_{}(t)$. This immediately follows from Eq. (\ref{b}) and ensures that the points where spatial derivatives of $u(t,x)$ and ${V}(t,x)$ have jumps always coincide with each other. It is clear from Eq. (\ref{v}) that $v(t_0)=0$, whereas at $t>t_0$, $v(t)$ is negative and its absolute value monotonically grows with the increase in $t$. Since Eq. (\ref{du/dt}) with vanishing right-hand side implies that the velocity of any Lagrangian particle does not depend on time, its solution satisfying boundary condition (\ref{BounCond}) can be written as \begin{equation} \label{u0} u_0(t,x)= v[\tau(t,x)]\,, \end{equation} where the function $\tau(t,x)$ is implicitly defined by the equation \begin{equation} \label{tau} x = X_{}(\tau)+(t-\tau)v(\tau)\,. \end{equation} The monotonic decrease of $v(\tau)<0$ with the increase in $\tau$ ensures that in the interval \makebox{$X_{}(t)<x<X_{0}\equiv X(t_0)$}, Eq. (\ref{tau}) has a well-defined and unique solution which at fixed $t$ monotonically decreases from $t$ at $x=X_{}(t)$ to $0$ at $x=X_{0}$. In accordance with that, $u_0(t,x)$ as a function of $x$ monotonically increases from $v(t)<0$ at $x=X_{}(t)$ to $0$ at $x=X_{0}$. For free initial condition (implying $t_0=0$), the form of the solution at $x>X_0$ remains the same as in the absence of the optimal fluctuation, that is, $u_0[t,x>X_0]\equiv 0$. The fulfillment of the inequality $\partial u_0(t,x)/\partial x\leq 0$ in the interval $x>X(t)$ demonstrates the absence of any reason for the formation of additional singularities (such as shocks), which confirms the validity of our assumption that the form of the solution can be understood without taking into account viscous terms in the saddle-point equations (\ref{KPZ}) and (\ref{peur-mu}). Substitution of Eqs.
(\ref{u0}) and (\ref{tau}) into Eq. (\ref{f1}) and application of Eqs. (\ref{df/dt}) and (\ref{du/dt}) allow one to reduce Eq. (\ref{f1}) to \begin{equation} \label{f1-b} f(t,x)=\frac{J}{2}\int\limits_{0}^{\tau(t,x)}d\tau'(t-\tau') \frac{dv^2(\tau')}{d\tau'}\,, \end{equation} from which it is immediately clear that on approaching $x=X_{0}$, where $\tau(t,x)$ tends to zero, $f(t,x)$ also tends to zero, so that at $|x|>X_{0}$, the free energy $f(t,x)$ is equal to zero (that is, remains exactly the same as in the absence of optimal fluctuation). However, for our purposes, the exact form of the solution at $|x|>X_{}(t)$ is of no particular importance, because this region does not contribute anything to the action. It is clear that the compatibility of the constructed solution with condition (\ref{gusl-mu}) is achieved when the interval $[-X_{}(t),X_{}(t)]$ where the potential is non-vanishing shrinks to a point. This happens when the argument of the function $\Phi$ in Eq. (\ref{phi(t)}) is equal to 1, that is, when \begin{equation} \label{t0+L*} t_0+L_*=L\,, \end{equation} which for $t_0=0$ corresponds to $L_*=L$ and \begin{equation} \label{d0} D_0=\left(\frac{\pi}{4L}\right)^2. \end{equation} On the other hand, Eq. (\ref{a(t)}) with $A_0=0$ gives \makebox{$A(L)=(\pi/2) C_0/D_0^{1/2}=2LC_0$}. Combined with the condition $A(L)=F/J$ following from Eq. (\ref{gusl-f}), this allows one to conclude that \begin{equation} \label{c0} C_0=\frac{F}{2JL}\;. \end{equation} Thus, for free initial condition, the half-width of the region where the optimal fluctuation of a random potential is localized is equal to \begin{equation} \label{} X_{0}=\left(\frac{C_0}{D_0}\right)^{1/2} =\frac{2}{\pi}\left(\frac{2FL}{J}\right)^{1/2} \end{equation} at $t=0$ (when it is maximal) and monotonically decreases to zero as $X(t)=X_0\Phi(t/L)$ when $t$ increases to $L$.
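The algebra connecting Eqs. (\ref{d0}) and (\ref{c0}) to $A(L)=F/J$ and to the expression for $X_0$ can be confirmed by a short arithmetic sketch (the values of $F$, $J$, $L$ below are arbitrary illustrative assumptions):

```python
import math

# illustrative parameter values
F, J, L = 1.7, 2.3, 0.9

D0 = (math.pi / (4 * L)) ** 2          # Eq. (d0)
C0 = F / (2 * J * L)                   # Eq. (c0)

# A(L) = (pi/2) C0 / D0^{1/2} = 2 L C0 must equal F/J (condition (gusl-f))
A_L = (math.pi / 2) * C0 / math.sqrt(D0)

# half-width of the region where the optimal fluctuation is localized
X0 = math.sqrt(C0 / D0)
X0_closed = (2 / math.pi) * math.sqrt(2 * F * L / J)
```

Both identities hold to machine precision, independently of the chosen values of $F$, $J$ and $L$.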
On the other hand, ${V}(t,0)$, the amplitude of the potential, is minimal at $t=0$ (when it is equal to $F/2L$) and monotonically increases to infinity when $t$ approaches $L$. At the beginning of this section, we have argued that $V(t,x)$ has to vanish at least for \makebox{$|x|>x_*(t)=[2F(L-t)/J]^{1/2}$} and indeed it can be checked that $X(t)<x_*(t)$ at all $t$, the maximum of the ratio $X(t)/x_*(t)$ being approximately equal to $0.765$. In the case of free initial condition, the optimal fluctuation of a random potential at $T=0$ has to ensure that $E(x_0)$, the minimum of $H\{x(t)\}$ over all string configurations with $x(0)=x_0$ and $x(L)=0$, is equal to or larger than $F$ for all values of $x_0$. In particular, for any $x_0$ from the interval $|x_0|<X_0$ where the potential is nonzero, the corresponding energy $E(x_0)$ has to be exactly \emph{equal} to $F$, otherwise there would exist a possibility to locally decrease the potential without violating the condition $E(x_0)\geq F$. The configuration of a string, $x(t)$, which minimizes $H\{x(t)\}$ in the given realization of a random potential for the given values of $x_0$ and $x(L)$, at $0<t<L$ has to satisfy the equation \begin{equation} \label{extrem} -J\frac{d^2x}{dt^2}+\frac{\partial V(t,x)}{\partial x}=0\,, \end{equation} which is obtained by the variation of Hamiltonian (\ref{H}) with respect to $x$. It is not hard to check that for the optimal fluctuation found above, the solution of this equation for an arbitrary $x_0$ from the interval \makebox{$-X_0<x_0<X_0$} can be written as \begin{equation} \label{x(t)} x(t)=\frac{x_0}{X_0}X(t)\,. \end{equation} All these solutions have the same energy, $E(x_0)=F$. The value of the action corresponding to the optimal fluctuation can then be found by substituting Eqs.
(\ref{mu}), (\ref{c(t)}) and (\ref{d(t)}) into the functional (\ref{S(V)}), where the integration over $dx$ should be restricted to the interval $-X(t)<x<X(t)$, which gives \begin{equation} \label{def-b} S_{\rm free}=\frac{8}{15}\frac{C_0^{5/2}}{D_0^{1/2}}\frac{J^2}{U_0} \int\limits_0^{L}\frac{dt}{\phi(t)}= \frac{4\pi}{15}\frac{C_0^{5/2}}{D_0}\frac{J^2}{U_0}\;. \end{equation} The integral over $dt$ in Eq. (\ref{def-b}) can be calculated with the help of the replacement \makebox{$dt/\phi=d\phi/(\phi\dot{\phi})$} and is equal to $\pi/(2D_0^{1/2})$. Substitution of relations (\ref{d0}) and (\ref{c0}) allows one to rewrite Eq. (\ref{def-b}) in terms of the parameters of the original system as \begin{equation} \label{Sfree} S_{\rm free}(F,L)=K\frac{F^{5/2}}{U_0J^{1/2}L^{1/2}}\;,~~~ K=\frac{8\sqrt{2}}{15\pi}\;. \end{equation} The exponents entering Eq. (\ref{Sfree}) have been earlier found in Ref. \onlinecite{KK07} from the scaling arguments based on the assumption that for large $L$, the form of the optimal fluctuation involves a single relevant characteristic length scale with the dimension of $x$ which algebraically depends on the parameters of the system (including $L$) and grows with the increase of $L$ \cite{exp}. The analysis of this section has explicitly confirmed this assumption and has allowed us to find the exact value of the numerical coefficient $K$. Since the characteristic length scale of the solution we constructed is given by \makebox{$X_{0}\equiv X(t_0)\sim(FL/J)^{1/2}$}, the neglect of viscosity $\nu$ remains justified as long as the characteristic relaxation time corresponding to this length scale, $\tau_{\rm rel}\sim X_{0}^2/\nu\sim FL/T$, is much larger than the time scale of this solution, $L$, which corresponds to \begin{equation} \label{F>T} F \gg T\,. \end{equation} Another condition for the validity of Eq. (\ref{Sfree}) is the condition for the direct applicability of the optimal fluctuation approach.
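Two of the numbers quoted above, the coefficient $K=8\sqrt{2}/(15\pi)$ in Eq. (\ref{Sfree}) and the maximum $\approx 0.765$ of the ratio $X(t)/x_*(t)$, can be reproduced by the following illustrative sketch (the values of $F$, $J$, $L$, $U_0$ are arbitrary assumptions; the ratio uses $X(t)/x_*(t)=(2/\pi)\,\Phi(\eta)/\sqrt{1-\eta}$ with $\eta=t/L$):

```python
import math

def Phi(eta, tol=1e-13):
    """Invert sqrt(Phi(1-Phi)) + arccos(sqrt(Phi)) = (pi/2)|eta| by bisection."""
    target = (math.pi / 2) * abs(eta)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lhs = math.sqrt(mid * (1 - mid)) + math.acos(math.sqrt(mid))
        lo, hi = (lo, mid) if lhs < target else (mid, hi)
    return (lo + hi) / 2

# coefficient K: S_free = (4 pi / 15) C0^{5/2} D0^{-1} J^2 / U0 with
# C0 = F/(2JL) and D0 = (pi/4L)^2 must equal K F^{5/2} / (U0 J^{1/2} L^{1/2})
F, J, L, U0 = 1.3, 0.8, 2.1, 0.6       # arbitrary illustrative values
C0, D0 = F / (2 * J * L), (math.pi / (4 * L)) ** 2
S_free = (4 * math.pi / 15) * C0 ** 2.5 / D0 * J ** 2 / U0
K_num = S_free * U0 * math.sqrt(J * L) / F ** 2.5
K_exact = 8 * math.sqrt(2) / (15 * math.pi)

# maximum of X(t)/x_*(t) = (2/pi) Phi(eta) / sqrt(1 - eta), eta = t/L
ratio_max = max((2 / math.pi) * Phi(e) / math.sqrt(1 - e)
                for e in (i / 2000 for i in range(2000)))
```

The numerically found maximum of the ratio agrees with the value $0.765$ cited above to three digits.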
One can disregard any renormalization effects as long as the characteristic velocity inside the optimal fluctuation is much larger \cite{KK08} than the characteristic velocity of equilibrium thermal fluctuations at the length scale $x_c\sim T^3/JU_0$, the only characteristic length scale with the dimension of $x$ which exists in the problem with \makebox{$\delta$-functional} correlations (that is, can be constructed from $T$, $J$ and $U_0$). In terms of $F$, this condition reads \begin{equation} \label{NoRenorm} F\gg U_0^2JL/T^4\,. \end{equation} It is easy to check that the fulfillment of conditions (\ref{F>T}) and (\ref{NoRenorm}) automatically ensures $S\gg 1$, which is also a necessary condition for the applicability of the saddle-point approximation. For $L\gg L_{c}$, where $L_{c}\sim T^5/JU_0^2$ is the only characteristic length scale with the dimension of $L$ which exists in the problem with $\delta$-functional correlations, condition (\ref{F>T}) automatically follows from condition (\ref{NoRenorm}), which can be rewritten as $F\gg (L/L_{c})T$. Thus, for a sufficiently long string (with $L\gg L_{c}$), the only relevant restriction on $F$ is given by Eq. (\ref{NoRenorm}). \section{Fixed initial condition \label{FixedIC}} When both end points of a string are fixed \makebox{[$x(0)=x_0$, $x(L)=x_L$],} one can, without loss of generality, consider the problem with $x_0=x_L$. Due to the existence of so-called tilting symmetry \cite{SVBO}, the only difference between the problems with $x_0=x_L$ and \makebox{$x_0\neq x_L$} consists in the shift of the argument of $P_L(F)$ by \makebox{$\Delta F\equiv J(x_L-x_0)^2/2L$}. For this reason, we consider below only the case $x_0=x_L=0$. When a string is fastened at $t=0$ to the point $x=0$, in terms of $z(t,x)$, the boundary condition at $t=0$ can be written as $z(0,x)\propto\delta(x)$.
In such a case, the behavior of $f(t,x)$ at $t\rightarrow 0$ is dominated by the elastic contribution to energy, which allows one to formulate the boundary condition in terms of $f(t,x)$ as \cite{KK08} \begin{equation} \label{FixedBC} \lim_{t\rightarrow 0}\left[f(t,x)-f^{(0)}(t,x)\right]=0\,, \end{equation} where $f^{(0)}(t,x)=Jx^2/2t$ is the free energy of the same system in the absence of disorder. Since we are explicitly analyzing only the $T\rightarrow 0$ limit, we omit from the expression for $f^{(0)}(t,x)$ the contribution linear in $T$, which vanishes in this limit. The fulfillment of condition (\ref{FixedBC}) can be ensured, in particular, by setting \begin{equation} \label{f(eps)} f(\varepsilon,x)=f^{(0)}(\varepsilon,x)=Jx^2/2\varepsilon\,, \end{equation} which corresponds to suppressing the noise in the interval $0<t<\varepsilon$, and afterwards taking the limit $\varepsilon\rightarrow 0$. Naturally, the free initial condition can also be written in the form (\ref{FixedBC}) but with $f^{(0)}(t,x)=0$. Quite remarkably, the initial condition (\ref{f(eps)}) is compatible with the structure of the solution constructed in Sec. \ref{ExSol} and in terms of functions $A(t)$ and $B(t)$ corresponds to \begin{equation} \label{a&b(eps)} A(\varepsilon)=0,~~~ B(\varepsilon)=-1/2\varepsilon\,. \end{equation} Substitution of Eqs. (\ref{L*}), (\ref{Phi(1)}) and (\ref{phi(t)}) into Eq. (\ref{b(t)}) allows one to establish that for $\varepsilon\ll L_*$, the condition $B(\varepsilon)=-1/2\varepsilon$ corresponds to \begin{equation} \label{t0-L*} t_0-L_*\approx \varepsilon/3\,. \end{equation} Exactly as in the case of free initial condition (see Sec. \ref{FreeIC}), we have to assume that at \makebox{$|x|>X_{}(t)$} the dependences (\ref{f}) and (\ref{mu}) are replaced, respectively, by Eq. (\ref{f1}) and \makebox{${V}(t,x)=0$}.
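The asymptotic relation (\ref{t0-L*}) can be checked without expansions: the sketch below (illustrative only; units are chosen so that $L_*=1$, and $\varepsilon$ is an arbitrary small value) imposes $B(\varepsilon)=-1/2\varepsilon$ on the exact closed-form $B(t)$ of Eq. (\ref{b(t)}) and solves for $t_0$ numerically.

```python
import math

def Phi(eta, tol=1e-14):
    """Invert sqrt(Phi(1-Phi)) + arccos(sqrt(Phi)) = (pi/2)|eta| by bisection."""
    target = (math.pi / 2) * abs(eta)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lhs = math.sqrt(mid * (1 - mid)) + math.acos(math.sqrt(mid))
        lo, hi = (lo, mid) if lhs < target else (mid, hi)
    return (lo + hi) / 2

L_star = 1.0                            # measure times in units of L_*
D0 = (math.pi / (4 * L_star)) ** 2      # Eq. (L*)
eps = 1e-4 * L_star

def B(t, t0):
    """Closed-form B(t) of Eq. (b(t)) on the t < t0 (negative) branch."""
    p = Phi((t - t0) / L_star)
    return -math.sqrt(D0 * (1 - p) / p ** 3)

# find t0 such that B(eps) = -1/(2 eps), i.e. condition (a&b(eps));
# B(eps) + 1/(2 eps) decreases monotonically as t0 grows, so bisect
lo, hi = L_star + 1e-12, L_star + eps - 1e-12
for _ in range(200):
    t0 = (lo + hi) / 2
    if B(eps, t0) + 1 / (2 * eps) > 0:
        lo = t0
    else:
        hi = t0
t0 = (lo + hi) / 2
```

The numerically found root reproduces $t_0-L_*\approx\varepsilon/3$ with an accuracy improving as $\varepsilon/L_*\to 0$.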
The compatibility with condition (\ref{gusl-mu}) is then achieved when the interval \makebox{$-X_{}(t)<x<X_{}(t)$} where the potential is nonvanishing shrinks to a point, the condition for which is given by Eq. (\ref{t0+L*}). A comparison of Eq. (\ref{t0+L*}) with Eq. (\ref{t0-L*}) allows one to conclude that for the initial condition (\ref{a&b(eps)}), $L_*\approx L/2-\varepsilon/6$ and $t_0\approx L/2+\varepsilon/6$, which after taking the limit $\varepsilon\rightarrow 0$ gives \begin{equation} \label{L*t0} L_*=L/2\,,~~~t_0=L/2\,. \end{equation} This unambiguously defines the form of the solution for the case of fixed initial condition. In this solution, the configuration of ${V}(t,x)$ is fully symmetric not only with respect to the change of the sign of $x$ but also with respect to the replacement \begin{equation} \label{t->} t \Rightarrow L-t\,. \end{equation} The origin of this property is quite clear. In terms of an elastic string, the problem we are analyzing now is fully symmetric with respect to replacement (\ref{t->}), therefore it is quite natural that the spatial distribution of the potential in the optimal fluctuation also has to have this symmetry. Since we are considering the limit of zero temperature when the free energy of a string is reduced to its energy, which in turn is just the sum of the energies of the two halves of the string, the form of the potential $V(t,x)$ in the symmetric optimal fluctuation can be found separately for each of the two halves after imposing a free boundary condition at $t=L/2$. This form can be described by Eqs. (\ref{phi(t)}), (\ref{c(t)}) and (\ref{d(t)}) with $L_*=t_0=L/2$, where the values of $C_0$ and $D_0$ can be obtained from Eqs. (\ref{c0}) and (\ref{d0}), respectively, by the replacement \begin{equation} \label{FL} F\Rightarrow F/2\,,~~~L\Rightarrow L/2\,. \end{equation} The value of the action corresponding to the optimal fluctuation can then be found by making the same replacement in Eq.
(\ref{Sfree}) and multiplying the result by the factor of 2, \begin{equation} \label{Sfix} S_{\rm fix}(F,L)=2S_{\rm free}(F/2,L/2) =\frac{1}{2}S_{\rm free}(F,L)\,. \end{equation} Naturally, the conditions for the applicability of Eq. (\ref{Sfix}) are the same as for Eq. (\ref{Sfree}) (see the last two paragraphs of \makebox{Sec. \ref{FreeIC}}). The claim that the optimal fluctuation is symmetric with respect to replacement (\ref{t->}) and therefore both halves of the string make equal contributions to its energy can be additionally confirmed by noting that the sum \begin{equation} S_{\rm free}(F',L/2)+S_{\rm free}(F-F',L/2) \end{equation} is minimal when $F'=F-F'=F/2$. Like in the case of free initial condition, the form of the optimal fluctuation is such that the whole family of extremal string configurations satisfying Eq. (\ref{extrem}) is characterized by the same value of energy, $E\equiv H\{x(t)\}=F$. Formally, this family can again be described by Eq. (\ref{x(t)}), where $x_0$ should now be understood not as $x(0)$ but more generally as $x(t_0)$. \section{Generalization to other dimensionalities \label{d>1}} The same approach can be applied in the situation when the polymer's displacement is not a scalar quantity but a $d$-dimensional vector ${\bf x}$. In such a case, the expressions for the action and for the saddle-point equations retain their form, where the operator $\nabla$ should now be understood as the vector gradient. A spherically symmetric solution of Eqs. (\ref{df&dmu/dt}) can then again be found in the form (\ref{f&mu}) with $x^2\equiv {\bf x}^2$. For arbitrary $d$, substitution of Eqs. (\ref{f&mu}) into Eqs. (\ref{df&dmu/dt}) reproduces Eqs. (\ref{a&b}) in exactly the same form, whereas Eqs. (\ref{c&d}) are replaced by \begin{subequations} \label{c&d'} \begin{eqnarray} \dot{C} & = & 2dBC\,, \\ \dot{D} & = & (4+2d)BD\,. \end{eqnarray} \end{subequations} A general solution of Eqs.
(\ref{a&b}) and (\ref{c&d'}) can then be written as \begin{subequations} \label{a-d'} \begin{eqnarray} A(t) & = & A_0+\mbox{sign}(t-t_0)\frac{C_0I_{-}(\phi,d)}{2(dD_0)^{1/2}} \;, \label{a(t)'}\\ \label{b(t)'} B(t) & = & \mbox{sign}(t-t_0)\left[\frac{D_0}{d}\frac{1-\phi^{d}}{\phi^{2+d}} \right]^{1/2}, \\ C(t) & = & {C_0}/{\phi^d}\;, \label{c(t)'}\\ D(t) & = & {D_0}/{\phi^{2+d}}\;, \label{d(t)'} \end{eqnarray} \end{subequations} where \begin{equation} \label{} \phi\equiv\phi(t)=\Phi\left(\frac{t-t_0}{L_*}\right)\, \end{equation} with \begin{equation} \label{} L_*=\frac{I_{+}(0,d)}{2(dD_0)^{1/2}}\; \end{equation} and $\Phi(\eta)$ is an even function of its argument implicitly defined in the interval $-1\leq\eta\leq 1$ by the equation \begin{equation} \label{Phi'} I_{+}(\Phi,d)=I_{+}(0,d)|\eta|\;. \end{equation} Here, $I_{\pm}(\phi,d)$ stands for the integral \begin{equation} \label{Ipm} I_{\pm}(\phi,d)= \int_{\phi^{d}}^{1}dq\,\frac{q^{1/d-1\pm 1/2}}{(1-q)^{1/2}}\,, \end{equation} in accordance with which $I_{\pm}(0,d)$ is given by the Euler beta function \begin{equation} \label{} I_{\pm}(0,d)= B\left(\frac{1}{2}\;,\frac{1}{d}\pm\frac{1}{2}\right)= \frac{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{1}{d}\pm\frac{1}{2} \right)} {\Gamma\left(\frac{1}{d}+\frac{1}{2}\pm\frac{1}{2}\right)}\;. \end{equation} From the form of Eqs. (\ref{Phi'}) and (\ref{Ipm}), it is clear that with the increase of $|\eta|$ from 0 to 1, the function $\Phi(\eta)$ monotonically decreases from 1 to 0. It is not hard to check that at $d=1$, Eqs. (\ref{a-d'}) and (\ref{Phi'}) are reduced to Eqs. (\ref{a-d}) and (\ref{Phi}), respectively. Exactly as in the case $d=1$, for free initial condition one gets $t_0=0$, $L_*=L$, $A_0=0$ and $A(L)=F/J$, from which one finds \begin{equation} \label{C0D0'} C_0=\frac{2-d}{2}\frac{F}{JL}\,,~~~ D_0=\frac{1}{d}\left[\frac{I_+(0,d)}{2L}\right]^2\;. \end{equation} On the other hand, Eq.
(\ref{def-b}) is replaced by \begin{equation} \label{Sfree'} S_{\rm free}=\frac{4\Omega_d}{d(d+2)(d+4)} \frac{C_0^{1+d/2}}{D_0^{d/2}}\frac{JF}{U_0}\;, \end{equation} where $\Omega_d=2\pi^{d/2}/\Gamma(d/2)$ is the area of a unit sphere in $d$ dimensions. Substitution of Eqs. (\ref{C0D0'}) into Eq. (\ref{Sfree'}) then gives \begin{subequations} \label{Sfree''} \begin{equation} \label{Sfree''a} S_{\rm free}(F,L)= K_d \frac{F^{2+d/2}}{U_0J^{d/2}L^{1-d/2}}\;, \end{equation} with \begin{equation} \label{K} K_d=\frac{8(2-d)^{1+d/2}(2d)^{d/2-1}}{(d+2)(d+4)\Gamma(d/2)} \left[\frac{\Gamma(1/d+1)}{\Gamma(1/d+1/2)}\right]^d. \end{equation} \end{subequations} Naturally, at $d=1$ the numerical coefficient $K_d$ coincides with the coefficient $K$ in Eq. (\ref{Sfree}). As in the case of \makebox{$d=1$}, the value of the action for fixed initial condition, \makebox{${\bf x}(t=0)=0$}, can be found by making in Eq. (\ref{Sfree''a}) replacement (\ref{FL}) and multiplying the result by the factor of 2, which gives \begin{equation} \label{Sfixed'} S_{\rm fix}(F,L)=2S_{\rm free}\left({F}/{2},{L}/{2}\right) =\left(\frac{1}{2}\right)^{\,d}S_{\rm free}(F,L)\,. \end{equation} The exponents entering Eq. (\ref{Sfree''a}) and determining the dependence of $S_{\rm free}$ on the parameters of the system have been earlier found in Ref. \onlinecite{KK08} from the scaling arguments based on the assumption that for large $L$, the form of the optimal fluctuation involves a single relevant characteristic length scale with the dimension of $\bf x$ which algebraically depends on the parameters of the system (including $L$) and grows with the increase of $L$ \cite{exp}. However, the analysis of this section reveals that this length scale, $X_0=(C_0/D_0)^{1/2}$, tends to zero when $d$ approaches $2$ from below, as does the value of the action given by Eqs. (\ref{Sfree''}).
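The properties of $K_d$ stated above can be confirmed by a short numerical sketch (illustrative only): it evaluates Eq. (\ref{K}) at $d=1$ and near $d=2$, and checks the beta-function representation of $I_+(0,d)$ against a direct quadrature of Eq. (\ref{Ipm}).

```python
import math

def K_d(d):
    """Numerical coefficient of Eq. (K)."""
    return (8 * (2 - d) ** (1 + d / 2) * (2 * d) ** (d / 2 - 1)
            / ((d + 2) * (d + 4) * math.gamma(d / 2))
            * (math.gamma(1 / d + 1) / math.gamma(1 / d + 0.5)) ** d)

K_1 = K_d(1.0)                                # must equal K of Eq. (Sfree)
K_exact = 8 * math.sqrt(2) / (15 * math.pi)
K_near_2 = K_d(1.999)                         # vanishes as d -> 2^-

def I_plus0(d, n=200000):
    """Midpoint quadrature of I_+(0,d) = int_0^1 q^{1/d-1/2} (1-q)^{-1/2} dq."""
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** (1 / d - 0.5) / math.sqrt(1 - (i + 0.5) * h)
               for i in range(n)) * h
# at d = 1: I_+(0,1) = B(1/2, 3/2) = pi/2, consistent with Eq. (L*) at d = 1
```

At $d=1$ the coefficient reproduces $8\sqrt{2}/(15\pi)$ to machine precision, and $K_d$ indeed becomes vanishingly small as $d\to 2^-$, in line with the discussion of the $d\geq 2$ case below.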
This provides further evidence that at $d\geq 2$, the problem with purely delta-functional correlations of a random potential becomes ill defined \cite{ill-reg} and has to be regularized in some way, for example, by introducing a finite correlation length for the random potential correlator. In such a situation, the geometrical size of the optimal fluctuation is determined by this correlation length \cite{KK08} and its shape is not universal, that is, depends on the particular form of the random potential correlator. Thus, the range of applicability of Eqs. (\ref{Sfree''}) is restricted to $0<d<2$ and includes only one physical dimension, $d=1$. \section{Conclusion \label{concl}} In the current work, we have investigated the form of $P_L(F)$, the distribution function of the free energy of an elastic string with length $L$ subject to the action of a random potential with a Gaussian distribution. This has been done in the framework of the continuous model traditionally used for the description of such systems, Eq. (\ref{H}). Our attention has been focused on the far-right tail of $P_L(F)$, that is, on the probability of a very large positive fluctuation of the free energy $F$ in the regime when this probability is determined by the probability of the optimal fluctuation of a random potential leading to the given value of $F$. We have constructed the exact solution of the nonlinear saddle-point equations describing the asymptotic form of the optimal fluctuation in the limit of large $F$ when this form becomes independent of temperature. This has allowed us to find not only the scaling form of \makebox{$S(F)=-\ln[P_L(F)]$} but also the value of the numerical coefficient in the asymptotic expression for $S(F)$.
The solution of the problem has been obtained for two different types of boundary conditions (corresponding to fixing either one or both end points of a string) and for an arbitrary dimension of the embedding space \makebox{$1+d$} with $d$ from the interval $0<d<2$ ($d$ being the dimension of the displacement vector). Quite remarkably, in both cases the asymptotic expressions for $S(F)$, Eqs. (\ref{Sfree''}) and (\ref{Sfixed'}), are rather universal. In addition to being independent of temperature, they are applicable not only in the case of the $\delta$-correlated random potential explicitly studied in this work, but also (for a sufficiently large $L$) in the case of a potential whose correlations are characterized by a finite correlation radius. Note that our results cannot be compared to those of Brunet and Derrida \cite{BD} because these authors have considered a very specific regime when the transverse size of a system (with cylindrical geometry) scales in a particular way with its length $L$. Due to the existence of the equivalence \cite{HHF} between the directed polymer and KPZ problems, the distribution function of the directed polymer problem in the situation when only one of the end points is fixed (and the other is free to fluctuate) also describes the fluctuations of height \cite{KK07,KK08} in the \makebox{$d$}-dimensional KPZ problem in the regime of nonstationary growth which has started from a flat configuration of the interface, $L$ being the total time of the growth. The only difference is that the far-right tail of $P_L(F)$ studied in this work corresponds, in the traditional notation of the KPZ problem \cite{KPZ}, to the far-left tail of the height distribution function. In terms of the KPZ problem, the independence of the results from temperature translates into their independence from viscosity. \begin{center} {\bf Acknowledgments} \end{center} The authors are grateful to G. Blatter and V. B. Geshkenbein for useful discussions.
This work has been supported by the RFBR Grant No. 09-02-01192-a and by the RF President Grants for Scientific Schools No. 4930.2008.2 (I.V.K.) and No. 5786.2008.2 (S.E.K.).
\section{Introduction} Properties of finite-dimensional spin glasses are still under active investigation thirty years after the proposal of the basic models \cite{EA,SK}. Among these issues are the existence and critical properties of the spin glass transition, low-temperature slow dynamics, and competition between the spin glass and conventional phases \cite{Young}. Determination of the structure of the phase diagram belongs to this last class of problems. Recent developments of the analytical theory for this purpose \cite{NN,MNN,TN,TSN}, namely a conjecture on the exact location of the multicritical point, have opened a new perspective. The exact value of the multicritical point, in addition to its intrinsic interest as one of the rare exact results for finite-dimensional spin glasses, greatly facilitates precise determination of critical exponents around the multicritical point in numerical studies. The theory to derive the conjectured exact location of the multicritical point has made use of the replica method in conjunction with duality transformation. The latter aspect restricts the direct application of the method to self-dual lattices. It has not been possible to predict the location of the multicritical point for systems on the triangular and hexagonal lattices, although a relation between these mutually-dual cases has been given \cite{TSN}. In the present paper we use a variant of duality transformation to derive a conjecture for the $\pm J$ Ising model on the triangular lattice. The present type of duality allows us to directly map the triangular lattice to another triangular lattice without recourse to the hexagonal lattice or the star-triangle transformation adopted in the conventional approach. This aspect is particularly useful for treating the present disordered system under the replica formalism, as will be shown below. The result agrees impressively with a recent numerical estimate of high precision.
This lends additional support to the reliability of our theoretical framework \cite{NN,MNN,TN,TSN} to derive a conjecture on the exact location of the multicritical point for models of finite-dimensional spin glasses. \section{Ferromagnetic system on the triangular lattice} It will be useful to first review the duality transformation for the non-random $Z_q$ model on the triangular lattice formulated without explicit recourse to the hexagonal lattice or the star-triangle transformation \cite{Wu}. Let us consider the $Z_q$ model with an edge Boltzmann factor $x[\phi_i-\phi_j]$ for neighbouring sites $i$ and $j$. The spin variables $\phi_i$ and $\phi_j$ take values from 0 to $q-1$ (mod $q$). The function $x[\cdot]$ itself is also defined with mod $q$. An example is the clock model with coupling $K$, \begin{equation} x[\phi_i-\phi_j]=\exp \left\{ K\cos \left(\frac{2\pi }{q} (\phi_i-\phi_j) \right)\right\}. \end{equation} The Ising model corresponds to the case $q=2$. The partition function may be written as \begin{equation} Z=q \sum_{\{ \phi_{ij}\}} \prod_{\bigtriangleup}x[\phi_{12}]x[\phi_{23}]x[\phi_{31}] \delta (\phi_{12}+\phi_{23}+\phi_{31}) \prod_{\bigtriangledown}\delta \left(\sum \phi_{ij}\right). \label{Zdef} \end{equation} Here the product over $\bigtriangleup$ runs over up-pointing triangles shown shaded in Fig. \ref{fig1} and that for $\bigtriangledown$ is over unshaded down-pointing triangles. \begin{figure}[tb] \begin{center} \includegraphics[width=60mm]{fig1.eps} \end{center} \caption{Triangular lattice and its dual (shown dashed). The up-pointing triangle surrounded by the variables $\phi_1, \phi_2, \phi_3$ is transformed into the down-pointing triangle surrounded by $k_1, k_2, k_3$.} \label{fig1} \end{figure} The variable of summation is not written as the original $\phi_i$ but in terms of the directed difference $\phi_{ij}=\phi_i-\phi_j$ defined on each bond. 
This is possible if we introduce restrictions represented by the Kronecker deltas (which are defined with mod $q$) as in eq. (\ref{Zdef}) allocated to all up-pointing and down-pointing triangles. For instance, $\phi_{12}(=\phi_1-\phi_2)$, $\phi_{23}(=\phi_2-\phi_3)$, and $\phi_{31}(=\phi_3-\phi_1)$ are not independent but satisfy $\phi_{12}+\phi_{23}+\phi_{31}=0$ (mod $q$), where 1, 2, and 3 are sites around the unit triangle as indicated in Fig. \ref{fig1}. The overall factor $q$ on the right hand side of eq. (\ref{Zdef}) reflects the invariance of the system under the uniform change $\phi_i\to\phi_i+l~(\forall i, 0\le l \le q-1)$. It is convenient to Fourier-transform the Kronecker deltas for down-pointing triangles and allocate the resulting exponential factors to the edges of three neighbouring up-pointing triangles. Then the partition function can be written only in terms of a product over up-pointing triangles: \begin{equation} Z=q \sum_{\{k_i\}}\sum_{\{ \phi_{ij}\}} \prod_{\bigtriangleup}\left[\frac{1}{q}A[\phi_{12}, \phi_{23}, \phi_{31}] \delta (\phi_{12}+\phi_{23}+\phi_{31}) \exp\left\{ \frac{2\pi i}{q}(k_1\phi_{23}+k_2\phi_{31}+k_3\phi_{12})\right\}\right], \label{Z2} \end{equation} where $A[\phi_{12}, \phi_{23}, \phi_{31}]=x[\phi_{12}]x[\phi_{23}]x[\phi_{31}]$. Now let us regard the product over up-pointing triangles in eq. (\ref{Z2}) as a product over down-pointing triangles overlaying the original up-pointing triangles as shown dashed in Fig. \ref{fig1}. This viewpoint allows us to regard the quantity in the square brackets of eq. 
(\ref{Z2}) as the Boltzmann factor for the unit triangle (to be called the face Boltzmann factor hereafter) of the dual triangular lattice composed of overlaying down-pointing triangles: \begin{equation} Z=q \sum_{\{k_i\}} \prod_{\bigtriangledown^*}A^*[k_{12}, k_{23}, k_{31}], \end{equation} where \begin{eqnarray} A^*[k_{12}, k_{23}, k_{31}]&=&\frac{1}{q}\sum_{\phi_{12},\phi_{23},\phi_{31}=0}^{q-1} A[\phi_{12},\phi_{23},\phi_{31}]\delta (\phi_{12}+\phi_{23}+\phi_{31}) \nonumber\\ &&\times\exp\left\{ \frac{2\pi i}{q} (k_1\phi_{23}+k_2\phi_{31}+k_3\phi_{12})\right\}. \label{A_dual} \end{eqnarray} Here we have used the fact that the right hand side is a function of the differences $k_i -k_j\equiv k_{ij}~((ij)=(12), (23), (31))$ due to the constraint $\phi_{12}+\phi_{23}+\phi_{31}=0$. This is a duality relation which exchanges the original model on the triangular lattice with a dual system on the dual triangular lattice. $A^*[k_{12},k_{23},k_{31}]$ represents the face Boltzmann factor of the dual system, which is a function of the differences between nearest-neighbour sites on the unit triangles, similarly to the face Boltzmann factor $A[\phi_{12},\phi_{23},\phi_{31}]$ of the original system. It is easy to verify that the usual duality relation for the triangular lattice emerges from the present formulation. As an example, the ferromagnetic Ising model on the triangular lattice has the following face Boltzmann factors: \begin{equation} A[0, 0, 0]={\rm e}^{3K},~~A[1, 1, 0]=A[1,0,1]=A[0,1,1]={\rm e}^{-K},\cdots , \end{equation} where $A[0, 0, 0]$ is for the all-parallel neighbouring spin configuration for three edges of a unit triangle, and $A[1, 1, 0]$ is for two antiparallel pairs and a single parallel pair around a unit triangle. The dual factors are, according to eq. (\ref{A_dual}), \begin{equation} A^*[0,0,0]=\frac{1}{2}\left\{A[0,0,0]+3A[1,1,0]\right\}, ~A^*[1,1,0]=\frac{1}{2}\left\{A[0,0,0]-A[1,1,0]\right\}.
\end{equation} It then follows that \begin{equation} {\rm e}^{-4K^*}\equiv \frac{A^*[1,1,0]}{A^*[0,0,0]} =\frac{1-{\rm e}^{-4K}}{1+3{\rm e}^{-4K}}. \end{equation} This formula is equivalent to the expression obtained by the ordinary duality, which relates the triangular lattice to the hexagonal lattice, followed by the star-triangle transformation: \begin{equation} (1+3{\rm e}^{-4K})(1+3{\rm e}^{-4K^*})=4. \end{equation} \section{Replicated system} It is straightforward to generalize the formulation of the previous section to the spin glass model using the replica method. The duality relation for the face Boltzmann factor of the replicated system is \begin{eqnarray} A^*[\{k_{12}^{\alpha}\},\{k_{23}^{\alpha}\},\{k_{31}^{\alpha}\}] &=&\frac{1}{q^n} \sum_{\{\phi\}} \left[ \prod_{\alpha=1}^n \delta (\phi_a^{\alpha}+\phi_b^{\alpha}+\phi_c^{\alpha})\right] A\left[\{\phi_a^{\alpha}\},\{\phi_b^{\alpha}\},\{\phi_c^{\alpha}\}\right] \nonumber\\ &&\times\exp \left\{\frac{2\pi i}{q} \sum_{\alpha=1}^n (k_1^{\alpha}\phi_b^{\alpha}+k_2^{\alpha}\phi_c^{\alpha} +k_3^{\alpha}\phi_a^{\alpha}) \right\}, \label{Astar} \end{eqnarray} where $\alpha$ is the replica index running from 1 to $n$, $\{k_{ij}^{\alpha}\}$ denotes the set $\{k_{ij}^{1}, k_{ij}^2, \cdots, k_{ij}^n\}$, and similarly for $\{\phi_a^{\alpha}\}$ etc. The variables $\phi_a^{\alpha}, \phi_b^{\alpha}, \phi_c^{\alpha}$ correspond to $\phi_{12},\phi_{23},\phi_{31}$ in eq. (\ref{Z2}). The original face Boltzmann factor is the product of three edge Boltzmann factors \begin{equation} A\left[\{\phi_a^{\alpha}\},\{\phi_b^{\alpha}\},\{\phi_c^{\alpha}\}\right] =\chi_{\phi_a^1\cdots \phi_a^n}\cdot\chi_{\phi_b^1\cdots \phi_b^n}\cdot \chi_{\phi_c^1\cdots \phi_c^n}, \end{equation} where $\chi_{\phi^1\cdots \phi^n}$ is the averaged edge Boltzmann factor \begin{equation} \chi_{\phi^1\cdots \phi^n}=\sum_{l=0}^{q-1}p_l \, x[\phi^1+l]x[\phi^2+l]\cdots x[\phi^n+l]. 
\end{equation} Here $p_l$ is the probability that the relative value of neighbouring spin variables is shifted by $l$. A simple example is the $\pm J$ Ising model ($q=2$), in which $p_0=p$ (ferromagnetic interaction) and $p_1=1-p$ (antiferromagnetic interaction). The average of the replicated partition function $Z_n$ is a function of face Boltzmann factors for various values of $\phi$'s. The triangular-triangular duality relation is then written as\cite{MNN,TN,TSN} \begin{equation} Z_n(A[\{0\},\{0\},\{0\}],\cdots ) =cZ_n(A^*[\{0\},\{0\},\{0\}],\cdots ), \label{duality_Zn} \end{equation} where $c$ is a trivial constant and $\{0\}$ denotes the set of $n$ 0's. Since eq. (\ref{duality_Zn}) is a duality relation for a multivariable function, it is in general impossible to identify the singularity of the system with the fixed point of the duality transformation. Nevertheless, it has been firmly established in simpler cases (such as the square lattice) that the location of the multicritical point in the phase diagram of spin glasses can be predicted very accurately, possibly exactly, by using the fixed-point condition of the principal Boltzmann factor for all-parallel configuration $\{0\}$. \cite{NN,MNN,TN,TSN} We therefore try the ansatz also for the triangular lattice that the exact location of the multicritical point of the replicated system is given by the fixed-point condition of the principal face Boltzmann factor: \begin{equation} A\left[\{0\},\{0\},\{0\}\right]=A^*\left[\{0\},\{0\},\{0\}\right], \label{AAstar} \end{equation} combined with the Nishimori line (NL) condition, on which the multicritical point is expected to lie \cite{HN81,HNbook}. For simplicity, we restrict ourselves to the $\pm J$ Ising model hereafter. Then the NL condition is ${\rm e}^{-2K}=(1-p)/p$, where $p$ is the probability of ferromagnetic interaction. 
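Before evaluating the replicated Boltzmann factors, it is instructive to check the non-random duality of the previous section numerically in the Ising case $q=2$. The sketch below (in Python; the coupling value is an arbitrary choice) sums eq. (\ref{A_dual}) directly, recovers the dual factors $A^*[0,0,0]$ and $A^*[1,1,0]$, the relation between ${\rm e}^{-4K^*}$ and ${\rm e}^{-4K}$, and the self-dual point ${\rm e}^{4K}=3$.

```python
import math

def A_face(p12, p23, p31, K):
    # Ising face factor: product of edge factors x[0] = e^K, x[1] = e^{-K}
    return math.exp(K * ((1 - 2 * p12) + (1 - 2 * p23) + (1 - 2 * p31)))

def dual_face(k1, k2, k3, K):
    # Eq. (A_dual) at q = 2: the Fourier phase exp(pi i m) is just (-1)^m
    total = 0.0
    for p12 in (0, 1):
        for p23 in (0, 1):
            p31 = (p12 + p23) % 2             # Kronecker delta constraint
            total += A_face(p12, p23, p31, K) * (-1) ** (k1 * p23 + k2 * p31 + k3 * p12)
    return total / 2.0

K = 0.37                                      # arbitrary coupling
A0 = dual_face(0, 0, 0, K)                    # principal dual factor A*[0,0,0]
A1 = dual_face(1, 0, 0, K)                    # dual factor with two nonzero differences

assert abs(A0 - 0.5 * (math.exp(3 * K) + 3 * math.exp(-K))) < 1e-12
assert abs(A1 - 0.5 * (math.exp(3 * K) - math.exp(-K))) < 1e-12

x, xs = math.exp(-4 * K), A1 / A0             # xs plays the role of e^{-4K*}
assert abs(xs - (1 - x) / (1 + 3 * x)) < 1e-12
assert abs((1 + 3 * x) * (1 + 3 * xs) - 4) < 1e-12

# fixed point e^{4K} = 3: the dual coupling equals the original one
Kc = 0.25 * math.log(3)
assert abs(dual_face(1, 0, 0, Kc) / dual_face(0, 0, 0, Kc) - 1 / 3) < 1e-12
```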
The original face Boltzmann factor is a simple product of three edge Boltzmann factors, \begin{equation} A\left[\{0\},\{0\},\{0\}\right]=\chi_0^3 \end{equation} with \begin{equation} \chi_0=p{\rm e}^{nK}+(1-p){\rm e}^{-nK}=\frac{{\rm e}^{(n+1)K} +{\rm e}^{-(n+1)K}}{{\rm e}^K+{\rm e}^{-K}}. \end{equation} The dual Boltzmann factor $A^*[\{0\},\{0\},\{0\}]$ needs a more elaborate treatment. The Kronecker delta constraint in eq. (\ref{Astar}) may be expressed as \begin{equation} \prod_{\alpha=1}^n \delta (\phi_a^{\alpha}+\phi_b^{\alpha}+\phi_c^{\alpha}) =2^{-n}\sum_{\{\tau_{\alpha}=0,1\}}\exp\left[ \pi i\sum_{\alpha =1}^n \tau_{\alpha} (\phi_a^{\alpha}+\phi_b^{\alpha}+\phi_c^{\alpha})\right]. \label{product_delta} \end{equation} The face Boltzmann factor $A\left[\{\phi_a^{\alpha}\},\{\phi_b^{\alpha}\},\{\phi_c^{\alpha}\}\right]$ is the product of three edge Boltzmann factors, each of which may be written as, on the NL, \cite{TSN} \begin{eqnarray} \chi_{\phi_a^1\cdots \phi_a^n} &=&p \exp\left[ \sum_{\alpha=1}^n (1-2\phi_a^{\alpha})K\right]+ (1-p)\exp\left[-\sum_{\alpha=1}^n (1-2\phi_a^{\alpha})K\right] \nonumber\\ &=&\frac{1}{2\cosh K}\sum_{\eta_a=\pm 1}\exp\left[\eta_a K+\eta_a K \sum_{\alpha=1}^n (1-2\phi_a^{\alpha})\right]. \label{chi_a} \end{eqnarray} Using eqs. (\ref{product_delta}) and (\ref{chi_a}), eq. (\ref{Astar}) can be rewritten as, for the principal Boltzmann factor with all $k_i^{\alpha}=0$, \begin{eqnarray} && A^*\left[\{0\},\{0\},\{0\}\right] =\frac{1}{4^n (2\cosh K)^3} \sum_{\eta}\sum_{\tau}\sum_{\phi} \exp\left[ \pi i \sum_{\alpha}\tau_{\alpha}(\phi_a^{\alpha} +\phi_b^{\alpha}+\phi_c^{\alpha})\right. \nonumber\\ &&+\left. K(\eta_a+\eta_b+\eta_c)+K\eta_a\sum_{\alpha}(1-2\phi_a^{\alpha}) +K\eta_b\sum_{\alpha}(1-2\phi_b^{\alpha}) +K\eta_c\sum_{\alpha}(1-2\phi_c^{\alpha})\right]. \label{AAexp} \end{eqnarray} The right hand side of this equation can be evaluated explicitly as shown in the Appendix.
The result is \begin{eqnarray} &&A^*\left[\{0\},\{0\},\{0\}\right]=4^{-n}(2\cosh K)^{3n-3} \nonumber\\ && \times \left[ ({\rm e}^{3K}+3{\rm e}^{-K})(1+\tanh^3 K)^n +(3{\rm e}^{K}+{\rm e}^{-3K})(1-\tanh^3 K)^n \right]. \label{AAresult} \end{eqnarray} The prescription (\ref{AAstar}) for the multicritical point is therefore \begin{eqnarray} &&\frac{\cosh^3 (n+1)K}{\cosh^3 K}= \frac{(2\cosh K)^{3n}}{4^n (2\cosh K)^3} \nonumber\\ &&\times \left[ ({\rm e}^{3K}+3{\rm e}^{-K})(1+\tanh^3 K)^n +(3{\rm e}^{K}+{\rm e}^{-3K})(1-\tanh^3 K)^n \right]. \label{MCP} \end{eqnarray} \section{Multicritical point} The conjecture (\ref{MCP}) for the exact location of the multicritical point can be verified for $n=1, 2$ and $\infty$ since these cases can be treated directly without using the above formulation. The case $n=1$ is an annealed system and the problem can be solved explicitly. It is easy to show that the annealed $\pm J$ Ising model is equivalent to the ferromagnetic Ising model with effective coupling $\tilde{K}$ satisfying \begin{equation} \tanh \tilde{K}=(2p-1)\tanh K. \label{annealed1} \end{equation} If we insert the transition point of the ferromagnetic Ising model on the triangular lattice ${\rm e}^{4\tilde{K}}=3$, eq. (\ref{annealed1}) reads \begin{equation} (2p-1)\tanh K=2-\sqrt{3}. \label{annealed2} \end{equation} This formula represents the exact phase boundary for the annealed system. Under the NL condition, it is straightforward to verify that this expression agrees with the conjectured multicritical point of eq. (\ref{MCP}). It is indeed possible to show further that the whole phase boundary of eq. (\ref{annealed2}) can be derived by evaluating $A^{*}[0,0,0]$ directly for $n=1$ for arbitrary $p$ and $K$, giving $2A^{*}[0,0,0]=\chi_0^3+3\chi_1^2 \chi_0$ with $\chi_0=p{\rm e}^K+(1-p){\rm e}^{-K}, \chi_1=p{\rm e}^{-K}+(1-p){\rm e}^{K}$, and using the condition $A^{*}[0,0,0]=A[0,0,0]$. 
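This agreement at $n=1$ can also be confirmed numerically: the sketch below (in Python) solves eq. (\ref{MCP}) with $n=1$ by bisection and compares the root with the annealed condition $(2p-1)\tanh K = 2-\sqrt{3}$, which on the NL (where $2p-1=\tanh K$) reads $\tanh^2 K = 2-\sqrt{3}$.

```python
import math

def mcp_gap(K, n):
    # LHS minus RHS of Eq. (MCP) on the Nishimori line
    t3 = math.tanh(K) ** 3
    lhs = (math.cosh((n + 1) * K) / math.cosh(K)) ** 3
    rhs = ((2 * math.cosh(K)) ** (3 * n) / (4 ** n * (2 * math.cosh(K)) ** 3)
           * ((math.exp(3 * K) + 3 * math.exp(-K)) * (1 + t3) ** n
              + (3 * math.exp(K) + math.exp(-3 * K)) * (1 - t3) ** n))
    return lhs - rhs

def bisect(f, a, b, tol=1e-13):
    # elementary bisection; assumes a sign change of f on [a, b]
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

# n = 1: the root of Eq. (MCP) matches the annealed multicritical point,
# tanh^2 K = 2 - sqrt(3) on the Nishimori line
K1 = bisect(lambda K: mcp_gap(K, 1), 0.1, 2.0)
assert abs(K1 - math.atanh(math.sqrt(2 - math.sqrt(3)))) < 1e-9
```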
When $n=2$, a direct evaluation of the edge Boltzmann factor reveals that the system on the NL is a four-state Potts model with effective coupling $\tilde{K}$ satisfying \cite{TSN,MNN} \begin{equation} {\rm e}^{\tilde{K}}={\rm e}^{2K}-1+{\rm e}^{-2K}. \end{equation} Since the transition point of the non-random four-state Potts model on the triangular lattice is given by ${\rm e}^{\tilde{K}}=2$ \cite{Wu}, the (multi)critical point of the $n=2$ system is specified by the relation \begin{equation} {\rm e}^{2K}-1+{\rm e}^{-2K}=2. \end{equation} Equation (\ref{MCP}) with $n=2$ also gives this same expression, which confirms the validity of our conjecture in the present case as well. The limit $n\to\infty$ can be analyzed as follows.\cite{MNN} The average of the replicated partition function \begin{equation} [Z^n]_{\rm av}=[{\rm e}^{-n\beta F}]_{\rm av}, \end{equation} where $[\cdots ]_{\rm av}$ denotes the configurational average, is dominated in the limit $n\to\infty$ by contributions from bond configurations with the smallest value of the free energy $F$. It is expected that the bond configurations without frustration (i.e., the ferromagnetic Ising model and its gauge equivalents) have the smallest free energy, and therefore we may reasonably expect that the $n\to\infty$ system is described by the non-random model. Thus the critical point is given by ${\rm e}^{4K}=3$. It is straightforward to check that eq. (\ref{MCP}) reduces to the same equation in the limit $n\to\infty$. These analyses give us a good motivation to apply eq. (\ref{MCP}) to the quenched limit $n\to 0$. Expanding eq.
(\ref{MCP}) around $n=0$, we find, from the coefficients of terms linear in $n$, \begin{eqnarray} 3K\tanh K&=&3 \log (2\cosh K)-\log 4 \nonumber\\ &&+\frac{{\rm e}^{3K}+3{\rm e}^{-K}}{(2\cosh K)^3}\log (1+\tanh^3 K) +\frac{3{\rm e}^{K}+{\rm e}^{-3K}}{(2\cosh K)^3}\log (1-\tanh^3 K), \end{eqnarray} or, in terms of $p$, using ${\rm e}^{-2K}=(1-p)/p$, \begin{eqnarray} &&2p^2(3-2p)\log p+2(1-p)^2(1+2p)\log (1-p)+\log 2 \nonumber\\ && =p(4p^2-6p+3)\log (4p^2-6p+3) +(1-p)(4p^2-2p+1)\log (4p^2-2p+1). \label{pc} \end{eqnarray} Equation (\ref{pc}) is our conjecture for the exact location of the multicritical point $p_{\rm c}$ of the $\pm J$ Ising model on the triangular lattice. This gives $p_{\rm c}=0.8358058$, which agrees well with a recent high-precision numerical estimate, 0.8355(5) \cite{Queiroz}. If we further use the conjecture \cite{TSN} $H(p_{\rm c})+H(p_{\rm c}')=1$, where $H(p)$ is the binary entropy $-p\log_2 p-(1-p)\log_2 (1-p)$, to relate this $p_{\rm c}$ with that for the hexagonal lattice $p_{\rm c}'$, we find $p_{\rm c}'=0.9327041$. Again, the numerical result 0.9325(5) \cite{Queiroz} is very close to this prediction. \section{Conclusion} To summarize, we have formulated the duality transformation of the replicated random system on the triangular lattice, which brings the triangular lattice to a dual triangular lattice without recourse to the hexagonal lattice. The result was used to predict the exact location of the multicritical point of the $\pm J$ Ising model on the triangular lattice. The correctness of our theory has been confirmed in the directly solvable cases of $n=1, 2$ and $\infty$. Application to the quenched limit $n\to 0$ yielded a value in impressive agreement with a numerical estimate. The status of our result for the quenched system, eq. (\ref{pc}), is a conjecture for the exact solution. It is difficult at present to prove this formula rigorously. This is the same situation as in cases for other lattices and models \cite{NN,MNN,TN,TSN}.
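The quoted values are easy to reproduce with a few lines of bisection. In the sketch below (Python; the bracketing intervals are ad hoc choices) the root of eq. (\ref{pc}) gives $p_{\rm c}$, and a second bisection on $H(p_{\rm c})+H(p_{\rm c}')=1$ gives the hexagonal partner $p_{\rm c}'$.

```python
import math

def pc_gap(p):
    # LHS minus RHS of Eq. (pc)
    q = 1 - p
    u = 4 * p ** 2 - 6 * p + 3
    v = 4 * p ** 2 - 2 * p + 1
    lhs = (2 * p ** 2 * (3 - 2 * p) * math.log(p)
           + 2 * q ** 2 * (1 + 2 * p) * math.log(q) + math.log(2))
    return lhs - p * u * math.log(u) - q * v * math.log(v)

def H(p):
    # binary entropy in bits
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bisect(f, a, b, tol=1e-14):
    # elementary bisection; assumes a sign change of f on [a, b]
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

p_c = bisect(pc_gap, 0.75, 0.95)
assert abs(p_c - 0.8358058) < 1e-6

# hexagonal partner through H(p_c) + H(p_c') = 1 (H is decreasing on (1/2, 1))
p_hex = bisect(lambda p: H(p) - (1 - H(p_c)), 0.9, 0.999)
assert abs(p_hex - 0.9327041) < 1e-6
```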
We nevertheless expect that such a proof should be eventually possible since a single unified theoretical framework always gives results in excellent agreement with independent numerical estimations for a wide range of systems. Further efforts toward a formal proof are required. \section*{Acknowledgement} This work was supported by the Grant-in-Aid for Scientific Research on Priority Area ``Statistical-Mechanical Approach to Probabilistic Information Processing" by the MEXT. \section*{Appendix} In this Appendix we evaluate eq. (\ref{AAexp}) to give eq. (\ref{AAresult}). Let us denote $4^n (2\cosh K)^3A^*[\{0\},\{0\},\{0\}]$ as $\tilde{A}$. The sums over $\alpha$ in the exponent of eq. (\ref{AAexp}) can be expressed as the product over $\alpha$: \begin{eqnarray} \tilde{A}&=&\sum_{\eta}{\rm e}^{K(n+1)(\eta_a+\eta_b+\eta_c)} \nonumber\\ &\times& \prod_{\alpha=1}^n \left[ \sum_{\tau_{\alpha}=0}^{1}\left( \sum_{\phi_{a}^{\alpha}=0}^1 {\rm e}^{\pi i\tau_{\alpha}\phi_a^{\alpha}-2K\eta_a \phi_a^{\alpha}} \sum_{\phi_{b}^{\alpha}=0}^1 {\rm e}^{\pi i\tau_{\alpha}\phi_b^{\alpha}-2K\eta_b \phi_b^{\alpha}} \sum_{\phi_{c}^{\alpha}=0}^1 {\rm e}^{\pi i\tau_{\alpha}\phi_c^{\alpha}-2K\eta_c \phi_c^{\alpha}}\right) \right]. \end{eqnarray} By performing the sums over $\phi$ and $\tau$ for each replica, we find \begin{eqnarray} \tilde{A}&=&\sum_{\eta_a,\eta_b,\eta_c=\pm 1} {\rm e}^{K(n+1)(\eta_a+\eta_b+\eta_c)} \prod_{\alpha=1}^n \left[ (1+{\rm e}^{-2K\eta_a})(1+{\rm e}^{-2K\eta_b})(1+{\rm e}^{-2K\eta_c})\right. \nonumber\\ && \left. +(1-{\rm e}^{-2K\eta_a})(1-{\rm e}^{-2K\eta_b})(1-{\rm e}^{-2K\eta_c})\right]. 
\end{eqnarray} It is straightforward to write down the eight terms appearing in the above sum over $\eta_a, \eta_b, \eta_c$ to yield \begin{eqnarray} \tilde{A}&=&{\rm e}^{3K(n+1)}\left[ (1+{\rm e}^{-2K})^3+(1-{\rm e}^{-2K})^3\right]^n \nonumber\\ &+&3{\rm e}^{K(n+1)}\left[(1+{\rm e}^{-2K})^2 (1+{\rm e}^{2K})+(1-{\rm e}^{-2K})^2(1-{\rm e}^{2K})\right]^n \nonumber\\ &+&3{\rm e}^{-K(n+1)}\left[(1+{\rm e}^{-2K}) (1+{\rm e}^{2K})^2+(1-{\rm e}^{-2K})(1-{\rm e}^{2K})^2\right]^n \nonumber\\ &+&{\rm e}^{-3K(n+1)}\left[(1+{\rm e}^{2K})^3+(1-{\rm e}^{2K})^3\right]^n, \end{eqnarray} which is further simplified into \begin{eqnarray} \tilde{A}&=&{\rm e}^{3K}\left[(2\cosh K)^3+(2\sinh K)^3\right]^n \nonumber\\ &+&3{\rm e}^{K}\left[(2\cosh K)^3-(2\sinh K)^3\right]^n \nonumber\\ &+&3{\rm e}^{-K}\left[(2\cosh K)^3+(2\sinh K)^3\right]^n \nonumber\\ &+&{\rm e}^{-3K}\left[(2\cosh K)^3-(2\sinh K)^3\right]^n \nonumber\\ &=& (2\cosh K)^{3n} \nonumber\\ &&\times\left[({\rm e}^{3K}+3{\rm e}^{-K})(1+\tanh^3 K)^n+(3{\rm e}^{K}+{\rm e}^{-3K})(1-\tanh^3 K)^n \right]. \end{eqnarray} This is eq. (\ref{AAresult}).
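This algebra can be double-checked by brute force: for small $n$ the sums in eq. (\ref{AAexp}) can be carried out directly on a computer and compared with the closed form (\ref{AAresult}). A sketch in Python (the coupling value is arbitrary; the sum involves $8\cdot 2^n\cdot 8^n$ terms, which is negligible for $n\leq 3$):

```python
import math
from itertools import product

def A_star_closed(K, n):
    # Eq. (AAresult)
    t3 = math.tanh(K) ** 3
    return (4.0 ** (-n) * (2 * math.cosh(K)) ** (3 * n - 3)
            * ((math.exp(3 * K) + 3 * math.exp(-K)) * (1 + t3) ** n
               + (3 * math.exp(K) + math.exp(-3 * K)) * (1 - t3) ** n))

def A_star_brute(K, n):
    # Direct summation of Eq. (AAexp); exp(pi i m) with integer m is (-1)^m
    total = 0.0
    for ea, eb, ec in product((-1, 1), repeat=3):
        for tau in product((0, 1), repeat=n):
            for pa in product((0, 1), repeat=n):
                for pb in product((0, 1), repeat=n):
                    for pc in product((0, 1), repeat=n):
                        sign = (-1) ** sum(t * (a + b + c)
                                           for t, a, b, c in zip(tau, pa, pb, pc))
                        expo = (K * (ea + eb + ec)
                                + K * ea * sum(1 - 2 * a for a in pa)
                                + K * eb * sum(1 - 2 * b for b in pb)
                                + K * ec * sum(1 - 2 * c for c in pc))
                        total += sign * math.exp(expo)
    return total / (4 ** n * (2 * math.cosh(K)) ** 3)

for n in (1, 2, 3):
    K = 0.63
    assert abs(A_star_brute(K, n) - A_star_closed(K, n)) < 1e-10 * A_star_closed(K, n)
```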
\section{Introduction} The goal of the present article is to re-derive the classification of topological phases of quantum matter proposed by Kitaev in his ``periodic table''~\cite{kitaev2009periodic} by means of basic tools from the topology (in particular homotopy theory) of classical groups, and standard factorization results in linear algebra. Kitaev's table classifies ground states of free-fermion systems according to their symmetries and the dimension of the configuration space. We reformulate the classification scheme in terms of homotopy theory, and proceed to investigate the latter issue in dimension $d \in \{0,1\}$. We restrict ourselves to low dimensions in order to illustrate our approach, and to provide constructive proofs for all the classes, using explicit factorization of the matrices that appear. Our intent is thus similar to previous works by Zirnbauer and collaborators \cite{Zirnbauer05,KennedyZirnbauer,KennedyGuggenheim}, which however formulate the notion of free-fermion ground state in a different way, amenable to the investigation of many-body systems. The Kitaev classification can be obtained in various ways, using different mathematical tools. Let us mention for instance derivations coming from index theory~\cite{Gro_mann_2015}, K-theory~\cite{Thiang_2015, ProdanSchulzBaldes} and KK-theory~\cite{Bourne_2016}. To this (non-exhaustive) list one should add the numerous works focusing on one particular case of the table. Our goal here is to provide a short and synthetic derivation of this table, using simple linear algebra. \subsection{Setting} Let ${\mathcal H}$ be a complex Hilbert space of finite dimension ${\rm dim } \, {\mathcal H} \, = N$.
For $0 \le n \le N$, $n$-dimensional subspaces of ${\mathcal H}$ are in one-to-one correspondence with elements of the \emph{Grassmannian} \[ {\mathcal G}_n({\mathcal H}) := \left\{ P \in {\mathcal B}({\mathcal H}) : \: P^2 = P = P^*, \: {\rm Tr}(P) = n \right\} \] which consists of rank-$n$ orthogonal projections in the algebra ${\mathcal B}({\mathcal H})$ of linear operators on ${\mathcal H}$. In this paper we are interested in orthogonal-projection-valued continuous functions $P : {\mathbb T}^d \to {\mathcal G}_n({\mathcal H})$ which satisfy certain symmetry conditions (to be listed below), and in classifying homotopy classes of such maps. Here ${\mathbb T}^d = {\mathbb R}^d / {\mathbb Z}^d$ is a $d$-dimensional torus, which we often identify with $[-\tfrac12,\tfrac12]^d$ with periodic boundary conditions. We write ${\mathbb T}^d \ni k \mapsto P(k) \in {\mathcal G}_n({\mathcal H})$ for such maps, or $\{ P(k) \}_{k \in {\mathbb T}^d}$. It is well known \cite{Kuchment} that such families of projections arise from the Bloch-Floquet representation of periodic quantum systems on a lattice, in the one-body approximation. In this case, ${\mathcal H}$ is the Hilbert space accounting for local degrees of freedom in the unit cell associated to the lattice of translations, ${\mathbb T}^d$ plays the role of the Brillouin torus in (quasi-)momentum space, $k$ is the Bloch (quasi-)momentum, and $P(k)$ is the spectral projection onto the occupied energy levels of some $H(k)$, the Bloch fibers of a periodic lattice Hamiltonian $H$. Two Hamiltonians $H_0$ and $H_1$ are commonly referred to as being in the same topological insulating class if they share the same discrete symmetries (see below) and if they can be continuously deformed one into the other while preserving the symmetries and without closing the spectral gap. This implies that their associated spectral projections $P_0$ and $P_1$ below the spectral gap are homotopic.
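As a concrete illustration of the defining conditions of ${\mathcal G}_n({\mathcal H})$, the sketch below (in Python, with a hypothetical unit vector $v\in{\mathbb C}^2$) builds the rank-one projection $P = v v^*$ and checks that $P^2 = P = P^*$ and ${\rm Tr}(P)=n=1$.

```python
import math

# a hypothetical normalized vector v in C^2 and the rank-one projection P = v v^*
v = (1 / math.sqrt(2), 1j / math.sqrt(2))
P = [[v[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adjoint(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

P2 = matmul(P, P)
assert all(abs(P2[i][j] - P[i][j]) < 1e-12 for i in range(2) for j in range(2))  # P^2 = P
Pd = adjoint(P)
assert all(abs(Pd[i][j] - P[i][j]) < 1e-12 for i in range(2) for j in range(2))  # P = P^*
assert abs(P[0][0] + P[1][1] - 1) < 1e-12                                        # Tr P = 1
```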
We thus investigate this homotopy classification directly in terms of projections in momentum space. The discrete symmetries that one may want to impose on a family of projections come from those of the underlying quantum-mechanical system. We set the following definitions. Recall that a map $T : {\mathcal H} \to {\mathcal H}$ is anti-unitary if it is anti-linear ($T(\lambda x) = \overline{\lambda} T(x)$) and \[ \forall x,y \in {\mathcal H}, \quad \langle T x, T y \rangle_{\mathcal H} = \langle y, x \rangle_{{\mathcal H}} \qquad ( = \overline{ \langle x, y \rangle_{\mathcal H} }). \] \begin{definition}[Time-reversal symmetry] \label{def:TRS} Let $T : {\mathcal H} \to {\mathcal H}$ be an anti-unitary operator such that $T^2 = \varepsilon_T {\mathbb I}_{{\mathcal H}}$ with $\varepsilon_T \in \{-1, 1\}$. We say that a continuous map $P : {\mathbb T}^d \to {\mathcal G}_n({\mathcal H})$ satisfies {\bf time-reversal symmetry}, or in short {\bf $T$-symmetry}, if \[ \boxed{ T^{-1} P(k) T = P(-k), \qquad \text{($T$-symmetry)}.} \] If $\varepsilon_T = 1$, this $T$-symmetry is said to be {\bf even}, and if $\varepsilon_T = -1$ it is {\bf odd}. \end{definition} \begin{definition}[Charge-conjugation/particle-hole symmetry] \label{def:CCS} Let $C : {\mathcal H} \to {\mathcal H}$ be an anti-unitary operator such that $C^2 = \varepsilon_C {\mathbb I}_{{\mathcal H}}$ with $\varepsilon_C \in \{-1, 1\}$. We say that a continuous map $P : {\mathbb T}^d \to {\mathcal G}_n({\mathcal H})$ satisfies {\bf charge-conjugation symmetry} (also called {\bf particle-hole symmetry}), or in short {\bf $C$-symmetry}, if \[ \boxed{ C^{-1} P(k) C = {\mathbb I}_{{\mathcal H}} - P(-k), \quad \text{($C$-symmetry)}.} \] If $\varepsilon_C = 1$, this $C$-symmetry is said to be {\bf even}, and if $\varepsilon_C = -1$ it is {\bf odd}. \end{definition} \begin{definition}[Chiral symmetry] \label{def:chiral} Let $S : {\mathcal H} \to {\mathcal H}$ be a unitary operator such that $S^2 = {\mathbb I}_{{\mathcal H}}$.
We say that $P : {\mathbb T}^d \to {\mathcal G}_n({\mathcal H})$ satisfies {\bf chiral} or {\bf sublattice symmetry}, or in short {\bf $S$-symmetry}, if \[ \boxed{S^{-1} P(k) S = {\mathbb I}_{{\mathcal H}} - P(k), \quad \text{($S$-symmetry)}.} \] \end{definition} The simultaneous presence of two symmetries implies the presence of the third. In fact, the following assumption is often postulated \cite{Ryu_2010}: \begin{assumption} \label{S=TC} Whenever $T-$ and $C-$ symmetries are both present, we assume that their product $S:=TC$ is an $S$-symmetry, that is, $S$ is unitary and $S^2 = {\mathbb I}_{\mathcal H}$. \end{assumption} We are not aware of a model in which this assumption is not satisfied, {\em i.e.} in which the $S$ symmetry is unrelated to the $T-$ and $C-$ ones. \begin{remark} \label{rmk:TC=sigmaCT} This assumption is tantamount to requiring that the operators $T$ and $C$ commute or anti-commute with each other, depending on their even/odd nature. Indeed, the product of two anti-unitary operators is unitary, and the requirement that $S := TC$ satisfies $S^2 = {\mathbb I}_{\mathcal H}$ reads \[ TC TC = {\mathbb I}_{\mathcal H} \quad \Longleftrightarrow \quad TC = C^{-1}T^{-1} = \varepsilon_T \varepsilon_C CT. \] The same sign determines whether $S$ commutes or anti-commutes with $T$ and $C$. Indeed, we have \[ SC = T C^2 = \varepsilon_C T, \quad C S = CTC = T^{-1} C^{-1} C = \varepsilon_T T \quad \text{so} \quad SC = \varepsilon_T \varepsilon_C CS, \] and similarly $ST = \varepsilon_T \varepsilon_C TS$. \end{remark} Taking into account all possible types of symmetries leads to 10 symmetry classes for maps $P \colon {\mathbb T}^d \to {\mathcal G}_n({\mathcal H})$, the famous {\em tenfold way of topological insulators}~\cite{Ryu_2010}.
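To make these definitions concrete, consider the hypothetical rank-one family $P(k) = \tfrac12({\mathbb I} + \cos(2\pi k)\sigma_x + \sin(2\pi k)\sigma_y)$ on ${\mathbb T}^1$. The sketch below (in Python; anti-unitaries are encoded as a matrix followed by complex conjugation, so that $(UK)(VK) = U\overline{V}$ as a linear map) checks that this family satisfies an even $T$-symmetry with $T=K$, an even $C$-symmetry with $C=\sigma_z K$, and the $S$-symmetry with $S=TC=\sigma_z$; it also illustrates the sign rule of Remark~\ref{rmk:TC=sigmaCT} with the pair $T=JK$ (odd), $C=\sigma_x K$ (even).

```python
import math

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
J  = [[0, 1], [-1, 0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj(A):
    return [[A[i][j].conjugate() for j in range(2)] for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

def P(k):
    # hypothetical rank-one family P(k) = (1/2)(I + cos(2 pi k) s_x + sin(2 pi k) s_y)
    c, s = math.cos(2 * math.pi * k), math.sin(2 * math.pi * k)
    return [[0.5 * (I2[i][j] + c * sx[i][j] + s * sy[i][j]) for j in range(2)]
            for i in range(2)]

for k in (0.13, -0.37, 0.5):
    Pk, Pmk = P(k), P(-k)
    Qk  = [[I2[i][j] - Pk[i][j] for j in range(2)] for i in range(2)]
    Qmk = [[I2[i][j] - Pmk[i][j] for j in range(2)] for i in range(2)]
    assert close(mul(Pk, Pk), Pk)                  # P(k) is a projection
    assert close(conj(Pk), Pmk)                    # T-symmetry with T = K, T^2 = +1
    assert close(mul(sz, mul(conj(Pk), sz)), Qmk)  # C-symmetry with C = s_z K, C^2 = +1
    assert close(mul(sz, mul(Pk, sz)), Qk)         # S-symmetry with S = TC = s_z

# Remark: with T = J K (odd, T^2 = -1) and C = s_x K (even), S = TC = s_z
# and TC = -CT, consistent with TC = eps_T eps_C CT for eps_T eps_C = -1
TC = mul(J, conj(sx))
CT = mul(sx, conj(J))
assert close(TC, sz)
assert close(CT, [[-sz[i][j] for j in range(2)] for i in range(2)])
assert close(mul(J, conj(J)), [[-I2[i][j] for j in range(2)] for i in range(2)])  # T^2 = -1
```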
The names of these classes are given in Table~\ref{table:us}, and are taken from the original works of E.~Cartan~\cite{cartan1,cartan2} for the classification of symmetric spaces, which were originally borrowed in \cite{AZ,Zirnbauer05} in the context of random-matrix-valued $\sigma$-models. For a dimension $d \in {\mathbb N} \cup \{ 0 \}$ and a rank $n \in {\mathbb N}$, and for a Cartan label ${\rm X}$ of one of these 10 symmetry classes, we denote by ${\rm X}(d,n,N)$ the set of continuous maps $P \colon {\mathbb T}^d \to {\mathcal G}_n({\mathcal H})$, with ${\rm dim }({\mathcal H}) = N$, and respecting the symmetry requirements of class ${\rm X}$. Given two continuous maps $P_0, P_1 \in {\rm X}(d, n ,N)$, we ask the following questions: \begin{itemize} \item Can we find explicit ${\rm Index} \equiv {\rm Index}_d^{\rm X}$ maps, which are numerical functions (integer- or integer-mod-2-valued) so that ${\rm Index}(P_0) = {\rm Index}(P_1)$ iff $P_0$ and $P_1$ are path-connected in ${\rm X}(d, n,N)$? \item If so, how to compute this Index? \item In the case where ${\rm Index}(P_0) = {\rm Index}(P_1)$, how to construct explicitly a path $P_s$, $s \in [0,1]$ connecting $P_0$ and $P_1$ in ${\rm X}(d, n, N)$? \end{itemize} In this paper, we answer these questions for all the 10 symmetry classes, and for $d \in \{0, 1\}$. We analyze the classes one by one, often choosing a basis for ${\mathcal H}$ in which the different symmetry operators $T$, $C$ and $S$ have a specific normal form. In doing so, we recover Cartan's symmetric spaces as ${\rm X}(d=0,n,N)$ -- see the boxed equations in the body of the paper. The topological indices that we find% \footnote{We make no claim on the group-homomorphism nature of the Index maps we provide.} % are summarized in Table~\ref{table:us}.
Our findings agree with the previously mentioned ``periodic tables'' from the physics literature \cite{kitaev2009periodic,Ryu_2010} if one also takes into account the weak ${\mathbb Z}_2$ invariants (see Remark~\ref{rmk:weak}). We note that the $d = 0$ column is not part of the original table. It is related (but not equal) to the $d = 8$ column by Bott periodicity~\cite{Bott_1956}. For our purpose, it is useful to have it explicitly in order to derive the $d = 1$ column. \begin{table}[ht] \centering $\begin{array}{| c|ccc|cc|cc |} \hline \hline \multicolumn{4}{|c|}{\text{Symmetry}} & \multicolumn{2}{c|}{\text{Constraints}} & \multicolumn{2}{c|}{\text{Indices}} \\ \hline \text{Cartan label} & T & C & S & n & N & d=0 & d=1 \\ \hline \hyperref[ssec:A]{{\rm A}} & 0 & 0 & 0 & & & 0 & 0 \\ \hyperref[ssec:AIII]{{\rm AIII}} & 0 & 0 & 1 & & N=2n & 0 & {\mathbb Z} \\ \hline \hyperref[ssec:AI]{{\rm AI}} & 1 & 0 & 0 & & & 0 & 0 \\ \hyperref[ssec:BDI]{{\rm BDI}} & 1 & 1 & 1 & & N=2n & {\mathbb Z}_2 & {{\mathbb Z}_2\times{\mathbb Z}} \\ \hyperref[ssec:D]{{\rm D}} & 0 & 1 & 0 & & N=2n & {\mathbb Z}_2 & {{\mathbb Z}_2\times{\mathbb Z}_2} \\ \hyperref[ssec:DIII]{{\rm DIII}} & -1 & 1 & 1 & n=2m \in 2{\mathbb N} & N=2n=4m & 0 & {\mathbb Z}_2 \\ \hyperref[ssec:AII]{{\rm AII}} & -1 & 0 & 0 & n=2m \in 2{\mathbb N} & N=2M \in 2{\mathbb N} & 0 & 0 \\ \hyperref[ssec:CII]{{\rm CII}} & -1 & -1 & 1 & n=2m \in 2{\mathbb N} & N=2n=4m & 0 & {\mathbb Z} \\ \hyperref[ssec:C]{{\rm C}} & 0 & -1 & 0 & & N=2n & 0 & 0 \\ \hyperref[ssec:CI]{{\rm CI}} & 1 & -1 & 1 & & N=2n & 0 & 0 \\ \hline\hline \end{array}$ \medskip \caption{A summary of our main results on the topological ``Indices'' of the various symmetry classes of Fermi projections. In the ``Symmetry'' column, we list the sign characterizing the symmetry as even or odd; an entry ``$0$'' means that the symmetry is absent. Some ``Constraints'' may be needed for the symmetry class ${\rm X}(d,n,N)$ to be non-empty. 
} \label{table:us} \end{table} \renewcommand{\arraystretch}{1.0} \subsection{Notation} For ${\mathbb K} \in \{ {\mathbb R}, {\mathbb C}\}$, we denote by ${\mathcal M}_N({\mathbb K})$ the set of $N \times N$ {\em ${\mathbb K}$-valued} matrices. We denote by $K \equiv K_N : {\mathbb C}^N \to {\mathbb C}^N$ the usual complex conjugation operator. For a complex matrix $A \in {\mathcal M}_N({\mathbb C})$, we set $\overline{A} := K A K$ and $A^T := \overline{A^*}$, where $A^*$ is the adjoint matrix of $A$ for the standard scalar product on ${\mathbb C}^N$. We then denote by ${\mathcal S}_N({\mathbb K})$ the set of hermitian matrices ($A = A^*$), and by ${\mathcal A}_N({\mathbb K})$ that of skew-hermitian matrices ($A = - A^*$). When ${\mathbb K} = {\mathbb C}$, we sometimes drop the notation ${\mathbb C}$. Also, we denote by ${\mathcal S}_N^{\mathbb R}({\mathbb C})$ and ${\mathcal A}_N^{{\mathbb R}}({\mathbb C})$ the sets of symmetric ($A^T = A$) and antisymmetric ($A^T = -A$) matrices, respectively. We denote by ${\rm U}(N)$ the group of unitary matrices, by ${\rm SU}(N)$ the group of unitaries with determinant $1$, by ${\rm O}(N)$ the group of orthogonal matrices, and by ${\rm SO}(N)$ the group of orthogonal matrices with determinant $1$. We denote by ${\mathbb I}\equiv {\mathbb I}_N$ the identity matrix of ${\mathbb C}^N$. When $N = 2M$ is even, we also introduce the symplectic matrix \[ J \equiv J_{2M} := \begin{pmatrix} 0 & {\mathbb I}_M \\ - {\mathbb I}_M & 0 \end{pmatrix}. \] The symplectic group ${\rm Sp}(2M; {\mathbb K})$ is defined by \begin{equation} \label{eq:Sp} {\rm Sp}(2M;{\mathbb K}):= \set{A \in {\mathcal M}_{2M}({\mathbb K}) : A^T J_{2M} A = J_{2M}}. \end{equation} The {\em compact} symplectic group ${\rm Sp}(M)$ is \[ {\rm Sp}(M) := {\rm Sp}(2M;{\mathbb C}) \cap {\rm U}(2M) = \set{U \in {\rm U}(2M) : U^T J_{2M} U = J_{2M}}. \] \subsection{Structure of the paper} We study the classes one by one.
We begin with the {\em complex classes} A and AIII in Section~\ref{sec:complexClasses}, where no anti-unitary operator is present. We then study non-chiral {\em real} classes (without $S$-symmetry) in Section~\ref{sec:nonchiral}, and chiral classes in Section~\ref{sec:chiral}. In Appendix~\ref{sec:LA}, we review some factorizations of matrices, which allow us to prove our results. \section{Complex classes: ${\rm A}$ and ${\rm AIII}$} \label{sec:complexClasses} The symmetry classes ${\rm A}$ and ${\rm AIII}$ are often dubbed \emph{complex}, since they do not involve any antiunitary symmetry operator, and thus no ``real structure'' induced by the complex conjugation. By contrast, the other 8 symmetry classes are called \emph{real}. Complex classes were studied, for example, in~\cite{ProdanSchulzBaldes,denittis2018chiral}. \subsection{Class ${\rm A}$} \label{ssec:A} In class ${\rm A}$, no discrete symmetry is imposed. We have in this case \begin{theorem}[Class ${\rm A}$] \label{thm:A} The sets ${\rm A}(0, n, N)$ and ${\rm A}(1, n, N)$ are path-connected. \end{theorem} \begin{proof} Since no symmetry is imposed and ${\mathbb T}^0 = \{0\}$ consists of a single point, we have ${\rm A}(0,n,N) = {\mathcal G}_n({\mathcal H})$. It is known~\cite[Ch.~8, Thm.~2.2]{husemoller} that the complex Grassmannian is connected, hence so is ${\rm A}(0,n,N)$. This property follows from the fact that the map ${\rm U}(N) \to {\mathcal G}_n({\mathcal H})$ which associates to any $N \times N$ unitary matrix the linear span of its first $n$ columns (say in the canonical basis for ${\mathcal H} \simeq {\mathbb C}^N$), viewed as orthonormal vectors in ${\mathcal H}$, induces a bijection \begin{equation} \label{eq:U/UxU=G} \boxed{ {\rm A}(0,n,N) \simeq {\mathcal G}_n({\mathcal H}) \simeq {\rm U}(N) / {\rm U}(n) \times {\rm U}(N-n).} \end{equation} Since ${\rm U}(N)$ is connected, so is ${\rm A}(0, n, N)$. \medskip To realize this explicitly, we fix a basis of ${\mathcal H} \simeq {\mathbb C}^N$.
Let $P_0, P_1 \in {\rm A}(0, n, N)$. For $j \in \{ 0, 1\}$, we choose a unitary $U_j \in {\rm U}(N)$ such that its $n$ first column vectors span the range of $P_j$. We then choose a self-adjoint matrix $A_j \in {\mathcal S}_N$ so that $U_j = \re^{ \ri A_j}$. We now set, for $s \in [0, 1]$, \[ U_s := \re^{ \ri A_s}, \quad A_s := (1 - s) A_0 + s A_1. \] The map $s \mapsto U_s$ is continuous, takes values in ${\rm U}(N)$, and connects $U_0$ and $U_1$. The projection $P_s$ on the first $n$ column vectors of $U_s$ then connects $P_0$ and $P_1$, as wanted. \medskip We now prove our statement concerning ${\rm A}(1, n, N)$. Let $P_0, P_1 \colon {\mathbb T}^1 \to {\mathcal G}_n({\mathcal H})$ be two periodic families of projections. Recall that we identify ${\mathbb T}^1 \simeq [-1/2, 1/2]$. Consider the two projections $P_0(-\tfrac12) = P_0(\tfrac12)$ and $P_1(-\tfrac12) = P_1(\tfrac12)$, and connect them by some continuous path $P_s(-\frac12) = P_s(\frac12)$ as previously. The families $P_0(k)$ and $P_1(k)$, together with the maps $P_s(-\tfrac12)$ and $P_s(\tfrac12)$, define a continuous family of projectors on the boundary $\partial \Omega$ of the square \begin{equation} \label{eq:Omega} \Omega := [-\tfrac12, \tfrac12] \times [0,1] \ni (k,s). \end{equation} It is a standard result (see for instance~\cite[Lemma 3.2]{Gontier2019numerical} for a constructive proof) that such families can be extended continuously to the whole set $\Omega$. This gives a homotopy $P_s(k) = P(k,s)$ between $P_0$ and $P_1$. \end{proof} \subsection{Class ${\rm AIII}$} \label{ssec:AIII} In class ${\rm AIII}$, only the $S$-symmetry is present. It is convenient to choose a basis in which $S$ is diagonal. This is possible thanks to the following Lemma, which we will use several times in classes where the $S$-symmetry is present. \begin{lemma} \label{lem:formS} Assume ${\rm AIII}(d=0, n, N)$ is non-empty.
Then $N = 2n$, and there is a basis of ${\mathcal H}$ in which $S$ has the block-matrix form \begin{equation} \label{eq:formS} S = \begin{pmatrix} {\mathbb I}_n & 0 \\ 0 & - {\mathbb I}_n \end{pmatrix}. \end{equation} In this basis, a projection $P$ satisfies $S^{-1} P S = {\mathbb I}_{\mathcal H} - P$ iff it has the matrix form \begin{equation} \label{eq:special_form_P} P = \frac12 \begin{pmatrix} {\mathbb I}_n & Q \\ Q^* & {\mathbb I}_n \end{pmatrix} \quad \text{with} \quad Q \in {\rm U}(n). \end{equation} \end{lemma} \begin{proof} Let $P_0 \in {\rm AIII}(0, n, N)$. Since $S^{-1} P_0 S = {\mathbb I}_{{\mathcal H}} - P_0$, the projection $P_0$ is unitarily equivalent to ${\mathbb I}_{\mathcal H} - P_0$, so ${\rm rank} \, P_0 = {\rm rank} \, ({\mathbb I}_{\mathcal H} - P_0)$, that is $n = N - n$; hence ${\mathcal H} = {\rm Ran} \ P_0 \oplus {\rm Ran} \ ({\mathbb I}_{{\mathcal H}} - P_0)$ is of dimension $N= 2n$. \\ Let $(\psi_1, \psi_2, \cdots, \psi_n)$ be an orthonormal basis for ${\rm Ran } \, P_0$. We set \[ \forall i \in \{ 1, \cdots, n\}, \quad \phi_i := \frac{1}{\sqrt{2}}(\psi_i + S \psi_i), \quad \phi_{n+i} := \frac{1}{\sqrt{2}}(\psi_i - S \psi_i). \] The family $(\phi_1, \cdots, \phi_{2n})$ is an orthonormal basis of ${\mathcal H}$, and in this basis, $S$ has the matrix form~\eqref{eq:formS}. \medskip For the second point, let $P \in {\rm AIII}(0,n,2n)$, and decompose $P$ into blocks: \[ P = \frac12 \begin{pmatrix} P_{11} & P_{12} \\ P_{12}^* & P_{22} \end{pmatrix}. \] The equation $S^{-1} P S= {\mathbb I}_{\mathcal H} - P$ implies that $P_{11} = P_{22} = {\mathbb I}_n$. Then, the equation $P^2 = P$ shows that $P_{12} =: Q$ is unitary, and~\eqref{eq:special_form_P} follows. \end{proof} The previous Lemma establishes a bijection $P \longleftrightarrow Q$, that is \[ \boxed{ {\rm AIII}(0,n,2n) \simeq {\rm U}(n).} \] For $P \in {\rm AIII}(d, n, 2n)$, we denote by $Q : {\mathbb T}^d \to {\rm U}(n)$ the corresponding periodic family of unitaries.
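The correspondence $P \longleftrightarrow Q$ of Lemma~\ref{lem:formS} is easy to verify numerically. A minimal sketch (our illustration; the randomly generated unitary below is ad hoc):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A random unitary Q in U(n) from the QR factorization of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

# S = diag(I_n, -I_n) and P = (1/2) [[I, Q], [Q*, I]] as in the Lemma.
S = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), -np.eye(n)]])
P = 0.5 * np.block([[np.eye(n), Q], [Q.conj().T, np.eye(n)]])

assert np.allclose(P, P.conj().T)                 # self-adjoint
assert np.allclose(P @ P, P)                      # idempotent (needs Q unitary)
assert np.allclose(np.trace(P).real, n)           # rank n
assert np.allclose(S @ P @ S, np.eye(2 * n) - P)  # S-symmetry (S^{-1} = S)
```

The idempotency check is exactly where the unitarity of $Q$ enters: $P^2 = P$ forces $QQ^* = {\mathbb I}_n$.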
For a curve ${\mathcal C}$ homeomorphic to ${\mathbb S}^1$, and for $Q : {\mathcal C} \to {\rm U}(n)$, we denote by ${\rm Winding}({\mathcal C}, Q)$ the usual winding number of the determinant of $Q$ along ${\mathcal C}$. \begin{theorem}[Class ${\rm AIII}$] The set ${\rm AIII}(d, n, N)$ is non-empty iff $N=2n$. \begin{itemize} \item The set ${\rm AIII}(0,n,2n)$ is path-connected. \item Define the index map ${\rm Index}_1^{{\rm AIII}} : {\rm AIII}(1, n, 2n) \to {\mathbb Z}$ by \[ \forall P \in {\rm AIII}(1, n, 2n), \quad {\rm Index}_1^{{\rm AIII}} (P) := {\rm Winding}({\mathbb T}^1, Q). \] Then $P_0$ is homotopic to $P_1$ in ${\rm AIII}(1,n,2n)$ iff ${\rm Index}_1^{{\rm AIII}}(P_0) = {\rm Index}_1^{{\rm AIII}}(P_1)$. \end{itemize} \end{theorem} \begin{proof} We already proved that $N = 2n$. Since ${\rm U}(n)$ is connected, so is ${\rm AIII}(0, n, 2n)$. An explicit path can be constructed as in the previous section using exponential interpolation. We now focus on ${\rm AIII}(d = 1, n, 2n)$. Analogously, the question of whether two maps in ${\rm AIII}(1,n,2n)$ are continuously connected by a path can be translated into whether two unitary-valued maps $Q_0, Q_1 \colon {\mathbb T}^1 \to {\rm U}(n)$ are homotopic to each other. As in the previous proof, consider the unitaries $Q_0(-\tfrac12) = Q_0(\tfrac12) \in {\rm U}(n)$ and $Q_1(-\tfrac12) = Q_1(\tfrac12) \in {\rm U}(n)$. Connect them by some $Q_s(-\frac12) = Q_s(\frac12)$ in ${\rm U}(n)$. This defines a ${\rm U}(n)$-valued map on $\partial \Omega$, where the square $\Omega$ is defined in~\eqref{eq:Omega}. It is well known that one can extend such a family of unitaries to the whole $\Omega$ iff ${\rm Winding}(\partial \Omega,Q) = 0$ (see~\cite[Section IV.B]{Gontier2019numerical} for a proof, together with a constructive proof of the extension in the case where the winding vanishes).
In our case, due to the orientation of the boundary of $\Omega$ and to the periodicity of $Q_0(k), Q_1(k)$, we have \[ {\rm Winding}(\partial \Omega, Q) = {\rm Winding}({\mathbb T}^1,\ Q_1) - {\rm Winding}({\mathbb T}^1, Q_0), \] which is independent of the previously constructed path $Q_s(\frac12)$. The conclusion follows. \end{proof} \section{Real non-chiral classes: ${\rm AI}$, ${\rm AII}$, ${\rm C}$ and ${\rm D}$} \label{sec:nonchiral} Next we consider those symmetry classes which are characterized by the presence of a \emph{single} anti-unitary symmetry: a $T$-symmetry (which is even in class ${\rm AI}$ and odd in class ${\rm AII}$) or a $C$-symmetry (which is even in class ${\rm D}$ and odd in class ${\rm C}$). In particular, these classes involve anti-unitarily intertwining $P(k)$ and $P(-k)$. For these symmetry classes, the analysis of their path-connected components in dimension $d=1$ reduces to that of dimension $d=0$, thanks to the following Lemma. \begin{lemma}[Real non-chiral classes in $d=1$] \label{lem:NonCh1d} Let ${\rm X} \in \set{{\rm AI}, {\rm AII}, {\rm C}, {\rm D}}$. Then $P_0$ and $P_1$ are in the same connected component of ${\rm X}(1,n,N)$ iff \begin{itemize} \item $P_0(0)$ and $P_1(0)$ are in the same connected component in ${\rm X}(0,n,N)$, and \item $P_0(\tfrac12)$ and $P_1(\tfrac12)$ are in the same connected component in ${\rm X}(0,n,N)$. \end{itemize} \end{lemma} \begin{proof} We give the argument for the class ${\rm X} = {\rm D}$, but the proof is similar for the other classes. First, we note that if $P_s(k)$ connects $P_0$ and $P_1$ in ${\rm D}(1, n, N)$, then for $k_0 \in \{ 0, \tfrac12\}$ one must have $C^{-1} P_s(k_0) C = {\mathbb I}_{\mathcal H} - P_s(k_0)$, so $P_s(k_0)$ connects $P_0(k_0)$ and $P_1(k_0)$ in ${\rm D}(d = 0, n, N)$. Let us prove the converse.
Assume that $P_0$ and $P_1$ are two projection-valued maps in ${\rm D}(1, n, N)$ so that there exist paths $P_s(k_0)$ connecting $P_0(k_0)$ and $P_1(k_0)$ in ${\rm D}(0,n,N)$, for the high-symmetry points $k_0 \in \{ 0, \tfrac12\}$. Denote by $\Omega_0$ the half-square \begin{equation} \label{eq:Omega0} \Omega_0 := [0, \tfrac12] \times [0, 1] \quad \ni (k,s), \end{equation} (compare with~\eqref{eq:Omega}). The families \[ \set{P_0(k)}_{k \in [0,1/2]}, \quad \set{P_1(k)}_{k\in[0,1/2]}, \quad \set{P_s(0)}_{s\in[0,1]} \quad \text{and} \quad \set{P_s(\tfrac12)}_{s\in[0,1]}, \] together define a continuous family of projectors on the boundary $\partial \Omega_0$. As was already mentioned in Section~\ref{ssec:A}, this family can be extended continuously to the whole set $\Omega_0$. This gives a continuous family $\set{P_s(k)}_{k\in[0,1/2],\,s\in[0,1]}$ which continuously connects the restrictions of $P_0$ and $P_1$ to the half-torus $k \in [0, \tfrac12]$. We can then extend the family of projections to $k \in [-\tfrac12, 0]$ by setting \[ \forall k \in [- \tfrac12, 0], \ \forall s \in [0, 1], \quad P_s(k) := C \big[ {\mathbb I}_{\mathcal H} - P_s (-k) \big] C^{-1}. \] By construction, for all $s \in [0, 1]$, the map $P_s$ is in ${\rm D}(1,n,N)$. In addition, since at $k_0 \in \{0 ,\tfrac12\}$ we have $P_s(k_0) \in {\rm D}(0, n, N)$, the above extension is indeed continuous as a function of $k$ on the whole torus ${\mathbb T}^1$. This concludes the proof. \end{proof} \subsection{Class ${\rm AI}$} \label{ssec:AI} In class ${\rm AI}$, the relevant symmetry is an anti-unitary operator $T$ with $T^2 = {\mathbb I}_{\mathcal H}$. This case was studied for instance in~\cite{panati2007triviality,denittis2014real,FiorenzaMonacoPanatiAI}. \begin{lemma} \label{lem:formTeven} If $T$ is an anti-unitary operator on ${\mathcal H}$ such that $T^2 = {\mathbb I}_{\mathcal H}$, then there is a basis of ${\mathcal H}$ in which $T$ has the matrix form $T = K_N$.
\end{lemma} \begin{proof} We construct the basis by induction. Let $\psi_1 \in {\mathcal H}$ be a normalized vector. If $T \psi_1 = \psi_1$, we set $\phi_1 = \psi_1$; otherwise we set \[ \phi_1 := \ri \dfrac{ \psi_1 - T \psi_1 }{\| \psi_1 - T \psi_1 \|}. \] In both cases, we have $T \phi_1 = \phi_1$ and $\| \phi_1 \| = 1$, which gives the first vector of the basis. Now take a normalized $\psi_2$ orthogonal to $\phi_1$, and define $\phi_2$ as before. In the first case ($\phi_2 = \psi_2$), $\phi_2$ is automatically orthogonal to $\phi_1$. This also holds in the second case, since \[ \langle \psi_2 - T \psi_2, \phi_1 \rangle = - \langle T \psi_2, \phi_1 \rangle = - \langle T \phi_1, T^2 \psi_2 \rangle = - \langle \phi_1, \psi_2 \rangle = 0, \] where we used twice that $\langle \psi_2, \phi_1 \rangle = 0$. We go on, and construct the vectors $\phi_k$ inductively for $1 \le k \le N$. This gives an orthonormal basis in which $T = K_N$. \end{proof} \begin{theorem}[Class ${\rm AI}$] \label{thm:AI} The sets ${\rm AI}(0, n, N)$ and ${\rm AI}(1, n, N)$ are path-connected. \end{theorem} \begin{proof} In a basis in which $T = K_N$, we have the identification \[ {\rm AI}(0, n, N) = \left\{ P \in {\mathcal G}_n({\mathbb C}^N) : \overline{P} = P \right\}. \] In other words, ${\rm AI}(0, n, N)$ consists of \emph{real} subspaces of ${\mathcal H}$, {\em i.e.} those that are fixed by the complex conjugation $T=K$. One can therefore span such subspaces (as well as their orthogonal complement) by orthonormal \emph{real} vectors. This realizes a bijection similar to~\eqref{eq:U/UxU=G}, but where unitary matrices are replaced by orthogonal ones: more precisely \[ \boxed{ {\rm AI}(0, n, N) \simeq {\rm O}(N) / {\rm O}(n) \times {\rm O}(N-n).} \] We adapt the argument in the proof of Theorem~\ref{thm:A} to show that the latter space is path-connected. Let $P_0, P_1 \in {\rm AI}(0, n, N)$.
We choose two {\em real} bases of ${\mathcal H}$, which we identify with the columns of orthogonal matrices $U_0,U_1 \in {\rm O}(N)$, so that the first $n$ vectors of $U_j$ span the range of $P_j$, for $j \in \set{0,1}$. In addition, by flipping the sign of the first vector if necessary, we may assume $U_0, U_1 \in {\rm SO}(N)$. Then there are $A_0, A_1 \in {\mathcal A}_N({\mathbb R})$ such that $U_j = \re^{ A_j}$ for $j \in \{ 0, 1 \}$. We then set $U_s := \re^{A_s}$ with $A_s = (1 - s) A_0 + sA_1$. The projection $P_s$ on the first $n$ column vectors of $U_s$ then interpolates between $P_0$ and $P_1$, as required. In view of Lemma~\ref{lem:NonCh1d}, the path-connectedness of ${\rm AI}(0,n,N)$ implies that of ${\rm AI}(1,n,N)$. \end{proof} \subsection{Class ${\rm AII}$} \label{ssec:AII} In class ${\rm AII}$ we have $T^2 = -{\mathbb I}_{\mathcal H}$. This case was studied for instance in~\cite{graf2013bulk, denittis2015quaternionic, FiorenzaMonacoPanatiAII, cornean2017wannier, monaco2017gauge}. \begin{lemma} \label{lem:formTodd} There is an anti-unitary map $T : {\mathcal H} \to {\mathcal H}$ with $T^2 = - {\mathbb I}_{\mathcal H}$ iff ${\rm dim } \, {\mathcal H} = N = 2M$ is even. In this case, there is a basis of ${\mathcal H}$ in which $T$ has the matrix form \begin{equation} \label{eq:oddT} T = \begin{pmatrix} 0 & K_M \\ - K_M & 0 \end{pmatrix} = J_{2M} \, K_{2M}. \end{equation} \end{lemma} \begin{proof} First, we note that $T \psi$ is always orthogonal to $\psi$. Indeed, we have \begin{equation} \label{eq:psiTpsi} \langle \psi, T \psi \rangle = \langle T^2 \psi, T \psi \rangle = - \langle \psi, T \psi \rangle, \quad \text{hence} \quad \langle \psi, T \psi \rangle = 0. \end{equation} We follow the strategy employed {\em e.g.}~in~\cite{graf2013bulk} and~\cite[Chapter 4.1]{cornean2017wannier}, and construct the basis by induction. Let $\psi_1 \in {\mathcal H}$ be any normalized vector, and set $\psi_2 := T \psi_1$. The family $\{\psi_1, \psi_2\}$ is orthonormal by~\eqref{eq:psiTpsi}.
If ${\mathcal H} \neq {\rm Span } \{ \psi_1, \psi_2\}$, then there is a normalized $\psi_3 \in {\mathcal H}$ orthogonal to this family. We then set $\psi_4 := T \psi_3$, and claim that $\psi_4$ is orthogonal to the family $\{ \psi_1, \psi_2, \psi_3\}$. First, by~\eqref{eq:psiTpsi}, we have $\langle \psi_3, \psi_4 \rangle = 0$. In addition, we have \[ \langle \psi_4, \psi_1 \rangle = \langle T \psi_3, \psi_1 \rangle = \langle T \psi_1, T^2 \psi_3 \rangle = - \langle \psi_2, \psi_3 \rangle = 0, \] and, similarly, \[ \langle \psi_4, \psi_2 \rangle = \langle T \psi_3, T \psi_1 \rangle = \langle T^2 \psi_1, T^2 \psi_3 \rangle = \langle \psi_1, \psi_3 \rangle = 0. \] We proceed by induction. We first obtain that the dimension of ${\mathcal H}$ is even, $N = 2M$, and we construct an explicit basis $\{\psi_1, \cdots, \psi_{2M}\}$ for ${\mathcal H}$. In the orthonormal basis $\{\psi_2, \psi_4, \cdots, \psi_{2M}, \psi_1, \psi_3, \cdots, \psi_{2M-1}\}$, the operator $T$ has the matrix form~\eqref{eq:oddT}. \end{proof} \begin{theorem}[Class ${\rm AII}$] \label{thm:AII} The sets ${\rm AII}(0, n, N)$ and ${\rm AII}(1, n, N)$ are non-empty iff $n = 2m \in 2{\mathbb N}$ and $N = 2M \in 2{\mathbb N}$. Both are path-connected. \end{theorem} \begin{proof} The proof follows the same lines as that of Theorems~\ref{thm:A} and~\ref{thm:AI}. The condition $T^{-1} P T = P$ for $P \in {\rm AII}(0, n, N)$ means that the range of the projection $P$ is stable under the action of $T$. This time, the operator $T$ endows the Hilbert space ${\mathcal H}$ with a \emph{quaternionic} structure, namely the operators $\set{\ri {\mathbb I}_{\mathcal H}, T, \ri T}$ satisfy the same algebraic relations as the basic quaternions $\set{\mathbf{i},\mathbf{j},\mathbf{k}}$: they square to $-{\mathbb I}_{\mathcal H}$, they pairwise anticommute and the product of two successive ones cyclically gives the third.
This allows us to realize the class ${\rm AII}(0,2m,2M)$ as \[ \boxed{ {\rm AII}(0, 2m, 2M) \simeq {\rm Sp}(M) / {\rm Sp}(m) \times {\rm Sp}(M-m). } \] Matrices in ${\rm Sp}(M)$ are exponentials of Hamiltonian matrices, that is, matrices $A$ such that $J_{2M} A$ is symmetric \cite[Prop.~3.5 and Coroll.~11.10]{hall2015lie}. Such matrices form a (Lie) algebra, and therefore the same argument as in the proof of Theorem~\ref{thm:AI} applies, yielding path-connectedness of ${\rm AII}(0,2m,2M)$. This in turn implies, in combination with Lemma~\ref{lem:NonCh1d}, that ${\rm AII}(1,2m,2M)$ is path-connected as well. \end{proof} \subsection{Class ${\rm D}$} \label{ssec:D} We now come to classes where the $C$-symmetry is present. We first focus on the even case, $C^2 = + {\mathbb I}_{\mathcal H}$, characterizing class ${\rm D}$. One of the most famous models in this class is the 1-dimensional Kitaev chain~\cite{KitaevChain}. We choose to work in the basis of ${\mathcal H}$ in which $C$ has the form\footnote{This is \emph{different} from the ``energy basis'', of common use in the physics literature, in which $C$ is block-off-diagonal, mapping ``particles'' to ``holes'' and vice versa. We find this other basis more convenient for our purpose.} $C = K_N$ (see Lemma~\ref{lem:formTeven}). \begin{lemma} The set ${\rm D}(0, n, N)$ is non-empty iff $N = 2n$. In this case, and in a basis where $C = K_N$, a projection $P$ is in ${\rm D}(0, n,2n)$ iff it has the matrix form \[ P = \frac12 ({\mathbb I}_N + \ri A ), \quad \text{with} \quad A \in {\rm O}(2n) \cap {\mathcal A}_{2n}({\mathbb R}). \] \end{lemma} \begin{proof} A computation shows that \[ \begin{cases} P^* = P \\ P^2 = P \\ C^{-1} P C = {\mathbb I} - P \end{cases} \Longleftrightarrow \quad \begin{cases} A^* = -A \\ A^2 = - {\mathbb I}_N \\ \overline{A} = A \end{cases} \Longleftrightarrow \quad \begin{cases} A^* A = {\mathbb I}_N \\ A = \overline{A} = - A^T.
\end{cases} \] This proves that $P \in {\rm D}(0, n, N)$ iff $A \in {\rm O}(N) \cap {\mathcal A}_{N}({\mathbb R})$. In particular, since $A^T = -A$ we have ${\rm det}(A) = {\rm det}(A^T) = {\rm det}(-A) = (-1)^N {\rm det}(A)$; as ${\rm det}(A) = \pm 1 \neq 0$, the dimension $N = 2m$ is even. Finally, since the diagonal of $A$ vanishes, we have $n = {\rm Tr}(P) = \frac12 {\rm Tr}({\mathbb I}_N) = m$. \end{proof} In Corollary~\ref{cor:O(d)capA(d)} below, we prove that a matrix $A$ is in ${\rm O}(2n) \cap {\mathcal A}_{2n}({\mathbb R})$ iff it is of the form \[ A = W^T J_{2n} W, \quad \text{with} \quad W \in {\rm O}(2n). \] In addition, we have $W_0^T J_{2n} W_0 = W_1^T J_{2n} W_1$ with $W_0, W_1 \in {\rm O}(2n)$ iff $W_0 W_1^* \in {\rm Sp}(n) \cap {\rm O}(2n)$. Finally, in Proposition~\ref{prop:SpO=U}, we show that ${\rm Sp}(n) \cap {\rm O}(2n) \simeq {\rm U}(n)$. Altogether, this shows that \[ \boxed{ {\rm D}(0, n, 2n) \simeq {\rm O}(2n) \cap {\mathcal A}_{2n}({\mathbb R}) \simeq {\rm O}(2n)/{\rm U}(n) .} \] To identify the connected components of this class, recall that for an anti-symmetric matrix $A \in {\mathcal A}_{2n}^{\mathbb R}({\mathbb C})$, we can define its \emph{Pfaffian} \begin{equation} \label{eq:def:Pfaffian} {\rm Pf}(A) := \dfrac{1}{2^n n!} \sum_{\sigma} {\rm sgn}(\sigma) \prod_{i=1}^n a_{\sigma(2i-1), \sigma(2i)}, \end{equation} where the above sum runs over all permutations of $2n$ labels and ${\rm sgn}(\sigma)$ is the sign of the permutation $\sigma$. The Pfaffian satisfies \[ {\rm Pf}(A)^2 = {\rm det}(A). \] On the other hand, if $A \in {\rm O}(2n)$, then ${\rm det}(A) \in \{ \pm 1\}$; since ${\rm Pf}(A)$ is real for real $A$, if $A \in {\rm O}(2n) \cap {\mathcal A}_{2n}({\mathbb R})$ we must have ${\rm det}(A) = 1$ and ${\rm Pf}(A) \in \{ \pm 1\}$. \begin{theorem}[Class ${\rm D}$] The set ${\rm D}(d, n, N)$ is non-empty iff $N=2n$. \begin{itemize} \item The set ${\rm D}(0, n, 2n)$ has two connected components.
Define the index map ${\rm Index}_0^{{\rm D}} \colon {\rm D}(0,n,2n) \to {\mathbb Z}_2 \simeq \{ \pm 1\}$ by \[ \forall P \in {\rm D}(0, n, 2n), \quad {\rm Index}_0^{\rm D} (P) := {\rm Pf}(A). \] Then $P_0$ is homotopic to $P_1$ in ${\rm D}(0,n,2n)$ iff $ {\rm Index}_0^{{\rm D}}(P_0) = {\rm Index}_0^{{\rm D}}(P_1)$. \item The set ${\rm D}(1, n, 2n)$ has four connected components. Define the index map ${\rm Index}_1^{{\rm D}} \colon {\rm D}(1,n,2n) \to {\mathbb Z}_2 \times {\mathbb Z}_2$ by \[ \forall P \in {\rm D}(1, n, 2n), \quad {\rm Index}_1^{\rm D} (P) := \left( {\rm Pf}(A(0)), {\rm Pf}(A(\tfrac12))\right). \] Then $P_0$ is homotopic to $P_1$ in ${\rm D}(1,n,2n)$ iff $ {\rm Index}_1^{{\rm D}}(P_0) = {\rm Index}_1^{{\rm D}}(P_1)$. \end{itemize} \end{theorem} \begin{proof} We start with ${\rm D}(0, n, N=2n)$. Let $P_0, P_1 \in {\rm D}(0, n, 2n)$. It is clear that if ${\rm Pf}(A_0) \neq {\rm Pf}(A_1)$, then $P_0$ and $P_1$ are in two different connected components (recall that ${\rm Pf}(\cdot)$ is a continuous map, with values in $\{ \pm 1\}$ in our case). It remains to construct an explicit homotopy in the case where ${\rm Pf}(A_0) = {\rm Pf}(A_1)$. In Corollary~\ref{cor:O(d)capA(d)} below, we recall that a matrix $A$ is in ${\rm O}(2n) \cap {\mathcal A}_{2n}({\mathbb R})$ iff there is $V \in {\rm SO}(2n)$ so that \[ A = V^T D V, \quad \text{with} \quad D = {\rm diag}(1, 1, \cdots, 1, {\rm Pf}(A)) \otimes \begin{pmatrix} 0 & 1 \\ - 1 & 0 \end{pmatrix}. \] So, if $A_0, A_1 \in {\rm O}(2n) \cap {\mathcal A}_{2n}({\mathbb R})$ have the same Pfaffian, it is enough to connect the corresponding $V_0$ and $V_1$ in ${\rm SO}(2n)$. The proof follows since ${\rm SO}(2n)$ is path-connected (compare with the proof of Theorem~\ref{thm:AI}). The case of ${\rm D}(d = 1, n, 2n)$ is now a consequence of Lemma~\ref{lem:NonCh1d}.
\end{proof} \begin{remark} \label{rmk:weak} For 1-dimensional translation-invariant systems, one can distinguish between a \emph{weak} ({\em i.e.}, lower-dimensional, depending solely on $P(k)$ at $k = 0$) index \[ {\rm Index}_0^{\rm D}(P(0)) = {\rm Pf}(A(0)) \in {\mathbb Z}_2 \] and a \emph{strong} ({\em i.e.}, ``truly'' 1-dimensional) index \[ \widetilde{ {\rm Index}_0^{\rm D} } (P) := {\rm Pf}(A(0)) \cdot {\rm Pf}(A(\tfrac12)) \in {\mathbb Z}_2. \] Only the latter ${\mathbb Z}_2$-index appears in the periodic tables for free ground states \cite{kitaev2009periodic}. Our proposed index \[ {\rm Index}_1^{\rm D} (P) = \left( {\rm Pf}(A(0)), {\rm Pf}(A(\tfrac12))\right) \in {\mathbb Z}_2 \times {\mathbb Z}_2 \] clearly contains the same topological information as both the weak and strong indices. A similar situation will appear in class ${\rm BDI}$ (see Section~\ref{ssec:BDI}). \end{remark} \subsection{Class ${\rm C}$} \label{ssec:C} We now focus on the odd $C$-symmetry class, where $C^2 = - {\mathbb I}_{\mathcal H}$. Thanks to Lemma~\ref{lem:formTodd}, $N = 2M$ is even, and we can choose a basis of ${\mathcal H}$ in which $C$ has the matrix form \[ C = \begin{pmatrix} 0 & K_M \\ - K_M & 0 \end{pmatrix} = J_{2M}\, K_{2M}. \] Recall that ${\rm Sp}(n) := {\rm Sp}(2n; {\mathbb C}) \cap {\rm U}(2n)$. \begin{lemma} The set ${\rm C}(0, n, N)$ is non-empty iff $N = 2n$ (hence $n = M$). A projection $P$ is in ${\rm C}(0, n, 2n)$ iff it has the matrix form \[ P = \frac12 \left( {\mathbb I}_{2n} + \ri J_{2n}A \right), \quad \text{with} \quad A \in {\rm Sp}(n) \cap {\mathcal S}^{\mathbb R}_{2n}({\mathbb C}). \] \end{lemma} \begin{proof} With this parametrization, we obtain that \[ \begin{cases} P = P^* \\ P^2 = P \\ C^{-1} P C = {\mathbb I}_{2n} - P \end{cases} \Longleftrightarrow \quad \begin{cases} A^* J_{2n} = J_{2n} A\\ J_{2n} A J_{2n} A = - {\mathbb I}_{2n} \\ \overline{A} J_{2n} = J_{2n} A.
\end{cases} \] From the first two equations, we obtain $A A^* = {\mathbb I}_{2n}$, so $A \in {\rm U}(2n)$. With the first and third equations, we get $A^T = A$, so $A \in {\mathcal S}_{2n}^{\mathbb R}({\mathbb C})$, and with the last two equations, $A^T J_{2n} A = J_{2n}$, so $A \in {\rm Sp}(2n; {\mathbb C})$. The result follows. \end{proof} In Corollary~\ref{cor:AutonneTagaki} below, we prove that a matrix $A$ is in ${\rm Sp}(n) \cap {\mathcal S}_{2n}^{\mathbb R}({\mathbb C})$ iff it is of the form \[ A = V^T V, \quad \text{for some} \quad V \in {\rm Sp}(n). \] In addition, $A = V_0^T V_0 = V_1^T V_1$ with $V_0, V_1 \in {\rm Sp}(n)$ iff $V_1 V_0^* \in {\rm Sp}(n) \cap {\rm O}(2n) \simeq {\rm U}(n)$ (see the already mentioned Proposition~\ref{prop:SpO=U} for the last bijection). This proves that \[ \boxed{ {\rm C}(0, n, N) \simeq {\rm Sp}(n) \cap {\mathcal S}^{\mathbb R}_{2n}({\mathbb C}) \simeq {\rm Sp}(n) / {\rm U}(n). } \] \begin{theorem}[Class ${\rm C}$] The sets ${\rm C}(0, n, N)$ and ${\rm C}(1, n, N)$ are non-empty iff $N = 2n$. Both are path-connected. \end{theorem} \begin{proof} For ${\rm C}(d=0, n, 2n)$, it is enough to prove that ${\rm Sp}(n) \cap {\mathcal S}_{2n}^{\mathbb R}({\mathbb C})$ is path-connected. To connect $A_0$ and $A_1$ in ${\rm Sp}(n) \cap {\mathcal S}_{2n}^{\mathbb R}({\mathbb C})$ it suffices to connect the corresponding $V_0$ and $V_1$ in ${\rm Sp}(n)$. This can be done as we already saw in the proof of Theorem~\ref{thm:AII}. Invoking Lemma~\ref{lem:NonCh1d} allows us to conclude that ${\rm C}(1,n,2n)$ is path-connected as well. \end{proof} \section{Real chiral classes: ${\rm BDI}$, ${\rm DIII}$, ${\rm CII}$ and ${\rm CI}$} \label{sec:chiral} We now focus on the chiral real classes; by Assumption~\ref{S=TC}, the chiral symmetry operator $S$ will come from the combination of a $T$-symmetry with a $C$-symmetry. In what follows, we will always find a basis for ${\mathcal H}$ in which $S:=TC$ has the form~\eqref{eq:formS}.
In particular, Lemma~\ref{lem:formS} applies, and any $P \in {\rm X}(d,n,2n)$ for ${\rm X} \in \set{{\rm BDI}, {\rm DIII}, {\rm CII},{\rm CI}}$ will be of the form \begin{equation} \label{eq:Q} P(k) = \frac12 \begin{pmatrix} {\mathbb I}_n & Q(k) \\ Q(k)^* & {\mathbb I}_n \end{pmatrix} \quad \text{with} \quad Q(k) \in {\rm U}(n). \end{equation} The $T$-symmetry (or equivalently the $C$-symmetry) of $P(k)$ translates into a condition for $Q(k)$, of the form \begin{equation} \label{eq:FTQ} F_T(Q(k)) = Q(-k). \end{equation} With these remarks, we are able to formulate the analogue of Lemma~\ref{lem:NonCh1d} for real chiral classes. \begin{lemma}[Real chiral classes in $d=1$] \label{lem:Ch1d} Let ${\rm X} \in \set{{\rm BDI}, {\rm DIII}, {\rm CII},{\rm CI}}$. Then $P_0$ and $P_1$ are in the same connected component in ${\rm X}(1,n,2n)$ iff \begin{itemize} \item $P_0(0)$ and $P_1(0)$ are in the same connected component in ${\rm X}(0,n,2n)$, \item $P_0(\tfrac12)$ and $P_1(\tfrac12)$ are in the same connected component in ${\rm X}(0,n,2n)$, and \item there exists a choice of the above interpolations $P_s(0)$, $P_s(\tfrac12)$, $s \in [0,1]$, and therefore of the corresponding unitaries $Q_s(0)$, $Q_s(\tfrac12)$ as in~\eqref{eq:Q}, such that \[ {\rm Winding} (\partial\Omega_0, Q)=0, \] where $\Omega_0$ is the half-square defined in~\eqref{eq:Omega0}, and where $Q$ is the continuous family of unitaries defined on $\partial \Omega_0$ via the families \[ \left\{ Q_0(k)\right\}_{k\in [0, 1/2]}, \quad \left\{ Q_1(k)\right\}_{k\in [0, 1/2]}, \quad \left\{ Q_s(0)\right\}_{s\in [0, 1]}, \quad \text{and} \quad \left\{ Q_s(\tfrac12)\right\}_{s\in [0, 1]}. \] \end{itemize} \end{lemma} \begin{proof} As was already mentioned, the vanishing of the winding in the statement is equivalent to the existence of a continuous extension of the map $Q(k,s) \equiv Q_s(k)$ to $(k,s) \in \Omega_0$.
For $k \in [-\tfrac12,0]$ and $s \in [0,1]$, we define \[ Q_s(k) := F_T(Q_s(-k)), \] where $F_T$ is the functional relation in~\eqref{eq:FTQ}. Using~\eqref{eq:Q}, we can infer the existence of a family of projections $\set{P_s(k)}_{k \in {\mathbb T}^1}$ which depends continuously on $s \in [0,1]$, is in ${\rm X}(1,n,2n)$ for all $s \in [0,1]$, and restricts to $P_0$ and $P_1$ at $s=0$ and $s=1$, respectively. This family thus provides the required homotopy. \end{proof} \subsection{Class ${\rm BDI}$} \label{ssec:BDI} We start with class ${\rm BDI}$, characterized by even $T$- and $C$-symmetries. \begin{lemma} \label{lem:normalForm_BDI} Assume ${\rm BDI}(0, n, N)$ is non-empty. Then $N = 2n$, and there is a basis of ${\mathcal H}$ in which \begin{equation*} T = \begin{pmatrix} K_n & 0 \\ 0 & K_n \end{pmatrix}, \quad C = \begin{pmatrix} K_n & 0 \\ 0 & -K_n \end{pmatrix}, \quad \text{so that} \quad S = TC = \begin{pmatrix} {\mathbb I}_n & 0 \\ 0 & -{\mathbb I}_n \end{pmatrix}. \end{equation*} \end{lemma} \begin{proof} Let $P_0 \in {\rm BDI}(0, n, 2n)$, and let $\{\phi_1, \cdots, \phi_n\}$ be an orthonormal basis for ${\rm Ran} \, P_0$ such that $T \phi_j = \phi_j$ for all $1 \le j \le n$ (see Lemma~\ref{lem:formTeven}). We set \[ \forall 1 \le j \le n, \quad \phi_{n+j} = C \phi_j. \] Since $C$ is anti-unitary, and maps ${\rm Ran} \, P_0$ into ${\rm Ran} \, ({\mathbb I}-P_0)$, the family $\{ \phi_1, \cdots, \phi_{2n}\}$ is an orthonormal basis for ${\mathcal H}$. Since $T$ and $C$ commute, we have for all $1 \le j \le n$, \begin{equation} \label{eq:BDI_TC} T \phi_{n+j} = T C \phi_j = C T \phi_j = C \phi_j = \phi_{n+j}, \quad \text{and} \quad C \phi_{n+j} = C^2 \phi_j = \phi_j. \end{equation} Therefore in this basis the operators $T$ and $C$ take the form \[ T = \begin{pmatrix} K_n & 0 \\ 0 & K_n \end{pmatrix}, \quad C = \begin{pmatrix} 0 & K_n \\ K_n & 0 \end{pmatrix} \quad \text{and} \quad S = \begin{pmatrix} 0 & {\mathbb I}_n \\ {\mathbb I}_n & 0 \end{pmatrix}.
\] We now change basis via the matrix $U := \frac{1}{\sqrt{2}} \begin{pmatrix} {\mathbb I}_n & {\mathbb I}_n \\ {\mathbb I}_n & - {\mathbb I}_n \end{pmatrix}$ to obtain the result. \end{proof} Using Lemma~\ref{lem:normalForm_BDI}, one can describe a projection $P(k)$ by its corresponding unitary $Q(k)$. The condition $T^{-1} P(k) T = P(-k)$ reads \[ \overline{Q}(-k) = Q(k). \] So a projection $P$ is in ${\rm BDI}(0, n , 2n)$ iff the corresponding matrix $Q \in {\rm U}(n)$ satisfies $\overline{Q} = Q$, that is $Q \in {\rm O}(n)$. This proves that \[ \boxed{ {\rm BDI}(0, n, 2n) \simeq {\rm O}(n). } \] Recall that ${\rm O}(n)$ has two connected components, namely ${\rm det}^{-1}(\{1\})$ and ${\rm det}^{-1}(\{-1\})$. \begin{theorem}[Class ${\rm BDI}$] \label{th:BDI} The set ${\rm BDI}(d, n, N)$ is non-empty iff $N=2n$. \begin{itemize} \item Let ${\rm Index}_0^{{\rm BDI}} \colon {\rm BDI}(0,n,2n) \to {\mathbb Z}_2$ be the index map defined by \[ \forall P \in {\rm BDI}(0,n, 2n), \quad {\rm Index}_0^{{\rm BDI}} (P) = {\rm det}(Q). \] Then $P_0$ is homotopic to $P_1$ in ${\rm BDI}(0,n,2n)$ iff ${\rm Index}_0^{{\rm BDI}}(P_0) = {\rm Index}_0^{{\rm BDI}}(P_1)$. \item There is an index map ${\rm Index}_1^{{\rm BDI}} \colon {\rm BDI}(1,n,2n) \to {\mathbb Z}_2 \times {\mathbb Z}$ such that $P_0$ is homotopic to $P_1$ in ${\rm BDI}(1,n,2n)$ iff ${\rm Index}_1^{{\rm BDI}}(P_0) = {\rm Index}_1^{{\rm BDI}}(P_1)$. \end{itemize} \end{theorem} \begin{proof} Recall that ${\rm SO}(n)$ is path-connected, see the proof of Theorem~\ref{thm:AI}. The complement ${\rm O}(n) \setminus {\rm SO}(n)$ is in bijection with ${\rm SO}(n)$, obtained by multiplying each orthogonal matrix with determinant $-1$ by the matrix ${\rm diag}(1,1,\ldots,1,-1)$. This proves the first part. \medskip We now focus on dimension $d = 1$. Let $P(k)$ be in ${\rm BDI}(1, n, 2n)$, and let $Q(k)$ be the corresponding unitary.
Let $\alpha \colon [0, \tfrac12] \to {\mathbb R}$ be a continuous map such that \[ \forall k \in [0, \tfrac12], \quad {\rm det} \, Q(k) = \re^{ \ri \alpha(k)}. \] Since $Q(0)$ and $Q(\tfrac12)$ are in ${\rm O}(n)$, we have ${\rm det} \, Q(0) \in \{ \pm 1\}$ and ${\rm det} \, Q(\tfrac12) \in \{ \pm 1\}$. We define \[ {\mathcal W}^{1/2} (P) := {\mathcal W}^{1/2} (Q) := \dfrac{1}{\pi} \left( \alpha(\tfrac12) - \alpha(0) \right) \quad \in {\mathbb Z}. \] The number ${\mathcal W}^{1/2} (Q) \in {\mathbb Z}$ counts the number of {\em half turns} that the determinant winds through as $k$ goes from $0$ to $\tfrac12$. We call this map the {\em semi-winding}. We finally define the index map ${\rm Index}_1^{{\rm BDI}} \colon {\rm BDI}(1,n,2n) \to {\mathbb Z}_2 \times {\mathbb Z}$ by \[ \forall P \in {\rm BDI}(1, n, 2n), \quad {\rm Index}_1^{{\rm BDI}}(P) := \left( {\rm det} \, Q(0), \ {\mathcal W}^{1/2}(P) \right) \quad \in {\mathbb Z}_2 \times {\mathbb Z}. \] Let $P_0, P_1$ be in ${\rm BDI}(1, n, 2n)$ such that ${\rm Index}_1^{{\rm BDI}}(P_0) = {\rm Index}_1^{{\rm BDI}}(P_1)$, and let us construct a homotopy between $P_0$ and $P_1$. First, we have ${\rm det} \, Q_0(0) = {\rm det} \, Q_1(0)$, and, since ${\mathcal W}^{1/2}(P_0) = {\mathcal W}^{1/2}(P_1)$, we also have ${\rm det} \, Q_0(\tfrac12) = {\rm det} \, Q_1(\tfrac12)$. Let $Q_s(0)$ be a path in ${\rm O}(n)$ connecting $Q_0(0)$ and $Q_1(0)$, and let $Q_s(\tfrac12)$ be a path connecting $Q_0(\tfrac12)$ and $Q_1(\tfrac12)$. This defines a continuous family of unitaries on the boundary of the half-square $\Omega_0 := [0, \tfrac12] \times [0,1]$. Since $Q_s(0)$ and $Q_s(\tfrac12)$ are in ${\rm O}(n)$ for all $s$, their determinants are constant, taking values in $\{ \pm 1\}$, and they do not contribute to the winding of the determinant of this unitary-valued map. So the winding along the boundary equals \[ {\rm Winding}(\partial \Omega_0, Q) = {\mathcal W}^{1/2}(P_0) - {\mathcal W}^{1/2}(P_1) = 0.
\] Lemma~\ref{lem:Ch1d} then allows us to conclude the proof. \end{proof} \subsection{Class ${\rm CI}$} \label{ssec:CI} In class ${\rm CI}$, the $T$-symmetry is even ($T^2 = {\mathbb I}_{\mathcal H}$) while the $C$-symmetry is odd ($C^2 = - {\mathbb I}_{\mathcal H}$). \begin{lemma} Assume ${\rm CI}(0, n, N)$ is non-empty. Then $N=2n$, and there is a basis of ${\mathcal H}$ in which \begin{equation} \label{eq:form_CI} T = \begin{pmatrix} 0 & K_n \\ K_n & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & -K_n \\ K_n & 0 \end{pmatrix} \quad \text{so that} \quad S = TC = \begin{pmatrix} {\mathbb I}_n & 0 \\ 0 & - {\mathbb I}_n \end{pmatrix}. \end{equation} \end{lemma} \begin{proof} The proof is similar to the one of Lemma~\ref{lem:normalForm_BDI}. This time, since $C^2 = - {\mathbb I}$ and $TC = - CT$, we have, instead of~\eqref{eq:BDI_TC}, \[ T \phi_{n+j}= T C \phi_{j} = - CT \phi_{j} = - C \phi_j = - \phi_{n+j}, \quad \text{and} \quad C \phi_{n+j} = C^2 \phi_{j} = - \phi_j. \qedhere \] \end{proof} Using the normal form~\eqref{eq:form_CI} and arguing as for class ${\rm BDI}$, we describe a projection $P(k)$ by its corresponding unitary $Q(k)$. The condition $T^{-1} P(k) T = P(-k)$ gives \[ Q(-k)^T = Q(k). \] In particular, if $P \in {\rm CI}(0, n, 2n)$, the corresponding $Q$ satisfies $Q^T = Q$. In Corollary~\ref{cor:AutonneTagaki} below, we prove that a matrix $Q$ is in ${\rm U}(n) \cap {\mathcal S}_n^{{\mathbb R}}({\mathbb C})$ iff it is of the form \[ Q = V^T V, \quad \text{for some} \quad V \in {\rm U}(n). \] In addition, we have $Q = V_0^T V_0 = V_1^T V_1$ with $V_0, V_1 \in {\rm U}(n)$ iff $V_0 V_1^* \in {\rm O}(n)$. This proves that \[ \boxed{ {\rm CI}(0, n, 2n) \simeq {\rm U}(n) \cap {\mathcal S}_n^{{\mathbb R}}({\mathbb C}) \simeq {\rm U}(n) / {\rm O}(n).} \] \begin{theorem}[Class ${\rm CI}$] The set ${\rm CI}(d, n, N)$ is non-empty iff $N=2n$. It is path-connected both for $d=0$ and for $d=1$.
\end{theorem} \begin{proof} Given two matrices $Q_0$, $Q_1$ in ${\rm U}(n) \cap {\mathcal S}_n^{{\mathbb R}}({\mathbb C})$, we can connect them in ${\rm U}(n) \cap {\mathcal S}_n^{{\mathbb R}}({\mathbb C})$ by connecting the corresponding $V_0$ and $V_1$ in ${\rm U}(n)$. This proves that ${\rm CI}(0, n, 2n)$ is path-connected. \medskip We now focus on the case $d = 1$. Let $P_0(k)$ and $P_1(k)$ be two families in ${\rm CI}(1,n,2n)$, with corresponding unitaries $Q_0$ and $Q_1$. Let $V_0(0), V_1(0) \in {\rm U}(n)$ be such that \[ Q_0(0) = V_0(0)^T V_0(0), \quad \text{and} \quad Q_1(0) = V_1(0)^T V_1(0). \] Let $V_s(0)$ be a homotopy between $V_0(0)$ and $V_1(0)$ in ${\rm U}(n)$, and set \[ Q_s(0) := V_s(0)^T V_s(0). \] Then, $Q_s(0)$ is a homotopy between $Q_0(0)$ and $Q_1(0)$ in ${\rm CI}(0,n, 2n)$. We construct similarly a homotopy between $Q_0(\tfrac12)$ and $Q_1(\tfrac12)$ in ${\rm CI}(0,n, 2n)$. This gives a path of unitaries on the boundary of the half-square $\Omega_0$. We can extend this family inside $\Omega_0$ iff the winding of the determinant along the boundary loop vanishes. \medskip Let $W \in {\mathbb Z}$ be this winding. There is no reason {\em a priori} to have $W = 0$. However, if $W \neq 0$, we claim that we can cure the winding by modifying the path $V_s(0)$ connecting $V_0(0)$ and $V_1(0)$. Indeed, setting \[ \widetilde{V}_s(0) = {\rm diag} (\re^{ \ri W\pi s/2}, 1, 1, \cdots, 1) V_s(0), \quad \text{and} \quad \widetilde{Q}_s(0) := \widetilde{V}_s(0)^T \widetilde{V}_s(0), \] we can check that the family $\widetilde{Q}_s(0)$ also connects $Q_0(0)$ and $Q_1(0)$ in ${\rm CI}(0,n ,2n)$, and satisfies \[ {\rm det} \ \widetilde{Q}_s(0) = \re^{ \ri W \pi s} \, {\rm det} \ Q_s(0). \] This cures the winding, and Lemma~\ref{lem:Ch1d} allows us to conclude that the class ${\rm CI}(1, n, 2n)$ is path-connected.
\end{proof} \subsection{Class ${\rm DIII}$} \label{ssec:DIII} The class ${\rm DIII}$ mirrors ${\rm CI}$, since here the $T$-symmetry is odd ($T^2 = -{\mathbb I}_{\mathcal H}$) while the $C$-symmetry is even ($C^2 = {\mathbb I}_{\mathcal H}$). This class has been studied {\em e.g.} in~\cite{deNittis2021cohomology}. \begin{lemma} \label{lem:normalForm_DIII} Assume ${\rm DIII}(0, n, N)$ is non-empty. Then $n = 2m$ is even, and $N = 2n = 4m$ is a multiple of $4$. There is a basis of ${\mathcal H}$ in which \[ T = \begin{pmatrix} 0 & K_n J_n \\ K_n J_n & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & K_n J_n \\ - K_n J_n & 0 \end{pmatrix}, \quad \text{and} \quad S = \begin{pmatrix} {\mathbb I}_{n} & 0 \\ 0 & -{\mathbb I}_{n} \end{pmatrix}. \] \end{lemma} \begin{proof} Let $P_0 \in {\rm DIII}(0, n, 2n)$. Since $T$ is anti-unitary, leaves ${\rm Ran} \, P_0$ invariant, and satisfies $T^2 = - {\mathbb I}_{{\rm Ran} \, P_0}$ there, one can apply Lemma~\ref{lem:formTodd} to the restriction of $T$ on ${\rm Ran} \, P_0$. We first deduce that $n = 2m$ is even, and that there is a basis for ${\rm Ran } \, P_0$ of the form $\set{\psi_1, \ldots, \psi_{2m}}$, with $\psi_{m+j} = T \psi_j$. Once again we set $\psi_{2m+j} := C \psi_j$. This time, we have $TC = - CT$, so, in the basis $\set{\psi_1, \ldots, \psi_{4m}}$, we have \[ T = \begin{pmatrix} K_n J_n & 0 \\ 0 & -K_n J_n \end{pmatrix}, \quad C= \begin{pmatrix} 0 & K_n \\ K_n & 0 \end{pmatrix} \quad \text{hence} \quad S=TC = \begin{pmatrix} 0 & J_n \\ -J_n & 0 \end{pmatrix}. \] A computation reveals that \[ U^* \begin{pmatrix} 0 & J_n \\ -J_n & 0 \end{pmatrix} U = \begin{pmatrix} {\mathbb I}_n & 0 \\ 0 & - {\mathbb I}_n \end{pmatrix}, \quad \text{with} \quad U := \frac{1}{\sqrt{2}} \begin{pmatrix} {\mathbb I}_m & 0 & -{\mathbb I}_m & 0 \\ 0 & - {\mathbb I}_m & 0 & {\mathbb I}_m \\ 0 & {\mathbb I}_m & 0 & {\mathbb I}_m \\ {\mathbb I}_m & 0 & {\mathbb I}_m & 0 \end{pmatrix}, \] and that $U$ is unitary.
With this change of basis, we obtain the result. \end{proof} In this basis, the relation $T^{-1} P(k) T = P(-k)$ holds iff the corresponding $Q$ satisfies \[ J_n Q^T(-k) J_n = - Q(k). \] In dimension $d=0$, the condition becomes $J_n Q^T J_n = - Q$, which can be equivalently rewritten as \[ A^T = - A, \quad \text{with} \quad A := Q J_n. \] The matrix $A$ is unitary and skew-symmetric, $A \in {\rm U}(n) \cap {\mathcal A}_{n}^{\mathbb R}({\mathbb C})$. In particular, the Pfaffian of $A$ is well-defined. In Corollary~\ref{cor:O(d)capA(d)} below, we recall that a matrix $A$ is in ${\rm U}(n) \cap {\mathcal A}_n^{\mathbb R}({\mathbb C})$ iff it is of the form \[ A = V^T J_n V, \quad \text{with} \quad V \in {\rm U}(n). \] In addition, we have $A = V_0^T J_n V_0 = V_1^T J_n V_1$ with $V_0, V_1 \in {\rm U}(n)$ iff $V_0 V_1^* \in {\rm Sp}(m)$. Therefore \[ \boxed{ {\rm DIII}(0, 2m, 4m) \simeq {\rm U}(2m) \cap {\mathcal A}_{2m}^{\mathbb R}({\mathbb C}) \simeq {\rm U}(2m) / {\rm Sp}(m). } \] \begin{theorem}[Class ${\rm DIII}$] The set ${\rm DIII}(d, n, N)$ is non-empty iff $n=2m \in 2{\mathbb N}$ and $N=2n=4m$. \begin{itemize} \item The set ${\rm DIII}(0,2m,4m)$ is path-connected. \item There is a map ${\rm Index}_1^{{\rm DIII}} \colon {\rm DIII}(1,2m,4m) \to {\mathbb Z}_2$ such that $P_0$ is homotopic to $P_1$ in ${\rm DIII}(1,2m,4m)$ iff ${\rm Index}_1^{{\rm DIII}}(P_0) = {\rm Index}_1^{{\rm DIII}}(P_1)$. \end{itemize} \end{theorem} The index ${\rm Index}_1^{{\rm DIII}}$ is defined below in~\eqref{eq:def:Index_DIII_1}. It matches the usual Teo-Kane formula in~\cite[Eqn. (4.27)]{Teo_2010}. \begin{proof} For the first part, it is enough to connect the corresponding matrices $V$ in ${\rm U}(n)$, which is path-connected. Let us focus on the case $d = 1$. Let $P(k) \in {\rm DIII}(1, 2m, 4m)$ with corresponding matrices $Q(k) \in {\rm U}(2m)$ and $A(k) := Q(k) J_{2m} \in {\rm U}(2m) \cap \mathcal{A}^{\mathbb R}_{2m}({\mathbb C})$.
Let $\alpha \colon [0, \tfrac12] \to {\mathbb R}$ be a continuous phase such that \[ \forall k \in [0, \tfrac12], \quad {\rm det} \ A(k) = \re^{ \ri \alpha(k)}. \] For $k_0 \in \{ 0, 1/2 \}$, $A(k_0)$ is anti-symmetric, so we can define its Pfaffian, which satisfies ${\rm Pf}( A(k_0))^2 = {\rm det} \ A(k_0) = \re^{ \ri \alpha(k_0)}$. Taking square roots shows that there are signs $\sigma_0, \sigma_{1/2} \in \{ \pm 1 \}$ so that \[ {\rm Pf} \ A(0) = \sigma_0 \re^{ \ri \tfrac12 \alpha(0)}, \quad \text{and} \quad {\rm Pf} \ A(\tfrac12) = \sigma_{1/2} \re^{ \ri \tfrac12 \alpha(\tfrac12)}. \] We define the Index as the product of the two signs $\sigma_0 \cdot \sigma_{1/2}$. Explicitly, \begin{equation} \label{eq:def:Index_DIII_1} {\rm Index}_1^{\rm DIII}(P) := \dfrac{\re^{ \ri \tfrac12 \alpha(0)}} {{\rm Pf} \, A (0)} \cdot \dfrac{\re^{ \ri \tfrac12 \alpha(\frac12)}} {{\rm Pf} \, A (\frac12)} \quad \in \{ \pm 1\}. \end{equation} Note that this index is independent of the choice of the lifting $\alpha(k)$. Concretely, this index is $1$ if, by following the continuous map $\re^{\ri \tfrac12 \alpha(k)}$, which is a continuous branch of $\sqrt{{\rm det} \, A(k)}$, one goes from ${\rm Pf} \, A(0)$ to ${\rm Pf} \, A(\tfrac12)$, and is $-1$ if one goes from ${\rm Pf} \, A(0)$ to $-{\rm Pf} \, A(\tfrac12)$. Let us prove that if $P_0, P_1 \in {\rm DIII}(1, 2m, 4m)$, then ${\rm Index}_1^{\rm DIII}(P_0) = {\rm Index}_1^{\rm DIII}(P_1)$ iff there is a homotopy between the two maps. Let $V_0(0), V_1(0) \in {\rm U}(n)$ be such that \[ A_0(0) = V_0(0)^T J_n V_0(0), \quad \text{and} \quad A_1(0) = V_1(0)^T J_n V_1(0). \] Let $V_s(0)$ be a homotopy between $V_0(0)$ and $V_1(0)$ in ${\rm U}(n)$, and set \[ A_s(0) := V_s(0)^T J_n V_s(0). \] This gives a homotopy between $A_0(0)$ and $A_1(0)$ in ${\rm DIII}(0, n, 2n)$. We construct similarly a path $A_s(\tfrac12)$ connecting $A_0(\tfrac12)$ and $A_1(\tfrac12)$ in ${\rm DIII}(0, n, 2n)$.
Define continuous phase maps $\alpha_0(k)$, $\widetilde{\alpha_s}(\tfrac12)$, $\alpha_1(k)$ and $\widetilde{\alpha_s}(0)$ such that \[ \forall k \in [0, \tfrac12], \quad {\rm det} \ A_0(k) = \re^{ \ri \alpha_0(k)} \quad \text{and} \quad {\rm det} \ A_1(k) = \re^{ \ri \alpha_1(k)}, \] while \[ \forall s \in [0, 1], \quad {\rm det} \ A_s(0) = \re^{ \ri \widetilde{\alpha_s}(0)} \quad \text{and} \quad {\rm det} \ A_s(\tfrac12) = \re^{ \ri \widetilde{\alpha_s}(\tfrac12)}, \] together with the continuity conditions \[ \alpha_0(k=\tfrac12) = \widetilde{\alpha_{s = 0}}(\tfrac12), \quad \widetilde{\alpha_{s=1}}(\tfrac12) = \alpha_1(k = \tfrac12), \quad \text{and} \quad \alpha_1(k=0) = \widetilde{\alpha_{s = 1}}(0). \] With such a choice, the winding of ${\rm det}(A)$ along the loop $\partial \Omega_0$ is \[ W := \dfrac{1}{2 \pi} \left[ \widetilde{\alpha_0}(0) - \alpha_0(0) \right] \in {\mathbb Z}. \] We claim that $W \in 2 {\mathbb Z}$ is even iff ${\rm Index}_1^{\rm DIII}(P_0) = {\rm Index}_1^{\rm DIII}(P_1)$. The idea is to follow a continuation of the phase of $\sqrt{ {\rm det} \, A}$ along the boundary. For $j \in \{ 0, 1\}$, we denote by $\varepsilon_j := {\rm Index}_1^{\rm DIII}(P_j)$ the index for the sake of clarity. By definition of the Index, we have \[ \dfrac{ \re^{ \ri \frac12 \alpha_0(\tfrac12)} } { {\rm Pf} \, A_0(\tfrac12)} = \dfrac{ \re^{ \ri \frac12 \alpha_0(0)} } { {\rm Pf} \, A_0(0) } \, \varepsilon_0, \quad \text{and, similarly, } \quad \dfrac{ \re^{ \ri \frac12 \alpha_1(\tfrac12)} } { {\rm Pf} \, A_1(\tfrac12)} = \dfrac{ \re^{ \ri \frac12 \alpha_1(0)} } { {\rm Pf} \, A_1(0) } \, \varepsilon_1. \] On the segment $(k,s) = \{ \tfrac12 \} \times [0, 1]$, the map $s \mapsto {\rm Pf} \, A_s(\tfrac12)$ is continuous, and is a continuous representation of the square root of the determinant.
So \[ \dfrac{\re^{ \ri \tfrac12 \widetilde{\alpha_0}(\tfrac12)} }{{\rm Pf} \, A_0(\tfrac12)} = \dfrac{\re^{ \ri \tfrac12 \widetilde{\alpha_1}(\tfrac12)} }{{\rm Pf} \, A_1(\tfrac12)}, \quad \text{and similarly,} \quad \dfrac{\re^{ \ri \tfrac12 \widetilde{\alpha_0}(0)} }{{\rm Pf} \, A_0(0)} = \dfrac{\re^{ \ri \tfrac12 \widetilde{\alpha_1}(0)} }{{\rm Pf} \, A_1(0)}. \] Gathering all expressions, and recalling the continuity conditions, we obtain \[ \dfrac{ \re^{ \ri \tfrac12 \widetilde{\alpha_0}(0)} } { {\rm Pf} \, A_0(0) } = \varepsilon_0 \varepsilon_1 \dfrac{ \re^{ \ri \tfrac12 \alpha_0(0)} } { {\rm Pf} \, A_0(0) }, \quad \text{so} \quad \re^{ \ri \pi W} = \varepsilon_0 \varepsilon_1. \] This proves our claim. If the indices differ, then we have $\varepsilon_0 \varepsilon_1 = - 1$, hence $W$ is odd. In particular, $W \neq 0$, and one cannot find a homotopy in this case. Assume now that the two indices are equal, so that $\varepsilon_0 \varepsilon_1 = 1$ and $W \in 2 {\mathbb Z}$ is even. There is no reason {\em a priori} to have $W = 0$, but we can cure the winding. Indeed, we set \[ \widetilde{A}_s(0) := \widetilde{V}_s(0)^T J_n \widetilde{V}_s(0), \quad \text{with} \quad \widetilde{V}_s(0) := {\rm diag} ( \re^{ \ri \pi W s}, 1, \cdots, 1) V_s(0). \] The family $\widetilde{A}_s(0)$ is a continuous family connecting $A_0(0)$ and $A_1(0)$ in ${\rm DIII}(0, n, 2n)$. In addition, we have ${\rm det} \, \widetilde{A}_s(0) = \re^{ 2 \ri \pi W s } {\rm det} \, A_s(0)$, so this new interpolation cures the winding. Invoking Lemma~\ref{lem:Ch1d} concludes the proof. \end{proof} \subsection{Class ${\rm CII}$} \label{ssec:CII} Finally, it remains to study the class ${\rm CII}$, in which we have both $T^2 = -{\mathbb I}_{\mathcal H}$ and $C^2 = -{\mathbb I}_{\mathcal H}$. \begin{lemma} Assume ${\rm CII}(0, n, N)$ is non-empty. Then $n = 2m$ is even, and $N = 2n = 4m$ is a multiple of $4$.
There is a basis of ${\mathcal H}$ in which \[ T = \begin{pmatrix} -K_n J_n & 0 \\ 0 & - K_n J_n \end{pmatrix}, \quad C = \begin{pmatrix} K_n J_n & 0 \\ 0 & - K_n J_n \end{pmatrix}, \quad \text{and} \quad S = \begin{pmatrix} {\mathbb I}_{n} & 0 \\ 0 & -{\mathbb I}_{n} \end{pmatrix}. \] \end{lemma} \begin{proof} The proof is similar to the one of Lemma~\ref{lem:normalForm_DIII}. Details are left to the reader. \end{proof} In this basis, the condition $T^{-1} P(k) T = P(-k)$ reads, in terms of $Q$, \[ J_n \overline{Q}(k) J_n = - Q(-k), \quad \text{or equivalently} \quad Q(k)^T J_n Q(-k) = J_n. \] In particular, in dimension $d = 0$, we have $Q \in {\rm U}(2m) \cap {\rm Sp}(2m; {\mathbb C}) = {\rm Sp}(m)$. So \[ \boxed{ {\rm CII}(0, 2m, 4m) \simeq {\rm Sp}(m).} \] \begin{theorem}[Class ${\rm CII}$] The set ${\rm CII}(d, n, N)$ is non-empty iff $n=2m \in 2{\mathbb N}$ and $N=2n=4m$. \begin{itemize} \item The set ${\rm CII}(0,2m,4m)$ is path-connected. \item Define the map ${\rm Index}_1^{{\rm CII}} \colon {\rm CII}(1,2m,4m) \to {\mathbb Z}$ by \[ \forall P \in {\rm CII}(1, 2m, 4m), \quad {\rm Index}_1^{{\rm CII}}(P) := {\rm Winding}({\mathbb T}^1, Q). \] Then $P_0$ is homotopic to $P_1$ in ${\rm CII}(1,2m,4m)$ iff ${\rm Index}_1^{{\rm CII}}(P_0) = {\rm Index}_1^{{\rm CII}}(P_1)$. \end{itemize} \end{theorem} \begin{proof} We already proved in Theorem~\ref{thm:AII} that ${\rm Sp}(m)$ is path-connected, which yields the first part. For the $d = 1$ case, we first note that if $Q \in {\rm Sp}(m)$, we have $Q^T J_n Q = J_n$. Taking Pfaffians, and using ${\rm Pf}(Q^T J_n Q) = {\rm det}(Q) \, {\rm Pf}(J_n)$, we get ${\rm det}(Q) = 1$. As in the proof of Theorem~\ref{th:BDI}, we deduce that any path $Q_s(0)$ connecting $Q_0(0)$ and $Q_1(0)$ in ${\rm Sp}(m)$ has constant determinant equal to $1$, hence does not contribute to the winding. The proof is then similar to the one of Theorem~\ref{th:BDI}. \end{proof}
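The role played by the determinant in these winding arguments can be illustrated by a small numerical experiment of our own (not part of the proofs), assuming the identification ${\rm Sp}(1) = {\rm U}(2) \cap {\rm Sp}(2;{\mathbb C})$: along a loop of symplectic unitaries the determinant is constantly $1$, so its winding vanishes, while a generic loop in ${\rm U}(n)$ can wind.

```python
import numpy as np

# Our own illustration (not from the text): the winding number of a
# closed curve of determinants is obtained by accumulating the
# principal-value phase increments of det Q(t).
def winding(dets):
    jumps = np.diff(np.angle(dets))
    jumps = (jumps + np.pi) % (2 * np.pi) - np.pi   # principal increments
    return int(round(jumps.sum() / (2 * np.pi)))

t = np.linspace(0.0, 1.0, 401)
# Loop in Sp(1) = U(2) ∩ Sp(2; C): Q(t) = diag(e^{2πit}, e^{-2πit}),
# so det Q(t) = 1 for all t.
det_sp = np.exp(2j * np.pi * t) * np.exp(-2j * np.pi * t)
# Generic loop in U(2): Q(t) = diag(e^{2πit}, 1), with det Q(t) = e^{2πit}.
det_u = np.exp(2j * np.pi * t)

assert winding(det_sp) == 0   # symplectic loop: no winding
assert winding(det_u) == 1    # generic unitary loop winds once
```

This is exactly why, in the proofs above, the interpolating paths at $k=0$ and $k=\tfrac12$ never contribute to the boundary winding.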
\section{Introduction} \label{intro} One of the most exciting developments in the recent history of string theory has been the discovery of the holographic AdS/CFT correspondence \cite{renatoine,maldapasto,townrenatoi,serfro1,serfro2,serfro3}: \begin{equation} \label{holography} \mbox{SCFT on $\partial (AdS_{p+2})$} \quad \leftrightarrow \quad \mbox{SUGRA on $ AdS_{p+2}$} \end{equation} between a {\it $d=p+1$ quantum superconformal field theory} on the boundary of anti de Sitter space and {\it classical Kaluza Klein supergravity} \cite{Kkidea,freurub,round7a,squas7a,osp48}, \cite{kkwitten,noi321,spectfer,univer,freedmannicolai,multanna}, \cite{englert,biran,casher}, \cite{dafrepvn,dewit1,duffrev,castromwar,gunawar,gunay2} emerging from compactification of either superstrings or M-theory on the product space \begin{equation} AdS_{p+2} \, \times \, X^{D-p-2}\,, \label{adsX} \end{equation} where $X^{D-p-2}$ is a $D-p-2$--dimensional compact Einstein manifold. \par The present paper deals with the case: \begin{equation} p=2 \quad \leftrightarrow \quad d=3 \label{choice} \end{equation} and studies two issues: \begin{enumerate} \item The relation between the description of unitary irreducible representations of the $Osp(2 \vert 4)$ superalgebra seen as off--shell conformal superfields in $d=3$ or as on--shell particle supermultiplets in $d=4$ anti de Sitter space. Such double interpretation of the same abstract representations is the algebraic core of the AdS/CFT correspondence. \item The generic component form of an ${\cal N}=2$ gauge theory in three space-time dimensions containing the supermultiplet of an arbitrary gauge group, an arbitrary number of scalar multiplets in arbitrary representations of the gauge group and with generic superpotential interactions. 
This is also an essential item in the discussion of the AdS/CFT correspondence since the superconformal field theory on the boundary is to be identified with a superconformal infrared fixed point of a non-abelian gauge theory of such a type. \end{enumerate} Before presenting our results we put them into perspective through the following introductory remarks. \subsection{The conceptual environment and our goals} The logical path connecting the two partners in the above correspondence (\ref{holography}) starts by considering a special instance of classical $p$--brane solution of $D$--dimensional supergravity characterized by the absence of a dilaton ($a=0$ in standard $p$--brane notations) and by the following relation: \begin{equation} \frac{d {\widetilde d} }{D-2} = 2 \label{specrel} \end{equation} between the dimension $d \equiv p+1$ of the $p$--brane world volume and that of its magnetic dual $ {\widetilde d} \equiv D-p-3$. Such a solution is given by the following metric and $(p+2)$--form field strength: \begin{eqnarray} ds^2_{brane} & =& \left(1+\frac{k}{ r^{\widetilde d} } \right)^{- \frac { { \widetilde d}} { (D-2)}} \, dx^m \otimes dx^n \, \eta_{mn} - \left(1+\frac{k}{ r^{\widetilde d} } \right)^{ \frac { {d}} { (D-2)}} \, \left ( dr^2 + r^2 \, ds^2_{X}(y) \right )\,,\nonumber \\ F \equiv dA &= &\lambda (-)^{p+1}\epsilon_{m_1\dots m_{p+1}} dx^{m_1} \wedge \dots \wedge dx^{m_{p+1}} \wedge \, dr \, \left(1+\frac{k}{r^{\widetilde d}}\right )^{-2} \, \frac{1}{r^{{\widetilde d}+1}}\,. \label{elem} \end{eqnarray} In eq. (\ref{elem}) $ds^2_{X}(y)$ denotes the Einstein metric on the compact manifold $X^{D-p-2}$ and the $D$ coordinates have been subdivided into the following subsets: \begin{itemize} \item $x^m$ $(m=0,\dots ,p)$ are the coordinates on the $p$--brane world--volume, \item $r= \mbox{radial}$ $\oplus$ $y^x =\mbox{angular on $X^{D-p-2}$}$ $(x=D- d+2,\dots ,D)$ are the coordinates transverse to the brane.
\end{itemize} In the limit $r \to 0$ the classical brane metric $ds^2_{brane}$ approaches the following metric: \begin{equation} \begin{array}{ccccc} ds^2 & = & \underbrace{\frac{r^{2 {\widetilde d}/d}}{k^{2/d}} \, dx^m \otimes dx^n \,\eta_{m n }-\frac{k^{2/{\widetilde d}}}{r^2} \, dr^2} & - &\underbrace{\phantom{\frac{1}{1^{1}}} k^{2/{\widetilde d}} \, ds^2_X(y) \phantom{\frac{1}{1^{1}}}} \\ \null & \null & AdS_{p+2} & \times & X^{D-p-2}\,,\\ \end{array} \label{rto0} \end{equation} which is easily identified as the standard metric on the product space $AdS_{p+2} \times X^{D-p-2}$. Indeed it suffices to set: \begin{equation} \rho=r^{{\widetilde d}/d} \, k^{2(d+{\widetilde d})/d^2} \label{rearra} \end{equation} to obtain: \begin{eqnarray} ds^2_{brane} & \stackrel{r \to 0}{\approx} & k^{2/{\widetilde d}} \,\left(ds^2_{AdS} -ds^2_X(y)\right)\,, \label{produ}\\ ds^2_{AdS} & =& \left( \rho^2 \, dx^m \otimes dx^n \,\eta_{m n }- \frac{d\rho^2}{\rho^2} \right)\,, \label{adsmet} \end{eqnarray} where (\ref{adsmet}) is the canonical form of the anti de Sitter metric in solvable coordinates \cite{g/hpape}. \par On the other hand, for $r \to \infty$ the brane metric approaches the limit: \begin{equation} \begin{array}{ccccc} ds^2_{brane} &\stackrel{r\to \infty}{\approx} & \underbrace{dx^m \otimes dx^n \,\eta_{m n }} & - & \underbrace{dr^2 +r^2 \, ds^2_{X}(y)}\\ \null & \null & M_{p+1} & \null & C\left (X^{D-p-2}\right)\,, \end{array} \end{equation} where $M_{p+1}$ denotes Minkowski space in $p+1$ dimensions while $C\left (X^{D-p-2}\right)$ denotes the $(D-p-1)$--dimensional {\it metric cone} over the {\it horizon manifold} $X^{D-p-2}$.
The key point is that (compactified) Minkowski space can also be identified with the boundary of anti de Sitter space: \begin{equation} \partial \left( AdS_{p+2} \right) \equiv M_{p+1}\,, \label{identifia} \end{equation} so that we can relate supergravity on $AdS_{p+2} \times X^{D-p-2}$ to the {\it gauge theory} of a stack of $p$--branes placed in such a way as to have the metric cone as transverse space (see fig.~\ref{cono1}) \begin{equation} C\left (X^{D-p-2}\right) = \mbox{transverse directions to the branes.} \label{transverse} \end{equation} \iffigs \begin{figure} \caption{The metric cone $C\left (X^{D-p-2}\right)$ is transverse to the stack of branes} \begin{center} \label{cono1} \epsfxsize = 10cm \epsffile{cono1.eps} \vskip 0.2cm \hskip 2cm \unitlength=1.1mm \end{center} \end{figure} \fi \par According to current lore on brane dynamics \cite{branedyn1,branedyn2,branedyn3,branedyn4}, if the metric cone $C(X^{D-p-2})$ can be reinterpreted as some suitable resolution of an orbifold singularity \cite{c/ga1,c/ga2,c/ga3,c/ga4}: \begin{eqnarray} C(X^{D-p-2}) &=& \mbox{resolution of } \, \frac{R^{D-p-1}}{\Gamma}\,,\nonumber\\ \Gamma & = & \mbox{discrete group}, \label{resolut} \end{eqnarray} then there are means to identify a {\it gauge theory} in $M_{p+1}$ Minkowski space with supersymmetry ${\cal N}$ determined by the holonomy of the metric cone, whose structure and field content in the ultraviolet limit are determined by the orbifold $ {R^{D-p-1}}/{\Gamma}$. In the infrared limit, corresponding to the resolution $C(X^{D-p-2})$, such a gauge theory has a superconformal fixed point and defines the superconformal field theory $SCFT_{p+1}$ dual to supergravity on $AdS_{p+2} \times X^{D-p-2}$.
\par In this general conceptual framework there are three main interesting cases where the basic relation (\ref{specrel}) is satisfied: \begin{equation} \begin{array}{cccl} p=3 & D=10 & AdS_5 \times X^5 &\mbox{ $D3$--brane of type IIB theory} \\ p=2 & D=11 & AdS_4 \times X^7 & \mbox{ $M2$--brane of M--theory} \\ p=5 & D=11 & AdS_7 \times X^4 &\mbox{ $M5$--brane of M--theory} \ \end{array} \label{casi} \end{equation} The present paper focuses on the case of $M2$ branes and on the general features of ${\cal N}=2$ superconformal field theories in $d=3$. Indeed the final goal we are pursuing in a series of papers is that of determining the three--dimensional superconformal field theories dual to compactifications of D=11 supergravity on $AdS_4 \times X^7$, where the non-spherical horizon $X^7$ is chosen to be one of the four homogeneous Sasakian $7$--manifolds $G/H$:
\subsection{The specific problems addressed and solved in this paper} In the present paper we do not address the question of constructing the algebraic conifolds defined by the metric cones $C(G/H)$ nor the identification of the corresponding orbifolds. Here we do not discuss the specific construction of the superconformal field theories associated with the horizons (\ref{sasaki}) which is postponed to future publications \cite{cesar}: we rather consider a more general problem that constitutes an intermediate and essential step for the comparison between Kaluza Klein spectra and superconformal field theories. As anticipated above, what we need is a general translation vocabulary between the two descriptions of $Osp({\cal N} \vert 4)$ as the superisometry algebra in anti de Sitter ${\cal N}$--extended $d=4$ superspace and as a superconformal algebra in $d=3$. In order to make the comparison between superconformal field theories and Kaluza Klein results explicit, such a translation vocabulary is particularly essential at the level of unitary irreducible representations (UIR). On the Kaluza Klein side the UIR.s appear as supermultiplets of on--shell particle states characterized by their square mass $m^2$ which, through well established formulae, is expressed as a quadratic form: \begin{equation} m^{2} = c \, (E_0-a)(E_0 -b) \label{massformu} \end{equation} in the {\it energy eigenvalue} $E_0$ of a compact $SO(2)$ generator, by their spin $s$ with respect to the compact $SO(3)$ {\it little group} of their momentum vector and, finally, by a set of $SO({\cal N})$ labels. These particle states live in the bulk of $AdS_4$. On the superconformal side the UIR.s appear instead as multiplets of primary conformal operators constructed out of the fundamental fields of the gauge theory. They are characterized by their conformal weight $D$, their $SO(1,2)$ spin $J$ and by the labels of the $SO({\cal N})$ representation they belong to. 
Actually it is very convenient to regard such multiplets of conformal operators as appropriate conformally invariant superfields in $d=3$ superspace. \par Given this, what one needs is a general framework to convert data from one language to the other. \par Such a programme has been extensively developed in the case of the $AdS_5/CFT_4$ correspondence between ${\cal N}=4$ Yang--Mills theory in $D=4$, seen as a superconformal theory, and type IIB supergravity compactified on $AdS_5 \times S^5$. In this case the superconformal algebra is $SU(2,2\vert 4)$ and the relation between the two descriptions of its UIR.s as boundary or bulk supermultiplets was given, in an algebraic setup, by Gunaydin and collaborators \cite{gunaydinminiczagerman1,gunaydinminiczagerman2}, while the corresponding superfield description was discussed in a series of papers by Ferrara and collaborators \cite{sersupm1,sersupm2,sersupm3,sersupm4}. \par A similar discussion for the case of the $Osp({\cal N} \vert 4)$ superalgebra was, to the best of our knowledge, missing so far. The present paper is meant to fill the gap. \par There are relevant structural differences between the superalgebra ${\bf G} = SU(2,2\vert {\cal N})$ and the superalgebra ${\bf G}= Osp({\cal N} \vert 4)$ but the basic strategy of papers \cite{gunaydinminiczagerman1,gunaydinminiczagerman2} that consists of performing a suitable {\it rotation} from a basis of eigenstates of the maximal compact subgroup $SO(2) \times SO(p+1) \subset {\bf G}$ to a basis of eigenstates of the maximal non compact subgroup $SO(1,1) \times SO(1,p) \subset {\bf G}$ can be adapted. After such a rotation we derive the $d=3$ superfield description of the supermultiplets by means of a very simple and powerful method based on the {\it supersolvable parametrization} of anti de Sitter superspace \cite{torinos7}.
By definition, anti de Sitter superspace is the following supercoset: \begin{equation} AdS_{4\vert{\cal N} } \equiv \frac{Osp({\cal N} \vert 4)}{SO(1,3) \times SO({\cal N})} \label{ads4N} \end{equation} and has $4$ bosonic coordinates labeling the points in $AdS_4$ and $4 \times {\cal N}$ fermionic coordinates $\Theta^{\alpha i}$ that transform as Majorana spinors under $SO(1,3)$ and as vectors under $SO({\cal N})$. There are many possible coordinate choices for parametrizing such a manifold, but as far as the bosonic submanifold is concerned it was shown in \cite{g/hpape} that a particularly useful parametrization is the solvable one where the $AdS_4$ coset is regarded as a {\it non--compact solvable group manifold}: \begin{equation} AdS_4 \equiv \frac{SO(2,3)}{SO(1,3)} = \exp \left [ Solv_{adS} \right] \label{solvads} \end{equation} The solvable algebra $Solv_{adS}$ is spanned by the unique non--compact Cartan generator $D$ belonging to the coset and by three abelian operators $P_m$ ($m=0,1,2$) generating the translation subalgebra in $d=1+2$ dimensions. The solvable coordinates are \begin{equation} \begin{array}{rclcrcl} \rho & \leftrightarrow & D & ; & z^m & \leftrightarrow & P_m \ \end{array} \label{solvcord} \end{equation} and in such coordinates the $AdS_4$ metric takes the form (\ref{adsmet}). Hence $\rho$ is interpreted as measuring the distance from the brane--stack and $z^m$ are interpreted as Cartesian coordinates on the brane boundary $\partial (AdS_4)$. In \cite{torinos7} we addressed the question whether such a solvable parametrization of $AdS_4$ could be extended to a supersolvable parametrization of anti de Sitter superspace as defined in (\ref{ads4N}). In practice this meant singling out a solvable superalgebra with $4$ bosonic and $4 \times {\cal N}$ fermionic generators.
This turned out to be impossible, yet we easily found a supersolvable algebra $SSolv_{adS} $ with $4$ bosonic and $2 \times {\cal N}$ fermionic generators whose exponential defines {\it solvable anti de Sitter superspace}: \begin{equation} AdS^{(Solv)}_{4\vert 2{\cal N}}\equiv \exp\left[ SSolv_{adS}\right] \label{solvsup} \end{equation} The supermanifold (\ref{solvsup}) is also a supercoset of the same supergroup $Osp({\cal N} \vert 4)$ but with respect to a different subgroup: \begin{equation} AdS^{(Solv)}_{4\vert 2{\cal N}} = \frac{Osp({\cal N}\vert 4)} {CSO(1,2\vert{\cal N})} \label{supcos2} \end{equation} where $CSO(1,2\vert {\cal N})\subset Osp({\cal N}\vert 4)$ is an algebra containing $3 + 3+ \ft{{\cal N}({\cal N}-1)}{2}$ bosonic generators and $2\times {\cal N}$ fermionic ones. This algebra is the semidirect product: \begin{equation} \begin{array}{ccc} CSO(1,2\vert {\cal N}) & = & {\underbrace {ISO(1,2\vert {\cal N}) \times SO({\cal N})}}\\ \null & \null & \mbox{semidirect} \end{array} \label{CSOdefi} \end{equation} of the ${\cal N}$--extended {\it superPoincar\'e} algebra in $d=3$ ($ISO(1,2\vert {\cal N})$) with the orthogonal group $SO({\cal N})$. It should be clearly distinguished from the central extension of the Poincar\'e superalgebra $Z\left [ISO(1,2\vert {\cal N})\right ]$ which has the same number of generators but different commutation relations. Indeed there are three essential differences that are worth recalling at this point: \begin{enumerate} \item In $Z\left [ISO(1,2\vert {\cal N})\right ]$ the ${{\cal N}({\cal N}-1)}/{2}$ internal generators $Z^{ij}$ are abelian, while in $CSO(1,2\vert {\cal N})$ the corresponding $T^{ij}$ are non abelian and generate $SO({\cal N})$.
\item In $Z\left [ISO(1,2\vert {\cal N})\right ]$ the supercharges $q^{\alpha i}$ commute with $Z^{ij}$ (these are in fact central charges), while in $CSO(1,2\vert {\cal N})$ they transform as vectors under $T^{ij}$. \item In $Z\left [ISO(1,2\vert {\cal N})\right ]$ the anticommutator of two supercharges yields, besides the translation generators $P_m$, also the central charges $Z^{ij}$, while in $CSO(1,2\vert {\cal N})$ this is not true. \end{enumerate} We will see the exact structure of $CSO(1,2\vert {\cal N}) \subset Osp({\cal N} \vert 4)$ and of $ ISO(1,2\vert {\cal N}) \subset CSO(1,2\vert {\cal N})$ as soon as we have introduced the full orthosymplectic algebra. In the superconformal interpretation of the $Osp({\cal N}\vert 4)$ superalgebra, $CSO(1,2\vert{\cal N})$ is spanned by the conformal boosts $K_m$, the Lorentz generators $J^m$ and the special conformal supersymmetries $s^i_\alpha$. Being a coset, the solvable $AdS$--superspace $AdS^{(Solv)}_{4\vert 2{\cal N}}$ supports a non linear representation of the full $Osp({\cal N}\vert 4 )$ superalgebra. As shown in \cite{torinos7}, we can regard $AdS^{(Solv)}_{4\vert 2{\cal N}}$ as ordinary anti de Sitter superspace $AdS_{4\vert {\cal N}}$ where $2\times {\cal N}$ fermionic coordinates have been eliminated by fixing $\kappa$--supersymmetry. \par Our strategy to construct the boundary superfields is the following. First we construct the supermultiplets in the bulk by acting on the abstract states spanning the UIR with the coset representative of the solvable superspace $AdS^{(Solv)}_{4\vert 2{\cal N}}$ and then we reach the boundary by performing the limit $\rho \to 0$ (see fig.
\ref{boufil}) \iffigs \begin{figure} \caption{Boundary superfields are obtained as limiting values of superfields in the bulk \label{boufil}} \begin{center} \epsfxsize = 10cm \epsffile{boufil.eps} \vskip 0.2cm \hskip 2cm \unitlength=1.1mm \end{center} \end{figure} \fi \par \par The general structure of the $Osp(2\vert 4)$ supermultiplets that may appear in Kaluza Klein supergravity has been determined recently in \cite{m111spectrum} through consideration of a specific example, that where the manifold $X^7$ is the sasakian $M^{1,1,1}$. Performing harmonic analysis on $M^{1,1,1}$ we have found {\it graviton, gravitino} and {\it vector multiplets} both in a {\it long} and in a {\it shortened} version. In addition we have found hypermultiplets that are always short and the ultra short multiplets corresponding to massless fields. According to our previous discussion each of these multiplets must correspond to a primary superfield on the boundary. We determine such superfields with the above described method. Short supermultiplets correspond to {\it constrained superfields}. The shortening conditions relating mass and hypercharges are retrieved here as the necessary condition to maintain the constraints after a superconformal transformation. \par As we anticipated above these primary conformal fields are eventually realized as composite operators in a suitable ${\cal N}=2$, $d=3$ gauge theory. Hence, in the second part of this paper we construct the general form of such a theory. To this effect, rather than superspace formalism we employ our favorite rheonomic approach that, at the end of the day, yields an explicit component form of the lagrangian and the supersymmetry transformation rules for all the physical and auxiliary fields. Although supersymmetric gauge theories in $d=3$ dimensions have been discussed in the literature through many examples (mostly using superspace formalism) a survey of their general form seems to us useful.
Keeping track of all the possibilities we construct a supersymmetric off--shell lagrangian that employs all the auxiliary fields and includes, besides minimal gauge couplings and superpotential interactions, also Chern--Simons interactions and Fayet--Iliopoulos terms. We restrict however the kinetic terms of the gauge fields to be quadratic since we are interested in microscopic gauge theories and not in effective lagrangians. Generalization of our results to non minimal couplings including arbitrary holomorphic functions of the scalars in front of the gauge kinetic terms is certainly possible but it is not covered in our paper. \par In particular we present general formulae for the scalar potential and we analyse the further conditions that an ${\cal N}=2$ gauge theory should satisfy in order to admit either ${\cal N}=4$ or ${\cal N}=8$ supersymmetry. This is important in connection with the problem of deriving the ultraviolet orbifold gauge theories associated with the sasakian horizons (\ref{sasaki}). Indeed a possible situation that might be envisaged is that where at the orbifold point the gauge theory has larger supersymmetry broken to ${\cal N}=2$ by some of the perturbations responsible for the singularity resolution. It is therefore vital to write ${\cal N}=4$ and ${\cal N}=8$ theories in ${\cal N}=2$ language. This is precisely what we do here. \subsection{Our paper is organized as follows:} In section \ref{ospn4al} we discuss the definition and the general properties of the orthosymplectic $Osp({\cal N}\vert 4)$ superalgebra. In particular we discuss its two five--gradings: compact and non compact, the first related to the supergravity interpretation, the second to the superconformal field theory interpretation.
\par In section \ref{supercoset} we discuss the supercoset structure of superspace and the realization of the $Osp({\cal N}\vert 4)$ superalgebra as an algebra of transformations in two different supercosets, the first describing the bulk of $AdS_4$, the second its boundary $\partial ( AdS_4)$. \par In section \ref{supfieldbuetbo} we come to one of the main points of our paper and focusing on the case ${\cal N}=2$ we show how to construct boundary conformal superfields out of the Kaluza Klein $Osp(2\vert 4)$ supermultiplets. \par In section \ref{n2d3gauge} we discuss the rheonomic construction of a generic ${\cal N}=2$, $d=3$ gauge theory with arbitrary field content and arbitrary superpotential interactions. \par In section \ref{conclu} we briefly summarize our conclusions. \par Finally in appendix \ref{derivationkillings} the reader can find the explicit derivation of the Killing vectors generating the action of the $Osp({\cal N}\vert 4)$ superalgebra on superspace. These Killing vectors are an essential tool for the derivation of our result in section \ref{supfieldbuetbo}. \section{The $Osp({\cal N}\vert 4)$ superalgebra: definition, properties and notations} \label{ospn4al} The non compact superalgebra $Osp({\cal N} \vert 4)$ relevant to the $AdS_4/CFT_3$ correspondence is a real section of the complex orthosymplectic superalgebra $Osp({\cal N} \vert 4, \relax\,\hbox{$\inbar\kern-.3em{\rm C}$})$ that admits the Lie algebra \begin{equation} G_{even} = Sp(4,\relax{\rm I\kern-.18em R}) \times SO({\cal N}, \relax{\rm I\kern-.18em R}) \label{geven} \end{equation} as even subalgebra.
Alternatively, due to the isomorphism $Sp(4,\relax{\rm I\kern-.18em R})\equiv Usp(2,2)$ we can take a different real section of $Osp({\cal N} \vert 4, \relax\,\hbox{$\inbar\kern-.3em{\rm C}$})$ such that the even subalgebra is: \begin{equation} G_{even}^\prime = Usp(2,2) \times SO({\cal N}, \relax{\rm I\kern-.18em R}) \label{gevenp} \end{equation} In this paper we mostly rely on the second formulation (\ref{gevenp}) which is more convenient to discuss unitary irreducible representations, while in (\cite{torinos7}) we used the first (\ref{geven}) that is more advantageous for the description of the supermembrane geometry. The two formulations are related by a unitary transformation that, in spinor language, corresponds to a different choice of the gamma matrix representation. Formulation (\ref{geven}) is obtained in a Majorana representation where all the gamma matrices are real (or purely imaginary), while formulation (\ref{gevenp}) is related to a Dirac representation. \par Our choice for the gamma matrices in a Dirac representation is the following one\footnote{we adopt as explicit representation of the $SO(3)~\tau$ matrices a permutation of the canonical Pauli matrices $\sigma^a$: $\tau^1=\sigma^3$, $\tau^2=\sigma^1$ and $\tau^3=\sigma^2$; for the spin covering of $SO(1,2)$ we choose instead the matrices $\gamma$ defined in (\ref{gammamatrices}).}: \begin{equation} \Gamma^0=\left(\begin{array}{cc} \unity&0\\ 0&-\unity \end{array}\right)\,,\qquad \Gamma^{1,2,3}=\left(\begin{array}{cc} 0&\tau^{1,2,3}\\ -\tau^{1,2,3}&0 \end{array}\right)\,, \qquad C_{[4]}=i\Gamma^0\Gamma^3\,, \label{dirgamma} \end{equation} having denoted by $C_{[4]}$ the charge conjugation matrix in $4$--dimensions $C_{[4]}\, \Gamma^\mu \, C_{[4]}^{-1} = - ( \Gamma^\mu)^T$. 
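As an independent numerical cross-check of these conventions (our addition, not part of the paper's derivation), the explicit matrices (\ref{dirgamma}) can be verified to satisfy the $d=4$ Clifford algebra $\{\Gamma^\mu,\Gamma^\nu\}=2\eta^{\mu\nu}$, $\eta={\rm diag}(+,-,-,-)$, and the stated charge conjugation property of $C_{[4]}$; a minimal sketch in Python with NumPy:

```python
# Verify the Dirac representation (dirgamma): Clifford algebra and
# C_[4] Gamma^mu C_[4]^{-1} = -(Gamma^mu)^T with C_[4] = i Gamma^0 Gamma^3.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
# tau matrices: the permutation of the Pauli matrices fixed in the footnote
tau = [s3, s1, s2]

I2, Z2 = np.eye(2), np.zeros((2, 2))
G = [np.block([[I2, Z2], [Z2, -I2]])] + \
    [np.block([[Z2, t], [-t, Z2]]) for t in tau]
C4 = 1j * G[0] @ G[3]

eta = np.diag([1, -1, -1, -1])
for mu in range(4):
    for nu in range(4):
        assert np.allclose(G[mu] @ G[nu] + G[nu] @ G[mu],
                           2 * eta[mu, nu] * np.eye(4))
    # charge conjugation property of C_[4]
    assert np.allclose(C4 @ G[mu] @ np.linalg.inv(C4), -G[mu].T)
print("d=4 Clifford algebra and C_[4] property verified")
```

The same check also shows that $C_{[4]}^2=-\relax{\rm 1\kern-.35em 1}$, so $C_{[4]}^{-1}=-C_{[4]}$ in this basis.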
\par Then the $Osp({\cal N}\vert 4)$ superalgebra is defined as the set of graded $(4+{\cal N})\times(4+{\cal N})$ matrices $\mu$ that satisfy the following two conditions: \begin{equation} \begin{array}{rccclcc} \mu^T & \left( \matrix {C_{[4]} & 0 \cr 0 & \relax{\rm 1\kern-.35em 1}_{{\cal N}\times {\cal N}} \cr} \right)& +&\left( \matrix {C_{[4]} & 0 \cr 0 & \relax{\rm 1\kern-.35em 1}_{{\cal N}\times {\cal N}} \cr} \right)& \mu & = & 0 \\ \null&\null&\null&\null&\null&\null&\null\\ \mu^\dagger & \left( \matrix {\Gamma^0 & 0 \cr 0 & -\relax{\rm 1\kern-.35em 1}_{{\cal N}\times {\cal N}} \cr} \right)& +&\left( \matrix {\Gamma^0 & 0 \cr 0 & -\relax{\rm 1\kern-.35em 1}_{{\cal N}\times {\cal N}} \cr} \right)& \mu & = & 0 \\ \end{array} \label{duecondo} \end{equation} the first condition defining the complex orthosymplectic algebra, the second one the real section with even subalgebra as in eq.(\ref{gevenp}). Eq.s (\ref{duecondo}) are solved by setting: \begin{equation} \mu = \left( \matrix {\varepsilon^{AB} \, \frac{1}{4} \, \left[\relax{{\rm I}\kern-.18em \Gamma}_A \, , \, \relax{{\rm I}\kern-.18em \Gamma}_B \right ] & \epsilon^i \cr {\bar \epsilon}^i & \mbox{i}\, \varepsilon_{ij}\, t^{ij} \cr }\right) \label{mumatri} \end{equation} In eq.(\ref{mumatri}) $\varepsilon_{ij}=-\varepsilon_{ji}$ is an arbitrary real antisymmetric ${\cal N} \times {\cal N}$ tensor, $t^{ij} = -t^{ji}$ is the antisymmetric ${\cal N} \times {\cal N}$ matrix: \begin{equation} ( t^{ij})_{\ell m} = \mbox{i}\left( \delta ^i_\ell \delta ^j_m - \delta ^i_m \delta ^j_\ell \right) \label{tgene} \end{equation} namely a standard generator of the $SO({\cal N})$ Lie algebra, \begin{equation} \relax{{\rm I}\kern-.18em \Gamma}_A=\cases{\begin{array}{cl} \mbox{i} \, \Gamma_5 \Gamma_\mu & A=\mu=0,1,2,3 \\ \Gamma_5\equiv\mbox{i}\Gamma^0\Gamma^1\Gamma^2\Gamma^3 & A=4 \\ \end{array} } \label{bigamma} \end{equation} denotes a realization of the $SO(2,3)$ Clifford algebra: \begin{eqnarray} \left \{
\relax{{\rm I}\kern-.18em \Gamma}_A \, ,\, \relax{{\rm I}\kern-.18em \Gamma}_B \right\} &=& 2 \eta_{AB}\nonumber\\ \eta_{AB}&=&{\rm diag}(+,-,-,-,+) \label{so23cli} \end{eqnarray} and \begin{equation} \epsilon^i = C_{[4]} \left( {\bar \epsilon}^i\right)^T \quad (i=1,\dots\,{\cal N}) \label{qgene} \end{equation} are ${\cal N}$ anticommuting Majorana spinors. \par The index conventions we have so far introduced can be summarized as follows. Capital indices $A,B=0,1,\ldots,4$ denote $SO(2,3)$ vectors. The latin indices of type $i,j,k=1,\ldots,{\cal N}$ are $SO({\cal N})$ vector indices. The indices $a,b,c,\ldots=1,2,3$ are used to denote spatial directions of $AdS_4$: $\eta_{ab}={\rm diag}(-,-,-)$, while the indices of type $m,n,p,\ldots=0,1,2$ are space-time indices for the Minkowskian boundary $\partial \left( AdS_4\right) $: $\eta_{mn}={\rm diag}(+,-,-)$. To write the $Osp({\cal N} \vert 4)$ algebra in abstract form it suffices to read the graded matrix (\ref{mumatri}) as a linear combination of generators: \begin{equation} \mu \equiv -\mbox{i}\varepsilon^{AB}\, M_{AB} +\mbox{i}\varepsilon_{ij}\,T^{ij} +{\bar \epsilon}_i \, Q^i \label{idegene} \end{equation} where $Q^i = C_{[4]} \left(\overline Q^i\right)^T$ are also Majorana spinor operators. 
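The realization (\ref{bigamma}) of the $SO(2,3)$ Clifford algebra can also be checked numerically (our addition, a sketch assuming the Dirac basis (\ref{dirgamma}) introduced above):

```python
# Verify {IGamma_A, IGamma_B} = 2 eta_AB, eta_AB = diag(+,-,-,-,+),
# with IGamma_mu = i Gamma_5 Gamma_mu (index lowered by eta = diag(+,-,-,-))
# and IGamma_4 = Gamma_5 = i Gamma^0 Gamma^1 Gamma^2 Gamma^3.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
tau = [s3, s1, s2]
I2, Z2 = np.eye(2), np.zeros((2, 2))
Gup = [np.block([[I2, Z2], [Z2, -I2]])] + \
      [np.block([[Z2, t], [-t, Z2]]) for t in tau]
G5 = 1j * Gup[0] @ Gup[1] @ Gup[2] @ Gup[3]
eta4 = np.diag([1, -1, -1, -1])

IG = [1j * G5 @ (eta4[m, m] * Gup[m]) for m in range(4)] + [G5]
eta5 = np.diag([1, -1, -1, -1, 1])
for A in range(5):
    for B in range(5):
        assert np.allclose(IG[A] @ IG[B] + IG[B] @ IG[A],
                           2 * eta5[A, B] * np.eye(4))
print("SO(2,3) Clifford algebra verified")
```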
Then the superalgebra reads as follows: \begin{eqnarray} \left[ M_{AB} \, , \, M_{CD} \right] & = & \mbox{i} \, \left(\eta_{AD} M_{BC} + \eta_{BC} M_{AD} -\eta_{AC} M_{BD} -\eta_{BD} M_{AC} \right) \nonumber\\ \left [T^{ij} \, , \, T^{kl}\right] &=& -\mbox{i}\,(\delta^{jk}\,T^{il}-\delta^{ik}\,T^{jl}- \delta^{jl}\,T^{ik}+\delta^{il}\,T^{jk})\, \nonumber \\ \left[M_{AB} \, , \, Q^i \right] & = & -\mbox{i} \frac{1}{4} \, \left[\relax{{\rm I}\kern-.18em \Gamma}_A \, , \, \relax{{\rm I}\kern-.18em \Gamma}_B \right ] \, Q^i \nonumber\\ \left[T^{ij}\, , \, Q^k\right] &=& -\mbox{i}\, (\delta^{jk}\, Q^i - \delta^{ik}\, Q^j ) \nonumber\\ \left \{Q^{\alpha i}, \overline Q_{\beta}^{\,j}\right \} & = & \mbox{i} \delta^{ij}\frac{1}{4}\,\left[\relax{{\rm I}\kern-.18em \Gamma}^A \, , \, \relax{{\rm I}\kern-.18em \Gamma}^B \right ]{}^{\alpha}_{\ \beta}M_{AB} +\mbox{i}\delta^{\alpha}_{\,\beta}\,T^{ij} \label{pippa} \end{eqnarray} The form (\ref{pippa}) of the $Osp({\cal N}\vert 4)$ superalgebra coincides with that given in papers \cite{freedmannicolai},\cite{multanna} and utilized by us in our recent derivation of the $M^{111}$ spectrum \cite{m111spectrum}. \par In the gamma matrix basis (\ref{dirgamma}) the Majorana supersymmetry charges have the following form: \begin{eqnarray} Q^i = \left(\matrix{a_\alpha^i\cr\varepsilon_{\alpha\beta}\bar a^{\beta i}}\right)\,, \qquad \bar a^{\alpha i} \equiv \left( a_\alpha^i \right)^\dagger \,, \end{eqnarray} where $a_\alpha^i$ are two-component $SL(2,\relax\,\hbox{$\inbar\kern-.3em{\rm C}$})$ spinors: $\alpha,\beta,\ldots = 1,2$. We do not use dotted and undotted indices to denote conjugate $SL(2,\relax\,\hbox{$\inbar\kern-.3em{\rm C}$})$ representations; we rather use higher and lower indices. 
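The bosonic sector of (\ref{pippa}) can be tested in the explicit $4\times 4$ realization $M_{AB}=\frac{\mbox{i}}{4}\left[\relax{{\rm I}\kern-.18em \Gamma}_A,\relax{{\rm I}\kern-.18em \Gamma}_B\right]$ read off from (\ref{mumatri}) and (\ref{idegene}); the following sketch (our addition) also confirms that the block $\frac{1}{4}\left[\relax{{\rm I}\kern-.18em \Gamma}_A,\relax{{\rm I}\kern-.18em \Gamma}_B\right]$ satisfies the symplectic condition in (\ref{duecondo}):

```python
# Check [M_AB, M_CD] = i(eta_AD M_BC + eta_BC M_AD - eta_AC M_BD - eta_BD M_AC)
# and the symplectic condition S^T C_[4] + C_[4] S = 0, S = (1/4)[IG_A, IG_B].
import numpy as np
from itertools import product

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
tau = [s3, s1, s2]
I2, Z2 = np.eye(2), np.zeros((2, 2))
Gup = [np.block([[I2, Z2], [Z2, -I2]])] + \
      [np.block([[Z2, t], [-t, Z2]]) for t in tau]
G5 = 1j * Gup[0] @ Gup[1] @ Gup[2] @ Gup[3]
eta4 = np.diag([1, -1, -1, -1])
IG = [1j * G5 @ (eta4[m, m] * Gup[m]) for m in range(4)] + [G5]
C4 = 1j * Gup[0] @ Gup[3]

eta = np.diag([1, -1, -1, -1, 1])
M = [[(1j / 4) * (IG[A] @ IG[B] - IG[B] @ IG[A]) for B in range(5)]
     for A in range(5)]

def comm(x, y):
    return x @ y - y @ x

# symplectic condition on the bosonic block of (mumatri)
for A in range(5):
    for B in range(5):
        S = (1 / 4) * (IG[A] @ IG[B] - IG[B] @ IG[A])
        assert np.allclose(S.T @ C4 + C4 @ S, np.zeros((4, 4)))

# SO(2,3) commutation relations of eq. (pippa)
for A, B, C, D in product(range(5), repeat=4):
    rhs = 1j * (eta[A, D] * M[B][C] + eta[B, C] * M[A][D]
                - eta[A, C] * M[B][D] - eta[B, D] * M[A][C])
    assert np.allclose(comm(M[A][B], M[C][D]), rhs)
print("SO(2,3) commutation relations and symplectic condition verified")
```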
Raising and lowering are performed by means of the $\varepsilon$-symbol: \begin{equation} \psi_\alpha = \varepsilon_{\alpha\beta} \psi^\beta \,, \qquad \psi^\alpha = \varepsilon^{\alpha\beta} \psi_\beta\,, \end{equation} where $\varepsilon_{12}=\varepsilon^{21}=1$, so that $\varepsilon_{\alpha\gamma}\varepsilon^{\gamma\beta}=\delta_\alpha^\beta$. Unwritten indices are contracted according to the rule ``{\it from eight to two}''. \par In the second part of the paper where we deal with the lagrangian of $d=3$ gauge theories, the conventions for two--component spinors are slightly modified to simplify the notation and avoid the explicit writing of spinor indices. The Grassmann coordinates of ${\cal N}\!\!=\!\!2$ three-dimensional superspace introduced in equation (\ref{complexthet}), $\theta^\pm_\alpha$, are renamed $\theta$ and $\theta^c$. The reason for the superscript ``$\,^c\,$'' is that, in three dimensions, the upper and lower components of the four--dimensional $4$--component spinor are charge conjugate: \begin{equation}\label{conjugations} \theta^c\equiv C_{[3]}\overline\theta^T\,,\qquad \overline\theta\equiv\theta^\dagger\gamma^0\,, \end{equation} where $C_{[3]}$ is the $d=3$ charge conjugation matrix: \begin{equation} \left\{\begin{array}{ccc} C_{[3]}\gamma^m C_{[3]}^{-1}&=&-(\gamma^m)^T\\ \gamma^0\gamma^m(\gamma^0)^{-1}&=&(\gamma^m)^\dagger \end{array}\right. \end{equation} The lower case gamma matrices are $2\!\times\!2$ and provide a realization of the $d\!=\!2\!+\!1$ Clifford algebra: \begin{equation} \{\gamma^m \, , \, \gamma^n \} = 2\,\eta^{mn} \label{so21cli} \end{equation} Utilizing the following explicit basis: \begin{equation}\label{gammamatrices} \left\{\begin{array}{ccl} \gamma^0&=&\sigma^2\\ \gamma^1&=&-i\sigma^3\\ \gamma^2&=&-i\sigma^1 \end{array}\right.\qquad C_{[3]}=-i\sigma^2\,, \end{equation} both $\gamma^0$ and $C_{[3]}$ become proportional to $\varepsilon_{\alpha\beta}$.
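These $d=3$ conventions can be cross-checked numerically as well (our addition, a sketch in the explicit basis (\ref{gammamatrices})):

```python
# Verify {gamma^m, gamma^n} = 2 eta^{mn}, eta = diag(+,-,-), the charge
# conjugation relations of C_[3] = -i sigma^2, and the hermiticity relation
# gamma^0 gamma^m (gamma^0)^{-1} = (gamma^m)^dagger.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g = [s2, -1j * s3, -1j * s1]   # gamma^0, gamma^1, gamma^2
C3 = -1j * s2
eta = np.diag([1, -1, -1])

for m in range(3):
    for n in range(3):
        assert np.allclose(g[m] @ g[n] + g[n] @ g[m],
                           2 * eta[m, n] * np.eye(2))
    assert np.allclose(C3 @ g[m] @ np.linalg.inv(C3), -g[m].T)
    assert np.allclose(g[0] @ g[m] @ np.linalg.inv(g[0]), g[m].conj().T)
print("d=3 Clifford algebra and C_[3] relations verified")
```

In particular one finds $\gamma^0=-i\varepsilon$ and $C_{[3]}=-\varepsilon$ with $\varepsilon_{12}=1$, which is the proportionality to $\varepsilon_{\alpha\beta}$ stated above.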
This implies that in equation (\ref{conjugations}) the role of the matrices $C_{[3]}$ and $\gamma^0$ is just to convert upper into lower $SL(2,\relax\,\hbox{$\inbar\kern-.3em{\rm C}$})$ indices and vice versa. \par The relation between the two notations for the spinors is summarized in the following table: \begin{equation} \begin{array}{|c|c|} \hline (\theta^+)^\alpha&\sqrt{2}\,\theta\\ (\theta^+)_\alpha&\sqrt{2}\,\overline\theta^c\\ (\theta^-)^\alpha&-i\sqrt{2}\,\theta^c\\ (\theta^-)_\alpha&-i\sqrt{2}\,\overline\theta\\ \hline \end{array} \end{equation} With the second set of conventions the spinor indices can be ignored since the contractions are always made between barred (on the left) and unbarred (on the right) spinors, corresponding to the ``{\it eight to two}'' rule of the first set of conventions. Some examples of this ``{\it transcription}'' are given by: \begin{eqnarray} \overline\theta\theta=\ft{1}{2}i(\theta^-)_\alpha(\theta^+)^\alpha\nonumber\\ \overline\theta^c\gamma\theta^c=\ft{1}{2}i(\theta^+)_\alpha(\gamma)^\alpha_{\ \beta} (\theta^-)^\beta \end{eqnarray} \subsection{Compact and non compact five gradings of the $Osp({\cal N}|4)$ superalgebra} As is extensively explained in \cite{gunaydinminiczagerman1}, a non-compact group $G$ admits unitary irreducible representations of the lowest weight type if it has a maximal compact subgroup $G^0$ of the form $G^0=H\times U(1)$ with respect to whose Lie algebra $g^0$ there exists a {\it three grading} of the Lie algebra $g$ of $G$. In the case of a non--compact superalgebra the lowest weight UIR.s can be constructed if the three grading is generalized to a {\it five grading} where the even (odd) elements are integer (half-integer) graded: \begin{eqnarray} g = g^{-1} \oplus g^{-\ft12} \oplus g^0 \oplus g^{+\ft12} \oplus g^{+1}\,,\\ \nonumber\\ \left[g^k,g^l\right]\subset g^{k+l}\qquad g^{k+l}=0\ {\rm for}\ |k+l|>1\,.
\end{eqnarray} For the supergroup $Osp({\cal N}|4)$ this grading can be made in two ways, choosing as grade zero subalgebra either the maximal compact subalgebra \begin{eqnarray} g^0 \equiv SO(3) \times SO(2) \times SO({\cal N}) \subset Osp({\cal N} \vert 4) \label{so3so2} \end{eqnarray} or the non-compact subalgebra \begin{eqnarray} {\widetilde g}^0 \equiv SO(1,2) \times SO(1,1) \times SO({\cal N}) \subset Osp({\cal N} \vert 4) \label{so12so11} \end{eqnarray} which also exists, has the same complex extension and is also maximal. \par The existence of the double five--grading is the algebraic core of the $AdS_4/CFT_3$ correspondence. Decomposing a UIR of $Osp({\cal N} \vert 4)$ into representations of $g^0$ shows its interpretation as a supermultiplet of {\it particle states} in the bulk of $AdS_4$, while decomposing it into representations of ${\widetilde g}^0$ shows its interpretation as a supermultiplet of {\it conformal primary fields} on the boundary $\partial (AdS_4)$. \par In both cases the grading is determined by the generator $X$ of the abelian factor $SO(2)$ or $SO(1,1)$: \begin{equation} [X,g^k]=k\,g^k \end{equation} In the compact case (see \cite{freedmannicolai}) the $SO(2)$ generator $X$ is given by $M_{04}$. It is interpreted as the energy generator of the four-dimensional $AdS$ theory. It was used in \cite{multanna} and \cite{m111spectrum} for the construction of the $Osp(2 \vert 4)$ representations, yielding the long multiplets of \cite{multanna} and the short and ultra-short multiplets of \cite{m111spectrum}. We repeat such decompositions here. \par We call $E$ the energy generator of $SO(2)$, $L_a$ the rotations of $SO(3)$: \begin{eqnarray} E &=& M_{04} \,, \nonumber \\ L_a &=& \ft12 \varepsilon_{abc} \, M_{bc} \,, \end{eqnarray} and $M_a^\pm$ the boosts: \begin{eqnarray} M_a^+ &=& - M_{a4} + i M_{0a} \,, \nonumber \\ M_a^- &=& M_{a4} + i M_{0a} \,. \end{eqnarray} The supersymmetry generators are $a^i_\alpha$ and $\bar a^{\alpha i}$.
Rewriting the $Osp({\cal N} \vert 4)$ superalgebra (\ref{pippa}) in this basis we obtain: \begin{eqnarray} {}[E, M_a^+] &=& M_a^+ \,,\nonumber \\ {}[E, M_a^-] &=& -M_a^- \,,\nonumber \\ {}[L_a, L_b] &=& i \, \varepsilon_{abc} L_c \,, \nonumber \\ {}[M^+_a, M^-_b] &=& 2 \, \delta_{ab}\, E + 2 i \, \varepsilon_{abc} \, L_c \,, \nonumber \\ {}[L_a, M^+_b ] &=& i \, \varepsilon_{abc} \, M^+_c \,, \nonumber \\ {}[L_a, M^-_b ] &=& i \, \varepsilon_{abc} \, M^-_c \,, \nonumber \\ {}[T^{ij}, T^{kl}] &=& -i\,(\delta^{jk}\,T^{il}-\delta^{ik}\,T^{jl}- \delta^{jl}\,T^{ik}+\delta^{il}\,T^{jk}) \,, \nonumber \\ {}[T^{ij}, \bar a^{\alpha k}] &=& -i\, (\delta^{jk}\, \bar a^{\alpha i} - \delta^{ik}\, \bar a^{\alpha j} ) \,, \nonumber \\ {}[T^{ij}, a_\alpha^k] &=& -i\, (\delta^{jk}\, a_\alpha^i - \delta^{ik}\, a_\alpha^j ) \,,\nonumber \\ {}[E, a_\alpha^i] &=& -\ft12 \, a_\alpha^i \,, \nonumber \\ {}[E, \bar a^{\alpha i}] &=& \ft12 \, \bar a^{\alpha i} \,, \nonumber \\ {}[M_a^+, a_\alpha^i] &=& (\tau_a)_{\alpha\beta}\, \bar a^{\beta i} \,, \nonumber \\ {}[M_a^-, \bar a^{\alpha i}] &=& - (\tau_a)^{\alpha\beta} \, a_\beta^i \,,\nonumber \\ {}[L_a, a_\alpha^i]&=& \ft12 \, (\tau_a)_{\alpha}{}^\beta \, a^i_\beta \,, \nonumber \\ {}[L_a, \bar a^{\alpha i}] &=& -\ft12 \, (\tau_a)^\alpha{}_\beta \, \bar a^{\beta i} \,, \nonumber \\ \{a_\alpha^i, a_\beta^j \} &=& \delta^{ij} \, (\tau^k)_{\alpha\beta}\, M_k^- \,, \nonumber \\ \{\bar a^{\alpha i}, \bar a^{\beta j} \} &=& \delta^{ij} (\tau^k)^{\alpha\beta} \, M_k^+ \,, \nonumber \\ \{ a_\alpha^i, \bar a^{\beta j} \} &=& \delta^{ij}\, \delta_\alpha{}^\beta \, E + \delta^{ij} \, (\tau^k)_\alpha{}^\beta \, L_k +i \, \delta_\alpha{}^\beta \, T^{ij} \,. \label{ospE} \end{eqnarray} The five--grading structure of the algebra (\ref{ospE}) is shown in fig. \ref{pistac}. In the superconformal field theory context we are interested in the action of the $Osp({\cal N} \vert 4)$ generators on superfields living on the Minkowskian boundary $\partial(AdS_4)$.
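As an aside, the bosonic relations of (\ref{ospE}) can be verified numerically in the $4\times 4$ realization $M_{AB}=\frac{\mbox{i}}{4}\left[\relax{{\rm I}\kern-.18em \Gamma}_A,\relax{{\rm I}\kern-.18em \Gamma}_B\right]$ (our addition, a sketch in Python):

```python
# Check the bosonic sector of (ospE): E = M_04, L_a = (1/2) eps_abc M_bc,
# M^+-_a = -+ M_a4 + i M_0a, in the 4x4 gamma-matrix realization.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
tau = [s3, s1, s2]
I2, Z2 = np.eye(2), np.zeros((2, 2))
Gup = [np.block([[I2, Z2], [Z2, -I2]])] + \
      [np.block([[Z2, t], [-t, Z2]]) for t in tau]
G5 = 1j * Gup[0] @ Gup[1] @ Gup[2] @ Gup[3]
eta4 = np.diag([1, -1, -1, -1])
IG = [1j * G5 @ (eta4[m, m] * Gup[m]) for m in range(4)] + [G5]
M = [[(1j / 4) * (IG[A] @ IG[B] - IG[B] @ IG[A]) for B in range(5)]
     for A in range(5)]

def comm(x, y):
    return x @ y - y @ x

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

E = M[0][4]
# spatial labels a = 1,2,3 of M_AB are shifted to 0,1,2 in the eps sums
L = [0.5 * sum(eps[a, b, c] * M[b + 1][c + 1]
               for b in range(3) for c in range(3)) for a in range(3)]
Mp = [-M[a + 1][4] + 1j * M[0][a + 1] for a in range(3)]
Mm = [M[a + 1][4] + 1j * M[0][a + 1] for a in range(3)]

for a in range(3):
    assert np.allclose(comm(E, Mp[a]), Mp[a])
    assert np.allclose(comm(E, Mm[a]), -Mm[a])
    for b in range(3):
        assert np.allclose(comm(L[a], L[b]),
                           1j * sum(eps[a, b, c] * L[c] for c in range(3)))
        assert np.allclose(comm(Mp[a], Mm[b]),
                           2 * (a == b) * E
                           + 2j * sum(eps[a, b, c] * L[c] for c in range(3)))
print("compact SO(2) x SO(3) basis relations verified")
```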
To be precise the boundary is a compactification of $d=3$ Minkowski space and admits a conformal family of metrics $g_{mn} = \phi(z) \eta_{mn}$ conformally equivalent to the flat Minkowski metric \begin{equation} \eta_{mn} = (+,-,-) \,, \qquad m,n,p,q = 0,1,2 \,. \label{minkio3} \end{equation} Precisely because we are interested in conformal field theories, the choice of representative metric inside the conformal family is immaterial and the flat one (\ref{minkio3}) is certainly the most convenient. The required action of the superalgebra generators is obtained upon starting from the non--compact grading with respect to (\ref{so12so11}). To this effect we define the {\it dilatation} $SO(1,1)$ generator $D$ and the {\it Lorentz} $SO(1,2)$ generators $J^m$ as follows: \begin{equation} D \equiv i\, M_{34} \,, \qquad J^m = \ft{i}{2} \, \varepsilon^{mpq} M_{pq}\,. \label{dilalor} \end{equation} \begin{figure}[ht] \begin{center} \leavevmode \hbox{% \epsfxsize=11cm \epsfbox{unjck2.eps}} \caption{{\small Schematic representation of the root diagram of $Osp({\cal N}|4)$ in the $SO(2) \times SO(3)$ basis. \label{pistac} The grading w.r.t. the energy $E$ is given on the right. }} \end{center} \end{figure} In addition we define the $d=3$ {\it translation generators} $P_m$ and {\it special conformal boosts} $K_m$ as follows: \begin{eqnarray} P_m = M_{m4} - M_{3m} \,, \nonumber \\ K_m = M_{m4} + M_{3m} \,. \label{Pkdefi} \end{eqnarray} Finally we define the generators of $d=3$ {\it ordinary} and {\it special conformal supersymmetries}, respectively given by: \begin{eqnarray} q^{\alpha i} = \ft{1}{\sqrt{2}}\left(a_\alpha^i + \bar a^{\alpha i}\right) \,, \nonumber \\ s_\alpha^i = \ft{1}{\sqrt{2}}\left(-a_\alpha^i + \bar a^{\alpha i}\right) \,. \label{qsdefi} \end{eqnarray} The $SO({\cal N})$ generators are left unmodified as above.
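As a check (our addition, not part of the paper) that the bosonic definitions (\ref{dilalor}) and (\ref{Pkdefi}) close on the $d=3$ conformal algebra of eq.s (\ref{ospD}), one may again use the $4\times 4$ realization $M_{AB}=\frac{\mbox{i}}{4}\left[\relax{{\rm I}\kern-.18em \Gamma}_A,\relax{{\rm I}\kern-.18em \Gamma}_B\right]$:

```python
# Check [D,P_m] = -P_m, [D,K_m] = K_m and
# [K_m,P_n] = 2 eta_mn D - 2 eps_mnp J^p, with D = i M_34,
# J^m = (i/2) eps^{mpq} M_pq, P_m = M_m4 - M_3m, K_m = M_m4 + M_3m.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
tau = [s3, s1, s2]
I2, Z2 = np.eye(2), np.zeros((2, 2))
Gup = [np.block([[I2, Z2], [Z2, -I2]])] + \
      [np.block([[Z2, t], [-t, Z2]]) for t in tau]
G5 = 1j * Gup[0] @ Gup[1] @ Gup[2] @ Gup[3]
eta4 = np.diag([1, -1, -1, -1])
IG = [1j * G5 @ (eta4[m, m] * Gup[m]) for m in range(4)] + [G5]
M = [[(1j / 4) * (IG[A] @ IG[B] - IG[B] @ IG[A]) for B in range(5)]
     for A in range(5)]

def comm(x, y):
    return x @ y - y @ x

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1
eta3 = np.diag([1, -1, -1])

D = 1j * M[3][4]
J = [(1j / 2) * sum(eps[m, p, q] * M[p][q]
                    for p in range(3) for q in range(3)) for m in range(3)]
P = [M[m][4] - M[3][m] for m in range(3)]
K = [M[m][4] + M[3][m] for m in range(3)]

for m in range(3):
    assert np.allclose(comm(D, P[m]), -P[m])
    assert np.allclose(comm(D, K[m]), K[m])
    for n in range(3):
        assert np.allclose(comm(K[m], P[n]),
                           2 * eta3[m, n] * D
                           - 2 * sum(eps[m, n, p] * J[p] for p in range(3)))
print("non-compact SO(1,1) x SO(1,2) basis relations verified")
```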
In this new basis the $Osp({\cal N}\vert 4)$-algebra (\ref{pippa}) reads as follows \begin{eqnarray} {}[D, P_m] &=& -P_m \,, \nonumber \\ {}[D, K_m] &=& K_m \,, \nonumber \\ {}[J_m, J_n] &=& \varepsilon_{mnp} \, J^p \,, \nonumber \\ {}[K_m, P_n] &=& 2 \, \eta_{mn}\, D - 2 \, \varepsilon_{mnp} \, J^p \,, \nonumber \\ {}[J_m, P_n] &=& \varepsilon_{mnp} \, P^p \,, \nonumber \\ {}[J_m, K_n] &=& \varepsilon_{mnp} \, K^p \,, \nonumber \\ {}[T^{ij}, T^{kl}] &=& -i\,(\delta^{jk}\,T^{il}-\delta^{ik}\,T^{jl}- \delta^{jl}\,T^{ik}+\delta^{il}\,T^{jk}) \,, \nonumber \\ {}[T^{ij}, q^{\alpha k}] &=& -i\, (\delta^{jk}\, q^{\alpha i} - \delta^{ik}\, q^{\alpha j} ) \,, \nonumber \\ {}[T^{ij}, s_\alpha^k] &=& -i\, (\delta^{jk}\, s_\alpha^i - \delta^{ik}\, s_\alpha^j ) \,,\nonumber \\ {}[D, q^{\alpha i}] &=& -\ft12 \, q^{\alpha i} \,,\nonumber \\ {}[D, s_\alpha^i] &=& \ft12 \, s_\alpha^i \,,\nonumber \\ {}[K^m, q^{\alpha i} ] &=& - i\, (\gamma^m)^{\alpha \beta}\, s_\beta^i \,, \nonumber \\ {}[P^m, s_\alpha^i] &=& - i\, (\gamma^m)_{\alpha \beta}\, q^{\beta i} \,, \nonumber \\ {}[J^m, q^{\alpha i} ] &=& - \ft{i}{2} \, (\gamma^m)^\alpha{}_\beta q^{\beta i} \,, \nonumber \\ {}[J^m, s_\alpha^i] &=& \ft{i}{2} \, (\gamma^m)_\alpha{}^\beta s_\beta^i \,, \nonumber \\ \{q^{\alpha i}, q^{\beta j} \} &=& - i\, \delta^{ij}\, (\gamma^m)^{\alpha\beta} P_m \,, \nonumber \\ \{s_\alpha^i, s_\beta^j \} &=& i\, \delta^{ij} \, (\gamma^m)_{\alpha\beta} K_m \,, \nonumber \\ \{q^{\alpha i}, s_\beta^j \} &=& \delta^{ij} \delta^\alpha{}_\beta \, D - i\, \delta^{ij} (\gamma^m)^\alpha{}_\beta J_m + i \delta^\alpha{}_\beta T^{ij} \,. \label{ospD} \end{eqnarray} and the five grading structure of eq.s (\ref{ospD}) is displayed in fig.\ref{pirillo}. \begin{figure}[ht] \begin{center} \leavevmode \hbox{% \epsfxsize=11cm \epsfbox{unjck1.eps}} \caption{{\small Schematic representation of the root diagram of $Osp({\cal N}|4)$ in the $SO(1,1)\times SO(1,2)$ basis. The grading w.r.t.
the dilatation $D$ is given on the right.\label{pirillo}}} \end{center} \end{figure} In both cases of fig.\ref{pistac} and fig.\ref{pirillo}, if one takes the subset of generators of positive grading plus the abelian grading generator $X=\cases{E\cr D\cr}$ one obtains a {\it solvable superalgebra} of dimension $4+2{\cal N}$. It is however only in the non compact case of fig.\ref{pirillo} that the bosonic subalgebra of the solvable superalgebra generates anti de Sitter space $AdS_4$ as a solvable group manifold. Therefore the solvable superalgebra $SSolv_{adS}$ mentioned in eq. (\ref{solvsup}) is the vector span of the following generators: \begin{equation} SSolv_{adS} \equiv \mbox{span} \left\{ P_m, D, q^{\alpha i} \right\} \label{Span} \end{equation} \subsection{The lowest weight UIR.s as seen from the compact and non compact five--grading viewpoint} The structure of all the $Osp(2\vert 4)$ supermultiplets relevant to Kaluza Klein supergravity is known. Their spin content is upper bounded by $s=2$ and they fall into three classes: {\it long, short} and {\it ultrashort}. Such a result has been obtained in \cite{m111spectrum} by explicit harmonic analysis on $X^7 = M^{111}$, namely through the analysis of a specific example of ${\cal N}\!\!=\!\!2$ compactification on $AdS_4 \times X^7$. As stressed in the introduction the goal of the present paper is to reformulate the structure of these multiplets in a way appropriate for comparison with {\it composite operators} of the three-dimensional gauge theory living on the boundary $\partial(AdS_4)$ that behave as {\it primary conformal fields}. Actually, in view of the forthcoming Kaluza-Klein spectrum on $X^7=N^{010}$ \cite{n010}, which is arranged into $Osp(3\vert 4)$ rather than $Osp(2 \vert 4)$ multiplets, it is more convenient to begin by discussing $Osp({\cal N}\vert 4)$ for generic ${\cal N}$.
\par We start by briefly recalling the procedure of \cite{freedmannicolai,heidenreich} to construct UIR.s of $Osp({\cal N}\vert 4)$ in the compact grading (\ref{so3so2}). Then, in a parallel way to what was done in \cite{gunaydinminiczagerman2} for the case of the $SU(2,2\vert 4)$ superalgebra, we show that also for $Osp({\cal N}\vert 4)$ in each UIR carrier space there exists a unitary rotation that maps eigenstates of $E, L^2, L_3$ into eigenstates of $D,J^2,J_2$. By means of such a rotation the decomposition of the UIR into $SO(2)\times SO(3)$ representations is mapped into an analogous decomposition into $SO(1,1) \times SO(1,2)$ representations. While $SO(2)\times SO(3)$ representations describe the {\it on--shell} degrees of freedom of a {\it bulk particle} with an energy $E_0$ and a spin $s$, irreducible representations of $SO(1,1) \times SO(1,2)$ describe the {\it off--shell} degrees of freedom of a {\it boundary field} with scaling weight $D$ and Lorentz character $J$. Relying on this we show how to construct the on-shell four-dimensional superfield multiplets that generate the states of these representations and the off-shell three-dimensional superfield multiplets that build the conformal field theory on the boundary. \par Lowest weight representations of $Osp({\cal N}\vert 4)$ are constructed starting from the basis (\ref{ospE}) and choosing a {\it vacuum state} such that \begin{eqnarray} M_i^- \vert (E_0, s, \Lambda) \rangle &=& 0 \,, \nonumber \\ a^i_\alpha \vert (E_0, s, \Lambda) \rangle &=& 0 \,, \label{energyreps} \end{eqnarray} where $E_0$ denotes the eigenvalue of the energy operator $M_{04}$ while $s$ and $\Lambda$ are the labels of an irreducible $SO(3)$ and $SO({\cal N})$ representation, respectively.
In particular we have: \begin{eqnarray} M_{04}\, \vert (E_0, s, \Lambda)\rangle & = & E_0 \, \vert (E_0, s, \Lambda) \rangle \nonumber \\ L^a \, L^a \, \vert (E_0, s, \Lambda) \rangle & = & s(s+1) \, \vert (E_0, s, \Lambda) \rangle \nonumber\\ L^3 \vert (E_0, s, \Lambda) \rangle & =& s \,\vert (E_0, s, \Lambda) \rangle\,. \label{eigval} \end{eqnarray} The states filling up the UIR are then built by applying the operators $M^+$ and the anti-symmetrized products of the operators $\bar a^i_\alpha$: \begin{eqnarray} \left( M_1^+ \right)^{n_1} \left( M_2^+ \right)^{n_2} \left( M_3^+ \right)^{n_3} [ \bar a^{i_1}_{\alpha_1} \dots \bar a^{i_p}_{\alpha_p}] \vert (E_0, s, \Lambda) \rangle \,. \label{so3so2states} \end{eqnarray} \par Lowest weight representations are similarly constructed with respect to the five--grading (\ref{ospD}). One starts from a vacuum state that is annihilated by the conformal boosts and by the special conformal supersymmetries \begin{eqnarray} K_m \, \vert (D_0, j, \Lambda) \rangle &=& 0 \,, \nonumber \\ s^i_\alpha \, \vert (D_0, j, \Lambda) \rangle &=& 0 \,, \label{primstate} \end{eqnarray} and that is an eigenstate of the dilatation operator $D$ carrying an irreducible $SO(1,2)$ representation of spin $j$: \begin{eqnarray} D \, \vert (D_0, j, \Lambda) \rangle &=& D_0 \, \vert (D_0, j, \Lambda) \rangle \nonumber\\ J^m \, J^n \, \eta_{mn} \,\vert (D_0, j, \Lambda) \rangle &=& j (j +1) \,\vert (D_0, j, \Lambda) \rangle\nonumber\\ J_2 \,\vert (D_0, j, \Lambda) \rangle & = & j \vert (D_0, j, \Lambda) \rangle \,. \label{Djvac} \end{eqnarray} As for the $SO({\cal N})$ representation, the new vacuum is the same as before. The states filling the UIR are now constructed by applying to the vacuum the operators $P_m$ and the anti-symmetrized products of $q^{\alpha i}$, \begin{eqnarray} \left( P_0\right)^{p_0} \left( P_1 \right)^{p_1} \left( P_2 \right)^{p_2} [ q^{\alpha_1 i_1} \dots q^{\alpha_q i_q}] \vert (D_0, j, \Lambda) \rangle\,.
\label{so12so11states} \end{eqnarray} \par In the language of conformal field theories the vacuum state satisfying eq.(\ref{primstate}) is named a {\it primary state} (corresponding to the value at $z^m=0$ of a primary conformal field). The states (\ref{so12so11states}) are called the {\it descendants}. \par The rotation between the $SO(3)\times SO(2)$ basis and the $SO(1,2)\times SO(1,1)$ basis is performed by the operator: \begin{eqnarray} U\equiv \exp \left[{\ft{i}{\sqrt{2}}\pi(E-D)} \right] \,, \label{rotationmatrix} \end{eqnarray} which has the following properties, \begin{eqnarray} D U &=& - U E \,, \nonumber \\ J_0 U &=& i \, U L_3 \,, \nonumber \\ J_1 U &=& U L_1 \,, \nonumber \\ J_2 U &=& U L_2 \,, \label{L0} \end{eqnarray} with respect to the grade $0$ generators. Furthermore, with respect to the generators of non--vanishing grade we have: \begin{eqnarray} K_0 U &=& -i \, U M_3^- \,, \nonumber \\ K_1 U &=& - U M_1^- \,, \nonumber \\ K_2 U &=& - U M_2^- \,, \nonumber \\ P_0 U &=& i\,U M_3^+ \,, \nonumber \\ P_1 U &=& U M_1^+ \,, \nonumber \\ P_2 U &=& U M_2^+ \,, \nonumber \\ q^{\alpha i} U &=& -i\, U \bar a^{\alpha i} \,, \nonumber \\ s_\alpha^i U &=& i \, U a_\alpha^i \,. \label{L+L-} \end{eqnarray} As one immediately sees from (\ref{L+L-}), $U$ interchanges the compact five--grading structure of the superalgebra with its non compact one. In particular the $SO(3)\times SO(2)$-vacuum with energy $E_0$ is mapped into an $SO(1,2)\times SO(1,1)$ primary state and one obtains all the descendants (\ref{so12so11states}) by acting with $U$ on the particle states (\ref{so3so2states}). Furthermore from (\ref{L0}) we read the conformal weight and the Lorentz group representation of the primary state $U \vert (E_0, s, \Lambda) \rangle$. Indeed its eigenvalue with respect to the dilatation generator $D$ is: \begin{equation} D_0 = - E_0 \,,
\end{equation} and we find the following relation between the Casimir operators of $SO(1,2)$ and $SO(3)$, \begin{equation} J^2 U = U L^2 \,, \qquad J^2 \equiv -J_0^2 + J_1^2 + J_2^2 \,, \end{equation} which implies that \begin{equation} j = s \,. \end{equation} Hence under the action of $U$ a particle state of energy $E_0$ and spin $s$ of the bulk is mapped into a {\it primary conformal field} of conformal weight $-E_0$ and Lorentz spin $s$ on the boundary. This discussion is visualized in fig.\ref{rota}. \begin{figure}[ht] \begin{center} \leavevmode \hbox{%
\epsfxsize=12cm \epsfbox{rota.eps}} \caption{{\small The operator $U=\exp\{i\pi/\sqrt 2(E-D)\}$ rotates the Hilbert space of the physical states. It takes states labeled by the Casimirs ($E,\,s$) of the $SO(2)\times SO(3)\subset Osp({\cal N}|4)$ into states labeled by the Casimirs ($D,\,j$) of $SO(1,1)\times SO(1,2)$.\label{rota}}} \end{center} \end{figure} \section{$AdS_4$ and $\partial AdS_4$ as cosets and their Killing vectors} \label{supercoset} In the previous section we studied $Osp({\cal N}\vert 4)$ and its representations in two different bases. The form (\ref{ospE}) of the superalgebra is the one we used in \cite{m111spectrum} to construct the $Osp(2\vert 4)$ supermultiplets from Kaluza Klein supergravity. It will be similarly used to obtain the $Osp(3\vert 4)$ spectrum on $X^7=N^{010}$. We translated these results in terms of the form (\ref{ospD}) of the $Osp({\cal N}\vert 4)$ algebra in order to allow a comparison with the three-dimensional CFT on the boundary. In this section we introduce the announced description of the anti de Sitter superspace and of its boundary in terms of the supersolvable Lie algebra parametrization of eq.s (\ref{solvsup}), (\ref{supcos2}). It turns out that such a description is the most appropriate one for a comparative study between $AdS_4$ and its boundary.
We calculate the Killing vectors of these two coset spaces since they are needed to determine the superfield multiplets living on both $AdS_4$ and $\partial AdS_4$. \par So we write both the bulk and the boundary superspaces as supercosets\footnote{For an extensive explanation about supercosets we refer the reader to \cite{castdauriafre}. In the context of $D=11$ and $D=10$ compactifications see also \cite{renatpiet}.}, \begin{eqnarray} \frac{G}{H} \,. \label{GoverH} \end{eqnarray} Applying supergroup elements $g \in Osp({\cal N}\vert 4)$ to the coset representatives $L(y)$, the latter transform as follows: \begin{eqnarray} g \, L(y) = L(y^\prime) h(g,y) \,, \label{defcoset} \end{eqnarray} where $h(g,y)$ is some element of $H \subset Osp({\cal N}\vert 4)$, named the compensator, that generically depends both on $g$ and on the coset point $y\in G/H$. For our purposes it is useful to consider the infinitesimal form of (\ref{defcoset}), i.e. for infinitesimal $g$ we can write: \begin{eqnarray} g&=& 1 + \epsilon^A T_A \,, \nonumber \\ h &=& 1 + \epsilon^A W^H_A(y) T_H \,, \nonumber \\ y^{\mu\prime} \, & = & y^\mu+\epsilon^A k^\mu_A(y) \,, \end{eqnarray} and we obtain: \begin{eqnarray} T_A L(y) &=& k_A L(y) + L(y) T_H W^H_A(y) \, ,\label{infinitesimalcosettransfo}\\ k_A & \equiv & k^\mu_A(y) \, \frac{\partial}{\partial y^\mu} \,. \label{kildefi} \end{eqnarray} The shifts in the superspace coordinates $y$ determined by the supergroup elements (see eq.(\ref{defcoset})) define the Killing vector fields (\ref{kildefi}) of the coset manifold.\footnote{The Killing vectors satisfy the algebra with structure functions of opposite sign; see \cite{castdauriafre}.} \par Let us now consider the solvable anti de Sitter superspace defined in eq.s (\ref{solvsup}),(\ref{supcos2}). It describes a $\kappa$--gauge fixed supersymmetric extension of the bulk $AdS_4$.
As explained by eq.(\ref{supcos2}) it is a supercoset (\ref{GoverH}) where $G=Osp({\cal N}\vert 4)$ and $H=CSO(1,2\vert{\cal N}) \times SO({\cal N})$. Using the non--compact basis (\ref{ospD}), the subgroup $H$ is given by \begin{eqnarray} H^{AdS} = CSO(1,2\vert{\cal N}) \,\equiv\, \mbox{span}\, \left\{ \, J^m, K_m, s_\alpha^{i}, T^{ij} \,\right\} \,. \label{HAdS} \end{eqnarray} A coset representative can be written as follows\footnote{We use the notation $x\cdot y \equiv x^m y_m$ and $\theta^i q^i \equiv \theta^i_\alpha q^{\alpha i}$. }: \begin{eqnarray} L^{AdS}(y) = \exp \left [{\rho D + i \, x \cdot P + \theta^i q^i} \right ]\,, \qquad y=(\rho, x,\theta) \,. \label{AdScosetrepresentative} \end{eqnarray} In $AdS_{4\vert 2{\cal N}}$ $s$-supersymmetry and $K$-symmetry have a non linear realization since the corresponding generators are not part of the solvable superalgebra $Ssolv_{adS}$ that is exponentiated (see eq.(\ref{Span})). \par The form of the Killing vectors simplifies considerably if we rewrite the coset representative as a product of exponentials: \begin{eqnarray} L(y) = \exp \left[ i \, z \cdot P\right]\, \cdot \, \exp \left[ \xi^i q^i\right] \, \cdot \, \exp \left[ {\rho D} \right] \,. \label{simplecostrepresentative} \end{eqnarray} This amounts to the following coordinate change: \begin{eqnarray} z&=& \left( 1-\ft12 \rho + \ft16 \rho^2 + {\cal O}(\rho^3) \right)\, x \,, \nonumber \\ \xi^i&=& \left(1-\ft14 \rho + \ft{1}{24} \rho^2 + {\cal O}(\rho^3)\right)\, \theta^i \,. \label{changecos} \end{eqnarray} This is the parametrization that was used in \cite{torinos7} to get the $Osp(8\vert 4)$-singleton action from the supermembrane. For this choice of coordinates the anti de Sitter metric takes the standard form (\ref{adsmet}).
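The prefactors in (\ref{changecos}) can be understood from the elementary identity $e^{X+Y}=e^{\frac{e^{\lambda}-1}{\lambda}\,Y}\,e^{X}$, valid for $[X,Y]=\lambda\,Y$, applied with $[D,P_m]=-P_m$ (so $\lambda=-\rho$) and $[D,q^{\alpha i}]=-\ft12\,q^{\alpha i}$ (so $\lambda=-\rho/2$). As a quick consistency check — a sketch we add here, the closed forms $(1-e^{-\rho})/\rho$ and $(1-e^{-\rho/2})/(\rho/2)$ being our reading rather than formulas quoted from the text — one can verify the expansion coefficients symbolically:

```python
import sympy as sp

# Sketch (our assumption): for [X, Y] = lam*Y one has
#   e^{X+Y} = e^{((e^lam - 1)/lam) Y} e^{X}.
# With X = rho*D, Y = i x.P:   lam = -rho   (since [D, P_m] = -P_m);
# with X = rho*D, Y = theta.q: lam = -rho/2 (since [D, q] = -q/2).
# The resulting prefactors should reproduce the series of eq. (changecos).

rho = sp.symbols('rho')

f = (1 - sp.exp(-rho)) / rho           # expected prefactor of x in z
g = (1 - sp.exp(-rho/2)) / (rho / 2)   # expected prefactor of theta in xi

f_ser = sp.series(f, rho, 0, 3).removeO()
g_ser = sp.series(g, rho, 0, 3).removeO()

# Compare with  1 - rho/2 + rho^2/6  and  1 - rho/4 + rho^2/24
assert sp.expand(f_ser - (1 - rho/2 + rho**2/6)) == 0
assert sp.expand(g_ser - (1 - rho/4 + rho**2/24)) == 0
print("prefactors reproduce eq. (changecos) through O(rho^2)")
```

Cross terms generated by $\{q,q\}\sim P$ are not tracked by this check, which only concerns the $\rho$-dependent prefactors displayed in (\ref{changecos}).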
The Killing vectors are \begin{eqnarray} \stackrel{\rightarrow}{k}[ P_m ]&=& -i\, \partial_m \,, \nonumber \\ \stackrel{\rightarrow}{k} [q^{\alpha i}] &=& \frac{\partial}{\partial \xi_\alpha^i} -\frac{1}{2}\left(\gamma^m\xi^i\right)^\alpha \partial_m \,, \nonumber \\ \stackrel{\rightarrow}{k} [J^m] &=& \varepsilon^{mpq} z_p \partial_q -\frac{i}{2} \left(\xi^i\gamma^m \right)_\alpha \frac{\partial}{\partial \xi_\alpha^i} \,, \nonumber \\ \stackrel{\rightarrow}{k} [D] &=& \frac{\partial}{\partial \rho} - z \cdot \partial -\frac{1}{2}\xi_\alpha^i \frac{\partial}{\partial \xi_\alpha^i} \,, \nonumber \\ \stackrel{\rightarrow}{k} [s^{\alpha i}] &=& -\xi^{\alpha i} \frac{\partial}{\partial \rho} +\frac{1}{2} \xi^{\alpha i}\, z \cdot \partial + \frac{i}{2} \varepsilon^{pqm}\, z_p(\gamma_q \xi^i)^\alpha \partial_m \nonumber \\ & & -\frac{1}{8} (\xi^j \xi^j) (\gamma^m \xi^i)^\alpha \partial_m - z^m (\gamma_m)^\alpha{}_\beta \frac{\partial}{\partial \xi^i_\beta} -\frac{1}{4} (\xi^j\xi^j) \frac{\partial}{\partial \xi^i_\alpha} \nonumber \\ & & +\frac{1}{2} \xi^{\alpha i} \xi^{\beta j} \frac{\partial}{\partial\xi^j_\beta} -\frac{1}{2}(\gamma^m \xi^i)^\alpha \xi^j_\beta \gamma_m \frac{\partial}{\partial\xi^j_\beta} \,. \label{simplekilling} \end{eqnarray} and for the compensators we find: \begin{eqnarray} W[P]&=& 0 \,, \nonumber \\ W[q^{\alpha i}] &=& 0 \,, \nonumber \\ W[J^m] &=& J^m \,, \nonumber \\ W[D] &=& 0 \,, \nonumber \\ W[s^{\alpha i}] &=& s^{\alpha i} - i\, \left(\gamma^m \theta^i \right)^\alpha \,J_m + i \theta^{\alpha j} \, T^{ij} \,. \label{compensatorsAdSrzxi} \end{eqnarray} For a detailed derivation of these Killing vectors and compensators we refer the reader to appendix \ref{derivationkillings}. 
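As a minimal sanity check of (\ref{simplekilling}) — our sketch, restricted to the purely bosonic parts of $\stackrel{\rightarrow}{k}[D]$ and $\stackrel{\rightarrow}{k}[P_m]$ — one can verify that the Killing vectors close with structure constants of opposite sign relative to $[D,P_m]=-P_m$, i.e. $[k_D,k_{P_m}]=+k_{P_m}$:

```python
import sympy as sp

# Bosonic parts only (the fermionic xi-terms are dropped in this sketch):
#   k[D]   = d/drho - z.d/dz ,     k[P_m] = -i d/dz^m .
# Expectation: [k_D, k_{P_m}] = +k_{P_m}, the opposite sign of [D, P_m] = -P_m.

rho, z0, z1, z2 = sp.symbols('rho z0 z1 z2')
zs = (z0, z1, z2)
F = sp.Function('F')(rho, z0, z1, z2)   # arbitrary test function

def kD(h):
    return sp.diff(h, rho) - sum(z * sp.diff(h, z) for z in zs)

def kP(m, h):
    return -sp.I * sp.diff(h, zs[m])

for m in range(3):
    comm = kD(kP(m, F)) - kP(m, kD(F))   # commutator acting on F
    assert sp.simplify(comm - kP(m, F)) == 0
print("[k_D, k_P_m] = +k_P_m for m = 0, 1, 2")
```

The same exercise can be repeated for the other bosonic commutators; we display only the simplest one.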
\par The boundary superspace $\partial(AdS_{4\vert 2{\cal N}})$ is formed by the points on the supercoset with $\rho=0$: \begin{eqnarray} L^{CFT}(y) = \exp \left[\mbox{i}\, x \cdot P + \theta^i q^i\right] \,. \label{boundarycosetrepresentative} \end{eqnarray} In order to see how the supergroup acts on fields that live on this boundary we use the fact that this submanifold is by itself a supercoset. Indeed, instead of $H^{AdS} \subset Osp({\cal N} \vert 4)$ as given in (\ref{HAdS}), we can choose the larger subalgebra \begin{equation} H^{CFT} = \mbox{span} \, \left \{ D, J^m, K_m, s_\alpha^{i}, T^{ij} \right \} \,, \label{HCFT} \end{equation} and consider the new supercoset $G/H^{CFT}$. By definition, also on this smaller space we have a non linear realization of the full orthosymplectic superalgebra. For the Killing vectors we find: \begin{eqnarray} \stackrel{\rightarrow}{k} [P_m] &=& -i\, \partial_m \,, \nonumber \\ \stackrel{\rightarrow}{k} [q^{\alpha i}] &=& \frac{\partial}{\partial \theta_\alpha^i} -\frac{1}{2}\left(\gamma^m\theta^i\right)^\alpha \partial_m \,, \nonumber \\ \stackrel{\rightarrow}{k} [J^m] &=& \varepsilon^{mpq} x_p \partial_q -\frac{i}{2} \left(\theta^i\gamma^m \right)_\alpha \frac{\partial}{\partial \theta_\alpha^i} \,, \nonumber \\ \stackrel{\rightarrow}{k}[D] &=& - x \cdot \partial -\frac{1}{2}\theta_\alpha^i \frac{\partial}{\partial \theta_\alpha^i} \,, \nonumber \\ \stackrel{\rightarrow}{k} [s^{\alpha i}] &=& \frac{1}{2} \theta^{\alpha i}\, x \cdot \partial + \frac{i}{2} \varepsilon^{pqm}\, x_p(\gamma_q \theta^i)^\alpha \partial_m -\frac{1}{8} (\theta^j \theta^j) (\gamma^m \theta^i)^\alpha \partial_m \nonumber \\ & & - x^m (\gamma_m)^\alpha{}_\beta \frac{\partial}{\partial \theta^i_\beta} -\frac{1}{4} (\theta^j\theta^j) \frac{\partial}{\partial \theta^i_\alpha} +\frac{1}{2} \theta^{\alpha i} \theta^{\beta j} \frac{\partial}{\partial\theta^j_\beta} -\frac{1}{2}(\gamma^m \theta^i)^\alpha \theta^j_\beta \gamma_m
\frac{\partial}{\partial\theta^j_\beta} \,. \nonumber\\ \label{bounkil} \end{eqnarray} and for the compensators we have: \begin{eqnarray} W[P_m] &=& 0 \,, \nonumber \\ W[q^{\alpha i}] &=& 0 \,, \nonumber \\ W[J^m] &=& J^m \,, \nonumber \\ W[D] &=& D \,, \nonumber\\ W[s^{\alpha i}] &=& - \theta^{\alpha i}\, D + s^{\alpha i} - i\, \left( \gamma^m \theta^i \right)^\alpha\,J_m + i \theta^{\alpha j} \, T^{ij} \,. \label{compensatorsdAdS} \end{eqnarray} If we compare the Killing vectors on the boundary (\ref{bounkil}) with those on the bulk (\ref{simplekilling}) we see that they are very similar. The only formal difference is the suppression of the $\frac{\partial}{\partial\rho }$ terms. The conceptual difference, however, is relevant. On the boundary the transformations generated by (\ref{bounkil}) are the {\it standard superconformal transformations} in three--dimensional (compactified) Minkowski space. In the bulk the transformations generated by (\ref{simplekilling}) are {\it superisometries} of anti de Sitter superspace. They might be written in completely different but equivalent forms if we used other coordinate frames. The form they have is due to the use of the {\it solvable coordinate frame} $(\rho, z, \xi)$, which is the most appropriate one to study the restriction of bulk supermultiplets to the boundary. For more details on this point we refer the reader to appendix \ref{derivationkillings}. \section{$Osp(2\vert 4)$ superfields in the bulk and on the boundary} \label{supfieldbuetbo} As we explained in the introduction, our main goal is the determination of the ${\cal N}=2$ three dimensional gauge theories associated with the sasakian horizons (\ref{sasaki}) and the comparison of the Kaluza Klein spectra of M--theory compactified on $AdS_4$ times such horizons with the spectrum of primary conformal superfields of the corresponding gauge theory. For this reason we mainly focus on the case of $Osp(2\vert 4)$ supermultiplets.
As already stressed, the structure of such supermultiplets has been determined in Kaluza Klein language in \cite{multanna, m111spectrum}. Hence they have been obtained in the basis (\ref{ospE}) of the orthosymplectic superalgebra. Here we consider their translation into the superconformal language provided by the other basis (\ref{ospD}). In this way we will construct a boundary superfield associated with each particle supermultiplet of the bulk. The components of the supermultiplet are Kaluza Klein states: it follows that we obtain a one--to--one correspondence between Kaluza Klein states and components of the boundary superfield. \subsection{Conformal $Osp(2 \vert 4)$ superfields: general discussion} So let us restrict our attention to ${\cal N}\!\!=\!\!2$. In this case the $SO(2)$ group has just one generator that we name the hypercharge: \begin{equation} Y \equiv T^{21} \,. \end{equation} Since it is convenient to work with eigenstates of the hypercharge operator, we reorganize the two Grassmann spinor coordinates of superspace in complex combinations: \begin{eqnarray} \theta^\pm_\alpha = \frac{1}{\sqrt{2}}(\theta^1_\alpha \pm i \theta^2_\alpha) \,, \qquad Y \, \theta^\pm_\alpha = \pm \theta^\pm_\alpha \,. \label{complexthet} \end{eqnarray} In this new notation the Killing vectors generating $q$--supersymmetries on the boundary (see eq.(\ref{bounkil})) take the form: \begin{equation} {\vec k}\left[q^{\alpha i}\right] \quad \longrightarrow \quad q^{\alpha \pm}= \frac{\partial}{\partial \theta^\mp_\alpha} - \ft{1}{2} \, (\gamma^m)^\alpha{}_\beta \theta^{\beta \pm} \partial_m \,. \label{qpm} \end{equation} A generic superfield is a function $\Phi(x,\theta)$ of the bosonic coordinates $x$ and of all the $\theta$'s. Expanding such a field in a power series in the $\theta$'s, we obtain a multiplet of $x$--space fields that, under the action of the Killing vectors (\ref{qpm}), form a representation of Poincar\'e supersymmetry.
Such a representation can be shortened by imposing on the superfield $\Phi(x,\theta)$ constraints that are invariant with respect to the action of the Killing vectors (\ref{qpm}). This is possible because of the existence of the so--called superderivatives, namely of fermionic vector fields that commute with the supersymmetry Killing vectors. In our notation the superderivatives are defined as follows: \begin{eqnarray} {\cal D}^{\alpha \pm} = \frac{\partial}{\partial \theta^\mp_\alpha} +\ft{1}{2} \, (\gamma^m)^\alpha{}_\beta \theta^{\beta \pm} \partial_m \,, \label{supdervcf} \end{eqnarray} and satisfy the required property \begin{equation} \begin{array}{rclcr} \{{\cal D}^{\alpha \pm}, q^{\beta \pm}\}&=& \{{\cal D}^{\alpha \pm}, q^{\beta \mp}\}&=&0 \,. \end{array} \label{commDq} \end{equation} As explained in \cite{castdauriafre} the existence of superderivatives is the manifestation at the fermionic level of a general property of coset manifolds. For $G/H$ the true isometry algebra is not $G$; rather it is $G \times N(H)_G$ where $N(H)_G$ denotes the normalizer of the stability subalgebra $H$. The additional isometries are generated by {\it right--invariant} rather than {\it left--invariant} vector fields, which as such commute with the {\it left--invariant} ones. If we agree that the Killing vectors are left--invariant vector fields, then the superderivatives are right--invariant ones and generate the additional superisometries of Poincar\'e superspace. Shortened representations of Poincar\'e supersymmetry are superfields with a prescribed behaviour under the additional superisometries: for instance they may be invariant under such transformations. We can formulate these shortening conditions by writing constraints such as \begin{equation} {\cal D}^{\alpha +}\Phi(x,\theta)=0 \, .
\label{typconstr} \end{equation} The key point in our discussion is that a constraint of type (\ref{typconstr}) is guaranteed by eq.s (\ref{commDq}) to be invariant with respect to the super Poincar\'e algebra, yet it is not a priori guaranteed that it is invariant under the action of the full superconformal algebra (\ref{bounkil}). Investigating the additional conditions that make a constraint such as (\ref{typconstr}) superconformally invariant is the main goal of the present section. This is the main tool that allows a transcription of the Kaluza--Klein results for supermultiplets into a superconformal language. \par To develop such a programme it is useful to perform a further coordinate change that is quite traditional in the superspace literature. Given the coordinates $x$ on the boundary (or the coordinates $z$ for the bulk) we set: \begin{eqnarray} y^m=x^m + \ft{1}{2} \, \theta^+ \gamma^m \theta^- \,. \end{eqnarray} Then the superderivatives become \begin{eqnarray} {\cal D}^{\alpha +} &=& \frac{\partial}{\partial \theta^-_\alpha} \,, \nonumber \\ {\cal D}^{\alpha -} &=& \frac{\partial}{\partial \theta^+_\alpha} + (\gamma^m)^\alpha{}_\beta\theta^{\beta -} \partial_m \,. \end{eqnarray} It is our aim to describe superfield multiplets both in the bulk and on the boundary. It is clear that one can do the same redefinitions for the Killing vector of $q$-supersymmetry (\ref{qpm}) and that one can introduce superderivatives also for the theory in the bulk. In that case one inserts the functions $t(\rho)$ and $\gamma(\rho)$ in the above formulas, or, if one uses the solvable coordinates $(\rho, z, \xi)$ as in (\ref{changecos}), then there is simply no difference with the boundary case. \par So let us finally turn to superfields. We begin by focusing on boundary superfields since their treatment is slightly easier than that of bulk superfields.
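In the $y$-basis the shortening constraint (\ref{typconstr}) becomes completely transparent; spelling this out (a remark we add for clarity):

```latex
{\cal D}^{\alpha +}\Phi \;=\;
\frac{\partial}{\partial \theta^-_\alpha}\,\Phi \;=\; 0
\qquad\Longleftrightarrow\qquad
\Phi \;=\; \Phi(y,\theta^+)\,,
```

i.e. a superfield obeying (\ref{typconstr}) depends on $\theta^-$ only through the shifted coordinate $y^m$.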
\begin{definizione} A primary superfield is defined as follows (see \cite{gunaydinminiczagerman2, macksalam}), \begin{eqnarray} \Phi^{\partial AdS}(x, \theta) =\exp \left [\mbox{i}\, x\cdot P + \theta^i q^i\right ] \Phi(0)\,, \label{3Dsuperfield} \end{eqnarray} where $\Phi(0)$ is a primary state (see eq.(\ref{primstate})), \begin{eqnarray} s_\alpha^i \Phi(0) &=& 0 \,, \nonumber \\ K_m \Phi(0) &=& 0 \,, \end{eqnarray} of scaling weight $D_0$, hypercharge $y_0$ and eigenvalue $j$ for the ``third-component'' operator $J_2$: \begin{equation} \begin{array}{ccccc} D \, \Phi(0) = D_0 \, \Phi(0)&;& Y \, \Phi(0) = y_0 \, \Phi(0) &;& J_2\, \Phi(0) = j \, \Phi(0) \end{array} \label{primlab} \end{equation} \end{definizione} From the above definition one sees that the primary superfield $\Phi^{\partial AdS}(x, \theta)$ is actually obtained by acting with the coset representative (\ref{boundarycosetrepresentative}) on the $SO(1,2)\times SO(1,1)$-primary state. Hence we know how it transforms under the infinitesimal transformations of the group $Osp(2\vert 4)$. Indeed one simply uses (\ref{infinitesimalcosettransfo}) to obtain the result. For example, under dilatations we have: \begin{eqnarray} D\, \Phi^{\partial AdS}(x,\theta) = \left(-x\cdot\partial - \ft12 \theta^i \frac{\partial}{\partial \theta^i} + D_0 \right) \Phi^{\partial AdS}(x,\theta) \,, \end{eqnarray} where the term $D_0$ comes from the compensator in (\ref{compensatorsdAdS}). Of particular interest is the transformation under special supersymmetry since it imposes the constraints for shortening, \begin{equation} s^\pm \Phi^{\partial AdS}(x,\theta) = \stackrel{\rightarrow}{k} [s^\pm] \Phi(x, \theta) + e^{i\, x\cdot P + \theta^i q^i} \left( -\theta^\pm D - i\, \gamma^m \theta^\pm \,J_m +s^\pm \pm \theta^\pm Y \right) \Phi(0) \,.
\label{spm} \end{equation} For completeness we give the form of $s^\pm$ in the $y$-basis, where it takes a relatively concise form: \begin{eqnarray} \stackrel{\rightarrow}{k} [s^{\alpha-}] &=& - \left( y\cdot \gamma\right)^\alpha{}_\beta \frac{\partial}{\partial \theta_\beta^+} + \frac{1}{2} \left( \theta^- \theta^- \right) \frac{\partial}{\partial \theta^-_\alpha} \,, \nonumber \\ \stackrel{\rightarrow}{k} [s^{\alpha+}] &=& \theta^{\alpha+} y\cdot\partial + i\, \varepsilon^{pqm} y_p \left( \gamma_q \theta^+ \right)^\alpha\partial_m + \frac{1}{2} \left(\theta^+ \theta^+\right) \frac{\partial}{\partial \theta^+_\alpha} \nonumber \\ & & + \theta^+\gamma^m \theta^- \left(\gamma_m\right)^\alpha{}_\beta \frac{\partial}{\partial \theta_\beta^-} \,. \label{finalkillings} \end{eqnarray} \par Let us now turn to a direct discussion of multiplet shortening and consider the superconformal invariance of Poincar\'e constraints constructed with the superderivatives ${\cal D}^{\alpha \pm}$. The simplest example is provided by the {\it chiral supermultiplet}. By definition this is a scalar superfield $\Phi_{chiral}(y,\theta)$ obeying the constraint (\ref{typconstr}), which is solved by boosting only along $q^-$ and not along $q^+$: \begin{eqnarray} \Phi_{chiral}(y,\theta)=e^{i\, y \cdot P + \theta^+ q^-} \Phi(0) \,. \end{eqnarray} Hence we have \begin{eqnarray} \Phi_{chiral}(\rho, y, \theta) = X(\rho, y) + \theta^+ \lambda(\rho, y) + \theta^+ \theta^+ H(\rho, y) \end{eqnarray} in the bulk or \begin{eqnarray} \Phi_{chiral}(y, \theta) = X(y) + \theta^+ \lambda(y) + \theta^+ \theta^+ H(y) \label{chiralmultiplet3D} \end{eqnarray} on the boundary. The field components of the chiral multiplet are: \begin{eqnarray} X = e^{i\, y\cdot P} \, \Phi(0)\,,\qquad \lambda = i \, e^{i\, y\cdot P} \, q^- \Phi(0) \,, \qquad H = -\ft14 e^{i\, y\cdot P} \, q^- q^- \Phi(0) \,.
\label{chiralcomponents} \end{eqnarray} For completeness, we write the superfield $\Phi$ also in the $x$-basis\footnote{where $\Box=\partial^m \partial_m\,$.}, \begin{eqnarray} \Phi(x) &=& X(x) +\theta^+ \lambda(x) + (\theta^+\theta^+) H(x) +\ft{1}{2} \theta^+ \gamma^m \theta^- \partial_m X(x) \nonumber \\ & & +\ft{1}{4} (\theta^+\theta^+) \theta^- \hbox{\ooalign{$\displaystyle\partial$\cr$/$}} \lambda(x) +\ft{1}{16} (\theta^+ \theta^+) (\theta^- \theta^-) \Box X(x) \nonumber \\ &=& \exp\left(\ft{1}{2} \theta^+ \gamma^m\theta^- \partial_m \right) \Phi(y) \,. \label{formchiral} \end{eqnarray} \par Because of (\ref{commDq}), we are guaranteed that under $q$--supersymmetry the chiral superfield $\Phi_{chiral}$ transforms into a chiral superfield. We should verify that this is true also for $s$--supersymmetry. Put simply, we just have to check that $s^- \Phi_{chiral}$ does not depend on $\theta^-$. This is not generically true, but it becomes true if certain extra conditions on the quantum numbers of the primary state are satisfied. Such conditions are the same one obtains as multiplet shortening conditions when constructing the UIR.s of the superalgebra with the {\it norm method} of Freedman and Nicolai \cite{freedmannicolai} or with the {\it oscillator method} of G\"unaydin and collaborators \cite{gunay2,gunawar,gunaydinminiczagerman1, gunaydinminiczagerman2} \footnote{We are particularly grateful to S. Ferrara for explaining to us this general idea that, extended from the case of $AdS_5/CFT_4$ to the case $AdS_4/CFT_3$, has been an essential guiding line in the development of the present work.}. \par In the specific instance of the chiral multiplet, looking at (\ref{spm}) and (\ref{finalkillings}) we see that in $s^-\Phi_{chiral}$ the terms depending on $\theta^-$ are the following ones: \begin{eqnarray} s^- \Phi \Big\vert_{\theta^-} = - \left( D_0+ y_0 \right) \theta^- \Phi \,, \end{eqnarray} and they cancel if \begin{equation} D_0 = - y_0 \,.
\label{Disy} \end{equation} Eq.(\ref{Disy}) is easily recognized as the unitarity condition for the existence of $Osp(2\vert 4)$ hypermultiplets (see \cite{multanna,m111spectrum}). The algebra (\ref{Kiss2}) ensures that the chiral multiplet also transforms into a chiral multiplet under $K_m$. Moreover, we know that the action of the compensators of $K_m$ on the chiral multiplet is zero. Furthermore, the compensators of the generators $P_m, q^i, J_m$ on the chiral multiplet are zero and from (\ref{infinitesimalcosettransfo}) we conclude that these generators act on the chiral multiplet simply as the Killing vectors. \par Notice that the linear part of the $s$-supersymmetry transformation on the chiral multiplet has the same form as the $q$-supersymmetry but with the parameter taken to be $\epsilon_q = -i\, y \cdot \gamma \epsilon_s$. As already stated, the non-linear form of $s$-supersymmetry is the consequence of its gauge fixing, which we have implicitly imposed from the start by choosing the supersolvable Lie algebra parametrization of superspace and by taking the coset representatives as in (\ref{AdScosetrepresentative}) and (\ref{boundarycosetrepresentative}). \footnote { Just as a comment we recall that the standard way of gauge fixing special supersymmetry in a superconformal theory is to impose a gauge-fixing condition and then modify $q$-supersymmetry by means of a decomposition rule, i.e. adding to it special supersymmetry with specific parameters that depend on the supersymmetry parameters, such that the gauge-fixing condition becomes invariant under the modified supersymmetry. In our case we still have the standard form of $q$-supersymmetry but upon gauge fixing $s$-supersymmetry has become non-linear.
The fact that $s$-supersymmetry partly resembles $q$-supersymmetry comes from the fact that it can be seen as a $q$-like supersymmetry with its own superspace coordinates, which upon gauge fixing have become dependent on the $\theta$-coordinates.} In addition to the chiral multiplet there exists also the complex conjugate {\it antichiral multiplet} ${\bar \Phi}_{chiral}=\Phi_{antichiral}$ with opposite hypercharge and the relation $D_0=y_0$. \subsection{Matching the Kaluza Klein results for $Osp(2 \vert 4)$ supermultiplets with boundary conformal superfields} It is now our purpose to reformulate the ${\cal N}=2$ multiplets found in Kaluza Klein supergravity \cite{m111spectrum} in terms of superfields living on the boundary of the $AdS_4$ space--time manifold. This is the key step to convert information coming from classical harmonic analysis on the compact manifold $X^7$ into predictions on the spectrum of conformal primary operators present in the three--dimensional gauge theory of the M2--brane. Although the results obtained in \cite{m111spectrum} refer to a specific case, the structure of the multiplets is general and applies to all ${\cal N}=2$ compactifications, namely to all sasakian horizons $X^7$. Similarly general are the recipes discussed in the present section to convert Kaluza--Klein data into boundary superfields. 
\par As shown in \cite{m111spectrum} there are three types of {\bf long multiplets} with the following {\it bulk spin content}: \begin{enumerate} \item The long graviton multiplet $~~\left(1\left(2\right), 4\left(3\over 2\right),6\left(1\right),4\left(1\over 2\right), 1\left(0\right)\right)$ \item The long gravitino multiplet $~~\left(1\left({3\over 2}\right),4\left(1\right), 6\left(1\over 2\right),4\left(0\right)\right)$ \item The long vector multiplets $~~\left(1\left(1\right),4\left(1\over 2\right),5\left(0\right)\right)$ \end{enumerate} and four types of {\bf short multiplets} with the following {\it bulk spin content}: \begin{enumerate} \item the short graviton multiplet $~~\left(1\left(2\right),3\left(3\over 2\right),3\left(1\right), 1\left({1\over 2}\right)\right)$ \item the short gravitino multiplet $~~\left(1\left({3\over 2}\right),3\left(1\right),3\left(1\over 2\right), 1\left(0\right)\right)$ \item the short vector multiplet $~~\left(1\left(1\right),3\left(1\over 2\right),3\left(0\right)\right)$ \item the hypermultiplet $~~\left(2\left(1\over 2\right),4\left(0\right)\right)$ \end{enumerate} Finally there are the {\bf ultrashort multiplets} corresponding to the massless multiplets available in ${\cal N}=2$ supergravity and having the following {\it bulk spin content}: \begin{enumerate} \item the massless graviton multiplet $~~\left(1\left(2\right), 2\left({3\over 2}\right),1\left(1\right)\right)$ \item the massless vector multiplet $~~\left(1\left(1\right),2\left(\ft{1}{2}\right),2\left(0\right)\right)$ \end{enumerate} Interpreted as superfields on the boundary the {\it long multiplets} correspond to {\it unconstrained superfields} and their discussion is quite straightforward. We are mostly interested in short multiplets that correspond to composite operators of the microscopic gauge theory with protected scaling dimensions. In superfield language, as we have shown in the previous section, {\it short multiplets} are constrained superfields. 
\par Just as on the boundary, also in the bulk, we obtain such constraints by means of the bulk superderivatives. In order to show how this works we begin by discussing the {\it chiral superfield in the bulk} and then show how it is obtained from the hypermultiplet found in Kaluza Klein theory \cite{m111spectrum}. \subsubsection{Chiral superfields are the hypermultiplets: the basic example} The treatment of the bulk chiral field is completely analogous to that of the chiral superfield on the boundary. \par Generically bulk superfields are given by: \begin{eqnarray} \Phi^{AdS}(\rho, x, \theta) = \exp \left[\rho D + \mbox{i}\, x\cdot P + \theta^i q^i\right ] \, \Phi(0)\,. \label{4Dsuperfield} \end{eqnarray} Using the parametrization (\ref{changecos}) we can rewrite (\ref{4Dsuperfield}) in the following way: \begin{eqnarray} \Phi^{AdS}(\rho, z, \xi) = \exp \left[\mbox{i}\, z \cdot P + \xi^i q^i\right ] \, \cdot \, \exp \left[\rho D_0\right ]\, \Phi(0)\,. \label{simpleAdSsuperfield} \end{eqnarray} Then the generator $D$ acts on this field as follows: \begin{eqnarray} D \, \Phi^{AdS}(\rho, z, \xi) = \left(- z \cdot \partial - \ft12 \xi^i \frac{\partial}{\partial \xi^i} + D_0 \right) \Phi^{AdS}(\rho, z, \xi) \,. \end{eqnarray} Just as for boundary chiral superfields, also in the bulk we find that the constraint (\ref{typconstr}) is invariant under the $s$-supersymmetry rule (\ref{simplekilling}) if and only if: \begin{equation} D_0= - y_0 \label{dugaly} \end{equation} Furthermore, looking at (\ref{simpleAdSsuperfield}) one sees that $D_0=0$ is forbidden for the bulk superfields. This constraint on the scaling dimension, together with the relation $E_0=-D_0$, coincides with the constraint: \begin{equation} E_0=\vert y_0 \vert \label{Eugaly} \end{equation} defining the hypermultiplet UIR of $Osp(2\vert 4)$ constructed with the norm method and in the formulation (\ref{ospE}) of the superalgebra (see \cite{multanna, m111spectrum}).
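\par The two branches can be summarized in a trivial bookkeeping sketch (our illustration): combining $E_0=-D_0$ with the chiral relation (\ref{dugaly}), and with the antichiral relation $D_0=y_0$ for the conjugate multiplet of opposite hypercharge, reproduces $E_0=\vert y_0\vert$ on both branches:

```python
# Minimal consistency check (ours): with E0 = -D0, the chiral relation
# D0 = -y0 (eq. (dugaly), hypercharge y0 > 0) and the antichiral relation
# D0 = y0 (conjugate multiplet, hypercharge y0 < 0) both give E0 = |y0|.
def E0_chiral(y):
    assert y > 0          # chiral branch carries positive hypercharge
    return -(-y)          # E0 = -D0 with D0 = -y0

def E0_antichiral(y):
    assert y < 0          # conjugate multiplet: opposite hypercharge
    return -y             # E0 = -D0 with D0 = y0

for y0 in (1, 2, 5):
    assert E0_chiral(y0) == abs(y0)
    assert E0_antichiral(-y0) == abs(-y0)
```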
The transformation of the bulk chiral superfield under $s, P_m, q^i, J_m$ is simply given by the bulk Killing vectors. In particular the form of the $s$-supersymmetry Killing vector coincides with that given in (\ref{finalkillings}) for the boundary. \par As we saw, a chiral superfield in the bulk describes an $Osp(2\vert 4)$ hypermultiplet. To see this explicitly it suffices to look at the following table\footnote{The hypercharge $y_0$ in the table is chosen to be positive.} \begin{eqnarray} \centering \begin{array}{||c|c|c||} \hline {\rm Spin} & {\rm Particle \, \, states} & {\rm Name} \\ \hline \hline & & \\ \ft12 & \bar a^- \vert E_0=y_0 , y_0 \rangle & \lambda_L \\ 0 & \bar a^- \bar a^- \vert E_0=y_0, y_0 \rangle & \pi \\ 0 & \vert E_0=y_0, y_0 \rangle & S \\ & & \\ \hline \hline & & \\ \ft12 & \bar a^+ \vert E_0=y_0, -y_0 \rangle & \lambda_L \\ 0 & \bar a^+ \bar a^+ \vert E_0=y_0, -y_0 \rangle & \pi \\ 0 & \vert E_0=y_0, -y_0 \rangle & S \\ & & \\ \hline \end{array} \label{hyper} \end{eqnarray} where we have collected the particle states forming a hypermultiplet as it appears in Kaluza Klein supergravity on $AdS_4 \times X^7$, whenever $X^7$ is sasakian. The names of the fields are the standard ones introduced in \cite{univer} for the linearization of $D=11$ supergravity on $AdS_4 \times X^7$ and used in \cite{multanna, m111spectrum}. Applying the rotation matrix $U$ of eq. (\ref{rotationmatrix}) to the states in the upper part of this table we indeed find the field components (\ref{chiralcomponents}) of the chiral supermultiplet. \par Having clarified how to obtain the four-dimensional chiral superfield from the $Osp(2\vert 4)$ hypermultiplet, we can now obtain the other shortened $Osp(2\vert 4)$ superfields from the information obtained in \cite{m111spectrum}. In \cite{m111spectrum} all the field components of the $Osp(2\vert 4)$ multiplets were listed together with their spins $s_0$, energy $E_0$ and their hypercharge $y_0$.
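\par These reconstruction rules can be phrased as a toy bookkeeping function (our sketch; we assume, as is standard in the oscillator construction of $Osp(2\vert 4)$ UIRs, that each fermionic oscillator $\bar a^\pm$ raises the energy of the Clifford vacuum by $\ft12$, and that $n$ symmetrized spinor indices carry spin $\ft{n}{2}$):

```python
# Toy reconstruction rule (our assumptions, standard oscillator
# conventions): each oscillator a-bar raises the vacuum energy by 1/2;
# n symmetrized spinor indices (alpha_1 ... alpha_n) carry spin n/2.
from fractions import Fraction as F

def state_energy(E0_vac, n_osc):
    return E0_vac + F(n_osc, 2)

def spin_from_indices(n):
    return F(n, 2)

y0 = F(3)                                 # sample positive hypercharge
# hypermultiplet (vacuum E0 = y0): lambda_L carries one oscillator,
# pi carries two
assert state_energy(y0, 1) == y0 + F(1, 2)
assert state_energy(y0, 2) == y0 + 1
# short vector multiplet (vacuum E0 = y0 + 1): the top spin-1 state
# (a-bar^- tau a-bar^+)|E0, y0> carries two symmetrized spinor indices
assert spin_from_indices(2) == 1
assert state_energy(y0 + 1, 2) == y0 + 2
```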
This is sufficient to reconstruct the particle states of the multiplets which are given by the states (\ref{energyreps}). Indeed the energy determines the number of energy boosts that are applied to the vacuum in order to get the state. The hypercharge determines the number of $\bar a^+$ and/or $\bar a^-$ present. Finally $s_0$ tells us what spin we should get. In practice this means that we always have to take the symmetrization of the spinor indices, since $(\alpha_1\dots\alpha_n)$ yields a spin-$\ft{n}{2}$ representation. Following \cite{multanna} we ignore the unitary representations of $SO(3,2)$ that are built by the energy boosts $M_i^+$ and we just list the ground states for each UIR of $SO(3,2)$ into which the UIR of $Osp(2\vert 4)$ decomposes. \subsubsection{Superfield description of the short vector multiplet} Let us start with the short massive vector multiplet. From \cite{m111spectrum} we know that the constraint for shortening is \begin{equation} E_0=\vert y_0 \vert + 1 \label{eisyplusone} \end{equation} and that the particle states of the multiplet are given by,\footnote{The hypercharge $y_0$ in the table is chosen to be positive.} \begin{eqnarray} \label{shvecmu} \centering \begin{array}{||c|c|c||} \hline {\rm Spin} & {\rm Particle \, \, states} & {\rm Name} \\ \hline \hline & & \\ 1 & ( \bar a^- \tau^i \bar a^+ ) \vert E_0=y_0+1, y_0 \rangle & A \\ \ft12 & ( \bar a^- \bar a^- ) \bar a^+ \vert y_0+1, y_0 \rangle & \lambda_T \\ \ft12 & \bar a^+ \vert y_0+1, y_0 \rangle & \lambda_L \\ \ft12 & \bar a^- \vert y_0+1, y_0 \rangle & \lambda_L \\ 0 & ( \bar a^- \bar a^- ) \vert y_0+1, y_0 \rangle & \pi \\ 0 & ( \bar a^+ \bar a^- ) \vert y_0+1, y_0 \rangle & \pi \\ 0 & \vert y_0+1, y_0 \rangle & S \\ & & \\ \hline \hline & & \\ 1 & (\bar a^+ \tau^i \bar a^-) \vert E_0=y_0+1, - y_0 \rangle & A \\ \ft12 & ( \bar a^+ \bar a^+ ) \bar a^- \vert y_0+1, - y_0 \rangle & \lambda_T \\ \ft12 & \bar a^- \vert y_0+1, - y_0 \rangle & \lambda_L \\ \ft12 & \bar a^+ \vert
y_0+1, - y_0 \rangle & \lambda_L \\ 0 & ( \bar a^+ \bar a^+ ) \vert y_0+1, - y_0 \rangle & \pi \\ 0 & ( \bar a^+ \bar a^- ) \vert y_0+1, - y_0 \rangle & \pi \\ 0 & \vert y_0+1, - y_0 \rangle & S \\ & & \\ \hline \end{array} \end{eqnarray} where we have multiplied the symmetrized product $\bar a^+_\alpha \bar a^-_\beta$ with the $\tau$-matrices in order to single out the $SO(3)$ vector index $i$ that labels the on--shell states of the $d=4$ massive vector field. Applying the rotation matrix $U$ to the states in the upper part of table (\ref{shvecmu}) we find the following states: \begin{eqnarray} S=\vert {\rm vac} \rangle \,, \qquad \lambda^{\pm}_L = i \, q^\pm \vert {\rm vac} \rangle \,, \qquad \pi^{--} = -\ft14 \, q^- q^- \vert {\rm vac} \rangle \,, \qquad {\rm etc} \dots \label{rotatedstates} \end{eqnarray} where we use the same notation for the rotated states as for the original ones, up to an irrelevant factor $\ft14$. We follow the same procedure also for the other short and massless multiplets. Namely in the superfield transcription of our multiplets we use the same names for the superspace field components as for the particle fields appearing in the $SO(3)\times SO(2)$ basis. Moreover when convenient we rescale some field components without mentioning it explicitly. The states appearing in (\ref{rotatedstates}) are the components of a superfield \begin{eqnarray} \Phi_{vector} &=& S + \theta^- \lambda_L^+ + \theta^+ \lambda_L^- + \theta^+ \theta^- \pi^0 + \theta^+ \theta^+ \pi^{--} + \theta^+ \hbox{\ooalign{$\displaystyle A$\cr$\hspace{.03in}/$}} \theta^- + \theta^+ \theta^+ \, \theta^- \lambda_T^- \,, \nonumber \\ & & \label{vecsupfil} \end{eqnarray} which is the explicit solution of the following constraint \begin{equation} {\cal D}^+ {\cal D}^+ \Phi_{vector} = 0 \,, \end{equation} imposed on a superfield of the form (\ref{4Dsuperfield}) with hypercharge $y_0$.
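\par The component count of the superfield (\ref{vecsupfil}) can be checked combinatorially (our sketch): of the $2^4=16$ monomials in the four Grassmann variables $\theta^\pm_\alpha$, the constraint removes exactly the four containing the factor $\theta^-\theta^-$, leaving the $12$ components $S$, $\lambda_L^\pm$, $\pi^0$, $\pi^{--}$, $A_m$, $\lambda_T^-$:

```python
# Combinatorial component count (our sketch): a generic superfield in the
# four Grassmann variables theta^+_1, theta^+_2, theta^-_1, theta^-_2 has
# 2^4 = 16 components; dropping the monomials that contain the factor
# theta^-theta^- = theta^-_1 theta^-_2 leaves 12 components.
from itertools import combinations

thetas = ["t+1", "t+2", "t-1", "t-2"]
monomials = [frozenset(c) for n in range(5) for c in combinations(thetas, n)]
assert len(monomials) == 16

linear = [m for m in monomials if not {"t-1", "t-2"} <= m]
assert len(linear) == 12

# field-by-field count of (vecsupfil): S(1) + lambda_L^+(2) + lambda_L^-(2)
# + pi^0(1) + pi^--(1) + A_m(3) + lambda_T^-(2) = 12
assert 1 + 2 + 2 + 1 + 1 + 3 + 2 == 12
```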
\par In superspace literature a superfield of type (\ref{vecsupfil}) is named a linear superfield. If we consider the variation of a linear superfield with respect to $s^-$, such variation contains, a priori, a term of the form \begin{equation} s^- \Phi_{vector} \Big\vert_{\theta^-\theta^-}= \ft12 \left(D_0 + y_0+1\right)(\theta^-\theta^-) \lambda_L^+ \,, \end{equation} which has to cancel if $\Phi_{vector}$ is to transform into a linear multiplet under $s^-$. Hence the following condition has to be imposed \begin{equation} D_0 = - y_0 - 1 \,, \label{dismymone} \end{equation} which is identical with the bound for the vector multiplet shortening $E_0=y_0+1$ found in \cite{multanna,m111spectrum}. \subsubsection{Superfield description of the short gravitino multiplet} Let us consider the short gravitino multiplets found in \cite{m111spectrum}. The particle state content of these multiplets is given below\footnote{The hypercharge $y_0$ in the table is chosen to be positive.}: \begin{eqnarray} \centering \begin{array}{||c|c|c||} \hline {\rm Spin} & {\rm Particle \, \, states} & {\rm Name} \\ \hline \hline & & \\ \ft32 & \bar a^-_{(\alpha} \bar a^+_{\beta} \vert E_0=y_0+\ft32, s_0=\ft12, y_0 \rangle_{\gamma)} & \chi^{(+)} \\ 1 & (\bar a^- \bar a^-) \bar a^+\tau^i \vert y_0+\ft32, \ft12, y_0 \rangle & Z \\ 1 & \bar a^+ \tau^i \vert y_0+\ft32, \ft12, y_0 \rangle & A \\ 1 & \bar a^- \tau^i \vert y_0+\ft32, \ft12, y_0 \rangle & A \\ \ft12 & ( \bar a^- \bar a^- ) \vert y_0+\ft32, \ft12, y_0 \rangle & \lambda_T \\ \ft12 & (3 \,\bar a^- \bar a^+ + \bar a^- \tau^i \bar a^+\, \tau_i ) \vert y_0+\ft32, \ft12, y_0 \rangle & \lambda_T \\ \ft12 & \vert y_0+\ft32, \ft12, y_0 \rangle & \lambda_L \\ 0 & \bar a^- \vert y_0+\ft32, \ft12, y_0 \rangle & \phi \\ & & \\ \hline \hline & & \\ \ft32 & \bar a^+_{(\alpha} \bar a^-_{\beta} \vert E_0=y_0+\ft32, s_0=\ft12, -y_0 \rangle_{\gamma)} & \chi^{(+)} \\ 1 & (\bar a^+ \bar a^+) \bar a^-\tau^i \vert y_0+\ft32, \ft12, -y_0 \rangle & Z \\ 1 &
\bar a^- \tau^i \vert y_0+\ft32, \ft12, -y_0 \rangle & A \\ 1 & \bar a^+ \tau^i \vert y_0+\ft32, \ft12, -y_0 \rangle & A \\ \ft12 & ( \bar a^+ \bar a^+ ) \vert y_0+\ft32, \ft12, -y_0 \rangle & \lambda_T \\ \ft12 & (3 \,\bar a^+ \bar a^- + \bar a^+ \tau^i \bar a^-\, \tau_i ) \vert y_0+\ft32, \ft12, -y_0 \rangle & \lambda_T \\ \ft12 & \vert y_0+\ft32, \ft12, -y_0 \rangle & \lambda_L \\ 0 & \bar a^+ \vert y_0+\ft32, \ft12, -y_0 \rangle & \phi \\ & & \\ \hline \end{array} \label{shgrinomu} \end{eqnarray} Applying the rotation matrix $U$ (\ref{rotationmatrix}) to the upper part of table (\ref{shgrinomu}), and identifying the particle states with the corresponding rotated field states as we have done in the previous cases, we find the following spinorial superfield \begin{eqnarray} \Phi_{gravitino} &=& \lambda_L + \hbox{\ooalign{$\displaystyle A^+$\cr$\hspace{.03in}/$}} \theta^- + \hbox{\ooalign{$\displaystyle A^-$\cr$\hspace{.03in}/$}} \theta^+ + \phi^- \theta^+ + 3 \, (\theta^+ \theta^-) \lambda_T^{+-} - (\theta^+ \gamma^m \theta^-) \gamma_m \lambda_T^{+-} \nonumber \\ & & + (\theta^+ \theta^+) \lambda_T^{--} + (\theta^+ \gamma^m \theta^-) \chi_m^{(+)} + (\theta^+ \theta^+) \hbox{\ooalign{$\displaystyle Z^-$\cr$\hspace{.03in}/$}} \theta^- \,, \end{eqnarray} where the vector--spinor field $\chi^m$ is expressed in terms of the spin-$\ft32$ field with symmetrized spinor indices in the following way \begin{equation} \chi^{(+)m \alpha} = \left( \gamma^m \right)_{\beta\gamma}\, \chi^{(+)(\alpha\beta\gamma)} \label{chispin32} \end{equation} and where, as usual, $\hbox{\ooalign{$\displaystyle A^+$\cr$\hspace{.03in}/$}} = \gamma^m \, A_m$. \par The superfield $\Phi_{gravitino}$ is linear in the sense that it does not depend on the monomial $\theta^-\theta^-$, but to be precise it is a spinorial superfield (\ref{4Dsuperfield}) with hypercharge $y_0$ that fulfils the stronger constraint \begin{equation} {\cal D}^+_\alpha \Phi_{gravitino}^\alpha = 0 \,. 
\label{lineargravitino} \end{equation} \par The generic linear spinor superfield contains, in its expansion, also terms of the form $\varphi^+ \theta^-$ and $(\theta^+\theta^+) \varphi^-\theta^-$, where $\varphi^+$ and $\varphi^-$ are scalar fields, and a term $(\theta^+ \gamma^m \theta^-) \chi_m$ where the spinor-vector $\chi_m$ is not an irreducible spin-$\ft{3}{2}$ representation, since it cannot be written as in (\ref{chispin32}). \par Explicitly we have: \begin{eqnarray} \Phi^\alpha_{linear} &=& \lambda_L + \hbox{\ooalign{$\displaystyle A^+$\cr$\hspace{.03in}/$}} \theta^- + \hbox{\ooalign{$\displaystyle A^-$\cr$\hspace{.03in}/$}} \theta^+ + \phi^- \theta^+ + \varphi^+ \theta^- + 3 \, (\theta^+ \theta^-) \lambda_T^{+-} + (\theta^+ \theta^+) \lambda_T^{--} \nonumber \\ & & + (\theta^+ \gamma^m \theta^-) \chi_m + (\theta^+ \theta^+) \hbox{\ooalign{$\displaystyle Z^-$\cr$\hspace{.03in}/$}} \theta^- + (\theta^+\theta^+) \varphi^-\theta^-\,. \end{eqnarray} The field component $\chi^{\alpha m}$ in a generic unconstrained spinor superfield can be decomposed into a spin-$\ft12$ component and a spin-$\ft32$ component according to, \begin{eqnarray} \begin{array}{|c|c|} \hline \,\,\, & \,\,\, \cr \hline \end{array} \times \begin{array}{|c|} \hline \,\,\, \cr \hline \end{array} = \begin{array}{c} \begin{array}{|c|c|} \hline \,\,\, & \,\,\, \cr \hline \end{array} \cr \begin{array}{|c|} \,\,\, \cr \hline \end{array} \begin{array}{c} \,\,\, \cr \end{array} \end{array} + \begin{array}{|c|c|c|} \hline \,\,\, & \,\,\, & \,\,\, \cr \hline \end{array} \label{gravitinodecomposition} \end{eqnarray} where $m={ \begin{array}{|c|c|} \hline & \cr \hline \end{array}}\, $. Then the constraint (\ref{lineargravitino}) eliminates the scalars $\varphi^\pm$ and expresses the ${ \begin{array}{c} \begin{array}{|c|c|} \hline & \cr \hline \end{array} \cr \begin{array}{|c|} \cr \hline \end{array} \begin{array}{c} \cr \end{array} \end{array}}$-component of $\chi$ in terms of $\lambda_T^{+-}$.
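\par The dimensions in the decomposition (\ref{gravitinodecomposition}) are easily checked (our sketch): the spinor-vector $\chi^{\alpha m}$ is the $SO(3)$ product spin-$\ft12\,\otimes\,$spin-$1$ of dimension $2\times 3=6$, splitting into the irreducible spin-$\ft32$ piece (\ref{chispin32}) of dimension $4$ plus a spin-$\ft12$ remainder of dimension $2$:

```python
# Dimension check (ours) for the spinor-vector decomposition: the SO(3)
# product spin-1/2 x spin-1 splits into spin-3/2 + spin-1/2, so 6 = 4 + 2.
from fractions import Fraction as F

def dim(s):
    return int(2 * s + 1)

def product_spins(s1, s2):
    """Spins appearing in the product s1 x s2 (standard angular momentum)."""
    s = abs(s1 - s2)
    out = []
    while s <= s1 + s2:
        out.append(s)
        s += 1
    return out

assert product_spins(F(1, 2), F(1)) == [F(1, 2), F(3, 2)]
assert dim(F(1, 2)) * dim(F(1)) == sum(
    dim(s) for s in product_spins(F(1, 2), F(1)))   # 6 = 2 + 4
```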
From \begin{equation} s_\beta^-\, \Phi_{gravitino}^\alpha\Big\vert_{\theta^-\theta^-} = \ft12 \left( -D_0 - y_0 - \ft32 \right)(\theta^- \theta^-) (\hbox{\ooalign{$\displaystyle A^+$\cr$\hspace{.03in}/$}})_\beta{}^\alpha \end{equation} we conclude that the constraint (\ref{lineargravitino}) is superconformal invariant if and only if \begin{equation} D_0 = - y_0 - \ft32 \,. \end{equation} \par Once again we have retrieved the shortening condition already known in the $SO(3) \times SO(2)$ basis: $E_0 = \vert y_0 \vert + \ft32$. \subsubsection{Superfield description of the short graviton multiplet} For the massive short graviton multiplet we have the following states \begin{eqnarray} \centering \begin{array}{||c|c|c||} \hline {\rm Spin} & {\rm Particle \, \, states} & {\rm Name} \\ \hline \hline & & \\ 2 & \bar a^-_{(\alpha} \bar a^+_\beta \vert E_0=y_0+2, s_0=1, y_0 \rangle_{\gamma\delta)} & h \\ \ft32 & (\bar a^- \bar a^-) \bar a^+_{(\alpha} \vert y_0+2, 1, y_0 \rangle_{\beta\gamma)} & \chi^{(-)} \\ \ft32 & \bar a^+_{(\alpha} \vert y_0+2, 1, y_0 \rangle_{\beta\gamma)} & \chi^{(+)} \\ \ft32 & \bar a^-_{(\alpha} \vert y_0+2, 1, y_0 \rangle_{\beta\gamma)} & \chi^{(+)} \\ 1 & ( \bar a^- \bar a^- ) \vert y_0+2, 1, y_0 \rangle & Z \\ 1 & ( \bar a^+ \bar a^- ) \vert y_0+2, 1, y_0 \rangle & Z \\ 1 & \vert E_0=y_0+2, 1, y_0 \rangle & A \\ \ft12 & \bar a^- \tau\cdot \vert E_0=y_0+2, 1, y_0 \rangle & \lambda_T \\ & & \\ \hline \hline & & \\ 2 & \bar a^-_{(\alpha} \bar a^+_\beta \vert E_0=y_0+2, s_0=1, - y_0 \rangle_{\gamma\delta)} & h \\ \ft32 & (\bar a^+ \bar a^+) \bar a^-_{(\alpha} \vert y_0+2, 1, - y_0 \rangle_{\beta\gamma)} & \chi^{(-)} \\ \ft32 & \bar a^-_{(\alpha} \vert y_0+2, 1, - y_0 \rangle_{\beta\gamma)} & \chi^{(+)} \\ \ft32 & \bar a^+_{(\alpha} \vert y_0+2, 1, - y_0 \rangle_{\beta\gamma)} & \chi^{(+)} \\ 1 & ( \bar a^+ \bar a^+ ) \vert y_0+2, 1, - y_0 \rangle & Z \\ 1 & ( \bar a^+ \bar a^- ) \vert y_0+2, 1, - y_0 \rangle & Z \\ 1 & \vert E_0=y_0+2, 1, - y_0 \rangle
& A \\ \ft12 & \bar a^+ \tau\cdot \vert E_0=y_0+2, 1, - y_0 \rangle & \lambda_T \\ & & \\ \hline \end{array} \end{eqnarray} Applying the rotation $U$ (\ref{rotationmatrix}) to the upper part of the above table, and identifying the particle states with the corresponding boundary fields, as we have done so far, we derive the short graviton superfield: \begin{eqnarray} \Phi^m_{graviton} &=& A^m + \theta^+ \gamma^m \lambda_T^- + \theta^- \chi^{(+)+m} + \theta^+ \chi^{(+)-m} \nonumber \\ & & + (\theta^+ \theta^-) \, Z^{+-m} + \ft{i}{2}\, \varepsilon^{mnp} \, ( \theta^+ \gamma_n \theta^-) \, Z_p^{+-} + (\theta^+ \theta^+) \, Z^{--m} \nonumber \\ & & + ( \theta^+ \gamma_n \theta^-) \, h^{mn} + (\theta^+ \theta^+) \, \theta^- \chi^{(-)-m} \,, \end{eqnarray} where \begin{eqnarray} \chi^{(+)\pm m \alpha} &=& \left( \gamma^m \right)_{\beta\gamma}\chi^{(+)\pm(\alpha\beta\gamma)} \,, \nonumber \\ \chi^{(-)- m \alpha} &=& \left( \gamma^m \right)_{\beta\gamma}\chi^{(-)-(\alpha\beta\gamma)} \,, \nonumber \\ h^m{}_m &=& 0 \,. \label{gravitonchispin32} \end{eqnarray} This superfield satisfies the following constraint, \begin{eqnarray} {\cal D}^+_\alpha \Phi_{graviton}^{\alpha \beta}=0 \,, \end{eqnarray} where we have defined: \begin{eqnarray} \Phi^{\alpha\beta}= \left(\gamma_m\right)^{\alpha\beta} \, \Phi^m \,. \end{eqnarray} Furthermore we check that $s^- \Phi^m_{graviton}$ is still a short graviton superfield if and only if: \begin{equation} D_0 = - y_0 - 2 \,,
\end{equation} corresponding to the known unitarity bound \cite{multanna,m111spectrum}: \begin{equation} E_0 = \vert y_0 \vert +2 \label{graunibou} \end{equation} \subsubsection{Superfield description of the massless vector multiplet} \par Considering now ultrashort multiplets we focus on the massless vector multiplet containing the following bulk particle states: \begin{eqnarray} \centering \begin{array}{||c|c|c||} \hline {\rm Spin} & {\rm Particle \, \, states} & {\rm Name} \\ \hline \hline & & \\ 1 & \bar a^-_1 \bar a^+_1 \vert E_0=1 , s_0=0, y_0=0 \rangle \,, \bar a^-_2 \bar a^+_2 \vert E_0=1 , s_0=0, y_0=0 \rangle\,\, & A \\ \ft12 & \bar a^+ \vert 1 , 0, 0 \rangle & \lambda_L \\ \ft12 & \bar a^- \vert 1 , 0, 0 \rangle & \lambda_L \\ 0 & \bar a^- \bar a^+ \vert 1, 0, 0 \rangle & \pi \\ 0 & \vert 1, 0, 0 \rangle & S \\ & & \\ \hline \end{array} \end{eqnarray} where the gauge field $A$ has only two helicity states $1$ and $-1$. Applying the rotation $U$ (\ref{rotationmatrix}) we get, \begin{eqnarray} V= S + \theta^+ \lambda_L^- + \theta^- \lambda_L^+ + (\theta^+ \theta^-) \, \pi + \theta^+ \hbox{\ooalign{$\displaystyle A$\cr$\hspace{.03in}/$}} \theta^- \,. 
\label{Vfixed} \end{eqnarray} This multiplet can be obtained from a real superfield \begin{eqnarray} V &=& S + \theta^+ \lambda_L^- + \theta^- \lambda_L^+ + (\theta^+ \theta^-) \, \pi + \theta^+ \hbox{\ooalign{$\displaystyle A$\cr$\hspace{.03in}/$}} \theta^- \nonumber \\ & & + (\theta^+\theta^+) \, M^{--} + (\theta^-\theta^-) \, M^{++} \nonumber \\ & & + (\theta^+ \theta^+) \, \theta^- \mu^- + (\theta^- \theta^-) \, \theta^+ \mu^+ \nonumber \\ & & + (\theta^+\theta^+) (\theta^-\theta^-) \, F \,, \nonumber\\ V^\dagger &=& V \end{eqnarray} that transforms as follows under a gauge transformation\footnote{The vector component transforms under an $SU(2)$ or an $SU(3)$ gauge transformation in the case of \cite{m111spectrum}.}, \begin{eqnarray} V \rightarrow V + \Lambda + \Lambda^\dagger \,, \end{eqnarray} where $\Lambda$ is a chiral superfield of the form (\ref{formchiral}). In components this reads, \begin{eqnarray} S &\rightarrow& S + X + X^* \,, \nonumber\\ \lambda_L^- &\rightarrow& \lambda_L^- + \lambda \,, \nonumber \\ \pi &\rightarrow& \pi \,, \nonumber \\ A_m &\rightarrow& A_m + \ft{1}{2}\, \partial_m\left(X-X^*\right) \,, \nonumber \\ M^{--} &\rightarrow& M^{--} + H \,, \nonumber \\ \mu^{-} &\rightarrow& \mu^- +\ft{1}{4} \, \hbox{\ooalign{$\displaystyle\partial$\cr$/$}}\lambda \,, \nonumber \\ F &\rightarrow& F +\ft{1}{16} \, \Box X \,, \end{eqnarray} which may be used to gauge fix the real multiplet in the following way, \begin{equation} M^{--}= M^{++}=\mu^-=\mu^+= F =0\,, \label{gaugefixingvector} \end{equation} to obtain (\ref{Vfixed}). For the scaling weight $D_0$ of the massless vector multiplet we find $-1$. Indeed this follows from the fact that $\Lambda$ is a chiral superfield with $y_0=0$, $D_0=0$. This is also in agreement with $E_0=1$ known from \cite{multanna, m111spectrum}.
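\par The gauge fixing (\ref{gaugefixingvector}) can be checked by component bookkeeping (our sketch): the real superfield $V$ carries $2^4=16$ real components, of which the $7$ components $M^{--}$, $M^{++}$, $\mu^\mp$, $F$ are set to zero, leaving the $9$ components of (\ref{Vfixed}):

```python
# Component bookkeeping (our sketch) for the gauge fixing
# (gaugefixingvector): real component counts of the real superfield V.
full_V = {"S": 1, "lambda_L": 4, "pi": 1, "A_m": 3,
          "M--/M++": 2, "mu-/mu+": 4, "F": 1}
assert sum(full_V.values()) == 16       # = 2^4, a full real superfield

fixed = {"M--/M++", "mu-/mu+", "F"}     # set to zero by the gauge fixing
assert sum(full_V[k] for k in fixed) == 7

remaining = sum(v for k, v in full_V.items() if k not in fixed)
assert remaining == 9                   # = S(1) + lambda_L(4) + pi(1) + A_m(3)
```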
\par \subsubsection{Superfield description of the massless graviton multiplet} The massless graviton multiplet is composed of the following bulk particle states: \begin{eqnarray} \centering \begin{array}{||c|c|c||} \hline {\rm Spin} & {\rm Particle \, \, states} & {\rm Name} \\ \hline \hline & & \\ 2 & \bar a^- \tau^{(i} \bar a^+ \vert E_0=2 , s_0=1, y_0=0 \rangle^{j)} & h \\ \ft32 & \bar a^+ \vert 2 , 1, 0 \rangle^i & \chi^{(+)} \\ \ft32 & \bar a^- \vert 2 , 1, 0 \rangle^i & \chi^{(+)} \\ 1 & \vert 2, 1, 0 \rangle^i & A \\ & & \\ \hline \end{array} \end{eqnarray} from which, with the usual procedure, we obtain \begin{eqnarray} g_m = A_m + \theta^+ \chi^{(+)-}_m + \theta^- \chi^{(+)+}_m + \theta^+ \gamma^n \theta^-\, h_{mn}\,. \end{eqnarray} Just as for the vector multiplet, we may write this multiplet as a gauge-fixed multiplet with local gauge symmetries that include local coordinate transformations, local supersymmetry and local $SO(2)$, in other words full supergravity. However this is not the goal of our work, where we prepare to interpret the bulk gauge fields as composite states in the boundary conformal field theory. \par This completes the treatment of the short $Osp(2\vert 4)$ multiplets of \cite{m111spectrum}. We have found that all of them are linear multiplets with the extra constraint that they have to transform into superfields of the same type under $s$-supersymmetry. Such a constraint is identical with the shortening conditions found in the other constructions of unitary irreducible representations of the orthosymplectic superalgebra. \section{${\cal N}=2$, $d=3$ gauge theories and their rheonomic construction} \label{n2d3gauge} Next, as announced in the introduction, we turn to consider gauge theories in three space--time dimensions with ${\cal N}=2$ supersymmetry.
From the viewpoint of the $AdS_4/CFT_3$ correspondence these gauge theories, whose elementary fields we collectively denote $\phi^i(x)$, are microscopic field theories living on the $M2$ brane world volume such that suitable composite operators (see also fig.~\ref{gaugefig}): \begin{equation} {\cal O}(x) = \phi^{i_1}(x) \, \dots \, \phi^{i_n}(x) c_{i_1 \dots i_n} \label{marmellata} \end{equation} \iffigs \begin{figure} \caption{Conformal superfields are composite operators in the gauge theory \label{gaugefig}} \begin{center} \epsfxsize = 13cm \epsffile{gauge.eps} \vskip 0.2cm \hskip 2cm \unitlength=1.1mm \end{center} \end{figure} \fi \par can be identified with the components of the conformal superfields described in the previous section and matching the Kaluza Klein classical spectrum. \par According to the specific horizon $X^7$, the world volume gauge group is of the form: \begin{equation} G^{WV}_{gauge}= U(k_1 \,N)^{\ell_1} \, \times \, \dots \,\times \, U(k_n N)^{\ell_n} \label{Wvgauge} \end{equation} where $k_i$ and $\ell_i$ are integer numbers and where the correspondence holds in the $N \to \infty$ limit. Indeed $N$ is to be identified with the number of $M2$ branes in the stack. \par In addition the gauge theory has a {\it flavor group} which coincides with the gauge group of Kaluza Klein supergravity, namely with the isometry group of the $X^7$ horizon: \begin{equation} G^{WV}_{flavor} = G_{KK}^{bulk} = \mbox{isometry}(X^7) \label{flavgrou} \end{equation} Since our goal is to study the general features of the $AdS_4/CFT_3$ correspondence, rather than specific cases, we concentrate on the construction of a generic ${\cal N}=2$ gauge theory with an arbitrary gauge group and an arbitrary number of chiral multiplets in generic interaction.
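\par As a bookkeeping illustration of (\ref{Wvgauge}) (ours; the sample factors below are hypothetical), using $\dim U(n)=n^2$ the dimension of the world volume gauge group grows like $N^2$, as expected for a stack of $N$ $M2$--branes:

```python
# Illustration (ours) of the world-volume gauge group (Wvgauge): with
# dim U(n) = n^2, the dimension of U(k_1 N)^{l_1} x ... x U(k_n N)^{l_n}
# scales like N^2.
def dim_wv_gauge_group(factors, N):
    """factors = [(k_i, l_i), ...] -- hypothetical sample data."""
    return sum(l * (k * N) ** 2 for k, l in factors)

# e.g. a hypothetical U(N) x U(N) theory: dimension 2 N^2
assert dim_wv_gauge_group([(1, 2)], 5) == 50
# quadratic growth in N
assert dim_wv_gauge_group([(1, 2)], 10) == 4 * dim_wv_gauge_group([(1, 2)], 5)
```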
We are mostly interested in the final formulae for the scalar potential and in the restrictions that guarantee an enlargement of supersymmetry to ${\cal N}=4$ or ${\cal N}=8$, but we provide a complete construction of the lagrangian and of the supersymmetry transformation rules. To this effect we utilize the time-honored method of rheonomy \cite{castdauriafre,billofre,fresoriani} that yields the result for the lagrangian and the supersymmetry rules in component form, avoiding the overly implicit notation of the superfield formulation. The first step in the rheonomic construction of a rigid supersymmetric theory involves writing the structural equations of rigid superspace. \subsection{${\cal N}=2, \, d=3$ rigid superspace} The $d\!\!=\!\!3,~{\cal N}$--extended superspace is viewed as the supercoset space: \begin{equation} {\cal M}^{\cal N}_3=\frac{ISO(1,2|{\cal N})} {SO(1,2) }\, \equiv \, \frac{Z\left[ ISO(1,2|{\cal N})\right ] }{SO(1,2) \times \relax{\rm I\kern-.18em R}^{{\cal N}({\cal N}-1)/2}} \end{equation} where $ISO(1,2\vert{\cal N})$ is the ${\cal N}$--extended Poincar\'e superalgebra in three dimensions. It is the subalgebra of $Osp({\cal N}\vert 4)$ (see eq. (\ref{ospD})) spanned by the generators $J_m$, $P_m$, $q^i$. The central extension $Z\left[ ISO(1,2|{\cal N})\right ]$, which is not contained in $Osp({\cal N}\vert 4)$, is obtained by adjoining to $ISO(1,2|{\cal N})$ the central charges that generate the subalgebra $\relax{\rm I\kern-.18em R}^{{\cal N}({\cal N}-1)/2}$. Specializing our analysis to the case ${\cal N}\!\!=\!\!2$, we can define the new generators: \begin{equation} \left\{\begin{array}{ccl} Q&=&\sqrt{2}q^-=(q^1-iq^2)\\ Q^c&=&\sqrt{2}iq^+=i(q^1+iq^2)\\ Z&=&Z^{12} \end{array}\right. \end{equation} The left invariant one--form $\Omega$ on ${\cal M}^{\cal N}_3$ is: \begin{equation} \Omega=iV^mP_m-i\omega^{mn}J_{mn}+i\overline{\psi^c}Q-i\overline{\psi}Q^c +i{\cal B}Z\,.
\end{equation} The superalgebra (\ref{ospD}) defines all the structure constants apart from those relative to the central charge that are trivially determined. Hence we can write: \begin{eqnarray} d\Omega-\Omega\wedge\Omega&=&\left(dV^m-\omega^m_{\ n}\wedge V^n +i\overline{\psi}\wedge\gamma^m\psi +i\overline{\psi}^c\wedge\gamma^m\psi^c\right)P_m\nonumber\\ &&-\ft{1}{2}i\left(d\omega^{mn}-\omega^m_{\ p}\wedge\omega^{pn}\right)J_{mn}\nonumber\\ &&+\left(d\overline{\psi}^c +\ft{1}{2}\omega^{mn}\wedge\overline{\psi}^c\gamma_{mn}\right)Q\nonumber\\ &&+\left(d\overline{\psi} -\ft{1}{2}\omega^{mn}\wedge\overline{\psi}\gamma_{mn}\right)Q^c\nonumber\\ &&+i\left(d{\cal B}+i\overline{\psi}^c\wedge\psi^c -i\overline{\psi}\wedge\psi\right)Z \end{eqnarray} Imposing the Maurer-Cartan equation $d\Omega-\Omega\wedge\Omega=0$ is equivalent to imposing flatness in superspace, i.e. global supersymmetry. So we have \begin{equation} \left\{\begin{array}{ccl} dV^m-\omega^m_{\ n}\wedge V^n&=&-i\overline{\psi}^c\wedge\gamma^m\psi^c -i\overline{\psi}\wedge\gamma^m\psi\\ d\omega^{mn}&=&\omega^m_{\ p}\wedge\omega^{pn}\\ d\overline{\psi}^c&=&-\ft{1}{2}\omega^{mn}\wedge\overline{\psi}^c\gamma_{mn}\\ d\overline{\psi}&=&\ft{1}{2}\omega^{mn}\wedge\overline{\psi}\gamma_{mn}\\ d{\cal B}&=&-i\overline{\psi}^c\wedge\psi^c +i\overline{\psi}\wedge\psi \end{array}\right. \end{equation} The simplest solution for the supervielbein and connection is: \begin{equation} \left\{\begin{array}{ccl} V^m&=&dx^m-i\overline{\theta}^c\gamma^md\theta^c -i\overline{\theta}\gamma^m d\theta\\ \omega^{mn}&=&0\nonumber\\ \psi&=&d\theta\\ \psi^c&=&d\theta^c\\ {\cal B}&=&-i\overline{\theta}^c\,d\theta^c +i\overline{\theta}\,d\theta \end{array}\right. 
\end{equation} The superderivatives discussed in the previous sections (compare with \eqn{supdervcf}), \begin{equation} \left\{\begin{array}{ccl} D_m&=&\partial_m\\ D&=&\frac{\partial}{\partial\overline{\theta}}-i\gamma^m\theta\partial_m\\ D^c&=&\frac{\partial}{\partial\overline{\theta}^c}-i\gamma^m\theta^c\partial_m\\ \end{array}\right., \end{equation} are the vectors dual to these one--forms. \subsection{Rheonomic construction of the ${\cal N}=2,~d=3,$ lagrangian} As stated we are interested in the generic form of ${\cal N}=2,~d=3$ super Yang Mills theory coupled to $n$ chiral multiplets arranged into a generic representation $\cal R$ of the gauge group $\cal G$. \par In ${\cal N}=2,~d=3$ supersymmetric theories, two formulations are allowed: the on--shell and the off--shell one. In the on--shell formulation, which contains only the physical fields, the supersymmetry transformation rules close the supersymmetry algebra only upon use of the field equations. On the other hand the off--shell formulation contains further auxiliary, non-dynamical fields that make it possible for the supersymmetry transformation rules to close the supersymmetry algebra identically. By solving the field equations of the auxiliary fields, the latter can be eliminated and the on--shell formulation can be retrieved. We adopt the off--shell formulation. \subsubsection{The gauge multiplet} The three--dimensional ${\cal N}=2$ vector multiplet contains the following Lie-algebra valued fields: \begin{equation}\label{vectorm} \left({\cal A},\lambda,\lambda^c,M,P\right)\,, \end{equation} where ${\cal A}={\cal A}^It_I$ is the real gauge connection one--form, $\lambda$ and $\lambda^c$ are two complex Dirac spinors (the \emph{gauginos}), $M$ and $P$ are real scalars; $P$ is an auxiliary field. \par The field strength is: \begin{equation}\label{defF} F=d{\cal A}+i{\cal A}\wedge{\cal A}\,.
\end{equation} The covariant derivative on the other fields of the gauge multiplets is defined as: \begin{equation}\label{defnablagauge} \nabla X=dX+i\left[{\cal A},X\right]\,. \end{equation} From (\ref{defF}) and (\ref{defnablagauge}) we obtain the Bianchi identity: \begin{equation} \nabla^2 X=i\left[F,X\right]\,. \end{equation} The rheonomic parametrization of the \emph{curvatures} is given by: \begin{equation}\label{vectorB} \left\{\begin{array}{ccl} F&=&F_{mn}V^mV^n-i\overline{\psi}^c\gamma_m\lambda V^m-i\overline{\psi}\gamma_m\lambda^cV^m +iM\left(\overline{\psi}\psi-\overline{\psi}^c\psi^c\right)\\ \nabla\lambda&=&V^m\nabla_m\lambda+\nabla\!\!\!\!/ M\psi^c-F_{mn}\gamma^{mn}\psi^c +iP\psi^c\\ \nabla\lambda^c&=&V^m\nabla_m\lambda^c-\nabla\!\!\!\!/ M\psi-F_{mn}\gamma^{mn}\psi -iP\psi\\ \nabla M&=&V^m\nabla_mM+i\overline{\psi}\lambda^c-i\overline{\psi}^c\lambda\\ \nabla P&=&V^m\nabla_mP+\overline{\psi}\nabla\!\!\!\!/\lambda^c-\overline{\psi}^c\nabla\!\!\!\!/\lambda -i\overline{\psi}\left[\lambda^c,M\right]-i\overline{\psi}^c\left[\lambda,M\right] \end{array}\right. \end{equation} and we also have: \begin{equation} \left\{\begin{array}{ccl} \nabla F_{mn}&=&V^p\nabla_pF_{mn}+i\overline{\psi}^c\gamma_{[m}\nabla_{n]}\lambda +i\overline{\psi}\gamma_{[m}\nabla_{n]}\lambda^c\\ \nabla\nabla_mM&=&V^n\nabla_n\nabla_mM+i\overline{\psi}\nabla_m\lambda^c -i\overline{\psi}^c\nabla_m\lambda+\overline{\psi}^c\gamma_m\left[\lambda,M\right] +\overline{\psi}\gamma_m\left[\lambda^c,M\right]\\ \nabla\nabla_m\lambda&=&V^n\nabla_n\nabla_m\lambda+\nabla_m\nabla_nM\gamma^n\psi^c -\nabla_mF_{np}\gamma^{np}\psi^c\\ &&+i\nabla_mP\psi^c+\overline{\psi}\gamma_m\left[\lambda^c,\lambda\right]\\ \nabla_{[p}F_{mn]}&=&0\\ \nabla_{[m}\nabla_{n]}M&=&i\left[F_{mn},M\right]\\ \nabla_{[m}\nabla_{n]}\lambda&=&i\left[F_{mn},\lambda\right] \end{array}\right. 
\end{equation} The off--shell formulation of the theory contains an arbitrariness in the choice of the functional dependence of the auxiliary fields on the physical fields. Consistency with the Bianchi identities forces the generic expression of $P$ as a function of $M$ to be: \begin{equation}\label{defP} P^I=2\alpha M^I+\zeta^{\widetilde I}{\cal C}_{\widetilde I}^{\ I}\,, \end{equation} where $\alpha,~\zeta^{\widetilde I}$ are arbitrary real parameters and ${\cal C}_{\widetilde I}^{\ I}$ is the projector onto the center $Z[{\cal G}]$ of the gauge Lie algebra. The terms in the lagrangian proportional to $\alpha$ and $\zeta$ are separately supersymmetric. In the bosonic lagrangian, the part proportional to $\alpha$ is a Chern Simons term, while the part proportional to $\zeta$ constitutes the Fayet Iliopoulos term. Note that the Fayet Iliopoulos terms are associated only with a central abelian subalgebra of the gauge algebra $\cal G$. \par Enforcing (\ref{defP}) we get the following equations of motion for the spinors: \begin{equation} \left\{\begin{array}{c} \nabla\!\!\!\!/\lambda=2i\alpha\lambda-i\left[\lambda,M\right]\\ \\ \nabla\!\!\!\!/\lambda^c=2i\alpha\lambda^c+i\left[\lambda^c,M\right] \end{array}\right. \end{equation} Taking the covariant derivatives of these, we obtain the equations of motion for the bosonic fields: \begin{equation} \left\{\begin{array}{c} \nabla_m\nabla^m M^I=-4\alpha^2M^I-2\alpha\zeta^{\widetilde I}{\cal C}_{\widetilde I}^{\ I} -2\left[\overline{\lambda},\lambda\right]^I\\ \nabla^n F_{mn}=-\alpha\epsilon_{mnp}F^{np}-\ft{i}{2}\left[\nabla_mM,M\right] \end{array}\right.
\end{equation} Using the rheonomic approach, we find the following superspace lagrangian for the gauge multiplet: \begin{equation}\label{gaugeLag} {\cal L}_{gauge}={\cal L}_{gauge}^{Maxwell} +{\cal L}_{gauge}^{Chern-Simons} +{\cal L}_{gauge}^{Fayet-Iliopoulos}\,, \end{equation} where \begin{eqnarray} {\cal L}_{gauge}^{Maxwell}&=&Tr\left\{-F^{mn}\left[F +i\overline{\psi}^c\gamma_m\lambda V^m+i\overline{\psi}\gamma_m\lambda^cV^m -2iM\overline{\psi}\psi\right]V^p\epsilon_{mnp}\right.\nonumber\\ &+&\ft{1}{6}F_{qr}F^{qr}V^m V^n V^p\epsilon_{mnp} -\ft{1}{4}i\epsilon_{mnp}\left[\nabla\overline{\lambda}\gamma^m\lambda +\nabla\overline{\lambda}^c\gamma^m\lambda^c\right] V^n V^p\nonumber\\ &+&\ft{1}{2}\epsilon_{mnp}{\cal M}^m\left[\nabla M-i\overline{\psi}\lambda^c +i\overline{\psi}^c\lambda\right] V^nV^p -\ft{1}{12}{\cal M}^d{\cal M}_d\epsilon_{mnp}V^mV^nV^p\nonumber\\ &+&\nabla M \overline{\psi}^c\gamma_p\lambda V^p -\nabla M \overline{\psi}\gamma_p\lambda^cV^p\nonumber\\ &+&F\overline{\psi}^c\lambda+F\overline{\psi}\lambda^c +\ft{1}{2}i\overline{\lambda}^c\lambda\overline{\psi}^c\gamma_m\psi V^m +\ft{1}{2}i\overline{\lambda}\lambda^c\overline{\psi}\gamma_m\psi^cV^m\nonumber\\ &+&\ft{1}{12}{\cal P}^2V^mV^nV^p\epsilon_{mnp} -2i(\overline{\psi}\psi)M\left[\overline{\psi}^c\lambda +\overline{\psi}\lambda^c\right]\nonumber\\ &-&\left.\ft{1}{6}M\left[\overline\lambda,\lambda\right]V^mV^nV^p\epsilon_{mnp}\right\}\,, \end{eqnarray} \begin{eqnarray} {\cal L}_{gauge}^{Chern-Simons}&=&\alpha Tr\left\{ -\left(A\wedge F-iA\wedge A\wedge A\right) -\ft{1}{3}MP\epsilon_{mnp}V^m V^n V^p\right.\nonumber\\ &+&\ft{1}{3}\overline{\lambda}\lambda\epsilon_{mnp}V^m V^n V^p +M\epsilon_{mnp}\left[\overline{\psi}^c\gamma^m\lambda -\overline{\psi}\gamma^m\lambda^c\right]V^n V^p\nonumber\\ &&\left.-2iM^2\overline{\psi}\gamma_m\psi V^m\right\} \end{eqnarray} \begin{eqnarray} {\cal L}_{gauge}^{Fayet-Iliopoulos}&=&Tr\left\{\zeta{\cal C}\left[ -\ft{1}{6}P\epsilon_{mnp}V^mV^nV^p
+\ft{1}{2}\epsilon_{mnp}\left(\overline{\psi}^c\gamma^m\lambda -\overline{\psi}\gamma^m\lambda^c\right)V^nV^p \right.\right.\nonumber\\ &&\left.\left.-2iM\overline{\psi}\gamma_m\psi V^m -2i{\cal A}\overline{\psi}\psi\right]\right\} \end{eqnarray} \subsubsection{Chiral multiplet} The chiral multiplet contains the following fields: \begin{equation}\label{chiralm} \left(z^i,\chi^i,H^i\right) \end{equation} where $z^i$ are complex scalar fields which parametrize a K\"ahler manifold. Since we are interested in microscopic theories with canonical kinetic terms, we take this K\"ahler manifold to be flat and we choose its metric to be the constant $\eta_{ij^*}\equiv \mbox{diag}(+,+,\dots,+)$. The other fields in the chiral multiplet are $\chi^i$, a two-component Dirac spinor, and $H^i$, a complex scalar auxiliary field. The index $i$ runs over the representation $\cal R$ of $\cal G$. \par The covariant derivative of the fields $X^i$ in the chiral multiplet is: \begin{equation} \nabla X^i=dX^i+i\eta^{ii^*}{\cal A}^I(T_I)_{i^*j}X^j\,, \end{equation} where $(T_I)_{i^*j}$ are the hermitian generators of $\cal G$ in the representation $\cal R$. The covariant derivative of the complex conjugate fields $\overline X^{i^*}$ is: \begin{equation} \nabla\overline X^{i^*}=d\overline X^{i^*} -i\eta^{i^*i}{\cal A}^I(\overline T_I)_{ij^*}\overline X^{j^*}\,, \end{equation} where \begin{equation} (\overline T_I)_{ij^*}\equiv\overline{(T_I)_{i^*j}}=(T_I)_{j^*i}\,. \end{equation} The rheonomic parametrization of the curvatures is given by: \begin{equation}\label{chiralB} \left\{\begin{array}{ccl} \nabla z^i&=&V^m\nabla_m z^i+2\overline{\psi}^c\chi^i\\ \nabla\chi^i&=&V^m\nabla_m\chi^i-i\nabla\!\!\!\!/ z^i\psi^c+H^i\psi -M^I(T_I)^i_{\,j}z^j\psi^c\\ \nabla H^i&=&V^m\nabla_m H^i-2i\overline{\psi}\nabla\!\!\!\!/\chi^i -2i\overline{\psi}\lambda^I(T_I)^i_{\,j} z^j +2M^I(T_I)^i_{\,j}\overline{\psi}\chi^j \end{array}\right.\,.
\end{equation} We can choose the auxiliary fields $H^i$ to be the derivatives of an arbitrary antiholomorphic superpotential $\overline W(\overline z)$: \begin{equation}\label{defW^*} H^i=\eta^{ij^*}\frac{\partial\overline W(\overline z)}{\partial z^{j^*}} =\eta^{ij^*}\partial_{j^*}\overline W \end{equation} Enforcing eq. (\ref{defW^*}) we get the following equations of motion for the spinors: \begin{equation}\label{chimotion} \left\{\begin{array}{c} \nabla\!\!\!\!/\chi^i=i\eta^{ij^*}\partial_{j^*}\partial_{k^*} \overline W\chi^{ck^*}-\lambda^I(T_I)^i_{\,j} z^j-iM^I(T_I)^i_{\,j}\chi^j\\ \\ \nabla\!\!\!\!/\chi^{ci^*}=i\eta^{i^*j}\partial_j\partial_k W\chi^k +\lambda^{cI}(\overline T_I)^{i^*}_{\,j^*}\overline z^{j^*} -iM^I(\overline T_I)^{i^*}_{\,j^*}\chi^{cj^*} \end{array}\right.\,. \end{equation} Taking the differential of (\ref{chimotion}) one obtains the equation of motion for $z$: \begin{eqnarray} \Box z^i&=&\eta^{ii^*}\partial_{i^*}\partial_{j^*}\partial_{k^*} \overline W(\overline z)\left(\overline{\chi}^{j^*}\chi^{ck^*}\right) -\eta^{ij^*}\partial_{j^*}\partial_{k^*} \overline W(\overline z)\partial_i W(z)\nonumber\\ &&+P^I(T_I)^i_{\,j}z^j-M^IM^J(T_IT_J)^i_{\,j}z^j -2i\overline{\lambda}^I(T_I)^i_{\,j}\chi^j \end{eqnarray} The first order Lagrangian for the chiral multiplet (\ref{chiralm}) is: \begin{equation}\label{chiralLag} {\cal L}_{chiral}={\cal L}_{chiral}^{Wess-Zumino} +{\cal L}_{chiral}^{superpotential}\,, \end{equation} where \begin{eqnarray} {\cal L}_{chiral}^{Wess-Zumino}&=&\ft{1}{2}\epsilon_{mnp}\overline{\Pi}^{m\,i^*} \eta_{i^*j}\left[\nabla z^j -2\overline{\psi}^c\chi^j\right]V^nV^p\nonumber\\ &+&\ft{1}{2}\epsilon_{mnp}\Pi^{m\,i}\eta_{ij^*}\left[\nabla \overline z^{j^*} -2\overline{\chi}\psi^{c\,j^*}\right]V^nV^p\nonumber\\ &-&\ft{1}{6}\epsilon_{mnp}\eta_{ij^*}\Pi_q^{\,i} \overline{\Pi}^{q\,j^*}V^mV^nV^p\nonumber\\ &+&\ft{1}{2}i\epsilon_{mnp}\eta_{ij^*}\left[\overline{\chi}^{j^*}\gamma^m\nabla\chi^i 
+\overline{\chi}^{c\,i}\gamma^m\nabla\chi^{c\,j^*} \right]V^nV^p\nonumber\\ &+&2i\eta_{ij^*}\left[\nabla z^i\overline{\psi}\gamma_m\chi^{c\,j^*} -\nabla \overline z^{j^*}\overline{\chi}^{c\,i}\gamma_m\psi \right]V^m\nonumber\\ &-&2i\eta_{ij^*}\left(\overline{\chi}^{j^*}\gamma_m\chi^i\right) \left(\overline{\psi}^c\psi^c\right)V^m -2i\eta_{ij^*}\left(\overline{\chi}^{j^*}\chi^i\right) \left(\overline{\psi}^c\gamma_m\psi^c\right)V^m\nonumber\\ &+&\ft{1}{6}\eta_{ij^*}H^i\overline H^{j^*}\epsilon_{mnp}V^mV^nV^p +\left(\overline{\psi}\psi\right) \eta_{ij^*}\left[\overline z^{j^*}\nabla z^i -z^i\nabla \overline z^{j^*}\right]\nonumber\\ &+&i\epsilon_{mnp}z^iM^I(T_I)_{ij^*}\overline{\chi}^{j^*}\gamma^m\psi^c V^nV^p\nonumber\\ &+&i\epsilon_{mnp}\overline z^{j^*}M^I(T_I)_{j^*i} \overline{\chi}^{c\,i}\gamma^m\psi V^nV^p\nonumber\\ &-&\ft{1}{3}M^I(T_I)_{ij^*}\overline{\chi}^{j^*}\chi^i \epsilon_{mnp}V^mV^nV^p\nonumber\\ &+&\ft{1}{3}i\left[\overline{\chi}^{j^*}\lambda^I(T_I)_{j^*i} z^i -\overline{\chi}^{c\,i}\lambda^{c\,I}(T_I)_{ij^*}\overline z^{j^*}\right] \epsilon_{mnp}V^mV^nV^p\nonumber\\ &+&\ft{1}{6}z^iP^I(T_I)_{ij^*}\overline z^{j^*}\epsilon_{mnp}V^mV^nV^p\nonumber\\ &-&\ft{1}{2}\left(\overline{\psi}^c\gamma^m\lambda^I(T_I)_{ij^*}\right) z^i\overline z^{j^*}\epsilon_{mnp}V^nV^p\nonumber\\ &+&\ft{1}{2}\left(\overline{\psi}\gamma^m\lambda^{c\,I}(T_I)_{ij^*}\right) z^i\overline z^{j^*}\epsilon_{mnp}V^nV^p\nonumber\\ &-&\ft{1}{6}M^IM^J\,z^i(T_IT_J)_{ij^*}\overline z^{j^*} \epsilon_{mnp}V^mV^nV^p\nonumber\\ &+&2iM^I(T_I)_{ij^*}z^i\overline z^{j^*}\overline{\psi}\gamma_m\psi V^m\,, \end{eqnarray} and \begin{eqnarray} {\cal L}_{chiral}^{superpotential}&=& -i\epsilon_{mnp}\left[\overline{\chi}^{j^*}\gamma^m\partial_{j^*}\overline W (\overline z)\psi+\overline{\chi}^{c\,j}\gamma^m\partial_j W(z) \psi^c\right]V^nV^p\nonumber\\ &+&\ft{1}{6}\left[\partial_i\partial_jW(z)\overline{\chi}^{c\,i}\chi^j +\partial_{i^*}\partial_{j^*}\overline W(\overline z)
\overline{\chi}^{i^*}\chi^{c\,j^*}\right]\epsilon_{mnp}V^mV^nV^p\nonumber\\ &-&\ft{1}{6}\left[H^i\partial_iW(z)+\overline H^{j^*}\partial_{j^*} \overline W(\overline z)\right]\epsilon_{mnp}V^mV^nV^p\nonumber\\ &-&2i\left[W(z)+\overline W(\overline z)\right] \overline{\psi}\gamma_m\psi^cV^m \end{eqnarray} \subsubsection{The space--time Lagrangian} In the rheonomic approach (\cite{castdauriafre}), the total three--dimensional ${\cal N}\!\!=\!\!2$ lagrangian: \begin{equation} {\cal L}^{{\cal N}=2}={\cal L}_{gauge}+{\cal L}_{chiral} \end{equation} is a closed ($d{\cal L}^{{\cal N}=2}=0$) three--form defined in superspace. The action is given by the integral of ${\cal L}^{{\cal N}=2}$ on a generic \emph{bosonic} three--dimensional surface ${\cal M}_3$ in superspace: \begin{equation} S=\int_{{\cal M}_3}{\cal L}^{{\cal N}=2}\,. \end{equation} Supersymmetry transformations can be viewed as global translations in superspace which move ${\cal M}_3$. Then, since ${\cal L}^{{\cal N}=2}$ is closed, the action is invariant under global supersymmetry transformations. \par We choose as bosonic surface the one defined by: \begin{equation} \theta=d\theta=0\,. \end{equation} Then the space--time lagrangian, i.e.
the pull--back of ${\cal L}^{{\cal N}=2}$ on ${\cal M}_3$, is: \begin{equation}\label{N=2stLag} {\cal L}^{{\cal N}=2}_{st}={\cal L}^{kinetic}_{st} +{\cal L}^{fermion~mass}_{st}+{\cal L}^{potential}_{st}\,, \end{equation} where \begin{eqnarray}\label{N=2chiralst} {\cal L}_{st}^{kinetic}&=&\left\{ \eta_{ij^*}\nabla_m z^i\nabla^m\overline z^{j^*} +i\eta_{ij^*}\left(\overline{\chi}^{j^*}\nabla\!\!\!\!/\chi^i +\overline{\chi}^{c\,i}\nabla\!\!\!\!/\chi^{c\,j^*}\right) \right.\nonumber\\ &&-g_{IJ}F^I_{mn}F^{J\,mn} +\ft{1}{2}g_{IJ}\nabla_m M^I\nabla^m M^J\nonumber\\ &&+\left.\ft{1}{2}ig_{IJ}\left(\overline{\lambda}^I\nabla\!\!\!\!/\lambda^J +\overline{\lambda}^{c\,I}\nabla\!\!\!\!/\lambda^{c\,J}\right)\right\}d^3x\\ &&\nonumber\\ {\cal L}_{st}^{fermion~mass}&=& \left\{i\left(\overline{\chi}^{c\,i}\partial_i\partial_jW(z)\chi^j +\overline{\chi}^{i^*}\partial_{i^*}\partial_{j^*}\overline W(\overline z) \chi^{c\,j^*}\right)\right.\nonumber\\ &&-f_{IJK}M^I\overline{\lambda}^J\lambda^K -2\overline{\chi}^{i^*}M^I(T_I)_{ij^*}\chi^{j^*}\nonumber\\ &&+2i\left(\overline{\chi}^{i^*}\lambda^I(T_I)_{i^*j}z^j -\overline{\chi}^{c\,i}\lambda^I(T_I)_{ij^*}\overline z^{j^*}\right)\nonumber\\ &&\left.+2\alpha g_{IJ}\overline{\lambda}^I\lambda^J\right\}d^3x\\ {\cal L}_{st}^{potential}&=&-U(z,\overline z,H,\overline H,M,P)d^3x\,, \end{eqnarray} and \begin{eqnarray}\label{defU} U(z,\overline z,H,\overline H,M,P)&=& H^i\partial_iW(z) +\overline H^{j^*}\partial_{j^*}\overline W(\overline z) -\eta_{ij^*}H^i\overline H^{j^*}\nonumber\\ &&-\ft{1}{2}g_{IJ}P^IP^J-z^iP^I(T_I)_{ij^*}\overline z^{j^*}\nonumber\\ &&+z^iM^I(T_I)_{ij^*}\eta^{j^*k}M^J(T_J)_{kl^*} \overline z^{l^*}\nonumber\\ &&+2\alpha g_{IJ}M^IP^J+\zeta^{\widetilde I}{\cal C}_{\widetilde I}^{\ I}g_{IJ}P^J \end{eqnarray} From the variation of the lagrangian with respect to the auxiliary fields $H^i$ and $P^I$ we find: \begin{eqnarray} H^i&=&\eta^{ij^*}\partial_{j^*}\overline W(\overline z)\,,\\ P^I&=&D^I(z,\overline z)+2\alpha 
M^I+\zeta^{\widetilde I}{\cal C}_{\widetilde I}^{\ I}\,, \end{eqnarray} where \begin{equation} D^I(z,\overline z)=-\overline z^{i^*}(T_I)_{i^*j}z^j\,. \end{equation} Substituting these expressions into the potential (\ref{defU}) we obtain: \begin{eqnarray} U(z,\overline z,M)&=& -\partial_i W(z)\eta^{ij^*}\partial_{j^*}\overline W(\overline z)\nonumber\\ &&+\ft{1}{2}g^{IJ}\left(\overline z^{i^*}(T_I)_{i^*j}z^j\right) \left(\overline z^{k^*}(T_J)_{k^*l}z^l\right)\nonumber\\ &&+\overline z^{i^*}M^I(T_I)_{i^*j}\eta^{jk^*}M^J(T_J)_{k^*l} z^l\nonumber\\ &&-2\alpha^2g_{IJ}M^IM^J-2\alpha\zeta^{\widetilde I}{\cal C}_{\widetilde I}^{\ I}g_{IJ}M^J -\ft{1}{2}\zeta^{\widetilde I}{\cal C}_{\widetilde I}^{\ I}g_{IJ} \zeta^{\widetilde J}{\cal C}_{\widetilde J}^{\ J}\nonumber\\ &&-2\alpha M^I\left(\overline z^{i^*}(T_I)_{i^*j}z^j\right) -\zeta^{\widetilde I}{\cal C}_{\widetilde I}^{\ I} \left(\overline z^{i^*}(T_I)_{i^*j}z^j\right) \end{eqnarray} \subsection{A particular ${\cal N}=2$ theory: ${\cal N}=4$} A general lagrangian for matter coupled rigid ${\cal N}=4, d=3$ super Yang--Mills theory is easily obtained from the dimensional reduction of the ${\cal N}=2,d=4$ gauge theory (see \cite{BertFre}).
The bosonic sector of this latter lagrangian is the following: \begin{eqnarray}\label{N=4Lag} {\cal L}_{bosonic}^{{\cal N}=4}&=&-\frac{1}{g^2_{_{YM}}}g_{IJ}F^I_{mn}F^{J\,mn} +\frac{1}{2g^2_{_{YM}}}g_{IJ}\nabla_m M^I\nabla^m M^J\nonumber\\ &&+\frac{2}{g^2_{_{YM}}}g_{IJ}\nabla_m\overline Y^I\nabla^mY^J +\frac{1}{2}Tr\left(\nabla_m\overline{\bf Q} \nabla^m{\bf Q}\right)\nonumber\\ &&-\frac{1}{g^2_{_{YM}}}g_{IN}f^I_{JK}f^N_{LM}\,M^J\overline Y^K\,M^L Y^M -M^IM^JTr\left(\overline{\bf Q}(\hat T_I\,\hat T_J) {\bf Q}\right)\nonumber\\ &&-\frac{2}{g^2_{_{YM}}}g_{IN}f^I_{JK}f^N_{LM}\,\overline Y^JY^K\, \overline Y^LY^M -\overline Y^IY^J\,Tr\left(\overline{\bf Q}\left\{\hat T_I,\hat T_J\right\} {\bf Q}\right) \nonumber\\ &&-\frac{1}{4}g^2_{_{YM}}g_{IJ}Tr\left(\overline{\bf Q}(\hat T^I){\bf Q}\, \overline{\bf Q}(\hat T^J){\bf Q}\right) \end{eqnarray} The bosonic matter field content is given by two kinds of fields. First we have a complex field $Y^I$ in the adjoint representation of the gauge group, which belongs to a chiral multiplet. 
Secondly, we have an $n$-tuple of quaternions ${\bf Q}$, which parametrize a (flat)\footnote{ Once again we choose the HyperK\"ahler manifold to be flat since we are interested in microscopic theories with canonical kinetic terms} HyperK\"ahler manifold: \begin{equation} {\bf Q}=\left(\begin{array}{ccl} Q^1&=&q^{1|0}\unity-iq^{1|x}\sigma_x\\ Q^2&=&q^{2|0}\unity-iq^{2|x}\sigma_x\\ &\cdots&\\ Q^A&=&q^{A|0}\unity-iq^{A|x}\sigma_x\\ &\cdots&\\ Q^n&=&q^{n|0}\unity-iq^{n|x}\sigma_x \end{array}\right) \qquad\begin{array}{l} q^{A|0},q^{A|x}\in\relax{\rm I\kern-.18em R}\\ \\ A\in\{1,\ldots,n\}\\ \\ x\in\{1,2,3\} \end{array} \end{equation} The quaternionic conjugation is defined by: \begin{equation} \overline Q^A=q^{A|0}\unity+iq^{A|x}\sigma_x \end{equation} In this realization, the quaternions are represented by matrices of the form: \begin{equation} Q^A=\left(\begin{array}{cc} u^A&i\overline v_{A^*}\\ iv_A&\overline u^{A^*} \end{array}\right)\qquad \overline Q^A=\left(\begin{array}{cc} \overline u^{A^*}&-i\overline v_{A^*}\\ -iv_A&u^A \end{array}\right)\qquad\begin{array}{l} u^A=q^{A|0}-iq^{A|3}\\ v_A=-q^{A|1}-iq^{A|2} \end{array} \end{equation} The generators of the gauge group ${\cal G}$ have a triholomorphic action on the flat HyperK\"ahler manifold, namely they respect the three complex structures. Explicitly, this triholomorphic action on ${\bf Q}$ is the following: \begin{eqnarray} \delta^I{\bf Q}&=&i\hat T^I{\bf Q}\nonumber\\ &&\nonumber\\ \delta^I\left(\begin{array}{cc} u^A&i\overline v_{A^*}\\ iv_A&\overline u^{A^*} \end{array}\right)&=& i\left(\begin{array}{cc} T^I_{A^*B}&\\ &-\overline T^I_{AB^*} \end{array}\right)\left(\begin{array}{cc} u^B&i\overline v_{B^*}\\ iv_B&\overline u^{B^*} \end{array}\right) \end{eqnarray} where the $T^I_{A^*B}$ realize a representation of ${\cal G}$ in terms of $n\times n$ hermitian matrices.
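As a quick consistency check of the matrix realization above, one can verify numerically that $Q=q^{0}\unity-iq^{x}\sigma_x$ has exactly the stated $(u,v)$ entries and that $\overline QQ=\left(q^{0}q^{0}+q^{x}q^{x}\right)\unity$, as expected for a quaternion. A minimal sketch (the variable names and numerical values are ours, purely for illustration):

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

q0, q1, q2, q3 = 0.3, -1.2, 0.7, 2.1   # arbitrary real components
Q    = q0 * I2 - 1j * (q1 * s[0] + q2 * s[1] + q3 * s[2])
Qbar = q0 * I2 + 1j * (q1 * s[0] + q2 * s[1] + q3 * s[2])

# the (u, v) parametrization quoted in the text
u = q0 - 1j * q3
v = -q1 - 1j * q2
Q_uv = np.array([[u, 1j * np.conj(v)], [1j * v, np.conj(u)]])

assert np.allclose(Q, Q_uv)
# quaternionic norm: Qbar Q = |q|^2 times the identity
assert np.allclose(Qbar @ Q, (q0**2 + q1**2 + q2**2 + q3**2) * I2)
```

The same check applied to $\overline Q^A$ confirms that quaternionic conjugation corresponds to the hermitian conjugate of the $2\times2$ matrix.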
We define $\overline T_{AB^*}\equiv\left(T_{A^*B}\right)^*$, so, since the generators are hermitian ($T^*=T^T$), we can write: \begin{equation} T_{A^*B}=\overline T_{BA^*}. \end{equation} We can rewrite eq. (\ref{N=4Lag}) in the form: \begin{eqnarray}\label{N=4bosonicLag} {\cal L}_{bosonic}^{{\cal N}=4}&=&-\frac{1}{g^2_{_{YM}}}g_{IJ}F^I_{mn}F^{J\,mn} +\frac{1}{2g^2_{_{YM}}}g_{IJ}\nabla_m M^I\nabla^m M^J\nonumber\\ &&+\frac{2}{g^2_{_{YM}}}g_{IJ}\nabla_m\overline Y^I\nabla^m Y^J +\nabla_m\overline u\nabla^m u+\nabla_m\overline v\nabla^mv\nonumber\\ &&-\frac{2}{g^2_{_{YM}}}M^I M^J\overline Y^R f_{RIL}f^L_{\,JS}Y^S -M^I M^J\left(\overline u T_IT_J u +\overline v\overline T_I\overline T_J v\right)\nonumber\\ &&-\frac{2}{g^2_{_{YM}}}g_{IJ}\left[\overline Y,Y\right]^I \left[\overline Y,Y\right]^J -2\overline Y^IY^J\left(\overline u\{T_I,T_J\} u +\overline v\{\overline T_I,\overline T_J\} v\right)\nonumber\\ &&-2g^2_{_{YM}}g_{IJ}\left(vT^Iu\right)\left(\overline v\overline T^J \overline u\right) -\ft{1}{2}g^2_{_{YM}}g_{IJ}\left[\left(\overline u T^I u\right) \left(\overline uT^Ju\right)\right.\nonumber\\ &&\left.+\left(\overline v\overline T^I v\right)\left(\overline v \overline T^J v\right)-2\left(\overline u T^I u\right)\left( \overline v\overline T^J v\right)\right] \end{eqnarray} By comparing the bosonic part of (\ref{N=2stLag}) (rescaled by a factor ${4\over g_{_{YM}}^2}$) with (\ref{N=4bosonicLag}), we see that in order for an ${\cal N}\!\!=\!\!2$ lagrangian to also be ${\cal N}\!\!=\!\!4$ supersymmetric, the matter content of the theory and the form of the superpotential are constrained. The chiral multiplets have to be in an adjoint plus a generic quaternionic representation of $\cal G$. So the fields $z^i$ and the gauge generators are \begin{equation} z^i=\left\{\begin{array}{l} \sqrt 2Y^I\\ g_{_{YM}}u^A\\ g_{_{YM}}v_A \end{array}\right.\qquad T^I_{i^*j}=\left\{\begin{array}{l} f^I_{\,JK}\\ (T^I)_{A^*B}\\ -(\overline T^I)_{AB^*} \end{array}\right.\,.
\end{equation} Moreover, the holomorphic superpotential $W(z)$ has to be of the form: \begin{equation} W\left(Y,u,v\right)=2g^4_{_{YM}}\delta^{AA^*}Y^I\,v_A(T_I)_{A^*B}u^B\,. \end{equation} Substituting these choices into the supersymmetric lagrangian (\ref{N=2stLag}) we obtain the general ${\cal N}\!\!=\!\!4$ lagrangian expressed in ${\cal N}\!\!=\!\!2$ language. \par Since the action of the gauge group is triholomorphic, there is a triholomorphic momentum map associated with each gauge group generator (see \cite{ALE,damia,BertFre}). \par The momentum map is given by: \begin{equation} {\cal P}=\ft{1}{2}i\left(\overline{\bf Q}\,\hat T\,{\bf Q}\right)= \left(\begin{array}{cc} {\cal P}_3&{\cal P}_+\\ {\cal P}_-&-{\cal P}_3 \end{array}\right)\,, \end{equation} where \begin{eqnarray} {\cal P}_3^I&=&-\left(\overline uT^Iu-\overline v\overline T^Iv\right)= D^I\nonumber\\ {\cal P}_+^I&=&-2\overline v\overline T^I\overline u= -g^{-4}_{_{YM}}\ \partial\overline W/\partial \overline Y_I\nonumber\\ {\cal P}_-^I&=&2vT^Iu= g^{-4}_{_{YM}}\ \partial W/\partial Y_I\,. \end{eqnarray} So the superpotential can be written as: \begin{equation} W=g^4_{_{YM}}Y_I{\cal P}_-^I\,. \end{equation} \subsection{A particular ${\cal N}=4$ theory: ${\cal N}=8$} In this section we discuss the further conditions under which the ${\cal N}\!\!=\!\!4$ three dimensional lagrangian previously derived acquires an ${\cal N}\!\!=\!\!8$ supersymmetry.
To do that, we compare the four dimensional ${\cal N}\!\!=\!\!2$ lagrangian of \cite{BertFre} with the four dimensional ${\cal N}\!\!=\!\!4$ lagrangian of \cite{N8DF} (rescaled by a factor ${4\over g_{_{YM}}^2}$), whose bosonic part is: \begin{eqnarray}\label{N=8piphiLag} {\cal L}^{{\cal N}=4~D=4}_{bosonic}&=&{1\over g_{_{YM}}^2} \left\{-F^{\underline m\underline n}F_{\underline m\underline n}+ {1\over 4}\nabla^{\underline m}\phi^{AB}\nabla_{\underline m}\phi^{AB}+ {1\over 4}\nabla^{\underline m}\pi^{AB}\nabla_{\underline m}\pi^{AB} \right.\nonumber\\ &+&{1\over 64}\left( \left[\phi^{AB},\phi^{CD}\right]\left[\phi^{AB},\phi^{CD}\right]+ \left[\pi^{AB},\pi^{CD}\right]\left[\pi^{AB},\pi^{CD}\right] \right.\nonumber\\ &+&\left.\left.2\left[\phi^{AB},\pi^{CD}\right] \left[\phi^{AB},\pi^{CD}\right]\right)\right\} \end{eqnarray} The fields $\pi^{AB}$ and $\phi^{AB}$ are Lie-algebra valued: \begin{equation} \left\{\begin{array}{ccl} \pi^{AB}&=&\pi^{AB}_It^I\\ \phi^{AB}&=&\phi^{AB}_It^I \end{array}\right.\,, \end{equation} where $t^I$ are the generators of the gauge group $\cal G$. They are the real and imaginary parts of the complex field $\rho$: \begin{equation} \left\{\begin{array}{ccl} \rho^{AB}&=&\ft{1}{\sqrt{2}}\left(\pi^{AB}+i\phi^{AB}\right)\\ \overline{\rho}_{AB}&=&\ft{1}{\sqrt{2}}\left(\pi^{AB} -i\phi^{AB}\right) \end{array}\right.\,. \end{equation} $\rho^{AB}$ transforms in the representation $\bf 6$ of a global $SU(4)$-symmetry of the theory.
Moreover, it satisfies the following pseudo-reality condition: \begin{equation} \rho^{AB}=-\ft{1}{2}\epsilon^{ABCD}\overline{\rho}_{CD} \end{equation} In terms of $\rho$ the lagrangian (\ref{N=8piphiLag}) can be rewritten as: \begin{equation}\label{N=8rhoLag} {\cal L}^{{\cal N}=8}_{bosonic}={1\over g_{_{YM}}^2}\left\{ -F^{\underline m\underline n}F_{\underline m\underline n} +{1\over 2}\nabla_{\underline m}\overline{\rho}_{AB}\nabla^{\underline m} \rho^{AB} +{1\over 16}\left[\overline{\rho}_{AB},\rho^{CD}\right]\left[\rho^{AB}, \overline{\rho}_{CD}\right]\right\} \end{equation} The $SU(2)$ global symmetry of the ${\cal N}\!\!=\!\!2,~D\!\!=\!\!4$ theory can be diagonally embedded into the $SU(4)$ of the ${\cal N}\!\!=\!\!4,~D\!\!=\!\!4$ theory: \begin{equation} {\cal U}=\left(\begin{array}{cc} U&0\\ 0&\overline U \end{array}\right)\ \in SU(2)\subset SU(4)\,. \end{equation} By means of this embedding, the $\bf 6$ of $SU(4)$ decomposes as ${\bf 6}\longrightarrow{\bf 4+1+1}$. Correspondingly, the pseudo-real field $\rho$ can be split into: \begin{eqnarray} \rho^{AB}&=&\left(\begin{array}{cccc} 0&\sqrt 2Y&g_{_{YM}} u&ig_{_{YM}}\overline v\\ -\sqrt 2Y&0&ig_{_{YM}} v&g_{_{YM}}\overline u\\ -g_{_{YM}} u&-ig_{_{YM}} v&0&-\sqrt 2\overline Y\\ -ig_{_{YM}}\overline v&-g_{_{YM}}\overline u&\sqrt 2\overline Y&0 \end{array}\right)\nonumber\\ &&\nonumber\\ &=&\left(\begin{array}{cc} i\sqrt 2\sigma^2\otimes Y&g_{_{YM}}Q\\ &\\ -g_{_{YM}}Q^T&-i\sqrt 2\sigma^2\otimes\overline Y \end{array}\right)\,, \end{eqnarray} where $Y$ and $Q$ are Lie-algebra valued.
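The pseudo-reality condition can be checked directly on the explicit $4\times4$ matrix for $\rho^{AB}$ given above. A small numerical sketch (arbitrary complex values for $Y$, $u$, $v$, and $g_{_{YM}}$ set to one, purely for illustration; for simplicity the Lie-algebra index is suppressed, i.e. the entries are treated as numbers):

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol with four indices, eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    lst = list(p)
    # parity of the permutation via the number of inversions
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if lst[i] > lst[j])
    eps[p] = (-1) ** inv

Y, u, v = 0.4 + 0.9j, -1.1 + 0.2j, 0.5 - 0.7j   # arbitrary complex values
g = 1.0                                          # g_YM = 1 for the check
s2 = np.sqrt(2)

rho = np.array([
    [0,                     s2 * Y,          g * u,           1j * g * np.conj(v)],
    [-s2 * Y,               0,               1j * g * v,      g * np.conj(u)],
    [-g * u,                -1j * g * v,     0,               -s2 * np.conj(Y)],
    [-1j * g * np.conj(v),  -g * np.conj(u), s2 * np.conj(Y), 0],
])

rho_bar = np.conj(rho)                       # rho_bar_{AB} = (rho^{AB})*
rhs = -0.5 * np.einsum('abcd,cd->ab', eps, rho_bar)
assert np.allclose(rho, rhs)                 # rho^{AB} = -1/2 eps^{ABCD} rho_bar_{CD}
```

The assertion passes for any complex $Y$, $u$, $v$, confirming that the parametrization is compatible with the pseudo-reality constraint.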
The global $SU(2)$ transformations act as: \begin{equation} \rho\longrightarrow{\cal U}\rho{\cal U}^T=\left(\begin{array}{cc} i\sqrt 2\sigma^2\otimes Y&g_{_{YM}}UQU^{\dagger}\\ &\\ -g_{_{YM}}\left(UQU^{\dagger}\right)^T&-i\sqrt 2\sigma^2\otimes\overline Y \end{array}\right) \end{equation} Substituting this expression for $\rho$ into (\ref{N=8rhoLag}) and dimensionally reducing to three dimensions, we obtain the lagrangian (\ref{N=4Lag}). In other words, the ${\cal N}\!\!=\!\!4,~D\!\!=\!\!3$ theory is enhanced to ${\cal N}\!\!=\!\!8$ provided the hypermultiplets are in the adjoint representation of $\cal G$. \section{Conclusions} \label{conclu} In this paper we have discussed an essential intermediate step for the comparison between Kaluza Klein supergravity compactified on manifolds $AdS_4 \times X^7$ and superconformal field theories living on an M2 brane world volume. Focusing on the case with ${\cal N}=2$ supersymmetry, we have shown how to convert Kaluza Klein data on $Osp(2\vert 4)$ supermultiplets into conformal superfields living in three dimensional superspace. In addition, since such conformal superfields are supposed to describe composite operators of a suitable $d=3$ gauge theory, we have studied the general form of three dimensional $N=2$ gauge theories. Hence in this paper we have set the stage for the discussion of specific gauge theory models capable of describing, at an infrared conformal point, the Kaluza Klein spectra associated with the sasakian seven--manifolds (\ref{sasaki}) classified in the eighties and now under active consideration once again. Indeed the possibility of constructing dual pairs \{$M2$--brane gauge theory, supergravity on $G/H$\} provides a challenging testing ground for the exciting AdS/CFT correspondence. \section{Acknowledgements} We are mostly grateful to Sergio Ferrara for essential and enlightening discussions and for introducing us to the superfield method in the construction of the superconformal multiplets.
We would also like to express our gratitude to C. Reina, A. Tomasiello and A. Zampa for very important and clarifying discussions on the geometrical aspects of the AdS/CFT correspondence and to A. Zaffaroni for many clarifications on brane dynamics. Hopefully these discussions will lead us to the construction of the gauge theories associated with the four sasakian manifolds of eq.(\ref{sasaki}) in a joint venture.
\section{Introduction} The Kitaev model is arguably one of the best settings for the experimental realization of a quantum spin liquid in solid-state materials~\cite{hermanns2018,19JackeliNRP}. Recent efforts have resulted in the identification of several compounds with predominant Kitaev exchange, but all these compounds simultaneously showed long-range magnetic order caused by residual non-Kitaev interactions~\cite{17WinterJPCM}. External pressure was considered a convenient tuning parameter that may reduce the unwanted interactions, suppress magnetic order, and bring a material closer to the Kitaev limit~\cite{kim2016,yadav2018}. However, hydrostatic pressure experiments performed on several model compounds -- different polymorphs of Li$_2$IrO$_3$~\cite{tsirlin2021} and $\alpha$-RuCl$_3$~\cite{18BastienPRB,18BiesnerPRB,cui2017,li2019} -- all revealed a competing structural instability (dimerization) that shortens one third of the metal-metal distances on the (hyper)honeycomb spin lattice and eliminates local magnetism of the $4d/5d$ ions~\mbox{\cite{18HermannPRB,clancy2018,19TakayamaPRB,19HermannPRB}}. Among the model compounds tested under pressure, only $\beta$-Li$_2$IrO$_3$, with the hyperhoneycomb lattice of the Ir$^{4+}$ ions and the N\'eel temperature of $T_{\rm{N}}$ = 38~K at ambient pressure~\cite{14BiffinPRB,15TakayamaPRL,17RuizNC,19MajumderPRM}, showed some promise of entering the spin-liquid state before the structural dimerization occurs. Magnetization and muon spin relaxation ($\mu$SR) measurements revealed an abrupt suppression of magnetic order at $p_{\rm{c}}$ $\approx$ 1.4~GPa, with a mixture of dynamic and randomly frozen spins above this pressure~\cite{18MajumderPRL}. In contrast, the structural dimerization was observed only at $p_{\rm dim}\simeq 4$\,GPa at room temperature~\cite{18MajumderPRL,19TakayamaPRB,choi2020}.
Taken together, these results were interpreted as the gradual tuning of magnetic interactions in $\beta$-Li$_2$IrO$_3$ toward a spin-liquid phase that sets in above $p_{\rm c}$ well before the structural transformation at $p_{\rm dim}$. However, this putative spin-liquid phase must be rather fragile, as part of its spins become frozen below $15-20$~K, presumably into a glassy state. \begin{figure} \includegraphics[angle=0,width=0.48\textwidth]{figIntro} \vspace{-12pt} \caption{\label{figIntro} Nondimerized ($Fddd$), partially dimerized ($P2_1/n$), and fully dimerized ($C2/c$) phases of $\beta$-Li$_2$IrO$_3$ under pressure, with the crystal structures taken from Ref.~\cite{19VeigaPRB}. The red Ir--Ir bonds denote the dimerized and non-magnetic Ir$^{4+}$ ions. } \vspace{-12pt} \end{figure} Subsequent low-temperature x-ray diffraction (XRD) experiments~\cite{19VeigaPRB} challenged this scenario. The critical pressure $p_{\rm dim}$ was shown to decrease upon cooling, and below 50\,K a structural transformation was observed already around $p_{\rm c}$ concomitant with the suppression of magnetic order. However, the transformation to the dimerized phase was not completed up to at least $2.0-2.5$~GPa. At low temperatures, the phase composition right above $p_{\rm c}$ is either a mixture of the fully dimerized ($C2/c$) and nondimerized ($Fddd$) phases or a distinct partially dimerized phase ($P2_1/n$)~\cite{19VeigaPRB}, see Fig.~\ref{figIntro}. Interestingly, the $\mu$SR experiments above $p_{\rm c}$ showed neither a dimerized state nor a pure spin-liquid state, but rather a combination of frozen and dynamic spins~\cite{18MajumderPRL} that could also be a result of multiple structural phases being present in the sample around this pressure. Moreover, no \textit{magnetic} signatures of structural dimerization in $\beta$-Li$_2$IrO$_3$ have been reported until now. Here, we shed light on some of these peculiarities using improved magnetization measurements under pressure. 
We show that a step-like feature in temperature-dependent magnetic susceptibility -- the magnetic signature of structural dimerization -- appears right above $p_c$. This feature confirms that the structural transformation not only accompanies, but also triggers the suppression of the long-range magnetic order in $\beta$-Li$_2$IrO$_3$. Our data exclude the presence of the nondimerized phase above $p_{\rm c}$ and corroborate the formation of the partially dimerized phase as the most plausible state at intermediate pressures. Using \textit{ab initio} calculations we show that this phase is thermodynamically stable in a finite pressure range above $p_{\rm c}$ and features magnetic as well as non-magnetic Ir$^{4+}$ sites. The magnetic sites form weakly coupled tetramers, which are expected to show cluster magnetism and naturally evade long-range magnetic order. Intriguingly, the low-temperature susceptibility of this partially dimerized phase also shows a Curie-like upturn with the paramagnetic effective moment of about 0.7\,$\mu_B$ that persists far above $p_{\rm c}$. \begin{figure*} \includegraphics[angle=0,width=0.95\textwidth]{fig1} \vspace{-12pt} \caption{\label{Figure1} Temperature-dependent magnetic susceptibility $\chi(T)$ of $\beta$-Li$_2$IrO$_3$ measured under various pressures from 2~K to 300~K in the 1~T magnetic field. Panels (a), (c), and (e) show the data from three different runs. Panels (b), (d), and (f) magnify the step-like features due to the dimerization transition at $T_d$ shown with the black arrows. A hysteresis loop is clearly seen at 1.51~GPa in panel (b) upon cooling and warming. } \vspace{-12pt} \end{figure*} \section{Methods} The polycrystalline sample of $\beta$-Li$_2$IrO$_3$ was prepared by a solid-state reaction, as described in Ref.~\cite{18MajumderPRL}. Sample quality was controlled by x-ray diffraction (Rigaku MiniFlex, CuK$_{\alpha}$ radiation). Magnetization was measured using the MPMS3 SQUID magnetometer from Quantum Design. 
The powder sample was loaded into the opposed-anvil-type CuBe pressure cell. Measurement runs No. 1 and 2 were carried out with the 1.8~mm anvil culet and the gasket with the sample space diameter of 0.9~mm. In this case, pressures up to 1.8~GPa could be reached. Higher pressures up to 3.0~GPa were achieved in run No.~3 with the 1~mm anvil culet and the gasket with the sample space diameter of 0.5~mm. Daphne oil 7373 was used as the pressure-transmitting medium. Pressure was determined by measuring the superconducting transition of a small piece of Pb. Magnetization of the empty cell was taken as the background. Pressure was applied at room temperature. The cell was cooled down to 2~K, and the data were collected on warming unless specified otherwise. Then pressure was increased at room temperature, and the procedure was repeated until the highest pressure feasible with the current gasket was reached. While this experimental procedure is very similar to the one implemented in Ref.~\cite{18MajumderPRL}, and the same sample and pressure cell have been used for the measurement, we took special care with the background subtraction in order to measure the weak signal above $p_{\rm c}$ with a much higher accuracy than in our previous study. We developed software that allows point-by-point subtraction of the background signal and easy visual control of this procedure. Moreover, the number of raw data points for each temperature point has increased to 400 with MPMS3, compared to 48 with the much older MPMS 5\,T instrument used in Ref.~\cite{18MajumderPRL}. Full-relativistic density-functional-theory (DFT) band-structure calculations were used to assess thermodynamic stability of pressure-induced phases and their magnetism.
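The point-by-point background subtraction described above can be sketched as follows. This is an illustrative reconstruction, not the actual software used: the array names, the interpolation step, and the synthetic signals are our assumptions.

```python
import numpy as np

def subtract_background(T_sample, M_total, T_empty, M_empty):
    """Subtract the empty-cell signal point by point.

    The empty-cell moment M_empty(T_empty) is interpolated onto the
    temperature grid of the sample-in-cell run, so the two measurements
    need not share identical temperature points.
    """
    M_bg = np.interp(T_sample, T_empty, M_empty)
    return M_total - M_bg

# synthetic illustration: Curie-like sample signal on top of a linear cell background
T = np.linspace(2, 300, 150)
M_cell = 1e-6 * T                    # hypothetical empty-cell signal
M_sample = 5e-4 / (T + 2.0)          # hypothetical intrinsic sample signal
M_meas = M_sample + 1e-6 * T         # what the magnetometer records

M_corr = subtract_background(T, M_meas, T, M_cell)
assert np.allclose(M_corr, M_sample)
```

Visual control then amounts to overplotting `M_meas`, `M_bg`, and `M_corr` at each pressure point before accepting the subtraction.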
Structural relaxations were performed in \texttt{VASP}~\cite{vasp1,vasp2} with the Perdew-Burke-Ernzerhof for solids (PBEsol) exchange-correlation potential~\cite{pbesol} that yields best agreement with the equilibrium unit cell volume of $\beta$-Li$_2$IrO$_3$ at ambient pressure. Correlation effects were taken into account on the DFT+$U$+SO level with the on-site Coulomb repulsion parameter $U_d=1.0$\,eV and Hund's coupling $J_d=0.3$\,eV. Additionally, \texttt{FPLO} calculations~\cite{fplo} were performed on the PBE level to obtain density of states, as well as tight-binding parameters via Wannier projections. \section{Experimental results} Figure~\ref{Figure1} shows magnetic susceptibility $\chi$ as a function of temperature measured under various pressures. Below 1.4~GPa, $\chi(T)$ increases smoothly upon cooling from room temperature, followed by a sharp upturn around 50~K and an anomaly at $T_{\rm{N}}$ $\approx$ 38~K due to magnetic ordering transition. The value of $T_{\rm{N}}$ is nearly independent of the applied pressure. In the narrow pressure range between 1.4 and 1.5~GPa, the transition at $T_{\rm{N}}$ can be still observed, but absolute values of the susceptibility become much lower. The transition coexists with a step-like feature appearing at $120-150$~K. This feature is accompanied by a narrow thermal hysteresis (Fig.~\ref{Figure1}b) and strongly resembles the magnetic signature of structural dimerization in $\alpha$-RuCl$_3$~\cite{18BastienPRB,cui2017}. At even higher pressures, only the dimerization anomaly can be observed. It shifts to higher temperatures upon increasing pressure (Fig.~\ref{Figure1}b, d, and f), while below this anomaly the susceptibility becomes nearly temperature-independent, except below 50~K where a Curie-like upturn appears. All these features were systematically observed in three separate measurement runs and are thus well reproducible. In the run No. 
3, we used a smaller gasket and reached a pressure of 3~GPa, at which the dimerization temperature $T_{\rm d}$ becomes as high as 250~K, whereas the Curie-like upturn remains nearly unchanged (Fig.~\ref{Figure1}f). A slightly negative value of the susceptibility at high temperatures for run No. 3 (Fig.~\ref{Figure1}f) is likely due to an imperfect background subtraction caused by the more severe deformation of the gasket, since the sample volume and, thus, the intrinsic magnetic signal are much reduced compared to runs No. 1 and 2. \begin{figure} \includegraphics[angle=0,width=0.49\textwidth]{fig2} \vspace{-12pt} \caption{\label{Figure2}Curie-Weiss analysis of the susceptibility data. (a) Magnetic susceptibility $\chi$ and inverse susceptibility $(\chi-\chi_0)^{-1}$ measured at 1.51~GPa and 1.77~GPa in run No. 1. The dashed line shows the Curie-Weiss fit at 1.51~GPa. (b) The paramagnetic effective moment and Curie-Weiss temperature $\theta_{\rm{CW}}$ extracted from the Curie-Weiss fits for the low-temperature part of the magnetic susceptibility.} \vspace{-12pt} \end{figure} For the Curie-Weiss analysis, we fit the low-temperature part of the data with the formula ${\chi=\chi_0+C/(T-\theta_{\rm CW})}$ ($\chi_0$, a residual temperature-independent term; $C$, the Curie constant; $\theta_{\rm CW}$, the Curie-Weiss temperature), as shown in Fig.~\ref{Figure2}a for two representative pressures. The fits return an effective moment of about 0.7~$\mu_B$ and a Curie-Weiss temperature $\theta_{\rm CW}\simeq -2$~K, both reproducible between the different measurement runs and nearly pressure-independent (Fig.~\ref{Figure2}b). The very small $\theta_{\rm CW}$ indicates nearly decoupled magnetic moments, as typical for impurity spins. However, they are not merely sample imperfections because no similar Curie-like contribution has been seen below 1.4~GPa.
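For illustration, the Curie-Weiss fit ${\chi=\chi_0+C/(T-\theta_{\rm CW})}$ can be sketched in a few lines. The data and all numerical values below are synthetic (our own illustration, not the measured curves); since the model is linear in $\chi_0$ and $C$ once $\theta_{\rm CW}$ is fixed, a simple grid scan over $\theta_{\rm CW}$ with linear least squares suffices:

```python
import numpy as np

def fit_curie_weiss(T, chi, theta_grid):
    """Fit chi = chi0 + C/(T - theta): scan theta, solve for (chi0, C) linearly."""
    best = None
    for theta in theta_grid:
        A = np.column_stack([np.ones_like(T), 1.0 / (T - theta)])
        coef = np.linalg.lstsq(A, chi, rcond=None)[0]
        rss = np.sum((A @ coef - chi) ** 2)
        if best is None or rss < best[0]:
            best = (rss, coef[0], coef[1], theta)
    _, chi0, C, theta = best
    return chi0, C, theta

rng = np.random.default_rng(1)
T = np.linspace(5.0, 40.0, 60)                 # K
chi_true = 1.0e-4 + 8.0e-3 / (T + 2.0)         # chi0 = 1e-4, C = 8e-3, theta = -2 K
chi = chi_true + rng.normal(0.0, 1.0e-6, T.size)

chi0_fit, C_fit, theta_fit = fit_curie_weiss(T, chi, np.linspace(-10.0, 4.0, 561))

mu_eff = np.sqrt(8.0 * C_fit)     # effective moment in mu_B if C is in emu K/mol
fraction = (0.7 / 1.73) ** 2      # ~0.164: share of 1.73 mu_B moments giving 0.7 mu_B
```

The last line reproduces the moment-fraction estimate used in the text: a 0.7~$\mu_B$ signal from $j_{\rm eff}=\frac12$ moments corresponds to roughly 1/6 of the Ir sites.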
At ambient pressure and all the way up to $p_{\rm c}$, the low-temperature susceptibility follows the data reported for $\beta$-Li$_2$IrO$_3$ previously~\cite{19MajumderPRM,ruiz2020}. Therefore, these nearly free spins are native to the high-pressure phase(s) of $\beta$-Li$_2$IrO$_3$. Assuming the $j_{\rm eff}=\frac12$ state of Ir$^{4+}$ with the effective moment of 1.73~$\mu_B$ and using (0.7/1.73)$^2$ = 0.164, we estimate that roughly 1/6 of the Ir atoms should carry these weakly coupled, paramagnetic moments. However, this fraction may also be lower because x-ray absorption data show a rapid departure from the $j_{\rm eff}=\frac12$ state upon increasing pressure~\cite{17VeigaPRB}. \begin{figure} \includegraphics[angle=0,width=0.49\textwidth]{fig3} \vspace{-12pt} \caption{\label{Figure3} Temperature-pressure phase diagram of $\beta$-Li$_2$IrO$_3$ inferred from the magnetic susceptibility data. $T_{\rm{N}}$ marks the antiferromagnetic transition temperature of the nondimerized phase, whereas $T_{\rm{d}}$ stands for the dimerization transition temperature of the high-pressure phase. } \vspace{-12pt} \end{figure} With these revised magnetization data, we are able to confirm our earlier conclusion~\cite{18MajumderPRL} that the magnetic order in $\beta$-Li$_2$IrO$_3$ is suppressed around 1.4~GPa upon a first-order pressure-induced phase transition. However, this transition clearly has a structural component that leads to a partial dimerization with a fraction of the Ir$^{4+}$ ions becoming non-magnetic above $p_{\rm c}$ and below $T_{\rm d}$. Low-temperature XRD data~\cite{19VeigaPRB} suggest two possible scenarios for the phase composition above $p_{\rm c}$: i) a mixture of the fully dimerized and nondimerized phases; ii) the partially dimerized phase. Our data exclude the former and clearly support the latter, because the nondimerized phase with its $T_{\rm N}\simeq 38$~K should appear prominently in the magnetic susceptibility data.
Indeed, it does, and the signatures of its magnetic ordering transition are still seen at 1.51~GPa as a kink in the inverse susceptibility. On the other hand, such signatures are clearly absent at any higher pressures (Fig.~\ref{Figure2}a). The temperature-pressure phase diagram inferred from the susceptibility data (Fig.~\ref{Figure3}) shows strong similarities to that of $\alpha$-RuCl$_3$~\cite{18BastienPRB}. The N\'eel temperature of the magnetically ordered phase does not change with pressure. At $p_{\rm c}$, this magnetically ordered phase abruptly disappears and gives way to a high-pressure phase characterized by the structural dimerization with the strong pressure dependence of $T_{\rm d}$. Notwithstanding these apparent similarities to \mbox{$\alpha$-RuCl$_3$}, it seems premature to associate the high-pressure phase of $\beta$-Li$_2$IrO$_3$ with the fully dimerized ($C2/c$) state that has been observed above 4~GPa at room temperature~\cite{18MajumderPRL,19TakayamaPRB}. Indeed, the Curie-like upturn of the susceptibility, which is clearly traced up to at least 3~GPa, suggests that a fraction of magnetic moments should persist in the high-pressure phase of $\beta$-Li$_2$IrO$_3$. The $\mu$SR observations do not support a non-magnetic phase above $p_{\rm c}$ either~\cite{18MajumderPRL}. They also suggest a partial spin freezing below $15-20$~K. \begin{figure} \includegraphics[angle=0,width=0.49\textwidth]{fig4} \vspace{-12pt} \caption{\label{Figure4}Zero-field-cooled and field-cooled magnetic susceptibility of $\beta$-Li$_2$IrO$_3$ measured at 1.77~GPa in the applied field of 0.1~T (run No. 2). No FC/ZFC splitting is observed around $15-20$~K where the volume fraction of static spins increases according to $\mu$SR~\cite{18MajumderPRL}. } \end{figure} However, $\mu$SR signatures of spin freezing should be taken with caution when dimerization comes into play.
Freezing effects are sometimes observed in $\mu$SR experiments on dimer magnets~\cite{00AndreicaPB,03CavadiniPB,03CavadiniPB2}, presumably because muons perturb spin dimers and alter their singlet ground state. In this case, spin freezing is seen as the increase in the volume fraction of static spins in $\mu$SR but does not appear in the magnetic susceptibility. A similar situation occurs in $\beta$-Li$_2$IrO$_3$ (Fig.~\ref{Figure4}). Magnetic susceptibility measured at 1.77~GPa with $\mu_0H=0.1$~T, the lowest field that allows a reliable measurement of the weak signal in the pressure cell, does not show a splitting of the field-cooled and zero-field-cooled data around $15-20$~K, the temperature range where the volume fraction of static spins increases according to $\mu$SR~\cite{18MajumderPRL}. Therefore, spin-freezing effects in the high-pressure phase of $\beta$-Li$_2$IrO$_3$ seem to be extrinsic and driven by muons. \section{\textit{Ab initio} modeling} \subsection{Phase stability} We now extend this analysis with the \textit{ab initio} modeling for the partially dimerized phase ($P2_1/n$) identified by our magnetic susceptibility data as a plausible candidate state for $\beta$-Li$_2$IrO$_3$ above $p_{\rm c}$. The XRD data of Ref.~\cite{19VeigaPRB} reported this phase at 50~K, whereas at 25~K it could not be uniquely distinguished from a mixture of the nondimerized ($Fddd$) and fully dimerized ($C2/c$) phases. Therefore, the first question regarding the partially dimerized phase is its thermodynamic stability. Should there be a pressure range where it is more stable than both $Fddd$ and $C2/c$ phases, one may expect this partially dimerized phase to appear upon compression before full dimerization occurs. \begin{figure} \includegraphics{figX1} \caption{\label{fig:energy} Comparison of the nondimerized ($Fddd$), partially dimerized ($P2_1/n$), and fully dimerized ($C2/c$) phases: (a) volume dependence of energy; (b) pressure dependence of enthalpy. 
} \end{figure} In the following, we compare total energies and enthalpies of the phases in question. Crystal structures are relaxed at fixed volume to obtain the volume dependence of the energy, which is fitted with the Murnaghan equation of state, \begin{equation*} E(V)=E_0+B_0V_0\left[\frac{1}{B_0'(B_0'-1)} \left(\frac{V}{V_0}\right)^{1-B_0'}+ \frac{1}{B_0'}\,\frac{V}{V_0}-\frac{1}{B_0'-1}\right]. \end{equation*} The fit returns the equilibrium energy ($E_0$), equilibrium volume ($V_0$), bulk modulus ($B_0$), and its pressure derivative ($B_0'$), as given in Table~\ref{tab:eos}. \begin{table} \caption{\label{tab:eos} Fitted parameters of the second-order Murnaghan equation of state for different phases of $\beta$-Li$_2$IrO$_3$. Energies $E_0$ are given relative to the energy minimum of the nondimerized $Fddd$ phase. } \begin{ruledtabular} \begin{tabular}{ccccc}\medskip Space group & $E_0$ (meV/f.u.) & $V_0$ (\r A$^3$/f.u.) & $B_0$ (GPa) & $B_0'$ \\\smallskip $Fddd$ & 0 & 55.48(3) & 103(3) & 4.7(4) \\\smallskip $C2/c$ & 18(1) & 54.11(2) & 114(2) & 5.9(3) \\ $P2_1/n$ & 100(1) & 54.91(3) & 101(3) & 6.1(3) \end{tabular} \end{ruledtabular} \end{table} Fig.~\ref{fig:energy} shows that the fully and partially dimerized phases are indeed stabilized under pressure. The fully dimerized phase sets in around 2.7\,GPa, preceded by the partially dimerized phase that becomes thermodynamically stable already at 1.7\,GPa. These transition pressures are in very good agreement with the respective experimental values of 1.5\,GPa (partial dimerization) and 2.3\,GPa (full dimerization) at 50\,K, as determined from the XRD data~\cite{19VeigaPRB}. Our \textit{ab initio} results thus support the partially dimerized phase as a thermodynamically stable form of $\beta$-Li$_2$IrO$_3$ at intermediate pressures. It may indeed appear upon compressing $\beta$-Li$_2$IrO$_3$ at low temperatures.
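The enthalpy comparison behind Fig.~\ref{fig:energy}b can be reproduced schematically from the fitted parameters in Table~\ref{tab:eos}. The sketch below is our own illustration (the unit conversion, volume grid, and variable names are ours, not the production code): for each phase, $H(p)=\min_V\,[E(V)+pV]$ is evaluated numerically on a grid.

```python
import numpy as np

def murnaghan(V, E0, V0, B0, Bp):
    """Second-order Murnaghan equation of state E(V), as in the text."""
    return E0 + B0 * V0 * ((V / V0) ** (1.0 - Bp) / (Bp * (Bp - 1.0))
                           + (V / V0) / Bp - 1.0 / (Bp - 1.0))

def enthalpy(p, params, Vgrid):
    """H(p) = min_V [E(V) + p V], evaluated on a volume grid."""
    E0, V0, B0, Bp = params
    return np.min(murnaghan(Vgrid, E0, V0, B0, Bp) + p * Vgrid)

MEV_PER_GPA_A3 = 6.2415     # 1 GPa * A^3 = 6.2415 meV
# (E0 [meV/f.u.], V0 [A^3/f.u.], B0 [converted to meV/A^3], B0') from Table I
phases = {
    'Fddd':   (0.0,   55.48, 103.0 * MEV_PER_GPA_A3, 4.7),
    'C2/c':   (18.0,  54.11, 114.0 * MEV_PER_GPA_A3, 5.9),
    'P2_1/n': (100.0, 54.91, 101.0 * MEV_PER_GPA_A3, 6.1),
}

Vgrid = np.linspace(42.0, 60.0, 4000)
H_at = {p_gpa: {name: enthalpy(p_gpa * MEV_PER_GPA_A3, pars, Vgrid)
                for name, pars in phases.items()}
        for p_gpa in (0.0, 1.0, 2.0, 3.0)}
stable_at_zero = min(H_at[0.0], key=H_at[0.0].get)
```

At $p=0$ this correctly returns the nondimerized $Fddd$ phase as the most stable one; the crossing pressures quoted in the text come from the full DFT calculation, not from this toy evaluation.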
\begin{figure} \includegraphics{figX2} \caption{\label{fig:dos} Density of states corresponding to the Ir $t_{2g}$ bands in the three phases of $\beta$-Li$_2$IrO$_3$. Note that the partially dimerized phase ($P2_1/n$) shares features of both the dimerized and nondimerized phases. The calculations are performed on the DFT+SO level without taking Coulomb correlations into account. } \end{figure} \subsection{Magnetism} We now analyze the magnetism of this partially dimerized phase. A simple inspection of the crystal structure suggests that half of the Ir$^{4+}$ ions -- those with the Ir--Ir distance of 2.66\,\r A (Ir2 and Ir4 in the notation of Ref.~\cite{19VeigaPRB}) -- should be non-magnetic, while the remaining half (Ir1 and Ir3 with the Ir--Ir distance of 2.92\,\r A) should remain magnetic akin to the parent undimerized phase. Indeed, in DFT+$U$+SO calculations we find magnetic moments of about 0.4\,$\mu_B$ in the nondimerized phase, no moment in the dimerized phase, and the drastically different moments of 0.05\,$\mu_B$ on Ir2 and Ir4 vs. 0.35\,$\mu_B$ on Ir1 and Ir3 in the partially dimerized phase. This conclusion is additionally supported by the structure of the Ir $t_{2g}$ bands calculated on the DFT+SO level without taking electronic correlations into account, such that no band gap opens at the Fermi level (Fig.~\ref{fig:dos}). In the nondimerized phase, the splitting between the bands below and above $-0.5$\,eV reflects the separation of the atomic states into $j_{\rm eff}=\frac32$ and $j_{\rm eff}=\frac12$, respectively. This splitting disappears in the dimerized phase, where one finds instead several narrow bands arising from molecular orbitals of the Ir$_2$ dimers, similar to Refs.~\cite{antonov2018,antonov2021}. The partially dimerized phase combines both features. The Ir2 and Ir4 states participate in the dimer formation and, therefore, provide the dominant contribution to the upper and lower bands around $0.3$~eV and $-1.8$~eV, respectively.
The Ir1 and Ir3 states span a comparatively narrow energy range. Partial dimerization breaks the hyperhoneycomb lattice of $\beta$-Li$_2$IrO$_3$ into finite Ir1--Ir3--Ir3--Ir1 clusters with the $X-Y-X$ Kitaev bonds (Fig.~\ref{fig:structure}). The $Z$-type bonds disappear because they always involve either Ir2 or Ir4. The nature of the exchange interactions is verified by a direct calculation of the exchange parameters of the spin Hamiltonian, \begin{equation*} \mathcal H=\sum_{\langle ij\rangle} J_{ij}\,\mathbf S_i\cdot\mathbf S_j + \sum_{\langle ij\rangle} K_{ij} S_i^{\gamma}S_j^{\gamma}+\sum_{\langle ij\rangle} \Gamma_{ij} (S_i^{\alpha}S_j^{\beta}+S_i^{\beta}S_j^{\alpha}), \end{equation*} where $J_{ij}$, $K_{ij}$, and $\Gamma_{ij}$ stand, respectively, for the Heisenberg exchange, Kitaev exchange, and off-diagonal anisotropy, and $\alpha\neq\beta\neq\gamma$ ($\gamma=X$ for Ir1--Ir3 and $\gamma=Y$ for Ir3--Ir3). These parameters are obtained from the superexchange theory of Refs.~\cite{rau2014,winter2016} following the procedure described in Ref.~\cite{18MajumderPRL}. \begin{figure} \includegraphics{figX3} \caption{\label{fig:structure} Partially dimerized phase of $\beta$-Li$_2$IrO$_3$ with the non-magnetic Ir2--Ir4 dimers and magnetic Ir1--Ir3--Ir3--Ir1 tetramers. $J_{\rm it}$ is the coupling between the tetramers. } \end{figure} We find $J_{13}=-6.3$\,meV, $K_{13}=-14.5$\,meV, and $\Gamma_{13}=-18.9$\,meV for the Ir1--Ir3 bonds, as well as $J_{33}=-4.4$\,meV, $K_{33}=-9.9$\,meV, and $\Gamma_{33}=-11.9$\,meV for the Ir3--Ir3 bonds. These parameters are only marginally different from the ambient-pressure values obtained using the same superexchange theory ($K=-12.1$\,meV, $\Gamma=-13.5$\,meV, and $J=-4.8$\,meV~\cite{18MajumderPRL}). Experimentally, one finds $|K|\simeq |\Gamma|\simeq 13$\,meV~\cite{19MajumderPRM,20MajumderPRB} and $J\simeq 0.3$\,meV~\cite{rousochatzakis2018}.
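Diagonalizing the four-site Hamiltonian above with these couplings can be sketched in plain numpy. This is our own illustration (not the Mathematica code used for the actual analysis); the axis and sign conventions are assumptions, so the resulting gap need not coincide exactly with the $\Delta\simeq 9.1$\,meV quoted for the tetramer.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
S = {
    'x': np.array([[0, 0.5], [0.5, 0]], dtype=complex),
    'y': np.array([[0, -0.5j], [0.5j, 0]], dtype=complex),
    'z': np.array([[0.5, 0], [0, -0.5]], dtype=complex),
}

def site_op(site, axis, n_sites=4):
    """Embed a single-site spin operator into the 2**n_sites tetramer space."""
    mats = [np.eye(2, dtype=complex)] * n_sites
    mats[site] = S[axis]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def bond_h(i, j, J, K, G, gamma):
    """J S_i.S_j + K S_i^g S_j^g + G (S_i^a S_j^b + S_i^b S_j^a), a, b != g."""
    a, b = [ax for ax in 'xyz' if ax != gamma]
    h = J * sum(site_op(i, ax) @ site_op(j, ax) for ax in 'xyz')
    h = h + K * site_op(i, gamma) @ site_op(j, gamma)
    h = h + G * (site_op(i, a) @ site_op(j, b) + site_op(i, b) @ site_op(j, a))
    return h

# Couplings (meV) from the text; the Ir1--Ir3--Ir3--Ir1 bonds are of X-Y-X type
H = (bond_h(0, 1, -6.3, -14.5, -18.9, 'x')     # Ir1--Ir3
     + bond_h(1, 2, -4.4, -9.9, -11.9, 'y')    # Ir3--Ir3
     + bond_h(2, 3, -6.3, -14.5, -18.9, 'x'))  # Ir3--Ir1

levels = np.linalg.eigvalsh(H)
gap = levels[1] - levels[0]   # gap above the tetramer ground state
```

The same routine applied with the ambient-pressure experimental couplings gives the comparison spectrum discussed in the text.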
Exact diagonalization for the Ir1--Ir3--Ir3--Ir1 tetramer was performed in Mathematica using exchange parameters for the partially dimerized phase. The energy spectrum of the tetramer features a singlet ground state separated from the first excited state by $\Delta\simeq 9.1$\,meV. For comparison, we also applied experimental exchange parameters at ambient pressure to the tetramer, and arrived at a quite similar low-energy spectrum with $\Delta\simeq 8.5$\,meV. We thus expect that the partially dimerized phase of $\beta$-Li$_2$IrO$_3$ is magnetic, but no long-range order ensues because tetramers remain decoupled. Their singlet state is protected by the sizable gap $\Delta$. This result is consistent with our experimental observation that magnetic order vanishes above $p_{\rm c}$, but signatures of local magnetism remain visible also at higher pressures. The couplings between the tetramers are mediated by the non-magnetic Ir2--Ir4 dimers. The shortest superexchange pathway with the Ir1--Ir1 distance of 5.10\,\r A yields the coupling with $J_{\rm it}=-0.3$\,meV, $K_{\rm it}=-0.8$\,meV, and $\Gamma_{\rm it}=0.1$\,meV (Fig.~\ref{fig:structure}). This weak coupling is by far insufficient to close the gap $\Delta$ and induce long-range order. \section{Discussion and Summary} Our data put in question the earlier scenario of the pressure-induced spin-liquid formation in $\beta$-Li$_2$IrO$_3$~\cite{18MajumderPRL}. We have shown that the breakdown of magnetic order at $p_{\rm c}$ leads to the step-like feature in the magnetic susceptibility -- a hallmark of structural dimerization. A small fraction of the nondimerized phase persists up to 1.5~GPa due to the pressure hysteresis of the first-order phase transition, but at higher pressures the magnetically ordered nondimerized phase vanishes entirely. This leaves two possibilities for the low-temperature behavior of $\beta$-Li$_2$IrO$_3$ at pressures above $p_{\rm c}$. 
One scenario is the fully dimerized state, similar to the high-pressure phase of $\alpha$-RuCl$_3$, but with a significant amount of defects that should account for the Curie-like upturn in the magnetic susceptibility at low temperatures. However, such a fully dimerized phase is at odds with the low-temperature XRD data at pressures right above $p_{\rm c}$~\cite{19VeigaPRB} and also fails to explain the $\mu$SR observation of the mixed frozen and dynamic spins in the high-pressure phase. An interpretation of the $\mu$SR data would require that some of the muons break the singlet state of the dimers, thus causing spin freezing, while other muons leave dimers intact and observe dynamic spins. Importantly, magnetic field distribution produced by these dynamic spins is temperature-dependent and becomes broader below 40~K~\cite{18MajumderPRL}. This fact would be especially difficult to reconcile with the scenario of complete dimerization. The alternative scenario of the partially dimerized phase seems more promising. An important aspect of this phase is that its magnetic Ir$^{4+}$ sites are confined to weakly coupled tetramer units and, therefore, evade magnetic ordering. The spins of the tetramer remain dynamic down to zero temperature, but they form a cluster magnet rather than a genuine spin liquid. The separation of the Ir$^{4+}$ ions into magnetic and non-magnetic \textit{within} one phase gives a natural explanation for the mixed $\mu$SR response of the dynamic and frozen spins coexisting in the sample. The frozen spins can be assigned to the non-magnetic Ir$^{4+}$ ions, with corresponding spin dimers perturbed by muons. The dynamic spins can be associated with the tetramers, and the temperature-dependent field distribution probed by muons may be caused by the development of spin-spin correlations on the tetramers. 
The 40:60 ratio of the static and dynamic spins in $\mu$SR seems rather close to the 50:50 ratio of the magnetic and non-magnetic Ir$^{4+}$ sites in the crystal structure, although some imbalance among the muon stopping sites should probably be taken into account. One aspect of the data that even the partially dimerized phase fails to explain is the Curie-like susceptibility upturn, which is not expected in the tetramers because their ground state is a singlet. Such an upturn may be expected if trimers or pentamers are formed locally instead of the tetramers, for example as a consequence of pressure-induced strains and defects, but further studies would be needed to verify and elucidate this scenario. Local probes like nuclear magnetic resonance (NMR) and electron-spin resonance (ESR) might be particularly useful. Yet another aspect that requires further investigation is the transformation of the partially dimerized phase into the fully dimerized one. Our susceptibility data do not show any additional phase boundaries on the temperature-pressure phase diagram, but technical limitations restrict the highest pressure of our present study to 3~GPa, and already above 2~GPa the sensitivity of the measurement is reduced because of the smaller sample size. Magnetization measurements at higher pressures and the aforementioned local probes would certainly be useful to track the further evolution of the unusual high-pressure phase of $\beta$-Li$_2$IrO$_3$. In summary, we revised the temperature-pressure phase diagram of $\beta$-Li$_2$IrO$_3$ using improved magnetization measurements. The breakdown of the long-range magnetic order around 1.4~GPa is accompanied by the appearance of a step-like anomaly due to the structural dimerization. This observation rules out the scenario of a pressure-induced spin liquid in this material and suggests a structural instability as the main cause for the breakdown of magnetic order.
The high-pressure phase shows signatures of local magnetism that may be related to the formation of a partially dimerized phase with coexisting magnetic (nondimerized) and non-magnetic (dimerized) Ir$^{4+}$ sites. \acknowledgments AAT thanks Ioannis Rousochatzakis for his help with exact diagonalization, continuous support, and fruitful discussions. This work was funded by the German Research Foundation (DFG) via the Project No. 107745057 (TRR80) and via the Sino-German Cooperation Group on Emergent Correlated Matter.
\section{Introduction} Intersecting branes and branes ending on branes currently receive much attention \cite{inters1}--\cite{West} in connection with the development of M-theory \cite{DbraneM} and its application to gauge theories \cite{witten96,witten97}. However, the studies of \cite{inters1}--\cite{West} were performed for the pure bosonic limit of the brane systems or for a supersymmetric description in the framework of the 'probe brane' approach only. In the first case they are based upon the observation that the ground state should not include nontrivial expectation values of the fermions in order to keep (part of) the Lorentz invariance (corresponding to the configuration of the branes). Then it is possible to justify that some of the pure bosonic solutions preserve part of the target space supersymmetry and, just due to this property, saturate the Bogomolnyi bound and are thus stable (see e.g. \cite{SUGRA} and refs. therein). In the second case one of the branes is treated in the 'external field' of the other brane. The latter can be considered either as a solution of the low-energy supergravity theory \cite{inters1,Sato}, or, in the frame of the superembedding approach \cite{bpstv,bsv,hs1,hs2,5-equiv}, as a superspace in which the ends of the probe brane live \cite{Sezgin25,hsc,Sezgin98}. In such an approach the $\kappa$-symmetry of the probe brane plays an essential role in studying the 'host' brane and the coupled system. Despite many successes of these approaches, it is desirable to obtain a complete and manifestly supersymmetric description of interacting brane systems. Of course, the preservation of supersymmetry in the presence of boundaries (including the boundaries of open branes ending on other branes) requires the analysis of anomalies \cite{HW,Mourad}, while at the classical level the boundary breaks at least half of the supersymmetry \cite{gsw,typeI,Mourad}.
So at that level one may search for an action for a coupled brane system, which includes manifestly supersymmetric bulk terms for all the branes and allows direct variations. The term 'supersymmetric' will be used below for an action of this type. The main problem to be faced in a search for such an action is that the coordinate functions of intersecting branes (or of the open brane and host brane), which define embeddings of their worldvolumes, say $${\cal M}^{1+p}= (\zeta ^m ), ~~~ m=0,\ldots p \qquad and \qquad {\cal M}^{1+p'}=(\xi^{m^\prime}), ~~~ m^\prime=0,\ldots p^\prime $$ into the tangent superspace $ {\cal M}^{(D~|~N^.2^{[D/2]})}= (X^{\underline{m}}, \Theta^{\underline{\mu}I}), ~~~~( \underline{m}= 0, \ldots D-1, ~~~ \underline{\mu}= 1, \ldots 2^{[D/2]}$, \\ $\quad I=1, \ldots N) $: $$ {\cal M}^{1+p} \in {\cal M}^{(D~|~N^.2^{[D/2]})} : \qquad X^{\underline{m}}=\tilde{X}^{\underline{m}} (\zeta), \qquad \Theta^{\underline{\mu}I} = \tilde{\Theta }^{\underline{\mu}I} (\zeta) $$ and $$ {\cal M}^{1+p'} \in {\cal M}^{(1+(D-1)|N^.2^{[D/2]})} : \qquad X^{\underline{m}}=\hat{X}^{\underline{m}}(\xi ), \qquad \Theta^{\underline{\mu}I} = \hat{\Theta}^{\underline{\mu}I} (\xi ) $$ should be identified at the intersection ${\cal M}^\cap \equiv {\cal M}^{1+p} \cap {\cal M}^{1+p'} = ( \tau ^r ), ~~ r=0, \ldots , dim({\cal M}^\cap )-1 $: \begin{equation}\label{idn} {\cal M}^\cap \equiv {\cal M}^{1+p} \cap {\cal M}^{1+p'} \in {\cal M}^{(D~|~N^.2^{[D/2]})} : \quad \tilde{X}^{\underline{m}} (\zeta (\tau )) =\hat{X}^{\underline{m}} (\xi (\tau ) ), \quad \tilde{\Theta }^{\underline{\mu}I} (\zeta (\tau ))= \hat{\Theta}^{\underline{\mu}I} (\xi (\tau ) ). \end{equation} Hence the variations $\d \tilde{X} (\zeta), \d \tilde{\Theta } (\zeta )$ and $\d \hat{X}(\xi ), \d \hat{\Theta} (\xi )$ may not be considered as completely independent. Recently we proposed two procedures to solve this problem and to obtain a supersymmetric action for an interacting brane system \cite{BK99}. 
One of them provides the necessary identification \p{idn} by the Lagrange multiplier method (SSPE or superspace embedded action \cite{BK99}). Another ('(D-1)-brane dominance' approach or Goldstone fermion embedded (GFE) action) involves a (dynamical or auxiliary) space-time filling brane and uses the identification of all the Grassmann coordinate fields of lower dimensional branes $\hat{\Theta} (\xi )$, $\tilde{\Theta } (\zeta)$ with the images of the (D-1)-brane Grassmann coordinate fields \begin{equation}\label{idThI} \hat{\Theta} (\xi ) = \Theta (x(\xi)), \qquad \tilde{\Theta} (\zeta ) = \Theta (x(\zeta)). \end{equation} We considered the general properties of the equations of motion which follow from such actions using the example of a superstring ending on a super--D3--brane. It was found that both approaches are equivalent and thus justify each other. The super--9--brane was considered as an auxiliary object in \cite{BK99}. The study of supersymmetric equations of motion for this system will be the subject of a forthcoming paper \cite{BK99-3}. Here we elaborate on another example: the dynamical system consisting of the fundamental superstring ending on the super-D9--brane. We present explicitly the action for the coupled system and obtain equations of motion by its direct variation. As the super-D9--brane is the space-time filling brane of the type $IIB$ superspace, the GFE approach is most natural in this case. Moreover, the system involving the dynamical space-time filling brane has some peculiar properties which are worth studying (see e.g. \cite{Kth}). On the other hand, it can be regarded as a relatively simple counterpart of the supersymmetric dynamical system including superbranes and supergravity (see \cite{BK99} for some preliminary considerations). Several problems arise when one tries to find the action for a coupled system of the space-time filling superbrane and another super--p--brane.
The main one is how to formulate the supersymmetric generalization of the current (or, more precisely, dual current form) distributions $J_{D-(p+1)}$ with support localized on the brane worldvolume ${\cal M}^{1+p}$. Such distributions can be used to present the action of a lower dimensional brane as an integral over the $D$--dimensional space--time, or, equivalently, over the (D-1)--brane worldvolume. Then the action for the coupled system of the lower dimensional branes and the space-time filling brane acquires the form of an integral over the $D$--dimensional manifold and permits direct variation. The solution of this problem was presented in \cite{BK99} and will be elaborated here in detail. For the space-time filling brane the world volume spans the whole bosonic part of the target superspace. As a consequence, it produces a nonlinear realization of the target space supersymmetry. The expression for the supersymmetry transformations of the bosonic current form distributions (which was used in \cite{bbs} for the description of the interacting bosonic M--branes \cite{BSTann,blnpst,schw5}) vanishes when the above-mentioned identification of the Grassmann coordinates of the lower dimensional brane with the image of the Grassmann coordinate field of the space-time filling brane is imposed. This observation provides us with the necessary current distribution form and is the key point of our construction\footnote{ It is convenient to first adapt the description of the currents to the language of dual current forms, whose usefulness was pointed out in \cite{DL92,Julia}.}.
The second problem is related to the fact that the distributions $J_{D-p-1}$ can be used to lift the $(p+1)$--dimensional integral to the $D$--dimensional one, \begin{equation}\label{lift} \int_{{\cal M}^{1+p} } \hat{{\cal L}}_{p+1} = \int_{{\cal M}^{D} } J_{D-p-1} \wedge {{\cal L}}_{p+1}, \end{equation} only when the integrated $(p+1)$-form $\hat{{\cal L}}_{p+1}$ can be considered as a pull--back of a D-dimensional $(p+1)$-form ${{\cal L}}_{p+1}$ living on ${\cal M}^{D}$ onto the $(p+1)$--dimensional surface ${\cal M}^{1+p}\in {\cal M}^{D}$. Thus, e.g., the superstring Wess-Zumino form can easily be 'lifted' up to (i.e. rewritten as) an integral over the whole D9-brane worldvolume. However, not all terms of the superstring action can be written as integrals of pull-backs of 10-dimensional forms. For example, the kinetic term of the Polyakov formulation of the (super)string action \begin{equation}\label{superc} \int_{{\cal M}^{1+1}} {\cal L}^{Polyakov}_2 = \int d^2 \xi \sqrt{-g} g^{\mu\nu} \hat{\Pi}_\mu^{\underline{m}} \hat{\Pi}_\nu^{\underline{n}} \eta_{\underline{m}\underline{n}} \equiv \int_{{\cal M}^{1+1}} \hat{\Pi}^{\underline{m}} \wedge * \hat{\Pi}^{\underline{n}} \eta_{\underline{m}\underline{n}} \end{equation} with $$ \hat{\Pi}^{\underline{m}} \equiv d\hat{X}^{\underline{m}}(\xi)- id\hat{\Theta}^I\G^{\underline{m}} \hat{\Theta}^I = d\xi^\mu \hat{\Pi}_\mu^{\underline{m}}, \qquad \mu=1,2 \qquad \xi^\mu=(\tau,\s ), $$ $$ \eta_{\underline{m}\underline{n}}=diag (+1,-1,\ldots ,-1) $$ does not possess such a formulation. Moreover, it is unclear how to define a straightforward extension of the 2--form ${\cal L}^{Polyakov}_2$ to the whole 10-dimensional space-time. The same problem exists for the Dirac-Nambu-Goto and Dirac-Born-Infeld kinetic terms of super-Dp-branes and usual super-p-branes.
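For the superstring case ($p=1$, $D=10$) relevant here, such a current density distribution can be realized schematically (up to normalization; this explicit form is our own illustration in the dual-current-form language) as

```latex
J_{8} \;\propto\; dx^{m_1}\wedge \ldots \wedge dx^{m_8}\,
\epsilon_{m_1\ldots m_8 nk}
\int_{{\cal M}^{1+1}} d\hat{x}^{n}\wedge d\hat{x}^{k}\,
\delta^{10}\big(x-\hat{x}(\xi )\big),
\qquad
\int_{{\cal M}^{10}} J_{8}\wedge {\cal L}_{2}
= \int_{{\cal M}^{1+1}} \hat{{\cal L}}_{2},
```

the second relation being the $p=1$ case of \p{lift}.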
Though it is possible to treat the 'lifting' relation \p{lift} formally (see e.g. \cite{bbs} for a description of bosonic M-branes), to address the delicate problems of the supersymmetric coupled brane system it is desirable to have a version of the superstring and superbrane actions which admits a straightforward and explicit lifting to the whole 10--dimensional space or to the whole D9-brane world volume. Fortunately, such a formulation does exist. It is the so-called Lorentz harmonic formulation of the superstring \cite{BZ} which includes auxiliary moving frame (Lorentz harmonic) variables, treated as worldsheet fields. This is a geometric (in a sense, first-order) action which can be written in terms of differential forms without use of the Hodge operation \cite{bpstv,bsv}. The only world volume field which is not an image of a target space one is just the moving frame field (harmonics). However, it is possible to extend this field to an auxiliary 10-dimensional $SO(1,9)/(SO(1,1) \times SO(8))$ 'sigma model', which is subject to the only condition that it should coincide with the 'stringy' harmonics when restricted to the string worldsheet \footnote{Just the existence of the Lorentz harmonic actions for super--D--branes \cite{bst,baku2,abkz} and super--M--branes \cite{bsv,bpst} guarantees the correctness of the formal approach to the action functional description of interacting bosonic systems \cite{bbs}.}. In this way we obtain a supersymmetric action for the interacting system including the super-D9-brane and a fundamental superstring 'ending' on the D9-brane, derive the supersymmetric equations of motion directly from the variation of the action and study different phases of the coupled dynamical system. We also briefly discuss the generalization of our approach to the case of an arbitrary system of intersecting branes. For simplicity we work in flat target $D=10$ type $IIB$ superspace.
The generalization to brane systems in arbitrary supergravity {\sl background} is straightforward. Moreover, our approach allows one to include supergravity in the interacting brane system. To this end one can include a counterpart of the group manifold action for supergravity in the functional describing interacting branes instead of (or together with) the space--time filling brane action. In Section 2 we consider the peculiar features of an interacting system which contains a space-time filling brane. We describe an induced embedding of the superstring worldsheet into the D9-brane worldvolume. The geometric action \cite{abkz} and the geometric ('first order') form of the supersymmetric equations of motion for the super--D9--brane are presented in Section 3. Section 4 is devoted to the description of the geometric (twistor--like Lorentz harmonic) action and of the equations of motion for the free type $IIB$ superstring. Here Lorentz harmonic variables are used and the issue of supersymmetry breaking by boundaries is addressed briefly. In Section 5 we introduce the density with support localized on the superstring worldsheet and show that it becomes invariant under $D=10$ type $IIB$ supersymmetry when the identification \p{idThI} is imposed. The action functional describing the interacting system of the super--D9--brane and the (in general open) fundamental superstring ('ending' on the super-D9-brane) is presented in Section 6. The equations of motion of the interacting system are derived in Section 7 and analyzed in Section 8. The issues of kappa-symmetry and supersymmetry in the coupled system are addressed there. In the last Section we summarize our results and also discuss a generalization of our approach to an arbitrary system of intersecting branes.
\def\theequation{\thesection.\arabic{equation}} \section{The space-time filling brane} \setcounter{equation}{0} The embedding of the super--D9--brane worldvolume \begin{equation}\label{D9wv} {\cal M}^{1+9}= \{ x^m \}, \qquad m=0,\ldots ,9 \end{equation} into the $D=10$ type $II$ target superspace \begin{equation}\label{IIBts} \underline{{\cal M}}^{(1+9|32)}= \{ X^{\underline{m}}, \Theta^{\underline{\mu}1}, \Theta^{\underline{\mu}2} \}, \qquad {\underline{m}}=0,\ldots ,9 , \qquad {\underline{\mu}}=1,\ldots ,16 \end{equation} can be described locally by the coordinate superfunctions \begin{equation}\label{superco} X^{\underline{m}} = X^{\underline{m}} (x^m ), \qquad \Theta^{I\underline{\mu}} = \Theta^{I\underline{\mu}} (x^m), ~~~ I=1,2 . \end{equation} In addition, there is an intrinsic gauge field living on the D9-brane world volume \begin{equation}\label{gf} A = dx^m A_m (x^n) . \end{equation} For nonsingular D9-brane configurations the function $X^{\underline{m}} (x^m )$ should be assumed to be nondegenerate in the sense $\det \left(\partial_n X^{\underline{m}} (x^m )\right)\not= 0$. Thus the inverse function \begin{equation}\label{xX} x^m = x^m (X^{\underline{m}}) \end{equation} exists and, hence, the Grassmann coordinate functions \p{superco} and the Born-Infeld gauge field \p{gf} can be considered as functions of the $X^{\underline{m}}$ variables.
In this manner an alternative parametrization of the D9-brane world volume is provided by \begin{equation}\label{supercVA} {{\cal M}}^{1+9} ~\rightarrow ~ \underline{{\cal M}}^{(1+9|32)} : \qquad {{\cal M}}^{1+9} = \{ (X^{\underline{m}}, \Theta^{I\underline{\mu}} (X^{\underline{m}} ) ) \}, \qquad A = dX^{\underline{m}} A_{\underline{m}}(X^{\underline{n}}), \end{equation} which clarifies the fact that the $D=10$, type $II$ super--D9--brane is a theory of the Volkov-Akulov Goldstone fermion \cite{VA}, combined into a supermultiplet with the vector field $A_{\underline{m}}(X^{\underline{n}})$ (see \cite{abkz}). Through the intermediate steps \p{xX}, \p{supercVA} we can define the {\sl induced embedding of the superstring worldsheet into the D9-brane world volume}. \subsection{ Induced embedding of the superstring worldsheet} Indeed, the embedding of the fundamental superstring worldsheet \begin{equation}\label{IIBwv} {\cal M}^{1+1}= \{ \xi^{(\pm\pm )} \} = \{ \xi^{(++)}, \xi^{(--)} \}, \qquad \xi^{(++)}= \tau + \s, \qquad \xi^{(--)}= \tau - \s, \qquad \end{equation} into the $D=10$ type $IIB$ target superspace $\underline{{\cal M}}^{(1+9|32)}$ \p{IIBts} can be described locally by the coordinate superfunctions \begin{equation}\label{supercIIB} X^{\underline{m}} = \hat{X}^{\underline{m}} (\xi^{(\pm\pm )} ), \qquad \Theta^{I\underline{\mu}} = \hat{\Theta}^{I\underline{\mu}} (\xi^{(\pm\pm )}), ~~~ I=1,2 . \end{equation} However, using the existence of the inverse function \p{xX}, one can define the {\sl induced} embedding of the worldsheet into the D9-brane world volume \begin{equation}\label{x(xi)} x^m =x^m (\xi )\equiv x^m \left( \hat{X}^{\underline{m}} (\xi )\right).
\end{equation} As the superstring and the super--D9--brane live in the same $D=10$ type $IIB$ superspace, we can use the identification of the Grassmann coordinate fields of the superstring with the images of the Grassmann coordinate fields of the super--D9--brane (Goldstone fermions) on the worldsheet \begin{equation}\label{X9Xs} \hat{\Theta}^{I\underline{\mu}} (\xi^{(\pm\pm )})= \Theta^{I\underline{\mu}} (\hat{X}^{\underline{m}} (\xi^{(\pm\pm )} )), \end{equation} or, equivalently, \begin{equation}\label{X9xXs} \hat{X}^{\underline{m}} (\xi^{(\pm\pm )} )= X^{\underline{m}} \left(x^m(\xi^{(\pm\pm )} )\right), \qquad \hat{\Theta}^{I\underline{\mu}} (\xi^{(\pm\pm )})= \Theta^{I\underline{\mu}}\left(x^m(\xi^{(\pm\pm )} )\right), \end{equation} to study the interaction of the fundamental superstring with the super-D9-brane. The approach based on such an identification was called 'Goldstone fermion embedded' (GFE) in \cite{BK99} because, from another viewpoint, the superstring worldsheet can be regarded as embedded into the Goldstone fermion theory rather than into superspace.
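Indeed, differentiating the identification \p{X9Xs} along the worldsheet gives the chain rules
$$ \partial_{(\pm\pm )} \hat{\Theta}^{I\underline{\mu}} (\xi ) = \partial_{(\pm\pm )} x^m (\xi )\; \partial_m \Theta^{I\underline{\mu}} \left(x(\xi )\right), \qquad \partial_{(\pm\pm )} \hat{X}^{\underline{m}} (\xi ) = \partial_{(\pm\pm )} x^m (\xi )\; \partial_m X^{\underline{m}} \left(x(\xi )\right), $$
so that, once the identification is imposed, the independent superstring degrees of freedom reduce to the induced embedding $x^m (\xi )$ and the D9-brane fields.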
\subsection{Tangent and cotangent space} The pull-backs of the basic forms (flat supervielbeine) of flat $D=10$ type $IIB$ superspace \begin{equation}\label{Ea9} E^{\underline{a}} = \Pi^{\underline{m}}u_{\underline{m}}^{~\underline{a}} \equiv (dX^{\underline{m}} - i d\Theta^1 \s^{\underline{m}} \Theta^1 - i d\Theta^2 \s^{\underline{m}} \Theta^2) u_{\underline{m}}^{~\underline{a}}, \end{equation} \begin{equation}\label{Eal9} E^{\underline{\alpha} 1} = d\Theta^{1\underline{\mu}} v_{\underline{\mu}} ^{~\underline{\alpha}} , \qquad E^{\underline{\alpha} 2} = d\Theta^{2\underline{\mu}} v_{\underline{\mu}} ^{~\underline{\alpha}} \end{equation} to the D9-brane worldvolume are defined by the decomposition in the holonomic basis $dx^m$ or $dX^{\underline{m}}$ \begin{equation}\label{dXdx} dX^{\underline{m}}= dx^m \partial_m X^{\underline{m}}(x).
\end{equation} The basic relations are \begin{equation}\label{Pipb} \Pi^{\underline{m}}= dX^{\underline{n}} \Pi^{~\underline{m}}_{\underline{n}} = dx^m \Pi^{~\underline{m}}_{{m}}, \qquad \end{equation} \begin{equation}\label{Pi1} \Pi^{~\underline{m}}_{\underline{n}} \equiv \d^{\underline{m}}_{\underline{n}} - i \partial_{\underline{n}} \Theta^1 \s^{\underline{m}} \Theta^1 - i \partial_{\underline{n}} \Theta^2 \s^{\underline{m}} \Theta^2 , \end{equation} \begin{equation}\label{Pi2} \Pi^{~\underline{m}}_{{n}} \equiv \partial_n X^{\underline{m}} - i \partial_{{n}} \Theta^1 \s^{\underline{m}} \Theta^1 - i \partial_{{n}} \Theta^2 \s^{\underline{m}} \Theta^2 , \end{equation} \begin{equation}\label{dThpb} d\Theta^{\underline{\mu}I} = dX^{~\underline{m}} \partial_{\underline{m }}\Theta^{\underline{\mu}I} = dx^m \partial_m\Theta^{\underline{\mu}I} (x^n) , \qquad \end{equation} \begin{equation}\label{dThpbd} \partial_m\Theta^{\underline{\mu}I} (x^n) \equiv \partial_m X^{~\underline{m}} (x) \partial_{\underline{m }}\Theta^{\underline{\mu}I} . \qquad \end{equation} The matrices $u^{\underline{a}}_{\underline{m}}$, $v^{\underline{\alpha}}_{\underline{\mu}}$, entering Eqs.
\p{Ea9}, \p{Eal9} take their values in the Lorentz group \begin{equation}\label{uin} u^{\underline{a}}_{\underline{m}} \qquad \in SO(1,D-1), \end{equation} and in its double covering group $Spin(1,D-1)$ \begin{equation}\label{vin} v^{\underline{\alpha}}_{\underline{\mu }} \qquad \in Spin(1,D-1), \end{equation} respectively, and represent the same Lorentz transformation. The latter fact implies the relations \begin{equation}\label{uvgv} u^{\underline{a}}_{\underline{m}} \tilde{\s}_{\underline{a}} ^{\underline{\alpha}\underline{\b}} = v^{\underline{\alpha}}_{\underline{\mu }} \tilde{\s}_{\underline{m}} ^{\underline{\mu}\underline{\nu}} v^{\underline{\b}}_{\underline{\nu }}, \qquad u^{\underline{a}}_{\underline{m}} \tilde{\s}^{\underline{m}} _{\underline{\mu}\underline{\nu}} = v^{\underline{\alpha}}_{\underline{\mu }} \tilde{\s}^{\underline{a}} _{\underline{\alpha}\underline{\b}} v^{\underline{\b}}_{\underline{\nu }}, \qquad \end{equation} between these matrices, which reflect the invariance of the $D=10$ sigma matrices under the Lorentz group transformations (see \cite{BZ,bpstv}). The variables \p{uin}, \p{vin} are not necessary for the description of the super--D9--brane itself. However, as we shall see below, they are useful for the description of the coupled system including a brane 'ending' on (interacting with) the D9-brane. For that system it is important to note that the pull-backs of the bosonic supervielbein forms $\Pi^{\underline{m}}$ or $E^{\underline{a}}$ of type $IIB$ superspace can be used as a basis in the space cotangent to the world-volume of the D9--brane.
In other words, it is convenient to use the invertibility of the matrix $\Pi_n^{~\underline{m}}$ \p{Pi2} and the harmonic variables to define the covariant bases $\nabla_{\underline{m}}$ and $\nabla_{\underline{a}}$ of the space tangent to the D9-brane worldvolume by \begin{equation}\label{ddec} d \equiv dx^m \partial_m = dX^{\underline{m}} \partial_{\underline{m}}= \Pi^{\underline{m}} \nabla_{\underline{m}} = E^{\underline{a}} \nabla_{\underline{a}}, \end{equation} \begin{equation}\label{der} \nabla_{\underline{m}} = \Pi^{-1}{}^{~\underline{n}}_{\underline{m}} \partial_{\underline{n}}= \Pi^{-1}{}^n_{~\underline{m}}\partial_n, \qquad \nabla_{\underline{a}} \equiv u_{\underline{a}}^{~\underline{m}}\nabla_{\underline{m}}. \end{equation} \bigskip \section{Geometric action and equations of motion for super-D9-brane} \setcounter{equation}{0} \subsection{Geometric action} The geometric action for the super--D9--brane in flat $D=10$, type $IIB$ superspace is \cite{abkz} \begin{equation}\label{SLLL} S = \int_{{\cal{M}}^{10}} {\cal{L}}_{10} = \int_{{\cal{M}}^{10}} ({\cal{L}}^0_{10} + {\cal{L}}^1_{10} + {\cal{L}}^{WZ}_{10}) \end{equation} where \begin{equation}\label{L0} {\cal{L}}^0_{10} = \Pi^{\wedge 10} |\eta + F | \end{equation} with $$ |\eta + F | \equiv \sqrt{-\det(\eta_{\underline{m}\underline{n}}+F_{\underline{m}\underline{n}})}, $$ \begin{equation}\label{Pi10} \Pi^{\wedge 10} \equiv {1 \over 10!} \e_{\underline{m}_1\ldots \underline{m}_{10}} \Pi^{\underline{m}_1} \wedge ... \wedge \Pi^{\underline{m}_{10}}, \end{equation} \begin{equation}\label{L1} {\cal{L}}^1_{10} = Q_{8} \wedge (dA-B_2 - {1 \over 2} \Pi^{\underline{m}} \wedge \Pi^{\underline{n}} F_{\underline{n}\underline{m}}).
\end{equation} $A= dx^m A_m(x)$ is the gauge field inherent to the Dirichlet branes, $B_2$ represents the NS--NS gauge field with flat superspace value \begin{equation}\label{B2def} B_2 = i \Pi^{\underline{m}}\wedge \left( d\Theta^1\s_{\underline{m}}\Theta^1 - d\Theta^2\s_{\underline{m}}\Theta^2 \right) + d\Theta^1\s^{\underline{m}}\Theta^1 \wedge d\Theta^2\s_{\underline{m}}\Theta^2 \end{equation} and field strength \begin{equation}\label{H3def} H_3= dB_2 = i \Pi^{\underline{m}}\wedge \left( d\Theta^1\s_{\underline{m}} \wedge d\Theta^1 - d\Theta^2\s_{\underline{m}}\wedge d\Theta^2 \right). \end{equation} The Wess-Zumino Lagrangian form is the same as the one appearing in the standard formulation \cite{c1,schw,c2,schw1,bt} \begin{equation}\label{LWZ} {\cal{L}}^{WZ}_{10} = e^{{\cal F}_2} \wedge C \vert_{10} , \qquad C = \oplus _{n=0}^{5} C_{2n} , \qquad e^{{\cal F}_2}= \oplus _{n=0}^{5} {1\over n!} {\cal F}_2^{\wedge n}, \end{equation} where the formal sum of the RR superforms $C= C_0 + C_2 +...$ and of the exterior powers of the 2--form \begin{equation}\label{calFD9} {\cal F}_2 \equiv dA - B_2 \end{equation} (e.g. ${\cal F}^{\wedge 2}\equiv {\cal{F}} \wedge {\cal{F}}$ etc.) is used, and $\vert_{10}$ denotes the restriction to the $10$--form part.
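Explicitly, the form-degree counting in \p{LWZ} leaves in ${\cal{L}}^{WZ}_{10}$ one term for each Ramond-Ramond potential,
$$ {\cal{L}}^{WZ}_{10} = C_{10} + C_{8}\wedge {\cal F}_2 + {1\over 2}\, C_{6}\wedge {\cal F}_2^{\wedge 2} + {1\over 3!}\, C_{4}\wedge {\cal F}_2^{\wedge 3} + {1\over 4!}\, C_{2}\wedge {\cal F}_2^{\wedge 4} + {1\over 5!}\, C_{0}\, {\cal F}_2^{\wedge 5} , $$
each $C_{2n}$ being accompanied by the power of ${\cal F}_2$ which completes the form degree to ten.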
Let us note that the restriction of the same expression \p{LWZ} to the $(p+1)$--form part (where $p=2k-1$ is odd) \begin{equation}\label{LWZp+1} {\cal{L}}^{WZ}_{p+1} = e^{\cal{F}} \wedge C \vert_{p+1} = \oplus _{n=0}^{5} C_{2n} \wedge \oplus _{n=0}^{5} {1\over n!} {\cal F}^{\wedge n} \vert_{p+1} \end{equation} describes the Wess-Zumino term of the super-Dp-brane of type $IIB$ theory \cite{c1,c2,bt}. This will be important for the description of the supersymmetric generalization of the Born-Infeld equations for the D9-brane gauge fields, where the D7-brane Wess-Zumino term appears. For most applications only the exterior derivative of the Wess-Zumino term is important. It has the form \begin{equation}\label{dLWZ} d{\cal{L}}^{WZ}_{10} = e^{\cal{F}} \wedge R \vert_{11} , \qquad R = \oplus _{n=0}^{5} R_{2n+1} , \qquad \end{equation} with the 'vacuum' (i.e. flat target superspace) values of the Ramond-Ramond curvatures specified as \begin{equation}\label{RRR} R = \oplus _{n=0}^{5} R_{2n+1} = e^{- {\cal F}} \wedge d( e^{\cal{F}} \wedge C) = 2i d\Theta^{2\underline{\nu} } \wedge d\Theta^{1\underline{\mu} } \wedge \oplus _{n=0}^{4} \hat{\s}^{(2n+1)}_{\underline{\nu}\underline{\mu} }. \qquad \end{equation} \bigskip In the action variations and expressions for currents the notion of 'dual' forms \begin{eqnarray}\label{dualf} && \Pi^{\wedge 9}_{\underline{m}} \equiv {1 \over 9!} \e_{\underline{m}\underline{m}_1\ldots\underline{m}_{9}} \Pi^{\underline{m}_1} \wedge ... \wedge \Pi^{\underline{m}_{9}}, \nonumber \\ && \Pi^{\wedge 8}_{\underline{m}\underline{n}} \equiv {1 \over 2\cdot 8!} \e_{\underline{m}\underline{n}\underline{m}_1\ldots \underline{m}_{8}} \Pi^{\underline{m}_1} \wedge ... \wedge \Pi^{\underline{m}_{8}}, \qquad \ldots \\ && \Pi^{\wedge (10-k)}_{\underline{m}_1\ldots \underline{m}_k} \equiv {1 \over k!(10-k)! } \e_{\underline{m}_1\ldots \underline{m}_k \underline{n}_1\ldots\underline{n}_{(10-k)}} \Pi^{\underline{n}_1} \wedge ...
\wedge \Pi^{\underline{n}_{(10-k)}} \qquad \nonumber \end{eqnarray} is useful. The list of products of the forms \p{dualf} includes the useful identities \begin{eqnarray}\label{dualft} \Pi^{\wedge 9}_{\underline{m}} \wedge \Pi^{\underline{n}} = - \Pi^{\wedge 10} \d_{\underline{m}}^{\underline{n}}, \qquad \Pi^{\wedge 8}_{\underline{m}\underline{n} } \wedge \Pi^{\underline{l}} = \Pi^{\wedge 9}_{[\underline{m} } \d_{\underline{n}]}^{\underline{l}}, \qquad \nonumber \\ \Pi^{\wedge 7}_{\underline{m}\underline{n} \underline{k} } \wedge \Pi^{\underline{l}} = - \Pi^{\wedge 8}_{[\underline{m}\underline{n} } \d_{\underline{k}]}^{\underline{l}}, \qquad \Pi^{\wedge 6}_{\underline{m}\underline{n} \underline{k} \underline{l} } \wedge \Pi^{\underline{r}} = \Pi^{\wedge 7}_{[\underline{m}\underline{n} \underline{k} } \d_{\underline{l}]}^{\underline{r}}, \qquad \\ \Pi^{\wedge 6}_{\underline{m}\underline{n} \underline{k} \underline{l} } \wedge \Pi^{\underline{r}} \wedge \Pi^{\underline{s}} = \Pi^{\wedge 8}_{[\underline{m}\underline{n} } \d_{\underline{k}}^{\underline{r}} \d_{\underline{l}]}^{\underline{s}}. \qquad \nonumber \end{eqnarray} \subsection{Variation of the geometric action for the D9-brane} The simplest way to vary the geometric action \p{SLLL}--\p{L1} starts by taking the exterior derivative of the Lagrangian form ${\cal L}_{10}$ (cf.
\cite{bst,abkz}) \begin{equation}\label{dL10} d{\cal L}_{10}= \left(dQ_8 + d{\cal L}_{8}^{WZ} \vert_{{\cal F}_2\rightarrow F_2 } \right) \wedge \left({\cal F}_2-F_2 \right) + \end{equation} $$ + (Q_8 - \Pi^{\wedge 8}_{\underline{n}\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{n}\underline{m}}) \wedge \Big(- {1 \over 2} \Pi^{\underline{m}} \wedge \Pi^{\underline{n}} \wedge d F_{\underline{n}\underline{m}} - i \Pi^{\underline{m}} \wedge (d\Theta^1 \s^{\underline{n}} \wedge d\Theta^1) (\eta - F)_{\underline{n}\underline{m}} + $$ $$ + i \Pi^{\underline{m}} \wedge (d\Theta^2 \s^{\underline{n}} \wedge d\Theta^2 ) (\eta + F)_{\underline{n}\underline{m}} \Big) $$ $$ + i \Pi^{\wedge 9}_{\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{m}\underline{n}} \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\mu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\mu} }\right) \wedge \left( d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\s}} h_{\underline{\s}}^{~\underline{\nu} }\right) + $$ $$ + {\cal O} \left( ({\cal F}_2 - F_2) ^{\wedge 2} \right), $$ where ${\cal F}_2 \equiv dA- B_2$ \p{calFD9} and $F_2 \equiv {1 \over 2} \Pi^{\underline{m}} \wedge \Pi^{\underline{n}} F_{\underline{n}\underline{m}}$. Note that ${\cal F}_2 -F_2$ vanishes due to the algebraic equation which is implied by the Lagrange multiplier $Q_8$. This is the reason why the terms proportional to the second and higher (exterior) powers of $({\cal F}_2 - F_2)$ are indicated by ${\cal O} \left( ({\cal F}_2 - F_2) ^{\wedge 2} \right)$ but not written explicitly.
Then we can use the Lie derivative formula \begin{equation}\label{delatL10} \d {\cal L}_{10}= i_\d d{\cal L}_{10} + d (i_\d {\cal L}_{10}) \end{equation} (usually applied for coordinate variations only) supplemented by the formal definition of the contraction with the variation symbol \begin{equation}\label{idelatdef} i_\d d\Theta^{1, 2\underline{\nu} } = \d \Theta^{1, 2\underline{\nu} } , \quad i_\d \Pi^{\underline{m}} = \d X^{\underline{m}} - i \d \Theta^{1}\G^{\underline{m}} \Theta^{1} - i \d \Theta^{2}\G^{\underline{m}} \Theta^{2}, \end{equation} \begin{equation}\label{idelatdef1} i_\d dA = \d A , \qquad i_\d dQ_8 = \d Q_8 , \qquad i_\d dF_{\underline{m}\underline{n}} = \d F_{\underline{m}\underline{n}} , \qquad \ldots . \end{equation} To simplify the algebraic calculations, one notes that it is sufficient to write such a formal contraction modulo terms proportional to the square of the algebraic equations (the latter remain the same for the coupled system as well, because the auxiliary fields, e.g. $Q_8$, do not appear in the actions of the other branes): \begin{eqnarray}\label{dS10} && \d S_{D9} = \int_{{\cal M}^{1+9}} (Q_8 - \Pi^{\wedge 8}_{\underline{k}\underline{l}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{k}\underline{l}}) \wedge \Big(- {1 \over 2} \Pi^{\underline{m}} \wedge \Pi^{\underline{n}} \wedge \d F_{\underline{n}\underline{m}} + \ldots \Big)+ \nonumber \\ && + \int_{{\cal M}^{1+9}} \left(\d Q_8 + ...
\right) \wedge \left({\cal F}_2-F_2 \right) + \nonumber \\ && +\int_{{\cal M}^{1+9}} \left(d Q_8 + d{\cal L}_{8}^{WZ} \vert_{{\cal F}_2\rightarrow F_2 } \right) \wedge \left(\d A - i_\d B_2 + \Pi^{\underline{n}} F_{\underline{n}\underline{m}} i_\d \Pi^{\underline{m}}\right) + \\ && + 2 i \int_{{\cal M}^{1+9}} \Pi^{\wedge 9}_{\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{m}\underline{n}} \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\mu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\mu} }\right) ~~ \left( \d \Theta^{2\underline{\nu} }- \d \Theta^{1\underline{\s}} h_{\underline{\s}}^{~\underline{\nu} }\right) + \nonumber \\ && + i \int_{{\cal M}^{1+9}} \Pi^{\wedge 8}_{\underline{k}\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{k}\underline{n}} \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\mu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\mu} }\right) \wedge \left( d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\s}} h_{\underline{\s}}^{~\underline{\nu} }\right) i_\d \Pi^{\underline{m}} \nonumber \end{eqnarray} Here the terms denoted by $\ldots$ produce contributions to the equations of motion which are proportional to the algebraic equations and are thus inessential. The spin-tensor matrix $h_{\underline{\mu}}^{~\underline{\nu}}$ entering Eqs. 
\p{dS10} is related to the antisymmetric tensor $F_{\underline{n}\underline{m}}$ by the Cayley image relations \begin{equation}\label{hinSpin} h_{\underline{\mu}}^{~\underline{\nu}} \qquad \in \qquad Spin(1,9) , \end{equation} \begin{equation}\label{Id1} (h\s^{\underline{m}} h^T)_{\underline{\mu}\underline{\nu}} = \s^{\underline{n}}_{\underline{\mu}\underline{\nu}} k_{\underline{n}}^{~\underline{m}} \equiv \s^{\underline{n}}_{\underline{\mu}\underline{\nu}} (\eta+F)^{-1}_{\underline{n}\underline{l}} (\eta-F)^{\underline{l}\underline{m}} , \qquad \end{equation} \begin{equation}\label{Id2} k_{\underline{n}}^{~\underline{m}} = (\eta+F)^{-1}_{\underline{n}\underline{l}} (\eta-F)^{\underline{l}\underline{m}} \equiv (\eta-F)_{\underline{n}\underline{l}}(\eta+F)^{-1~\underline{l}\underline{m}} \qquad \in \qquad SO(1,9). \end{equation} For more details we refer to \cite{abkz}. It is important that $\d A$ enters the compact expression \p{dS10} for the variation of the super--D9--brane action only in the combination \begin{equation}\label{susydA} i_\d \left({\cal F}_2 - F_2 \right) \equiv \left(\d A - i_\d B_2 + \Pi^{\underline{n}} F_{\underline{n}\underline{m}} i_\d \Pi^{\underline{m}}\right). \end{equation} It can be called a supersymmetric variation of the gauge field, since the condition $i_\d \left({\cal F}_2 - F_2 \right)=0$ actually determines the supersymmetric transformations of the gauge fields (cf. \cite{c1,bst,abkz}). Together with \p{idelatdef}, the expression \p{susydA} defines the basis of supersymmetric variations, whose use simplifies the form of the equations of motion considerably.
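The group-theoretical statement of \p{Id2} is the standard property of a Cayley-type map built from an antisymmetric $F$. As an independent illustration (not part of the paper's computation), it can be checked by computer algebra in the reduced case $D=4$, reading the index contractions as plain matrix products:

```python
import sympy as sp

# Reduced-dimension (D=4) illustration: for an antisymmetric F the
# Cayley-type matrix k = (eta + F)^{-1} (eta - F) preserves the
# Minkowski metric, k^T eta k = eta, and has unit determinant,
# i.e. k is an SO(1,3) matrix (the D=10 statement is Eq. (Id2)).
eta = sp.diag(1, -1, -1, -1)
half, third, fifth, seventh = (sp.Rational(1, n) for n in (2, 3, 5, 7))
F = sp.Matrix([[0,       half,   0,      seventh],
               [-half,   0,      third,  0],
               [0,       -third, 0,      fifth],
               [-seventh, 0,     -fifth, 0]])   # antisymmetric, exact rationals
k = (eta + F).inv() * (eta - F)

assert k.T * eta * k == eta   # k is a (pseudo-)orthogonal matrix
assert k.det() == 1           # with unit determinant, hence k in SO(1,3)
```

The check is exact since all entries are rational; the same algebra goes through for any dimension and any antisymmetric $F$ for which $\eta + F$ is invertible.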
\bigskip The formal exterior derivative of the Lagrangian form \p{dL10} can be used as well for the general coordinate variation of the action \p{SLLL}--\p{L1} \begin{equation}\label{dS10c} \d S_{D9}= \int_{{\cal M}^{1+9}} \d x^m i_m d{\cal L}_{10}, \end{equation} where for any q-form $Q_q$ the operation $i_m$ is defined by \begin{equation}\label{imQq} Q_q= {1 \over q!} dx^{m_1} \wedge \ldots \wedge dx^{m_q} Q_{m_q \ldots m_1} , \qquad i_m Q_q= {1 \over (q-1)!} dx^{m_1} \wedge \ldots \wedge dx^{m_{q-1}} Q_{mm_{q-1}\ldots m_1}. \qquad \end{equation} For the free super--D9--brane such a variation vanishes identically when the 'field' equations of motion are taken into account. This reflects the evident diffeomorphism invariance of the action \p{SLLL}. It is likewise inessential in the study of coupled branes in the present approach, while in another approach to the description of coupled superbranes \cite{BK99} such variations play an important role. \subsection{Equations of motion for the super--D9--brane} The equations of motion following from the geometric action \p{SLLL}--\p{L1} split into the algebraic ones, obtained from the variation of the auxiliary fields $Q_8$ and $F_{\underline{m}\underline{n}}$, \begin{equation}\label{delQ8} {\cal F}_2 \equiv dA- B_2 = F_2 \equiv {1 \over 2} \Pi^{\underline{m}} \wedge \Pi^{\underline{n}} F_{\underline{n}\underline{m}}, \end{equation} \begin{equation}\label{delF} Q_8 = \Pi^{\wedge 8}_{\underline{n}\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{n}\underline{m}} \end{equation} and the dynamical ones \begin{equation}\label{delA} dQ_8 + d{\cal L}_8^{WZ-D7} = 0 , \end{equation} \begin{equation}\label{dTh2eq} \Pi^{\wedge 9}_{\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{m}\underline{n}} \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\nu} }\right) = 0 .
\end{equation} If one takes into account the expression for $Q_8$ \p{delF}, the identification of $F$ with the gauge field strength \p{delQ8}, as well as the expression for the D7-brane Wess-Zumino term \begin{equation}\label{WZD70} d{\cal L}_8^{WZ-D7} = e^{\cal{F}} \wedge R \vert_{9} , \qquad R = \oplus _{n=0}^{5} R_{2n+1} = 2i d\Theta^{2\underline{\nu} } \wedge d\Theta^{1\underline{\mu} } \wedge \oplus _{n=0}^{4} \hat{\s}^{(2n+1)}_{\underline{\nu}\underline{\mu} }, \qquad \end{equation} one finds that \p{delA} is just the supersymmetrized Born-Infeld equation. The fermionic equations \p{dTh2eq} appear as a result of the variation with respect to $\Theta^2$, while the variation with respect to $\Theta^1$ does not produce any independent equations. This fact reflects the Noether identity corresponding to the local fermionic $\kappa$--symmetry of the super--D9--brane action \p{SLLL} \cite{abkz}. The explicit {\sl irreducible} form of the D9-brane $\kappa$-symmetry transformations can be written with the help of the spin--tensor field $h$ \p{hinSpin} -- \p{Id2} \cite{abkz} as \begin{equation}\label{kD9} \d\Theta^{1\underline{\mu} } =\kappa^{\underline{\mu} }, \qquad \d\Theta^{2\underline{\mu} } = \kappa^{\underline{\nu} } h_{\underline{\nu} }^{~\underline{\mu} } \end{equation} $$ i_\d \Pi^{\underline{m} } = 0 , \qquad \Leftrightarrow \qquad \d X^{\underline{m} } = i\d\Theta^{1} \s^{\underline{m} } \Theta^{1} + i\d\Theta^{2} \s^{\underline{m} } \Theta^{2}, \qquad $$ \begin{eqnarray}\label{dkA} && i_\d {\cal{F} } = 0 ~~ \Leftrightarrow \qquad \nonumber \\ && \d A = i_\d B_2 \equiv i \Pi^{\underline{m}}\wedge \left( \d\Theta^1\s_{\underline{m}} \Theta^1 - \d\Theta^2\s_{\underline{m}} \Theta^2 \right) + \\ && + d\Theta^1\s^{\underline{m}}\Theta^1 \wedge \d\Theta^2\s_{\underline{m}}\Theta^2 - \d\Theta^1\s^{\underline{m}}\Theta^1 \wedge d\Theta^2\s_{\underline{m}}\Theta^2 , \nonumber \end{eqnarray} $$ \d F_{\underline{m}\underline{n}} = 2i (\eta - F)_{\underline{l}[\underline{m}} \left( \nabla_{\underline{n}]} \Theta^1~\s^{\underline{l}}\d \Theta^1 - \nabla_{\underline{n}]}\Theta^2~\s^{\underline{l}}\d \Theta^2 \right), \qquad \d Q_8 = 0. $$ \bigskip The Noether identity reflecting the evident diffeomorphism invariance of the action \p{SLLL} is that the equation obtained by varying the action \p{SLLL} with respect to $X^{\underline{m}}(x)$ is not independent: \begin{equation}\label{dX} i \Pi^{\wedge 8}_{\underline{n}\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{n}\underline{k}} \s_{\underline{k}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\mu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\mu} }\right) \wedge \left( d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\s}} h_{\underline{\s}}^{~\underline{\nu} }\right) = 0.
\end{equation} Indeed, it can be proved that Eq. \p{dX} is satisfied identically when Eq. \p{dTh2eq} is taken into account. \bigskip Turning back to the fermionic equations \p{dTh2eq}, let us note that after the decomposition \begin{equation}\label{dThdec} d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\nu} }= \Pi^{\underline{m}} \Psi_{\underline{m}}^{~\underline{\nu}}, \end{equation} where \begin{equation}\label{Psidef} \Psi_{\underline{m}}^{~\underline{\nu}}= \nabla_{\underline{m}} \Theta^{2\underline{\nu} }- \nabla_{\underline{m}}\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\nu} } \end{equation} and $ \nabla_{\underline{m}}$ is defined by $d= dx^m \partial_m = \Pi^{\underline{m}}\nabla_{\underline{m}}$ \p{ddec}, \p{der}, one rewrites \p{dTh2eq} as \cite{abkz} \begin{equation}\label{eqPsi} - i \Pi^{\wedge 10} \sqrt{|\eta + F|} \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \Psi_{\underline{m}}^{~\underline{\nu}} (\eta + F)^{-1~\underline{m}\underline{n}} =0, \qquad \Leftrightarrow \qquad \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \Psi_{\underline{m}}^{~\underline{\nu}} (\eta + F)^{-1~\underline{m}\underline{n}} =0. \end{equation} \bigskip \section{Geometric action and free equations of motion for type $IIB$ superstring} \setcounter{equation}{0} \subsection{Geometric action and moving frame variables (Lorentz harmonics)} In the geometric action for the type IIB superstring \cite{BZ,bpstv,bsv} \begin{equation}\label{acIIB} S_{IIB}= \int\limits^{}_{{\cal M}^{(1+1)}} \hat{{\cal L}}_2^{IIB} = \int\limits^{}_{{\cal M}^{(1+1)}} \left({1\over 2} \hat{E}^{++} \wedge \hat{E}^{--} - \hat{B}_{2} \right) , \end{equation} the hat (as in \p{acIIB}) indicates fields restricted to (or living on) the superstring worldsheet \p{IIBwv} $ {\cal M} ^{(1+1)}= \{ \xi^{(\pm\pm)} \}~~ $. $~\hat{B}_2$ is the pull-back of the NS-NS gauge field with the 'vacuum' (i.e.
flat superspace) value \p{B2def}, which plays the role of the Wess-Zumino term in the superstring action; furthermore \begin{equation}\label{Epm} \hat{E}^{\pm\pm} = \hat{\Pi}^{\underline{m}} \hat{u}_{\underline{m}}^{\pm\pm}, \qquad \end{equation} where $\hat{u}_{\underline{m}}^{\pm\pm}(\xi )$ are vector harmonics \cite{sok,BZ,bpstv,bsv}, i.e. two light--like vector fields entering the $SO(1,9)$ valued matrix \p{uin} \begin{equation}\label{harmv} \hat{u}^{\underline{a}}_{\underline{m}} (\xi ) \equiv ( \hat{u}^{++}_{\underline{m}}, \hat{u}^{--}_{\underline{m}}, \hat{u}^{i}_{\underline{m}} ) \qquad \in \qquad SO(1,D-1). \end{equation} This matrix describes a moving frame attached to the worldsheet and thus provides the possibility to adapt the general bosonic vielbein of flat superspace to the embedding of the worldsheet \begin{equation}\label{Eunda} \hat{E}^{\underline{a}}= \left( \hat{E}^{++}, \hat{E}^{--}, \hat{E}^{i}\right) \equiv \hat{\Pi}^{\underline{m}} \hat{u}_{\underline{m}}^{~\underline{a}}. \qquad \end{equation} The properties of the harmonics \p{harmv} are collected in Appendix A. To obtain the equations of motion from the geometric action \p{acIIB} it is important that the variations of the light-like harmonics $\hat{u}_{\underline{m}}^{\pm\pm}$ should be performed with the constraint \p{harmv}, i.e. with \begin{equation}\label{harmvc} \hat{u}^{\underline{a}}_{\underline{m}} \eta^{\underline{m}\underline{n}} \hat{u}^{\underline{b}}_{\underline{n}} = \eta^{\underline{a}\underline{b}}~~~ \Rightarrow ~~~\cases{ u^{++}_{\underline{m}} u^{++\underline{m}} = 0 , ~~ u^{--}_{\underline{m}} u^{--\underline{m}} =0, \cr u^{~i}_{\underline{m}} u^{++\underline{m}} = 0 , ~~ u^{i}_{\underline{m}} u^{--\underline{m}} =0 , \cr u^{++}_{\underline{m}} u^{--\underline{m}} =2 , ~~~ u^{i}_{\underline{m}} u^{j\underline{m}} = - \d^{ij} \cr } , \end{equation} taken into account.
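As an illustration of \p{harmvc} (not taken from the paper), one can build a sample harmonic frame in the reduced case $D=3$ from a boost combined with a rotation, with the light-like combinations taken as $u^{\pm\pm}_{\underline{m}} = u^{0}_{\underline{m}} \pm u^{2}_{\underline{m}}$ (an assumed convention, the analogue of the ones fixed in Appendix A), and verify the constraints by computer algebra:

```python
import sympy as sp

# D=3 toy harmonic frame (an illustration; light-cone conventions assumed):
# the rows of a Lorentz matrix Lambda in SO(1,2) form an orthonormal frame
# u^a_m; the combinations u^{++} = u^0 + u^2, u^{--} = u^0 - u^2 then
# satisfy the light-likeness/normalization constraints of Eq. (harmvc).
phi, th = sp.symbols('phi theta', real=True)
eta = sp.diag(1, -1, -1)
boost = sp.Matrix([[sp.cosh(phi), sp.sinh(phi), 0],
                   [sp.sinh(phi), sp.cosh(phi), 0],
                   [0, 0, 1]])
rot = sp.Matrix([[1, 0, 0],
                 [0, sp.cos(th), -sp.sin(th)],
                 [0, sp.sin(th), sp.cos(th)]])
Lam = boost * rot                       # an SO(1,2) matrix
u0, u1, u2 = Lam.row(0), Lam.row(1), Lam.row(2)
upp, umm, ui = u0 + u2, u0 - u2, u1     # u^{++}, u^{--}, u^i

dot = lambda a, b: sp.simplify((a * eta * b.T)[0, 0])
assert dot(upp, upp) == 0 and dot(umm, umm) == 0   # light-like
assert dot(upp, umm) == 2                          # u^{++} . u^{--} = 2
assert dot(ui, upp) == 0 and dot(ui, umm) == 0     # orthogonality
assert dot(ui, ui) == -1                           # u^i . u^j = -delta^{ij}
```

The same counting works in $D=10$, where the orthogonal directions $u^{i}_{\underline{m}}$ span the eight-dimensional transverse space.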
The simplest way to implement this consists in solving the conditions of conservation of the constraints \p{harmvc}, $$ \d \hat{u}^{~\underline{a}}_{\underline{m}} \eta^{\underline{m}\underline{n}} \hat{u}^{~\underline{b}}_{\underline{n}} + \hat{u}^{~\underline{a}}_{\underline{m}} \eta^{\underline{m}\underline{n}} \d \hat{u}^{~\underline{b}}_{\underline{n}} = 0 , $$ with respect to $\d \hat{u}^{~\underline{b}}_{\underline{n}}$ and thus defining a set of variations ('admissible variations' \cite{BZ}) \footnote{ This is the place to note that a similar technique was used (\cite{deWit} and refs. therein) in the study of the $G/H$ sigma model fields appearing in the maximal $D=3,4,5$ supergravities ($G/H = E_{8(+8)}/SO(16), E_{7(+7)}/SU(8),$ $E_{6(+6)}/USp(8)$). } which then shall be treated as independent. Some of those variations, $i_\d f^{++i}$, $i_\d f^{--i}$, $i_\d \om$, enter the expressions for the admissible variations of the light-like harmonics \cite{BZ} \begin{equation}\label{du++} \d \hat{u}^{++}_{\underline{m}} = \hat{u}^{++}_{\underline{m}} i_\d \om + \hat{u}^{i}_{\underline{m}} i_\d f^{++i}, \qquad \d \hat{u}^{--}_{\underline{m}} = -\hat{u}^{--}_{\underline{m}} i_\d \om + \hat{u}^{i}_{\underline{m}} i_\d f^{--i}, \qquad \end{equation} while the others, $i_\d A^{ij}$, are involved only in the variations of the orthogonal components of the moving frame \begin{equation}\label{dui} \d \hat{u}^{i}_{\underline{m}} = - \hat{u}^{j}_{\underline{m}} i_\d A^{ji} + {1\over 2} \hat{u}_{\underline{m}}^{++} i_\d f^{--i} + {1\over 2} \hat{u}_{\underline{m}}^{--} i_\d f^{++i} \end{equation} and thus do not contribute to the variation of the action \p{acIIB}. The derivatives of the harmonic variables should be dealt with in the same way.
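For instance, substituting \p{du++} into the variation of the constraints \p{harmvc} one checks directly that they are preserved:
$$ \d \left( u^{++}_{\underline{m}} u^{++\underline{m}} \right) = 2\, i_\d \om \; u^{++}_{\underline{m}} u^{++\underline{m}} + 2\, i_\d f^{++i}\; u^{i}_{\underline{m}} u^{++\underline{m}} = 0 , $$
$$ \d \left( u^{++}_{\underline{m}} u^{--\underline{m}} \right) = i_\d \om \left( u^{++}_{\underline{m}} u^{--\underline{m}} - u^{--}_{\underline{m}} u^{++\underline{m}} \right) + i_\d f^{++i}\, u^{i}_{\underline{m}} u^{--\underline{m}} + i_\d f^{--i}\, u^{i}_{\underline{m}} u^{++\underline{m}} = 0 , $$
both right-hand sides vanishing by \p{harmvc} itself.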
\subsection{Action variation and equations of motion} The exterior derivative of the Lagrangian form ${\cal L}_2$ is \begin{equation}\label{dLIIB} d{\cal L}_2^{IIB} = -2i E^{++} \wedge E^{-1}_{\dot{q}}\wedge E^{-1}_{\dot{q}} + 2i E^{--} \wedge E^{+2}_{{q}}\wedge E^{+2}_{{q}} + \end{equation} $$ + {1 \over 2} E^i \wedge \left( E^{--} \wedge f^{++i} - E^{++} \wedge f^{--i} + 4i ( E^{+1}_{{q}}\wedge E^{-2}_{\dot{q}} - E^{+2}_{{q}}\wedge E^{-1}_{\dot{q}}) \g^i_{q\dot{q}}\right). $$ Here \begin{equation}\label{f++i} f^{++i} \equiv u^{++}_{\underline m} d u^{\underline{m}i} , \qquad f^{--i} \equiv u^{--}_{\underline m} d u^{\underline{m}i} , \qquad \end{equation} \begin{equation}\label{omA} \om \equiv {1 \over 2} u^{--}_{\underline m} d u^{\underline{m}++} , \qquad A^{ij} \equiv u^{i}_{\underline m} d u^{\underline{m}j} , \qquad \end{equation} are Cartan forms \cite{BZ,bpstv} (see Appendix A) and \begin{equation}\label{Eundal} \hat{E}^{\underline{\alpha}I} \equiv d\hat{\Theta}^{\underline{\mu} I}\hat{v}_{\underline{\mu }}^{\underline{\alpha}}= \left(\hat{E}^{I+}_{~{q}}, \hat{E}^{I-}_{~\dot{q}} \right) , \qquad \end{equation} $$ q=1, \ldots ,8, \qquad \dot{q}=1, \ldots ,8, \qquad $$ are the pull-backs of the fermionic supervielbein forms which, together with \p{Eunda}, form a basis of the flat target superspace.
They involve the spinor harmonics \cite{gds,BZ} \begin{equation}\label{harms} \hat{v}_{\underline{\mu }}^{\underline{\alpha}}= \left(\hat{v}^{I+}_{\underline{\mu }q}, \hat{v}^{I-}_{\underline{\mu }\dot{q}} \right) \qquad \in \qquad Spin(1,9) \end{equation} which represent the same Lorentz rotation (relating the 'coordinate frame' $\Pi^{\underline{m}}, d\Theta^{\underline{\mu }I}$ of the target superspace to the arbitrary frame $E^{\underline{a}}, E^{\underline{\alpha}I}$) as the vector harmonics \p{harmv} and, hence, are connected with them by Eqs. \p{uvgv}. The latter include in particular the relations \begin{equation}\label{0harms10} u^{{++}}_{\underline{m}} \s^{\underline{m}}_{\underline{\mu}\underline{\nu}} = 2 v^{~+}_{\underline{\mu}q} v^{~+}_{\underline{\nu}q} , \qquad u^{{--}}_{\underline{m}} \s^{\underline{m}}_{\underline{\mu}\underline{\nu}}= 2 v^{~-}_{\underline{\mu}\dot{q}} v^{~-}_{\underline{\nu}\dot{q}} , \qquad u^{{i}}_{\underline{m}} \s^{\underline{m}}_{\underline{\mu}\underline{\nu}}= 2 v^{~+}_{\{ \underline{\mu}q} \g^i_{q\dot{q}} v^{~-}_{\underline{\nu}\} \dot{q}} , \qquad \end{equation} which were used to write $d{\cal L}_2^{IIB}$ in the compact form \p{dLIIB}. For further details concerning harmonics we refer to Appendix A and to the original references \cite{gds,BZ,bpstv}.
\bigskip Now one can calculate the variation of the action \p{acIIB} of the closed type $IIB$ superstring from the expression \p{dLIIB}, using the technique described in Section 2.2 \begin{eqnarray}\label{delacIIB} && \d S_{IIB}= \int\limits^{}_{{\cal M}^{(1+1)}} i_\d d{\cal L}_2^{IIB} = \int\limits^{}_{{\cal M}^{(1+1)}} {1\over 2} \hat{E}^{i} \wedge \left( E^{--} i_\d f^{++i} - E^{++} i_\d f^{--i} + \ldots \right) + \nonumber \\ && \int\limits^{}_{{\cal M}^{(1+1)}} \left( \hat{M}_{2}^i ~u_{\underline{m}}^i + 2i \hat{E}^{1+}_{~{q}} \wedge \hat{E}^{1+}_{~{q}} u_{\underline{m}}^{--} - 2i \hat{E}^{2-}_{~\dot{q}} \wedge \hat{E}^{2-}_{~\dot{q}} u_{\underline{m}}^{++} \right) i_\d \Pi^{\underline{m}} + \\ && + \int\limits^{}_{{\cal M}^{(1+1)}} \left(-4i \hat{E}^{++} \wedge \hat{E}^{1-}_{~\dot{q}} v_{\underline{\mu}\dot{q}}^{~-} \d \Theta^{1\underline{\mu}} + 4i \hat{E}^{--} \wedge \hat{E}^{2+}_{~{q}} v_{\underline{\mu}{q}}^{~+} \d \Theta^{2\underline{\mu}}\right). \nonumber \end{eqnarray} Here \begin{equation}\label{hM2def} \hat{M}_{2}^i \equiv {1\over 2} \hat{E}^{--} \wedge \hat{f}^{++i} - {1\over 2} \hat{E}^{++} \wedge \hat{f}^{--i} + 2i \hat{E}^{1+}_{~{q}}\wedge \g^i_{q\dot{q}}\hat{E}^{1-}_{~\dot{q}} - 2i \hat{E}^{2+}_{~{q}}\wedge \g^i_{q\dot{q}}\hat{E}^{2-}_{~\dot{q}} \end{equation} and the dots in the first line denote the terms $$ f^{++i} i_\d E^{--} - f^{--i} i_\d E^{++} + 4i \left(\hat{E}^{1+}_{~{q}} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}\dot{q}}^{~-} + \hat{E}^{1-}_{~\dot{q}} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}{q}}^{~+} \right) \d \Theta^{1\underline{\mu}} - 4i \left(\hat{E}^{2+}_{~{q}} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}\dot{q}}^{~-} + \hat{E}^{2-}_{~\dot{q}} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}{q}}^{~+} \right) \d \Theta^{2\underline{\mu}} $$ which produce contributions proportional to $E^i$ in the action variation. They are essential only when one searches for the $\kappa$--symmetry of the free type $IIB$ superstring action.
It is worth mentioning that, in contrast to the standard formulation \cite{gsw}, the geometric action \p{acIIB} possesses the {\sl irreducible} $\kappa$--symmetry whose transformation is given by (cf. \cite{BZ,bsv}) \begin{equation}\label{kappastr} \d \hat{\Theta}^{\underline{\mu}1} = \kappa^{+q} \hat{v}^{-\underline{\mu}}_{\dot{q}}, \qquad \d \hat{\Theta}^{\underline{\mu}2} = \kappa^{-\dot{q}} \hat{v}^{+\underline{\mu}}_{{q}}, \end{equation} $$ \d \hat{X}^{\underline{m}} = i\, \d\hat{\Theta}^{1} \s^{\underline{m}}\hat{\Theta}^{1} + i\, \d\hat{\Theta}^{2} \s^{\underline{m}}\hat{\Theta}^{2} $$ \begin{equation}\label{kappav} \d \hat{v}_{\underline{\mu}q}^{~+} = {1 \over 2} i_\d f^{++i} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}\dot{q}}^{~-}, \qquad \d \hat{v}_{\underline{\mu}\dot{q}}^{~-} = {1 \over 2} i_\d f^{--i} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}q}^{~+} \qquad \end{equation} $$ \d \hat{u}_{\underline{m}}^{++} = \hat{u}_{\underline{m}}^{i} i_\d f^{++i} , \qquad \d \hat{u}_{\underline{m}}^{--} = \hat{u}_{\underline{m}}^{i} i_\d f^{--i} , \qquad \d \hat{u}_{\underline{m}}^{i} = {1 \over 2} \hat{u}_{\underline{m}}^{++} i_\d f^{--i} + {1 \over 2} \hat{u}_{\underline{m}}^{--} i_\d f^{++i}, $$ (cf.
Appendix A) with $i_\d f^{++i}, i_\d f^{--i}$ determined by \begin{eqnarray}\label{kappaf} && \hat{E}^{++} i_\d f^{--i} - \hat{E}^{--} i_\d f^{++i} = \\ && = 4i \left(\hat{E}^{1+}_{~{q}} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}\dot{q}}^{~-} + \hat{E}^{1-}_{~\dot{q}} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}{q}}^{~+} \right) \d \Theta^{1\underline{\mu}} - 4i \left(\hat{E}^{2+}_{~{q}} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}\dot{q}}^{~-} + \hat{E}^{2-}_{~\dot{q}} \g^i_{q\dot{q}} \hat{v}_{\underline{\mu}{q}}^{~+} \right) \d \Theta^{2\underline{\mu}}. \nonumber \end{eqnarray} The equations of motion for the free closed type $IIB$ superstring can easily be extracted from \p{delacIIB} \begin{equation}\label{Ei=0} \hat{E}^{i} \equiv \hat{\Pi}^{\underline{m}} \hat{u}_{\underline{m}}^{i}=0, \qquad \end{equation} \begin{equation}\label{M2=0} \hat{M}_2^{i} = 0, \end{equation} \begin{equation}\label{Th1str} \hat{E}^{++} \wedge \hat{E}^{1-}_{~\dot{q}} = 0, \qquad \end{equation} \begin{equation}\label{Th2str} \hat{E}^{--} \wedge \hat{E}^{2+}_{~{q}} = 0 , \qquad \end{equation} where $\hat{M}_2^{i}$, $\hat{E}^{\pm\pm}$, $\hat{E}^{2+}_{~{q}}$ and $\hat{E}^{1-}_{~\dot{q}}$ are defined in \p{hM2def}, \p{Epm} and \p{Eundal}, respectively. \bigskip \subsection{Linearized fermionic equations} The proof of the equivalence of the Lorentz harmonic formulation \p{acIIB} with the standard action of the Green-Schwarz superstring has been given in \cite{BZ}. To make this equivalence intuitively evident, let us consider the fermionic equations of motion \p{Th1str}, \p{Th2str} in the linearized approximation, fixing the static gauge \begin{equation}\label{staticg} \hat{X}^{\pm\pm} \equiv X^{\underline{m}}u_{\underline{m}}^{\pm\pm}= \xi^{(\pm\pm )}.
\end{equation} Moreover, we use the $\kappa$--symmetry \p{kappastr} to remove half of the components $ \hat{\Theta}^{1+}_{~{q}} = \hat{\Theta}^{\underline{\mu}1} \hat{v}^{~+}_{\underline{\mu}{q}}, ~\hat{\Theta}^{2-}_{~\dot{q}} = \hat{\Theta}^{\underline{\mu}2} \hat{v}^{~-}_{\underline{\mu}\dot{q}} $ of the Grassmann coordinate fields by imposing \begin{equation}\label{kappag} \hat{\Theta}^{1+}_{~{q}} = \hat{\Theta}^{\underline{\mu}1} \hat{v}^{~+}_{\underline{\mu}{q}}= 0, \qquad \hat{\Theta}^{2-}_{~\dot{q}} = \hat{\Theta}^{\underline{\mu}2} \hat{v}^{~-}_{\underline{\mu}\dot{q}}=0. \end{equation} Thus we are left with $8$ bosonic and $16$ fermionic fields \begin{equation}\label{analb} \hat{X}^{i} = \hat{X}^{\underline{m}} \hat{u}^{~i}_{\underline{m}}, \qquad \hat{\Theta}^{1-}_{~\dot{q}} = \hat{\Theta}^{\underline{\mu}1} \hat{v}^{~-}_{\underline{\mu}\dot{q}}, \qquad \hat{\Theta}^{2+}_{~{q}} = \hat{\Theta}^{\underline{\mu}2} \hat{v}^{~+}_{\underline{\mu}{q}}. \end{equation} In the linearized approximation all contributions from the derivatives of the harmonic variables (i.e. from the Cartan forms \p{f++i}, \p{omA}) disappear from the fermionic equations for the physical Grassmann coordinate fields. Thus we arrive at the counterpart of the gauge fixed string theory in the light--cone gauge.
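Indeed, in this gauge one has, to lowest order in the remaining fields \p{analb}, $\hat{E}^{\pm\pm} \simeq d\xi^{(\pm\pm )}$ and $\hat{E}^{1-}_{~\dot{q}} \simeq d\hat{\Theta}^{1-}_{~\dot{q}} = d\xi^{(++)} \partial_{++}\hat{\Theta}^{1-}_{~\dot{q}} + d\xi^{(--)} \partial_{--}\hat{\Theta}^{1-}_{~\dot{q}}$, so that the wedge product in Eq. \p{Th1str} projects out precisely one light-cone derivative, $$ \hat{E}^{++}\wedge \hat{E}^{1-}_{~\dot{q}} \simeq d\xi^{(++)}\wedge d\xi^{(--)}\, \partial_{--}\hat{\Theta}^{1-}_{~\dot{q}}~, $$ and similarly for Eq. \p{Th2str} with $\partial_{++}\hat{\Theta}^{2+}_{~{q}}$.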
Then it is not hard to see that Eqs. \p{Th1str}, \p{Th2str} reduce to the opposite chirality conditions for the physical fermionic fields \begin{equation}\label{linfeq} \partial_{--} \hat{\Theta}^{1-}_{~\dot{q}} = 0 , \qquad \partial_{++}\hat{\Theta}^{2+}_{~{q}} = 0. \end{equation} To obtain the bosonic equations, the derivatives of the harmonics (the Cartan forms \p{f++i}) must be taken into account. After eliminating the auxiliary variables one finds that Eqs. \p{Ei=0}, \p{M2=0} reduce to the usual free field equations for the $8$ bosonic fields $X^i$ (see Appendix B for details) \begin{equation}\label{linbeq} \partial_{--} \partial_{++}\hat{X}^{i} = 0. \end{equation} \bigskip \subsection{Geometric action with boundary term} To formulate the interaction of the open superstring with the super--D9--brane we have to add to the action \p{acIIB} a boundary term which describes the coupling to the gauge field $A=dx^m A_m(x)$ inherent to the D9-brane (see \p{L1}, \p{LWZ}, \p{calFD9}, \p{LWZp+1}). Thus the complete action for the open fundamental superstring becomes (cf. \cite{Sezgin98}) \begin{equation}\label{acIIBb} S_{I}= S_{IIB} + S_{b} = \int\limits^{}_{{\cal M}^{(1+1)}} \hat{{\cal L}}_{2} = \int\limits^{}_{{\cal M}^{(1+1)}} \left({1\over 2} \hat{E}^{++} \wedge \hat{E}^{--} - \hat{B}_{2} \right) + \int\limits^{}_{\partial {\cal M}^{(1+1)}} \hat{A} ~. \end{equation} The variation of the action \p{acIIBb} differs from $\int i_\d d {\cal L}_2^{IIB}$ in \p{delacIIB} by boundary contributions.
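The origin of these boundary contributions is the general variation formula $$ \d \int\limits^{}_{{\cal M}^{(1+1)}} \hat{{\cal L}}_{2} = \int\limits^{}_{{\cal M}^{(1+1)}} i_\d d \hat{{\cal L}}_{2} + \int\limits^{}_{\partial {\cal M}^{(1+1)}} i_\d \hat{{\cal L}}_{2}~, $$ which follows from $\d = i_\d d + d\, i_\d$ (cf. Section 2.2) and Stokes' theorem; for the closed superstring the boundary integral is absent and one is left with \p{delacIIB}.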
In the supersymmetric basis \p{idelatdef}, \p{susydA} the variation becomes \begin{eqnarray}\label{varSI} && \d S_I = \int_{{\cal M}^{1+1}} i_\d d {\cal L}_2^{IIB} + \int_{\partial {\cal M}^{1+1}} i_\d \left( {\cal F}_2-F_2\right) + \nonumber \\ && +\int_{\partial {\cal M}^{1+1}} \left( {1 \over 2} {E}^{++} u_{\underline{m}}^{--} - {1 \over 2} {E}^{--} u_{\underline{m}}^{++} - \Pi^{\underline{n}} F_{\underline{n}\underline{m}} \right) i_\d \Pi^{\underline{m}}. \end{eqnarray} It is worth mentioning that no boundary contribution involving the variation $\d \Theta^I$ appears. This does not contradict the well--known fact that the presence of a worldsheet boundary breaks at least half of the target space $N=2$ supersymmetry. Indeed, for the supersymmetry transformations \begin{equation}\label{susy1} \d_{susy} X^{\underline{m}} = \Theta^{I} \sigma^{\underline{m}} \e^I , \qquad \d_{susy} \Theta^{I\underline{\mu}} = \e^{I\underline{\mu}} \end{equation} the variation $i_{\d } \Pi^{\underline{m}}$ is nonvanishing and reads \begin{equation}\label{Pisusy} i_{\d_{susy}} \Pi^{\underline{m}}= 2 \d_{susy} X^{\underline{m}}= 2 \Theta^{I} \sigma^{\underline{m}} \e^I. \end{equation} Imposing the boundary conditions $\hat{\Theta}^{1\underline{\mu}}(\xi(\tau )) = \hat{\Theta}^{2\underline{\mu}} (\xi(\tau ))$ one arrives at the conservation of the $N=1$ supersymmetry whose embedding into the type $IIB$ supersymmetry group is defined by $\e^{\underline{\mu}1}=- \e^{\underline{\mu}2}$. Actually these conditions imply $i_\d \hat{\Pi}^{\underline{m}} (\xi(\tau ))=0$ and, as a consequence, the vanishing of the variation \p{varSI} (remember that the supersymmetry transformations of the gauge fields are defined by $i_\d ({\cal F}_2 -F_2)=0 $).
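Explicitly, on the boundary the variation \p{Pisusy} reads $$ i_{\d_{susy}} \hat{\Pi}^{\underline{m}} = 2\, \hat{\Theta}^{1} \sigma^{\underline{m}} \e^1 + 2\, \hat{\Theta}^{2} \sigma^{\underline{m}} \e^2 = 2\, \hat{\Theta}^{1} \sigma^{\underline{m}} \left( \e^1 + \e^2 \right) = 0 $$ for $\hat{\Theta}^{1\underline{\mu}}(\xi(\tau )) = \hat{\Theta}^{2\underline{\mu}}(\xi(\tau ))$ and $\e^{\underline{\mu}1}=- \e^{\underline{\mu}2}$, which makes the preservation of the $N=1$ supersymmetry manifest.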
The above consideration in the frame of the Lorentz harmonic approach results in the interesting observation that the supersymmetry breaking by a boundary is related to a 'classical reparametrization anomaly': indeed, the second line of the expression \p{varSI}, which produces the nonvanishing variation under the $N=2$ supersymmetry transformation with \p{Pisusy}, contains only $i_\d \Pi^{\underline{m}}$, which can be regarded as the parameters of the reparametrization gauge symmetry of the free superstring ($i_\d \Pi^{\underline{m}} u_{\underline{m}}^{\pm\pm}$) and of the free super--D9--brane ($i_\d \Pi^{\underline{m}}$), respectively. There exists a straightforward way to keep half of the rigid target space supersymmetry of the superstring--super-D9-brane system by incorporating the additional boundary term $\int_{\partial{\cal M}^{1+1}} \phi_{1\underline{\mu}} \left( \hat{\Theta}^{1\underline{\mu}}(\xi(\tau )) - \hat{\Theta}^{2\underline{\mu}} (\xi(\tau )) \right)$ with a Grassmann Lagrange multiplier one form $\phi_{1\underline{\mu}}$ (see Appendix A in \cite{BK99}). However, following \cite{Mourad,Sezgin25,BK99}, we accept in our present paper the 'soft' breaking of the supersymmetry by boundaries at the classical level (see \cite{HW,Mourad} for symmetry restoration by anomalies). We expect that the BPS states preserving part of the target space supersymmetry will appear as particular solutions of the coupled superbrane equations following from our action. \bigskip \section{Current forms and unified description of string and D9-brane } \setcounter{equation}{0} \subsection{Supersymmetric current form} For a simultaneous description of the super--D9--brane and the fundamental superstring, we have to define an 8-form distribution $J_8$ with support on the string worldsheet. In the pure bosonic case (see e.g.
\cite{bbs}) one requires \begin{equation}\label{J8def} \int_{{\cal M}^{1+1}} \hat{{\cal L}}_2 = \int_{\underline{{\cal M}}^{1+9}} J_8 \wedge {\cal L}_2 , \end{equation} where \begin{equation}\label{L2IIB} {\cal L}_2 = {1 \over 2} dX^{\underline{m}} \wedge dX^{\underline{n}} {\cal L}_{\underline{n} \underline{m}} ( X^{\underline{l}} ) \end{equation} is an arbitrary two-form in the $D=10$ dimensional space-time $\underline{{\cal M}}^{1+9}$ and \begin{equation}\label{L2IIBs} \hat{{\cal L}}_2 = {1 \over 2} d\hat{X}^{\underline{m}} (\xi ) \wedge d\hat{X}^{\underline{n}} (\xi ) {\cal L}_{\underline{n} \underline{m}} ( \hat{X}^{\underline{l}}(\xi)) \end{equation} is its pull-back onto the string worldsheet. It is not hard to verify that the appropriate expression for the current form $J_8$ is given by \cite{bbs} \begin{equation}\label{J80} J_8 = (dX)^{\wedge 8}_{\underline{n} \underline{m}} J^{\underline{n} \underline{m}}( X) = { 1 \over 2! 8!} \e_{\underline{m} \underline{n}\underline{n}_1 \ldots \underline{n}_8} d{X}^{\underline{n}_1 } \wedge \ldots \wedge dX^{\underline{n}_8} \int_{{\cal M}^{1+1}} d\hat{X}^{\underline{m}} (\xi ) \wedge d\hat{X}^{\underline{n}} (\xi ) \d^{10} \left( X - \hat{X} (\xi )\right)~. \end{equation} Indeed, inserting \p{J80} and \p{L2IIB} into the r.h.s. of \p{J8def}, using \p{dualft}, changing the order of integrations and performing the integration over $d^{10}X$, one arrives at the l.h.s. of \p{J8def}. \subsection{Superstring boundaries and current (non)conservation} If the superstring worldsheet is closed ($\partial {\cal M}^{1+1}= 0$), the current $J^{\underline{m}\underline{n}}$ is conserved, i.e. $J_8$ is a closed form \begin{equation}\label{cl} \partial {\cal M}^{1+1}= 0 \qquad \Rightarrow \qquad dJ_8 = 0, \qquad \Leftrightarrow \qquad \partial_{\underline{m}} J^{\underline{n} \underline{m}} = 0 . \end{equation} For the open (super)string this does not hold.
Indeed, assuming that the 10-dimensional space and the D9-brane worldvolume have no boundaries, $ \partial {\cal M}^{1+9}= 0, $ substituting for ${\cal L}_2$ a closed two-form, say $dA$, and using Stokes' theorem one arrives at \begin{equation}\label{dJ8def} \int_{\partial {\cal M}^{1+1}} A = \int_{{\cal M}^{1+1}} dA = \int_{{{\cal M}}^{1+9}} J_8 \wedge dA = \int_{{{\cal M}}^{1+9}} dJ_8 \wedge A . \end{equation} Thus the form $dJ_8$ has support localized at the boundary of the worldsheet (i.e. on the worldline of the string endpoints). This again can be justified by an explicit calculation with Eqs. \p{J80} and \p{dualft}, which results in \begin{equation}\label{dJ80} dJ_8 = - (dX)^{\wedge 9}_{\underline{n}} \partial_{\underline{m}} J^{\underline{m} \underline{n}}( X) = - (dX)^{\wedge 9}_{\underline{n}} j^{\underline{n}}( X) \end{equation} $$ \partial_{\underline{m}}J^{\underline{m}\underline{n}} ( X)= - j^{\underline{n}}( X), \qquad $$ with \begin{equation}\label{j0} j^{\underline{n}}( X) \equiv \int_{\partial{\cal M}^{1+1}} d\hat{X}^{\underline{n}} (\tau ) \d^{10} \left( X - \hat{X} (\tau )\right), \qquad \end{equation} { where the proper time $\tau$ parametrizes the boundary of the string worldsheet $\partial{\cal M}^{1+1}= \{ \tau \}$. Actually the boundary of the superstring(s) will have (at least) two connected components, $\partial{\cal M}^{1+1}= \cup_{j} {\cal M}_j^1$, each parametrized by its own proper time $\tau_j$. Then the rigorous expression for the boundary current \p{dJ80}, \p{j0} is $$ j^{\underline{n}}( X) \equiv \sum\limits_{j}^{} \int_{{\cal M}^{1}_j} d\hat{X}^{\underline{n}} (\tau_j ) \d^{10} \left( X - \hat{X} (\tau_j )\right).
\qquad $$ We will, however, use the simplified notation \p{j0} in what follows.} It is useful to define the local density 1--form $j_1$ on the worldsheet with support on the boundary of the worldsheet \begin{equation}\label{j10} j_1 = d\xi^{\mu} \e_{\mu\nu} \int_{\partial {\cal M}^{1+1}} d\tilde{\xi}^{\nu} (\tau ) \d^{2} \left( \xi - \tilde{\xi} (\tau )\right)~, \qquad \end{equation} which has the properties \begin{equation}\label{j1def} \int_{\partial {\cal M}^{1+1}} \hat{A} \equiv \int_{{\cal M}^{1+1}} d\hat{A} = \int_{{\cal M}^{1+1}} j_1 \wedge \hat{A} = \int_{{\cal M}^{1+9}} dJ_8 \wedge \hat{A} \end{equation} for any 1-form $$ A= dX^{\underline{m}} A_{\underline{m}} (X), $$ e.g. for the D9-brane gauge field \p{gf} considered in the special parametrization \p{xX}, \p{supercVA}. In the sense of the last equality in \p{j1def} one can write the formal relation \begin{equation}\label{dJ8j1} dJ_8 = J_8 \wedge j_1 \end{equation} (which cannot be treated straightforwardly, as the form $j_1$ cannot be regarded as the pull-back of a 10-dimensional 1-form). \bigskip \subsection{Variation of current form distributions and supersymmetry } The variation of the form \p{J80} becomes \begin{equation}\label{deJ80} \d J_8 = 3 (dX)^{\wedge 8}_{[\underline{m} \underline{n}} \partial_{\underline{k}]} \int_{{\cal M}^{1+1}} d\hat{X}^{\underline{m}} (\xi ) \wedge d\hat{X}^{\underline{n}} (\xi ) \left( \d X^{\underline{k}}- \d \hat{X}^{\underline{k}} (\xi ) \right)~ \d^{10} \left( X - \hat{X} (\xi )\right)~- \end{equation} $$ - 2 (dX)^{\wedge 8}_{\underline{m} \underline{n}} \int_{\partial {\cal M}^{1+1}} d\hat{X}^{\underline{m}} (\tau ) \left( \d X^{\underline{n}}- \d \hat{X}^{\underline{n}} (\tau ) \right)~ \d^{10} \left( X - \hat{X} (\tau )\right)~. $$ Let us turn to the target space supersymmetry transformations \p{susy1}.
For the string coordinate fields they take the form \begin{equation}\label{susyIIB} \d \hat{X}^{\underline{m}}(\xi ) = \hat{\Theta}^{I} (\xi ) \sigma^{\underline{m}} \e^I , \qquad \d \hat{\Theta}^{I\underline{\mu}} = \e^{I\underline{\mu}} \end{equation} while for the super-D9-brane they read \begin{equation}\label{susyD9} \d X^{\underline{m}} (x) = \Theta^{I} (x) \sigma^{\underline{m}} \e^I , \qquad \d \Theta^{I\underline{\mu}} (x) = \e^{I\underline{\mu}}. \end{equation} In the parametrization \p{supercVA} corresponding to the introduction of the inverse function \p{xX}, the transformation \p{susyD9} coincides with the Goldstone fermion realization \begin{equation}\label{susyD9X} \d X^{\underline{m}} = \Theta^{I} (X) \sigma^{\underline{m}} \e^I , \qquad \d \Theta^{I\underline{\mu}} (X) \equiv \Theta^{I\underline{\mu}~\prime} (X^{\prime} )- \Theta^{I\underline{\mu}} (X) = \e^{I\underline{\mu}}~. \end{equation} Thus, if we identify the $X^{\underline{m}}$ entering the current density \p{J80} with the bosonic coordinates of the superspace, parametrizing the super-D9-brane by \p{xX}, we can use \p{deJ80} to obtain the supersymmetry transformations of the current density \p{J80} \begin{equation}\label{deJ80susy} \d J_8 = 3 (dX)^{\wedge 8}_{[\underline{m} \underline{n}} \partial_{\underline{k}]} \int_{{\cal M}^{1+1}} d\hat{X}^{\underline{m}} (\xi ) \wedge d\hat{X}^{\underline{n}} (\xi ) \left( \Theta^{I} (X) - \hat{\Theta}^{I} (\xi ) \right)\sigma^{\underline{k}} \e^I ~ \d^{10} \left( X - \hat{X} (\xi )\right)~- \end{equation} $$ - 2 (dX)^{\wedge 8}_{\underline{m} \underline{n}} \int_{\partial {\cal M}^{1+1}} d\hat{X}^{\underline{m}} (\tau ) \left( \Theta^{I} (X) - \hat{\Theta}^{I} (\tau ) \right)\sigma^{\underline{n}} \e^I ~ \d^{10} \left( X - \hat{X} (\tau )\right)~.
$$ Now it is evident that the current density \p{J80} becomes invariant under the supersymmetry transformations \p{susyIIB}, \p{susyD9} once the identification \begin{equation}\label{ThhatTh} \hat{\Theta}^{I} (\xi )= \Theta^{I} \left(\hat{X} (\xi ) \right) \end{equation} of the superstring coordinate fields $\hat{\Theta}^{I} (\xi )$ with the image $\Theta^{I} \left(\hat{X} (\xi ) \right)$ of the super--D9--brane coordinate field $\Theta^{I} (X)$ is made. \subsection{Manifestly supersymmetric representations for the distribution form} In the presence of the D9-brane, whose world volume spans the whole $D=10$ dimensional space-time, we can rewrite \p{J80} as \begin{equation}\label{J81} J_8 = (dx)^{\wedge 8}_{{n} {m}} J^{{n}{m}}( x) = { 1 \over 2! 8!} \e_{{n}{m} {n}_1 \ldots {n}_8} d{x}^{{n}_1 } \wedge \ldots \wedge dx^{{n}_8} \int_{{\cal M}^{1+1}} d\hat{x}^{{m}} (\xi ) \wedge d\hat{x}^{{n}} (\xi ) \d^{10} \left( x - \hat{x} (\xi )\right) , \end{equation} where the function $\hat{x} (\xi )$ is defined through $\hat{X} (\xi )$ with the use of the inverse function \p{xX}, i.e. $ \hat{X} (\xi )= X(\hat{x} (\xi )) $~, cf. \p{x(xi)}. Passing from \p{J80} to \p{J81}, the identity \begin{equation}\label{deltapr} \d^{10} \left( X - \hat{X}(\xi ) \right) \equiv \d^{10} \left( X - X(\hat{x}(\xi )) \right) = {1 \over det( {\partial X \over \partial x})} \d^{10} \left( x - \hat{x}(\xi ) \right) \end{equation} has to be taken into account. The consequences of this observation are two-fold: \begin{itemize} \item {\bf i)} We can use $J_8$ to represent an integral over the string worldsheet as an integral over the D9-brane worldvolume \footnote{Note the difference between the manifolds involved in the r.h.s.'s of \p{J8def1} and \p{J8def}. This will be important in the supersymmetric case.
} \begin{equation}\label{J8def1} \int_{{\cal M}^{1+1}} \hat{{\cal L}}_2 = \int_{{{\cal M}}^{1+9}} J_8 \wedge {\cal L}_2 \end{equation} for any 2--form \begin{equation}\label{L2D9} {\cal L}_2 = {1 \over 2} dx^{{m}} \wedge dx^{{n}} {\cal L}_{{n} {m}} ( x^{{l}} ) \end{equation} {\sl living on the D9--brane world volume} ${{\cal M}}^{1+9}$, e.g. for the field strength ${\cal F}_2 = dA -B_2$ \p{calFD9} of the D9-brane gauge field \p{gf}. The pull--back \begin{equation}\label{L2D9s} \hat{{\cal L}}_2 = {1 \over 2} d\hat{x}^{{m}} (\xi ) \wedge d\hat{x}^{{n}} (\xi ) {\cal L}_{{n}{m}} ( \hat{x}^{{l}}(\xi)) \end{equation} is defined in \p{J8def1} with the use of the inverse function \p{xX}. \item {\bf ii)} As the coordinates $x^n$ are inert under the target space supersymmetry \p{susy1}, the current density $J_8$ is invariant under supersymmetry. {\sl Hence, when the identification \p{ThhatTh} \begin{equation}\label{Thid} \hat{\Theta}^{I} (\xi)= {\Theta}^{I} \left(\hat{x}(\xi)\right) \qquad \end{equation} is made, it is possible to use Eqs. \p{J8def}, \p{J8def1}, \p{J81} to lift the complete superstring action \p{acIIBb} to the 10-dimensional integral form.} \end{itemize} The manifestly supersymmetric form of the current density appears after passing to the supersymmetric basis \p{Pipb}, \p{dThpb} of the space tangent to ${\cal M}^{1+9}$. With the decomposition \p{Pipb}, $J_8$ becomes \begin{equation}\label{J8Pi9} J_8 = (\Pi)^{\wedge 8}_{\underline{n} \underline{m}} J_{(s)}^{\underline{n} \underline{m}}( X) = { 1 \over 2! 8!} \e_{\underline{n} \underline{m}\underline{n}_1 \ldots \underline{n}_8} \Pi^{\underline{n}_1 } \wedge \ldots \wedge \Pi^{\underline{n}_8} { 1 \over det(\Pi_{{r}}^{~\underline{s}})} \int_{{\cal M}^{1+1}} \hat{\Pi}^{\underline{n}} \wedge \hat{\Pi}^{\underline{m}} \d^{10} \left( X - \hat{X} (\xi )\right). \end{equation} In Eq. \p{J8Pi9} the only piece in which the supersymmetric invariance is not manifest is $\d^{10} \left( X - \hat{X} (\xi )\right)$.
However, in terms of the D9-brane world volume coordinates we arrive at \begin{equation}\label{J8Pi} J_8= { 1 \over 2! 8!} \e_{\underline{n} \underline{m}\underline{n}_1 \ldots \underline{n}_8} \Pi^{\underline{n}_1 } \wedge \ldots \wedge \Pi^{\underline{n}_8} { 1 \over det(\Pi_{{r}}^{~\underline{s}})} \int_{{\cal M}^{1+1}} \hat{\Pi}^{\underline{m}} \wedge \hat{\Pi}^{\underline{n}} \d^{10} \left( x - \hat{x} (\xi )\right) ~, \end{equation} where the determinant in the denominator is calculated for the matrix $ \Pi_{{n}}^{~\underline{m}} = \partial_{{n}} X^{\underline{m}}(x) - i \partial_{{n}} \Theta^1 \s^{\underline{m}} \Theta^1 - i \partial_{{n}} \Theta^2 \s^{\underline{m}} \Theta^2 $ (see \p{Pi2}). \bigskip The manifestly supersymmetric expression for the exact dual current 9-form $dJ_8$ \p{dJ80} is provided by \begin{equation}\label{dJ8Pi} dJ_8 \equiv (dx)^{\wedge 9}_{{n}} \partial_m J^{{m}{n}}(x) = - { 1 \over det(\Pi_{{r}}^{~\underline{s}})} (\Pi)^{\wedge 9}_{\underline{m}} \int_{\partial {\cal M}^{1+1}} \hat{\Pi}^{\underline{m}} \d^{10} \left( x - \hat{x} (\tau )\right). \end{equation} \begin{equation}\label{j1} \partial_{{m}}J^{{m}{n}}(x)= - j^{{n}}( x), \qquad j^{{n}}(x) \equiv \int_{\partial{\cal M}^{1+1}} d\hat{x}^{{n}} (\tau ) \d^{10} \left( x - \hat{x} (\tau )\right). \end{equation} \bigskip \section{An action for the coupled system} \setcounter{equation}{0} In order to obtain a covariant action for the coupled system with the current form $J_8$, one more step is needed.
Indeed, our lifting rules \p{J8def} with the density $J_8$ \p{J80}, \p{J81} are valid for a form ${\hat{\cal L}}_2$ which is the pull-back of a form ${{\cal L}}_2$ living either on the whole $D=10$ type $IIB$ superspace \p{L2IIB}, or, at least, on the whole 10-dimensional worldvolume of the super-D9-brane \p{L2D9}. Thus, imposing the identification \p{Thid}, we can straightforwardly rewrite the Wess-Zumino term $\int \hat{B}_2$ and the boundary term of the superstring action $\int \hat{A}$ as integrals over the super--D9--brane world volume $\int J_8 \wedge B_2 + \int dJ_8 \wedge A$. But the 'kinetic term' of the superstring action \begin{equation}\label{kin} \int_{{\cal M}^{1+1}} {\hat{{\cal L}}}_0 \equiv \int_{{\cal M}^{1+1}} {1 \over 2} \hat{E}^{++}\wedge \hat{E}^{--} \end{equation} with $\hat{E}^{++}, \hat{E}^{--}$ defined by Eqs. \p{Epm} requires an additional consideration regarding the harmonics \p{harmv}, which so far were defined only as worldsheet fields. In order to represent the kinetic term \p{kin} as an integral over the D9-brane world volume too, we have to introduce a counterpart of the harmonic fields \p{harmv}, \p{harms} {\sl in the whole 10-dimensional space-time or in the D9-brane world volume} \begin{equation}\label{harmvD9} {u}^{\underline{a}}_{\underline{m}} (x) \equiv ( {u}^{++}_{\underline{m}}(x), {u}^{--}_{\underline{m}}(x), {u}^{i}_{\underline{m}}(x) ) \qquad \in \qquad SO(1,9) \end{equation} \begin{equation}\label{harmsD9} {v}_{\underline{\mu }}^{\underline{\alpha}}= \left({v}^{I+}_{\underline{\mu }q}, {v}^{I-}_{\underline{\mu }\dot{q}} \right) \qquad \in \qquad Spin(1,9) \end{equation} (see \p{harmv}--\p{uvgv}). Such a 'lifting' of the harmonics to the super--D9--brane worldvolume creates the fields of an auxiliary ten-dimensional $SO(1,9)/(SO(1,1) \times SO(8))$ 'sigma model'.
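Provided the worldvolume harmonics \p{harmvD9} reduce on the worldsheet to the 'stringy' ones, the lifting rule \p{J8def1} applies to the kinetic term \p{kin} as well, $$ \int_{{\cal M}^{1+1}} {1 \over 2}\, \hat{E}^{++}\wedge \hat{E}^{--} = \int_{{\cal M}^{1+9}} J_8 \wedge {1 \over 2}\, {E}^{++}\wedge {E}^{--}~, \qquad {E}^{\pm\pm}= \Pi^{\underline{m}}\, u^{\pm\pm}_{\underline{m}}(x)~, $$ since ${E}^{\pm\pm}$ are now 1-forms defined on the whole world volume ${\cal M}^{1+9}$.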
The only restriction on these new fields is that they should coincide with the 'stringy' harmonics on the worldsheet, $ {u}^{\underline{a}}_{\underline{m}} \left(x(\xi )\right) = \hat{u}^{\underline{a}}_{\underline{m}} (\xi ) ~: $ \begin{equation}\label{harmvD9IIB} {u}^{++}_{\underline{m}} \left(x(\xi )\right) = \hat{u}^{++}_{\underline{m}} (\xi ), \qquad {u}^{--}_{\underline{m}} \left(x(\xi )\right) = \hat{u}^{--}_{\underline{m}}(\xi ), \qquad {u}^{i}_{\underline{m}} \left(x(\xi )\right) = \hat{u}^{i}_{\underline{m}} (\xi) \end{equation} and $ {v}_{\underline{\mu }}^{\underline{\alpha}} \left(x(\xi )\right) = \hat{v}_{\underline{\mu }}^{\underline{\alpha}} (\xi) ~: $ \begin{equation}\label{harmsD9IIB} {v}^{I+}_{\underline{\mu }q} \left(x(\xi )\right) = \hat{v}^{I+}_{\underline{\mu }q} (\xi ) , \qquad {v}^{I-}_{\underline{\mu }\dot{q}} \left(x(\xi )\right) = \hat{v}^{I-}_{\underline{\mu }\dot{q}} (\xi ) . \qquad \end{equation} \bigskip In this manner we arrive at the full supersymmetric action describing the coupled system of the open fundamental superstring interacting with the super--D9--brane (cf.
\p{SLLL}--\p{dLWZ}, \p{acIIB}): \begin{eqnarray}\label{SD9+IIB} && S = \int_{{\cal{M}}^{10}} \left( {{\cal{L}}_{10}} + J_8 \wedge {{\cal{L}}_{IIB}} + dJ_8 \wedge A \right) = \nonumber \\ && = \int_{{\cal{M}}^{10}} \left[\Pi^{\wedge 10} \sqrt{-det(\eta_{\underline{m}\underline{n}} +F_{\underline{m}\underline{n}})} + Q_{8} \wedge \left( dA-B_2 - {1 \over 2} \Pi^{\underline{m}} \wedge \Pi^{\underline{n}} ~F_{\underline{m}\underline{n}}\right) + e^{\cal{F}} \wedge C~ \vert_{10} ~\right] + \\ && + \int_{{\cal{M}}^{10}} J_8 \wedge \left({1 \over 2} {E}^{++} \wedge {E}^{--} - B_{2} \right) + \int_{{\cal{M}}^{10}} dJ_8 \wedge A ~. \nonumber \end{eqnarray} \bigskip \section{Supersymmetric equations for the coupled system} \setcounter{equation}{0} \subsection{Algebraic equations} The Lagrange multiplier $Q_8$ and the auxiliary field $F_{\underline{m}\underline{n}}$ are not involved in the superstring action \p{acIIBb}, while the harmonics are absent in the super--D9--brane part \p{SLLL}--\p{L1} of the action \p{SD9+IIB}. Thus we conclude that the {\sl algebraic equations} \p{delQ8}, \p{delF}, \p{Ei=0} are the same as in the free models. \subsubsection{Equations obtained from varying the harmonics } Indeed, the variation with respect to the harmonics (now extended to the whole $D=10$ space-time or, equivalently, to the super--D9--brane world volume \p{harmvD9}) produces the equations \begin{equation}\label{du(c)} J_8 \wedge E^i \wedge E^{\pm\pm} =0 \qquad \Leftrightarrow \qquad J_8 \wedge E^i \equiv J_8 \wedge \Pi^{\underline{m}} u_{\underline{m}}^i =0 \qquad \end{equation} whose image on the worldsheet coincides with Eq. \p{Ei=0} \footnote{The precise argument goes as follows: Take the integral of Eq. \p{du(c)} with an arbitrary 10-dimensional test function $f(X)$.
The integrals of the forms $\hat{E}^i \wedge \hat{E}^{++}$ and $\hat{E}^i \wedge \hat{E}^{--}$ multiplied by {\sl arbitrary} functions $f(\hat{X})$ vanish, $$ \int\limits^{}_{{\cal M}^{(1+1)}} \hat{E}^i \wedge \hat{E}^{++} f(\hat{X})=0, \qquad \int\limits^{}_{{\cal M}^{(1+1)}} \hat{E}^i \wedge \hat{E}^{--} f(\hat{X})=0. $$ By the arbitrariness of $f(\hat{X})$, both 2-forms then vanish identically on the worldsheet, $\hat{E}^i \wedge \hat{E}^{++}= \hat{E}^i \wedge \hat{E}^{--}=0 $. From the independence of the pull--backs $\hat{E}^{++}$, $\hat{E}^{--}$, Eq. \p{Ei=0} then indeed follows.} \begin{equation}\label{hEi=0} \hat{E}^i \equiv \hat{\Pi}^{\underline{m}}(\xi ) \hat{u}_{\underline{m}}^i (\xi ) =0 ~. \qquad \end{equation} Now it becomes clear why the basis $E^{\underline{a}}$ \p{Ea9} \begin{equation}\label{EundaD9} E^{\underline{a}} = \left( E^{++}, E^{--}, E^{i} \right) \qquad E^{\pm\pm}= \Pi^{\underline{m}}u_{\underline{m}}^{\pm\pm}, \qquad E^{i}= \Pi^{\underline{m}}u_{\underline{m}}^{i}, \qquad \end{equation} whose pull-back on the string worldsheet coincides with \p{Eunda}, is particularly convenient for the study of the coupled system. The dual basis $\nabla_{\underline{a}}$ \p{der} is constructed with the auxiliary moving frame variables \p{harmvD9}, \p{harmvD9IIB} $$ \nabla_{\underline{a}} = \left( \nabla_{++}, \nabla_{--} , \nabla_{{i}} \right) \equiv u_{\underline{a}}^{~\underline{m}}\nabla_{\underline{m}} $$ \begin{equation}\label{derIIB} \nabla_{++}= {1 \over 2} u^{\underline{m}--}\nabla_{\underline{m}}, \qquad \nabla_{--} = {1 \over 2} u^{\underline{m}++}\nabla_{\underline{m}}, \qquad \nabla_{{i}}= - u^{\underline{m}i}\nabla_{\underline{m}}, \end{equation} $$ \nabla_{\underline{m}}= \Pi^{-1}{}^{~n}_{\underline{m}}\partial_n .
$$ The decomposition of an arbitrary form in the basis \p{Ea9}, \p{derIIB} reads \begin{equation}\label{hdec1} d\Theta^{\underline{\mu}I}= E^{\pm\pm} \nabla_{\pm\pm}\Theta^{\underline{\mu}I} + E^i \nabla_{i} \Theta^{\underline{\mu}I}, \end{equation} ($E^{\pm\pm} \nabla_{\pm\pm}\equiv E^{++} \nabla_{++} + E^{--} \nabla_{--}$) or \begin{equation}\label{hdec2} E^{+qI} \equiv d\Theta^{\underline{\mu}I} v^{~+}_{\underline{\mu}q} = E^{\pm\pm} E^{+qI}_{\pm\pm} + E^i E^{+qI}_{i}, \end{equation} \begin{equation}\label{hdec3} E^{-\dot{q}I} \equiv d\Theta^{\underline{\mu}I} v^{~-}_{\underline{\mu}\dot{q}} = E^{\pm\pm} E^{-\dot{q}I}_{\pm\pm} + E^i E^{-\dot{q}I}_{i} \end{equation} (cf. \p{ddec}). Due to \p{hEi=0}, only the terms proportional to $E^{++}, E^{--}$ survive in the pull--backs of \p{hdec1}--\p{hdec3} on the superstring worldsheet \begin{equation}\label{hdec1IIB} d\hat{\Theta}^{\underline{\mu}I} (\xi )= \hat{E}^{\pm\pm} \left( \nabla_{\pm\pm}\Theta^{\underline{\mu}I} \right)(x(\xi)), \end{equation} \begin{equation}\label{hdec2IIB} \hat{E}^{+qI} \equiv d\hat{\Theta}^{\underline{\mu}I} \hat{v}^{~+}_{\underline{\mu}q} = \hat{E}^{\pm\pm} \hat{E}^{+qI}_{\pm\pm} , \end{equation} \begin{equation}\label{hdec3IIB} \hat{E}^{-\dot{q}I} \equiv d\hat{\Theta}^{\underline{\mu}I} \hat{v}^{~-}_{\underline{\mu}\dot{q}} = \hat{E}^{\pm\pm} \hat{E}^{-\dot{q}I}_{\pm\pm}. \end{equation} An alternative way to represent Eqs. \p{hdec1IIB}, \p{hdec2IIB}, \p{hdec3IIB} is provided by the use of the current density \p{J81}, \p{J8Pi} and the equivalent version \p{du(c)} of Eq.
\p{hEi=0} \begin{equation}\label{hdec1J} J_8 \wedge d{\Theta}^{\underline{\mu}I}= J_8 \wedge {E}^{\pm\pm} \nabla_{\pm\pm} \Theta^{\underline{\mu}I} (x), \end{equation} \begin{equation}\label{hdec2J} J_8 \wedge {E}^{+qI} \equiv J_8 \wedge d{\Theta}^{\underline{\mu}I} {v}^{~+}_{\underline{\mu}q} = J_8 \wedge {E}^{\pm\pm} {E}^{+qI}_{\pm\pm}, \end{equation} \begin{equation}\label{hdec3J} J_8 \wedge {E}^{-\dot{q}I} \equiv J_8 \wedge d{\Theta}^{\underline{\mu}I} {v}^{~-}_{\underline{\mu}\dot{q}} = J_8 \wedge {E}^{\pm\pm} {E}^{-\dot{q}I}_{\pm\pm}. \end{equation} \bigskip On the other hand, one can solve Eq. \p{du(c)} with respect to the current density. To this end we have to change the basis $\Pi^{\underline{m}} ~\rightarrow E^{\underline{a}}=\Pi^{\underline{m}} u_{\underline{m}}^{\underline{a}}$ (see \p{Ea9}, \p{harmvD9}) in the expression \p{J8Pi} (remember that $det(u)=1$ due to \p{harmvD9}). Then the solution of \p{du(c)} becomes \begin{equation}\label{J8Eort} J_8 = { 1 \over det(\Pi_{{r}}^{~\underline{s}})} (E^\perp)^{\wedge 8} ~{1 \over 2} \int_{{\cal M}^{1+1}} \hat{E}^{++} \wedge \hat{E}^{--} \d^{10} \left( x - \hat{x} (\xi )\right), \end{equation} where \begin{equation}\label{Eperp8} (E^\perp)^{\wedge 8} \equiv { 1 \over 8!} \e^{i_1 \ldots i_8} E^{i_1 } \wedge \ldots \wedge E^{i_8 } \end{equation} is the local volume element of the space orthogonal to the worldsheet. The current form \p{J8Eort} includes an invariant on-shell superstring current \begin{equation}\label{j(x)} J_8 = (E^\perp)^{\wedge 8} j(x) , \qquad j(x) = { 1 \over 2 det(\Pi_{{r}}^{~\underline{s}})} \int_{{\cal M}^{1+1}} \hat{E}^{++} \wedge \hat{E}^{--} \d^{10} \left( x - \hat{x} (\xi )\right)~. \end{equation} Note that it can be written with the use of the Lorentz harmonics only.
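As a simple consistency check (an observation following directly from \p{Eperp8}, not an additional input), the current \p{J8Eort} solves Eq. \p{du(c)} identically: since $(E^\perp)^{\wedge 8}$ already contains all eight independent orthogonal forms $E^i$, one finds $$ J_8 \wedge E^j \propto { 1 \over 8!} \e^{i_1 \ldots i_8} E^{i_1 } \wedge \ldots \wedge E^{i_8 } \wedge E^j = 0~, $$ because the form $E^j$ necessarily appears twice in each term of the wedge product.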
The supersymmetric covariant volume can be decomposed as well in terms of the orthogonal volume form \begin{equation}\label{Pi8E8} (\Pi )^{\wedge 10} \equiv (E^\perp)^{\wedge 8} \wedge { 1 \over 2} E^{++} \wedge E^{--} . \end{equation} \bigskip \subsubsection{Equations for auxiliary fields of the super--D9--brane} Variation with respect to the D9-brane Lagrange multiplier $Q_8$ yields the identification of the auxiliary antisymmetric tensor field $F$ with the generalized field strength ${\cal F}$ of the Abelian gauge field $A$ \begin{equation}\label{delQ8(c)} {\cal F}_2 \equiv dA- B_2 = F_2 \equiv {1 \over 2} \Pi^{\underline{m}} \wedge \Pi^{\underline{n}} F_{\underline{n}\underline{m}}~. \end{equation} On the other hand, from the variation with respect to the auxiliary antisymmetric tensor field $F_{\underline{n}\underline{m}}$ one obtains the expression for the Lagrange multiplier $Q_8$ \begin{equation}\label{delF(c)} Q_8 = \Pi^{\wedge 8}_{\underline{n}\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{n}\underline{m}} \equiv - { 1 \over \sqrt{|\eta + F|} } ~ [(\eta + F)_{\underline{.}\underline{.}} \Pi^{\underline{.}} ]^{\wedge 8 ~\underline{n}\underline{m}} F_{\underline{n}\underline{m}}, \end{equation} where $\Pi^{\wedge 8}_{\underline{n}\underline{m}}$ is defined by \p{dualf} and (in a suggestive notation) \begin{equation} \label{dualfn} [(\eta + F)_{\underline{.}\underline{.}} \Pi^{\underline{.}} ]^{\wedge 8 ~\underline{n}\underline{m}} = {1 \over 2\cdot 8!} \e^{\underline{m}\underline{n}\underline{m}_1\ldots \underline{m}_{8}} (\eta + F)_{\underline{m}_1 \underline{n}_1} \Pi^{\underline{n}_1} \wedge ... \wedge (\eta + F)_{\underline{m}_8 \underline{n}_8} \Pi^{\underline{n}_{8}}.
\qquad \end{equation} The second form of \p{delF(c)} indicates that in the linearized approximation with respect to the gauge fields one obtains \begin{equation}\label{delF(c)l} Q_8 = - \Pi^{\wedge 8}_{\underline{n}\underline{m}} F^{\underline{n}\underline{m}} + {\cal O} (F^2) \equiv - {1 \over 2} * F_2 + {\cal O} (F^2), \end{equation} where $*$ denotes the $D=10$ Hodge operation and ${\cal O} (F^2) $ includes terms of second and higher orders in the field $F_{\underline{n}\underline{m}}$. \bigskip \subsection{Dynamical bosonic equations: Supersymmetric Born-Infeld equations with the source} The supersymmetric generalization of the Born-Infeld dynamical equations \begin{equation}\label{delA(c)} dQ_8 + d{\cal L}_8^{WZ-D7} = - dJ_8 \end{equation} follows from variation with respect to the gauge field. Here we have to take into account the expression for $Q_8$ \p{delF(c)l}, the identification of $F$ with the gauge field strength \p{delQ8(c)}, as well as the expression for the D7-brane Wess-Zumino term \begin{equation}\label{WZD7} d{\cal L}_8^{WZ-D7} = e^{\cal{F}} \wedge R \vert_{9} , \qquad R = \oplus _{n=0}^{5} R_{2n+1} = 2i d\Theta^{2\underline{\nu} } \wedge d\Theta^{1\underline{\mu} } \wedge \oplus _{n=0}^{4} \hat{\s}^{(2n+1)}_{\underline{\nu}\underline{\mu} }. \qquad \end{equation} Let us stress that, in contrast to the free Born--Infeld equation \p{delA}, Eq. \p{delA(c)} has a right-hand side produced by the endpoints of the fundamental superstring.
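To exhibit the structure of this source term, one may sketch the linearized form of \p{delA(c)}: inserting $Q_8 = - {1 \over 2} * F_2 + {\cal O}(F^2)$ from \p{delF(c)l} and noting that the Wess--Zumino contribution \p{WZD7} is at least bilinear in the fermionic fields, Eq. \p{delA(c)} reduces schematically (up to the indicated higher order terms, and with signs depending on the conventions for the Hodge operation) to a Maxwell-type equation with a source localized at the string endpoints, $$ d * F_2 = 2\, dJ_8 + {\cal O}(F^2) + {\cal O}(d\Theta \wedge d\Theta )~. $$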
\bigskip Variation of the action with respect to $X^{\underline{m} }$ yields \begin{eqnarray}\label{dXD9} & {} & J_8 \wedge M^i_2 ~ u_{\underline{m} }^{~i} + \nonumber \\ & {} & + 2i J_8 \wedge \left({E}^{2+}_{~{q}}\wedge {E}^{2+}_{~{q}} u_{\underline{m} }^{--} - {E}^{1-}_{~\dot{q}}\wedge {E}^{1-}_{~\dot{q}} u_{\underline{m} }^{++} \right) + \nonumber \\ & {} & + \Pi^{\wedge 8}_{\underline{n}\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{n}\underline{l}} \s_{\underline{l}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\mu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\mu} }\right) \wedge \left( d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\e}} h_{\underline{\e}}^{~\underline{\nu} }\right) + \nonumber \\ & {} & + dJ_8 \wedge \left( -{1 \over 2} E^{\pm\pm} u^{\mp\mp \underline{n}} F_{\underline{n}\underline{m} } +{1 \over 2} E^{++}u_{\underline{m} }^{--} -{1 \over 2} E^{--} u_{\underline{m} }^{++} \right) = 0 . \end{eqnarray} The first line of Eq. \p{dXD9} contains the lifting to the super--D9--brane worldvolume of the 2-form $\hat{M}_2^{i}$ \p{hM2def} which enters the l.h.s. of the free superstring bosonic equations \p{M2=0} \begin{equation}\label{M2} {M}_2^{i}\equiv {1 \over 2} {E}^{--} \wedge {f}^{++i} - {1 \over 2} {E}^{++} \wedge {f}^{--i} + 2i {E}^{1+}_{~{q}}\wedge \g^i_{q\dot{q}}{E}^{1-}_{~\dot{q}} - 2i {E}^{2+}_{~{q}}\wedge \g^i_{q\dot{q}}{E}^{2-}_{~\dot{q}}. \end{equation} The fourth line of Eq. \p{dXD9} is again new input from the boundary. The second and third lines of Eq. \p{dXD9} vanish identically on the surface of the free fermionic equations of the free superstring and of the free D9-brane, respectively. These are the Noether identities reflecting the diffeomorphism invariance of the free D9-brane and the free superstring actions. Hence, it is natural to postpone the discussion of Eq. \p{dXD9} and turn to the fermionic equations for the coupled system.
\bigskip \subsection{Fermionic field equations} The variation with respect to $\Theta^2$ produces the fermionic equation \begin{equation}\label{dTh2(c)} \Pi^{\wedge 9}_{\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{m}\underline{n}} \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\nu} }\right) = - 2 J_8 \wedge E^{--} \wedge d\Theta^{2\underline{\nu} } ~v_{\underline{\nu}q }^{~+}~ v_{\underline{\mu}q }^{~+}, \end{equation} with the r.h.s. localized at the worldsheet and proportional to the l.h.s. of the fermionic equations of the free type $IIB$ superstring (cf. \p{Th2str}; recall that $ d\Theta^{2\underline{\nu} } ~v_{\underline{\nu}q }^{~+}~\equiv E^{2+}_{~q} $). The remaining fermionic variation $\d\Theta^1$ produces an equation which includes the form $J_8$ with support localized at the worldsheet only: \begin{equation}\label{dTh1(c)} J_8 \wedge \left( E^{++} \wedge E^{1-}_{~\dot{q}} v_{\underline{\mu}\dot{q} }^{~-} - E^{--} \wedge E^{2+}_{~q } h_{\underline{\mu}}^{~\underline{\nu}} v_{\underline{\nu}q }^{~+} \right) = 0. \end{equation} This equation is worth special consideration. For clarity, let us write its image on the worldsheet \begin{equation}\label{dTh1(c)0} \hat{E}^{++} \wedge \hat{E}^{1-}_{~\dot{q}} \hat{v}_{\underline{\mu}\dot{q} }^{~-} - \hat{E}^{--} \wedge \hat{E}^{2+}_{~q } \hat{h}_{\underline{\mu}}^{~\underline{\nu}} \hat{v}_{\underline{\nu}q }^{~+} = 0.
\end{equation} Contracting with the inverse harmonics $v^{-\underline{\mu}}_{q}, v^{+\underline{\mu}}_{\dot{q}}$ and using the 'multiplication table' of the harmonics (\p{spharm1}) we arrive at the following covariant $8+8$ splitting representation for the $16$ equations \p{dTh1(c)0}: \begin{equation}\label{dTh1(c)1} \hat{E}^{++} \wedge \hat{E}^{1-}_{~\dot{q}} = \hat{E}^{--} \wedge \hat{E}^{2+}_{~q } \hat{h}^{++}_{\dot{q}q} \end{equation} \begin{equation}\label{dTh1(c)2} \hat{E}^{--} \wedge \hat{E}^{2+}_{~p } \hat{h}_{qp} = 0. \end{equation} Here \begin{equation}\label{h++} \hat{h}^{++}_{\dot{q}q} \equiv \hat{v}^{+\underline{\mu}}_{\dot{q} } \hat{h}_{\underline{\mu}}^{~\underline{\nu}} \hat{v}_{\underline{\nu}q }^{~+}, \qquad \end{equation} \begin{equation}\label{h-+} \hat{h}_{pq} \equiv \hat{v}^{-\underline{\mu}}_{{p} } \hat{h}_{\underline{\mu}}^{~\underline{\nu}} \hat{v}_{\underline{\nu}q }^{~+} \qquad \end{equation} are the covariant $8 \times 8$ blocks of the image $\hat{h}$ of the Lorentz group valued (and hence invertible!) spin-tensor field $h$ \p{hinSpin}--\p{Id2} \begin{equation} \label{h16} h_{\underline{\b}}^{~\underline{\alpha}} \equiv v_{\underline{\b}}^{~\underline{\mu}} h_{\underline{\mu}}^{~\underline{\nu}} v_{\underline{\nu}}^{~\underline{\alpha}} \equiv \left( \matrix{ h_{qp} & h^{--}_{q\dot{p}} \cr h^{++}_{\dot{q}p} & \tilde{h}_{\dot{q}\dot{p}} \cr } \right). \end{equation} Note that a source localized on the worldsheet of the open brane, as in \p{dTh2(c)}, is characteristic for a system including a space--time filling brane. For the structure of the fermionic equations in the general case we refer to \cite{BK99}. \bigskip \section{Phases of the coupled system} \setcounter{equation}{0} It is useful to start with the fermionic equations of motion \p{dTh2(c)}, \p{dTh1(c)1}, \p{dTh1(c)2}.
First of all we have to note that in the generic phase {\sl there are no true (complete) Noether identities for the $\kappa$--symmetry} in the equations for the coupled system, as all $32$ fermionic equations are independent. \subsection{Generic phase describing the decoupled system and appearance of other phases} In the generic case we shall assume that the matrix $h_{qp}(X(\xi)) = \hat{h}_{qp}(\xi)$ is invertible ($det( \hat{h}_{qp}) \not= 0$; for the case $det( \hat{h}_{qp})= 0$ see Section 7.3). Then Eq. \p{dTh1(c)2} implies $\hat{E}^{--} \wedge \hat{E}^{2+}_{~p } = 0$ and immediately results in the reduction of Eq. \p{dTh1(c)1}: \begin{equation}\label{fgen} det( \hat{h}_{qp}) \not= 0 \qquad \Rightarrow \qquad \cases{ \hat{E}^{++} \wedge \hat{E}^{1-}_{~\dot{q}} = 0 \cr \hat{E}^{--} \wedge \hat{E}^{2+}_{~q } = 0 \cr } \end{equation} The equations \p{fgen} have the same form as the free superstring equations of motion \p{Th1str}, \p{Th2str}. As a result, the r.h.s. of Eq. \p{dTh2(c)} vanishes, $$ det( \hat{h}_{qp}) \not= 0 \qquad \Rightarrow \qquad $$ \begin{equation}\label{dTh2(c)g} \Pi^{\wedge 9}_{\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{m}\underline{n}} \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\nu} }\right) = 0 \end{equation} which coincides with the fermionic equation for the free super--D9--brane \p{dTh2eq}. Then the third line in the equations of motion for the $X^{\underline{m}}$ coordinate fields \p{dXD9} vanishes, as it does in the free super--D9--brane case \p{dX}. As the second line in Eq. \p{dXD9} is zero due to the equations \p{fgen} (e.g.
$\hat{E}^{2+}_{~{q}}\wedge \hat{E}^{2+}_{~{q}} =$ $(\hat{E}^{\pm\pm}\hat{E}^{~2+}_{\pm\pm {q}}) \wedge (\hat{E}^{\mp\mp}\hat{E}^{~2+}_{\mp\mp {q}}) =$ $- 2 \hat{E}^{--} \wedge \hat{E}^{2+}_{~{q}} ~\hat{E}^{~2+}_{-- {q}} = 0$), in the generic case \p{fgen} the equations of motion for the $X$ field \p{dXD9} become $$ det( \hat{h}_{qp}) \not= 0 \qquad \Rightarrow \qquad $$ \begin{equation}\label{dXD9g} J_8 \wedge M^i_2 ~ u_{\underline{m} }^{~i} + dJ_8 \wedge \left( -{1 \over 2} E^{\pm\pm} u^{\mp\mp \underline{n}} F_{\underline{n}\underline{m} } +{1 \over 2} E^{++}u_{\underline{m} }^{--} -{1 \over 2} E^{--} u_{\underline{m} }^{++} \right) = 0 . \end{equation} Contracting equation \p{dXD9g} with the appropriate harmonics \p{harmvD9}, one can split it into three covariant equations \begin{equation}\label{dXD9gi} J_8 \wedge {M}^i_2 = {1 \over 2} dJ_8 \wedge {E}^{\pm\pm} {F}^{\mp\mp i} , \end{equation} \begin{equation}\label{dXD9g++} dJ_8 \wedge E^{++} \left( 1 - {1 \over 2} F^{++ ~--}\right) = 0, \end{equation} \begin{equation}\label{dXD9g--} dJ_8 \wedge E^{--} \left( 1 - {1 \over 2} F^{++ ~--}\right) = 0, \end{equation} where \begin{equation}\label{Fcomp} {F}^{\mp\mp i} \equiv u^{\underline{m}\pm\pm} u^{\underline{n}i} F_{\underline{m}\underline{n}}, \qquad {F}^{++~--} \equiv u^{\underline{m}++} u^{\underline{n}--} F_{\underline{m}\underline{n}} \end{equation} are contractions of the antisymmetric tensor field (gauge field strength) with the harmonics \p{harmvD9}. The l.h.s. of the first equation \p{dXD9gi} has support on the string world volume ${\cal M}^{1+1}$, while its r.h.s. and the equations \p{dXD9g++}, \p{dXD9g--} have support only on the boundary of the string worldsheet $\partial{\cal M}^{1+1}$.
An important observation is that the requirement for the superstring to have a nontrivial boundary $\partial{\cal M}^{1+1}\not= 0$ {\sl implies a specific restriction on the image of the gauge field strength on the boundary of the string worldsheet} \begin{equation}\label{F++--=2} \partial{\cal M}^{1+1}\not= 0 \qquad \Rightarrow \qquad \hat{F}^{++~--} \vert_{\partial{\cal M}^{1+1}} \equiv \hat{u}^{\underline{m}++} \hat{u}^{\underline{n}--} F_{\underline{m}\underline{n}} \vert_{\partial{\cal M}^{1+1}} = 2 . \end{equation} Eq. \p{F++--=2} can be regarded as a 'boundary condition' for the super--D9--brane gauge fields on the 1-dimensional defects provided by the endpoints of the fundamental superstring. Such boundary conditions describe a phase of the coupled system where the open superstring interacts with the D9-brane gauge fields through its endpoints. However, the most general phase, which implies no restriction \p{F++--=2} on the image of the gauge field, is characterized by the equations $dJ_8 \wedge E^{--}=0, ~dJ_8 \wedge E^{++}=0$ and $dJ_8=0$. The latter means the conservation of the superstring current and thus implies that the superstring is closed, \begin{equation}\label{F++--n=2} \hat{F}^{++~--} \vert_{\partial{\cal M}^{1+1}}\not =2 \quad \Rightarrow \quad dJ_8=0 \quad \Rightarrow \quad \partial{\cal M}^{1+1}= 0. \end{equation} The equations decouple and become those of the free D9--brane and of the free closed type $IIB$ superstring. Hence, to arrive at the equations of a nontrivially coupled system of super--D9--brane and open fundamental superstring we have to consider phases related to special 'boundary conditions' for the gauge fields on the string worldvolume or its boundary. The weakest form of such boundary conditions is provided by \p{F++--=2}.
Below we will describe some interesting phases characterized by boundary conditions formulated on the whole superstring worldsheet, but before that some comments on the issues of $\kappa$--symmetry and supersymmetry are in order. \bigskip \subsection{Issues of $\kappa$--symmetry and supersymmetry} \subsubsection{On $\kappa$--symmetry} If one considers the field variations of the form \p{kD9}, \p{dkA} of the free D9-brane $\kappa$-symmetry transformation, one finds that they describe a gauge symmetry of the coupled system as well, provided the parameter $\kappa$ is restricted by the 'boundary conditions' on the two-dimensional defect (superstring worldsheet) \begin{equation}\label{kappab} \kappa^{\underline{\mu}}(x(\xi)) \equiv \hat{\kappa}^{\underline{\mu}} (\xi ) =0. \end{equation} Thus we have a counterpart of the $\kappa$-symmetry inherent to the host brane (D9-brane) in the coupled system. As the defect (string worldsheet) is a subset of measure zero in the 10-dimensional space (D9-brane worldvolume), we can still use this restricted $\kappa$--symmetry to remove half of the degrees of freedom of the fermionic fields all over the D9-brane worldvolume except for the defect. At the level of Noether identities this 'restricted' $\kappa$--symmetry is reflected by the fact that half of the fermionic equations \p{dTh1(c)} have nonzero support on the worldsheet only. For a system of low-dimensional intersecting branes and open branes ending on branes, which does not include the super--D9--brane or another space--time filling brane, we will encounter an analogous situation where the $\kappa$--symmetries related to both branes should hold outside the intersection. \bigskip However, we should note that, in the generic case \p{fgen}, all $32$ variations of the Grassmann coordinates result in nontrivial equations. Thus we have {\sl no true counterpart} of the free brane $\kappa$-symmetry.
Let us recall that the latter results in the dependence of half of the fermionic equations of the free superbrane. It is usually identified with the part (one-half) of the target space supersymmetry preserved by the BPS state describing the brane (e.g. as a solitonic solution of the supergravity theory). \bigskip \subsubsection{Bosonic and fermionic degrees of freedom and supersymmetry of the decoupled phase} As the general phase of our coupled system \p{fgen}, \p{F++--n=2} describes the decoupled super--D9--brane and closed type $IIB$ superstring, it must exhibit the complete $D=10$ type $IIB$ supersymmetry. Supersymmetry (in a system with dimension $d >1$) requires the coincidence of the numbers of bosonic and fermionic degrees of freedom. We find it instructive to consider how such a coincidence can be verified starting from the action of the coupled system, and to compare the verification with the one for the case of free branes. \bigskip In the free super--D9--brane case the $32$ fermionic fields $\Theta^{\underline{\mu}I}$ can be split into $16$ physical and $16$ unphysical (pure gauge) ones. For our choice of the sign of the Wess-Zumino term \p{LWZ} they can be identified with $\Theta^{\underline{\mu}2}$ and $\Theta^{\underline{\mu}1}$, respectively. Then one can consider the equations of motion \p{dTh2eq} as restrictions on the physical degrees of freedom (collected in $\Theta^{\underline{\mu}2}$), while the pure gauge degrees of freedom ($\Theta^{\underline{\mu}1}$) can be removed completely by the $\kappa$--symmetry \p{kD9}, i.e. we can fix the gauge $\Theta^{\underline{\mu}1}=0$ (see \cite{schw1}).
\bigskip A similar situation appears when one considers the free superstring model, where one can identify the physical degrees of freedom with the set $ \hat{\Theta}^{1-}_{~\dot{q}} = \hat{\Theta}^{\underline{\mu}1} \hat{v}^{~-}_{\underline{\mu}\dot{q}}, \hat{\Theta}^{2+}_{~{q}} = \hat{\Theta}^{\underline{\mu}2} \hat{v}^{~+}_{\underline{\mu}{q}} $ \p{analb}, while the remaining components $ \hat{\Theta}^{1+}_{~{q}} = \hat{\Theta}^{\underline{\mu}1} \hat{v}^{~+}_{\underline{\mu}{q}}, ~\hat{\Theta}^{2-}_{~\dot{q}} = \hat{\Theta}^{\underline{\mu}2} \hat{v}^{~-}_{\underline{\mu}\dot{q}} $ are pure gauge degrees of freedom with respect to the $\kappa$-symmetry whose irreducible form is given by \p{kappastr} (see \p{kappag}). To calculate the number of degrees of freedom we have to remember that \begin{itemize} \item pure gauge degrees of freedom are removed from the consideration {\sl completely}, \item the solution of second order equations of motion (appearing as a rule for bosonic fields, i.e. $\hat{X}^i(\xi )= \hat{X}^i(\tau, \s ) $ \p{analb}) for $n$ physical variables (extracted e.g. by fixing all the gauges) is characterized by $2n$ independent functions, which can be regarded as initial data for coordinates ($\hat{X}^i(0, \s )$) and momenta (or velocities $\partial_\tau \hat{X}^i( 0, \s ) $), \item the general solution of first order equations (appearing as a rule for fermions, e.g.
$\hat{\Theta}^{1-}_{\dot{q}}(\tau, \s ) $, $\hat{\Theta}^{2+}_{{q}}(\tau, \s ) $) is characterized by only $n$ functions, which can be identified with the initial data for the coordinates ($\hat{\Theta}^{1-}_{\dot{q}}(0, \s ) $, $\hat{\Theta}^{2+}_{{q}}(0, \s ) $), which are identical to their momenta in this case. \end{itemize} In this sense it is usually stated that $n$ physical (non pure gauge) fields satisfying second order equations of motion {\sl carry $n$ degrees of freedom} (e.g. for $\hat{X}^i(\xi )$ ~~$n=(D-2)=8$), while $n$ physical fields satisfying first order equations of motion {\sl carry $n/2$ degrees of freedom} (e.g. for $\hat{\Theta}^{1-}_{\dot{q}}(\tau, \s ) $, $\hat{\Theta}^{2+}_{{q}}(\tau, \s )$ ~~$n/2 = 2(D-2)/2 = 16/2=8$). This provides us with the same value $8$ for the number of bosonic and fermionic degrees of freedom for both the free super--D9--brane and the free type $II$ superstring ($8_{B}+8_{F}$). \bigskip If one starts from the action of a coupled system similar to \p{SD9+IIB}, the counting should be performed in a slightly different manner because, as discussed above, {\sl we have no true $\kappa$--symmetry} in the general case. We still count $8$ physical bosonic degrees of freedom related to the super--D9--brane gauge field $A_{\underline{m}}(x)$ living in the whole bulk (super--D9--brane worldvolume), and $8$ physical bosonic degrees of freedom living on the 'defect' (superstring worldsheet) related to the orthogonal oscillations of the string $\hat{X}^i(\xi )$.
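For comparison with the coupled case, the free-superstring counting just described can be condensed into the schematic balance (a mere restatement of the rules listed above, not an additional input) $$ n_B = \#\{\hat{X}^i\} = D-2 = 8~, \qquad n_F = {1 \over 2}\, \#\{\hat{\Theta}^{1-}_{~\dot{q}}, \hat{\Theta}^{2+}_{~{q}}\} = {16 \over 2} = 8~, $$ in accordance with the $8_{B}+8_{F}$ balance quoted above.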
The 32 fermionic coordinate fields ${\Theta}^{\underline{\mu}1}(x), {\Theta}^{\underline{\mu}2}(x)$ are restricted here by two sets of 16 equations \p{dTh2(c)}, \p{dTh1(c)}, with one set \p{dTh2(c)} involving the fields in the bulk (and also the source term with support on the worldsheet) and the other \p{dTh1(c)} with support on the worldsheet only. As the field theoretical degrees of freedom are related to the general solution of the homogeneous equations (in the light of the correspondence with the initial data described above), the presence (or absence) of a source with local support on the right-hand side of the coupled equations is inessential, and we can, in analogy with the free D9-brane case, treat the first equation \p{dTh2(c)} as a restriction on $16$ physical fermionic fields (say ${\Theta}^{\underline{\mu}2}(x)$) {\sl in the bulk}. As mentioned above, the coupled system has a D9-brane-like $\kappa$--symmetry with the parameter ${\kappa}^{\underline{\mu}}(x)$ restricted by the requirement that it should vanish on the defect, $\hat{\kappa}^{\underline{\mu}}(\xi ) \equiv {\kappa}^{\underline{\mu}}\left(x(\xi)\right)=0$. Thus we can use this $\kappa$--symmetry to remove the remaining 16 fermionic fields (say ${\Theta}^{\underline{\mu}1}(x)$) all over the bulk except at the defect.
Thus all over the bulk, {\sl including} the defect, we have 8 bosonic fields, which are the components of $A_{\underline{m}}(x)$ modulo gauge symmetries, and $8=16/2$ fermionic fields, which can be identified with the on-shell content of ${\Theta}^{\underline{\mu}2}(x)$. On the defect we have in addition the 16 components $\hat{\Theta}^{\underline{\mu}1}(\xi )$, which are restricted (in the general case) by 16 first order equations \p{fgen} (or \p{Th1str}, \p{Th2str}) and, thus, carry $8$ degrees of freedom. This is the same number of degrees of freedom as the one of the orthogonal bosonic oscillations of the superstring, $\hat{X}^i (\xi) = \hat{X}^{\underline{m}}(\xi ) \hat{u}^i_{\underline{m}} (\xi )$. This explains why our approach to the coupled system allows one to describe a decoupled supersymmetric phase. It should be remembered (see Section 3.4) that the presence of boundaries breaks at least half of the target space supersymmetry. \subsection{Phases implying restrictions on the gauge fields} As mentioned in Section 8.1, the open fundamental superstring can be described only when some restrictions on the image of the gauge field are imposed. The simplest restriction is given by Eq. \p{F++--=2}. But it is possible to consider phases where \p{F++--=2} appears as a consequence of stronger restrictions which hold on the whole defect (string worldvolume), and not only on its boundary. An interesting property of such phases is that an interdependence of the fermionic equations of motion emerges there. Such a dependence can be regarded as an additional 'weak' counterpart of the $\kappa$-symmetry of the free superbrane actions.
\subsubsection{Phases with less than $8$ dependent fermionic equations} The dependence of fermionic equations arises naturally when the matrix $\hat{h}_{pq}$ is degenerate: \begin{equation}\label{fdeg} det( \hat{h}_{qp}) = 0 \qquad \Leftrightarrow \qquad r_h \equiv rank( \hat{h}_{qp}) < 8 . \end{equation} Then $ \hat{h}_{qp}$ may be represented through an $8 \times r_h$ rectangular matrix $S_q^{~I}$, \begin{equation}\label{fdegdec} \hat{h}_{qp} = (\pm )S_q^{~I} S_p^{~I}, \qquad q,p=1,\ldots 8, \qquad I= 1,\ldots r_h, \qquad r_h < 8 , \end{equation} and Eq. \p{dTh1(c)2} implies only $r_h<8$ nontrivial relations \begin{equation}\label{ffeq} 0< r_h < 8 \qquad \Leftrightarrow \qquad \hat{E}^{--} \wedge \hat{E}^{2+}_{~q } S_q^{~I} = 0, \qquad I= 1,\ldots r_h , \end{equation} while the remaining $8 - r_h$ fermionic equations are dependent. The general solution of Eq. \p{ffeq} differs from the expression for the fermionic equations of the free superstring $\hat{E}^{--} \wedge \hat{E}^{2+}_{~q }=0$ by the presence of $(8 - r_h)$ arbitrary fermionic two-forms (actually functions, as on the worldsheet any two-form is proportional to the volume $\hat{E}^{++} \wedge \hat{E}^{--}$) \begin{equation}\label{ffeqs} 0< r_h < 8 \qquad \Leftrightarrow \qquad \hat{E}^{--} \wedge \hat{E}^{2+}_{~q } = R^{~\tilde{J}}_q \hat{\Psi}_2{}_+^{~\tilde{J}} , \qquad \tilde{J}= 1,\ldots 8-r_h , \end{equation} where the $8 \times (8- r_h)$ matrix $R^{~\tilde{J}}_q$ is composed of $8-r_h$ $SO(8)$ '$s$--vectors' which complete the set of $r_h$ $SO(8)$ $s$--vectors $S^{~{I}}_q$ to a complete basis of the $8$--dimensional space, i.e. \begin{equation}\label{RS} R^{~\tilde{J}}_q S^{~I}_q = 0 . \end{equation} On the other hand, due to Eqs.
\p{fdegdec}, \p{RS}, the $R^{~\tilde{J}}_q$ are 'null-vectors' of the matrix $h_{pq}$, \begin{equation}\label{hR=0} \hat{h}_{qp} R^{~\tilde{J}}_p = (\pm )S_q^{~I} \left( S_p^{~I} R^{~\tilde{J}}_p \right) = 0 . \end{equation} Thus, they may be used to write down the explicit form of the $8-r_h$ dependent fermionic equations \begin{equation}\label{depfe} rank( \hat{h}_{qp}) = r_h < 8 \qquad \Rightarrow \qquad \left(\hat{E}^{--} \wedge \hat{E}^{2+}_{~q }\right) \hat{h}_{qp} R^{~\tilde{J}}_p \equiv 0 . \end{equation} \subsubsection{Nonperturbative phase with 8 dependent fermionic equations} The case with the maximal number $8$ of dependent fermionic equations appears when the matrix $h_{qp}$ vanishes at the defect ($\hat{h}_{qp} = 0$). As the complete matrix $h_{\underline{\alpha}}^{~\underline{\b}}$ \p{h16}, \p{Id1}, \p{Id2} is Lorentz group valued \p{hinSpin} and, hence, nondegenerate ($det ( h_{\underline{\alpha}}^{~\underline{\b}})\not= 0$), this implies that both antidiagonal $8 \times 8$ blocks $h^{--}_{q\dot{p}}$, $h^{++}_{\dot{q}p}$ are nondegenerate \begin{equation}\label{fmax} \hat{h}_{qp} = 0 \qquad \Rightarrow \qquad det(\hat{h}^{--}_{q\dot{p}}) \not=0, \qquad det(\hat{h}^{++}_{\dot{q}p}) \not=0. \end{equation} In this case the fermionic equations \p{dTh1(c)2} are satisfied identically and thus we arrive at a system of $16+8=24$ nontrivial fermionic equations. The dependence of Eq. \p{dTh1(c)2} for the gauge field subject to the 'boundary conditions' \p{fmax} (see \p{Id1}, \p{Id2}) can be regarded as a counterpart of 8 $\kappa$--symmetries. Thus it could be expected that ground state solutions corresponding to BPS states preserving $1/4$ (i.e. $8$) of the $32$ target space supersymmetries should appear precisely in this phase. It is important that the phase \p{fmax} is {\sl nonperturbative} in the sense that it has no weak gauge field limit.
Indeed, in the limit $F_{\underline{m}\underline{n}} \rightarrow 0$ the spin-tensor $h_{\underline{\nu}}^{~\underline{\mu}}$ \p{hinSpin}, \p{Id1} should tend to unity, $h_{\underline{\nu}}^{~\underline{\mu}}= \d_{\underline{\nu}}^{~\underline{\mu}} + {\cal O}(F)$. As $\hat{v}^{-\underline{\mu}}_{{p} } \hat{v}_{\underline{\mu}q }^{~+}=\d_{pq}$ (see Appendix A), the same is true for the $SO(8)$ $s$--tensor $h_{pq}$: $h_{pq}= \d_{pq} + {\cal O}(F)$. Thus the condition \p{fmax} cannot be obtained in the weak field limit. This reflects the fact that the nontrivial coupling of the gauge field to the string endpoints is described by this phase. Another way to justify the above statements is to use \p{Id1}, \p{Id2} with the triangular matrix \begin{equation} \label{h16tr} \hat{h}_{\underline{\b}}^{~\underline{\alpha}} \equiv \hat{v}_{\underline{\b}}^{~\underline{\nu}} \hat{h}_{\underline{\nu}}^{~\underline{\mu}} \hat{v}_{\underline{\mu}}^{~\underline{\alpha}} \equiv \left( \matrix{ 0 & \hat{h}^{--}_{q\dot{p}} \cr \hat{h}^{++}_{\dot{q}p} & \tilde{\hat{h}}_{\dot{q}\dot{p}} \cr } \right) \end{equation} and the explicit $SO(1,1)\times SO(8)$ invariant representation for the $\sigma$--matrices (see Eq. \p{gammarep} in Appendix A) to find that $\hat{h}_{pq} = 0$ implies (see Appendix C) \begin{equation}\label{F++--=2!} \hat{F}^{++~--} \equiv \hat{u}^{\underline{m}++} \hat{u}^{\underline{n}--} \hat{F}_{\underline{m}\underline{n}} = 2. \end{equation} Thus we see again that there is no weak gauge field limit, as the pull--back of at least one of the gauge field strength components to the string worldsheet has a finite value in the phase \p{fmax}. On the other hand, Eq. \p{F++--=2!} demonstrates that the condition \p{F++--=2} holds on the boundary of the worldsheet.
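The absence of a weak gauge field limit can be illustrated in a two-dimensional toy model. The sketch below is ours, not part of the original derivation: it assumes a Cayley-type relation $k=(\eta+F)(\eta-F)^{-1}$ between the Lorentz-valued matrix and the field strength (the precise convention of \p{Id2} may differ by signs and ordering) and keeps only the light-cone $2\times 2$ block of the metric \p{metr}. The map is Lorentz-valued, tends to unity as $F\to 0$, and degenerates exactly where the relevant component of $F$ reaches the value $2$, in line with \p{F++--=2!}.

```python
import sympy as sp

a = sp.symbols('a', real=True)            # the single F^{++|--}-type component
eta = sp.Matrix([[0, 2], [2, 0]])         # light-cone 2x2 block of the metric (metr)
F = sp.Matrix([[0, a], [-a, 0]])          # antisymmetric field strength block

# hypothetical Cayley-type map relating the Lorentz matrix k and F (cf. (Id2))
k = (eta + F) * (eta - F).inv()

# k is a pure boost diag((2+a)/(2-a), (2-a)/(2+a)) in this block
assert sp.simplify(k - sp.diag((2 + a)/(2 - a), (2 - a)/(2 + a))) == sp.zeros(2, 2)
# k is Lorentz-valued: k eta k^T = eta
assert sp.simplify(k * eta * k.T - eta) == sp.zeros(2, 2)
# weak-field limit: k -> 1 as F -> 0 ...
assert sp.simplify(k.subs(a, 0) - sp.eye(2)) == sp.zeros(2, 2)
# ... but the boost blows up as a -> 2: the phase F^{++|--} = 2 is not
# reachable perturbatively from F = 0
assert sp.limit(k[0, 0], a, 2, '-') == sp.oo
```

The divergence of the boost parameter at $a=2$ is the toy-model counterpart of the statement that the phase \p{fmax} has no weak field limit.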
Thus one can expect that this phase provides a natural possibility to describe the nontrivial coupling of the {\sl open} fundamental superstring to the D-brane gauge field. As we will prove below by analysing the field equations, this is indeed the case. \bigskip In the 'nonperturbative phase' \p{fmax} the fermionic equation \p{dTh1(c)2} is satisfied identically and thus we have only one nontrivial fermionic equation, \p{dTh1(c)1}, on the string worldsheet. Using the consequences \p{hdec2IIB} of Eq. \p{hEi=0}, the 2-form equation \p{dTh1(c)1} can be decomposed as \begin{equation}\label{dTh1(c)11} \hat{E}^{++} \wedge \hat{E}^{--} ( \hat{E}^{1~-}_{--\dot{q}} + \hat{E}^{2~+}_{++q } \hat{h}^{++}_{\dot{q}q}) = 0 . \end{equation} We find that it contains eight 0-form fermionic equations \begin{equation}\label{dTh1(c)12} \hat{E}^{1~-}_{--\dot{q}}= - \hat{E}^{2~+}_{++{q} } \hat{h}^{++}_{\dot{q}q}. \end{equation} Another version of Eq. \p{dTh1(c)12} is \begin{equation}\label{dTh1(c)14} J_8 ~~ \left({E}^{1~-}_{--\dot{q}} + {E}^{2~+}_{++{q} } {h}^{++}_{\dot{q}q}\right) =0. \end{equation} In the linearized approximation with respect to all fields {\sl except for the gauge field strength} $F$ ($h^{++}_{\dot{q}q}= {\cal O} (F)$) the equation \p{dTh1(c)12} becomes \begin{equation}\label{dTh1(c)13} \partial_{--} \hat{\Theta}^{-1}_{\dot{q}}= - \partial_{++} \hat{\Theta}^{+2}_{{q}} \hat{h}^{++}_{\dot{q}q}.
\end{equation} \bigskip One should remember that the free superstring fermionic equations \p{Th1str}, \p{Th2str}, as well as the equations \p{dTh1(c)1}, \p{dTh1(c)2} in the generic phase \p{fgen}, imply $ \det (\hat{h}_{qp})\not= 0 ~~ \Rightarrow ~~ \hat{E}^{1~-}_{--\dot{q}}=0, ~~ \hat{E}^{2~+}_{++{q} }=0, $ whose linearized limit is $ \partial_{--} \hat{\Theta}^{-1}_{\dot{q}}=0, ~~ \partial_{++} \hat{\Theta}^{+2}_{{q}}= 0 $ with chiral fields as solutions: $ \hat{\Theta}^{-1}_{\dot{q}}=\hat{\Theta}^{-1}_{\dot{q}} (\xi^{(++)} ), ~~ \hat{\Theta}^{+2}_{{q}}=\hat{\Theta}^{+2}_{{q}} (\xi^{(--)} ) $. The rest of the fermionic equations \p{dTh2(c)} \begin{equation}\label{dTh2(c)m} \Pi^{\wedge 9}_{\underline{m}} \sqrt{|\eta + F|} (\eta + F)^{-1~\underline{m}\underline{n}} \s_{\underline{n}}{}_{\underline{\mu}\underline{\nu}} \wedge \left( d\Theta^{2\underline{\nu} }- d\Theta^{1\underline{\rho}} h_{\underline{\rho}}^{~\underline{\nu} }\right) = - 2 J_8 \wedge E^{--} \wedge E^{2+}_{~{q}} v_{\underline{\mu}q}^{~+} \end{equation} $$ \equiv - 2 J_8 \wedge E^{++} \wedge E^{1-}_{~\dot{q}} (h^{++})^{-1}_{q\dot{q}} v_{\underline{\mu}q}^{~+} $$ have a nontrivial source localized on the superstring worldsheet. The source is proportional to the expression which vanishes in the free superstring case and in the generic phase, but remains nonzero in the phase \p{fmax}. In the present case the relations \p{F++--=2} hold (see Eq. \p{F++--=2!}). Using these relations, straightforward but tedious calculations demonstrate that the projections of Eqs.
\p{dXD9} for $\d X^{\underline{m}}$ onto the harmonics $u_{\underline{m}}^{\pm\pm}$ vanish identically here (these are Noether identities for the reparametrization symmetry of the superstring worldvolume), while the projection onto $u_{\underline{m}}^{~i}$ results in \begin{equation}\label{dXi} J_8 \wedge {M}_2^{i} = - {1 \over 2} dJ_8 \wedge (E^{++} F^{--i} + E^{--} F^{++ i}) , \end{equation} where $ F^{++ i}, F^{--i}$ and ${M}_2^{i}$ are defined in Eqs. \p{Fcomp} and \p{M2}, respectively. Eq. \p{dXi} differs from the one of the free superstring by the nonvanishing r.h.s., which has support {\sl on the boundary} of the string worldsheet and describes the interaction with the super-D9-brane gauge fields. The Born-Infeld equations have the form \p{delA(c)} (with \p{delF(c)}, \p{delQ8(c)} taken into account) and contain a nonvanishing source term $-dJ_8$. Thus, as expected, the phase \p{fmax} describes the open fundamental superstring interacting with the super-D9-brane. The ends of the superstring carry the charge of the super-D9-brane gauge field and provide the source for the supersymmetric Born-Infeld equation. Note that the source of the fermionic equations is localized on the whole worldsheet. This property is specific for a system including a space--time filling brane. \bigskip \section{Conclusion and outlook} \defC.\arabic{equation}{\thesection.\arabic{equation}} \setcounter{equation}{0} In this paper we present the derivation of a complete set of supersymmetric equations for the coupled system consisting of the super--D9--brane and the open fundamental superstring 'ending' on the D9-brane. To this end we construct a current distribution form $J_8$ which allows us to write the action functionals of the superstring and the D9-brane in similar forms, i.e. as integrals of 10-forms over 10-dimensional space, after the Grassmann coordinates of the superstring are identified with the images of the Grassmann coordinate fields of the super--D9--brane.
We prove the supersymmetric invariance of $J_8$. The proposed way to construct the action for the coupled system of a superstring and a space--time filling brane requires the use of the moving frame (Lorentz harmonic) actions \cite{BZ,bpstv,bsv} for the superstring. The reason is that their Lagrangian forms (in distinction to the ones of the standard action \cite{gsw}) can be regarded as pull--backs of D-dimensional differential 2--forms and, thus, the moving frame actions for the free superstring can be written easily as an integral over a D--dimensional manifold by means of the current density $J_8$. It is just the existence of the moving frame formulation that may motivate the {\sl formal} lifting of the Lagrangian forms of the standard actions to D dimensions and their use for the description of the interaction with space--time filling branes and/or supergravity (see \cite{bbs} for bosonic branes). We obtain a complete supersymmetric system of the equations of motion for the coupled system of superstring and super--D9--brane. Different phases of the coupled system are found. One of them can be regarded as generic, but describes the decoupled system of the closed superstring and the super-D9-brane, while another corresponds to a 'singular' and nonperturbative 'boundary condition' for the gauge field on the worldsheet. The latter describes the coupled system of the {\sl open} superstring interacting with the D9-brane and implies an interdependence of the fermionic equations of motion which can be regarded as a weak counterpart of the (additional) $\kappa$--symmetry. \bigskip The method proposed in \cite{BK99} and elaborated in this paper may also be applied to the construction of the action for a coupled system containing any number $N_{2}$ of fundamental superstrings and any number $N_{2k}$ of type $IIB$ super-Dp-branes ($p=2k-1$) interacting with the super-D9-brane.
In the action of such a coupled system \begin{equation}\label{acDp0} S = \int_{{\cal M}^{1+9}} \left( {\cal L}_{10} + \sum_{k=1}^{4}\sum_{r_{2k}=1}^{N_{2k}} J^{(r_{2k})}_{10-2k} \wedge {\cal L}^{(r_{2k})}_{2k} \right) + \end{equation} $$ + \sum_{s=1}^{N_{2}} \int_{{\cal M}^{1+9}} J^{(s)}_8 \wedge \left( {\cal L}^{(s)}_2 + dA \right) $$ ${\cal L}_{10}= {\cal{L}}^0_{10} + {\cal{L}}^1_{10} + {\cal{L}}^{WZ}_{10}$ is the Lagrangian form of the super-D9-brane action \p{SLLL}--\p{L1}, \p{LWZ}. ${\cal L}^{(s)}_2$ represents the Lagrangian form \p{acIIB} of the $s$-th fundamental superstring lifted to the 9-brane worldvolume as in \p{SD9+IIB}, and $J_8^{(s)}$ is the local supersymmetric current density \p{J81}, \p{J8Pi} of the $s$-th fundamental superstring. The latter is constructed with the help of the induced map of the worldsheet into the 10-dimensional worldvolume of the super--D9--brane. Finally, $J^{(r)}_{10-2k}$ and ${\cal L}^{(r)}_{2k}$ are the supersymmetric current density and the Lagrangian form of a first order action for the $r$-th type $IIB$ super-Dp-brane with $p=2k-1=1,3,5,7$. The supersymmetric current density $J^{(r)}_{10-2k}$ $$ J^{(r)}_{10-2k} = (dx)^{\wedge 10-2k}_{{n}_1 \ldots {n}_{2k}} J^{{n}_1 \ldots {n}_{2k}}( x) = $$ $$ ={ 1 \over(10-2k)! (2k)!} \e_{{m}_1 \ldots {m}_{10-2k} n_1 \ldots {n}_{2k}} d{x}^{{m}_1 } \wedge \ldots \wedge dx^{{m}_{10-2k}} \times $$ $$ \times \int_{{\cal M}^{1+(2k-1)}} d\hat{x}^{(r) {n}_1} (\zeta ) \wedge \ldots \wedge d\hat{x}^{(r){n}_{2k}} (\zeta ) \d^{10} \left(x - \hat{x}^{(r)} (\zeta )\right)\equiv $$ \begin{equation}\label{J10-2k} {} \end{equation} $$ = { 1 \over (10-2k)!
(2k)!} \e_{\underline{m}_1 \ldots \underline{m}_{10-2k} \underline{n}_1 \ldots \underline{n}_{2k} } { \Pi^{\underline{m}_1 } \wedge \ldots \wedge \Pi^{\underline{m}_{10-2k} } \over \det (\Pi_{{l}}^{~\underline{k}})} \times $$ $$ \times \int_{{\cal M}^{1+(2k-1)}} \hat{\Pi}^{(r)\underline{n}_1}(\zeta ) \wedge \ldots \wedge \hat{\Pi}^{(r)\underline{n}_{2k}} (\zeta ) \d^{10} \left( x - \hat{x}^{(r)} (\zeta)\right) $$ is defined by the induced map $x^m=\hat{x}^{(r)m} (\zeta)$ ($m=0,\ldots 9$) of the $r$-th Dp-brane worldvolume into the 10-dimensional worldvolume of the D9-brane, given by \begin{equation}\label{indmap} \hat{X}^{(r)\underline{m}} (\zeta) = {X}^{\underline{m}}\left( \hat{x}^{(r)} (\zeta)\right) \qquad \leftrightarrow \qquad \hat{x}^{(r)m} (\zeta)= x^m \left(\hat{X}^{(r)\underline{m}} (\zeta) \right). \end{equation} The Lagrangian form ${\cal L}_{2k}$ of the first order action for the free super-Dp-brane with $p=2k-1$ can be found in \cite{bst}. Certainly, the form of the interaction between branes, which can be introduced into ${\cal L}^{(r)}_{2k}$ through boundary terms, requires a separate consideration (e.g. one of the important points is the interaction with the D9-brane gauge field through the Wess-Zumino terms of the Dp-branes). We hope to return to these issues in a forthcoming publication. \bigskip It is worth mentioning that the {\sl super-D9-brane Lagrangian form ${\cal L}_{10}$ can be omitted from the action of the interacting system without loss of self-consistency} (cf. \cite{BK99}). Thus one may obtain a supersymmetric description of the coupled system of fundamental superstrings and lower dimensional super-Dp-branes ($p=2k-1<9$), e.g. of the system of $N$ coincident super-D3-branes, which is of interest for applications to gauge theory \cite{witten96,witten97}, as well as in the context of the Maldacena conjecture \cite{Maldacena}.
The only remaining trace of the D9-brane is the existence of a map \p{indmap} of the super-Dp-brane ($p<9$) worldvolume into a 10-dimensional space whose coordinates are inert under type $II$ supersymmetry. Thus the system contains an auxiliary all-enveloping 9-brane ('9-brane dominance'). This means that we do not really need a space--time filling brane as a dynamical object and, thus, may be able to extend our approach to the $D=10$ type $IIA$ and $D=11$ cases, where such dynamical branes are not known. \bigskip Another interesting direction for future study is to replace the action of the space--time filling brane by a counterpart of the group-manifold action for the corresponding supergravity theory (see \cite{grm}). Such an action also implies a map of a D-dimensional bosonic surface into a space with $D$ bosonic dimensions, as the space--time filling brane does. Thus we can define an induced map of the worldvolumes of superstrings and lower branes into the $D$--dimensional bosonic surface involved in the group-manifold action and construct the covariant action for the coupled system of intersecting superbranes {\sl and supergravity}. In this respect the problem of constructing counterparts of the group-manifold action for $D=10$ type~$II$ supergravity \cite{lst} and duality-symmetric $D=11$ supergravity \cite{bst} seems to be of particular interest. \bigskip \section*{Acknowledgements} The authors are grateful to D. Sorokin and M. Tonin for their interest in this paper and useful conversations, and to R. Manvelyan and G. Mandal for relevant discussions. One of the authors (I.B.) thanks the Austrian Science Foundation for support within the project {\bf M472-TPH}. He acknowledges partial support from the INTAS Grant {\bf 96-308} and the Ukrainian GKNT grant {\bf 2.5.1/52}. \newpage \section*{Appendix A.
Properties of Lorentz harmonic variables} \setcounter{equation}{0} \defC.\arabic{equation}{A.\arabic{equation}} Here we collect the properties of the Lorentz harmonic variables $u_{\underline{m}}^{~\underline{a}}$, $v_{\underline{\mu}}^{~\underline{\alpha}}$ parameterizing the coset $$ {SO(1,9) \over SO(1,1) \otimes SO(8)} , $$ which are used in geometric actions like \p{acIIB} for $D=10$ superstring models. \subsection*{A1. Vector harmonics} In any number of space--time dimensions the Lorentz harmonic variables \cite{sok} which are appropriate to adapt the target space vielbein to the string worldvolume \cite{bpstv} are defined as an $SO(1,D-1)$--valued $D \times D$ matrix \begin{equation}\label{harm} u^{\underline{a}}_{\underline{m}} \equiv \left(u^{0}_{\underline{m}}, u^{i}_{\underline{m}}, u^{9}_{\underline{m}} \right) \equiv \left( {u^{++}_{\underline{m}} + u^{--}_{\underline{m}} \over 2}, u^{i}_{\underline{m}}, {u^{++}_{\underline{m}} - u^{--}_{\underline{m}} \over 2} \right) \qquad \in \qquad SO(1,D-1), \end{equation} \begin{equation}\label{harmor} \Leftrightarrow \qquad u^{\underline{a}}_{\underline{m}} u^{\underline{b}\underline{m}} = \eta^{\underline{a}\underline{b}} \equiv {\rm diag} (+1,-1,\ldots ,-1).
\end{equation} In the light-like notation \begin{equation}\label{harmpmpm} u^{0}_{\underline{m}}= {u^{++}_{\underline{m}} + u^{--}_{\underline{m}} \over 2}, \qquad u^{9}_{\underline{m}} = {u^{++}_{\underline{m}} - u^{--}_{\underline{m}} \over 2} \end{equation} the flat Minkowski metric acquires the form \begin{equation}\label{metr} u^{\underline{a}}_{\underline{m}} u^{\underline{b}\underline{m}} = \eta^{\underline{a}\underline{b}} \equiv \left(\matrix{ 0 & 2 & 0 \cr 2 & 0 & 0 \cr 0 & 0 & - I_{8 \times 8} \cr}\right) , \end{equation} and the orthogonality conditions read \cite{sok} $$ u^{++}_{\underline{m}} u^{++\underline{m}} = 0 , \qquad u^{--}_{\underline{m}} u^{--\underline{m}} =0, \qquad u^{++}_{\underline{m}} u^{--\underline{m}} =2 , $$ $$ u^{i}_{\underline{m}} u^{++\underline{m}} = 0 , \qquad u^{i}_{\underline{m}} u^{--\underline{m}} =0 , \qquad u^{i}_{\underline{m}} u^{j\underline{m}} = - \d^{ij}. $$ \subsection*{A2. Spinor Lorentz harmonics} For supersymmetric strings and branes we need to introduce the matrix $v_{\underline{\mu}}^{~\underline{\alpha}}$ which takes its values in the double covering $Spin(1,D-1)$ of the Lorentz group $SO(1,D-1)$ and provides the (minimal) spinor representation of the pseudo-rotation whose vector representation is given by the vector harmonics $u$ (spinor Lorentz harmonics \cite{B90,gds,BZ}).
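Before turning to the spinor harmonics, note that the vector-harmonic relations above are easy to verify numerically. A sketch (the random generator parametrization is ours): exponentiating a Lorentz-algebra element gives an $SO(1,9)$ matrix, whose light-cone columns $u^{\pm\pm}=u^0\pm u^9$ then obey the stated orthogonality conditions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
D = 10
eta = np.diag([1.0] + [-1.0] * (D - 1))

# Lorentz generator: w = eta a with a antisymmetric satisfies w^T eta + eta w = 0
a = 0.2 * rng.standard_normal((D, D)); a = a - a.T
u = expm(eta @ a)                         # u in SO(1,9)
assert np.allclose(u.T @ eta @ u, eta)    # Eq. (harmor)

dot = lambda x, y: x @ eta @ y            # Minkowski scalar product
u_pp = u[:, 0] + u[:, 9]                  # u^{++} = u^0 + u^9
u_mm = u[:, 0] - u[:, 9]                  # u^{--} = u^0 - u^9
u_i = u[:, 1:9]                           # u^i, i = 1,...,8

assert np.isclose(dot(u_pp, u_pp), 0)     # u^{++} is light-like
assert np.isclose(dot(u_mm, u_mm), 0)     # u^{--} is light-like
assert np.isclose(dot(u_pp, u_mm), 2)     # u^{++}.u^{--} = 2
assert np.allclose(u_i.T @ eta @ u_pp, 0)
assert np.allclose(u_i.T @ eta @ u_mm, 0)
assert np.allclose(u_i.T @ eta @ u_i, -np.eye(8))   # u^i.u^j = -delta^{ij}
```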
The latter fact implies the invariance of the gamma--matrices under the Lorentz transformations described by the $u$ and $v$ harmonics \begin{equation}\label{harmsg} v^{\underline{\alpha}}_{\underline{\mu}} \qquad \in \qquad Spin(1,D-1) \qquad \Leftrightarrow \qquad u^{\underline{a}}_{\underline{m}} \Gamma^{\underline{m}}_{\underline{\mu}\underline{\nu}} = v^{\underline{\alpha}}_{\underline{\mu}} \Gamma^{\underline{a}}_{\underline{\alpha}\underline{\b}} v^{~\underline{\b}}_{\underline{\nu}} , \qquad u^{\underline{a}}_{\underline{m}} \Gamma_{\underline{a}}^{\underline{\alpha}\underline{\b}} = v^{\underline{\alpha}}_{\underline{\mu}} \Gamma_{\underline{m}}^{\underline{\mu}\underline{\nu}} v^{~\underline{\b}}_{\underline{\nu}}. \end{equation} \bigskip In this paper we use the $D=10$ spinor Lorentz harmonic variables $v_{\underline{\mu}}^{~\underline{\alpha}}$ parameterizing the coset $ Spin(1,9) /[Spin(1,1) \times SO(8)] $ \cite{BZ}, which are adequate for the description of $D=10$ superstrings.
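The structure of \p{harmsg} can be illustrated by a lower-dimensional analogue (our own toy example, not part of the text): for $D=3$ one has $Spin(1,2)=SL(2,R)$, the sigma-matrices can be chosen real symmetric, and the congruence action of a spinor-representation matrix $g$ reproduces a vector Lorentz matrix.

```python
import numpy as np
from scipy.linalg import expm

# D=3 sigma-matrices: real symmetric, det(x.sigma) = x.x (mostly-minus metric)
s0 = np.eye(2)
s1 = np.diag([1.0, -1.0])
s2 = np.array([[0.0, 1.0], [1.0, 0.0]])
sig = [s0, s1, s2]
eta = np.diag([1.0, -1.0, -1.0])

rng = np.random.default_rng(3)
a = rng.standard_normal((2, 2)); a = a - 0.5 * np.trace(a) * np.eye(2)
g = expm(a)                               # traceless generator => det g = 1
assert np.isclose(np.linalg.det(g), 1)    # g in SL(2,R) = Spin(1,2)

# vector representation induced by the spinor one, cf. Eq. (harmsg):
# Lambda^a_m = (1/2) tr(sigma^a g sigma_m g^T)
L = np.array([[0.5 * np.trace(sig[b] @ g @ sig[m] @ g.T) for m in range(3)]
              for b in range(3)])

assert np.allclose(L.T @ eta @ L, eta)    # Lambda is in SO(1,2)
for m in range(3):                        # g sigma_m g^T = sigma_a Lambda^a_m
    assert np.allclose(g @ sig[m] @ g.T,
                       sum(L[b, m] * sig[b] for b in range(3)))
```

Both $g$ and $-g$ give the same $\Lambda$, which is the double-covering property mentioned above.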
The splitting \p{harm} is reflected by the splitting of the $16\times 16$ spinor Lorentz harmonics into two $16\times 8$ blocks $$ v^{~\underline{\alpha}}_{\underline{\mu}} \equiv ( v^{~+ }_{\underline{\mu}q}, v^{~-}_{\underline{\mu}\dot{q}}) \qquad \in \qquad Spin(1,9) , $$ \begin{equation}\label{spharm} v^{~\underline{\mu}}_{\underline{\alpha}} \equiv ( v^{-\underline{\mu}}_{q}, v^{+\underline{\mu}}_{\dot{q}}) \qquad \in \qquad Spin(1,9) , \end{equation} $$ v^{~\underline{\mu}}_{\underline{\alpha}} v^{~\underline{\b}}_{\underline{\mu}} = \d_{\underline{\alpha}}^{~\underline{\b}}, \qquad v^{~\underline{\alpha}}_{\underline{\mu}} v^{~\underline{\nu}}_{\underline{\alpha}} = \d_{\underline{\mu}}^{~\underline{\nu}}, \qquad \Leftrightarrow $$ \begin{equation}\label{spharm1} v^{-\underline{\mu}}_{p} v^{~+ }_{\underline{\mu}q} = \d_{pq}, \qquad v^{+\underline{\mu}}_{\dot{p}} v^{~-}_{\underline{\mu}\dot{q}} = \d_{\dot{p}\dot{q}} , \qquad v^{+\underline{\mu}}_{\dot{q}} v^{~+ }_{\underline{\mu}q} = 0 = v^{-\underline{\mu}}_{q} v^{~-}_{\underline{\mu}\dot{q}} , \qquad \end{equation} $$ \d_{\underline{\mu}}^{\underline{\nu}} = v^{-\underline{\nu}}_{q} v^{~+}_{\underline{\mu}q} + v^{+\underline{\nu}}_{\dot{q}} v^{~-}_{\underline{\mu}\dot{q}}.
$$ To write in detail the relations \p{harmsg} between spinor and vector Lorentz harmonics, we need the explicit $SO(1,1) \times SO(8)$ invariant representation for the $D=10$ Majorana--Weyl gamma matrices $\sigma^{\underline{a}}$ $$ \s^{\underline{ 0}}_{\underline{\alpha}\underline{\b}}= \hbox{ {\it diag}}(\delta _{{ qp}}, \delta_{{\dot q}{\dot p}}) = \tilde{\s }^{\underline{0}~\underline{\alpha}\underline{\b}} , \qquad \s^{\underline{9}}_{\underline{\alpha}\underline{\b}}= \hbox{ {\it diag}} (\delta _{qp}, -\delta _{{\dot q}{\dot p}}) = -\tilde{\s }^{\underline{9}~\underline{\alpha}\underline{\b}} , $$ \begin{equation}\label{gammarep} \s^{i}_{\underline{\alpha}\underline{\b}} = \left(\matrix{0 & \gamma ^{i}_{q\dot p}\cr \tilde{\gamma}^{i}_{{\dot q} p} & 0\cr} \right) = - \tilde{\s }^{i~\underline{\alpha}\underline{\b}} , \end{equation} $$ \s^{{++}}_{\underline{\alpha}\underline{\b}} \equiv (\s^{\underline{ 0}}+ \s^{\underline{ 9}})_{\underline{\alpha}\underline{\b}}= \hbox{ {\it diag}}(~2\delta _{qp},~ 0) = (\tilde{\s }^{\underline{ 0}}- \tilde{\s }^{\underline{ 9}})^{\underline{\alpha}\underline{\b}} = \tilde{\s }^{{--}~\underline{\alpha}\underline{\b}} , \qquad $$ $$ \s ^{{--}}_{\underline{\alpha}\underline{\b}}\equiv (\s ^{\underline{ 0}}-\s ^{\underline{ 9}}
)_{\underline{\alpha}\underline{\b}}= \hbox{ {\it diag}}(~0, ~2\delta _{{\dot q}{\dot p}}) = (\tilde{\s }^{\underline{ 0}}+ \tilde{\s }^{\underline{ 9}})^{\underline{\alpha}\underline{\b}} = \tilde{\s }^{{++}~\underline{\alpha}\underline{\b}}, $$ where $\g^i_{q\dot{q}} = \tilde{\g}^i_{\dot{q}q}$ are the $8\times 8$ chiral gamma matrices of the $SO(8)$ group. Substituting \p{gammarep}, we get from \p{harmsg} \cite{gds,BZ,bpstv} \begin{eqnarray}\label{harms10} u^{{++}}_{\underline{m}} \s^{\underline{m}}_{\underline{\mu}\underline{\nu}} =& 2 v^{~+}_{\underline{\mu}q} v^{~+}_{\underline{\nu}q} , \qquad u^{{++}}_{\underline{m}} \tilde{\s}^{\underline{m}~\underline{\mu}\underline{\nu}}=& 2 v^{+\underline{\mu}}_{\dot q} v^{+\underline{\nu}}_{\dot q}, \qquad \nonumber\\ u^{{--}}_{\underline{m}} \s^{\underline{m}}_{\underline{\mu}\underline{\nu}}=& 2 v^{~-}_{\underline{\mu}\dot{q}} v^{~-}_{\underline{\nu}\dot{q}} , \qquad u^{{--}}_{\underline{m}} \tilde{\s}^{\underline{m}~\underline{\mu}\underline{\nu}}=& 2 v^{-\underline{\mu}}_{q} v^{-\underline{\nu}}_{q}, \qquad \nonumber\\ u^{{i}}_{\underline{m}} \s^{\underline{m}}_{\underline{\mu}\underline{\nu}}=& 2 v^{~+}_{\{ \underline{\mu}q} \g^i_{q\dot{q}} v^{~-}_{\underline{\nu}\} \dot{q}} , \qquad u^{{i}}_{\underline{m}} \tilde{\s}^{\underline{m}~\underline{\mu}\underline{\nu}}=& - 2 v^{-\{ \underline{\mu}}_{q} \g^i_{q\dot{q}} v^{+\underline{\nu}\} }_{\dot{q}}, \qquad \end{eqnarray} $$ u^{{i}}_{\underline{m}} \g^{i}_{q \dot q} = v^{+}_{q} \tilde{\s}_{\underline{m}} v^{-}_{\dot q} = - v^{-}_{q} \s_{\underline{m}} v^{+}_{\dot q}, \qquad $$ $$ u^{{++}}_{\underline{m}} \d_{pq} = v^{+}_q\tilde{\sigma}_{\underline{m}} v^{+}_p, \qquad u^{{--}}_{\underline{m}} \d_{\dot{p}\dot{q}} = v^{-}_{\dot{q}}\tilde{\sigma}_{\underline{m}} v^{-}_{\dot{p}}.
\qquad $$ The differentials of the harmonic variables are calculated easily by taking into account the conditions \p{harm}, \p{spharm}. For the vector harmonics this implies $$ d u^{\underline{a}}_{\underline{m}} u^{~\underline{b}\underline{m}} + u^{~\underline{a}\underline{m}} d u^{\underline{b}}_{\underline{m}} = 0, $$ whose solution is given by \begin{equation}\label{hdif} d u^{~\underline{a}}_{\underline{m}} = u^{~\underline{b}}_{\underline{m}} \Om^{~\underline{a}}_{\underline{b}} (d) \qquad \Leftrightarrow \qquad \cases { du^{~++}_{\underline{m}} = u^{~++}_{\underline{m}} \om + u^{~i}_{\underline{m}} f^{++i} (d ) , \cr du^{~--}_{\underline{m}} = - u^{~--}_{\underline{m}} \om + u^{~i}_{\underline{m}} f^{--i} (d ) , \cr d u^{i}_{\underline{m}} = - u^{j}_{\underline{m}} A^{ji} + {1\over 2} u_{\underline{m}}^{++} f^{--i} (d) + {1\over 2} u_{\underline{m}}^{--} f^{++i} (d) , \cr } \end{equation} where \begin{equation}\label{pC} \Om^{~\underline{a}}_{\underline{b}} \equiv u_{\underline{b}}^{\underline{m}} d u^{\underline{a}}_{\underline{m}} = \pmatrix { \om & 0 & {1 \over \sqrt{2}} f^{--i} (d ) \cr 0 & -\om & {1 \over \sqrt{2}} f^{++i} (d ) \cr {1 \over \sqrt{2}} f^{++i} (d ) & {1 \over \sqrt{2}} f^{--i} (d ) & A^{ji} (d) \cr} , \qquad \Om^{\underline{a}\underline{b}} \equiv \eta^{~\underline{a}\underline{c}} \Om^{~\underline{b}}_{\underline{c}} = - \Om^{\underline{b}\underline{a}} \end{equation} are $SO(1,D-1)$ Cartan forms. 
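A quick numerical check of \p{hdif}, \p{pC} (a sketch with an ad hoc path $u(t)=\exp (tw)$ of our own choosing): the matrix $\Omega_{\underline b}{}^{~\underline a}=u_{\underline b}^{~\underline m}du^{\underline a}_{\underline m}$, i.e. $\Omega = u^{-1}du$ in matrix form, is $\eta$-antisymmetric, and the scalar products defining $\om$, $f^{++i}$, $A^{ij}$ reproduce the decomposition of $du^{++}$ in \p{hdif}.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
D = 10
eta = np.diag([1.0] + [-1.0] * (D - 1))
a = 0.1 * rng.standard_normal((D, D)); a = a - a.T
w = eta @ a                          # Lorentz algebra: w^T eta + eta w = 0

t = 0.7
u = expm(t * w)                      # path u(t) in SO(1,9)
du = w @ u                           # exact t-derivative of expm(t w)

Omega = np.linalg.inv(u) @ du        # Cartan form Omega = u^{-1} du, Eq. (pC)
assert np.allclose(Omega.T @ eta + eta @ Omega, 0)   # eta-antisymmetry

dot = lambda x, y: x @ eta @ y
upp = u[:, 0] + u[:, 9]              # u^{++}
umm = u[:, 0] - u[:, 9]              # u^{--}
ui = u[:, 1:9]                       # u^i
dpp = du[:, 0] + du[:, 9]            # du^{++}

om = 0.5 * dot(umm, dpp)                                       # omega
f_pp = np.array([dot(upp, du[:, 1 + i]) for i in range(8)])    # f^{++i}
A = np.array([[dot(u[:, 1 + i], du[:, 1 + j]) for j in range(8)]
              for i in range(8)])                              # A^{ij}

assert np.allclose(A + A.T, 0)                   # A^{ij} is antisymmetric
assert np.allclose(dpp, om * upp + ui @ f_pp)    # first relation of (hdif)
```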
They can be decomposed into the $SO(1,1)\times SO(8)$ covariant forms \begin{equation}\label{+i} f^{++i} \equiv u^{++}_{\underline m} d u^{\underline{m}i} , \end{equation} \begin{equation}\label{-i} f^{--i} \equiv u^{--}_{\underline m} d u^{\underline{m}i} , \end{equation} parameterizing the coset ${SO(1,9) \over SO(1,1) \times SO(8)}$, the $SO(1,1)$ spin connection \begin{equation}\label{0} \omega \equiv {1 \over 2} u^{--}_{\underline m} d u^{\underline m~++}\; , \end{equation} and the $SO(8)$ connections (induced gauge fields) \begin{equation}\label{ij} A^{ij} \equiv u^{i}_{\underline m} d u^{\underline m~j}\; . \end{equation} The Cartan forms \p{pC} satisfy the Maurer-Cartan equation \begin{equation}\label{pMC} d\Om^{\underline{a}~\underline{b}} - \Om^{\underline{a}}_{~\underline{c}} \wedge \Om^{\underline{c}\underline{b}} = 0 , \end{equation} which appears as the integrability condition for Eq. \p{hdif}. It has the form of a zero curvature condition. This reflects the fact that the $SO(1,9)$ connections defined by the Cartan forms \p{pC} are trivial. The Maurer--Cartan equation \p{pMC} splits naturally into \begin{equation}\label{PC+} {\cal D} f^{++i} \equiv d f^{++i} - f^{++i} \wedge \omega + f^{++j} \wedge A^{ji} = 0 , \end{equation} \begin{equation}\label{PC-} {\cal D} f^{--i} \equiv d f^{--i} + f^{--i} \wedge \omega + f^{--j} \wedge A^{ji} = 0 , \end{equation} \begin{equation}\label{G} {\cal R} \equiv d \omega = {1\over 2} f^{--i} \wedge f^{++i} , \end{equation} \begin{equation}\label{R} R^{ij} \equiv d A^{ij} + A^{ik} \wedge A^{kj} = - f^{--[i} \wedge f^{++j]} , \end{equation} giving rise to the Peterson--Codazzi, Gauss and Ricci equations of classical surface theory (see \cite{Ei}).
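In matrix notation, with $\Omega = u^{-1}du$, the Maurer--Cartan equation is the flatness identity $d\Omega + \Omega\wedge\Omega = 0$ (equivalent to \p{pMC} up to index conventions). A minimal exact symbolic check for a two-parameter family in $SO(1,2)$ (our own toy parametrization, a boost times a rotation):

```python
import sympy as sp

s, t = sp.symbols('s t', real=True)
B = sp.Matrix([[sp.cosh(s), sp.sinh(s), 0],
               [sp.sinh(s), sp.cosh(s), 0],
               [0, 0, 1]])                  # boost in the (0,1) plane
R = sp.Matrix([[1, 0, 0],
               [0, sp.cos(t), -sp.sin(t)],
               [0, sp.sin(t),  sp.cos(t)]])  # rotation in the (1,2) plane
u = B * R                                    # two-parameter family in SO(1,2)

Om_s = u.inv() * u.diff(s)                   # components of Omega = u^{-1} du
Om_t = u.inv() * u.diff(t)

# zero-curvature (Maurer-Cartan) identity in the (s,t) coordinates:
# d_s Om_t - d_t Om_s + [Om_s, Om_t] = 0
MC = Om_t.diff(s) - Om_s.diff(t) + Om_s * Om_t - Om_t * Om_s
assert sp.simplify(MC) == sp.zeros(3, 3)
```

The identity holds for any such family, which is the statement that the connection defined by the Cartan forms is trivial (pure gauge).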
The differentials of the spinor harmonics can be expressed in terms of the same Cartan forms \p{+i}--\p{ij} \begin{equation}\label{vOmv} dv_{\underline{\mu}}^{~\underline{\alpha}} = {1 \over 4} \Om^{\underline{a}\underline{b}} ~v_{\underline{\mu}}^{~\underline{\b}} ~(\sigma_{\underline{a}\underline{b}})_{\underline{\b}}^{~\underline{\alpha}}. \end{equation} Using \p{gammarep} we can specify \p{vOmv} as (cf. \cite{BZ}) \begin{equation}\label{0ij} v^{-\underline{\mu}}_p d v^{~+}_{\underline{\mu}q} = {1\over 2} \d_{pq} \om - {1 \over 4} A^{ij} \g^{ij}{}_{pq} , \qquad v^{+\underline{\mu}}_{\dot{p}} d v^{~-}_{\underline{\mu}\dot{q}} = -{1\over 2} \d_{\dot{p}\dot{q}} \om - {1 \over 4} A^{ij} \tilde{\g}^{ij}{}_{\dot{p}\dot{q}}, \qquad \end{equation} \begin{equation}\label{pmpmi1} v^{+\underline{\mu}}_{\dot{p}} d v^{~+}_{\underline{\mu}q} = {1\over 2} f^{++i} \g^i_{q\dot{p}}, \qquad v^{-\underline{\mu}}_{q} d v^{~-}_{\underline{\mu}\dot{p}} = {1\over 2} f^{--i} \g^i_{q\dot{p}} . \end{equation} Note that in $D=10$ the relations between the vector ($v$--), $c$--spinor and $s$--spinor representations of the $SO(8)$ connections have the completely symmetric form $$ A_{pq} = {1 \over 4} A^{ij} \g^{ij}{}_{pq} , \qquad A_{\dot{p}\dot{q}} = { 1 \over 4} A^{ij} \tilde{\g}^{ij}{}_{\dot{p}\dot{q}}, \qquad $$ $$ A^{ij} = {1 \over 4} A_{pq} \g^{ij}{}_{pq} = {1 \over 4} A_{\dot{p}\dot{q}} \tilde{\g}^{ij}{}_{\dot{p}\dot{q}} . \qquad $$ This expresses the well-known triality property of the $SO(8)$ group (see e.g. \cite{gsw} and references therein). \bigskip \section*{Appendix B. Linearized bosonic equations of the type $IIB$ superstring} \setcounter{equation}{0} \defC.\arabic{equation}{B.\arabic{equation}} Here we present the derivation of the linearized bosonic equations \p{linbeq} of the superstring from the set of equations \p{Ei=0}, \p{M2=0}.
In the gauge \p{kappag}, \p{staticg} the fermionic contributions disappear from Eq. \p{M2=0}. Moreover, in the linearized approximation we can replace $\hat{E}^{\pm\pm}$ by the closed forms $d\xi^{\pm\pm}$ (a holonomic basis for the space tangent to the worldsheet) and solve the linearized Peterson-Codazzi equations \p{PC+}, \p{PC-} \begin{equation}\label{PClin} df^{++i}=0, \qquad df^{--i}=0 \qquad \end{equation} in terms of two $SO(8)$--vector densities $k^{++i}$, $k^{--i}$ (infinitesimal parameters of the coset $SO(1,9)/[SO(1,1) \times SO(8)]$) \begin{equation}\label{PClins} f^{++i}= 2dk^{++i}, \qquad f^{--i}= 2dk^{--i}. \qquad \end{equation} Then the linearized form of the equations \p{Ei=0}, \p{M2=0} is \begin{equation}\label{lEi=0} dX^i - \xi^{++} dk^{--i} - \xi^{--}dk^{++i}=0, \qquad \end{equation} \begin{equation}\label{lM2=0} d\xi^{--} \wedge dk^{++i} - d\xi^{++} \wedge dk^{--i} =0. \qquad \end{equation} Eq. \p{lM2=0} implies $$ \partial_{++}k^{++i}+ \partial_{--}k^{--i}=0, $$ while the integrability conditions for Eq. \p{lEi=0} are \begin{equation}\label{ilEi=0} d\xi^{--} \wedge dk^{++i} + d\xi^{++} \wedge dk^{--i} =0 \qquad \rightarrow \qquad \partial_{++}k^{++i}- \partial_{--}k^{--i}=0 . \end{equation} Hence we have \begin{equation}\label{hi=0} \partial_{++}k^{++i}=\partial_{--}k^{--i}=0. \end{equation} Now, extracting e.g. the component of \p{lEi=0} proportional to $d\xi^{++}$ and taking into account \p{hi=0}, one arrives at \begin{equation}\label{d++Xi} \partial_{++}X^i = \xi^{++} \partial_{++}k^{--i}. \end{equation} The $\partial_{--}$ derivative of Eq. \p{d++Xi}, again together with Eq. \p{hi=0}, yields a relation which includes the $X^i$ field only, \begin{equation}\label{d++--Xi} \partial_{--}\partial_{++} X^i = \xi^{++} \partial_{++}\partial_{--}k^{--i}=0 , \end{equation} and is just the free equation \p{linbeq}.
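The chain of arguments \p{PClins}--\p{d++--Xi} can be traced through an exact symbolic example: choosing chiral densities $k^{--i}(\xi^{++})$ and $k^{++i}(\xi^{--})$ solving \p{hi=0} (the particular functions below are our arbitrary choice for a single transverse direction), the 1-form \p{lEi=0} is integrable and the resulting $X^i$ satisfies the free wave equation.

```python
import sympy as sp

xp, xm = sp.symbols('xi_pp xi_mm', real=True)  # worldsheet coords xi^{++}, xi^{--}

k_mm = sp.sin(xp)        # k^{--i}(xi^{++}): solves d_{--} k^{--i} = 0, Eq. (hi=0)
k_pp = sp.exp(xm)        # k^{++i}(xi^{--}): solves d_{++} k^{++i} = 0

# components of dX^i = xi^{++} dk^{--i} + xi^{--} dk^{++i}, Eq. (lEi=0)
dX_p = xp * sp.diff(k_mm, xp)
dX_m = xm * sp.diff(k_pp, xm)

# integrability of Eq. (lEi=0): cross derivatives agree
assert sp.simplify(sp.diff(dX_p, xm) - sp.diff(dX_m, xp)) == 0

# integrate to X^i and check the free equation d_{++} d_{--} X^i = 0, (d++--Xi)
X = sp.integrate(dX_p, xp) + sp.integrate(dX_m, xm)
assert sp.simplify(sp.diff(X, xp) - dX_p) == 0
assert sp.simplify(sp.diff(X, xm) - dX_m) == 0
assert sp.simplify(sp.diff(X, xp, xm)) == 0
```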
\bigskip \section*{Appendix C: The gauge field of the D9-brane described by a block--triangular spin-tensor $h$} \setcounter{equation}{0} \defC.\arabic{equation}{C.\arabic{equation}} Here we present the nontrivial solution of the characteristic equation \p{Id1} for the spin--tensor $h$ of the triangular form \p{h16tr} \begin{equation} \label{h16tr1} \hat{h}_{\underline{\b}}^{~\underline{\alpha}} \equiv \hat{v}_{\underline{\b}}^{~\underline{\nu}} \hat{h}_{\underline{\nu}}^{~\underline{\mu}} \hat{v}_{\underline{\mu}}^{~\underline{\alpha}} \equiv \left( \matrix{ 0 & \hat{h}^{--}_{q\dot{p}} \cr \hat{h}^{++}_{\dot{q}p} & \tilde{\hat{h}}_{\dot{q}\dot{p}} \cr } \right) \qquad \in \quad Spin(1,9). \end{equation} It corresponds to the $SO(1,9)$ valued matrix (cf. \p{Id2}) \begin{equation} \label{kab} k_{\underline{a}}^{~\underline{b}} \equiv u_{\underline{a}}^{~\underline{m}} k_{\underline{m}}^{~\underline{n}} u_{\underline{n}}^{~\underline{b}} \equiv \left( {k^{++}_{\underline{a}} + k^{--}_{\underline{a}} \over 2}, k^{i}_{\underline{a}}, {k^{++}_{\underline{a}} - k^{--}_{\underline{a}} \over 2} \right) \qquad \in \quad SO(1,9) \end{equation} with the components \begin{equation} \label{k++} k_{\underline{a}}^{++} = {1 \over 2} \d_{\underline{a}}^{--} k^{++|++} , \qquad \end{equation} \begin{equation} \label{k--} k_{\underline{a}}^{--} = \d_{\underline{a}}^{++} {2 \over k^{++|++} } + \d_{\underline{a}}^{--} {k^{++j} k^{++j} \over 2k^{++|++} } - \d_{\underline{a}}^{i} {2k^{ij} k^{++j} \over k^{++|++} } , \qquad \end{equation} \begin{equation} \label{ki} k_{\underline{a}}^{i} = {1 \over 2 } \d_{\underline{a}}^{--} k^{++i} - \d_{\underline{a}}^{j} k^{ji}. \qquad \end{equation} The matrix $k^{ij}$, entering \p{k--}, \p{ki}, takes its values in the $SO(8)$ group: \begin{equation} \label{kkT} k^{ik}k^{jk} = \d^{ij} \qquad \Leftrightarrow \qquad k^{ij} \in SO(8).
\qquad \end{equation} The nonvanishing $8\times 8$ blocks of the $16 \times 16$ matrix $h$ \p{h16tr1} are related with the independent components $k^{++|++}$, $k^{++i}$, $k^{ij} \in SO(8)$ of the matrix \p{kab} by \begin{equation} \label{h--h--} {h}^{--}_{q\dot{s}} {h}^{--}_{p\dot{s}}= \d_{qp} {2 \over k^{++|++} } , \qquad \end{equation} \begin{equation} \label{h--tih} {h}^{--}_{q\dot{s}} \tilde{h}_{\dot{q}\dot{s}} = - \g^i_{q\dot{q}} {k^{ij} k^{++j} \over 2k^{++|++} } , \qquad \end{equation} \begin{equation} \label{h++h++} {h}^{++}_{\dot{q}s} {h}^{++}_{\dot{p}s}= \d_{\dot{q}\dot{p}} {k^{++|++} \over 2}, \qquad \end{equation} \begin{equation} \label{tihtih} \tilde{h}_{\dot{q}\dot{s}} \tilde{h}_{\dot{p}\dot{s}} = \d_{\dot{q}\dot{p}} {k^{++j} k^{++j} \over 2k^{++|++} }, \qquad \end{equation} \begin{equation} \label{h++gih--} {h}^{++}_{\dot{q}s} \g^i_{s\dot{s}} {h}^{--}_{q\dot{s}}= - \g^j_{q\dot{q}} k^{ji}, \qquad \end{equation} \begin{equation} \label{h++gitih} 2{h}^{++}_{(\dot{q}|s} \g^i_{s\dot{s}} \tilde{h}_{|\dot{s})\dot{s}}= \d_{\dot{q}\dot{p}} k^{++i}. \qquad \end{equation} These equations are produced by Eq. \p{Id1} in the frame related to the harmonics \p{harmv}, \p{harms} of the fundamental superstring. The expression connecting the independent components $k^{++|++}$, $k^{++i}$, $k^{ij} \in SO(8)$ of the matrix \p{kab} with the components of the antisymmetric tensor $F$ (which becomes the field strength of the gauge field of the super--D9--brane on the mass--shell) $$ F_{\underline{a}\underline{b}} \equiv u_{\underline{a}}^a F_{ab} u_{\underline{~b}}^b = -F_{\underline{b}\underline{~a}} = (F^{--|++}, F^{++i}, F^{--i}, F^{ij}) $$ can be obtained from Eq. 
\p{Id2} in the frame related to the stringy harmonics \begin{equation} \label{F--++} F^{--|++} = 2, \qquad \end{equation} \begin{equation} \label{F++i} F^{++i} = - {1\over 2} k^{++|++} F^{--i}, \qquad \end{equation} \begin{equation} \label{F--kF--} F^{--j} k^{ji} = F^{--i} \Leftrightarrow F^{--j} (\d^{ji} - k^{ji}) = 0, \qquad \end{equation} \begin{equation} \label{F--k++=4} F^{--i} k^{++i} = 4, \qquad \end{equation} \begin{equation} \label{F--kk} F^{--i} k^{++j} k^{++j} = - 4k^{ij} k^{++j} - 4F^{ij'} k^{j'j} k^{++j} \equiv - 4(\d^{ij} +F^{ij}) k^{jj'} k^{++j'} , \qquad \end{equation} \begin{equation} \label{Fij} F^{ij'} (\d^{j'j}- k^{j'j}) = - (\d^{ij}+ k^{ij}) + {1 \over 2}F^{--i} k^{++j}. \qquad \end{equation} In particular, the above results demonstrate that Eq. $h_{pq}=0$ (\p{fmax} or \p{h16tr1}) indeed implies \p{F++--=2!} (see \p{F--++}).
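As a numerical sanity check of Eqs. \p{k++}--\p{kkT}, one can verify that the matrix $k_{\underline{a}}^{~\underline{b}}$ assembled from an arbitrary $k^{++|++}\neq 0$, an arbitrary $SO(8)$ vector $k^{++i}$ and an arbitrary $k^{ij}\in SO(8)$ indeed preserves a light-cone metric. The sketch below assumes the normalization $a\cdot b = \tfrac{1}{2}(a^{++}b^{--}+a^{--}b^{++}) - a^i b^i$, which is our choice and not fixed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# independent data entering the parametrization: k^{++|++} != 0,
# an SO(8) vector k^{++i}, and a proper rotation k^{ij} in SO(8)
kpppp = 1.7                              # k^{++|++}
kpp = rng.standard_normal(8)             # k^{++i}
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                   # flip one column to land in SO(8)
kij = Q

# assemble k_a^b with rows a and columns b ordered as (++, --, i)
k = np.zeros((10, 10))
k[0, 1] = 2.0 / kpppp                    # k^{--}_{++}
k[1, 0] = 0.5 * kpppp                    # k^{++}_{--}
k[1, 1] = kpp @ kpp / (2.0 * kpppp)      # k^{--}_{--}
k[1, 2:] = 0.5 * kpp                     # k^{i}_{--}
k[2:, 1] = -2.0 * (kij @ kpp) / kpppp    # k^{--}_{a=i}
k[2:, 2:] = -kij                         # k^{i}_{a=j} = -k^{ji}

# light-cone metric for a.b = (1/2)(a^{++} b^{--} + a^{--} b^{++}) - a^i b^i
eta = np.zeros((10, 10))
eta[0, 1] = eta[1, 0] = 0.5
eta[2:, 2:] = -np.eye(8)

# k preserves the metric, i.e. it is a (pseudo-)orthogonal SO(1,9) matrix
assert np.allclose(k @ eta @ k.T, eta)
```

If the metric identity $k\,\eta\,k^{\top}=\eta$ failed for generic components, the parametrization \p{k++}--\p{ki} would have been transcribed incorrectly.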
\section{Introduction} Decision problems where self-interested intelligent systems or agents wish to optimize their individual cost objective function arise in many engineering applications, such as demand-side management in smart grids \cite{mohsenian2010autonomous}, \cite{saad2012game}, charging/discharging coordination for plug-in electric vehicles \cite{ma2011decentralized}, \cite{grammatico2017dynamic}, thermostatically controlled loads \cite{li2015market} and robotic formation control \cite{lin2014distributed}. The key feature that distinguishes these problems from multi-agent distributed optimization is the fact that the cost functions are coupled together. Currently, one active research area is that of finding (seeking) actions that are self-enforceable, i.e., actions from which no agent has an incentive to unilaterally deviate: the so-called Nash equilibrium (NE). Due to the coupling of the cost functions, some information on the actions of the other agents is necessary to compute a NE algorithmically. The quality of this information can vary from full knowledge of the agent actions \cite{yi2019operator}, to estimates based on distributed consensus between the agents \cite{gadjov2019distributed}, to payoff-based estimates \cite{marden2009cooperative}, \cite{frihauf2011nash}. Among these scenarios, the last one is of special interest as it requires no dedicated communication infrastructure. \\ \\ \emph{Literature review:} In payoff-based algorithms, each agent can only measure the value of its cost function, but does not know its analytic form. Many such algorithms are designed for NEPs with static agents and finite action spaces, e.g. \cite{goto2012payoff}, \cite{marden2009cooperative}, \cite{marden2012revisiting}. In the case of continuous action spaces, measurements are most often used to estimate the pseudogradient. A prominent class of payoff-based algorithms is Extremum Seeking Control (ESC).
The main idea is to use perturbation signals to ``excite'' the cost function and estimate its gradient. Since the first general proof of convergence \cite{krstic2000stability}, there has been a strong research effort to extend the original ESC approach \cite{tan2006non}, \cite{ghaffari2012multivariable}, as well as to conceive diverse variants, e.g. \cite{durr2013lie}. ESC was used for NE seeking in \cite{frihauf2011nash}, where the proposed algorithm is proven to converge to a neighborhood of a NE for games with strongly monotone pseudogradient \cite{krilavsevic2021learning}. The results are extended in \cite{liu2011stochastic} to include stochastic perturbation signals. In \cite{poveda2017framework}, Poveda and Teel propose a framework for the synthesis of a hybrid controller which could also be used for NEPs. The authors in \cite{poveda2020fixed} propose a fixed-time Nash equilibrium seeking algorithm which also requires a strongly monotone pseudogradient and communication between the agents. To the best of our knowledge, there is still no ESC algorithm for solving NEPs with merely \textit{monotone} pseudogradient. \\ \\ A common approach is to translate the NEP into a variational inequality (VI) \cite[Equ. 1.4.7]{facchinei2007finite}, and in turn into the problem of finding a zero of an operator \cite[Equ. 1.1.3]{facchinei2007finite}. For the special class of monotone operators, there exists a vast literature; see \cite{bauschke2011convex} for an overview. Each algorithm for finding zeros of monotone operators has different prerequisites and working assumptions that define what class of problems it can solve. For example, the forward-backward algorithm requires that the forward operator, typically the pseudogradient, is monotone and cocoercive \cite[Thm. 26.14]{bauschke2011convex}, whilst the forward-backward-forward algorithm requires only monotonicity of the forward operator \cite[Rem. 26.18]{bauschke2011convex}.
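The role of cocoercivity is easy to see on the skew-symmetric (hence monotone, but not cocoercive) operator $F(u)=(u_2,-u_1)$: in the unconstrained case the backward (resolvent) step is the identity, the plain forward iteration spirals outward, while Tseng's extra forward step restores convergence. A minimal numerical sketch; the step size and horizon are illustrative:

```python
import numpy as np

# F(u) = A u with A skew-symmetric: monotone, but not cocoercive
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda u: A @ u

gamma, T = 0.2, 200
u_fb = np.array([1.0, 0.0])    # forward-backward iterate (backward step = Id here)
u_fbf = np.array([1.0, 0.0])   # forward-backward-forward (Tseng) iterate
for _ in range(T):
    u_fb = u_fb - gamma * F(u_fb)              # single forward step: spirals outward
    z = u_fbf - gamma * F(u_fbf)               # first forward step
    u_fbf = z - gamma * (F(z) - F(u_fbf))      # correcting second forward step
# after T steps, u_fb has grown in norm while u_fbf has contracted towards 0
```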
The drawback of the latter is that it requires two forward steps per iteration, namely two (pseudo)gradient computations. Other algorithms such as the extragradient \cite{korpelevich1976extragradient} and the subgradient extragradient \cite{censor2011subgradient} ensure convergence with mere monotonicity of the pseudogradient, but still require two forward steps per iteration. Recently, the golden ratio algorithm in \cite{malitsky2019golden} was proven to converge in the monotone case with only one forward step. All of the previously mentioned algorithms are designed as discrete-time iterations. Most of them can be transformed into continuous-time algorithms, such as the forward-backward with projections onto convex sets \cite{boct2017dynamical}, the forward-backward with projections onto tangent cones \cite{grammatico2017dynamic}, \cite{bianchi2021continuous}, the forward-backward-forward \cite{bot2020forward} and the golden ratio \cite{gadjov2020exact}, albeit without projections (see Appendix \ref{app: projection case}). \\ \\ \emph{Contribution}: Motivated by the above literature and open research problem, to the best of our knowledge, we consider and solve for the first time the problem of learning (i.e., seeking via zeroth-order information) a NE in merely monotone games via ESC. Unlike other extremum seeking algorithms for NEPs, we do not require strong monotonicity of the pseudogradient. Specifically, we extend the results in \cite{gadjov2020exact} via hybrid systems theory to construct a novel extremum seeking scheme which exploits the single forward step property of the continuous-time golden ratio algorithm. \\ \\ \emph{Notation}: $\mathbb{R}$ denotes the set of real numbers. For a matrix $A \in \mathbb{R}^{n \times m}$, $A^\top$ denotes its transpose.
For vectors $x, y \in \mathbb{R}^{n}$, $M \in \mathbb{R}^{n \times n}$ a positive semi-definite matrix and $\mathcal{A} \subset \mathbb{R}^n$, $\vprod{x}{y}$, $\|x \|$, $\|x \|_M$ and $\|x \|_\mathcal{A}$ denote the Euclidean inner product, the norm, the weighted norm and the distance to the set $\mathcal{A}$, respectively. Given $N$ vectors $x_1, \dots, x_N$, possibly of different dimensions, $\col{x_1, \dots x_N} \coloneqq \left[ x_1^\top, \dots, x_N^\top \right]^\top $. Collective vectors are defined as $\bfs{x} \coloneqq \col{x_1, \dots, x_N}$ and for each $i = 1, \dots, N$, $\bfs{x}_{-i} \coloneqq \col{ x_1, \dots, x_{i -1}, x_{i + 1}, \dots, x_N } $. Given $N$ matrices $A_1$, $A_2$, \dots, $A_N$, $\operatorname{diag}\left(A_{1}, \ldots, A_{N}\right)$ denotes the block diagonal matrix with $A_i$ on its diagonal. For a function $v: \mathbb{R}^{n} \times \mathbb{R}^{m} \rightarrow \mathbb{R}$ differentiable in the first argument, we denote the partial gradient vector as $\nabla_x v(x, y) \coloneqq \col{\frac{\partial v(x, y)}{\partial x_{1}}, \ldots, \frac{\partial v(x, y)}{\partial x_{N}}} \in \mathbb{R}^{n}$. We use $\mathbb{S}^{1}:=\left\{z \in \mathbb{R}^{2}: z_{1}^{2}+z_{2}^{2}=1\right\}$ to denote the unit circle in $\mathbb{R}^2$. The mapping $\mathrm{proj}_S : \mathbb{R}^n \rightarrow S$ denotes the projection onto a closed convex set $S$, i.e., $\mathrm{proj}_S(v) \coloneqq \mathrm{argmin}_{y \in S}\n{y - v}$. $\operatorname{Id}$ is the identity operator. $I_n$ is the identity matrix of dimension $n$ and $ \bfs{0}_n$ is the column vector of $n$ zeros. A continuous function $\gamma: \mathbb{R}_+ \rightarrow \mathbb{R}_+$ is of class $\mathcal{K}$ if it is zero at zero and strictly increasing. A continuous function $\alpha: \mathbb{R}_+ \rightarrow \mathbb{R}_+$ is of class $\mathcal{L}$ if it is non-increasing and converges to zero as its argument grows unbounded.
A continuous function $\beta: \mathbb{R}_+ \times \mathbb{R}_+ \rightarrow \mathbb{R}_+$ is of class $\mathcal{KL}$ if it is of class $\mathcal{K}$ in the first argument and of class $\mathcal{L}$ in the second argument. \begin{defn}[UGAS] For a dynamical system, with state $ x \in C \subseteq \mathbb{R}^n$ and \begin{align} & \dot{x} = f(x), \label{eq: reqular system} \end{align} where $f: \mathbb{R}^n \rightarrow \mathbb{R}^n$, a compact set $\mathcal{A} \subset C$ is uniformly globally asymptotically stable (UGAS) if there exists a $\mathcal{KL}$ function $\beta$ such that every solution of \nref{eq: reqular system} satisfies $\n{x(t)}_\mathcal{A} \leq \beta(\n{x(0)}_\mathcal{A}, t)$, for all $t \in \mathrm{dom}(x).$ \hspace*{\fill} $\square$ \end{defn} \begin{defn}[SGPAS] For a dynamical system parametrized by a vector of (small) positive parameters $\varepsilon \coloneqq \col{\varepsilon_1, \dots, \varepsilon_k}$, with state $x \in C \subseteq \mathbb{R}^n$ and \begin{align} \dot{x} = f_\varepsilon(x), \label{eq: small param system} \end{align} where $f_\varepsilon: \mathbb{R}^n \rightarrow \mathbb{R}^n$, a compact set $\mathcal{A} \subset C$ is semi-globally practically asymptotically stable (SGPAS) as $(\varepsilon_1, \dots, \varepsilon_k) \rightarrow 0^+$ if there exists a $\mathcal{KL}$ function $\beta$ such that the following holds: For each $\Delta>0$ and $v>0$ there exists $\varepsilon_{0}^{*}>0$ such that for each $\varepsilon_{0} \in\left(0, \varepsilon_{0}^{*}\right)$ there exists $\varepsilon_{1}^{*}\left(\varepsilon_{0}\right)>0$ such that for each $\varepsilon_{1} \in$ $\left(0, \varepsilon_{1}^{*}\left(\varepsilon_{0}\right)\right) \ldots$ there exists $\varepsilon_{j}^{*}\left(\varepsilon_{j-1}\right)>0$ such that for each $\varepsilon_{j} \in$ $\left(0, \varepsilon_{j}^{*}\left(\varepsilon_{j-1}\right)\right) \ldots, j=\{2, \ldots, k\},$ each solution $x$ of \nref{eq: small param system} that satisfies $\n{x(0)}_\mathcal{A} \leq \Delta$ also 
satisfies $\n{x(t)}_\mathcal{A} \leq \beta\left(\n{x(0)}_\mathcal{A} , t\right)+v$ for all $t \in \mathrm{dom}(x)$. \hspace*{\fill} $\square$ \end{defn} \begin{rem} In simple terms, for every initial condition $x(0) \in \mathcal{A} + \Delta \mathbb{B}$, where $\mathbb{B}$ denotes the closed unit ball, it is possible to tune the parameters $\varepsilon_1, \dots, \varepsilon_k$, in that order, such that the set $\mathcal{A} + v \mathbb{B}$ is UGAS. \end{rem} \section{Problem statement} We consider a multi-agent system with $N$ agents indexed by $i \in \mathcal{I} = \{1, 2, \dots , N \}$, each with cost function \begin{align} J_i(u_i, \bfs{u}_{-i})\label{eq: cost_i}, \end{align} where $u_i \in \mathbb{R}^{m_i}$ is the decision variable and $J_{i}: \mathbb{R}^{m_i} \times \mathbb{R}^{m_{-i}} \rightarrow \mathbb{R}$. Let us also define $m \coloneqq \sum m_i$ and $m_{-i} \coloneqq \sum_{j \neq i} m_j$.\\ \\ In this paper, we assume that the goal of each agent is to minimize its cost function, i.e., \begin{align} \forall i \in \mathcal{I}:\ \min_{u_i \in \mathbb{R}^{m_i}} J_i(u_i, \boldsymbol{u}_{-i}), \label{def: dyn_game} \end{align}{} which depends on the decision variables of the other agents as well. From a game-theoretic perspective, this is the problem of computing a Nash equilibrium (NE), as formalized next. \begin{defn}[Nash equilibrium] A collection of decisions $\bfs{u}^*\coloneqq\col{u_i^*}_{i \in \mathcal{I}}$ is a Nash equilibrium if, for all $i \in \mathcal{I}$, \begin{align} u_{i}^{*} \in \underset{v_{i} \in \mathbb{R}^{m_i}}{\operatorname{argmin}}\ J_{i}\left(v_{i}, \bfs{u}_{-i}^{*}\right), \label{def: gne} \end{align} with $J_i$ as in \nref{eq: cost_i}.\hspace*{\fill} $\square$ \end{defn} \\ \\ In plain words, a set of decision variables is a NE if no agent can improve its cost function by unilaterally changing its input. To ensure the existence of a NE and of the solutions to the differential equations, we postulate the following basic assumption \cite[Cor.
4.2]{bacsar1998dynamic}: \begin{stan_assum}[Regularity] \label{sassum: regularity} For each $i \in \mathcal{I}$, the function $J_i$ in \nref{eq: cost_i} is continuously differentiable in $u_i$ and continuous in $\bfs{u}_{-i}$; the function $J_{i}\left(\cdot, \bfs{u}_{-i}\right)$ is strictly convex and radially unbounded in $u_i$ for every $\bfs{u}_{-i}$.\hspace*{\fill} $\square$ \end{stan_assum} By stacking the partial gradients $\nabla_{u_i} J_i(u_i, \boldsymbol{u}_{-i})$ into a single vector, we form the so-called pseudogradient mapping \begin{align} F(\boldsymbol{u}):=\operatorname{col}\left(\left(\nabla_{u_{i}} J_{i}\left(u_{i}, \bfs{u}_{-i}\right)\right)_{i \in \mathcal{I}}\right). \label{eq: pseudogradient} \end{align} From \cite[Equ. 1.4.7]{facchinei2007finite} and the fact that $u_i \in \mathbb{R}^{m_i}$, it follows that any Nash equilibrium $\bfs{u}^*$ satisfies \begin{align} F(\bfs{u}^*) = \bfs{0}_m. \label{eq: nash pseudogradient cond} \end{align} Let us also postulate the weakest working assumptions in NEPs with continuous actions, i.e. the monotonicity of the pseudogradient mapping and the existence of solutions \cite[Def. 2.3.1, Thm. 2.3.4]{facchinei2007finite}. \begin{stan_assum}[Monotonicity and existence] The pseudogradient mapping $F$ in \nref{eq: pseudogradient} is monotone, i.e., for any pair $\bfs{u}, \bfs{v} \in \mathbb{R}^m$, it holds that $\vprod{\bfs{u} - \bfs{v}}{F(\bfs{u}) - F(\bfs{v})} \geq 0$; there exists a $\bfs{u}^*$ such that \nref{eq: nash pseudogradient cond} is satisfied.\hspace*{\fill} $\square$ \end{stan_assum}{} Finally, let us define the following sets \begin{align} \mathcal{S} &\coloneqq \{\bfs{u}\in \mathbb{R}^m \mid F(\bfs{u}) = 0\}\text{, (set of all NE)}\\ \mathcal{A} &\coloneqq \{\col{\bfs{u}, \bfs{u}} \in \mathbb{R}^{2m} \mid \bfs{u}\in \mathcal{S}\}. \end{align} Thus, here we consider the problem of finding a NE of the game in \nref{def: gne} via the use of zeroth-order information only, i.e.
measurements of the values of the cost functions. \section{Zeroth-order Nash Equilibrium seeking} In this section, we present our main contribution: a novel extremum seeking algorithm for solving monotone NEPs. The extremum seeking dynamics consist of an oscillator $\bfs{\mu}$ which is used to excite the cost functions, a first-order filter $\bfs{\xi}$ that smooths out the pseudogradient estimation and improves performance, and a scheme as in \cite[Thm. 1]{gadjov2020exact} used for monotone NE seeking that, unlike regular pseudogradient descent, uses additional states $\bfs{z}$ in order to ensure convergence without strict monotonicity. We assume that the agents have access to the cost output only; hence, they do not directly know the actions of the other agents, nor do they know the analytic expressions of their partial gradients. In fact, this is a standard setup used in extremum seeking (see \cite{krstic2000stability}, \cite{poveda2017framework}, among others). Our proposed continuous-time algorithm reads as follows \begin{align} \forall i \in \mathcal{I}: \m{ \dot{z_i} \\ \dot{u_i} \\ \dot{\xi_i}\\ \dot{\mu_i} } &=\m{ \gamma_i\varepsilon_i\left(-z_i + u_i\right) \\ \gamma_i\varepsilon_i\left(-u_i + z_i - \xi_i \right) \\ \gamma_i\left( -\xi_i + \tilde F_i(\bfs{u}, \bfs{\mu}) \right) \\ {2 \pi} \mathcal{R}_{i} \mu_i }, \label{eq: single agent dynamics} \end{align} or in equivalent collective form: \begin{align} \m{ \dot{\bfs{z}} \\ \dot{\bfs{u}} \\ \dot{\bfs{\xi}} \\ \dot{\bfs{\mu}} }=\m{ \bfs{\gamma}\bfs{\varepsilon}\left(-\bfs{z} + \bfs{u}\right) \\ \bfs{\gamma}\bfs{\varepsilon}\left(-\bfs{u} + \bfs{z} - \bfs{\xi} \right) \\ \bfs{\gamma}\left( - \bfs{\xi} + \tilde F(\bfs{u}, \bfs{\mu}) \right) \\ {2 \pi} \mathcal{R}_{\kappa}\bfs{\mu} },\label{eq: complete system} \end{align} where $z_i, \xi_i \in \mathbb{R}^{m_{i}}$ are filter states, $\mu_i \in \mathbb{S}^{m_i}$ are the oscillator states, $\varepsilon_i, \gamma_i \geq 0$ for all $i \in \mathcal{I}$, $\bfs{\varepsilon}
\coloneqq \diag{ (\varepsilon_i I_{m_i})_{i \in \mathcal{I}}}$, $\bfs{\gamma} \coloneqq \diag{ (\gamma_i I_{m_i})_{i \in \mathcal{I}}}$, $\mathcal{R}_{\kappa} \coloneqq \diag{(\mathcal{R}_i)_{i \in \mathcal{I}}}$, $\mathcal{R}_i \coloneqq \diag{(\col{[0, -\kappa_j], [\kappa_j, 0]})_{j \in \mathcal{M}_i}}$, $\kappa_i > 0$ for all $i$ and $\kappa_i \neq \kappa_j$ for $i \neq j$, $\mathcal{M}_j \coloneqq \{\sum_{i = 1}^{j-1}m_i + 1, \dots, \sum_{i = 1}^{j-1}m_i + m_j\}$, $\mathbb{D}^n \in \mathbb{R}^{n \times 2n}$ is a matrix that selects every odd row from a vector of size $2n$, $a_i > 0$ are small perturbation amplitude parameters, $A \coloneqq \diag{(a_i I_{m_i})_{i \in \mathcal{I}}}$, $J(\bfs{u}) = \diag{(J_i(u_i, \bfs{u}_{-i})I_{m_i})_{i \in \mathcal{I}}}$, $\tilde{F_i}(\bfs{u}, \bfs{\mu}) = \frac{2}{a_i}J_i(\bfs{u} + A \mathbb{D}^m \bfs{\mu}) \mathbb{D}^{m_i}{\mu_i}$ and $\tilde F(\bfs{u}, \bfs{\mu}) = 2A^{-1} J(\bfs{u} + A \mathbb{D}^m \bfs{\mu}) \mathbb{D}^m \bfs{\mu}$. Existence of solutions follows directly from \cite[Prop. 6.10]{goebel2009hybrid}, as the continuity of the right-hand side in \nref{eq: complete system} implies \cite[Assum. 6.5]{goebel2009hybrid}.\\ \\ Our main result is summarized in the following theorem. \begin{theorem} Let the Standing Assumptions hold and let $(\bfs{z}(t), \bfs{u}(t), \bfs{\xi}(t), \bfs{\mu}(t))_{t \geq 0}$ be a solution to \nref{eq: complete system} for arbitrary initial conditions. Then, the set $\mathcal{A} \times \mathbb{R}^m \times \mathbb{S}^m$ is SGPAS as $(\bar a, \bar\varepsilon, \bar\gamma) = (\max_{i \in \mathcal{I}} a_i, \max_{i \in \mathcal{I}}\varepsilon_i, \max_{i \in \mathcal{I}}\gamma_i) \rightarrow 0$.
\hspace*{\fill} $\square$ \end{theorem} \begin{proof} We rewrite the system in \nref{eq: complete system} as \begin{align} \m{ \dot{\bfs{z}} \\ \dot{\bfs{u}} \\ \dot{\bfs{\xi}} \\ }&=\m{ \bar\gamma \bar\varepsilon \tilde{\bfs{\gamma}} \tilde{\bfs{\varepsilon}}\left(-\bfs{z} + \bfs{u}\right) \\ \bar\gamma \bar\varepsilon \tilde{\bfs{\gamma}} \tilde{\bfs{\varepsilon}}\left(-\bfs{u} + \bfs{z} - \bfs{\xi}\right) \\ \bar\gamma \tilde{\bfs{\gamma}} \left( - \bfs{\xi} + \tilde F(\bfs{u}, \bfs{\mu}) \right) },\label{eq: complete system rewritten} \\ \dot{\bfs{\mu}} &= {2 \pi} \mathcal{R}_{\kappa}\bfs{\mu}, \label{eq: oscilator} \end{align} where $\bar\gamma \coloneqq \max_{i \in \mathcal{I}} \gamma_i$, $\bar\varepsilon \coloneqq \max_{i \in \mathcal{I}} \varepsilon_i$, $\tilde{\bfs{\gamma}} \coloneqq \bfs{\gamma}/{\bar \gamma}$ and $\tilde{\bfs{\varepsilon}} \coloneqq \bfs{\varepsilon}/{\bar \varepsilon}$. The system in \nref{eq: complete system rewritten}, \nref{eq: oscilator} is in singular perturbation form, where $\bar \gamma$ is the time-scale separation constant. The goal is to average the dynamics of $\bfs{z}, \bfs{u}, \bfs{\xi}$ along the solutions of $\bfs{\mu}$. For sufficiently small $\bar a \coloneqq \max_{i \in \mathcal{I}} a_i$, we can use the Taylor expansion to write down the cost functions as \begin{align} &J_i(\bfs{u} + A \mathbb{D}^m \bfs{\mu}) = J_i(u_i, \bfs{u}_{-i}) + a_i (\mathbb{D}^{m_i} \mu_i)^\top \nabla_{u_i}J_i(u_i, \bfs{u}_{-i}) \nonumber \\ &\quad + (A_{-i} \mathbb{D}^{m_{-i}} \bfs{\mu}_{-i})^\top \nabla_{u_{-i}}J_i(u_i, \bfs{u}_{-i}) + \mathcal{O}(\bar a^2), \label{eq: Taylor approx} \end{align} where $A_{-i} \coloneqq \diag{(a_j I_{m_j})_{j \in \mathcal{I} \setminus \{i\}}}$.
By the fact that the right-hand side of \nref{eq: complete system rewritten}, \nref{eq: oscilator} is continuous, by using \cite[Lemma 1]{poveda2020fixed} and by substituting \nref{eq: Taylor approx} into \nref{eq: complete system rewritten}, we derive the well-defined average of the complete dynamics: \begin{align} \m{ \dot{\bfs{z}}^\text{a} \\ \dot{\bfs{u}}^\text{a} \\ \dot{\bfs{\xi}}^\text{a} }=\m{ \bar\varepsilon \tilde{\bfs{\gamma}} \tilde{\bfs{\varepsilon}}\left(-\bfs{z}^\text{a} + \bfs{u}^\text{a}\right) \\ \bar\varepsilon \tilde{\bfs{\gamma}} \tilde{\bfs{\varepsilon}}\left(-\bfs{u}^\text{a} + \bfs{z}^\text{a} - \bfs{\xi}^\text{a} \right) \\ \tilde{\bfs{\gamma}} \left( - \bfs{\xi}^\text{a} + F(\bfs{u}^\text{a}) + \mathcal{O}(\bar a)\right) }.\label{eq: average dynamics with O(a)} \end{align} The system in \nref{eq: average dynamics with O(a)} is an $\mathcal{O}(\bar a)$ perturbed version of the nominal average dynamics: \begin{align} \m{ \dot{\bfs{z}}^\text{a} \\ \dot{\bfs{u}}^\text{a} \\ \dot{\bfs{\xi}}^\text{a} }=\m{ \bar\varepsilon \tilde{\bfs{\gamma}} \tilde{\bfs{\varepsilon}}\left(-\bfs{z}^\text{a} + \bfs{u}^\text{a}\right) \\ \bar\varepsilon \tilde{\bfs{\gamma}} \tilde{\bfs{\varepsilon}}\left(-\bfs{u}^\text{a} + \bfs{z}^\text{a} - \bfs{\xi}^\text{a} \right) \\ \tilde{\bfs{\gamma}} \left( - \bfs{\xi}^\text{a} + F(\bfs{u}^\text{a})\right) }.\label{eq: real nominal average dynamics} \end{align} For sufficiently small $\bar \varepsilon$, the system in \nref{eq: real nominal average dynamics} is in singular perturbation form with dynamics $\bfs{\xi}^\text{a}$ acting as fast dynamics. The boundary layer dynamics \cite[Eq. 
11.14]{khalil2002nonlinear} are given by \begin{align} \dot{\bfs{\xi}}^\text{a}_{\text{bl}} = \tilde{\bfs{\gamma}} \left( - \bfs{\xi}^\text{a}_{\text{bl}} + F(\bfs{u}^\text{a}_{\text{bl}})\right). \label{boundary layer dynamics} \end{align} For each fixed $\bfs{u}^\text{a}_{\text{bl}}$, $\{F(\bfs{u}^\text{a}_{\text{bl}})\}$ is a uniformly globally exponentially stable equilibrium point of the boundary layer dynamics. By \cite[Exm. 1]{wang2012analysis}, it holds that the system in \nref{eq: real nominal average dynamics} has a well-defined average system given by \begin{align} \m{ \dot{\bfs{z}}_\text{r} \\ \dot{\bfs{u}}_\text{r}}=\m{ \tilde{\bfs{\gamma}} \tilde{\bfs{\varepsilon}}\left(-\bfs{z}_\text{r} + \bfs{u}_\text{r}\right) \\ \tilde{\bfs{\gamma}} \tilde{\bfs{\varepsilon}}\left(-\bfs{u}_\text{r} + \bfs{z}_\text{r} - F(\bfs{u}_\text{r})\right) }. \label{eq: reduced system dynamics dynamics} \end{align} To prove that the system in \nref{eq: reduced system dynamics dynamics} renders the set $\mathcal{A}$ UGAS, we consider the following Lyapunov function candidate: \begin{align} V(\bfs{u}_\text{r}, \bfs{z}_\text{r}) = \tfrac{1}{2}\n{\bfs{z}_\text{r} - \bfs{u}^*}^2_{{\tilde{\bfs{\gamma}}^{-1} \tilde{\bfs{\varepsilon}}^{-1}}} + \tfrac{1}{2}\n{\bfs{u}_\text{r} - \bfs{u}^*}^2_{{\tilde{\bfs{\gamma}}^{-1} \tilde{\bfs{\varepsilon}}^{-1}}}.
\label{eq: lyapunov fun cand} \end{align} The time derivative of the Lyapunov candidate is then \begin{align} \dot{V}(\bfs{u}_\text{r}, \bfs{z}_\text{r}) &= \vprod{\bfs{z}_\text{r} - \bfs{u}^*}{-\bfs{z}_\text{r} + \bfs{u}_\text{r}} \nonumber \\ & \quad + \vprod{\bfs{u}_\text{r} - \bfs{u}^*}{-\bfs{u}_\text{r} + \bfs{z}_\text{r} - F(\bfs{u}_\text{r})} \nonumber \\ &= -\n{\bfs{u}_\text{r} - \bfs{z}_\text{r}}^2 - \vprod{\bfs{u}_\text{r} - \bfs{u}^*}{F(\bfs{u}_\text{r})} \nonumber \\ &= -\n{\bfs{u}_\text{r} - \bfs{z}_\text{r}}^2 - \vprod{\bfs{u}_\text{r} - \bfs{u}^*}{F(\bfs{u}_\text{r}) - F(\bfs{u}^*)} \nonumber \\ & \leq -\n{\bfs{u}_\text{r} - \bfs{z}_\text{r}}^2,\label{eq: imperfect vdot} \end{align} where the last two lines follow from \nref{eq: nash pseudogradient cond} and Standing Assumption 2. Let us define the following sets: \begin{align} \Omega_c &\coloneqq \{(\bfs{u}_\text{r}, \bfs{z}_\text{r}) \in \mathbb{R}^{2m} \mid V(\bfs{u}_\text{r}, \bfs{z}_\text{r}) \leq c\} \nonumber \\ \Omega_0 &\coloneqq \{(\bfs{u}_\text{r}, \bfs{z}_\text{r}) \in \Omega_c \mid \bfs{u}_\text{r} = \bfs{z}_\text{r} \} \nonumber \\ \mathcal{Z} &\coloneqq \{(\bfs{u}_\text{r}, \bfs{z}_\text{r}) \in \Omega_c \mid \dot{V}(\bfs{u}_\text{r}, \bfs{z}_\text{r}) = 0\} \nonumber \\ \mathcal{O} &\coloneqq \{(\bfs{u}_\text{r}, \bfs{z}_\text{r}) \in \Omega_c \mid (\bfs{u}_\text{r}(0), \bfs{z}_\text{r}(0)) \in \mathcal{Z} \implies \nonumber \\ &\quad \quad (\bfs{u}_\text{r}(t), \bfs{z}_\text{r}(t)) \in \mathcal{Z}\ \forall t \in \mathbb{R}\}, \end{align} where $\Omega_c$ is a compact level set chosen such that it is nonempty, $\mathcal{Z}$ is the set of zeros of the Lyapunov function candidate derivative, $\Omega_0$ is a superset of $\mathcal{Z}$, which follows from \nref{eq: imperfect vdot}, and $\mathcal{O}$ is the largest invariant set, as explained in \cite[Chp. 4.2]{khalil2002nonlinear}.
Then it holds that \begin{align} \Omega_c \supseteq \Omega_0 \supseteq \mathcal{Z} \supseteq \mathcal{O} \supseteq \mathcal{A}. \end{align} Firstly, since the right-hand side of \nref{eq: reduced system dynamics dynamics} is (locally) Lipschitz continuous, by \cite[Thm. 3.3]{khalil2002nonlinear} we conclude that solutions to \nref{eq: reduced system dynamics dynamics} exist and are unique in any compact set $\Omega_c$. Next, in order to prove convergence to a NE, we will show that $\mathcal{A} \equiv \mathcal{O}$, which is equivalent to saying that the only $\omega$-limit trajectories in $\mathcal{O}$ are the stationary points defined by $\mathcal{A}$. It is sufficient to prove that there cannot exist any positively invariant trajectories in $\Omega_0$ on any time interval $[t_1, t_2]$ where it holds that \begin{align} \m{\bfs{z}_\text{r}(t_1) \\ \bfs{u}_\text{r}(t_1)} \neq \m{\bfs{z}_\text{r}(t_2) \\ \bfs{u}_\text{r}(t_2)}\text{ and } \m{\bfs{z}_\text{r}(t_1) \\ \bfs{u}_\text{r}(t_1)},\m{\bfs{z}_\text{r}(t_2) \\ \bfs{u}_\text{r}(t_2)} \in \Omega_0. \label{contradicion} \end{align} For the sake of contradiction, let us assume otherwise: there exists at least one such trajectory, which would then be defined by the following differential equations \begin{align} \m{ \dot{\bfs{z}_\text{r}} \\ \dot{\bfs{u}_\text{r}}}=\m{ 0\\ - F(\bfs{u}_\text{r}) }. \label{eq: invariant traj} \end{align} For this trajectory, as $\dot{\bfs{z}_\text{r}} = 0$, it must hold that $\bfs{z}_\text{r}(t_1) = \bfs{z}_\text{r}(t_2)$, and from the properties of $\Omega_0$, it follows that $\bfs{u}_\text{r}(t_1) = \bfs{z}_\text{r}(t_1)$ and $\bfs{u}_\text{r}(t_2) = \bfs{z}_\text{r}(t_2)$. From these three statements we conclude that $\bfs{u}_\text{r}(t_1) = \bfs{u}_\text{r}(t_2)$. Moreover, by the definition of the trajectory in \nref{contradicion}, we must have $\bfs{u}_\text{r}(t_1) \neq \bfs{u}_\text{r}(t_2)$. Therefore, we have reached a contradiction.
Thus, there does not exist any positively invariant trajectory in $\Omega_0$ such that \nref{contradicion} holds. Hence, the only possible positively invariant trajectories are the ones where we have $\bfs{u}_\text{r}(t_1) = \bfs{u}_\text{r}(t_2)$ and $\bfs{z}_\text{r}(t_1) = \bfs{z}_\text{r}(t_2)$, which implies that $(\bfs{u}_\text{r}, \bfs{z}_\text{r}) \in \mathcal{A}$. Since the set $\mathcal{O}$ is a subset of the set $\Omega_0$, we conclude that the $\omega$-limit set is identical to the set $\mathcal{A}$. Therefore, by LaSalle's invariance principle \cite[Thm. 4.4]{khalil2002nonlinear}, we conclude that the set $\mathcal{A}$ is UGAS for the dynamics in \nref{eq: reduced system dynamics dynamics}.\\ \\ Next, by \cite[Thm. 2, Exm. 1]{wang2012analysis}, the dynamics in \nref{eq: real nominal average dynamics} render the set $\mathcal{A} \times \mathbb{R}^m$ SGPAS as $\bar \varepsilon \rightarrow 0$. As the right-hand side of the equations in \nref{eq: real nominal average dynamics} is continuous, the system is a well-posed hybrid dynamical system \cite[Thm. 6.30]{goebel2009hybrid}, and therefore the $\mathcal{O}(\bar a)$ perturbed system in \nref{eq: average dynamics with O(a)} renders the set $\mathcal{A} \times \mathbb{R}^m$ SGPAS as $(\bar \varepsilon, \bar a)\rightarrow 0$ \cite[Prop. A.1]{poveda2021robust}. By noticing that the set $\mathbb{S}^{m}$ is UGAS under the oscillator dynamics in \nref{eq: oscilator}, which generate a well-defined average system in \nref{eq: average dynamics with O(a)}, and by the averaging results in \cite[Thm. 7]{poveda2020fixed}, we obtain that the dynamics in \nref{eq: complete system} render the set $\mathcal{A} \times \mathbb{R}^m \times \mathbb{S}^{m}$ SGPAS as $(\bar \varepsilon, \bar a, \bar \gamma) \rightarrow 0$.
\end{proof} \section{Simulation examples} \subsection{Failure of the pseudogradient algorithm} Let us consider a classic example on which the standard pseudogradient algorithm fails \cite{grammatico2018comments}: \begin{align} J_1(u_1, u_2) &= (u_1 - u_1^*)(u_2 - u_2^*) \nonumber \\ J_2(u_1, u_2) &= -(u_1 - u_1^*)(u_2 - u_2^*), \label{eq: example} \end{align} where the game in \nref{eq: example} has a unique Nash equilibrium at $(u_1^*, u_2^*)$ and the pseudogradient is only monotone. We compare our algorithm with the original algorithm in \cite{frihauf2011nash} and a modified version in which we introduce additional filtering dynamics to improve the performance: \begin{align} \m{ \dot{\bfs{u}} \\ \dot{\bfs{\xi}} \\ \dot{\bfs{\mu}} }=\m{ -\bfs{\gamma}\bfs{\varepsilon} \bfs{\xi} \\ \bfs{\gamma}( - \bfs{\xi} + \tilde F(\bfs{u}, \bfs{\mu})) \\ {2 \pi} \mathcal{R}_{\kappa}\bfs{\mu} }. \end{align} As simulation parameters, we choose $a_i = 0.1$, $\gamma_i = 0.1$, $\varepsilon_i = 1$ for all $i$, $u_1^* = 2$, $u_2^* = -3$ and the frequency parameters $\kappa_i$ randomly in the range $[0, 1]$. We show the numerical results in Figures \ref{fig: u trajectories} and \ref{fig: u states}. The proposed NE seeking (NESC) algorithm steers the agents towards their NE, while both versions of the algorithm in \cite{frihauf2011nash} fail to converge and exhibit circular trajectories as in the full-information scenario \cite{grammatico2018comments}.
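The qualitative difference can be reproduced in a few lines by Euler-integrating the full-information averaged dynamics \nref{eq: reduced system dynamics dynamics} (i.e., dropping the excitation and filtering stages); the step size, horizon and initial conditions below are illustrative:

```python
import numpy as np

u_star = np.array([2.0, -3.0])
# pseudogradient of the bilinear game: monotone, but not strictly monotone
F = lambda u: np.array([u[1] - u_star[1], -(u[0] - u_star[0])])

dt, T = 0.01, 20000
u_pg = np.array([5.0, 5.0])          # plain pseudogradient flow: circles around the NE
u = np.array([5.0, 5.0])             # golden-ratio-type averaged dynamics
z = np.array([5.0, 5.0])             # auxiliary state ensuring convergence
for _ in range(T):
    u_pg = u_pg - dt * F(u_pg)
    z, u = z + dt * (u - z), u + dt * (-u + z - F(u))

print(np.linalg.norm(u_pg - u_star), np.linalg.norm(u - u_star))
```

After the run, the pseudogradient iterate is still far from $\bfs{u}^*$ (the Euler discretization even pushes it slowly outward), while the $(\bfs{z},\bfs{u})$ dynamics have converged to the NE.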
\begin{figure} \centering \includegraphics[width = \linewidth]{mnesc_vs_frihauf_vs_modified.pdf} \caption{Input trajectories from an initial condition $\bfs{u}(0)$ (denoted by {\color{green}$\circ$}) towards $\bfs{u}^*$ (denoted by {\color{violet} $*$}).} \label{fig: u trajectories} \end{figure} \begin{figure} \centering \includegraphics[width = \linewidth]{mnesc_vs_frihauf_vs_modified_response.pdf} \caption{Time evolution of inputs $u_1$ and $u_2$ for the proposed NESC algorithm (solid line), the original algorithm in \cite{frihauf2011nash} (dash dot line) and the filtered version of the algorithm in \cite{frihauf2011nash} (dashed lines). } \label{fig: u states} \end{figure} \subsection{Fixed demand problem} In the second simulation example, we consider the problem of determining the production outputs $u_i \in \mathbb{R}$ so that $N$ producers minimize their costs and meet a fixed demand $U_d \in \mathbb{R}$ (see the power generator examples in \cite{aunedi2008optimizing}, \cite{pantoja2011population}). The producers do not know the exact analytic form of their cost functions, which are given by: \begin{align} J_i(u_i, \lambda) = u_i(u_i - 2U_i) - \lambda u_i, \end{align} where the first term corresponds to the unknown part of their cost and $\lambda u_i$ corresponds to the profit made by selling the commodity at the price $\lambda$. The last agent in this game is the market regulator, whose goal is to balance supply and demand via the commodity price, by adopting the following cost function: \begin{align} J_{N+1}(\bfs{u}, \lambda) = \lambda (-U_d + \Sigma_{i = 1}^N u_i). \end{align} The producers and the market regulator use the algorithm in \nref{eq: single agent dynamics} to determine the production outputs and the price, although the market regulator uses the real value of its gradient, which is measurable as the discrepancy between supply and demand.
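This game's NE is available in closed form, which is convenient as a reference for the simulation: setting the pseudogradient to zero gives $u_i = U_i + \lambda/2$ and the market-clearing price $\lambda^* = 2(U_d - \sum_i U_i)/N$. A quick numerical check (the helper names are ours):

```python
import numpy as np

U = np.array([172.0, 47.0, 66.0])    # producer parameters U_i (kW)
U_d = 350.0                          # fixed demand
N = len(U)

# pseudogradient of the game: grad_{u_i} J_i = 2 u_i - 2 U_i - lam,
# grad_lam J_{N+1} = -U_d + sum(u)
def F(u, lam):
    return np.append(2 * u - 2 * U - lam, -U_d + u.sum())

lam_star = 2 * (U_d - U.sum()) / N   # market-clearing price
u_star = U + lam_star / 2            # producer outputs at the NE

assert np.allclose(F(u_star, lam_star), 0.0)   # (u*, lam*) zeros the pseudogradient
```

With the parameters used below, $\lambda^* = 130/3 \approx 43.33$ and the outputs sum exactly to $U_d$.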
In the simulations, we use the following parameters: $N = 3$, $(U_1, U_2, U_3) = (172, 47, 66)$~kW, $a_1 = a_2 = a_3 = 20$, $(\kappa_1, \kappa_2, \kappa_3) = (0.1778, 0.1238, 0.1824)$, $\varepsilon_i = \tfrac{1}{3}$, $\gamma_i = 0.02$ for all $i$, $U_d = 350$ and zero initial conditions. In Figure \ref{fig:second_examplel}, we observe that the agents converge to the NE of the game. Additionally, we test the sensitivity of the commodity price with respect to additive measurement noise that obeys a Gaussian distribution with zero mean and standard deviation $\sigma$ for all producers. For different values of the standard deviation $\sigma$, we perform 200 numerical simulations. Next, we take the last 250 seconds of each simulation and sample it every 1 second. We group the resulting prices $\lambda(t)$ into bins of width 0.05, and plot the frequency of each bin. Three such plots are shown in Figure \ref{fig:second_example_distribution}. We observe that the price frequency plots seem to follow a Gaussian-like distribution. \begin{figure} \centering \includegraphics[width=\linewidth]{step_response_3noises.pdf} \caption{Time evolution of states in the fixed demand problem.} \label{fig:second_examplel} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{distribution_lambda_3noises.pdf} \caption{Price frequency distribution for three different cases.} \label{fig:second_example_distribution} \end{figure} \section{Conclusion} Monotone Nash equilibrium problems without constraints can be solved via zeroth-order methods that leverage the properties of the continuous-time Golden ratio algorithm and ESC theory based on hybrid dynamical systems. \bibliographystyle{IEEEtran}
\section{Introduction} Stochastic scheduling over multi-class queueing systems has important applications such as CPU scheduling, request processing in web servers, and QoS provisioning to different types of traffic in a telecommunication network. In these systems, power management is increasingly important due to their massive energy consumption. To study this problem, in this paper we consider a single-server multi-class queueing system whose instantaneous service rate is controllable by dynamic power allocations. This is modeled as a nonpreemptive multi-class $M/G/1$ queue with $N$ job classes $\{1, \ldots, N\}$, and the goal is to optimize average queueing delays of all job classes and average power consumption in this queueing network. We consider four delay and power control problems: \begin{enumerate} \item Designing a policy that yields average queueing delay $\overline{W}_{n}$ of class $n$ satisfying $\overline{W}_{n} \leq d_{n}$ for all classes, where $\{d_{1}, \ldots, d_{N}\}$ are given feasible delay bounds. Here we assume a fixed power allocation and no power control. \item Minimizing a separable convex function $\sum_{n=1}^{N} f_{n}(\overline{W}_{n})$ of average queueing delays $(\overline{W}_{n})_{n=1}^{N}$ subject to delay constraints $\overline{W}_{n} \leq d_{n}$ for all classes $n$; assuming a fixed power allocation and no power control. \item Under dynamic power allocation, minimizing average power consumption subject to delay constraints $\overline{W}_{n} \leq d_{n}$ for all classes $n$. \item Under dynamic power allocation, minimizing a separable convex function $\sum_{n=1}^{N} f_{n}(\overline{W}_{n})$ of average queueing delays $(\overline{W}_{n})_{n=1}^{N}$ subject to an average power constraint. \end{enumerate} These problems are presented in order of increasing complexity, so that readers can gradually familiarize themselves with our methodology.
Each of the above problems is highly nontrivial; novel yet simple approaches are therefore needed. This paper provides such a framework by connecting two powerful stochastic optimization theories: the \emph{achievable region approach} in queueing systems, and the \emph{Lyapunov optimization theory} in wireless networks. In queueing systems, the achievable region approach that treats optimal control problems as mathematical programming ones has been fruitful; see~\cite{Yao02lecture, BPT94, Ber95, Nin09} for a detailed survey. In a nonpreemptive multi-class $M/G/1$ queue, it is known that the collection of all feasible average queueing delay vectors forms a special polytope (a base of a polymatroid) with vertices being the performance vectors of strict priority policies (\cite{FaG88a}, see Section~\ref{sec:1104} for more details). As a result, every feasible average queueing delay vector is attainable by a randomization of strict priority policies. Such randomization can be implemented in a frame-based style, where a priority ordering is randomly drawn in every busy period from a single probability distribution common to all busy periods (see Lemma~\ref{lem:2201} in Section~\ref{sec:1104}). This view of the delay performance region is useful in the first two delay control problems. In addition to queueing delay, when dynamic power control is part of the decision space, it is natural to consider dynamic policies that allocate a fixed power in every busy period. The resulting joint power and delay performance region is then spanned by frame-based randomizations of power control and strict priority policies. We treat the last two delay and power control problems as stochastic optimization over such a performance region (see Section~\ref{sec:1103} for an example). With the above characterization of performance regions, we solve the four control problems using \emph{Lyapunov optimization theory}.
This theory was originally developed for stochastic optimal control over time-slotted wireless networks~\cite{TaE92,TaE93}, later extended in~\cite{GNT06,Nee03thesis} to allow the optimization of various performance objectives such as average power~\cite{Nee06} or throughput utility~\cite{NML08}, and recently generalized to optimize dynamic systems that have a renewal structure~\cite{Nee09conf,Nee10conf,Nee10book,LaN10arXivb}. The Lyapunov optimization theory transforms time average constraints into virtual queues that need to be stabilized. Using a Lyapunov drift argument, we construct frame-based policies to solve the four control problems. The resulting policy is a sequence of \emph{base policies} implemented frame by frame, where the collection of all base policies spans the performance region through time sharing or randomization. The base policy used in each frame is chosen by minimizing a ratio of an expected ``drift plus penalty'' sum over the expected frame size, where the ratio is a function of past queueing delays in all job classes. In this paper the base policies are nonpreemptive strict priority policies with deterministic power allocations. Our methodology is as follows. Since the performance region is characterized by the collection of all randomizations of base policies, for each control problem there exists an optimal random mixture of base policies that solves it. Although the probability distribution that defines the optimal random mixture is unknown, we construct a dynamic policy using Lyapunov optimization theory. This policy makes greedy decisions in every frame, stabilizes all virtual queues (thus satisfying all time average constraints), and yields near-optimal performance. The \emph{existence} of the optimal randomized policy is essential to prove these results.
In our policies for the four control problems, requests of different classes are prioritized by a dynamic $c\mu$ rule~\cite{Yao02lecture} which, in every busy period, assigns priorities in the decreasing order of weights associated with each class. The weights of all classes are updated at the end of every busy period by simple queue-like rules (so that different priorities may be assigned in different busy periods), which capture the running difference between the current and the desired performance. The dynamic $c\mu$ rule in the first problem does not require any statistical knowledge of arrivals and service times. The policy for the second problem requires only the mean but not higher moments of arrivals and service times. In the last two problems with dynamic power control, besides the dynamic $c\mu$ rules, a power level is allocated in every busy period by optimizing a weighted sum of power and power-dependent average delays. The policies for the third and the last problem require the mean and the first two moments of arrivals and service times, respectively, because of dynamic power allocations. In each of the last three problems, our policies yield performance that is at most $O(1/V)$ away from the optimum, where $V>0$ is a control parameter that can be chosen sufficiently large to yield near-optimal performance. The tradeoff is that larger values of $V$ increase the time required to meet the time average constraints. In this paper we also propose a \emph{proportional delay fairness} criterion, in the same spirit as the well-known rate proportional fairness~\cite{Kel97} or utility proportional fairness~\cite{WPL06}, and show that the corresponding delay objective functions are quadratic. Overall, since our policies use dynamic $c\mu$ rules with simple weight updates, and require limited statistical knowledge, they scale gracefully with the number of job classes and are suitable for online implementation.
In the literature, work~\cite{FaG88b} characterizes multi-class $G/M/c$ queues that have polymatroidal performance regions, and provides two numerical methods to minimize a separable convex function of average delays as an unconstrained static optimization problem. However, it is unclear in~\cite{FaG88b} how to control the queueing system to achieve the optimal performance. Minimizing a convex holding cost in a single-server multi-class queue is formulated as a restless bandit problem in~\cite{AGN03,GLA03}, and Whittle's index policies~\cite{Whi88} are constructed as a heuristic solution. Work~\cite{Mie95} proposes a generalized $c\mu$ rule to minimize a convex holding cost over a finite horizon in a multi-class queue, and shows it is asymptotically optimal under heavy traffic. This paper provides a dynamic control algorithm for the minimization of convex functions of average delays. In particular, we consider additional time average power and delay constraints, and our solutions require limited statistical knowledge and have provable near-optimal performance. This paper also applies to power-aware scheduling problems in computer systems. These problems are widely studied in different contexts, where two main analytical tools are competitive analysis~\cite{ALW10conf, BCP09conf, AWT09conf,AaF07,BPS07conf} and $M/G/1$-type queueing theory (see~\cite{WAT09conf} and references therein), both used to optimize metrics such as a weighted sum of average power and delay. This paper presents a fundamentally different approach for more directed control over average power and delays, and considers a multi-class setup with time average constraints. In the rest of the paper, the detailed queueing model is given in Section~\ref{sec:702}, followed by a summary of useful $M/G/1$ properties in Section~\ref{sec:1104}. The four delay-power control problems are solved in Sections~\ref{sec:1803}--\ref{sec:1801}, followed by simulation results.
\section{Queueing Model} \label{sec:702} We only consider \emph{queueing delay}, not \emph{system delay} (queueing plus service) in this paper. System delay can be easily incorporated since, in a nonpreemptive system, average queueing and system delay differ only by a constant (the average service time). We will use ``delay'' and ``queueing delay'' interchangeably in the rest of the paper. Consider a single-server queueing system processing jobs categorized into $N$ classes. In each class $n\in \{1, 2, \ldots, N\}$, jobs arrive as a Poisson process with rate $\lambda_{n}$. Each class $n$ job has size $S_{n}$. We assume $S_{n}$ is i.i.d. in each class, independent across classes, and that the first four moments of $S_{n}$ are finite for all classes $n$. The system processes arrivals nonpreemptively with instantaneous service rate $\mu(P(t))$, where $\mu(\cdot)$ is a concave, continuous, and nondecreasing function of the allocated power $P(t)$ (the concavity of rate-power relationship is observed in computer systems~\cite{KaM08book,WAT09conf,GHD09conf}). Within each class, arrivals are served in a first-in-first-out fashion. We consider a frame-based system, where each frame consists of an idle period and the following busy period. Let $t_{k}$ be the start of the $k$th frame for each $k\in\mathbb{Z}^{+}$; the $k$th frame is $[t_{k}, t_{k+1})$. Define $t_{0} = 0$ and assume the system is initially empty. Define $T_{k} \triangleq t_{k+1} - t_{k}$ as the size of frame $k$. Let $A_{n,k}$ denote the set of class $n$ arrivals in frame $k$. For each job $i\in A_{n,k}$, let $W_{n,k}^{(i)}$ denote its queueing delay. The control over this queueing system is power allocations and job scheduling across all classes. 
We restrict attention to the following frame-based policies that are both \emph{causal} and \emph{work-conserving}:\footnote{\emph{Causality} means that every control decision depends only on the current and past states of the system; \emph{work-conserving} means that the server is never idle when there is still work to do.} \begin{quote} In every frame $k\in\mathbb{Z}^{+}$, use a fixed power level $P_{k} \in[P_{\text{min}}, P_{\text{max}}]$ and a nonpreemptive strict priority policy $\pi(k)$ for the duration of the busy period in that frame. The decisions are possibly random. \end{quote} In these policies, $P_{\text{max}}$ denotes the maximum power allocation. We assume $P_{\text{max}}$ is finite, but sufficiently large to ensure feasibility of the desired delay constraints. The minimum power $P_{\text{min}}$ is chosen to be large enough so that the queue is stable even if power $P_{\text{min}}$ is used for all time. In particular, for stability we need \[ \sum_{n=1}^{N} \lambda_{n} \frac{\expect{S_{n}}}{ \mu(P_{\text{min}})} < 1 \Rightarrow \mu(P_{\text{min}}) > \sum_{n=1}^{N} \lambda_{n} \expect{S_{n}}. \] The strict priority rule $\pi(k) = (\pi_{n}(k))_{n=1}^{N}$ is represented by a permutation of $\{1, \ldots, N\}$, where class $\pi_{n}(k)$ gets the $n$th highest priority. The motivation for focusing on the above frame-based policies is to simplify the control of the queueing system to achieve complex performance objectives. Simulations in~\cite{AGM99}, however, suggest that this method may incur higher variance in performance than policies that take control actions based on job occupancies in the queue. Yet, the problems considered in this paper seem difficult to attack via job-level scheduling, which may involve solving high-dimensional (partially observable) Markov decision processes with time average power and delay constraints and convex holding costs.
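The stability condition above pins down the smallest admissible $P_{\text{min}}$ once a rate-power curve is fixed. The paper only assumes $\mu(\cdot)$ is concave, continuous, and nondecreasing; the sketch below uses $\mu(P) = \sqrt{P}$ as a hypothetical instance, with made-up arrival rates and mean job sizes:

```python
# Illustrative check of the stability condition mu(P_min) > sum_n lambda_n E[S_n].
# mu(P) = sqrt(P) is a hypothetical concave nondecreasing rate-power curve;
# the rates and mean job sizes below are made up for illustration.
import math

lam = [0.2, 0.3]          # Poisson arrival rates lambda_n (assumed)
ES = [1.0, 2.0]           # mean job sizes E[S_n] (assumed)

def mu(P):                # assumed rate-power curve
    return math.sqrt(P)

work_rate = sum(l * s for l, s in zip(lam, ES))   # required service rate: 0.8
P_min_threshold = work_rate ** 2                  # mu(P) > 0.8  <=>  P > 0.64

P_min = 0.7               # any choice strictly above the threshold works
rho = work_rate / mu(P_min)
assert P_min > P_min_threshold
assert rho < 1            # total load under constant power P_min is stable
```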
\subsection{Definition of Average Delay} \label{sec:2404} The average delay under policies we propose later may not have well-defined limits. Thus, inspired by~\cite{Nee10conf}, we define \begin{equation} \label{eq:1103} \overline{W}_{n} \triangleq \limsup_{K\to\infty} \frac{ \expect{\sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}} }{\expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}}} \end{equation} as the average delay of class $n\in\{1, \ldots, N\}$, where $\abs{A_{n,k}}$ is the number of class $n$ arrivals during frame $k$. We only consider delay sampled at frame boundaries for simplicity. To verify~\eqref{eq:1103}, note that the running average delay of class $n$ jobs up to time $t_{K}$ is equal to \[ \frac{\sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}}{\sum_{k=0}^{K-1} \abs{A_{n,k}}} = \frac{\frac{1}{K}\sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}}{ \frac{1}{K} \sum_{k=0}^{K-1} \abs{A_{n,k}}}. \] Define \[ w_{n}^{\text{av}} \triangleq \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}, \ a_{n}^{\text{av}} \triangleq \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} \abs{A_{n,k}}. \\ \] If both limits $w_{n}^{\text{av}}$ and $a_{n}^{\text{av}}$ exist, the ratio $w_{n}^{\text{av}}/a_{n}^{\text{av}}$ is the limiting average delay for class $n$. 
In this case, we get \begin{equation} \label{eq:1102} \begin{split} \overline{W}_{n} &= \frac{ \lim_{K\to\infty} \expect{ \frac{1}{K} \sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}} }{\lim_{K\to\infty} \expect{ \frac{1}{K}\sum_{k=0}^{K-1} \abs{A_{n,k}}}} \\ &= \frac{ \expect{ \lim_{K\to\infty} \frac{1}{K} \sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}} }{ \expect{ \lim_{K\to\infty} \frac{1}{K}\sum_{k=0}^{K-1} \abs{A_{n,k}}}} = \frac{w_{n}^{\text{av}}}{a_{n}^{\text{av}}}, \end{split} \end{equation} which shows $\overline{W}_{n}$ is indeed the limiting average delay.\footnote{The second equality in~\eqref{eq:1102}, where we pass the limit into the expectation, can be proved by a generalized Lebesgue's dominated convergence theorem stated as follows. Let $\{X_{n}\}_{n=1}^{\infty}$ and $\{Y_{n}\}_{n=1}^{\infty}$ be two sequences of random variables such that: (1) $ 0\leq \abs{X_{n}} \leq Y_{n}$ with probability $1$ for all $n$; (2) For some random variables $X$ and $Y$, $X_{n}\to X$ and $Y_{n}\to Y$ with probability $1$; (3) $\lim_{n\to\infty} \expect{Y_{n}} = \expect{Y} < \infty$. Then $\expect{X}$ is finite and $\lim_{n\to\infty} \expect{X_{n}} = \expect{X}$. The details are omitted for brevity.} The definition in~\eqref{eq:1103} replaces $\lim$ by $\limsup$ to guarantee it is well-defined. \section{Preliminaries} \label{sec:1104} This section summarizes useful properties of a nonpreemptive multi-class $M/G/1$ queue. Here we assume a fixed power allocation $P$ and a fixed service rate $\mu(P)$ (this is extended in Section~\ref{sec:powercontrol}). Let $X_{n} \triangleq S_{n}/\mu(P)$ be the service time of a class $n$ job. Define $\rho_{n} \triangleq \lambda_{n} \expect{X_{n}}$. Fix an arrival rate vector $(\lambda_{n})_{n=1}^{N}$ satisfying $\sum_{n=1}^{N} \rho_{n} < 1$; the rate vector $(\lambda_{n})_{n=1}^{N}$ is supportable in the queueing network. 
For each $k\in \mathbb{Z}^{+}$, let $I_{k}$ and $B_{k}$ denote the $k$th idle and busy period, respectively; the frame size $T_{k} = I_{k} + B_{k}$. The distribution of $B_{k}$ (and $T_{k}$) is fixed under any work-conserving policy, since the sample path of unfinished work in the system is independent of scheduling policies. Due to the memoryless property of Poisson arrivals, we have $\expect{I_{k}} = 1/(\sum_{n=1}^{N} \lambda_{n})$ for all $k$. For the same reason, the system renews itself at the start of each frame. Consequently, the frame size $T_{k}$, busy period $B_{k}$, and the per-frame job arrivals $\abs{A_{n,k}}$ of class $n$, are all i.i.d. over $k$. Using renewal reward theory~\cite{Ros96book} with renewal epochs defined at frame boundaries $\{t_{k}\}_{k=0}^{\infty}$, we have: \begin{align} &\expect{T_{k}} = \frac{\expect{I_{k}}}{1-\sum_{n=1}^{N} \rho_{n}} = \frac{1}{(1-\sum_{n=1}^{N} \rho_{n})\sum_{n=1}^{N} \lambda_{n}} \label{eq:2102} \\ &\expect{\abs{A_{n,k}}} = \lambda_{n} \expect{T_{k}},\, \forall n\in\{1, \ldots, N\},\, \forall k\in\mathbb{Z}^{+}. \label{eq:2101} \end{align} It is useful to consider the randomized policy $\pi_{\text{rand}}$ that is defined by a given probability distribution over all possible $N!$ priority orderings. Specifically, policy $\pi_{\text{rand}}$ randomly selects priorities at the beginning of every new frame according to this distribution, and implements the corresponding nonpreemptive priority rule for the duration of the frame. 
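The renewal quantities in \eqref{eq:2102}--\eqref{eq:2101} are simple to evaluate, and a $\pi_{\text{rand}}$ policy only needs to resample a priority ordering at each frame boundary from one fixed distribution. A sketch with made-up rates and mean service times (the uniform distribution over orderings is purely illustrative):

```python
import itertools
import random

lam = [0.2, 0.3]                      # arrival rates lambda_n (assumed)
EX = [1.0, 1.0]                       # mean service times E[X_n] (assumed)
rho = [l * x for l, x in zip(lam, EX)]
assert sum(rho) < 1                   # stability

# Renewal formulas (eq:2102)-(eq:2101):
E_I = 1.0 / sum(lam)                  # mean idle period E[I_k]
E_T = E_I / (1.0 - sum(rho))          # mean frame size E[T_k]
E_A = [l * E_T for l in lam]          # mean class-n arrivals per frame

# A pi_rand policy: one fixed distribution over the N! priority orderings,
# resampled independently at the start of every frame.
orderings = list(itertools.permutations(range(len(lam))))
probs = [1.0 / len(orderings)] * len(orderings)   # uniform, for illustration
rng = random.Random(0)
priorities_frame_k = rng.choices(orderings, weights=probs, k=1)[0]
```

With these numbers, $\expect{I_k} = 2$, $\expect{T_k} = 4$, and $\expect{\abs{A_{n,k}}} = (0.8, 1.2)$.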
Again by renewal reward theory, the average queueing delays $(\overline{W}_{n})_{n=1}^{N}$ rendered by a $\pi_{\text{rand}}$ policy satisfy in each frame $k\in\mathbb{Z}^{+}$: \begin{equation} \label{eq:623} \expect{\sum_{i\in A_{n,k}} W_{n,k}^{(i)}} = \expect{\int_{t_{k}}^{t_{k+1}} Q_{n}(t)\, \mathrm{d} t} = \lambda_{n} \overline{W}_{n} \expect{T_{k}}, \end{equation} where we recall that $W_{n,k}^{(i)}$ represents only the queueing delay (not including service time), and $Q_{n}(t)$ denotes the number of class $n$ jobs waiting in the queue (not including that in the server) at time $t$. Next we summarize useful properties of the performance region of average queueing delay vectors $(\overline{W}_{n})_{n=1}^{N}$ in a nonpreemptive multi-class $M/G/1$ queue. For these results we refer readers to~\cite{FaG88a, Yao02lecture, BaG92book} for a detailed introduction. Define the value $x_{n} \triangleq \rho_{n} \overline{W}_{n}$ for each class $n\in\{1, \ldots, N\}$, and denote by $\Omega$ the performance region of the vector $(x_{n})_{n=1}^{N}$. The set $\Omega$ is a special polytope called \emph{(a base of) a polymatroid}~\cite{Wel76book}. An important property of the polymatroid $\Omega$ is: (1) Each vertex of $\Omega$ is the performance vector of a strict nonpreemptive priority rule; (2) Conversely, the performance vector of each strict nonpreemptive priority rule is a vertex of $\Omega$. In other words, there is a one-to-one mapping between vertices of $\Omega$ and the set of strict nonpreemptive priority rules. As a result, every feasible performance vector $(x_{n})_{n=1}^{N} \in \Omega$, or equivalently every feasible queueing delay vector $(\overline{W}_{n})_{n=1}^{N}$, is attained by a randomization of strict nonpreemptive priority policies. For completeness, we formalize the last known result in the next lemma. 
\begin{lem} \label{lem:2201} In a nonpreemptive multi-class $M/G/1$ queue, define \[ \mathcal{W} \triangleq \left\{ (\overline{W}_{n})_{n=1}^{N} \mid (\rho_{n} \overline{W}_{n})_{n=1}^{N} \in \Omega \right\} \] as the performance region~\cite{FaG88a} of average queueing delays. Then: \begin{enumerate} \item The performance vector $(\overline{W}_{n})_{n=1}^{N}$ of each frame-based randomized policy $\pi_{\text{rand}}$ is in the delay region $\mathcal{W}$. \item Conversely, every vector $(\overline{W}_{n})_{n=1}^{N}$ in the delay region $\mathcal{W}$ is the performance vector of a $\pi_{\text{rand}}$ policy. \end{enumerate} \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:2201}] Given in Appendix~\ref{sec:2204}. \end{proof} Optimizing a linear function over the polymatroidal region $\Omega$ will be useful. The solution is the following $c\mu$ rule: \begin{lem}[The $c\mu$ rule~\cite{Yao02lecture,BaG92book}] \label{lem:603} In a nonpreemptive multi-class $M/G/1$ queue, define $x_{n} \triangleq \rho_{n} \overline{W}_{n}$ and consider the linear program: \begin{align} \text{minimize:} &\quad \sum_{n=1}^{N} c_{n}\, x_{n} \label{eq:1301} \\ \text{subject to:} &\quad (x_{n})_{n=1}^{N} \in \Omega \label{eq:1302} \end{align} where $c_{n}$ are nonnegative constants. We assume $\sum_{n=1}^{N} \rho_{n} < 1$ for stability, and that second moments $\expect{X_{n}^{2}}$ of service times are finite for all classes $n$. The optimal solution to~\eqref{eq:1301}-\eqref{eq:1302} is a strict nonpreemptive priority policy that assigns priorities in the decreasing order of $c_{n}$. That is, if $c_{1} \geq c_{2} \geq \ldots \geq c_{N}$, then class $1$ gets the highest priority, class $2$ gets the second highest priority, and so on.
In this case, the optimal average queueing delay $\overline{W}_{n}^{*}$ of class $n$ is \[ \overline{W}_{n}^{*} = \frac{R}{(1-\sum_{k=0}^{n-1} \rho_{k})(1-\sum_{k=0}^{n} \rho_{k})}, \] where $\rho_{0} \triangleq 0$ and $R\triangleq \frac{1}{2} \sum_{n=1}^{N} \lambda_{n} \expect{X_{n}^{2}}$. \end{lem} \section{Achieving Delay Constraints} \label{sec:1803} The first problem we consider is to construct a frame-based policy that yields average delays satisfying $\overline{W}_{n} \leq d_{n}$ for all classes $n \in \{1, \ldots, N\}$, where $d_{n}>0$ are given constants. We assume a fixed power allocation and that the delay constraints are feasible. Our solution relies on tracking the running difference between past queueing delays for each class $n$ and the desired delay bound $d_{n}$. For each class $n\in\{1, \ldots, N\}$, we define a discrete-time \emph{virtual delay queue} $\{Z_{n,k}\}_{k=0}^{\infty}$ where $Z_{n,k+1}$ is updated at frame boundary $t_{k+1}$ following the equation \begin{equation} \label{eq:601} Z_{n,k+1} = \max\left[ Z_{n,k} + \sum_{i\in A_{n,k}} \left(W_{n,k}^{(i)} - d_{n}\right), \, 0\right]. \end{equation} Assume $Z_{n,0} = 0$ for all $n$. In~\eqref{eq:601}, the delays $W_{n,k}^{(i)}$ and the constant $d_{n}$ can be viewed as arrivals and service of the queue $\{Z_{n,k}\}_{k=0}^{\infty}$, respectively. If this queue is stabilized, we know that the average arrival rate to the queue (being the per-frame average sum of class $n$ delays $\sum_{i\in A_{n,k}} W_{n,k}^{(i)}$) is less than or equal to the average service rate (being the value $d_{n}$ multiplied by the average number of class $n$ arrivals per frame), from which we infer $\overline{W}_{n} \leq d_{n}$. This is formalized below. \begin{defn} We say queue $\{Z_{n,k}\}_{k=0}^{\infty}$ is mean rate stable if $\lim_{K\to\infty} \expect{Z_{n,K}}/ K = 0$. \end{defn} \begin{lem} \label{lem:602} If queue $\{Z_{n,k}\}_{k=0}^{\infty}$ is mean rate stable, then $\overline{W}_{n} \leq d_{n}$.
\end{lem} \begin{proof}[Proof of Lemma~\ref{lem:602}] From~\eqref{eq:601} we get \[ Z_{n,k+1} \geq Z_{n,k} - d_{n} \abs{A_{n,k}} + \sum_{i\in A_{n,k}} W_{n,k}^{(i)}. \] Summing the above over $k\in\{0, \ldots, K-1\}$, using $Z_{n,0}=0$, and taking expectation yields \[ \expect{Z_{n,K}} \geq - d_{n} \expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}} + \expect{\sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}}. \] Dividing the above by $\expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}}$ yields \[ \frac{\expect{Z_{n,K}}}{\expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}}} \geq \frac{ \expect{\sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}}}{\expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}}} - d_{n}. \] Taking a $\limsup$ as $K\to\infty$ and using~\eqref{eq:1103} yields \[ \begin{split} \overline{W}_{n} \leq d_{n} + \limsup_{K\to\infty} \frac{\expect{Z_{n,K}}}{K} \frac{K}{\expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}}}. \end{split} \] Using $\expect{\abs{A_{n,k}}} = \lambda_{n} \expect{T_{k}} \geq \lambda_{n} \expect{I_{k}} = \lambda_{n} \expect{I_{0}}$, we get \[ \overline{W}_{n} \leq d_{n} + \frac{1}{\lambda_{n} \expect{I_{0}}} \lim_{K\to\infty} \frac{\expect{Z_{n,K}}}{K} = d_{n} \] by mean rate stability of $Z_{n,k}$. \end{proof} \subsection{Delay Feasible Policy} \label{sec:701} The following policy stabilizes every $\{Z_{n,k}\}_{k=0}^{\infty}$ queue in the mean rate stable sense and thus achieves $\overline{W}_{n}\leq d_{n}$ for all classes $n$. \underline{\textit {Delay Feasible ($\mathsf{DelayFeas}$) Policy:}} \begin{itemize} \item In every frame $k\in\mathbb{Z}^{+}$, update $Z_{n,k}$ by~\eqref{eq:601} and serve jobs using nonpreemptive strict priorities assigned in the decreasing order of $Z_{n,k}$; ties are broken arbitrarily. \end{itemize} We note that the $\mathsf{DelayFeas}$ policy does not require any statistical knowledge of job arrivals and service times. 
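A single frame of the $\mathsf{DelayFeas}$ bookkeeping consists of sorting classes by $Z_{n,k}$ and then applying the update~\eqref{eq:601}. The sketch below shows just this logic; the per-job delays $W_{n,k}^{(i)}$ would come from an actual simulation of the queue, and the numbers in the illustration are made up:

```python
def delayfeas_frame(Z, frame_delays, d):
    """One DelayFeas frame: pick priorities from Z, then update Z by (eq:601).

    Z[n]            -- current virtual delay queue Z_{n,k}
    frame_delays[n] -- queueing delays W_{n,k}^{(i)} of class-n jobs in frame k
    d[n]            -- delay bound d_n
    """
    # Priorities for this frame: decreasing Z_{n,k} (ties broken arbitrarily,
    # here by class index via the stable sort).
    priority_order = sorted(range(len(Z)), key=lambda n: -Z[n])
    # Virtual queue update (eq:601) at the end of the frame.
    Z_next = [max(Z[n] + sum(w - d[n] for w in frame_delays[n]), 0.0)
              for n in range(len(Z))]
    return priority_order, Z_next

# Tiny illustration with two classes and made-up per-frame delays:
order, Z = delayfeas_frame(Z=[0.0, 0.0],
                           frame_delays=[[1.5, 0.5], [0.2]],
                           d=[0.8, 0.8])
# class 0 overshot its bound: Z_0 = max(0 + (1.5-0.8)+(0.5-0.8), 0) = 0.4
# class 1 stayed below it:    Z_1 = max(0 + (0.2-0.8), 0)          = 0.0
```

In the next frame, class $0$ would therefore receive the higher priority, which is exactly the corrective behavior described below.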
Intuitively, each $Z_{n,k}$ queue tracks the amount of past queueing delays in class $n$ exceeding the desired delay bound $d_{n}$ (see~\eqref{eq:601}), and the $\mathsf{DelayFeas}$ policy gives priorities to classes that more severely violate their delay constraints. \subsection{Motivation of the $\mathsf{DelayFeas}$ Policy} \label{sec:2202} The structure of the $\mathsf{DelayFeas}$ policy follows from a Lyapunov drift argument. Define vector $\bm{Z}_{k} \triangleq (Z_{n,k})_{n=1}^{N}$. For some finite constants $\theta_{n} >0$ for all classes $n$, we define the \emph{quadratic Lyapunov function} \[ L(\bm{Z}_{k}) \triangleq \frac{1}{2} \sum_{n=1}^{N} \theta_{n} Z_{n,k}^{2} \] as a weighted scalar measure of queue sizes $(Z_{n,k})_{n=1}^{N}$. Define the \emph{one-frame Lyapunov drift} \[ \Delta(\bm{Z}_{k}) \triangleq \expect{L(\bm{Z}_{k+1}) - L(\bm{Z}_{k}) \mid \bm{Z}_{k}} \] as the conditional expected difference of $L(\bm{Z}_{k})$ over a frame. Taking the square of~\eqref{eq:601} and using $(\max[a, 0])^{2} \leq a^{2}$ yields \begin{equation} \label{eq:1602} Z_{n,k+1}^{2} \leq \left[Z_{n,k} + \sum_{i\in A_{n,k}} \left(W_{n,k}^{(i)} - d_{n}\right)\right]^{2}. \end{equation} Multiplying~\eqref{eq:1602} by $\theta_{n}/2$, summing over $n\in\{1, \ldots, N\}$, and taking conditional expectation on $\bm{Z}_{k}$, we get \begin{equation} \label{eq:104} \begin{split} \Delta(\bm{Z}_{k}) &\leq \frac{1}{2} \sum_{n=1}^{N} \theta_{n}\, \expect{ \left( \sum_{i\in A_{n,k}} \left(W_{n,k}^{(i)} - d_{n}\right) \right)^{2} \mid \bm{Z}_{k}} \\ &\quad + \sum_{n=1}^{N} \theta_{n} \, Z_{n,k}\, \expect{\sum_{i\in A_{n,k}} \paren{W_{n,k}^{(i)}-d_{n}} \mid \bm{Z}_{k}}. \end{split} \end{equation} Lemma~\ref{lem:608} in Appendix~\ref{appendix:701} shows that the first term on the right side of~\eqref{eq:104} is bounded by a finite constant $C>0$.
This leads to the following Lyapunov drift inequality: \begin{equation} \label{eq:303} \Delta(\bm{Z}_{k}) \leq C + \sum_{n=1}^{N} \theta_{n}\, Z_{n,k} \, \expect{\sum_{i\in A_{n,k}} \paren{W_{n,k}^{(i)}-d_{n}} \mid \bm{Z}_{k}}. \end{equation} Over all frame-based policies, we are interested in the one that, in each frame $k$ after observing $\bm{Z}_{k}$, minimizes the right side of~\eqref{eq:303}. Recall that our policy on frame $k$ chooses which nonpreemptive priorities to use during the frame. To show that this is exactly the $\mathsf{DelayFeas}$ policy, we simplify~\eqref{eq:303}. Under a frame-based policy, we have by renewal reward theory \[ \expect{ \sum_{i\in A_{n,k}} W_{n,k}^{(i)} \mid \bm{Z}_{k}} = \lambda_{n} \overline{W}_{n,k}\, \expect{T_{k}}, \] where $\overline{W}_{n,k}$ denotes the long-term average delay of class $n$ if the control in frame $k$ is repeated in every frame. Together with $\expect{\abs{A_{n,k}}} = \lambda_{n} \expect{T_{k}}$, inequality~\eqref{eq:303} is rewritten as \begin{equation} \label{eq:1801} \begin{split} \Delta(\bm{Z}_{k}) &\leq \left(C - \expect{T_{k}} \sum_{n=1}^{N} \theta_{n} \, Z_{n,k} \lambda_{n} d_{n} \right) \\ &\quad + \expect{T_{k}} \sum_{n=1}^{N} \theta_{n}\, Z_{n,k} \, \lambda_{n} \, \overline{W}_{n,k}. \end{split} \end{equation} Because in this section we do not have dynamic power allocation (so that power is fixed to the same value in every busy period), the value $\expect{T_{k}}$ is the same for all job scheduling policies. Then our desired policy, in every frame $k$, chooses a priority ordering to minimize the metric $ \sum_{n=1}^{N} \theta_{n}\, Z_{n,k}\, \lambda_{n} \overline{W}_{n,k} $ over all feasible delay vectors $(\overline{W}_{n,k})_{n=1}^{N}$. If we choose $\theta_{n} = \expect{X_{n}}$ for all classes $n$,\footnote{We note that the mean service time $\expect{X_{n}}$ as a value of $\theta_{n}$ is only needed in the arguments constructing the $\mathsf{DelayFeas}$ policy.
The $\mathsf{DelayFeas}$ policy itself does not need knowledge of $\expect{X_{n}}$.} the desired policy minimizes $\sum_{n=1}^{N} Z_{n,k}\, \lambda_{n}\, \expect{X_{n}} \overline{W}_{n,k}$ in every frame $k$. From Lemma~\ref{lem:603}, this is achieved by the priority service rule defined by the $\mathsf{DelayFeas}$ policy. \subsection{Performance of the $\mathsf{DelayFeas}$ Policy} \label{sec:1802} \begin{thm} \label{thm:601} For every collection of feasible delay bounds $\{d_{1}, \ldots, d_{N}\}$, the $\mathsf{DelayFeas}$ policy yields average delays satisfying $\overline{W}_{n} \leq d_{n}$ for all classes $n\in\{1, \ldots, N\}$. \end{thm} \begin{proof}[Proof of Theorem~\ref{thm:601}] It suffices to show that the $\mathsf{DelayFeas}$ policy yields mean rate stability for all $Z_{n,k}$ queues by Lemma~\ref{lem:602}. By Lemma~\ref{lem:2201}, there exists a randomized priority policy $\pi_{\text{rand}}^{*}$ (introduced in Section~\ref{sec:1104}) that yields average delays $\overline{W}_{n}^{*}$ satisfying $\overline{W}_{n}^{*} \leq d_{n}$ for all classes $n$. Since the $\mathsf{DelayFeas}$ policy minimizes the last term of~\eqref{eq:1801} in each frame (under $\theta_{n} = \expect{X_{n}}$ for all $n$), comparing the $\mathsf{DelayFeas}$ policy with the $\pi_{\text{rand}}^{*}$ policy yields, in every frame $k$, \[ \sum_{n=1}^{N} \theta_{n}\, Z_{n,k} \,\lambda_{n} \overline{W}_{n,k}^{\mathsf{DelayFeas}} \leq \sum_{n=1}^{N} \theta_{n} \, Z_{n,k} \,\lambda_{n} \overline{W}_{n}^{*}. \] It follows that~\eqref{eq:1801} under the $\mathsf{DelayFeas}$ policy is further upper bounded by \[ \begin{split} \Delta(\bm{Z}_{k}) &\leq C + \expect{T_{k}} \sum_{n=1}^{N} \theta_{n}\, Z_{n,k}\, \lambda_{n} (\overline{W}_{n,k}^{\mathsf{DelayFeas}} - d_{n}) \\ &\leq C + \expect{T_{k}} \sum_{n=1}^{N} \theta_{n}\, Z_{n,k}\, \lambda_{n} (\overline{W}_{n}^{*} - d_{n}) \leq C.
\end{split} \] Taking expectation, summing over $k\in\{0, \ldots, K-1\}$, and noting $L(\bm{Z}_{0}) = 0$, we get \[ \expect{L(\bm{Z}_{K})} = \frac{1}{2} \sum_{n=1}^{N} \theta_{n} \expect{Z_{n,K}^{2}} \leq KC. \] It follows that $ \expect{Z_{n,K}^{2}} \leq 2KC/\theta_{n} $ for all classes $n$. Since $Z_{n,K}\geq 0$, we get \[ 0\leq \expect{Z_{n,K}} \leq \sqrt{\expect{Z_{n,K}^{2}}} \leq \sqrt{2KC/\theta_{n}}. \] Dividing the above by $K$ and passing $K\to\infty$ yields \[ \lim_{K\to\infty} \frac{\expect{Z_{n,K}}}{K} = 0, \quad \forall n\in\{1, \ldots, N\}, \] and all $Z_{n,k}$ queues are mean rate stable. \end{proof} \section{Minimizing Delay Penalty Functions} \label{sec:1804} Generalizing the first delay feasibility problem, next we optimize a separable penalty function of average delays. For each class $n$, let $f_{n}(\cdot)$ be a nondecreasing, nonnegative, continuous, and convex function of average delay $\overline{W}_{n}$. Consider the constrained penalty minimization problem \begin{align} \text{minimize:} &\quad \sum_{n=1}^{N} f_{n}(\overline{W}_{n}) \label{eq:613} \\ \text{subject to:} &\quad \overline{W}_{n} \leq d_{n}, \quad \forall n\in\{1, \ldots, N\}. \label{eq:617} \end{align} We assume that a constant power is allocated in all frames, and that constraints~\eqref{eq:617} are feasible. The goal is to construct a frame-based policy that solves~\eqref{eq:613}-\eqref{eq:617}. Let $(\overline{W}_{n}^{*})_{n=1}^{N}$ be the optimal solution to~\eqref{eq:613}-\eqref{eq:617}, attained by a randomized priority policy $\pi_{\text{rand}}^{*}$ (by Lemma~\ref{lem:2201}). \subsection{Delay Proportional Fairness} \label{sec:703} One interesting penalty function is the one that attains \emph{proportional fairness}. 
We say a delay vector $(\overline{W}_{n}^{*})_{n=1}^{N}$ is \emph{delay proportional fair} if it is optimal under the quadratic penalty function $f_{n}(\overline{W}_{n}) = \frac{1}{2} c_{n} \overline{W}_{n}^{2}$ for each class $n$, where $c_{n}>0$ are given constants. The intuition is twofold. First, under the quadratic penalty functions, any feasible delay vector $(\overline{W}_{n})_{n=1}^{N}$ necessarily satisfies \begin{equation} \label{eq:621} \sum_{n=1}^{N} f_{n}'(\overline{W}_{n}^{*}) (\overline{W}_{n}-\overline{W}_{n}^{*}) = \sum_{n=1}^{N} c_{n} (\overline{W}_{n} - \overline{W}_{n}^{*}) \overline{W}_{n}^{*} \geq 0, \end{equation} which is analogous to the \emph{rate proportional fair}~\cite{Kel97} criterion \begin{equation} \label{eq:620} \sum_{n=1}^{N} c_{n} \frac{x_{n} - x_{n}^{*}}{x_{n}^{*}} \leq 0, \end{equation} where $(x_{n})_{n=1}^{N}$ is any feasible rate vector and $(x_{n}^{*})_{n=1}^{N}$ is the optimal rate vector. Second, under rate proportional fairness, any deviation from the optimal solution yields an aggregate change of proportional rates that is less than or equal to zero (see~\eqref{eq:620}); increasing the already-large rates is penalized. Under delay proportional fairness, any deviation from the optimal solution yields an aggregate change of proportional delays that is nonnegative (see~\eqref{eq:621}); the already-small delays are penalized for trying to improve further. \subsection{Delay Fairness Policy} In addition to having the $\{Z_{n,k}\}_{k=0}^{\infty}$ queues updated by~\eqref{eq:601} for all classes $n$, we set up new discrete-time virtual queues $\{Y_{n,k}\}_{k=0}^{\infty}$ for all classes $n$, where $Y_{n,k+1}$ is updated at frame boundary $t_{k+1}$ as: \begin{equation} \label{eq:622} Y_{n,k+1} = \max\left[ Y_{n,k} + \sum_{i\in A_{n,k}} \left(W_{n,k}^{(i)} - r_{n,k} \right), 0\right], \end{equation} where $r_{n,k} \in[0, d_{n}]$ are auxiliary variables chosen at time $t_{k}$ independently of the frame size $T_{k}$ and the number $\abs{A_{n,k}}$ of class $n$ arrivals in frame $k$.
Assume $Y_{n,0} = 0$ for all $n$. Whereas the $Z_{n,k}$ queues are useful to enforce delay constraints $\overline{W}_{n}\leq d_{n}$ (as seen in Section~\ref{sec:1803}), the $Y_{n,k}$ queues are useful to achieve the optimal delay vector $(\overline{W}_{n}^{*})_{n=1}^{N}$. \underline{\textit{Delay Fairness ($\mathsf{DelayFair}$) Policy:}} \begin{enumerate} \item In the $k$th frame for each $k\in\mathbb{Z}^{+}$, after observing $\bm{Z}_{k}$ and $\bm{Y}_{k}$, use nonpreemptive strict priorities assigned in the decreasing order of $(Z_{n,k} + Y_{n,k})/\expect{S_{n}}$, where $\expect{S_{n}}$ is the mean size of a class $n$ job. Ties are broken arbitrarily. \item At the end of the $k$th frame, compute $Z_{n,k+1}$ and $Y_{n,k+1}$ for all classes $n$ by~\eqref{eq:601} and \eqref{eq:622}, respectively, where $r_{n,k}$ is the solution to the convex program: \begin{align*} \text{minimize:} &\quad V f_{n}(r_{n,k}) - Y_{n,k}\, \lambda_{n} \, r_{n,k} \\ \text{subject to:} &\quad 0\leq r_{n,k}\leq d_{n}, \end{align*} where $V>0$ is a predefined control parameter. \end{enumerate} While the $\mathsf{DelayFeas}$ policy in Section~\ref{sec:1803} does not require any statistical knowledge of arrivals and service times, the $\mathsf{DelayFair}$ policy needs the first moments, but not the higher moments, of the arrival processes and service times of all classes $n$. In the example of delay proportional fairness with quadratic penalty functions $f_{n}(\overline{W}_{n}) = \frac{1}{2} c_{n} \overline{W}_{n}^{2}$ for all classes $n$, the second step of the $\mathsf{DelayFair}$ policy solves: \begin{align*} \text{minimize:} &\quad \left(\frac{1}{2} V \,c_{n} \right) r_{n,k}^{2} - Y_{n,k} \, \lambda_{n} \, r_{n,k}\\ \text{subject to:} &\quad 0\leq r_{n,k} \leq d_{n}. \end{align*} The solution is $r_{n,k}^{*} = \min \left[d_{n}, \frac{Y_{n,k} \lambda_{n}}{V\, c_{n}} \right]$.
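The closed-form update above can be checked numerically. The following sketch (illustrative code, not from the paper; all names and numeric values are assumptions) implements the step-2 auxiliary-variable choice under quadratic penalties and compares it against a brute-force grid minimization of $\frac{1}{2} V c_{n} r^{2} - Y_{n,k}\lambda_{n} r$ over $[0, d_{n}]$:

```python
# Sketch of step 2 of the DelayFair policy under quadratic penalties
# f_n(r) = (1/2) c_n r^2.  Minimizing (1/2) V c r^2 - Y lam r over
# [0, d] is a one-dimensional convex program with closed-form solution
# r* = min[d, Y lam / (V c)].

def r_star(Y, lam, V, c, d):
    """Closed-form minimizer of (1/2)*V*c*r**2 - Y*lam*r over [0, d]."""
    return min(d, Y * lam / (V * c))

def r_star_grid(Y, lam, V, c, d, steps=100000):
    """Brute-force check: minimize the same objective on a grid over [0, d]."""
    best_r, best_val = 0.0, float("inf")
    for i in range(steps + 1):
        r = d * i / steps
        val = 0.5 * V * c * r * r - Y * lam * r
        if val < best_val:
            best_r, best_val = r, val
    return best_r

if __name__ == "__main__":
    Y, lam, V, c, d = 3.0, 2.0, 10.0, 1.5, 0.7
    print(r_star(Y, lam, V, c, d))        # interior solution: Y*lam/(V*c) = 0.4
    print(r_star(50.0, lam, V, c, d))     # large backlog: clipped at d = 0.7
```

The clipping at $d_{n}$ reflects that the auxiliary variable never needs to exceed the delay bound.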
\subsection{Motivation of the $\mathsf{DelayFair}$ Policy} The $\mathsf{DelayFair}$ policy follows a Lyapunov drift argument similar to that in Section~\ref{sec:1803}. Define $\bm{Z}_{k} \triangleq (Z_{n,k})_{n=1}^{N}$ and $\bm{Y}_{k} \triangleq (Y_{n,k})_{n=1}^{N}$. Define the Lyapunov function $L(\bm{Z}_{k}, \bm{Y}_{k}) \triangleq \frac{1}{2} \sum_{n=1}^{N} (Z_{n,k}^{2} + Y_{n,k}^{2})$ and the one-frame Lyapunov drift \[ \Delta(\bm{Z}_{k}, \bm{Y}_{k}) \triangleq \expect{L(\bm{Z}_{k+1}, \bm{Y}_{k+1}) - L(\bm{Z}_{k}, \bm{Y}_{k})\mid \bm{Z}_{k}, \bm{Y}_{k}}. \] Squaring~\eqref{eq:622} and using $(\max[x, 0])^{2} \leq x^{2}$ yields \begin{equation} \label{eq:1603} Y_{n,k+1}^{2} \leq \left[ Y_{n,k} + \sum_{i\in A_{n,k}} \left(W_{n,k}^{(i)} - r_{n,k} \right) \right]^{2}. \end{equation} Summing~\eqref{eq:1602} and~\eqref{eq:1603} over all classes $n\in\{1, \ldots, N\}$, dividing the result by $2$, and taking conditional expectation on $\bm{Z}_{k}$ and $\bm{Y}_{k}$, we get \begin{equation} \label{eq:616} \begin{split} &\Delta(\bm{Z}_{k}, \bm{Y}_{k}) \leq C - \sum_{n=1}^{N} Z_{n,k}\, d_{n}\, \expect{\abs{A_{n,k}} \mid \bm{Z}_{k}, \bm{Y}_{k} } \\ &\quad - \sum_{n=1}^{N} Y_{n,k}\, \expect{r_{n,k} \abs{A_{n,k}} \mid \bm{Z}_{k}, \bm{Y}_{k} } \\ &\quad + \sum_{n=1}^{N} (Z_{n,k}+Y_{n,k})\, \expect{ \sum_{i\in A_{n,k}} W_{n,k}^{(i)} \mid \bm{Z}_{k}, \bm{Y}_{k} }, \end{split} \end{equation} where $C>0$ is a finite constant, different from that used in Section~\ref{sec:2202}, upper bounding the sum of all $(\bm{Z}_{k}, \bm{Y}_{k})$-independent terms. This constant exists by arguments similar to those in Lemma~\ref{lem:608} of Appendix~\ref{appendix:701}.
Adding to both sides of~\eqref{eq:616} the weighted penalty term $ V \sum_{n=1}^{N} \expect{ f_{n}(r_{n,k})\, T_{k}\mid \bm{Z}_{k}, \bm{Y}_{k}} $, where $V>0$ is a predefined control parameter, and evaluating the result under a frame-based policy (similar to the analysis in Section~\ref{sec:1802}), we get the following Lyapunov \emph{drift plus penalty} inequality: \begin{equation} \label{eq:614} \begin{split} & \Delta(\bm{Z}_{k}, \bm{Y}_{k}) + V \sum_{n=1}^{N} \expect{ f_{n}(r_{n,k})\, T_{k}\mid \bm{Z}_{k}, \bm{Y}_{k}} \\ & \leq \left( C - \expect{T_{k}} \sum_{n=1}^{N} Z_{n,k} \, \lambda_{n} \, d_{n} \right) \\ & + \expect{T_{k}} \sum_{n=1}^{N} \expect{ V \, f_{n}(r_{n,k}) - Y_{n,k}\, \lambda_{n} \, r_{n,k}\mid \bm{Z}_{k}, \bm{Y}_{k}} \\ &+ \expect{T_{k}} \sum_{n=1}^{N} (Z_{n,k}+Y_{n,k}) \lambda_{n} \overline{W}_{n,k}. \end{split} \end{equation} We are interested in minimizing the right side of~\eqref{eq:614} in every frame $k$ over all frame-based policies and (possibly random) choices of $r_{n,k}$. Recall that in this section a constant power is allocated in all frames so that the value $\expect{T_{k}}$ is fixed under any work-conserving policy. The first and second steps of the $\mathsf{DelayFair}$ policy minimize the last term (by Lemma~\ref{lem:603}) and the second-to-last term of~\eqref{eq:614}, respectively.
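The priority choice in the first step is just a sort by the index $(Z_{n,k}+Y_{n,k})/\expect{S_{n}}$. A minimal sketch (illustrative code and numeric values, not from the paper) of this per-frame priority assignment:

```python
# Sketch of step 1 of the DelayFair policy: classes receive
# nonpreemptive strict priorities in decreasing order of
# (Z_{n,k} + Y_{n,k}) / E[S_n].

def priority_order(Z, Y, ES):
    """Class indices sorted by decreasing (Z+Y)/E[S]; earlier = higher priority."""
    return sorted(range(len(Z)), key=lambda n: -(Z[n] + Y[n]) / ES[n])

if __name__ == "__main__":
    Z = [4.0, 1.0, 3.0]        # virtual delay-constraint backlogs
    Y = [0.0, 5.0, 0.0]        # virtual fairness backlogs
    ES = [2.0, 4.0, 1.0]       # mean job sizes E[S_n]
    # weights: (4+0)/2 = 2.0, (1+5)/4 = 1.5, (3+0)/1 = 3.0
    print(priority_order(Z, Y, ES))   # class 2 gets the highest priority
```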
\subsection{Performance of the $\mathsf{DelayFair}$ Policy} \label{sec:1101} \begin{thm} \label{thm:602} Given any feasible delay bounds $\{d_{1}, \ldots, d_{N}\}$, the $\mathsf{DelayFair}$ policy yields average delays satisfying $\overline{W}_{n} \leq d_{n}$ for all classes $n\in\{1, \ldots, N\}$, and attains average delay penalty satisfying \[ \begin{split} & \limsup_{K\to\infty}\sum_{n=1}^{N} f_{n}\left( \frac{\expect{\sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}}}{\expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}}}\right) \\ &\quad \leq \frac{C \sum_{n=1}^{N} \lambda_{n}}{V} + \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}), \end{split} \] where $V>0$ is a predefined control parameter and $C>0$ a finite constant. By choosing $V$ sufficiently large, we attain arbitrarily close to the optimal delay penalty $ \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*})$. \end{thm} We remark that the tradeoff of choosing a large $V$ value is the amount of time required for virtual queues $\{Z_{n,k}\}_{k=0}^{\infty}$ and $\{Y_{n,k}\}_{k=0}^{\infty}$ to approach mean rate stability (see~\eqref{eq:2002} in the next proof), that is, the time required for the virtual queue backlogs to be negligible with respect to the time horizon. \begin{proof}[Proof of Theorem~\ref{thm:602}] Consider the optimal randomized policy $\pi_{\text{rand}}^{*}$ that yields optimal delays $\overline{W}_{n}^{*} \leq d_{n}$ for all classes $n$. 
Since the $\mathsf{DelayFair}$ policy minimizes the right side of~\eqref{eq:614}, comparing the $\mathsf{DelayFair}$ policy with the policy $\pi_{\text{rand}}^{*}$ and with the genie decision $r_{n,k}^{*}= \overline{W}_{n}^{*}$ for all classes $n$ and frames $k$, inequality~\eqref{eq:614} under the $\mathsf{DelayFair}$ policy is further upper bounded by \begin{align} & \Delta(\bm{Z}_{k}, \bm{Y}_{k}) + V \sum_{n=1}^{N} \expect{ f_{n}(r_{n,k})\, T_{k} \mid \bm{Z}_{k}, \bm{Y}_{k}} \notag \\ & \leq C - \expect{T_{k}} \sum_{n=1}^{N} Z_{n,k}\, \lambda_{n} \,d_{n} + \expect{T_{k}} \sum_{n=1}^{N}(Z_{n,k}+Y_{n,k}) \lambda_{n} \overline{W}_{n}^{*} \notag \\ &\quad + \expect{T_{k}} \sum_{n=1}^{N} \left(V f_{n}(\overline{W}_{n}^{*}) - Y_{n,k} \, \lambda_{n} \overline{W}_{n}^{*} \right) \notag \\ & \leq C + V \expect{T_{k}} \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}). \label{eq:1005} \end{align} Removing the second term of~\eqref{eq:1005} yields \begin{equation} \label{eq:2001} \Delta(\bm{Z}_{k}, \bm{Y}_{k}) \leq C + V \expect{T_{k}} \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}) \leq C+VD, \end{equation} where $D \triangleq \expect{T_{k}} \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*})$ is a finite constant. Taking expectation of~\eqref{eq:2001}, summing over $k\in\{0, \ldots, K-1\}$, and noting $L(\bm{Z}_{0}, \bm{Y}_{0}) =0$ yields $\expect{L(\bm{Z}_{K}, \bm{Y}_{K})} \leq K(C + VD)$. It follows that, for each class $n$ queue $\{Z_{n,k}\}_{k=0}^{\infty}$, we have \begin{equation} \label{eq:2002} \begin{split} 0 \leq \frac{\expect{Z_{n,K}}}{K} &\leq \sqrt{\frac{\expect{Z_{n,K}^{2}}}{K^{2}}} \\ &\leq \sqrt{\frac{2 \expect{L(\bm{Z}_{K}, \bm{Y}_{K})}}{K^{2}}} \leq \sqrt{\frac{2C}{K} + \frac{2VD}{K}}. \end{split} \end{equation} Passing $K\to\infty$ proves that queue $\{Z_{n,k}\}_{k=0}^{\infty}$ is mean rate stable for all classes $n$. Thus constraints $\overline{W}_{n} \leq d_{n}$ are satisfied by Lemma~\ref{lem:602}.
Similarly, the $\{Y_{n,k}\}_{k=0}^{\infty}$ queues are mean rate stable for all classes $n$. Next, taking expectation of~\eqref{eq:1005}, summing over $k\in\{0, \ldots, K-1\}$, dividing by $V$, and noting $L(\bm{Z}_{0}, \bm{Y}_{0})=0$ yields \[ \begin{split} &\frac{\expect{L(\bm{Z}_{K}, \bm{Y}_{K})}}{V} + \sum_{n=1}^{N} \expect{\sum_{k=0}^{K-1} f_{n}(r_{n,k})\, T_{k} } \\ &\qquad \leq \frac{KC}{V} + \expect{\sum_{k=0}^{K-1} T_{k}} \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}). \end{split} \] Removing the first term and dividing by $\expect{\sum_{k=0}^{K-1} T_{k}}$ yields \begin{align} \sum_{n=1}^{N} &\frac{ \expect{\sum_{k=0}^{K-1} f_{n}(r_{n,k})\,T_{k}}}{\expect{\sum_{k=0}^{K-1} T_{k}}} \leq \frac{KC}{V\expect{\sum_{k=0}^{K-1} T_{k}}} + \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}) \notag \\ &\stackrel{(a)}{\leq} \frac{C\sum_{n=1}^{N}\lambda_{n}}{V} + \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}), \label{eq:1105} \end{align} where (a) follows from $\expect{T_{k}} \geq \expect{I_{k}} = 1/(\sum_{n=1}^{N} \lambda_{n})$. By~\cite[Lemma~$7.6$]{Nee10book} and convexity of $f_{n}(\cdot)$, we get \begin{equation} \label{eq:1106} \sum_{n=1}^{N} \frac{\expect{\sum_{k=0}^{K-1} f_{n}(r_{n,k})\,T_{k} }}{\expect{\sum_{k=0}^{K-1} T_{k}}} \geq \sum_{n=1}^{N} f_{n}\left( \frac{\expect{\sum_{k=0}^{K-1} r_{n,k} T_{k}}}{\expect{\sum_{k=0}^{K-1} T_{k}}}\right). \end{equation} Combining~\eqref{eq:1105} and~\eqref{eq:1106} and taking a $\limsup$ as $K\to\infty$ yields \[ \begin{split} & \limsup_{K\to\infty} \sum_{n=1}^{N} f_{n}\left( \frac{\expect{\sum_{k=0}^{K-1} r_{n,k} T_{k}}}{\expect{\sum_{k=0}^{K-1} T_{k}}}\right) \\ &\qquad \leq \frac{C\sum_{n=1}^{N}\lambda_{n}}{V} + \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}). \end{split} \] The next lemma, proved in Appendix~\ref{sec:2203}, completes the proof.
\begin{lem} \label{lem:1801} If queues $\{Y_{n,k}\}_{k=0}^{\infty}$ are mean rate stable for all classes $n$, then \[ \begin{split} &\limsup_{K\to\infty} \sum_{n=1}^{N} f_{n}\left( \frac{\expect{\sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}}}{\expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}}}\right) \\ & \quad\leq \limsup_{K\to\infty} \sum_{n=1}^{N} f_{n}\left( \frac{\expect{\sum_{k=0}^{K-1} r_{n,k} T_{k}}}{\expect{\sum_{k=0}^{K-1} T_{k}}}\right). \end{split} \] \end{lem} \end{proof} \section{Delay-Constrained Optimal Power Control} \label{sec:powercontrol} In this section we incorporate dynamic power control into the queueing system. As mentioned in Section~\ref{sec:702}, we focus on frame-based policies that allocate a constant power $P_{k} \in [P_{\text{min}}, P_{\text{max}}]$ over the duration of the $k$th busy period (we assume zero power is allocated when the system is idle). Here, quantities of interest such as the frame size $T_{k}$, the busy period $B_{k}$, the set $A_{n,k}$ of class $n$ arrivals in a frame, and the queueing delays $W_{n,k}^{(i)}$ are all functions of the power $P_{k}$. Similar to the delay definition~\eqref{eq:1103}, we define the average power consumption \begin{equation} \label{eq:1701} \overline{P} \triangleq \limsup_{K\to\infty} \frac{\expect{\sum_{k=0}^{K-1} P_{k}\, B_{k}(P_{k})}}{\expect{\sum_{k=0}^{K-1} T_{k}(P_{k})}}, \end{equation} where $B_{k}(P_{k})$ and $T_{k}(P_{k})$ emphasize the power dependence of $B_{k}$ and $T_{k}$. It is easy to show that both $B_{k}(P_{k})$ and $T_{k}(P_{k})$ are decreasing in $P_{k}$. The goal is to solve the delay-constrained power minimization problem: \begin{align} \text{minimize:} &\quad \overline{P} \label{eq:701} \\ \text{subject to:} &\quad \overline{W}_{n} \leq d_{n}, \quad \forall n\in\{1, \ldots, N\} \label{eq:702} \end{align} over frame-based power control and nonpreemptive priority policies.
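To see how power enters the frame statistics, note that under a constant allocation $P$ the standard renewal identities give $\expect{B(P)}/\expect{T(P)} = \sum_{n}\lambda_{n}\expect{S_{n}}/\mu(P)$ and $\expect{T(P)} = \expect{I}/(1-\rho_{\text{sum}}(P))$ with $\expect{I} = 1/\sum_{n}\lambda_{n}$. A small numerical sketch (illustrative code; the parameters and $\mu(P)=P$ match the example of the next subsection):

```python
# Sketch (not from the paper) of the renewal identities under a
# constant power P: the busy fraction rho_sum(P) and the mean frame
# length E[T(P)] = E[I] / (1 - rho_sum(P)), both decreasing in P.

def rho_sum(P, lam, ES, mu):
    """System load under constant power P: sum_n lambda_n E[S_n] / mu(P)."""
    return sum(l * s for l, s in zip(lam, ES)) / mu(P)

def mean_frame(P, lam, ES, mu):
    """Mean frame length E[T(P)] = E[I] / (1 - rho_sum(P))."""
    EI = 1.0 / sum(lam)      # mean idle period of the merged Poisson stream
    return EI / (1.0 - rho_sum(P, lam, ES, mu))

if __name__ == "__main__":
    lam, ES, mu = [1.0, 2.0], [1.0, 1.0], lambda p: p
    for P in (4.0, 6.0, 10.0):
        print(P, rho_sum(P, lam, ES, mu), mean_frame(P, lam, ES, mu))
```

Raising the power shrinks both the busy fraction and the mean frame length, consistent with $B_{k}(P_{k})$ and $T_{k}(P_{k})$ being decreasing in $P_{k}$.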
\subsection{Power-Delay Performance Region} \label{sec:1103} Every frame-based power control and nonpreemptive priority policy can be viewed as a time sharing or randomization of stationary policies that make the same deterministic decision in every frame. Using this point of view, we next give an example of the joint power-delay performance region resulting from frame-based policies. Consider a two-class nonpreemptive $M/G/1$ queue with parameters: \begin{itemize} \item $\lambda_{1} = 1$, $\lambda_{2} = 2$, $\expect{S_{1}} = \expect{S_{2}} = \expect{S_{2}^{2}} = 1$, $\expect{S_{1}^{2}} = 2$. $\mu(P) = P$. For each class $n\in\{1, 2\}$, the service time $X_{n}$ has mean $\expect{X_{n}} = \expect{S_{n}}/P$ and second moment $\expect{X_{n}^{2}} = \expect{S_{n}^{2}}/P^{2}$. For stability, we must have $\lambda_{1} \expect{X_{1}} + \lambda_{2} \expect{X_{2}} < 1 \Rightarrow P>3$. In this example, let $[4, 10]$ be the feasible power region. \end{itemize} Under a constant power allocation $P$, let $\mathcal{W}(P)$ denote the set of achievable queueing delay vectors $(\overline{W}_{1}, \overline{W}_{2})$. Define $\rho_{n} \triangleq \lambda_{n} \expect{X_{n}}$ and $R \triangleq \frac{1}{2} \sum_{n=1}^{2} \lambda_{n}\expect{X_{n}^{2}}$. Then we have \begin{equation} \label{eq:1115} \mathcal{W}(P) = \Set{ (\overline{W}_{1}, \overline{W}_{2}) | \begin{gathered} \overline{W}_{n} \geq \frac{R}{1-\rho_{n}},\, n\in\{1, 2\} \\ \sum_{n=1}^{2} \rho_{n} \overline{W}_{n} = \frac{(\rho_{1}+\rho_{2}) R}{1-\rho_{1}-\rho_{2}} \end{gathered} }. \end{equation} The inequalities in $\mathcal{W}(P)$ show that the minimum delay for each class is attained when it has priority over the other. The equality in $\mathcal{W}(P)$ follows from the $M/G/1$ conservation law~\cite{Kle64book}.
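The bounds in $\mathcal{W}(P)$ can be evaluated numerically from the definitions of $\rho_{n}$ and $R$. A minimal sketch (illustrative code, not from the paper) using the parameters above:

```python
# Numerical sketch of the delay region W(P) in (eq:1115) for the
# two-class example: lambda = (1, 2), E[S_1^2] = 2, E[S_2^2] = 1,
# and mu(P) = P.

def region(P):
    """Return the per-class delay lower bounds and the conservation-law constant."""
    lam = [1.0, 2.0]
    ES2 = [2.0, 1.0]                       # second moments of job sizes
    EX  = [1.0 / P, 1.0 / P]               # E[X_n] = E[S_n] / mu(P)
    EX2 = [s2 / P ** 2 for s2 in ES2]      # E[X_n^2] = E[S_n^2] / mu(P)^2
    rho = [l * x for l, x in zip(lam, EX)]
    R = 0.5 * sum(l * x2 for l, x2 in zip(lam, EX2))
    lower = [R / (1.0 - r) for r in rho]   # W_n >= R / (1 - rho_n)
    conserve = (rho[0] + rho[1]) * R / (1.0 - rho[0] - rho[1])
    return lower, conserve

if __name__ == "__main__":
    lower, conserve = region(4.0)
    print(lower, conserve)   # lower bounds and the value of sum_n rho_n W_n
```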
Using the above parameters, we get \[ \mathcal{W}(P) = \Set{ (\overline{W}_{1}, \overline{W}_{2}) | \begin{gathered} \overline{W}_{1} \geq \frac{2}{P(P-1)} \\ \overline{W}_{2} \geq \frac{2}{P(P-2)} \\ \overline{W}_{1} + 2 \overline{W}_{2} = \frac{6}{P(P-3)} \end{gathered} }. \] Fig.~\ref{fig:601} shows the collection of delay regions $\mathcal{W}(P)$ for different values of $P\in [4, 10]$. This joint region contains all feasible delay vectors under constant power allocations. \begin{figure}[htb] \centering \includegraphics[width=2in]{fig/fig-delayregion} \caption{The collection of average delay regions $\mathcal{W}(P)$ for different power levels $P\in[4, 10]$.} \label{fig:601} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=2in]{fig/fig-powerdelay} \caption{The augmented performance region of power-delay vectors $(P, \overline{W}_{1}(P), \overline{W}_{2}(P))$.} \label{fig:602} \end{figure} Fig.~\ref{fig:602} shows the associated augmented performance region of power-delay vectors $(P, \overline{W}_{1}(P), \overline{W}_{2}(P))$; its projection onto the delay plane is Fig.~\ref{fig:601}. After time sharing or randomization, the performance region of all frame-based power control and nonpreemptive priority policies is the convex hull of Fig.~\ref{fig:602}. The problem~\eqref{eq:701}-\eqref{eq:702} is then viewed as a stochastic optimization over such a convexified power-delay performance region. \subsection{Dynamic Power Control Policy} \label{sec:1102} We set up the same virtual delay queues $Z_{n,k}$ as in~\eqref{eq:601}, and assume $Z_{n,0} = 0$ for all classes $n$. We represent a strict nonpreemptive priority policy by a permutation $(\pi_{n})_{n=1}^{N}$ of $\{1, \ldots, N\}$, where $\pi_{n}$ denotes the job class that gets the $n$th highest priority.
\underline{\textit{Dynamic Power Control ($\mathsf{DynPower}$) Policy:}} \begin{enumerate} \item In the $k$th frame for each $k\in\mathbb{Z}^{+}$, use the nonpreemptive strict priority rule $(\pi_{n})_{n=1}^{N}$ that assigns priorities in the decreasing order of $Z_{n,k}/\expect{S_{n}}$; ties are broken arbitrarily. \item Allocate a fixed power $P_{k}$ in frame $k$, where $P_{k}$ is the solution to the following minimization of a weighted sum of power and average delays: \begin{align} \text{minimize:} &\quad \left(V\sum_{n=1}^{N} \lambda_{n} \expect{S_{n}}\right) \frac{P_{k}}{\mu(P_{k})} \label{eq:607} \\ &\qquad + \sum_{n=1}^{N} Z_{\pi_{n}, k} \, \lambda_{\pi_{n}} \overline{W}_{\pi_{n}}(P_{k}) \notag \\ \text{subject to:} &\quad P_{k} \in [P_{\text{min}}, P_{\text{max}}]. \label{eq:608} \end{align} The value $\overline{W}_{\pi_{n}}(P_{k})$, given later in~\eqref{eq:625}, is the average delay of class $\pi_{n}$ under the priority rule $(\pi_{n})_{n=1}^{N}$ and power allocation $P_{k}$. \item Update queues $Z_{n,k}$ for all classes $n\in\{1, \ldots, N\}$ by~\eqref{eq:601} at every frame boundary. \end{enumerate} The above $\mathsf{DynPower}$ policy requires the knowledge of arrival rates and the first two moments of job sizes for all classes $n$ (see~\eqref{eq:625}). We can remove its dependence on the second moments of job sizes, so that it only depends on the mean of arrivals and job sizes; see Appendix~\ref{sec:2201} for details. \subsection{Motivation of the $\mathsf{DynPower}$ Policy} We construct the Lyapunov drift argument. Define the Lyapunov function $L(\bm{Z}_{k}) = \frac{1}{2} \sum_{n=1}^{N} Z_{n,k}^{2}$ and the one-frame Lyapunov drift $\Delta(\bm{Z}_{k}) = \expect{L(\bm{Z}_{k+1}) - L(\bm{Z}_{k}) \mid \bm{Z}_{k}}$. 
Similarly to the derivation in Section~\ref{sec:2202}, we have the Lyapunov drift inequality: \begin{equation} \label{eq:624} \Delta(\bm{Z}_{k}) \leq C + \sum_{n=1}^{N} Z_{n,k}\, \expect{\sum_{i\in A_{n,k}} \paren{W_{n,k}^{(i)}-d_{n}} \mid \bm{Z}_{k}}. \end{equation} Adding the weighted energy $V \expect{P_{k}\, B_{k}(P_{k}) \mid \bm{Z}_{k}}$ to both sides of~\eqref{eq:624}, where $V>0$ is a control parameter, yields \begin{equation} \label{eq:603} \Delta(\bm{Z}_{k}) + V \expect{P_{k}\, B_{k}(P_{k}) \mid \bm{Z}_{k}} \leq C + \Phi(\bm{Z}_{k}), \end{equation} where \[ \begin{split} \Phi(\bm{Z}_{k}) &\triangleq \mathbb{E} \Bigg[ V P_{k}\, B_{k}(P_{k}) \\ &\quad +\sum_{n=1}^{N} Z_{n,k} \sum_{i\in A_{n,k}} (W_{n,k}^{(i)}-d_{n})\mid \bm{Z}_{k} \Bigg]. \end{split} \] We are interested in the frame-based policy that, in each frame $k$, allocates power and assigns priorities to minimize the ratio \begin{equation} \label{eq:604} \frac{ \Phi(\bm{Z}_{k}) }{ \expect{T_{k}(P_{k}) \mid \bm{Z}_{k}} }. \end{equation} Note that frame size $T_{k}(P_{k})$ depends on $\bm{Z}_{k}$ because the power allocation that affects $T_{k}(P_{k})$ may be $\bm{Z}_{k}$-dependent. For any given power allocation $P_{k}$, $T_{k}(P_{k})$ is independent of $\bm{Z}_{k}$. Lemma~\ref{lem:607} next shows that the minimizer of~\eqref{eq:604} is a deterministic power allocation and strict nonpreemptive priority policy. Specifically, each $p\in \mathcal{P}$ in Lemma~\ref{lem:607} may be viewed as a deterministic power allocation and strict priority policy, and the random variable $P$ as a randomized power control and priority policy. \begin{lem} \label{lem:607} Let $P$ be a continuous random variable with state space $\mathcal{P}$. Let $G$ and $H$ be two random variables that depend on $P$ such that, for each $p \in \mathcal{P}$, $G(p)$ and $H(p)$ are well-defined random variables.
Define \[ p^{*} \triangleq \operatorname{argmin}_{p\in\mathcal{P}} \frac{\expect{G(p)}}{\expect{H(p)}},\quad U^{*} \triangleq \frac{\expect{G(p^{*})}}{\expect{H(p^{*})}}. \] Then $\frac{\expect{G}}{\expect{H}} \geq U^{*}$ regardless of the distribution of $P$. \end{lem} \begin{proof} For each $p\in \mathcal{P}$, we have $\frac{\expect{G(p)}}{\expect{H(p)}} \geq U^{*}$. Then \[ \frac{\expect{G}}{\expect{H}} = \frac{\mathbb{E}_{P}\left[ \expect{G(p)} \right]}{\mathbb{E}_{P}\left[ \expect{H(p)} \right]} \geq \frac{ \mathbb{E}_{P}\left[ U^{*} \expect{H(p)} \right]}{\mathbb{E}_{P}\left[ \expect{H(p)} \right]} = U^{*}, \] which is independent of the distribution of $P$. \end{proof} Under a fixed power allocation $P_{k}$ and a strict nonpreemptive priority rule,~\eqref{eq:604} is equal to \begin{equation} \label{eq:1604} \begin{split} &\frac{ V \expect{P_{k}B_{k}(P_{k})} + \sum_{n=1}^{N} Z_{n,k} \, \lambda_{n} (\overline{W}_{n,k}(P_{k}) - d_{n}) \expect{T_{k}(P_{k})} }{\expect{T_{k}(P_{k})}} \\ &\quad = V P_{k} \frac{\sum_{n=1}^{N} \lambda_{n} \expect{S_{n}}}{\mu(P_{k})} + \sum_{n=1}^{N} Z_{n,k} \, \lambda_{n} (\overline{W}_{n,k}(P_{k})-d_{n}), \end{split} \end{equation} where by renewal theory \[ \frac{\expect{B_{k}(P_{k})}}{\expect{T_{k}(P_{k})}} = \sum_{n=1}^{N} \rho_{n}(P_{k}) = \sum_{n=1}^{N} \lambda_{n} \frac{\expect{S_{n}}}{\mu(P_{k})} \] and power-dependent terms are written as functions of $P_{k}$. It follows that our desired policy in every frame $k$ minimizes \begin{equation} \label{eq:606} \left(V\sum_{n=1}^{N}\lambda_{n}\expect{S_{n}}\right) \frac{P_{k}}{\mu(P_{k})} + \sum_{n=1}^{N} Z_{n,k} \, \lambda_{n} \overline{W}_{n,k}(P_{k}) \end{equation} over constant power allocations $P_{k}\in [P_{\text{min}}, P_{\text{max}}]$ and nonpreemptive strict priority rules. 
To further simplify, for each fixed power level $P_{k}$, by Lemma~\ref{lem:603}, the $c\mu$ rule that assigns priorities in the decreasing order of $Z_{n,k}/\expect{S_{n}}$ minimizes the second term of~\eqref{eq:606} (note that minimizing a linear function over strict priority rules is equivalent to minimizing over all randomized priority rules, since a vertex of the performance polytope attains the minimum). This strict priority policy is optimal regardless of the value of $P_{k}$, and thus is overall optimal; priority assignment and power control are decoupled. We represent the optimal priority policy by $(\pi_{n})_{n=1}^{N}$, recalling that $\pi_{n}$ denotes the job class that gets the $n$th highest priority. Under priorities $(\pi_{n})_{n=1}^{N}$ and a fixed power allocation $P_{k}$, the average delay $\overline{W}_{\pi_{n}}(P_{k})$ for class $\pi_{n}$ is equal to \begin{equation} \label{eq:625} \begin{split} \overline{W}_{\pi_{n}}(P_{k}) &= \frac{\frac{1}{2} \sum_{j=1}^{N} \lambda_{j} \expect{X_{j}^{2}}}{(1-\sum_{m=0}^{n-1} \rho_{\pi_{m}})(1-\sum_{m=0}^{n} \rho_{\pi_{m}})} \\ &= \frac{ \frac{1}{2} \sum_{j=1}^{N} \lambda_{j} \expect{S_{j}^{2}} }{ (\mu(P_{k}) - \sum_{m=0}^{n-1} \hat{\rho}_{\pi_{m}})(\mu(P_{k}) - \sum_{m=0}^{n} \hat{\rho}_{\pi_{m}}) }, \end{split} \end{equation} where $\hat{\rho}_{\pi_{m}} \triangleq \lambda_{\pi_{m}} \expect{S_{\pi_{m}}}$ if $m\geq 1$ and $0$ if $m=0$. The above discussions lead to the $\mathsf{DynPower}$ policy. \subsection{Performance of the $\mathsf{DynPower}$ Policy} \begin{thm} \label{thm:603} Let $P^{*}$ be the optimal average power of the problem~\eqref{eq:701}-\eqref{eq:702}. The $\mathsf{DynPower}$ policy achieves delay constraints $\overline{W}_{n}\leq d_{n}$ for all classes $n\in\{1, \ldots, N\}$ and attains average power $ \overline{P}$ satisfying \[ \overline{P} \leq \frac{C\sum_{n=1}^{N}\lambda_{n}}{V} + P^{*}, \] where $C>0$ is a finite constant and $V>0$ a predefined control parameter.
\end{thm} \begin{proof}[Proof of Theorem~\ref{thm:603}] As discussed in Section~\ref{sec:1103}, the power-delay performance region in this problem is spanned by stationary power control and nonpreemptive priority policies that use the same (possibly random) decision in every frame. Let $\pi^{*}$ denote one such policy that yields the optimal average power $P^{*}$ with feasible delays $\overline{W}_{n}^{*} \leq d_{n}$ for all classes $n$. Let $P_{k}^{*}$ be its power allocation in frame $k$. Since policy $\pi^{*}$ makes i.i.d. decisions over frames, by renewal reward theory we have \[ P^{*} = \frac{\expect{P_{k}^{*} B(P_{k}^{*})}}{\expect{T(P_{k}^{*})}}. \] Then the ratio $\frac{\Phi(\bm{Z}_{k})}{\expect{T_{k}(P_{k}) \mid \bm{Z}_{k}}}$ under policy $\pi^{*}$ (see the left side of~\eqref{eq:1604}) is equal to \[ V\frac{\expect{P_{k}^{*} B(P_{k}^{*})}}{\expect{T(P_{k}^{*})}} + \sum_{n=1}^{N} Z_{n,k}\, \lambda_{n} \left(\overline{W}_{n}^{*}-d_{n}\right) \leq V P^{*}. \] Since the $\mathsf{DynPower}$ policy minimizes $\frac{\Phi(\bm{Z}_{k})}{\expect{T_{k}(P_{k}) \mid \bm{Z}_{k}}}$ over frame-based policies, including the optimal policy $\pi^{*}$, the ratio $\frac{\Phi(\bm{Z}_{k})}{\expect{T_{k}(P_{k}) \mid \bm{Z}_{k}}}$ under the $\mathsf{DynPower}$ policy satisfies \[ \frac{\Phi(\bm{Z}_{k})}{\expect{T_{k}(P_{k}) \mid \bm{Z}_{k}}} \leq V P^{*} \Rightarrow \Phi(\bm{Z}_{k}) \leq V P^{*} \expect{T_{k}(P_{k}) \mid \bm{Z}_{k}}. \] Using this bound in~\eqref{eq:603} yields \[ \Delta(\bm{Z}_{k}) + V \expect{P_{k}\, B_{k}(P_{k}) \mid \bm{Z}_{k}} \leq C+ V P^{*} \, \expect{T_{k}(P_{k}) \mid \bm{Z}_{k}}. \] Taking expectation, summing over $k\in\{0, \ldots, K-1\}$, and noting $L(\bm{Z}_{0})=0$ yields \begin{equation} \label{eq:610} \begin{split} & \expect{L(\bm{Z}_{K})} + V \sum_{k=0}^{K-1} \expect{P_{k}\,B_{k}(P_{k})} \\ &\quad \leq KC + V P^{*}\, \expect{\sum_{k=0}^{K-1} T_{k}(P_{k})}. 
\end{split} \end{equation} Since $\expect{T_{k}(P_{k})}$ is decreasing in $P_{k}$ and, under a fixed power allocation, is independent of scheduling policies, we get $\expect{T_{k}(P_{k})} \leq \expect{T_{0}(P_{\text{min}})}$ and \[ \begin{split} & \expect{L(\bm{Z}_{K})} + V \sum_{k=0}^{K-1} \expect{P_{k}\,B_{k}(P_{k})} \\ &\quad \leq K( C + V P^{*}\, \expect{T_{0}(P_{\text{min}})}). \end{split} \] Removing the second term and dividing by $K^{2}$ yields \[ \frac{\expect{L(\bm{Z}_{K})}}{K^{2}} \leq \frac{C+V P^{*}\, \expect{T_{0}(P_{\text{min}})}}{K}. \] Combining this with \[ 0\leq \frac{\expect{Z_{n,K}}}{K} \leq \sqrt{\frac{\expect{Z_{n,K}^{2}}}{K^{2}}} \leq \sqrt{ \frac{2 \expect{L(\bm{Z}_{K})}}{K^{2}}} \] and passing $K\to\infty$ proves that queue $\{Z_{n,k}\}_{k=0}^{\infty}$ is mean rate stable for all classes $n$. Thus $\overline{W}_{n}\leq d_{n}$ for all $n$ by Lemma~\ref{lem:602}. Further, removing the first term in~\eqref{eq:610} and dividing the result by $V \expect{ \sum_{k=0}^{K-1} T_{k}(P_{k})}$ yields \[ \begin{split} \frac{ \expect{\sum_{k=0}^{K-1} P_{k}\, B_{k}(P_{k})}}{\expect{ \sum_{k=0}^{K-1} T_{k}(P_{k})}} &\leq \frac{C}{V}\frac{K}{\expect{ \sum_{k=0}^{K-1} T_{k}(P_{k})}} + P^{*} \\ &\stackrel{(a)}{\leq} \frac{C\sum_{n=1}^{N}\lambda_{n}}{V} + P^{*}, \end{split} \] where (a) uses $\expect{T_{k}(P_{k})} \geq \expect{I_{k}} = 1/(\sum_{n=1}^{N} \lambda_{n})$. Passing $K\to\infty$ completes the proof. \end{proof} \section{Optimizing Delay Penalties with Average Power Constraint} \label{sec:1801} The fourth problem we consider is to minimize, over frame-based power control and nonpreemptive priority policies, a separable convex function of the delay vector $(\overline{W}_{n})_{n=1}^{N}$ subject to an average power constraint: \begin{align} \text{minimize:} &\quad \sum_{n=1}^{N} f_{n}(\overline{W}_{n}) \label{eq:1705} \\ \text{subject to:} &\quad \overline{P} \leq P_{\text{const}}.
\label{eq:1706} \end{align} The value $\overline{P}$ is defined in~\eqref{eq:1701} and $P_{\text{const}} >0$ is a given feasible bound. The penalty functions $f_{n}(\cdot)$ are assumed nondecreasing, nonnegative, continuous, and convex for all classes $n$. Power allocation in every busy period takes values in $[P_{\text{min}}, P_{\text{max}}]$, and no power is allocated when the system is idle. In this problem, the region of feasible power-delay vectors $(\overline{P}, \overline{W}_{1}, \ldots, \overline{W}_{N})$ is complicated because feasible delays $(\overline{W}_{n})_{n=1}^{N}$ are indirectly decided by the power constraint~\eqref{eq:1706}. Using the same methodology as in the previous three problems, we construct a frame-based policy to solve~\eqref{eq:1705}-\eqref{eq:1706}. We setup the virtual delay queue $\{Y_{n,k}\}_{k=0}^{\infty}$ for each class $n\in\{1, \ldots, N\}$ as in~\eqref{eq:622}, in which the auxiliary variable $r_{n,k}$ takes values in $[0, R^{\text{max}}_{n}]$ for some $R^{\text{max}}_{n}>0$ sufficiently large.\footnote{For each class $n$, we need $R^{\text{max}}_{n}$ to be larger than the optimal delay $\overline{W}_{n}^{*}$ in problem~\eqref{eq:1705}-\eqref{eq:1706}. One way is to let $R^{\text{max}}_{n}$ be the maximum average delay over all classes under the minimum power allocation $P_{\text{min}}$.} Define the discrete-time \emph{virtual power queue} $\{X_{k}\}_{k=0}^{\infty}$ that evolves at frame boundaries $\{t_{k}\}_{k=0}^{\infty}$ as \begin{equation} \label{eq:1702} X_{k+1} = \max\left[ X_{k} + P_{k} B_{k}(P_{k}) - P_{\text{const}} T_{k}(P_{k}), \, 0\right]. \end{equation} Assume $X_{0} = 0$. The $\{X_{k}\}_{k=0}^{\infty}$ queue helps to achieve the power constraint $\overline{P} \leq P_{\text{const}}$. \begin{lem} \label{lem:1701} If the virtual power queue $\{X_{k}\}_{k=0}^{\infty}$ is mean rate stable, then $\overline{P} \leq P_{\text{const}}$. \end{lem} \begin{proof} Given in Appendix~\ref{sec:2401}. 
\end{proof} \subsection{Power-constrained Delay Fairness Policy} \label{sec:1805} \underline{\textit{Power-constrained Delay Fairness $(\mathsf{PwDelayFair})$ Policy:}} In the busy period of each frame $k\in\mathbb{Z}^{+}$, after observing $X_{k}$ and $(Y_{n,k})_{n=1}^{N}$: \begin{enumerate} \item Use the nonpreemptive strict priority rule $(\pi_{n})_{n=1}^{N}$ that assigns priorities in decreasing order of $Y_{n,k}/\expect{S_{n}}$; ties are broken arbitrarily. \item Allocate power $P_{k}$ for the duration of the busy period, where $P_{k}$ solves: \begin{align*} \text{minimize:} &\quad X_{k} \left[ - P_{\text{const}} + \frac{P_{k}}{\mu(P_{k})} \sum_{n=1}^{N} \lambda_{n} \expect{S_{n}} \right] \\ &\qquad + \sum_{n=1}^{N} Y_{\pi_{n},k} \, \lambda_{\pi_{n}} \overline{W}_{\pi_{n}}(P_{k}) \\ \text{subject to:} &\quad P_{k} \in [P_{\text{min}}, P_{\text{max}}], \end{align*} where $\overline{W}_{\pi_{n}}(P_{k})$ is defined in~\eqref{eq:625}. \item Update $X_{k}$ and $Y_{n,k}$ for all classes $n$ at every frame boundary by~\eqref{eq:1702} and~\eqref{eq:622}, respectively. In~\eqref{eq:622}, the auxiliary variable $r_{n,k}$ is the solution to \begin{align*} \text{minimize:} &\quad V f_{n}(r_{n,k}) - Y_{n,k} \, \lambda_{n}\, r_{n,k} \\ \text{subject to:} &\quad 0\leq r_{n,k}\leq R^{\text{max}}_{n}. \end{align*} \end{enumerate} \subsection{Motivation of the $\mathsf{PwDelayFair}$ Policy} The construction of the Lyapunov drift argument closely follows that of the previous problems; details are omitted for brevity. Define the vector $\chi_{k} = [X_{k}; Y_{1,k}, \ldots, Y_{N,k}]$, the Lyapunov function $L(\chi_{k}) \triangleq \frac{1}{2} (X_{k}^{2} + \sum_{n=1}^{N} Y_{n,k}^{2})$, and the one-frame Lyapunov drift $\Delta(\chi_{k}) \triangleq \expect{ L(\chi_{k+1}) - L(\chi_{k}) \mid \chi_{k}}$.
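Step 3 of the policy can be sketched in a few lines of code. The $X_{k}$ update is exactly~\eqref{eq:1702}; the delay-queue update~\eqref{eq:622} is not reproduced in this excerpt, so the $\max[\cdot,0]$ recursion with increment $\sum_{i\in A_{n,k}}(W_{n,k}^{(i)}-r_{n,k})$ used below is an assumption inferred from the drift term in~\eqref{eq:2205}.

```python
def update_power_queue(X, P, B, T, P_const):
    """Virtual power queue, Eq. (1702):
    X_{k+1} = max[X_k + P_k B_k(P_k) - P_const T_k(P_k), 0]."""
    return max(X + P * B - P_const * T, 0.0)

def update_delay_queue(Y, waits, r):
    """Assumed form of the virtual delay queue (622): increment by
    sum over arrivals of (W_{n,k}^{(i)} - r_{n,k}), clipped at zero,
    matching the drift term in Eq. (2205)."""
    return max(Y + sum(w - r for w in waits), 0.0)
```

Both queues grow when the frame exceeds its budget (power budget $P_{\text{const}} T_{k}$, delay target $r_{n,k}$ per arrival), so mean rate stability forces the time-average constraints.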
We can show there exists a finite constant $C>0$ such that \begin{equation} \label{eq:2205} \begin{split} \Delta(\chi_{k}) &\leq C + X_{k} \expect{P_{k}B_{k}(P_{k}) - P_{\text{const}} \, T_{k}(P_{k}) \mid \chi_{k} } \\ &\quad + \sum_{n=1}^{N} Y_{n,k} \, \expect{\sum_{i\in A_{n,k}} \left(W_{n,k}^{(i)} - r_{n,k} \right) \mid \chi_{k} }. \end{split} \end{equation} Adding the term $V \sum_{n=1}^{N} \expect{f_{n}(r_{n,k}) \, T_{k}(P_{k}) \mid \chi_{k}}$ to both sides of~\eqref{eq:2205}, where $V>0$ is a control parameter, and evaluating the result under a frame-based policy yields \begin{equation} \label{eq:1707} \Delta(\chi_{k}) + V \sum_{n=1}^{N} \expect{f_{n}(r_{n,k})\, T_{k}(P_{k}) \mid \chi_{k}} \leq C + \Psi(\chi_{k}), \end{equation} where \[ \begin{split} &\Psi(\chi_{k}) \triangleq \expect{T_{k}(P_{k}) \mid \chi_{k}} \sum_{n=1}^{N} Y_{n,k} \, \lambda_{n} \overline{W}_{n,k}(P_{k}) \\ &\quad + X_{k} \expect{P_{k} B_{k}(P_{k}) \mid \chi_{k}} - X_{k} P_{\text{const}} \, \expect{T_{k}(P_{k}) \mid \chi_{k}} \\ &\quad + \expect{T_{k}(P_{k}) \mid \chi_{k}} \sum_{n=1}^{N} \expect{V f_{n}(r_{n,k}) - Y_{n,k} \, \lambda_{n}\, r_{n,k} \mid \chi_{k}}. \end{split} \] Here $\overline{W}_{n,k}(P_{k})$ denotes the average delay of class $n$ that would result if the control and power allocation of frame $k$ were repeated in every frame. We are interested in the frame-based policy that minimizes the ratio $\frac{\Psi(\chi_{k})}{\expect{T_{k}(P_{k}) \mid \chi_{k}}}$ in each frame $k\in \mathbb{Z}^{+}$. Lemma~\ref{lem:607} shows the minimizer is a deterministic policy, under which the ratio is equal to \[ \begin{split} & \sum_{n=1}^{N} Y_{n,k} \, \lambda_{n} \overline{W}_{n,k}(P_{k}) + X_{k} \left( P_{k}\, \rho_{\text{sum}}(P_{k}) - P_{\text{const}} \right) \\ & + \sum_{n=1}^{N} \left( V f_{n}(r_{n,k}) - Y_{n,k} \, \lambda_{n} \, r_{n,k} \right), \end{split} \] where $\rho_{\text{sum}}(P_{k}) \triangleq \sum_{n=1}^{N} \lambda_{n} \expect{S_{n}} / \mu(P_{k})$.
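A minimal sketch of the per-frame minimization in steps 1-2 follows. Since neither the rate function $\mu(P)$ nor Eq.~\eqref{eq:625} is reproduced in this excerpt, the sketch assumes a hypothetical $\mu(P)=\log(1+P)$ and uses Cobham's nonpreemptive-priority mean waiting times as a stand-in for $\overline{W}_{\pi_{n}}(P_{k})$; a one-dimensional grid search over $P_{k}$ suffices.

```python
import math

def priority_delays(lams, ES, ES2, mu):
    # Cobham's nonpreemptive-priority mean waits (stand-in for Eq. (625));
    # classes given in decreasing priority order, service times scaled by 1/mu.
    rho = [l * s / mu for l, s in zip(lams, ES)]
    residual = 0.5 * sum(l * s2 / mu**2 for l, s2 in zip(lams, ES2))
    W, sigma = [], 0.0
    for r in rho:
        W.append(residual / ((1.0 - sigma) * (1.0 - sigma - r)))
        sigma += r
    return W

def choose_power(X, Y, lams, ES, ES2, P_const, Pmin, Pmax,
                 mu=lambda P: math.log(1.0 + P), ngrid=2000):
    # Step 1: priorities in decreasing order of Y_n / E[S_n].
    order = sorted(range(len(lams)), key=lambda n: -Y[n] / ES[n])
    best_obj, best_P = float("inf"), Pmin
    for i in range(ngrid + 1):
        P = Pmin + (Pmax - Pmin) * i / ngrid
        m = mu(P)
        rho_sum = sum(l * s for l, s in zip(lams, ES)) / m
        if rho_sum >= 1.0:          # skip unstable allocations
            continue
        W = priority_delays([lams[n] for n in order], [ES[n] for n in order],
                            [ES2[n] for n in order], m)
        obj = X * (P * rho_sum - P_const) + sum(
            Y[order[j]] * lams[order[j]] * W[j] for j in range(len(order)))
        if obj < best_obj:
            best_obj, best_P = obj, P
    return best_P
```

The objective evaluated in the loop is exactly the deterministic ratio above, with the $r_{n,k}$ terms dropped since they do not depend on $P_{k}$.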
Under simplifications similar to those for the $\mathsf{DynPower}$ policy in Section~\ref{sec:1102}, we can show that the $\mathsf{PwDelayFair}$ policy is the desired policy. \subsection{Performance of the $\mathsf{PwDelayFair}$ Policy} \begin{thm} \label{thm:1801} For any feasible power bound $P_{\text{const}}$, the $\mathsf{PwDelayFair}$ policy satisfies $\overline{P}\leq P_{\text{const}}$ and yields an average delay penalty satisfying \begin{equation} \label{eq:1805} \begin{split} & \limsup_{K\to\infty}\sum_{n=1}^{N} f_{n}\left( \frac{\expect{\sum_{k=0}^{K-1} \sum_{i\in A_{n,k}} W_{n,k}^{(i)}}}{\expect{\sum_{k=0}^{K-1} \abs{A_{n,k}}}}\right) \\ &\quad \leq \frac{C \sum_{n=1}^{N} \lambda_{n}}{V} + \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}), \end{split} \end{equation} where $V>0$ is a predefined control parameter. \end{thm} \begin{proof}[Proof of Theorem~\ref{thm:1801}] Let $\pi_{\text{rand}}^{*}$ be the frame-based randomized policy that solves~\eqref{eq:1705}-\eqref{eq:1706}. Let $(\overline{W}_{n}^{*})_{n=1}^{N}$ be the optimal average delay vector, and let $\overline{P}^{*}$, where $\overline{P}^{*}\leq P_{\text{const}}$, be the associated power consumption. In frame $k\in\mathbb{Z}^{+}$, the ratio $\frac{\Psi(\chi_{k})}{\expect{T_{k}(P_{k}) \mid \chi_{k}}}$ evaluated under policy $\pi_{\text{rand}}^{*}$ and genie decisions $r_{n,k}^{*} = \overline{W}_{n}^{*}$ for all classes $n$ is equal to \begin{equation} \label{eq:1708} \begin{split} &\sum_{n=1}^{N} Y_{n,k} \, \lambda_{n} \overline{W}_{n}^{*} + X_{k} \overline{P}^{*} - X_{k} P_{\text{const}} \\ &+ \sum_{n=1}^{N} \left( V f_{n}(\overline{W}_{n}^{*}) - Y_{n,k} \, \lambda_{n} \overline{W}_{n}^{*}\right) \leq V \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}).
\end{split} \end{equation} Since the $\mathsf{PwDelayFair}$ policy minimizes $\frac{\Psi(\chi_{k})}{\expect{T_{k}(P_{k}) \mid \chi_{k}}}$ in every frame $k$, the ratio under the $\mathsf{PwDelayFair}$ policy satisfies \[ \frac{\Psi(\chi_{k})}{\expect{T_{k}(P_{k}) \mid \chi_{k}}} \leq V \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}). \] Then~\eqref{eq:1707} under the $\mathsf{PwDelayFair}$ policy satisfies \begin{equation} \label{eq:1803} \begin{split} &\Delta(\chi_{k}) + V \expect{\sum_{n=1}^{N} f_{n}(r_{n,k})\, T_{k}(P_{k}) \mid \chi_{k}} \\ &\quad \leq C + V \expect{T_{k}(P_{k}) \mid \chi_{k}} \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}). \end{split} \end{equation} Removing the second term in~\eqref{eq:1803} and taking expectations, we get \[ \expect{L(\chi_{k+1})} - \expect{L(\chi_{k})} \leq C + V\expect{T_{k}(P_{k})} \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}). \] Summing over $k\in\{0, \ldots, K-1\}$ and using $L(\chi_{0})=0$ yields \begin{equation} \label{eq:1804} \expect{L(\chi_{K})} \leq KC + V \expect{\sum_{k=0}^{K-1} T_{k}(P_{k})} \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*}) \leq K C_{1}, \end{equation} where $C_{1} \triangleq C + V \expect{T_{0}(P_{\text{min}})} \sum_{n=1}^{N} f_{n}(\overline{W}_{n}^{*})$, and we have used $\expect{T_{k}(P_{k})}\leq \expect{T_{0}(P_{\text{min}})}$. Inequality~\eqref{eq:1804} suffices to conclude that the queue $X_{k}$ and the queues $Y_{n,k}$ for all classes $n$ are mean rate stable. From Lemma~\ref{lem:1701}, the constraint $\overline{P} \leq P_{\text{const}}$ is achieved. The proof of~\eqref{eq:1805} follows that of Theorem~\ref{thm:602}. \end{proof} \section{Simulations} \label{sec:2402} Here we simulate the $\mathsf{DelayFeas}$ and $\mathsf{DelayFair}$ policies in the first two delay control problems; simulations of the $\mathsf{DynPower}$ and $\mathsf{PwDelayFair}$ policies in the last two delay-power control problems are left for future work. The setup is as follows.
Consider a two-class $M/M/1$ queue with Poisson arrival rates $(\lambda_{1}, \lambda_{2}) = (1, 2)$, loading factors $(\rho_{1}, \rho_{2}) = (0.4, 0.4)$, and mean exponential service times $\expect{X_{1}} = \rho_{1}/\lambda_{1} = 0.4$ and $\expect{X_{2}} = \rho_{2}/\lambda_{2}= 0.2$ (we use service times directly since there is no power control). The average delay region $\mathcal{W}$ of this two-class $M/M/1$ queue, given in~\eqref{eq:1115}, is \begin{equation} \label{eq:1119} \mathcal{W} = \Set{ (\overline{W}_{1}, \overline{W}_{2}) | \begin{gathered} \overline{W}_{1} + \overline{W}_{2} = 2.4 \\ \overline{W}_{1} \geq 0.4, \, \overline{W}_{2} \geq 0.4 \end{gathered}}. \end{equation} For the $\mathsf{DelayFeas}$ policy, we consider five sets of delay constraints $(d_{1}, d_{2}) = (0.45, 2.05)$, $(0.85, 1.65)$, $(1.25, 1.25)$, $(1.65, 0.85)$, and $(2.05, 0.45)$; they are all $(0.05, 0.05)$ away from a feasible point on $\mathcal{W}$. For each constraint set $(d_{1}, d_{2})$, we repeat the simulation $10$ times and average the resulting delays, where each simulation is run for $10^{6}$ frames. The results are given in Fig.~\ref{fig:1901}, which shows that the $\mathsf{DelayFeas}$ policy adaptively yields feasible average delays in response to different constraints. \begin{figure}[htb] \centering \includegraphics[width=3in]{fig/fig-delayfeas-sim} \caption{The performance of the $\mathsf{DelayFeas}$ policy under different delay constraints $(d_{1}, d_{2})$.} \label{fig:1901} \end{figure} Next, for the $\mathsf{DelayFair}$ policy, we consider the following delay proportional fairness problem: \begin{align} \text{minimize:} &\quad \frac{1}{2} \overline{W}_{1}^{2} + 2 \overline{W}_{2}^{2} \label{eq:1116} \\ \text{subject to:} &\quad (\overline{W}_{1}, \overline{W}_{2}) \in \mathcal{W} \label{eq:1117} \\ &\quad \overline{W}_{1} \leq 2, \overline{W}_{2} \leq 2 \label{eq:1901} \end{align} where the delay region $\mathcal{W}$ is given in~\eqref{eq:1119}.
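Since \eqref{eq:1119} confines the feasible set to the segment $\overline{W}_{2}=2.4-\overline{W}_{1}$, the optimizer of \eqref{eq:1116}-\eqref{eq:1901} can be checked by a one-dimensional search (a sketch using only the problem data stated above):

```python
def delay_penalty(W1):
    # objective (1116) restricted to the line W1 + W2 = 2.4 from (1119)
    W2 = 2.4 - W1
    return 0.5 * W1**2 + 2.0 * W2**2

# feasible W1: W1 >= 0.4 and W2 = 2.4 - W1 in [0.4, 2]  (Eqs. (1119), (1901))
grid = [0.4 + 1.6 * i / 100000 for i in range(100001)]
W1_star = min(grid, key=delay_penalty)
```

The search returns $\overline{W}_{1}^{*}=1.92$ (hence $\overline{W}_{2}^{*}=0.48$ and penalty $2.304$), consistent with the first-order condition $\overline{W}_{1}-4(2.4-\overline{W}_{1})=0$.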
The additional delay constraints~\eqref{eq:1901} are chosen to be non-restrictive for ease of demonstration. The optimal solution to~\eqref{eq:1116}-\eqref{eq:1901} is $(\overline{W}_{1}^{*}, \overline{W}_{2}^{*}) = (1.92, 0.48)$; the optimal delay penalty is $\frac{1}{2} (\overline{W}_{1}^{*})^{2} + 2 (\overline{W}_{2}^{*})^{2} =2.304$. We simulate the $\mathsf{DelayFair}$ policy for different values of the control parameter $V\in\{10^{2}, 10^{3}, 5\times 10^{3}, 10^{4}\}$. The results are in Table~\ref{table:1102}. \begin{table}[htbp] \centering \begin{tabular}{cccc} $V$ & $\overline{W}_{1}^{\mathsf{DelayFair}}$ & $\overline{W}_{2}^{\mathsf{DelayFair}}$ & Delay penalty \\ \hline $100$ & $1.611$ & $0.785$ & $2.529$ \\ $1000$ & $1.809$ & $0.591$ & $2.335$ \\ $5000$ & $1.879$ & $0.523$ & $2.312$ \\ $10000$ & $1.894$ & $0.503$ & $2.301$ \\ \hline Optimal value: & $1.92$ & $0.48$ & $2.304$ \end{tabular} \caption{The average delays and delay penalty under the $\mathsf{DelayFair}$ policy for different values of the control parameter $V$.} \label{table:1102} \end{table} Every entry in Table~\ref{table:1102} is the average over $10$ simulation runs, where each simulation is run for $10^{6}$ frames. As $V$ increases, the $\mathsf{DelayFair}$ policy yields average delays approaching the optimal $(1.92, 0.48)$ and the optimal penalty $2.304$. \section{Conclusions} This paper solves constrained delay-power stochastic optimization problems in a nonpreemptive multi-class $M/G/1$ queue from a new mathematical programming perspective. After characterizing the performance region by the collection of all frame-based randomizations of \emph{base policies} that comprise deterministic power control and nonpreemptive strict priority policies, we use the Lyapunov optimization theory to construct dynamic control algorithms that yield near-optimal performance.
These policies greedily select and run a base policy in every frame by minimizing a ratio of an expected ``drift plus penalty'' sum over the expected frame size, and require only limited statistical knowledge of the system. Time average constraints are turned into virtual queues that need to be stabilized. While this paper studies delay and power control in a nonpreemptive multi-class $M/G/1$ queue, our framework should be much more widely applicable to other stochastic optimization problems over queueing networks, especially those that satisfy strong (possibly generalized~\cite{BaN96}) conservation laws and have polymatroidal performance regions. Different performance metrics such as throughput (together with admission control), delay, power, and functions of them can be mixed together to serve as objective functions or time average constraints. It is of interest to us to explore all these directions. In related work~\cite{LaN10arXivb}, we have used the frame-based Lyapunov optimization theory to optimize a general functional objective over an inner bound on the performance region of a restless bandit problem with Markov ON/OFF bandits. This inner-bound approach can be viewed as an approximation to such complex restless bandit problems. Multi-class queueing systems and restless bandit problems are two prominent examples of general stochastic control problems. Thus it would be interesting to develop the Lyapunov optimization theory as a unified framework to attack other open stochastic control problems. \bibliographystyle{IEEEtran}
\section{Introduction} The nature of exotic nuclei with extreme isospin values is one of the most exciting challenges today, both experimentally and theoretically. Thanks to developments in experimental technology \cite{[Gei95]}, we are in the process of exploring the very limits of nuclear existence, namely the regions of the periodic chart in the neighborhood of the particle drip lines. The systems of interest are characterized by extreme values of isospin corresponding to large proton or neutron excess. On the neutron-rich side, there appears a region of loosely bound few-body systems, neutron halos (see Refs.~\cite{[Mue93],[Rii94],[Han95],[Tan96]} for reviews). In these nuclei the weak neutron binding implies large spatial dimensions and the existence of the halo (i.e., a dramatic excess of neutrons at large distances). Theoretically, the weak binding and corresponding closeness of the particle continuum, together with the need for the explicit treatment of few-body dynamics, make the subject of halos both extremely interesting and difficult. Neutron halos and heavy weakly bound neutron-rich nuclei offer an opportunity to study the wealth of phenomena associated with the closeness of the particle threshold: particle emission (ionization to the continuum) and characteristic behavior of cross sections \cite{[Wig48],[Fan61]}, existence of soft collective modes and low-lying transition strength \cite{[Uch85],[Fay91],[Yok95],[Sag95],[Ham96]}, and dramatic changes in shell structure and various nuclear properties in the sub-threshold regime \cite{[Ton78],[Hir91],[Dob94],[Dob96]}. In this study we address the notion of shape deformations in halo nuclei. The importance of non-spherical intrinsic shapes in halo nuclei has been stressed in some papers, especially in the context of the one-neutron halo $^{11}$Be and the nucleus $^{8}$B, alleged to be a one-proton halo. The ground state of $^{11}$Be is a 1/2$^+$ state.
The low neutron separation energy, $S_n$=504 keV, allows for only one bound excited level (1/2$^-$ at 320 keV). The halo character of $^{11}$Be has been confirmed by studies of reaction cross sections \cite{[Fuk91a]}, and the importance of deformation can be inferred from the large quadrupole moment of its core $^{10}$Be, $|Q|$=229\,mb \cite{[Ram87]}. The halo character of $^{8}$B has been suggested in Ref. \cite{[Min92]}, where the large measured quadrupole moment of its $I^\pi$=2$^+$ ground state, $|Q|$=68\,mb, has been attributed to the weak binding of the fifth proton in the $1p_{3/2}$ state. (The existence of a proton halo in $^8$B is still heavily debated \cite{[Rii94],[Tan96]}. For instance, Nakada and Otsuka \cite{[Nak94]}, in a shell-model calculation, demonstrated that the large value of $|Q|$ in $^8$B could be understood without introducing the proton halo.) The role of deformation in lowering the excitation energy of the 1/2$^+$ intruder level in $^{11}$Be was recognized long ago. For instance, Bouten {\it et al.} \cite{[Bou78]} pointed out that the positions of the abnormal-parity intruder orbitals in odd $p$-shell nuclei can be dramatically lowered by deformation, and they performed projected Hartree-Fock calculations for the parity doublet in $^{11}$Be. In another paper, based on the cranked Nilsson model, Ragnarsson {\it et al.} \cite{[Rag81]} demonstrated that the parity doublet could be naturally understood in terms of the [220]1/2 ($1d_{5/2}\otimes 2s_{1/2}$) Nilsson orbital. In particular, they calculated a very large triaxial deformation for the positive-parity level, and a less-deformed prolate shape for the negative-parity state. Muta and Otsuka \cite{[Mut95],[Ots95]} studied the structure of $^{11}$Be and $^{8}$B with a deformed Woods-Saxon potential, considering quadrupole deformation as a free parameter adjusted to the data.
Tanihata {\it et al.} \cite{[Tan95]} concluded, based on a spherical one-body potential, that the positions of experimental drip lines are consistent with the spherical picture; they emphasized the effect of increased binding of the low-$\ell$ shell model states near the threshold that can give rise to the level inversion. In none of the above papers, however, have both effects, i.e., the loose binding and the self-consistency, been considered simultaneously. Figure~\ref{Ba_radii} shows the interaction radii for a series of Be isotopes deduced from measured interaction cross sections \cite{[Tan88]}. The relatively large radius for $^{11}$Be has been interpreted as a sign of the halo structure of this nucleus. It is, however, quite interesting to note that calculated deformations (as obtained in Nilsson-Strutinsky calculations) of the Be isotopes are found to vary in such a way that the corresponding nuclear radii reproduce the data quite well. In this case, no effects from the halo structure of $^{11}$Be have been considered, since the calculations are based on the modified oscillator potential. Although these calculations are somewhat unrealistic, the result displayed in Fig.~\ref{Ba_radii} clearly stresses the importance of simultaneously considering both deformation and halo effects as, e.g., in $^{11}$Be. Another, more microscopic, approach is the work by Kanada-En'yo {\it et al.} \cite{[Kan95]} based on antisymmetrized molecular dynamics with improved asymptotics. They obtained a very large quadrupole deformation for the 1/2$^+$ state in $^{11}$Be, and a less deformed 1/2$^-$ state. A similar conclusion has been drawn in the recent self-consistent Skyrme-Hartree-Fock calculations \cite{[Li96]}. Also, very recently, the notion of deformation in $^{11}$Be has been pursued by several authors \cite{[Esb95],[Vin95],[Ber95],[Nun96],[Rid96]} within several variants of the weak coupling scheme.
In these models, the odd neutron moving in a Woods-Saxon potential is weakly coupled to the deformed core of $^{10}$Be, and interacts with the core through the quadrupole field. The deformation of $^{11}$Be is not treated self-consistently; either the strength of the quadrupole coupling is adjusted to the data to reproduce the quadrupole moment of $^{10}$Be, or the deformation is adjusted to reproduce the energies of the $I$=1/2 doublet. The advantage of the weak coupling approach is that the total wave functions are eigenstates of the total angular momentum and they have correct asymptotic behavior. In this context, a nice molecular analogy, in which the valence particle couples adiabatically to the deformed core, is the weak binding of an electron to a rotationally excited dipolar system \cite{[Gar71]}, or to a neutral symmetric molecule with a non-zero quadrupole moment \cite{[Com96]}. In this study, we address the question of deformed two-body halos by considering the single-particle motion in an axial spheroidal square well. The corresponding Schr\"odinger equation can be separated into three ordinary differential equations in the spheroidal coordinate system. The properties of the deformed single-particle states, especially in the subthreshold region, are analyzed by a multipole decomposition into spherical partial waves with well-defined orbital angular momentum. Large spatial extensions of deformed halos are discussed in terms of spherical radial form-factors. The paper is organized as follows. Section \ref{general} contains the discussion of the generic properties of deformed halos. The method of evaluation of the prolate spheroidal wave functions is described in Sec. \ref{model}, and the results of calculations are discussed in Sec. \ref{results}. Finally, Sec. \ref{conclusions} contains the main conclusions of the paper.
\section{Deformed neutron halos, general properties}\label{general} This section contains some general arguments regarding the concept of shape deformation in two-body halo systems. The material contained in this section extends the analysis of Ref.~\cite{[Rii92]}, carried out there for the spherical case. (For the corresponding discussion of three-body halo asymptotics, see Ref.~\cite{[Fed93]}.) Let us assume that the weakly bound neutron moves in a deformed average potential $U(\bbox{r})$ (usually well approximated by a sum of a central potential and a spin-orbit potential). The neutron wave function can be obtained by solving the deformed single-particle Schr\"odinger equation \begin{equation}\label{deformedSE} \left[\nabla^2 - \frac{2m}{\hbar^2}U(\bbox{r}) - \kappa_\nu^2 \right] \psi_\nu(\bbox{r}) =0, \end{equation} where $\kappa_\nu$=$\sqrt{-2m\epsilon_\nu/\hbar^2}$ and $\epsilon_\nu$ is the corresponding single-particle energy ($\epsilon_\nu$$<$0). In the following considerations we assume an axial, reflection-symmetric central potential. (The generalization to the triaxial and/or reflection-asymmetric case is straightforward.) Since the asymptotic properties of radial matrix elements depend very weakly on intrinsic spin, the spin-orbit term is neglected. Thanks to the axial symmetry, the single-particle states are labeled by $\Lambda$, the projection of the single-particle angular momentum onto the symmetry ($z$) axis, and by the parity $\pi$. Since the Hamiltonian considered is invariant with respect to the time-reversal symmetry, we shall only consider non-negative values of $\Lambda$.
The deformed wave function can be decomposed into spherical partial waves with well-defined orbital angular momentum $\ell$: \begin{equation}\label{multipole} \psi^\Lambda_\nu(\bbox{r}) = \sum_{\ell} R_{\ell\Lambda\nu} (r) Y_{\ell\Lambda}(\hat{\bbox{r}}), \end{equation} where, due to the assumed reflection symmetry, \begin{equation}\label{parity} \pi = (-1)^{\ell}. \end{equation} At large distances ($r$$>$$R$, where $R$ defines the arbitrary but fixed distance at which the nuclear interaction becomes unimportant), $U(\bbox{r})$ vanishes and the radial functions $R_{\ell\nu} (r)$ satisfy the free-particle radial Schr\"odinger equation \begin{equation}\label{sphericalSE} \left[\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} -\kappa_\nu^2 - \frac{\ell(\ell+1)}{r^2}\right] R_{\ell\nu} (r)=0. \end{equation} (Of course, $R_{\ell\Lambda\nu}(r)$=$R_{\ell\nu} (r)$ for $r\geq R$.) The solution of Eq.~(\ref{sphericalSE}) is $R_{\ell\nu}(r)= B_{\ell} h^+_{\ell}(i\kappa_\nu r)$, where $h^+_{\ell}$ is the spherical Hankel function and $B_{\ell}$ is a constant, given by \begin{equation}\label{BB} B_{\ell} = R_{\ell\nu}(R)/h^+_{\ell}(i\kappa_\nu R). \end{equation} The spatial properties of the system can be characterized by radial moments, $\langle \psi^\Lambda_\nu| r^n|\psi^\Lambda_\nu\rangle$, and multipole moments $\langle \psi^\Lambda_\nu| r^n Y_{n0}|\psi^\Lambda_\nu\rangle$. Both quantities require the evaluation of radial matrix elements \begin{equation}\label{rme} \langle\ell\Lambda\nu|r^n|\ell'\Lambda\nu\rangle \equiv \int_{0}^{\infty} r^{n+2} R^*_{\ell\Lambda\nu}(r)R_{\ell'\Lambda\nu}(r) dr = I_{n\ell\ell'\Lambda\nu}+ O_{n\ell\ell'\nu}, \end{equation} where $I$ represents the contribution from the inner ($r$$<$$R$) region and $O$ is the outer region contribution ($r$$>$$R$). Thanks to parity conservation [Eq.~(\ref{parity})], $\ell'\equiv\ell \pmod 2$. The inner integral is, by definition, finite.
The outer integral can be written as \begin{eqnarray}\label{asymptote} O_{n\ell\ell'\nu} & = & \int_{R}^{\infty} r^{n+2} B^*_{\ell} B_{\ell'} h^{+*}_{\ell}(i\kappa_\nu r) h^+_{\ell'}(i\kappa_\nu r) dr \\ & = & B^*_{\ell} B_{\ell'} \kappa_\nu^{-(n+3)} \int_{R\kappa_\nu}^{\infty} h^{+*}_{\ell}(ix) h^+_{\ell'}(ix) x^{n+2} dx. \end{eqnarray} In the limit of a very weak binding ($\kappa_\nu\rightarrow 0$) one can replace the value of $h^+_{\ell}(i\kappa_\nu R)$ in Eq.~(\ref{BB}) by the asymptotic expression valid for small arguments. This gives: \begin{equation} B_{\ell} \approx \frac{i^{\ell+1}}{1\times 3\times...(2\ell -1)} R_{\ell\nu}(R) (R\kappa_\nu)^{\ell +1}. \end{equation} Now, following the arguments of Ref.~\cite{[Rii92]}, one can demonstrate that for small values of $\kappa_\nu$, $O_{n\ell\ell'\nu}$ behaves asymptotically as \begin{equation}\label{finale} O_{n\ell\ell'\nu} \propto \kappa_\nu^{\ell + \ell' - n - 1}\left\{ \frac{{x_0}^{n-\ell-\ell'+1} - {(R\kappa_\nu)}^{n-\ell-\ell'+1}}{n-\ell-\ell'+1} + \text{const.} \right\}, \end{equation} where $x_0\gg R\kappa_\nu$ is a small constant. Consequently, the asymptotic behavior of $O_{n\ell\ell'\nu}$ in the limit of small $\epsilon$ strongly depends on quantum numbers $n$, $\ell$, and $\ell'$. Namely, for \begin{eqnarray}\label{conditions} n> \ell + \ell' -1: & O_{n\ell\ell'\nu} & ~\text{diverges as} ~(-\epsilon_\nu)^{(\ell + \ell' -n -1)/2},\label{c1} \\ n= \ell + \ell' -1: & O_{n\ell\ell'\nu} & ~\text{diverges as} ~-\frac{1}{2}\ln(-\epsilon_\nu),\label{c2}\\ n< \ell + \ell' -1: & O_{n\ell\ell'\nu} & ~\text{remains finite} \label{c3}. 
\end{eqnarray} \subsection{The normalization integral}\label{nint} The norm of the deformed state [Eq~(\ref{multipole})], ${\cal N}_{\Lambda\nu}$, can be expressed through the zeroth radial moment \begin{equation}\label{norm} ({\cal N}_{\Lambda\nu})^2=\langle\psi^\Lambda_\nu|\psi^\Lambda_\nu\rangle = \sum_{\ell}\langle\ell\Lambda\nu|r^0|\ell\Lambda\nu\rangle=\sum_{\ell} \left(I_{0\ell\ell\Lambda\nu}+ O_{0\ell\ell\nu}\right). \end{equation} Consequently, according to Eq.~(\ref{conditions}), the norm is divergent only if the deformed state contains an admixture of the $s$-wave. This is possible only for orbitals with $\pi$=+ and $\Lambda$=0. In this case, the norm behaves asymptotically as $\sqrt{O_{000\nu}}$$\propto$$(-\epsilon_\nu)^{-1/4}$, and the probability to find the neutron in the outer region, \begin{equation}\label{outer} P_{\rm outer}=\frac{O}{I+O}, \end{equation} approaches one for zero binding. \subsection{The rms radius} The root-mean-square radius of a deformed orbital is given by \begin{equation}\label{radius} \langle \Lambda\nu| r^2| \Lambda\nu\rangle \equiv \frac{\langle\psi^\Lambda_\nu|r^2|\psi^\Lambda_\nu\rangle} {\langle\psi^\Lambda_\nu|\psi^\Lambda_\nu\rangle} = ({\cal N}_{\Lambda\nu})^{-2} \sum_{\ell}\langle\ell\Lambda\nu|r^2|\ell\Lambda\nu\rangle. \end{equation} As discussed in Ref.~\cite{[Rii92]}, with decreasing binding energy the integral $O_{2\ell\ell\nu}$ diverges as $(-\epsilon_\nu)^{-3/2}$ for $\ell$=0 and as $(-\epsilon_\nu)^{-1/2}$ for $\ell$=1 [see Eq.~(\ref{conditions})]. Therefore, in the deformed system, the rms radius diverges only if the Nilsson orbital in question contains a component of an $s$ or a $p$ state. 
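The divergence conditions \eqref{c1}-\eqref{c3} and the resulting behavior of the norm and rms radius can be checked numerically. The sketch below uses the closed forms $h^+_{0}(ix)\propto e^{-x}/x$ and $h^+_{1}(ix)\propto e^{-x}(1/x+1/x^{2})$ together with $B_{\ell}\propto\kappa_\nu^{\ell+1}$, and halves $\kappa_\nu$ to read off the power of $\kappa_\nu=\sqrt{-2m\epsilon_\nu}/\hbar$; the matching radius $R=3$ and all prefactors are arbitrary, since only the $\kappa_\nu$ scaling matters.

```python
import math

def k_tail(ell, x):
    # |h_ell^+(ix)| up to a constant: e^{-x}/x (ell=0), e^{-x}(1/x + 1/x^2) (ell=1)
    if ell == 0:
        return math.exp(-x) / x
    return math.exp(-x) * (1.0 / x + 1.0 / x**2)

def outer_integral(n, ell, kappa, R=3.0, N=200000):
    # O_{n,ell,ell} ∝ kappa^{2(ell+1)} * ∫_R^∞ k_ell(kappa r)^2 r^{n+2} dr,
    # evaluated by the trapezoid rule; the tail is cut 20/kappa beyond R,
    # i.e. ~40 decay lengths of e^{-2 kappa r}.
    rmax = R + 20.0 / kappa
    h = (rmax - R) / N
    s = 0.0
    for i in range(N + 1):
        r = R + i * h
        w = 0.5 if i in (0, N) else 1.0
        s += w * k_tail(ell, kappa * r) ** 2 * r ** (n + 2)
    return kappa ** (2 * (ell + 1)) * s * h
```

Halving $\kappa_\nu$ doubles both $O_{000\nu}$ (the norm of an $s$ admixture) and $O_{211\nu}$ (the rms contribution of a $p$ admixture), confirming the $(-\epsilon_\nu)^{-1/2}$ divergences, while $O_{011\nu}$ stays essentially constant, confirming that the norm of a $p$ wave remains finite.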
This leaves only three classes of states for which the spatial extension can be arbitrarily large: \begin{eqnarray}\label{conditions_r} \pi=+, \Lambda=0:~~~ & \langle r^2 \rangle & ~\text{diverges as} ~(-\epsilon_\nu)^{-1},\nonumber \\ \pi=-, \Lambda=0~\text{or}~1: & \langle r^2 \rangle & ~\text{diverges as} ~(-\epsilon_\nu)^{-1/2}. \end{eqnarray} In the following, these states are referred to as ``halo states'' or ``halos''. Of course, this does not mean that other Nilsson orbitals cannot form very extended structures when their binding becomes very small. However, it is only for the states (\ref{conditions_r}) that the rms radius becomes infinite asymptotically. \subsection{The quadrupole moment} The average quadrupole moment of a deformed orbital is given by \begin{equation}\label{quadrupole} \langle \Lambda\nu| r^2 Y_{20}| \Lambda\nu\rangle = \frac{\langle\psi^\Lambda_\nu|r^2Y_{20}|\psi^\Lambda_\nu\rangle} {\langle\psi^\Lambda_\nu|\psi^\Lambda_\nu\rangle} = ({\cal N}_{\Lambda\nu})^{-2} \sum_{\ell\ell'}\langle\ell\Lambda\nu|r^2|\ell'\Lambda\nu\rangle \langle\ell\Lambda|Y_{20}|\ell'\Lambda\rangle, \end{equation} where the angular matrix element is \begin{equation}\label{CG} \langle\ell\Lambda|Y_{20}|\ell'\Lambda\rangle = \sqrt{\frac{5}{4\pi}}\sqrt{\frac{2\ell' +1}{2\ell +1}} \langle \ell'\Lambda 2 0|\ell\Lambda\rangle \langle \ell'0 2 0|\ell 0\rangle. \end{equation} According to Eq.~(\ref{c1}), in the $\epsilon_\nu\rightarrow 0$ limit the quadrupole matrix element diverges if $\ell+\ell'< 3$. Since the quadrupole moment of an $s$ state vanishes, the only diverging matrix element among the $\pi$=+ states comes from an $s$$\leftrightarrow$$d$ coupling. At small binding energies, the corresponding integral $O_{202\nu}$ behaves as $(-\epsilon_\nu)^{-1/2}$. For negative-parity orbitals, the only diverging matrix element is the diagonal one, $O_{211\nu}$, which also behaves as $(-\epsilon_\nu)^{-1/2}$.
However, because of the different asymptotic properties of the normalization integrals, the low-$\epsilon$ behavior of the single-particle quadrupole moment of a weakly bound orbital does depend on parity. With $\epsilon_\nu$$\rightarrow$0, the quadrupole moment (\ref{quadrupole}) of the $\pi$=+ halo approaches a finite limit. On the other hand, for the $\pi$=-- halos the norm of the state remains finite and the quadrupole moment behaves as $(-\epsilon_\nu)^{-1/2}$. It is instructive to consider the quadrupole deformation $\beta_2$ extracted from the ratio \begin{equation}\label{beta} \beta_2 \equiv \frac{4\pi}{5} \frac{\langle r^2 Y_{20}\rangle} {\langle r^2 \rangle}. \end{equation} By splitting ${\langle r^2 Y_{20}\rangle}$ and ${\langle r^2 \rangle}$ into contributions from the core (c) and from the valence (v) nucleons, one obtains \begin{equation}\label{b11} \beta_2 = \frac{4\pi}{5} \frac{\langle r^2 Y_{20}\rangle_{\rm c} + \langle r^2 Y_{20}\rangle_{\rm v}} {\langle r^2 \rangle_{\rm c}+ \langle r^2 \rangle_{\rm v}}. \end{equation} For positive-parity halos ($\pi$=+, $\Lambda$=0), the numerator in Eq.~(\ref{b11}) is finite while the denominator diverges as $(-\epsilon_\nu)^{-1}$. Hence $\beta_2$ is asymptotically linear in $\epsilon_\nu$, i.e., it {\em vanishes} in the limit of zero binding: \begin{equation}\label{bpos} \beta_2 (\pi=+, \Lambda=0)\stackrel{\epsilon_\nu\rightarrow 0} {\longrightarrow} 0. \end{equation} On the other hand, for negative-parity halos ($\pi$=--, $\Lambda$=0 or 1), the ratio (\ref{b11}) is solely determined by the $p$-wave components in the valence state: \begin{equation}\label{bneg} \beta_2 (\pi=-, \Lambda) \stackrel{\epsilon_\nu\rightarrow 0} {\longrightarrow} \frac{4\pi}{5} \langle 1\Lambda|Y_{20}|1\Lambda\rangle = \left\{ \begin{array}{ll} +0.63 & \text{if}~\Lambda=0 \\ -0.31 & \text{if}~\Lambda=1. \end{array} \right.
\end{equation} That is, the deformation of the halo is solely determined by the spatial structure of the valence state wave function, independently of the shape of the core. The deformed core merely establishes the quantization axis of the system, important for determining the projection $\Lambda$. \subsection{Higher moments and multipole deformations} The above discussion is easily generalized to the case of higher multipoles. For instance, for $n$=4, the hexadecapole moment $\langle r^4 Y_{40}\rangle$ behaves asymptotically in the same manner as the quadrupole moment, i.e., it approaches a finite limit for the $\pi$=+ halos and diverges as $(-\epsilon_\nu)^{-1/2}$ for the $\pi$=-- halos. However, the corresponding deformation $\beta_4$, proportional to $\langle r^4 Y_{40}\rangle/\langle r^4 \rangle$, approaches zero, regardless of the parity of the halo orbital. \section{The Model}\label{model} In our study, the deformed potential $U(\bbox{r})$ has been approximated by a prolate spheroidal finite square well potential. The spheroidal {\em infinite} square well was used early on by Moszkowski \cite{[Mos55]} to discuss the properties of single-particle deformed orbitals. Merchant and Rae \cite{[Mer94]} investigated the continuum spectrum ($\epsilon$$>$0) of the spheroidal finite square well potential to calculate the particle decay widths of deformed nuclei. Since the main focus of our work is the behavior of bound single-particle orbitals very close to the $\epsilon$=0 threshold, particular attention was paid to a precise numerical solution of the Schr\"odinger equation in the limit of very small binding energies and/or large deformations.
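The limiting deformations $+0.63$ and $-0.31$ quoted in Eq.~\eqref{bneg} follow from purely angular integrals and are easy to verify numerically; only the $\theta$ dependence of the $p$-wave densities matters, since the azimuthal and radial factors cancel in the ratio.

```python
import math

def beta2_p_limit(Lambda, N=200000):
    # (4*pi/5) <Y_{1,Lambda}| Y_{20} |Y_{1,Lambda}>, integrated over u = cos(theta):
    # |Y_{10}|^2 ∝ u^2, |Y_{1,±1}|^2 ∝ 1 - u^2, and Y_{20} = sqrt(5/16π)(3u^2 - 1)
    num = den = 0.0
    h = 2.0 / N
    for i in range(N + 1):
        u = -1.0 + i * h
        w = 0.5 if i in (0, N) else 1.0
        dens = u * u if Lambda == 0 else 1.0 - u * u
        num += w * dens * (3.0 * u * u - 1.0)
        den += w * dens
    return (4.0 * math.pi / 5.0) * math.sqrt(5.0 / (16.0 * math.pi)) * num / den
```

This reproduces $\beta_2\to 4\sqrt{5\pi}/25\approx +0.63$ for $\Lambda$=0 and $\beta_2\to -2\sqrt{5\pi}/25\approx -0.32$ (rounded to $-0.31$ above) for $\Lambda$=1.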
\subsection{Prolate Spheroidal Coordinates and Parametrization of Nuclear Shape} Assuming the $z$-axis to be a symmetry axis of the nucleus, the coordinate transformation between the prolate spheroidal coordinates ($\xi$, $\eta$, $\phi$) and the cartesian coordinates ($x$,$y$,$z$) reads \cite{[Mor53],[Lit56],[Abr65]}: \begin{eqnarray}\label{coord} x & = & a\sqrt{(\xi^2-1)(1-\eta^2)}\cos\phi ,\label{coordx} \\ y & = & a\sqrt{(\xi^2-1)(1-\eta^2)}\sin\phi ,\label{coordy}\\ z & = & a\xi\eta ,\label{coordz} \end{eqnarray} where $a>0$, $1\leq\xi\leq\infty$, $-1\leq\eta\leq 1$, and $0\leq\phi\leq 2\pi$. Surfaces of constant $\xi$=$\xi_0$ are confocal ellipsoids, \begin{equation}\label{ellipse} \frac{x^2+y^2}{a^2(\xi_0^2-1)} + \frac{z^2}{a^2\xi_0^2} = 1, \end{equation} with foci at (0,0,$\pm$$a$), semi-minor axis $R_\perp$=$a\sqrt{\xi_0^2-1}$, and semi-major axis $R_\parallel$=$a\xi_0$. (Since the purpose of this study is to investigate the generic features of weakly bound states in a deformed potential, the analysis is limited to prolate shapes. However, the calculations can easily be extended to the oblate side through a simple coordinate transformation.) It is seen from Eq.~(\ref{ellipse}) that the parameter $\xi_0$ defines the shape deformation of the system. Indeed, for $\xi_0\gg1$, the surface (\ref{ellipse}) becomes that of a sphere with radius $a\xi_0$, while the limit $\xi_0\rightarrow 1$ corresponds to a line segment joining the foci. Following Ref.~\cite{[Mos55]}, we introduce the deformation parameter $\delta$: \begin{equation}\label{delta} \delta=\left(\frac{R_\parallel}{R_\perp} \right)^{2/3}-1= \left(\frac{\xi^2_0}{\xi^2_0-1}\right)^{1/3}-1. \end{equation} The volume-conservation condition [the volume inside the surface (\ref{ellipse}) should not depend on $\delta$] yields \begin{equation}\label{RRR} a = \frac{R_0}{(\xi_0^3-\xi_0)^{1/3}}, \end{equation} where $R_0$ is the corresponding spherical radius.
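As an illustration of Eqs.~(\ref{delta}) and (\ref{RRR}), the shape parameters can be computed for a given $\xi_0$ (a sketch; the value $R_0$=4~fm is an illustrative spherical radius, and the 2:1 axis ratio corresponds to $\xi_0=2/\sqrt{3}$):

```python
import math

def shape_from_xi0(xi0, R0=4.0):
    """Deformation delta [Eq. (delta)] and scale a [Eq. (RRR)]; lengths in fm."""
    delta = (xi0**2 / (xi0**2 - 1.0))**(1.0 / 3.0) - 1.0
    a = R0 / (xi0**3 - xi0)**(1.0 / 3.0)
    R_par, R_perp = a * xi0, a * math.sqrt(xi0**2 - 1.0)   # semi-axes
    return delta, a, R_par, R_perp

# a 2:1 axis ratio (superdeformed shape) corresponds to xi0 = 2/sqrt(3)
delta, a, R_par, R_perp = shape_from_xi0(2.0 / math.sqrt(3.0))
print(round(delta, 3), round(R_par / R_perp, 3))   # 0.587 2.0
```

Note that volume conservation implies $R_\parallel R_\perp^2 = R_0^3$ for any $\xi_0$, which provides a quick consistency check of Eq.~(\ref{RRR}).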
To find the relation between $\delta$ and other quadrupole deformation parameters, one can compare the macroscopic quadrupole moment of the surface (\ref{ellipse}) \begin{equation}\label{Qe} Q_2 = \sqrt{\frac{16\pi}{5}}\langle r^2Y_{20}\rangle = \frac{2}{5}R_0^2 \delta\frac{\delta^2+3\delta+3}{\delta+1} \end{equation} with that obtained using other shape parametrizations \cite{[Naz96a]}. For example, the relation between $\delta$ and the oscillator deformation $\delta_{\rm osc}$ is \begin{equation}\label{deltaosc} \delta=\left(\frac{1+{1\over 3}\delta_{\rm osc}} {1-{2\over 3}\delta_{\rm osc}}\right)^{2/3}-1. \end{equation} For small values of $\delta_{\rm osc}$, Eq.~(\ref{deltaosc}) gives $\delta = \frac{2}{3}\delta_{\rm osc}$. However, at a superdeformed shape ($\frac{R_\parallel}{R_\perp}$=2), both deformations are very close: $\delta_{\rm osc}$=0.6 while $\delta$=2$^{2/3}$--1=0.587. Figure~\ref{shapes} shows the family of shapes representing different deformations $\delta$. \subsection{Bound States in the Prolate Spheroidal Well} The deformed spheroidal square well potential is given by \begin{equation}\label{pot} U(\xi) = \left\{ \begin{array}{ll} U_{0} & \text{for}~\xi\leq\xi_{0} \\ 0 & \text{for}~\xi>\xi_{0}, \end{array} \right. \end{equation} where $U_0$ is the depth of the potential well ($U_0$$<$0) and $\xi_0$ depends on $\delta$ through Eq.~(\ref{delta}). Expressed in prolate spheroidal coordinates, the time-independent Schr\"{o}dinger equation (\ref{deformedSE}) can be written as \begin{eqnarray}\label{schrod} \left[\frac{\partial}{\partial\xi}\left\{(\xi^2-1) \frac{\partial}{\partial\xi}\right\}\right. & + & \left. \frac{\partial}{\partial\eta}\left\{(1-\eta^2) \frac{\partial}{\partial\eta}\right\} +\frac{\xi^2-\eta^2}{(\xi^2-1)(1-\eta^2)} \frac{\partial^2}{\partial\phi^2}\right] \psi(\xi,\eta,\phi) \nonumber\\ & + & \frac{2ma^2(\eta^2-\xi^2)}{\hbar^2}\left[U(\xi)-\epsilon\right] \psi(\xi,\eta,\phi) = 0.
\end{eqnarray} Following Ref.~\cite{[Mor53]}, this equation can be separated into three ordinary differential equations by assuming $\psi(\xi,\eta,\phi)=R(\xi)S(\eta)\Phi(\phi)$. The functions $R$, $S$, and $\Phi$ are solutions of the ordinary differential equations \begin{eqnarray}\label{schrodsep} \frac{d}{d\xi} \left[(\xi^2-1)\frac{dR_{\Lambda l}(c,\xi)}{d\xi}\right] &-& \left[\lambda_{\Lambda l}-c^2\xi^2+\frac{\Lambda^2}{\xi^2-1}\right] R_{\Lambda l}(c,\xi) = 0 ,\label{schrod1} \\ \frac{d}{d\eta} \left[(1-\eta^2)\frac{dS_{\Lambda l}(c,\eta)}{d\eta}\right] & + & \left[\lambda_{\Lambda l}-c^2\eta^2-\frac{\Lambda^2}{1-\eta^2}\right] S_{\Lambda l}(c,\eta) = 0 ,\label{schrod2}\\ \frac{d^2\Phi_\Lambda(\phi)}{d\phi^2} & = & -\Lambda^2 \Phi_\Lambda(\phi),\label{schrod3} \end{eqnarray} where $\lambda_{\Lambda l}$ is the separation constant and \begin{equation}\label{c22} c = \left\{ \begin{array}{ll} c_{\rm int} = a\sqrt{2m(\epsilon-U_0)}/\hbar & \text{for}~\xi\leq\xi_0 \\ ic_{\rm ext} = ia\sqrt{-2m \epsilon}/\hbar & \text{for}~\xi>\xi_0. \end{array} \right. \end{equation} In the following, $R_{\Lambda l}(c,\xi)$ and $S_{\Lambda l}(c,\eta)$ are referred to as the radial and angular spheroidal functions, respectively. For positive values of $\epsilon$, scattering solutions for the spheroidal square well were obtained in Ref.~\cite{[Mer94]}. Here, we concentrate on bound states with $\epsilon<0$. The angular solution $S_{\Lambda l}(c,\eta)$ can be expressed as a series of associated Legendre functions of the first kind \begin{equation}\label{angwave1} S_{\Lambda l}(c,\eta) = {\sum_{k}^{\infty}}' d^{\Lambda l}_k(c)P^\Lambda_{\Lambda+k}(\eta), \end{equation} where the prime over the summation sign indicates that $k$=0, 2, $\ldots$ if ($l-\Lambda$) is even, and $k$=1, 3, $\ldots$ if ($l-\Lambda$) is odd \cite{[Mor53],[Abr65]} (parity conservation).
The radial functions $R_{\Lambda l}(c,\xi)$ are expanded in terms of spherical Bessel functions of the first kind, $f^{(1)}_k(c\xi)\equiv j_k(c\xi)$, and spherical Bessel functions of the third kind, $f^{(3)}_k(c\xi)\equiv h_k(c\xi)=j_k(c\xi)+in_k(c\xi)$. The internal radial function $R^{(1)}_{\Lambda l}(c,\xi)$ ($\xi\le\xi_0$) and the external radial function $R^{(3)}_{\Lambda l}(c,\xi)$ ($\xi >\xi_0$) can be written as \begin{equation}\label{radwave} R^{(p)}_{\Lambda l}(c,\xi) =\left\{{\sum_{k}^{\infty}}'\frac{(2\Lambda+k)!}{k!} d^{\Lambda l}_k(c)\right\}^{-1} \left(\frac{\xi^2-1}{\xi^2}\right)^{\Lambda/2} {\sum_{k}^{\infty}}' i^{k+\Lambda- l}\frac{(2\Lambda+k)!}{k!}d^{\Lambda l}_k(c) f^{(p)}_{\Lambda+k}(c\xi), \end{equation} where $p$=1 or 3. Finally, the deformed single-particle wave function is given by \begin{equation}\label{waveint} \psi_{\Lambda n_{\rm exc}}(\xi,\eta,\phi) = \left\{ \begin{array}{ll} \sum_{l}^{\infty}A_{n_{\rm exc}\Lambda l} R^{(1)}_{\Lambda l}(c_{\rm int},\xi) S_{\Lambda l}(c_{\rm int},\eta)\Phi_\Lambda(\phi) & \mbox{for $\xi\leq\xi_0$} \\ \sum_{l}^{\infty}B_{n_{\rm exc}\Lambda l} R^{(3)}_{\Lambda l}(ic_{\rm ext},\xi) S_{\Lambda l}(ic_{\rm ext},\eta)\Phi_\Lambda(\phi) & \mbox{for $\xi>\xi_0$} \end{array} \right. \end{equation} where $c_{\rm int}$ and $c_{\rm ext}$ are defined in Eq.~(\ref{c22}), and $n_{\rm exc}$ is the excitation quantum number labeling orbitals having the same quantum numbers $\Lambda$ and $\pi$=$(-1)^l$. By matching the internal and external wave functions at $\xi$=$\xi_0$, one finds the eigenenergies $\epsilon$ and the amplitudes $A_{n_{\rm exc}\Lambda l}$ and $B_{n_{\rm exc}\Lambda l}$. The details of the calculation are outlined in Appendix~\ref{appA}. The procedure used to calculate the separation constant $\lambda_{\Lambda l}$ and coefficients $d^{\Lambda l}_k(c)$ is discussed in Appendix~\ref{appB}.
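For orientation, the magnitudes of $c_{\rm int}$ and $c_{\rm ext}$ in Eq.~(\ref{c22}) can be estimated numerically. The sketch below assumes a neutron ($\hbar^2/2m\approx20.72$~MeV\,fm$^2$); the sample values of $a$, $\epsilon$, and $U_0$ are illustrative only:

```python
import math

HBAR2_2M = 20.72  # hbar^2/(2m) for a neutron, in MeV*fm^2 (assumed value)

def c_parameters(a_fm, eps_MeV, U0_MeV):
    """Magnitudes of c_int and c_ext in Eq. (c22) for a bound state (eps < 0)."""
    c_int = a_fm * math.sqrt((eps_MeV - U0_MeV) / HBAR2_2M)
    c_ext = a_fm * math.sqrt(-eps_MeV / HBAR2_2M)
    return c_int, c_ext

# e.g. a = 5.56 fm, eps = -5 keV, U0 = -80 MeV
c_int, c_ext = c_parameters(5.56, -0.005, -80.0)
print(round(c_int, 2), round(c_ext, 4))
```

Near the threshold $c_{\rm ext}\rightarrow0$ while $c_{\rm int}$ stays of order ten, which is why the numerical solution requires special care at very small binding energies.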
\section{Results}\label{results} This section illuminates the general properties of deformed Nilsson orbitals discussed in Sec.~\ref{general} using the spheroidal square well potential. The single-particle energies of the finite spheroidal well with $U_0$=--80\,MeV and $R_0$=4\,fm are shown in Fig.~\ref{sp} as functions of deformation $\delta$. At a spherical shape the orbitals are characterized by means of spherical quantum numbers ($n\ell$). The deformed orbitals are labeled by parity $\pi$, angular momentum projection $\Lambda$, and the excitation quantum number $n_{\rm exc}$, which specifies the energetic order of a single-particle orbital in a given ($\pi\Lambda$)-block, counting from the bottom of the well (e.g., $n_{\rm exc}$=1 corresponds to the lowest state, $n_{\rm exc}$=2 is the second state, and so on). In the following, the deformed orbitals are labeled as [$n_{\rm exc}\Lambda\pi$]. For example, the $\Lambda$=1 orbital originating from the spherical shell $1d$ is referred to as [11+] (see Fig.~\ref{sp}). \subsection{Radial Properties of Deformed Orbitals} The dependence of the single-particle rms radius on binding energy is illustrated in Fig.~\ref{rads} (spherical shape), and Figs. \ref{raddp} and \ref{raddn} (superdeformed shape). (In the calculations, the binding energy was varied by changing the well depth $U_0$.) The spherical case has been discussed in detail in Ref.~\cite{[Rii92]}; here it is shown for reference only. In all cases the asymptotic conditions [Eq.~(\ref{conditions_r})] are met rather quickly. Indeed, in the considered range of binding energy the values of $\langle r^2 \rangle$ for the $1s$ state shown in Fig.~\ref{rads} and the [10+], [20+], and [30+] orbitals of Fig.~\ref{raddp} approach an asymptotic limit [$(-\epsilon)^{-1}$ dependence], and a similar behavior holds for the $1p$ state and the [10--], [20--], [11--], and [21--] orbitals (see Fig.~\ref{raddn}), which behave as $(-\epsilon)^{-1/2}$ at low binding energy.
The remaining states do not exhibit any halo effect, as expected. Figure \ref{out} displays the probability $P_{\rm outer}$ [Eq.~(\ref{outer})] to find the neutron in the classically forbidden region, $\xi > \xi_0$, as a function of $\epsilon$ for three superdeformed states with different values of $\Lambda$. At low values of binding energy, the $\ell$=0 component completely dominates the structure of the [20+] state and $P_{\rm outer}\rightarrow 1$. The radial form factors $R_{\ell\Lambda n_{\rm exc}}(r)$ appearing in the multipole decomposition [Eq.~(\ref{multipole})] carry information about the spatial extension of the wave function. They can be obtained by angular integration: \begin{equation}\label{ffactors} R_{\ell\Lambda n_{\rm exc}} (r) = \int \psi^\Lambda_{n_{\rm exc}\pi} (\bbox{r}) Y^*_{\ell\Lambda}(\hat{\bbox{r}})\,d\hat{\bbox{r}}. \end{equation} Since in our calculations the total wave function $\psi^\Lambda_{n_{\rm exc}\pi}(\bbox{r})$ is normalized to unity, the integral \begin{equation}\label{pr} P_{\ell}(\Lambda n_{\rm exc}) \equiv \int_{0}^{\infty} |R_{\ell\Lambda n_{\rm exc}} (r)|^2 r^2\,dr \end{equation} represents the probability to find the partial wave $\ell$ in the state [$n_{\rm exc} \Lambda \pi$]. Of course, \begin{equation} \sum_{\ell}P_{\ell}(\Lambda n_{\rm exc})=1. \end{equation} Figures~\ref{rff1} and \ref{rff2} display the radial form factors for several orbitals in a superdeformed well assuming a subthreshold binding energy of $\epsilon$=--5 keV. For the $\pi$=+, $\Lambda$=0 orbitals (Fig.~\ref{rff1}), the $\ell$=0 component dominates at this extremely low binding energy, in spite of a very large deformation. Indeed, according to the discussion in Sec.~\ref{nint}, the value of $P_{\ell=0}(0 n_{\rm exc})$ approaches one at small binding energies. In other words, the $\pi$=+, $\Lambda$=0 halos behave at low values of $\epsilon$ like $s$ waves.
It is interesting to note that both the [20+] and [30+] orbitals are dominated by the $2s$ component; the corresponding $\ell$=0 form factors have only one node. For the $\pi$=-- halo orbitals with $\Lambda$=0 and 1 (Fig.~\ref{rff2}), the $p$ component does not dominate the wave function completely (Sec.~\ref{nint}), but a significant excess of a $p$ wave at large distances is clearly seen. The radial decomposition of other orbitals ($\Lambda>1$), shown in Figs.~\ref{rff1} and \ref{rff2}, depends very weakly on the binding energy; it reflects the usual multipole mixing due to the deformed potential. The results presented in Figs.~\ref{rff1} and \ref{rff2} illustrate the fact that the multipole decomposition of the deformed level depends on {\em both} deformation and the binding energy. Figures \ref{ppp} ($\pi$=+) and \ref{ppn} ($\pi$=--) show contour maps of $P_{\ell}(\Lambda n_{\rm exc})$ for the $\Lambda$=0 orbitals as functions of $\epsilon$ and $\delta$. The structure of the [10+] level, originating from the spherical $1s$ state, is completely dominated by the $\ell$=0 component, even at very large deformations. A rather interesting pattern is seen in the diagram for the [20+] orbital originating from the spherical $1d$ state. The $\ell$=2 component dominates at low and medium deformations, and the corresponding probability $P_{\ell=2}$ slowly decreases with $\delta$, approaching a (constant) asymptotic limit at large deformations. However, a similar effect, namely the decrease of the $\ell$=2 component, is seen when approaching the $\epsilon$=0 threshold. In the language of perturbation theory \cite{[Fan61]}, this rapid transition comes from the coupling to the low-energy $\ell$=0 continuum; the $\ell$=0 form factor of the [20+] orbital shows, at low values of $\epsilon$, a one-node structure characteristic of the $2s$ state (see Fig.~\ref{rff1}). At low deformations, the amplitude of the $s$ component in the [20+] state is proportional to $\delta$.
Remembering that the norm, Eq.~(\ref{norm}), behaves as $(-\epsilon)^{-1/4}$ for the $\ell$=0 state, one can conclude that at low values of $\delta$ and $\epsilon$ the contours of constant $P_{\ell=0}$ correspond to the power law $\delta^2 \propto (-\epsilon)^{1/2}$. This result, seen in Fig.~\ref{ppp}, tells us that the $s$ component takes over very quickly even if the deformation $\delta$ is small. (A similar conclusion for the $1/2^+$ ground state of $^{11}$Be has been reached in Ref.~\cite{[Rid96]}.) The partial-wave probabilities calculated for the negative-parity states [10--] and [20--] presented in Fig.~\ref{ppn} do not show such a dramatic rearrangement around the threshold. Namely, the [10--] orbital retains its $\ell$=1 structure in the whole deformation region considered, and the structure of the [20--] state at large deformations can be viewed in terms of a mixture of $p$ and $f$ waves. In the latter case there is a clear tendency to increase the $\ell$=1 contribution at low binding energies, but this effect is much weaker compared to the [20+] case discussed above. \subsection{Quadrupole Moments and Deformations} Deformation properties of single-particle orbitals, namely the intrinsic quadrupole moments $\langle r^2Y_{20}\rangle$ and quadrupole deformations $\beta_2$, Eq.~(\ref{beta}), are displayed in Figs. \ref{raddp} and \ref{raddn} as functions of binding energy at $\delta$=0.6 (superdeformed shape). It is seen that the asymptotic limits discussed in Sec.~\ref{general} are reached at low values of $\epsilon$ in practically all cases. In particular, the values of $\beta_2$ for the $\pi$=+, $\Lambda$=0 orbitals approach zero with $\epsilon\rightarrow 0$ [Eq.~(\ref{bpos})], those for the $\pi$=--, $\Lambda$=0 states approach the limit of 0.63, and the value of $\beta_2$ for the [11--] orbital is close to --0.31 [see Eq.~(\ref{bneg})].
The only exception is the [21--] orbital, which at deformation $\delta$=0.6 contains only 31\% of the $\ell$=1 component (see Fig.~\ref{rff2}); hence the positive contribution of the $1f$ state to $\langle r^2Y_{20}\rangle$ still dominates. To illustrate the interplay between the core deformation and that of the valence particle [Eq.~(\ref{b11})], we display in Figs.~\ref{b2tot1} and \ref{b2tot2}: (i) the quadrupole deformation $\beta_2$ of the valence orbital, (ii) the quadrupole deformation of the core, and (iii) the total quadrupole deformation of the system. Here we assume that the core consists of {\em all} single-particle orbitals lying energetically below the valence orbital, and that each state (including the valence one) is occupied by two particles. It is convenient to rewrite Eq.~(\ref{b11}) in the form: \begin{equation} \beta_{2,{\rm tot}} =\frac{\beta_{2,{\rm c}}\langle r^2\rangle_{\rm c} +\beta_{2,{\rm v}}\langle r^2\rangle_{\rm v}} {\langle r^2\rangle_{\rm c}+\langle r^2\rangle_{\rm v}} = \frac{\beta_{2,{\rm v}}+\chi\beta_{2,{\rm c}}}{1+\chi}, \end{equation} where \begin{equation} \chi \equiv \frac{\langle r^2\rangle_{\rm c}}{\langle r^2\rangle_{\rm v}}. \end{equation} For the halo states, $\chi\rightarrow 0$ and $\beta_{2,{\rm tot}}\rightarrow \beta_{2,{\rm v}}$. The results shown in Figs.~\ref{b2tot1} and \ref{b2tot2} nicely illustrate this behavior. Namely, for very small binding energy the total deformation of the system coincides with that of the valence orbital, {\em regardless} of the core deformation. These results illustrate the deformation decoupling of the deformed halo from the rest of the system. A nice example of this decoupling has been discussed by Muta and Otsuka \cite{[Mut95]}, who demonstrated that the halo proton in $^8$B occupying the $1p_{3/2}$, $\Lambda$=1 weakly bound orbit produces an oblate density distribution that greatly reduces the large quadrupole moment of the prolate core.
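The decoupling expressed by the limit $\chi\rightarrow0$ is easy to illustrate numerically (a sketch; the core deformation $\beta_{2,{\rm c}}$=+0.3 is an arbitrary illustrative value, and $\beta_{2,{\rm v}}$=--0.31 is the asymptotic $\Lambda$=1 halo value):

```python
def beta2_total(beta2_v, beta2_c, chi):
    """Total deformation of core + valence; chi = <r^2>_c / <r^2>_v."""
    return (beta2_v + chi * beta2_c) / (1.0 + chi)

# prolate core, p-wave Lambda=1 halo with beta2_v -> -0.31:
for chi in (10.0, 1.0, 0.1, 0.001):
    print(chi, round(beta2_total(-0.31, 0.3, chi), 3))
```

As the valence orbital becomes a halo ($\langle r^2\rangle_{\rm v}\rightarrow\infty$, i.e., $\chi\rightarrow0$), the printed values approach $\beta_{2,{\rm v}}$=--0.31 regardless of the core deformation.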
\subsection{Deformation Softness of Halo Systems in the Mean-Field Calculations}\label{mfield} The effect of decoupling of the valence particles from the deformed core in the limit of a very weak binding suggests that in such cases the constrained Hartree-Fock (CHF) or Nilsson-Strutinsky (NS) calculations would produce very shallow potential energy surfaces. Indeed, in the CHF theory the nuclear Hamiltonian $H$ is minimized under the constraint that the multipole operator that defines the intrinsic shape has a fixed expectation value $\langle Q \rangle$=$q$. The intrinsic wave functions are found by minimizing the Routhian \begin{equation}\label{chf} H' = H - \beta Q, \end{equation} where $\beta$ is the corresponding Lagrange multiplier. If $Q$ is the quadrupole moment and the nucleus is weakly bound, then, especially in the case of halo systems, $\langle Q \rangle$ is very sensitive to small variations in the single-particle energy $\epsilon_\nu$ of the last occupied single-particle orbital. In particular, for the $\pi$=-- halos, $\langle Q \rangle$ can take practically any value without changing the HF energy $\langle H \rangle$. This means that the numerical procedure used for searching for the self-consistent solution can be rather susceptible to uncontrolled variations of $q$ with $\epsilon_\nu$. In the Nilsson-Strutinsky calculations, the bulk part of the binding energy comes from the macroscopic energy $E_{\rm macro}$. A commonly used choice is the Yukawa-plus-exponential macroscopic energy formula \cite{[Kra79]}, which accounts for the surface thickness.
The corresponding generalized surface energy reads \begin{equation}\label{krappe} E_s = -\frac{c_s}{8\pi^2r_0^2a^3}\int\int_V\left(\frac{\sigma}{a}-2\right) \frac{e^{-\sigma/a}}{\sigma}d^3\bbox{r}\,d^3\bbox{r}', \end{equation} where $c_s$ is the surface-energy coefficient, $R_0$=$r_0 A^{1/3}$, $a$ is the surface diffuseness parameter, $\sigma=|\bbox{r}-\bbox{r}'|$, and $V$ denotes the volume enclosed by the deformed nuclear surface. The latter has been defined in our study by means of the axial multipole expansion in terms of deformation parameters $\beta_\lambda$: \begin{equation}\label{radius1} R(\Omega ) = c(\beta )R_0\left[ 1 + \sum_{\lambda} \beta_{\lambda}Y_{\lambda 0} (\Omega )\right] \end{equation} with $c(\beta )$ being determined from the volume-conservation condition. As demonstrated in Ref. \cite{[Kra79]}, for small deformations the generalized surface energy, Eq.~(\ref{krappe}), is given to second order by \begin{equation}\label{krappe1} E_s = E_s({\rm sph}) + \sum_{\lambda} c_\lambda(\zeta) \beta^2_\lambda, \end{equation} where the expansion coefficients $c_\lambda$ depend solely on the dimensionless parameter \begin{equation}\label{xi} \zeta=\frac{R_0}{a}=\frac{r_0 A^{1/3}}{a}. \end{equation} It can be shown \cite{[Jon96]} that the function $c_\lambda(\zeta)$ becomes negative below a critical value $\zeta_c$, which is roughly proportional to the multipolarity $\lambda$. Consequently, the generalized surface energy is stable with respect to $\beta_\lambda$ if $\zeta > \zeta_c \approx 0.8 \lambda$, or \begin{equation}\label{krappe2} a\lambda < R_0/0.8. \end{equation} According to Eq.~(\ref{krappe2}), for a given nucleus {\em both} large multipolarity {\em and} large diffuseness can trigger the shape instability. This conclusion also holds for the finite-range droplet model mass formula \cite{[Jon96]}. The weakly bound neutron-rich nuclei and halo systems are characterized by very diffuse density distributions. For instance, it has been predicted in Ref.
\cite{[Dob94]} that the average diffuseness in neutron drip-line nuclei can increase by as much as 50\% as compared to the standard value representative of nuclei around the beta stability line. The effect of the large diffuseness on the macroscopic energy is illustrated in Fig. \ref{LD}, which displays $E_{\rm macro}$ with the parameters of Ref. \cite{[Mol88]} for a light $A$=20 nucleus as a function of deformations $\beta_2$, $\beta_4$, and $\beta_6$. Since at low deformations different multipolarities are decoupled [Eq.~(\ref{krappe1})], each can be varied separately with the remaining ones set to zero. The calculations are performed for three values of $a$. It is seen that the general rule given by Eq.~(\ref{krappe2}) holds. Namely, for larger values of $a$ and $\lambda$ the macroscopic energy becomes unstable with respect to shape deformation, mainly due to the instability of $E_s$ (for very light nuclei the effect of the Coulomb term is much weaker). Interestingly, the effect is fairly pronounced even for quadrupole distortions; the potential energy curve becomes unstable with respect to $\beta_2$ already for $a$=1.3$a_{\rm std}$. The above results indicate that in the microscopic-macroscopic approach both the single-particle and macroscopic energies become extremely shallow with respect to deformation for weakly bound systems. \section{Conclusions}\label{conclusions} In the limit of very weak binding, the geometric interpretation of shape deformation is lost. Consider, e.g., a deformed core with a prolate deformation and a weakly bound halo neutron in a negative-parity orbital. According to the discussion above, the total quadrupole moment of the system diverges in the limit of vanishing binding (i.e., $\langle r^2Y_{20}\rangle$ can take {\em any} value). On the other hand, depending on the geometry of the valence orbital, the total quadrupole deformation of the (core+valence) system is consistent with a superdeformed shape ($\pi$=--, $\Lambda$=0 halo) or an oblate shape ($\pi$=--, $\Lambda$=1 halo).
For a $\pi$=+ halo, the quadrupole moment is finite but $\beta_2$ approaches zero. In the language of self-consistent mean-field theory, this result reflects the extreme softness of the system to the quadrupole distortion. Figure~\ref{density} shows an example of such a situation: The two valence particles occupying the weakly bound [11--] orbital give rise to an oblate deformation of the system, in spite of the prolate deformation of the core and the prolate shape of the underlying spheroidal well ($\delta$=0.2). Shape deformation is an extremely powerful concept provided that the nuclear surface can be properly defined. However, for very diffuse and spatially extended systems the geometric interpretation of multipole moments and deformations is lost. The spatially extended neutron halo gives rise to low-energy isovector modes in neutron-rich nuclei. The deformation decoupling of the halo implies that the nuclei close to the neutron drip line are excellent candidates for isovector quadrupole deformations, with different quadrupole deformations for protons and neutrons. Such nuclei are expected to have a very interesting rotational behavior and unusual magnetic properties. For instance, the rotational features of such systems (moments of inertia, $B(E2)$ values, $g$-factors) should be solely determined by the deformed core. An example of the above scenario has been predicted in the self-consistent calculations for the neutron-rich sulfur isotopes performed using Skyrme Hartree-Fock and relativistic mean field methods \cite{[Wer94a],[Wer96]}. When approaching the neutron drip line, the calculated values of $\beta_2$ for neutrons are systematically smaller than those of the proton distribution. This example illustrates once again that in the drip-line nuclei, due to spatially extended wave functions, the ``radial'' contribution to the quadrupole moment might be as important as the ``angular'' part.
Finally, it is interesting to note that anisotropic (non-spherical) halo systems have been investigated in molecular physics. A direct molecular analogy of a quadrupole-deformed halo nucleus is an electron weakly bound by the quadrupole moment of a neutral symmetric molecule such as CS$_2$ \cite{[Com96]}. \section*{Acknowledgments} This work has been supported by the U.S. Department of Energy through Contract No. DE-FG05-93ER40770. Oak Ridge National Laboratory is managed for the U.S. Department of Energy by Lockheed Martin Energy Research Corp. under Contract No. DE-AC05-96OR22464. The Joint Institute for Heavy Ion Research has as member institutions the University of Tennessee, Vanderbilt University, and the Oak Ridge National Laboratory; it is supported by the members and by the Department of Energy through Contract No. DE-FG05-87ER40361 with the University of Tennessee.
\section{\label{sec:1}Introduction} In birefringent silica fibers, stable propagation of a monochromatic wave can be inhibited by a nonlinear process called \emph{vector} modulation instability (V-MI) in both dispersion regimes (normal or anomalous).\cite{H75,TW92,HH92} This contrasts with the \emph{scalar} modulation instability (S-MI), which does not require birefringence but can only arise in the anomalous dispersion regime (at least when second order dispersion dominates\cite{PM03,H03}). Two limits, those of weak and strong birefringence, are amenable to relatively simple analytical study.\cite{Agrawal95} These predictions have been confirmed experimentally in a number of cases,\cite{Rothenberg90,Drummond90,Murdoch95} particularly in the normal dispersion regime. The only experimental investigation of V-MI in the anomalous dispersion regime that we are aware of is a recent unsuccessful attempt using photonic crystal fibers.\cite{K04} Here we report what is to our knowledge the first experimental study of V-MI in strongly birefringent silica fibers in the anomalous dispersion regime. We also carry out a very precise comparison between experimental results and the predictions of numerical simulations. Modulation instabilities (MI) can be induced by classical noise present initially together with the pump beam. But MI can also arise spontaneously through amplification of vacuum fluctuations.\cite{Potasek87} In practice classical input noise and vacuum fluctuations compete to induce MI. The experiment reported here is carried out in the regime where the quantum noise is dominant. Elsewhere,\cite{BrainisSTO} we present a unified approach to the problem of scalar and vector MI based on the \emph{stochastic nonlinear Schr\"{o}dinger equations} (SNLSE), which generalizes the work of Ref.~\onlinecite{K91}.
This approach is particularly well suited for numerical simulations in complex situations where classical noise and vacuum fluctuations act together, where the pump is depleted, or where the higher order harmonics of MI appear. In previous work on modulation instability, comparison between theory and experiment has generally been limited to noting that the frequency at which the maximum gain of the MI occurs is correctly predicted. Here we show that there is excellent agreement between the experimental results and the numerical integration of the SNLSE. In particular the detailed shape of the output spectrum can be predicted, even in the case where higher order harmonics appear (which cannot be predicted by linear perturbation theory). To our knowledge this is the first time experimental and theoretical studies of MI are compared in such detail. A related work is the comparison between theory and experiment for RF noise measurements reported in Ref.~\onlinecite{C03}. The experimental setup is reported in Fig.~\ref{F1}. It consists of a Q-switched laser (Cobolt Tango) that produces pulses at 1536 nm, with a 3.55~ns full-width-at-half-maximum (FWHM) duration $\tau$ and a 2.5~kHz re\-petition rate $f$. The Gaussian spectral shape of the laser has been characterized using a Fabry-Perot interfero\-meter. The measured $0.214$~GHz FWHM spectral width is slightly larger than expected in the Fourier transform limit. The pump power is adjusted using variable neutral density filters (ND). We measured the injected mean power $P_{m}$ at the end of the fiber. The peak power $P_{0}$ is related to the mean power by \begin{equation} \label{power} P_{0}=2\sqrt{\frac{\ln(2)}{\pi}}\frac{P_m}{ f\tau}=1.06\times10^{5} P_{m}. \end{equation} A polarizing beam splitter (PBS1) ensures the pump pulse is linearly polarized. A half-wave plate is used to tune the angle $\theta$ between the pump polarization direction and the principal axes of the fiber.
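The numerical factor in Eq.~(\ref{power}) follows directly from the quoted repetition rate and pulse duration (a quick check):

```python
import math

f = 2.5e3       # repetition rate, Hz
tau = 3.55e-9   # FWHM pulse duration, s

# Gaussian pulse train: P0 = 2*sqrt(ln2/pi) * Pm / (f*tau)
factor = 2.0 * math.sqrt(math.log(2.0) / math.pi) / (f * tau)
print(f"{factor:.3g}")   # 1.06e+05
```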
A polarizing beam splitter (PBS2) can be used in order to observe the field components polarized along the fast or slow axes separately. Lastly, spectra are recorded using an optical spectrum analyzer (OSA). In our experiment we use the Fibercore HB1250P optical fiber. The fiber length $L=51$~m, the group-velocity dispersion (GVD) parameter $\beta_2=-15$~ps$^2$~km$^{-1}$ and the group-velocity mismatch parameter $\Delta\beta_1=286.1$~fs~m$^{-1}$ have been measured by independent methods (only significant digits have been indicated). Note that the accuracy of the value of $\beta_2$ is poor by usual standards. This is because the interferometric method \cite{Merritt89} that we used turned out to be difficult to implement with a birefringent fiber. The group-velocity mismatch parameter $\Delta\beta_1$ is deduced from the walk-off between pulses propagating on the principal axes of the fiber. The fiber length $L$ is deduced from a measurement of the pulse time of flight. The other important parameters of the fiber, and a more accurate estimation of $\beta_2$, can be inferred from the MI spectral peak positions, as explained further below. Fig.~\ref{F2} shows a typical spectrum at the fiber output when the angle $\theta$ is set to 45 degrees. The fast and slow polarization components have been separated using PBS2 and their spectra recorded successively. The plot clearly exhibits two V-MI peaks at 1511.4~nm and 1561.4~nm that are polarized along the fast and slow axes, respectively. It also shows S-MI peaks at 1530.0~nm and 1541.9~nm, together with their first harmonics. In contrast to V-MI, S-MI is generated equally on both principal axes. By polarizing the input field along the fast or slow axes, we have observed that V-MI disappears and that the amplitude of the S-MI peaks increases dramatically (figure not shown).
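The side-band wavelengths quoted above can be converted to angular-frequency detunings from the 1536~nm pump (a sketch; the conversion itself is exact, the printed values are for orientation only):

```python
import math

C = 299792.458   # speed of light, nm * THz
PUMP = 1536.0    # pump wavelength, nm

def domega(lam_nm):
    """Angular-frequency detuning (rad * THz) of a side-band from the pump."""
    return 2.0 * math.pi * C * abs(1.0 / lam_nm - 1.0 / PUMP)

for lam in (1511.4, 1561.4, 1530.0, 1541.9):
    print(lam, round(domega(lam), 2))
```

The two V-MI side-bands come out nearly symmetric in angular frequency (about 20 rad THz on either side of the pump), as expected for phase-matched side-band pairs.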
According to linear perturbation analysis, the angular frequency shifts of the MI peaks from the pump are given by \begin{eqnarray} \Delta\Omega_{S-MI}^{2}&\approx&\frac{\gamma P_0}{|\beta_2|}\left[1-\frac{2}{9}\frac{\gamma P_0}{|\beta_2|} \left(\frac{|\beta_2|}{\Delta\beta_1}\right)^{2}\right] \label{MIS}\\ \Delta\Omega_{V-MI}^{2}&\approx&\left(\frac{\Delta\beta_1}{|\beta_2|}\right)^{2}\left[1+\frac{2\gamma P_0}{|\beta_2|}\left(\frac{|\beta_2|}{\Delta\beta_1}\right)^{2}\right]\label{MIV} \end{eqnarray} for the S-MI and V-MI peaks, respectively. Here, $\gamma$ stands for the Kerr nonlinearity parameter of the fiber. Fig.~\ref{F3} shows the evolution of the spectrum of light emerging from the fiber when the pump power is increased. Using Eqs.~(\ref{MIS}) and (\ref{MIV}), the ratios $\frac{\Delta\beta_1}{|\beta_2|}=18.740$~(rad)~THz and $\frac{\gamma}{|\beta_2|}=0.2135$~(rad)~THz$^2$~W$^{-1}$ were deduced from these measurements. The first ratio, together with the measured value of $\Delta\beta_1$, implies that $\beta_2=-15.27$~ps$^2$~km$^{-1}$, which is compatible with the independently measured value. From the second ratio, we deduce that $\gamma=3.26$~W$^{-1}$~km$^{-1}$. The exponential growth of the MI peaks and harmonics is clearly apparent in Fig.~\ref{F3}. From these measurements we deduce that the ratio between the maximum gain of the V-MI and of the S-MI is 0.67$\pm0.05$, in good agreement with the theoretical value $2/3$. We also find that the ratio between the maximum gain of the 1st harmonic and of the S-MI is 1.88$\pm0.15$, in good agreement with the theoretical value\cite{Hasegawa,Tai86a} of 2. We now focus on the quantitative comparison between experimental spectral amplitudes and those predicted by the SNLSE model for spontaneous (or vacuum-fluctuation-induced) modulation instabilities.
This comparison makes sense because the exact shape of the spectrum, and in particular the relative intensities of the modulation instability peaks and harmonics, is very strongly dependent on the initial noise and pump peak power. Experimental and computed spectra are plotted together in Fig.~\ref{F4}. In the simulations we used the parameters deduced from the experimental MI peak positions (see above), but in order to obtain a good fit we had to increase the peak pump power by 5\% with respect to that deduced from the measurements using Eq.~(\ref{power}). We are not sure of the origin of this discrepancy. It could be due to a systematic error in the measured values of $P_m$, to an error in the experimental measure of $\Delta\beta_1$, to the fact that the experimental pulses are not exactly Fourier-transform limited, or to some classical noise photons (for instance due to Raman scattering in the fiber) that are added to vacuum fluctuations and slightly speed up the instability. In any case the discrepancy is small enough to confidently conclude that in our experiment the MI is mainly induced by vacuum fluctuations. Indeed, with this small adjustment the experimental MI spectra are very well reproduced by numerical integration of the SNLSE model. In summary, we report what is to our knowledge the first experimental observation of spontaneous vector modulation instability in a highly birefringent silica fiber in the anomalous dispersion regime. The pump power dependence of the detuning of both scalar and vector side-bands, as well as their polarizations, agrees with linear perturbation theory when the pump depletion is small. We have also obtained very good agreement between the experimental spectra and those obtained by numerical integration of the SNLSE derived from the quantum theory. This is to our knowledge the first time that theoretical and experimental spectra have been compared in such quantitative detail.
This very good agreement between the two approaches proves that the modulation instability that we observed was truly spontaneous, in the sense that it mainly results from the amplification of vacuum-fluctuations. \section*{Acknowledgments} This research is supported by the Interuniversity Attraction Poles Programme - Belgium Science Policy - under grant V-18. We are also grateful to Fonds Defay for financial support. \bibliographystyle{osajnl}
\section{Introduction} Compact Einstein spaces have long been of interest in mathematics and physics. One of their principal applications in physics has been in higher-dimensional supergravity, string theory and M-theory, where they can provide backgrounds for reductions to lower-dimensional spacetimes. Of particular interest are Einstein spaces that admit Killing spinors, since these can provide supersymmetric backgrounds. In recent times, one of the most important applications of this type has been to supersymmetric backgrounds AdS$_5\times K_5$ of type IIB string theory, where $K_5$ is a compact Einstein space admitting Killing spinors. Such configurations provide examples for studying the AdS/CFT Correspondence, which relates bulk properties of the dimensionally-reduced AdS supergravity to a superconformal field theory on the four-dimensional boundary of AdS$_5$ \cite{mald,guklpo,wit}. The most studied cases have been when $K_5$ is the standard round 5-sphere, which is associated with an ${\cal N}=4$ superconformal boundary field theory. Another case that has been extensively studied is when $K_5$ is the space $T^{1,1}=(SU(2)\times SU(2))/ U(1)$. Until recently, these two homogeneous spaces were the only explicitly known examples of five-dimensional Einstein spaces admitting Killing spinors, although the general theory of Einstein-Sasaki spaces, which are odd-dimensional Einstein spaces admitting a Killing spinor, was well established, and some existence proofs were known (see, for example, \cite{boygal1,boygal2}). In recent work by Gauntlett, Martelli, Sparks and Waldram, the picture changed dramatically with the construction of infinitely many explicit examples of Einstein-Sasaki spaces in five \cite{gamaspwa1} and higher \cite{gamaspwa2} odd dimensions. Their construction was based on some earlier results in \cite{berber,pagpop}, in which local Einstein-K\"ahler metrics of cohomogeneity 1 were obtained as line bundles over Einstein-K\"ahler bases. 
Using the well-known result that an Einstein-Sasaki metric can be written as a circle bundle over an Einstein-K\"ahler metric, this yielded the new local metrics discussed in \cite{gamaspwa1,gamaspwa2}. In the case of five dimensions, the resulting Einstein-Sasaki metrics are characterised by a non-trivial real parameter. They have cohomogeneity 1, with principal orbits $SU(2)\times U(1)\times U(1)$. In general these local metrics become singular where the orbits degenerate, but if the real parameter is appropriately restricted to rational values, the metric at the degeneration surfaces extends smoothly onto a complete and non-singular compact manifold. The resulting Einstein-Sasaki spaces were denoted by $Y^{p,q}$ in \cite{gamaspwa1}, where $p$ and $q$ are coprime integers with $q<p$. Further generalisations were obtained in \cite{gamaspwa3,chlupopo}. It was shown in \cite{hasaya} that the Einstein-Sasaki spaces $Y^{p,q}$, and the higher-dimensional generalisations obtained in \cite{gamaspwa2}, could be obtained by taking certain limits of the Euclideanised Kerr-de Sitter rotating black hole metrics found in five dimensions in \cite{hawhuntay}, and in all higher dimensions in \cite{gilupapo1,gilupapo2}. Specifically, the limit considered in \cite{hasaya} involved setting the $n$ independent rotation parameters of the general $(2n+1)$-dimensional Kerr-de Sitter metrics equal, and then sending this parameter to a limiting value that corresponds, in the Lorentzian regime, to having rotation at the speed of light at infinity. [This BPS scaling limit for the black hole metrics with two equal angular momenta was recently studied in \cite{cvgasi}.]
In a recent paper \cite{cvlupapo}, we showed that vastly greater classes of complete and non-singular Einstein-Sasaki spaces could be obtained by starting from the general Euclideanised Kerr-de Sitter metrics, with unequal rotation parameters, and again taking an appropriate limit under which the metrics become locally Einstein-Sasakian. In fact, this limit can be understood as a Euclidean analogue of the BPS condition that leads to supersymmetric black hole metrics. In five dimensions, this construction leads to local Einstein-Sasaki metrics of cohomogeneity 2, with $U(1)\times U(1)\times U(1)$ principal orbits, and two non-trivial real parameters. In dimension $D=2n+1$, the local Einstein-Sasaki metrics have cohomogeneity $n$, with $U(1)^{n+1}$ principal orbits, and they are characterised by $n$ non-trivial real parameters. In general the metrics are singular, but by studying the behaviour of the collapsing orbits at endpoints of the ranges of the inhomogeneous coordinates, we showed in \cite{cvlupapo} that the metrics extend smoothly onto complete and non-singular compact manifolds if the real parameters are appropriately restricted to be rational. This led to new classes of Einstein-Sasaki spaces, denoted by $L^{p,q,r_1,\cdots ,r_{n-1}}$ in $2n+1$ dimensions, where $(p,q,r_1,\ldots,r_{n-1})$ are $(n+1)$ coprime integers. If the integers are specialised appropriately, the rotation parameters become equal and the spaces reduce to those obtained previously in \cite{gamaspwa1} and \cite{gamaspwa2}. For example, in five dimensions our general class of Einstein-Sasaki spaces $L^{p,q,r}$ reduce to those in \cite{gamaspwa1} if $p+q=2r$, with $Y^{p,q}= L^{p-q,p+q,p}$ \cite{cvlupapo}. In this paper, we elaborate on some of our results that appeared in \cite{cvlupapo}, and give further details about the Einstein-Sasaki spaces that result from taking BPS limits of the Euclideanised Kerr-de Sitter metrics. 
We also give details about new complete and non-singular compact Einstein spaces that are not Sasakian, which we also discussed briefly in \cite{cvlupapo}. These again arise by making special choices of the non-trivial parameters in the Euclideanised Kerr-de Sitter metrics, but this time without first having taken a BPS limit. The five-dimensional Einstein-Sasaki spaces are discussed in section 2, and the higher-dimensional Einstein-Sasaki spaces in section 3. In section 4 we discuss the non-Sasakian Einstein spaces, and the paper ends with conclusions in section 5. In an appendix, we discuss certain singular BPS limits, where not all the rotation parameters are taken to limiting values. This discussion also encompasses the case of even-dimensional Kerr-de Sitter metrics, which do not give rise to non-singular spaces with Killing spinors. \section{Five-Dimensional Einstein-Sasaki Spaces}\label{es5sec} \subsection{The local five-dimensional metrics} Our starting point is the five-dimensional Kerr-AdS metric found in \cite{hawhuntay}, which is given by \begin{eqnarray} ds_5^2 &=& -\fft{\Delta}{\rho^2}\, \Big[ dt - \fft{a\, \sin^2\theta}{\Xi_a}\, d\phi - \fft{b\, \cos^2\theta}{\Xi_b}\, d\psi\Big]^2 + \fft{\Delta_\theta\, \sin^2\theta}{\rho^2}\, \Big[ a\, dt -\fft{r^2+a^2}{\Xi_a}\, d\phi\Big]^2\nonumber\\ && + \fft{\Delta_\theta\, \cos^2\theta}{\rho^2}\, \Big[ b\, dt -\fft{r^2+b^2}{\Xi_b}\, d\psi\Big]^2 + \fft{\rho^2\, dr^2}{\Delta} + \fft{\rho^2\, d\theta^2}{\Delta_\theta} \nonumber\\ && + \fft{(1+ g^2 r^2)}{r^2\, \rho^2}\, \Big[ a\, b\, dt - \fft{b\, (r^2+a^2)\, \sin^2\theta}{\Xi_a}\, d\phi - \fft{a\, (r^2+b^2)\, \cos^2\theta}{\Xi_b}\, d\psi\Big]^2\,, \label{hawkmet} \end{eqnarray} where \begin{eqnarray} \Delta &\equiv & \fft1{r^2}\, (r^2+a^2)(r^2+b^2)(1 + g^2 r^2) -2m\,, \nonumber\\ \Delta_\theta &\equiv& 1 - g^2 a^2\,
\cos^2\theta - g^2 b^2\, \sin^2\theta\,,\nonumber\\ \rho^2 &\equiv& r^2 + a^2\, \cos^2\theta + b^2\, \sin^2\theta\,,\nonumber\\ \Xi_a &\equiv& 1 - g^2 a^2\,,\qquad \Xi_b \equiv 1- g^2 b^2\,. \end{eqnarray} The metric satisfies $R_{\mu\nu}=-4 g^2\, g_{\mu\nu}$. As shown in \cite{gibperpop}, the energy and angular momenta are given by \begin{equation} E = \fft{\pi\, m\, (2\Xi_a + 2\Xi_b -\Xi_a\, \Xi_b)}{4 \Xi_a^2 \, \Xi_b^2}\,, \qquad J_a = \fft{\pi\, m\, a}{2 \Xi_a^2\, \Xi_b}\,,\qquad J_b = \fft{\pi\, m\, b}{2 \Xi_b^2\, \Xi_a}\,.\label{ejrels} \end{equation} As discussed in \cite{cvgilupo}, the BPS limit can be found by studying the eigenvalues of the Bogomol'nyi matrix arising in the AdS superalgebra from the anticommutator of the supercharges. In $D=5$, these eigenvalues are then proportional to \begin{equation} E \pm g J_a \pm g J_b\,.\label{bog} \end{equation} A BPS limit is achieved when one or more of the eigenvalues vanishes. For just one zero eigenvalue, the four cases in (\ref{bog}) are equivalent under reversals of the angular velocities, so we may without loss of generality consider $E-g J_a -g J_b=0$. From (\ref{ejrels}), we see that this is achieved by taking a limit in which $g a$ and $g b$ tend to unity, namely, by setting $g a=1-\ft12\epsilon \alpha$, $g b=1-\ft12\epsilon\beta$, rescaling $m$ according to $m=m_0\epsilon^3$, and sending $\epsilon$ to zero. As we shall see, the metric remains non-trivial in this limit. An equivalent discussion in the Euclidean regime leads to the conclusion that in the corresponding limit, one obtains five-dimensional Einstein metrics admitting a Killing spinor.
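A small numerical sketch of this scaling limit (the parameter values $\alpha$, $\beta$, $m_0$ below are purely illustrative): evaluating the Bogomol'nyi combination $E-gJ_a-gJ_b$ from (\ref{ejrels}) with $ga=1-\ft12\alpha\epsilon$, $gb=1-\ft12\beta\epsilon$ and $m=m_0\epsilon^3$ shows it vanishing linearly in $\epsilon$, as the leading-order estimate $\pi m_0\epsilon/(4\alpha\beta)$ suggests.

```python
# With g*a = 1 - alpha*eps/2, g*b = 1 - beta*eps/2 and m = m0*eps^3, the
# combination E - g*J_a - g*J_b should shrink linearly as eps -> 0.
# Units g = 1; alpha, beta, m0 are illustrative values, not from the paper.
from math import pi

def bogomolnyi(eps, alpha=1.0, beta=2.0, m0=1.0, g=1.0):
    a = (1 - 0.5 * alpha * eps) / g
    b = (1 - 0.5 * beta * eps) / g
    m = m0 * eps**3
    Xa = 1 - g**2 * a**2
    Xb = 1 - g**2 * b**2
    E  = pi * m * (2*Xa + 2*Xb - Xa*Xb) / (4 * Xa**2 * Xb**2)
    Ja = pi * m * a / (2 * Xa**2 * Xb)
    Jb = pi * m * b / (2 * Xb**2 * Xa)
    return E - g*Ja - g*Jb

# the combination decreases roughly by a factor of 10 per decade of eps:
print(bogomolnyi(1e-3), bogomolnyi(1e-4))
```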
We perform a Euclideanisation of (\ref{hawkmet}) by making the analytic continuations \begin{equation} t\rightarrow {i} t\,,\quad g\rightarrow \fft{{i} }{\sqrt{\lambda}}\,, \quad a\rightarrow {i} a\,,\quad b\rightarrow {i} b\,, \end{equation} and then take the BPS limit by setting \begin{eqnarray} &&a=\lambda^{-\ft12} (1 - \ft12\alpha\,\epsilon)\,,\quad b=\lambda^{-\ft12} (1 - \ft12\beta\,\epsilon)\,,\nonumber\\ &&r^2=\lambda^{-1} (1 - x\epsilon)\,,\quad m=\ft12\lambda^{-1} \mu \epsilon^3 \end{eqnarray} and sending $\epsilon\rightarrow 0$. The metric becomes \begin{equation} \lambda\,ds_5^2 = (d\tau + \sigma)^2 + ds_4^2\,,\label{5met} \end{equation} where \begin{eqnarray} ds_4^2 &=& \fft{\rho^2\,dx^2}{4\Delta_x} + \fft{\rho^2\,d\theta^2}{\Delta_\theta} + \fft{\Delta_x}{\rho^2} (\fft{\sin^2\theta}{\alpha} d\phi + \fft{\cos^2\theta}{\beta} d\psi)^2\nonumber\\ && + \fft{\Delta_\theta\sin^2\theta\cos^2\theta}{\rho^2} (\fft{\alpha - x}{\alpha} d\phi - \fft{\beta - x}{\beta} d\psi)^2\,,\nonumber\\ \sigma &=& \fft{(\alpha -x)\sin^2\theta}{\alpha} d\phi + \fft{(\beta-x)\cos^2\theta}{\beta} d\psi\,,\label{d4met}\\ \Delta_x &=& x (\alpha -x) (\beta - x) - \mu\,,\quad \rho^2=\Delta_\theta-x\,,\nonumber\\ \Delta_\theta &=& \alpha\, \cos^2\theta + \beta\, \sin^2\theta\,. \nonumber \end{eqnarray} A straightforward calculation shows that the four-dimensional metric in (\ref{d4met}) is Einstein. Note that the parameter $\mu$ is trivial, and can be set to any non-zero constant, say, $\mu=1$, by rescaling $\alpha$, $\beta$ and $x$. The metrics depend on two non-trivial parameters, which we can take to be $\alpha$ and $\beta$ at fixed $\mu$.
However, it is sometimes convenient to retain $\mu$, allowing it to be determined as the product of the three roots $x_i$ of $\Delta_x$. It is also straightforward to verify that the four-dimensional Einstein metric in (\ref{d4met}) is K\"ahler, with K\"ahler form $J=\ft12 d\sigma$. We find that \begin{equation} J = e^1\wedge e^2 + e^3\wedge e^4\,, \end{equation} when expressed in terms of the vielbein \begin{eqnarray} e^1 &=& \fft{\rho dx}{2\sqrt{\Delta_x}}\,,\qquad e^2 = \fft{\sqrt\Delta_x}{\rho}\, (\fft{\sin^2\theta}{\alpha}\, d\phi+ \fft{\cos^2\theta}{\beta}\, d\psi)\,,\nonumber\\ e^3 &=& \fft{\rho d\theta}{\sqrt{\Delta_\theta}}\,,\qquad e^4 = \fft{\sqrt{\Delta_\theta}\, \sin\theta\cos\theta}{\rho}\, (\fft{\alpha - x}{\alpha} d\phi - \fft{\beta - x}{\beta} d\psi)\,. \end{eqnarray} A straightforward calculation confirms that $J$ is indeed covariantly constant. \subsection{Global structure of the five-dimensional solutions} Having obtained the local form of the five-dimensional Einstein-Sasaki metrics, we can now turn to an analysis of the global structure. The metrics are in general of cohomogeneity 2, with toric principal orbits $U(1)\times U(1)\times U(1)$. The orbits degenerate at $\theta=0$ and $\theta=\ft12 \pi$, and at the roots of the cubic function $\Delta_x$ appearing in (\ref{d4met}). In order to obtain metrics on complete non-singular manifolds, one must impose appropriate conditions to ensure that the collapsing orbits extend smoothly, without conical singularities, onto the degenerate surfaces. If this is achieved, one can obtain a metric on a non-singular manifold, with $0\le\theta\le\ft12\pi$ and $x_1\le x\le x_2$, where $x_1$ and $x_2$ are two adjacent real roots of $\Delta_x$.
In fact, since $\Delta_x$ is negative at large negative $x$ and positive at large positive $x$, and since we must also have $\Delta_x>0$ in the interval $x_1<x<x_2$, it follows that $x_1$ and $x_2$ must be the smallest two roots of $\Delta_x$. The easiest way to analyse the behaviour at each collapsing orbit is to examine the associated Killing vector $\ell$ whose length vanishes at the degeneration surface. By normalising the Killing vector so that its ``surface gravity'' $\kappa$ is equal to unity, one obtains a translation generator $\partial/\partial \chi$ where $\chi$ is a local coordinate near the degeneration surface, and the metric extends smoothly onto the surface if $\chi$ has period $2\pi$. The ``surface gravity'' for the Killing vector $\ell$ is given, in the Euclidean regime, by \begin{equation} \kappa^2 = -\fft{g^{\mu\nu}\, (\partial_\mu \ell^2)(\partial_\nu \ell^2)}{4\ell^2} \end{equation} in the limit that the degeneration surface is reached. The normalised Killing vectors that vanish at the degeneration surfaces $\theta=0$ and $\theta=\ft12\pi$ are simply given by $\partial/\partial \phi$ and $\partial/\partial\psi$ respectively. 
At the degeneration surfaces $x=x_1$ and $x=x_2$, we find that the associated normalised Killing vectors $\ell_1$ and $\ell_2$ are given by \begin{equation} \ell_i = c_i\, \fft{\partial}{\partial \tau} + a_i\, \fft{\partial}{\partial\phi} + b_i\, \fft{\partial}{\partial\psi}\,,\label{ells} \end{equation} where the constants $c_i$, $a_i$ and $b_i$ are given by \begin{eqnarray} a_i &=& \fft{\alpha c_i}{x_i - \alpha}\,,\qquad b_i = \fft{\beta c_i}{x_i-\beta}\,,\nonumber\\ c_i &=& \fft{(\alpha-x_i)(\beta-x_i)}{2(\alpha+\beta) x_i - \alpha\beta - 3 x_i^2}\,.\label{abci} \end{eqnarray} Since we have a total of four Killing vectors $\partial/\partial\phi$, $\partial/\partial\psi$, $\ell_1$ and $\ell_2$ that span a three-dimensional space, there must exist a linear relation amongst them. Since they all generate translations with a $2\pi$ period repeat, it follows that unless the coefficients in the linear relation are rationally related, one could, by taking integer combinations of translations around the $2\pi$ circles, generate a translation implying an identification of arbitrarily nearby points in the manifold. Thus one has the requirement for obtaining a non-singular manifold that the linear relation between the four Killing vectors must be expressible as \begin{equation} p \ell_1 + q \ell_2 + r\, \fft{\partial}{\partial\phi} + s \, \fft{\partial}{\partial\psi}=0\label{lincomb} \end{equation} for {\it integer} coefficients $(p,q,r,s)$, which may, of course, be assumed to be coprime. We must also require that all subsets of three of the four integers be coprime too. This is because if any three had a common divisor $k$, then dividing (\ref{lincomb}) by $k$ one could deduce that the direction associated with the Killing vector whose coefficient was not divisible by $k$ would be identified with period $2\pi/k$, thus leading to a conical singularity.
From (\ref{lincomb}) and (\ref{ells}), we have \begin{eqnarray} && p a_1 + q a_2 + r=0\,,\qquad p b_1 + q b_2 + s=0\,,\nonumber\\ && p c_1 + q c_2=0\,.\label{klmn} \end{eqnarray} From these relations it then follows that the ratios between each pair of the four quantities \begin{equation} a_1 c_2-a_2 c_1\,,\quad b_1 c_2 -b_2 c_1 \,,\quad c_1\,,\quad c_2\label{rationals} \end{equation} must be rational. Thus in order to obtain a metric that extends smoothly onto a complete and non-singular manifold, we must choose the parameters in (\ref{d4met}) so that the rationality of the ratios is achieved. In fact it follows from (\ref{abci}) that \begin{equation} 1+a_i + b_i + 3 c_i = 0\label{abcid} \end{equation} for all roots $x_i$, and using this one can show that there are only two independent rationality conditions following from the requirements of rational ratios for the four quantities in (\ref{rationals}). One can also see from (\ref{abcid}) that \begin{equation} p+q-r-s=0\,,\label{klmnrel} \end{equation} and so the further requirement that all triples chosen from the coprime integers $(p,q,r,s)$ also be coprime is automatically satisfied. The upshot of the above discussion is that we can have complete and non-singular five-dimensional Einstein-Sasaki spaces $L^{p,q,r}$, where \begin{equation} pc_1+q c_2=0\,,\quad pa_1 + q a_2 + r=0\,. \end{equation} These equations and (\ref{abcid}) allow one to solve for $\alpha$, $\beta$ and the roots $x_1$ and $x_2$, for positive coprime integer triples $(p,q,r)$. The requirements $0<x_1\le x_2<x_3$, and $\alpha>x_2$, $\beta>x_2$, restrict the integers to the domain $0< p \le q$ and $0 <r < p+q$. All such coprime triples yield complete and non-singular Einstein-Sasaki spaces $L^{p,q,r}$, and so we get infinitely many new examples. The spaces $L^{p,q,r}$ all have the topology of $S^2\times S^3$.
We are very grateful to Krzysztof Galicki for the following argument which shows this: The total space of the Calabi-Yau cone, with metric $ds_6^2= dy^2 + y^2\, \lambda ds_5^2$, can be viewed as a circle reduction (i.e.\ a symplectic quotient) of ${{\mathbb C}}^4$ by the diagonal action of $S^1(p,q,-r,-s)$ with $p+q-r-s=0$. The topology of the $L^{p,q,r}$ spaces has also been discussed in detail in \cite{martspar}. The volume of $L^{p,q,r}$ (with $\lambda=1$) is given by \begin{equation} V=\fft{\pi^2(x_2-x_1) (\alpha+\beta -x_1-x_2)\Delta\tau}{2k\alpha\beta}\,, \end{equation} where $\Delta\tau$ is the period of the coordinate $\tau$, and $k=\hbox{gcd}\, (p,q)$. Note that the $(\phi,\psi)$ torus is factored by a freely-acting ${{\mathbb Z}}_k$, along the diagonal. $\Delta\tau$ is given by the minimum repeat distance of $2\pi c_1$ and $2\pi c_2$, i.e.\ the minimisation of $|2\pi c_1 M + 2\pi c_2 N|$ over the integers $(M,N)$. We have \begin{equation} 2\pi(c_1 M+ c_2 N) = \fft{2\pi c_1}{q}\, (M q - N p)\,, \end{equation} and so if $p$ and $q$ have greatest common divisor $k$, the integers $M$ and $N$ can be chosen so that the minimum of $|M q - N p|$ is $k$, and so \begin{equation} \Delta\tau=\fft{2\pi k |c_1|}{q}\,. \end{equation} The volume of $L^{p,q,r}$ is therefore given by \begin{equation} V=\fft{\pi^3 |c_1| (x_2-x_1) (\alpha+\beta -x_1-x_2)}{q\alpha\beta}\,. \end{equation} There is a quartic equation expressing $V$ purely in terms of $(p,q,r)$. Writing \begin{equation} V=\fft{\pi^3 (p+q)^3 W}{8pqrs}\,, \label{VWrel} \end{equation} we find \begin{eqnarray} 0&=& 27 W^4-8(2-9h_+) W^3- [8h_+(2-h_+)^2-h_-^2(30+9h_+)]W^2\nonumber\\ && - 2h_-^2[2(2-h_+)^2-3h_-^2]W -(1-f^2)(1-g^2)h_-^4\label{quartic} \end{eqnarray} where \begin{equation} f\equiv \fft{q-p}{p+q}\,,\qquad g \equiv\fft{r-s}{p+q}\,,\label{fgdef} \end{equation} and $h_\pm=f^2\pm g^2$.
The central charge of the dual field theory is rational if $W$ is rational, which, as we shall show below, is easily achieved. If one sets $p+q=2r$, implying that $\alpha$ and $\beta$ become equal, our Einstein-Sasaki metrics reduce to those in \cite{gamaspwa1}, and the conditions we have discussed for achieving complete non-singular manifolds reduce to the conditions for the $Y^{p,q}$ obtained there, with $Y^{p,q}= L^{p-q,p+q,p}$. The quartic (\ref{quartic}) then factorises over the rationals into quadrics, giving the volumes found in \cite{gamaspwa1}. Further special limits also arise. For example, if we take $p=q=r=1$, the roots $x_1$ and $x_2$ coalesce, $\alpha=\beta$, and the metric becomes the homogeneous $T^{1,1}$ space, with the four-dimensional base space being $S^2\times S^2$. In another limit, we can set $\mu=0$ in (\ref{d4met}) and obtain the round metric on $S^5$, with $CP^2$ as the base. (In fact, we obtain $S^5/Z_q$ if $p=0$.) Except in these special ``regular'' cases, the four-dimensional base spaces themselves are singular, even though the Einstein-Sasaki spaces $L^{p,q,r}$ are non-singular. The Einstein-Sasaki space is called quasi-regular if $\partial/\partial\tau$ has closed orbits, which happens if $c_1$ is rational. If $c_1$ is irrational the orbits of $\partial/\partial\tau$ never close, and the Einstein-Sasaki space is called irregular. 
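The period $\Delta\tau$ above was fixed by the elementary number-theoretic fact that the minimum nonzero value of $|Mq-Np|$ over integers $(M,N)$ is $\gcd(p,q)$; a brute-force check of that fact:

```python
# Minimum nonzero value of |M*q - N*p| over integers (M, N) equals gcd(p, q) --
# the fact used to fix the period Delta_tau = 2*pi*k*|c1|/q with k = gcd(p, q).
from math import gcd

def min_nonzero(p, q, search=25):
    vals = {abs(M*q - N*p)
            for M in range(-search, search + 1)
            for N in range(-search, search + 1)}
    vals.discard(0)          # (M, N) proportional to (p, q) gives the trivial translation
    return min(vals)

for p, q in [(2, 3), (4, 6), (7, 15), (9, 12)]:
    assert min_nonzero(p, q) == gcd(p, q)
print("ok")
```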
\subsection{Quasi-regular examples}\label{qregsec} We find that we can obtain quasi-regular Einstein-Sasaki 5-spaces, with rational values for the roots $x_i$, the parameters $\alpha$, $\beta$, and the volume factor $W$ if the integers $(p,q,r)$ are chosen such that \begin{equation} \fft{q-p}{p+q}= \fft{2(v-u)(1+ u v)}{4- (1+u^2)(1+v^2)}\,,\qquad \fft{r-s}{p+q}= \fft{2(v+u)(1- u v)}{4- (1+u^2)(1+v^2)}\,,\label{uv1} \end{equation} where $u$ and $v$ are any rational numbers satisfying \begin{equation} 0<v<1\,,\qquad -v <u < v\,.\label{uvregion} \end{equation} A convenient choice that eliminates the redundancy in the $(\alpha,\beta,\mu)$ parameterisation of the local solutions is to take $x_3=1$, in which case we then have rational solutions with \begin{eqnarray} x_1&=& \ft14(1+u)(1-v)\,,\qquad x_2= \ft14 (1-u)(1+v)\,,\qquad x_3= 1\,,\label{rootsuv}\\ \alpha&=&1 -\ft14 (1+u)(1+v)\,,\qquad \beta =1-\ft14(1-u)(1-v)\,,\qquad \mu= \ft1{16} (1-u^2)(1-v^2)\,.\nonumber \end{eqnarray} From these, we have \begin{eqnarray} c_1 &=& -\fft{2(1-u)(1+v)}{(v-u)[4-(1+u)(1-v)]}\,,\qquad c_2= \fft{2(1+u)(1-v)}{(v-u)[4-(1-u)(1+v)]}\,,\nonumber\\ a_1&=&\fft{(1+v)(3-u-v-uv)}{(v-u)[4-(1+u)(1-v)]}\,,\qquad a_2= - \fft{(1+u)(3-u-v-uv)}{(v-u)[4-(1-u)(1+v)]}\,,\nonumber\\ b_1 &=& \fft{(1-u)(3 +u + v-uv)}{(v-u)[4-(1+u)(1-v)]}\,,\qquad b_2=- \fft{(1-v)(3+u+v-uv)}{(v-u)[4-(1-u)(1+v)]}\,. \end{eqnarray} It follows that $c_1$ is also rational, and so these Einstein-Sasaki spaces are quasi-regular, with closed orbits for $\partial/\partial\tau$. The volume is given by (\ref{VWrel}), with \begin{equation} W= \fft{16(1-u^2)^2\, (1-v^2)^2}{(3- u^2 - v^2 - u^2 v^2)^3}\,, \end{equation} and so the ratio of $V$ to the volume of the unit 5-sphere (which is $\pi^3$) is rational too.
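A minimal sketch of such a quasi-regular example, with the illustrative rational choice $u=0$, $v=\ft12$: exact rational arithmetic confirms that the $x_i$ really are roots of $\Delta_x$, that the identity $1+a_i+b_i+3c_i=0$ holds at each of them, and that $c_1$ and $c_2$ come out rational.

```python
# Quasi-regular example with u = 0, v = 1/2 (illustrative choice); exact
# arithmetic via Fraction.  Builds the roots and parameters from the (u,v)
# parameterisation above and checks the identities quoted in the text.
from fractions import Fraction as F

u, v = F(0), F(1, 2)
x = [(1+u)*(1-v)/4, (1-u)*(1+v)/4, F(1)]      # roots x1, x2, x3
alpha = 1 - (1+u)*(1+v)/4
beta  = 1 - (1-u)*(1-v)/4
mu    = (1 - u**2) * (1 - v**2) / 16

# the x_i really are roots of Delta_x = x(alpha-x)(beta-x) - mu
for xi in x:
    assert xi*(alpha-xi)*(beta-xi) - mu == 0

def abc(xi):
    c = (alpha-xi)*(beta-xi) / (2*(alpha+beta)*xi - alpha*beta - 3*xi**2)
    return alpha*c/(xi-alpha), beta*c/(xi-beta), c

for xi in x[:2]:
    a, b, c = abc(xi)
    assert 1 + a + b + 3*c == 0               # identity (abcid) of the text

(a1, b1, c1), (a2, b2, c2) = abc(x[0]), abc(x[1])
print(c1, c2)                                 # -12/7 4/5 : rational => quasi-regular
```

Here $p/q=-c_2/c_1=7/15$, so this choice corresponds to the space $L^{7,15,15}$ (with $q=r$, as expected for $u=0$).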
Note that although we introduced the $(u,v)$ parameterisation in order to write quasi-regular examples with rational roots and volumes, the same parameterisation is also often useful in general. One simply takes $u$ and $v$ to be real numbers, not in general rational, defined in terms of $p$, $q$, $r$ and $s$ by (\ref{uv1}). They are again subject to the restrictions (\ref{uvregion}). \subsection{Volumes and the Bishop bound} Note that the volume $V$ can be expressed in terms of $u$, $v$ and $p$ as \begin{equation} V = \fft{16\pi^3 (1+u)(1-v)}{p\, (3+u+v-uv)(3+u-v+uv)(3-u-v-uv)}\,, \end{equation} where $u$ and $v$ are given in terms of $p$, $q$ and $r$ by (\ref{uv1}). It is easy to verify that the volume is always bounded above by the volume of the unit 5-sphere,\footnote{Recall that all our volume formulae are with respect to spaces normalised to $R_{ij} = 4 g_{ij}$.} as it must be by Bishop's theorem \cite{thebish}. To see this, define \begin{equation} Y\equiv 1- \fft{p V}{\pi^3} \,. \end{equation} Since $p$ is a positive integer, then if we can show that $Y>0$ for all our inhomogeneous Einstein-Sasaki spaces, it follows that they must all have volumes less than $\pi^3$, the volume of the unit $S^5$. It is easy to see that \begin{equation} Y = \fft{(1-u)(1+v) \, F }{ (3+u+v-uv)(3+u-v+uv)(3-u-v-uv)}\,, \end{equation} where \begin{equation} F = 11 + (u+v)^2 + 2uv + 4(u-v) - u^2 v^2\,. \end{equation} With $u$ and $v$ restricted to the region defined by (\ref{uvregion}), it is clear that the sign of $Y$ is the same as the sign of $F$. It also follows from (\ref{uvregion}) that \begin{equation} 2uv > -2\,,\qquad 4(u-v) > -8\,,\qquad - u^2 v^2 > -1\,, \end{equation} and so we have $F>0$. Thus $Y >0$ for all the inhomogeneous Einstein-Sasaki spaces, proving that they all satisfy $V<\pi^3$.
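As a numerical illustration (the triple $(p,q,r)=(1,2,2)$ is our choice, not one singled out in the text), one can build the space from the $(u,v)$ parameterisation, evaluate $V$, and check both the Bishop bound and that the resulting $W$ satisfies the quartic (\ref{quartic}):

```python
from math import pi, sqrt

# L^{1,2,2}: here s = p+q-r = 1, so f = (q-p)/(p+q) and g = (r-s)/(p+q) are
# both 1/3, which forces u = 0; the u = 0 condition f = 2v/(3 - v^2) fixes v.
p, q, r = 1, 2, 2
s = p + q - r
f = (q - p) / (p + q)                    # = g = 1/3 for this triple
u = 0.0
v = (-1 + sqrt(1 + 3*f**2)) / f          # solves f = 2v/(3 - v**2); v = 2*sqrt(3)-3

x1 = (1+u)*(1-v)/4
x2 = (1-u)*(1+v)/4
alpha = 1 - (1+u)*(1+v)/4
beta  = 1 - (1-u)*(1-v)/4

c1 = (alpha-x1)*(beta-x1) / (2*(alpha+beta)*x1 - alpha*beta - 3*x1**2)

V = pi**3 * abs(c1) * (x2-x1) * (alpha+beta-x1-x2) / (q*alpha*beta)
W = 8*p*q*r*s * V / ((p+q)**3 * pi**3)

hp, hm = 2*f**2, 0.0                     # h_pm = f^2 +- g^2 with g = f here
residual = (27*W**4 - 8*(2-9*hp)*W**3
            - (8*hp*(2-hp)**2 - hm**2*(30+9*hp))*W**2
            - 2*hm**2*(2*(2-hp)**2 - 3*hm**2)*W
            - (1-f**2)**2 * hm**4)       # (1-f^2)(1-g^2) = (1-f^2)^2 here

assert V < pi**3                         # Bishop bound
assert abs(residual) < 1e-10             # W satisfies the quartic
print(V / pi**3)                         # ~0.3849, i.e. 2*sqrt(3)/9
```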
\subsection{The $u\leftrightarrow -u$ symmetry} By making appropriate redefinitions of the coordinates, we can make manifest the discrete symmetry of the five-dimensional metrics under the transformation $u\rightarrow -u$, which, from (\ref{uv1}), corresponds to the exchange of the integers $(p,q)$ with the integers $(r,s)$. Accordingly, we define new coordinates\footnote{A similar redefinition of coordinates has also been given in \cite{martspar}.} \begin{equation} y= \Delta_\theta\,,\qquad \hat\psi= \fft{\phi-\psi}{\beta-\alpha}\,,\qquad \hat \phi = \fft{\alpha^{-1}\, \phi-\beta^{-1} \psi}{\beta-\alpha}\,,\qquad \hat\tau = \tau + \fft{\beta\psi-\alpha \phi}{\beta-\alpha}\,. \end{equation} In terms of these, the Einstein-Sasaki metrics become \begin{equation} \lambda ds_5^2 = (d\hat \tau + \hat\sigma)^2 + ds_4^2\,,\label{5metxy} \end{equation} with \begin{eqnarray} ds_4^2 &=& \fft{(y-x)dx^2}{4 \Delta_x} + \fft{(y-x) dy^2}{4\Delta_y} + \fft{\Delta_x}{y-x}\, (d\hat\psi- y d\hat\phi)^2 + \fft{\Delta_y}{y-x}\, (d\hat\psi - x d\hat\phi)^2\,,\nonumber\\ \hat\sigma &=& (x+y) d\hat\psi - xy d\hat\phi\,. \end{eqnarray} The metric functions $\Delta_x$ and $\Delta_y$ are given by \begin{equation} \Delta_x = x(\alpha-x)(\beta-x) -\mu\,,\qquad \Delta_y = -y(\alpha-y)(\beta-y)\,. \end{equation} It is convenient now to adopt the parameterisation introduced in (\ref{uv1}). Note that this can be done whether or not $u$ and $v$ are chosen to be rational. Then from (\ref{rootsuv}) we have \begin{eqnarray} \Delta_x &=& \ft1{16}(x-1)[4x- (1-u)(1+v)][4x-(1+u)(1-v)]\,,\nonumber\\ \Delta_y &=& -\ft1{16} y[4(1-y)- (1-u)(1-v)][4(1-y)-(1+u)(1+v)]\,.
\end{eqnarray} It is now manifest that the five-dimensional metric (\ref{5metxy}) is invariant under sending $u\rightarrow -u$, provided that at the same time we make the coordinate transformations \begin{eqnarray} x &\longrightarrow& 1-y\,,\qquad y \longrightarrow 1-x\,,\nonumber\\ \hat \psi &\longrightarrow& -\hat\psi + \hat\phi\,,\qquad \hat\tau \longrightarrow \hat\tau + 2\hat\psi -\hat\phi\,. \end{eqnarray} The argument above shows that to avoid double counting, we should further restrict the coprime integers $(p,q,r)$ so that either $u\ge0$, or else $u\le 0$. We shall make the latter choice, which implies that we should restrict $(p,q,r)$ so that \begin{equation} 0\le p\le q\le r \le p+q\,. \end{equation} \subsection{The $u=0$ case} Having seen that there is a symmetry under $u\leftrightarrow -u$, it is of interest to study the fixed point of this discrete symmetry, i.e.\ $u=0$, which corresponds to setting $q=r$ (and hence $p=s$). From (\ref{rootsuv}), a convenient way of restricting to $u=0$ is by choosing \begin{equation} \mu= \ft2{27} (2\alpha-\beta)(2\beta-\alpha)(\alpha+\beta)\,. \end{equation} This allows us to factorise the function $\Delta_x$, giving \begin{equation} \Delta_x = \ft1{27} (2\alpha-\beta-3x)(\alpha-2\beta+3x) (2\alpha+2\beta -3x)\,. \end{equation} Now we introduce new tilded coordinates, defined by \begin{eqnarray} \tilde x &=& -x + \ft13(\alpha+\beta)\,,\qquad \tilde y= \Delta_\theta - \ft13 (\alpha+\beta)\,,\nonumber\\ \phi &=& -\alpha[\tilde\psi + \ft13(2\beta-\alpha) \tilde\phi]\,,\qquad \psi = - \beta[ \tilde\psi + \ft13(2\alpha-\beta)\tilde\phi]\,,\nonumber\\ \tilde\tau &=& \tau -\ft13(\alpha+\beta)\tilde\psi + \ft19 (2\alpha^2 -5\alpha\beta + 2\beta^2)\tilde\phi\,.
\end{eqnarray} After doing this, the metric takes on the form \begin{equation} \lambda ds_5^2 = (d\tilde\tau+\tilde\sigma)^2 + ds_4^2\,, \end{equation} with \begin{eqnarray} ds_4^2 &=& \fft{(\tilde x+\tilde y)d\tilde x^2}{4 \Delta(\tilde x)} + \fft{(\tilde x+ \tilde y) d\tilde y^2}{4\Delta(\tilde y)} + \fft{\Delta(\tilde x)}{\tilde x+\tilde y}\, (d\tilde\psi+ \tilde y d\tilde \phi)^2 + \fft{\Delta(\tilde y)}{\tilde x+\tilde y}\, (d\tilde\psi - \tilde x d\tilde\phi)^2\,, \nonumber\\ \tilde\sigma &=& (\tilde y-\tilde x) d\tilde\psi - \tilde x \tilde y\, d\tilde\phi\,. \end{eqnarray} The metric functions $\Delta(\tilde x)$ and $\Delta(\tilde y)$, which are now the same function with either $\tilde x$ or $\tilde y$ as argument, are given by \begin{equation} \Delta(z) = \ft1{27} (\alpha-2\beta + 3z)(2\alpha-\beta-3z) (\alpha+\beta+3z)\,. \end{equation} It is therefore natural to define new parameters $\gamma_1$ and $\gamma_2$, given by \begin{equation} \gamma_1=\ft13(2\alpha-\beta)\,,\qquad \gamma_2= \ft13(2\beta-\alpha)\,. \end{equation} In terms of these, the function $\Delta(z)$ becomes \begin{equation} \Delta(z) = -(\gamma_1 -z)(\gamma_2-z)(\gamma_1+\gamma_2+z)\,. \end{equation} There is now a manifest discrete symmetry, under which we send \begin{equation} \tilde x\leftrightarrow \tilde y\,,\qquad \tilde\phi\leftrightarrow -\tilde\phi\,. \end{equation} It is worth remarking that the quartic polynomial (\ref{quartic}) determining the volume factorises into quadrics over the rationals in the case that $u=0$, giving \begin{equation} V=\fft{4\pi^3}{27p^2q^2} \Big((p+q)(p-2q)(q-2p) + 2 (p^2 -pq + q^2)^{3/2}\Big)\,. \end{equation} \subsection{Curvature invariants} In this section, we present some results for curvature invariants for the five-dimensional Einstein-Sasaki metrics.
We find \begin{eqnarray} I_2 &\equiv & R^{ijk\ell} \, R_{ijk\ell} = \fft{192(\rho^{12} + 2\mu^2)}{\rho^{12}}\,,\nonumber\\ I_3 &\equiv& R^{ij}{}_{k\ell}\, R^{k\ell}{}_{mn}\, R^{mn}{}_{ij}= \fft{384(5\rho^{18} + 12\mu^2\rho^6 + 8 \mu^3)}{\rho^{18}}\,,\nonumber\\ J_3&\equiv & R^i{}_j{}^k{}_\ell\, R^j{}_m{}^\ell{}_n\, R^m{}_i{}^n{}_k = \fft{96(\rho^{18} - 12\mu^2 \rho^6 + 16\mu^3)}{\rho^{18}}\,. \end{eqnarray} Since these curvature invariants depend on the coordinates only via the single combination $\rho^2= \alpha\sin^2\theta + \beta\cos^2\theta -x$, one might wonder whether the Einstein-Sasaki metrics, despite ostensibly being of cohomogeneity 2, were actually only of cohomogeneity 1, becoming manifestly so when described in an appropriate coordinate system. In fact this is not the case, as can be seen by calculating the scalar invariant \begin{equation} K = g^{\mu\nu} (\partial_\mu I_2) (\partial_\nu I_2)\,, \end{equation} which turns out to be given by \begin{eqnarray} K &=& \fft{2^{18} 3^4 \mu^4}{\rho^{30}}\Big[-\rho^6 + (2\beta-\alpha) \rho^4 -\beta(\beta-\alpha)\rho^2 -\mu\nonumber\\ && \qquad\qquad+ (\alpha-\beta)[3\rho^2 - 2(2\beta-\alpha) ]\rho^2\, \cos^2\theta -3(\alpha-\beta)^2\, \rho^2\, \cos^4\theta\Big]\,. \end{eqnarray} Since this invariant does not depend on the $x$ and $\theta$ coordinates purely via the combination $\rho^2= \alpha\sin^2\theta + \beta\cos^2\theta -x$, we see that the metrics do indeed genuinely have cohomogeneity 2. They do, of course, reduce to cohomogeneity 1 if the parameters $\alpha$ and $\beta$ are set equal. \section{Higher-Dimensional Einstein-Sasaki Spaces}\label{esnsec} The construction of five-dimensional Einstein-Sasaki spaces that we have given in section \ref{es5sec} can be extended straightforwardly to all higher odd dimensions.
We take the rotating Kerr-de Sitter metrics obtained in \cite{gilupapo1,gilupapo2}, and impose the Bogomol'nyi conditions $E- g\sum_i J_i=0$, where $E$ and $J_i$ are the energy and angular momenta that were calculated in \cite{gibperpop}, and given in (\ref{ejrels}). We find that a non-trivial BPS limit exists where $g a_i = 1 - \ft12\alpha_i \epsilon$, $m= m_0 \epsilon^{n+1}$. After Euclideanisation of the $D=2n+1$ dimensional rotating black hole metrics obtained in \cite{gilupapo1}, which is achieved by sending $\tau\rightarrow {i}\, \tau$, and $a_i\rightarrow {i}\, a_i$ in equation (3.1) of that paper (and using $y$ rather than $r$ as the radial variable, to avoid a clash of notations later), one has \begin{eqnarray} ds^2 &=& W\, (1 -\lambda\,y^2)\, d\tau^2 + \fft{U\, dy^2}{V-2m} + \fft{2m}{U}\Bigl(d\tau - \sum_{i=1}^n \fft{a_i\, \mu_i^2\, d\varphi_i}{ 1 - \lambda\, a_i^2}\Bigr)^2 \label{blodd}\\ && + \sum_{i=1}^n \fft{y^2 - a_i^2}{1 - \lambda\, a_i^2} \, [d\mu_i^2 + \mu_i^2\, (d\varphi_i +\lambda\, a_i\, d\tau)^2] + \fft{\lambda}{W\, (1-\lambda y^2)} \Big( \sum_{i=1}^n \fft{(y^2 - a_i^2)\mu_i\, d\mu_i}{ 1 - \lambda\, a_i^2}\Big)^2 \,,\nonumber \end{eqnarray} where \begin{eqnarray} V &\equiv& \fft1{y^2}\, (1-\lambda\, y^2)\, \prod_{i=1}^n (y^2 - a_i^2) \,,\qquad W \equiv \sum_{i=1}^n \fft{\mu_i^2}{1-\lambda\, a_i^2}\,,\nonumber\\ U &=& \sum_{i=1}^n \fft{\mu_i^2}{y^2 - a_i^2}\, \prod_{j=1}^n (y^2 - a_j^2)\,. \end{eqnarray} The BPS limit is now achieved by setting \begin{eqnarray} && a_i=\lambda^{-\ft12} (1 - \ft12\alpha_i\,\epsilon)\,,\nonumber\\ &&y^2=\lambda^{-1} (1 - x\epsilon)\,,\quad m=\ft12\lambda^{-1} \mu \epsilon^{n+1}\,,\label{dbpslim} \end{eqnarray} and then sending $\epsilon\rightarrow 0$.
We then obtain $D=2n+1$ dimensional Einstein-Sasaki metrics $ds^2$, given by \begin{equation} \lambda ds^2 = (d\tau+\sigma)^2 + d\bar s^2\,,\label{esgenmet} \end{equation} with $R_{\mu\nu}=2n\lambda g_{\mu\nu}$, where the $2n$-dimensional metric $d\bar s^2$ is Einstein-K\"ahler, with K\"ahler form $J=\ft12d\sigma$, and \begin{eqnarray} d\bar s^2 &=& \fft{Y dx^2}{4x F} - \fft{x(1-F)}{Y} \Big(\sum_i \alpha_i^{-1}\, \mu_i^2 d\varphi_i\Big)^2 + \sum_i (1-\alpha_i^{-1}\, x)(d\mu_i^2 + \mu_i^2 d\varphi_i^2) \nonumber\\ &&+ \fft{x}{\sum_i \alpha_i^{-1} \mu_i^2}\, \Big( \sum_j \alpha_j^{-1}\, \mu_j d\mu_j\Big)^2 -\sigma^2\,,\nonumber\\ \sigma &=& \sum_i (1-\alpha_i^{-1}x)\mu_i^2\, d\varphi_i\,,\\ Y&=&\sum_i\fft{\mu_i^2}{\alpha_i-x}\,,\qquad F= 1- \fft{\mu}{x}\, \prod_i(\alpha_i-x)^{-1}\,,\nonumber \end{eqnarray} where $\sum_i \mu_i^2=1$. The $D=2n+1$ dimensional Einstein-Sasaki metrics have cohomogeneity $n$, with $U(1)^{n+1}$ principal orbits. The discussion of the global properties is completely analogous to the one we gave previously for the five-dimensional case. The $n$ Killing vectors $\partial/\partial\varphi_i$ vanish at the degenerations of the $U(1)^{n+1}$ principal orbits where each $\mu_i$ vanishes, and conical singularities are avoided if each coordinate $\varphi_i$ has period $2\pi$.
The Killing vectors \begin{equation} \ell_i = c(i) \,\fft{\partial}{\partial\tau} + \sum_j b_j(i)\, \fft{\partial}{\partial\varphi_j} \end{equation} vanish at the roots $x=x_i$ of $F(x)$, and have unit surface gravities there, where \begin{equation} b_j(i) = -\fft{c(i) \alpha_j}{\alpha_j-x_i}\,,\quad c(i)^{-1} = \sum_j \fft{x_i}{\alpha_j-x_i} -1\,.\label{acdefs} \end{equation} The metrics extend smoothly onto complete and non-singular manifolds if \begin{equation} p \ell_1 + q\ell_2 + \sum_j r_j \fft{\partial}{\partial\varphi_j}=0 \end{equation} for coprime integers $(p,q,r_j)$, where in addition all possible subsets of $(n+1)$ of the integers are also coprime (which is again automatic--see below). This implies the algebraic equations \begin{equation} p \, c(1) + q \, c(2)=0\,,\quad p \, b_j(1) + q \, b_j(2) + r_j=0\,,\label{bcpqrj} \end{equation} determining the roots $x_1$ and $x_2$, and the parameters $\alpha_j$. The two roots of $F(x)$ must be chosen so that $F>0$ when $x_1<x<x_2$. With these conditions satisfied, we obtain infinitely many new complete and non-singular compact Einstein-Sasaki spaces in all odd dimensions $D=2n+1$. It follows from (\ref{acdefs}) that \begin{equation} \sum_j b_j(i) + (n+1) c(i) + 1=0\,, \end{equation} and hence using (\ref{bcpqrj}) we have \begin{equation} p+q=\sum_j r_j\,. \label{pqrj} \end{equation} This can be used to eliminate $r_n$ in favour of the other $(n+1)$ integers. The Einstein-Sasaki spaces, which we denote by $L^{p,q,r_1,\cdots ,r_{n-1}}$, are therefore characterised by specifying $(n+1)$ coprime integers, which must lie in an appropriate domain. Without loss of generality, we may choose $p<q$, and order the two roots $x_1$ and $x_2$ so that $x_1<x_2$. It follows that we shall have \begin{equation} c_1<0\,,\qquad c_2>0\,,\qquad |c_1| > c_2\,. \end{equation} The parameters $\alpha_j$ must all satisfy $\alpha_j>x_2$, to ensure that $Y$ is always positive. 
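As a quick numerical sanity check, the identity $\sum_j b_j(i) + (n+1)\, c(i) + 1=0$ can be verified directly from the definitions (\ref{acdefs}). The short Python sketch below does this for arbitrary illustrative values of a root $x_i$ and of the parameters $\alpha_j$ (the sample numbers are not solutions of the quantisation conditions (\ref{bcpqrj}); they merely satisfy $\alpha_j > x_i$):

```python
# Numerical check of the identity  sum_j b_j(i) + (n+1) c(i) + 1 = 0,
# using only the definitions of c(i) and b_j(i) quoted in the text.
# The roots x_i and rotation parameters alpha_j below are arbitrary
# sample values (with alpha_j > x_i), not solutions of the
# quantisation conditions.

def c_coeff(x_i, alphas):
    # c(i)^{-1} = sum_j x_i/(alpha_j - x_i) - 1
    return 1.0 / (sum(x_i / (a - x_i) for a in alphas) - 1.0)

def b_coeffs(x_i, alphas):
    # b_j(i) = -c(i) alpha_j/(alpha_j - x_i)
    c = c_coeff(x_i, alphas)
    return [-c * a / (a - x_i) for a in alphas]

alphas = [2.0, 3.0, 5.0]      # n = 3
for x_i in (0.4, 1.1):        # two sample "roots", both below min(alphas)
    n = len(alphas)
    lhs = sum(b_coeffs(x_i, alphas)) + (n + 1) * c_coeff(x_i, alphas) + 1.0
    assert abs(lhs) < 1e-12
```

Since the identity holds for any such values, the relation $p+q=\sum_j r_j$ of (\ref{pqrj}) is indeed an exact algebraic consequence of (\ref{acdefs}) and (\ref{bcpqrj}).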
From (\ref{bcpqrj}) we therefore have \begin{equation} r_j = \fft{q c(2)\, \alpha_j (x_2-x_1)}{(\alpha_j-x_1)(\alpha_j-x_2)}>0\,. \end{equation} To avoid overcounting, we can therefore specify the domain by \begin{equation} 0<p<q\,, \qquad 0<r_1 \le r_2 \le \cdots \le r_{n-1} \le r_n\,. \end{equation} The $n$-torus of the $\varphi_j$ coordinates is in general factored by a freely-acting ${{\mathbb Z}}_k$, where $k=\hbox{gcd}\, (p,q)$. The volume (with $\lambda=1$) is given by \begin{equation} V= \fft{|c(1)|}{q}\, {\cal A}_{2n+1}\, \Big[\prod_i \Big(1-\fft{x_1}{\alpha_i}\Big) -\prod_i\Big(1-\fft{x_2}{\alpha_i} \Big)\Big]\,, \end{equation} since $\Delta\tau$ is given by $2\pi k|c(1)|/q$, where ${\cal A}_{2n+1}$ is the volume of the unit $(2n+1)$-sphere. In the special case that the rotations $\alpha_i$ are set equal, the metrics reduce to those obtained in \cite{gamaspwa2}. \section{Non-Sasakian Einstein Spaces} So far in this paper, we have concentrated on the situations, in odd dimensions, where a limit of the Euclideanised Kerr-de Sitter metrics can be taken in which one has a Killing spinor. In this section, we shall describe the more general situation in which no limit is taken, and so one has Einstein metrics that do not have Killing spinors. They are therefore Einstein spaces that are not Sasakian. Again, the question arises as to whether these metrics can, for suitable choices of the parameters, extend smoothly onto complete and non-singular compact manifolds. As in the previous discussion in the Einstein-Sasaki limit, this question reduces to whether smooth extensions onto the surfaces where the principal orbits degenerate are possible. This question was partially addressed in \cite{gilupapo1,gilupapo2}, where the problem was studied in the case that the two roots defining the endpoints of the range of the radial variable were taken to be coincident.
This ensured that the surface gravities at the endpoints of the (rescaled) radial variable were equal in magnitude. However, as we have seen in the discussion for the Einstein-Sasaki limits, the requirement of equal surface gravities is more restrictive than is necessary for obtaining non-singular spaces. In this section, we shall study the problem of obtaining non-singular spaces within this more general framework. \subsection{Odd dimensions} The Euclideanised Kerr-de Sitter metrics in odd dimensions $D=2n+1$ are given in equation (\ref{blodd}). From the results in \cite{gilupapo1,gilupapo2}, the Killing vector \begin{equation} \tilde\ell \equiv \fft{\partial}{\partial\tau} -\sum_{j=1}^n \fft{a_j\, (1-\lambda y_0^2)}{y_0^2 -a_j^2} \,\fft{\partial}{\partial\varphi_j} \end{equation} has vanishing norm at a root $y=y_0$ of $V(y)-2m=0$, and it has a ``surface gravity'' given by \begin{equation} \kappa = y_0\, (1-\lambda y_0^2)\, \sum_{j=1}^n \fft{1}{y_0^2-a_j^2} - \fft1{y_0}\,. \end{equation} Following the strategy we used for studying the degenerate orbits of the metrics in the Einstein-Sasaki limit, we now introduce a rescaled Killing vector $\ell = c\, \tilde\ell$ with $c$ chosen so that $\ell$ has unit surface gravity. Thus we define Killing vectors \begin{eqnarray} \ell_1 &=& c(1) \, \fft{\partial}{\partial\tau} + \sum_{j=1}^n b_j(1)\, \fft{\partial}{\partial\varphi_j}\,,\nonumber\\ \ell_2 &=& c(2) \, \fft{\partial}{\partial\tau} + \sum_{j=1}^n b_j(2)\, \fft{\partial}{\partial\varphi_j}\,, \end{eqnarray} which vanish at two adjacent roots $y=y_1$ and $y=y_2$ respectively, each of whose surface gravities is of unit magnitude.
The constants are therefore given by \begin{eqnarray} c(i)^{-1} &=& y_i\, (1-\lambda y_i^2) \, \sum_{j=1}^n \fft{1}{y_i^2-a_j^2} - \fft1{y_i}\,,\qquad \qquad i=1,2\,,\nonumber\\ b_j(i) &=& -\fft{a_j\, (1-\lambda y_i^2)\, c(i)}{y_i^2-a_j^2}\,, \qquad\qquad i=1,2\,. \end{eqnarray} We shall assume, without loss of generality, that $y_1<y_2$, and we require that $V(y)-2m>0$ for $y_1 < y < y_2$, to ensure that the metric has positive definite signature. With these assumptions, we shall have \begin{equation} c(1) > 0\,,\qquad c(2) <0\,. \end{equation} The Killing vectors $\ell_1$ and $\ell_2$ have zero norm at the degeneration surfaces $y=y_1$ and $y=y_2$ respectively. The Killing vector $\partial/\partial\varphi_j$ has zero norm at the degeneration surface where $\mu_j=0$. Since the $(n+2)$ Killing vectors $\ell_1$, $\ell_2$ and $\partial/\partial\varphi_j$ span a vector space of dimension $(n+1)$, it follows that they must be linearly dependent. Since each of the Killing vectors generates a translation that closes with period $2\pi$ at its own degeneration surface, it follows that the coefficients in the linear relation must be rationally related in order not to have arbitrarily nearby points being identified, and so we may write the linear relation as \begin{equation} p\, \ell_1 + q\, \ell_2 + \sum_{j=1}^n r_j \fft{\partial}{\partial\varphi_j}=0 \end{equation} for coprime integers $(p,q,r_j)$. For the same reason that we discussed for the Einstein-Sasaki cases, here too no subset of $(n+1)$ of these integers may have a common factor. Thus we have the equations \begin{equation} p\, c(1) + q\, c(2)=0\,,\qquad p \, b_j(1) + q\, b_j(2) + r_j=0\,. \label{pqsjeqns} \end{equation} Unlike the Einstein-Sasaki limits, however, in this general case we do not have any relation analogous to (\ref{pqrj}) that imposes a linear relation on the $(n+2)$ integers.
This is because in the general case the local metrics (\ref{blodd}) have $(n+1)$ non-trivial continuous parameters, namely $m$ and the rotations $a_j$, which can then be viewed as being determined, via the $(n+1)$ equations (\ref{pqsjeqns}), in terms of the $(n+1)$ rational ratios $p/q$ and $r_j/q$. By contrast, in the Einstein-Sasaki limits the local metrics in $D=2n+1$ dimensions have only $n$ non-trivial parameters, and so there must exist an equation (namely (\ref{pqrj})) that relates the $(n+1)$ ratios $p/q$ and $r_j/q$. Since, without loss of generality, we are taking $y_1<y_2$, the integers $p$ and $q$ must be such that $p<q$.\footnote{The case $p=q$, corresponding to a limit with $y_1=y_2$, was discussed extensively in \cite{gilupapo1,gilupapo2}, and so for convenience we shall exclude this from the analysis here.} From (\ref{pqsjeqns}) we have \begin{equation} r_j = \fft{p\, a_j\, c(1)\, (y_2^2-y_1^2)(1-\lambda a_j^2)}{ (y_1^2-a_j^2)(y_2^2-a_j^2)}\,.\label{rjlim} \end{equation} We must have $y>a_j$ in the entire interval $y_1\le y\le y_2$, in order to ensure that the metric remains positive definite, and hence from (\ref{rjlim}) it follows that we must have $r_j>0$. We shall denote the associated $D=2n+1$ dimensional Einstein spaces by $\Lambda^{p,q,r_1,\ldots, r_n}$. From an expression for the determinant of the Kerr-de Sitter metrics given in \cite{gibperpop}, it is easily seen that the volume of the Einstein space $\Lambda^{p,q,r_1,\ldots,r_n}$ is given by \begin{equation} V = \fft{ {\cal A}_{2n+1}}{(\prod_j \Xi_j)}\, \fft{\Delta\tau}{2\pi}\, \Big[ \prod_{j=1}^n (y_2^2- a_j^2) - \prod_{j=1}^n(y_1^2-a_j^2)\Big]\, \Big( \prod_{k=1}^n \int \fft{d\varphi_k}{2\pi}\Big)\,, \end{equation} where ${\cal A}_{2n+1}$ is the volume of the unit $(2n+1)$-sphere.
If $p$ and $q$ have a greatest common divisor $k=\hbox{gcd}\, (p,q)$ then the $n$-torus of the $\varphi_j$ coordinates will be factored by a freely-acting ${{\mathbb Z}}_k$, and hence it will have volume $(2\pi)^n/k$. The period of $\tau$ will be $\Delta\tau=2\pi\, k\, c(1)/q$, and hence the volume of $\Lambda^{p,q,r_1,\ldots,r_n}$ is \begin{equation} V = \fft{ {\cal A}_{2n+1}}{(\prod_j \Xi_j)}\, \fft{c(1)}{q}\, \Big[ \prod_{j=1}^n (y_2^2- a_j^2) - \prod_{j=1}^n(y_1^2-a_j^2)\Big]\,. \end{equation} \subsection{Even dimensions} A similar discussion applies to the Euclideanised Kerr-de Sitter metrics in even dimensions $D=2n$, which are also given in \cite{gilupapo1,gilupapo2}. There is, however, a crucial difference, stemming from the fact that while there are $n$ latitude coordinates $\mu_i$ with $1\le i\le n$, there are only $n-1$ azimuthal coordinates $\varphi_j$, with $1\le j\le n-1$. Since the $\mu_i$ coordinates are subject to the condition $\sum_{i=1}^n \mu_i^2=1$, this means that now, unlike the odd-dimensional case, there exist surfaces where {\it all} the azimuthal Killing vectors $\partial/\partial\varphi_j$ simultaneously have vanishing norm. This is achieved by taking \begin{equation} \mu_j=0\,,\qquad 1\le j\le n-1\,;\qquad \mu_n=\pm 1\,. 
\end{equation} Thus if we consider the Killing vectors $\partial/\partial\varphi_j$ together with $\ell_1$ whose norm vanishes at $y=y_1$ and $\ell_2$ whose norm vanishes at $y=y_2$, then from the relation \begin{equation} p\, \ell_1 + q\, \ell_2 + \sum_j r_j\, \fft{\partial}{\partial\varphi_j}=0\,, \end{equation} which can be written as \begin{equation} \ell_2 = -\fft{p}{q}\, \ell_1 - \sum_j\fft{r_j}{q}\, \fft{\partial}{\partial\varphi_j}\,,\label{van2} \end{equation} we see that at $(y=y_1,\mu_n=\pm1)$ it will also be the case that $\ell_2$ has vanishing norm.\footnote{Note that in a positive-definite metric signature, if two vectors $A$ and $B$ have vanishing norm at any point, then so does $A+ \lambda\, B$ for any $\lambda$. This can be seen from $(A\pm B)^2\ge0$, which shows that if $A^2$ and $B^2$ vanish at a point, then so does $A\cdot B$.} Since $\ell_1$, $\ell_2$ and $\partial/\partial\varphi_j$ all, by construction, generate translations at their respective degeneration surfaces that close with period $2\pi$, it follows from (\ref{van2}) that there will in general be a conical singularity at $y=y_2$ associated with a factoring by ${{\mathbb Z}}_q$. A similar argument shows there will in general be a conical singularity at $y=y_1$ associated with a factoring by ${{\mathbb Z}}_p$. The upshot from the above discussion is that one can only get non-singular Einstein spaces in the $D=2n$ dimensional case if $p=q=1$. Since $p=q$, this implies that the two roots $y_1$ and $y_2$ coincide, and hence the analysis reduces to that which was given in \cite{gilupapo1,gilupapo2}. Since the calculations in four dimensions are very simple, it is instructive to examine this example in greater detail.
The Euclideanised four-dimensional Kerr-de Sitter metric is given by \begin{equation} ds^2 = \rho^2 \Big(\fft{dy^2}{\Delta_y} + \fft{d\theta^2}{\Delta_\theta}\Big) + \fft{\Delta_\theta\sin^2\theta}{\rho^2}\Big(a d\tau + (y^2-a^2) \fft{d\phi}{\Xi}\Big)^2 +\fft{\Delta_y}{\rho^2} \Big(d\tau - a \sin^2\theta \fft{d\phi}{\Xi}\Big)^2\,, \end{equation} where \begin{eqnarray} &&\rho^2=y^2 - a^2 \cos^2\theta\,,\qquad \Delta_y = (y^2 - a ^2)(1-\lambda\, y^2) - 2m y\,,\nonumber\\ &&\Delta_\theta =1 - \lambda\, a^2\cos^2\theta\,,\qquad \Xi = 1 - \lambda a^2\,. \end{eqnarray} The function $\Delta_y$ is a quartic polynomial in $y$, which goes to $-\infty$ for $y\rightarrow \pm \infty$. Thus one necessary condition for obtaining a non-singular space is that there exist at least two real roots. If there are exactly two real roots, $y_1$ and $y_2$, with $y_1\le y_2$, then the radial variable $y$ must lie in the interval $y_1\le y\le y_2$. If there are four real roots, then we should choose $y_1$ and $y_2$ to be adjacent roots, which are either the smallest pair or the largest pair. The Killing vectors that vanish at $y=y_1$ and $y=y_2$ are given by \begin{equation} \ell_i= c_i \fft{\partial}{\partial\tau} + b_i \fft{\partial}{\partial\phi}\,, \end{equation} where \begin{equation} b_i=\fft{a (1-a ^2) c_i}{a^2-y_i^2}\,,\qquad c_i=\fft{2(a^2-y_i^2) y_i}{a ^2 + y_i^2 + a^2 y_i^2 -3 y_i^4}\,. \end{equation} The Killing vector that vanishes at $\sin\theta=0$ is given by \begin{equation} \ell_3=\fft{\partial}{\partial\phi}\,. \end{equation} Thus we have the conditions \begin{equation} p\,\ell_1 + q \ell_2 + r\,\ell_3 = 0 \end{equation} for $(p,q,r)$ which are pairwise coprime integers. For a four-dimensional compact Einstein space, the Euler number is given by \begin{equation} \chi = \fft1{32\pi^2}\int |\hbox{Riem}|^2\, \sqrt{g}\, d^4 x\,. \end{equation} This is easily evaluated for the four-dimensional metrics given above.
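Before evaluating $\chi$, one can cross-check the coefficients $c_i$ numerically. Eliminating $m$ at a root $y_i$ of $\Delta_y$ shows that the closed form for $c_i$ quoted above is equivalent to $c_i = 2(a^2-y_i^2)/\Delta_y'(y_i)$, which we take here to express the unit-surface-gravity normalisation. The Python sketch below verifies this, using arbitrary sample values of $a$ and $m$ (and $\lambda=1$, as in the formulas above):

```python
# Cross-check of the coefficients c_i of the Euclideanised 4d
# Kerr-de Sitter metric (lambda = 1): at a root y_i of Delta_y, the
# closed form quoted in the text should equal 2(a^2 - y_i^2)/Delta_y'(y_i),
# i.e. c_i^{-1} is the surface gravity at y = y_i (assumed normalisation).
# The values of a and m are arbitrary samples giving two real roots.

a, m = 0.3, 0.02

def Delta_y(y):
    return (y**2 - a**2) * (1.0 - y**2) - 2.0 * m * y

def Delta_y_prime(y):
    return 2.0 * y * (1.0 + a**2) - 4.0 * y**3 - 2.0 * m

def bisect(f, lo, hi):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

y1 = bisect(Delta_y, 0.31, 0.35)   # smaller root
y2 = bisect(Delta_y, 0.95, 0.99)   # larger root

for yi in (y1, y2):
    c_closed = 2.0 * (a**2 - yi**2) * yi / (a**2 + yi**2 + a**2 * yi**2 - 3.0 * yi**4)
    assert abs(c_closed - 2.0 * (a**2 - yi**2) / Delta_y_prime(yi)) < 1e-10
```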
With the angular coordinates having periods \begin{equation} \Delta\phi =2\pi\,,\qquad \Delta\tau = \fft{2\pi c_1}{q}\,,\label{periods} \end{equation} we find that \begin{equation} \chi = \fft2{p} + \fft2{q}\,.\label{chires} \end{equation} If $p=q=1$ we have $\chi=4$. This is indeed the correct Euler number for the $S^2$ bundle over $S^2$, which, as shown in \cite{pagemet} and \cite{gilupapo1,gilupapo2}, is the only non-singular case that arises when the roots $y_1$ and $y_2$ coincide. If one were to consider cases where $p\ne 1$ or $q\ne 1$, then in general $\chi$ would not be an integer, in accordance with our observations above that in such cases there are ${{\mathbb Z}}_p$ and ${{\mathbb Z}}_q$ orbifold type singularities in the space. It is possible that one might be able to blow up these singularities, and thereby obtain a non-singular space with a more complicated topology. The fact that the ``Euler number'' $\chi$ given in (\ref{chires}) has a simple rational form can perhaps be taken as supporting evidence. \section{Conclusions} In this paper, we have elaborated on the results which we obtained in \cite{cvlupapo}, constructing new Einstein-Sasaki spaces $L^{p,q,r_1, \cdots,r_{n-1}}$ and non-Sasakian Einstein spaces $\Lambda^{p,q,r_1,\cdots, r_n}$, in all odd dimensions $D=2n+1\ge 5$. These spaces are all complete and non-singular compact manifolds. The metrics have cohomogeneity $n$, with isometry group $U(1)^{n+1}$, which acts transitively on the $(n+1)$-dimensional principal orbits. The Einstein-Sasaki metrics arise after Euclideanisation, by taking certain BPS limits of the Kerr-de Sitter spacetimes constructed in $D=5$ in \cite{hawhuntay}, and in all higher dimensions in \cite{gilupapo1,gilupapo2}. The BPS limit effectively implies that there is a relation between the mass and the $n$ rotation parameters of the $(2n+1)$-dimensional Kerr-de Sitter metric, and thus the local Einstein-Sasaki metrics have $n$ non-trivial free parameters. 
These metrics are in general singular, but by imposing appropriate restrictions on the parameters, we find that the metrics extend smoothly onto complete and non-singular compact manifolds, which we denote by $L^{p,q,r_1,\cdots,r_{n-1}}$, where the integers $(p,q,r_1,\ldots, r_{n-1})$ are coprime. In the case of the five-dimensional Einstein-Sasaki spaces $L^{p,q,r}$, we have been able to obtain an explicit formula expressing the volume in terms of the coprime integers $(p,q,r)$, via a quartic polynomial. In the AdS/CFT correspondence, it is expected that the boundary field theory dual to the type IIB string on AdS$_5\times L^{p,q,r}$ will be a quiver gauge theory. In particular, the central charge of the quiver theory should be related to the inverse of the volume of $L^{p,q,r}$. The central charges have recently been calculated using the technique of $a$-maximisation, and it has been shown that they are indeed precisely in correspondence with the volumes given by our polynomial (\ref{quartic}) \cite{benkru}. (See \cite{martspar2,berbigcot,befrhamasp,marspayau} for the analysis of the dual quiver theories, and $a$-maximisation, for the previous $Y^{p,q}$ examples.) We have also shown, for the five-dimensional $L^{p,q,r}$ spaces, that a convenient characterisation can be given in terms of the two parameters $u$ and $v$, introduced in equation (\ref{uv1}), where $0<u<v<1$. If $u$ and $v$ are taken to be arbitrary rational numbers in this range, then we obtain a corresponding Einstein-Sasaki space that is ``quasi-regular,'' meaning that the orbits of $\partial/\partial\tau$ are closed. 
In general, when $u$ and $v$ are irrational numbers, again related to the coprime integers $(p,q,r)$ by (\ref{uv1}), the orbits of $\partial/\partial\tau$ will never close, and the corresponding Einstein-Sasaki space is called ``irregular.''\footnote{One should not be misled by the terminology ``quasi-regular'' and ``irregular'' that is applied to Einstein-Sasaki spaces; all the spaces $L^{p,q,r}$ are complete and non-singular.} Several other papers have appeared making use of our results in \cite{cvlupapo}. These include \cite{geppal}, where branes in backgrounds involving the $L^{p,q,r}$ spaces are constructed, and \cite{ahnpor}, where marginal deformations of eleven-dimensional backgrounds involving the $L^{p,q,r_1,r_2}$ spaces are constructed. In addition to discussing the Einstein-Sasaki spaces $L^{p,q,r_1,\cdots, r_{n-1}}$, we have also elaborated in this paper on the non-Sasakian Einstein spaces $\Lambda^{p,q,r_1,\cdots,r_n}$ that were constructed in $D=2n+1$ dimensions in \cite{cvlupapo}. These arise by Euclideanising the Kerr-de Sitter metrics without taking any BPS limit. As local metrics they are characterised by $(n+1)$ parameters, corresponding to the mass and the $n$ independent rotations of the $(2n+1)$-dimensional rotating black holes. Again, by choosing these parameters appropriately, so that they are characterised by the $(n+2)$ coprime integers $(p,q,r_1,\ldots, r_n)$, we find that the local metrics extend smoothly onto complete and non-singular compact manifolds. \bigskip \noindent{\Large{\bf Acknowledgements}} M.C. and D.N. Page are grateful to the George P. \& Cynthia W. Mitchell Institute for Fundamental Physics for hospitality. Research supported in part by DOE grants DE-FG02-95ER40893 and DE-FG03-95ER40917, NSF grant INT03-24081, the NSERC of Canada, the University of Pennsylvania Research Foundation Award, and the Fay R. and Eugene L. Langberg Chair. \bigskip \bigskip\bigskip \noindent{\Large \bf Appendix}
\section{Introduction} The analysis of gravitational wave (GW) signals detected by the LIGO-Virgo-KAGRA (LVK) collaboration~\cite{LIGOScientific:2018mvr,LIGOScientific:2020ibl,LIGOScientific:2021djp,LIGOScientific:2014pky,VIRGO:2014yos,KAGRA:2020agh} relies on fast and accurate waveform models. The need for accurate templates will be heightened when the third generation of interferometers~\cite{Reitze:2019iox,Punturo:2010zz,LISA:2017pwj} will start to detect GW events at higher rates and with improved signal-to-noise ratios. Numerical Relativity (NR) simulations~\cite{Pretorius:2005gq, Campanelli:2005dd,Mroue:2013xna, Husa:2015iqa, Jani:2016wkt, Boyle:2019kee, Healy:2019jyf} play a key role in providing accurate solutions of Einstein's equations in the strong-field regime. Their downside is the computational cost, which makes it impossible to use them directly in parameter estimation studies. NR surrogates~\cite{Field:2013cfa, Blackman:2017dfb, Varma:2019csw} partially circumvent this problem, but they are limited in terms of parameter space coverage and waveform length. All other GW approximants used by the LVK collaboration are, at least to some extent, based on analytical approximations. The most accurate ones are either models based on the effective-one-body (EOB) approach~\cite{Buonanno:1998gg,Buonanno:2000ef,Gamba:2021ydi, Ossokine:2020kjp} or phenomenological approximants~\cite{Khan:2019kot,Pratten:2020fqn,Garcia-Quiros:2020qpx,Pratten:2020ceb}. Both of these families are built using (approximate) analytical solutions of Einstein's equations and completed through the use of NR information. Historically, the prime analytical approximation method has been the post-Newtonian (PN) expansion~\cite{Blanchet:1989ki,Blanchet:2013haa,Damour:2014jta,Levi:2015uxa,Bini:2017wfr,Schafer:2018kuf, Bini:2019nra,Bini:2020wpo,Bini:2020nsb,Bini:2020hmy,Antonelli:2020ybz}, which assumes small velocities ($v/c \ll 1$), together with weak fields [$G M / (r c^2) \ll 1$]. 
A few years ago, the interest in developing the post-Minkowskian (PM) approximation, which only assumes weak fields [$G M / (r c^2) \ll 1$] while allowing for arbitrarily large velocities, was pointed out~\cite{Damour:2016gwp,Damour:2017zjx}. This triggered a lot of recent activity, using various approaches to gravitational scattering, such as: scattering amplitudes (see, e.g.,~\cite{Cheung:2018wkq,Guevara:2018wpp,Kosower:2018adc,Bern:2019nnu, Bern:2019crd,Bern:2019nnu,Bjerrum-Bohr:2019kec, Bern:2021dqo,Bern:2021yeh,Bjerrum-Bohr:2021din,Manohar:2022dea, Saketh:2021sri}); eikonalization (e.g.,~\cite{KoemansCollado:2019ggb,DiVecchia:2019kta,DiVecchia:2021bdo,DiVecchia:2022nna}); effective field theory (e.g.,~\cite{Kalin:2020mvi,Kalin:2020fhe, Mougiakakos:2021ckm,Dlapa:2021npj,Dlapa:2021vgp,Kalin:2022hph,Dlapa:2022lmu}); and worldline (classical or quantum) field theory (e.g.,~\cite{Mogull:2020sak,Riva:2021vnj,Jakobsen:2021smu,Jakobsen:2022psy}). As the PM expansion is particularly suitable for describing scattering systems, it could help to improve GW models for the signals of eccentric and hyperbolic binaries~\cite{Chiaramello:2020ehz,Nagar:2021gss,Placidi:2021rkh,Khalil:2021txt,Ramos-Buades:2021adz,Nagar:2021xnh,Khalil:2022ylj}. One expects that the LVK collaboration will eventually observe (and has possibly already observed~\cite{Romero-Shaw:2020thy,CalderonBustillo:2020odh,Gayathri:2020coq,Gamba:2021gap}) GW signals emitted by highly eccentric, capture or even hyperbolic binary systems. Such observations (which are expected to be more common in third-generation detectors) would improve our knowledge of BH formation and evolution~\cite{OLeary:2005vqo,OLeary:2008myb,Samsing:2013kua,Rodriguez:2016kxx,Belczynski:2016obo,Samsing:2017xmd}.
While PM results have been extended to take into account spin (e.g.,~\cite{Bini:2017xzy,Bini:2018ywr,Vines:2017hyw,Vines:2018gqi,Guevara:2019fsj,Kalin:2019inp,Kosmopoulos:2021zoq,Aoude:2022thd,Jakobsen:2022fcj,Bern:2020buy,Bern:2022kto,FebresCordero:2022jts}) and tidal (e.g.,~\cite{Bini:2020flp,Bern:2020uwk,Cheung:2020sdj,Kalin:2020lmz}) effects, in this paper we will focus on non-spinning black hole (BH) binaries, for which 4PM-accurate results are available, both for conservative~\cite{Bern:2019nnu,Kalin:2020fhe,Bjerrum-Bohr:2021din,Bern:2021yeh,Dlapa:2021vgp} and radiation-reacted dynamics~\cite{Damour:2020tta,DiVecchia:2021ndb,Cho:2021arx,DiVecchia:2021bdo,Herrmann:2021tct,Bini:2021gat,Bini:2021qvf,Manohar:2022dea,Dlapa:2022lmu}. The main aims of this paper are: (i) to compare\footnote{Our analysis has some overlap with Sec.~V of Ref.~\cite{Khalil:2022ylj}. However, the emphasis of our work is different and we use the information coming from the lowest impact-parameter (stronger-field) systems of Ref.~\cite{Damour:2014afa}, which were not taken into account in Ref.~\cite{Khalil:2022ylj}.} the equal-mass, nonspinning scattering simulations of Ref.~\cite{Damour:2014afa} to the analytical PM results of Refs.~\cite{Bern:2021yeh,Bern:2021dqo,Dlapa:2021vgp,Manohar:2022dea,Dlapa:2022lmu}; (ii) to propose a new resummation of the PM-expanded scattering angle incorporating the presence of a singularity at low impact parameters; (iii) to use a new approach (introduced in Ref.~\cite{Damour:2017zjx}) for encoding the analytical PM information in the form of corresponding effective-one-body (EOB) radial gravitational potentials; and (iv) to use Firsov's inversion formula~\cite{Kalin:2019rwq} to extract a (radiation-reacted) gravitational potential directly from NR data. \section{Post-Minkowskian scattering angle and effective-one-body} \label{sec:scatt_angle} Throughout this paper, we use units such that $G = c = 1$. 
We consider a nonspinning BH binary system with masses $m_1, m_2$ and describe its dynamics with mass-rescaled coordinates and momenta: \begin{align} r &\equiv R/M\,, \hspace{1cm} t \equiv T/M\,, \nonumber \\ p_\alpha &\equiv P_\alpha/ \mu\,, \hspace{1cm} j \equiv J/(\mu M)\,, \end{align} where $M = m_1 + m_2$ is the total mass of the system and $\mu = (m_1 m_2)/M$ its reduced mass. We denote the symmetric mass ratio as $\nu \equiv \mu/M$. \subsection{Post-Minkowskian-expanded scattering angle} The scattering angle of nonspinning BH binaries can be PM-expanded as a power series in the inverse dimensionless angular momentum $j$ as \begin{align} \label{eq:chiPM_1} \chi \left(\gamma, j\right) &= 2 \frac{\chi_1 \left(\gamma\right)}{j} + 2 \frac{\chi_2 \left(\gamma\right)}{j^2} + \nonumber \\ &\quad + 2 \frac{\chi_3 \left(\gamma\right)}{j^3} + 2 \frac{\chi_4 \left(\gamma\right)}{j^4} + \mathcal{O}\left[\frac{1}{j^5}\right], \end{align} where the expansion coefficients depend on the total energy, $E$, of the system in the incoming state. See Eq.~\eqref{eq:gamma} below for the relation between the energy $E$, the effective EOB energy $\mathcal{E}_{\rm eff}$, and the quantity $\gamma = -u_1 \cdot u_2$, used in Eq.~\eqref{eq:chiPM_1}, which is the relative Lorentz factor of the two incoming worldlines. The scattering angle at the $n$PM-order accuracy, $\chi_{n{\rm PM}}$, is then defined as \begin{equation} \label{eq:chiPM} \chi_{n{\rm PM}}(\gamma,j) \equiv \sum_{i = 1}^{n} 2 \frac{\chi_i(\gamma)}{j^i}\,. \end{equation} The $\chi_i(\gamma)$ coefficients read \begin{align} \label{eq:chi_i} \chi_1(\gamma) &= \frac{2 \gamma^2 - 1}{\sqrt{\gamma^2-1}}\,, \nonumber \\ \chi_2(\gamma) &= \frac{3 \pi}{8} \frac{(5 \gamma^2 - 1)}{h(\gamma;\nu)}\,, \nonumber \\ \chi_3(\gamma) &= \chi_3^{\rm cons}(\gamma) + \chi_3^{\rm rr}(\gamma)\,, \nonumber \\ \chi_4(\gamma) &= \chi_4^{\rm cons}(\gamma) + \chi_4^{\rm rr, odd}(\gamma) + \chi_4^{\rm rr, even}(\gamma)\,. 
\end{align} Here we defined the rescaled energy $h(\gamma;\nu)$ as \begin{equation} \label{eq:h} h(\gamma;\nu) \equiv \frac{E}{M} = \sqrt{1 + 2 \nu (\gamma - 1)}\,, \end{equation} and, starting at 3PM, we separated the $\chi_i(\gamma)$ coefficients into conservative and radiation-reaction contributions. The 3PM contributions to the scattering angle read~\cite{Bern:2019nnu,Kalin:2020fhe,Damour:2020tta} \begin{align} \chi_3^{\rm cons}(\gamma) &= \chi_3^{\rm Schw}(\gamma) - \frac{2 \nu \, p_\infty}{h^2(\gamma;\nu)} \bar{C}^{\rm cons}(\gamma)\,, \nonumber \\ \chi_3^{\rm rr}(\gamma) &= - \frac{2 \nu \, p_\infty}{h^2(\gamma;\nu)} \bar{C}^{\rm rad}(\gamma)\,, \end{align} where \begin{equation} \chi_3^{\rm Schw}(\gamma) = \frac{64 p_\infty^6 + 72 p_\infty^4 + 12 p_\infty^2 - 1}{3 p_\infty^3}\,, \end{equation} with \begin{equation} p_\infty \equiv \sqrt{\gamma^2 - 1}\,. \end{equation} The other building blocks entering $\chi_3(\gamma)$ are \begin{align} A(\gamma) &\equiv 2 \, {\rm arcsinh \sqrt{\frac{\gamma-1}{2}}}, \\ \bar{C}^{\rm cons}(\gamma) &= \frac23 \gamma (14 \gamma^2 + 25) + 2(4 \gamma^4 - 12\gamma^2 - 3) \frac{A (\gamma)}{p_\infty}\,, \nonumber \\ \bar{C}^{\rm rad}(\gamma) &= \frac{\gamma \, (2\gamma^2 - 1)^2}{3 (\gamma^2 - 1)^2} \left[\frac{5 \gamma^2 - 8}{\gamma} p_\infty + (9 - 6\gamma^2) A(\gamma)\right]\,. \end{align} As indicated in the last line of Eq.~\eqref{eq:chi_i}, the 4PM scattering angle is conveniently decomposed as the sum of three contributions: (i) the conservative contribution $\chi_{\rm 4}^{\rm cons}$~\cite{Bern:2021yeh,Dlapa:2021vgp}; (ii) the radiation-reaction contribution that is odd under time-reversal~\cite{Bini:2012ji,Bini:2021gat,Manohar:2022dea}; and (iii) the radiation-reaction contribution that is even under time-reversal~\cite{Dlapa:2022lmu,Bini:2022enm}. 
Its structure then reads \begin{equation} \chi_4(\gamma;\nu) = \frac{1}{h^{3}(\gamma;\nu)} \left[\chi_4^{\rm Schw}(\gamma) + \nu\, \chi_4^{(1)}(\gamma) + \nu^2 \, \chi_4^{(2)}(\gamma) \right]\,, \end{equation} where the test-mass contribution is \begin{align} \chi_4^{\rm Schw}(\gamma) = \frac{105}{128}\pi\left(1-18\gamma^2+33\gamma^4\right) \, , \end{align} and where the explicit forms of $\chi_4^{(1)}(\gamma)$ and $\chi_4^{(2)}(\gamma)$ can be found in the ancillary file of Ref.~\cite{Dlapa:2022lmu}. \subsection{Effective-one-body mass-shell condition and potential} The EOB formalism~\cite{Buonanno:1998gg,Buonanno:2000ef,Damour:2000we} entails a map of the two-body dynamics onto a one-body Hamiltonian describing the relative motion of the two objects (in the center-of-mass frame). The ``real'' (center-of-mass) Hamiltonian is related to the ``effective'' Hamiltonian through \begin{equation} H_{\rm real} = M \sqrt{1+2\nu\left(\frac{H_{\rm eff}}{\mu}-1\right)}\, . \end{equation} Comparison with Eq.~\eqref{eq:h} shows that the Lorentz-factor variable $\gamma = -u_1 \cdot u_2$ is equal to the dimensionless effective energy of the system $\hat{\mathcal{E}}_{\rm eff} \equiv \mathcal{E}_{\rm eff}/\mu$, i.e. \begin{equation} \label{eq:gamma} \gamma = \hat{\mathcal{E}}_{\rm eff} = \frac{\left(E_{\rm real}\right)^2-m_1^2-m_2^2}{2 m_1 m_2}\, . \end{equation} The EOB dynamics is encapsulated in a mass-shell condition of the general form \begin{equation} \mu^2 + g_{\rm eff}^{\mu \nu} P_\mu P_\nu + Q(X^\mu,P_\mu) = 0\, , \end{equation} where $Q(X^\mu,P_\mu)$ is a Finsler-type term accounting for contributions of higher than quadratic order in the momenta. In rescaled variables, $p_\alpha \equiv P_\alpha/ \mu$, $x^\alpha \equiv X^\alpha/M$, and $\hat{Q} \equiv Q/\mu^2$, this becomes \begin{equation} \label{eq:mass-shell} 1 + g_{\rm eff}^{\mu \nu} p_\mu p_\nu + \hat{Q}(x^\mu,p_\mu) = 0\, .
\end{equation} The quadratic-in-momenta term, $g_{\rm eff}^{\mu \nu} p_\mu p_\nu$, defines an effective metric of the general form \begin{equation} \label{eq:metric} g_{\mu \nu}^{\rm eff} dx^\mu dx^\nu = -A(r) dt^2 + B(r)dr^2 + C(r)d\Omega^2\, . \end{equation} The symmetries of Eq.~\eqref{eq:mass-shell} imply the existence of two constants of motion, energy and angular momentum, namely $\hat{\mathcal{E}}_{\rm eff} = -p_0$ and $j = p_\varphi$. The general mass-shell condition, Eq.~\eqref{eq:mass-shell}, has been used in the EOB literature in various gauges. Following the post-Schwarzschild approach of Ref.~\cite{Damour:2017zjx}, it is convenient to fix the metric $g_{\mu \nu}^{\rm eff}$ to be the Schwarzschild metric. Writing the latter metric in terms of isotropic coordinates [with a radial coordinate\footnote{The isotropic radial coordinate $\bar{r}$ is related to the usual Schwarzschild-like $r$ by $r = \bar{r}\left[1+1/(2 \bar{r})\right]^2$.} $\bar{r}$ that satisfies $\bar{C}(\bar{r}) = \bar{r}^2 \bar{B}(\bar{r})$]~\cite{Damour:2017zjx,Damour:2019lcq} and expressing the momentum dependence of $\hat{Q}(x^\mu,p_\mu)$ only in terms of $p_0 = - \gamma$, leads to a mass-shell condition of the form \begin{equation} \label{eq:p2} p_{\bar{r}}^2 + \frac{j^2}{\bar{r}^2} = p_\infty^2 + w(\bar{r},\gamma) \, . \end{equation} This mass-shell condition has the useful property of being Newtonian-like and of reducing the two-body dynamics to the Newtonian dynamics of a non-relativistic particle moving in the radial potential $-w(\bar{r},\gamma)$. When considering motions with a given $j$, the mass-shell condition \eqref{eq:p2} can be re-written as \begin{equation} p_{\bar{r}}^2 = p_\infty^2 - V(\bar{r},\gamma,j)\, , \end{equation} where we introduced the effective potential \begin{equation} \label{eq:V} V(\bar{r},\gamma,j) = \frac{j^2}{\bar{r}^2} - w(\bar{r},\gamma)\, .
\end{equation} This effective potential features a usual-looking centrifugal potential $j^2/\bar{r}^2$ [containing the entire $j$ dependence of $V(\bar{r},\gamma,j)$] together with the original (energy-dependent) radial potential $-w(\bar{r},\gamma)$. Recalling that $p_\infty^2 = \gamma^2 -1$, the radial potential $w(\bar{r},\gamma)$ is obtained, combining Eqs.~\eqref{eq:mass-shell}-\eqref{eq:metric}, as \begin{align} \label{eq:w} w(\bar{r}, \gamma) &= \gamma^2 \left[\frac{{\bar B}(\bar{r})}{{\bar A}(\bar{r})}-1 \right] \nonumber \\ &\quad - \left[{\bar B}(\bar{r})-1 \right] - {\bar B}(\bar{r}) \hat{Q}(\bar{r}, \gamma)\,. \end{align} In this formula, \begin{equation} \label{eq:ASchw} {\bar A}(\bar{r}) = \left(\frac{1- \frac{1}{2\bar{r}}}{1+ \frac{1}{2\bar{r}}}\right)^2\,, \hspace{1cm} {\bar B}(\bar{r}) =\left(1+ \frac{1}{2\bar{r}} \right)^4\,, \end{equation} describe the Schwarzschild metric in isotropic coordinates. Instead, $\hat{Q}(\bar{r},\gamma)$ admits a PM expansion in the inverse of the radius $\frac{1}{\bar{r}}$ ($= \frac{GM}{\bar{R}}$ in unrescaled units) of the form \begin{equation} \label{eq:Q} \hat{Q}(\bar{r}, \gamma) = \frac{\bar{q}_2(\gamma)}{\bar{r}^2} + \frac{\bar{q}_3(\gamma)}{\bar{r}^3} + \frac{\bar{q}_4(\gamma)}{\bar{r}^4} + \mathcal{O}\left[\frac{1}{\bar{r}^5} \right]\,. \end{equation} The first term in this expansion is at 2PM order, i.e. $\propto \frac{G^2 M^2}{\bar{R}^2}$. Such an isotropic-coordinate description of the EOB dynamics with an energy-dependent potential was introduced in Ref.~\cite{Damour:2017zjx}. This EOB potential is similar to the isotropic-gauge EFT-type potential later introduced in Ref.~\cite{Cheung:2018wkq}, and used e.g. in Ref.~\cite{Kalin:2019rwq}. The relation between these two types of potential is discussed in Appendix~A of Ref.~\cite{Damour:2019lcq} (see also Ref.~\cite{Antonelli:2019ytb}).
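As a concrete illustration, the test-mass limit of Eq.~\eqref{eq:w} (i.e. $\nu \to 0$, for which $\hat{Q} \to 0$) is easy to evaluate directly from the metric coefficients~\eqref{eq:ASchw}. The following minimal Python sketch (not part of the original derivation; the function names are ours) does this:

```python
import math

def A_bar(rbar):
    """Schwarzschild (-g_tt) coefficient in isotropic coordinates, Eq. (ASchw)."""
    x = 1.0 / (2.0 * rbar)
    return ((1.0 - x) / (1.0 + x)) ** 2

def B_bar(rbar):
    """Schwarzschild radial metric coefficient in isotropic coordinates."""
    return (1.0 + 1.0 / (2.0 * rbar)) ** 4

def w_test_mass(rbar, gamma):
    """Radial potential w(rbar, gamma) of Eq. (w) in the test-mass limit,
    where the Finsler-type term hat-Q vanishes."""
    return gamma**2 * (B_bar(rbar) / A_bar(rbar) - 1.0) - (B_bar(rbar) - 1.0)
```

At large $\bar{r}$ one numerically recovers $w \approx w_1/\bar{r} + w_2/\bar{r}^2$ with $w_1 = 2(2\gamma^2-1)$ and $w_2 = \tfrac32(5\gamma^2-1)$, i.e. the $\nu \to 0$ limit of the two-body coefficients given below.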
Inserting the expansion~\eqref{eq:Q} into Eq.~\eqref{eq:w}, and re-expanding\footnote{One could also define a potential where the Schwarzschild metric coefficients $\bar{A}$ and $\bar{B}$ are \textit{not} expanded in powers of $\bar{u}$.} in powers of $\bar{u}$, defines the PM-expansion of the (energy-dependent) radial potential in the form \begin{equation} \label{eq:wr} w(\bar{r}, \gamma) = \frac{w_1(\gamma)}{\bar{r}} + \frac{w_2(\gamma)}{\bar{r}^2} +\frac{w_3(\gamma)}{\bar{r}^3} +\frac{w_4(\gamma)}{\bar{r}^4} + \mathcal{O}\left[\frac{1}{\bar{r}^5}\right]\,. \end{equation} This energy-dependent potential encodes the \textit{attractive} gravitational interaction between the two bodies. It is a relativistic generalization of the Newtonian potential $U = +(G M)/R$. In the following, we will introduce the sequence of $n$PM potentials, defined as \begin{equation} \label{eq:wPM} w_{n{\rm PM}}(\bar{r},\gamma) \equiv \sum_{i = 1}^{n} \frac{w_i(\gamma)}{ \bar{r}^i}\,. \end{equation} We can then compute the scattering angle $\chi \left(\gamma, j\right)$ as \begin{equation} \label{eq:chi_pr} \pi + \chi \left(\gamma, j\right) = - \int_{-\infty}^{+\infty} d\bar{r}\, \frac{\partial p_{\bar{r}}\left(\bar{r},j,\gamma\right)}{\partial j}\,, \end{equation} where the limits of integration $\bar{r} = \mp \infty$ are to be interpreted as the incoming and final state (at $t = \mp \infty$ respectively). In isotropic coordinates, it is easy to invert the mass-shell condition to obtain the radial momentum as \begin{align} p_{\bar{r}}\left(\bar{r},j,\gamma\right) &= \pm \sqrt{p_\infty^2 - V(\bar{r},\gamma,j)}\,, \nonumber \\ &= \pm \sqrt{p_\infty^2 + w(\bar{r},\gamma) - \frac{j^2}{\bar{r}^2}}\,, \end{align} where the square root has a negative sign along the ingoing trajectory and a positive one during the outgoing motion. 
Its derivative with respect to $j$ reads \begin{equation} \frac{\partial p_{\bar{r}}\left(\bar{r},j,\gamma\right)}{\partial j} = - \frac{j}{\bar{r}^2 \, p_{\bar{r}}\left(\bar{r},j,\gamma\right)}\, . \end{equation} Equation~\eqref{eq:chi_pr} thus becomes \begin{equation} \label{eq:chi_w} \pi + \chi \left(\gamma, j\right) = 2\, j \int_{\bar{r}_{\rm min}(\gamma,j)}^{+\infty} \frac{d\bar{r}}{\bar{r}^2}\, \frac{1}{\sqrt{p_\infty^2 + w(\bar{r},\gamma) - j^2/\bar{r}^2}}\, , \end{equation} in which we introduced the radial turning point $\bar{r}_{\rm min}(\gamma,j)$, defined, in the present scattering context, as the largest root of the radial momentum $p_{\bar{r}}\left(\bar{r},j,\gamma\right)$. In the following, it will often be convenient to replace $\bar{r}$ by its inverse \begin{equation} \bar{u} \equiv \frac{1}{\bar{r}}\, , \end{equation} so that Eq.~\eqref{eq:chi_w} reads \begin{equation} \label{eq:chi_wu} \pi + \chi \left(\gamma, j\right) = 2\, j \int_{0}^{\bar{u}_{\rm max}(\gamma,j)} \, \frac{d\bar{u}}{\sqrt{p_\infty^2 + w(\bar{u},\gamma) - j^2 \bar{u}^2}}\, , \end{equation} where $\bar{u}_{\rm max}(\gamma,j) \equiv 1/\bar{r}_{\rm min}(\gamma,j)$. Inserting the PM-expanded $w(\bar{u}, \gamma)$, Eq.~\eqref{eq:wr}, into Eq.~\eqref{eq:chi_wu}, PM-expanding the integrand, and taking the \textit{partie finie} of the resulting (divergent) integrals yields the relation\footnote{An alternative route to connect the $\chi_i(\gamma)$ to the $w_i(\gamma)$ is to go through the coefficients $\bar{q}_i(\gamma)$ of Eq.~\eqref{eq:Q}. See Ref.~\cite{Damour:2017zjx}.} between the PM coefficients of the scattering angle, Eq.~\eqref{eq:chiPM}, and those of the radial potential~\cite{Damour:1988mr,Damour:2019lcq}.
Namely, \begin{align} \label{eq:coeffs_wchi} \chi_1(\gamma) &= \frac12 \frac{w_1(\gamma)}{p_\infty}\,, \nonumber \\ \chi_2(\gamma) &= \frac \pi 4 w_2(\gamma)\,, \nonumber \\ \chi_3(\gamma) &= - \frac{1}{24} \left[\frac{w_1(\gamma)}{p_\infty}\right]^3 + \frac12 \frac{w_1(\gamma) w_2(\gamma)}{p_\infty} + p_\infty w_3(\gamma)\,, \nonumber \\ \chi_4(\gamma) &= \frac{3 \pi}{8} \left[\frac12 w_2^2(\gamma) + w_1(\gamma) w_3(\gamma) + p_\infty^2 w_4(\gamma)\right]\,. \end{align} Inserting the explicit values of $\chi_i(\gamma)$, Eq.~\eqref{eq:chi_i}, and solving for the $w_i(\gamma)$'s then yields \begin{align} \label{eq:chi_to_w} w_1(\gamma) &= 2(2 \gamma^2 - 1)\,, \nonumber \\ w_2(\gamma) &= \frac{3}{2} \frac{(5 \gamma^2 - 1)}{h(\gamma;\nu)}\,, \nonumber \\ w_3(\gamma) &= w_3^{\rm cons}(\gamma) + w_3^{\rm rr}(\gamma)\,, \nonumber \\ w_4(\gamma) &= w_4^{\rm cons}(\gamma) + w_4^{\rm rr, odd}(\gamma) + w_4^{\rm rr, even}(\gamma)\,. \end{align} Here the 3PM coefficient $w_3(\gamma)$ has been separated into conservative and radiation-reacted parts, with \begin{align} w_3^{\rm cons}(\gamma) &= 9 \gamma^2 -\frac12 - B(\gamma)\left[\frac{1}{h(\gamma;\nu)} - 1\right] \nonumber \\ &- \frac{2 \nu}{h^2(\gamma;\nu)} \bar{C}^{\rm cons}(\gamma)\, , \nonumber \\ w_3^{\rm rr}(\gamma) &= - \frac{2 \nu}{h^2(\gamma;\nu)} \bar{C}^{\rm rad}(\gamma)\, , \nonumber \\ B(\gamma) &= \frac32 \frac{(2 \gamma^2 - 1)(5 \gamma^2 - 1)}{\gamma^2 - 1}\, . \end{align} The 4PM coefficient $w_4(\gamma)$, in correspondence with Eq.~\eqref{eq:chi_i}, is again divided into three contributions: conservative, $w_4^{\rm cons}(\gamma)$, time-odd radiation-reaction, $w_4^{\rm rr, odd}(\gamma)$, and time-even radiation-reaction, $w_4^{\rm rr, even}(\gamma)$. Its complicated expression can be found by inverting the last line of Eq.~\eqref{eq:coeffs_wchi}.
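These maps are straightforward to check numerically. The sketch below (Python, at 3PM accuracy with radiation reaction included; the function names are ours) implements the $\chi_i(\gamma)$ of Eq.~\eqref{eq:chi_i} and the $w_i(\gamma)$ of Eq.~\eqref{eq:chi_to_w}, which can then be verified against the map~\eqref{eq:coeffs_wchi}:

```python
import math

def pm_coefficients(gamma, nu):
    """chi_1..chi_3 of Eq. (chi_i) and w_1..w_3 of Eq. (chi_to_w),
    radiation reaction included at 3PM.  Returns (chi, w) as two lists."""
    pinf = math.sqrt(gamma**2 - 1.0)
    h = math.sqrt(1.0 + 2.0 * nu * (gamma - 1.0))
    A = 2.0 * math.asinh(math.sqrt(0.5 * (gamma - 1.0)))
    # 3PM building blocks C-bar^cons and C-bar^rad
    Cb_cons = (2.0 / 3.0) * gamma * (14.0 * gamma**2 + 25.0) \
        + 2.0 * (4.0 * gamma**4 - 12.0 * gamma**2 - 3.0) * A / pinf
    Cb_rad = gamma * (2.0 * gamma**2 - 1.0)**2 / (3.0 * (gamma**2 - 1.0)**2) * (
        (5.0 * gamma**2 - 8.0) / gamma * pinf + (9.0 - 6.0 * gamma**2) * A)
    chi3_schw = (64.0 * pinf**6 + 72.0 * pinf**4
                 + 12.0 * pinf**2 - 1.0) / (3.0 * pinf**3)
    chi = [(2.0 * gamma**2 - 1.0) / pinf,
           (3.0 * math.pi / 8.0) * (5.0 * gamma**2 - 1.0) / h,
           chi3_schw - 2.0 * nu * pinf / h**2 * (Cb_cons + Cb_rad)]
    B = 1.5 * (2.0 * gamma**2 - 1.0) * (5.0 * gamma**2 - 1.0) / (gamma**2 - 1.0)
    w = [2.0 * (2.0 * gamma**2 - 1.0),
         1.5 * (5.0 * gamma**2 - 1.0) / h,
         9.0 * gamma**2 - 0.5 - B * (1.0 / h - 1.0)
         - 2.0 * nu / h**2 * (Cb_cons + Cb_rad)]
    return chi, w
```

Evaluating the two lists at any $(\gamma,\nu)$ and inserting them into the first three lines of Eq.~\eqref{eq:coeffs_wchi} reproduces the identities to machine precision.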
\section{Defining a sequence of scattering angles associated to the effective-one-body radial potentials} \label{sec:chiw} \subsection{On the angular-momentum dependence of the $V$ potentials} PM expansions are well adapted to describing the large-$j$ (weak-field) behavior of the scattering angle $\chi$. However, we expect $\chi(\gamma,j)$ to have some type of singularity below a critical value of $j$. In our present framework, the scattering at the $n$PM order is defined by an effective energy-dependent potential (which can include radiation-reaction effects), of the form \begin{equation} V_{n{\rm PM}}(\bar{r},\gamma,j) \equiv \frac{j^2}{\bar{r}^2} - w_{n{\rm PM}}(\bar{r},\gamma)\, . \end{equation} The fact that the centrifugal barrier term $j^2/\bar{r}^2$ is separate from the attractive potential $- w_{n{\rm PM}}(\bar{r},\gamma)$ means that, depending on the relative magnitudes of the radial potential and of the centrifugal barrier, the system will either scatter (finite $\chi$; possibly with zoom-whirl behavior) or plunge ($\chi$ undefined). This is illustrated in Fig.~\ref{fig:wPM2} for specific values of $\gamma$ and $j$, corresponding to the smallest impact-parameter NR simulation of Ref.~\cite{Damour:2014afa}, that we will use below. \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig01.pdf} \caption{ \label{fig:wPM2} PM gravitational potential $V_{n{\rm PM}}$ at different perturbative orders. Energy and angular momentum are fixed to the first simulation of Ref.~\cite{Damour:2014afa}, i.e. $\hat{E}_{\rm in} \equiv E_{\rm in}/M \simeq 1.02256$ and $\hat{J}_{\rm in} \equiv J_{\rm in}/M^2 \simeq 1.100$. The black, dot-dashed horizontal line marks the corresponding value of $p_\infty^2$. The high-order PM corrections tend to make the (radiation-reacted) radial potential more and more attractive. The system scatters in every PM potential, while it would plunge in the Schwarzschild case (plotted as reference). 
} \end{figure} The bottom curve in Fig.~\ref{fig:wPM2} displays the potential $V^{\rm Schw}(\bar{r},\gamma,j)$ corresponding to the exact (un-expanded) Schwarzschild potential $w^{\rm Schw}(\bar{r},\gamma)$ (test-mass limit) defined by Eqs.~\eqref{eq:w}-\eqref{eq:ASchw}. In this case, the horizontal dash-dotted line (corresponding to $p_\infty^2$) is above $V^{\rm Schw}$, which means that the centrifugal contribution is not strong enough to allow the system to scatter (contrary to the result of the NR simulation), but would lead to a plunge. By contrast, all the other PM-expanded potentials lead to scattering. This is due to the fact that the $-w_{n{\rm PM}}(\bar{r},\gamma)$ radial potentials are \textit{less attractive} than $-w^{\rm Schw}(\bar{r},\gamma)$. The first two PM potentials are somewhat exceptional. In the 1PM potential, \begin{equation} V_{\rm 1PM}(\bar{r},\gamma,j) = \frac{j^2}{\bar{r}^2} - \frac{2(2 \gamma^2 - 1)}{\bar{r}}\, , \end{equation} the centrifugal barrier ultimately dominates over the attractive $1/\bar{r}$ potential for any $j\neq0$. Instead, in the 2PM potential \begin{align} V_{\rm 2PM}(\bar{r},\gamma,j) &= \left[j^2 - w_2(\gamma)\right]\frac{1}{\bar{r}^2} - \frac{w_1(\gamma)}{\bar{r}}\,, \nonumber \\ &= \left[j^2 - \frac{3}{2} \frac{(5 \gamma^2 - 1)}{h(\gamma;\nu)}\right]\frac{1}{\bar{r}^2} - \frac{2(2 \gamma^2 - 1)}{\bar{r}}\, , \end{align} the small-$\bar{r}$ behavior is always repulsive (leading to scattering) if $j^2 > w_2(\gamma)$, and always attractive (leading to a plunge) if $j^2 < w_2(\gamma)$. For higher PM orders (except for the conservative 4PM case discussed below), the radial potential $-w_{n{\rm PM}}(\bar{r},\gamma)$ is always attractive and wins over the centrifugal potential $j^2/\bar{r}^2$ at small $\bar{r}$. Then, there exists an energy-dependent critical angular momentum, say $j_0(\gamma,\nu)$, such that the system scatters for $j > j_0$ and plunges for $j < j_0$.
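In practice, $j_0(\gamma,\nu)$ can be computed from the double-root condition on the radicand of Eq.~\eqref{eq:chi_wu}, $p_{\bar{r}}^2(\bar{u}) = 0 = \partial_{\bar{u}}\, p_{\bar{r}}^2(\bar{u})$. A minimal Python sketch (our own naming), specialized to the radiation-reacted 3PM potential; for the energy and mass ratio of the NR data used below ($\gamma \simeq 1.09136$, $\nu = 1/4$) it reproduces the value $j_0^{w_{3{\rm PM}}} \simeq 4.1432$ quoted at the end of this section of the paper:

```python
import math

def j0_crit_3pm(gamma, nu):
    """Critical angular momentum of the radiation-reacted 3PM potential
    w_3PM(u) = w1 u + w2 u^2 + w3 u^3, from the double-root condition
    F(u) = p_inf^2 + w(u) - j^2 u^2 = 0 = F'(u).  Eliminating
    j^2 = w1/(2u) + w2 + (3/2) w3 u leaves
    G(u) = p_inf^2 + (w1/2) u - (w3/2) u^3 = 0 for the critical u."""
    pinf2 = gamma**2 - 1.0
    pinf = math.sqrt(pinf2)
    h = math.sqrt(1.0 + 2.0 * nu * (gamma - 1.0))
    A = 2.0 * math.asinh(math.sqrt(0.5 * (gamma - 1.0)))
    Cb_cons = (2.0 / 3.0) * gamma * (14.0 * gamma**2 + 25.0) \
        + 2.0 * (4.0 * gamma**4 - 12.0 * gamma**2 - 3.0) * A / pinf
    Cb_rad = gamma * (2.0 * gamma**2 - 1.0)**2 / (3.0 * pinf2**2) * (
        (5.0 * gamma**2 - 8.0) / gamma * pinf + (9.0 - 6.0 * gamma**2) * A)
    B = 1.5 * (2.0 * gamma**2 - 1.0) * (5.0 * gamma**2 - 1.0) / pinf2
    w1 = 2.0 * (2.0 * gamma**2 - 1.0)
    w2 = 1.5 * (5.0 * gamma**2 - 1.0) / h
    w3 = (9.0 * gamma**2 - 0.5 - B * (1.0 / h - 1.0)
          - 2.0 * nu / h**2 * (Cb_cons + Cb_rad))
    G = lambda u: pinf2 + 0.5 * w1 * u - 0.5 * w3 * u**3
    lo, hi = 0.0, 1.0
    while G(hi) > 0.0:           # bracket the critical u (assumes w3 > 0)
        hi *= 2.0
    for _ in range(100):         # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if G(mid) > 0.0 else (lo, mid)
    u0 = 0.5 * (lo + hi)
    return math.sqrt(w1 / (2.0 * u0) + w2 + 1.5 * w3 * u0)
```

The same double-root strategy applies at 4PM, at the price of a quartic radicand.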
When $j \rightarrow j_{0}^{+}$, the scattering angle $\chi(\gamma,j)$ has a logarithmic singularity, as discussed below. The conservative 4PM radial potential $-w_{4{\rm PM cons}}(\bar{r},\gamma)$ is exceptional, in that the coefficient of $1/\bar{r}^4$, i.e. $w_4^{\rm cons}(\gamma;\nu)$, is not always positive. More precisely, $w_4^{\rm cons}(\gamma;\nu)$ is positive if $\nu \lesssim 0.2$ or $\gamma \gtrsim 1.425$, but becomes negative in a sub-region of the rectangle $(0.2\lesssim\nu\le0.25,1\leq\gamma\lesssim1.425)$. Our NR data values ($\nu=1/4$, $\gamma \simeq 1.09136$) happen to fall in the region where $w_4^{\rm cons}$ is negative. When this happens, $-w_{4{\rm PM cons}}(\bar{r},\gamma)$ features an (unphysically) repulsive core at small distances. As a consequence, though there still exists a critical $j_0$ leading to a logarithmic singularity in the scattering angle, the system does not plunge for $j < j_0$, but instead scatters against the repulsive core. In this latter region, the scattering angle $\chi$ monotonically decreases with $j$ (even becoming negative) and tends to $-\pi$ in the head-on limit, $j\rightarrow 0$. \subsection{Singularity of $\chi(\gamma,j)$ in the test-mass limit} As a warm-up, let us briefly discuss the singularity structure of $\chi(\gamma,j)$ in the test-mass limit. In this case, it is easier to evaluate $\chi(\gamma,j)$ by using a Schwarzschild radial coordinate $r$, rather than the isotropic $\bar{r}$.
This leads to a scattering angle given by \begin{align} \label{eq:chi_Schw} \chi^{\rm Schw} \left(\gamma, j\right) &= -\pi + \frac{2 \sqrt{2}}{\sqrt{u_3 - u_1}} \times \nonumber \\ &\quad \times \left[K(k_{\rm Schw}^2) - F\left(\sin \varphi_{\rm Schw}; k_{\rm Schw}^2\right)\right]\,, \end{align} where $K$ and $F$ are the complete and incomplete elliptic integrals of the first kind, defined as \begin{align} F(\sin \varphi; k^2) &\equiv \int_{0}^{\varphi} \frac{d \theta}{\sqrt{1 - k^2 \sin^2 \theta}}\,, \nonumber \\ K(k^2) &\equiv F(1; k^2)\,. \end{align} The elliptic parameters entering Eq.~\eqref{eq:chi_Schw} are \begin{align} k_{\rm Schw}^2 &= \frac{u_2-u_1}{u_3-u_1}\,, \nonumber \\ \sin \varphi_{\rm Schw} &= \sqrt{\frac{- u_1}{u_2 - u_1}}\,, \end{align} where $u_i = 1/r_i$ are the three roots of $p_r^2 = 0$ in Schwarzschild coordinates. In our scattering situation, the three solutions satisfy $u_1 < 0 < u_2 < u_3$. The critical angular momentum $j_0^{\rm Schw}$ corresponds to the case where $u_2 = u_3$, and is given by \begin{equation} j_0^{\rm Schw} = \sqrt{\frac{1}{u_0^2} \left(\frac{\gamma^2}{1-2 u_0}-1\right)}\,, \end{equation} where \begin{equation} u_0 = \frac{4-3 \gamma^2 + \gamma \sqrt{9 \gamma^2 - 8}}{8}\,. \end{equation} In the limit $j \rightarrow j_0^{\rm Schw+}$, $k_{\rm Schw}^2 \rightarrow 1^{-}$ and the complete elliptic integral $K(k_{\rm Schw}^2)$ diverges logarithmically, so that \begin{equation} \chi^{\rm Schw} \left(\gamma, j\right) \overset{\hspace{0.2cm} j \rightarrow j_0^{+}}{\approx} \frac{2}{\left(1-\frac{12}{j^2}\right)^{\frac{1}{4}}}\, \ln\left[\frac{1}{1-\frac{j_0^{\rm Schw}(\gamma)}{j}}\right]\,. \end{equation} \subsection{Map between $\chi(\gamma,j)$ and effective-one-body PM radial potentials $w_{n{\rm PM}}$} Let us now compute the explicit expression of the scattering angle corresponding to a given $n$PM radial potential $w_{n{\rm PM}}(\bar{r},\gamma)$, as defined in Eq.~\eqref{eq:wPM}.
Explicitly we define \begin{equation} \label{eq:chi_wPM} \chi^{w\,{\rm eob}}_{n{\rm PM}} \left(\gamma, j\right) \equiv 2\, j \int_{0}^{\bar{u}_{\rm max}(\gamma,j)} \hspace{-0.75cm} \frac{d\bar{u}}{\sqrt{p_\infty^2 + w_{n{\rm PM}}(\bar{u},\gamma) - j^2 \bar{u}^2}} -\pi \,. \end{equation} On the right-hand side of this definition enters the $n$PM-expanded radial potential $w_{n{\rm PM}}(\bar{r},\gamma) \sim 1/\bar{r} + \cdots + 1/\bar{r}^n$. The corresponding nonlinear transformation, \begin{equation} w_{n{\rm PM}}(\bar{r},\gamma) \longrightarrow \chi^{w\,{\rm eob}}_{n{\rm PM}} \left(\gamma, j\right)\,, \end{equation} defines a new sequence of scattering angles, such that the $n^{\rm th}$ angle, $\chi^{w\,{\rm eob}}_{n{\rm PM}} \left(\gamma, j\right)$, incorporates analytical PM information up to the $n$PM order included. Each $\chi^{w\,{\rm eob}}_{n{\rm PM}}\left(\gamma, j\right)$ defines a function of $j$ which differs from the corresponding PM-expanded $\chi_{n{\rm PM}}\left(\gamma, j\right) \sim 1/j + \cdots + 1/j^n$. It is a fully nonlinear function of $j$, the first $n$ terms of whose large-$j$ expansion coincide with those of $\chi_{n{\rm PM}}$. The sequence $\chi^{w\,{\rm eob}}_{n{\rm PM}}$ defines a \textit{resummation} ($w^{\rm eob}$-resummation) of the PM-expanded $\chi_{n{\rm PM}}$. In the following, we study the properties of the $w^{\rm eob}$-resummed sequence $\chi^{w\,{\rm eob}}_{n{\rm PM}}$ and show how it improves the agreement with numerical data, notably because it incorporates a singular behavior of $\chi(j)$ at some critical $j_0^{w_{n{\rm PM}}}$. Let us now give the explicit expressions of $\chi^{w\,{\rm eob}}_{n{\rm PM}}$ up to $n=4$.
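Before specializing to closed forms, note that Eq.~\eqref{eq:chi_wPM} is also easy to evaluate by direct numerical quadrature, which provides a useful cross-check on the expressions that follow. In the Python sketch below (our own naming, a sketch rather than the code used in this paper), the substitution $\bar{u} = \bar{u}_{\rm max}\sin^2\theta$ removes the inverse-square-root singularity at the turning point:

```python
import math

def chi_w_eob(gamma, j, w, n_nodes=20000):
    """Numerical evaluation of Eq. (chi_wPM):
      chi = 2 j * int_0^{u_max} du / sqrt(p_inf^2 + w(u) - j^2 u^2) - pi,
    for a polynomial potential w(u) = w[0] u + w[1] u^2 + ...
    The substitution u = u_max sin^2(theta), together with an exact
    factorization of the radicand F(u) = (u_max - u) g(u) obtained by
    synthetic division, makes the integrand smooth (midpoint rule)."""
    a = [gamma**2 - 1.0] + list(w)          # F(u) = sum_k a[k] u^k ...
    while len(a) < 3:
        a.append(0.0)
    a[2] -= j**2                            # ... minus the centrifugal term
    F = lambda u: sum(c * u**k for k, c in enumerate(a))
    # bracket and bisect the turning point u_max (assumed here to be the
    # only positive root, as in the 1PM and 2PM scattering cases)
    hi = 1e-6
    while F(hi) > 0.0:
        hi *= 2.0
        if hi > 1e9:
            raise ValueError("no turning point found: the system plunges")
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) > 0.0 else (lo, mid)
    u_max = 0.5 * (lo + hi)
    # synthetic division: F(u) = (u - u_max) q(u), hence g = -q
    q = [0.0] * (len(a) - 1)
    for k in range(len(a) - 1, 0, -1):
        q[k - 1] = a[k] + (q[k] * u_max if k < len(a) - 1 else 0.0)
    g = lambda u: -sum(c * u**k for k, c in enumerate(q))
    # composite midpoint rule in theta on [0, pi/2]
    h_th = 0.5 * math.pi / n_nodes
    total = 0.0
    for i in range(n_nodes):
        th = (i + 0.5) * h_th
        u = u_max * math.sin(th)**2
        total += 2.0 * math.sqrt(u_max) * math.sin(th) / math.sqrt(g(u))
    return 2.0 * j * total * h_th - math.pi
```

For the 1PM potential $w(\bar{u}) = w_1 \bar{u}$, this quadrature reproduces the elementary closed form $\chi = 2\arcsin[w_1/\sqrt{w_1^2 + 4 p_\infty^2 j^2}]$ to the accuracy of the integration.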
At 1PM and 2PM, the radicand entering the denominator of the integral is quadratic in $\bar{u}$ and the integration is trivial, yielding \begin{align} \chi^{w\,{\rm eob}}_{1{\rm PM}} \left(\gamma, j\right) &= 4 \arctan \left[\sqrt{\frac{\sqrt{w_1^2(\gamma) + 4 p_\infty^2 j^2} + w_1(\gamma)}{\sqrt{w_1^2(\gamma) + 4 p_\infty^2 j^2} - w_1(\gamma)}}\right] - \pi\,, \nonumber \\ \chi^{w\,{\rm eob}}_{2{\rm PM}} \left(\gamma, j\right) &= -\pi + \frac{4\, j}{\sqrt{j^2 - w_2(\gamma)}} \times \nonumber \\ &\hspace{-1cm}\times \arctan \left\{\sqrt{\frac{\sqrt{w_1^2(\gamma) - 4 p_\infty^2\left[w_2(\gamma)-j^2\right]} + w_1(\gamma)}{\sqrt{w_1^2(\gamma) - 4 p_\infty^2\left[w_2(\gamma)-j^2\right]} - w_1(\gamma)}}\right\}\,. \end{align} As already said, $\chi^{w\,{\rm eob}}_{1{\rm PM}}$ is defined for any $j$ and has no singularity. Instead, $\chi^{w\,{\rm eob}}_{2{\rm PM}}$ is only defined for angular momenta $j^2 \geq w_2(\gamma)$, below which the system plunges instead of scattering. In the limit $j^2 \rightarrow w_2(\gamma)^{+}$, $\chi^{w\,{\rm eob}}_{2{\rm PM}}$ has a power-law singularity $\chi^{w\,{\rm eob}}_{2{\rm PM}} \approx \frac{2 \pi j}{\sqrt{j^2-w_2(\gamma)}}$. At 3PM and 4PM, the scattering angle can be expressed as a combination of elliptic integrals (similarly to the Schwarzschild case). At 3PM, for high-enough angular momenta, $p_{\bar{r}}^2$ has (generally) three real roots $(\bar{u}_1,\bar{u}_2,\bar{u}_3)$, with $\bar{u}_1 < 0 < \bar{u}_2 \leq \bar{u}_3$ (so that $\bar{u}_{\rm max} \equiv \bar{u}_2$), and $\chi^{w\,{\rm eob}}_{3{\rm PM}}$ reads \begin{align} \chi^{w\,{\rm eob}}_{3{\rm PM}} \left(\gamma, j\right) &= -\pi + \frac{4\, j}{\sqrt{w_3(\gamma)(\bar{u}_3 - \bar{u}_1)}} \times \nonumber \\ &\quad \times \left[K(k_{\rm 3PM}^2) - F\left(\sin \varphi_{\rm 3PM}; k_{\rm 3PM}^2\right)\right]\,, \end{align} where \begin{align} k_{\rm 3PM}^2 &= \frac{\bar{u}_2-\bar{u}_1}{\bar{u}_3-\bar{u}_1}\,, \nonumber \\ \sin \varphi_{\rm 3PM} &= \sqrt{\frac{- \bar{u}_1}{\bar{u}_2 - \bar{u}_1}}\,.
\end{align} This formula is valid only when $k_{\rm 3PM}^2 < 1$ (i.e. $\bar{u}_3 > \bar{u}_2$). As in the test-mass limit, $\chi^{w\,{\rm eob}}_{3{\rm PM}}$ diverges logarithmically in the limit $\bar{u}_3 \rightarrow \bar{u}_2$, determining the smallest angular momentum for which scattering occurs. If the 4PM radial potential $w_{4{\rm PM}}(\bar{u},\gamma)$ is such that $p_{\bar{r}}^2$ has four real roots $(\bar{u}_1,\bar{u}_2,\bar{u}_3,\bar{u}_4)$, with $\bar{u}_1 < \bar{u}_2 < 0 < \bar{u}_3 \leq \bar{u}_4$ (and $\bar{u}_{\rm max} \equiv \bar{u}_3$), we obtain\footnote{This is the case for $w_4(\gamma)$ but not for $w_4^{\rm cons}(\gamma)$ for the considered energy and mass ratio. In this case, there is only one negative real root and the result can be written as a different combination of elliptic integrals.} \begin{align} \chi^{w\,{\rm eob}}_{4{\rm PM}} \left(\gamma, j\right) &= -\pi + \frac{4\, j}{\sqrt{w_4(\gamma)(\bar{u}_3 - \bar{u}_1)(\bar{u}_4 - \bar{u}_2)}} \times \nonumber \\ &\quad \times \left[K(k_{\rm 4PM}^2) - F\left(\sin \varphi_{\rm 4PM}; k_{\rm 4PM}^2\right)\right]\,, \end{align} with \begin{align} k_{\rm 4PM}^2 &= \frac{(\bar{u}_4-\bar{u}_1)(\bar{u}_3-\bar{u}_2)}{(\bar{u}_4-\bar{u}_2)(\bar{u}_3-\bar{u}_1)}\,, \nonumber \\ \sin \varphi_{\rm 4PM} &= \sqrt{\frac{\bar{u}_2(\bar{u}_3-\bar{u}_1)}{\bar{u}_1(\bar{u}_3-\bar{u}_2)}}. \end{align} This formula is again valid only for $\bar{u}_3 < \bar{u}_4$. The critical $j_0$ is determined by the coalescence $\bar{u}_3 \rightarrow \bar{u}_4$\,, which yields $k_{\rm 4PM}^2 \rightarrow 1$. At higher PM orders, the computations will be similar but will involve hyper-elliptic integrals. \section{Resumming $\chi$ using its singularity structure} \label{sec:chilog} The origin of the logarithmic behavior in $(j - j_0)$ near the critical angular momentum $j_0$ is expected to be general (at least within our potential-based framework). This logarithmic behavior is simply related to the fact that the two largest positive real roots of the equation $p_\infty^2 - j^2 \bar{u}^2 + w(\bar{u},\gamma) = 0$ coalesce when $j \rightarrow j_0$.
Indeed, denoting these roots as $a(j)$ and $b(j)$, with $a(j) < b(j)$ when $j > j_0$ (e.g. $a=\bar{u}_2$ and $b=\bar{u}_3$ at 3PM), near the coalescence $a(j_0)=b(j_0)$, the scattering angle is given by an integral of the form \begin{equation} \chi \approx 2 j \int_{}^{a} \frac{d\bar{u}}{\sqrt{k(\bar{u}-a)(\bar{u}-b)}} \approx \frac{4 j}{\sqrt{k}} \ln \left[\frac{2 \sqrt{a}}{\sqrt{b-a}}\right]\,. \end{equation} In turn, this singularity implies a logarithmic singularity in $j$ near $j=j_0$ of the type \begin{equation} \label{eq:div_chi} \chi(j) \overset{\hspace{0.2cm} j \rightarrow j_0^+}{\sim} \frac{j}{j_0} \ln \left[\frac{1}{1-\frac{j_0}{j}}\right]\,. \end{equation} The existence of such a universal logarithmic divergence suggests a procedure to resum the PM-expanded angle $\chi_{n{\rm PM}}(\gamma,j)$ by incorporating such a divergence. Specifically, we propose the following resummation procedure of $\chi_{n{\rm PM}}(\gamma,j)$. For convenience, let us define the function \begin{equation} \label{eq:L} \mathcal{L}\left(x\right) \equiv \frac{1}{x} \ln \left[\frac{1}{1-x}\right]\,, \end{equation} such that, when $|x| < 1$, it admits the convergent power-series expansion \begin{equation} \mathcal{L}\left(x\right) = 1 + \frac{x}{2} + \frac{x^2}{3} + \cdots + \frac{x^n}{n+1} + \cdots\,, \end{equation} and such that the logarithmic divergence in $\chi(j)$ near $j = j_0$, Eq.~\eqref{eq:div_chi}, precisely features the factor $\mathcal{L}\left(\frac{j_0}{j}\right)$.
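Numerically, one can confirm that the Taylor coefficients of $\mathcal{L}$ are $1/(k+1)$, a fact used repeatedly below (a minimal Python sketch; the function names are ours):

```python
import math

def L_func(x):
    """L(x) = (1/x) ln[1/(1-x)], Eq. (L), for 0 < |x| < 1."""
    return -math.log(1.0 - x) / x

def L_series(x, kmax):
    """Partial sum of the Taylor expansion of L: sum_{k<=kmax} x^k/(k+1)."""
    return sum(x**k / (k + 1) for k in range(kmax + 1))
```

For instance, $\mathcal{L}(0.3)$ agrees with its truncated series to machine precision, and the small-$x$ behavior $1 + x/2 + x^2/3$ is recovered.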
We can now define the following $\mathcal{L}$-resummation of the PM-expanded $\chi_{n{\rm PM}} \sim 1/j + \cdots + 1/j^n$ as being the unique function $\chi_{n{\rm PM}}^{\mathcal{L}}(j)$ such that \begin{equation} \label{eq:chilogPM} \chi_{n{\rm PM}}^{\mathcal{L}}(\gamma,j) = \mathcal{L}\left(\frac{j_0}{j}\right)\hat{\chi}_{n{\rm PM}}(\gamma,j;j_0)\,, \end{equation} where $\hat{\chi}_{n{\rm PM}}(\gamma,j;j_0)$ is an $n^{\rm th}$-order polynomial in $1/j$, say, \begin{equation} \label{eq:hatchi} \hat{\chi}_{n{\rm PM}}(\gamma,j;j_0) \equiv \sum_{i = 1}^{n} 2 \frac{\hat{\chi}_i(\gamma;j_0)}{j^i}\,, \end{equation} such that the first $n$ terms in the large-$j$ expansion of $\chi_{n{\rm PM}}^{\mathcal{L}}(\gamma,j)$ coincide with $\chi_{n{\rm PM}}(\gamma,j)$. The latter condition uniquely determines the expressions of the $\hat{\chi}_i(\gamma;j_0)$ coefficients in terms of $j_0$ and of the original PM coefficients $\chi_i(\gamma)$, namely \begin{align} \label{eq:hatchi_n} \hat{\chi}_1 &= \chi_1\,, \nonumber \\ \hat{\chi}_2 &= \chi_2 - \frac{j_0}{2}\chi_1\,, \nonumber \\ \hat{\chi}_3 &= \chi_3 - \frac{j_0}{2}\chi_2 - \frac{j_0^2}{12}\chi_1\,, \nonumber \\ \hat{\chi}_4 &= \chi_4 - \frac{j_0}{2}\chi_3 - \frac{j_0^2}{12}\chi_2 - \frac{j_0^3}{24}\chi_1\,. \end{align} In order to define such a resummation procedure, we need to choose a value of $j_0$. Such a value can be \textit{analytically} defined by comparing the successive terms of the expansion \begin{equation} \mathcal{L}\left(\frac{j_0}{j}\right) = 1 + \frac{j_0}{2\, j} + \frac{j_0^2}{3\, j^2} + \cdots + \frac{j_0^n}{(n+1)\, j^n} + \cdots \,, \end{equation} to the corresponding terms in \begin{equation} \frac{j\, \chi_{n{\rm PM}}(\gamma,j)}{2\, \chi_1(\gamma)} = 1 + \frac{\chi_2}{\chi_1 j} + \frac{\chi_3}{\chi_1 j^2} + \cdots + \frac{\chi_{n}}{\chi_1 j^{n-1}}\,. \end{equation} Following \textit{Cauchy's rule}, one expects to have a more accurate estimate of $j_0$ by using the highest known term in the PM expansion, i.e.
\begin{equation} \label{eq:j0PM} j_0^{n{\rm PM}}(\gamma) \equiv \left[n\frac{\chi_n(\gamma)}{\chi_1(\gamma)}\right]^{\frac{1}{n-1}}\,, \hspace{1cm} n>1\,. \end{equation} In the present paper, we use this resummation procedure for two applications: (i) for resumming $\chi_{\rm 4PM}$; and (ii) for fitting the NR data. \section{Comparison between PM and NR scattering angles} \label{sec:nr_comp} In this section we compare four different analytical definitions of the scattering angles against NR simulations: (i) the non-resummed PM-expanded scattering angle $\chi_{n{\rm PM}}$; (ii) the $\mathcal{L}$-resummed PM scattering angle $\chi^{\mathcal{L}}_{n{\rm PM}}$; (iii) the $w^{\rm eob}$-resummed PM scattering angle $\chi^{w\,{\rm eob}}_{n{\rm PM}}$; and, finally, (iv) a sequence of scattering angles predicted by one specific EOBNR dynamics (\texttt{TEOBResumS}{}~\cite{Nagar:2021xnh,Hopper:2022rwo}). Concerning numerical results, we use one of the few publicly available NR simulation suites of BH scatterings~\cite{Damour:2014afa}. Reference~\cite{Damour:2014afa} computed a sequence of ten simulations of equal-mass nonspinning BH binaries with (almost)\footnote{The initial energies of the NR simulations range between $\hat{E}_{\rm in} = 1.0225555(50)$ and $\hat{E}_{\rm in} = 1.0225938(50)$. In all the plots, we fixed $\hat{E}_{\rm in}$ to the average value $\hat{E}_{\rm in} = 1.0225846$.} fixed initial energy $\hat{E}_{\rm in} \equiv E_{\rm in}/M \simeq 1.02258$ and with initial angular momenta $\hat{J}_{\rm in} \equiv J_{\rm in}/M^2$ varying between $\hat{J}_{\rm in} = 1.832883(58)$ (corresponding to an NR impact parameter $b_{\rm NR} = 16.0 M$) and $\hat{J}_{\rm in} = 1.099652(36)$ (corresponding to $b_{\rm NR} = 9.6 M$). The simulation corresponding to the latter impact parameter probes the strong-field interaction of two BHs and led to a scattering angle $\chi_{\rm NR} = 5.337(45)$ radians, i.e. at the beginning of the zoom-whirl regime. 
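The $\mathcal{L}$-resummation defined above is simple to implement. The Python sketch below (our own naming, a sketch rather than the code used for this paper) builds the $\hat{\chi}_i$ recursively from the matching condition; it can be checked against both the closed forms of Eq.~\eqref{eq:hatchi_n} and the requirement that the first $n$ orders in $1/j$ of Eq.~\eqref{eq:chilogPM} reproduce $\chi_{n{\rm PM}}$:

```python
import math

def j0_estimate(chi, n):
    """Cauchy-rule estimate j0^{nPM} = [n chi_n/chi_1]^{1/(n-1)}, Eq. (j0PM);
    chi is the list [chi_1, ..., chi_n]."""
    return (n * chi[n - 1] / chi[0]) ** (1.0 / (n - 1))

def chi_hat_coeffs(chi, j0):
    """hat-chi_i, built recursively so that the first n orders in 1/j of
    L(j0/j) * sum_i 2 hat-chi_i / j^i reproduce chi_nPM; the Taylor
    coefficients of L are 1/(k+1)."""
    hat = []
    for i in range(1, len(chi) + 1):
        corr = sum(j0 ** (i - k) / (i - k + 1.0) * hat[k - 1]
                   for k in range(1, i))
        hat.append(chi[i - 1] - corr)
    return hat

def chi_L_resummed(chi, j0, j):
    """L-resummed scattering angle of Eq. (chilogPM)."""
    x = j0 / j
    L = -math.log(1.0 - x) / x
    hat = chi_hat_coeffs(chi, j0)
    return L * sum(2.0 * h / j**i for i, h in enumerate(hat, start=1))
```

With four input coefficients, the difference between $\chi^{\mathcal{L}}_{\rm 4PM}$ and $\chi_{\rm 4PM}$ is $\mathcal{O}(1/j^5)$ at large $j$, as it should be.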
\subsection{Comparing NR data to non-resummed ($\chi_{n{\rm PM}}$) and $\mathcal{L}$-resummed ($\chi^{\mathcal{L}}_{n{\rm PM}}$) scattering angles} \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig02.pdf} \caption{ \label{fig:chiPM} Scattering angle comparison between the numerical results of Ref.~\cite{Damour:2014afa} and the PM-expanded scattering angles $\chi_{n{\rm PM}}$. In order to show the effect of radiative terms, we also plot the conservative part of the 3PM and 4PM scattering angles (dotted lines). Using dashed lines, we exhibit the $\mathcal{L}$-resummed 3PM and 4PM (radiation-reacted) scattering angles $\chi^\mathcal{L}_{\rm 3PM}$ and $\chi^\mathcal{L}_{\rm 4PM}$. } \end{figure} In Fig.~\ref{fig:chiPM} we contrast NR data (indicated by black dots together with their error bars) with the sequence of analytically available PM-expanded scattering angles $\chi_{n{\rm PM}}$ (with $1 \leq n \leq 4$), given by Eq.~\eqref{eq:chiPM}. For the 3PM and 4PM orders we include both the conservative scattering angle and the radiation-reacted one. This figure extends Fig.~6 of Ref.~\cite{Khalil:2022ylj} in two ways: (i) we include in the comparison the three smallest impact-parameter simulations; and (ii) we take advantage of recent analytical work~\cite{Manohar:2022dea,Dlapa:2022lmu,Bini:2022enm} to include radiation-reaction effects in the 4PM scattering. As expected, PM expansions correctly capture the NR behavior for the largest values of the angular momentum, i.e. in the weak-field regime. By contrast, for lower angular momenta (stronger fields), the differences between NR and PM results become large. Increasing the PM order improves the agreement with NR but is not enough to reach satisfactory agreement. For instance, for the lowest angular momentum datum, the 4PM-expanded (radiation-reacted) prediction yields $\chi_{\rm 4PM} \simeq 2.713$, while the NR result, $\chi_{\rm NR} = 5.337(45)$, is almost twice as large.
In Fig.~\ref{fig:chiPM}, we added the (purely analytical) $\mathcal{L}$-resummed predictions for the radiation-reacted 3PM and 4PM angles (topmost dashed curves), using Eq.~\eqref{eq:chilogPM}, and the corresponding $j_0^{n{\rm PM}}$ from Eq.~\eqref{eq:j0PM}. For clarity, we do not exhibit the corresponding conservative 3PM and 4PM scattering angles. They are both quite close to (but somewhat below) their respective radiation-reacted versions. Figure~\ref{fig:chiPM} shows how the $\mathcal{L}$-resummation is quite efficient at improving the agreement between analytical PM information and NR data. For example, for the lowest angular momentum datum, the $\mathcal{L}$-resummed 4PM (radiation-reacted) prediction is $\chi^\mathcal{L}_{\rm 4PM} \simeq 4.088$, which is $\simeq 23\%$ smaller than the corresponding NR value. \subsection{Comparing NR data to $w^{\rm eob}$-resummed ($\chi^{w\,{\rm eob}}_{n{\rm PM}}$) scattering angles} \begin{figure}[t] \includegraphics[width=0.485\textwidth]{fig03.pdf} \caption{ \label{fig:chiwPM2} Same comparison as Fig.~\ref{fig:chiPM} using the $w^{\rm eob}$-resummed scattering angles $\chi^{w\,{\rm eob}}_{n{\rm PM}}$ derived through the use of the EOB radial potentials $w_{n{\rm PM}}$. The agreement using 4PM results including radiation-reaction terms is excellent. } \end{figure} Fig.~\ref{fig:chiwPM2} compares the NR data to the $w^{\rm eob}$-resummed angles $\chi^{w\,{\rm eob}}_{n{\rm PM}}$ [Eq.~\eqref{eq:chi_wPM}], i.e. to the sequence of angles computed by studying the scattering of a particle in the corresponding $n$PM-order potential $w_{n{\rm PM}}(\bar{r},\gamma) \sim 1/\bar{r} + \cdots + 1/\bar{r}^n$ [Eq.~\eqref{eq:wPM}]. Let us emphasize that each such potential is completely analytically defined from the corresponding $n$PM scattering angles, via Eq.~\eqref{eq:chi_to_w}.
Again, for the 3PM and 4PM orders, we show scattering angles computed through both the corresponding conservative and radiation-reacted $w(\bar{r},\gamma)$ potentials. Apart from the conservative 4PM (light-blue dotted) curve, the $w^{\rm eob}$-resummed angles $\chi^{w\,{\rm eob}}_{n{\rm PM}}$ succeed in defining a sequence of approximants which not only gets closer to NR data as the PM order increases, but also reaches, at the (radiation-reacted) 4PM level, an excellent agreement with NR data. E.g., for the lowest angular momentum datum, its prediction yields $\chi^{w\,{\rm eob}}_{\rm 4PM} \simeq 5.490$, which is only $\simeq 2.9\%$ higher than the corresponding NR result, $\chi_{\rm NR} = 5.337(45)$. This excellent agreement is partly due to the fact that the critical angular momentum $j_0$ determined from the radial potential $w_{4{\rm PM}}(\bar{r},\gamma)$ [namely $j_0^{w_{4{\rm PM}}}(\gamma \simeq 1.09136) \simeq 4.3138$], is very close to the one obtained by $\mathcal{L}$-fitting the NR data [namely $j_0^{\rm fit} \simeq 4.3092$]. See the next section. The exceptional character of the conservative 4PM radial potential (discussed above) is at the root of the relatively poor performance of the corresponding $\chi^{w\,{\rm eob}}_{\rm 4PM, cons}$ prediction. In particular, the presence of a repulsive core influences the value of the critical $j_0$ and determines $j_0^{w_{\rm 4PM, cons}}(\gamma \simeq 1.09136) \simeq 4.0888$, which is rather different from the radiation-reacted 4PM estimate and is lower than the 3PM ones, e.g. $j_0^{w_{\rm 3PM}}(\gamma \simeq 1.09136) \simeq 4.1432$.
\subsection{Comparing NR data to \texttt{TEOBResumS}{} scattering angles} Finally, in Fig.~\ref{fig:chiPM_TEOB}, we compare the scattering angle predictions of the EOBNR model \texttt{TEOBResumS}{}\footnote{This EOB model combines an NR-calibrated high PN accuracy Hamiltonian with an analytical radiation-reaction force.}~\cite{Nagar:2021xnh,Hopper:2022rwo} to NR data and to the (radiation-reacted) $w^{\rm eob}$-resummed 3PM and 4PM scattering angles. This EOB model is run with initial conditions $(\hat{E}_{\rm in},\hat{J}_{\rm in})$. The scattering angle predicted by \texttt{TEOBResumS}{} (denoted by $\chi^{\rm EOBNR}$) exhibits an excellent agreement with NR data for all angular momenta. However, the bottom panel of Fig.~\ref{fig:chiPM_TEOB}, which displays the fractional differences with NR data (see corresponding Table~\ref{tab:chi_NR}), shows that the EOBNR differences are systematically larger (in absolute value) than the $w_{\rm 4PM}$-NR ones. Both of them, however, are compatible with the error bar of NR data, except for the two smallest impact-parameter data points. [The center-of-mass impact parameters, $b = \frac{h j}{p_\infty}$, of the first two rows of Table~\ref{tab:chi_NR} are respectively $b \simeq 10.20$ and $b \simeq 10.41$.] \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig04.pdf} \caption{ \label{fig:chiPM_TEOB} Comparison between NR simulations, PM results and the EOBNR model \texttt{TEOBResumS}{}. Top panel: scattering angles. Bottom panel: fractional differences with respect to numerical results. The shaded grey area represents the NR errors. } \end{figure} \begin{table}[t] \caption{ \label{tab:chi_NR} Comparison between PM, \texttt{TEOBResumS}{} and NR scattering angles for the equal-mass, nonspinning configurations of Ref.~\cite{Damour:2014afa}.
We report (in order): initial energy $\hat{E}_{\rm in}$; initial angular momentum $\hat{J}_{\rm in}$; NR scattering angle $\chi^{\rm NR}$; NR percentage error $\widehat{\sigma}\chi^{\rm NR}$; EOBNR scattering angle $\chi^{\rm EOBNR}$ and corresponding fractional difference with respect to NR data $\widehat{\Delta} \chi^{\rm EOBNR} \equiv \chi^{\rm EOBNR}/\chi^{\rm NR} -1$; $\chi_{\rm 4PM}^{w\,{\rm eob}}$ and fractional difference $\widehat{\Delta} \chi_{\rm 4PM}^{w\,{\rm eob}}$. } \begin{center} \begin{ruledtabular} \begin{tabular}{ c c | c c | c c | c c } $\hat{E}_{\rm in}$ & $\hat{J}_{\rm in}$ & $\chi^{\rm NR}$ & $\widehat{\sigma} \chi^{\rm NR}$ & $\chi^{\rm EOBNR}$ & $\widehat{\Delta} \chi^{\rm EOBNR}$ & $\chi_{\rm 4PM}^{w\,{\rm eob}}$ & $\widehat{\Delta} \chi_{\rm 4PM}^{w\,{\rm eob}}$ \\ \hline 1.023 & 1.100 & 5.337 & 0.85\% & 5.539 & 3.78\% & 5.490 & 2.86\% \\ 1.023 & 1.123 & 4.416 & 0.55\% & 4.498 & 1.86\% & 4.473 & 1.29\% \\ 1.023 & 1.146 & 3.890 & 0.76\% & 3.917 & 0.68\% & 3.909 & 0.49\% \\ 1.023 & 1.214 & 3.002 & 0.81\% & 2.989 & $-$0.42\% & 3.006 & 0.13\% \\ 1.023 & 1.260 & 2.653 & 0.86\% & 2.638 & $-$0.58\% & 2.657 & 0.14\% \\ 1.023 & 1.375 & 2.107 & 1.24\% & 2.093 & $-$0.64\% & 2.109 & 0.11\% \\ 1.023 & 1.489 & 1.773 & 1.67\% & 1.764 & $-$0.53\% & 1.775 & 0.11\% \\ 1.023 & 1.604 & 1.541 & 2.04\% & 1.535 & $-$0.38\% & 1.544 & 0.16\% \\ 1.023 & 1.718 & 1.368 & 2.30\% & 1.364 & $-$0.30\% & 1.371 & 0.17\% \\ 1.023 & 1.833 & 1.234 & 2.69\% & 1.230 & $-$0.29\% & 1.236 & 0.13\% \end{tabular} \end{ruledtabular} \end{center} \end{table} In order to further probe the relative performances of our two best analytical scattering predictions, we report in Table~\ref{tab:chi_Seth} the comparisons of $\chi^{\rm EOBNR}$ and $\chi^{w\,{\rm eob}}_{\rm 4PM}$ to the recent NR results of Ref.~\cite{Hopper:2022rwo}. 
The latter simulations are somewhat complementary to the ones of Ref.~\cite{Damour:2014afa}, because they were performed with (almost) fixed angular momentum and varying energies. Table~\ref{tab:chi_Seth} confirms the excellent performances of both \texttt{TEOBResumS}{} and the $w^{\rm eob}$-resummation of the 4PM scattering angle. Again, the EOBNR differences with respect to the numerical data are systematically larger (in absolute value) than the $w_{\rm 4PM}$ ones, though only the two smallest impact-parameter data points exhibit differences larger than the NR error bar. [The center-of-mass impact parameters of the last two rows of Table~\ref{tab:chi_Seth} are respectively $b \simeq 10.24$ and $b \simeq 8.63$.] \begin{table}[t] \caption{ \label{tab:chi_Seth} Comparison between PM, \texttt{TEOBResumS}{} and NR scattering angles for the equal-mass, nonspinning configurations of Ref.~\cite{Hopper:2022rwo}. The initial angular momentum is approximately constant, while the energy varies. The reported columns are the same as in Table~\ref{tab:chi_NR}.
} \begin{center} \begin{ruledtabular} \begin{tabular}{ c c | c c | c c | c c } $\hat{E}_{\rm in}$ & $\hat{J}_{\rm in}$ & $\chi^{\rm NR}$ & $\widehat{\sigma} \chi^{\rm NR}$ & $\chi^{\rm EOBNR}$ & $\widehat{\Delta} \chi^{\rm EOBNR}$ & $\chi_{\rm 4PM}^{w\,{\rm eob}}$ & $\widehat{\Delta} \chi_{\rm 4PM}^{w\,{\rm eob}}$ \\ \hline 1.005 & 1.152 & 3.524 & 2.37\% & 3.500 & $-$0.69\% & 3.538 & 0.39\% \\ 1.015 & 1.152 & 3.420 & 0.66\% & 3.395 & $-$0.71\% & 3.421 & 0.04\% \\ 1.020 & 1.152 & 3.613 & 0.48\% & 3.614 & 0.02\% & 3.625 & 0.32\% \\ 1.025 & 1.152 & 3.936 & 0.39\% & 3.997 & 1.54\% & 3.977 & 1.02\% \\ 1.035 & 1.152 & 5.360 & 0.29\% & 6.038 & 12.63\% & 5.834 & 8.84\% \\ \end{tabular} \end{ruledtabular} \end{center} \end{table} \section{Extracting the effective-one-body radial potential from numerical scattering data} \label{sec:inversion} In Secs.~\ref{sec:scatt_angle} and~\ref{sec:chiw}, we have shown how to compute the scattering angle of a BH binary from the knowledge of a radial potential $w(\bar{r},\gamma)$, which encapsulates the general relativistic gravitational interaction in the presently used EOB formalism. Let us now do the inverse: starting from the knowledge of a sequence of scattering angles at fixed energy (and varying $j$), we wish to extract the value of a corresponding (energy-dependent, radiation-reacted) radial potential $w_{\rm NR}(\bar{r},\gamma)$. We will do so in two steps: (i) we replace the discrete set of NR scattering angles (at fixed energy) by a continuous function of $j$; and (ii) we use Firsov's inversion formula~\cite{Landau:1960mec} (see also Ref.~\cite{Kalin:2019rwq}) to invert Eq.~\eqref{eq:chi_w}. \subsection{$\mathcal{L}$-resummation of NR data} For the first step, we adapt the $\mathcal{L}$-resummation technique of Sec.~\ref{sec:chilog} to the discrete sequence of NR data.
Namely, we define a continuous function of $j$, $\chi_{\rm NR}^{\rm fit}(j)$, by least-square fitting the ten NR data points\footnote{In doing so, we neglect the fractionally small differences in initial energies and only consider the average energy $\hat{E}_{\rm in} = 1.0225846$.} to a function of $1/j$ incorporating the logarithmically singular function $\mathcal{L}\left(j_0/j\right)$, defined in Eq.~\eqref{eq:L}. More precisely, we use a fitting template of the general form \begin{align} \label{eq:chi_fit_gen} &\chi^{\rm fit}_{\rm gen}(j;j_0,a_{n+1},\cdots,a_{n+k}) = \nonumber \\ &\mathcal{L}\left(\frac{j_0}{j}\right)\left[\hat{\chi}_{n{\rm PM}}(j;j_0) + 2 \frac{a_{n+1}}{j^{n+1}} + \dots + 2 \frac{a_{n+k}}{j^{n+k}} \right]\,, \end{align} where $\hat{\chi}_{n{\rm PM}}$ is the ($j_0$-dependent) polynomial in $1/j$ defined in Eqs.~\eqref{eq:hatchi} and \eqref{eq:hatchi_n}. Given any fitting template of the form \eqref{eq:chi_fit_gen}, the procedure is to determine the values of the $k+1$ free parameters $(j_0,a_{n+1},\cdots,a_{n+k})$ by least-square fitting the discrete NR data. Such a procedure depends on the choice of the PM order $n$ that one is ready to assume as known. Here we shall take $n = 3$ in order to incorporate the correct large-$j$ behavior in a minimal way. This choice makes our results completely independent of the recently acquired 4PM knowledge. Concerning the choice of $k$, we used the minimal value that led to a reduced chi-squared smaller than one. We found, rather remarkably, that it was enough to take $k=1$ and that increasing $k$ clearly led to overfitting. In conclusion, we used as fitting template the function \begin{equation} \label{eq:chifit} \chi_{\rm NR}^{\rm fit}(j) = \mathcal{L}\left(\frac{j_0}{j}\right)\left[\hat{\chi}_{\rm 3PM}(j;j_0) + 2 \frac{a_4}{j^4} \right], \end{equation} which depends only on two\footnote{When using templates incorporating either no singularities in $j$ (polynomials in $1/j$) or a different singularity (e.g.
a simple pole), we found that one needed more parameters and that the resulting fits seemed less reliable.} fitting parameters: an effective value of the critical angular momentum $j_0$; and an effective value of the 4PM-level parameter $a_4$. The best-fit parameters are found to be \begin{align} \label{eq:fit_params} a_4^{\rm fit} &= 9.61 \pm 0.68\,, \nonumber \\ j_0^{\rm fit} &= 4.3092 \pm 0.0018\,, \end{align} leading to a reduced chi-squared $\chi^2/(10-2) \simeq 0.096$, corresponding to ten data points and two fitted parameters. One should keep in mind that this representation is only valid for one value of the energy, namely $\hat{E}_{\rm in} \simeq 1.02258$, corresponding to $\gamma \simeq 1.09136$, and for angular momenta $\hat{J}_{\rm in} \gtrsim 1.100$. This formula also assumes that we are in the equal-mass case, i.e. $\nu = \frac14$, so that $j = 4\hat{J}_{\rm in}$. Since we decided not to include the full analytical knowledge at our disposal but to instead leave the 4PM-level coefficient as a free parameter, it is possible to compare the analytical (radiation-reacted) 4PM coefficient $\chi_4(\gamma)$ [Eq.~\eqref{eq:chi_i}] to its corresponding NR-fitted value, say $\chi_{4}^{\rm fit}$, obtained by extracting the coefficient of $1/j^4$ in the expansion of $\frac12\chi_{\rm NR}^{\rm fit}(j)$ as a power series in $1/j$. For the considered energy, we find $\chi_{4}^{\rm fit} \simeq 58.11$, while its analytical counterpart is $\chi_{4} (\gamma \simeq 1.09136) \simeq 63.33$. \subsection{Comparing various estimates of the critical angular momentum $j_0$} \label{subsec:j0} In the previous subsection, we compared NR-extracted information about $\chi_4$ to its analytical value. Similarly, one can compare NR-extracted and analytical estimates of the critical angular momentum $j_0$ that determines the boundary between scattering and plunge. Above, we indicated several analytical ways of estimating $j_0$.
As explained in Sec.~\ref{sec:chilog}, an analytical value of $j_0$, say $j_0^{w_{n{\rm PM}}}$, is determined at each PM level (with $n\geq3$) by studying the coalescence of the two largest positive real roots of the equation $\mathcal{E}_{(j,\gamma)}(\bar{u}) \equiv p_\infty^2 - j^2 \bar{u}^2 + w_{n{\rm PM}}(\bar{u},\gamma) = 0$. As $w_{n{\rm PM}}(\bar{u},\gamma)$ is a polynomial in $\bar{u}$, the critical $j_0$ is obtained by solving polynomial equations [discriminant of $\mathcal{E}_{(j,\gamma)}(\bar{u})$]. For the considered energy and equal masses, we find for the (radiation-reacted) 3PM and 4PM estimates \begin{align} j_0^{w_{3{\rm PM}}}(\gamma \simeq 1.09136) &\simeq 4.1432\,, \nonumber \\ j_0^{w_{4{\rm PM}}}(\gamma \simeq 1.09136) &\simeq 4.3138\,. \end{align} Note that the 4PM value is remarkably close to the fitting parameter $j_0^{\rm fit}$, Eq.~\eqref{eq:fit_params}. For completeness, let us also mention the other (Cauchy-like) PM-related analytical estimates of $j_0$, say $j_0^{{\rm C}{n{\rm PM}}}(\gamma)$, defined in Eq.~\eqref{eq:j0PM}. These are, at the (radiation-reacted) 3PM and 4PM levels \begin{align} j_0^{\rm C3{\rm PM}}(\gamma \simeq 1.09136) &\simeq 3.8886\,, \nonumber \\ j_0^{\rm C4{\rm PM}}(\gamma \simeq 1.09136) &\simeq 4.1890\,. \end{align} Note that the Cauchy-based 4PM value is also rather close to $j_0^{\rm fit}$, though less so than the $w$-based estimate. Besides the NR-based critical $j_0^{\rm fit}$ extracted here from the NR data of Ref.~\cite{Damour:2014afa}, other numerical simulations have estimated the critical value of $j_0$ in high-energy BH collisions~\cite{Shibata:2008rq,Sperhake:2009jz}. Reference~\cite{Shibata:2008rq} extracted the value of the critical impact parameter in the collision of equal-mass BHs with center-of-mass velocities $v_{\rm cm} = (0.6,0.7,0.8,0.9)$. Reference~\cite{Sperhake:2009jz} extracted $j_0$ for $v_{\rm cm} = 0.94$.
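The root-coalescence computation described above can be mimicked numerically. The following sketch is not the paper's exact discriminant calculation: it uses a toy polynomial potential $w(\bar u) = 2\bar u + 2\bar u^3$ (a stand-in for $w_{n{\rm PM}}$) and bisects on $j$ until the barrier minimum of $\mathcal{E}_{(j,\gamma)}(\bar u) = p_\infty^2 - j^2\bar u^2 + w(\bar u)$ touches zero, which is equivalent to the coalescence of its two largest positive roots:

```python
import numpy as np

def critical_j(w_coeffs, p_inf2, u_max=3.0, j_lo=0.1, j_hi=20.0, tol=1e-8):
    """j0 such that the two largest positive roots of
    E(u) = p_inf2 - j^2 u^2 + w(u) coalesce, where
    w(u) = w_coeffs[0]*u + w_coeffs[1]*u^2 + ... (toy potential)."""
    u = np.linspace(1e-6, u_max, 20000)
    w = sum(c * u ** (k + 1) for k, c in enumerate(w_coeffs))

    def has_turning_point(j):          # barrier dips below zero => scattering
        return np.min(p_inf2 - j ** 2 * u ** 2 + w) < 0.0

    while j_hi - j_lo > tol:           # bisection on the scatter/plunge boundary
        j_mid = 0.5 * (j_lo + j_hi)
        if has_turning_point(j_mid):
            j_hi = j_mid
        else:
            j_lo = j_mid
    return 0.5 * (j_lo + j_hi)

# Analytic check: for w(u) = 2u + 2u^3 and p_inf2 = 0.1, the double-root
# conditions E = E' = 0 give u*^3 - u* = 0.1 and j0^2 = (1 + 3u*^2)/u*,
# i.e. j0 ~ 2.0237.
assert abs(critical_j([2.0, 0.0, 2.0], 0.1) - 2.0237) < 1e-3
```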
It is convenient to express these results in terms of the quantity $J_0/E^2$ (measuring the dimensionless ``Kerr parameter'') of the system, namely \begin{equation} \frac{J_0}{E^2} = \frac{\nu\, j_0}{1+2\nu\left(\gamma-1\right)}\,. \end{equation} In Fig.~\ref{fig:j0}, we compare various estimates of $J_0/E^2$ for the equal-mass case, $\nu = \frac{1}{4}$, as a function of the center-of-mass velocity \begin{equation} v_{\rm cm} = \sqrt{\frac{\gamma-1}{\gamma+1}}\,. \end{equation} \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig05.pdf} \caption{ \label{fig:j0} Comparison between various analytical estimates of the critical (rescaled) angular momentum $J_0/E^2$ and NR results. We display the value determined by the (radiation-reacted) $w_{\rm 3PM}$ and $w_{\rm 4PM}$, together with the respective estimates using Cauchy's rule [Eq.\eqref{eq:j0PM}]. The Schwarzschild estimate (grey line) is plotted as reference. The NR points are from left to right: (i) our estimate coming from the fit of Eqs.~\eqref{eq:chifit} and \eqref{eq:fit_params}; (ii) the four values computed in Ref.~\cite{Shibata:2008rq}; (iii) the value calculated in Ref.~\cite{Sperhake:2009jz}. } \end{figure} The topmost curve in Fig.~\ref{fig:j0} is the test-mass (Schwarzschild) estimate of $J_0/E^2$, which is plotted as a reference. It monotonically increases with $v_{\rm cm}$ from 1 at $v_{\rm cm} = 0$ to $\frac32\sqrt{3} \approx 2.598$ at $v_{\rm cm} = 1$. The numerical simulations suggest that the critical $J_0^{\rm NR}/E^2$ has a finite limit, slightly higher than 1, in the ultra-high-energy regime $v_{\rm cm} \rightarrow 1$ ($\gamma \rightarrow \infty$). The last NR data point~\cite{Sperhake:2009jz} is $J_0^{\rm NR}/E^2 = (1.175 \pm 0.025)$ for $v_{\rm cm} = 0.94$, i.e. $\gamma \simeq 16.18$. 
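The two conversions above are simple enough to check directly; this illustrative sketch (not part of the paper) reproduces $v_{\rm cm} \simeq 0.2090$ for $\gamma \simeq 1.09136$, the $\gamma \simeq 16.18$ quoted for $v_{\rm cm} = 0.94$, and the rescaled Kerr parameter obtained from the 4PM-based $j_0^{w_{4{\rm PM}}} \simeq 4.3138$ of the previous subsection:

```python
import math

def v_cm(gamma):
    """Center-of-mass velocity from the relative Lorentz factor:
    v_cm = sqrt((gamma - 1)/(gamma + 1))."""
    return math.sqrt((gamma - 1.0) / (gamma + 1.0))

def J0_over_E2(j0, gamma, nu=0.25):
    """Dimensionless 'Kerr parameter' of the system:
    J0/E^2 = nu*j0 / (1 + 2*nu*(gamma - 1))."""
    return nu * j0 / (1.0 + 2.0 * nu * (gamma - 1.0))

gamma = 1.09136                        # energy of the NR sequence used here
assert abs(v_cm(gamma) - 0.2090) < 1e-4
assert abs(v_cm(16.18) - 0.94) < 1e-3  # high-energy point of Sperhake et al.
assert abs(J0_over_E2(4.3138, gamma) - 1.0313) < 1e-3
```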
For the mildly-relativistic velocities considered in Ref.~\cite{Damour:2014afa}, $v_{\rm cm} \simeq 0.2090$ ($\gamma \simeq 1.09136$), our $w_{\rm 4PM}$ estimate, $J_0^{w{\rm 4PM}}/E^2 \simeq 1.03134$, is the closest to the NR result, $J_0^{\rm NR, fit}/E^2 = (1.03024 \pm 0.00043)$. By contrast, Fig.~\ref{fig:j0} suggests that both 4PM-level estimates become inaccurate for velocities $v_{\rm cm} \gtrsim 0.6$ (corresponding to $\gamma \gtrsim 2.125$). Actually, both 4PM-level estimates of $J_0/E^2$ have a power-law divergence when $v_{\rm cm} \rightarrow 1$ ($\gamma \rightarrow \infty$) linked to the power-law divergence of $\chi_{4}/\gamma^4 \propto \gamma^{1/2}$ in the high-energy limit~\cite{Bini:2022enm}. On the other hand, both (radiation-reacted) 3PM-level analytical estimates seem to be in qualitative agreement with corresponding NR data for most velocities\footnote{The 3PM Cauchy-like estimate has, however, the wrong scaling in the low-velocity limit.}, especially in the high-energy limit. This is probably linked to the good high-energy behavior of the \textit{radiation-reacted} 3PM scattering angle~\cite{Amati:1990xe,Damour:2020tta,DiVecchia:2021ndb}. In order to clarify the physics of scattering BHs, it would be important to fill the gap in NR data visible in Fig.~\ref{fig:j0} by exploring center-of-mass velocities in the range $0.2 \lesssim v_{\rm cm} \lesssim 0.6$. \subsection{Extracting the EOB radial potential $w_{\rm NR}$ from NR data} \label{subsec:inv} Let us now come to the second step of our strategy for extracting information from NR data. It is based on Firsov's inversion formula, which reads \begin{equation} \label{eq:Firsov_j} \ln \left[1 + \frac{w(\bar{u},p_\infty)}{p_\infty^2}\right] = \frac{2}{\pi} \int_{\bar{r} |p(\bar{r},\gamma)|}^{\infty} dj \frac{\chi (\gamma, j)}{\sqrt{j^2 - \bar{r}^2 \, p^2(\bar{r},\gamma)}}\,, \end{equation} where \begin{equation} p^2(\bar{r},\gamma) \equiv p_\infty^2 + w(\bar{u},p_\infty)\,.
\end{equation} Introducing the rescaled radial potential \begin{equation} \hat{w}(\bar{r},\gamma) \equiv \frac{w(\bar{r},\gamma)}{p_\infty^2}\,, \end{equation} and an \textit{effective} impact parameter \begin{equation} b \equiv \frac{j}{p_\infty}, \end{equation} Eq.~\eqref{eq:Firsov_j} becomes \begin{equation} \label{eq:w_inv} \ln \left[1 + \hat{w}(\bar{r},\gamma)\right] = \frac{2}{\pi} \int_{\bar{r}\, \sqrt{1 + \hat{w}(\bar{r},\gamma)}}^{\infty} db \frac{\chi (\gamma, b)}{\sqrt{b^2 - \bar{r}^2 \left[1 + \hat{w}(\bar{r},\gamma)\right]}}\,. \end{equation} This is a recursive expression for defining $\hat{w}(\bar{r},\gamma)$ that can be solved iteratively. Instead of solving Eq.~\eqref{eq:Firsov_j} iteratively to get $\hat{w}$ as a function of $\bar{r}$, we can obtain a parametric representation of both $\hat{w}$ and $\bar{r}$ as functions of an auxiliary parameter $\rho$ by defining the function \begin{equation} \label{eq:Arho} A_{\chi}(\rho) \equiv \frac{2}{\pi} \int_{\rho}^{\infty} db \frac{\chi (\gamma, b)}{\sqrt{b^2 - \rho^2}}\,, \end{equation} which is related to the Abel transform of $\chi(\gamma,b)$. In terms of the function $A_{\chi}(\rho)$, we get the exact parametric representation \begin{align} \label{eq:w_par} \hat{w}(\rho) = -1 + e^{A_\chi(\rho)}\,, \nonumber \\ \bar{r}(\rho) = \rho \, e^{- \frac12 A_\chi(\rho)}\,. \end{align} Note that the value of the parameter $\rho$, when considered as a function of $\bar{r}$, is \begin{equation} \rho(\bar{r},\gamma) = \bar{r}\, |p(\bar{r},\gamma)| = \bar{r}\, \sqrt{1 + \hat{w}(\bar{r},\gamma)}\,. \end{equation} Equations~\eqref{eq:w_par} and \eqref{eq:Arho} allow us to extract information from the NR scattering data of Ref.~\cite{Damour:2014afa}. Inserting our fit, Eqs.~\eqref{eq:chifit} and \eqref{eq:fit_params}, into Eqs.~\eqref{eq:w_par} and \eqref{eq:Arho}, we are able to numerically compute an NR estimate of the (EOB) radial potential $w_{\rm NR}(\bar{r},\gamma)$.
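The parametric representation above is straightforward to implement numerically. The sketch below (an illustration, not the authors' code) evaluates $A_\chi(\rho)$ with the substitution $b = \rho\cosh\theta$, which removes the inverse-square-root singularity at $b = \rho$, and checks it on the analytically solvable case $\chi(b) = c/b$, for which $A_\chi(\rho) = c/\rho$ exactly:

```python
import numpy as np

def A_chi(chi_of_b, rho, theta_max=20.0, n=20001):
    """Abel-type transform A_chi(rho) = (2/pi) * int_rho^inf db chi(b)/sqrt(b^2-rho^2),
    computed with b = rho*cosh(theta) and the trapezoidal rule in theta."""
    theta = np.linspace(0.0, theta_max, n)
    f = chi_of_b(rho * np.cosh(theta))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))
    return (2.0 / np.pi) * integral

def firsov_point(chi_of_b, rho):
    """One point (r_bar, w_hat) of the parametric Firsov inversion:
    w_hat = e^A - 1 and r_bar = rho * e^(-A/2)."""
    A = A_chi(chi_of_b, rho)
    return rho * np.exp(-0.5 * A), np.expm1(A)

# Analytic check: chi(b) = c/b gives A_chi(rho) = c/rho exactly.
c, rho = 0.1, 2.0
r_bar, w_hat = firsov_point(lambda b: c / b, rho)
assert abs(w_hat - np.expm1(c / rho)) < 1e-6
assert abs(r_bar - rho * np.exp(-0.5 * c / rho)) < 1e-6
```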
The latter radial potential is determined from NR data only down to a radius $\bar{r}$ corresponding to the lowest $j$ for which NR simulations are available. In our case, the minimum $\hat{J}_{\rm in} = 1.099652(36)$ corresponds to $\bar{r}_{\rm min} \simeq 2.567$. \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig06.pdf} \caption{ \label{fig:wNR} Comparison between the radial potential $w_{\rm NR}$, extracted from NR simulations up to $\bar{u}_{\rm max} = 1/\bar{r}_{\rm min} \simeq 0.3891$ (vertical dashed line), and the corresponding $w_{n{\rm PM}}$ ones. The series of PM-expanded potentials converges towards the NR-extracted one. In particular, the (radiation-reacted) 4PM potential is remarkably close to $w_{\rm NR}$. The test-mass equivalent, $w^{\rm Schw}$ (grey line), is plotted as a reference. } \end{figure} In Fig.~\ref{fig:wNR} we display the EOB-type (isotropic coordinates) radial potential extracted from the numerical data of Ref.~\cite{Damour:2014afa}, $w_{\rm NR}(\bar{r})$ (here plotted as a function of $\bar{u}\equiv1/\bar{r}$). We again remind the reader that this radial potential is energy-dependent and contains radiation-reaction effects. It is determined here only for $\gamma \simeq 1.09136$. Fig.~\ref{fig:wNR} compares $w_{\rm NR}(\bar{u})$ to the PM-expanded potentials $w_{n{\rm PM}}(\bar{u})$ for $1\leq n\leq4$. For $n=3$ and 4, we exhibit both the conservative and the radiation-reacted avatars of the potential. As we expected from the scattering angle comparison above, the \textit{radiation-reacted} 4PM potential is remarkably close to the NR one (see inset). By contrast, the conservative 4PM potential is less close to $w_{\rm NR}(\bar{u})$ than the 3PM ones. The error bar on $w_{\rm NR}(\bar{u})$ coming from NR inaccuracies, together with our fitting procedure, would be barely visible and will be discussed below.
\begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig07.pdf} \caption{ \label{fig:VNR} Top panel: comparison between the gravitational potential $V$ extracted from NR simulations and some of the PM ones, as shown in Fig.~\ref{fig:wPM2}. The horizontal line, corresponding to $p_\infty^2$, marks the maximum point up to which we can extract information from the numerical simulations. Bottom panel: fractional differences between PM potentials and the NR one, expressed as $(V_{n{\rm PM}}-V_{\rm NR})/w_{\rm NR} = (w_{\rm NR}-w_{n{\rm PM}})/w_{\rm NR}$. The shaded area is an estimated fractional error computed using the fit errors of Eq.~\eqref{eq:chifit}. The (radiation-reacted) 4PM $w$ potential fractionally differs from $w_{\rm NR}$ only by $\sim \pm 2\times10^{-3}$. } \end{figure} For reference, we also display in Fig.~\ref{fig:wNR} the (exact) Schwarzschild $w$ potential defined by setting $\hat{Q} = 0$ in Eqs.~\eqref{eq:w} and \eqref{eq:ASchw}. Note that $w_{\rm NR}(\bar{r})$ lies significantly below $w_{\rm Schw}(\bar{r})$, which gives a direct NR-based proof that the Einsteinian gravitational interaction between two equal-mass BHs is \textit{less attractive} than its test-mass limit. In order to extract the $w$ potential for other energies, and other mass ratios, one would need corresponding NR simulation suites with fixed energy, fixed mass ratio, and varying angular momentum. A complementary view of the physics described by the potential $w_{\rm NR}(\bar{r})$ is presented in Fig.~\ref{fig:VNR}. This figure contrasts several versions of the Newtonian-like potential $V(\bar{r},\gamma,j)$, defined in Eq.~\eqref{eq:V}, plotted versus $\bar{r}$. We recall that this potential combines a (repulsive) centrifugal potential $j^2/\bar{r}^2$ with the (attractive) radial potential $-w(\bar{r},\gamma)$.
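A minimal sketch of this combination, assuming the simplest form $V = j^2/\bar r^2 - w(\bar r,\gamma)$ suggested by the text (the actual definition is in an equation not reproduced in this section):

```python
def V_eff(r_bar, j, w_of_r):
    """Newtonian-like effective potential V = j^2/r^2 - w(r):
    repulsive centrifugal barrier plus attractive radial potential."""
    return j ** 2 / r_bar ** 2 - w_of_r(r_bar)

# With a Newtonian-like toy w(r) = 2/r the two pieces are explicit:
assert V_eff(2.0, 3.0, lambda r: 2.0 / r) == 9.0 / 4.0 - 1.0
```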
For definiteness, we use in Fig.~\ref{fig:VNR} the specific values of initial energy, $\hat{E}_{\rm in} \simeq 1.02256$, and angular momentum, $\hat{J}_{\rm in} = j/4 \simeq 1.100$, corresponding to the smallest impact-parameter simulation used here. Figure~\ref{fig:VNR} compares $V_{\rm NR}(\bar{r},\hat{E}_{\rm in},\hat{J}_{\rm in})$, which is undefined below $\bar{r}_{\rm min} \approx 2.567$, to two PM-informed $V$ potentials: the (radiation-reacted) 3PM and 4PM ones. For reference, the test-mass limit potential $V^{\rm Schw}$ is also included. This figure shows to what extent the smallest impact-parameter simulation has probed the topmost part of the $V$ potential. Simulations with the same energy but smaller angular momenta, especially in the range between $\frac14j_0^{\rm fit} \simeq \frac14j_0^{w_{\rm 4PM}} \simeq 1.078$ and $\hat{J}_{\rm in}\simeq 1.100$, would allow one to explore the hill-top of the $V$ potential. This figure also shows that the potential ruling the dynamics of equal-mass BH collisions is quite different from its test-mass limit. The lower panel of Fig.~\ref{fig:VNR} displays the fractional differences between the considered PM-informed $V$ potentials and the NR one, expressed as $(V_{n{\rm PM}}-V_{\rm NR})/w_{\rm NR} = (w_{\rm NR}-w_{n{\rm PM}})/w_{\rm NR}$. The shaded area is an estimate of the fractional error on $V_{\rm NR}$ computed by adding in quadrature the 68$\%$ confidence-level errors on our (2-parameter) best-fit template, Eqs.~\eqref{eq:chifit} and \eqref{eq:fit_params}. The (radiation-reacted) 4PM $w$ potential fractionally differs from $w_{\rm NR}$ only by $\sim \pm 2\times10^{-3}$. \section{Conclusions} \label{sec:concl} In this work, we first reviewed the current knowledge of the PM scattering angle up to 4PM~\cite{Bern:2021dqo,Bern:2021yeh}, including radiative terms~\cite{Manohar:2022dea,Dlapa:2022lmu,Bini:2022enm}.
We then emphasized that, instead of using the PM results in the standard form of a PM-expanded scattering angle, the same PM information can be usefully reformulated in terms of radial potentials entering a simple EOB mass-shell condition~\cite{Damour:2017zjx}. We found that this reformulation ($w^{\rm eob}$-resummation) greatly improves the agreement between the scattering angle obtained from PM analytical results and NR simulations of equal-mass, nonspinning BH binaries~\cite{Damour:2014afa,Hopper:2022rwo}, see Fig.~\ref{fig:chiwPM2}, to be compared to the four bottom curves of Fig.~\ref{fig:chiPM}. The scattering angle computed using a radiation-reacted 4PM $w$ potential is as accurate as (and even slightly more accurate than) the scattering angle computed by using one of the state-of-the-art (NR-calibrated, high-PN accuracy) EOB dynamics, namely the \texttt{TEOBResumS}{} one~\cite{Nagar:2021xnh,Hopper:2022rwo}. The agreement between NR data and $\chi_{\rm 4PM}^{w\,{\rm eob}}$ is better than $1\%$ for most data points. See Fig.~\ref{fig:chiPM_TEOB} and Tables~\ref{tab:chi_NR} and~\ref{tab:chi_Seth}. Separately from the just mentioned $w^{\rm eob}$-resummation, we introduced a new resummation technique for scattering angles ($\mathcal{L}$-resummation). This technique consists in incorporating a logarithmic singularity (corresponding to the critical angular momentum separating scattering motions from plunging ones) into the representation of the scattering angle as a function of angular momentum. We showed the usefulness of the $\mathcal{L}$-resummation technique in two specific applications: (i) resummation of the PM-expanded scattering angles (see dashed lines in Fig.~\ref{fig:chiPM}, to be contrasted with the non-resummed bottom lines); and (ii) accurate representation of a discrete sequence of NR scattering data by means of an analytic fitting template, see Eqs.~\eqref{eq:chifit} and~\eqref{eq:fit_params}.
In Sec.~\ref{subsec:j0} we compared various estimates of the critical angular momentum $J_0$ separating scattering motions from coalescing ones. This critical $J_0$ is a function of the initial energy and of the symmetric mass ratio. In Fig.~\ref{fig:j0} we compare four different PM-based analytic estimates of $J_0$ to NR data of both mildly-relativistic~\cite{Damour:2014afa} and highly-relativistic~\cite{Shibata:2008rq,Sperhake:2009jz} BH scatterings. Figure~\ref{fig:j0} shows that the excellent agreement that we found between the radiation-reacted 4PM-based estimate of $j_0$ and the mildly relativistic NR data of Ref.~\cite{Damour:2014afa} ($v_{\rm cm} \simeq 0.2$) is not maintained at center-of-mass velocities $v_{\rm cm} \gtrsim 0.6$. This is probably linked to the anomalous power-law behavior of $\chi_{\rm 4PM}$ in the high-energy limit. This shows the need to numerically explore BH scattering in the range of velocities bridging the present gap between mildly-relativistic and highly-relativistic regimes, i.e. $0.2 \lesssim v_{\rm cm} \lesssim 0.6$. Finally, we made use of Firsov's inversion formula to extract, for the first time, from NR data a gravitational potential, $w_{\rm NR}$, describing, within the EOB framework, the scattering of two BHs. This potential contains both conservative and radiation reaction effects and is determined here only for the specific initial energy $\hat{E}_{\rm in}\simeq 1.02258$. We found that the (radiation-reacted) EOB potential $w_{\rm 4PM}$ is remarkably close to the NR-extracted one (see Figs.~\ref{fig:wNR} and~\ref{fig:VNR}). This result opens a new avenue to extract useful information from NR simulations of scattering BHs. 
The computation of additional sequences of configurations at constant energy and varying angular momentum down to their critical $J_0$ will help to extend (following the strategy of Sec.~\ref{subsec:inv}) the knowledge of the energy-dependent radial potential $w_{\rm NR}(\bar{r},\gamma)$ to a larger range of energies. This seems within reach of public codes like the Einstein Toolkit~\cite{Loffler:2011ay,EinsteinToolkit:2022_05}, which was already employed to obtain the NR scattering angles~\cite{Damour:2014afa,Hopper:2022rwo} used in this work. A parallel extension of both the procedure of Sec.~\ref{subsec:inv} and of NR simulations to spinning and unequal-mass systems is evidently called for. Such knowledge will be a useful guideline to probe and compare the accuracy of various theoretical results (PM-based, PN-based EOBNR, $\dots$) and will offer new prospects for improving the accuracy of templates for eccentric and hyperbolic systems. The unprecedented agreement between PM-based information and numerical results presented here was obtained by using one specific way (isotropic gauge, non-resummed energy-dependent $w$, $\dots$) of incorporating PM information into an EOB framework. This gives a new motivation to exploit the analytical flexibility of the EOB approach and to explore various ways of using PM information so as to improve our analytical description of binary systems not only in scattering situations, but also in bound states. \section*{Acknowledgments} P. R. thanks the hospitality and the stimulating environment of the Institut des Hautes Etudes Scientifiques. We thank A.~Nagar for collaboration at the beginning of this project, for suggestions and a careful reading of the manuscript. The authors are also grateful to D.~Bini and R.~Russo for useful discussions during the development of this work. P.~R.
is supported by the Italian Ministry of University and Research (MUR) via the PRIN 2020KB33TP, {\it Multimessenger astronomy in the Einstein Telescope Era (METE)}. The present research was also partly supported by the ``\textit{2021 Balzan Prize for Gravitation: Physical and Astrophysical Aspects}'', awarded to Thibault Damour.
\section{Methodology}
\subsection{Overview}
Driven by the severity of reconstruction attacks and the limitations of existing defenses, we focus on a new mitigation opportunity in this paper: \emph{transforming the sensitive training samples to make the reconstruction difficult or even infeasible}. Image transformation has been widely adopted to mitigate adversarial examples \cite{qiu2020fencebox,qiu2020mitigating,qiu2020towards} and backdoor attacks \cite{zeng2020deepsweep}, and to attack watermarking schemes \cite{guo2020hidden}. We repurpose it for defeating reconstruction attacks. Specifically, given a private dataset $D$, we aim to find a policy composed of a set of transformation functions $T = t_1 \circ t_2 \circ ... \circ t_n$, to convert each sample $x\in D$ to $\widehat{x} = T(x)$ and establish a new dataset $\widehat{D}$. The data owner can use $\widehat{D}$ to calculate the gradients and safely share them with untrusted collaborators in collaborative learning. Such a transformation policy must satisfy two requirements: (1) the adversarial participant is not able to infer $\widehat{x}$ (and $x$) from $\bigtriangledown W(\widehat{x}, y)$. (2) The final model should maintain similar performance to the one trained from $D$. We formally define our strategy as follows:
\begin{definition}(($\epsilon, \delta, \gamma$)-Privacy-aware Transformation Policy)
Given a dataset $D$, and an ensemble of transformations $T$, let $\widehat{D}$ be another dataset transformed from $D$ with $T$. Let $M$ and $\widehat{M}$ be the models trained over $D$ and $\widehat{D}$, respectively.
$T$ is defined to be ($\epsilon, \delta, \gamma$)-privacy-aware, if the following two requirements are met:
\begin{myalign}
& \emph{Pr}[\emph{\texttt{PSNR}}(x^*, \widehat{x})< \epsilon] \geq 1-\delta, \forall x \in D, \\
& \emph{\texttt{ACC}}(M) - \emph{\texttt{ACC}}(\widehat{M}) < \gamma,
\end{myalign}
where $\widehat{x}=T(x)$, $x^*$ is the reconstructed input from $\bigtriangledown W(\widehat{x}, y)$, and \emph{\texttt{ACC}} is the prediction accuracy function.
\end{definition}
It is critical to identify transformation functions that can satisfy the above two requirements. With the advance of computer vision, different image transformations have been designed for better data augmentation. We aim to repurpose some of these data augmentation approaches to enhance the privacy of collaborative learning. Due to the large quantity and variety of augmentation functions, we introduce a systematic and automatic method to search for the most privacy-preserving and efficient policy. Our idea is inspired by AutoAugment \cite{iccv2019autoaugment}, which exploited AutoML techniques~\cite{wistuba2019survey} to automatically search for optimal augmentation policies to improve the model accuracy and generalization. However, it is difficult to apply this solution directly to our privacy problem. We need to address two new challenges: (1) how to efficiently evaluate the satisfaction of the two requirements for each policy (Sections \ref{sec:performance-metric} and \ref{sec:search-train}); and (2) how to select the appropriate search space and sampling method (Section \ref{sec:search-train}).
\subsection{Privacy Score}
\label{sec:privacy-metric}
During the search process, we need to quantify the privacy effect of the candidate policies. The PSNR metric is not efficient here, as it requires performing an end-to-end reconstruction attack over a well-trained model.
Instead, we design a new privacy score, which can accurately reflect the privacy leakage based on the transformation policy and a semi-trained model, i.e., one trained for only a few epochs. We first define a metric \texttt{GradSim}, which measures the gradient similarity of two input samples ($x_1$, $x_2$) with the same label $y$:
\begin{myequation}
\texttt{GradSim}(x_{1}, x_{2}) = \frac{\langle \bigtriangledown W(x_1, y), \bigtriangledown W(x_2, y) \rangle} {|| \bigtriangledown W(x_1, y) || \cdot || \bigtriangledown W(x_2, y) ||}.
\label{eq:gradsim}
\end{myequation}
Assume the transformed image is $\widehat{x}$, which the adversary tries to reconstruct. The adversary starts from a random input $x'=x_0$, and updates $x'$ iteratively using Equation \ref{eq:attack} until $\bigtriangledown W(x', y)$ approaches $\bigtriangledown W(\widehat{x}, y)$. Figure~\ref{fig:motivation} visualizes this process: the y-axis is the gradient similarity $\texttt{GradSim}(x', \widehat{x})$, and the x-axis is $i\in[0, 1]$ such that $x'=(1-i)*x_0 + i*\widehat{x}$. The optimization starts with $i=0$ (i.e., $x'=x_0$) and ideally completes at $i=1$ (i.e., $x'=\widehat{x}$ and $\texttt{GradSim}=1$).
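As a concrete illustration, the following minimal Python sketch computes \texttt{GradSim} (Eq.~\ref{eq:gradsim}) along the interpolation path $x'(i)$ for a toy linear model with squared loss; the toy model, its weights, and the sample values are illustrative assumptions standing in for the network gradients $\bigtriangledown W$:

```python
import numpy as np

def grad(w, x, y):
    # Gradient of a squared loss 0.5 * (w @ x - y)**2 for a toy linear model;
    # this stands in for the network gradient used in GradSim.
    return (w @ x - y) * x

def grad_sim(w, x1, x2, y):
    # Cosine similarity between the gradients of the two inputs.
    g1, g2 = grad(w, x1, y), grad(w, x2, y)
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

def privacy_score_single(w, x0, x_hat, y, K=20):
    # Average GradSim along the straight path x'(i) = (1 - i) * x0 + i * x_hat,
    # i.e., the per-sample inner sum of the numerical privacy score.
    return sum(grad_sim(w, (1 - j / K) * x0 + (j / K) * x_hat, x_hat, y)
               for j in range(K)) / K
```

Averaging these per-sample path values over the dataset $D$ yields the numerical privacy score used later in the search (Eq.~\ref{eq:privacy-score}); in the paper the gradients come from a semi-trained network rather than this toy model.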
\begin{figure} [t]
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] at (0,0){\includegraphics[width=0.9\linewidth]{figs/motivation.pdf}};
\node at (2.1, 1.9){\includegraphics[width=0.08\linewidth]{figs/motivation-figs/ori_5.png}};
\node at (3.6, 2.2){\includegraphics[width=0.08\linewidth]{figs/motivation-figs/ori_10.png}};
\node at (5.15, 3){\includegraphics[width=0.08\linewidth]{figs/motivation-figs/ori_15.png}};
\node at (6.0, 3.5){\includegraphics[width=0.08\linewidth]{figs/motivation-figs/ori_20.png}};
\node at (2.9, 1.2){\includegraphics[width=0.08\linewidth]{figs/motivation-figs/trans_5.png}};
\node at (4.4, 1.3){\includegraphics[width=0.08\linewidth]{figs/motivation-figs/trans_10.png}};
\node at (5.9, 1.7){\includegraphics[width=0.08\linewidth]{figs/motivation-figs/trans_15.png}};
\node at (6.9, 3){\includegraphics[width=0.08\linewidth]{figs/motivation-figs/trans_20.png}};
\end{tikzpicture}
\vspace{-1em}
\caption{Visualization of the optimization process in reconstruction attacks.}
\vspace{-1em}
\label{fig:motivation}
\end{figure}
A good policy can thwart the convergence from $x_0$ to $\widehat{x}$. As shown in Figure~\ref{fig:motivation} (blue solid line), $\texttt{GradSim}$ hardly changes with $i$ in the initial stages away from $x_0$. This reveals how difficult it is for the adversary to find the correct direction towards $\widehat{x}$ based on the gradient distance. In contrast, if the collaborative learning system does not employ any transformation function (red dashed line), $\texttt{GradSim}$ increases steadily with $i$. This gives the adversary an indication to discover the correct moving direction, and steadily makes $x'$ approach $x$ by minimizing the gradient distance. Based on this observation, we use the area under the $\texttt{GradSim}$ curve to denote the effectiveness of a transformation policy in reducing privacy leakage.
A good transformation policy will give a small area, as the $\texttt{GradSim}$ curve is flat for most values of $i$ until there is a sharp jump when $i$ is close to 1. In contrast, a leaky learning system has a larger area, as the $\texttt{GradSim}$ curve increases gradually with $i$. Formally, our privacy score is defined as follows:
\begin{myalign}
& S_{pri}(T) = \frac{1}{|D|} \sum_{x\in D} \int_{0}^{1} \texttt{GradSim}(x'(i), T(x)) di, \nonumber \\
& x'(i) = (1-i) * x_0 + i * T(x).
\label{eq:privacy-score1}
\end{myalign}
For simplicity, we approximate this score by numerical integration in our implementation:
\begin{myalign}
S_{pri}(T) \approx \frac{1}{|D|K} \sum_{x\in D} \sum_{j=0}^{K-1} \texttt{GradSim}(x'(\frac{j}{K}), T(x)).
\label{eq:privacy-score}
\end{myalign}

\noindent\textbf{Empirical validation.} We also run experiments to empirically verify the correlation between $S_{pri}$ and PSNR. Specifically, we randomly select 100 transformation policies, and apply each to the training set. For each policy, we collect the PSNR value by performing the reconstruction attack \cite{geiping2020inverting} with a reduced iteration count of 2500. We also measure the privacy score using Equation \ref{eq:privacy-score}. As shown in Figure \ref{fig:privacy-distribution}, $S_{pri}$ is linearly correlated with PSNR, with a Pearson correlation coefficient of 0.697. This shows that we can use $S_{pri}$ to quantify the attack effects.
\renewcommand{\multirowsetup}{\centering}
\begin{figure} [t]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.9\linewidth]{figs/privacy-distribution/score-psnr.pdf}
\end{tabular}
\end{center}
\vspace{-2em}
\caption{Correlation between PSNR and $S_{pri}$.}
\vspace{-1em}
\label{fig:privacy-distribution}
\end{figure}
\subsection{Accuracy Score}
\label{sec:performance-metric}
Another important requirement for a qualified policy is to maintain model accuracy.
Certain transformations introduce large-scale perturbations to the samples, which can impair the model performance. We expect to have an efficient and accurate criterion to judge the performance impact of each transformation policy during the search process. Mellor \etal~\cite{mellor2020neural} proposed a novel technique to search neural architectures without model training. It empirically evaluates the correlations between the local linear map and the architecture performance, and identifies the maps that yield the best performance. Inspired by this work, we adopt the technique to search for the transformations that can preserve the model performance. Specifically, we prepare a randomly initialized model $f$, and a mini-batch of data samples transformed by the target policy $T$: $\{\widehat{x}_{n}\}_{n=1}^{N}$. We first calculate the gradient Jacobian matrix, as shown below:
\begin{myequation}
J = \begin{pmatrix} \frac{\partial f}{\partial \widehat{x}_{1}}, & \frac{\partial f}{\partial \widehat{x}_{2}}, & \cdots & \frac{\partial f}{\partial \widehat{x}_{N}} \\ \end{pmatrix}^{\top}.
\label{eq:J}
\end{myequation}
Then we compute its correlation matrix:
\begin{myalign}
& \left(M_J\right)_{ i, j} = \frac{1}{N} \sum_{n=1}^N J_{i, n}, \nonumber \\
& C_J = (J-M_J)(J-M_J)^T, \nonumber \\
& \left(\Sigma_J\right)_{i, j} = \frac{\left(C_J\right)_{i, j}}{\sqrt{\left(C_J\right)_{i, i} \cdot \left(C_J\right)_{j, j} }}.
\label{eq:cor-matrix}
\end{myalign}
Let $\sigma_{J, 1} \leq \dots \leq \sigma_{J, N}$ be the $N$ eigenvalues of $\Sigma_J$.
Then our accuracy score is given by
\begin{equation}
S_{acc}(T)=\frac{1}{N}\sum_{i=1}^{N} \left[\log(\sigma_{J,i} + \epsilon) + (\sigma_{J,i} + \epsilon)^{-1}\right],
\label{eq:accuracy-score}
\end{equation}
where $\epsilon$ is set to $10^{-5}$ for numerical stability. This accuracy score can be used to quickly filter out policies that incur an unacceptable performance penalty on the model.
\subsection{Searching and Applying Transformations}
\label{sec:search-train}
We utilize the privacy and accuracy scores to identify the optimal policies, and apply them to collaborative training.

\noindent\textbf{Search space.} We consider the data augmentation library adopted by AutoAugment \cite{iccv2019autoaugment,pool}. This library contains 50 image transformation functions, including rotation, shift, inversion, contrast, posterization, etc. We consider a policy combining at most $k$ functions. This leads to a search space of $\sum_{i=1}^k 50^i$ policies. Instead of iterating over all the policies, we only select and evaluate $C_{max}$ policies. For instance, in our implementation, we choose $k=3$, and the search space contains $127,550$ policies. We set $C_{max}=1,500$, which is large enough to identify qualified policies.

\noindent\textbf{Search algorithm.} Various AutoML methods have been designed to search for the optimal architecture, e.g., importance sampling~\cite{neal2001annealed}, evolutionary sampling~\cite{kwok2005evolutionary}, reinforcement learning-based sampling~\cite{zoph2016neural}. We adopt a simple \emph{random} search strategy, which is efficient and effective in discovering the optimal policies. Algorithm \ref{alg:search} illustrates our search process. Specifically, we aim to identify a policy set $\mathcal{T}$ with $n$ qualified policies. We need to prepare two local models: (1) $M^{s}$ is used for privacy quantification. It is trained only with 10\% of the original training set for 50 epochs.
This overhead is equivalent to training on the entire set for 5 epochs, which is very small. (2) $M^{r}$ is a randomly initialized model without any optimization, which is used for accuracy quantification. We randomly sample $C_{max}$ policies, and calculate the privacy and accuracy scores of each policy. Policies with accuracy scores lower than a threshold $T_{acc}$ are filtered out. We select the $n$ policies with the best privacy scores to form the final policy set $\mathcal{T}$.
\begin{algorithm}[htb]
\caption{Searching optimal transformations.}\label{alg:search}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Augmentation library $\mathcal{P}$, $T_{acc}$, $C_{max}$, $M^s$, $M^r$, $D$}
\Output{Optimal policy set $\mathcal{T}$ with $n$ policies}
\For{$i \in [1, C_{max}]$ \label{line:init}}{
Sample functions from $\mathcal{P}$ to form a policy $T$\;
Calculate $S_{acc}(T)$ from $M^r$, $D$ (Eq. \ref{eq:accuracy-score})\;
\If{$S_{acc}(T) \geq T_{acc}$}{
\eIf{$|\mathcal{T}|<n$}{
Insert $T$ to $\mathcal{T}$\;
}{
Calculate $S_{pri}(T)$ from $M^s$, $D$ (Eq. \ref{eq:privacy-score})\;
$T^* \gets \text{argmax}_{T'\in \mathcal{T}}S_{pri}(T')$\;
\If{$S_{pri}(T)<S_{pri}(T^*)$}{
Replace $T^*$ with $T$ in $\mathcal{T}$\;
}
}
}
}
\If{$|\mathcal{T}|<n$}{
Go to Line \ref{line:init}\;
}
\Return $\mathcal{T}$
\end{algorithm}

\noindent\textbf{Applying transformations.} With the identified policy set $\mathcal{T}$, we can apply the functions over the sensitive training data. One possible solution is to always pick the policy with the smallest $S_{pri}$, and apply it to each sample. However, a single fixed policy can incur domain shifts and bias in the input samples. This can impair the model performance even though the policy passes the accuracy check. Instead, we adopt a hybrid augmentation strategy, which is also used in \cite{iccv2019autoaugment}: we randomly select a transformation policy from $\mathcal{T}$ to preprocess each data sample.
The selected transformation policies are required to share no common transformation functions. This guarantees low privacy leakage and high model accuracy. Besides, it can also improve the model generalization and eliminate domain shifts.
\section{Implementation Details}
\label{sec:implementation-detail}
\bheading{Searching Transformation Policies.} Instead of each participant searching for their own transformation policies, we first adopt Algorithm 1 to obtain a universal policy set $\mathcal{T}$ on a validation set. During the training process, all participants would use $\mathcal{T}$ for privacy protection. In particular, for each policy $T$, we calculate the corresponding privacy score $S_{pri}(T)$ on 100 images randomly selected from the validation set, excluding the first 100 images, which are reserved for attack evaluation. We optimize the randomly initialized model $f$ for 10 forward-backward rounds and use the average value of the accuracy scores of the ten rounds as $S_{acc}$. The batch size of each round is set to 128. We adaptively adjust the accuracy threshold $T_{acc}$ for different architectures and datasets. In particular, we set $T_{acc}$ to $-85$ (ResNet20, CIFAR100), $-80$ (ConvNet, CIFAR100), $-12$ (ResNet20, F-MNIST), and $-10$ (ConvNet, F-MNIST), respectively.
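For illustration, the accuracy score of Section~\ref{sec:performance-metric} can be sketched as follows; the randomly generated Jacobian below is an assumption standing in for the per-sample gradients $\partial f/\partial \widehat{x}_{n}$, which would come from automatic differentiation in practice:

```python
import numpy as np

def accuracy_score(J, eps=1e-5):
    # J: (N, P) gradient Jacobian with one row per transformed sample.
    Mj = J.mean(axis=1, keepdims=True)      # per-row mean
    C = (J - Mj) @ (J - Mj).T               # covariance matrix C_J, (N, N)
    d = np.sqrt(np.diag(C))
    Sigma = C / np.outer(d, d)              # correlation matrix Sigma_J
    sig = np.linalg.eigvalsh(Sigma)         # eigenvalues of Sigma_J
    sig = np.maximum(sig, 0.0)              # clamp tiny negative round-off
    return float(np.mean(np.log(sig + eps) + 1.0 / (sig + eps)))
```

During the search (Algorithm~\ref{alg:search}), policies whose score does not reach the threshold $T_{acc}$ are discarded.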
\bheading{Training Implementation.} We utilize SGD with momentum 0.9 and weight decay $5\cdot 10^{-4}$ to optimize the deep neural networks. We set the number of training epochs to 200 (resp. 100) for CIFAR100 (resp. F-MNIST). The initial learning rate is 0.1 and decays in steps by a factor of 0.1 at $\frac{3}{8}$, $\frac{5}{8}$, and $\frac{7}{8}$ of all iterations.

\bheading{Attacking Implementation.} For attacks without using L-BFGS, we follow the same settings as in~\cite{geiping2020inverting}. We set the iteration number of the reconstruction optimizations to 4800 and adopt the same training policy as the above collaborative learning. The total variation weight is set to $10^{-4}$. Because the results of L-BFGS-based reconstruction attacks are unstable, we run the attacks 16 times and select the best reconstruction results. For a fair comparison, we reduce the iteration number to 300.

\bheading{Figure Implementation.} We present the implementation details of the figures in our manuscript as follows:
\begin{packeditemize}
\item We select $3-1-7$ as the privacy-aware transformation policy to generate Figure~4-2. We adopt a semi-trained ResNet20 on CIFAR100 to calculate the $\texttt{GradSim}$ values of the interpolation images.
\item In Figure~4-3, we randomly sample 100 different transformation policies from the $127,550$ policies and calculate the average PSNR of reconstructed images under IG~\cite{geiping2020inverting}. We adopt the semi-trained ResNet20 as the initial model to calculate the gradients on the first 100 images in the validation set.
\item The reconstruction attack in Figure 5-7 is IG~\cite{geiping2020inverting}. The random transformation policy is $19-1-18$ and the searched policy is the hybrid of $3-1-7$ and $43-18-18$.
\end{packeditemize}
\section{Transformation Space}
\label{sec:augmentation-detail}
We summarize the 50 transformations used in our manuscript in Table~\ref{tab:transformation-detail}.
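The stepped learning-rate decay described under Training Implementation above can be sketched as follows (a minimal sketch; whether the decay points are counted in epochs or individual iterations is our assumption here):

```python
def learning_rate(step, total_steps, base_lr=0.1, gamma=0.1):
    # Decay the initial rate by a factor of 0.1 at 3/8, 5/8, and 7/8
    # of all iterations, as in the training implementation.
    milestones = [int(total_steps * f) for f in (3 / 8, 5 / 8, 7 / 8)]
    drops = sum(step >= m for m in milestones)
    return base_lr * gamma ** drops
```

For a 200-epoch CIFAR100 run this places the decay points at epochs 75, 125, and 175.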
\begin{table*}
\centering
\captionsetup[subtable]{position = below}
\resizebox{0.16\textwidth}{!}{
\begin{subtable}{0.25\linewidth}
\centering
\begin{tabular}{ccc}
\toprule
\textbf{Index} & \textbf{Transformation} & \textbf{Magnitude} \\
\midrule
0 & invert & 7 \\
1 & contrast & 6 \\
2 & rotate & 2 \\
3 & translateX & 9 \\
4 & sharpness & 1 \\
5 & sharpness & 3 \\
6 & shearY & 2 \\
7 & translateY & 2 \\
8 & autocontrast & 5 \\
9 & equalize & 2 \\
10 & shearY & 5 \\
11 & posterize & 5 \\
12 & color & 3 \\
\bottomrule
\end{tabular}
\end{subtable}}%
\hspace*{3em}
\resizebox{0.16\textwidth}{!}{
\begin{subtable}{0.25\linewidth}
\centering
\begin{tabular}{ccc}
\toprule
\textbf{Index} & \textbf{Transformation} & \textbf{Magnitude} \\
\midrule
13 & brightness & 5 \\
14 & sharpness & 9 \\
15 & brightness & 9 \\
16 & equalize & 5 \\
17 & equalize & 1 \\
18 & contrast & 7 \\
19 & sharpness & 5 \\
20 & color & 5 \\
21 & translateX & 5 \\
22 & equalize & 7 \\
23 & autocontrast & 8 \\
24 & translateY & 3 \\
25 & sharpness & 6 \\
\bottomrule
\end{tabular}
\end{subtable}}%
\hspace*{3em}
\resizebox{0.16\textwidth}{!}{
\begin{subtable}{0.25\linewidth}
\centering
\begin{tabular}{ccc}
\toprule
\textbf{Index} & \textbf{Transformation} & \textbf{Magnitude} \\
\midrule
26 & brightness & 6 \\
27 & color & 8 \\
28 & solarize & 0 \\
29 & invert & 0 \\
30 & equalize & 0 \\
31 & autocontrast & 0 \\
32 & equalize & 8 \\
33 & equalize & 4 \\
34 & color & 5 \\
35 & equalize & 5 \\
36 & autocontrast & 4 \\
37 & solarize & 4 \\
38 & brightness & 3 \\
\bottomrule
\end{tabular}
\end{subtable}}%
\hspace*{3em}
\resizebox{0.16\textwidth}{!}{
\begin{subtable}{0.25\linewidth}
\centering
\begin{tabular}{ccc}
\toprule
\textbf{Index} & \textbf{Transformation} & \textbf{Magnitude} \\
\midrule
39 & color & 0 \\
40 & solarize & 1 \\
41 & autocontrast & 0 \\
42 & translateY & 3 \\
43 & translateY & 4 \\
44 & autocontrast & 1 \\
45 & solarize & 1 \\
46 & equalize & 5 \\
47 & invert & 1 \\
48 & translateY & 3
\\ 49 & autocontrast & 1 \\
 & & \\
 & & \\
\bottomrule
\end{tabular}
\end{subtable}}%
\captionsetup[table]{position=bottom}
\caption{Summary of the 50 transformations.}
\label{tab:transformation-detail}
\end{table*}
\section{More Transformation Results}
We provide more experimental results of other high-ranking transformation policies in Table~\ref{tab:other-results}. The reconstruction attack is IG~\cite{geiping2020inverting}.
\begin{table*}
\centering
\captionsetup[subtable]{position = below}
\resizebox{0.21\textwidth}{!}{
\begin{subtable}{0.22\linewidth}
\centering
\begin{tabular}{c|cc}
\hline
\textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\
\hline
3-18-28 & 7.3 & 72.09 \\
7-3 & 7.64 & 71.63 \\
\hline
\end{tabular}
\caption{CIFAR100 with ResNet20}
\end{subtable}}%
\hspace*{2em}
\resizebox{0.21\textwidth}{!}{
\begin{subtable}{0.22\linewidth}
\centering
\begin{tabular}{c|cc}
\hline
\textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\
\hline
15-43 & 8.27 & 68.66 \\
37-33-3 & 7.83 & 67.89 \\
\hline
\end{tabular}
\caption{CIFAR100 with ConvNet}
\end{subtable}}%
\hspace*{2em}
\resizebox{0.21\textwidth}{!}{
\begin{subtable}{0.22\linewidth}
\centering
\begin{tabular}{c|cc}
\hline
\textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\
\hline
15-40-5 & 7.82 & 88.03 \\
0-39-35 & 6.97 & 87.10 \\
\hline
\end{tabular}
\caption{F-MNIST with ResNet20}
\end{subtable}}%
\hspace*{2em}
\resizebox{0.21\textwidth}{!}{
\begin{subtable}{0.22\linewidth}
\centering
\begin{tabular}{c|cc}
\hline
\textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\
\hline
43-43-48 & 7.00 & 88.08 \\
42-26-45 & 7.51 & 87.91 \\
\hline
\end{tabular}
\caption{F-MNIST with ConvNet}
\end{subtable}}%
\captionsetup[table]{position=bottom}
\caption{PSNR (dB) and model accuracy (\%) of different transformation configurations for each architecture and dataset.}
\label{tab:other-results}
\end{table*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[
width=\linewidth]{figs/privacy-accuracy/distribution_500.png}
\caption{$S_{pri} \in [0.262, 0.530]$}
\label{fig:privacy-accuracy-distribution1}
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/privacy-accuracy/distribution_1500.png}
\caption{$S_{pri} \in [0.231, 0.617]$}
\label{fig:privacy-accuracy-distribution2}
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/privacy-accuracy/distribution_5000.png}
\caption{$S_{pri} \in [0.231, 0.617]$}
\label{fig:privacy-accuracy-distribution3}
\end{subfigure}
\caption{$S_{pri}$ vs. $S_{acc}$ distributions for different numbers of policies.}
\label{fig:privacy-accuracy-distribution}
\end{figure*}
\section{Recap of the Search Process}
\label{sec:search-process}
We adopt a simple yet effective search strategy, i.e., \emph{random search}, to find satisfactory transformation policies. More intelligent search methods, such as evolutionary algorithms and reinforcement learning, could avoid meaningless candidates and improve the search efficiency. We also investigate the effect of the number of policy candidates on the proposed defense. We test 500, 1500, and 5000 policies and plot the privacy-accuracy distributions in Figure~\ref{fig:privacy-accuracy-distribution}.
We observe that the distributions for 1500 and 5000 candidates are similar and that the privacy score ranges are the same. This indicates that 1500 policies are enough to find satisfactory transformation policies.
\section{Proof of Theorem 1}
\begin{theorem}
Consider two transformation policies $T_1$, $T_2$. Let $x$ be a training sample, and $x^*_1, x^*_2$ be the reconstructed samples via Equation 2 with $T_1$ and $T_2$ respectively. If ${\small \emph{\texttt{GradSim}}(x'(i), T_1(x)) \geq \emph{\texttt{GradSim}}(x'(i), T_2(x))}$ is satisfied for all $i \in [0, 1]$, then $T_2$ is more effective against reconstruction attacks than $T_1$, i.e.,
\begin{equation}\nonumber
\emph{Pr}[\emph{\texttt{PSNR}}(x^*_1, T_1(x))\geq \epsilon] \geq \emph{Pr}[\emph{\texttt{PSNR}}(x^*_2, T_2(x))\geq \epsilon].
\end{equation}
\end{theorem}
\begin{proof}
Since distance-based reconstruction attacks adopt gradient descent-based optimization algorithms, the distance between two adjacent points $f_{T}(i)$, $f_{T}(i+\bigtriangleup i)$ is positively correlated with $Pr(T, x'(i), x'(i+\bigtriangleup i))$, where $Pr(T, x'(i), x'(i+\bigtriangleup i))$ is the probability that the adversary can reconstruct $x'(i+\bigtriangleup i)$ from $x'(i)$,
$$x'(i+\bigtriangleup i) = (1 - i - \bigtriangleup i) * x_0 + (i+\bigtriangleup i) * T(x).$$
In particular, if $|f_{T}(i) - f_{T}(i+\bigtriangleup i)|$ is larger, the gradient descent-based optimization is more likely to update $x'(i)$ to $x'(i+\bigtriangleup i)$. Without loss of generality, we assume that $f_{T}(i)$ is differentiable and that its derivative satisfies $f_{T}'(i) \varpropto Pr(T, x'(i), x'(i+\bigtriangleup i))$. Then, we have
\begin{equation}
f_{T}(i) = \int_{0}^{i} f_{T}'(z) \,\mathrm{d}z
\end{equation}
and
\begin{equation}
f_{T}(i) \varpropto \int_{0}^{i} Pr(T, x'(z), x'(z+\bigtriangleup z))\,\mathrm{d}z.
\end{equation}
Thus,
\begin{equation}
\int_{0}^{1} f_{T}(i) \,\mathrm{d}i \varpropto \int_{0}^{1} \int_{0}^{i} Pr(T, x'(z), x'(z+\bigtriangleup z))\,\mathrm{d}z\mathrm{d}i,
\end{equation}
where
$$\emph{Pr}[\emph{\texttt{PSNR}}(x^*, T(x))\geq \epsilon] = \int_{0}^{1} \int_{0}^{i} Pr(T, x'(z), x'(z+\bigtriangleup z))\,\mathrm{d}z\mathrm{d}i.$$
Since $f_{T_1}(i) \geq f_{T_2}(i)$ for all $i \in [0, 1]$,
\begin{equation}
\int_{0}^{1} f_{T_1}(i) \,\mathrm{d}i \geq \int_{0}^{1} f_{T_2}(i) \,\mathrm{d}i,
\end{equation}
and therefore,
\begin{equation}
\emph{Pr}[\emph{\texttt{PSNR}}(x^*_1, T_1(x))\geq \epsilon] \geq \emph{Pr}[\emph{\texttt{PSNR}}(x^*_2, T_2(x))\geq \epsilon].
\end{equation}
\end{proof}
\section{Adaptive Attack}
A general adaptive attack against our privacy-preserving transformations is not easy to design. We appreciate that one of the reviewers pointed out that the adversary can possibly initialize the starting image $x_{0}$ with black pixels when a shift transformation is applied. The results are shown in Table~\ref{tab:cifar100-zero}.
\begin{table}[htp]
\begin{center}
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Model} & \textbf{Transformation} & \textbf{Initialized to} & \textbf{Initialized to} \\
 & \textbf{Policy} & \textbf{random values} & \textbf{black pixels} \\\hline
ResNet20 & 3-1-7 & 6.58 & 6.64 \\ \hline
ConvNet & 21-13-3 & 5.76 & 5.78 \\ \hline
\end{tabular}}
\end{center}
\caption{PSNR (dB) of different initializations for each architecture on CIFAR100.}
\label{tab:cifar100-zero}
\end{table}
\section{More Visualization Results}
We show more original images from CIFAR100 (Figure \ref{fig:more-images-ori}) and the corresponding reconstructed images under IG without any transformation (Figure \ref{fig:attack-images-no-transform}).
We also illustrate the original images transformed with the hybrid policy adopted for protecting ResNet20 on CIFAR100 (Figure \ref{fig:more-images-transform}) and the corresponding reconstructed images under IG with the transformation policy (Figure \ref{fig:attack-images-transform}). The batch size of the above collaborative training processes is 1. We further show the corresponding figures (Figures \ref{fig:more-images-ori-8}--\ref{fig:attack-images-transform-8}) for a batch size of 8. In consideration of the expensive cost of attacking ImageNet~\cite{deng2009imagenet}, we only present several visualized images in Figure~\ref{fig:imagenet}.
\begin{figure*} [t]
\begin{center}
\begin{tabular}{c@{\hspace{0.01\linewidth}}c@{\hspace{0.01\linewidth}}c@{\hspace{0.01\linewidth}} c@{\hspace{0.01\linewidth}} }
\includegraphics[width=0.23\linewidth]{figs/imagenet/iv/ori_21772.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/iv/rec_21772.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/3-1-7+43-18-18/ori_21772.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/3-1-7+43-18-18/rec_21772.jpg} \\
\includegraphics[width=0.23\linewidth]{figs/imagenet/iv/ori_426740.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/iv/rec_426740.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/3-1-7+43-18-18/ori_426740.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/3-1-7+43-18-18/rec_426740.jpg} \\
\includegraphics[width=0.23\linewidth]{figs/imagenet/iv/ori_729014.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/iv/rec_729014.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/3-1-7+43-18-18/ori_729014.jpg} &
\includegraphics[width=0.23\linewidth]{figs/imagenet/3-1-7+43-18-18/rec_729014.jpg} \\
original & reconstruct & w/ transformation & recover from transformation
\end{tabular}
\end{center}
\vspace{-1em}
\caption{Visualization of ImageNet results on the untrained ResNet18 architecture.
The transformation policy is 3-1-7+43-18-18.}
\label{fig:imagenet}
\end{figure*}
\section{Experiments}
\label{sec:exp}
\renewcommand{\multirowsetup}{\centering}
\subsection{Implementation and Configurations}
\noindent\textbf{Datasets and models.} Our approach is applicable to various image datasets and classification models. Without loss of generality, we choose two datasets (CIFAR100~\cite{krizhevsky2009learning}, Fashion MNIST~\cite{xiao2017fashion}) and two conventional DNN models (ResNet20~\cite{he2016deep}, 8-layer ConvNet). These were the main targets of reconstruction attacks in prior works.

\noindent\textbf{System and attack implementation.} We implement a collaborative learning system with ten participants, where each one owns the same number of training samples from the same distribution. They adopt the SGD optimizer with momentum, weight decay, and learning rate decay techniques to guarantee the convergence of the global model. Our solution is able to thwart all existing reconstruction attacks and their variants. We evaluate six attacks in our experiments, named in the format of ``optimizer+distance measure''. These techniques\footnote{The attack in \cite{zhao2020idlg} inherited the same technique from \cite{zhu2019deep}, with a smaller computational cost. So we do not consider it in our experiments.} cover different optimizers and distance measures: (1) LBFGS+L2 \cite{zhu2019deep}; (2) Adam+Cosine \cite{geiping2020inverting}; (3) LBFGS+Cosine; (4) Adam+L1; (5) Adam+L2; (6) SGD+Cosine. It is straightforward that the reconstruction attacks become harder with larger batch sizes. To fairly evaluate the defenses, we consider the strongest attacks, where the batch size is 1.

\noindent\textbf{Defense implementation.} We adopt the data augmentation library \cite{pool}, which contains 50 different transformations. We consider a policy with at most 3 functions concatenated. It is denoted as $i-j-k$, where $i$, $j$, and $k$ are the function indexes from \cite{pool}. Note that index values can be the same, indicating the same function is applied multiple times. We implement the following defenses as baselines.
\begin{packeditemize}
\item \emph{Gaussian/Laplacian}: using differential privacy to obfuscate the gradients with Gaussian or Laplacian noise. For instance, Gaussian($10^{-3}$) suggests a noise scale of $N(0, 10^{-3})$.
\item \emph{Pruning}: adopting the layer-wise pruning technique~\cite{dutta2019discrepancy} to drop parameter gradients whose absolute values are small. For instance, a compression ratio of 70\% means that for each layer, the gradients are set to zero if their absolute values rank after the top-$30\%$.
\item \emph{Random augmentation}: we randomly sample transformation functions from \cite{pool} to form a policy. For each experiment, we apply 10 different random policies to obtain the average results.
\end{packeditemize}
We adopt PSNR to measure the visual similarity between the attacker's reconstructed samples and the transformed samples, as a measure of the attack effect. We measure the trained model's accuracy over the corresponding validation dataset to denote the model performance.

\noindent\textbf{Testbed configuration.} We adopt the PyTorch framework~\cite{paszke2019pytorch} to realize all the implementations. All our experiments are conducted on a server equipped with one NVIDIA Tesla V100 GPU and 2.2GHz Intel CPUs.
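For reference, the two gradient-perturbation baselines can be sketched as follows (a minimal sketch; treating the Gaussian noise scale as a standard deviation is our reading of the $N(0, 10^{-3})$ notation):

```python
import numpy as np

def gaussian_defense(grads, scale=1e-3, rng=None):
    # Differential-privacy style baseline: add Gaussian noise to each
    # layer's gradient (scale is treated as the standard deviation here).
    rng = rng or np.random.default_rng(0)
    return [g + rng.normal(0.0, scale, g.shape) for g in grads]

def pruning_defense(grads, ratio=0.7):
    # Layer-wise pruning baseline: for each layer, zero the gradients whose
    # absolute values fall outside the top-(1 - ratio) fraction.
    pruned = []
    for g in grads:
        k = int(round((1 - ratio) * g.size))
        if k == 0:
            pruned.append(np.zeros_like(g))
            continue
        thresh = np.sort(np.abs(g), axis=None)[-k]
        pruned.append(np.where(np.abs(g) >= thresh, g, 0.0))
    return pruned
```

With a compression ratio of 70\%, `pruning_defense` keeps only the top-30\% of each layer's gradient entries by magnitude, matching the baseline description above.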
\subsection{Search and Training Overhead} \noindent\textbf{Search cost.} For each transformation policy under evaluation, we calculate the average $S_{pri}$ of 100 images randomly sampled from the validation set\footnote{The first 100 images in the validation set are used for attack evaluation, not for $S_{pri}$ calculation.}. We also calculate $S_{acc}$ with 10 forward-backward rounds. We run 10 search jobs in parallel on one GPU. Each policy can be evaluated within 1 minute. Evaluation of all $C_{max}=1,500$ policies can be completed within 2.5 hours. The entire search overhead is very low. In contrast, the attack time of reconstructing 100 images using \cite{geiping2020inverting} is about 10 GPU hours. \noindent\textbf{Training cost.} Applying the searched policies to the training samples can be conducted offline, so we focus on the online training performance. We train the ResNet20 model on CIFAR100 with 200 epochs. Figure \ref{fig:training-analysis} reports the accuracy and loss over the training and validation sets with and without our transformation policies. We can observe that although the transformation policies slightly slow down the convergence speed on the training set, the convergence speeds on the validation set are identical. This indicates that the transformations incur negligible overhead to the training process.
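The search procedure above can be outlined in a few lines. This is a hypothetical sketch in which `s_pri` and `s_acc` are cheap stand-ins for the paper's $S_{pri}$ and $S_{acc}$ metrics (the real scores require attack and training runs):

```python
import random

NUM_TRANSFORMS = 50   # size of the augmentation library
C_MAX = 1500          # candidate budget reported above

def s_pri(policy):
    # hypothetical stand-in for S_pri (lower = less leakage from gradients)
    return sum(policy) % 17

def s_acc(policy):
    # hypothetical stand-in for S_acc (higher = better accuracy proxy)
    return -(max(policy) - min(policy))

def search(seed=0):
    """Sample C_MAX candidate 3-function policies and keep the best one."""
    rng = random.Random(seed)
    candidates = [tuple(rng.randrange(1, NUM_TRANSFORMS + 1) for _ in range(3))
                  for _ in range(C_MAX)]
    # rank by the privacy score first; break ties with the accuracy proxy
    return min(candidates, key=lambda p: (s_pri(p), -s_acc(p)))

print("-".join(map(str, search())))  # a policy in the i-j-k notation
```

Because every candidate is scored independently, the evaluations parallelize trivially across search jobs, matching the setup described above.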
\begin{figure} \centering \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[ width=\linewidth]{ figs/training/accuracy.pdf} \caption{Accuracy} \label{fig:accuracy} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[ width=\linewidth]{ figs/training/losses.pdf} \caption{Loss} \label{fig:loss} \end{subfigure} \caption{Model performance of ResNet20 on CIFAR100 during the training process.} \label{fig:training-analysis} \end{figure} \subsection{Effectiveness of the Searched Policies} As an example, Figure~\ref{fig:overview} illustrates the visual comparison of the reconstructed images with and without the searched policies under the Adam+Cosine attack \cite{geiping2020inverting} for the two datasets. We observe that without any transformations, the adversary can recover the images with very high fidelity (row 2). In contrast, after the training samples are transformed (row 3), the adversary can hardly obtain any meaningful information from the recovered images (row 4). We have similar results for other attacks as well. 
\begin{figure*} [t] \begin{center} \begin{tabular}{c@{\hspace{0.01\linewidth}}c@{\hspace{0.01\linewidth}}c@{\hspace{0.03\linewidth}} c@{\hspace{0.01\linewidth}}c@{\hspace{0.01\linewidth}}c@{\hspace{0.03\linewidth}} c@{\hspace{0.01\linewidth}}c@{\hspace{0.01\linewidth}}c@{\hspace{0.03\linewidth}} c@{\hspace{0.01\linewidth}}c@{\hspace{0.01\linewidth}}c@{\hspace{0.01\linewidth}} } \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/normal/ori_21.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/normal/ori_22.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/normal/ori_98.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/normal/ori_13.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/normal/ori_68.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/normal/ori_82.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/normal/ori_8.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/normal/ori_56.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/normal/ori_17.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/normal/ori_11.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/normal/ori_21.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/normal/ori_62.jpg} \\ \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/normal/rec_21.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/normal/rec_22.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/normal/rec_98.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/normal/rec_13.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/normal/rec_68.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/normal/rec_82.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/normal/rec_8.jpg}& \includegraphics[ 
width=0.07\linewidth]{figs/FashionMnist-ResNet/normal/rec_56.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/normal/rec_17.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/normal/rec_11.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/normal/rec_21.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/normal/rec_62.jpg} \\ 13.97dB & 12.14dB & 16.32dB & 15.61dB & 11.96dB & 14.83dB & 10.42dB & 9.65dB & 13.57dB & 9.52dB & 9.49dB & 9.17dB \\ \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/aug/ori_21.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/aug/ori_22.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/aug/ori_98.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/aug/ori_13.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/aug/ori_68.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/aug/ori_82.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/aug/ori_8.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/aug/ori_56.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/aug/ori_17.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/aug/ori_11.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/aug/ori_21.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/aug/ori_62.jpg} \\ \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/aug/rec_21.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/aug/rec_22.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ResNet/aug/rec_98.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/aug/rec_13.jpg}& \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/aug/rec_68.jpg} & \includegraphics[ width=0.07\linewidth]{figs/cifar100-ConvNet/aug/rec_82.jpg} & 
\includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/aug/rec_8.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/aug/rec_56.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ResNet/aug/rec_17.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/aug/rec_11.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/aug/rec_21.jpg} & \includegraphics[ width=0.07\linewidth]{figs/FashionMnist-ConvNet/aug/rec_62.jpg} \\ 6.94dB & 5.62dB & 7.64dB & 6.97dB & 6.74dB & 7.02dB & 7.72dB & 6.56dB & 8.24dB & 6.63dB & 7.02dB & 6.04dB \\ \\ \multicolumn{3}{l}{(a) CIFAR100 with ResNet20} & \multicolumn{3}{l}{(b) CIFAR100 with ConvNet} & \multicolumn{3}{l}{(c) F-MNIST with ResNet20} & \multicolumn{3}{l}{(d) F-MNIST with ConvNet} \\ \end{tabular} \end{center} \vspace{-1em} \caption{Visual results and the PSNR values of the reconstruction attacks \cite{geiping2020inverting} with and without our defense. Row 1: clean samples. Row 2: reconstructed samples without transformation. Row 3: transformed samples. Row 4: reconstructed samples with transformation. The adopted transformations are the corresponding \emph{Hybrid} policies in Table \ref{tab:overview}. } \label{fig:overview} \end{figure*} Table~\ref{tab:overview} reports the quantitative results of Adam+Cosine attacks and model accuracy. For each dataset and architecture, we consider the model training with no transformations, random selected policies, the top-2 of the searched policies and their hybrid. We observe that randomly selected policies fail to invalidate reconstruction attacks. In contrast, the searched policies can effectively reduce the deep leakage from the gradients. The hybrid of policies exhibits higher generalization ability on the final model. 
\begin{table*} \centering \captionsetup[subtable]{position = below} \resizebox{0.21\textwidth}{!}{ \begin{subtable}{0.22\linewidth} \centering \begin{tabular}{c|cc} \hline \textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\ \hline None & 13.88 & 76.88 \\ Random & 11.41 & 73.94 \\ 3-1-7 & 6.579 & 70.56 \\ 43-18-18 & 8.56 & 77.27 \\ Hybrid & 7.64 & 77.92 \\ \hline \end{tabular} \caption{CIFAR100 with ResNet20} \label{tab:overview-cifar1} \end{subtable}}% \hspace*{2em} \resizebox{0.21\textwidth}{!}{ \begin{subtable}{0.22\linewidth} \centering \begin{tabular}{c|cc} \hline \textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\ \hline None & 13.07 & 70.13 \\ Random & 12.18 & 69.91 \\ 21-13-3 & 5.76 & 66.98 \\ 7-4-15 & 7.75 & 69.67 \\ Hybrid & 6.83 & 70.27 \\ \hline \end{tabular} \caption{CIFAR100 with ConvNet} \label{tab:overview-cifar2} \end{subtable}}% \hspace*{2em} \resizebox{0.21\textwidth}{!}{ \begin{subtable}{0.22\linewidth} \centering \begin{tabular}{c|cc} \hline \textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\ \hline None & 10.04 & 95.03 \\ Random & 9.23 & 91.16 \\ 19-15-45 & 7.01 & 91.33 \\ 2-43-21 & 7.75 & 89.41 \\ Hybrid & 7.60 & 92.23 \\ \hline \end{tabular} \caption{F-MNIST with ResNet20} \label{tab:overview-mnist1} \end{subtable}}% \hspace*{2em} \resizebox{0.21\textwidth}{!}{ \begin{subtable}{0.22\linewidth} \centering \begin{tabular}{c|cc} \hline \textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\ \hline None & 9.12 & 94.25 \\ Random & 8.83 & 90.18 \\ 42-28-42 & 7.01 & 91.33 \\ 14-48-48 & 6.75 & 90.56 \\ Hybrid & 6.94 & 91.35 \\ \hline \end{tabular} \caption{F-MNIST with ConvNet} \label{tab:overview-mnist2} \end{subtable}}% \captionsetup[table]{position=bottom} \caption{PSNR (dB) and model accuracy (\%) of different transformation configurations for each architecture and dataset.} \label{tab:overview} \end{table*} \iffalse \begin{table}[t] \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{c c c c c} \toprule Dataset & Architecture & Transformation & PSNR &
ACC \\ \midrule \multirow{8}{*}{CIFAR100} & \multirow{4}{*}{ResNet20} & No & 13.88 & 76.88 \\ & & Random & 12.40 & 73.94 \\ & & 3-1-7 & 6.579 & 70.56 \\ & & 43-18-18 & 8.56 & 77.27 \\ & & Hybrid & 7.64 & 77.92 \\ \cmidrule{2-5} & \multirow{4}{*}{ConvNet} & No & 13.07 & 70.13 \\ & & Random & 12.18 & 69.91 \\ & & 21-13-3 & 5.76 & 66.98 \\ & & 7-4-15 & 7.75 & 69.67 \\ & & Hybrid & 6.83 & 70.27 \\ \midrule \multirow{8}{*}{\makecell{Fashion \\MNIST}} & \multirow{4}{*}{ResNet20} & No & 10.04 & 95.03 \\ & & Random & 923 & 91.16 \\ & & 19-15-45& 7.01 & 91.33 \\ & & 2-43-21 & 7.75 & 89.41 \\ & & Hybrid & 7.60 & 92.23 \\ \cmidrule{2-5} & \multirow{4}{*}{ConvNet} & No & 9.12 & 94.25 \\ & & Random & 8.83 & 90.18 \\ & & 42-28-42 & 7.01 & 91.33 \\ & & 14-48-48 & 6.75 & 90.56 \\ & & Hybrid & 6.94 & 91.35 \\ \bottomrule \end{tabular} } \end{center} \caption{Privacy and accuracy of different transformation configurations for each architecture and dataset.} \label{tab:overview} \end{table} \fi Table \ref{tab:attack} reports the PSNR values of the hybrid strategy against different reconstruction attacks and their variants. Compared with the training process without any defenses, the hybrid of searched transformations can significantly reduce the image quality of the reconstructed images, and eliminate information leakage in different attacks. 
\begin{table}[t] \begin{center} \small \resizebox{0.47\textwidth}{!}{ \begin{tabular}{c c c|| c c c } \toprule \textbf{Attack} & \textbf{None} & \textbf{Hybrid} & \textbf{Attack} & \textbf{None} & \textbf{Hybrid} \\ \midrule LBFGS+L2 & 6.93 & 4.79 & LBFGS+COS & 10.33 & 6.16 \\ \midrule Adam+Cosine & 13.88 & 7.64 & Adam+L2 & 10.55 & 7.61 \\ \midrule Adam+L1 & 9.99 & 6.97 & SGD+COS & 14.04 & 7.71 \\ \bottomrule \end{tabular}} \end{center} \caption{The PSNR values (dB) between the reconstructed and transformed images under different attack techniques.} \label{tab:attack} \end{table} \noindent\textbf{Comparisons with other defenses.} We also compare our solution with state-of-the-art privacy-preserving methods proposed in prior works. We consider model pruning with different compression ratios, and differential privacy with different noise scales and types. Table~\ref{tab:defense} illustrates the comparison results. We observe that these solutions can hardly reduce the PSNR values, and the model accuracy decreases significantly with larger perturbations. These results are consistent with the conclusion in \cite{zhu2019deep}. In contrast, our solution can significantly degrade the quality of recovered images, while maintaining high model accuracy.
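For reference, the two perturbation baselines compared above can be sketched as follows; reading the Gaussian scale as the noise variance is our assumption:

```python
import numpy as np

def prune_layer(grad, compression=0.70):
    """Layer-wise pruning: zero the smallest-|value| fraction of the gradients.

    Ties at the threshold magnitude are all dropped."""
    flat = np.abs(grad).ravel()
    k = int(round(flat.size * compression))    # number of entries to drop
    if k == 0:
        return grad.copy()
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    out = grad.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

def add_gaussian(grad, scale=1e-3, seed=0):
    """Gaussian baseline: perturb the gradients with N(0, scale) noise."""
    rng = np.random.default_rng(seed)
    return grad + rng.normal(0.0, np.sqrt(scale), size=grad.shape)

g = np.arange(1.0, 11.0)
print(prune_layer(g))  # only the top-30% magnitudes survive
```

Both baselines operate purely on the shared gradients, which is why large perturbations are needed to hide the input and why accuracy drops accordingly.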
\begin{table}[t] \begin{center} \resizebox{0.27\textwidth}{!}{ \begin{tabular}{c c c} \toprule \textbf{Defense} & \textbf{PSNR} & \textbf{ACC} \\ \midrule Pruning ($70\%$) & 12.00 & 77.12 \\ Pruning ($95\%$) & 10.07 & 70.12 \\ Pruning ($99\%$) & 10.93 & 58.33 \\ \midrule Laplacian ($10^{-3}$) & 11.85 & 74.12 \\ Laplacian ($10^{-2}$) & 9.67 & 39.59 \\ Gaussian ($10^{-3}$) & 12.71 & 75.67 \\ Gaussian ($10^{-2}$) & 11.44 & 48.2 \\ \midrule Hybrid & 7.64 & 77.92 \\ \bottomrule \end{tabular}} \end{center} \vspace{-1em} \caption{Comparisons with existing defense methods under the Adam+Cosine attack.} \label{tab:defense} \vspace{-1em} \end{table} \noindent\textbf{Transferability.} In the above experiments, we search the optimal policies for each dataset. In fact, the searched transformations have high transferability across different datasets. To verify this, we apply the policies searched from CIFAR100 to the F-MNIST tasks, and Table \ref{tab:db-transfer} reports the PSNR and accuracy values. We observe that although these transferred policies are slightly worse than the ones directly searched from F-MNIST, they are still very effective in preserving both privacy and model performance, and much better than the randomly selected policies. This transferability property makes our solution more efficient.
\begin{table} \centering \captionsetup[subtable]{position = below} \resizebox{0.22\textwidth}{!}{ \begin{subtable}{0.5\linewidth} \centering \begin{tabular}{c|cc} \hline \textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\ \hline None & 10.04 & 95.03 \\ 3-1-7 & 7.5 & 87.95 \\ 43-18-18 & 8.13 & 91.29 \\ Hybrid & 8.14 & 91.49 \\ \hline \end{tabular} \caption{F-MNIST with ResNet20} \label{tab:db-transfer1} \end{subtable}} \hspace*{1em} \resizebox{0.22\textwidth}{!}{ \begin{subtable}{0.5\linewidth} \centering \begin{tabular}{c|cc} \hline \textbf{Policy} & \textbf{PSNR} & \textbf{ACC} \\ \hline None & 9.12 & 94.25 \\ 21-13-3 & 7.51 & 74.81 \\ 7-4-15 & 7.68 & 88.29 \\ Hybrid & 7.11 & 87.51 \\ \hline \end{tabular} \caption{F-MNIST with ConvNet} \label{tab:db-transfer2} \end{subtable}} \captionsetup[table]{position=bottom} \caption{Transferability results: applying the same policies from CIFAR100 to F-MNIST.} \label{tab:db-transfer} \end{table} \subsection{Explanations about the Transformation Effects} \label{sec:delve-into-augmentation} In this section, we further analyze the mechanisms of the transformations that can invalidate reconstruction attacks. We first investigate which kinds of transformations are particularly effective in obfuscating input samples. Figure~\ref{fig:transform-dist} shows the privacy score of each transformation. The five transformations with the lowest scores are (red bars in the figure): 3rd [horizontal shifting, 9], 15th [brightness, 9], 18th [contrast, 7], 26th [brightness, 6] and 1st [contrast, 6]; where the parameters inside the brackets are the magnitudes of the transformations. These functions are commonly selected in the optimal policies. Horizontal shifting achieves the lowest score, as it introduces a black area, which undermines the quality of the recovered image during the optimization. Contrast and brightness aim to modify the lightness of an image.
These operations can blur the local details, which also increases the difficulty of image reconstruction. Overall, the selected privacy-preserving transformations can distort the details of the images, while maintaining the semantic information. \begin{figure} \centering \vspace{-1em} \includegraphics[ width=0.85\linewidth]{figs/trans_dist/trans_dist.pdf} \caption{Privacy scores of the 50 transformation functions in the augmentation library.} \vspace{-1em} \label{fig:transform-dist} \end{figure} Next, we explore the attack effects at different network layers. We compare three strategies: (1) no transformation; (2) random transformation policy; (3) searched transformation policy. Figure \ref{fig:deep-shallow} demonstrates the similarity between the gradient of the reconstructed samples and the actual gradient for two shallow layers (a) and two deep layers (b). We can observe that at shallow layers, the similarity scores converge to 0.7 when no policy or a random policy is applied. In contrast, the similarity score stays at lower values when the optimal policy is used. This indicates that the optimal policy makes it difficult to reconstruct low-level visual features of the input, e.g., color, shape, and texture. The similarity scores for all three cases are almost the same at deep layers. This reveals that the optimal policy has negligible impact on the semantic information of the images used for classification, and the model performance is thus maintained.
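To make the effect of the lowest-scoring transformations concrete, here is a minimal numpy sketch of horizontal shifting (which introduces the black area noted above), brightness, and contrast on a $[0,1]$-valued image; the magnitudes are illustrative, not the library's:

```python
import numpy as np

def hshift(img, frac=0.3):
    """Shift right by a fraction of the width, filling the gap with black."""
    w = img.shape[1]
    s = int(w * frac)
    out = np.zeros_like(img)
    out[:, s:] = img[:, :w - s]
    return out

def brightness(img, factor=1.5):
    return np.clip(img * factor, 0.0, 1.0)

def contrast(img, factor=1.5):
    mean = img.mean()
    return np.clip((img - mean) * factor + mean, 0.0, 1.0)

policy = [hshift, brightness, contrast]   # functions concatenated left to right
img = np.random.default_rng(0).random((8, 8, 3))
for t in policy:
    img = t(img)
assert np.all(img[:, 0] == 0)  # the shifted-in black area distorts local details
```

Each function distorts low-level statistics (position, lightness) while leaving the object semantics largely intact, which matches the layer-wise observations above.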
\begin{figure} \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[ width=0.49\linewidth]{figs/layer-distribution/layers.0.0.conv1.weight.pdf} \includegraphics[ width=0.49\linewidth]{figs/layer-distribution/layers.0.1.conv2.weight.pdf} \caption{Shallow layers} \label{fig:shallow} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[ width=0.49\linewidth]{figs/layer-distribution/layers.1.0.conv2.weight.pdf} \includegraphics[ width=0.49\linewidth]{figs/layer-distribution/layers.2.2.conv2.weight.pdf} \caption{Deep layers} \label{fig:Deep} \end{subfigure} \caption{Gradient similarity during the reconstruction optimization process, for CIFAR100 with ResNet20. } \vspace{-1em} \label{fig:deep-shallow} \end{figure} \section{Introduction} A collaborative learning system enables multiple participants to jointly train a shared Deep Learning (DL) model for a common artificial intelligence task \cite{yang2019federated,melis2019exploiting,guo2020towards}. Typical collaborative systems are distributed systems such as federated learning systems, where each participant iteratively calculates the local gradients based on his own training dataset and shares them with other participants to approach the ideal model. This collaborative mode can significantly improve the training speed, model performance and generalization. Moreover, it also protects the training data privacy, as participants do not need to release their sensitive data during the training phase. Due to these advantages, collaborative learning has become promising in many scenarios, e.g., smart manufacturing \cite{hao2019efficient}, autonomous driving \cite{niknam2020federated}, digital health \cite{brisimi2018federated}, etc. Although each participant does not disclose the training dataset, he has to share the gradients with others, which can indirectly leak information about the sensitive data.
Past works \cite{hitaj2017deep,melis2019exploiting,nasr2019comprehensive} demonstrated the possibility of membership inference and property inference attacks in collaborative learning. A more serious threat is the \emph{reconstruction attack} \cite{zhu2019deep,zhao2020idlg,geiping2020inverting}, where an adversary can recover the exact values of samples from the shared gradients with high fidelity. This attack is very practical under realistic and complex circumstances (e.g., large-size images, batch training). Due to the severity of this threat, an effective and practical defense solution is desirable to protect the privacy of collaborative learning. Common privacy-aware solutions \cite{zhu2019deep,wei2020framework} attempt to increase the difficulty of input reconstruction by obfuscating the gradients. However, the obfuscation magnitude is bounded by the performance requirement of the DL task: a large-scale obfuscation can hide the input information, but it also impairs the model accuracy. The effectiveness of various techniques (e.g., noise injection, model pruning) against reconstruction attacks has been empirically evaluated \cite{zhu2019deep}. Unfortunately, they cannot achieve a satisfactory trade-off between data privacy and model usability, and hence become less practical. Motivated by the limitations of existing solutions, this paper aims to solve the privacy issue from a different perspective: \emph{obfuscating the training data to make the reconstruction difficult or infeasible.} The key insight of our strategy is to \emph{repurpose data augmentation techniques for privacy enhancement}. A variety of transformation approaches have been designed to improve the model performance and generalization. We aim to leverage certain transformation functions to preprocess the training sets and then compute the gradients on the transformed samples, which can prevent malicious participants from reconstructing the transformed or original samples.
Mitigating reconstruction attacks via data augmentation is challenging. First, existing image transformation functions are mainly used for performance and generalization improvement. It is unknown which ones are effective in reducing information leakage. Second, conventional approaches apply these transformations to augment the training sets, where original data samples are still kept for model training. This makes it relatively easy to maintain the model performance. In contrast, to achieve our goal, we have to abandon the original samples, and only use the transformed ones for training, which can impair the model accuracy. We introduce a systematic approach to overcome these challenges. Our goal is to automatically discover an ensemble of effective transformations from a large collection of commonly-used data augmentation functions. This ensemble then forms a transformation policy, to preserve the privacy of collaborative learning. Due to the large search space and training overhead, it is computationally infeasible to evaluate the privacy and performance impacts of each possible policy. Instead, we design two novel metrics to quantify the policies without training a complete model. Combined with our new search algorithm, these metrics can identify the optimal policies within 2.5 GPU hours. The identified transformation policies exhibit a strong capability of preserving privacy while maintaining the model performance. They also enjoy the following properties: (1) The policies are general and able to defeat different variants of reconstruction attacks. (2) The input transformations are performed locally without modifying the training pipeline. They are applicable to any collaborative learning systems and algorithms. (3) The transformations are lightweight with negligible impact on the training efficiency. (4) The policies have high transferability: the optimal policy searched from one dataset can be directly applied to other datasets as well.
\section{Related Work} \subsection{Reconstruction Attacks} In collaborative learning, reconstruction attacks aim to recover the training samples from the shared gradients. Zhu et al. \cite{zhu2019deep} first formulated this attack as an optimization process: the adversarial participant searches for the optimal samples in the input space that can best match the gradients. L-BFGS~\cite{liu1989limited} was adopted to realize this attack. Following this work, several improved attacks were further proposed to enhance the attack effects and reduce the cost. For instance, Zhao et al. \cite{zhao2020idlg} first extracted the training labels from the gradients, and then recovered the training samples with higher convergence speed. Geiping et al. \cite{geiping2020inverting} adopted the cosine similarity as the distance function and Adam as the optimizer to solve the optimization problem, which can yield more precise reconstruction results. He et al. \cite{he2019model,he2020attacking} proposed reconstruction attacks against collaborative inference systems, which are not considered in this paper. \subsection{Existing Defenses and Limitations} One straightforward defense strategy is to obfuscate the gradients before releasing them, in order to make the reconstruction difficult or infeasible. Differential privacy is a theoretical framework to guide the randomization of the exchanged information~\cite{abadi2016deep, lecuyer2019certified, phan2020scalable,guo2020differentially}. For instance, Zhu et al. \cite{zhu2019deep} tried to add Gaussian/Laplacian noise guided by differential privacy to the gradients, and to compress the model with gradient pruning. Unfortunately, there exists an unsolvable conflict between privacy and usability in these solutions: a large-scale obfuscation can compromise the model performance while a small-scale obfuscation still leaks a certain amount of information.
Such ineffectiveness of these methods was validated in \cite{zhu2019deep}, and will be further confirmed in this paper (Table \ref{tab:defense}). Wei et al. \cite{wei2020framework} proposed to adjust the hyper-parameters (e.g., batch size, loss or distance function), which also has limited impact on the attack results. An alternative direction is to design new collaborative learning systems to thwart the reconstruction attacks. Zhao et al. \cite{zhao2020privatedl} proposed a framework that transfers sensitive samples to public ones with privacy protection, based on which the participants can collaboratively update their local models with noise-preserving labels. Fan et al. \cite{fan2020rethinking} designed a secret polarization network for each participant to produce secret losses and calculate the gradients. These approaches require all participants to follow the new training pipelines or optimization methods. They cannot be directly applied to existing collaborative implementations. This significantly restricts their practicality. \section{Problem Statement} \subsection{System Model} We consider a standard collaborative learning system where all participants jointly train a global model $M$. Each participant owns a private dataset $D$. Let $\mathcal{L}, W$ be the loss function and the parameters of $M$, respectively. At each iteration, every participant randomly selects a training sample $(x, y)$, calculates the loss $\mathcal{L}(x, y)$ by forward propagation and then the gradient $\bigtriangledown W(x, y) = \frac{\partial \mathcal{L}(x, y)}{\partial W}$ using backward propagation. The participants can also use mini-batch SGD, where a mini-batch of samples is randomly selected to compute the gradient at each iteration. Gradients need to be consolidated at each iteration. In a centralized system, a parameter server aggregates all the gradients, and sends the updated one to each participant.
In a decentralized system, each participant aggregates the gradients from his neighbors, and then broadcasts the results. \subsection{Attack Model} We consider an honest-but-curious adversarial entity in the collaborative learning system, who receives other participants' gradients in each iteration, and tries to reconstruct the private training samples from them. In the centralized mode, this adversary is the parameter server, while in the decentralized mode, the adversary can be an arbitrary participant. Common reconstruction techniques \cite{zhu2019deep,zhao2020idlg,geiping2020inverting} adopt different optimization algorithms to extract training samples from the gradients. Specifically, given a gradient $\bigtriangledown W(x, y)$, the attack goal is to discover a pair of sample and label $(x', y')$, such that the corresponding gradient $\bigtriangledown W(x', y')$ is very close to $\bigtriangledown W(x, y)$. This can be formulated as an optimization problem of minimizing the objective: \newcommand{\argmin}{\operatornamewithlimits{argmin}} \begin{equation} \label{eq:attack} x^*, y^* = \argmin_{x', y'} \quad ||\bigtriangledown W(x, y) - \bigtriangledown W(x', y')||, \end{equation} where $||\cdot||$ is a norm for measuring the distance between the two gradients. A reconstruction attack succeeds if the identified $x^*$ is visually similar to $x$. This can be quantified by the metric of Peak Signal-to-Noise Ratio (PSNR)~\cite{hore2010image}. Formally, a reconstruction attack is defined as follows: \begin{definition}[($\epsilon, \delta$)-Reconstruction Attack] Let ($x^*$, $y^*$) be the solution to Equation \ref{eq:attack}, and ($x$, $y$) be the target training sample that produces $\bigtriangledown W(x, y)$. This process is called an ($\epsilon, \delta$)-reconstruction attack if the following property holds: \begin{equation} \emph{Pr}[\emph{\texttt{PSNR}}(x^*, x)\geq \epsilon] \geq 1-\delta.
\end{equation} \end{definition} \iffalse \tianwei{It is not necessary to introduce the perturbation method, since it only serves as a baseline} One straightforward defense strategy is to share a perturbed gradient $\bigtriangledown W^*$ with other participants by adding a perturbation $\triangle W$ to $\bigtriangledown W$, $$\bigtriangledown W^* = \bigtriangledown W + \triangle W.$$ The perturbation is expected to have the following two properties. \begin{itemize} \item Privacy-preserving: with the perturbation, the adversary fails to reconstruct the corresponding training points from the perturbed gradient at the current iteration. \item Performance-preserving: the prediction accuracy of the global model with perturbation should be the same as the one without perturbation. Specifically, let $W^*$ be the parameters that are trained with perturbation. $P$ be the prediction accuracy of $M$ on the training data distribution $D$, i.e., \begin{myalign} P_W = Pr\left(M_W(x) = y, (x,y) \thicksim D\right) \end{myalign} We expect $P_W \leq P_{W^*}$. \end{itemize} However, existing defense algorithms fail to find such perturbation. In the following, we present a systemic and privacy-enhanced data augmentation method for perturbation generation to meet the above requirements. \fi \input{algorithm} \input{experiment} \section{Discussions and Future Work} \bheading{Adaptive attack.} Our solution prevents image reconstruction via data augmentation techniques. Although the evaluations show it is effective against existing attacks, a more sophisticated adversary may try to bypass our defense from two aspects. First, instead of starting from a randomly initialized image, he may guess the content property or class representatives of the target sample, and start the reconstruction from an image with certain semantic information. The success of such attacks depends on the probability of a successful guess, which becomes lower with higher complexity or variety of images. 
Second, the adversary may design new attack techniques that do not rely on optimizing the distance between the real and dummy gradients. We leave these advanced attacks as future work. \bheading{Defending other domains.} In this paper, we focus on the computer vision domain and image classification tasks. Reconstruction attacks may also occur in other domains, e.g., natural language processing \cite{zhu2019deep}, where the searched image transformations cannot be applied. However, it is possible to use text augmentation techniques \cite{kobayashi2018contextual,wei2019eda} (e.g., deletion, insertion, shuffling, synonym replacement) to make the sensitive text less leaky without losing its semantics. Future work will focus on the design of an automatic search method for privacy protection of NLP tasks. \section{Conclusion} In this paper, we devise a novel methodology to automatically and efficiently search for data augmentation policies, which can prevent information leakage from the shared gradients. Our extensive evaluations demonstrate that the identified policies can defeat existing reconstruction attacks with negligible overhead. These policies also enjoy high transferability across different datasets, and applicability to different learning systems. We expect our search method can be adopted by researchers and practitioners to identify more effective policies when new data augmentation techniques are designed in the future. \section{Acknowledgement} We thank the anonymous reviewers for their valuable comments. This research was conducted in collaboration with SenseTime. This work is supported by A*STAR through the Industry Alignment Fund --- Industry Collaboration Projects Grant. It is also supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2020-019). \newpage {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \label{sec:intro} \sloppy Recently, Deep Neural Networks have achieved remarkable success in computer vision tasks such as image classification~\cite{krizhevsky2012imagenet} and object detection~\cite{ren2015faster,lecun2015deep}. However, their cumbersome inference hinders broader deployment. To deploy deep models on resource-constrained edge devices, researchers have proposed several neural network compression paradigms, \textit{e.g.,} knowledge distillation~\cite{hinton2015distilling,heo2019comprehensive}, network pruning~\cite{lecun1989optimal,han2015deep} and network quantization~\cite{hubara2016binarized,qin2020forward}. Among network quantization methods, network binarization~\cite{hubara2016binarized} stands out, as it quantizes weights and activations (\textit{i.e.}~intermediate feature maps) to the extreme of $\pm 1$. Under this framework, the full-precision (FP) network is compressed by a factor of 32, and the time-consuming inner-product operations are replaced with efficient Xnor-bitcount operations. However, BNNs can hardly achieve performance comparable to the original models due to the loss of FP weights and activations. A major reason for the performance drop is the inferior robustness caused by the error amplification effect, in which the binarization operation degrades the distance induced by amplified noise~\cite{lin2019defensive}. The destructive manner of $\textit{sgn}(\cdot)$ severely corrupts the robustness of the BNN, and thus undermines its representation capacity~\cite{bulat2019xnor,he2020proxybnn,liu2020reactnet}. As theoretical works have validated, robustness is a significant property of functions (neural networks in our context), which further influences their generalization ability~\cite{luxburg2004distance,bartlett2017spectrally}.
In the above-mentioned binarization works, researchers investigate the effectiveness of their methods via ill-defined notions of function robustness without solid theoretical support, such as observing the visualized distributions of weights and activations~\cite{he2020proxybnn,lin2020rotated,liu2020reactnet,lin2019defensive}. However, they rarely introduce into BNNs the well-defined mathematical property for measuring the robustness of functions: Lipschitz continuity. Lipschitz continuity has proven to be a powerful and rigorous tool for systematically analyzing deep learning models. For instance, Miyato~\textit{et al.} propose the well-known Spectral Normalization~\cite{yoshida2017spectral,miyato2018spectral}, which utilizes the Lipschitz constant to regularize network training; initially designed for GANs, it was later extended to other network architectures with great success~\cite{neyshabur2017exploring}. Lin~\textit{et al.}~\cite{lin2019defensive} design a Lipschitz-based regularization method for (low-bit) network quantization and verify that Lipschitz continuity is significantly related to the robustness of low-bit networks. However, simply combining those existing Lipschitz-based regularization methods with binary (1-bit) neural networks is sub-optimal, as exclusive properties of BNNs, \textit{e.g.}, the extreme sparsity of the binary weight matrix~\cite{hubara2016binarized}, impede calculating the singular values, which is the core module of those Lipschitz-based methods. To tackle this problem, we analyze the association between the structures and the Lipschitz constant of BNNs. Motivated by this analysis, we design a new approach to effectively retain the Lipschitz constant of a BNN and keep it close to that of its latent FP counterpart.
Particularly, we develop a Lipschitz Continuity Retention Matrix ($\mathbf{RM}$) for each block and calculate the spectral norm of $\mathbf{RM}$ via the iterative power method to avoid the high complexity of calculating exact Lipschitz constants. It is worth noting that the designed loss function for retaining the Lipschitz continuity of BNNs is differentiable \emph{w.r.t.} the binary weights. \noindent Overall, the contributions of this paper are three-fold: \begin{itemize}[leftmargin=*, topsep=0pt, partopsep=0pt, itemsep=1pt] \item We propose a novel network binarization framework, named \textbf{L}ipschitz \textbf{C}ontinuity \textbf{R}etained Binary Neural Network (\textbf{\emph{LCR}}-BNN), to enhance the robustness of the binary network optimization process. To the best of our knowledge, we are the first to explore Lipschitz continuity for enhancing the representation capacity of BNNs; \item We devise a Lipschitz Continuity Retention Matrix to approximate the Lipschitz constant with activations (instead of directly using weights, as SN~\cite{miyato2018spectral} and DQ~\cite{lin2019defensive} do) in the BNN forward pass; \item By adding our designed regularization term to existing state-of-the-art methods, we observe enhanced robustness, validated on ImageNet-C, and promising accuracy improvements on the CIFAR and ImageNet datasets. \end{itemize} \section{Related Work} \label{related} \sloppy \subsection{Network Binarization} In the pioneering work on BNNs, Hubara \textit{et al.}~\cite{hubara2016binarized} quantize weights and activations to $\pm 1$ via the sign function. Due to the non-differentiability of the sign function, the straight-through estimator (STE)~\cite{bengio2013estimating} is introduced to approximate its derivative. Inspired by this archetype, numerous researchers have delved into the field of BNNs and proposed modules to improve their performance. For instance, Rastegari \textit{et 
al.}~\cite{rastegari2016xnor} reveal that the quantization error between the FP weights and the corresponding binarized weights is one of the obstacles degrading the representation capability of BNNs. They then propose a scaling factor, calculated from the L1-norm, for both weights and activations to minimize the quantization error. XNOR++~\cite{bulat2019xnor} absorbs the idea of the scaling factor and proposes learning both spatial and channel-wise scaling factors to improve performance. Furthermore, Bi-Real~\cite{liu2020bi} proposes double residual connections with full-precision downsampling layers to lessen the information loss. ProxyBNN~\cite{he2020proxybnn} designs a proxy matrix as a basis of the latent parameter space to guide the alignment of the weights with different bits by recovering the smoothness of BNNs. These methods try to lessen the quantization error and investigate their effectiveness from the perspective of model smoothness (normally by visualizing the distribution of weights). A more detailed presentation and history of BNNs can be found in the survey~\cite{qin2020binary}. However, none of them takes into consideration Lipschitz continuity, a well-developed mathematical tool for studying the robustness of functions. Bridging Lipschitz continuity with BNNs, we propose to retain the Lipschitz continuity of BNNs, which serves as a regularization term and further improves the performance of BNNs by strengthening their robustness. \subsection{Lipschitz Continuity in Neural Networks} The Lipschitz constant is an upper bound on the ratio between output variation and input perturbation with respect to a given distance. It is a well-defined metric to quantify the robustness of neural networks to small perturbations~\cite{scaman2018lipschitz}. The Lipschitz constant $\Vert f \Vert_{Lip}$ can also be regarded as a functional norm measuring the Lipschitz continuity of a given function.
Due to this property, the Lipschitz constant is the primary concept used to measure the robustness of functions~\cite{bartlett2017spectrally,luxburg2004distance,neyshabur2017exploring}. In the deep learning era, previous theoretical works~\cite{virmaux2018lipschitz,neyshabur2017exploring} disclose the regularity of deep networks via Lipschitz continuity, and Lipschitz continuity has been widely introduced into many deep learning topics to achieve SoTA performance~\cite{miyato2018spectral,yoshida2017spectral,shang2021lipschitz,zhang2021towards}. For example, in image synthesis, Miyato~\textit{et al.}~\cite{miyato2018spectral,yoshida2017spectral} devise spectral normalization to constrain the Lipschitz constant of the discriminator when optimizing a generative adversarial network, acting as a regularization term that smooths the discriminator function; in knowledge distillation, Shang \textit{et al.}~\cite{shang2021lipschitz} propose to utilize the Lipschitz constant as a form of knowledge to supervise the training process of the student network; in neural network architecture design, Zhang \textit{et al.}~\cite{zhang2021towards} propose a novel $L_{\infty}$-dist network using naturally 1-Lipschitz functions as neurons. The works above highlight the significance of the Lipschitz constant for the expressiveness and robustness of deep models. Particularly, retaining Lipschitz continuity at an appropriate level has proven to be an effective technique for enhancing model robustness. Therefore, this functional information of neural networks, the Lipschitz constant, should be introduced into network binarization to fill the robustness gap between a BNN and its real-valued counterpart. \noindent\textbf{Relation to Spectral Normalization (SN)~\cite{miyato2018spectral}.} We empirically implemented SN in BNNs, but it failed. By analyzing this failure, we conclude that SN is not suitable for BNNs.
The reasons are: (i) One of the key modules of SN is the spectral norm computation based on singular value calculation, which is implemented directly on the weight matrices (\textit{e.g.}, the matrices of convolutional and linear layers). But binarization, which forces each FP weight to $+1$ or $-1$, makes the weight matrix extremely sparse. Thus, applying the existing algorithm to binary matrices fails. (ii) In contrast to normal networks, the forward and backward passes of a BNN are more complex; \textit{e.g.}, FP weights (after backpropagation) and binary weights (after binarization) coexist in the same training iteration. This complexity impedes broader implementation of SN on BNNs, as the number of structures in a BNN exceeds that in a normal network. To tackle these problems, we propose a novel Lipschitz regularization technique tailored to training BNNs. We elaborate further technical comparisons between our method and SN in Section~\ref{sec:difference}. \section{Lipschitz Continuity Retention for BNNs} \label{sec:method} \subsection{Preliminaries} \label{sec:pre} We first define a general neural network with $L$ fully-connected layers (without bias terms for simplicity). This network $f(\mathbf{x})$ can be written as: \begin{equation} f(\mathbf{W}^1,\cdots,\mathbf{W}^L;\mathbf{x}) = (\mathbf{W}^{L}\cdot\sigma\cdot \mathbf{W}^{L-1}\cdot \cdots \cdot\sigma\cdot \mathbf{W}^{1})(\mathbf{x}), \label{eq:1} \end{equation} where $\mathbf{x}$ is the input sample and $\mathbf{W}^{k}\in \mathbb{R}^{d_{k}\times d_{k-1}}$ $(k=1,...,L)$ stands for the weight matrix connecting the $(k-1)$-th and the $k$-th layer, with $d_{k-1}$ and $d_{k}$ representing the sizes of the input and output of the $k$-th network layer, respectively. The function $\sigma(\cdot)$ performs element-wise activation.
\noindent \textbf{Binary Neural Networks.} Here, we revisit the general gradient-based method in~\cite{courbariaux2015binaryconnect}, which maintains full-precision latent variables $\mathbf{W}_F$ for gradient updates; the $k$-th latent weight matrix $\mathbf{W}_F^k$ is binarized into the $\pm 1$ binary weight matrix $\mathbf{W}_B^k$ by a binarization function (normally $\textit{sgn}(\cdot)$) as $\mathbf{W}_B^k = \textit{sgn}(\mathbf{W}_F^k)$. The activation map of the $k$-th layer is then produced by $\mathbf{A}^{k} = \mathbf{W}_B^k \mathbf{A}^{k-1}$, and a whole binarized forward pass is performed by iterating this process $L$ times. \noindent \textbf{Lipschitz Constant (Definition 1).} A function $g : \mathbb{R}^{n} \longmapsto \mathbb{R}^{m}$ is called Lipschitz continuous if there exists a constant $L$ such that: \begin{equation} \forall \mathbf{x,y} \in \mathbb{R}^{n}, \Vert g(\mathbf{x}) - g(\mathbf{y})\Vert_2 \leq L\Vert \mathbf{x} - \mathbf{y}\Vert_2, \label{definition:lip} \end{equation} where $\mathbf{x,y}$ represent two arbitrary inputs of the function $g$. The smallest $L$ satisfying the inequality is the Lipschitz constant of $g$, denoted as $\Vert g \Vert_{Lip}$. By Definition 1, $\Vert \cdot \Vert_{Lip}$ upper-bounds the ratio between output variation and input perturbation with respect to a given distance (generally the L2 norm), and it is thus naturally considered a metric to evaluate the robustness of neural networks~\cite{scaman2018lipschitz,rosca2020case,shang2021lipschitz}. In the following section, we propose our Lipschitz Continuity Retention Procedure (Sec.~\ref{sec:3.2}), in which a BNN is enforced to stay close to its FP counterpart in terms of the Lipschitz constant. In addition, we introduce the proposed loss function and gradient approximation for optimizing the binary network (Sec.~\ref{sec:3.3}).
Finally, we discuss the relation between \emph{LCR} and Lipschitz continuity, and compare our method to the well-known Spectral Normalization~\cite{miyato2018spectral} (Sec.~\ref{sec:difference}). \subsection{Lipschitz Continuity Retention Procedure} \label{sec:3.2} We aim to retain the Lipschitz constant at an appropriate level. In practice, we need to pull $\Vert f_B\Vert_{Lip}$ and $\Vert f_F\Vert_{Lip}$ close together to stabilize the Lipschitz constant of the BNN. However, computing the exact Lipschitz constant of a neural network is NP-hard~\cite{virmaux2018lipschitz}, especially when binarization is involved. To solve this problem, we propose to bypass the exact Lipschitz constant computation by introducing a sequence of Retention Matrices produced from adjacent activations, and then compute their spectral norms via the power iteration method to form an LCR loss for retaining the Lipschitz continuity of the BNN, as demonstrated in Figure~\ref{fig:pipeline}. \noindent \textbf{Lipschitz constant of neural networks.} We consider the affine function of the $k$-th layer with weight matrix $\mathbf{W}^k$, $f^k(\cdot)$, mapping $\mathbf{a}^{k-1} \longmapsto \mathbf{a}^{k}$, in which $\mathbf{a}^{k-1} \in \mathbb{R}^{d_{k-1}}$ and $\mathbf{a}^{k} \in \mathbb{R}^{d_{k}}$ are the activations produced by the $(k-1)$-th and the $k$-th layer, respectively. Based on Lemma 1 in the Supplemental Materials, $\Vert f^k\Vert_{Lip}= {\sup}_{\mathbf{a}} \Vert \nabla \mathbf{W}^k(\mathbf{a}) \Vert_{SN}$, where $ \Vert \cdot \Vert_{SN}$ is the matrix spectral norm, formally defined as: \begin{equation} \label{eq:3} \Vert \mathbf{W}^k \Vert_{SN} \triangleq \max \limits_{\mathbf{x}:\mathbf{x}\neq \mathbf{0}} \frac{\Vert \mathbf{W}^k \mathbf{x} \Vert_2}{\Vert \mathbf{x} \Vert_2} = \max \limits_{\Vert\mathbf{x}\Vert_2 \leq 1}{\Vert \mathbf{W}^k \mathbf{x} \Vert_2}. \end{equation} The spectral norm of a matrix is equal to its largest singular value.
Thus, for $f^k$, based on Lemma 2 in the Supplemental Materials, its Lipschitz constant can be derived as: \begin{equation} \Vert \mathbf{W}^k\Vert_{Lip} = {\sup}_{\mathbf{a}} \Vert \nabla \mathbf{W}^k(\mathbf{a}) \Vert_{SN} = \Vert \mathbf{W}^k \Vert_{SN}. \end{equation} Moreover, most functional structures in neural networks, such as ReLU, Tanh, Sigmoid, Sign, batch normalization and pooling layers, all have simple and explicit Lipschitz constants~\cite{goodfellow2016deep,miyato2018spectral,shang2021lipschitz}. Note that although the sign function in a BNN is not theoretically differentiable, it still has an explicit Lipschitz constant, as its derivative is numerically approximated by the HardTanh function~\cite{bengio2013estimating}. This fixed-Lipschitz-constant property renders our derivation applicable to most network architectures, such as binary ResNet~\cite{he2016deep,hubara2016binarized} and variant binary ResNets~\cite{liu2020reactnet,bulat2020bats}. By the sub-multiplicativity of the norm, \textit{i.e.}, $\Vert \mathbf{W}^k \cdot \mathbf{W}^{k+1} \Vert_{Lip} \leq \Vert \mathbf{W}^k \Vert_{Lip}\cdot \Vert \mathbf{W}^{k+1} \Vert_{Lip}$, we obtain the following upper bound on the Lipschitz constant of the network $f$, \textit{i.e.,} \begin{equation} \begin{split} \Vert f \Vert_{Lip} \leq \Vert \mathbf{W}^L \Vert_{Lip} \cdot \Vert \sigma \Vert_{Lip} \cdots \cdot \Vert \mathbf{W}^1 \Vert_{Lip} = \prod_{k=1}^{L}\Vert \mathbf{W}^{k}\Vert_{SN}, \end{split} \label{eq:7} \end{equation} where the equality uses the fact that the activation functions above are 1-Lipschitz. In this way, we can retain the Lipschitz constant of the network by maintaining the sequence of spectral norms of its intermediate layers. \noindent \textbf{Construction of the Lipschitz Continuity Retention Matrix.} We now aim to design a novel optimization loss that retains Lipschitz continuity by narrowing the distance between the spectral norms of corresponding weights of the full-precision and binary networks.
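The layer-wise product bound above is easy to check numerically. The following NumPy sketch (our own illustration, not part of the paper's code; shapes and seeds are arbitrary) verifies that the spectral norm of a composition of linear maps is bounded by the product of the individual spectral norms:

```python
import numpy as np

def spectral_norm(W):
    # Largest singular value of W: the Lipschitz constant of x -> W x.
    return np.linalg.svd(W, compute_uv=False)[0]

rng = np.random.default_rng(1)
W1 = rng.normal(size=(6, 4))   # hypothetical layer weights
W2 = rng.normal(size=(5, 6))

lhs = spectral_norm(W2 @ W1)                  # Lipschitz constant of the composition
rhs = spectral_norm(W2) * spectral_norm(W1)   # product upper bound
assert lhs <= rhs + 1e-9
```

With 1-Lipschitz activations (ReLU, HardTanh, the STE surrogate of sign) interleaved between the layers, the same product bound carries over, which is why the network-level bound reduces to a product of spectral norms.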
However, computing the spectral norm of the binary weight matrix $\mathbf{W}^k_B$ in BNNs with popular SVD-based methods~\cite{aharon2006k} is not feasible. Therefore, we design the Lipschitz Continuity Retention Matrix ($\mathbf{RM}$) to bypass the complex calculation of the spectral norm of $\mathbf{W}_B^k$. Approaching the final goal through the bridge of the Retention Matrix makes retaining the Lipschitz constant computationally feasible and facilitates its further use as a loss function. For training data with a batch size of $N$, after a forward pass we have a batch of corresponding activations of the ($k$-$1$)-th layer, \begin{equation} \mathbf{A}^{k-1} = (\mathbf{a}^{k-1}_1,\cdots,\mathbf{a}^{k-1}_N) \in \mathbb{R}^{{d_{k-1}}\times N}, \end{equation} where $\mathbf{W}^k \mathbf{A}^{k-1}= \mathbf{A}^{k}$ for each $k \in \{1,\dots,L-1\}$. \begin{figure}[!t] \begin{center} \includegraphics[width=0.97\textwidth]{images/pipeline_v4.pdf} \end{center} \vspace{-0.3in} \caption{(\textbf{a}) An overview of our Lipschitz regularization for a binary convolutional layer: the goal of our work is to regularize the BNN by aligning the Lipschitz constants of the binary network and its latent full-precision counterpart. To reach this goal, the input and output activations of the $k$-th layer compose the Retention Matrix ($\mathbf{RM}^k$) for approximating the Lipschitz constant of this layer. $\mathbf{RM}^k_F$ and $\mathbf{RM}^k_B$ are then used to calculate the Lipschitz constant of this layer (the validity of this approximation is elaborated in Sec.~\ref{sec:3.2}). Finally, the Lipschitz continuity of the BNN is retained by a regularization module. (\textbf{b}) Difference between Spectral Normalization (Left) and \emph{LCR} (Right).
More details are discussed in Sec.~\ref{sec:difference}.} \label{fig:pipeline} \vspace{-0.15in} \end{figure} Studies on the similarity of activations illustrate that for well-trained networks, the activations within a batch at the same layer (\textit{i.e.}~$\{\mathbf{a}^{k-1}_i\}, i \in \{1,\dots,N\}$) exhibit strong mutual linear independence. We formalize the independence of the activations as follows: \begin{equation} \begin{split} (\mathbf{a}^{k-1}_i)^{\mathsf{T}} \mathbf{a}^{k-1}_j \approx 0&,~~~~\forall i\neq j\in\{1,\cdots,N\},\\ (\mathbf{a}^{k-1}_i)^{\mathsf{T}} \mathbf{a}^{k-1}_i \neq 0&,~~~~\forall i \in\{1,\cdots,N\}. \end{split} \label{eq:9} \end{equation} We discuss the validity of this assumption both empirically and theoretically in Sec.~\ref{sec:further_analysis}. Under the above assumption, we define the Retention Matrix $\mathbf{RM}^k$ for estimating the spectral norm of the matrix $\mathbf{W}^k$ as: \begin{equation} \begin{aligned} \mathbf{RM}^k &\triangleq \left[(\mathbf{A}^{k-1})^{\mathsf{T}} \mathbf{A}^{k}\right]^{\mathsf{T}} \left[(\mathbf{A}^{k-1})^{\mathsf{T}} \mathbf{A}^{k}\right]\\ &= (\mathbf{A}^{k-1})^{\mathsf{T}} (\mathbf{W}^k)^{\mathsf{T}} (\mathbf{A}^{k-1}) (\mathbf{A}^{k-1})^{\mathsf{T}} \mathbf{W}^k \mathbf{A}^{k-1}.\\ \end{aligned} \label{eq:rm1} \end{equation} Incorporating the independence assumption of Eq.~\ref{eq:9} (\textit{i.e.}, $(\mathbf{A}^{k-1})(\mathbf{A}^{k-1})^{\mathsf{T}}=\mathbf{I}$) into Eq.~\ref{eq:rm1}, we can rewrite $\mathbf{RM}^k$ as follows: \begin{equation} \label{eq:main} \mathbf{RM}^k = (\mathbf{A}^{k-1})^{\mathsf{T}} ({\mathbf{W}^k}^{\mathsf{T}}\mathbf{W}^k) \mathbf{A}^{k-1}. 
\end{equation} Based on Theorem 1 in the supplemental material and Eq.~\ref{eq:main}, $\sigma_1(\mathbf{RM}^k) = \sigma_1({\mathbf{W}^k}^{\mathsf{T}}\mathbf{W}^k)$, where $\sigma_1(\cdot)$ denotes the largest eigenvalue; \textit{i.e.}, the Retention Matrix $\mathbf{RM}^k$ has the same largest eigenvalue as ${\mathbf{W}^k}^{\mathsf{T}}\mathbf{W}^k$. Thus, by the characterization of the spectral norm $\Vert \mathbf{W}^k\Vert_{SN} = \sqrt{\sigma_1({\mathbf{W}^k}^{\mathsf{T}}\mathbf{W}^k)}$, the spectral norm of the matrix $\mathbf{W}^k$ can be obtained by calculating the largest eigenvalue of $\mathbf{RM}^k$, \textit{i.e.}~$\sigma_1(\mathbf{RM}^k)$, which is tractable~\cite{shang2021lipschitz}. For networks with more complex layers, such as the residual block and the building block of MobileNet~\cite{he2016deep,howard2017mobilenets}, we can also design such a Retention Matrix to bypass the Lipschitz constant computation. By considering a block as an affine mapping from its front activations to its back activations, the proposed Retention Matrix can also be produced block-wise, making our spectral norm calculation more efficient. Specifically, we define the Retention Matrix $\mathbf{RM}$ for residual blocks as follows: \begin{equation} \mathbf{RM}_m \triangleq \left[(\mathbf{A}^{f})^{\mathsf{T}} \mathbf{A}^{l}\right]^{\mathsf{T}} \left[(\mathbf{A}^{f})^{\mathsf{T}} \mathbf{A}^{l}\right], \end{equation} where $\mathbf{A}^{f}$ and $\mathbf{A}^{l}$ denote the activation maps in front of and behind the residual block, respectively. \noindent\textbf{Calculation of Spectral Norms.} To calculate the spectral norms of the two matrices, an intuitive way is to use SVD, which results in heavy computation. Rather than SVD, we utilize the \textbf{Power Iteration} method~\cite{golub2000eigenvalue,miyato2018spectral} to approximate the spectral norm of the target matrix with a small trade-off in accuracy.
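To illustrate the construction, the following NumPy sketch (our own illustration; shapes and seeds are arbitrary, equal input/output widths are assumed so that $(\mathbf{A}^{k-1})^{\mathsf{T}}\mathbf{A}^{k}$ is well-defined, and the activations are constructed to satisfy the independence assumption exactly) builds a Retention Matrix from activations alone and recovers the spectral norm of a binary weight matrix via power iteration:

```python
import numpy as np

def power_iteration(M, n_iter=200):
    """Approximate the largest eigenvalue of a symmetric PSD matrix M."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = M @ v
        v /= np.linalg.norm(v)
    return v @ (M @ v)  # Rayleigh quotient

rng = np.random.default_rng(2)
d, N = 8, 32                            # layer width and batch size (d_in = d_out assumed)
W_B = np.sign(rng.normal(size=(d, d)))  # a binary (+/-1) weight matrix
Q, _ = np.linalg.qr(rng.normal(size=(N, d)))
A_prev = Q.T                            # activations with A A^T = I (idealized Eq. 9)
A_k = W_B @ A_prev                      # output activations of the binary layer
C = A_prev.T @ A_k                      # (A^{k-1})^T A^k, computable from activations only
RM = C.T @ C                            # Retention Matrix, equal to A^T (W^T W) A here
sn_est = np.sqrt(power_iteration(RM))   # estimated spectral norm of W_B
sn_true = np.linalg.svd(W_B, compute_uv=False)[0]
```

In practice the paper runs only a handful of power iteration steps (5); here we iterate longer so the estimate matches the SVD value closely.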
Via the Power Iteration algorithm (see the Supplemental Material), we obtain the spectral norms of the binary and the corresponding FP Retention Matrices (\textit{i.e.}~$\Vert \mathbf{RM}_F^k \Vert_{SN}$ and $\Vert \mathbf{RM}_B^k \Vert_{SN}$ for each $k\in \{1, \dots, L - 1\}$). We can then penalize the distance between these two spectral norms to construct the loss function. \subsection{Binary Neural Network Optimization} \label{sec:3.3} \noindent\textbf{Optimization losses.} We define the Lipschitz continuity retention loss $\mathcal{L}_{Lip}$ as \begin{equation} \mathcal{L}_{Lip} = \sum_{k=1}^{L-1}\left[(\frac{\Vert \mathbf{RM}_B^k \Vert_{SN}}{\Vert \mathbf{RM}_F^k \Vert_{SN}} - 1){\beta^{k-L}}\right]^2, \label{equ:15} \end{equation} where $\beta$ is a coefficient greater than $1$. Hence, as $k$ increases, the weight $\beta^{k-L}$ of the term $\left[(\frac{\Vert \mathbf{RM}_B^k \Vert_{SN}}{\Vert \mathbf{RM}_F^k \Vert_{SN}} - 1){\beta^{k-L}}\right]^2$ increases, so the spectral norms of later layers are retained more strongly. Combined with the cross-entropy loss $\mathcal{L}_{CE}$, we propose a novel loss function as the overall optimization objective: \begin{equation} \mathcal{L} = \frac{\lambda}{2}\cdot\mathcal{L}_{Lip} + \mathcal{L}_{CE}, \label{equ:16} \end{equation} where $\lambda$ controls the degree of Lipschitz constant retention. We analyze the effect of the coefficient $\lambda$ in the supplementary material. With the overall loss function defined, our method is fully formulated; the forward and backward propagation of \emph{LCR} is elaborated in Algorithm~\ref{alg:LCR}. \begin{algorithm}[!t] \caption{Forward and Backward Propagation of \emph{LCR}-BNN} \label{alg:LCR} \begin{algorithmic}[1] \Require A minibatch of data samples $(\mathbf{X,Y})$, current binary weights $\mathbf{W}_B^k$, latent full-precision weights $\mathbf{W}_F^k$, and learning rate $\eta$. \Ensure Updated weights ${\mathbf{W}_F^k}^{\prime}$.
\State \textbf{Forward Propagation}: \For{$k = 1$ to $L-1$} \State Binarize latent weights: $\mathbf{W}_B^k \xleftarrow{} \mathrm{sgn}(\mathbf{W}_F^k)$; \State Perform the binary operation with the activations of the previous layer: $\mathbf{A}_F^{k} \xleftarrow{} \mathbf{W}_B^k \cdot \mathbf{A}_B^{k-1}$; \State Binarize activations: $\mathbf{A}_B^k \xleftarrow{} \text{sgn}(\mathbf{A}_F^k)$; \State Produce the Retention Matrices $\mathbf{RM}_F^k$ and $\mathbf{RM}_B^k$ by Eq.~\ref{eq:main}; \EndFor \State Approximate the spectral norms of the $\mathbf{RM}$s by Algorithm~\ref{alg:pi} in the Supplemental Material, obtaining $\Vert \mathbf{RM}_F^k \Vert_{SN}$ and $\Vert \mathbf{RM}_B^k \Vert_{SN}$ for each $k \in \{1,\dots,L-1\}$; \State Compute the Lipschitz continuity retention loss $\mathcal{L}_{Lip}$ by Eq.~\ref{equ:15}; \State Combine the cross-entropy loss $\mathcal{L}_{CE}$ and the Lipschitz continuity retention loss $\mathcal{L}_{Lip}$ into the overall loss $\mathcal{L}$ by Eq.~\ref{equ:16}; \State \textbf{Backward Propagation}: compute the gradient of the overall loss function, \textit{i.e.}~$\frac{\partial\mathcal{L}}{\partial \mathbf{W_B}}$, using the straight-through estimator (STE)~\cite{bengio2013estimating} to tackle the sign function; \State \textbf{Parameter Update}:~update the full-precision weights: ${\mathbf{W}_F^k}^{\prime} \xleftarrow{} \mathbf{W}_F^k - \eta \frac{\partial\mathcal{L}}{\partial \mathbf{W}_B^k}$. \end{algorithmic} \end{algorithm} \noindent\textbf{Gradient Approximation.} Several works~\cite{santurkar2018does,lin2019defensive,miyato2018spectral} investigate the robustness of neural networks through the concept of Lipschitzness. In this section, we differentiate the loss function of our proposed method and reveal the mechanism by which Lipschitzness affects the robustness of BNNs.
The derivative of the loss function $\mathcal{L}$ w.r.t. $\mathbf{W}_B^k$ is: \begin{equation} \begin{aligned} &\frac{\partial\mathcal{L}}{\partial \mathbf{W}_B} = \frac{\partial (\mathcal{L}_{CE})}{\partial \mathbf{W}_B} + \frac{\partial (\mathcal{L}_{Lip})}{\partial \mathbf{W}_B^k}\\ &\approx \mathbf{M} - \lambda\sum_{k=1}^{L-1}\beta^{k-L}(\frac{\Vert \mathbf{RM}_F^k \Vert_{SN}}{\Vert \mathbf{RM}_B^k \Vert_{SN}}) \mathbf{u}_1^k (\mathbf{v}_1^k)^{\mathsf{T}},\\ \end{aligned} \label{eq:18} \end{equation} where $\mathbf{M} \triangleq \frac{\partial (\mathcal{L}_{CE})}{\partial \mathbf{W}_B}$, and $\mathbf{u}_1^k$ and $\mathbf{v}_1^k$ are the first left and right singular vectors of $\mathbf{W}_B^k$, respectively. In the context of SVD, $\mathbf{W}_B^k$ can be reconstructed from its singular vectors, \textit{i.e.,} \begin{equation} \mathbf{W}_B^k = \sum_{j=1}^{d_k} \sigma_j(\mathbf{W}_B^k) \mathbf{u}_j^k (\mathbf{v}_j^k)^{\mathsf{T}}, \label{eq:19} \end{equation} where $d_k$ is the rank of $\mathbf{W}_B^k$, $\sigma_j(\mathbf{W}_B^k)$ is the $j$-th largest singular value, and $\mathbf{u}_j^k$ and $\mathbf{v}_j^k$ are the corresponding left and right singular vectors, respectively~\cite{shang2021lipschitz}. In Eq.~\ref{eq:18}, the first term $\mathbf{M}$ is the same as the derivative of the loss function of a general binarization method that reduces the quantization error. As for the second term, based on Eq.~\ref{eq:19}, it can be seen as a regularization term penalizing the general binarization loss with an adaptive regularization coefficient $\gamma \triangleq \lambda\beta^{k-L}(\frac{\Vert \mathbf{RM}_F^k \Vert_{SN}}{\Vert \mathbf{RM}_B^k \Vert_{SN}})$ (a more detailed derivation can be found in the supplemental materials). Note that even though we analyze the regularization property through the lens of SVD, we do not actually use SVD in our algorithm; Eq.~\ref{eq:18} and~\ref{eq:19} only demonstrate that \emph{LCR} regularization is related to the largest singular value and its corresponding singular vectors.
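The derivation above relies on the standard identity that the gradient of the largest singular value $\sigma_1(\mathbf{W})$ with respect to $\mathbf{W}$ is the rank-one matrix $\mathbf{u}_1 \mathbf{v}_1^{\mathsf{T}}$. A quick finite-difference check of this identity (our own sanity check, not part of the training algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(5, 4))
U, S, Vt = np.linalg.svd(W)
analytic = np.outer(U[:, 0], Vt[0])   # u_1 v_1^T, the claimed gradient of sigma_1

# Finite-difference gradient of sigma_1 w.r.t. each entry of W
eps = 1e-6
numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy()
        Wp[i, j] += eps
        numeric[i, j] = (np.linalg.svd(Wp, compute_uv=False)[0] - S[0]) / eps

assert np.allclose(numeric, analytic, atol=1e-4)
```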
The \emph{LCR} Algorithm~\ref{alg:LCR} only uses the Power Iteration (see the Algorithm in the Supplemental Materials) with few iteration steps (5 in practice) to approximate the largest singular value. \noindent\textbf{Discussion of the Retention Matrix}. Here, we give a straightforward explanation of why optimizing the \emph{LCR} loss in Eq.~\ref{equ:15} is equivalent to retaining the Lipschitz continuity of the BNN. Since the Lipschitz constant of a network $\Vert f\Vert_{Lip}$ can be upper-bounded by the set of spectral norms of its weight matrices, \textit{i.e.}~$\{\Vert \mathbf{W}_F^k \Vert_{SN}\}$ (see Eq.~\ref{eq:3}-\ref{eq:7}), we aim at retaining the spectral norms of the binary weight matrices instead of targeting the network itself. And because Eq.~\ref{eq:9} to~\ref{eq:main} relate $\Vert \mathbf{RM}_F^k \Vert_{SN}$ to $\Vert \mathbf{W}_F^k \Vert_{SN}$ and $\Vert \mathbf{RM}_B^k \Vert_{SN}$ to $\Vert \mathbf{W}_B^k \Vert_{SN}$, we only need to calculate the spectral norm of our designed Retention Matrix, $\Vert \mathbf{RM}_B^k \Vert_{SN}$. Finally, minimizing Eq.~\ref{equ:15} amounts to enforcing $\Vert \mathbf{RM}_B^k \Vert_{SN} \longrightarrow \Vert \mathbf{RM}_F^k \Vert_{SN}$, which retains the spectral norm (Lipschitz continuity) of the BNN. Therefore, BNNs trained with our method perform better, because the retained Lipschitz continuity smooths them. \noindent \textbf{Differences from Spectral Normalization (SN) and Defensive Quantization (DQ).} \label{sec:difference} There are two major differences: (i) In contrast to SN and DQ, which directly calculate the spectral norm from the weight matrix, our method computes the spectral norm of the specifically designed Retention Matrix to approximate the targeted spectral norms by leveraging the activations of the BNN. In this way, we can approximate the targeted yet otherwise inaccessible Lipschitz constant of binary networks, as shown in Fig.~\ref{fig:pipeline} (a), in which the weight matrix is extremely sparse.
Particularly, instead of calculating the spectral norm of the weight matrix layer-wise as proposed in SN, our method does \textit{not rely on the weight matrix}, since the calculation can be done using only the input/output activations (Eq.~\ref{eq:rm1}). (ii) To tackle the training architecture complexity, our designed Retention Matrix offers the flexibility to regularize BNNs via the Lipschitz constant in a module-wise manner (\textit{e.g.}, over residual blocks in ResNet~\cite{he2016deep}), instead of calculating the spectral norm and normalizing the weight matrix to 1 for each layer, as shown in Fig.~\ref{fig:pipeline} (b). Benefiting from this module-wise simplification, the total computation cost of our method is much lower than that of SN and DQ. \section{Experiments} \label{sec:exp} In this section, we conduct experiments on image classification. Following the popular setting of most studies~\cite{qin2020forward,lin2020rotated}, we use CIFAR-10~\cite{krizhevsky2009learning} and ImageNet ILSVRC-2012~\cite{krizhevsky2012imagenet} to validate the effectiveness of our proposed binarization method. In addition to comparing our method with state-of-the-art methods, we design a series of ablation studies to verify the effectiveness of our proposed regularization technique. All experiments are implemented in PyTorch~\cite{paszke2019pytorch}. We use one NVIDIA GeForce 3090 GPU for training on the CIFAR-10 dataset, and four GPUs on the ImageNet dataset. \noindent \textbf{Experimental Setup.} On CIFAR-10, the BNNs are trained for 400 epochs with a batch size of 128 and an initial learning rate of 0.1. We use the SGD optimizer with a momentum of 0.9 and a weight decay of 1e-4. On ImageNet, the binary models are trained for 120 epochs with a batch size of 256. We use a cosine learning rate scheduler with the learning rate initially set to 0.1. All training and testing settings follow the codebases of IR-Net~\cite{qin2020forward} and RBNN~\cite{lin2020rotated}.
\subsection{CIFAR}
CIFAR-10~\cite{krizhevsky2009learning} is one of the most widely used image classification datasets; it consists of 50K training images and 10K testing images of size $32\times32$, divided into 10 classes. For training, 10,000 training images are randomly sampled for validation and the rest are used for training. The data augmentation strategy during training includes random crops and random flipping, as in~\cite{he2016deep}. For testing, we evaluate the single view of the original image for fair comparison.
\begin{table}[!t]
\begin{minipage}{.49\textwidth}
\centering
\caption{Top-1 and Top-5 accuracy on ImageNet. ${\dagger}$ represents an architecture that varies from the standard ResNet architecture but is at the same FLOPs level.}
\scalebox{0.80}{
\begin{tabular}{ccccc}\toprule
\multirow{2}{*}{Topology} & \multirow{2}{*}{Method} & BW & Top-1 & Top-5 \\ & & (W/A) & (\%) & (\%) \\\midrule
& Baseline & 32/32 & 69.6 & 89.2 \\
& ABC-Net~\cite{lin2017towards} & 1/1 & 42.7 & 67.6 \\
& XNOR-Net~\cite{rastegari2016xnor} & 1/1 & 51.2 & 73.2 \\
& BNN+~\cite{darabi2018bnn+} & 1/1 & 53.0 & 72.6 \\
& DoReFa~\cite{zhou2016dorefa} & 1/2 & 53.4 & - \\
& BiReal~\cite{liu2020bi} & 1/1 & 56.4 & 79.5 \\
& XNOR++~\cite{bulat2019xnor} & 1/1 & 57.1 & 79.9 \\
& IR-Net~\cite{qin2020forward} & 1/1 & 58.1 & 80.0 \\
& ProxyBNN~\cite{he2020proxybnn} & 1/1 & 58.7 & 81.2 \\
\multirow{2}*{ResNet-18} & Ours & 1/1 & \textbf{59.6} & \textbf{81.6} \\ \cline{2-5}
& Baseline & 32/32 & 69.6 & 89.2 \\
& SQ-BWN~\cite{dong2017learning} & 1/32 & 58.4 & 81.6 \\
& BWN~\cite{rastegari2016xnor} & 1/32 & 60.8 & 83.0 \\
& HWGQ~\cite{li2017performance} & 1/32 & 61.3 & 83.2 \\
& SQ-TWN~\cite{dong2017learning} & 2/32 & 63.8 & 85.7 \\
& BWHN~\cite{hu2018hashing} & 1/32 & 64.3 & 85.9 \\
& IR-Net~\cite{qin2020forward} & 1/32 & 66.5 & 85.9 \\
& Ours & 1/32 & \textbf{66.9} & \textbf{86.4} \\ \midrule
& Baseline & 32/32 & 73.3 & 91.3 \\
& ABC-Net~\cite{lin2017towards} & 1/1 & 52.4 & 76.5 \\
ResNet-34 & Bi-Real
~\cite{liu2020bi} & 1/1 & 62.2 & 83.9 \\
& IR-Net~\cite{qin2020forward} & 1/1 & 62.9 & 84.1 \\
& ProxyBNN~\cite{he2020proxybnn} & 1/1 & 62.7 & 84.5 \\
& Ours & 1/1 & \textbf{63.5} & \textbf{84.6} \\\midrule
Variant & ReActNet${\dagger}$~\cite{liu2020reactnet} & 1/1 & 69.4 & 85.5 \\
ResNet & Ours${\dagger}$ & 1/1 & \textbf{69.8} & \textbf{85.7} \\\bottomrule
\end{tabular}}
\label{tabel:imagenet}
\end{minipage}
\begin{minipage}{.49\textwidth}
\centering
\caption{Top-1 accuracy (\%) on CIFAR-10 (C-10) test set. The higher the better. W/A denotes the bit number of weights/activations. }
\scalebox{0.9}{
\begin{tabular}{p{1.5cm}ccc} \toprule
\multirow{2}{*}{Topology} & \multirow{2}{*}{Method} & Bit-width & Acc. \\ & & (W/A) & (\%) \\ \midrule
& Baseline & 32/32 & 93.0 \\
\multirow{2}*{ResNet-18}& RAD~\cite{ding2019regularizing} & 1/1 & 90.5 \\
& IR-Net~\cite{qin2020forward} & 1/1 & 91.5 \\
& Ours & 1/1 & \textbf{91.8} \\ \hline
& Baseline & 32/32 & 91.7 \\
& DoReFa~\cite{zhou2016dorefa} & 1/1 & 79.3 \\
& DSQ~\cite{gong2019differentiable} & 1/1 & 84.1 \\
& IR-Net~\cite{qin2020forward} & 1/1 & 85.5 \\
& IR-bireal~\cite{qin2020forward} & 1/1 & 86.5 \\
& LNS~\cite{han2020training} & 1/1 & 85.7 \\
& SLB~\cite{yang2020searching} & 1/1 & 85.5 \\
& Ours & 1/1 & \textbf{86.0} \\
\multirow{2}*{ResNet-20}& Ours-bireal & 1/1 & \textbf{87.2} \\ \cline{2-4}
& Baseline & 32/32 & 91.7 \\
& DoReFa~\cite{zhou2016dorefa} & 1/32 & 90.0 \\
& DSQ~\cite{gong2019differentiable} & 1/32 & 90.1 \\
& IR-Net~\cite{qin2020forward} & 1/32 & 90.2 \\
& LNS~\cite{han2020training} & 1/32 & 90.8 \\
\multicolumn{1}{l}{} & SLB~\cite{yang2020searching} & 1/32 & 90.6 \\
& Ours & 1/32 & \textbf{91.2} \\ \bottomrule
\end{tabular}}
\label{tabel:cifar}
\end{minipage}
\vspace{-0.1in}
\end{table}
For ResNet-18, we compare with RAD~\cite{ding2019regularizing} and IR-Net~\cite{qin2020forward}. For ResNet-20, we compare with LNS~\cite{han2020training} and SLB~\cite{yang2020searching}, \textit{etc}.
As presented in Table~\ref{tabel:cifar}, our method consistently outperforms the other methods. \emph{LCR}-BNN achieves 0.3\%, 0.7\% and 0.6\% performance improvements on ResNet-18, ResNet-20 and ResNet-20 (without binarizing activations), respectively. In addition, our method also validates the effectiveness of the bi-real structure~\cite{liu2020bi}. When the bi-real module is turned on, IR-Net achieves a 1.0\% accuracy improvement, while our method improves by 1.2\%.

\subsection{ImageNet}
ImageNet~\cite{deng2009imagenet} is a larger dataset with 1.2 million training images and 50k validation images divided into 1,000 classes. ImageNet has greater diversity, and its average image size is $469\times387$. The commonly used data augmentation strategy, including random crops and flipping as in the PyTorch examples~\cite{paszke2019pytorch}, is adopted for training. We report the single-crop evaluation result using a $224\times224$ center crop. For ResNet-18, we compare our method with XNOR-Net~\cite{rastegari2016xnor}, ABC-Net~\cite{lin2017towards}, DoReFa~\cite{zhou2016dorefa}, BiReal~\cite{liu2020bi}, XNOR++~\cite{bulat2019xnor}, IR-Net~\cite{qin2020forward}, and ProxyBNN~\cite{he2020proxybnn}. For ResNet-34, we compare our method with ABC-Net~\cite{lin2017towards}, BiReal~\cite{liu2020bi}, IR-Net~\cite{qin2020forward}, and ProxyBNN~\cite{he2020proxybnn}. As demonstrated in Table~\ref{tabel:imagenet}, our proposed method also outperforms the other methods in both top-1 and top-5 accuracy on ImageNet. In particular, \emph{LCR}-BNN achieves a 0.9\% Top-1 accuracy improvement with the ResNet-18 architecture and a 0.6\% Top-1 accuracy improvement with the ResNet-34 architecture, compared with the state-of-the-art method ProxyBNN~\cite{he2020proxybnn}.
Apart from those methods implemented on standard ResNet architectures, by adding our Lipschitz regularization module to the ResNet-variant architecture ReActNet~\cite{liu2020reactnet}, we also observe an accuracy improvement. Note that the training setting when adding our \emph{LCR} module to ReActNet is different, as it follows the ReActNet codebase.

\subsection{Ablation Study}
In this section, the ablation study is conducted on CIFAR-10 with the ResNet-20 architecture and on ImageNet with ResNet-18. The results are presented in Table~\ref{tabel:ablation1}. By adding our regularization term to IR-Net~\cite{qin2020forward} and ReActNet~\cite{liu2020reactnet}, our method achieves 1.2\% and 0.4\% improvements on ImageNet, respectively. Note that ReActNet is a strong baseline with a variant ResNet architecture. We also study the effect of the hyper-parameter $\lambda$ in the loss function on CIFAR. As shown in Table~\ref{tabel:ablation2}, we observe that the performance improves as $\lambda$ increases. Both experiments validate the effectiveness of our method. Apart from that, to investigate the regularization property of our method, we visualize several training and testing curves under various settings. Due to space limitations, we put those demonstrations in the supplemental materials.
\begin{table}\footnotesize
\begin{minipage}[t]{.49\textwidth}
\centering
\caption{Effect of hyper-parameter $\lambda$ in loss function.
Higher is better.}
\scalebox{0.73}{
\begin{tabular}{c|cccccc} \toprule
\diagbox{Topology}{$\log_2\lambda$} & $\lambda=0$ & -1 & 0 & 1 & 2 & 3 \\ \specialrule{0.8pt}{0pt}{0pt}
ResNet-18 & 85.9 & 86.2 & 87.9 & 90.1 & 91.2 &\textbf{91.8} \\
ResNet-20 & 83.9 & 83.7 & 84.5 & 85.9 &\textbf{87.2}& 86.5 \\ \bottomrule
\end{tabular}}
\label{tabel:ablation2}
\vspace{0.1in}
\centering
\caption{Ablation Study of LCR-BNN.}
\scalebox{0.83}{
\begin{tabular}{clc}\toprule
Dataset & Method & Acc(\%) \\\hline
& Full Precision & 91.7 \\
& IR-Net~\cite{qin2020forward} (w/o BiReal) & 85.5 \\
CIFAR & IR-Net + LCR (w/o BiReal) & 86.0 \\
& IR-Net~\cite{qin2020forward} (w/ BiReal) & 86.5 \\
& IR-Net + LCR (w/ BiReal) & 87.2 \\\midrule
& Full Precision & 69.6 \\
& IR-Net~\cite{qin2020forward} (w/o BiReal) & 56.9 \\
ImageNet & IR-Net + LCR (w/o BiReal) & 58.4 \\
& IR-Net~\cite{qin2020forward} (w/ BiReal) & 58.1 \\
& IR-Net + LCR (w/ BiReal) & 59.6 \\\cline{2-3}
& ReActNet & 69.4 \\
& ReActNet + LCR & 69.8 \\ \bottomrule
\end{tabular}}
\label{tabel:ablation1}
\end{minipage}
\hfill
\begin{minipage}[t]{.49\textwidth}
\centering
\caption{FLOPS and BOPS for ResNet-18}
\scalebox{0.86}{
\begin{tabular}{ccc}\toprule
Method & BOPS & FLOPS \\\hline
BNN~\cite{hubara2016binarized} & $1.695\times{10}^9$ & $1.314\times{10}^8$ \\
XNOR-Net~\cite{rastegari2016xnor} & $1.695\times{10}^9$ & $1.333\times{10}^8$ \\
ProxyBNN~\cite{he2020proxybnn} & $1.695\times{10}^9$ & $1.564\times{10}^8$ \\
IR-Net~\cite{qin2020forward} & $1.676\times{10}^9$ & $1.544\times{10}^8$ \\
Ours & $1.676\times{10}^9$ & $1.544\times{10}^8$ \\\hline
Full Precision & $0$ & $1.826\times{10}^9$ \\\bottomrule
\end{tabular}}
\label{tabel:cost}
\vspace{0.25in}
\centering
\captionof{table}{mCE on ImageNet-C.
Lower is better.}
\scalebox{0.9}{
\begin{tabular}{cl}\toprule
Method & mCE (\%)\\\hline
IR-Net~\cite{qin2020forward} & 89.2 \\
IR-Net + LCR (ours) & 84.9 $\downarrow$\\ \hline
RBNN~\cite{lin2020rotated} & 87.5 \\
RBNN + LCR (ours) & 84.8 $\downarrow$\\ \hline
ReActNet~\cite{liu2020reactnet} & 87.0 \\
ReActNet + LCR (ours) & 84.9 $\downarrow$\\ \bottomrule
\end{tabular}}
\label{table:imagenet-c}
\end{minipage}
\end{table}
\vspace{0.3in}
\subsection{Further Analysis}
\label{sec:further_analysis}
\noindent \textbf{Computational Cost Analysis.} In Table~\ref{tabel:cost}, we separate the number of binary operations (BOPs) and floating point operations (FLOPs), including all types of operations such as the skip structure, max pooling, \textit{etc}. It shows that our method leaves the number of BOPs and FLOPs unchanged in the model inference stage, even though our method is more computationally expensive in the training stage. Thus, our Lipschitz regularization term does not undermine the main benefit of network binarization, which is to speed up the inference of neural networks.

\noindent \textbf{Weight Distribution Visualization.} To validate the effectiveness of our proposed method from the perspective of weight distribution, we choose our \emph{LCR}-BNN and IR-Net and visualize the distribution of weights from different layers. For a fair comparison, we randomly pick 10,000 parameters in each layer to produce Figure~\ref{fig:dist}. Compared with IR-Net, the BNN trained by our method possesses a smoother weight distribution, which correspondingly helps our method achieve a 1.6\% accuracy improvement on ImageNet, as listed in Table~\ref{tabel:imagenet}. More precisely, the standard deviation of the IR-Net weight distribution in the layer3.0.conv2 layer is 1.42, 28\% higher than ours (1.11).
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\textwidth]{images/smoothness.pdf}
\end{center}
\vspace{-0.2in}
\caption{Histograms of weights (before binarization) of the IR-Net~\cite{qin2020forward} and \emph{LCR}-BNN with the ResNet-18 architecture. The first row shows the results of the IR-Net, and the second row shows the results of ours. The BNN trained by our method has a smoother weight distribution.}
\label{fig:dist}
\vspace{-0.2in}
\end{figure}

\noindent \textbf{Robustness Study on ImageNet-C.} ImageNet-C~\cite{hendrycks2019benchmarking} has become the standard dataset for investigating model robustness; it consists of 19 different types of corruptions with five levels of severity from the noise, blur, weather and digital categories, applied to the validation images of ImageNet (see Samples in the Supplemental Materials). We consider all 19 corruptions at the highest severity level (severity = 5) and report the mean top-1 accuracy. We use the Mean Corruption Error (mCE) to measure the robustness of models on this dataset. We freeze the backbone that learns the representations of the data \textit{w.r.t.} the classification task, and only fine-tune the task-specific heads over the backbone (\textit{i.e.}, the linear evaluation protocol). The results in Table~\ref{table:imagenet-c} show that adding \emph{LCR} to existing methods can improve the robustness of binary models.

\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.85\textwidth]{images/independence_1.pdf}
\end{center}
\vspace{-0.2in}
\caption{Correlation maps for reflecting the independence assumption in Eq.~\ref{eq:9}.}
\label{fig:correlation}
\vspace{-0.2in}
\end{figure}

\noindent \textbf{Independence Assumption Reflection.} The assumption used in Eq.~\ref{eq:9} is the core of our method's derivation, as it theoretically supports the approximation of the spectral norms of the weight matrices with the designed Retention Matrices.
Thus, we investigate this assumption by visualizing the correlation matrix of the feature maps in the same batch. Specifically, we visualize the correlation matrices of full-precision and binary activations, where red indicates that two activations are similar and blue the opposite. As shown in Fig.~\ref{fig:correlation}, we can clearly observe that each activation is only correlated with itself, which largely verifies this assumption. Besides, we also design another mechanism to use this assumption properly. We set a coefficient $\beta$ greater than $1$ to give more weight to later layers' features so that they contribute more to $\mathcal{L}_{Lip}$ (Eq.~\ref{equ:15}), since in neural networks the feature maps of later layers exhibit stronger mutual linear independence~\cite{alain2016understanding}.

\section{Conclusion}
\label{sec:con}
In this paper, we introduce Lipschitz continuity to measure the robustness of BNNs. Motivated by this, we propose \emph{LCR}-BNN, which retains the Lipschitz constant via a regularization term to improve the robustness of binary models. Specifically, to bypass the NP-hard Lipschitz constant computation in BNNs, we devise Retention Matrices to approximate the Lipschitz constant, and then constrain the Lipschitz constants of those Retention Matrices. Experimental results demonstrate the efficacy of our method.

\noindent\textbf{Ethical Issues.} All datasets used in our paper are open-source datasets and do not contain any personally identifiable or sensitive personally identifiable information.

\noindent\textbf{Limitations.} Although our method achieves state-of-the-art results, adding it to an existing method costs more time (around 20\% more) to train a BNN, which is its main limitation.

\noindent \textbf{Acknowledgements.} This research was partially supported by NSF CNS-1908658 (ZZ,YY), NeTS-2109982 (YY), Early Career Scheme of the Research Grants Council (RGC) of the Hong Kong SAR under grant No. 26202321 (DX), HKUST Startup Fund No.
R9253 (DX) and the gift donation from Cisco (YY). This article solely reflects the opinions and conclusions of its authors and not the funding agents.

\section{Supplemental Material}
\label{sec:supp}
\subsection{Proofs.}
\noindent\textbf{Lemma 1.} If a function $f : \mathbb{R}^{n} \longmapsto \mathbb{R}^{m}$ is a locally Lipschitz continuous function, then $f$ is differentiable almost everywhere. Moreover, if $f$ is Lipschitz continuous, then
\begin{equation}
\Vert f\Vert_{Lip} = \sup_{\mathbf{x}\in\mathbb{R}^{n}}\Vert \nabla_{\mathbf{x}} f \Vert_2
\end{equation}
where $\Vert \cdot\Vert_2$ is the L2 matrix norm.

\noindent\textbf{Proof.} By Rademacher's theorem, a function that is Lipschitz on some neighborhood of every point is differentiable almost everywhere, and its Lipschitz constant can be calculated from its differential operator.

\noindent\textbf{Lemma 2.} Let $\mathbf{W} \in \mathbb{R}^{m \times n}, \mathbf{b} \in \mathbb{R}^{m}$ and $f(\mathbf{x}) = \mathbf{W}\mathbf{x} + \mathbf{b}$ be an affine function. Then for all $\mathbf{x} \in \mathbb{R}^{n}$, we have
\begin{equation}
\nabla g(\mathbf{x}) = \mathbf{W}^{\mathsf{T}}\mathbf{W}\mathbf{x}
\end{equation}
where $g(\mathbf{x}) = \frac{1}{2}\Vert f(\mathbf{x}) - f(\mathbf{0})\Vert_2^2$.

\noindent\textbf{Proof.} By definition, $g(\mathbf{x}) = \frac{1}{2}\Vert f(\mathbf{x}) - f(\mathbf{0})\Vert_2^2 = \frac{1}{2}\Vert(\mathbf{W}\mathbf{x} + \mathbf{b}) - (\mathbf{W}\mathbf{0} + \mathbf{b})\Vert_2^2 = \frac{1}{2}\Vert \mathbf{W}\mathbf{x} \Vert_2^2$, and differentiating this expression gives the desired result.
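As a quick numerical sanity check of Lemma 2 (a sketch; `numpy`, the random seed, and the matrix sizes are our own choices, not from the paper), one can compare the analytic gradient $\mathbf{W}^{\mathsf{T}}\mathbf{W}\mathbf{x}$ with a central finite difference of $g$:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # hypothetical W
b = rng.standard_normal(4)        # hypothetical b
x = rng.standard_normal(3)

def g(x):
    # g(x) = (1/2) * || f(x) - f(0) ||_2^2  with  f(x) = W x + b
    return 0.5 * np.linalg.norm((W @ x + b) - b) ** 2

analytic = W.T @ W @ x            # Lemma 2: grad g(x) = W^T W x

# Central finite differences along each coordinate direction
eps = 1e-6
numeric = np.array([
    (g(x + eps * e) - g(x - eps * e)) / (2 * eps)
    for e in np.eye(3)
])

assert np.allclose(analytic, numeric, atol=1e-5)
```

The finite-difference gradient matches $\mathbf{W}^{\mathsf{T}}\mathbf{W}\mathbf{x}$ to within the discretization error.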
\noindent\textbf{Theorem 1.} If a matrix $\mathbf{U}$ is an orthogonal matrix, such that $\mathbf{U}^\mathsf{T}\mathbf{U} = \mathbf{I}$, where $\mathbf{I}$ is the identity matrix, then the largest eigenvalues of $\mathbf{U}^\mathsf{T} \mathbf{H} \mathbf{U}$ and $\mathbf{H}$ are equal:
\begin{equation}
\sigma_1( \mathbf{U}^\mathsf{T} \mathbf{H} \mathbf{U}) = \sigma_1( \mathbf{H}),
\end{equation}
where the notation $\sigma_1(\cdot)$ indicates the largest eigenvalue of a matrix.

\noindent\textbf{Proof.} Since $\mathbf{U}$ is invertible, we have
\begin{equation}
(\mathbf{U}^{-1})^\mathsf{T} (\mathbf{U}^\mathsf{T} \mathbf{H} \mathbf{U}) (\mathbf{U}^{-1}) = (\mathbf{U} \mathbf{U}^{-1})^\mathsf{T} \mathbf{H} (\mathbf{U} \mathbf{U}^{-1}) = \mathbf{H}.
\end{equation}
Thus the matrices $\mathbf{U}^\mathsf{T} \mathbf{H} \mathbf{U}$ and $\mathbf{H}$ are similar, and similar matrices have the same eigenvalues, which proves Theorem 1.

\noindent\textbf{Exact Lipschitz constant computation is NP-hard.} We take a 2-layer fully-connected neural network with the ReLU activation function as an example to demonstrate that exact Lipschitz constant computation is not achievable in polynomial time. As denoted in the Method Section, this 2-layer fully-connected neural network can be represented as
\begin{equation}
f(\mathbf{W}^1,\mathbf{W}^2;\mathbf{x}) = (\mathbf{W}^{2}\circ\sigma \circ\mathbf{W}^{1})(\mathbf{x}),
\end{equation}
where $\mathbf{W}^1\in\mathbb{R}^{d_0\times d_1}$ and $\mathbf{W}^2\in\mathbb{R}^{d_1\times d_2}$ are the weight matrices of the first and second layers of the neural network, and $\sigma(x)=\max\{0, x\}$ is the ReLU activation function.

\noindent\textbf{Proof.} To prove that computing the exact Lipschitz constant of networks is NP-hard, we only need to prove that deciding whether the Lipschitz constant satisfies $\Vert f\Vert_{Lip} \leq L$ is NP-hard. We reduce from the following NP-hard problem:
\begin{align}
\label{eq:nphard}
\max\min & \Sigma_i (\mathbf{h}_i^{\mathsf{T}} \mathbf{p})^2 = \mathbf{p}^{\mathsf{T}} \mathbf{H} \mathbf{p}\\
& s.t.
\quad \forall k, 0\leq p_k\leq1,
\end{align}
where the matrix $\mathbf{H}=\Sigma_i \mathbf{h}_i \mathbf{h}_i^{\mathsf{T}}$ is positive semi-definite with full rank. We denote the matrices $\mathbf{W}_1$ and $\mathbf{W}_2$ as
\begin{equation}
\mathbf{W}_1 = (\mathbf{h}_1, \mathbf{h}_2,\cdots,\mathbf{h}_{d_1}),
\end{equation}
\begin{equation}
\mathbf{W}_2 = (\mathbf{1}_{d_1\times 1},\mathbf{0}_{d_1\times d_2 -1})^{\mathsf{T}},
\end{equation}
so that we have
\begin{equation}
\mathbf{W}_2 \textnormal{diag}\left(\mathbf{p}\right) \mathbf{W}_1 =
\begin{bmatrix}
\mathbf{h}_1^{\mathsf{T}} \mathbf{p} & 0 & \dots & 0 \\
\vdots & \vdots & \ddots & \\
\mathbf{h}_n^{\mathsf{T}} \mathbf{p} & 0 & & 0
\end{bmatrix}^{\mathsf{T}}
\end{equation}
The squared spectral norm of this rank-1 matrix is $\Sigma_i (\mathbf{h}_i^{\mathsf{T}} \mathbf{p})^2$. We claim that Eq.~\ref{eq:nphard} is equivalent to the following optimization problem
\begin{align}
\label{eq:nphard2}
\max\min & \Vert \mathbf{W}_2 \textnormal{diag}\left(\mathbf{p}\right) \mathbf{W}_1 \Vert_2^2 \\
& s.t. \quad \mathbf{p} \in \left[0, 1\right]^n.
\end{align}
Because $\mathbf{H}$ is full rank, $\mathbf{W}_1$ is surjective and all $\mathbf{p}$ are admissible values for $\nabla g(\mathbf{x})$, which attains the equality case. Finally, ReLU activation units take their derivatives in $\{0,1\}$, and Eq.~\ref{eq:nphard2} is the relaxed optimization problem, which has the same optimal points. Hence our problem is NP-hard.

\subsection{Power Iteration Algorithm}
\begin{algorithm}[!h]
\caption{Compute Spectral Norm using Power Iteration}
\begin{multicols}{2}
\begin{algorithmic}[1]
\Require Targeted matrix $\mathbf{RM}$ and stop condition $res_{stop}$.
\Ensure The spectral norm of matrix $\mathbf{RM}$, \textit{i.e.},~$\Vert \mathbf{RM} \Vert_{SN}$.
\State Initialize $\mathbf{v}_0 \in \mathbb{R}^m$ with a random vector.
\While{$res\geq res_{stop}$}
\State $\mathbf{v}_{i+1} \gets \mathbf{RM}\mathbf{v}_{i} \bigl / \Vert \mathbf{RM}\mathbf{v}_{i}\Vert_2$
\State $res = \Vert \mathbf{v}_{i+1} - \mathbf{v}_{i}\Vert_2$
\EndWhile
\State \Return{$\Vert \mathbf{RM} \Vert_{SN} = \mathbf{v}_{i+1}^{\mathsf{T}} \mathbf{RM} \mathbf{v}_{i} $}
\end{algorithmic}
\end{multicols}
\label{alg:pi}
\vspace*{-3mm}
\end{algorithm}

\subsection{Detailed derivation of the gradient.}
The derivative of the loss function $\mathcal{L}$ w.r.t. $\mathbf{W}_B^k$ is:
\begin{equation}
\begin{split}
&\frac{\partial\mathcal{L}}{\partial \mathbf{W}_B} = \frac{\partial \mathcal{L}_{CE}}{\partial \mathbf{W}_B} + \frac{\partial \mathcal{L}_{Lip}}{\partial \mathbf{W}_B}\\
&= \mathbf{M} - \lambda\sum_{k=1}^{L-1}\beta^{k-L}(\frac{\Vert \mathbf{RM}_F^k \Vert_{SN}}{\Vert \mathbf{RM}_B^k \Vert_{SN}})\frac{\partial \Vert \mathbf{RM}^k_B\Vert_{SN}}{\partial \mathbf{W}_B^k} \\
&\approx \mathbf{M} - \lambda\sum_{k=1}^{L-1}\beta^{k-L}(\frac{\Vert \mathbf{RM}_F^k \Vert_{SN}}{\Vert \mathbf{RM}_B^k \Vert_{SN}})\frac{\partial \Vert \mathbf{W}^k_B\Vert_{SN}}{\partial \mathbf{W}_B^k}\\
&\approx \mathbf{M} - \lambda\sum_{k=1}^{L-1}\beta^{k-L}(\frac{\Vert \mathbf{RM}_F^k \Vert_{SN}}{\Vert \mathbf{RM}_B^k \Vert_{SN}}) \mathbf{u}_1^k (\mathbf{v}_1^k)^{\mathsf{T}},\\
\end{split}
\label{eq:18}
\end{equation}
For the last approximation,
\begin{equation}
\mathbf{M} - \lambda\sum_{k=1}^{L-1}\beta^{k-L}(\frac{\Vert \mathbf{RM}_F^k \Vert_{SN}}{\Vert \mathbf{RM}_B^k \Vert_{SN}})\frac{\partial \Vert \mathbf{W}^k_B\Vert_{SN}}{\partial \mathbf{W}_B^k} \approx \mathbf{M} - \lambda\sum_{k=1}^{L-1}\beta^{k-L}(\frac{\Vert \mathbf{RM}_F^k \Vert_{SN}}{\Vert \mathbf{RM}_B^k \Vert_{SN}}) \mathbf{u}_1^k(\mathbf{v}_1^k)^{\mathsf{T}},
\end{equation}
we provide the core of the proof here, \textit{i.e.}, that the first pair of left and right singular vectors of $\mathbf{W}_B$ reconstructs $\frac{\partial \Vert \mathbf{W}_B\Vert_{SN}}{\partial \mathbf{W}_B}$ precisely.
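This reconstruction claim can be checked numerically before the analytic argument (a sketch; `numpy`, the seed, and the stand-in matrix sizes are our own choices): the directional derivative of $\Vert \mathbf{W}_B\Vert_{SN}$ along a direction $\mathbf{H}$ should match $\langle \mathbf{u}_1\mathbf{v}_1^{\mathsf{T}}, \mathbf{H}\rangle$.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 4))     # stand-in for a weight matrix W_B
H = rng.standard_normal((5, 4))     # arbitrary perturbation direction

# First pair of singular vectors and the claimed gradient u_1 v_1^T
U, S, Vt = np.linalg.svd(W)
grad = np.outer(U[:, 0], Vt[0, :])

spec = lambda A: np.linalg.norm(A, 2)   # spectral norm (largest singular value)

# Central finite-difference directional derivative of the spectral norm
eps = 1e-6
fd = (spec(W + eps * H) - spec(W - eps * H)) / (2 * eps)

assert abs(fd - np.sum(grad * H)) < 1e-4
```

The finite difference agrees with the inner product $\langle \mathbf{u}_1\mathbf{v}_1^{\mathsf{T}}, \mathbf{H}\rangle$, consistent with the rank-one gradient used in the derivation.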
For $\mathbf{W}_B\in \mathbb{R}^{m\times n}$, the spectral norm $\Vert \mathbf{W}_B\Vert_{SN} = \sigma_1(\mathbf{W}_B)$ is its largest singular value, and $\mathbf{u}_1$ and $\mathbf{v}_1$ are the corresponding left and right singular vectors. The SVD of $\mathbf{W}_B$ is $\mathbf{W}_B=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$. Therefore $\Vert \mathbf{W}_B \Vert_{SN}=\mathbf{e}_1^T\mathbf{U}^T(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^T)\mathbf{V}\mathbf{e}_1$, where $\mathbf{e}_1$ is the first standard basis vector. Hence $\Vert \mathbf{W}_B \Vert_{SN} = \mathbf{u}_1^T\mathbf{W}_B\mathbf{v}_1$. Thus the derivative of the spectral norm can be evaluated in the direction $\mathbf{H}$: $\frac{\partial \Vert \mathbf{W}_B\Vert_{SN}}{\partial \mathbf{W}_B}(\mathbf{H}) = \mathbf{u}_1^T\mathbf{H}\mathbf{v}_1 = \mathrm{trace}(\mathbf{u}_1^T\mathbf{H}\mathbf{v}_1) = \mathrm{trace}(\mathbf{v}_1\mathbf{u}_1^T\mathbf{H})$. The gradient is therefore $\frac{\partial \Vert \mathbf{W}_B\Vert_{SN}}{\partial \mathbf{W}_B}= \mathbf{u}_1\mathbf{v}_1^T$, which supports Eq.~13.

\subsection{ImageNet-C}
\noindent\textbf{Sample Visualization of ImageNet-C.} In Section 4.4 we evaluate methods on a common image corruptions benchmark (ImageNet-C) to demonstrate the effectiveness of \emph{LCR} from the perspective of model robustness. As illustrated in Section 4.4, ImageNet-C~\cite{hendrycks2019benchmarking} consists of 19 different types of corruptions with five levels of severity from the noise, blur, weather and digital categories, applied to the validation images of ImageNet (see Fig.~\ref{fig:imagenetc}). As the figure shows, it is natural to use ImageNet-C to measure the semantic robustness of models. Indeed, ImageNet-C has become the most widely acknowledged dataset for measuring the robustness of models.
\begin{figure}[!h] \begin{center} \includegraphics[width=0.97\textwidth]{images/ImageNet-C_sample.PNG} \end{center} \vspace{-0.3in} \caption{Examples of each corruption type in the image corruptions benchmark. While synthetic, this set of corruptions aims to represent natural factors of variation like noise, blur, weather, and digital imaging effects. This figure is reproduced from Hendrycks \& Dietterich (2019).} \label{fig:imagenetc} \vspace{-0.15in} \end{figure}
\section{Introduction} \label{Introduction} The modified Bessel function of the first kind, $$I_{\nu}(x)=\sum_{k=0}^{\infty} \frac{1}{k! \Gamma(k+\nu+1)} \left(\frac{x}{2}\right)^{2k+\nu},$$ arises in numerous applications. In elasticity \cite{Simpson}, one is interested in ${I_{\nu+1}(x)}/{I_{\nu}(x)}$. In image-noise modeling \cite{Hwang}, denoising photon-limited image data \cite{Wolfe}, sports data \cite{Karlis}, and statistical testing \cite{Skellam}, one is interested in $I_{\nu}(x)$ as it arises in the kernel of the probability mass function of a Skellam probability distribution. The functions $I_0(x)$ and $I_1(x)$ arise as rates in concentration inequalities in the behavior of sums of independent $\mathbb{R}^N$-valued, symmetric random vectors \cite{Kanter} \cite{Cranston}. Excellent summaries of applications of $I_{\nu}(x)$ in probability and statistics may be found in~\cite{Robert,Johnson}. For example, these functions are used in the determination of maximum likelihood and minimax estimators for the bounded mean in~ \cite{Marchand1,Marchand2}. Furthermore, \cite{Yuan} gives applications of the modified Bessel function of the first kind to the so-called Bessel probability distribution, while~\cite{Simon} has applications of $I_{\nu}(x)$ in the generalized Marcum Q-function that arises in communication channels. Finally, \cite{Fotopoulos} gives applications in finance. With $I_{\nu}(x)$ regularly arising in new areas of application comes a corresponding need to continue to better understand its properties, both as a function of $\nu$ and of $x$. There is, of course, already much that has been established on this topic. For example, \cite{Luke} first studied inequalities on generalized hypergeometric functions and was the first to give upper and lower bounds on $I_{\nu}(x)$ for $x>0$ and $\nu>-{1}/{2}$. 
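As a point of reference for the computations below, the defining series converges rapidly and a truncated partial sum can be compared against a library implementation (a sketch; `scipy` and the sample values $\nu=2.5$, $x=3$ are our own choices):

```python
from math import gamma
from scipy.special import iv

def I_series(nu, x, terms=40):
    # Partial sum of the defining series for I_nu(x):
    # sum_k (x/2)^(2k+nu) / (k! * Gamma(k+nu+1))
    return sum(
        (x / 2.0) ** (2 * k + nu) / (gamma(k + 1) * gamma(k + nu + 1))
        for k in range(terms)
    )

nu, x = 2.5, 3.0
assert abs(I_series(nu, x) - iv(nu, x)) < 1e-12
```

With 40 terms the truncation error is far below double precision for moderate $x$, so the partial sum agrees with `scipy.special.iv` to machine accuracy.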
Additionally, \cite{Amos} was concerned with the computation of $I_{\nu}(x)$, and provided a way to produce rapid evaluations of the ratios $I_{\nu+1}(x)/I_{\nu}(x)$, and hence $I_{\nu}(x)$ itself, through recursion. Several other useful representations of $I_{\nu}(x)$ are also provided in~\cite{Amos}. More recently, various convexity properties of $I_{\nu}(x)$ have been studied in~\cite{Neuman} and \cite{Baricz Neuman}. In the last decade, motivated by results in finite elasticity, \cite{Simpson} and~\cite{Laforgia} provide bounds on $I_{\nu}(x)/I_{\nu}(y)$ for $\nu>0$ and $0<x<y$, while \cite{Baricz Sum} provides bounds on the quantity $\exp(-x)x^{-\nu} \left[I_{\nu}(x)+I_{\nu+1}(x)\right]$ arising in the concentration of random vectors, as in~\cite{Kanter, Cranston}. Motivated by applications in communication channels, \cite{Baricz Marcum Q} develops bounds on the generalized Marcum-Q function, and the same author in \cite{Baricz Turanians} develops estimates on the so-called Turan-type inequalities $I^2_{\nu}(x)-I_{\nu-1}(x)I_{\nu+1}(x)$. For an excellent review of modern results on $I_{\nu}(x)$ and its counterpart $K_{\nu}(x)$, the modified Bessel function of the second kind, we refer the reader to \cite{Baricz Edin}. In our own ongoing work in the statistical analysis of networks, the function $I_{\nu}(x)$ has arisen as well, in a manner that -- to the best of our knowledge -- has yet to be encountered and addressed in the literature. Specifically, in seeking to establish the probability distribution of the discrepancy between (a) the true number of edges in a network graph, and (b) the number of edges in a `noisy' version of that graph, one is faced with the task of analyzing the distribution of the difference of two sums of dependent binary random variables. Under a certain asymptotic regime, it is reasonable to expect that each sum converges to a certain Poisson random variable and, hence, their difference, to a so-called Skellam distribution.
The latter is the name for the probability distribution characterized by the difference of two independent Poisson random variables and -- notably -- has a kernel defined in terms of $I_{\nu}(x)$ \cite{Skellam} for $x\geq 0$ and $\nu \in \mathbb{N}$. One way to study the limiting behavior of our difference of sums is through Stein's method \cite{Barbour}. As part of such an analysis, however, non-asymptotic upper bounds are necessary on the quantity
\begin{equation}
\label{H Def}
H(\nu, x)=\sum_{k=1}^{\infty} \frac{I_{\nu+k}(x)}{I_{\nu}(x)} \enskip ,
\end{equation}
for $\nu \in \mathbb{N}$ that have a scaling of $\sqrt{x}$ for $\nu$ near 0. Unfortunately, using current bounds on $I_{\nu+1}(x)/I_{\nu}(x)$ to lower and upper bound the infinite sum in $H(\nu,x)$ in (\ref{H Def}) for $\nu,x\geq 0$ necessitates the use of a geometric series-type argument, and neither of the resulting expressions has this kind of behavior near $\nu=0$. In particular, we show that such an approach, for $\nu=0$, yields a lower bound that is order one and an upper bound that is order $x$ as $x\rightarrow \infty$. See (\ref{Geometric Bounds}) below. The purpose of this paper is to derive bounds on $I_{\nu+1}(x)/I_{\nu}(x)$ which, when used to lower and upper bound the infinite sum arising in $H(\nu,x)$, lead to better estimates on $H(\nu,x)$ near $\nu=0$ compared to those obtained using current estimates, for $\nu,x\geq 0$ and $\nu\in \mathbb{R}$. In particular, we show that it is possible to derive both upper and lower bounds on $H(\nu,x)$ that behave as $\sqrt{x}$ for $x$ large.
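The $\sqrt{x}$ scaling of $H(0,x)$ is easy to observe numerically by truncating the sum (a sketch; `scipy` and the truncation level are our own choices; the exponentially scaled function `ive` is used so the $e^{x}$ factors cancel in each ratio):

```python
from scipy.special import ive

def H(nu, x, terms=400):
    # Truncation of H(nu, x) = sum_{k>=1} I_{nu+k}(x) / I_nu(x).
    # ive(v, x) = exp(-x) * iv(v, x), so the scaling cancels in the ratio;
    # the terms decay roughly like exp(-k^2 / (2x)), so 400 terms suffice here.
    return sum(ive(nu + k, x) / ive(nu, x) for k in range(1, terms + 1))

# Quadrupling x should roughly double H(0, x) if H scales like sqrt(x)
ratio = H(0, 400.0) / H(0, 100.0)
assert 1.8 < ratio < 2.3
```

Here the computed ratio is close to $2$, consistent with the claimed $\sqrt{x}$ behavior near $\nu=0$.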
When we restrict $\nu$ to $\mathbb{N}$, we can apply these results to obtain a concentration inequality for the Skellam distribution, to bound the probability mass function of the Skellam distribution, and to upper and lower bound $\exp(-x)I_{\nu}(x)$ for any $\nu,x\geq 0$, improving on the asymptotic $\exp(-x)I_{\nu}(x)\sim {1}/{\sqrt{2\pi x}}$ as $x\rightarrow \infty$ in \cite{AS}, at least for $\nu \in \mathbb{N}$. In our approach to analyzing the function $H(\nu,x)=\sum_{n=1}^{\infty} {I_{\nu+n}(x)}/{I_{\nu}(x)}$, we first write each term in the sum using the iterative product,
\begin{equation}
\label{intro: iterative product}
\frac{I_{\nu+n}(x)}{I_{\nu}(x)}=\prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)}
\end{equation}
and split the infinite sum (\ref{H Def}) into two regimes: one where $[\nu]+2>x$ and the other where $[\nu]+2\leq x$, where $[\nu]$ denotes the floor function of $\nu$. In the former regime, the ``tail'' of $H(\nu,x)$, we can use existing estimates on $I_{\nu+1}(x)/I_{\nu}(x)$ in a geometric series to lower and upper bound $H(\nu,x)$ in a way that preserves the scaling of $H(\nu,x)$ in $\nu$ and $x$. In the latter regime, lower and upper bounds on the function ${I_{\nu+1}(x)}/{I_{\nu}(x)}$ for $\nu \in \mathbb{R}$ and $\nu,x\geq 0$ are now required with algebraic properties suitable to better sum the products (\ref{intro: iterative product}) arising in $H(\nu,x)$ in a way that preserves the behavior of $H(\nu,x)$ near $\nu=0$ for large $x$.
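Since the Skellam probability mass function is one of the downstream targets, it is convenient to record its explicit form, $\mathbb{P}[W=k]=e^{-(\mu_1+\mu_2)}\left(\mu_1/\mu_2\right)^{k/2}I_{|k|}(2\sqrt{\mu_1\mu_2})$ for $W\sim Skellam(\mu_1,\mu_2)$ and $k\in\mathbb{Z}$, and check it against a library implementation (a sketch; `scipy` and the parameter values $\mu_1=3$, $\mu_2=2$ are our own choices):

```python
import math
from scipy.special import iv
from scipy.stats import skellam

def skellam_pmf(k, mu1, mu2):
    # P[W = k] for W ~ Skellam(mu1, mu2), written via the modified
    # Bessel function; I_{-k} = I_k for integer k, hence abs(k).
    return (math.exp(-(mu1 + mu2))
            * (mu1 / mu2) ** (k / 2)
            * iv(abs(k), 2.0 * math.sqrt(mu1 * mu2)))

mu1, mu2 = 3.0, 2.0
for k in range(-5, 6):
    assert abs(skellam_pmf(k, mu1, mu2) - skellam.pmf(k, mu1, mu2)) < 1e-9
```

The closed form agrees with `scipy.stats.skellam.pmf` across positive and negative integers $k$.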
To provide these bounds on $I_{\nu+1}(x)/I_{\nu}(x)$, we begin with those in \cite{Amos}, valid for $\nu,x\geq 0$, which can be expressed as \begin{equation} \label{Amos Bound} \sqrt{1+\left(\frac{\nu+1}{x}\right)^2} -\frac{\nu+1}{x} \leq \frac{I_{\nu+1}(x)}{I_{\nu}(x)}\leq \sqrt{1+\left(\frac{\nu+\frac{1}{2}}{x}\right)^2} -\frac{\nu+\frac{1}{2}}{x} \enskip , \end{equation} and weaken them to those with nicer, exponential properties, using a general result on the best exponential approximation for the function $f(x)=\sqrt{1+x^2}-x$ for $x\in [0,1]$. When applied to $I_{\nu+1}(x)/I_{\nu}(x)$ for $\nu+1\leq x$, we obtain $$\exp\left(-\frac{\nu+1}{x}\right)\leq \frac{I_{\nu+1}(x)}{I_{\nu}(x)} \leq \exp\left(-\alpha_0 \frac{\nu+\frac{1}{2}}{x}\right).$$ See Proposition~\ref{besselexp} and Corollary~\ref{Bessel Ratio}. Using these bounds to lower and upper bound $H(\nu,x)$ described in the above fashion, we obtain \begin{enumerate} \item For any $\nu,x\geq 0$ (and in particular, for $[\nu]+2>[x]$), \begin{equation} \label{Geometric Bounds} F(\nu+1,x)(1+F(\nu+2,x))\leq H(\nu,x) \leq \frac{F\left(\nu+\frac{1}{2},x\right)}{1-F\left(\nu+\frac{3}{2},x\right) }. \end{equation} \item If $\nu,x\geq 0$ and $[\nu]+2\leq [x]$, \begin{equation} \label{Our H Bounds} \mathcal{L}(\nu,x)\leq H(\nu,x)\leq \mathcal{U}(\nu,x) \end{equation} where \begin{equation} \label{Our Lower H Bound} \begin{aligned} \mathcal{L}(\nu,x)&= \frac{2xe^{-\frac{1}{x}\left(\nu+1\right)}}{\nu+\frac{3}{2}+\sqrt{\left(\nu+\frac{3}{2}\right)^2+4x}}- \frac{2xe^{-\frac{1}{2x} ([x]-\nu-\nu_f+1)([x]+\nu-\nu_f+2)}}{[x]+\frac{3}{2}+\sqrt{ \left([x]+\frac{3}{2}\right)^2+\frac{8x}{\pi}}}\\ &\hspace{2in}+e^{-\frac{([x]-[\nu]-1)([x]+\nu+\nu_f)}{2x}}F([x]+\nu_f,x)(1+F([x]+\nu_f+1,x))\enskip. 
\end{aligned} \end{equation} and \begin{equation} \label{Our Upper H Bound} \begin{aligned} \mathcal{U}(\nu,x)&= \frac{2x}{\alpha_0}\left[\frac{1}{\nu +\sqrt{\nu^2+\frac{8x}{\pi\alpha_0}}}-\frac{e^{-\frac{\alpha_0}{2x} (x^2-\nu^2)}}{x+\sqrt{x^2+\frac{4x}{\alpha_0}}}\right] \\ &\hspace{1in}+e^{-\frac{\alpha_0}{2x} ([x]-[\nu]-1)([x]+\nu+\nu_f-1)}\frac{F\left([x]+\nu_f-\frac{1}{2},x\right)}{1-F\left([x]+\nu_f+\frac{1}{2},x\right)}\enskip. \end{aligned} \end{equation} \end{enumerate} where $\alpha_0=-\log(\sqrt{2}-1)$, $[x]$ denotes the floor function of $x$, $x=[x]+x_f$, $\nu=[\nu]+\nu_f$, and $$F(\nu,x)=\frac{x}{\nu+\sqrt{\nu^2+x^2}}\enskip.$$ We note here that the bounds in (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}) are similar to those occurring in (\ref{Geometric Bounds}), but now with exponentially decaying factors in $x$ plus an incurred error from $\sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} {I_{\nu+k+1}(x)}/{I_{\nu+k}(x)}$, which behaves like a partial sum of a Gaussian over the integers from $\nu$ to $[x]$. These contributions are lower and upper bounded by the first differences in both (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}), and are responsible for our bounds behaving like $\sqrt{x}$ for $\nu$ near $0$ and $x$ large. Indeed, if one were to simply use (\ref{Geometric Bounds}) for all $\nu,x\geq 0$, then for $\nu=0$ and large $x$, the lower bound is of order $1$, while the upper bound is of order $x$. The rest of this paper is organized as follows. We derive our bounds in Section~\ref{main results}, give applications in Section~\ref{appls}, and provide some discussion in Section~\ref{conclusion}. In Section~\ref{Bessel function}, we first give the result on the best exponential approximation to the function $f(x)=\sqrt{1+x^2}-x$ for $x\in [0,1]$, in Proposition~\ref{besselexp}, and apply it to lower and upper bound $I_{\nu+1}(x)/I_{\nu}(x)$ when $\nu+1\leq x$, obtaining Corollary~\ref{Bessel Ratio}.
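Both ingredients of this roadmap can be sanity-checked numerically. The following Python sketch (again our own illustration; the power-series evaluation of $I_{\nu}$ and the grid size are choices of convenience) verifies the best exponential approximation of Proposition~\ref{besselexp} on a grid of $[0,1]$ and the resulting ratio bounds of Corollary~\ref{Bessel Ratio} for sample values with $\nu+1\leq x$:

```python
import math

ALPHA0 = -math.log(math.sqrt(2.0) - 1.0)  # approximately 0.8814

def iv(nu, x, terms=120):
    # I_nu(x) by its power series
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / (k * (k + nu))
        s += t
    return s

def check_proposition(grid=1000):
    # exp(-t) <= sqrt(1 + t^2) - t <= exp(-alpha0 * t) on [0, 1]
    for i in range(grid + 1):
        t = i / grid
        f = math.sqrt(1.0 + t * t) - t
        assert math.exp(-t) - 1e-12 <= f <= math.exp(-ALPHA0 * t) + 1e-12
    return True

def check_corollary(nu, x):
    # exp(-(nu+1)/x) <= I_{nu+1}(x)/I_nu(x) <= exp(-alpha0 (nu+1/2)/x), for nu+1 <= x
    r = iv(nu + 1.0, x) / iv(nu, x)
    return math.exp(-(nu + 1.0) / x) <= r <= math.exp(-ALPHA0 * (nu + 0.5) / x)
```

For instance, `check_corollary(3.0, 12.0)` confirms the sandwich at a non-trivial argument; the checks pass for both integer and non-integer $\nu$.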
In Section~\ref{Hazard function}, we then use these bounds to give the upper and lower bounds on $H(\nu,x)$ for $\nu,x\geq 0$. Combining these bounds with a normalizing condition from the Skellam distribution, we provide in Section~\ref{Asymptotic application} deterministic upper and lower bounds on $\exp(-x)I_{\nu}(x)$ for $\nu\in \mathbb{N}$ and apply them to obtain upper and lower bounds on $\mathbb{P}\left[W=\nu\right]$ for $W\sim Skellam(\lambda_1,\lambda_2)$. Finally, in Section~\ref{Skellam application}, we apply the results on $H(\nu,x)$ to deriving a concentration inequality for the $Skellam(\lambda,\lambda)$ distribution. \section{Main Results: Bounds} \label{main results} \subsection{Pointwise bounds on ${I_{\nu+1}(x)}/{I_{\nu}(x)}$} \label{Bessel function} We begin with upper and lower bounds on the ratio ${I_{\nu+1}(x)}/{I_{\nu}(x)}$. First, we need the following proposition. \begin{prop}{(Best Exponential Approximation)} \label{besselexp} For all $x\in [0,1]$, \begin{equation} \label{BEA 1} \exp(-x)\leq \sqrt{1+x^2}-x \leq \exp(-\alpha_0 x) \enskip, \end{equation} where $\alpha_0=-\log(\sqrt{2}-1)\approx 0.8814.$ Moreover, these are the best possible arguments of the exponential, keeping prefactor constants equal to $1$. \end{prop} {\bf Proof of Proposition \ref{besselexp}:} We want to find the best constants $\alpha_1,\alpha_2>0$ for which $$\exp(-\alpha_1 x)\leq \sqrt{1+x^2}-x \leq \exp(-\alpha_2 x), \; \; \; x\in [0,1].$$ To this end, consider the function $$f(x)=\left(\sqrt{1+x^2}-x\right) \exp(\alpha x)$$ for some $\alpha>0$. We want to find the maximum and minimum values of $f(x)$ on the interval $[0,1]$.
First, note that $$f(0)=1$$ and $$f(1)=(\sqrt{2}-1)\exp(\alpha).$$ To check for critical points, we have $$ \begin{aligned} f'(x)&=\exp(\alpha x) \left[ \alpha \left(\sqrt{1+x^2}-x\right) +\frac{x}{\sqrt{1+x^2}} -1\right]\\ &=\frac{\exp(\alpha x)}{\sqrt{1+x^2}} \left[ (\alpha+x+\alpha x^2) - (1+\alpha x)\sqrt{1+x^2}\right]\\ &=\frac{\exp(\alpha x)}{\sqrt{1+x^2} \left[(\alpha+x+\alpha x^2)+(1+\alpha x)\sqrt{1+x^2}\right]}\left[ (\alpha+x+\alpha x^2)^2 - (1+\alpha x)^2 (1+x^2)\right]\\ &=0 \end{aligned} $$ Thus, we require $$(\alpha+x+\alpha x^2)^2 = (1+\alpha x)^2 (1+x^2).$$ Expanding both sides of this equation and after some algebra, we get $$\alpha^2 +\alpha^2x^2=1 \Leftrightarrow x=x_0:=\sqrt{\frac{1-\alpha^2}{\alpha^2}},$$ taking the positive root since $x\geq 0$. Furthermore, this computation shows that this value is always a local minimum. \subsection*{{\bf Case 1: $\alpha\geq 1$}} In this case, there are no critical points, and the function $f(x)$ is monotone increasing on $(0,1)$. We find that the upper bound is $f(1)=(\sqrt{2}-1)\exp(\alpha)$ and the lower bound is $f(0)=1$, so that $$\exp(-\alpha x) \leq \sqrt{1+x^2}-x \leq (\sqrt{2}-1)\exp(\alpha) \exp(-\alpha x).$$ The lower bound is maximized at $\alpha=1$. \subsection*{{\bf Case 2: $\alpha\leq {1}/{\sqrt{2}}$}} In this regime, since $\alpha\leq {1}/{\sqrt{2}}$ we have $x_0\geq 1$, and the function $f$ is monotone decreasing on $[0,1]$. Thus, $$(\sqrt{2}-1)\exp(\alpha) \exp(-\alpha x)\leq \sqrt{1+x^2}-x \leq \exp(-\alpha x)\enskip.$$ We can minimize the upper bound by taking $\alpha={1}/{\sqrt{2}}$.
\subsection*{{\bf Case 3: ${1}/{\sqrt{2}}\leq \alpha \leq 1$}} Here the critical point $x_0=\sqrt{1-\alpha^2}/\alpha$ lies in $[0,1]$, and evaluating $f$ there gives $$f_{x_0}(\alpha)=(1-\sqrt{1-\alpha^2})\frac{\exp\left(\sqrt{1-\alpha^2}\right)}{\alpha}=\frac{\alpha}{1+\sqrt{1-\alpha^2}} \exp(\sqrt{1-\alpha^2}).$$ Differentiating in $\alpha$, $$ \begin{aligned} f_{x_0}'(\alpha)=\frac{\exp\left(\sqrt{1-\alpha^2}\right)(1-\alpha^2)}{(\sqrt{1-\alpha^2})(1+\sqrt{1-\alpha^2})}>0\enskip. \end{aligned} $$ Thus, starting at $\alpha={1}/{\sqrt{2}}$, where the critical point occurs at $x=1$, the critical point moves monotonically to the left, settling at $x=0$ when $\alpha=1$. While it does this, the value of the local minimum, $f(x_0)$, increases monotonically, as does $f(1)$. So, in all cases, $f(x_0)\leq f(0)$ and $f(x_0)\leq f(1)$. But $f(0)\geq f(1)$ for ${1}/{\sqrt{2}}\leq \alpha \leq \alpha_0$, $f(0)\leq f(1)$ for $\alpha_0\leq \alpha \leq 1$, and equality occurs only at $\alpha=\alpha_0$. Since we are interested in constants of $1$, in the former case ${1}/{\sqrt{2}}\leq \alpha \leq \alpha_0$ we obtain $$\sqrt{1+x^2}-x \leq \exp(-\alpha x).$$ We can minimize the upper bound by taking $\alpha=\alpha_0$. Thus, $$\exp(-x)\leq \sqrt{1+x^2}-x \leq \exp(-\alpha_0 x)$$ where $\alpha_0=-\log(\sqrt{2}-1)\approx 0.8814.$ \begin{flushright} $\square$ \end{flushright} Next, applying Proposition \ref{besselexp} to the ratio ${I_{\nu+1}(x)}/{I_{\nu}(x)}$, we have the following corollary. \begin{cor} \label{Bessel Ratio} Let $\nu,x\geq 0$, and let $\alpha_0=-\log(\sqrt{2}-1)$. If $\nu+1\leq x$, then \begin{equation} \label{Bessel Ratio I} \exp\left(-\frac{\nu+1}{x}\right)\leq \frac{I_{\nu+1}(x)}{I_{\nu}(x)} \leq \exp\left(-\alpha_0 \frac{\nu+\frac{1}{2}}{x}\right).
\end{equation} \end{cor} {\bf Proof of Corollary \ref{Bessel Ratio}:} Note that by the bounds in \cite{Amos}, for $\nu,x\geq 0$, \begin{equation} \label{Amos Bounds} \begin{aligned} &\frac{I_{\nu+1}(x)}{I_{\nu}(x)}\geq \sqrt{1+\left(\frac{\nu+1}{x}\right)^2} -\frac{\nu+1}{x} = \frac{x}{\nu+1+\sqrt{x^2+(\nu+1)^2}} \enskip, \\ &\frac{I_{\nu+1}(x)}{I_{\nu}(x)}\leq \frac{x}{\nu+\frac{1}{2}+\sqrt{x^2+\left(\nu+\frac{1}{2}\right)^2}} =\sqrt{1+\left(\frac{\nu+\frac{1}{2}}{x}\right)^2} -\frac{\nu+\frac{1}{2}}{x}. \end{aligned} \end{equation} We note that we cannot use the more precise lower bound in \cite{Amos}, $$ \begin{aligned} &\frac{I_{\nu+1}(x)}{I_{\nu}(x)}\geq \frac{x}{\nu+\frac{1}{2}+\sqrt{x^2+\left(\nu+\frac{3}{2}\right)^2}} \end{aligned} $$ since we require the arguments in $\nu$ in the denominator to be the same. When $\nu+1\leq x$, both $(\nu+{1}/{2})/{x}$ and $(\nu+1)/{x}$ are at most $1$, so that by Proposition \ref{besselexp}, $$\exp\left(-\frac{\nu+1}{x}\right)\leq \frac{I_{\nu+1}(x)}{I_{\nu}(x)} \leq \exp\left(-\alpha_0 \frac{\nu+\frac{1}{2}}{x}\right).$$ \begin{flushright} $\square$ \end{flushright} We illustrate these bounds on ${I_{\nu+1}(x)}/{I_{\nu}(x)}$ in Figure \ref{besselfunccomp2}. \begin{figure}[htp!] \includegraphics[width=15cm, height=10cm]{besselfunccompratioRight2.eps} \caption{An illustration of the exponential-type bounds from Corollary \ref{Bessel Ratio} on ${I_{\nu+1}(x)}/{I_{\nu}(x)}$ for $x=100$, over the interval $[0,150]$ taken in steps of $0.015$. For $\nu+1\leq x$, we apply the bounds from Corollary \ref{Bessel Ratio}, after which we use the lower and upper bounds ${x}/(\nu+1+\sqrt{x^2+(\nu+1)^2})$ and ${x}/(\nu+\frac{1}{2}+\sqrt{x^2+\left(\nu+\frac{1}{2}\right)^2})$, respectively.
For comparison, we also plot these latter bounds for $\nu+1\leq x$, and note that due to how precise these bounds are for any $\nu$, the black, blue, and cyan curves nearly coincide.} \label{besselfunccomp2} \end{figure} \newpage \subsection{Bounds on $H(\nu,x)$} \label{Hazard function} The bounds in Section \ref{Bessel function} on $I_{\nu+1}(x)/I_{\nu}(x)$ have algebraic properties that are well suited to the evaluation of products. This allows us to obtain explicit and interpretable bounds on $H(\nu,x)$. Recall our program outlined in Section \ref{Introduction}: for any $\nu,x\geq 0$, $$ \begin{aligned} H(\nu,x) &= \begin{cases} \sum_{n=1}^{\infty} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} & [\nu]+2> [x]\\ \sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} + \sum_{n=[x]-[\nu]}^{\infty} \prod_{k=0}^{[x]-[\nu]-2} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} \prod_{k=[x]-[\nu]-1}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} & [\nu]+2\leq [x] \enskip. \end{cases}\\ \end{aligned} $$ Using our bounds on $I_{\nu+1}(x)/I_{\nu}(x)$ in Corollary \ref{Bessel Ratio}, the first term in the regime $[\nu]+2\leq [x]$ behaves like a sum of discrete Gaussians. The second term in the regime $[\nu]+2\leq [x]$ and the term in the regime $[\nu]+2>[x]$ are ``tail''-like quantities, and a simple geometric series-type argument using (\ref{Amos Bounds}) suffices to capture the behavior of $H(\nu,x)$. In fact, such a geometric series-type argument holds for any $\nu\geq 0$, and we give the full result, which will be useful for comparison. \begin{Theorem} \label{Hboundexp} Let $\nu,x\geq 0$ and $H(\nu,x)$ be defined as in (\ref{H Def}). Then (\ref{Geometric Bounds}) holds, and if in addition $[\nu]+2\leq [x]$, then (\ref{Our H Bounds}), (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}) hold. \end{Theorem} {\bf Proof of Theorem \ref{Hboundexp}:} \begin{enumerate} \item We first prove (\ref{Geometric Bounds}).
Note that $$H(\nu,x)=\frac{I_{\nu+1}(x)}{I_{\nu}(x)}(1+H(\nu+1,x)) \enskip,$$ which in view of (\ref{Amos Bounds}) yields, \begin{equation} \label{H Recurse} F(\nu+1,x)(1+H(\nu+1,x))\leq H(\nu,x) \leq F\left(\nu+\frac{1}{2},x\right) (1+H(\nu+1,x))\enskip. \end{equation} Next, we have $$\begin{aligned} H(\nu+1,x)&=\sum_{n=1}^{\infty} \prod_{k=1}^{n} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)}\\ &\leq \sum_{n=1}^{\infty} \prod_{k=1}^{n} \frac{x}{\nu+k+\frac{1}{2} + \sqrt{\left(\nu+k+\frac{1}{2}\right)^2+x^2}}\\ &\leq \sum_{n=1}^{\infty} \left( \frac{x}{\nu+\frac{3}{2} + \sqrt{\left(\nu+\frac{3}{2}\right)^2+x^2}}\right)^n\\ &=\sum_{n=1}^{\infty}F\left(\nu+\frac{3}{2},x\right)^n\enskip, \end{aligned} $$ so that $$1+H(\nu+1,x)\leq \sum_{n=0}^{\infty}F\left(\nu+\frac{3}{2},x\right)^n=\frac{1}{1-F\left(\nu+\frac{3}{2},x\right)} \enskip.$$ Thus, $$H(\nu,x) \leq \frac{F\left(\nu+\frac{1}{2},x\right)}{1-F\left(\nu+\frac{3}{2},x\right) }$$ yielding the upper bound in (\ref{Geometric Bounds}). For the lower bound, note that using (\ref{Amos Bounds}), $$F(\nu+2,x)\leq \frac{I_{\nu+2}(x)}{I_{\nu+1}(x)} \leq H(\nu+1,x)\enskip,$$ which in view of (\ref{H Recurse}) implies $$F(\nu+1,x)(1+F(\nu+2,x))\leq H(\nu,x)\enskip.$$ This completes the proof of (\ref{Geometric Bounds}). \item Next, we prove (\ref{Our H Bounds}), (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}).
Note that using an iterated product, we may write $$H(\nu,x)= \sum_{n=1}^{\infty} \frac{I_{\nu+n}(x)}{I_{\nu}(x)}=\sum_{n=1}^{\infty} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)}$$ so that for $[\nu]+2\leq [x]$, \begin{equation} \label{separate H} \begin{aligned} H(\nu,x) &=\sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} + \sum_{n=[x]-[\nu]}^{\infty} \prod_{k=0}^{[x]-[\nu]-2} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} \prod_{k=[x]-[\nu]-1}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} \\ &=\sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} +\prod_{k=0}^{[x]-[\nu]-2} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} \sum_{n=[x]-[\nu]}^{\infty} \prod_{k=[x]-[\nu]-1}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} \enskip. \end{aligned} \end{equation} First, we deal with the sum in the second term. Using similar arguments as above for the upper bound in the first part of the theorem, we can write $$ \begin{aligned} & \sum_{n=[x]-[\nu]}^{\infty} \prod_{k=[x]-[\nu]-1}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)}\\ &\leq\sum_{n=[x]-[\nu]}^{\infty} \prod_{k=[x]-[\nu]-1}^{n-1} \frac{x}{\nu+k+\frac{1}{2}+\sqrt{(\nu+k+\frac{1}{2})^2+x^2}}\\ &=F\left([x]+\nu_f-\frac{1}{2},x\right)\cdot \left(1+ \sum_{n=[x]-[\nu]}^{\infty} \prod_{k=[x]-[\nu]}^{n} \frac{x}{\nu+k+\frac{1}{2}+\sqrt{(\nu+k+\frac{1}{2})^2+x^2}}\right)\\ &\leq F\left([x]+\nu_f-\frac{1}{2},x\right) \cdot \left(1+\sum_{n=[x]-[\nu]}^{\infty} \left(\frac{x}{[x]+\nu_f+\frac{1}{2}+\sqrt{([x]+\nu_f+\frac{1}{2})^2+x^2}}\right)^{n+1-([x]-[\nu])} \right)\\ &=\frac{F\left([x]+\nu_f-\frac{1}{2},x\right)}{1-F\left([x]+\nu_f+\frac{1}{2},x\right)}\enskip.
\end{aligned} $$ Similar arguments as in the lower bound for $[\nu]+2>[x]$ yield, $$ \begin{aligned} &\sum_{n=[x]-[\nu]}^{\infty} \prod_{k=[x]-[\nu]-1}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)}\\ &\geq \frac{I_{[x]+\nu_f}(x)}{I_{[x]+\nu_f-1}(x)}\left(1+ \frac{I_{[x]+\nu_f+1}(x)}{I_{[x]+\nu_f}(x)}\right)\\ & \geq F([x]+\nu_f,x)(1+F([x]+\nu_f+1,x))\enskip. \end{aligned} $$ Thus, $$H(\nu,x)\leq \sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} +\prod_{k=0}^{[x]-[\nu]-2} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} \frac{F\left([x]+\nu_f-\frac{1}{2},x\right)}{1-F\left([x]+\nu_f+\frac{1}{2},x\right)}$$ and $$H(\nu,x)\geq \sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} +\prod_{k=0}^{[x]-[\nu]-2} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)}F([x]+\nu_f,x)(1+F([x]+\nu_f+1,x)) \enskip.$$ Next, note that each term in each of the products above has $\nu+k+1\leq x$, since the largest $k$ can be is $k=[x]-[\nu]-2$ and $\nu+([x]-[\nu]-2)+1=[x]+\nu_f-1\leq x$. Thus, we may apply Corollary \ref{Bessel Ratio} to obtain $$ \begin{aligned} \prod_{k=0}^{[x]-[\nu]-2} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} &\leq \prod_{k=0}^{[x]-[\nu]-2} e^{-\alpha_0 \frac{\nu+k+\frac{1}{2}}{x}}\\ &=\exp\left( -\frac{\alpha_0}{x} \left( \left(\nu+\frac{1}{2}\right) \left([x]-[\nu]-1\right) + \frac{\left([x]-[\nu]-1\right)\left([x]-[\nu]-2\right)}{2}\right)\right)\\ &=\exp\left( -\frac{\alpha_0}{2x} \left([x]-[\nu]-1\right)\left(2\nu+1+[x]-[\nu]-2\right) \right)\\ &=\exp\left(-\frac{\alpha_0}{2x} ([x]-[\nu]-1)([x]+\nu+\nu_f-1)\right) \enskip.
\end{aligned} $$ Likewise, $$ \begin{aligned} \prod_{k=0}^{[x]-[\nu]-2} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} &\geq \prod_{k=0}^{[x]-[\nu]-2} e^{- \frac{\nu+k+1}{x}}=\exp\left(-\frac{([x]-[\nu]-1)([x]+\nu+\nu_f)}{2x}\right)\\ \end{aligned} $$ implying $$H(\nu,x)\leq \sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} + e^{-\frac{\alpha_0}{2x} ([x]-[\nu]-1)([x]+\nu+\nu_f-1)}\frac{F\left([x]+\nu_f-\frac{1}{2},x\right)}{1-F\left([x]+\nu_f+\frac{1}{2},x\right)}$$ and $$H(\nu,x)\geq \sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} +e^{-\frac{([x]-[\nu]-1)([x]+\nu+\nu_f)}{2x}}F([x]+\nu_f,x)(1+F([x]+\nu_f+1,x))\enskip.$$ Thus, it remains only to estimate the sum $ \sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} {I_{\nu+k+1}(x)}/{I_{\nu+k}(x)} $. Using Corollary \ref{Bessel Ratio} again, we get $$ \begin{aligned} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)}\leq \prod_{k=0}^{n-1} e^{-\alpha_0 \frac{\nu+k+\frac{1}{2}}{x}}&=\exp\left(-\frac{\alpha_0}{2x} n(2\nu+n)\right)=\exp\left(-\frac{\alpha_0}{2x} [(n+\nu)^2-\nu^2] \right) \enskip. \end{aligned} $$ Applying the same technique with the lower bound in Corollary \ref{Bessel Ratio}, we have \begin{equation} \label{upper H} \begin{aligned} e^{\frac{1}{2x} \left(\nu+\frac{1}{2}\right)^2} \sum_{n=1}^{[x]-[\nu]-1} e^{-\frac{1}{2x} \left(n+\nu+\frac{1}{2}\right)^2}\leq \sum_{n=1}^{[x]-[\nu]-1} \prod_{k=0}^{n-1} \frac{I_{\nu+k+1}(x)}{I_{\nu+k}(x)} \leq e^{\frac{\alpha_0 \nu^2}{2x}} \sum_{n=1}^{[x]-[\nu]-1} e^{-\frac{\alpha_0}{2x} \left(n+\nu\right)^2}\enskip. \\ \end{aligned} \end{equation} Since both the upper and lower bounds are similar, we focus only on the upper bound. The lower bound can be treated similarly.
$$ \begin{aligned} & \sum_{n=1}^{[x]-[\nu]-1} e^{-\frac{\alpha_0}{2x} \left(n+\nu\right)^2}\\ &=\sum_{k=[\nu]+1}^{[x]-1} e^{-\frac{\alpha_0}{2x} \left(k+\nu_f\right)^2}\\ &\leq \int_{[\nu]}^{[x]-1} e^{-\frac{\alpha_0}{2x} \left(y+\nu_f\right)^2} dy\\ &=\sqrt{\frac{2x}{\alpha_0}} \int_{\sqrt{\frac{\alpha_0}{2x}}\nu}^{\sqrt{\frac{\alpha_0}{2x}}\left([x]-1+\nu_f\right)} e^{-u^2}du\\ &\leq \sqrt{\frac{2x}{\alpha_0}} \int_{\sqrt{\frac{\alpha_0}{2x}}\nu}^{\sqrt{\frac{\alpha_0}{2x}}x} e^{-u^2}du\\ &=\sqrt{\frac{2x}{\alpha_0}} \left[ \int_{\sqrt{\frac{\alpha_0}{2x}}\nu}^{\infty} e^{-u^2}du-\int_{\sqrt{\frac{\alpha_0}{2x}} x}^{\infty} e^{-u^2}du\right]\\ \end{aligned} $$ \begin{equation} \label{Gaussian Integral} \leq \sqrt{\frac{2x}{\alpha_0}}\left[\frac{e^{-\frac{\alpha_0}{2x} \nu^2 }}{\sqrt{\frac{\alpha_0}{2x}} \nu +\sqrt{\frac{\alpha_0}{2x} \nu^2+\frac{4}{\pi}}}-\frac{e^{-\frac{\alpha_0}{2x} x^2}}{\sqrt{\frac{\alpha_0}{2x}} x+\sqrt{\frac{\alpha_0}{2x} x^2+2}}\right] \end{equation} where after a $u$-substitution, we have used the inequality (see \cite{AS}), $$\frac{e^{-x^2}}{x+\sqrt{x^2+2}} \leq \int_x^{\infty} e^{-t^2}dt \leq \frac{e^{-x^2}}{x+\sqrt{x^2+\frac{4}{\pi}}} \; \; \; x\geq 0.$$ From (\ref{upper H}) and (\ref{Gaussian Integral}), we have \begin{equation} \label{H5} \begin{aligned} &H(\nu,x) \leq \sqrt{\frac{2x}{\alpha_0}}\left[\frac{1}{\sqrt{\frac{\alpha_0}{2x}} \nu +\sqrt{\frac{\alpha_0}{2x} \nu^2+\frac{4}{\pi}}}-\frac{e^{-\frac{\alpha_0}{2x} (x^2-\nu^2)}}{\sqrt{\frac{\alpha_0}{2x}} x+\sqrt{\frac{\alpha_0}{2x} x^2+2}}\right] \\ &\hspace{1in}+e^{-\frac{\alpha_0}{2x} ([x]-[\nu]-1)([x]+\nu+\nu_f-1)}\frac{F\left([x]+\nu_f-\frac{1}{2},x\right)}{1-F\left([x]+\nu_f+\frac{1}{2},x\right)} \end{aligned} \end{equation} which can be written as $$ \begin{aligned} H(\nu,x)& \leq \frac{2x}{\alpha_0}\left[\frac{1}{\nu +\sqrt{\nu^2+\frac{8x}{\pi\alpha_0}}}-\frac{e^{-\frac{\alpha_0}{2x} (x^2-\nu^2)}}{x+\sqrt{x^2+\frac{4x}{\alpha_0}}}\right] \\ &\hspace{1in}+e^{-\frac{\alpha_0}{2x}
([x]-[\nu]-1)([x]+\nu+\nu_f-1)}\frac{F\left([x]+\nu_f-\frac{1}{2},x\right)}{1-F\left([x]+\nu_f+\frac{1}{2},x\right)}\enskip , \end{aligned} $$ yielding the upper bound in (\ref{Our Upper H Bound}). To complete the proof, it remains to prove the lower bound. Repeating arguments similar to those above, $$ \begin{aligned} \sum_{n=1}^{[x]-[\nu]-1} e^{-\frac{1}{2x} \left(n+\nu+\frac{1}{2}\right)^2} &\geq \int_{[\nu]+1}^{[x]} e^{-\frac{1}{2x}\left(y+\nu_f+\frac{1}{2}\right)^2}dy\\ &=\sqrt{2x}\int_{\frac{\nu+\frac{3}{2}}{\sqrt{2x}}}^{\frac{[x]+\nu_f+\frac{3}{2}}{\sqrt{2x}}} e^{-u^2}du\\ &\geq \sqrt{2x}\int_{\frac{\nu+\frac{3}{2}}{\sqrt{2x}}}^{\frac{[x]-\nu_f+\frac{3}{2}}{\sqrt{2x}}}e^{-u^2}du\\ &=\sqrt{2x} \left[ \int_{\frac{1}{\sqrt{2x}}\left(\nu+\frac{3}{2}\right)}^{\infty} e^{-u^2}du - \int_{\frac{1}{\sqrt{2x}}\left([x]-\nu_f+\frac{3}{2}\right)}^{\infty} e^{-u^2}du\right]\\ &\geq \sqrt{2x}\left[ \frac{e^{-\frac{1}{2x} \left(\nu+\frac{3}{2}\right)^2}}{\frac{1}{\sqrt{2x}} \left(\nu+\frac{3}{2}\right)+\sqrt{\frac{1}{2x} \left(\nu+\frac{3}{2}\right)^2+2}}\right.\\ &\left.\hspace{1in}- \frac{e^{-\frac{1}{2x} \left([x]-\nu_f+\frac{3}{2}\right)^2}}{\frac{1}{\sqrt{2x}} \left([x]-\nu_f+\frac{3}{2}\right)+\sqrt{\frac{1}{2x} \left([x]-\nu_f+\frac{3}{2}\right)^2+\frac{4}{\pi}}}\right] \enskip.
\end{aligned} $$ \end{enumerate} Thus, $$ \begin{aligned} &H(\nu,x)\geq e^{\frac{1}{2x} \left(\nu+\frac{1}{2}\right)^2}\sqrt{2x}\left[ \frac{e^{-\frac{1}{2x} \left(\nu+\frac{3}{2}\right)^2}}{\frac{1}{\sqrt{2x}} \left(\nu+\frac{3}{2}\right)+\sqrt{\frac{1}{2x} \left(\nu+\frac{3}{2}\right)^2+2}}- \frac{e^{-\frac{1}{2x} \left([x]-\nu_f+\frac{3}{2}\right)^2}}{\frac{1}{\sqrt{2x}} \left([x]-\nu_f+\frac{3}{2}\right)+\sqrt{\frac{1}{2x} \left([x]-\nu_f+\frac{3}{2}\right)^2+\frac{4}{\pi}}}\right]\\ &\hspace{3in}+e^{-\frac{([x]-[\nu]-1)([x]+\nu+\nu_f)}{2x}}F([x]+\nu_f,x)(1+F([x]+\nu_f+1,x)) \\ &\Rightarrow H(\nu,x)\geq e^{-\frac{1}{x}\left(\nu+1\right)} \frac{\sqrt{2x}}{\frac{1}{\sqrt{2x}} \left(\nu+\frac{3}{2}\right)+\sqrt{\frac{1}{2x} \left(\nu+\frac{3}{2}\right)^2+2}}\\ &\hspace{1in}-e^{-\frac{1}{2x} ([x]-\nu-\nu_f+1)([x]+\nu-\nu_f+2)} \frac{\sqrt{2x}}{\frac{1}{\sqrt{2x}} \left([x]+\frac{3}{2}\right)+\sqrt{\frac{1}{2x} \left([x]+\frac{3}{2}\right)^2+\frac{4}{\pi}}}\\ &\hspace{1in}+e^{-\frac{([x]-[\nu]-1)([x]+\nu+\nu_f)}{2x}}F([x]+\nu_f,x)(1+F([x]+\nu_f+1,x))\\ \end{aligned} $$ which is the same as $$ \begin{aligned} H(\nu,x)&\geq \frac{2xe^{-\frac{1}{x}\left(\nu+1\right)}}{\nu+\frac{3}{2}+\sqrt{\left(\nu+\frac{3}{2}\right)^2+4x}}- \frac{2xe^{-\frac{1}{2x} ([x]-\nu-\nu_f+1)([x]+\nu-\nu_f+2)}}{[x]+\frac{3}{2}+\sqrt{ \left([x]+\frac{3}{2}\right)^2+\frac{8x}{\pi}}}\\ &\hspace{2in}+e^{-\frac{([x]-[\nu]-1)([x]+\nu+\nu_f)}{2x}}F([x]+\nu_f,x)(1+F([x]+\nu_f+1,x))\enskip. \end{aligned} $$ Theorem \ref{Hboundexp} is proved. \begin{flushright} $\square$ \end{flushright} We illustrate the bounds (\ref{Our H Bounds}), (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}) on $H(\nu,x)$ in Figure \ref{Hboundplot} for $x=50$ and $\nu\in [0,70]$ in steps of $0.01$ and also plot the true values of $H(\nu,x)$ computed using MATLAB. For comparison, we also plot the $F(\nu,x)$ Lower/Upper bounds (\ref{Geometric Bounds}). 
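Independently of the MATLAB evaluation, the bounds of Theorem~\ref{Hboundexp} can be sanity-checked in a few lines of Python. The sketch below (our own illustration, with a power-series evaluation of $I_{\nu}$ and a truncated sum for $H(\nu,x)$ as assumptions of convenience) codes $\mathcal{L}$ and $\mathcal{U}$ directly from (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}):

```python
import math

A0 = -math.log(math.sqrt(2.0) - 1.0)  # alpha_0

def iv(nu, x, terms=160):
    # I_nu(x) by its power series
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / (k * (k + nu))
        s += t
    return s

def H(nu, x, tol=1e-13):
    # truncated evaluation of H(nu, x) = sum_{n >= 1} I_{nu+n}(x) / I_nu(x)
    base, total, n = iv(nu, x), 0.0, 1
    while True:
        term = iv(nu + n, x) / base
        total += term
        if term < tol:
            return total
        n += 1

def F(nu, x):
    return x / (nu + math.sqrt(nu * nu + x * x))

def lower(nu, x):
    # L(nu, x), valid for [nu] + 2 <= [x]
    fx, fn = math.floor(x), math.floor(nu)
    nf = nu - fn
    t1 = 2 * x * math.exp(-(nu + 1) / x) / (nu + 1.5 + math.sqrt((nu + 1.5) ** 2 + 4 * x))
    t2 = (2 * x * math.exp(-(fx - nu - nf + 1) * (fx + nu - nf + 2) / (2 * x))
          / (fx + 1.5 + math.sqrt((fx + 1.5) ** 2 + 8 * x / math.pi)))
    t3 = (math.exp(-(fx - fn - 1) * (fx + nu + nf) / (2 * x))
          * F(fx + nf, x) * (1 + F(fx + nf + 1, x)))
    return t1 - t2 + t3

def upper(nu, x):
    # U(nu, x), valid for [nu] + 2 <= [x]
    fx, fn = math.floor(x), math.floor(nu)
    nf = nu - fn
    t1 = (2 * x / A0) * (1 / (nu + math.sqrt(nu * nu + 8 * x / (math.pi * A0)))
                         - math.exp(-A0 * (x * x - nu * nu) / (2 * x))
                         / (x + math.sqrt(x * x + 4 * x / A0)))
    t2 = (math.exp(-A0 * (fx - fn - 1) * (fx + nu + nf - 1) / (2 * x))
          * F(fx + nf - 0.5, x) / (1 - F(fx + nf + 0.5, x)))
    return t1 + t2
```

For example, at $x=50$ one can confirm $\mathcal{L}(\nu,x)\leq H(\nu,x)\leq \mathcal{U}(\nu,x)$ for several integer and non-integer values of $\nu$ with $[\nu]+2\leq [x]$.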
The value $\epsilon=0.01$ is chosen to truncate the infinite sum of Bessel functions occurring in the numerator of $H(\nu,x)$ so that the terms beyond a certain index are less than $\epsilon$. We notice that there are regimes in $\nu$ for which our lower and upper bounds are sometimes worse and sometimes better than the geometric series-type bounds, but near $\nu=0$ our bounds are substantially better; this is a result of the first differences in (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}) obtained by the use of the exponential approximations (\ref{Bessel Ratio I}) on $I_{\nu+1}(x)/I_{\nu}(x)$. \begin{figure}[htp!] \includegraphics[width=17cm, height=10cm]{HazardcompRight2} \caption{A comparison of our bounds (\ref{Our H Bounds}), (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}) on $H(\nu,x)$ compared to the true value of $H(\nu,x)$ for $x=50$ for $\nu \in [0,200]$ taken in steps of $0.01$. The $F(\nu,x)$ Lower/Upper bounds refer to those in (\ref{Geometric Bounds}). The value $\epsilon=0.01$ is chosen to truncate the infinite sum of Bessel functions occurring in the numerator of $H(\nu,x)$ so that the terms beyond a certain index are less than $\epsilon$. We note that near $\nu=0$, our bounds are substantially better than using (\ref{Geometric Bounds}) and scale like $\sqrt{x}$ for large $x$, a behavior we would not have been able to obtain otherwise and which is the main purpose of this paper.} \label{Hboundplot} \end{figure} \section{Main Results: Applications} \label{appls} In this section, we give some applications of Theorem \ref{Hboundexp}. First, we briefly review the Skellam distribution, and relate it to the function $H(\nu,x)$. Let $X_1\sim Pois(\lambda_1)$ and $X_2\sim Pois(\lambda_2)$ be two independent Poisson random variables with parameters $\lambda_1$ and $\lambda_2$, respectively. Then, the distribution of the random variable $W=X_1-X_2$ is called a Skellam distribution with parameters $\lambda_1$ and $\lambda_2$.
We denote this by $W=X_1-X_2\sim Skellam(\lambda_1,\lambda_2)$ and have, $$\mathbb{P}\left[W=n\right]=e^{-\left(\lambda_1+\lambda_2\right)} \left( \frac{\lambda_1}{\lambda_2}\right)^{\frac{n}{2}} I_{|n|}\left(2\sqrt{\lambda_1\lambda_2}\right).$$ The probabilistic interpretation of $H(\nu,x)$ is now immediate: if $\lambda_1=\lambda_2=\lambda>0$, then $H(\nu,2\lambda)={\mathbb{P}\left[W>\nu\right]}/{\mathbb{P}\left[W=\nu\right]}$. The quantity $$\mathcal{H}(\nu,2\lambda)=\frac{1}{H(\nu,2\lambda)+1}=\frac{\mathbb{P}\left[W=\nu\right]}{\mathbb{P}\left[W\geq \nu\right]}$$ is important in the actuarial sciences for describing the probability of death at time $\nu$ given that death occurs no earlier than time $\nu$, and is known as the hazard function. \subsection{Application 1: Bounds on $\exp(-x)I_{\nu}(x)$ for $x\geq 0$ and $\nu\in \mathbb{N}$ and the Skellam$(\lambda_1,\lambda_2)$ Mass Function} \label{Asymptotic application} Since the distribution of $W\sim Skellam(\lambda,\lambda)$ is symmetric, we have \begin{equation} \label{Skellam Symmetry} \exp(-2\lambda)I_0(2\lambda)=\mathbb{P}\left[W=0\right]=\frac{1}{2H(0,2\lambda)+1}. \end{equation} Thus, we may apply the bounds on ${I_{\nu+1}(x)}/{I_{\nu}(x)}$ and $H(\nu,x)$ given in Corollary \ref{Bessel Ratio} and Theorem \ref{Hboundexp}, respectively, to obtain sharp upper and lower bounds on $\exp(-x)I_0(x)$ and hence on $$\exp(-x)I_{\nu}(x)=\prod_{k=0}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)} \exp(-x)I_0(x)$$ for $\nu\in \mathbb{N}$. We note that this result therefore improves on the asymptotic formula $$\exp(-x)I_{\nu}(x)\sim \frac{1}{\sqrt{2\pi x}} \; \; \; {\rm as} \; \; \; x\rightarrow \infty$$ but only for $\nu\in \mathbb{N}$, and in particular, gives a bound on $\mathbb{P}\left[W=\nu\right]$ for $W\sim Skellam(\lambda,\lambda)$ by setting $x=2\lambda$.
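Both the mass function above and the symmetry identity (\ref{Skellam Symmetry}) are easy to confirm numerically. The sketch below (our own illustration; the power series for $I_{\nu}$ and the truncation levels are choices of convenience) checks the closed-form mass function against the Poisson convolution and computes $\mathbb{P}[W=0]$ in the two ways indicated by (\ref{Skellam Symmetry}):

```python
import math

def iv(nu, x, terms=160):
    # I_nu(x) by its power series
    t = (x / 2.0) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / (k * (k + nu))
        s += t
    return s

def skellam_pmf(n, lam1, lam2):
    # P[W = n] = e^{-(lam1 + lam2)} (lam1/lam2)^{n/2} I_{|n|}(2 sqrt(lam1 lam2))
    x = 2.0 * math.sqrt(lam1 * lam2)
    return math.exp(-(lam1 + lam2)) * (lam1 / lam2) ** (n / 2.0) * iv(abs(n), x)

def skellam_pmf_conv(n, lam1, lam2, kmax=150):
    # the same mass by direct convolution: sum_k P[X1 = n + k] P[X2 = k]
    total = 0.0
    for k in range(kmax):
        m = n + k
        if m >= 0:
            total += (math.exp(-lam1) * lam1 ** m / math.factorial(m)
                      * math.exp(-lam2) * lam2 ** k / math.factorial(k))
    return total

def H(nu, x, tol=1e-13):
    # H(nu, x) = sum_{n >= 1} I_{nu+n}(x) / I_nu(x), truncated at tol
    base, total, n = iv(nu, x), 0.0, 1
    while True:
        term = iv(nu + n, x) / base
        total += term
        if term < tol:
            return total
        n += 1

def mass_at_zero(lam):
    # P[W = 0] for W ~ Skellam(lam, lam): closed form vs. the symmetry
    # identity P[W = 0] = 1 / (2 H(0, 2 lam) + 1)
    x = 2.0 * lam
    return math.exp(-x) * iv(0.0, x), 1.0 / (2.0 * H(0.0, x) + 1.0)
```

For instance, `mass_at_zero(10.0)` returns two nearly identical numbers, as (\ref{Skellam Symmetry}) requires.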
\begin{Theorem} \label{x Large} Set $\alpha_0=-\log(\sqrt{2}-1)\approx 0.8814.$ Then, for $\nu\in \mathbb{N}$ and $x\geq 0$, \begin{enumerate} \item If $\nu\leq x$, $$\frac{\exp\left(-\frac{\nu^2}{2x}\left(\frac{\nu+1}{\nu}\right)\right)}{1+2\mathcal{U}(0,x)} \leq \exp(-x)I_{\nu}(x)\leq \frac{\exp\left(-\frac{\alpha_0}{2x} \nu^2\right)}{1+2\mathcal{L}(0,x)}$$ \item If $\nu> x$, $$\frac{e^{-\frac{[x]^2}{2x}\left( \frac{[x]+1}{[x]}\right)}B\left([x]+\frac{x}{2}+1,\nu-[x]\right) \frac{(x/2)^{\nu-[x]}}{(\nu-[x]-1)!}}{1+2\mathcal{U}(0,x)}\leq e^{-x}I_{\nu}(x)\leq \frac{e^{-\frac{\alpha_0}{2x}[x]^2} B\left([x]+x+\frac{1}{2},\nu-[x]\right) \frac{x^{\nu-[x]}}{(\nu-[x]-1)!}}{{1+2\mathcal{L}(0,x)}} $$ where $B(x,y)$ denotes the Beta function and $\mathcal{L}(\nu,x)$ and $\mathcal{U}(\nu,x)$ are the lower and upper bounds, respectively, from Theorem \ref{Hboundexp}. \end{enumerate} \end{Theorem} {\bf Proof of Theorem \ref{x Large}:} \begin{enumerate} \item By Corollary \ref{Bessel Ratio}, for $k+1\leq x$, $$ e^{-\frac{k+1}{x}}\leq \frac{I_{k+1}(x)}{I_{k}(x)} \leq e^{-\alpha_0 \frac{k+\frac{1}{2}}{x}}.
$$ so that for $\nu\leq x$, $$e^{-\frac{\nu^2}{2x} \left(\frac{\nu+1}{\nu}\right) }= \prod_{k=0}^{\nu-1} e^{-\frac{k+1}{x}} \leq \prod_{k=0}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)}\leq \prod_{k=0}^{\nu-1} e^{-\alpha_0 \frac{k+\frac{1}{2}}{x}}=e^{-\frac{\alpha_0}{2x} \nu^2}\enskip.$$ Thus, $$e^{-\frac{\nu^2}{2x}\left(\frac{\nu+1}{\nu}\right)}e^{-x}I_0(x)\leq e^{-x}I_{\nu}(x)\leq e^{-\frac{\alpha_0}{2x} \nu^2}e^{-x}I_0(x)$$ since $$e^{-x}I_{\nu}(x)=\prod_{k=0}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)} e^{-x}I_0(x) .$$ By (\ref{Skellam Symmetry}) then, we have for $\nu\leq x$, $$\frac{e^{-\frac{\nu^2}{2x}\left(\frac{\nu+1}{\nu}\right)}}{1+2\mathcal{U}(0,x)} \leq e^{-x}I_{\nu}(x)\leq \frac{e^{-\frac{\alpha_0}{2x} \nu^2}}{1+2\mathcal{L}(0,x)} \enskip.$$ \item To prove the second assertion in Theorem \ref{x Large}, notice that for $\nu> x$, $$\prod_{k=0}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)} = \prod_{k=0}^{[x]-1} \frac{I_{k+1}(x)}{I_k(x)} \prod_{k=[x]}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)}$$ and each term in the first product has $k\leq x$ so that by the previous argument, $$e^{-\frac{[x]^2}{2x}\left( \frac{[x]+1}{[x]}\right)}\prod_{k=[x]}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)}\leq \prod_{k=0}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)} \leq e^{-\frac{\alpha_0}{2x}[x]^2} \prod_{k=[x]}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)} .$$ Next, $$ \begin{aligned} \frac{I_{k+1}(x)}{I_k(x)} &\leq \frac{x}{k+\frac{1}{2} + \sqrt{\left(k+\frac{1}{2}\right)^2 + x^2}} \leq \frac{x}{k+\frac{1}{2}+x} \end{aligned} $$ so that $$ \begin{aligned} \prod_{k=[x]}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)} \leq \prod_{k=[x]}^{\nu-1} \frac{x}{k+\frac{1}{2}+x}&=\frac{x^{\nu-[x]} \Gamma\left([x]+x+\frac{1}{2}\right)}{\Gamma\left(\nu+\frac{1}{2}+x\right)}\\ &=B\left([x]+x+\frac{1}{2},\nu-[x]\right) \frac{x^{\nu-[x]}}{(\nu-[x]-1)!} \end{aligned} $$ and similarly, using $\sqrt{a^2+b^2}\leq a+b$ for $a,b\geq 0$, $$ \begin{aligned} \prod_{k=[x]}^{\nu-1} \frac{I_{k+1}(x)}{I_k(x)} &\geq \prod_{k=[x]}^{\nu-1} \frac{x}{k+1
+ \sqrt{\left(k+1\right)^2 + x^2}} \\ &\geq \prod_{k=[x]}^{\nu-1} \frac{x}{2(k+1)+x}\\ &= \prod_{k=[x]}^{\nu-1} \frac{x/2}{k+1+x/2}\\ &=\frac{(x/2)^{\nu-[x]} \Gamma\left([x]+1+\frac{x}{2}\right)}{\Gamma\left(\nu+\frac{x}{2}+1\right)}\\ &=B\left([x]+\frac{x}{2}+1,\nu-[x]\right) \frac{(x/2)^{\nu-[x]}}{(\nu-[x]-1)!}\enskip.\\ \end{aligned} $$ Thus, since $\exp(-x)I_{\nu}(x)=\prod_{k=0}^{\nu-1} {I_{k+1}(x)}/{I_k(x)} \exp(-x)I_0(x)$, we get $$\frac{e^{-\frac{[x]^2}{2x}\left( \frac{[x]+1}{[x]}\right)}B\left([x]+\frac{x}{2}+1,\nu-[x]\right) \frac{(x/2)^{\nu-[x]}}{(\nu-[x]-1)!}}{1+2\mathcal{U}(0,x)}\leq e^{-x}I_{\nu}(x)\leq \frac{e^{-\frac{\alpha_0}{2x}[x]^2} B\left([x]+x+\frac{1}{2},\nu-[x]\right) \frac{x^{\nu-[x]}}{(\nu-[x]-1)!}}{{1+2\mathcal{L}(0,x)}} .$$ Thus Theorem \ref{x Large} is proved. \end{enumerate} \begin{flushright} $\square$ \end{flushright} A few remarks on Theorem \ref{x Large} are in order: \begin{enumerate} \item One may simplify the upper bound using the bounds found in \cite{Alzer} on the Beta function, $B(x,y)$, $$\alpha\leq \frac{1}{xy} -B(x,y)\leq \beta$$ where $\alpha=0$ and $\beta=0.08731\ldots$ are the best possible bounds. \item By setting $x=2\sqrt{\lambda_1\lambda_2}$ and multiplying (1) and (2) in Theorem \ref{x Large} through by $$\left(\sqrt{\frac{\lambda_1}{\lambda_2}}\right)^{\nu}\exp\left[-\left(\sqrt{\lambda_1}-\sqrt{\lambda_2}\right)^2\right],$$ we obtain precise bounds on $\mathbb{P}\left[W=\nu\right]$ for $W\sim Skellam(\lambda_1,\lambda_2)$. \item It is important to note that if one were to use the geometric series-type bound (\ref{Geometric Bounds}) with $\nu=0$, one would not achieve the $1/\sqrt{x}$ behavior that we have in Theorem \ref{x Large}, which is guaranteed by the asymptotic $\exp(-x)I_{\nu}(x)\sim 1/\sqrt{2\pi x}$ as $x\rightarrow \infty$ and exhibited by our non-asymptotic bounds on $H(\nu,x)$ in (\ref{Our Lower H Bound}) and (\ref{Our Upper H Bound}).
\end{enumerate} As an example of applying our bounds non-asymptotically, we plot $\exp(-x)I_0(x)$, its asymptotic ${1}/{\sqrt{2\pi x}}$ as $x\rightarrow \infty$, and the functions ${1}/({2\mathcal{L}(0,x)+1})$ and ${1}/(2\mathcal{U}(0,x)+1)$ in Figure \ref{besseli0}. The top panel illustrates the behavior of all of these functions over the interval $[0,100]$, in steps of $1/100$. We note that for large values of $x$, all function values converge to zero, are extremely close, and are all of order $1/\sqrt{x}$, something one would not see by using instead the naive geometric-type bounds (\ref{Geometric Bounds}) with $\nu=0$. In the second panel, we restrict to the interval $[0,3]$ as our bounds transition across the line $[x]=2$. We note that for $[x]<2$, our upper bound is much more accurate than the asymptotic $1/\sqrt{2\pi x}$, and is in general quite good. \begin{figure}[htp!] \includegraphics[width=15cm, height=10cm]{BesselI0Right2} \caption{A comparison of our bounds on $\exp(-x)I_0(x)$ to the true value and the asymptotic $1/\sqrt{2\pi x}$. All plots are taken in steps of $1/100$ on a window of $[0,100]$. Top panel: behavior for $x\in [0,100]$. Bottom panel: the same plot, but over the interval $x\in [0,3]$. We note that for $[x]<2$, our upper bound is much more accurate than the asymptotic $1/\sqrt{2\pi x}$, and is in general quite good.
For large $x$, all curves $\sim 1/\sqrt{x}$ in agreement with the asymptotic $1/\sqrt{2\pi x}$, a behavior that would not have been achieved if instead one used the geometric series-type bounds (\ref{Geometric Bounds}) in place of $\mathcal{L}(0,x)$ and $\mathcal{U}(0,x)$.} \label{besseli0} \end{figure} \subsection{Application 2: Concentration Inequality for the Skellam Distribution} \label{Skellam application} We now present a concentration inequality for the $Skellam(\lambda,\lambda)$ distribution, the proof of which is a direct consequence of Theorem \ref{x Large} and the identity $$\mathbb{P}\left[|W|>\nu\right] = 1-\mathbb{P}\left[-\nu\leq W \leq \nu\right] =2H(\nu,2\lambda) \exp(-2\lambda)I_{\nu}(2\lambda).$$ \begin{cor} Let $W\sim Skellam(\lambda, \lambda)$, and define $\alpha_0=-\log\left(\sqrt{2}-1\right)$. Then, \begin{enumerate} \item If $\nu\leq 2\lambda$, $$\frac{2\exp\left(-\frac{\nu^2}{4\lambda}\left(\frac{\nu+1}{\nu}\right)\right)}{1+2\mathcal{U}(0,2\lambda)} \leq \frac{\mathbb{P}\left[|W|>\nu\right]}{H(\nu,2\lambda)} \leq \frac{2\exp\left(-\frac{\alpha_0}{4\lambda} \nu^2\right)}{1+2\mathcal{L}(0,2\lambda)}$$ \item If $\nu>2\lambda$, $$ \begin{aligned} \frac{2\exp\left(-\frac{[2\lambda]^2}{4\lambda}\left( \frac{[2\lambda]+1}{[2\lambda]}\right)\right)B\left([2\lambda]+\lambda +1,\nu-[2\lambda]\right) \frac{\lambda ^{\nu-[2\lambda]}}{(\nu-[2\lambda]-1)!}}{1+2\mathcal{U}(0,2\lambda)}&\leq \frac{\mathbb{P}\left[|W|>\nu\right]}{H(\nu,2\lambda)}\\ \frac{2\exp\left(-\frac{\alpha_0}{4\lambda}[2\lambda]^2\right) B\left([2\lambda]+2\lambda+\frac{1}{2},\nu-[2\lambda]\right) \frac{(2\lambda)^{\nu-[2\lambda]}}{(\nu-[2\lambda]-1)!}}{{1+2\mathcal{L}(0,2\lambda)}}&\geq \frac{\mathbb{P}\left[|W|>\nu\right]}{H(\nu,2\lambda)}\\ \end{aligned} $$ where $B(x,y)$ denotes the Beta function and $\mathcal{L}(\nu,x)$ and $\mathcal{U}(\nu,x)$ are the lower and upper bounds, respectively, from Theorem \ref{Hboundexp}.
\end{enumerate} \end{cor} \section{Summary and Conclusions} \label{conclusion} In \cite{Viles}, the function $H(\nu,x)=\sum_{n=1}^{\infty} {I_{\nu+n}(x)}/{I_{\nu}(x)}$ for $x\geq 0$ and $\nu\in \mathbb{N}$ appears as a key quantity in approximating a sum of dependent random variables, arising in the statistical estimation of network motifs, by a Skellam$(\lambda,\lambda)$ distribution. A scaling of $H(\nu,x)$ at $\nu=0$ of order $\sqrt{x}$ is necessary, however, in order for the error bound of the approximating distribution to remain finite for large $x$. In this paper, we have presented a quantitative analysis of $H(\nu,x)$ for $x,\nu\geq 0$, suited to these needs, in the form of upper and lower bounds in Theorem \ref{Hboundexp}. Our technique relies on bounding current estimates on ${I_{\nu+1}(x)}/{I_{\nu}(x)}$ from above and below by quantities with nicer algebraic properties, namely exponentials, while optimizing the rates when $\nu+1\leq x$ to maintain their precision. In conjunction with the mass normalizing property of the $Skellam(\lambda,\lambda)$ distribution, we also give applications of this function in determining explicit error bounds, valid for any $x\geq 0$ and $\nu\in \mathbb{N}$, on the asymptotic approximation $\exp({-x})I_{\nu}(x)\sim {1}/{\sqrt{2\pi x}}$ as $x\rightarrow \infty$, and use them to provide precise upper and lower bounds on $\mathbb{P}\left[W=\nu\right]$ for $W\sim Skellam(\lambda_1,\lambda_2)$. In a similar manner, we derive a concentration inequality for the $Skellam(\lambda,\lambda)$ distribution, bounding $\mathbb{P}\left[|W|\geq \nu\right]$, where $W\sim Skellam(\lambda,\lambda)$, from above and below.
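As a quick numerical sanity check of the two facts used above (an illustrative sketch only, not part of the formal development; the series truncation points are ad hoc), one can verify both the Skellam$(\lambda,\lambda)$ mass normalization $\exp(-x)\big(I_0(x)+2\sum_{k\geq 1}I_k(x)\big)=1$ and the tail identity $\mathbb{P}\left[|W|>\nu\right]=2H(\nu,2\lambda)\exp(-2\lambda)I_{\nu}(2\lambda)$ directly from the series definition of $I_{\nu}$:

```python
import math

def bessel_i(nu, x, terms=60):
    """Truncated series I_nu(x) = sum_m (x/2)^(2m+nu) / (m! (m+nu)!), integer nu >= 0."""
    return sum((x / 2) ** (2 * m + nu) / (math.factorial(m) * math.factorial(m + nu))
               for m in range(terms))

lam, nu = 1.5, 3
x = 2 * lam  # Skellam(lam, lam) pmf: P[W = k] = exp(-x) I_{|k|}(x)

# Mass normalization induced by Skellam(lam, lam): exp(-x) (I_0 + 2 sum_{k>=1} I_k) = 1
total = math.exp(-x) * (bessel_i(0, x) + 2 * sum(bessel_i(k, x) for k in range(1, 41)))

# Tail identity: P[|W| > nu] = 2 H(nu, x) exp(-x) I_nu(x), truncating both sums at 40 terms
tail = 2 * sum(math.exp(-x) * bessel_i(k, x) for k in range(nu + 1, nu + 41))
H = sum(bessel_i(nu + n, x) / bessel_i(nu, x) for n in range(1, 41))
rhs = 2 * H * math.exp(-x) * bessel_i(nu, x)
```

Both checks hold to machine precision, since after cancelling $I_{\nu}(x)$ the truncated tail sum and $2H(\nu,x)e^{-x}I_{\nu}(x)$ are term-by-term identical.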
While we analyze the function $H(\nu,x)$ for $\nu\in \mathbb{N}$ and $x\geq 0$ for our purposes, we leave as future research the analysis for non-integer $\nu$, as well as consideration of the generalized function $$H(\nu,x)=\sum_{n=1}^{\infty}\left(\frac{\lambda_1}{\lambda_2}\right)^{\frac{n}{2}} \frac{ I_{\nu+n}(2\sqrt{\lambda_1\lambda_2})}{I_{\nu}(2\sqrt{\lambda_1\lambda_2})}$$ that would appear for the $Skellam(\lambda_1,\lambda_2)$ distribution. We hope that the results laid out here will form the foundation of such future research in this area. It is also unknown whether the normalization conditions for $\{\exp({-x})I_{\nu}(x)\}_{\nu=-\infty}^{\infty}$ induced by the $Skellam(\lambda,\lambda)$ distribution hold for $\nu$ in a generalized lattice $\mathbb{N}+\alpha$, and if so, what the normalizing constant is. Such information would be key to providing error bounds on the asymptotic $\exp({-x})I_{\nu}(x)\sim {1}/{\sqrt{2\pi x}}$ for non-integer values of $\nu$. \bibliographystyle{elsarticle-num}
\section{Introduction} In view of the ever-growing and complex data in today's scientific world, there is an increasing need for generic methods to deal with such datasets in diverse application scenarios. Non-Euclidean data, such as brain imaging, computational biology, computer graphics and computational social science data, among many others \cite{kong, Kendall, bookstein1996biometrics, muller, balasundaram2011clique, overview}, arise in many domains. For instance, images and time-varying data can be represented as functional data, i.e., data in the form of functions \cite{csenturk2011varying, csenturk2010functional, horvath2012inference}. Spheres, matrix groups, positive-definite tensors and shape spaces are further examples of manifold-valued data \cite{rahman2005multiscale, shi2009intrinsic, lin2017extrinsic}. It is of great interest to discover the associations in such data. Nevertheless, many traditional analysis methods for data in Euclidean spaces become invalid, since non-Euclidean spaces are inherently nonlinear spaces without an inner product. The analysis of non-Euclidean data therefore presents many mathematical and computational challenges. One major goal of statistical analysis is to understand the relationship among random quantities, such as measuring a linear or nonlinear association between data, which is also a fundamental step for further statistical analysis (e.g., regression modeling). Correlation is a technique for measuring associations between random variables, and a variety of classical methods has been developed to detect the correlation between data. For instance, Pearson's correlation and canonical correlation analysis (CCA) are powerful tools for capturing the degree of linear association between two sets of multivariate random variables \cite{benesty2009pearson, fukumizu2005consistency, kim2014canonical}.
In contrast to Pearson's correlation, Spearman's rank correlation coefficient, a non-parametric measure of rank correlation, can be applied in nonlinear settings \cite{sedgwick2014spearman}. However, the methods above do not fully account for the complex structure of non-Euclidean data. In metric spaces, \cite{frechet} proposed a generalized mean and a corresponding variance that may be used to quantify the spread of the distribution of metric-space-valued random elements. However, to guarantee the existence and uniqueness of the Fr\'{e}chet mean, the space is required to have negative sectional curvature, while in a space of positive sectional curvature, extra conditions such as a bound on the support and on the radial distribution are required \cite{charlier2013necessary}. Following this, more nonparametric methods for manifold-valued data were developed. For instance, a Riemannian CCA model was proposed by \cite{kim2014canonical}, measuring an intrinsically linear association between two manifold-valued objects. Recently, \cite{articleSze} proposed distance correlation to measure the association between random vectors. After that, \cite{article2009} introduced Brownian covariance and showed it to be the same as distance covariance, and \cite{lyons2013distance} extended distance covariance to metric spaces under the condition that the space be of strong negative type. Pan et al. \cite{pan2018ball,pan2019ball} introduced the notions of ball divergence and ball covariance for Banach-valued random vectors. These two notions can also be extended to metric spaces, but by a less direct approach. Since a metric space is endowed with a distance, it is worth studying the behavior of distance-based statistical procedures in metric spaces.
In this paper, we extend the method in \citep{cui2018a} and, building on \cite{wang2021nonparametric}, propose a novel statistic in metric spaces, called metric distributional discrepancy (MDD), which is based on closed balls with a given center and radius. Based on MDD, we construct a powerful test of independence between a random element $X$ and a categorical variable $Y$. The MDD statistic can be regarded as the weighted average of the Cram\'{e}r-von Mises distance between the conditional distribution of $X$ given each class of $Y$ and the unconditional distribution of $X$. Our proposed method has the following major advantages: (i) $X$ is a random element in a metric space; (ii) MDD is zero if and only if $X$ and $Y$ are statistically independent; (iii) it works well when the data follow a heavy-tailed distribution or contain extreme values, since it does not require any moment assumption. {Compared to distance correlation, MDD is an alternative dependence measure that can be applied in metric spaces that are not of strong negative type. Unlike ball correlation, MDD avoids unnecessary calculation of ball sets for $Y$ and has higher test power, as shown in the simulations and real data analyses. } The rest of this paper is organized as follows. In section \ref{secsta}, we give the definition and theoretical properties of MDD; we present results of Monte Carlo simulations in section \ref{secexp} and experiments on two real datasets in section \ref{realdata}. Finally, we draw conclusions in section \ref{conclu}. \section{Metric distributional discrepancy} \label{secsta} \subsection{Metric distributional discrepancy statistics} For convenience, we first list some notation for metric spaces. The ordered pair $(\mathcal{M},d)$ denotes a \emph{metric space} if $\mathcal{M}$ is a set and $d$ is a \emph{metric} or \emph{distance} on $\mathcal{M}$.
Given a metric space $(\mathcal{M},d)$, let $\bar{B}(x, y) = \{ v : d(x, v) \leq r\}$ be the closed ball with center $x$ and radius $ r = d(x, y)$. Next, we define the metric distributional discrepancy (MDD) statistic for a random element and a categorical variable. Let $X$ be a random element and $Y$ be a categorical random variable with $R$ classes taking values in $\{y_1,y_2, \ldots, y_R\}$. Then, we let $X''$ be a copy of the random element $X$, $F(x, x') = P_{X''}\{X''\in\bar{B}(x, x')\}$ be the unconditional distribution function of $X$, and $F_r(x, x')= P_{X''}\{X''\in\bar{B}(x, x')|Y = y_r\}$ be the conditional distribution function of $X$ given $Y = y_r$. Then $MDD(X|Y)$ can be represented as the following quadratic form between $F(x, x')$ and $F_r(x, x')$, \begin{equation} \label{mvStat} MDD(X|Y) = \sum_{r=1}^R p_r \int [F_r(x, x')-F(x, x')]^2d\nu(x)d\nu(x'), \end{equation} where $p_r = P(Y = y_r)> 0$ for $r = 1, \ldots, R$. We now provide a consistent estimator for $MDD(X|Y)$. Suppose that $\{(X_i,Y_i) : i = 1, \ldots, n\}$ are $i.i.d.$ samples of size $n$ randomly drawn from the population distribution of $(X,Y)$. Let $n_r = \sum_{i=1}^nI(Y_i=y_r)$ denote the sample size of the $r$th class and $\hat{p}_r = n_r/n$ denote the sample proportion of the $r$th class, where $I(\cdot)$ represents the indicator function. Let $\hat{F}_r(x,x')= \frac{1}{n_r}\sum_{k=1}^nI(X_k\in \bar{B}(x,x'),Y_k=y_r)$ and $\hat{F}(x,x') = \frac{1}{n}\sum_{k=1}^nI(X_k \in \bar{B}(x,x'))$. The estimator of $MDD(X|Y)$ is given by the following statistic \begin{equation} \begin{aligned} \label{stats} \widehat{MDD}(X|Y)& =\frac{1}{n^2}\sum_{r=1}^R\sum_{i=1}^n\sum_{j=1}^n \hat{p}_r[\hat{F}_r(X_i,X_j)-\hat{F}(X_i,X_j)]^2.
\end{aligned} \end{equation} \subsection{Theoretical properties} In this subsection, we discuss some sufficient conditions for the metric distributional discrepancy and its theoretical properties in a Polish space, i.e., a separable and complete metric space. First, to obtain the property that $X$ and $Y$ are independent if and only if $MDD (X\mid Y ) = 0$, we introduce an important concept, named directionally $(\epsilon, \eta, L)$-limited \cite{federer2014}. \begin{definition}\label{def} A metric $d$ is called directionally $(\epsilon, \eta, L)$-limited at the subset $\mathcal{A}$ of $\mathcal{M}$, where $\mathcal{A} \subseteq \mathcal{M}$, $\epsilon > 0$, $0 < \eta \leq 1/3$ and $L$ is a positive integer, if the following condition holds: whenever $a \in \mathcal{A}$ and $D \subseteq \mathcal{A} \cap \bar{B}(a, \epsilon)$ is such that $d(x, c) \geq \eta d(a, c)$ for all $b, c \in D$ $(b \neq c)$ and $x \in \mathcal{M}$ with $$d(a, x) = d(a, c), \quad d(x, b) = d(a, b) - d(a, x),$$ the cardinality of $D$ is no larger than $L$. \end{definition} There are many metric spaces satisfying the directionally $(\epsilon, \eta, L)$-limited condition, such as finite-dimensional Banach spaces and Riemannian manifolds of class $\geq 2$. However, not all metric spaces satisfy Definition \ref{def}; an infinite orthonormal set in a separable Hilbert space $H$ is a counterexample, as verified in \cite{wang2021nonparametric}. \begin{th1}\label{the1} Given a probability measure $\nu$ with support $supp\{\nu\}$ on $(\mathcal{M}, d)$, let $X$ be a random element with probability measure $\nu$ on $\mathcal{M}$ and $Y$ be a categorical random variable with $R$ classes $\{y_1,y_2,\dots,y_R\}$. If the metric $d$ is directionally $(\epsilon, \eta, L)$-limited at $supp\{\nu\}$, then $X$ and $Y$ are independent if and only if $ MDD (X\mid Y ) = 0$. \end{th1} Theorem \ref{the1} gives a necessary and sufficient condition for $MDD(X|Y) = 0$ when the metric is directionally $(\epsilon, \eta, L)$-limited on the support of the probability measure.
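Although the theory is stated abstractly, the estimator $\widehat{MDD}(X|Y)$ in \eqref{stats} only requires the $n\times n$ matrix of pairwise distances. A minimal Python sketch (our own illustration, not the authors' code; the function name is hypothetical):

```python
def mdd_statistic(D, y):
    """Plug-in MDD estimator from an n x n pairwise distance matrix D and labels y.

    D[i][j] = d(X_i, X_j). Implements the double sum over (i, j) of the weighted
    squared gaps between the empirical ball probabilities F_hat_r and F_hat,
    evaluated at the closed balls B(X_i, d(X_i, X_j)).
    """
    n = len(D)
    classes = sorted(set(y))
    counts = {c: sum(1 for lab in y if lab == c) for c in classes}
    stat = 0.0
    for i in range(n):
        for j in range(n):
            radius = D[i][j]
            in_ball = [D[i][k] <= radius for k in range(n)]  # X_k in the closed ball
            F_hat = sum(in_ball) / n
            for c in classes:
                F_c = sum(1 for k in range(n) if in_ball[k] and y[k] == c) / counts[c]
                stat += (counts[c] / n) * (F_c - F_hat) ** 2
    return stat / n ** 2

# Toy usage: two well-separated clusters perfectly aligned with the labels
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
labels = [1, 1, 1, 2, 2, 2]
D = [[abs(a - b) for b in points] for a in points]
mdd = mdd_statistic(D, labels)  # strictly positive under dependence
```

In a test, one would use $n\widehat{MDD}(X|Y)$ as the test statistic and calibrate it by permutation; the sketch above is $O(n^3 R)$ and is meant only to make the definition concrete.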
Next, we extend this theorem and introduce Corollary \ref{col1}, which presents reasonable conditions on the measure or on the metric. \begin{col}\label{col1} (a) \textbf{Measure Condition}: For every $\epsilon > 0$, there exists $\mathcal{U} \subset \mathcal{M}$ such that $\nu(\mathcal{U}) \geq 1 - \epsilon$ and the metric $d$ is directionally $(\epsilon, \eta, L)$-limited at $\mathcal{U}$. Then $X$ and $Y$ are independent if and only if $ MDD (X\mid Y ) = 0$.\\ (b) \textbf{Metric Condition}: Given a point $x\in\mathcal{M}$, define a projection onto $\mathcal{M}_l$, $\pi_l(\cdot): \mathcal{M} \to \mathcal{M}_l$, by $\pi_l(x)=x_l$. For a set $A\subset\mathcal{M}$, define $\pi_l(A)=\bigcup\limits_{x\in A}\{\pi_l(x)\}$. Suppose there exist increasing subsets $\{(\mathcal{M}_{l} , d)\}^{\infty}_{l= 1}$ of $\mathcal{M}$, where each $\mathcal{M}_l$ is a Polish space satisfying the directionally-limited condition and the closure $\overline{\bigcup^{\infty}_{l=1}\mathcal{M}_l} =\mathcal{M}$. For every $x \in \mathcal{M}$, $\pi_l(x)$ is unique such that $d(x, \pi_l(x)) = \inf_{z\in\mathcal{M}_l} d(x, z)$ and $\pi_l|_{\mathcal{M}_{l'}} \circ \pi_{l'} = \pi_l$ if $l' > l$. Then $X$ and $Y$ are independent if and only if $ MDD (X\mid Y ) = 0$. \end{col} \begin{th1}\label{the1.2} $\widehat{MDD}(X|Y)$ almost surely converges to $MDD(X|Y)$. \end{th1} Theorem \ref{the1.2} demonstrates the consistency of the proposed estimator of $MDD(X\mid Y)$. Hence, $\widehat{MDD}(X|Y)$ is a consistent estimator of the metric distributional discrepancy index. Owing to this property, we propose an independence test between $X$ and $Y$ based on the MDD index.
We consider the following hypothesis test: $$ \begin{aligned} &H_0 : X \text{ and } Y \text{ are statistically independent} \\ \text{vs. }\ &H_1: X \text{ and } Y \text{ are not statistically independent.} \end{aligned} $$ \begin{th1}\label{the1.3} Under the null hypothesis $H_0$, \begin{equation*} n\widehat{MDD}(X|Y)\stackrel{d}{\rightarrow}\sum_{j=1}^\infty\lambda_j \mathcal{X}_{j}^2(1), \end{equation*} where the $\mathcal{X}_j^2(1)$'s, $j = 1, 2,\ldots$, are independent and identically distributed chi-square random variables with $1$ degree of freedom, and $\stackrel{d}{\rightarrow}$ denotes convergence in distribution. \end{th1} \begin{remark} In practical applications, we can estimate the null distribution of MDD by permutation when the sample size is small, and by the Gram matrix spectrum method of \cite{gretton2009fast} when the sample size is large. \end{remark} \begin{th1}\label{the1.4} Under the alternative hypothesis $H_1$, \begin{equation*} n\widehat{MDD}(X|Y)\stackrel{a.s.}{\longrightarrow}\infty, \end{equation*} where $\stackrel{a.s.}{\longrightarrow}$ denotes almost sure convergence. \end{th1} \begin{th1}\label{the1.5} Under the alternative hypothesis $H_1$, \begin{equation*} \sqrt{n}[\widehat{MDD}(X|Y)-MDD(X|Y)]\stackrel{d}{\rightarrow}N(0,\sigma^2), \end{equation*} where $\sigma^2$ is given in the appendix. \end{th1} \section{Numerical Studies} \label{secexp} \subsection{Monte Carlo Simulation} In this section, we perform four simulations to evaluate the finite-sample performance of the MDD test by comparing it with existing tests: the distance covariance (DC) test \cite{articleSze} and the HHG test based on pairwise distances \cite{articleheller}. We consider directional data on the unit sphere in $R^p$, denoted by $S^{p-1}=\{x\in R^p:||x||_2=1\}$, and multi-dimensional data generated independently from the normal distribution.
For all methods, we use the permutation test to obtain p-values for a fair comparison and run simulations to compute the empirical Type-I error rate at the significance level $\alpha =0.05$. All numerical experiments are implemented in the R language. The DC test and the HHG test are conducted with the R packages $energy$ \cite{energy} and $HHG$ \cite{hhg}, respectively. \begin{table*} \centering \caption{Empirical Type-I error rates at the significance level 0.05 in Simulation 1} \label{re1} \begin{tabular}{@{}cc|c|c|c@{}} \toprule & & $X=(1,\theta,\phi)$ & $X \sim M_3(\mu,k)$ & $X = (X_1, X_2, X_3)$ \\ \midrule R& n &\ MDD \ \ DC \ \ HHG &\ MDD \ \ DC \ \ HHG &\ MDD \ \ DC \ \ HHG \\ \midrule & 40 & 0.080 0.085 0.070 & 0.040 0.050 0.030 & 0.060 0.060 0.030 \\ & 60 & 0.025 0.035 0.030 & 0.075 0.085 0.065 & 0.070 0.045 0.045 \\ 2 & 80 & 0.030 0.045 0.065 & 0.035 0.035 0.030 & 0.035 0.035 0.045 \\ & 120 & 0.040 0.050 0.050 & 0.055 0.050 0.055 & 0.025 0.025 0.025 \\ & 160 & 0.035 0.035 0.055 & 0.045 0.040 0.050 & 0.050 0.075 0.065 \\ \midrule & 40 & 0.015 0.025 0.045 & 0.050 0.045 0.050 & 0.050 0.060 0.055 \\ & 60 & 0.045 0.025 0.025 & 0.050 0.030 0.035 & 0.050 0.050 0.070 \\ 5 & 80 & 0.035 0.030 0.050 & 0.060 0.070 0.060 & 0.060 0.066 0.065 \\ & 120 & 0.040 0.040 0.035 & 0.050 0.050 0.050 & 0.045 0.030 0.070 \\ & 160 & 0.030 0.065 0.045 & 0.050 0.060 0.040 & 0.050 0.035 0.035 \\ \bottomrule \end{tabular} \end{table*} \textbf{Simulation 1} In this simulation, we test independence between a high-dimensional variable and a categorical one. We randomly generate three different types of data $X$, listed in the three columns of Table \ref{re1}. For the first type, in column 1, we set $p=3$ and consider spherical coordinates on $S^2$, denoted $(r,\theta,\phi)$, with radial distance $r = 1$, where $\theta$ and $\phi$ are simulated from the uniform distribution $U(-\pi,\pi)$.
For the second type, in column $2$, we generate a three-dimensional variable from the von Mises-Fisher distribution $M_3(\mu, k)$ with $\mu = (0,0,0)$ and $k = 1$. For the third type, in column 3, each dimension of $X$ is independently drawn from $N(0,1)$, i.e., $X = (X_1, X_2, X_3)$ with $X_i \sim N(0,1)$. We generate the categorical random variable $Y$ over the $R$ classes $\{1,2, \dots, R\}$ with the unbalanced proportions $p_r = P(Y = r) = 2[1 + (r-1)/(R-1)]/(3R),\ r = 1,2, \dots, R$. For instance, when $Y$ is binary, $p_1 = 1/3$ and $p_2 = 2/3$, and when $R = 5$, $Y\in\{1,2,3,4,5\}$. The number of simulation replications is set to 200. The sample sizes $n$ are chosen to be 40, 60, 80, 120 and 160. The results summarized in Table \ref{re1} show that all three tests perform well in independence testing, since the empirical Type-I error rates are close to the nominal significance level even for small sample sizes. \begin{table*} \centering \caption{Empirical powers at the significance level 0.05 in Simulation 2} \label{re2} \begin{tabular}{@{}cc|c|c|c@{}} \toprule & & $X=(1,\theta,\phi)$ & $X \sim M_3(\mu,k)$ & $X = (X_1, X_2, X_3)$ \\ \midrule R & n&\ MDD \ \ DC \ \ HHG &\ MDD \ \ DC \ \ HHG &\ MDD \ \ DC \ \ HHG\\ \midrule & 40 & 0.385 0.240 0.380 & 0.595 0.575 0.590 & 0.535 0.685 0.375 \\ & 60 & 0.530 0.385 0.475 & 0.765 0.745 0.650 & 0.730 0.810 0.590 \\ 2 & 80 & 0.715 0.535 0.735 & 0.890 0.880 0.775 & 0.865 0.930 0.755 \\ & 120 & 0.875 0.740 0.915 & 0.965 0.955 0.905 & 0.970 0.990 0.925 \\ & 160 & 0.965 0.845 0.995 & 1.000 1.000 0.985 & 0.995 0.995 0.980 \\ \midrule & 40 & 0.925 0.240 0.940 & 0.410 0.230 0.185 & 0.840 0.825 0.360 \\ & 60 & 0.995 0.350 0.995 & 0.720 0.460 0.330 & 0.965 0.995 0.625 \\ 5 & 80 & 1.000 0.450 1.000 & 0.860 0.615 0.540 & 0.995 0.990 0.850 \\ & 120 & 1.000 0.595 1.000 & 0.990 0.880 0.820 & 1.000 0.995 0.990 \\ & 160 & 1.000 0.595 1.000 & 0.995 0.975 0.955 & 1.000 1.000 1.000 \\ \bottomrule \end{tabular} \end{table*} \textbf{Simulation 2 } In this simulation, we test the
dependence between a high-dimensional variable and a categorical random variable when $R = 2$ or $5$, with the proportions proposed in Simulation 1. In column 1, we generate $X$ and $Y$ representing radial data as follows: $$ \begin{aligned} \text{(1) \ } &Y= \{1,2\}, (a)\ X=(1,\theta,\phi_1 + \epsilon), \\ &\theta \sim U(-\pi,\pi),\ \phi_1\sim U(-\pi,\pi), \epsilon = 0, \\ &(b)\ X=(1,\theta,\phi_2+\epsilon),\\ &\theta \sim U(-\pi,\pi), \phi_2 \sim U(1/5\pi,4/5\pi), \epsilon \sim t(0,1). \end{aligned} $$ $$ \begin{aligned} \text{(2) \ } &Y=\{1,2,3,4,5\}, X=(1,\theta,\phi_r + \epsilon),\\ & \phi_r\sim U([-1+2(r-1)/5]\pi,(-1+2r/5) \pi), \\ &r = 1,2,3,4,5.\ \ \theta \sim U(-\pi,\pi),\ \epsilon \sim t(0,1). \end{aligned} $$ For column 2, we consider the von Mises-Fisher distribution with dimension $3$ and $k = 1$; the simulated datasets are generated as follows: $$ \begin{aligned} & (1) R = 2, \mu = (\mu_1,\mu_2) = (1,2), \\ & (2) R = 5,\mu =\{\mu_1,\mu_2,\mu_3,\mu_4,\mu_5\} = \{4,3,1,5,2\}. \end{aligned} $$ For column 3, each dimension of $X$ is separately generated from the normal distribution $N(\mu, 1)$ to represent data in Euclidean space. There are two choices of $R$: $$ \begin{aligned} &(1) R = 2, \mu = (\mu_1,\mu_2) = (0, 0.6),\\ &(2) R=5,\mu =\{\mu_1,\mu_2,\mu_3,\mu_4,\mu_5\} = \{4,3,1,5,2\}/3. \end{aligned} $$ Table \ref{re2}, based on 200 replications, shows that the MDD test performs well in most settings, with empirical power approaching 1, especially when the sample size exceeds 80. When the data contain extreme values, the power of the DC test deteriorates quickly, while the MDD test performs more stably. Moreover, the MDD test performs better than both the DC test and the HHG test in spherical spaces, especially when the number of classes $R$ increases.
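The permutation p-values used for all three tests in these simulations can be sketched generically as follows (an illustration with a toy statistic of our own; any dependence statistic of a distance matrix and labels, such as the MDD estimator, can be plugged in as \texttt{stat\_fn}):

```python
import random

def perm_pvalue(stat_fn, D, y, n_perm=500, seed=1):
    """Permutation p-value: hold the distance matrix fixed, shuffle the labels."""
    rng = random.Random(seed)
    observed = stat_fn(D, y)
    y_perm = list(y)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        exceed += stat_fn(D, y_perm) >= observed
    return (1 + exceed) / (1 + n_perm)  # add-one correction keeps the p-value > 0

def gap_stat(D, y):
    """Toy dependence statistic: overall mean distance minus within-class mean."""
    n = len(D)
    within = [D[i][j] for i in range(n) for j in range(n) if y[i] == y[j]]
    overall = sum(D[i][j] for i in range(n) for j in range(n)) / n ** 2
    return overall - sum(within) / len(within)

# Two tight clusters whose membership matches the labels: small p-value expected
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
labels = [1, 1, 1, 2, 2, 2]
D = [[abs(a - b) for b in points] for a in points]
p = perm_pvalue(gap_stat, D, labels, n_perm=199)
```

With 500 permutations, as in the simulations above, the smallest attainable p-value is $1/501$, which is sufficient for testing at $\alpha=0.05$.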
\textbf{Simulation 3 } In this simulation, we let the dimension of $X$ range over $\{3,6,8,10,12\}$ to test independence and dependence between a high-dimensional random variable and a categorical random variable. In Table \ref{re3}, I denotes the independence test and II the dependence test; we report empirical Type-I error rates for I and empirical powers for II. We let the sample size be $n = 60$ and the number of classes be $R = 2$. The three types of data are shown in the three columns of Table \ref{re3}. \\ In column 1: $$ \begin{aligned} X_{dim}&=(1,\theta,\phi_1+\epsilon, \dots, \phi_d + \epsilon), \\ &d = 1,4,6,8,10, \theta \sim U(-\pi,\pi) \\ \end{aligned} $$ For the $\phi_i$ of the two classes, $$(a) \phi_i\sim U(-\pi,\pi), \epsilon = 0, $$ $$ (b) \phi_i\sim U(-1/5\pi,4/5\pi), \epsilon \sim t(0,1).$$ In column 2, $$X \sim M_d(\mu, k), d= 3,6,8,10,12, \mu \in \{0,2\} ,$$ where $k=1$.\\ In column 3, we set $$X_i=(x_{i1},x_{i2}, \dots, x_{id}), d = 3,6,8,10,12 \text{ and } x_{id}\sim N(\mu,1),$$ where $\mu\in\{0,\frac{3}{5}\}$ in the dependence test. Table \ref{re3}, based on 300 replications at $\alpha = 0.05$, shows that the DC test performs well for the normal distribution but is conservative for testing the dependence between a radial spherical vector and a categorical variable. The HHG test works well with extreme values but is conservative for the von Mises-Fisher and normal distributions. It can also be observed that the MDD test performs well in high dimensions.
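For reproducibility, the unbalanced class proportions $p_r = 2[1+(r-1)/(R-1)]/(3R)$ used throughout Simulations 1--3 can be generated as follows (a sketch; the function names are our own):

```python
import random

def class_probs(R):
    """p_r = 2 [1 + (r-1)/(R-1)] / (3R), r = 1, ..., R (assumes R >= 2)."""
    return [2 * (1 + (r - 1) / (R - 1)) / (3 * R) for r in range(1, R + 1)]

def sample_labels(n, R, seed=0):
    """Draw n labels from {1, ..., R} with the unbalanced proportions above."""
    rng = random.Random(seed)
    return rng.choices(range(1, R + 1), weights=class_probs(R), k=n)

probs2 = class_probs(2)   # [1/3, 2/3], matching the binary case in the text
probs5 = class_probs(5)   # increasing proportions that sum to 1
y = sample_labels(60, 5)  # one simulated label vector of size n = 60
```

The proportions are increasing in $r$ and sum to 1, so the design deliberately makes the classes unbalanced.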
\begin{table*} \centering \caption{Empirical Type-I error rates / Empirical powers at the significance level 0.05 in Simulation 3} \label{re3} \begin{tabular}{@{}cc|c|c|c@{}} \toprule & & $X_{dim}=(1,\theta,\phi_d)$ & $X\sim M_{dim}(\mu,k)$ & $X=(x_{1},x_{2}, \dots, x_{dim})$ \\ \midrule & dim&\ MDD \ \ DC \ \ HHG &\ MDD \ \ DC \ \ HHG &\ MDD \ \ DC \ \ HHG\\ \midrule & 3 & 0.047 0.067 0.050 & 0.047 0.047 0.053 & 0.047 0.060 0.033 \\ & 6 & 0.053 0.050 0.050 & 0.050 0.053 0.037 & 0.047 0.050 0.030 \\ I & 8 & 0.037 0.040 0.050 & 0.037 0.047 0.050 & 0.040 0.037 0.030 \\ & 10 & 0.050 0.053 0.040 & 0.050 0.056 0.033 & 0.037 0.050 0.060 \\ & 12 & 0.020 0.070 0.050 & 0.053 0.047 0.047 & 0.040 0.067 0.043 \\ \midrule & 3 & 0.550 0.417 0.530 & 0.967 0.950 0.923 & 0.740 0.830 0.577 \\ & 6 & 1.000 0.583 0.997 & 0.983 0.957 0.927 & 0.947 0.963 0.773 \\ II & 8 & 1.000 0.573 1.000 & 0.967 0.950 0.920 & 0.977 0.990 0.907 \\ & 10 & 1.000 0.597 1.000 & 0.977 0.950 0.887 & 0.990 0.993 0.913 \\ & 12 & 1.000 0.590 1.000 & 0.957 0.940 0.843 & 0.997 1.000 0.963 \\ \bottomrule \end{tabular} \end{table*} \begin{table*} \centering \caption{Empirical powers at the significance level 0.05 in Simulation 4} \label{re4} \begin{tabular}{@{}cc|c|c|c@{}} \toprule & & landmark =20 & landmark =50 & landmark =70 \\ \midrule & $corr$ &\ MDD \ \ DC \ \ HHG &\ MDD \ \ DC \ \ HHG &\ MDD \ \ DC \ \ HHG \\ \midrule & 0 & 0.050 0.050 0.063 & 0.047 0.047 0.050 & 0.040 0.050 0.050 \\ & 0.05 & 0.090 0.080 0.057 & 0.097 0.073 0.087 & 0.093 0.067 0.467 \\ & 0.10 & 0.393 0.330 0.300 & 0.347 0.310 0.237 & 0.350 0.303 0.183 \\ & 0.15 & 0.997 0.957 0.997 & 0.993 0.943 0.970 & 0.993 0.917 0.967 \\ & 0.20 & 1.000 1.000 1.000 & 1.000 1.000 1.000 & 1.000 1.000 1.000 \\ \bottomrule \end{tabular} \end{table*} \textbf{Simulation 4 } In this simulation, we use the $(\cos(\theta + d/2)+\epsilon/10, \cos(\theta - d/2)+\epsilon/10) $ parametrization of an ellipse, where $\theta \in (0,2\pi)$, to run our experiment. We let $X$ be an
ellipse shape and $Y$ be a categorical variable. Here $\cos(d)$ is the correlation parameter: when $\cos(d) = 0$, the shape is a unit circle, and when $\cos(d) = 1$, the shape is a straight line. We set the noise $\epsilon \sim t(2)$ and $R=2$, where $y_1=1$ represents that the shape $X$ is a circle with $\cos(d) = 0$ and $y_2 = 2$ represents that the shape $X$ is an ellipse with correlation $corr = \cos(d)$. It is intuitive that, when $corr = 0$, the MDD statistic should be zero. In our experiment, we set $corr = 0, 0.05, 0.1, 0.15, 0.2$ and let the number of landmarks come from $\{20,50,70\}$. The sample size is set to 60 and the R package $shapes$ \cite{shapes} is used to calculate the distance between shapes. Table \ref{re4} summarizes the empirical Type-I error rates ($corr=0$) and empirical powers based on 300 replications at $\alpha = 0.05$. It shows that the DC test and the HHG test are conservative for testing the dependence between an ellipse shape and a categorical variable, while the MDD test works well for different numbers of landmarks. \section{A Real-data Analysis} \label{realdata} \subsection{The Hippocampus Data Analysis} Alzheimer's disease (AD) is a disabling neurological disorder that afflicts about $11\%$ of the population over age 65 in the United States. It is an aggressive brain disease that causes dementia -- a continuous decline in thinking, behavioral and social skills that disrupts a person's ability to function independently. As the disease progresses, a person with Alzheimer's disease will develop severe memory impairment and lose the ability to carry out the simplest tasks. There is no treatment that cures Alzheimer's disease or alters the disease process in the brain. The hippocampus, a complex brain structure, plays an important role in the consolidation of information from short-term memory to long-term memory. Humans have two hippocampi, one on each side of the brain. The hippocampus is a vulnerable structure that is affected in a variety of neurological and psychiatric disorders \cite{hippo2}.
In Alzheimer's disease, the hippocampus is one of the first regions of the brain to suffer damage \cite{damage}. The relationship between the hippocampus and AD has been well studied for several years, including volume loss of the hippocampus \citep{loss, hippo1}, pathology in its atrophy \cite{patho} and genetic covariates \cite{gene}. For instance, shape changes \cite{shapechanges, diagnosis1, diagnosis2} in the hippocampus have been regarded as a critical event in the course of AD in recent years. We consider the radial distances of 30,000 surface points on the left and right hippocampus surfaces. In geometry, the radial distance, denoted $r$, is a coordinate in the polar coordinate system $(r, \theta)$; it is the scalar Euclidean distance between a point and the origin of the coordinate system. In our data, the radial distance is the distance between the medial core of the hippocampus and the corresponding vertex on the surface. The dataset, obtained from the ADNI (The Alzheimer's Disease Neuroimaging Initiative), contains 373 observations (162 MCI individuals who converted to AD and 212 MCI individuals who did not convert to AD), where mild cognitive impairment (MCI) is a transitional stage between normal aging and the development of AD \cite{petersen}, together with 8 covariates. Considering the large dimension of the original functional data, we use the singular value decomposition (SVD) to extract the top 30 features, which explain 85\% of the total variance. We first apply the MDD test to detect the significant variables associated with the two sides of the hippocampus separately at significance level $\alpha = 0.05$. Since 8 hypotheses are tested simultaneously, the Benjamini--Hochberg (BH) correction, which ranks the p-values from lowest to highest, is used to control the false discovery rate at 0.05. The statistic in \eqref{stats} is used to test dependence between the hippocampus functional data and categorical variables.
The categorical variables include Gender (1=Male; 2=Female), Conversion (1=converted, 0=not converted), Handedness (1=Right; 2=Left) and Retirement (1=Yes; 0=No). We then apply the DC test and the HHG test to the dataset. Note that the p-values for all three methods are obtained by permutation tests with 500 permutations. Table \ref{hippocampi_covariates} summarizes the results: the MDD test, compared to the other methods, is able to detect the significance on both sides of the hippocampus, which agrees with current studies \cite{hippoage,diagnosis1,diagnosis2} finding that conversion and age are critical elements of AD. We then extend our method to continuous variables, age and the ADAS-Cog score, which are both important for diagnosing AD \cite{hippoage,kong}. We discretize age and the ADAS-Cog score into categorical variables using quartiles. For instance, the factor levels of Age are constructed as ``$(54,71],(71,75],(75,80],(80,90]$'', labelled as $1,2,3,4$. The resulting p-values in Table \ref{hippocampi_covariates} agree with the current research. Next, we apply our method to genetic variables to further check the efficiency of the MDD test. Some genes expressed in the hippocampus are found to be critical in causing AD, such as the apolipoprotein E gene (APOE). The three major human alleles (ApoE2, ApoE3, ApoE4) are the by-product of non-synonymous mutations which lead to changes in functionality and are implicated in AD. Among these alleles, ApoE4, denoted $\epsilon_4$, is accepted as a factor that affects the onset of AD \cite{apoe41,apoe42,apoe43}. In our second experiment, we test the correlation between ApoE2 ($\epsilon_2$), ApoE3 ($\epsilon_3$), ApoE4 ($\epsilon_4$) and the hippocampus. The results in Table \ref{hippocampi_APOE} agree with the idea that $\epsilon_4$ is a significant variable for hippocampal shape. Moreover, due to hippocampal asymmetry, $\epsilon_4$ is more influential on the left side.
The MDD test performs better than the DC test and the HHG test for both sides of the hippocampus. From the two experiments above, we conclude that our method can be used with Euclidean variables such as age and gender. It is also useful, and even better than other popular methods, when it comes to genetic covariates. The correlation between genes and shape data (high-dimensional data) is an interesting field that has not been well studied so far; much work remains to be done. Finally, we apply logistic regression to the hippocampus dataset, taking conversion to AD as the response variable and gender, age and hippocampal shape as predictor variables. The regression shows that age and hippocampal shape are significant, and we present the coefficients of hippocampal shape in Figure \ref{fig1}, where blue indicates positive regions. \begin{table}[ht] \centering \caption{The p-values for correlating hippocampi data and covariates after BH correction.} \begin{tabular}{@{}c|ccc|ccc@{}} \toprule & \multicolumn{3}{c|}{left} & \multicolumn{3}{c}{right} \\ \midrule covariates & MDD & DC & HHG & MDD & DC & HHG \\ \midrule Gender & 0.032 & 0.014 & 0.106 & 0.248 & 0.036 & 0.338 \\ \textbf{Conversion to AD} & 0.012 & 0.006 & 0.009 & 0.008 & 0.006 & 0.009 \\ Handedness & 0.600 & 0.722 & 0.457 & 0.144 & 0.554 & 0.045 \\ Retirement & 0.373 & 0.198 & 0.457 & 0.648 & 0.267 & 0.800 \\ \textbf{Age} & 0.012 & 0.006 & 0.009 & 0.009 & 0.006 & 0.009 \\ \textbf{ADAS-Cog Score} & 0.012 & 0.006 & 0.012 & 0.012 & 0.006 & 0.012 \\ \bottomrule \end{tabular}\label{hippocampi_covariates} \end{table} \begin{table}[ht] \centering \caption{The p-values for correlating hippocampi data and APOE covariates after BH correction.} \begin{tabular}{@{}c|ccc|ccc@{}} \toprule & \multicolumn{3}{c|}{left} & \multicolumn{3}{c}{right} \\ \midrule APOE & MDD & DC & HHG & MDD & DC & HHG \\ \midrule $\epsilon_2$ & 0.567 & 0.488 & 0.223 & 0.144 & 0.696 & 0.310 \\ $\epsilon_3$ & 0.357 & 0.198 & 0.449 & 0.157 & 0.460 & 0.338 \\ \textbf{$\epsilon_4$} & \textbf{0.022} & 0.206 & 0.036 & 0.072 & 0.083 & 0.354 \\ \bottomrule \end{tabular}\label{hippocampi_APOE} \end{table}
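For concreteness, the plug-in estimator $\widehat{MDD}(X\mid Y)$ analysed in the Appendix can be computed from a pairwise distance matrix. The sketch below is our own illustrative implementation, assuming $\bar{B}(X_i,X_j)$ denotes the closed ball centred at $X_i$ with radius $d(X_i,X_j)$.

```python
import numpy as np

def mdd_hat(D, y):
    """Plug-in MDD estimator from distances D[i, j] = d(X_i, X_j) and a
    categorical label vector y.  X_k lies in bar-B(X_i, X_j) iff
    D[k, i] <= D[i, j]."""
    n = len(y)
    classes, counts = np.unique(y, return_counts=True)
    # in_ball[k, i, j] = I(X_k in bar-B(X_i, X_j))
    in_ball = D[:, :, None] <= D[None, :, :]
    F_hat = in_ball.mean(axis=0)  # \hat F(X_i, X_j)
    total = 0.0
    for yr, cnt in zip(classes, counts):
        pr = cnt / n
        # \hat F_r(X_i, X_j): average of the indicator over the k with Y_k = y_r
        Fr_hat = in_ball[y == yr].sum(axis=0) / cnt
        total += pr * np.sum((Fr_hat - F_hat) ** 2)
    return total / n ** 2
```

With a single class ($R=1$) the estimator is exactly zero, since $\hat F_1=\hat F$; with several classes it is non-negative by construction.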
\begin{figure} \begin{center} \includegraphics[width=8cm]{hippo.jpeg} \end{center} \caption{The hippocampus 3D shape surface with regression coefficients}\label{fig1} \end{figure} {\subsection{The Corpus Callosum Data Analysis} We consider another real dataset, the corpus callosum (CC), the largest white matter structure in the brain. The CC has been a structure of high interest in many neuroimaging studies of neuro-developmental pathology. It helps the hemispheres share information, but it also contributes to the spread of seizure impulses from one side of the brain to the other. Recent research \cite{raz2010trajectories,witelson1989hand} has investigated individual differences in the CC and their possible implications for interhemispheric connectivity. We consider the CC contour data obtained from the ADNI study to test the dependence between a high-dimensional variable and a random variable. In the ADNI dataset, the segmentation of the T1-weighted MRI and the calculation of the intracranial volume were performed with the \emph{FreeSurfer} package created by \cite{dale1999cortical}, whereas the midsagittal CC area was calculated with the CCseg package. The CC dataset includes 409 subjects, with 223 healthy controls and 186 AD patients, at baseline of the ADNI database. Each subject has a CC planar contour $Y_j$ with 50 landmarks and five covariates. We treat the CC planar contour $Y_j$ as a manifold-valued response in the Kendall planar shape space and all covariates as Euclidean. The Riemannian shape distance was calculated with the R package \emph{shapes} \cite{shapes}. It is of interest to detect the significant variables associated with the CC contour data. We applied the MDD test for dependence between the contour and five categorical covariates, gender, handedness, marital status, retirement and diagnosis, at significance level $\alpha = 0.05$. We also applied the DC test and the HHG test to the CC data. The results are summarized in Table \ref{realt1}.
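The permutation p-values reported here (500 permutations, as in the hippocampus analysis) follow the usual label-shuffling scheme; a generic sketch, with our own naming, is:

```python
import numpy as np

def perm_pvalue(stat_fn, D, y, n_perm=500, seed=1):
    """One-sided permutation p-value for a dependence statistic
    stat_fn(D, y): permuting y emulates the null of independence."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(D, y)
    # count permuted statistics at least as extreme as the observed one
    hits = sum(stat_fn(D, rng.permutation(y)) >= observed
               for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)
```

Here `stat_fn` would be the MDD, DC or HHG statistic evaluated on the distance matrix `D` and labels `y`.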
It reveals that the shape of the CC planar contour is highly dependent on gender and AD diagnosis. This indicates that gender and AD diagnosis are significant variables, which agrees with \cite{witelson1989hand,pan2017conditional}. The result also demonstrates that the MDD test performs better than the HHG test in detecting the significance of gender. We plot the mean trajectories of the healthy controls (HC) and the Alzheimer's disease (AD) patients; a similar comparison is made between males and females. Both results are shown in Figure \ref{Fig:CCdata}. An obvious difference in shape between the AD patients and the healthy controls can be observed: compared to the healthy controls, the splenium of the AD patients appears thinner and the isthmus more rounded. Moreover, the splenium is observed to be thinner in the male group than in the female group. This provides intuitive evidence for the correlation between gender and AD. \begin{table}[ht] \centering \caption{The p-values for correlating CC contour data and five categorical covariates after BH correction.}\label{realt1} \begin{tabular}{ c|c|c|p{1cm} } \toprule covariates & MDD & DC &HHG \\ \midrule Gender & 0.015 &0.018 & 0.222\\ Handedness & 0.482 &0.499& 0.461 \\ Marital Status & 0.482 &0.744 & 0.773\\ Retirement & 0.482 &0.482 & 0.461\\ Diagnosis & 0.015&0.018 & 0.045\\ \bottomrule \end{tabular} \end{table} \begin{figure} \begin{center} \includegraphics[width=8cm]{CCdata.png} \end{center} \caption{The corpus callosum data surface}\label{Fig:CCdata} \end{figure} } \section{Conclusion} \label{conclu} In this paper, we propose the MDD statistic for correlation analysis of non-Euclidean data in metric spaces and give conditions for constructing the statistic. We then prove the mathematical preliminaries needed in our analysis. The proposed method is robust to outliers and heavy tails of the high-dimensional variables.
Based on the results of the simulations and the real data analysis of the hippocampus dataset from the ADNI, we demonstrate its usefulness for detecting correlations between a high-dimensional variable and different types of variables (including genetic variables). We also demonstrate its usefulness on another manifold-valued dataset, the CC contour data. We plan to extend our method to variable selection and other regression models. \section*{Acknowledgements} W.-L. Pan was supported by the Science and Technology Development Fund, Macau SAR (Project code: 0002/2019/APD), National Natural Science Foundation of China (12071494), Hundred Talents Program of the Chinese Academy of Sciences and the Science and Technology Program of Guangzhou, China (202002030129,202102080481). P. Dang was supported by the Science and Technology Development Fund, Macau SAR (Project code: 0002/2019/APD), the Science and Technology Development Fund, Macau SAR (File no. 0006/2019/A1). W.-X. Mai was supported by NSFC Grant No. 11901594. Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen Idec Inc.; Bristol-Myers Squibb Company; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd.
and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research \& Development, LLC.; Johnson \& Johnson Pharmaceutical Research \& Development LLC.; Medpace, Inc.; Merck \& Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Synarc Inc.; and Takeda Pharmaceutical Company. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. We are grateful to Prof. Hongtu Zhu for generously sharing the preprocessed ADNI dataset with us. \setcounter{equation}{0} \setcounter{subsection}{0} \renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thesubsection}{A.\arabic{subsection}} \renewcommand{\thedefinition}{A.\arabic{definition}} \renewcommand{\thelemma}{A.\arabic{lemma}} \section*{Appendix: Technical details} \begin{proof}[\textbf{Proof of Theorem \ref{the1}}] It is obvious that if $X$ and $Y$ are independent, then $F_{r}(x, x^{\prime}) = F(x, x^{\prime})$ for all $x, x^{\prime} \in \mathcal{M}$ and every $r$, so $MDD(X \mid Y) = 0$. Conversely, we need to prove that if $MDD(X \mid Y) = 0$, then $X$ and $Y$ are independent. By the definition of $MDD$, $MDD(X\mid Y)=\sum_{r=1}^{R} p_{r} \int [F_r(u, v)-F(u, v)]^{2} d \nu(u) d \nu(v)$. Clearly $MDD(X \mid Y) \geq 0$, so if $MDD(X \mid Y)=0$, then $F_r(u, v) = F(u, v)$, $a.s.$ $\nu \otimes \nu$.
Given $Y=y_{r}$, define $\phi_{r}$ as the Borel probability measure of $X \mid Y=y_{r}$; then $$ \phi_{r}[\bar{B}(u, v)] := F_{r}(u, v) =P(X \in \bar{B}(u, v) \mid Y=y_{r}), $$ where $\bar{B}(u, v)$ denotes the closed ball $\bar{B}(u, d(u,v))$ centred at $u$ with radius $d(u,v)$. Since $(\mathcal{M}, d)$ is a Polish space in which $d$ is directionally $(\epsilon, \eta, L)$-limited and $F_r(u, v) = F(u, v)$, $a.s.$ $\nu \otimes \nu$, we can apply Theorem 1 in \cite{wang2021nonparametric} to conclude that $\nu = \phi_{r}$ for $r = 1, \ldots, R$. Therefore, $F_{r}(x, x^{\prime}) = F(x, x^{\prime})$ for all $x, x^{\prime} \in \mathcal{M}$. That is, for every $x$, $x^{\prime}$ and every $r$, we have $$P(X \in \bar{B}(x, x^{\prime}) \mid Y = y_{r}) = P(X \in \bar{B}(x, x^{\prime})),$$ i.e., $X$ and $Y$ are independent. \end{proof} \begin{proof}[\textbf{Proof of Corollary \ref{col1}}] (a) Since $\nu(\mathcal{U}) \geq 1 - \epsilon$ and the metric $d$ is directionally $(\epsilon, \eta, L)$-limited at $\mathcal{U}$, we obtain the independence result from Theorem \ref{the1} and Corollary 1(a) in \cite{wang2021nonparametric}. (b) Similarly, since each $\mathcal{M}_l$ is a Polish space satisfying the directionally-limited condition, the independence result follows from Corollary 1(b) in \cite{wang2021nonparametric}.
\end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{the1.2}}] Consider \begin{equation*} \begin{aligned} &\widehat{MDD}(X|Y) = \frac{1}{n^2}\sum_{r=1}^R\sum_{i=1}^n\sum_{j=1}^n \hat{p}_r[\hat{F}_r(X_i,X_j)-\hat{F}(X_i,X_j)]^2 \\ =&\frac{1}{n^2}\sum_{r=1}^R\sum_{i=1}^n\sum_{j=1}^n \hat{p}_r\Big[\frac{1}{n}\sum_{k=1}^n \frac{I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)}{\hat{p}_r}\\ &\quad\quad\quad\quad\quad\quad\quad -\frac{1}{n}\sum_{k=1}^n I(X_k\in\bar{B}(X_i,X_j))\Big]^2\\ =&\sum_{r=1}^R\Big[\frac{1}{n^4\hat{p}_r}\sum_{i,j,k,k^\prime=1}^n I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r) \\ &+\frac{\hat{p}_r}{n^4}\sum_{i,j,k,k^\prime=1}^n I(X_k\in\bar{B}(X_i,X_j))I(X_{k^\prime}\in\bar{B}(X_i,X_j)) \\ &-\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j)) \\ &-\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)I(X_k\in\bar{B}(X_i,X_j))\Big]\\ =&:Q_1+Q_2+Q_3. \end{aligned} \end{equation*} For $Q_1$, $\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)$ is a V-statistic of order 4. We can verify that \begin{equation*} \begin{aligned} &E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)]\\ =&E[E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)|X_i,X_j]\\ &\times E[I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)|X_i,X_j]]\\ =&E[E^2[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)|X_i,X_j]]\\ =&p_r^2E[\frac{1}{p_r^2}E^2[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)|X_i,X_j]]\\ =&p_r^2E[F_r^2(X_i,X_j)].
\end{aligned} \end{equation*} Since $E|I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)|\leq1<\infty$, according to Theorem 3 of \cite{lee2019u}, $\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)$ converges almost surely to $p_r^2E[F_r^2(X_i,X_j)]$. Since $\hat{p}_r\to p_r$ almost surely, we conclude that $Q_1 \stackrel{a.s.}{\longrightarrow} \sum_{r=1}^R p_rE[ F_r^2(X_i,X_j)]$ as $n\to \infty$. Similarly, $Q_2\stackrel{a.s.}{\longrightarrow}\sum_{r=1}^R p_rE[F^2(X_i,X_j)]$ and $Q_3\stackrel{a.s.}{\longrightarrow} -2\sum_{r=1}^R p_rE[F_r(X_i,X_j)F(X_i,X_j)]$ as $n\to \infty$. Since $MDD(X|Y)=E[\sum_{r=1}^R p_r(F_r(X_i,X_j)-F(X_i,X_j))^2]$, we have $$ \widehat{MDD}(X|Y)\stackrel{a.s.}{\longrightarrow} MDD(X|Y),~n\to\infty. $$ \end{proof} Before proving Theorem \ref{the1.3}, we give a lemma and some notation from \cite{lee2019u}. \begin{lemma}\label{lm6.4} Let $V_n$ be a V-statistic of order $m$, where $V_n = n^{-m}\sum_{i_1=1}^n\cdots\sum_{i_m=1}^n h(X_{i_1}, \ldots, X_{i_m})$ and $h(X_{i_1} ,\ldots, X_{i_m})$ is the kernel of $V_n$. Assume that for all $1\leq i_1\leq\cdots\leq i_m\leq m$, $E[h(X_{i_1} ,\ldots, X_{i_m})^2] < \infty$. We have the following conclusions. (i) If $\zeta_1 = \mathrm{Var}(h_1(X_1)) > 0$, then $$\sqrt{n}(V_n - E(h(X_{1} ,\ldots, X_{m})))\stackrel{d}{\rightarrow}N(0, m^2\zeta_1),$$ where $\zeta_k =\mathrm{Var}(h_k(X_1,\ldots,X_k))$ and \begin{align*} h_k(x_1,\ldots,x_k)=&E[h(X_1,\ldots,X_m)|X_1=x_1,\ldots,X_k=x_k]\\ =&E[h(x_1,\ldots,x_k,X_{k+1},\ldots,X_m)].
\end{align*} (ii) If $\zeta_1 = 0$ but $\zeta_2 = \mathrm{Var}(h_2(X_1, X_2)) > 0$, then $n(V_n - E(h(X_{1} ,\ldots, X_{m})))\stackrel{d}{\rightarrow}\frac{m(m-1)}{2}\sum_{j=1}^{\infty}\lambda_j\mathcal{X}_{j}^2(1)$, where the $\mathcal{X}_j^2(1)$'s, $j = 1, 2,\ldots$, are independently and identically distributed $\mathcal{X}^2$ random variables with $1$ degree of freedom and the $\lambda_j$'s satisfy $\sum_{j=1}^\infty \lambda_j^2=\zeta_2$.\\ \end{lemma} \begin{proof}[\textbf{Proof of Theorem \ref{the1.3}}] Denote $Z_i=(X_i, Y_i)$, $Z_j=(X_j, Y_j)$, $Z_k=(X_k, Y_k)$, $Z_{k^\prime}=(X_{k^\prime}$, $Y_{k^\prime})$. We consider the statistic with the known parameter $p_r$: \begin{equation*} \begin{aligned} I_n=&\frac{1}{n^2}\sum_{r=1}^R\sum_{i=1}^n\sum_{j=1}^n p_r\Big[\frac{1}{n}\sum_{k=1}^n \frac{I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)}{p_r}\\ &\quad\quad\quad\quad\quad\quad\quad -\frac{1}{n}\sum_{k=1}^n I(X_k\in\bar{B}(X_i,X_j))\Big]^2\\ =&\frac{1}{n^4}\sum_{r=1}^R\sum_{i=1}^n\sum_{j=1}^n\sum_{k,k^\prime=1}^n\Big[\frac{1}{p_r}I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\times I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r) \\ &+p_r I(X_k\in\bar{B}(X_i,X_j))I(X_{k^\prime}\in\bar{B}(X_i,X_j)) \\ &-I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j)) \\ &-I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)I(X_k\in\bar{B}(X_i,X_j))\Big].
\end{aligned} \end{equation*} Let $V_n^{(p_r)}=\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n \Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime})$ and \begin{equation*} \begin{aligned} &\Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime}) \\ =&\frac{1}{p_r}I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)\\ &+ p_r I(X_k\in\bar{B}(X_i,X_j))I(X_{k^\prime}\in\bar{B}(X_i,X_j))\\ &-I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j)) \\ &-I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)I(X_k\in\bar{B}(X_i,X_j)); \end{aligned} \end{equation*} then we have \begin{align*} I_n =& \sum_{r=1}^R V_n^{(p_r)}\\ =&\sum_{r=1}^R\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n \Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime}). \end{align*} We use V-statistic theory to obtain the asymptotic properties of $\widehat{MDD}(X|Y)$. We symmetrize the kernel $\Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime})$ and denote \begin{equation*} \begin{aligned} &\Psi^{(r)}_S(Z_i,Z_j,Z_k,Z_{k^\prime})\\ =&\frac{1}{4!}\sum_{\tau\in\pi(i,j,k,k^\prime)}\Psi^{(r)}(Z_{\tau(1)},Z_{\tau(2)},Z_{\tau(3)},Z_{\tau(4)}), \end{aligned} \end{equation*} where $\pi(i,j,k,k^\prime)$ denotes the permutations of $\{i,j,k,k^\prime\}$. Now the kernel $\Psi^{(r)}_S(Z_i,Z_j,Z_k,Z_{k^\prime})$ is symmetric, and $\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n\Psi^{(r)}_S(Z_i,Z_j,Z_k,Z_{k^\prime})$ is a V-statistic. Using the notation of Lemma \ref{lm6.4}, we first consider $E[\Psi^{(r)}_S(z_i,Z_j,Z_k,Z_{k^\prime})]$, that is, the case where only one argument is fixed. Hence we have to consider $E[\Psi^{(r)}(z_i,Z_j,Z_k,Z_{k^\prime})]$, $E[\Psi^{(r)}(Z_i,z_j,Z_k,Z_{k^\prime})]$, $E[\Psi^{(r)}(Z_i,Z_j,z_k,Z_{k^\prime})]$ and $E[\Psi^{(r)}(Z_i,Z_j,Z_k,z_{k^\prime})]$.
We consider \begin{equation*} \begin{aligned} &E[\Psi^{(r)}(z_i,Z_j,Z_k,Z_{k^\prime})]\\ =&\frac{1}{p_r}E[I(X_k\in\bar{B}(x_i,X_j),Y_k=y_r)\\ &\quad\quad I(X_{k^\prime}\in\bar{B}(x_i,X_j),Y_{k^\prime}=y_r)]\\ &+ p_r E[I(X_k\in\bar{B}(x_i,X_j))I(X_{k^\prime}\in\bar{B}(x_i,X_j))]\\ &-E[I(X_k\in\bar{B}(x_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(x_i,X_j))] \\ &- E[I(X_{k^\prime}\in\bar{B}(x_i,X_j),Y_{k^\prime}=y_r)I(X_k\in\bar{B}(x_i,X_j))]\\ =& \frac{1}{p_r}P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_k=Y_{k^\prime}=y_r]\\ &+p_rP_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j)]\\ &- P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j), Y_k=y_r]\\ &-P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j), Y_{k^\prime}=y_r], \end{aligned} \end{equation*} where $P_{j,k,k^\prime}$ denotes the probability taken with respect to $Z_j$, $Z_k$ and $Z_{k^\prime}$. Under the null hypothesis $H_0$, $X$ and $Y$ are independent. Then we have \begin{equation*} \begin{aligned} &\frac{1}{p_r}P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_k=Y_{k^\prime}=y_r]\\ =&\frac{1}{p_r}P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_k=y_r]P_{k^\prime}[Y_{k^\prime}=y_r]\\ =&P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_k=y_r], \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} &p_rP_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j)]\\ =&P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_{k^\prime}=y_r]. \end{aligned} \end{equation*} Thus, $E[\Psi^{(r)}(z_i,Z_j,Z_k,Z_{k^\prime})]=0$. Similarly, $E[\Psi^{(r)}(Z_i,z_j,Z_k,Z_{k^\prime})]=0$, $E[\Psi^{(r)}(Z_i,Z_j,z_k,Z_{k^\prime})]=0$ and $E[\Psi^{(r)}(Z_i,Z_j,Z_k,z_{k^\prime})]=0$, again by the independence of $X$ and $Y$ under the null hypothesis $H_0$. Next, we consider the case where two random elements are fixed.
\begin{equation*} \begin{aligned} &E[\Psi^{(r)}(Z_i,Z_j,z_k,z_{k^\prime})]\\ =&\frac{1}{p_r}E[I(x_k\in\bar{B}(X_i,X_j),y_k=y_r)\\ &\quad\quad\times I(x_{k^\prime}\in\bar{B}(X_i,X_j),y_{k^\prime}=y_r)]\\ &+ p_r E[I(x_k\in\bar{B}(X_i,X_j))I(x_{k^\prime}\in\bar{B}(X_i,X_j))]\\ &-E[I(x_k\in\bar{B}(X_i,X_j),y_k=y_r)I(x_{k^\prime}\in\bar{B}(X_i,X_j))] \\ &- E[I(x_{k^\prime}\in\bar{B}(X_i,X_j),y_{k^\prime}=y_r)I(x_k\in\bar{B}(X_i,X_j))]\\ =& \frac{1}{p_r}P_{i,j}[x_k,x_{k^\prime}\in\bar{B}(X_i,X_j),y_k=y_{k^\prime}=y_r]\\ &+p_rP_{i,j}[x_k,x_{k^\prime}\in\bar{B}(X_i,X_j)]\\ &- P_{i,j}[x_k,x_{k^\prime}\in\bar{B}(X_i,X_j), y_k=y_r]\\ &-P_{i,j}[x_k,x_{k^\prime}\in\bar{B}(X_i,X_j), y_{k^\prime}=y_r], \end{aligned} \end{equation*} so $E[\Psi^{(r)}(Z_i,Z_j,z_k,z_{k^\prime})]$ is a non-constant function of $z_k$ and $z_{k^\prime}$. In addition, we know \begin{equation*} \begin{aligned} &E[\Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime})]\\ =&\frac{1}{p_r}E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)\\ &\quad\quad\times I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)]\\ &+ p_r E[I(X_k\in\bar{B}(X_i,X_j))I(X_{k^\prime}\in\bar{B}(X_i,X_j))]\\ &-E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j))] \\ &- E[I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)I(X_k\in\bar{B}(X_i,X_j))]\\ =& \frac{1}{p_r}P_{i,j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(X_i,X_j),Y_k=Y_{k^\prime}=y_r]\\ &+p_rP_{i,j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(X_i,X_j)]\\ &- P_{i,j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(X_i,X_j), Y_k=y_r]\\ &-P_{i,j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(X_i,X_j), Y_{k^\prime}=y_r] = 0, \end{aligned} \end{equation*} where the last equality follows from the independence of $X$ and $Y$. By Lemma \ref{lm6.4}(ii), $V_{n}^{(p_r)}$ is a limiting $\chi^2$-type V-statistic. Now, we consider $V_{n}^{(\hat{p}_r)}$. Let $t=(t_1,t_2)$.
To show that the conditions of Theorem 2.16 in \cite{de1987effect} hold, we use $$ \begin{aligned} h(z_1,z_2; p_r)=p_r\int g(z_1,t;p_r)g(z_2,t;p_r)dM(t), \end{aligned} $$ where, for $z=(x,y)$, $g(z,t;\gamma)=\sqrt{\gamma}\,(I(x\in\bar{B}(t_1,t_2),y=y_r)/\gamma-I(x\in\bar{B}(t_1,t_2)))$ and $M = \nu\otimes\nu$ is the product measure of $\nu$ with itself with respect to $X$. Thus, \begin{align*} \mu(t;\gamma)=&Eg(Z,t;\gamma)\\ =&\sqrt{\gamma}(P(X\in\bar{B}(t_1,t_2))p_r/\gamma-P(X\in\bar{B}(t_1,t_2))) \end{align*} and \begin{align*} \mathbf{d}_1\mu(t;p_r)=p_r^{-\frac{1}{2}}P(X\in\bar{B}(t_1,t_2)). \end{align*} The conditions of Theorem 2.16 in \cite{de1987effect} can be shown to hold in this case using \begin{align*} h_* &(Z_1,Z_2)\\ =\int &[g(Z_1,t;p_r)+\mathbf{d}_1\mu(t;p_r)(I(Y_1=y_r)-p_r)]\\ &[g(Z_2,t;p_r)+\mathbf{d}_1\mu(t;p_r)(I(Y_2=y_r)-p_r)]dM(t). \end{align*} Let $\{\lambda_j^{(r)}\}$ denote the eigenvalues of the operator $A$ defined by \begin{align*} Aq(z_1)=\int h_*(z_1,z_2)q(z_2)d\nu(x_2)dP(y_2), \end{align*} where $z_2=(x_2,y_2)$; then \begin{align*} nV_{n}^{(\hat{p}_r)}\stackrel{d}{\rightarrow}\sum_{j=1}^\infty\lambda_j^{(r)}\mathcal{X}_j^2(1), \end{align*} where the $\mathcal{X}_j^2(1)$'s, $j= 1,2,\ldots$, are independently and identically distributed chi-square random variables with 1 degree of freedom. Noting that $\widehat{MDD}(X|Y)=\sum_{r=1}^R V_n^{(\hat{p}_r)}$, by the independence of the samples and the additivity of the chi-square distribution, \begin{equation*} n\widehat{MDD}(X|Y)\stackrel{d}{\rightarrow}\sum_{j=1}^\infty\lambda_j\mathcal{X}_j^2(1), \end{equation*} where $\lambda_j = \sum_{r=1}^R \lambda_j^{(r)}$. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{the1.4}}] Under the alternative hypothesis $H_1$, $\widehat{MDD}(X|Y)\stackrel{a.s.}{\longrightarrow}MDD(X|Y) > 0$ as $n\rightarrow\infty$. Thus, we have $n\widehat{MDD}(X|Y)\stackrel{a.s.}{\longrightarrow}\infty$ as $n\rightarrow\infty$.
\end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{the1.5}}] We consider \begin{equation*} \widehat{MDD}(X|Y) = I_n + J_n , \end{equation*} where $$ \begin{aligned} I_n=&\sum_{r=1}^R \frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n\frac{1}{p_r}I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)\\ &\quad\quad\quad\quad\quad\quad\quad\times I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)\\ +&\sum_{r=1}^R \frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^np_r I(X_k\in\bar{B}(X_i,X_j))I(X_{k^\prime}\in\bar{B}(X_i,X_j)) \\ -&\sum_{r=1}^R \frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^n I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)\\ &\quad\quad\quad\quad\quad\quad\quad\times I(X_{k^\prime}\in\bar{B}(X_i,X_j))\\ -&\sum_{r=1}^R \frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^nI(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)\\ &\quad\quad\quad\quad\quad\quad\quad\times I(X_k\in\bar{B}(X_i,X_j)) \end{aligned} $$ and $$ \begin{aligned} J_n=&\sum_{r=1}^R(\frac{1}{\hat{p}_r}-\frac{1}{p_r})\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^{n}I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)\\ &\quad\quad\quad\quad\quad\quad\quad\times I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)\\ +& \sum_{r=1}^R(\hat{p}_r - p_r)\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^{n}I(X_k\in\bar{B}(X_i,X_j))\\ &\quad\quad\quad\quad\quad\quad\quad\times I(X_{k^\prime}\in\bar{B}(X_i,X_j)). \end{aligned} $$ Recall $E[\Psi^{(r)}(z_i,Z_j,Z_k,Z_{k^\prime})]$ from the proof of Theorem \ref{the1.3}: \begin{equation*} \begin{aligned} &E[\Psi^{(r)}(z_i,Z_j,Z_k,Z_{k^\prime})]\\ =& \frac{1}{p_r}P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_k=Y_{k^\prime}=y_r]\\ &+p_rP_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j)]\\ &- P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j), Y_k=y_r]\\ &-P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j), Y_{k^\prime}=y_r]. \end{aligned} \end{equation*} Under the alternative hypothesis $H_1$, $X$ and $Y$ are not independent of each other, i.e.
\begin{equation*} \begin{aligned} &\frac{1}{p_r}P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_k=Y_{k^\prime}=y_r]\\ \neq &P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_k=y_r], \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} &p_rP_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j)]\\ \neq& P_{j,k,k^\prime}[X_k,X_{k^\prime}\in\bar{B}(x_i,X_j),Y_{k^\prime}=y_r], \end{aligned} \end{equation*} so $E[\Psi^{(r)}(z_i,Z_j,Z_k,Z_{k^\prime})]$ is a non-constant function of $z_i$. Since $h_1^{(r)}(Z_1)=\frac{1}{4}[E[\Psi^{(r)}(z_i,Z_j,Z_k,Z_{k^\prime})]+E[\Psi^{(r)}(Z_i,z_j,Z_k,Z_{k^\prime})]+E[\Psi^{(r)}(Z_i,Z_j,z_k,Z_{k^\prime})]+E[\Psi^{(r)}(Z_i,Z_j,Z_k,z_{k^\prime})]]$, we have $\mathrm{Var}[h_1^{(r)}(Z_1)]>0$. Applying Lemma \ref{lm6.4}(i), we have \begin{equation*} \sqrt{n}[V_n^{(p_r)}-E[\Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime})]]\stackrel{d}{\rightarrow}N(0,16\mathrm{Var}[h_1^{(r)}(Z_1)]). \end{equation*} Since $I_n = \sum_{r=1}^R V_n^{(p_r)}$, $MDD(X|Y)=E[\sum_{r=1}^R p_r(F_r(X_i,X_j)-F(X_i,X_j))^2]$ and \begin{equation*} \begin{aligned} &E[\Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime})]\\ =&E[E[\Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime})|Z_i,Z_j]]\\ =&\frac{1}{p_r}E[E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)|Z_i,Z_j]\\ &\quad\quad E[I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)|Z_i,Z_j]]\\ &+ p_r E[E[I(X_k\in\bar{B}(X_i,X_j))|Z_i,Z_j]\\ &\quad\quad E[I(X_{k^\prime}\in\bar{B}(X_i,X_j))|Z_i,Z_j]]\\ &-E[E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)|Z_i,Z_j]\\ &\quad\quad\quad E[I(X_{k^\prime}\in\bar{B}(X_i,X_j))|Z_i,Z_j]] \\ &- E[E[I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)|Z_i,Z_j]\\ &\quad\quad E[I(X_k\in\bar{B}(X_i,X_j))|Z_i,Z_j]]\\ =&\frac{1}{p_r}E[E^2[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)|Z_i,Z_j]] \\ &+ p_r E[E^2[I(X_k\in\bar{B}(X_i,X_j))|Z_i,Z_j]]\\ &-2E[E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)|Z_i,Z_j]\\ &\quad\quad\quad E[I(X_k\in\bar{B}(X_i,X_j))|Z_i,Z_j]]\\ =&p_rE[(\frac{1}{p_r}E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)|Z_i,Z_j]\\
&-E[I(X_k\in\bar{B}(X_i,X_j))|Z_i,Z_j])^2]\\ =&E[p_r(F_r(X_i,X_j)-F(X_i,X_j))^2], \end{aligned} \end{equation*} we have $$ \sum_{r=1}^R E[\Psi^{(r)}(Z_i,Z_j,Z_k,Z_{k^\prime})]=MDD(X|Y) $$ and \begin{equation*} \sqrt{n}[I_n-MDD(X|Y)]\stackrel{d}{\rightarrow}N(0,\sigma_I^2), \end{equation*} where $\sigma_I^2 = \sum_{r=1}^R16\mathrm{Var}[h_1^{(r)}(Z_1)]+2n\sum_{i<j}Cov(V_n^{(i)},V_n^{(j)})$. Here $Cov(V_n^{(i)},V_n^{(j)})$ is the covariance of the order-4 V-statistics $V_n^{(i)}$ and $V_n^{(j)}$, which can be written as $Cov(V_n^{(i)},V_n^{(j)})=\frac{1}{n^4}\sum_{c=1}^4\binom{4}{c}(n-4)^{4-c}\sigma_{c,c}^2$, where $\sigma_{c,c}^2=Cov(h_c^{(p)}(Z_1,\ldots,Z_c),h_c^{(q)}(Z_1,\ldots,Z_c))$ and $h_c^{(p)}(Z_1,\ldots,Z_c)$ denotes the $h_c(Z_1,\ldots,Z_c)$ of Lemma \ref{lm6.4} for $r=p$. To obtain the asymptotic distribution of $\sqrt{n}[\widehat{MDD}(X|Y)-MDD(X|Y)]$, we need the asymptotic distribution of $\sqrt{n}J_n$, since \begin{equation*} \begin{aligned} &\sqrt{n}[\widehat{MDD}(X|Y)-MDD(X|Y)] \\ =& \sqrt{n}[I_n+J_n-MDD(X|Y)]. \end{aligned} \end{equation*} We thus consider $J_n$. Denote $V_1^{(r)}=\frac{1}{n^4}\sum_{i,j,k,k^\prime=1}^{n}I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)$ and $V_2=\frac{1}{n^4}\sum_{i,j,k,{k^\prime}=1}^{n}I(X_k\in\bar{B}(X_i,X_j))I(X_{k^\prime}\in\bar{B}(X_i,X_j))$; then \begin{equation*} J_n=\sum_{r=1}^R(\hat{p}_r-p_r)(V_2-\frac{1}{\hat{p}_rp_r}V_1^{(r)}). \end{equation*} The limit of $V_2-\frac{1}{\hat{p}_rp_r}V_1^{(r)}$ is a constant: since $E[V_1^{(r)}]=E[I(X_k\in\bar{B}(X_i,X_j),Y_k=y_r)I(X_{k^\prime}\in\bar{B}(X_i,X_j),Y_{k^\prime}=y_r)]=p_r^2E[F_r^2(X_i,X_j)]$, $E[V_2]=E[I(X_k\in\bar{B}(X_i,X_j))I(X_{k^\prime}\in\bar{B}(X_i,X_j))]=E[F^2(X_i,X_j)]$ and $\hat{p}_r\rightarrow p_r$, we have $$ V_2-\frac{1}{\hat{p}_rp_r}V_1^{(r)}\rightarrow E[F^2(X_i,X_j)]-E[F_r^2(X_i,X_j)].
$$ According to the Central Limit Theorem (CLT) together with Slutsky's theorem, for each $r=1,\ldots,R$ we have $$ \sqrt{n}(\hat{p}_r-p_r)(V_2-\frac{1}{\hat{p}_rp_r}V_1^{(r)})\rightarrow N(0,\sigma_r^2), $$ where $\sigma_r^2 = p_r(1-p_r)(E[F^2(X_i,X_j)]-E[F_r^2(X_i,X_j)])^2$. Let $\mathbf{\hat{p}}^{(i)}=(I(Y_i=y_1),I(Y_i=y_2),\ldots,I(Y_i=y_R))^T$, an $R$-dimensional random vector; $\mathbf{\hat{p}}^{(1)}, \mathbf{\hat{p}}^{(2)},\ldots, \mathbf{\hat{p}}^{(n)}$ are independent of each other. Therefore, according to the multidimensional CLT, $\sqrt{n}(\frac{1}{n}\sum_{i=1}^n\mathbf{\hat{p}}^{(i)}-E[\mathbf{\hat{p}}^{(i)}])= (\sqrt{n}(\hat{p}_1-p_1),\ldots,\sqrt{n}(\hat{p}_R-p_R))^T$ asymptotically follows an $R$-dimensional normal distribution. This establishes the joint normality needed for additivity, so $$ \sqrt{n}J_n\rightarrow N(0,\sigma_J^2), $$ where $\sigma_J^2 = \sum_{r=1}^R\sigma_r^2+2\sum_{i<j}Cov(\hat{p}_i,\hat{p}_j)= \sum_{r=1}^R\sigma_r^2-\frac{2}{n}\sum_{i<j}p_ip_j$. Similarly, the multidimensional CLT shows that $\sqrt{n}J_n$ and $\sqrt{n}[I_n-MDD(X|Y)]$ are jointly (bivariate) normal. We then conclude that \begin{equation*} \sqrt{n}[\widehat{MDD}(X|Y)-MDD(X|Y)]\stackrel{d}{\rightarrow}N(0,\sigma^2), \end{equation*} where $\sigma^2=\sigma_I^2+\sigma_J^2 + 2n\mathrm{Cov}(I_n,J_n)$. Here $\mathrm{Cov}(I_n,J_n)$ is the covariance of the order-4 V-statistic $I_n$ and $J_n$, which can be written as $Cov(I_n,J_n)=Cov(\sum_{r=1}^R V_n^{(r)},\sum_{r=1}^R(\hat{p}_r-p_r)(V_2-\frac{1}{\hat{p}_rp_r}V_1^{(r)}))=\sum_{r=1}^R\sum_{r^{\prime}=1}^RCov(V_n^{(r)},(\hat{p}_{r^{\prime}}-p_{r^{\prime}})(V_2-\frac{1}{\hat{p}_{r^{\prime}}p_{r^{\prime}}}V_1^{(r^{\prime})}))=\sum_{r=1}^R\sum_{r^{\prime}=1}^R\frac{1}{n}\binom{4}{1}(n-4)^3Cov(h_1^{(r)}(Z_1),I(Y_k=y_{r^{\prime}}))(E[F^2(X_i,X_j)]-E[F_r^2(X_i,X_j)])$.
\end{proof} \bibliographystyle{imsart-nameyear}
{ "attr-fineweb-edu": 1.944336, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbvbxK19JmejM8ESN
\section*{Supplementary Material} \section{Circuit derivation} \label{app:circ} \begin{figure} \begin{centering} \includegraphics[width=3.0in]{Figures/Supp_circuit.pdf} \caption{\label{fig:TwoIslandCircuit} Coupler circuit comprising two transmons qubits arranged in a flux-tunable loop with a third junction $(E_{12} ,C_{12})$.} \end{centering} \end{figure} We write down the Lagrangian using standard circuit parametrization~\cite{riwar2021circuit, Vool_2017, You2019} \begin{align} \mathcal{L} = &\frac{1}{2}C_1 (\dot\Phi_1+\dot\Phi_{e1})^2+\frac{1}{2}C_{12}(\dot\Phi_2-\dot\Phi_1+\dot\Phi_{e12})^2\nonumber\\ &\, +\frac{1}{2}C_2 (\dot\Phi_2-\dot\Phi_{e2})^2\nonumber\\ &\, +E_1\cos{((\Phi_1+\Phi_{e1})/\phi_0)}+E_2\cos{((\Phi_2-\Phi_{e2})/\phi_0)}\nonumber\\ &\, +E_{12}\cos{((\Phi_2-\Phi_1+\Phi_{e12})/\phi_0)}, \end{align} where $\Phi_j(t)$ can be thought of as the node phase or current-like degree of freedom while $\dot\Phi_j(t)$ is the node voltage-like degree of freedom. We use $\phi_0$ to denote the reduced flux quantum $\Phi_0/2\pi$. The loop formed by the three Josephson junctions introduces a constraint which, for our choice of branch sign convention, requires $\Phi_e = \Phi_{e12}+\Phi_{e1}+\Phi_{e2}$; here $\Phi_e$ is an externally applied flux threaded through the loop, while $\{\Phi_{e12},\,\Phi_{e1},\,\Phi_{e2}\}$ denote the branch fluxes across junctions $\{E_{12},\,E_1,\,E_2\}$ respectively. 
Performing the Legendre transformation, we arrive at the Hamiltonian, \begin{align} H = &\frac{C_{12}(Q_1+Q_2)^2}{2C^2}+\frac{C_2Q_1^2+C_1Q_2^2}{2C^2}\nonumber\\ &\, +\frac{\dot\Phi_{e12} (-Q_1C_2C_{12}+Q_2C_1C_{12})}{C^2}\nonumber\\ &\, +\frac{\dot\Phi_{e1} (Q_1(C_1C_2+C_1C_{12})+Q_2C_1C_{12})}{C^2}\nonumber\\ &\, -\frac{\dot\Phi_{e2} (Q_1C_1C_{12}+Q_2(C_1C_2+C_1C_{12}))}{C^2}\nonumber\\ &\, -E_1\cos{((\Phi_1+\Phi_{e1})/\phi_0)}-E_2\cos{((\Phi_2- \Phi_{e2})/\phi_0)}\nonumber\\ &\, -E_{12}\cos{((\Phi_2-\Phi_1+\Phi_{e12})/\phi_0)},\label{eq:trueH} \end{align} where $C^2 = C_1C_2+C_1C_{12}+C_2C_{12}$. Using the loop constraint we may parametrize $\{\Phi_{e12},\,\Phi_{e1},\,\Phi_{e2}\}$ in terms of $\Phi_e$ such that all $\dot\Phi_e$ terms cancel~\cite{You2019}: \begin{align} H = &\frac{C_{12}(Q_1+Q_2)^2}{2C^2}+\frac{C_2Q_1^2+C_1Q_2^2}{2C^2}\nonumber\\ &\,-E_1\cos{\left[\left(\Phi_1+\frac{C_{12}C_2}{C^2}\Phi_e\right)/\phi_0\right]}\nonumber\\ &\,-E_2\cos{\left[\left(\Phi_2-\frac{C_{12}C_1}{C^2}\Phi_e\right)/\phi_0\right]}\nonumber\\ &\,-E_{12}\cos{\left[\left(\Phi_2-\Phi_1+\frac{C_1C_2}{C^2}\Phi_e\right)/\phi_0\right]}\label{eq:chargeH}. \end{align} In this parameterization, we can infer that for typical capacitive coupling, $C_{12}\leq 0.1 \sqrt{C_1C_2}$, the participation of $E_{12}$ with $\Phi_e$ is greater than $80\%$. Due to the lack of $\dot\Phi_e$ terms, Eq.~(\ref{eq:chargeH}) is the simplest construction for charge basis simulations of the Hamiltonian. Another useful parameterization redistributes $\Phi_e$ across the junctions to set $\partial H/\partial \Phi_j = 0$ for $j=\{1,\,2\}$. This cancels odd order terms in $\Phi_j$ and thereby enables a straightforward conversion to the harmonic oscillator basis. 
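The quoted $>\!80\%$ participation can be read off directly from the prefactor $C_1C_2/C^2$ multiplying $\Phi_e$ in the $E_{12}$ cosine above. A minimal numerical sketch, assuming for concreteness the symmetric case $C_1=C_2$ at the quoted coupling limit $C_{12}=0.1\sqrt{C_1C_2}$:

```python
# Flux participation of the E_12 junction in the parametrization above:
# the E_12 cosine carries a fraction C1*C2/C^2 of the external flux Phi_e,
# with C^2 = C1*C2 + C1*C12 + C2*C12.
def e12_flux_participation(c1, c2, c12):
    c_sq = c1 * c2 + c1 * c12 + c2 * c12
    return c1 * c2 / c_sq

# Symmetric example at the quoted coupling limit C12 = 0.1*sqrt(C1*C2)
# (values in fF; only the ratios matter).
c1 = c2 = 100.0
c12 = 0.1 * (c1 * c2) ** 0.5
participation = e12_flux_participation(c1, c2, c12)
print(participation)  # 1/1.2 ~ 0.833, i.e. greater than 80%
```

For strongly asymmetric $C_1$ and $C_2$ the fraction is lower, so the $80\%$ figure should be read as holding for comparable qubit capacitances.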
We assume $\langle \Phi_j^2 \rangle \ll \Phi_0$ and expand to fourth order in $\Phi_j$ \begin{align} H \approx &\frac{C_{12}(Q_1+Q_2)^2}{2C^2}+\frac{C_2Q_1^2+C_1Q_2^2}{2C^2}\nonumber\\ &\, +\frac{\dot\Phi_{e12} (-Q_1C_2C_{12}+Q_2C_1C_{12})}{C^2}\nonumber\\ &\, +\frac{E'_{12}\dot\Phi_{e12}}{E'_{1}}\frac{(Q_1(C_1C_2+C_1C_{12})+Q_2C_1C_{12})}{C^2}\nonumber\\ &\, -\frac{E'_{12}\dot\Phi_{e12}}{E'_{2}}\frac{(Q_1C_2C_{12}+Q_2(C_1C_2+C_2C_{12}))}{C^2}\nonumber\\ &\, +E'_{1}\left(\frac{\Phi_1^2}{2\phi_0^2}-\frac{\Phi_1^4}{24\phi_0^4}\right)\nonumber\\ &\, +E'_{2}\left(\frac{\Phi_2^2}{2\phi_0^2} - \frac{\Phi_2^4}{24\phi_0^4}\right)\nonumber\\ &\, +E'_{12}\left(\frac{(\Phi_2-\Phi_1)^2}{2\phi_0^2} - \frac{(\Phi_2-\Phi_1)^4}{24\phi_0^4}\right)\label{eq:preharmonicH}. \end{align} The external flux is then \begin{align} \Phi_e = \Phi_{e12} + &\phi_0\arcsin{((E_{12}/E_1)\sin{(\Phi_{e12}/\phi_0)})}\nonumber\\+&\phi_0\arcsin{((E_{12}/E_2)\sin{(\Phi_{e12}/\phi_0)})}, \end{align} which may be numerically inverted. The junction energies are now flux sensitive: $E'_{12} = E_{12}\cos{(\Phi_{e12}/\phi_0)}$ and $E'_{j} = \sqrt{E_j^2-E_{12}^2\sin^2{(\Phi_{e12}/\phi_0)}}$. \subsection{Harmonic oscillator basis} We convert to the harmonic oscillator basis by substituting ${Q_j = \sqrt{\hbar\omega_jC_{Sj}/2}(a_j + a_j^{\dagger})}$ and ${\Phi_j = -i\sqrt{\hbar\omega_jL_j/2}(a_j - a_j^{\dagger})}$ in Eq.~(\ref{eq:preharmonicH}).
This leads to the following approximate Hamiltonian, with parameters defined in Table~\ref{table: parameters}, as \begin{table} \begin{centering} \begin{tabular}{c|c} term & value\\ \hline $C_{S1}$ & $\frac{C^2}{C_{12}+C_2}$\\ $C_{S2}$ & $\frac{C^2}{C_{12}+C_1}$\\ $L_j$ & $\frac{\phi_0^2}{E'_{12}+E'_{j}}$ \\ $E_{Cj}$ & $e^2/2C_{Sj}$\\ $g_C$ & $\frac{C_{12}\sqrt{\omega_1\omega_2}}{2 \sqrt{(C_1+C_{12})(C_2+C_{12})}}$\\ $g_L$ & $\frac{4E'_{12}\sqrt{E_{C1}E_{C2}}}{\hbar^2\sqrt{\omega_1\omega_2}}$\\ $\xi_1$ & $\frac{(2(E_1'+E_{12}'))^{1/4}[-E_1'(E_2'+E_{12}')C_2C_{12}+E_2'E_{12}'(C_1C_2+C_1C_{12})]}{2E_{C2}^{1/4}E_1'E_2'C^2}$\\ $\xi_2$ & $\frac{(2(E_2'+E_{12}'))^{1/4}[E_2'(E_1'+E_{12}')C_1C_{12}-E_1'E_{12}'(C_1C_2+C_2C_{12})]}{2E_{C2}^{1/4}E_1'E_2'C^2}$\\ $\omega_j$ & $\frac{\sqrt{8(E_j'+E_{12}')E_{Cj}}}{\hbar}$\\ $\nu_j^{(4)}$ & $\frac{E_{Cj}}{12\hbar}$\\ $\nu_j^{(6)}$ & $\frac{\sqrt{2}E_{Cj}}{360\hbar}\left(\frac{E_{Cj}}{E_j'+E_{12}'}\right)^{1/2}$\\ $\mu_k^{(4)}$ & $\binom{4}{k}\frac{(-1)^kE'_{12}}{12\hbar}\left(\frac{E_{C1}}{E_1'+E_{12}'}\right)^{k/4}\left(\frac{E_{C2}}{E_2'+E_{12}'}\right)^{(4-k)/4}$\\ $\mu_k^{(6)}$ & $\binom{6}{k}\frac{(-1)^k\sqrt{2}E'_{12}}{360\hbar}\left(\frac{E_{C1}}{E_1'+E_{12}'}\right)^{k/4}\left(\frac{E_{C2}}{E_2'+E_{12}'}\right)^{(6-k)/4}$ \end{tabular} \caption{\label{table: parameters} Parameter definitions} \end{centering} \end{table} \begin{align} H/\hbar =& \sum_{j=1}^2\left[\omega_ja_j^{\dagger}a_j-\nu_j^{(4)}(a_j-a_j^{\dagger})^4-\nu_j^{(6)}(a_j-a_j^{\dagger})^6\right]\nonumber\\ &\, +g_C(a_1+a_1^{\dagger})(a_2+a_2^{\dagger})+g_L(a_1-a_1^{\dagger})(a_2-a_2^{\dagger})\nonumber\\ &\, +\sum_{j=1}^{2}\frac{\xi_j\dot\Phi_{e12}}{\Phi_0}(a_j+a_j^{\dagger})\nonumber\\ &\, -\sum_{k=1}^3\mu_k^{(4)}(a_1-a_1^{\dagger})^k(a_2-a_2^{\dagger})^{(4-k)}\nonumber\\ &\, -\sum_{k=1}^3\mu_k^{(6)}(a_1-a_1^{\dagger})^k(a_2-a_2^{\dagger})^{(6-k)} \label{eq:HOfull}. 
\end{align} The terms fourth order in the raising and lowering operators modify the eigenenergies in the harmonic oscillator basis by several hundred MHz and account for the majority of the anharmonicity. The sixth order terms compensate for a residual discrepancy of a few tens of MHz between the charge basis and harmonic oscillator approaches to simulation. At sixth order in the raising and lowering operators the first and second excitation manifolds of eigenenergies for the two simulation approaches agree to within 1~MHz, as shown in Fig.~\ref{fig:HOsim}. \begin{figure} \begin{centering} \includegraphics[width=3.3in]{Figures/HOsim_fidelity.pdf} \caption{Comparison of the eigenstates produced by a harmonic oscillator basis simulation (blue) with those of a charge basis simulation. We chose parameters $E_{j}=13\textrm{ GHz}$, $E_{12} = 1.3\textrm{ GHz}$, $C_j=100\textrm{ fF}$, and $C_{12} = 0\textrm{ fF}$. We simulated 8 levels per qubit for the harmonic oscillator simulation and 14 Cooper pair levels in the charge simulation.} \end{centering} \label{fig:HOsim} \end{figure} \section{Analytical derivation} \label{app:SWT} To apply the standard machinery of the Schrieffer-Wolff transformation to decouple the states of the coupler from the computational Hilbert space, we must first diagonalize the coupler states.
Exact analytic diagonalization at second order in raising and lowering operators (this neglects anharmonicity) may be performed using the Bogoliubov transformation, \begin{align} \mathbb{H} &= \Psi^{\dagger}K\Psi, \end{align} where $\Psi^{\dagger} = \begin{pmatrix}a_1 & a_1^{\dagger} & a_2 & a_2^{\dagger}\end{pmatrix}$ and \begin{align} K &= \begin{pmatrix} \omega_1 & 0 & g_C-g_L & g_C+g_L\\ 0 & \omega_1 & g_C+g_L & g_C-g_L\\ g_C-g_L & g_C+g_L & \omega_2 & 0\\ g_C+g_L & g_C-g_L & 0 & \omega_2 \end{pmatrix}.\label{eq:K} \end{align} The idea is to perform a similarity transformation to diagonalize $K$ and thereby find a diagonal parametrization of $\mathbb{H}$, which is a redundant construction of the conventional Hamiltonian $H$. Because $[\Psi_i^{\dagger},\Psi_j]=(-1)^i\delta_{i,j}$, a generic similarity transformation produces a set of transformed operators that no longer obey the bosonic commutation relations. We can fix this by introducing the matrix $\sigma_{ij} = (-1)^i\delta_{i,j}$ and applying it to the column vector, so that $[\Psi_i^{\dagger},(\sigma\Psi)_j]=\sigma\sigma = \delta_{i,j}$ and the commutation relations become uniform. The resulting similarity transformation $T$ takes the form \begin{align} \sigma T^{\dagger}\sigma \sigma K T = T^{-1}\sigma K T = \sigma K' \end{align} where $K'$ is the desired diagonal form of $K$ and $\Psi = T\Psi'$; see Altland and Simons, page 72~\cite{altland_simons_2010}. Using this technique, the diagonal form of the coupler operators (neglecting anharmonic terms) can be found by diagonalizing $\sigma K$. The corresponding eigenenergies $\{\omega_{\pm}, -\omega_{\pm}\}$ are given by \begin{align} \omega_{\pm} &= \sqrt{2\bar{\omega}^2-\omega_1\omega_2-4g_Cg_L\pm 2\eta}, \end{align} with $\eta = \sqrt{\delta^2\bar{\omega}^2+(g_C+g_L)^2\omega_1\omega_2-4g_Cg_L\bar{\omega}^2}$, where $\bar{\omega} = (\omega_1+\omega_2)/2$ and $\delta = (\omega_1-\omega_2)/2$.
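The closed-form eigenenergies can be cross-checked by diagonalizing $\sigma K$ numerically, since the symplectic spectrum of a stable quadratic Hamiltonian is exactly $\{\pm\omega_\pm\}$. A sketch with illustrative parameter values (in GHz; chosen here for the check, not taken from the text), writing $\bar{\omega}=(\omega_1+\omega_2)/2$ and $\delta=(\omega_1-\omega_2)/2$:

```python
import numpy as np

# Illustrative parameters (GHz); any values with |g| << omega work.
w1, w2 = 5.0, 6.0
gC, gL = 0.10, 0.05
gm, gp = gC - gL, gC + gL

# K in the basis Psi = (a1, a1^dag, a2, a2^dag), as written above.
K = np.array([[w1, 0.0, gm, gp],
              [0.0, w1, gp, gm],
              [gm, gp, w2, 0.0],
              [gp, gm, 0.0, w2]])
sigma = np.diag([1.0, -1.0, 1.0, -1.0])

# Bogoliubov spectrum: eigenvalues of sigma @ K come in pairs {+w, -w}.
eigs = np.sort(np.abs(np.linalg.eigvals(sigma @ K).real))
w_minus_num, w_plus_num = eigs[0], eigs[-1]

# Closed-form expressions quoted above.
wbar = (w1 + w2) / 2
delta = (w1 - w2) / 2
eta = np.sqrt(delta**2 * wbar**2 + gp**2 * w1 * w2 - 4 * gC * gL * wbar**2)
w_plus = np.sqrt(2 * wbar**2 - w1 * w2 - 4 * gC * gL + 2 * eta)
w_minus = np.sqrt(2 * wbar**2 - w1 * w2 - 4 * gC * gL - 2 * eta)
print(w_minus_num, w_minus, w_plus_num, w_plus)
```

The numerical and closed-form values agree to machine precision for these parameters.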
\subsection{Co-rotating terms} Neglecting terms proportional to $g_C+g_L$ in Eq.~(\ref{eq:K}) allows a compact representation of the approximate eigenvectors $T$, which we apply to derive an analytic form of the effective coupling as mediated by exchange interactions. The approximate eigenenergies are likewise simplified to \begin{align} \omega_{\pm} = \bar{\omega}\pm\sqrt{\delta^2+(g_C-g_L)^2}, \end{align} where $\bar{\omega} = (\omega_1+\omega_2)/2$ and $\delta = (\omega_1-\omega_2)/2$. The corresponding eigenvectors now take the form \begin{subequations} \begin{align} a_- &= -\sqrt{1-\beta^2} a_1 +\beta a_2,\\ a_+ &= \beta a_1 + \sqrt{1-\beta^2} a_2, \end{align} \end{subequations} where ${\beta = A/\sqrt{(g_C-g_L)^2+A^2}}$, with ${A = \delta+\sqrt{\delta^2+(g_C-g_L)^2}}$. If $\delta = 0$ then the coupler transmons are degenerate and therefore fully hybridized, leading to $\beta=1/\sqrt{2}$ as we might expect. Substituting the eigenoperators $a_{\pm}$ for $a_{1,2}$ in Eq.~(\ref{eq:HOfull}) and retaining terms to second order, we obtain \begin{align} H &= H_0 + V \end{align} with \begin{subequations} \begin{align} H_0/\hbar &= \omega_a a_a^{\dagger} a_a + \omega_b a_b^{\dagger} a_b + \omega_- a_-^{\dagger} a_- + \omega_+ a_+^{\dagger} a_+,\\ V/\hbar &= -\sqrt{1-\beta^2}g_a (a_a+a_a^{\dagger})(a_-+a_-^{\dagger})\nonumber\\ & \quad +\beta g_b (a_b+a_b^{\dagger})(a_-+a_-^{\dagger})\nonumber\\ & \quad + \beta g_a (a_a+a_a^{\dagger})(a_++a_+^{\dagger})\nonumber\\ & \quad +\sqrt{1-\beta^2}g_b (a_b+a_b^{\dagger})(a_++a_+^{\dagger}). 
\end{align} \end{subequations} We identify a Schrieffer-Wolff generator $S$ such that $[H_0,S] = V$, \begin{align} S/\hbar =& \frac{-\sqrt{1-\beta^2}g_a}{\Delta_{a-}}(a_a^{\dagger} a_--a_aa_-^{\dagger})+\frac{\beta g_b}{\Delta_{b-}}(a_b^{\dagger} a_--a_ba_-^{\dagger})\nonumber\\ +&\frac{\beta g_a}{\Delta_{a+}}(a_a^{\dagger} a_+-a_aa_+^{\dagger})+\frac{\sqrt{1-\beta^2}g_b}{\Delta_{b+}}(a_b^{\dagger} a_+-a_ba_+^{\dagger})\nonumber\\ +& \frac{-\sqrt{1-\beta^2}g_a}{\Sigma_{a-}}(a_a^{\dagger} a_-^{\dagger}-a_aa_-)+\frac{\beta g_b}{\Sigma_{b-}}(a_b^{\dagger} a_-^{\dagger}-a_ba_-)\nonumber\\ +&\frac{\beta g_a}{\Sigma_{a+}}(a_a^{\dagger} a_+^{\dagger}-a_aa_+)+\frac{\sqrt{1-\beta^2}g_b}{\Sigma_{b+}}(a_b^{\dagger} a_+^{\dagger}-a_ba_+),\nonumber\\ \end{align} where $\Delta_{j\pm} = \omega_j - \omega_{\pm} = \Delta_j \pm \sqrt{\delta^2+(g_C-g_L)^2}$ and, as defined in the main text, $\Delta_j = \omega_j-\bar\omega$. Then \begin{align} H' = e^{S}He^{-S} = H_0 + [S,V]/2 + \mathcal{O}(V^3) \end{align} is the leading order diagonalized Hamiltonian with the pre-factor of the lowest order term $[S,V]/2$ setting the effective coupling \begin{align} g_{\textrm{eff}} = \frac{g_ag_b\beta\sqrt{1-\beta^2}}{2}\Big[&\frac{1}{\Delta_{a+}}+\frac{1}{\Delta_{b+}}+\frac{1}{\Sigma_{a+}}+\frac{1}{\Sigma_{b+}}\nonumber\\ -&\frac{1}{\Delta_{a-}}-\frac{1}{\Delta_{b-}}-\frac{1}{\Sigma_{a-}}-\frac{1}{\Sigma_{b-}}\Big]. \end{align} We may further rearrange terms to obtain the equation quoted in the main text \begin{align} g_{\textrm{eff}} = \sum_{j=a,b} \frac{g_a g_b \left[g_C-g_L+(g_C+g_L)\frac{\Delta_j}{\bar{\omega}}\right]}{2\Delta_j^2 - 2\delta^2 - 2(g_C-g_L)^2}.
\end{align} The ac Stark shifts on the data qubits may be calculated similarly \begin{align} \omega^{\textrm{ac}}_a = \frac{g_a^2}{2}\Big[&\frac{\beta^2}{\Delta_{a+}}+\frac{1-\beta^2}{\Delta_{a-}}-\frac{\beta^2}{\Sigma_{a+}}-\frac{1-\beta^2}{\Sigma_{a-}}\Big]\nonumber\\ \omega^{\textrm{ac}}_a = \frac{g_a^2}{2}\Big[&\frac{\omega_a-\omega_1}{\Delta_{a}^2-\delta^2-(g_C-g_L)^2}+\frac{-\omega_a-\omega_1}{\Sigma_{a}^2-\delta^2-(g_C-g_L)^2}\Big]\nonumber\\ \omega^{\textrm{ac}}_b = \frac{g_b^2}{2}\Big[&\frac{1-\beta^2}{\Delta_{b+}}+\frac{\beta^2}{\Delta_{b-}}-\frac{1-\beta^2}{\Sigma_{b+}}-\frac{\beta^2}{\Sigma_{b-}}\Big]\nonumber\\ \omega^{\textrm{ac}}_b = \frac{g_b^2}{2}\Big[&\frac{\omega_b-\omega_2}{\Delta_{b}^2-\delta^2-(g_C-g_L)^2}+\frac{-\omega_b-\omega_2}{\Sigma_{b}^2-\delta^2-(g_C-g_L)^2}\Big].\nonumber\\ \end{align} \subsection{Counter-rotating terms} Neglecting now the terms proportional to $g_C-g_L$ in Eq.~(\ref{eq:K}), we can calculate the approximate eigenenergies, \begin{align} \omega_{\pm} = \pm\delta+\sqrt{\bar{\omega}^2-(g_C+g_L)^2}, \end{align} and the eigenvectors, \begin{subequations} \begin{align} a_+ &= \alpha a_1 - i\sqrt{1-\alpha^2} a_2^{\dagger},\\ a_- &= \alpha a_2 - i\sqrt{1-\alpha^2} a_1^{\dagger}, \end{align} \end{subequations} with ${\alpha = D/\sqrt{(g_C+g_L)^2+D^2}}$, ${D = \bar{\omega}+\sqrt{\bar{\omega}^2-(g_C+g_L)^2}}$. Note that here we have neglected the direct exchange interaction terms. 
Substituting the transformed operators for $a_{1,2}$ in Eq.~(\ref{eq:HOfull}) leads to the interaction, \begin{align} V/\hbar &= i\sqrt{1-\alpha^2}g_a (a_a+a_a^{\dagger})(a_--a_-^{\dagger})\nonumber\\ &\quad+\alpha g_b (a_b+a_b^{\dagger})(a_-+a_-^{\dagger})\nonumber\\ & \quad+\alpha g_a (a_a+a_a^{\dagger})(a_++a_+^{\dagger})\nonumber\\ & \quad+i\sqrt{1-\alpha^2}g_b (a_b+a_b^{\dagger})(a_+-a_+^{\dagger}), \end{align} which can be transformed using the generator, \begin{align} S/\hbar &= \frac{i\sqrt{1-\alpha^2}g_a}{\Delta_{a-}}(a_a^{\dagger} a_-+a_aa_-^{\dagger})+\frac{\alpha g_b}{\Delta_{b-}}(a_b^{\dagger} a_--a_ba_-^{\dagger})\nonumber\\ +&\frac{\alpha g_a}{\Delta_{a+}}(a_a^{\dagger} a_+-a_aa_+^{\dagger})+\frac{i\sqrt{1-\alpha^2}g_b}{\Delta_{b+}}(a_b^{\dagger} a_++a_ba_+^{\dagger})\nonumber\\ +& \frac{i\sqrt{1-\alpha^2}g_a}{\Sigma_{a-}}(-a_a^{\dagger} a_-^{\dagger}-a_aa_-)+\frac{\alpha g_b}{\Sigma_{b-}}(a_b^{\dagger} a_-^{\dagger}-a_ba_-)\nonumber\\ +&\frac{\alpha g_a}{\Sigma_{a+}}(a_a^{\dagger} a_+^{\dagger}-a_aa_+)+\frac{i\sqrt{1-\alpha^2}g_b}{\Sigma_{b+}}(-a_b^{\dagger} a_+^{\dagger}-a_ba_+),\nonumber\\ \end{align} to obtain the effective qubit-qubit coupling, \begin{align} H_{\textrm{eff}} &= ig_1 (a^{\dagger}_a+a_a)(a_b - a^{\dagger}_b)+ig_2 (a^{\dagger}_b+a_b)(a_a - a^{\dagger}_a) \end{align} where \begin{subequations} \begin{align} g_1 &= \frac{\alpha\sqrt{1-\alpha^2}g_ag_b}{2}\left[\frac{1}{\Delta_{b-}}+\frac{1}{\Delta_{b+}}-\frac{1}{\Sigma_{b-}}-\frac{1}{\Sigma_{b+}}\right],\\ g_2 &= \frac{\alpha\sqrt{1-\alpha^2}g_ag_b}{2}\left[\frac{1}{\Delta_{a-}}+\frac{1}{\Delta_{a+}}-\frac{1}{\Sigma_{a-}}-\frac{1}{\Sigma_{a+}}\right]. \end{align} \end{subequations} \section{Noise analysis} \label{app:noise} \subsection{Coherence} The Gaussian pure dephasing rate due to $1/f$ flux noise is $\Gamma_{\phi}^{E} = \sqrt{A_{\Phi}\ln 2}|\partial\omega/\partial\Phi|$ for Hahn echo measurements.
It depends upon the noise amplitude $\sqrt{A_{\Phi}}$, defined at $\omega/2\pi = 1\text{ Hz}$, of the power spectral density $S(\omega)_{\Phi}=A_{\Phi}/|\omega|$ and the slope of the energy dispersion with flux $\hbar\partial \omega/\partial \Phi$. Careful engineering gives a noise amplitude $\sqrt{A_{\Phi}} \sim 2.5\mu\Phi_0$. On the coupler, the peak-to-peak difference in the frequency dispersion is $\sim 0.6 \text{ GHz}$, giving $\Gamma_{\phi}^{E} \approx 2.5\mu \Phi_0 \sqrt{\ln 2}(2\pi)^2 \times 0.3 \text{ GHz}/\Phi_0 = 1/40\text{ }\mu \text{s}$. The noise on $g_{\textrm{eff}}$ from the flux sensitivity on $g_L$ is then reduced by a factor $g_ag_b/(2\Delta_j^2-2\delta^2)$. Similarly, frequency noise on the qubits from the flux sensitivity on $\bar \omega$ is reduced by a factor of $\sim g_a^2/(2\Delta_j^2-2\delta^2)$. For $g_{\textrm{eff}}=2\pi\times 60\text{ MHz}$, the coupler limits the pure dephasing lifetime to $T_{\phi}^E \sim 400\text{ }\mu\text{s}$. \subsection{Energy relaxation} Energy relaxation of a qubit into its nearest neighbor coupler transmon is approximately given by the Purcell formula. Using the definitions in the main text and supplement, the coupler induces relaxation of qubit a \begin{align} \Gamma_{1,a}^{\textrm{Purcell}} &\approx g_a^2\left(\frac{\Gamma_{1,+}\beta^2}{\Delta_{a+}^2}+\frac{\Gamma_{1,-}(1-\beta^2)}{\Delta_{a-}^2}\right)\nonumber\\ &\sim \frac{\Gamma_{1,1} g_a^2}{(\omega_a-\omega_1)^2}. \end{align} Similarly, for qubit b \begin{align} \Gamma_{1,b}^{\textrm{Purcell}} &\approx g_b^2\left(\frac{\Gamma_{1,-}\beta^2}{\Delta_{b-}^2}+\frac{\Gamma_{1,+}(1-\beta^2)}{\Delta_{b+}^2}\right)\nonumber\\ &\sim \frac{\Gamma_{1,2} g_b^2}{(\omega_b-\omega_2)^2}. \end{align} The energy relaxation rates for coupler transmons 1 and 2 are given by $\Gamma_{1,1}$ and $\Gamma_{1,2}$, respectively.
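Both the dephasing and Purcell estimates above are quick arithmetic, sketched below; the $20\ \mu\text{s}$ coupler transmon lifetime and $g/\Delta = 0.3$ are the illustrative values used in the text:

```python
import math

# Hahn-echo dephasing estimate: Gamma_phi = sqrt(A_Phi * ln 2) * |d omega/d Phi|,
# with sqrt(A_Phi) = 2.5e-6 Phi_0 and slope (2*pi)^2 * 0.3 GHz per Phi_0.
gamma_phi = 2.5e-6 * math.sqrt(math.log(2)) * (2 * math.pi) ** 2 * 0.3e9
t_phi_us = 1e6 / gamma_phi  # ~40 microseconds

# Purcell estimate: Gamma_1a ~ Gamma_11 * (g_a / (omega_a - omega_1))^2,
# with a 20 us coupler transmon lifetime and g/Delta = 0.3.
gamma_11 = 1.0 / 20e-6
t1_us = 1e6 / (gamma_11 * 0.3 ** 2)  # ~222 microseconds, i.e. the ~220 us quoted
print(t_phi_us, t1_us)
```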
While these quantities are not true observables of the system since the coupler transmons can be strongly hybridized, we want to emphasize that qubit a is not very sensitive to relaxation channels local to coupler transmon 2, nor is qubit b sensitive to relaxation channels local to coupler transmon 1. In the case of direct measurement of coupler relaxation rates, or if we expect correlated relaxation processes, $\Gamma_{1,+}$ and $\Gamma_{1,-}$ describe the energy relaxation rate of the upper and lower hybridized states, respectively. \par Plugging in $20\text{ }\mu\text{s}$ as a reasonable lower bound on each coupler transmon's energy relaxation lifetime, the induced $T_{1}$ on a neighboring data qubit is $220\text{ }\mu\text{s}$ for a $g/\Delta$ ratio of $0.3$. \subsection{Coupling a qubit to an open quantum system} We consider coupling qubit a to a bath as mediated by the tunable coupler. In this scenario a deliberate interaction with the bath induces coupler transmon 2 to relax with rate $\Gamma_{1,2}$ into the bath. \begin{align} \Gamma_{1,a} \approx& \Gamma_{1,2} g_a^2\beta^2(1-\beta^2)\left(\frac{1}{\Delta_{a+}^2}+\frac{1}{\Delta_{a-}^2}\right)\nonumber\\ \approx& \frac{\Gamma_{1,2} (g_C-g_L)^2g_a^2}{(\Delta_a^2-\delta^2-(g_C-g_L)^2)^2} \end{align} We see that the coupler isolates the qubit from dissipation on the next-to-nearest-neighbor coupler transmon to fourth order in $g_C-g_L,\,g_a/\Delta_a$. Although $\Gamma_{1,a}$ turns off at $g_C-g_L = 0$, it is difficult to achieve sizeable `on' $\Gamma_a$ values using the coupler in dispersive operation. This weak `on' interaction motivates the alternative approach taken in the main text. \subsection{Energy relaxation into the flux bias line} A critical consideration for choosing an appropriate mutual inductance between the coupler SQuID and its flux bias line is the relaxation rate induced in the coupler by this inductive coupling.
This relaxation can lead to strong correlations if $\delta \ll |g_C-g_L|$. In this circumstance the `bright' state is the one that tunes strongly with flux and will relax, at worst, at the sum of the individual transmon relaxation rates. In the other regime $\delta \gg |g_C-g_L|$, the eigenstates $\ket{\pm}$ are closely approximated by independent transmon eigenstates, such that we can approximately map $\xi_{j}\leftrightarrow \xi_{\pm}$ for $j\in \{1,\,2\}$. Assuming $\dot{\Phi}_e = M \dot{I}$, this allows us to write the effective decay rate as, \begin{align} \Gamma_{1,\pm}^{fb} &\sim \left(\frac{\xi_{\pm}(\Phi_e)M}{\Phi_0}\right)^2 S_{\dot{I}\dot{I}}(\omega)\nonumber\\ &= \frac{2\hbar\xi_{\pm}^2(\Phi_e)M^2\omega_{\pm}^3}{\Phi_0^2 Z_0}, \label{eq:test} \end{align} where in the first line we have used the flux participation ratio as prescribed by the third line of Eq.~(\ref{eq:HOfull}), and in the second line we have assumed that the magnitude of current fluctuations is set by their vacuum expectation value, i.e. $S_{\dot{I}\dot{I}}(\omega) = \omega_{\pm}^2S_{II}(\omega) = 2\hbar\omega_{\pm}^3/Z_0$. \par Given a mutual inductance $M = 16\textrm{ pH}$, capacitive coupling $C_{12}\ll C_1,\,C_2$, dimensionless coupling constant $\xi_{\pm}(0) \sim E_{12}'(0)/E_1'(0)\textrm{, or }E_{12}'(0)/E_2'(0)$, transition frequency $\omega_{\pm}/2\pi = 5\textrm{ GHz}$, and bath impedance of $50\textrm{ Ohms}$, the equation above leads to an estimated $\Gamma_{1,\pm}^{fb}\sim 2\times 10^3\textrm{ s}^{-1}$, before additional low pass filtering of the flux bias. We note that these are worst case calculations since $\xi_{\pm}(\Phi_e)\propto \cos{(2\pi\Phi_e/\Phi_0)}$, which at $\Phi_e^{(0)} \sim 0.25 \Phi_0$ causes the dimensionless coupling constant $\xi(\Phi_e)$ to vanish.
\chapter{Conclusions} \label{chapter_conc} In this thesis, we have studied a class of $f(R)$ gravity theories that feature a nonminimal coupling (NMC) between gravity and the matter fields. These NMC theories feature two functions of the Ricci scalar, $f_1(R)$ and $f_2(R)$. While the former takes the place of the Ricci scalar in the Einstein-Hilbert action, the latter couples directly with the Lagrangian of the matter fields. As a consequence, the on-shell Lagrangian of the matter fields appears explicitly in the equations of motion, in contrast with minimally coupled $f(R)$ gravity. Hence, its correct determination is crucial to the derivation of the dynamics of the gravitational and matter fields. Several different on-shell Lagrangians have been used to describe perfect fluids in the literature, such as $\mathcal{L}_\text{m}^\text{on}= -\rho$ or $\mathcal{L}_\text{m}^\text{on}=p$, but their use in the context of NMC theories of gravity generally leads to different results, contradicting the claim that they describe the same type of fluid. This suggested that a more in-depth analysis of the on-shell Lagrangian of perfect fluids was required, and that has been the central problem tackled in this thesis. This final chapter concludes our work by summarising its main contributions and discussing possible future research directions. \section{Main contributions} We focused on three key points of research: (1) the characterization of the on-shell Lagrangian of the matter fields; (2) the background-level thermodynamics of a Universe filled with perfect fluids composed of solitonic particles of fixed mass and structure; and (3) the derivation of novel constraints on specific NMC gravity models. \subsection{The Lagrangian of perfect fluids} In NMC theories, the minimum action principle results in a set of equations of motion that explicitly feature the on-shell Lagrangian of the matter fields. 
In the case of NMC gravity, this also means that the motion of particles and fluids, and the evolution of the energy-momentum tensor (EMT), will depend on the form of the on-shell Lagrangian of the matter fields. In Chapter \ref{chapter_lag}, we have shown that the correct form of the on-shell Lagrangian depends not only on the fluid's EMT but also on the microscopic properties of the fluid itself. This is one of the key contributions of this thesis. For a perfect fluid that is minimally coupled with gravity and that conserves particle number and entropy, the on-shell Lagrangian can take many different forms that do not change the equations of motion or the EMT. We have shown that this is still the case for a barotropic fluid. The other major contribution in this area is the demonstration that the correct on-shell Lagrangian for a fluid composed of solitonic particles must be given by the trace of the fluid's EMT, \textit{i.e.} $\mathcal{L}_\text{m}^\text{on}= T=T^{\mu\nu}g_{\mu\nu}$. This result is independent of the coupling of the fluid to gravity or to other fields, but is particularly relevant when this coupling is nonminimal, since the on-shell Lagrangian appears explicitly in the equations of motion. In Chapter \ref{chapter_nmc}, we also found that consistency between the evolution of the energy and linear momentum in the context of theories of gravity with an NMC to the matter fields requires the condition $\mathcal L^{\rm on}=T$, leaving no margin for other possibilities for the on-shell Lagrangian of an ideal gas. \subsection{The nonminimal coupling and thermodynamics} In Chapter \ref{chapter_nmc}, we have shown that fluids composed of radiation (such as photons or neutrinos) couple differently to gravity than dust (such as cold dark matter or baryons), since their on-shell Lagrangian density vanishes.
This, together with the well-known nonconservation of the EMT present in NMC theories, can lead to unusual thermodynamic behaviour. In particular, we have found that the nonconservation of the EMT translates into a non-adiabatic expansion of a homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker universe, in stark contrast with general relativity. This in turn implies that the entropy of such a universe is in general not conserved. Moreover, we have shown that Boltzmann's $\mathcal{H}$-theorem may not hold in the context of NMC theories. \subsection{Observational constraints} In Chapter \ref{chapter_constr}, we have constrained $n$-type spectral distortions in the cosmic microwave background (CMB) associated with the NMC to gravity, affecting the normalization of the black-body spectrum. Using data from COBE and WMAP, we constrained the NMC function $f_2$ to vary by only a few parts in $10^5$ from decoupling ($z\sim10^3$) up to the present day. For a power-law NMC of the type $f_2 \propto R^n$, this translates into $|n|\lesssim {\rm few} \times 10^{-6}$. We have also shown that the baryon-to-photon ratio is no longer a constant in NMC gravity after big bang nucleosynthesis (BBN). Using data from the Planck satellite and from computational codes that simulate the process of BBN, we have obtained constraints on a power-law NMC of the type $f_2 \propto R^n$ of $-0.007<n<0.002$ or $-0.002<n<0.003$ (depending on the version of BBN code used), in the redshift range $z\in[10^3,10^9]$. We have also shown that the NMC causes a violation of Etherington's distance-duality relation, which can be verified using data from Type Ia supernovae (SnIa) and baryon acoustic oscillations (BAO) observations. These observations effectively constrain NMC models in the redshift range $z\in[0,1.5]$.
For a power-law NMC of the type $f_2 \propto R^n$, we obtain the marginalized constraint $n=0.013\pm 0.035$, while for an exponential NMC of the type $f_2\propto e^{\beta R}$, we obtain the marginalized constraint $\beta=\left(1.24^{+0.97}_{-1.2}\right)\cdot10^{-6}$. \section{Discussion and future work} As with any research, this thesis is not without limitations. While the fluids described in Chapter \ref{chapter_lag} can cover many different cases, they rely on a few assumptions. Namely, the derivation of the on-shell Lagrangian for the solitonic-particle fluid $\mathcal{L}_\text{m}^\text{on}=T$ assumes that the particles have fixed rest mass and structure. In light of these results, it may be worthwhile to re-examine previous works in the literature in which other forms of the Lagrangian have been assumed for cosmic fluids that are well described by an ideal gas. While some results are expected to hold, particularly when $\mathcal{L}_\text{m}^\text{on}=-\rho$ is used to describe dust, this will not be the case in general. The stability of NMC models is another aspect that was not explored in this thesis. Some models in this class of theories are subject to instabilities, such as the well-known Ostrogradsky and Dolgov-Kawasaki instabilities, and may therefore be unsuitable or undesirable as mechanisms for the explanation of cosmological phenomena. This was not the focus of this thesis but is a topic that requires more attention. Another issue that merits more research is the possible nonconservation of the EMT of matter, and its consequences. It is not immediately clear how the difference in the way fluids couple to gravity can arise from first principles, nor why some fluids (like dust) maintain the conservation of energy-momentum, while others (like radiation) do not. Moreover, since this non-conservation is tied to an NMC with gravity, the possibility of the existence of a purely gravitational EMT in this context may be worthy of consideration.
This idea is still a matter of discussion and debate in GR but has not been explored in the context of NMC theories. Lastly, it is worthy of note that while we have focused on a particular family of $f(R)$ NMC gravity models, many issues we have covered in this thesis are transversal to many other theories, including those that feature an NMC between other fields (such as dark energy) and the Lagrangian of the matter fields. Two clear examples are already given in Section \ref{sec.nmclagrole}, where the dark energy field is coupled to either neutrinos or the electromagnetic field, but the main results of Section \ref{sec.nmclag} are not limited to any particular coupling or gravitational theory. \chapter{Constraints on Nonminimally Coupled $f(R)$ Gravity} \label{chapter_constr} We have shown in Chapter \ref{chapter_nmc} that theories featuring a nonminimal coupling (NMC) between gravity and the matter fields can feature a non-conservation of energy-momentum, which has significant thermodynamic consequences. This non-conservation, along with changes to the equations of motion, also leads to different predictions for both cosmological phenomena and solar system dynamics, and one can use observational data to constrain specific NMC models. In this chapter we will provide an overview of these constraints, summarising existing local constraints in the literature (Section \ref{sec.solar-const}), and detailing the new cosmological constraints obtained in this work from observations, namely the cosmic microwave background (CMB) (Section \ref{sec.cmb}) \cite{Avelino2018}, big-bang nucleosynthesis (BBN) (Section \ref{sec.bbn}) \cite{Azevedo2018a}, type Ia supernovae (SnIa) and baryon acoustic oscillations (BAO) (Section \ref{sec.ddr}) \cite{Azevedo2021} observations. 
Throughout this chapter we will consider the action \begin{equation}\label{eq:actionf1f2_cons} S = \int d^4x \sqrt{-g} \left[\kappa f_1(R)+f_2(R)\mathcal{L}_\text{m} \right] \,, \end{equation} where $f_1(R)$ and $f_2(R)$ are arbitrary functions of the Ricci scalar $R$, $\mathcal{L}_\text{m}$ is the Lagrangian of the matter fields and $\kappa = c^4/(16\pi G)$. We will also consider a flat FLRW metric with line element \begin{equation} \label{eq:line-nmc-flat} ds^2=-dt^2+a^2(t)\left[dr^2 +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right]\,, \end{equation} where $a(t)$ is the scale factor (we set $a(t)=1$ at the present time), $t$ is the cosmic time, and $r$, $\theta$ and $\phi$ are polar comoving coordinates, filled by a collection of perfect fluids, with energy-momentum tensor (EMT) of the form \begin{equation}\label{eq:pf_emt_nmc-cons} T^{\mu\nu}=(\rho+p)U^\mu U^\nu + p g^{\mu\nu}\,, \end{equation} where $\rho$ and $p$ are respectively the proper density and pressure of the fluid, and $U^\mu$ is the 4-velocity of a fluid element, satisfying $U_\mu U^\mu = -1$. Alternatively, we can also write the line element for this metric as \begin{equation} \label{eq:line-nmc-flat-cart} ds^2=-dt^2+a^2(t)d\vb{q}\cdot d\vb{q}\,, \end{equation} where $\vb{q}$ are the comoving Cartesian coordinates. 
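A minimal numerical sanity check of this setup (all values arbitrary, $c=G=1$ units): for the minimally coupled choice $f_1(R)=R$, $f_2(R)=1$, the modified Friedmann equation derived below, $H^2=\frac{1}{6F}\left[FR-\kappa f_1+f_2\rho-6H\dot{F}\right]$ with $F=\kappa f_1'+f_2'\mathcal{L}_\text{m}$, must collapse to the standard general-relativistic result $H^2=\rho/(6\kappa)$:

```python
# GR limit of the modified Friedmann equation:
# H^2 = (1/6F) * [F*R - kappa*f1 + f2*rho - 6*H*Fdot],
# with f1(R) = R, f2(R) = 1  =>  F = kappa*f1' + f2'*L_m = kappa, Fdot = 0.
kappa = 1.0 / (16 * 3.141592653589793)  # kappa = c^4/(16*pi*G) in c = G = 1 units
R, rho, H = 0.7, 0.4, 0.25              # arbitrary test values

f1, f2 = R, 1.0        # minimal coupling choice
F, Fdot = kappa, 0.0   # since f1' = 1 and f2' = 0

h2_mfe = (F * R - kappa * f1 + f2 * rho - 6 * H * Fdot) / (6 * F)
h2_gr = rho / (6 * kappa)
print(h2_mfe, h2_gr)
```

The two values coincide, confirming that the NMC terms drop out in the minimally coupled limit.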
The modified field equations can be obtained from the action \eqref{eq:actionf1f2_cons} \begin{equation}\label{eq:fieldNMC_cons} FG_{\mu\nu}=\frac{1}{2}f_2 T_{\mu\nu} + \Delta_{\mu\nu}F+\frac{1}{2}g_{\mu\nu}\kappa f_1 - \frac{1 }{2}g_{\mu\nu} RF \,, \end{equation} where \begin{equation} \label{eq:F_cons} F=\kappa f'_1+f'_2\mathcal{L}_\text{m}\,, \end{equation} the primes denote a differentiation with respect to the Ricci scalar, $G_{\mu\nu}$ is the Einstein tensor, $\Delta_{\mu\nu}\equiv\nabla_\mu \nabla_\nu-g_{\mu\nu}\square$, with $\square=\nabla_\mu \nabla^\mu$ being the D'Alembertian operator, and $\mathcal{L}_\text{m}$ is the on-shell Lagrangian of the matter fields, which in the case of a perfect fluid composed of point particles is given by the trace of the EMT \begin{equation} \label{eq:Lag_cons} \mathcal{L}_\text{m}= T^{\mu\nu}g_{\mu\nu} = T= 3p-\rho \,. \end{equation} The modified Friedmann equation (MFE) is \begin{equation}\label{eq:fried-f1f2-1-cons} H^2=\frac{1}{6F}\left[FR- \kappa f_1+f_2\rho-6H\dot{F}\right]\,, \end{equation} and the modified Raychaudhuri equation (MRE) is \begin{equation}\label{eq:ray-f1f2-1-cons} 2\dot{H}+3H^2=\frac{1}{2F}\left[FR-\kappa f_1-f_2 p-4H\dot{F}-2\ddot{F}\right] \,. \end{equation} \section{Solar system constraints} \label{sec.solar-const} Much like in the context of $f(R)$ theories in Chapter \ref{chapter_modgrav}, one can also perform a weak-field expansion around a spherical body to derive constraints on NMC gravity \cite{Bertolami2013,Castel-Branco2014,March2017,March2019}. The process is largely the same, and the same conditions apply, extended to the NMC function: \textit{Condition 1:} $f_1(R)$ and $f_2(R)$ are analytical at the background curvature $R=R_0$. \textit{Condition 2:} The pressure of the local star-like object is approximately null, $p\simeq 0$. This implies that the trace of the energy-momentum tensor is simply $T\simeq -\rho$. 
Likewise, the weak-field expansion features Yukawa terms, which can be avoided if one adds a third condition: \textit{Condition 3:} $|m|r\ll1$, where $m$ is the effective mass of the scalar degree of freedom of the theory (defined in Eq. \eqref{eq:NMC-mass-param}) and $r$ is the distance to the local star-like object. The use of the appropriate Lagrangian is critical to the derivation of constraints. In \cite{Bertolami2013,Castel-Branco2014,March2017,March2019} the Lagrangian used for the perfect fluid is $\mathcal{L}_\text{m}=-\rho$, rather than $\mathcal{L}_\text{m}=T$, as we have previously determined. However, the following analysis considers only dust contributions on both the cosmological and local levels, so the general perfect fluid Lagrangian does indeed reduce to $\mathcal{L}_\text{m}=T=-\rho$. Under this assumption, the results for local constraints on NMC gravity presented in the literature should therefore remain valid in light of this thesis. \subsection[Post-Newtonian expansion]{Post-Newtonian expansion} Once again one assumes that the scalar curvature can be expressed as the sum of two components \begin{equation} R(r,t)\equiv R_0(t)+R_1(r) \,, \end{equation} where $R_0(t)$ is the background spatially homogeneous scalar curvature, and $R_1(r)$ is a time-independent perturbation to the background curvature. Since the timescales usually considered in Solar System dynamics are much shorter than cosmological ones, we can usually take the background curvature to be constant, i.e. $R_0= \text{const.}$ We can therefore separate the source for the Ricci scalar into two different components, one cosmological and another local. So the trace of the field equations Eq.
\eqref{eq:fieldNMC_cons} reads \begin{align}\label{eq:NMC-trace-dec} &\left[\kappa f'_1+f'_2\left(\mathcal{L}_\text{m}^\text{cos}+\mathcal{L}_\text{m}^\text{s}\right)\right]R-\kappa f_1 \nonumber\\ &\qquad\qquad+3\square\left[\kappa f'_1+f'_2\left(\mathcal{L}_\text{m}^\text{cos}+\mathcal{L}_\text{m}^\text{s}\right)\right]=\frac{1}{2}f_2\left(T^\text{cos}+T^\text{s}\right)\,, \end{align} where $\mathcal{L}_\text{m}^\text{cos}=T^\text{cos}=-\rho^\text{cos}$ and $\mathcal{L}_\text{m}^\text{s}=T^\text{s}=-\rho^\text{s}$ are the cosmological and local matter contributions, respectively. If one takes into account that $R_1\ll R_0$ and that $R_0$ solves the terms of the expansion of Eq. \eqref{eq:NMC-trace-dec} that are independent of $R_1$, one can write it as \begin{align} \label{eq:NMC-trace-exp} &6\nabla^2\left[\left(\kappa f''_{1,0}+f''_{2,0}\mathcal{L}_\text{m}\right)R_1\right] + \left(-2\kappa f'_{1,0}+f'_{2,0}\mathcal{L}_\text{m}\right)R_1 \nonumber \\ &+2\left(\kappa f''_{1,0}+f''_{2,0}\mathcal{L}_\text{m}\right)R_0 R_1+6\left[\square\left(\kappa f''_{1,0}-f''_{2,0}\rho^\text{cos}\right)-\rho^\text{s}\square f''_{2,0}\right]R_1 \nonumber \\ &\qquad=-(1+f_{2,0})\rho^\text{s}+2f'_{2,0}R_0\rho^\text{s}+6\rho^\text{s}\square f'_{2,0}+6f'_{2,0}\nabla^2\rho^\text{s} \,, \end{align} where $f_{i,0}\equiv f_i(R_0)$. From here one can define a potential \begin{equation} \label{eq:NMC-potential-ppn} U(r)=2\left[\kappa f''_{1,0}+f''_{2,0}\mathcal{L}_\text{m}(r)\right]R_1(r) \,, \end{equation} and a mass parameter $m$ as \begin{equation} \label{eq:NMC-mass-param} m^2 \equiv \frac{1}{3} \left[\frac{2 \kappa f'_{1,0}-f'_{2,0}\mathcal{L}_\text{m}}{2\left(\kappa f''_{1,0}+f''_{2,0}\mathcal{L}_\text{m}\right)}-R_0-\frac{3\square\left(\kappa f''_{1,0}-f''_{2,0}\rho^\text{cos}\right)-3\rho^\text{s}\square f''_{2,0}}{\kappa f''_{1,0}+f''_{2,0}\mathcal{L}_\text{m}}\right] \,. \end{equation} If $|m|r\ll1$, using Eqs. \eqref{eq:NMC-potential-ppn} and \eqref{eq:NMC-mass-param} in Eq.
\eqref{eq:NMC-trace-exp} we obtain \begin{equation} \label{eq:trace-exp-pot-mass} \nabla^2 U - m^2 U = \frac{1}{3}f_{2,0}\rho^\text{s}+\frac{2}{3}f'_{2,0}\rho^\text{s}R_0 +2\rho^\text{s}\square f'_{2,0} +2f'_{2,0}\nabla^2 \rho^\text{s} \,, \end{equation} which outside of the star can be solved for $R_1$ \begin{equation} \label{eq:NMC_R1_sol} R_1 = \frac{\chi}{8\pi\left(f''_{2,0}\rho^\text{cos}-f''_{1,0}\right)}\frac{M}{r}\,, \end{equation} where $M$ is the mass of the star and \begin{equation} \label{eq:NMC-ppn-eta} \chi=-\frac{1}{3}f_{2,0}+\frac{2}{3}f'_{2,0}R_0+2\square f'_{2,0} \,. \end{equation} By considering a flat FLRW metric with a spherically symmetric perturbation \begin{equation} \label{eq:pert-NMC-flrw} ds^2=-\left[1+2\Psi(r)\right] dt^2 + a^2(t)\left\{\left[1+2\Phi(r)\right]dr^2 +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right\} \,, \end{equation} and solving the linearized field equations for $\Psi(r)$ and $\Phi(r)$ with the solution obtained for $R_1$ \eqref{eq:NMC_R1_sol}, one obtains \cite{Bertolami2013} \begin{equation} \label{eq:nmc-ppn-psi} \Psi=-\frac{f_{2,0}+f'_{2,0}R_0}{12\pi\left(f'_{1,0}-f'_{2,0}\rho^\text{cos}\right)}\frac{M}{r}\,, \end{equation} \begin{equation} \label{eq:nmc-ppn-phi} \Phi=\frac{f_{2,0}+4f'_{2,0}R_0+6\square f'_{2,0}}{24\pi\left(f'_{1,0}-f'_{2,0}\rho^\text{cos}\right)}\frac{M}{r}. \end{equation} Comparing Eqs. \eqref{eq:nmc-ppn-psi} and \eqref{eq:nmc-ppn-phi} with the equivalent PPN metric \begin{equation} \label{eq:metric-ppn-nmc} ds^2=-\left(1-\frac{2GM}{r}\right) dt^2 + \left(1+\gamma\frac{2GM}{r}\right)\left(dr^2 +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right) \,, \end{equation} where $\gamma$ is a PPN parameter, one can see that in NMC gravity \begin{equation} \label{eq:nmc-gamma} \gamma=\frac{1}{2}\left[\frac{f_{2,0}+4f'_{2,0}R_0+6\square f'_{2,0}}{f_{2,0}+f'_{2,0}R_0}\right]\,. \end{equation} If $f_2=1$, then $\gamma=1/2$, like in $f(R)$ gravity.
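As a sanity check on Eq. \eqref{eq:nmc-gamma}, the minimally coupled limit can be verified symbolically; a minimal sketch (symbol names are illustrative):

```python
# Sketch: check that the PPN parameter gamma of Eq. (eq:nmc-gamma) reduces to
# 1/2 in the minimally coupled limit f_2 = 1 (so f_{2,0} = 1 and its
# derivatives vanish), recovering the f(R) result. Symbols are illustrative.
import sympy as sp

f20, df20, box_df20, R0 = sp.symbols('f20 df20 box_df20 R0')

# gamma = (1/2) [f_{2,0} + 4 f'_{2,0} R_0 + 6 box f'_{2,0}] / [f_{2,0} + f'_{2,0} R_0]
gamma = sp.Rational(1, 2) * (f20 + 4*df20*R0 + 6*box_df20) / (f20 + df20*R0)

print(gamma.subs({f20: 1, df20: 0, box_df20: 0}))  # 1/2
```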
The tightest bound on this parameter comes from the tracking of the Cassini probe, where it was determined that $\gamma-1= (2.1\pm2.3)\times 10^{-5}$. This in turn can be used to constrain particular NMC models, provided that the linearized limit is valid and that $|m| r\ll 1$ (see \cite{Bertolami2013} for a more in-depth look at this constraint). \subsection{``Post-Yukawa'' expansion} Similarly to $f(R)$ theories, the authors in \cite{Castel-Branco2014} have expanded the previous analysis to include the consideration of the Yukawa terms in the expansion, rather than disregard them entirely by imposing a limit to the mass of the scalar degree of freedom. We will briefly summarize the resulting constraints from the leading order of the expansion, and point the reader to \cite{Castel-Branco2014,March2017,March2019} for a more detailed analysis. The metric used for the spacetime around a spherical central object (star) with mass $M$ and density $\rho$ in this scenario mirrors the $f(R)$ case, in that it is also given by a small perturbation on an asymptotically flat Minkowski metric, and is given by the line element \begin{equation} \label{eq:pert-NMC-mink} ds^2=-\left[1+2\Psi(r)\right] dt^2 +\left\{\left[1+2\Phi(r)\right]dr^2 +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right\} \,. \end{equation} This metric is very similar to the one used for the post-Newtonian expansion \eqref{eq:pert-NMC-flrw}, with the notable exception that one ignores the effects of the background cosmological curvature. In this case, one assumes that the functions $f_1(R)$ and $f_2(R)$ admit Taylor expansions around $R=0$ of the form \begin{equation} \label{eq:yuk-f1} f_1(R)=R+\frac{R^2}{6\mathcal{M}^2}+ \mathcal{O}(R^3) \,, \end{equation} \begin{equation} \label{eq:yuk-f2} f_2(R)=1+2\xi \frac{R}{\mathcal{M}^2}+\mathcal{O}(R^2) \,, \end{equation} where $\mathcal{M}$ is a characteristic mass scale and $\xi$ is a dimensionless parameter.
Solving the trace of the field equations under these assumptions, and ignoring terms of order $\mathcal{O}(c^{-3})$ or smaller, one obtains \begin{equation} \label{eq:trace-nmc-1} \nabla^2 R - \mathcal{M}^2R=\frac{8\pi G}{c^2}\mathcal{M}^2\left[\rho-6\frac{2\xi}{\mathcal{M}^2}\nabla^2\rho\right]\,. \end{equation} The solution for the curvature outside of the star is \cite{Castel-Branco2014} \begin{equation} \label{eq:R-sol-nmc-yuk} R(r)=\frac{2GM}{c^2 r}\mathcal{M}^2 (1-12\xi) A(\mathcal{M},r_s)e^{-\mathcal{M}r}\,, \end{equation} where $r_s$ is the radius of the star and $A(\mathcal{M},r_s)$ is a form factor given by \begin{equation} \label{eq:yuk-form} A(\mathcal{M},r_s) = \frac{4\pi}{\mathcal{M}M}\int_{0}^{r_s} \sinh(\mathcal{M}r)\rho(r)r \, dr \,. \end{equation} One can now determine the solutions for $\Psi$ and $\Phi$ outside the star, obtaining \begin{equation} \label{eq:psi-nmc-yuk} \Psi = -\frac{GM}{c^2 r}\left[1+\left(\frac{1}{3}-4\xi\right)A(\mathcal{M},r_s)e^{-\mathcal{M}r}\right] \,, \end{equation} \begin{equation} \label{eq:phi-nmc-yuk} \Phi = \frac{GM}{c^2 r}\left[1-\left(\frac{1}{3}-4\xi\right)A(\mathcal{M},r_s)e^{-\mathcal{M}r}(1+\mathcal{M}r)\right] \,. \end{equation} Inserting these perturbations back into the metric \eqref{eq:pert-NMC-mink} one can immediately identify an additional Yukawa term in the usual Newtonian potential, which reads \begin{equation} U(r)=-\frac{GM}{r}\left[1+\alpha A(\mathcal{M},r_s)e^{-r/\lambda}\right]\,, \end{equation} where $\alpha= 1/3-4\xi$ and $\lambda = 1/\mathcal{M}$ are the strength and characteristic length of the Yukawa potential. One can then use current constraints \cite{Adelberger2003} on a fifth Yukawa-type force to constrain the parameter pair ($\alpha,\lambda$) or, equivalently, ($\xi,\mathcal{M}$) (see Fig. \ref{fig:yuk}).
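The form factor of Eq. \eqref{eq:yuk-form} can be evaluated numerically; the sketch below assumes, for illustration, a star of uniform density, for which a closed form exists, and checks the $\mathcal{M} r_s \ll 1$ limit $A \to 1$:

```python
# Sketch: numerical evaluation of the form factor A(Mcal, r_s) of Eq. (eq:yuk-form)
# for an illustrative uniform-density star; for constant rho the closed form is
# 3 [x cosh(x) - sinh(x)] / x^3 with x = Mcal * r_s, which tends to 1 as x -> 0.
import numpy as np
from scipy.integrate import quad

def form_factor(Mcal, r_s, M_star):
    """A(Mcal, r_s) for a uniform-density star of mass M_star and radius r_s."""
    rho0 = 3 * M_star / (4 * np.pi * r_s**3)   # constant density (assumption)
    integral, _ = quad(lambda r: np.sinh(Mcal * r) * rho0 * r, 0.0, r_s)
    return 4 * np.pi / (Mcal * M_star) * integral

x = 1e-3                                       # Mcal * r_s << 1 (illustrative)
A = form_factor(Mcal=x, r_s=1.0, M_star=1.0)
A_closed = 3 * (x * np.cosh(x) - np.sinh(x)) / x**3
print(A)  # approximately 1 in this limit
```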
\begin{figure}[!h] \centering \subfloat{\includegraphics[width=0.49\textwidth]{yuk_nano.jpeg}}\hfill \subfloat{\includegraphics[width=0.49\textwidth]{yuk_micro.jpeg}}\par \subfloat{\includegraphics[width=0.49\textwidth]{yuk_cm.jpeg}} \caption[Constraints on the Yukawa force up to the nano scale]{Constraints on the Yukawa force strength $\alpha$ and range $\lambda$ parameters \cite{Adelberger2003}.} \label{fig:yuk} \end{figure} This analysis can of course be extended to a higher-order expansion of the functions $f_1$ and $f_2$. Though the calculations are much more involved, constraints on the expansion parameters have been obtained from the precession of Mercury's orbit \cite{March2017} and from ocean experiments \cite{March2019}. \section{Spectral distortions in the cosmic microwave background} \label{sec.cmb} The cosmic microwave background (CMB) has been the subject of many studies and observational missions ever since it was discovered. Much of this effort concerns the study of small deviations of the CMB from a perfect black-body spectrum, usually categorized as multiple types of spectral distortions, some of which can be gravitational in origin. Naturally, one would expect that a modified theory of gravity could have an impact on these distortions, and this is indeed the case in NMC theories of gravity. Since photons couple differently to the background curvature in NMC theories (when compared to GR), one can derive a new type of spectral distortions, which we dub $n$-type spectral distortions, affecting the normalization of the spectral energy density \cite{Avelino2018}. Let us start by assuming that the CMB has a perfect black body spectrum with temperature $T_{\rm dec}$ at the time of decoupling between baryons and photons (neglecting tiny temperature fluctuations of $1$ part in $10^5$).
The spectral energy density and number density of a perfect black body are given by \begin{equation} \label{eq:black-bod-cons} u(\nu)=\frac{8 \pi h \nu^3}{e^{h \nu/(k_B T)}-1}\,, \qquad n(\nu) = \frac{u(\nu)}{h \nu}\,, \end{equation} respectively, where $h$ is Planck's constant, $k_B$ is Boltzmann's constant, $T$ is the temperature, and $E_\gamma = h \nu$ is the energy of a photon of frequency $\nu$. In the standard scenario, assuming that the universe becomes transparent for $T < T_{\rm dec}$, the CMB radiation retains a black body spectrum after decoupling. This happens because the photon number density evolves as $n_\gamma \propto a^{-3}$ (assuming that the number of photons is conserved) while their frequency is inversely proportional to the scale factor $a$, so that $\nu \propto a^{-1}$. In the case studied in the present chapter, the number of CMB photons is still assumed to be conserved (so that $n_\gamma \propto a^{-3}$). However, recall that the momentum of particles in NMC gravity evolves as determined in Eq. \eqref{eq:momev}, and therefore the energy and frequency of photons also follow \begin{equation} \label{eq:nu-ev} E_\gamma \propto \nu \propto (a f_2)^{-1}\,. \end{equation} Alternatively, one can arrive at the same result by recalling that the equation of motion of a point particle in NMC gravity is given by \begin{equation} \label{eq:geodesic} \frac{du^{\mu} }{ ds}+\Gamma^\mu_{\alpha\beta} u^\alpha u^\beta = \mathfrak{a}^\mu \,, \end{equation} where $\mathfrak{f}^\mu = m\mathfrak{a}^\mu$ is a momentum-dependent four-force, given by \begin{equation} \label{eq:extraforce} \mathfrak{f}^\mu=-m\frac{f'_2}{ f_2}h^{\mu\nu}\nabla_\nu R \, , \end{equation} and $h^{\mu\nu}=g^{\mu\nu}+u^{\mu}u^{\nu}$ is the projection operator. Solving Eqs.
\eqref{eq:geodesic} and \eqref{eq:extraforce} with the metric \eqref{eq:line-nmc-flat} one finds that the components of the three-force on particles $\vb{\mathfrak{f}}=d\vb{p}/dt$ are given by \cite{Avelino2020} (in Cartesian coordinates) \begin{equation} \label{eq:3force} \mathfrak{f}^i = -\frac{d\ln(af_2)}{dt}p^i \,, \end{equation} where $p^i$ are the components of the particle's three-momentum $\vb{p}$, which therefore evolve as \begin{equation} p^i \propto (a f_2)^{-1}\,. \label{eq:momev-cons} \end{equation} Hence, taking into account that $n_\gamma \propto a^{-3} \propto (f_2)^3 \times (af_2)^{-3}$, the spectral energy density at a given redshift $z$ after decoupling may be written as \begin{equation} u(\nu)_{[z]} = \frac{(f_{2[z]})^3}{(f_{2[z_{\rm dec}]})^3} \frac{8 \pi h \nu^3}{e^{h \nu/(k_B T_{[z]})}-1}\,, \end{equation} where \begin{equation} T_{[z]} \equiv T_{[z_{\rm dec}]} \frac{(1+z) f_{2 [z_{\rm dec}]}}{(1+z_{\rm dec})f_{2 [z]}}\,. \end{equation} The evolution of this spectral density is similar to that of a perfect black body (see Eqs. \eqref{eq:black_spectral_red} and \eqref{eq:black_temp_red}), except for the different normalization. Also note that a small fractional variation $\Delta f_2/f_2$ on the value of $f_2$ produces a fractional change in the normalization of the spectral density equal to $3 \Delta f_2/f_2$. The FIRAS (Far InfraRed Absolute Spectrophotometer) instrument onboard COBE (COsmic Background Explorer) measured the spectral energy of the nearly perfect CMB black body spectrum \cite{Fixsen1996,Fixsen2009}. The weighted root-mean-square deviation between the observed CMB spectral radiance and the blackbody spectrum fit was found to be less than $5$ parts in $10^5$ of the peak brightness. Hence, we estimate that $f_2$ can vary by at most a few parts in $10^5$ from the time of decoupling up to the present time.
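The form of the evolved spectrum quoted above can be checked numerically: redshifting each frequency bin of a black body at $T_{\rm dec}$ by $(a f_2)^{-1}$ and diluting the photon number density as $a^{-3}$ reproduces the $(f_{2[z]}/f_{2[z_{\rm dec}]})^3$ normalization exactly (parameter values below are illustrative):

```python
# Sketch: numerical check of the evolved spectral energy density. Photon
# frequencies scale as (a f_2)^{-1} and the photon number density as a^{-3};
# parameter values are illustrative (units with h = k_B = 1).
import numpy as np

h = kB = 1.0

def u_bb(nu, T):
    """Black-body spectral energy density, Eq. (eq:black-bod-cons)."""
    return 8 * np.pi * h * nu**3 / np.expm1(h * nu / (kB * T))

T_dec, f2_dec = 10.0, 1.0        # temperature and f_2 at decoupling (illustrative)
a_ratio, f2_z = 100.0, 1.2       # a/a_dec and f_2 at the later redshift z (illustrative)

s = a_ratio * f2_z / f2_dec      # a photon observed at nu was emitted at nu_dec = s*nu
nu = np.linspace(0.001, 1.0, 200)

# u(nu) = h*nu * n(nu), with n(nu) dnu = n_dec(s*nu) * s dnu * a^{-3}
u_evolved = u_bb(s * nu, T_dec) * a_ratio**-3

# Claimed form: (f_2[z]/f_2[dec])^3 times a black body at T_[z]
T_z = T_dec * f2_dec / (a_ratio * f2_z)
u_claimed = (f2_z / f2_dec)**3 * u_bb(nu, T_z)
print(np.allclose(u_evolved, u_claimed))  # True
```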
This provides a stringent constraint on NMC theories of gravity, independently of any further considerations about the impact of such theories on the background evolution of the Universe. Let us define \begin{equation} \Delta f_2^{i \to f} \equiv \left|f_2(z_f) - f_2(z_i)\right| \,, \end{equation} which is assumed to be much smaller than unity. Here, $z_i$ and $z_f$ are given initial and final redshifts with $z_i > z_f$. Consider a power law model for $f_2$ defined by \begin{equation} \label{eq:model} f_1(R) \sim R\,, \qquad f_2(R) \propto R^n\,, \end{equation} in the redshift range $[z_f,z_i]$, where $n$ is a real number with $|n| \ll 1$ and $f_2 \sim 1$ at all times. In this case \begin{equation} \label{eq:f2var} \Delta f_2^{i \to f} \sim \left| \left(\frac{R(z_f)}{R(z_i)}\right)^n -1\right| \sim 3 \left| n \right| \left|\ln \left( \frac{1+z_f}{1+z_i}\right)\right|\,. \end{equation} Therefore, we have \begin{equation} \left| n \right| \lesssim \frac{\Delta f_2^{\text{CMB} \to 0}}{3} \left[ \ln \left(1+z_\text{CMB}\right) \right]^{-1}\,, \label{eq:nconstraint3} \end{equation} and it is simple to show that this translates into $|n|\lesssim {\rm few} \times 10^{-6}$. \section{Big-bang nucleosynthesis} \label{sec.bbn} Primordial nucleosynthesis may be described by the evolution of a set of differential equations, namely the Friedmann equation, the evolution of baryon and entropy densities, and the Boltzmann equations describing the evolution of the average density of each nuclide and neutrino species. As one could expect, even taking into account experimental values for the reaction cross-sections instead of theoretical derivations from particle physics, the accurate computation of element abundances cannot be done without resorting to numerical algorithms \cite{Wagoner1973,Kawano1992,Smith:1992yy,Pisanti2008,Consiglio2017}.
The prediction of these quantities in the context of NMC theories is beyond the scope of this thesis, but it is worthy of note that these codes require one particular parameter to be set a priori: the baryon-to-photon ratio $\eta$. While in GR the baryon-to-photon ratio is fixed around nucleosynthesis, in general the same does not happen in the context of NMC theories. To show this, recall that the evolution of the density of photons and baryons (which are always non-relativistic from the primordial nucleosynthesis epoch up to the present era) is given by Eq. \eqref{eq:densityevo} as \begin{equation} \label{eq:matradevo} \rho_\gamma = \rho_{\gamma,0} a^{-4}f_2^{-1}\,, \qquad \rho_\text{b} =\rho_{\text{b},0} a^{-3}\,. \end{equation} While the baryon number in a fixed comoving volume is conserved ($n_\text{b}\propto a^{-3}$, where $n_\text{b}$ is the baryon number density), before recombination photons are in thermal equilibrium, so the photon number density is directly related to the temperature by \begin{equation} \label{eq:photntemp} n_\gamma={2\zeta(3)\over \pi^2}T^3\,. \end{equation} Since the photon energy density also relates to the temperature as \begin{equation} \label{eq:photenergdens} \rho_\gamma={\pi^2\over 15}T^4\,, \end{equation} combining Eqs. \eqref{eq:matradevo}, \eqref{eq:photntemp} and \eqref{eq:photenergdens} one obtains that the baryon-to-photon ratio $\eta$ between BBN (at a redshift $z_\text{BBN} \sim 10^9$) and photon decoupling (at a redshift $z_\text{CMB} \sim 10^3$) evolves as \begin{equation} \label{eq:eta} \eta\equiv {n_\text{b}\over n_\gamma}\propto f_2^{3/4}\,, \end{equation} as opposed to the GR result, $\eta=\text{const}$. After photon decoupling the baryon-to-photon ratio is conserved in NMC theories, with the energy of individual photons evolving as $E_\gamma \propto (a f_2)^{-1}$ (as opposed to the standard $E_\gamma \propto a^{-1}$ result).
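The scaling in Eq. \eqref{eq:eta} follows from a one-line symbolic computation; a minimal sketch (proportionality constants dropped):

```python
# Sketch: symbolic check of Eq. (eq:eta). With rho_gamma ~ a^{-4} f_2^{-1},
# T ~ rho_gamma^{1/4}, n_gamma ~ T^3 and n_b ~ a^{-3}, the baryon-to-photon
# ratio scales as f_2^{3/4}. Proportionality constants are dropped.
import sympy as sp

a, f2 = sp.symbols('a f2', positive=True)

rho_gamma = a**-4 / f2                  # Eq. (eq:matradevo), up to constants
T = rho_gamma**sp.Rational(1, 4)        # from Eq. (eq:photenergdens)
n_gamma = T**3                          # from Eq. (eq:photntemp)
n_b = a**-3                             # conserved baryon number

eta = sp.simplify(n_b / n_gamma)
print(eta)  # f2**(3/4)
```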
We will assume that the modifications to the dynamics of the universe with respect to GR are small, in particular to the evolution of $R$ and $H$ with the redshift $z$. This can be seen as a ``best-case'' scenario for the theory, as any significant changes to $R(z)$ and $H(z)$ are expected to worsen the compatibility between the model's predictions and observational data. Hence, in the following, we shall assume that \begin{eqnarray} R &=& 3 H_0^2 \left[ \Omega_{\text{m},0} (1+z)^{3} + 4 \Omega_{\Lambda,0} \right] \nonumber \\ &\sim& 3 H_0^2 \Omega_{\text{m},0} (1+z)^{3} \propto (1+z)^{3}\,, \end{eqnarray} where $\Omega_{\text{m},0} \equiv (\rho_{\text{m},0})/(6 \kappa H_0^2)$ and $\Omega_{\Lambda,0} \equiv (\rho_{\Lambda,0})/(6 \kappa H_0^2)$ are the matter and dark energy density parameters (here dark energy is modelled as a cosmological constant $\Lambda$), and the approximation is valid at all times, except very close to the present time. Let us define \begin{equation} \frac{\Delta \eta}{\eta}^{i \to f} \equiv \frac{\left|\eta(z_f) - \eta(z_i)\right|}{\eta(z_i)} \,, \end{equation} which is assumed to be much smaller than unity. A small relative variation of $\eta$ satisfying Eq. \eqref{eq:eta} corresponds to \begin{equation} \label{eq:etavar} \Delta f_2^{i \to f} \sim \frac43 \frac{\Delta \eta}{\eta}^{i \to f}\,, \end{equation} so that Eq. \eqref{eq:f2var} implies \begin{equation} \left| n \right| \lesssim \frac49 \frac{\Delta \eta}{\eta}^{i \to f} \left[ \ln \left( \frac{1+z_i}{1+z_f}\right) \right]^{-1}\,. \label{eq:nconstraint} \end{equation} There are two main ways of estimating the value of $\eta$ at different stages of cosmological evolution. On one hand, one may combine the observational constraints on the light element abundances with numerical simulations of primordial nucleosynthesis to infer the allowed range of $\eta$.
This is the method used in \cite{Iocco2009}, among others, leading to \begin{equation} \label{eq:etabbnconst} \eta_\text{BBN}=(5.7\pm0.6)\times 10^{-10} \end{equation} at the 95\% credible interval (CI) just after nucleosynthesis (at a redshift $z_\text{BBN} \sim10^9$). More recently, an updated version of the program {\fontfamily{qcr}\selectfont PArthENoPE} ({\fontfamily{qcr}\selectfont PArthENoPE} 2.0), which computes the abundances of light elements produced during BBN, was used to obtain new limits on the baryon-to-photon ratio, at $2\sigma$ \cite{Consiglio2017} \begin{equation} \label{eq:etabbn2const} \eta_\text{BBN}=(6.23^{+0.24}_{-0.28})\times 10^{-10}\,. \end{equation} There is some variation of $\eta$ during nucleosynthesis due to the entropy transfer to photons associated with the $e^\pm$ annihilation. The ratio between the values of $\eta$ at the beginning and the end of BBN is given approximately by a factor of $2.73$ \cite{Serpico2004}. Although the NMC will lead to further changes to this ratio, we will not consider this effect, since it will be subdominant for $|n| \ll 1$. We will therefore use the above standard values obtained for $\eta_\text{BBN}$ immediately after nucleosynthesis to constrain NMC gravity. The baryon-to-photon ratio also affects the acoustic peaks observed in the CMB, generated at a redshift $z_\text{CMB} \sim10^3$. The full-mission Planck analysis \cite{Ade2016} constrains the baryon density $\omega_\text{b} = \Omega_\text{b} \left(H_0/[100 \text{ km s}^{-1}\text{ Mpc}^{-1}]\right)^2$ with the inclusion of BAO, at the 95\% CI, \begin{equation} \label{eq:omegacmb-cons} \omega_\text{b}=0.02229^{+0.00029}_{-0.00027}\, . \end{equation} This quantity is related to the baryon-to-photon ratio via $\eta = 273.7\times10^{-10} \omega_\text{b}$, leading to \begin{equation} \label{eq:etacmb-cons} \eta_\text{CMB}= 6.101^{+0.079}_{-0.074}\times 10^{-10}\,.
\end{equation} Here, we implicitly assume that no significant change to $\eta$ occurs after $z_\text{CMB} \sim10^3$, as shown in Ref. \cite{Avelino2018}. Taking these results into consideration, we shall determine conservative constraints on $n$ using the maximum allowed variation of $\eta$ from $z_\text{BBN} \sim10^9$ to $z_\text{CMB} \sim10^3$, using the appropriate lower and upper limits given by Eqs. \eqref{eq:etabbnconst}, \eqref{eq:etabbn2const} and \eqref{eq:etacmb-cons}. Combining Eqs. \eqref{eq:eta} and \eqref{eq:model} to obtain \begin{equation} \label{eq:etamodel} \eta\propto R^{3n/4}\,, \end{equation} it is easy to see that the sign of $n$ will affect whether $\eta$ is decreasing or increasing throughout the history of the universe, and thus, since $R$ monotonically decreases towards the future, a positive (negative) $n$ will imply a decreasing (increasing) $\eta$. This being the case, for the allowed range in Eq. \eqref{eq:etabbnconst}, we have for positive $n$ \begin{equation} \label{eq:npos} \frac{\Delta \eta}{\eta} = \frac{\left|(6.101 - 0.074) - (5.7 + 0.6)\right|}{5.7 + 0.6} \simeq 0.04 \,, \end{equation} and for negative $n$ \begin{equation} \label{eq:nneg} \frac{\Delta \eta}{\eta} = \frac{\left|(6.101 + 0.079) - (5.7 - 0.6)\right|}{5.7 - 0.6} \simeq 0.21 \,. \end{equation} Therefore we find \begin{equation} \label{eq:nconstr1} -0.007<n<0.002\, , \end{equation} and using the limits given in Eq. \eqref{eq:etabbn2const} \cite{Consiglio2017}, \begin{equation} \label{eq:nconstr2} -0.002<n<0.003\, . \end{equation} In the previous section the NMC has been shown to lead to $n$-type spectral distortions in the CMB, affecting the normalization of the spectral energy density, and for a power-law $f_2\propto R^n$ we obtained $|n|\lesssim {\rm few} \times 10^{-6}$, which is roughly $3$ orders of magnitude stronger than the constraint coming from the baryon-to-photon ratio. Still, this constraint and those given by Eqs.
\eqref{eq:nconstr1} and \eqref{eq:nconstr2} are associated with cosmological observations which probe different epochs and, as such, can be considered complementary: while the former limits an effective value of the power-law index $n$ in the redshift range $[0,10^3]$, the latter is sensitive to its value at higher redshifts in the range $[10^3,10^9]$. Furthermore, NMC theories with a power-law coupling $f_2(R)$ have been considered as a substitute for dark matter in previous works \cite{Bertolami2012,Silva2018}. There it has been shown that $n$ would have to be in the range $-1\leq n \leq -1/7$ to explain the observed galactic rotation curves. However, such values of $n$ are excluded by the present study. \section{Distance duality relation} \label{sec.ddr} Etherington's relation, also known as the distance-duality relation (DDR), directly relates the luminosity distance $d_\text{L}$ and angular-diameter distance $d_\text{A}$ in GR, where they differ only by a specific function of the redshift \begin{equation} \label{eq:DDR_cons} \frac{d_\text{L}}{d_\text{A}}=(1+z)^2\,. \end{equation} Naturally, if a modified gravity theory features different expressions for the luminosity or angular-diameter distances, this relationship may also change. The DDR has therefore recently come into focus given the possibility of performing more accurate tests of Etherington's relation with new cosmological surveys of type Ia supernovae (SnIa) and baryon acoustic oscillations (BAO) \cite{Bassett2004,Ruan2018,Xu2020,Martinelli2020,Lin2021,Zhou2021}, as well as observations from Euclid \cite{Laureijs2011,Astier2014} and gravitational wave observatories \cite{Cai2017}. In this section, we derive the impact of NMC theories on the DDR and use the most recent data available to impose constraints on a broad class of NMC models. \subsection{The distance-duality relation in NMC theories} The luminosity distance $d_\text{L}$ of an astronomical object relates its absolute luminosity $L$, i.e.
its radiated energy per unit time, and its energy flux at the detector $l$, so that they maintain the usual Euclidean relation \begin{equation} \label{eq:apparent luminosity-cons} l = \frac{L}{4\pi d_\text{L}^2}\,, \end{equation} or in terms of the luminosity distance \begin{equation} \label{eq:lum_dist_1_const} d_\text{L} = \sqrt{\frac{L}{4\pi l}}\,. \end{equation} Over a small emission time $\Delta t_\text{em}$ the absolute luminosity can be written as \begin{equation} \label{eq:abs_lum_1_const} L = \frac{N_{\gamma,\text{em}} E_\text{em}}{\Delta t_\text{em}}\,, \end{equation} where $N_{\gamma,\text{em}}$ is the number of emitted photons and $E_\text{em}$ is the average photon energy. An observer at a coordinate distance $r$ from the source will, however, observe an energy flux given by \begin{equation} \label{eq:app_lum_const} l = \frac{N_{\gamma,\text{obs}}E_\text{obs}}{\Delta t_\text{obs} 4\pi r^2}\,, \end{equation} where $N_{\gamma,\text{obs}}$ is the number of observed photons and $E_\text{obs}$ is their average energy. Recall that the energy of a photon in a flat FLRW metric in NMC gravity is given by \begin{equation} \label{eq:photon_freq} E\propto\nu\propto \frac{1}{af_2} = \frac{1+z}{f_2} \, . \end{equation} Note that while the number of photons is conserved, $N_{\gamma,\text{obs}} = N_{\gamma,\text{em}}$, the time that it takes to receive the photons is increased by a factor of $1+z$, $\Delta t_\text{obs}= (1+z)\Delta t_\text{em}$, and as per Eq. \eqref{eq:photon_freq}, their energy is reduced as \begin{equation} \label{eq:energy_obs_const} E_\text{obs} = \frac{E_\text{em}}{1+z}\frac{f_2(z)}{f_2(0)} \,, \end{equation} where $f_2(z)=f_2[R(z)]$ and $f_2(0)=f_2[R(0)]$ are respectively the values of the function $f_2$ at emission and at the present time.
The distance $r$ can be calculated by just integrating over a null geodesic, that is \begin{align} \label{eq:null geodesic_const} ds^2 &= - dt^2 + a(t)^2dr^2 = 0\nonumber \\ \Rightarrow dr &= -\frac{dt}{a(t)} \nonumber \\ \Rightarrow r &= \int_{t_\text{em}}^{t_\text{obs}} \frac{dt}{a(t)}= \int_0^z \frac{dz'}{H(z')} \,. \end{align} Using Eqs. \eqref{eq:abs_lum_1_const}, \eqref{eq:app_lum_const}, \eqref{eq:energy_obs_const} and \eqref{eq:null geodesic_const} in Eq. \eqref{eq:lum_dist_1_const}, we finally obtain \begin{equation} \label{eq:lum_dist_cons} d_\text{L} = (1+z)\sqrt{\frac{f_2(0)}{f_2(z)}}\int_0^z \frac{dz'}{H(z')} \, . \end{equation} In the GR limit $f_2=\text{const.}$, and we recover the standard result. The angular-diameter distance $d_\text{A}$, on the other hand, is defined so that the angular diameter $\theta$ of a source that extends over a proper distance $s$ perpendicularly to the line of sight is given by the usual Euclidean relation \begin{equation} \label{eq:angle-cons} \theta=\frac{s}{d_\text{A}}\,. \end{equation} In a FLRW universe, the proper distance $s$ corresponding to an angle $\theta$ is simply \begin{equation} \label{eq:prop_dist-cons} s= a(t)r\theta = \frac{r\theta}{1+z}\,, \end{equation} where the scale factor has been set to unity at the present time. So the angular-diameter distance is just \begin{equation} \label{eq:ang_dist_cons} d_\text{A}=\frac{1}{1+z}\int_0^z \frac{dz'}{H(z')}\,. \end{equation} Comparing Eqs. \eqref{eq:lum_dist_cons} and \eqref{eq:ang_dist_cons} one finds \begin{equation} \label{eq:mod_DDR} \frac{d_\text{L}}{d_\text{A}}=(1+z)^2 \sqrt{\frac{f_2(0)}{f_2(z)}}\,. \end{equation} Deviations from the standard DDR are usually parametrized by the factor $\upeta$ as \begin{equation} \label{eq:par_DDR} \frac{d_\text{L}}{d_\text{A}}=(1+z)^2 \upeta\,. \end{equation} Constraints on the value of $\upeta$ are derived from observational data for both $d_\text{A}$ and $d_\text{L}$. Comparing Eqs.
\eqref{eq:mod_DDR} and \eqref{eq:par_DDR} one immediately obtains \begin{equation} \label{eq:eta_ddr} \upeta(z) = \sqrt{\frac{f_2(0)}{f_2(z)}}\,, \end{equation} (see also \cite{Minazzoli2014,Hees2014} for a derivation of this result in theories with an NMC between the matter fields and a scalar field). If $f_1=R$, like in GR, any choice of the NMC function apart from $f_2 = 1$ would lead to a deviation from the standard $\Lambda$CDM background cosmology and would therefore require a computation of the modified $H(z)$ and $R(z)$ for every different $f_2$ that is probed. However, it is possible to choose a function $f_1$ such that the cosmological background evolution remains the same as in $\Lambda$CDM. In this case, the Hubble factor is simply \begin{equation} \label{eq:Hubble} H(z)=H_0\left[\Omega_{\text{r},0}(1+z)^4+\Omega_{\text{m},0}(1+z)^3+\Omega_{\Lambda,0}\right]^{1/2} \,, \end{equation} and the scalar curvature $R$ is given by \begin{equation} \label{eq:scalar_curv} R(z)=3H_0^2\left[\Omega_{\text{m},0}(1+z)^3+4\Omega_{\Lambda,0}\right] \,, \end{equation} where $H_0$ is the Hubble constant, and $\Omega_{\text{r},0}$, $\Omega_{\text{m},0}$ and $\Omega_{\Lambda,0}$ are the radiation, matter and cosmological constant density parameters at the present time. The calculation of the appropriate function $f_1$ must be done numerically, by integrating either the MFE \eqref{eq:fried-f1f2-1-cons} or the MRE \eqref{eq:ray-f1f2-1-cons} for $f_1$ with the appropriate initial conditions at $z=0$ (when integrating the MRE, the MFE serves as an additional constraint). Considering that GR is strongly constrained at the present time, the natural choice of initial conditions is $f_1(0)=R(0)$ and $\dot{f}_1(0)=\dot{R}(0)$.
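Under a $\Lambda$CDM background, the DDR deviation implied by Eq. \eqref{eq:eta_ddr} can be evaluated directly; the sketch below assumes, for illustration, a power-law coupling $f_2 \propto R^n$ together with Eq. \eqref{eq:scalar_curv} for $R(z)$ (parameter values are illustrative):

```python
# Sketch: DDR deviation factor upeta(z) = sqrt(f_2(0)/f_2(z)) of Eq. (eq:eta_ddr),
# assuming an illustrative power-law coupling f_2 ~ R^n and a flat LambdaCDM
# background with R(z) from Eq. (eq:scalar_curv).
import numpy as np

def R_over_3H02(z, Om):
    """R(z) / (3 H_0^2), Eq. (eq:scalar_curv) with Omega_Lambda = 1 - Omega_m."""
    return Om * (1 + z)**3 + 4 * (1 - Om)

def upeta(z, n, Om=0.3):
    """sqrt(f_2(0)/f_2(z)) for f_2 proportional to R^n (illustrative parameters)."""
    return (R_over_3H02(0.0, Om) / R_over_3H02(z, Om))**(n / 2.0)

print(upeta(1.0, n=0.1))  # mild deviation below 1
print(upeta(1.0, n=0.0))  # 1.0, the GR limit
```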
Nevertheless, in this section we will consider that significant deviations of $f_2$ from unity are allowed only at relatively low redshift, since CMB and BBN constraints on NMC theories have already constrained $f_2$ to be very close to unity at large redshifts \cite{Avelino2018,Azevedo2018a}. We have verified that the function $f_1(z)$ required for Eqs. \eqref{eq:Hubble} and \eqref{eq:scalar_curv} to be satisfied deviates no more than $3\%$ from the GR prediction $f_1=R$ for $z\lesssim1.5$, for the models investigated in this chapter (using the best-fit parameters in Tables \ref{tab:results_full_power} and \ref{tab:results_full_exp}). \subsection{Methodology and results}\label{sec:results} In \cite{Martinelli2020}, the authors used Pantheon and BAO data to constrain a parametrization of the DDR deviation of the type \begin{equation} \label{eq:par_eta_eps-cons} \upeta(z) = (1+z)^{\epsilon}\,, \end{equation} and obtained, for a constant $\epsilon$, $\epsilon=0.013\pm0.029$ at the 68\% credible interval. Here, we use the same datasets and a similar methodology to derive constraints for specific NMC models. We present a brief description of the methodology for completeness, but refer the reader to \cite{Martinelli2020} for a more detailed discussion. In general, BAO data provides measurements of the ratio $d_z$ (see, for example, \cite{Beutler2011}), defined as \begin{equation} d_z\equiv \frac{r_\text{s}(z_\text{d})}{D_V(z)} \,, \end{equation} where $D_V(z)$ is the volume-averaged distance \cite{Eisenstein2005} \begin{equation} D_V(z)=\left[(1+z)^2 d_\text{A}^2(z) \frac{c z}{H(z)}\right]^{1/3} \,, \end{equation} and $r_\text{s}(z_\text{d})$ is the comoving sound horizon at the drag epoch.
Assuming that the evolution of the Universe is close to $\Lambda$CDM, $r_\text{s}(z_\text{d})$ can be approximated as \cite{Eisenstein1998} \begin{equation} \label{eq:sound_hor} r_\text{s}(z_\text{d}) \simeq \frac{44.5 \ln \left(\frac{9.83}{\Omega_{\text{m},0}h^2}\right)} {\sqrt{1+10(\Omega_{\text{b},0}h^2)^{3/4}}} \,, \end{equation} where $\Omega_{\text{b},0}$ is the baryon density parameter and $h$ is the dimensionless Hubble constant. Here we shall assume that $\Omega_{\text{b},0}h^2=0.02225$, in agreement with the latest \textit{Planck} release \cite{Aghanim2020}. Notice that the BAO observations are used to estimate $d_\text{A}$, which remains unchanged in NMC theories provided that the evolution of $H(z)$ and $R(z)$ is unchanged with respect to the $\Lambda$CDM model. Thus, BAO data will ultimately provide us with constraints on $H_0$ and $\Omega_{\text{m},0}$. The original datasets that we shall consider in the present paper come from the surveys 6dFGS \cite{Beutler2011}, SDSS \cite{Anderson2014}, BOSS CMASS \cite{Xu2012}, WiggleZ \cite{Blake2012}, MGS \cite{Ross2015}, BOSS DR12 \cite{Gil-Marin2016}, DES \cite{Abbott2019}, Ly-$\alpha$ observations \cite{Blomqvist2019}, SDSS DR14 LRG \cite{Bautista2018} and quasar observations \cite{Ata2018}, but the relevant data are conveniently compiled and combined in Appendix A of \cite{Martinelli2020}. Likewise, the luminosity distance can be constrained using SnIa data, via measurements of their apparent magnitude \begin{equation} \label{eq:appar_mag} m(z)=M_0 +5 \log_{10}\left[\frac{d_\text{L}(z)}{\text{Mpc}}\right]+25 \,, \end{equation} or, equivalently, \begin{equation} \label{eq:appar_mag_explicit} m(z) = M_0 - 5\log_{10}(H_0)+ 5\log_{10}\left[\upeta(z)\hat{d}_\text{L}(z)\right] +25 \,, \end{equation} where $M_0$ is the intrinsic magnitude of the supernova and $\hat{d}_\text{L}(z)$ is the GR Hubble-constant-free luminosity distance.
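As a concrete illustration of how these observables are evaluated, the sketch below (in Python, with illustrative parameter values and helper names of our own, not from any survey pipeline) computes $r_\text{s}(z_\text{d})$ from Eq. \eqref{eq:sound_hor}, the BAO ratio $d_z$, and the apparent magnitude of Eq. \eqref{eq:appar_mag} for a flat $\Lambda$CDM background:

```python
import numpy as np

c = 2.998e5          # speed of light in km/s
H0, Om0 = 67.0, 0.3  # illustrative values; flat Universe, Omega_Lambda = 1 - Om0
Obh2 = 0.02225       # Omega_b h^2, the Planck value quoted in the text
h = H0 / 100.0

def hubble(z):
    return H0 * np.sqrt(Om0 * (1.0 + z)**3 + (1.0 - Om0))

def comoving_distance(z, n=4096):
    """Trapezoidal integral of c/H(z') from 0 to z, in Mpc (flat Universe)."""
    zs = np.linspace(0.0, z, n)
    f = c / hubble(zs)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs)))

def r_s_drag():
    """Eq. (sound_hor): fit for the comoving sound horizon at the drag epoch."""
    return 44.5 * np.log(9.83 / (Om0 * h**2)) / np.sqrt(1.0 + 10.0 * Obh2**0.75)

def d_z(z):
    """BAO observable d_z = r_s(z_d) / D_V(z)."""
    dA = comoving_distance(z) / (1.0 + z)
    DV = ((1.0 + z)**2 * dA**2 * c * z / hubble(z))**(1.0 / 3.0)
    return r_s_drag() / DV

def apparent_magnitude(z, M0=-19.3, eta=1.0):
    """Eq. (appar_mag) with d_L = eta * (1+z) * comoving distance;
    M0 = -19.3 is a typical SnIa intrinsic magnitude, used only for illustration."""
    dL = eta * (1.0 + z) * comoving_distance(z)
    return M0 + 5.0 * np.log10(dL) + 25.0
```

For these illustrative parameters $r_\text{s}(z_\text{d})$ comes out close to the usual $\sim 150$ Mpc scale, as expected.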
Note that the intrinsic magnitude $M_0$ is completely degenerate with the Hubble constant $H_0$, and thus simultaneous constraints on both quantities cannot be derived from SnIa data alone. As per \cite{Martinelli2020}, we use the marginalized likelihood expression from Appendix C in \cite{Conley2011}, which takes into account the marginalization of both $M_0$ and $H_0$, whenever possible. Likewise, we use the full 1048-point Pantheon compilation from \cite{Scolnic2018}. For simplicity, we shall consider two NMC models with a single free parameter (the NMC parameter) which is assumed to be a constant in the relevant redshift range ($0<z<1.5$), and assume a flat Universe evolving essentially as $\Lambda$CDM. Since the contribution of radiation to the overall energy density is very small at low redshift, we ignore it, and therefore $\Omega_{\Lambda,0}=1-\Omega_{\text{m},0}$. We use the Markov chain Monte Carlo (MCMC) sampler in the publicly available Python package \texttt{emcee} \cite{Foreman-Mackey2013} to build the posterior likelihoods for the cosmological parameters, $H_0$ and $\Omega_{\text{m},0}$, as well as the NMC parameter, assuming flat priors for all of them. The MCMC chains are then analyzed using the Python package \texttt{GetDist} \cite{Lewis2019}, to calculate the marginalized means and CIs, as well as plots of the 2D contours of the resulting distributions. \subsubsection*{Power Law} \label{subsubsec:powerlaw} Consider a power law NMC function of the type \begin{equation} \label{eq:power_law} f_2\propto R^n \,, \end{equation} where $n$ is the NMC parameter (GR is recovered when $n=0$). Using Eqs. \eqref{eq:scalar_curv} and \eqref{eq:power_law} in Eq. \eqref{eq:eta_ddr}, one obtains \begin{equation} \label{eq:eta_power} \upeta(z;n,\Omega_{\text{m},0}) = \left[\frac{\Omega_{\text{m},0}+4(1-\Omega_{\text{m},0})}{\Omega_{\text{m},0}(1+z)^3+4(1-\Omega_{\text{m},0})}\right]^{n/2} \,.
\end{equation} The marginalized 68\% CI results can be found in Table \ref{tab:results_full_power}, and the 2D distributions for $n$ and $\Omega_{\text{m},0}$ are displayed in Fig.~\ref{fig:n_Om} (see also Fig. \ref{fig:tri_n_Tot} for the remaining distribution plots). A reconstruction of Eq. \eqref{eq:eta_power} is also shown in Fig.~\ref{fig:eta_power_law}. Note that Eq. \eqref{eq:eta_power} implies that in the power-law case $\upeta(z)$ only depends on the parameters $n$ and $\Omega_{\text{m},0}$, which are not completely degenerate. Therefore, SnIa data alone are able to constrain both of these parameters. However, since BAO data constrain both $H_0$ and $\Omega_{\text{m},0}$, we are able to combine the two datasets to significantly improve the constraints on $n$ and $\Omega_{\text{m},0}$. \begin{table} \centering \caption[Best-fit values and marginalized means and limits on cosmological and power-law NMC parameters from BAO and SnIa]{Best-fit values and marginalized means, 68\%, 95\% and 99\% CI limits obtained from currently available data on the cosmological parameters $\Omega_{\text{m},0}$ and $H_0$ (in units of km s$^{-1}$ Mpc$^{-1}$) and on the NMC parameter $n$ (dimensionless).} \label{tab:results_full_power} \bgroup \def\arraystretch{1.2} \begin{tabular}{lc|ccccc} \hline\hline Parameter & Probe & Best fit & Mean & 68\% & 95\% & 99\% \\ \hline & BAO & $66.4$ & $66.8$ & $^{+1.2}_{-1.4}$ & $^{+2.7}_{-2.5}$ & $^{+3.7}_{-3.2}$ \\ {\boldmath$H_0$} & SnIa & \multicolumn{5}{c}{unconstrained} \\ & SnIa+BAO & $66.0$ & $66.1$ & $\pm 1.2$ & $^{+2.5}_{-2.3}$ & $^{+3.4}_{-3.0}$ \\ \hline & BAO & $0.291$ & $0.300$ & $^{+0.027}_{-0.035}$ & $^{+0.064}_{-0.060}$ & $^{+0.094}_{-0.071}$ \\ {\boldmath$\Omega_{\text{\bf m},0}$} & SnIa & $0.181$ & $0.191$ & $^{+0.037}_{-0.061}$ & $^{+0.11}_{-0.094}$ & $^{+0.18}_{-0.10}$ \\ & SnIa+BAO & $0.276$ & $0.279$ & $^{+0.024}_{-0.030}$ & $^{+0.054}_{-0.052}$ & $^{+0.079}_{-0.062}$ \\ \hline & BAO & \multicolumn{5}{c}{unconstrained} \\
{\boldmath$n$} & SnIa & $0.178$ & $0.184$ & $^{+0.092}_{-0.15}$ & $^{+0.26}_{-0.24}$ & $^{+0.45}_{-0.26}$ \\ & SnIa+BAO & $0.014$ & $0.013$ & $\pm 0.035$ & $^{+0.071}_{-0.066}$ & $^{+0.097}_{-0.085}$\\ \hline\hline \end{tabular} \egroup \end{table} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{n_Om.pdf} \caption[Constraints on the power law parameter $n$ and $\Omega_{\text{m},0}$]{2D contours on the power law parameter $n$ and $\Omega_{\text{m},0}$ using data from BAO (blue), SnIa (yellow) and the combination of the two (red). The darker and lighter concentric regions represent the 68\% and 95\% credible intervals, respectively. Colour-blind friendly colour-scheme from \cite{Tol}.} \label{fig:n_Om} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{tri_n_Tot.pdf} \caption[Constraints on the power law parameter $n$, $H_0$ and $\Omega_{\text{m},0}$]{Constraints on the power law parameter $n$, $H_0$ and $\Omega_{\text{m},0}$ using combined data from BAO and SnIa. The darker and lighter regions represent the 68\% and 95\% credible intervals, respectively.} \label{fig:tri_n_Tot} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{eta_power_law.pdf} \caption[Reconstruction of $\upeta(z)$ for the power law NMC model]{Reconstruction of $\upeta(z)$ for the power law NMC model from combined BAO and SnIa data. The dashed line represents the GR prediction $\upeta=1$, while the solid red line represents the mean value of $\upeta$ at every redshift. The orange (darker) and yellow (lighter) contours represent the 68\% and 95\% credible intervals, respectively.} \label{fig:eta_power_law} \end{figure} The combined SnIa and BAO datasets constrain the NMC parameter to $n=0.013\pm 0.035$ (68\% CI). 
While this constraint falls short of the ones previously obtained from the black-body spectrum of the CMB, $|n|\lesssim \text{few}\times 10^{-6}$, or from BBN, $-0.002<n<0.003$, the present results are again complementary as they more directly constrain the value of $n$ at much smaller redshifts. Note that this still rules out NMC models designed to mimic dark matter, as these would require a power law with exponent in the range $-1\leq n \leq -1/7$ \cite{Bertolami2012,Silva2018}. \subsubsection*{Exponential} \label{subsubsec:exp} Consider now an exponential NMC function, \begin{equation} \label{eq:exp_f2} f_2\propto e^{\beta R} \,, \end{equation} where $\beta$ is the NMC parameter, with dimensions of $R^{-1}$ (GR is recovered when $\beta=0$). Using Eqs. \eqref{eq:scalar_curv} and \eqref{eq:exp_f2} in Eq. \eqref{eq:eta_ddr}, one obtains \begin{equation} \label{eq:eta_exp} \upeta(z;\beta,\Omega_{\text{m},0}, H_0) = \exp\left[\frac{3}{2}\beta H_0^2 \Omega_{\text{m},0} (1-(1+z)^3)\right] \,. \end{equation} Note that $\upeta$ now depends on all three free parameters, $\beta$, $\Omega_{\text{m},0}$ and $H_0$. Furthermore, since $H_0$ is now also degenerate with $\beta$ and $\Omega_{\text{m},0}$, we can no longer analytically marginalize over $H_0$, and SnIa data alone cannot be used to derive useful constraints on any of these parameters. By combining the BAO and SnIa datasets, however, one is able to break this degeneracy, and derive constraints on all three parameters. The marginalized results can be found in Table \ref{tab:results_full_exp}, and the 2D distributions for $\beta$ and $\Omega_{\text{m},0}$ can be found in Fig.~\ref{fig:beta_Om} (see also Fig.~\ref{fig:tri_beta_Tot} for the remaining distribution plots). A reconstruction of Eq. \eqref{eq:eta_exp} is also shown in Fig.~\ref{fig:eta_exp}. 
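Both closed-form deviations, Eqs. \eqref{eq:eta_power} and \eqref{eq:eta_exp}, are straightforward to evaluate numerically; the following Python sketch uses the marginalized means of Tables \ref{tab:results_full_power} and \ref{tab:results_full_exp} purely for illustration:

```python
import numpy as np

def eta_power(z, n, Om0):
    """Eq. (eta_power): power-law NMC, f2 proportional to R^n."""
    return ((Om0 + 4.0 * (1.0 - Om0)) /
            (Om0 * (1.0 + z)**3 + 4.0 * (1.0 - Om0)))**(n / 2.0)

def eta_exp(z, beta, Om0, H0):
    """Eq. (eta_exp): exponential NMC, f2 proportional to exp(beta R);
    beta in km^-2 s^2 Mpc^2, H0 in km s^-1 Mpc^-1."""
    return np.exp(1.5 * beta * H0**2 * Om0 * (1.0 - (1.0 + z)**3))

# Marginalized means from the SnIa+BAO rows of the tables, for illustration only:
print(eta_power(1.0, n=0.013, Om0=0.279))               # slightly below unity for n > 0
print(eta_exp(1.0, beta=1.24e-6, Om0=0.268, H0=65.7))   # likewise below unity for beta > 0
```

Both functions reduce to the GR value $\upeta=1$ at $z=0$ and for a vanishing NMC parameter, as they must.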
\begin{table} \centering \caption[Best-fit values and marginalized means and limits on cosmological and exponential NMC parameters from BAO and SnIa]{Best-fit values and marginalized means, 68\%, 95\% and 99\% CI limits obtained from currently available data on the cosmological parameters $\Omega_{\text{m},0}$ and $H_0$ (in units of km s$^{-1}$ Mpc$^{-1}$) and on the NMC parameter $\beta$ (in units of km$^{-2}$ s$^2$ Mpc$^2$).} \label{tab:results_full_exp} \bgroup \def\arraystretch{1.2} \begin{tabular}{lc|ccccc} \hline\hline Parameter & Probe & Best fit & Mean & 68\% & 95\% & 99\% \\ \hline & BAO & $66.4$ & $66.8$ & $^{+1.2}_{-1.4}$ & $^{+2.7}_{-2.5}$ & $^{+3.7}_{-3.2}$ \\ {\boldmath$H_0$} & SnIa & \multicolumn{5}{c}{unconstrained} \\ & SnIa+BAO & $65.7$ & $65.7$ & $\pm 1.0$ & $^{+2.1}_{-2.0}$ & $^{+3.4}_{-3.0}$ \\ \hline & BAO & $0.291$ & $0.300$ & $^{+0.027}_{-0.035}$ & $^{+0.064}_{-0.060}$ & $^{+0.094}_{-0.071}$ \\ {\boldmath$\Omega_{\text{\bf m},0}$} & SnIa & \multicolumn{5}{c}{unconstrained} \\ & SnIa+BAO & $0.268$ & $0.268$ & $\pm0.019$ & $^{+0.038}_{-0.036}$ & $^{+0.052}_{-0.046}$ \\ \hline & BAO & \multicolumn{5}{c}{unconstrained} \\ {\boldmath$\beta\cdot10^6$} & SnIa & \multicolumn{5}{c}{unconstrained} \\ & SnIa+BAO & $1.18$ & $1.24$ & $^{+0.97}_{-1.2}$ & $^{+2.2}_{-2.1}$ & $^{+3.3}_{-2.5}$ \\ \hline\hline \end{tabular} \egroup \end{table} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{beta_Om.pdf} \caption[Constraints on the exponential parameter $\beta$ and $\Omega_{\text{m},0}$]{2D contours on the exponential parameter $\beta$ and $\Omega_{\text{m},0}$ using data from BAO (blue) and the combination of SnIa and BAO (red).
The darker and lighter concentric regions represent the 68\% and 95\% credible intervals, respectively.} \label{fig:beta_Om} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{tri_beta_Tot.pdf} \caption[Constraints on the exponential parameter $\beta$, $H_0$ and $\Omega_{\text{m},0}$]{Constraints on the exponential parameter $\beta$, $H_0$ and $\Omega_{\text{m},0}$ using combined data from BAO and SnIa. The darker and lighter regions represent the 68\% and 95\% credible intervals, respectively.} \label{fig:tri_beta_Tot} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{eta_exponential.pdf} \caption[Reconstruction of $\upeta(z)$ for the exponential NMC model]{Reconstruction of $\upeta(z)$ for the exponential NMC model from combined BAO and SnIa data. The dashed line represents the GR prediction $\upeta=1$, while the solid red line represents the mean value of $\upeta$ at every redshift. The orange (darker) and yellow (lighter) contours represent the 68\% and 95\% credible intervals, respectively.} \label{fig:eta_exp} \end{figure} The combined SnIa and BAO datasets constrain the NMC parameter to $\beta=\left(1.24^{+0.97}_{-1.2}\right)\cdot10^{-6}$ (68\% CI), in units of km$^{-2}$ s$^2$ Mpc$^2$. Once again this result complements the one found for the same function using the method presented in \cite{Azevedo2018a} for the variation of the baryon to photon ratio, $|\beta|\lesssim10^{-28}$, as they constrain the same parameter in significantly different redshift ranges. Also notice that while the marginalized results do not contain the GR limit $\beta=0$ at the 68\% CI, that limit is contained in both the marginalized 95\% CI, $\beta=\left(1.2^{+2.2}_{-2.1}\right)\cdot10^{-6}$, and the 2D 68\% credible region. Future observations from LSST and the Euclid DESIRE survey \cite{Laureijs2011,Astier2014} will provide more data points in the range $z\in[0.1, 1.6]$. 
For the $\epsilon$ parametrization used in \cite{Martinelli2020}, this will result in an improvement in the constraint of about one order of magnitude. If this is the case, one could expect a corresponding improvement in the NMC parameter constraints. Third-generation gravitational-wave observatories will also be able to provide data points at even higher redshift (up to $z\sim5$), which will serve as an independent and complementary probe \cite{Cai2017}. \chapter{The Standard Model of Cosmology} \label{chapter_intro} Albert Einstein's general relativity (GR) remains the most successful theory of gravitation, boasting an enormous body of experimental evidence \cite{Will2014}, from the accurate prediction of Mercury's orbit to the more recent detection of gravitational waves \cite{Abbott2016}. Despite this backing, when coupled only with baryonic matter, GR still fails to account for the rotational speeds of galaxies and offers little explanation for the accelerated expansion of the Universe. Namely, the rotational speed and mass of galaxies as predicted by GR do not match observations, indicating the presence of ``dark'' matter that does not interact electromagnetically \cite{Clowe2006,Bertone2005}. Moreover, in the past two decades, observations of supernovae have signalled that the Universe is presently expanding at an accelerating rate \cite{Carroll2001}. Although an accelerated expansion via a cosmological constant is a possible scenario in GR, the origin of this acceleration is still unknown. For example, if one attempts to connect the small energy density required to explain such acceleration with the much larger vacuum energy predicted by quantum field theory, one faces a discrepancy of about 120 orders of magnitude.
The $\Lambda$CDM model was formulated to account for these galactic and cosmological dynamics. It consists of a Universe evolving under GR with two additional components: uniform dark energy with a negative equation of state (EOS) parameter, represented by a cosmological constant $\Lambda$ and responsible for the accelerated expansion, and cold dark matter (CDM), a non-baryonic type of matter with at most a vanishingly small non-gravitational interaction with the other matter fields, responsible for the missing mass in galaxies. This model is often supplemented by an inflationary scenario in the early Universe that attempts to provide a solution to the horizon, flatness and relic problems. In this chapter, we will give an overview of the standard model of cosmology and cover the observational constraints most relevant to this thesis. Unless otherwise stated, in the remainder of this work we will use units such that $c=\hbar=k_\text{B}=1$, where $c$ is the speed of light, $\hbar$ is the reduced Planck constant and $k_\text{B}$ is Boltzmann's constant. \section{General relativity}\label{subsec:GR} In 1915, Einstein arrived at his theory of general relativity, which can be derived from the Einstein-Hilbert action \begin{equation} \label{eq:actionGR} S = \int d^4x \sqrt{-g} \left[\kappa(R-2\Lambda)+ \mathcal{L}_\text{m} \right]\,, \end{equation} where $g$ is the determinant of the metric tensor, with components $g_{\mu\nu}$, $R$ is the Ricci scalar, $\mathcal{L}_{\rm m}$ is the Lagrangian density of the matter fields, $\Lambda$ is the cosmological constant, and $\kappa = c^4/(16\pi G)$, with $c$ the speed of light in vacuum and $G$ Newton's gravitational constant. For the remainder of this work, we will refer to Lagrangian densities $\mathcal{L}$ as simply ``Lagrangians''.
Assuming a Levi-Civita connection, the Einstein field equations can be derived by requiring that the variation of the action with respect to the components of the inverse metric tensor $g^{\mu\nu}$ vanishes, which yields \begin{equation} \label{eq:fieldGR} G_{\mu\nu} +\Lambda g_{\mu\nu}= \frac{1}{2\kappa}T_{\mu\nu}\,, \end{equation} where $G_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}$ is the Einstein tensor, $R_{\mu\nu}$ is the Ricci tensor and $T_{\mu\nu}$ are the components of the matter energy-momentum tensor (EMT), given by \begin{equation} \label{eq:energy-mom} T_{\mu\nu} = - \frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\mathcal{L}_\text{m}\right)}{\delta g^{\mu\nu}}\,. \end{equation} Taking the trace of the Einstein field equations, one obtains \begin{equation} \label{eq:EFE-trace} R = 4\Lambda -\frac{1}{2\kappa}T \,, \end{equation} where $T=T_{\mu\nu}g^{\mu\nu}$ is the trace of the EMT. One can write the field equations in an equivalent alternative form \begin{equation} \label{eq:fieldGR-alt} R_{\mu\nu}-\Lambda g_{\mu\nu} = \frac{1}{2\kappa}\left(T_{\mu\nu}-\frac{1}{2}Tg_{\mu\nu}\right) \,. \end{equation} Taking the covariant derivative of Eq. \eqref{eq:fieldGR} and using the second Bianchi identity \begin{equation} \label{eq:Bianchi} R_{\alpha\beta\mu\nu;\sigma}+ R_{\alpha\beta\sigma\mu;\nu}+R_{\alpha\beta\nu\sigma;\mu}=0\,, \end{equation} one can obtain a conservation law for the energy-momentum tensor \begin{equation} \label{eq:conservGR} \nabla^\mu T_{\mu\nu} = 0\,. \end{equation} The standard model of cosmology is built around the assumption that the Universe is homogeneous and isotropic on cosmological scales, \textit{i.e.} the Cosmological Principle.
With this consideration in mind, the Universe can be described by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, represented by the line element \begin{equation} \label{eq:line} ds^2=-dt^2+a^2(t)\left[\frac{dr^2}{1-kr^2} +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right]\,, \end{equation} where $a(t)$ is the scale factor, $k$ is the spatial curvature of the Universe, $t$ is the cosmic time, and $r$, $\theta$ and $\phi$ are polar comoving coordinates. Current observational data strongly suggest that the Universe is spatially flat, \textit{i.e.} $k=0$, or very nearly so, and this is usually taken as an assumption when spatial curvature is not important. As a source term for the Einstein field equations, it is often considered that the matter content of the Universe on sufficiently large scales is well described as a collection of perfect fluids, with EMT of the form \begin{equation}\label{eq:pf_emt} T^{\mu\nu}=(\rho+p)U^\mu U^\nu + p g^{\mu\nu}\,, \end{equation} where $\rho$ and $p$ are respectively the density and pressure of the fluid, and $U^\mu$ are the components of the 4-velocity of a fluid element, satisfying $U_\mu U^\mu = -1$. The pressure and density of the fluid are related by the equation of state, $p=w\rho$, where $w$ is the EOS parameter. For non-relativistic matter, one has $p_\text{m}=0$ and $w_\text{m}=0$, while for radiation $p_\text{r}=\rho_\text{r}/3$ and $w_\text{r}=1/3$. In the case of a constant density, such as the cosmological constant, one would require $w_\Lambda = -1$, so that $p_\Lambda=-\rho_{\Lambda}$. Using Eqs. \eqref{eq:conservGR} and \eqref{eq:pf_emt} one can obtain the continuity equation for the perfect fluid \begin{equation} \label{eq:contin} \dot{\rho}+3H(\rho+p)=0\,, \end{equation} where $H\equiv\dot{a}/a$ is the Hubble parameter.
If $w=\text{const.}$, one can find the general solution for $\rho$ via direct integration \begin{equation} \label{eq:density} \rho(t)=\rho_0 \, a(t)^{-3(1+w)}\,, \end{equation} where $\rho_0$ is the value of the energy density at time $t_0$ and $a(t_0)\equiv 1$. We will therefore consider three different components for the energy density in the Universe: a matter term $\rho_{\rm m}$ composed of both baryonic and cold dark matter, a radiation term $\rho_{\rm r}$ composed essentially of photons, and a dark energy term $\rho_\Lambda$ which will be expressed by a cosmological constant. Using Eq. \eqref{eq:density}, these components evolve as, respectively, \begin{align} \label{eq:densities} \rho_{\rm m}(t)&=\rho_{{\rm m},0} \, a(t)^{-3}\,, \nonumber \\ \rho_{\rm r}(t)&=\rho_{{\rm r},0} \, a(t)^{-4}\,, \nonumber \\ \rho_\Lambda(t)&=\rho_{\Lambda,0} = 2\kappa\Lambda\,. \end{align} Having defined both a metric and a source term for the field equation, one can now write the explicit forms of its $tt$ and $ii$ components as the well-known Friedmann and Raychaudhuri equations \begin{equation} \label{eq:friedGR} H^2=\frac{1}{ 6\kappa}(\rho_{\rm m}+\rho_{\rm r})-\frac{k}{a^2}+\frac{\Lambda }{3}\,, \end{equation} \begin{equation} \label{eq:rayGR} 2\dot{H}+3H^2=-\frac{1}{2\kappa}p_{\rm r}-\frac{k}{a^2}+\Lambda\,, \end{equation} respectively, where only the radiation pressure appears since $p_\text{m}=0$, and for a spatially flat Universe ($k=0$) the Ricci scalar reads \begin{equation} \label{eq:Ricci_flat} R=6\left(2H^2+\dot{H}\right)\,. \end{equation} It is sometimes useful to recast the scale factor $a(t)$ as a function of the redshift $z$ that photons experience when propagating through spacetime.
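The scalings in Eq. \eqref{eq:densities} are just Eq. \eqref{eq:density} for $w=0$, $1/3$ and $-1$; a quick numerical check in Python (using $d/dt = aH\,d/da$ to rewrite Eq. \eqref{eq:contin} in terms of the scale factor) confirms that each component satisfies the continuity equation:

```python
def rho(a, w, rho0=1.0):
    """Eq. (density): rho(a) = rho0 * a^{-3(1+w)} for constant w."""
    return rho0 * a**(-3.0 * (1.0 + w))

def continuity_residual(a, w, eps=1e-6):
    """Continuity equation (contin) with a as the evolution variable,
    d(rho)/da + 3(1+w) rho/a = 0, checked by central finite differences."""
    drho_da = (rho(a + eps, w) - rho(a - eps, w)) / (2.0 * eps)
    return drho_da + 3.0 * (1.0 + w) * rho(a, w) / a

# matter (w=0), radiation (w=1/3) and a cosmological constant (w=-1):
for w in (0.0, 1.0 / 3.0, -1.0):
    assert abs(continuity_residual(0.5, w)) < 1e-6
```

The residual vanishes (up to finite-difference error) for all three equations of state, as expected from the analytic solution.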
Light travels along null geodesics \begin{equation} \label{eq:null geodesic} ds^2 = - dt^2 + a(t)^2\frac{dr^2 }{1-kr^2} = 0 \,, \end{equation} therefore light that leaves a source at comoving distance $r_\text{em}$ at time $t_\text{em}$ will arrive at the observer $r_\text{obs}=0$ at a time $t_\text{obs}$, given by \begin{equation} \label{eq:null geodesic2} \int_{t_\text{em}}^{t_\text{obs}} \frac{dt}{a(t)}= \int_0^{r_\text{em}} \frac{dr}{\sqrt{1-kr^2}} \,. \end{equation} Suppose that we observe a signal that is emitted over a time $\delta t_\text{em}$, and received over a time $\delta t_\text{obs}$. The r.h.s. of Eq. \eqref{eq:null geodesic2} remains the same regardless of the limits of the integral on the l.h.s., so we can write \begin{align} \label{eq:null geodesic3} &\int_{t_\text{em}+\delta t_\text{em}}^{t_\text{obs}+\delta t_\text{obs}} \frac{dt}{a(t)} = \int_{t_\text{em}}^{t_\text{obs}} \frac{dt}{a(t)} \nonumber\\ \Rightarrow&\int_{t_\text{em}+\delta t_\text{em}}^{t_\text{obs}+\delta t_\text{obs}} \frac{dt}{a(t)} = \int_{t_\text{em}}^{t_\text{em}+\delta t_\text{em}} \frac{dt}{a(t)} +\int_{t_\text{em}+\delta t_\text{em}}^{t_\text{obs}} \frac{dt}{a(t)} \nonumber \\ \Rightarrow&\int_{t_\text{em}+\delta t_\text{em}}^{t_\text{obs}+\delta t_\text{obs}} \frac{dt}{a(t)} +\int^{t_\text{em}+\delta t_\text{em}}_{t_\text{obs}} \frac{dt}{a(t)}= \int_{t_\text{em}}^{t_\text{em}+\delta t_\text{em}} \frac{dt}{a(t)} \nonumber \\ \Rightarrow&\int_{t_\text{obs}}^{t_\text{obs}+\delta t_\text{obs}} \frac{dt}{a(t)} = \int_{t_\text{em}}^{t_\text{em}+\delta t_\text{em}} \frac{dt}{a(t)} \, . \end{align} If we assume that the intervals $\delta t_\text{obs/em}$ are much smaller than the timescale of the variation of $a(t)$, then $a(t_\text{em})= a(t_\text{em}+\delta t_\text{em})$, and likewise for $a(t_\text{obs})$. Therefore we can rewrite Eq. 
\eqref{eq:null geodesic3} as \begin{align} \label{eq:null geodesic4} &\frac{1}{a(t_\text{obs})}\int_{t_\text{obs}}^{t_\text{obs}+\delta t_\text{obs}} dt = \frac{1}{a(t_\text{em})}\int_{t_\text{em}}^{t_\text{em}+\delta t_\text{em}} dt \nonumber \\ \Rightarrow&\frac{\delta t_\text{obs}}{a(t_\text{obs})}= \frac{\delta t_\text{em}}{a(t_\text{em})} \,. \end{align} If the intervals $\delta t_\text{em}$ and $\delta t_\text{obs}$ are the time it takes to emit and receive one full wavelength $\lambda$ of a light ray, then \begin{equation} \label{eq:wavelenght_dif} \frac{\lambda_\text{obs}}{a(t_\text{obs})}= \frac{\lambda_\text{em}}{a(t_\text{em})} \,. \end{equation} The redshift $z$ is defined as \begin{equation} \label{eq:redshift_def} z= \frac{\lambda_\text{obs}-\lambda_\text{em}} {\lambda_\text{em}} \,, \end{equation} and using Eq. \eqref{eq:wavelenght_dif} we obtain \begin{align} \label{eq:redshift_scale} &z = \frac{a(t_\text{obs})}{a(t_\text{em})}-1 \nonumber\\ \Rightarrow &1+z = \frac{1}{a(t)} \,, \end{align} where we have taken $t\equiv t_\text{em}$ and fixed the scale factor at the present time at unity $a(t_\text{obs})\equiv a_0=1$. The density of a flat universe is known as the critical density $\rho_\text{c}=6\kappa H^2$. One can therefore write the various contributions to the right hand side of Eq. \eqref{eq:friedGR} as a function of the corresponding density parameters $\Omega\equiv\rho/\rho_\text{c}$. 
If one also includes the explicit dependence of the density on the scale factor $a$, one can rewrite the Friedmann equation as \begin{equation} \label{eq:friedGR_dens} H^2=H^2_0\left(\Omega_{\text{r},0}a^{-4}+\Omega_{\text{m},0}a^{-3}+\Omega_{\text{k},0}a^{-2}+\Omega_{\Lambda,0} \right) \,, \end{equation} where $H_0$ is the Hubble constant, $\Omega_{\text{r},0}$ and $\Omega_{\text{m},0}$ are the radiation and matter density parameters, respectively, \begin{equation} \Omega_{\text{k},0}=1-\Omega_0=-\frac{k}{H^2_0} \,, \end{equation} is the spatial curvature density parameter, with $\Omega_0=\Omega_{\text{r},0}+\Omega_{\text{m},0}+\Omega_{\Lambda,0}$ the total density parameter, and \begin{equation} \Omega_{\Lambda,0}=\frac{\Lambda}{3H_0^2} \,, \end{equation} is the cosmological constant density parameter. The subscript $0$ denotes that the quantities are measured at the present time. Current observational evidence allows one to roughly split the background evolution of the Universe into four stages. Soon after the Big Bang, it is hypothesized that the Universe went through a rapid inflationary period. While the telltale signs of inflation have evaded detection so far, it provides a neat solution to the horizon, flatness and relic problems, and provides a mechanism that explains the origin of the density fluctuations which gave rise to the large-scale structure observed at the present day \cite{Liddle2000}. There are many models of inflation, most of them relying on the addition of a scalar field to the Einstein-Hilbert action, but they are not an essential part of the $\Lambda$CDM model, so we will not cover them here. After inflation, the Universe became dominated by radiation, with $H^2\simeq H^2_0\Omega_{\text{r},0} a^{-4}$ and $w_\text{r}=p_\text{r}/\rho_\text{r}=1/3$. Via the continuity \eqref{eq:contin} and Friedmann \eqref{eq:friedGR_dens} equations we obtain $a(t)\propto t^{1/2}$ and $\rho_\text{r}\propto a^{-4}\propto t^{-2}$ \cite{Weinberg2008}.
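These limiting regimes can be read off Eq. \eqref{eq:friedGR_dens} directly; a short Python sketch (the value $\Omega_{\text{r},0}\sim 9\times 10^{-5}$ is an illustrative assumption, not a measurement used elsewhere in this work):

```python
import numpy as np

def hubble(a, H0=67.0, Or0=9e-5, Om0=0.3, Ok0=0.0):
    """Eq. (friedGR_dens); Omega_Lambda,0 is fixed by the requirement
    that the density parameters sum to unity."""
    OL0 = 1.0 - Or0 - Om0 - Ok0
    return H0 * np.sqrt(Or0 * a**-4 + Om0 * a**-3 + Ok0 * a**-2 + OL0)

# Far in the future a cosmological constant dominates, H -> H0 sqrt(Omega_L):
assert np.isclose(hubble(1e3), 67.0 * np.sqrt(1.0 - 9e-5 - 0.3), rtol=1e-6)

# Deep in the radiation era, H^2 ~ H0^2 Omega_r a^-4:
a_rad = 1e-6
assert np.isclose(hubble(a_rad), 67.0 * np.sqrt(9e-5) * a_rad**-2, rtol=2e-2)
```

At intermediate scale factors the $a^{-3}$ term takes over, reproducing the radiation-to-matter transition described in the text.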
It is during this epoch that the majority of the light elements (mainly stable isotopes of hydrogen, helium and lithium) were formed via a process called big-bang nucleosynthesis (BBN). Following BBN the Universe remained so hot that nucleons and electrons were not able to come together into atoms, and it remained opaque to radiation. As it expanded and cooled, the temperature dropped enough for electrons to finally be captured into (mostly) hydrogen atoms, in a process known as recombination. It is during this epoch that photons decoupled from matter, and the cosmic microwave background (CMB) was emitted. Following recombination came an era of matter domination. It can be described by $H^2 \simeq H^2_0\Omega_{\text{m},0}a^{-3}$, which leads to the solutions $a(t)\propto t^{2/3}$ and $\rho_\text{m}\propto a^{-3}\propto t^{-2}$. We are currently experiencing a transition period where the expansion of the Universe is accelerating. In the future, the Universe will most likely be dominated by dark energy, and in the case of a cosmological constant one obtains $H^2 \simeq H^2_0\Omega_{\Lambda,0}$, leading to an exponential expansion $a(t)\propto \exp(H_0 t)$. \section{Distance measures in general relativity} Measuring distances in astronomy can be done in several different ways, all of which necessarily take into account the curved nature of spacetime. Therefore, the measurement of these distances will depend on the spacetime metric and how it changes as light propagates through it. Two very useful distance definitions are the angular diameter distance, $d_\text{A}$, and the luminosity distance, $d_\text{L}$. In Euclidean space, an object with diameter $s$ at some distance $d$ away would extend across an angular diameter $\theta$, i.e. \begin{equation} \label{eq:euc_dA} \theta = \frac{s}{d} \,. \end{equation} In curved spacetime, however, the proper distance to a distant object such as a star or a galaxy cannot be so easily measured. 
Nevertheless, one can define an angular diameter distance $d_\text{A}$ as a recasting of $d$ such that the relation between the object's size and angular diameter would maintain the Euclidean-space relation \eqref{eq:euc_dA}, \textit{i.e.} \begin{equation} \label{eq:angle} \theta=\frac{s}{d_\text{A}}\,. \end{equation} In a FLRW universe, the proper distance $s$ corresponding to an angle $\theta$ is simply \begin{equation} \label{eq:prop_dist} s= a(t_\text{em})r\theta= \frac{r\theta}{1+z}\,, \end{equation} where $r$ is the coordinate distance to the object. Assuming a spatially flat Universe, $k=0$, the distance $r$ can be calculated by just integrating over a null geodesic, that is \begin{align} \label{eq:null geodesic_distance} ds^2 &= - dt^2 + a(t)^2dr^2 = 0\nonumber \\ \Rightarrow dr &= -\frac{dt}{a(t)} \nonumber \\ \Rightarrow r &= \int_{t_\text{em}}^{t_\text{obs}} \frac{dt}{a(t)}= \int_0^z \frac{dz'}{H(z')} \,, \end{align} so the angular diameter distance to an object at redshift $z$ is simply \begin{equation} \label{eq:ang_dist} d_\text{A}=\frac{1}{1+z}\int_0^z \frac{dz'}{H(z')}\,. \end{equation} On the other hand, the luminosity distance $d_\text{L}$ of an astronomical object relates its absolute luminosity $L$, \textit{i.e.} its radiated energy per unit time, and its energy flux at the detector $l$, so that they maintain the usual Euclidean-space relation \begin{equation} \label{eq:apparent luminosity} l = \frac{L}{4\pi d_\text{L}^2}\,, \end{equation} or, rearranging, \begin{equation} \label{eq:lum_dist_1} d_\text{L} = \sqrt{\frac{L}{4\pi l}}\,. \end{equation} Over a small emission time $\Delta t_\text{em}$ the absolute luminosity can be written as \begin{equation} \label{eq:abs_lum_1} L = \frac{N_{\gamma,\text{em}} E_\text{em}}{\Delta t_\text{em}}\,, \end{equation} where $N_{\gamma,\text{em}}$ is the number of emitted photons and $E_\text{em}$ is the average photon energy. 
An observer at a coordinate distance $r$ from the source will, on the other hand, observe an energy flux given by \begin{equation} \label{eq:app_lum} l = \frac{N_{\gamma,\text{obs}}E_\text{obs}}{\Delta t_\text{obs} 4\pi r^2} \,, \end{equation} where $N_{\gamma,\text{obs}}$ is the number of observed photons and $E_\text{obs}$ is their average energy. Note that while the number of photons is generally assumed to be conserved, $N_{\gamma,\text{obs}} = N_{\gamma,\text{em}}$, the time that it takes to receive the photons is increased by a factor of $1+z$, $\Delta t_\text{obs}= (1+z)\Delta t_\text{em}$, as shown in Eq. \eqref{eq:null geodesic4}, and their energy is reduced by the same factor \begin{equation} \label{eq:energy_obs} E_\text{obs} = \frac{E_\text{em}}{1+z} \,. \end{equation} Using Eqs. \eqref{eq:abs_lum_1}, \eqref{eq:app_lum}, \eqref{eq:energy_obs} and \eqref{eq:null geodesic_distance} in Eq. \eqref{eq:lum_dist_1}, we finally obtain \begin{equation} \label{eq:lum_dist} d_\text{L} = (1+z)\int_0^z \frac{dz'}{H(z')} \, . \end{equation} Eqs. \eqref{eq:ang_dist} and \eqref{eq:lum_dist} differ only by a factor of $(1+z)^2$, \begin{equation} \label{eq:DDR} \frac{d_\text{L}}{d_\text{A}}=(1+z)^2\,. \end{equation} Eq. \eqref{eq:DDR} is called Etherington's distance-duality relation (DDR), and it can be a useful tool for probing theories of modified gravity or theories that do not conserve the photon number \cite{Bassett2004,Ruan2018}. \section{The cosmic microwave background} In 1965, Arno Penzias and Robert Wilson encountered a source of isotropic ``noise'' in the microwave part of the radiation spectrum as they were experimenting with the Holmdel Horn Antenna. At the same time, Robert Dicke, Jim Peebles and David Wilkinson were preparing a search for primordial radiation emitted soon after recombination.
The concurrence of these two events allowed these physicists to quickly identify Penzias and Wilson's accidental discovery as the Cosmic Microwave Background (CMB) radiation \cite{Penzias1965,Dicke1965}, one of the most important astronomical discoveries ever made, and one that earned Penzias and Wilson the 1978 Nobel prize in Physics, and Peebles the 2019 prize for his theoretical contributions to physical cosmology. Despite the nature of its discovery, the CMB had been a prediction of cosmology for some time, and it has continued to be studied to this day using sophisticated ground- and balloon-based telescopes, and several space-based observatories such as the Cosmic Microwave Background Explorer (COBE) \cite{Bennett1996}, the Wilkinson Microwave Anisotropy Probe (WMAP) \cite{Bennett2013}, and more recently the \textit{Planck} satellite \cite{Aghanim2020a}. Before recombination, radiation was kept in thermal equilibrium with baryonic matter via frequent collisions between photons and free electrons, and thus the photons maintained a black-body spectrum. For a given temperature $\mathcal{T}$ the spectral and number densities of photons at frequency $\nu$ are given by, respectively, \begin{equation} \label{eq:black_spectral} u(\nu)=\frac{8 \pi h \nu^3}{e^{h \nu/(k_B \mathcal{T})}-1} \,, \end{equation} and \begin{equation} \label{eq:black_number} n(\nu) = \frac{u(\nu)}{h \nu} = \frac{8 \pi \nu^2}{e^{h \nu/(k_B \mathcal{T})}-1} \,, \end{equation} where $h$ is the Planck constant. As spacetime expanded, matter cooled enough for the free electrons to be captured by the positive nuclei, and radiation began propagating freely across space. While this radiation is isotropic to a large degree, it does present small anisotropies. These can be primordial in origin, but can also be due to effects along the path photons travelled until detection.
Some of these effects are temperature fluctuations at the last-scattering surface, Doppler effects due to velocity variations in the plasma, the peculiar velocity of the Earth relative to the CMB, gravitational redshift (Sachs-Wolfe and integrated Sachs-Wolfe effects), and photon scattering with electrons in the intergalactic medium (Sunyaev–Zel'dovich effect). Despite this, the CMB remains the best black body ever measured. The detailed description of these anisotropies is not particularly relevant to this thesis. Nevertheless, they are central to the determination of constraints on the cosmological parameters \cite{Aghanim2020}. As spacetime continued to expand after photon decoupling, the frequency of the photons emitted at the so-called last-scattering surface became redshifted over time. That is to say, a photon emitted at the time of last scattering $t_\text{L}$ with frequency $\nu_\text{L}$ would be detected at time $t$ with a frequency $\nu = \nu_\text{L} a(t_\text{L})/a(t)$ and therefore the spectral and number densities measured at time $t$ would be \begin{equation} \label{eq:black_spectral_red} u(\nu)=\frac{8 \pi h \nu^3}{e^{h \nu/(k_B \mathcal{T}(t))}-1} \,, \end{equation} and \begin{equation} \label{eq:black_number_red} n(\nu) = \frac{8 \pi \nu^2}{e^{h \nu/(k_B \mathcal{T}(t))}-1} \,, \end{equation} where \begin{equation} \label{eq:black_temp_red} \mathcal{T}(t)= \mathcal{T}(t_\text{L})\frac{a(t_\text{L})}{a(t)} \,, \end{equation} is the temperature of the black-body spectrum. Alternatively, we can write this temperature as \begin{equation} \label{eq:redshift_cmb_temp} \mathcal{T}(z)= \mathcal{T}(z_\text{L})\frac{1+z}{1+z_\text{L}} \,, \end{equation} where $z_\text{L}$ is the redshift at the time of last scattering. In essence, the black-body spectrum is maintained, but with a ``redshifted'' temperature.
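Both the redshift scaling \eqref{eq:redshift_cmb_temp} and the number density \eqref{eq:black_number} lend themselves to quick numerical checks. The sketch below restores the factors of $c$ that are set to unity in the text, and takes illustrative values $\mathcal{T}_0 = 2.725$ K and $z_\text{L}=1090$ (round numbers chosen for this example, not fitted values): integrating $n(\nu)$ over all frequencies gives the familiar $\sim 4\times10^{8}$ photons per cubic metre today, and Eq. \eqref{eq:redshift_cmb_temp} gives a last-scattering temperature of order $3000$ K.

```python
import numpy as np
from scipy.integrate import quad

h = 6.62607015e-34     # Planck constant [J s]
k_B = 1.380649e-23     # Boltzmann constant [J/K]
c = 2.99792458e8       # speed of light [m/s]

def photon_number_density(T):
    """Photon number density from Eq. (eq:black_number) with c restored:
    n(nu) = (8 pi nu^2 / c^3) / (exp(h nu / k_B T) - 1), integrated over nu."""
    integrand = lambda nu: 8.0 * np.pi * nu**2 / c**3 / np.expm1(h * nu / (k_B * T))
    nu_max = 50.0 * k_B * T / h   # integrand is negligible beyond ~50 k_B T / h
    return quad(integrand, 0.0, nu_max)[0]

T0, z_L = 2.725, 1090                   # illustrative values (assumed)
n_gamma = photon_number_density(T0)     # [photons / m^3]
T_last_scattering = T0 * (1.0 + z_L)    # Eq. (eq:redshift_cmb_temp), observer at z = 0

print(f"n_gamma today ~ {n_gamma / 1e6:.1f} cm^-3")
print(f"T at last scattering ~ {T_last_scattering:.0f} K")
```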
\section{Big-bang nucleosynthesis} One of the great successes of modern cosmology is the prediction, through a process called big-bang nucleosynthesis (BBN), of the abundances of the light elements formed in the early Universe (mainly $^2$H, $^3$He, $^4$He and $^7$Li), which are impossible to achieve in such quantities via stellar processes alone. Current astronomy is unable to observe beyond the CMB radiation emitted at the last scattering surface ($z\simeq1090$), but it is possible to infer from the high degree of isotropy of the CMB itself that matter and radiation were in thermal equilibrium prior to decoupling. We can therefore use statistical physics and thermodynamics to predict the evolution of the early Universe when the temperature and density were much higher than today and radiation dominated the expansion of the Universe. The primordial soup was originally composed of all known species of elementary particles, ranging from the heaviest particles like the top quark down to neutrinos and photons. Then, as the Universe progressively expanded and cooled, the heavier species slowly became non-relativistic, particle-antiparticle pairs annihilated, and unstable particles decayed. After the electroweak transition at around $\mathcal{T}_\text{EW}\sim100\text{ GeV}$, the first particles to disappear were the top quark, followed soon after by the Higgs boson and the $W^\pm$ and $Z^0$ gauge bosons. As the temperature dropped below $\mathcal{T}\sim10\text{ GeV}$, the bottom and charm quarks, as well as the tau lepton, were also annihilated. At $\mathcal{T}_\text{QCD}\sim150 \text{ MeV}$, the temperature was low enough that quark-gluon interactions started binding quarks and antiquarks together into hadrons, composite particles with integer charge.
These can be baryons, three-quark particles with half-integer spin (\textit{i.e.} fermions) such as protons and neutrons, or mesons, quark-antiquark pairs with integer spin (\textit{i.e.} bosons) such as pions. This process is known as the quantum chromodynamics (QCD) phase transition. Soon after, at $\mathcal{T}<100 \text{ MeV}$, the pions and muons also annihilated, and after this process finished the only particles left in large quantities were neutrons, protons, electrons, positrons, neutrinos and photons. As the Universe continued to cool down, the weak interaction rates eventually dropped below the expansion rate $H$, with neutrinos departing from thermodynamic equilibrium with the remaining plasma. In the context of this thesis, the most relevant consequence of this phenomenon is the breaking of the neutron-proton chemical equilibrium at $\mathcal{T}_\text{D}\sim 0.7 \text{ MeV}$, which leads to the freeze-out of the neutron-proton number density ratio at $n_n/n_p=\exp(-\Delta m/\mathcal{T}_\text{D}) \sim 1/6$, where $\Delta m = 1.29 \text{ MeV}$ is the neutron-proton mass difference (the ratio $n_n/n_p$ is then further reduced to $\sim 1/7$ by subsequent neutron decays). Soon after, at $\mathcal{T}_\text{N}\sim 100 \text{ keV}$, the extremely high photon energy density had been diluted enough to allow for the formation of the first stable deuterium ($^2$H) nuclei. Once $^2$H started forming, an entire nuclear reaction network was set in motion, leading to the production of light-element isotopes and leaving essentially all the surviving neutrons bound into them, the vast majority in helium-4 ($^4$He) nuclei, but also in deuterium ($^2$H), helium-3 ($^3$He) and lithium-7 ($^7$Li). Unstable tritium ($^3$H) and beryllium-7 ($^7$Be) nuclei were also formed, but quickly decayed into $^3$He and $^7$Li, respectively.
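The numbers quoted above can be reproduced directly. Note that $\exp(-\Delta m/\mathcal{T}_\text{D})\simeq 0.16\simeq 1/6$; the commonly quoted $\sim 1/7$ includes the subsequent neutron decays, and is taken as given below. Assuming all surviving neutrons end up in $^4$He then yields the standard estimate of the primordial helium mass fraction, $Y_p \simeq 0.25$.

```python
import math

delta_m = 1.29    # neutron-proton mass difference [MeV]
T_D = 0.7         # freeze-out temperature [MeV]

# equilibrium ratio at freeze-out: n_n/n_p = exp(-delta_m / T_D)
ratio_freeze = math.exp(-delta_m / T_D)
print(f"n_n/n_p at freeze-out ~ {ratio_freeze:.3f} (about 1/6)")

# after subsequent neutron decays the ratio drops to roughly 1/7;
# if all surviving neutrons are bound into 4He, the helium mass
# fraction is Y_p = 2 (n_n/n_p) / (1 + n_n/n_p)
ratio_bbn = 1.0 / 7.0
Y_p = 2.0 * ratio_bbn / (1.0 + ratio_bbn)
print(f"Y_p ~ {Y_p:.2f}")   # exactly 0.25 for n_n/n_p = 1/7
```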
Primordial nucleosynthesis may be described by the evolution of a set of differential equations \cite{Wagoner1966,Wagoner1967,Wagoner1969,Wagoner1973,Esposito2000a,Esposito2000}, namely the Friedmann equation \eqref{eq:friedGR}, the continuity equation \eqref{eq:contin}, the equation for baryon number conservation \begin{equation}\label{eq:bary_cons} \dot{n}_\text{B}+3Hn_\text{B}=0 \,, \end{equation} the Boltzmann equations describing the evolution of the average density of each species \begin{equation} \label{eq:nuclide_evo} \dot{X}_i = \sum_{j,k,l}N_i\left(\Gamma_{kl\rightarrow ij}\frac{X_l^{N_l}X_k^{N_k}}{N_l!N_k!} -\Gamma_{ij\rightarrow kl}\frac{X_i^{N_i}X_j^{N_j}}{N_i!N_j!}\right)\equiv \Gamma_i \,, \end{equation} and the equation for the conservation of charge \begin{equation} \label{eq:charge_cons} n_\text{B}\sum_j Z_j X_j = n_{e^-}-n_{e^+} \,, \end{equation} where $i$, $j$, $k$, $l$ denote nuclear species ($n$, $p$, $^2$H, ...), $X_i=n_i/n_\text{B}$, $N_i$ is the number of nuclides of type $i$ entering a given reaction, the $\Gamma$s represent the reaction rates, $Z_i$ is the charge number of the $i$-th nuclide, and $n_{e^\pm}$ is the number density of electrons ($-$) and positrons ($+$). As one could expect, the accurate computation of element abundances cannot be done without resorting to numerical algorithms \cite{Wagoner1973,Kawano1992,Smith:1992yy,Pisanti2008,Consiglio2017}. It is worthy of note at this stage that these codes require one particular parameter to be set a priori, the baryon-to-photon ratio $\eta$, whose value is fixed shortly after BBN and in the standard model of cosmology is expected to remain constant after that. 
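The structure of the rate equations \eqref{eq:nuclide_evo} can be illustrated with a deliberately simplified toy model: a single neutron fraction $X_n$ evolving under interconversion rates that scale as $\Gamma\propto\mathcal{T}^5$ (mimicking the weak rates) against a radiation-era expansion rate $H\propto\mathcal{T}^2$, with the normalization chosen so that $\Gamma=H$ near $\mathcal{T}_\text{D}$. None of the numbers below are realistic BBN inputs; the sketch only shows how a species first tracks its equilibrium abundance and then freezes out once $\Gamma \ll H$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model in units where the neutron-proton mass difference delta_m = 1.
T_D = 0.7 / 1.29                      # toy decoupling temperature, ~0.54

def dXn_dx(x, X):
    """x = delta_m / T grows as the Universe cools; X[0] is the neutron fraction."""
    T = 1.0 / x
    H = T**2                          # toy radiation-era expansion rate, H ~ T^2
    gamma_np = (T / T_D)**3 * H       # toy weak rate ~ T^5, with Gamma = H at T = T_D
    gamma_pn = gamma_np * np.exp(-x)  # reverse rate from detailed balance
    Xn = X[0]
    # dX/dt = -Gamma_np X + Gamma_pn (1 - X), converted to x via dx/dt = x H
    return [(-gamma_np * Xn + gamma_pn * (1.0 - Xn)) / (x * H)]

x0, x1 = 0.5, 30.0
Xn0 = 1.0 / (1.0 + np.exp(x0))        # start in chemical equilibrium
sol = solve_ivp(dXn_dx, (x0, x1), [Xn0], method="LSODA", rtol=1e-8, atol=1e-10)
Xn_final = sol.y[0, -1]
print(f"frozen-out neutron fraction X_n ~ {Xn_final:.3f}")
```

In a realistic computation the rates $\Gamma$ depend on the full reaction network and the integration is correspondingly stiffer, which is why dedicated codes such as those cited above are required.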
\section{Constraints on the cosmological parameters}\label{subsec:cosmo_cons} In the first decades following the formulation of GR, cosmology remained a largely theoretical pursuit, with the discovery of several useful metrics (such as de Sitter and FLRW), the formulation of the cosmological principle and the early predictions of the CMB and the big bang. Concurrently there were significant advancements in astronomy, but the data collected were not yet sufficient to derive precise constraints on cosmological parameters, nor to detect the accelerated expansion of the Universe. Nevertheless, several key observations were made in the decades that followed. In 1929 Edwin Hubble showed that the Universe is expanding by establishing a linear redshift-distance relation \cite{Hubble1931}; in 1965 Penzias and Wilson made the first observations of the CMB \cite{Penzias1965,Dicke1965}; and in 1970 Vera Rubin presented evidence for the existence of a substantial amount of dark matter by measuring the rotation curves of galaxies \cite{Rubin1970}. Several key developments were made during the 1980s, such as the independent proposals and work on inflation by Alan Guth \cite{Guth1981}, Alexei Starobinsky \cite{Starobinsky1980}, Andrei Linde \cite{Linde1982}, and Andreas Albrecht and Paul Steinhardt \cite{Albrecht1982}, the increasing evidence pointing to a Universe currently dominated by cold dark matter, and culminating with the 1989 launch of COBE \cite{Bennett1996} and the detection of CMB anisotropies. COBE's launch marked the beginning of the \textit{golden age of cosmology}. The rapid establishment of space- and ground-based observatories like the Hubble Space Telescope (HST), the Wilkinson Microwave Anisotropy Probe (WMAP), the \textit{Planck} satellite, and the Very Large Telescope (VLT) has continued to this day and shows no signs of slowing.
As of this writing, four significant projects fully underway are the construction of the largest optical/near-infrared telescope in the form of the Extremely Large Telescope (ELT), the calibration of HST's successor \textit{i.e.} the James Webb Space Telescope (JWST), the planning of the largest radio observatory ever built in the Square Kilometer Array (SKA), and the planning of the first space-based gravitational wave detector in the Laser Interferometer Space Antenna (LISA). Also worthy of mention are the many ongoing surveys like the Dark Energy Survey (DES), the Sloan Digital Sky Survey (SDSS) and the Galaxy and Mass Assembly (GAMA). As such, the constraints on the cosmological model ruling our Universe have greatly improved in the last two decades. In what follows we will present current constraints on the cosmological parameters relevant to this work, most of which can be seen in Table \ref{tab:cosmo_constraints}. \subsection{Hubble constant $H_0$} The most precise constraints available today for $H_0$ come from CMB data collected by the aforementioned \textit{Planck} mission. Unfortunately, both $H_0$ and $\Omega_{\text{m},0}$ are derived parameters from that analysis, and suffer from degeneracy with each other. Nevertheless, with the inclusion of lensing considerations, the Planck collaboration was able to constrain the Hubble constant for a flat Universe to \cite{Aghanim2020} \begin{equation} \label{eq:H0_planck} H_0=(67.36\pm0.54)\text{km s}^{-1}\text{Mpc}^{-1} \,, \end{equation} at the 68\% credibility level (CL). This result can be further supplemented with data from baryon acoustic oscillations (BAO), for a slightly tighter constraint of (68\% CL) \begin{equation} \label{eq:H0_planck_bao} H_0=(67.66\pm0.42)\text{km s}^{-1}\text{Mpc}^{-1} \,. \end{equation} There is, however, some tension in the measurement of $H_0$. 
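A naive quantification of this tension treats the two determinations as independent Gaussians. Using the \textit{Planck} value in Eq. \eqref{eq:H0_planck} and the SH0ES distance-ladder measurement $H_0=(73.04\pm1.04)\text{ km s}^{-1}\text{Mpc}^{-1}$ \cite{Riess2021} discussed below, this simple two-measurement estimate gives close to $5\sigma$; the "over $5\sigma$" figure quoted in the literature combines additional datasets.

```python
import math

# two H0 determinations treated as independent Gaussians [km/s/Mpc]
H0_planck, sigma_planck = 67.36, 0.54   # Planck CMB, flat LCDM (Eq. eq:H0_planck)
H0_shoes, sigma_shoes = 73.04, 1.04     # SH0ES distance ladder

tension = abs(H0_shoes - H0_planck) / math.hypot(sigma_planck, sigma_shoes)
print(f"tension ~ {tension:.1f} sigma")
```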
The measurement of the cosmic distance ladder, and in particular the relation between an object's geometric distance and its redshift, can also be used to determine cosmological parameters. The recent Supernova H0 for the Equation of State (SH0ES) project used the HST to make an independent measurement of the Hubble constant (68\% CL) \cite{Riess2021} \begin{equation} \label{eq:H0_shoes} H_0=(73.04\pm1.04)\text{km s}^{-1}\text{Mpc}^{-1} \,. \end{equation} While this value is less precise, the central values of both determinations have barely changed throughout the years while the errors are continuously decreasing, leading to the discrepancy of $2.5\sigma$ in 2013 (the time of the first Planck data release) growing to over $5\sigma$ today. Numerous investigations have been conducted, trying to explain this tension either with new physics or unaccounted-for systematic errors \cite{Divalentino2021} but, as of this writing, there is no definite answer. \subsection{Density parameters} As mentioned above, \textit{Planck} data is able to determine the matter density parameter $\Omega_{\text{m},0}$ along with $H_0$. With the inclusion of lensing, the 68\% constraint from the Planck collaboration for a flat Universe is \begin{equation} \label{eq:omegaM_planck} \Omega_{\text{m},0}=0.3153\pm0.0073 \,, \end{equation} which can once again be constrained further with the inclusion of BAO data \begin{equation} \label{eq:omegaM_planck_BAO} \Omega_{\text{m},0}=0.3111\pm0.0056 \,. \end{equation} These values naturally lead to a determination of the cosmological constant density parameter from CMB data \begin{equation} \label{eq:omegaL_planck} \Omega_{\Lambda,0}=0.6847\pm0.0073 \,. \end{equation} and with the addition of BAO \begin{equation} \label{eq:omegaL_planck_BAO} \Omega_{\Lambda,0}=0.6889\pm0.0056 \,. 
\end{equation} In addition, \textit{Planck} is able to constrain values of $\Omega_{\text{b},0}h^2$ and $\Omega_{\text{c},0}h^2$, where $\Omega_{\text{b},0}$ and $\Omega_{\text{c},0}$ are the density parameters for baryonic matter and cold dark matter, respectively, and $h=H_0/(100\text{km s}^{-1}\text{Mpc}^{-1})$ is the dimensionless Hubble constant. The base estimates for these parameters from CMB data are \begin{equation} \label{eq:omegab_planck} \Omega_{\text{b},0}h^2= 0.02237\pm0.00015 \,, \end{equation} \begin{equation} \label{eq:omegac_planck} \Omega_{\text{c},0}h^2= 0.1200\pm0.0012 \,, \end{equation} and with the addition of BAO data \begin{equation} \label{eq:omegab_planck_BAO} \Omega_{\text{b},0}h^2= 0.02242\pm0.00014 \,, \end{equation} \begin{equation} \label{eq:omegac_planck_BAO} \Omega_{\text{c},0}h^2= 0.11933\pm0.00091 \,. \end{equation} \subsection{Cosmic microwave background temperature} The accurate characterization of the CMB spectrum has been the goal of many observational experiments so far, but as of this writing the best measurements of the black-body spectrum still come from the Far-InfraRed Absolute Spectrophotometer (FIRAS) instrument on board COBE \cite{Fixsen1996}, which has more recently been recalibrated using WMAP data \cite{Fixsen2009}, resulting in a temperature measurement of \begin{equation} \mathcal{T}=2.7260\pm0.0013\,\text{K} \,, \end{equation} as can be seen in Fig. \ref{fig:cmb_firas_wmap}. \begin{figure}[!ht] \centering \includegraphics[width=0.85\textwidth]{CMB_spectrum.png} \caption[CMB spectrum from the COBE-FIRAS instrument]{The mean spectrum associated with the velocity of the solar system with respect to the CMB. The line is the a priori prediction based on the WMAP velocity and previous FIRAS calibration. The uncertainties are the noise from the FIRAS measurements.
Figure taken from \cite{Fixsen2009}.\label{fig:cmb_firas_wmap}} \end{figure} \subsection{Baryon-to-photon ratio} There are two main ways of estimating the value of the baryon-to-photon ratio $\eta$ at different stages of cosmological evolution. On one hand, one may combine the observational constraints on the light-element abundances with numerical simulations of primordial BBN to infer the allowed range of $\eta$. This is the method used in \cite{Iocco2009}, among others, leading to \begin{equation} \label{eq:etabbn} \eta_\text{BBN}=(5.7\pm0.3)\times 10^{-10} \,, \end{equation} at 68\% CL just after nucleosynthesis (at a redshift $z_\text{BBN} \sim10^9$). More recently, an updated version of the program {\fontfamily{qcr}\selectfont PArthENoPE} ({\fontfamily{qcr}\selectfont PArthENoPE} 2.0), which computes the abundances of light elements produced during BBN, was used to obtain new limits on the baryon-to-photon ratio, at $1\sigma$ \cite{Consiglio2017} \begin{equation} \label{eq:etabbn2} \eta_\text{BBN}=(6.23^{+0.12}_{-0.14})\times 10^{-10}\,. \end{equation} There is actually some variation of $\eta$ during nucleosynthesis due to the entropy transfer to photons associated with the $e^\pm$ annihilation. The ratio between the values of $\eta$ at the beginning and at the end of BBN is given approximately by a factor of $2.73$ \cite{Serpico2004}. The baryon-to-photon ratio also affects the acoustic peaks observed in the CMB, generated at a redshift $z_\text{CMB} \sim10^3$. The 2017 Planck analysis \cite{Ade2016} constrains the baryon density $\omega_\text{b} = \Omega_\text{b} h^2$ from baryon acoustic oscillations, at 95\% CL, \begin{equation} \label{eq:omegacmb} \omega_\text{b}=0.02229^{+0.00029}_{-0.00027}\, . \end{equation} This quantity is related to the baryon-to-photon ratio via $\eta = 273.7\times10^{-10} \omega_\text{b}$, leading to \begin{equation} \label{eq:etacmb} \eta_\text{CMB}= 6.101^{+0.079}_{-0.074}\times 10^{-10}\,. 
\end{equation} \begin{table}[] \centering \footnotesize \caption[Current constraints on cosmological parameters]{Mean values and marginalized 68\% CI limits on the cosmological parameters, according to recent observational data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. \label{tab:cosmo_constraints}} \tabcolsep=0.11cm \begin{tabular}{l|ccccc} \hline\hline & \textit{Planck} & \textit{Planck}+BAO & SH0ES & BBN \cite{Iocco2009} & BBN \cite{Consiglio2017} \\ \hline {\boldmath$H_0$} & $67.36\pm0.54$ & $67.66\pm0.42$ & $73.52\pm1.62$ & \---- & \---- \\ {\boldmath$\Omega_{\text{m},0}$} & $0.3153\pm0.0073$ & $0.3111\pm0.0056$ & \---- & \---- & \---- \\ {\boldmath$\Omega_{\Lambda,0}$} & $0.6847\pm0.0073$ & $0.6889\pm0.0056$ & \---- & \---- & \---- \\ {\boldmath$\Omega_{\text{b},0}h^2$} & $0.02237\pm0.00015$ & $0.02242\pm0.00014$ & \---- & \---- & \---- \\ {\boldmath$\Omega_{\text{c},0}h^2$} & $0.1200\pm0.0012$ & $0.11933\pm0.00091$ & \---- & \---- & \---- \\ {\boldmath$\eta\times10^{10}$} & $6.123\pm0.041$ & $6.136\pm0.038$ & \---- & $5.7\pm0.3$ & $6.23^{+0.12}_{-0.14}$ \\ \hline\hline \end{tabular} \end{table} \chapter{The Lagrangian of Cosmological Fluids} \label{chapter_lag} Many popular modified gravity theories feature a nonminimal coupling (NMC) between matter and gravity or another field. In addition to the rich dynamics these theories provide, the use of the variational principle in the derivation of the equations of motion (EOM) leads to the appearance of the on-shell Lagrangian of the matter fields in the EOM, in addition to the usual energy-momentum tensor (EMT). This is a significant departure from minimally coupled theories like general relativity (GR) and $f(R)$ theories, which are insensitive to the form of the matter Lagrangian leading to the appropriate EMT. The knowledge of the on-shell Lagrangian of the matter fields thus becomes paramount if one is to derive reliable predictions from the theory. 
In essence, this is the problem tackled in this chapter, and it comprises an important part of the original work published in Refs. \cite{Avelino2018,Ferreira2020,Avelino2020} (Sections \ref{sec.nmclag}, \ref{sec.whichlag} and \ref{sec.nmclagrole}). \section{Perfect fluid actions and Lagrangians}\label{sec:fluid_lag} The form of the on-shell Lagrangian of certain classes of minimally coupled perfect fluids has been extensively studied in the literature \cite{Schutz1977,Brown1993a}, but it is often misused in NMC theories. Before tackling this problem in NMC gravity, we will first briefly summarize some of these derivations when the perfect fluid is minimally coupled, like in GR. Consider a perfect fluid in GR, described locally by the following thermodynamic variables \begin{align}\label{eq:thermo variables} n\,&:\quad \text{particle number density}\,,\\ \rho\,&:\quad \text{energy density}\,,\\ p\,&:\quad \text{pressure}\,,\\ \mathcal{T}\,&:\quad \text{temperature}\,,\\ s\,&:\quad \text{entropy per particle}\,, \end{align} defined in the rest frame of the fluid, and with four-velocity $U^\mu$. Assuming that the total number of particles $N$ is conserved, the first law of thermodynamics reads \begin{equation} \label{eq:first_law_1} d\mathcal{U}=\mathcal{T}dS-pdV\,, \end{equation} where $V$, $\mathcal{U}=\rho V$, $S = sN$ and $N=nV$ are the physical volume, energy, entropy and particle number, respectively. In terms of the variables defined above, it can be easily rewritten as \begin{equation} \label{eq:1st law thermo} d\rho=\mu dn+n\mathcal{T}ds\,, \end{equation} where \begin{equation} \label{eq:chemical potential} \mu=\frac{\rho+p}{n} \end{equation} is the chemical potential. Eq. \eqref{eq:chemical potential} implies that the energy density may be written as a function of $n$ and $s$ alone, that is $\rho=\rho(n,s)$. 
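The step from Eq. \eqref{eq:first_law_1} to Eq. \eqref{eq:1st law thermo} is worth spelling out. Holding $N$ fixed and substituting $\mathcal{U}=\rho N/n$, $S=sN$ and $V=N/n$ into Eq. \eqref{eq:first_law_1} gives
\begin{equation}
N\,d\!\left(\frac{\rho}{n}\right)=\mathcal{T}N\,ds-pN\,d\!\left(\frac{1}{n}\right)
\quad\Rightarrow\quad
\frac{d\rho}{n}-\frac{\rho}{n^{2}}\,dn=\mathcal{T}\,ds+\frac{p}{n^{2}}\,dn\,,
\end{equation}
and multiplying through by $n$ recovers $d\rho=\frac{\rho+p}{n}\,dn+n\mathcal{T}\,ds=\mu\,dn+n\mathcal{T}\,ds$.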
Building a perfect-fluid action from these variables alone, and ignoring possible microscopic considerations as to the composition of the fluid, requires the addition of a few constraints, as shown by Schutz \cite{Schutz1977}. These constraints are particle number conservation and the absence of entropy exchange between neighbouring field lines, \begin{equation}\label{eq:fluid con1} (n U^\mu)_{;\mu}=0\,, \end{equation} \begin{equation} \label{eq:fluid con2} (nsU^\mu)_{;\mu}=0\,, \end{equation} respectively, and the fixing of the fluid flow lines on the boundaries of spacetime. These constraints can be easily satisfied if one uses a set of scalar fields $\alpha^A\,,\,A=1,\,2,\,3$ as Lagrangian coordinates for the fluid \cite{Lin1963,Serrin1959}. Consider a spacelike hypersurface with a coordinate system $\alpha^A$. By fixing the coordinates on the boundaries of spacetime and labelling each fluid flow line by the coordinate at which it intersects the hypersurface, one automatically satisfies the last constraint. The remaining two constraints can be satisfied through the use of Lagrange multipliers in the action. Let us introduce a contravariant vector density $J^\mu$, interpreted as the particle number density flux vector and defined implicitly via the four-velocity as \begin{equation} \label{eq:j vector} U^\mu = \frac{J^\mu}{|J|}\,, \end{equation} where $|J|=\sqrt{-J^\mu J_\mu}$ is its magnitude, and the particle number density is given by $n=|J|/\sqrt{-g}$. One can now build the fluid action as a functional of the vector $J^\mu$, the metric $g_{\mu\nu}$, the entropy per particle $s$, the Lagrangian coordinates $\alpha^A$, and spacetime scalars $\varphi$, $\theta$ and $\beta_A$, which will act as Lagrange multipliers for the particle number and entropy flow constraints.
The off-shell action reads \cite{Brown1993a} \begin{equation} \label{eq:action Brown} S_\text{off-shell} = \int d^4 x \left[-\sqrt{-g}\rho(n,s)+J^\mu(\varphi_{,\mu}+s\theta_{,\mu}+\beta_A{\alpha^A}_{,\mu})\right]\,. \end{equation} The EMT derived from the action has the form \begin{equation} \label{eq:Brown EMT} T^{\mu\nu}=\rho U^\mu U^\nu + \left(n\frac{\partial\rho}{\partial n}-\rho\right)(g^{\mu\nu}+U^\mu U^\nu)\,, \end{equation} which represents a perfect fluid with EOS \begin{equation} \label{eq:Brown EOS} p = n\frac{\partial\rho}{\partial n}-\rho \,. \end{equation} The EOM of the fluid, derived via a null variation of the action \eqref{eq:action Brown} with respect to each of the fields, are \begin{align} J^\nu\,&:\quad \mu U_\nu + \varphi_{,\nu}+s\theta_{,\nu}+\beta_A {\alpha^A}_{,\nu}=0\,,\label{eq:Brown EOM J} \\ \varphi\,&:\quad {J^\mu}_{,\mu} = 0\,,\label{eq:Brown EOM phi} \\ \theta\,&:\quad (sJ^\mu)_{,\mu}=0 \,,\label{eq:Brown EOM theta} \\ s\,&:\quad \sqrt{-g}\frac{\partial\rho}{\partial s}-\theta_{,\mu}J^\mu=0 \,,\label{eq:Brown EOM s}\\ \alpha^A\,&:\quad (\beta_A J^\mu)_{,\mu}=0 \,,\label{eq:Brown EOM alpha}\\ \beta_A\,&:\quad {\alpha^A}_{,\mu}J^\mu = 0 \,.\label{eq:Brown EOM beta} \end{align} It is useful at this stage to derive some relations between the fields and thermodynamic quantities. Comparing the first law of thermodynamics \eqref{eq:1st law thermo} with Eq. \eqref{eq:Brown EOM s} one obtains \begin{align} \label{eq:theta T relation} &\sqrt{-g}\frac{\partial \rho}{\partial s} - \theta_{,\nu} U^\nu=0 \nonumber\\ \Rightarrow&\sqrt{-g}n\mathcal{T}-\theta_{,\nu} \sqrt{-g} n U^\nu = 0 \nonumber\\ \Rightarrow&\frac{1}{n}\frac{\partial \rho}{\partial s}=\theta_{,\nu} U^\nu=\mathcal{T} \,. \end{align} Similarly, one can contract Eq. \eqref{eq:Brown EOM J} with $U^\nu$ and use the EOM and Eq. 
\eqref{eq:theta T relation} to obtain \begin{align} \label{eq:phi mu sT relation} &- \mu + \varphi_{,\nu}U^\nu+s\theta_{,\nu}U^\nu+\beta_A {\alpha^A}_{,\nu}U^\nu=0 \nonumber\\ \Rightarrow&-\mu+\varphi_{,\nu}U^\nu+s\mathcal{T}=0 \nonumber\\ \Rightarrow&\varphi_{,\nu}U^\nu = \mu-s\mathcal{T} = \mathpzc{f} \,, \end{align} where $\mathpzc{f}$ is the chemical free-energy density. One can now apply the EOM on the off-shell action \eqref{eq:action Brown} in order to obtain its expression on-shell. Using Eq. \eqref{eq:Brown EOM J}, and the definitions of $J^\mu$ and its magnitude, one can write \begin{align} \label{eq:Brown on shell 1} S_\text{on-shell} &= \int d^4 x\left[-\sqrt{-g}\rho-\mu J^\nu U_\nu\right] \nonumber\\ &= \int d^4 x \left[-\sqrt{-g}\rho+\mu |J|\right]\nonumber\\ &=\int d^4 x\sqrt{-g}\left[-\rho+\mu n\right] \,, \end{align} and substituting in the chemical potential \eqref{eq:chemical potential}, one finally obtains \begin{equation} \label{eq:action brown p} S_\text{on-shell} = \int d^4 x \sqrt{-g}\,p \,. \end{equation} Note, however, that one always has the liberty of adding boundary terms to the off-shell action, without affecting the EOM. 
Two particularly interesting choices are the integrals \begin{equation} \label{eq:surface int 1} -\int d^4x (\varphi J^\mu)_{,\mu} \,, \end{equation} and \begin{equation} \label{eq:surface int 2} -\int d^4x(\theta s J^\mu)_{,\mu} \,. \end{equation} Adding both of these integrals to the off-shell action \eqref{eq:action Brown} returns \begin{equation} \label{eq:action Brown rho off shell} S_\text{off-shell}=\int d^4x\left[-\sqrt{-g}\rho+J^\nu \beta_A \alpha^A_{,\nu}-\theta(sJ^\nu)_{,\nu}-\varphi J^\nu_{\,,\nu}\right] \,, \end{equation} and applying the EOM and relations \eqref{eq:theta T relation} and \eqref{eq:phi mu sT relation} we obtain the on-shell action \begin{align} \label{eq:fluid_lag_rho} S_\text{on-shell}&=\int d^4 x\left[-\sqrt{-g}\rho- J^\nu (\mu U_\nu+ \varphi_{,\nu} + s\theta_{,\nu})\right] \nonumber \\ &=\int d^4 x \sqrt{-g}(-\rho) \,. \end{align} Thus, the addition of surface integrals to the off-shell action changes the on-shell perfect-fluid Lagrangian from $p$ to $-\rho$, even though it does not change the EOM or the form of the EMT. Likewise, further additions of these or other integrals would allow one to write the on-shell Lagrangian in a multitude of ways. For example, we can add the integrals \eqref{eq:surface int 1} and \eqref{eq:surface int 2} $c_1$ times. In this case the off-shell action reads \begin{align} \label{eq:action_Brown_T_off} S_\text{off-shell}=\int d^4x\big[-\sqrt{-g}\rho + J^\nu \beta_A \alpha^A_{,\nu} &+(1-c_1)J^\nu\left(\varphi_{,\nu} + s\theta_{,\nu}\right) \nonumber\\ &-c_1\theta(sJ^\nu)_{,\nu} -c_1\varphi J^\nu_{\,,\nu}\big] \,, \end{align} and applying the EOM we obtain \begin{align} \label{eq:action_Brown_T_on} S_\text{on-shell} =\int d^4 x \sqrt{-g}\left[\left(1-c_1\right)p-c_1\rho\right] \,.
\end{align} Nevertheless, since the source term for the Einstein field equations is the EMT of the perfect fluid and not its on-shell Lagrangian, the dynamics of gravity in GR and minimally coupled theories are insensitive to this choice. \section{The barotropic fluid Lagrangian} \label{sec.barlag} The degeneracy of the on-shell Lagrangian presented in Section \ref{sec:fluid_lag} can be broken if one makes further assumptions. In particular, this is the case if one derives the Lagrangian for a barotropic perfect fluid whose pressure $p$ depends only on the fluid's particle number density $n$ \footnote{In some works in the literature the barotropic fluid is defined as a fluid whose pressure is only a function of the rest mass density $\rho_\text{m}$. However, assuming that the particles are identical and have conserved rest mass, then the rest mass density of the fluid is proportional to the particle number density $\rho_\text{m}\propto n$, and the two descriptions are equivalent.}, \textit{i.e.} $p =p(n)$, and assumes that the off-shell Lagrangian is only a function of the particle number density ${\mathcal L}_{\rm m}^{\rm off}=\mathcal{L}_{\rm m}^{\rm off}(n)$ \cite{Harko2010a,Minazzoli2012,Minazzoli2013,Arruga2021}. \subsection{Derivation with $\mathcal{L}_\text{m}^\text{off}=\mathcal{L}_\text{m}^\text{off}(n)$} \label{subsec.harko} To show this, one can simply start from the definition of the EMT, \begin{equation} \label{eq:bar-emt} T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\partial(\sqrt{-g}\mathcal{L}_\text{m}^\text{off})}{\partial g^{\mu\nu}} =\mathcal{L}_\text{m}^\text{off}g_{\mu\nu}-2\frac{\partial \mathcal{L}_\text{m}^\text{off}}{\partial g^{\mu\nu}} \,. 
\end{equation} If one considers that the mass current is conserved, \textit{i.e.} $\nabla_\mu(n U^\mu)=0$, where $U^\mu$ are the components of the four-velocity of a fluid element, then one can show that \cite{Harko2010a} \begin{equation} \label{eq:var-sig} \delta n = \frac{1}{2} n(g_{\mu\nu} - U_\mu U_\nu)\delta g^{\mu\nu} \,. \end{equation} Using Eq. \eqref{eq:var-sig} and assuming that the off-shell Lagrangian is a function of the number density (or equivalently, the rest mass density), one can rewrite Eq. \eqref{eq:bar-emt} as \cite{Harko2010a} \begin{equation} \label{eq:bar-emt2} T^{\mu\nu}=n \frac{d \mathcal{L}_\text{m}^\text{off}}{dn} U^\mu U^\nu+\left(\mathcal{L}_\text{m}^\text{off} - n\frac{d \mathcal{L}_\text{m}^\text{off}}{dn}\right) g^{\mu\nu} \,. \end{equation} Comparing Eq. \eqref{eq:bar-emt2} with the EMT of a perfect fluid \begin{equation} \label{eq:pf-emt} T^{\mu\nu}=-(\rho+p)U^\mu U^\nu + p g^{\mu\nu} \,, \end{equation} we identify \begin{align} \mathcal{L}_\text{m}^\text{off}(n) &= -\rho(n)\,, \label{eq:bar-lag}\\ \frac{d\mathcal{L}_\text{m}^\text{off}(n)}{dn}&=-\frac{\rho(n)+p(n)}{n} \,. \end{align} Hence, \begin{equation} \label{eq:mina-eos} \frac{d\rho(n)}{dn}=\frac{\rho(n)+p(n)}{n} \,. \end{equation} Eq. \eqref{eq:mina-eos} is exactly the equation of state obtained in Section \ref{sec:fluid_lag}, in Eq. \eqref{eq:Brown EOS}. Eq. \eqref{eq:mina-eos} has the general solution \cite{Minazzoli2012} \begin{equation} \label{eq:mina-rho} \rho(n) = C n + n\int\frac{p(n)}{n^2}dn \,, \end{equation} where $C$ is an integration constant. Therefore the on-shell and off-shell Lagrangians are the same and unique, and determined by the rest mass density of the fluid \begin{equation} \label{eq:Harko-Min-lag} \mathcal{L}_\text{m}^\text{on} = \mathcal{L}_\text{m}^\text{off} = -\rho(n) = -Cn-n\int\frac{p(n)}{n^2}dn \,. 
\end{equation} \subsection{Derivation with $\mathcal{L}_\text{m}^\text{off}=\mathcal{L}_\text{m}^\text{off}(j^\mu,\phi)$} \label{subsec.fer} We can perform a derivation of the on-shell Lagrangian for a barotropic fluid in a similar way to the treatment applied in Section \ref{sec:fluid_lag} with a simplified action, as performed in Ref. \cite{Ferreira2020}. Consider a fluid characterized by the following intensive variables \begin{align}\label{eq:bar-variables} n\,&:\quad \text{particle number density}\,,\\ \rho\,&:\quad \text{energy density}\,,\\ p\,&:\quad \text{pressure}\,,\\ s\,&:\quad \text{entropy per particle}\,, \end{align} defined in the rest frame of the fluid, and with four-velocity $U^\mu$. Much like in Section \ref{sec:fluid_lag}, assuming that the total number of particles is conserved, we can rewrite the first law of thermodynamics \eqref{eq:first_law_1} as \begin{equation} \label{eq:1st-law-bar} d\left(\frac{\rho}{n}\right)=-p d\left(\frac{1}{n}\right)+\mathcal{T}ds\,. \end{equation} If the flow is isentropic, then $ds=0$ and Eq. \eqref{eq:1st-law-bar} simplifies to \begin{equation} \label{eq:1st-law-bar2} d\left(\frac{\rho}{n}\right)=-p d\left(\frac{1}{n}\right)\,. \end{equation} If we consider that the fluid's energy density and pressure depend only on its number density, $\rho=\rho(n)$ and $p=p(n)$, then we can solve Eq. \eqref{eq:1st-law-bar2} to obtain \begin{equation} \label{eq:fer-rho} \rho(n) = C n + n\int\frac{p(n)}{n^2}dn \,, \end{equation} where $C$ is an integration constant. Consider now a minimally coupled fluid described by the action \begin{equation} \label{eq:fer-act} S=\int d^4x\sqrt{-g}\mathcal{L}_\text{m}^\text{off}(j^\mu,\phi) \,, \end{equation} where \begin{equation} \label{eq:fer-lag} \mathcal{L}_\text{m}^\text{off}(j^\mu,\phi) = F(|j|)-\phi\nabla_\mu j^\mu \,, \end{equation} $j^\mu$ are the components of a timelike vector field, $\phi$ is a scalar field, and $F$ is a function of $|j|=\sqrt{-j^\mu j_\mu}$.
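Before proceeding, the general solution \eqref{eq:fer-rho} can be checked symbolically. The minimal \texttt{sympy} sketch below uses a hypothetical polytropic pressure $p=Kn^{\gamma}$ (the exponent $5/3$ is chosen purely for illustration) and verifies that the solution satisfies the equation of state \eqref{eq:mina-eos}:

```python
import sympy as sp

# hypothetical polytropic pressure p(n) = K*n^gamma (gamma = 5/3 for illustration)
n, C, K = sp.symbols("n C K", positive=True)
gamma = sp.Rational(5, 3)
p = K * n**gamma

# general solution of Eq. (eq:fer-rho): rho(n) = C*n + n * int p(n)/n^2 dn
rho = C * n + n * sp.integrate(p / n**2, n)

# it satisfies d(rho)/dn = (rho + p)/n, i.e. Eq. (eq:mina-eos)
assert sp.simplify(sp.diff(rho, n) - (rho + p) / n) == 0
```

The same check goes through for any barotropic $p(n)$ for which the integral can be evaluated.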
Using the variational principle on the action \eqref{eq:fer-act} with respect to $j^\mu$ and $\phi$ one obtains the EOM \begin{align} \frac{\delta S}{\delta j^\mu} &= 0 = -\frac{1}{|j|}\frac{dF}{d|j|}j_\mu + \nabla_\mu \phi \,, \label{eq:j-eq-mot} \\ \frac{\delta S}{\delta \phi} &= 0 = \nabla_\mu j^\mu \,. \label{eq:phi-eq-mot} \end{align} Varying the matter action with respect to the metric components one obtains \begin{equation} \label{eq: EMT0} \delta S=\int d^{4}x \frac{\delta\left(\sqrt{-g}\mathcal{L}_\text{m}^\text{off}\right)}{\delta g_{\mu\nu}} \delta g_{\mu \nu}= \frac12 \int d^{4}x {\sqrt{-g}} \, T^{\mu \nu} \delta g_{\mu \nu}\,, \end{equation} where \begin{eqnarray}\label{eq: EMT1} \delta\left(\sqrt{-g}\mathcal{L}_\text{m}^\text{off}\right)&=& \sqrt{-g} \delta \mathcal{L}_\text{m}^\text{off} + \mathcal{L}_\text{m}^\text{off} \delta \sqrt{-g} \nonumber \\ &=& \sqrt{-g} \delta \mathcal{L}_\text{m}^\text{off} + \frac{\mathcal{L}_\text{m}^\text{off}}{2} \sqrt{-g} g^{\mu\nu} \delta g_{\mu\nu}\,, \end{eqnarray} with \begin{equation}\label{eq: EMT2} \delta \mathcal{L}_\text{m}^\text{off} =- \frac{1}{2}\frac{dF}{d|j|}\frac{j^{\mu}j^{\nu}}{|j|} \delta g_{\mu\nu} - \phi \delta (\nabla_\sigma j^\sigma) \,, \end{equation} and \begin{eqnarray}\label{eq: EMT3} \phi \delta \left(\nabla_\sigma j^\sigma\right)&=&\phi \delta \left(\frac{\partial_{\sigma}\left(\sqrt{-g} j^{\sigma}\right)}{\sqrt{-g}} \right) \nonumber \\ &=& -\frac12 g^{\mu\nu} \delta g_{\mu\nu} \nabla_\sigma \left(\phi j^\sigma\right)+ \frac12\nabla_\sigma\left( \phi j^\sigma g^{\mu\nu} \delta g_{\mu\nu}\right)\,. \end{eqnarray} Discarding the last term in Eq. \eqref{eq: EMT3} --- this term gives rise to a vanishing surface term in Eq. \eqref{eq: EMT0} ($\delta g_{\mu \nu}=0$ on the boundary) --- and using Eqs.
\eqref{eq:j-eq-mot} and \eqref{eq:phi-eq-mot} it is simple to show that the EMT, as defined in Eq. \eqref{eq:bar-emt}, associated with the Lagrangian \eqref{eq:fer-lag} is \begin{equation} \label{eq:fer-emt} T^{\mu\nu}=-\frac{dF}{d|j|}\frac{j^\mu j^\nu}{|j|}+\left(F-|j|\frac{dF}{d|j|}\right)g^{\mu\nu} \,. \end{equation} Comparing Eq. \eqref{eq:fer-emt} with the EMT of a perfect fluid \eqref{eq:pf-emt} we immediately obtain the identifications \begin{align} n &= |j| \,,\\ \rho(n) &= -F \,,\\ p(n) &= F-n\frac{dF}{dn}\,,\\ U^\mu &= \frac{j^\mu}{n} \,. \end{align} With these identifications, it is immediate to see that the condition implied by Eq. \eqref{eq:phi-eq-mot} is \begin{equation} \nabla_\mu j^\mu = \nabla_\mu(n U^\mu) = 0 \,, \end{equation} \textit{i.e.} that the particle number current is conserved. In this case the on-shell Lagrangian is equal to \begin{equation} \mathcal{L}_\text{m}^\text{on}=F=-\rho(n)\,. \end{equation} Using this result, in combination with Eq. \eqref{eq:fer-rho}, it is possible to write the on-shell Lagrangian as \begin{equation} \label{eq: Harko Lagrangian} \mathcal{L}_\text{m}^\text{on}= -\rho(n)= -Cn - n\int^{n}\frac{p\left(n'\right)}{n'^{2}}dn' \,, \end{equation} just like in Eq. \eqref{eq:Harko-Min-lag}. Let us now assume another off-shell Lagrangian, obtained from Eq. \eqref{eq:fer-lag} via the transformation \begin{equation} \label{eq:fer-lag-p} \mathcal{L}_\text{m}^\text{off}\rightarrow \mathcal{L}_\text{m}^\text{off}+\nabla_\mu(\phi j^\mu)= F(|j|)+j^\mu \nabla_\mu \phi \, . \end{equation} Note that the Lagrangians in Eqs. \eqref{eq:fer-lag} and \eqref{eq:fer-lag-p} differ only by a total derivative, so the EOM \eqref{eq:j-eq-mot} and \eqref{eq:phi-eq-mot} remain valid. Substituting the Lagrangian \eqref{eq:fer-lag-p} in Eq. \eqref{eq:bar-emt}, one once again obtains the EMT in Eq. \eqref{eq:fer-emt}, and therefore the identifications between the fluid variables and the fields remain the same.
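These identifications can also be checked against the barotropic equation of state \eqref{eq:mina-eos}; a minimal \texttt{sympy} sketch with a generic function $F$ (not a derivation from the action, just a consistency check of the algebra):

```python
import sympy as sp

n = sp.symbols("n", positive=True)
F = sp.Function("F")(n)            # generic F(|j|), with n = |j|

rho = -F                           # identification rho(n) = -F
p = F - n * sp.diff(F, n)          # identification p(n) = F - n dF/dn

# the identifications automatically satisfy d(rho)/dn = (rho + p)/n,
# the barotropic equation of state (Eq. eq:mina-eos)
assert sp.simplify(sp.diff(rho, n) - (rho + p) / n) == 0
```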
However, when one now uses the EOM on the off-shell Lagrangian, one obtains a different on-shell value \begin{equation} \label{eq:fer-lag-p-on} \mathcal{L}_\text{m}^\text{on} = F + j^\mu j_\mu \frac{1}{|j|}\frac{dF}{d|j|} = p(n) \,. \end{equation} The on-shell Lagrangians in Eqs. \eqref{eq: Harko Lagrangian} and \eqref{eq:fer-lag-p-on} both represent a barotropic perfect fluid with the usual energy-momentum tensor, and in the case of a minimal coupling can even be used to describe the same physics. The reason one can only obtain the result $\mathcal{L}_\text{m}^\text{on}= -\rho$ in Section \ref{subsec.harko} is that the assumed off-shell Lagrangian is less general than the one assumed in Eq. \eqref{eq:fer-lag}. Likewise, if we had assumed \textit{a priori} that the off-shell Lagrangian in the action in Eq. \eqref{eq:fer-act} only depended on $n\equiv |j|$, we would have obtained the same result: that the only possible on-shell Lagrangian would be $\mathcal{L}_\text{m}^\text{on}= -\rho$. \section{The solitonic-particle fluid Lagrangian} \label{sec.nmclag} It was shown in Section \ref{sec:fluid_lag} that the on-shell Lagrangian of a perfect fluid can take many different forms, and that this choice has no impact on the EOM, as long as the fluid is minimally coupled to gravity. However, this degeneracy of the Lagrangian is lost in theories that feature an NMC between matter and gravity, or in fact between matter and any other field \cite{Avelino2018,Avelino2020,Ferreira2020}. In this section, we show that the on-shell Lagrangian of a fluid composed of particles with fixed rest mass and structure must be given by the trace of the EMT of the fluid. We will consider two different approaches: one where the fluid can be described as a collection of single point particles with fixed rest mass, and another where fluid particles can be described by localized concentrations of energy (solitons).
\subsection{Point-particle fluid} \label{subsec:single_part} In many situations of interest, a fluid (not necessarily a perfect one) may be simply described as a collection of many identical point particles undergoing quasi-instantaneous scattering from time to time \cite{Avelino2018,Avelino2018a}. Hence, before discussing the Lagrangian of the fluid as a whole, let us start by considering the action of a single point particle with mass $m$ \begin{equation} S=-\int d \tau \, m \,, \end{equation} and EMT \begin{equation} \label{eq: particle EM tensor} T_*^{\mu\nu} = \frac{1}{\sqrt {-g}}\int d \tau \, m \, u^\mu u^\nu \delta^4\left(x^\sigma-\xi^\sigma(\tau)\right)\,, \end{equation} where the $*$ indicates that the quantity refers to a single particle, $\xi^\mu(\tau)$ represents the particle worldline and $u^\mu$ are the components of the particle 4-velocity. If one considers its trace $T_*=T_*^{\mu \nu} g_{\mu \nu}$ and integrates it over the whole of spacetime, one obtains \begin{eqnarray} \int d^{4}x \sqrt{-g} \, T_* &=&- \int d^4x \,d\tau\, m\, \delta^4\left(x^\sigma-\xi^\sigma(\tau)\right) \nonumber\\ &=&- \int d\tau \,m \, , \end{eqnarray} which can be immediately identified as the action for a single massive particle, and therefore implies that \begin{equation} \int d^{4}x \sqrt{-g} \,\mathcal{L}_*^{\rm on}= \int d^{4}x \sqrt{-g} \,T_*\,. \end{equation} If a fluid can be modelled as a collection of point particles, then its on-shell Lagrangian at each point will be the average value of the single-particle Lagrangian over a small macroscopic volume around that point \begin{eqnarray} \langle \mathcal{L}_*^{\rm on} \rangle &=& \frac{\int d^4 x \sqrt{-g} \, \mathcal{L}_*^{\rm on}}{\int d^4 x \sqrt{-g}}\\ &=& \frac{\int d^4 x \sqrt{-g} \, T_*}{\int d^4 x \sqrt{-g}} = \langle T_* \rangle\,, \end{eqnarray} where $\langle T_* \rangle = T$ is the trace of the EMT of the perfect fluid.
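The averaging step can be illustrated numerically. The sketch below simulates a hypothetical gas of identical unit-mass particles with random speeds (in units where $c=1$; all numbers are illustrative assumptions) and checks that the volume average of the single-particle Lagrangian, $-\sum m/\gamma$, coincides with $3p-\rho$:

```python
import math
import random

random.seed(0)
# hypothetical gas: N identical particles of unit mass in a unit volume (c = 1)
N, m, V = 10000, 1.0, 1.0
speeds = [random.uniform(0.0, 0.99) for _ in range(N)]
gammas = [1.0 / math.sqrt(1.0 - v * v) for v in speeds]

rho = sum(m * g for g in gammas) / V                              # energy density
p = sum(m * g * v * v for g, v in zip(gammas, speeds)) / (3 * V)  # kinetic pressure
L_avg = -sum(m / g for g in gammas) / V    # volume-averaged single-particle Lagrangian

# <L*> = T = 3p - rho, since m/gamma = m*gamma*(1 - v^2)
assert abs(L_avg - (3 * p - rho)) < 1e-9 * rho
```

The equality holds particle by particle, so it is independent of the (arbitrary) speed distribution chosen here.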
This provides a further possibility for the on-shell Lagrangian of a perfect fluid: \begin{equation} \langle \mathcal{L}_*^{\rm on} \rangle\equiv\mathcal{L}^{\rm on}= T =-\rho+3p\,, \end{equation} where $p=\rho \langle v^2 \rangle/3=\rho \mathcal T$, $\sqrt{\langle v^2 \rangle}$ is the root-mean-square velocity of the particles and $\mathcal T$ is the temperature. Notice that in the case of dust ($p=0$) we recover the result obtained in Eq. \eqref{eq:fluid_lag_rho} ($\mathcal{L}^{\rm on}=-\rho$). \subsection{Solitonic-particle fluid and Derrick's argument} In addition to the previous interpretation, particles can also be seen as localized concentrations of energy, \textit{i.e.} solitons, with fixed rest mass and structure, which are not significantly impacted by their self-induced gravitational field (see also \cite{Polyakov2018,Avelino2019}). We shall assume that the spacetime is locally Minkowski on the particle's characteristic length scale. For this interpretation to hold, one must ensure that these concentrations of energy are stable. Consider a real scalar-field multiplet $\{\phi^1,\dots, \phi^D\}$ in $(D+1)$-dimensional Minkowski spacetime, with the Lagrangian \begin{equation} \mathcal{L}_\phi(\phi^a,X^{bc}) = f(X^{bc})-V(\phi^a) \,, \end{equation} where $f(X^{bc})$ and $V(\phi^a)$ are the kinetic and potential terms, respectively, and \begin{equation} \label{eq:scalar_kin} X^{bc}= -\frac{1}{2}\partial_\mu\phi^b \partial^\mu\phi^c \,. \end{equation} The EMT is given by \begin{align} \label{eq:scalar_emt_gen} T_{\mu\nu}&=-2\frac{\delta\mathcal{L}_\phi}{\delta g^{\mu\nu}} + g_{\mu\nu}\mathcal{L}_\phi \nonumber \\ &=\frac{\partial\mathcal{L}_\phi}{\partial X^{ab}}\partial_\mu\phi^a \partial_\nu\phi^b + g_{\mu\nu}\mathcal{L}_\phi\,, \end{align} and the total energy $E$ can be calculated via the integral \begin{equation} \label{eq:field_energ} E= \int T_{00}\, d^D x \,.
\end{equation} It has been shown by Hobart and Derrick \cite{Hobart1963,Derrick1964} that a maximally symmetric concentration of energy is not stable when $D>1$ if the kinetic term is given by the usual form \begin{equation} \label{eq:scalar_kin_stand} f(X^{bc})= \delta_{bc}X^{bc}=X \,. \end{equation} However, one can show that these solutions do exist if the kinetic part of the Lagrangian is not given by Eq. \eqref{eq:scalar_kin_stand}, as long as certain conditions are met \cite{Avelino2011,Avelino2018a}. Consider a static solution $\phi^a=\phi^a(x^i)$ and a rescaling of the spatial coordinates $x^i\rightarrow\tilde{x}^i=\lambda x^i$. Under this transformation, a necessary condition for the existence of this solution is that the transformed static concentration of energy $E(\lambda)$ must satisfy \begin{equation} \label{eq:Derrick_1} \left[\frac{\delta E(\lambda)}{\delta\lambda}\right]_{\lambda=1}=0 \,. \end{equation} In addition, stability of this solution demands that \begin{equation} \label{eq:Derrick_2} \left[\frac{\delta^2E(\lambda)}{\delta\lambda^2}\right]_{\lambda=1}\geq0 \,. \end{equation} Eqs. \eqref{eq:Derrick_1} and \eqref{eq:Derrick_2} succinctly summarize Derrick's argument. Under the rescaling $x^i\rightarrow\tilde{x}^i=\lambda x^i$ the line element of the metric may be rewritten as \begin{equation} ds^2=-dt^2+\delta_{ij}dx^i dx^j = -dt^2+\tilde{g}_{ij}d\tilde{x}^i d\tilde{x}^j \,, \end{equation} where $\delta_{ij}$ is the Kronecker delta and $\tilde{g}_{ij}=\lambda^{-2}\delta_{ij}$ is the spatial part of the transformed metric, with determinant $\tilde{g}=-\lambda^{-2D}$. Since we assume a static solution, in the particle's proper frame \begin{equation} \frac{\delta\mathcal{L}_\phi}{\delta g^{00}}=0 \,, \end{equation} so that the $00$ component of Eq. \eqref{eq:scalar_emt_gen} reads \begin{equation} \label{eq:EMT_00} T_{00}=-\mathcal{L}_\phi \,.
\end{equation} The transformed static concentration of energy can therefore be written as \begin{equation} E(\lambda)=-\int \sqrt{-\tilde{g}} \mathcal{L}_\phi(\tilde{g}^{ij},x^k)d^D\tilde{x} \,, \end{equation} and so a variation with respect to $\lambda$ gives \begin{align} \frac{\delta E}{\delta\lambda}&=-\int \frac{\delta\left(\sqrt{-\tilde{g}} \mathcal{L}_\phi\right)}{\delta\lambda} d^D\tilde{x} \nonumber \\ &=-\int \left[2\frac{\delta\mathcal{L}_\phi}{\delta\tilde{g}^{ij}}\tilde{g}^{ij}-D\mathcal{L}_\phi\right]\lambda^{-1-D} d^D\tilde{x} \,. \end{align} Setting $\lambda=1$ and applying the first condition \eqref{eq:Derrick_1}, we obtain \begin{align} \label{eq:Derrick_cond_1} \left[\frac{\delta E}{\delta\lambda}\right]_{\lambda=1}&=-\int \left[2\frac{\delta\mathcal{L}_\phi}{\delta g^{ij}}g^{ij}-D\mathcal{L}_\phi\right] d^Dx \nonumber \\ &= \int T^i_{\;\;i} d^Dx= 0 \,. \end{align} Likewise, \begin{equation} \frac{\delta^2E}{\delta\lambda^2}=-\int \left[4\frac{\delta^2\mathcal{L}_\phi}{\delta(\tilde{g}^{ij})^2}(\tilde{g}^{ij})^2 +(2-4D)\frac{\delta\mathcal{L}_\phi}{\delta\tilde{g}^{ij}}\tilde{g}^{ij} +(D+D^2)\mathcal{L}_\phi\right]\lambda^{-2-D}d^D\tilde{x} \,, \end{equation} and applying Eq. \eqref{eq:Derrick_2} we obtain \begin{equation} \int \left[4\frac{\delta^2\mathcal{L}_\phi}{\delta(g^{ij})^2}(g^{ij})^2 +(2-4D)\frac{\delta\mathcal{L}_\phi}{\delta g^{ij}}g^{ij} +D(D+1)\mathcal{L}_\phi\right]d^Dx \leq 0 \,. \end{equation} The on-shell Lagrangian of a fluid composed of solitonic particles $\mathcal{L}_\text{fluid}$ can be written as the volume average of the Lagrangian of a collection of particles, \textit{i.e.} \begin{equation} \label{eq:Lag_vol_average} \mathcal{L}_\text{fluid}\equiv\langle \mathcal{L}_\phi \rangle = \frac{\int \mathcal{L}_\phi d^D x}{\int d^D x} \,. \end{equation} Taking into account Eqs.
\eqref{eq:EMT_00} and \eqref{eq:Derrick_cond_1}, and that one can write the trace of the EMT as $T=T^0_{\;\;0}+T^i_{\;\;i}$, we obtain \begin{equation} \label{eq:Lag_EMT_sol} \mathcal{L}_\text{fluid}= \frac{\int T^0_{\;\;0} d^D x}{\int d^D x} =\frac{\int T d^D x - \int T^i_{\;\;i}d^D x}{\int d^D x} = \langle T\rangle \equiv T_\text{fluid} \,, \end{equation} where $T_\text{fluid}$ is the trace of the EMT of the fluid. Note that \begin{equation} \label{eq:Lag_EMT_sol2} \mathcal{L}_\text{fluid}= T_\text{fluid} \,, \end{equation} is a scalar equation, and therefore invariant under any Lorentz boost, despite being derived in a frame where the particles are at rest. This also implies that Eq. \eqref{eq:Lag_EMT_sol2} is valid regardless of individual soliton motion, as long as the solitons preserve their structure and mass. \section{Which Lagrangian?} \label{sec.whichlag} We have shown that the models presented in Sections \ref{sec:fluid_lag}, \ref{sec.barlag} and \ref{sec.nmclag}, characterized by different Lagrangians, may be used to describe the dynamics of a perfect fluid. If the matter fields couple only minimally to gravity, then these models may even be used to describe the same physics. However, this degeneracy is generally broken in the presence of an NMC either to gravity \cite{Bertolami2007,Bertolami2008a,Sotiriou2008,Faraoni2009,Harko2010,Bertolami2010,Ribeiro2014,Azizi2014,Bertolami2014} or to other fields \cite{Bekenstein1982,Sandvik2002,Anchordoqui2003,Copeland2004,Lee2004,Koivisto2005,Avelino2008,Ayaita2012,Pourtsidou2013,Bohmer2015,Barros2019,Kase2020a}, in which case the identification of the appropriate on-shell Lagrangian may become essential in order to characterize the overall dynamics (note that this is not an issue if the form of the off-shell Lagrangian is assumed \textit{a priori}, as in \cite{Bohmer2015,Bettoni2011,Bettoni2015,Dutta2017,Koivisto2015,Bohmer2015a,Brax2016,Tamanini2016}).
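As a simple numerical illustration of this degeneracy (hypothetical values, purely for comparison), the candidate on-shell Lagrangians $-\rho$, $p$ and $T=3p-\rho$ only coincide in special cases:

```python
# hypothetical comparison of the candidate on-shell Lagrangians
# L = -rho, L = p and L = T = 3p - rho for fluids with p = w*rho
rho = 1.0
candidates = {}
for name, w in [("dust", 0.0), ("radiation", 1.0 / 3.0), ("dark energy", -1.0)]:
    p = w * rho
    candidates[name] = (-rho, p, 3.0 * p - rho)

# for dust, -rho coincides with the trace T; for radiation, T vanishes
assert candidates["dust"][0] == candidates["dust"][2] == -1.0
assert abs(candidates["radiation"][2]) < 1e-15
```

For a minimal coupling the choice is irrelevant to the EOM, but in an NMC theory each row would source the field equations differently.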
The models in Sections \ref{sec:fluid_lag} and \ref{sec.barlag} imply both the conservation of particle number and entropy. However, both the entropy and the particle number are in general not conserved in a fluid described as a collection of point particles or solitons. Hence, the models in Section \ref{sec.nmclag} can have degrees of freedom that are not accounted for by the models in Sections \ref{sec:fluid_lag} and \ref{sec.barlag}, since they take into account microscopic properties of the fluid. The models in Section \ref{sec.nmclag} have an equation of state parameter $w=p/\rho$ in the interval $[0,1/3]$, which, while appropriate to describe a significant fraction of the energy content of the Universe, such as dark matter, baryons and photons, cannot be used to describe dark energy. \section{The role of the Lagrangian in nonminimally-coupled models} \label{sec.nmclagrole} As previously discussed, the EMT does not provide a complete characterization of NMC matter fields, since the Lagrangian will also in general explicitly appear in the EOM. To further clarify this point, we present here a few examples of models in which there is an NMC between matter or radiation and dark energy (DE) \cite{Ferreira2020}, and we will explore an NMC with gravity in Chapter \ref{chapter_nmc}. Consider the model described by the action \begin{equation} \label{eq: NMC DE} S = \int d^4 x \sqrt{-g}\left[R+\mathcal{L}+\mathcal{L}_{\text{F}\phi}\right]\,, \end{equation} where $R$ is the Ricci scalar, $\phi$ is the DE scalar field described by the Lagrangian \begin{equation} \label{eq: L DE} \mathcal{L} = X - V(\phi)\,, \end{equation} and $\mathcal{L}_{\text{F}\phi}$ is the Lagrangian of the matter term featuring an NMC with DE \cite{Wetterich1995,Amendola2000,Zimdahl2001,Farrar2003} \begin{equation} \label{eq: matter DE nmc} \mathcal{L}_{\text{F}\phi}=f(\phi)\mathcal{L}_\text{F}\,.
\end{equation} Here, $f(\phi)>0$ is a regular function of $\phi$ and $\mathcal{L}_\text{F}$ is the Lagrangian that would describe the matter component in the absence of the NMC (in which case $f$ would be equal to unity). Using the variational principle, it is straightforward to derive the EOM for the gravitational and scalar fields. They are given by \begin{equation} \label{eq: eom metric NMC DE} G^{\mu\nu} = f\, T_\text{F}^{\,\mu\nu} +\nabla^\mu\phi\nabla^\nu\phi -\frac{1}{2}g^{\mu\nu}\nabla_\alpha\phi\nabla^\alpha\phi -g^{\mu\nu}\, V\,, \end{equation} \begin{equation} \label{eq: eom DE NMC} \square\phi -\frac{d V}{d \phi} + \frac{d f}{d \phi}\mathcal{L}_\text{F}=0\,, \end{equation} respectively, where $G^{\mu\nu}$ is the Einstein tensor, $\square \equiv \nabla_\mu\nabla^\mu$ is the Laplace-Beltrami operator, and \begin{equation} \label{eq: Em tensor NMC} T_\text{F}^{\mu\nu}=\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\mathcal{L}_\text{F}\right)}{\delta g_{\mu\nu}} \end{equation} are the components of the EMT associated with the Lagrangian $\mathcal{L}_\text{F}$. Note that the Lagrangian of the matter fields is featured explicitly in the EOM for $\phi$. Thus, knowledge of the EMT alone is not enough to fully describe the dynamics of any of the fields. Consider the coupled matter EMT defined by $T_{\text{F}\phi}^{\mu\nu}=f(\phi)T_\text{F}^{\mu\nu}$. By taking the covariant derivative of Eq. \eqref{eq: eom metric NMC DE} and using the Bianchi identities one obtains \begin{equation} \label{eq: EM tensor cons 1} \nabla_\nu T_{\text{F}\phi}^{\mu\nu} = -\nabla_\nu(\partial^\nu\phi \partial^\mu\phi) + \frac{1}{2}\nabla^\mu(\partial_\alpha\phi \partial^\alpha\phi) + \frac{d V}{d\phi}\partial^\mu\phi \,, \end{equation} thus showing that the coupled matter EMT is in general not conserved. Using Eq.
\eqref{eq: eom DE NMC} it is possible to rewrite this equation in such a way as to highlight the explicit dependence on the Lagrangian \begin{equation} \nabla_\nu T_{\text{F}\phi}^{\mu\nu} = \frac{d f}{d\phi}\mathcal{L}_\text{F}\partial^\mu\phi \,. \label{EMCF1} \end{equation} If $\mathcal{L}_\text{F}$ describes a fluid of particles with fixed rest mass $m_\text{F}$, then one must have $\mathcal{L}_\text{F}=T_\text{F}$, as per Sec. \ref{subsec:single_part}. Also, $\mathcal{L}_{\text{F}\phi} =f(\phi)\mathcal{L}_\text{F}$ will describe a fluid with particles of variable rest mass $m(\phi) = f(\phi)m_\text{F}$. In this case, Eq.~\eqref{EMCF1} may also be written as \begin{equation} \nabla_\nu T_{\text{F}\phi}^{\mu\nu} = -\beta T_{\text{F}\phi}\partial^\mu\phi \,, \label{EMCF2} \end{equation} where \begin{equation} \beta(\phi) = -\frac{d \ln m(\phi)}{d \phi}\,. \label{betadef} \end{equation} In the present example, we shall focus on the macroscopic fluid dynamics, but the NMC between matter and DE also affects the dynamics of the individual particles (see, for example, \cite{Ayaita2012} for more details). \subsection{Coupling between dark energy and neutrinos} A related model featuring an NMC between neutrinos and DE, the so-called growing neutrino quintessence, in which the neutrinos are described by the Lagrangian \begin{equation} \label{eq: neutrinos L} \mathcal{L}_{\mathcal V} = i\bar{\psi}\left(\gamma^\alpha\nabla_\alpha +m(\phi)\right)\psi \,, \end{equation} has been investigated in \cite{Ayaita2012}.
Here, $\bar{\psi}$ is the Dirac conjugate, $m (\phi)$ is a DE-field dependent neutrino rest mass, the quantities $\gamma^\alpha(x)$ are related to the usual Dirac matrices $\gamma^a$ via $\gamma^\alpha =\gamma^a e^\alpha_a$ where $e^\alpha_a$ are the vierbein, with $g^{\alpha\beta}=e_a^\alpha e_b^\beta \eta^{ab}$ and $\eta^{ab} =\text{diag}(-1,1,1,1)$, and $\nabla_\alpha$ is the covariant derivative that now takes into account the spin connection (see \cite{Brill1957} for more details on the vierbein formalism). The classical EOM for the neutrinos, derived from the action \begin{equation} \label{eq: neutrino action} S = \int d^4 x \sqrt{-g}\left[R+\mathcal{L}+\mathcal{L}_{\mathcal V}\right]\,, \end{equation} may be written as \begin{align} \label{eq: neutrino eom} \gamma^\alpha\nabla_\alpha \psi+m(\phi)\psi &=0\,, \\ \nabla_\alpha\bar{\psi}\gamma^\alpha-m(\phi)\bar{\psi} &= 0\,. \end{align} The components of the corresponding EMT are \begin{equation} \label{eq: neutrino em tensor} T_{\mathcal V}^{\alpha\beta} = -\frac{i}{2}\bar{\psi}\gamma^{(\beta}\nabla^{\alpha)}\psi +\frac{i}{2}\nabla^{(\alpha}\bar{\psi}\gamma^{\beta)}\psi \,, \end{equation} where the parentheses represent a symmetrization over the indices $\alpha$ and $\beta$. The trace of the EMT is given by \begin{equation} \label{eq: EM neutrino trace} T_{\mathcal V} = i\bar\psi\psi m(\phi) = - m(\phi) \widehat n \,, \end{equation} where $\widehat n= -i\bar\psi\psi$ is a scalar that in the nonrelativistic limit corresponds to the neutrino number density. Taking the covariant derivative of Eq. \eqref{eq: neutrino em tensor} one obtains \begin{equation} \label{eq: EM neut tensor cons1} \nabla_\mu T_{\mathcal V}^{\alpha\mu} = -\beta (\phi) T_{\mathcal V}\partial^\alpha\phi \,, \end{equation} where $\beta(\phi)$ is defined in Eq. \eqref{betadef}. A comparison between Eqs. 
\eqref{EMCF2} and \eqref{eq: EM neut tensor cons1} implies that $\mathcal{L}_{F\phi}$ and $\mathcal{L}_{\mathcal V}$ provide equivalent on-shell descriptions of a fluid of neutrinos in the presence of an NMC to the DE field. The same result could be achieved by analyzing the dynamics of individual neutrino particles \cite{Ayaita2012}. \subsection{Coupling between dark energy and the electromagnetic field} Consider now a model described by Eqs. \eqref{eq: NMC DE} and \eqref{eq: matter DE nmc} with \begin{equation} \label{eq: electro lagrangian} \mathcal{L}_\text{F} = \mathcal{L}_\text{EM} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} \,, \end{equation} where $F_{\alpha\beta}$ is the electromagnetic field tensor \cite{Bekenstein1982,Sandvik2002,Avelino2008}. This model will naturally lead to a varying fine-structure ``constant'' \begin{equation} \label{eq: fine struct} \alpha(\phi) = \frac{\alpha_0}{f(\phi)} \,, \end{equation} whose evolution is driven by the dynamics of the DE scalar field $\phi$. Equation \eqref{eq: eom DE NMC} implies that the corresponding EOM is given by \begin{equation} \label{eq: eom electro scalar} \square\phi -\frac{d V}{d \phi} + \frac{\alpha_0}{4 \alpha^2}\frac{d \alpha}{d \phi}F_{\mu\nu}F^{\mu\nu}= 0 \end{equation} or, equivalently, \begin{equation} \label{eq: eom electro scalar 2} \square\phi -\frac{d V}{d \phi} - \frac{\alpha_0}{ \alpha^2}\frac{d \alpha}{d \phi}\mathcal{L}_\text{EM}= 0 \,. \end{equation} Electromagnetic contributions to baryon and lepton mass mean that in general $\mathcal{L}_\text{EM} \neq 0$. However, $\mathcal{L}_\text{photons}=\left(E^2-B^2\right)_\text{photons}/2 = 0$ (here, $E$ and $B$ represent the magnitude of the electric and magnetic fields, respectively) and, therefore, electromagnetic radiation does not contribute to $\mathcal{L}_\text{EM}$. Note that the last term on the left-hand side of Eq. \eqref{eq: eom electro scalar} is constrained, via the equivalence principle, to be small \cite{Olive2002}.
Therefore, the contribution of this term to the dynamics of the DE field is often disregarded (see, e.g., \cite{Anchordoqui2003,Copeland2004,Lee2004}). It is common, in particular in cosmology, to describe a background of electromagnetic radiation as a fluid of point particles whose rest mass is equal to zero (photons). In this case, one should use the appropriate on-shell Lagrangian of this fluid in Eq. \eqref{eq: eom electro scalar 2}. In Sec. \ref{sec.nmclag} we have shown that if the fluid is made of particles of fixed mass, then the appropriate on-shell Lagrangian is $\mathcal{L}_\text{EM}= T = 3p-\rho$. For photons (with $p=\rho/3$) this again implies that the on-shell Lagrangian $\mathcal{L}_\text{EM}$ vanishes, thus confirming that photons do not source the evolution of the DE scalar field $\phi$. \chapter{$f(R)$ Theories} \label{chapter_modgrav} Even though the standard $\Lambda$CDM model has great explanatory power and is very tightly constrained, it still has two core problems: the necessity to include dark energy and dark matter, both of which have managed to avoid direct detection. Thus, many physicists have proposed alternative theories to GR that do not require the addition of at least one of these dark components to the energy budget to explain cosmological phenomena. The standard of proof required of these new theories is very high, as they have to satisfy stringent cosmological and astrophysical constraints. Another hurdle faced by these modified gravity theories is their origin: are they fundamental theories, or merely the low-energy manifestation of a grander theory of everything? As of this writing, GR remains the most successful theory, but it can nevertheless be fruitful to take a deeper look at modified gravity since it can often shine a light on other interesting problems and phenomena. Many extensions of GR have been proposed in the literature. 
Among these are theories with additional fields, such as scalar-tensor theories (\textit{e.g.} Jordan-Brans-Dicke), Einstein-\ae{}ther theories (\textit{e.g.} modified Newtonian dynamics, MoND) and bimetric theories, theories with more complex geometric terms such as $f(R)$ and $f(R, R_{\mu\nu}, R_{\mu\nu\alpha\beta})$ theories \cite{DeFelice2010,Sotiriou2010,Capozziello2009,Capozziello2007,Capozziello2005,Capozziello2003,Allemandi2004,Chiba2007}, and theories featuring a non-minimal coupling (NMC) between geometry and matter, such as $f(R,\mathcal{L}_m)$ theories \cite{Allemandi2005,Nojiri2004,Bertolami2007,Sotiriou2008,Harko2010,Harko2011,Nesseris2009,Thakur2013,Bertolami2008a,Harko2013,Harko2015}. An excellent review and introduction to these and other models and their cosmological impact can be found in \cite{Clifton2012}. In this chapter, we take a closer look at the case of the aforementioned $f(R)$ theories, which will serve as a stepping stone for the NMC models we will present later. $f(R)$ theories need not be looked at as fundamental, but rather as phenomenological gravity models that may describe the low-energy gravitational dynamics of a grander ``theory of everything''. For a good review and deeper dive into $f(R)$ theories beyond what is covered in this chapter, see \cite{DeFelice2010,Sotiriou2010} and references therein. \section{Action and field equations}\label{sec:f(R)} From a phenomenological standpoint (and without the addition of more fields), one way to generalize the Einstein-Hilbert action \eqref{eq:actionGR} is to replace the Ricci scalar $R$ with a generic function $f(R)$ \begin{equation}\label{eq:actionfR} S = \int d^4x \sqrt{-g} \left[\kappa f(R)+\mathcal{L}_\text{m} \right]\,, \end{equation} where $\kappa = c^4/(16\pi G)$ and $\mathcal{L}_\text{m}$ is the Lagrangian of the matter fields. It is immediate that GR (with a cosmological constant) is recovered when $f(R)=R-2\Lambda$.
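The GR limit can be checked in one line; a minimal \texttt{sympy} sketch (since $f'=1$ is constant, all derivative terms of $f'$ in the field equations vanish):

```python
import sympy as sp

R, Lam = sp.symbols("R Lambda")
f = R - 2 * Lam                    # the GR limit of f(R)

fp = sp.diff(f, R)
assert fp == 1                     # f' = 1: no modified gravitational coupling
# f - R f' reduces to the constant -2*Lambda, i.e. a cosmological constant
assert sp.simplify(f - R * fp + 2 * Lam) == 0
```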
We obtain the field equations of this action \eqref{eq:actionfR} using the same method as in GR, so that \begin{equation}\label{eq:fieldfR} G_{\mu\nu}f'=\frac{1}{2}g_{\mu\nu}[f-Rf']+\Delta_{\mu\nu}f'+ \frac{1}{2\kappa}T_{\mu\nu}\,, \end{equation} where primes denote differentiation with respect to the Ricci scalar, $G_{\mu\nu}$ is the Einstein tensor and $\Delta_{\mu\nu}\equiv\nabla_\mu \nabla_\nu-g_{\mu\nu}\square$, with $\square=\nabla_\mu \nabla^\mu$ the D'Alembertian operator. Since the Ricci scalar involves first and second order derivatives of the metric, the presence of $\Delta_{\mu\nu}f'$ in the field equations \eqref{eq:fieldfR} makes them fourth order differential equations. If the action is linear in $R$, the fourth order terms vanish and the theory reduces to GR. There is also a differential relation between $R$ and the trace of the EMT $T\equiv g^{\mu\nu} T_{\mu\nu}$, given by the trace equation \begin{equation}\label{eq:R-TfR} 3\square f'-2f+Rf'=\frac{1}{2\kappa}T\,, \end{equation} rather than the algebraic relation found in GR when $\Lambda=0$, $R=-T/(2\kappa)$. This admits a larger pool of solutions than GR, such as solutions with nonvanishing scalar curvature, $R\neq0$, when $T=0$. The maximally symmetric solutions lead to a constant Ricci scalar, so for constant $R$ and $T_{\mu\nu}=0$, one obtains \begin{equation}\label{eq:maxsymfR} Rf'-2f=0\,, \end{equation} which is an algebraic equation in $R$ for a given $f$. Here it becomes important to distinguish between singular ($R^{-n}~,~n>0$) and non-singular ($R^n~,~n>0$) $f(R)$ models \cite{Dick2004}. For non-singular models, $R=0$ is always a possible solution, the field equations \eqref{eq:fieldfR} reduce to $R_{\mu\nu}=0$, and the maximally symmetric solution is Minkowski spacetime.
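For the power-law models just mentioned, Eq. \eqref{eq:maxsymfR} can be factorized explicitly; a \texttt{sympy} sketch for the non-singular case $f=R^n$:

```python
import sympy as sp

R, n = sp.symbols("R n", positive=True)
f = R**n

# Eq. (eq:maxsymfR) for f = R^n factorizes as (n - 2) R^n = 0,
# so for nonsingular models with n != 2 the only solution is R = 0
expr = R * sp.diff(f, R) - 2 * f
assert sp.simplify(expr - (n - 2) * R**n) == 0
```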
When $R=C$ with $C$ a constant, this becomes equivalent to a cosmological constant, the field equations reduce to $R_{\mu\nu}=g_{\mu\nu}C/4$, and the maximally symmetric solution is a de Sitter or anti-de Sitter space, depending on the sign of $C$. For singular $f(R)$ theories, however, $R=0$ is no longer an admissible solution to Eq. \eqref{eq:maxsymfR}. As in GR, applying the Bianchi identities to the covariant derivative of the field equations yields the conservation law for the energy-momentum tensor \eqref{eq:conservGR}, $\nabla^\mu T_{\mu\nu} = 0$. It is also possible to write the field equations \eqref{eq:fieldfR} in the form of the Einstein equations with an effective stress-energy tensor \begin{equation}\label{eq:fieldfReff} G_{\mu\nu}=\frac{1}{2\kappa f'}\left[T_{\mu\nu}+2\kappa \Delta_{\mu\nu} f'+\kappa g_{\mu\nu} \left(f-Rf'\right)\right]\equiv\frac{1}{2\kappa f'}\left[T_{\mu\nu}+T_{\mu\nu}^\text{(eff)}\right]\,, \end{equation} where one can identify $G_\text{eff}\equiv G/f'$ as an effective gravitational coupling strength, so that demanding that $G_\text{eff}$ be positive requires $f'>0$. \section{$f(R)$ cosmology}\label{subsec:cosmofR} As with any gravitational theory, for an $f(R)$ theory to be a suitable candidate for gravity, it must be compatible with the current cosmological evidence, explaining in particular the observed cosmological dynamics; the evolution of cosmological perturbations must also be compatible with the CMB, large scale structure formation and BBN. To derive the modified Friedmann and Raychaudhuri equations we again assume a flat, homogeneous and isotropic universe described by the FLRW metric with line element \begin{equation} \label{eq:line_fR} ds^2=-dt^2+a^2(t)\left[dr^2 +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right]\,.
\end{equation} We shall assume that the universe is filled with a collection of perfect fluids with energy density $\rho$, pressure $p$, and energy-momentum tensor \begin{equation} \label{eq:energy-mom2} T_{\mu\nu} = (\rho+p)U_\mu U_\nu + p g_{\mu\nu}\,. \end{equation} Inserting the metric and the energy-momentum tensor into the field equations \eqref{eq:fieldfR}, one obtains \begin{equation}\label{eq:friedfR} H^2=\frac{1}{3f'}\left[\frac{1}{2\kappa}\sum_i \rho_i+\frac{Rf'-f}{2}-3H\dot{f'}\right]\,, \end{equation} \begin{equation}\label{eq:rayfR} 2\dot{H}+3H^2=-\frac{1}{f'}\left[\frac{1}{2\kappa}\sum_i p_i+\ddot{f'}+2H\dot{f'}+\frac{f-Rf'}{2}\right]\,, \end{equation} where $\rho_i$ and $p_i$ are the rest energy density and pressure of each of the perfect fluids, respectively. Eq. \eqref{eq:fieldfReff} implies that one must have $f'>0$ so that $G_\text{eff}>0$. In addition to this condition, $f''>0$ is required in order to avoid ghosts \cite{Starobinsky2007} and the Dolgov-Kawasaki instability \cite{Dolgov2003}. As the stability of $f(R)$ theories (regardless of the coupling to matter) is not the focus of this thesis, we will not expand on this topic (see \cite{DeFelice2010,Sotiriou2010} for a more detailed analysis). One can collect the extra geometric terms in the Friedmann equations by introducing an effective density and pressure, defined as \begin{equation}\label{eq:rhofR} \rho_\text{eff}=\kappa\left(\frac{Rf'-f}{ f'}-\frac{6H\dot{f'}}{ f'}\right)\,, \end{equation} \begin{equation}\label{eq:pressfR} p_\text{eff}=\frac{\kappa}{ f'}\left(2\ddot{f'}+4H\dot{f'}+f-Rf'\right)\,, \end{equation} \noindent where $\rho_\text{eff}$ must be non-negative in a spatially flat FLRW spacetime for the Friedmann equation to have a real solution when $\rho\rightarrow0$.
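As a simple sanity check of the definitions \eqref{eq:rhofR} and \eqref{eq:pressfR} (a sketch, not part of the derivation), in the GR limit $f(R)=R$ both effective quantities vanish identically, so that Eq. \eqref{eq:friedfR} reduces to the usual Friedmann equation $H^2=\rho/(6\kappa)$:

```python
# GR limit f(R) = R: check that the effective density and pressure of
# Eqs. (rhofR) and (pressfR) vanish identically.
import sympy as sp

R, H, kappa = sp.symbols('R H kappa', positive=True)

f = R                              # GR limit (Lambda = 0)
fp = sp.Integer(1)                 # f'(R) = 1 for this choice
fp_dot = fp_ddot = sp.Integer(0)   # time derivatives of the constant f'

rho_eff = kappa * ((R * fp - f) / fp - 6 * H * fp_dot / fp)
p_eff = (kappa / fp) * (2 * fp_ddot + 4 * H * fp_dot + f - R * fp)
print(sp.simplify(rho_eff), sp.simplify(p_eff))   # 0 0
```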
When $\rho_\text{eff}\gg\rho$, the Friedmann \eqref{eq:friedfR} and Raychaudhuri \eqref{eq:rayfR} equations take the form \begin{equation}\label{eq:friedfReff} H^2=\frac{1}{6\kappa}\rho_\text{eff}\,, \end{equation} \begin{equation}\label{eq:rayfReff} \frac{\ddot{a}}{ a}=-\frac{1}{ 12\kappa}\left(\rho_\text{eff}+3p_\text{eff}\right)\,, \end{equation} and we define $q\equiv -a\ddot{a}/\dot{a}^2$ as the deceleration parameter. If we now consider the case of $f(R)\propto R^n$ with a power law expansion characterized by $a(t)=a_0(t/t_0)^\alpha$, the effective EOS parameter $w_\text{eff}=p_\text{eff}/\rho_\text{eff}$ and the exponent $\alpha$ become (for $n\neq1$) \cite{Sotiriou2010} \begin{equation} w_\text{eff}=-\frac{6n^2-7n-1 }{ 6n^2-9n+3}\,, \end{equation} \begin{equation} \alpha=\frac{-2n^2+3n-1 }{ n-2}\,. \end{equation} One can then simply choose a value of $n$ such that $\alpha>1$ to obtain an accelerated expansion. Conversely, $n$ can be constrained using data from supernovae and CMB observations \cite{Capozziello2003}, by selecting the values of $n$ that still yield $\alpha>1$ and a negative deceleration parameter. From type Ia supernovae (SNeIa) we obtain the constraint $-0.67\leq n\leq -0.37$ or $1.37\leq n \leq 1.43$, and from WMAP data we obtain $-0.450\leq n\leq -0.370$ or $1.366\leq n \leq 1.376$. More recently, the Planck collaboration has also set limits on this type of $f(R)$ theories \cite{Ade2016a,Aghanim2020}. Solving the Friedmann equation in $f(R)$ gravity requires setting two initial conditions. One of them is usually set by requiring that \begin{equation} \lim_{R\rightarrow\infty}\frac{f(R)}{R}=0 \,, \end{equation} and the other, usually called $B_0$, is the present-day value of \begin{equation}\label{eq:bound} B(z)=\frac{f'' }{ f'} \frac{H\dot{R} }{ \dot{H}-H^2}=\frac{2(n-1)(n-2) }{ -2n^2+4n-3}\,.
\end{equation} where the last equality holds for the power-law $R^n$ solution considered above. Planck data imply that we must have $B_0 \lesssim 7.9\times 10^{-5}$, effectively restricting the value of $n$ to be very close to unity. It is worth noting that one can also use $f(R)$ theories to describe inflation, for example using the well-known Starobinsky model \cite{Starobinsky1980,DeFelice2010} given by \begin{equation}\label{eq:starfR} f(R)=R+\frac{R^2}{ 6M^2}\,, \end{equation} where $M$ is a mass scale and where the presence of the linear term in $R$ is responsible for bringing inflation to an end. The Friedmann and Raychaudhuri equations \eqref{eq:friedfR} and \eqref{eq:rayfR} then yield \begin{equation}\label{eq:friedstar} \ddot{H}-\frac{\dot{H}^2}{ 2H}+\frac{1}{2} M^2 H=-3H\dot{H}\,, \end{equation} \begin{equation}\label{eq:raystar} \ddot{R}+3H\dot{R}+M^2 R=0\,. \end{equation} During inflation, the first two terms of Eq. \eqref{eq:friedstar} are much smaller than the others, so that $\dot{H}\simeq -M^2/6$, which can be integrated to give \begin{align}\label{eq:starsolution} H&\simeq H_\text{i}-\frac{M^2 }{ 6}(t-t_\text{i})\,,\nonumber\\ a&\simeq a_\text{i} \exp\left[H_i(t-t_\text{i})-\frac{M^2}{ 12}(t-t_\text{i})^2\right]\,,\nonumber\\ R&\simeq 12H^2-M^2\,, \end{align} where $H_\text{i}$ and $a_\text{i}$ are the Hubble parameter and the scale factor at the onset of inflation ($t=t_\text{i}$), respectively. \section{Equivalence with scalar field theories}\label{subsec:scalarfR} An interesting property of $f(R)$ theories is that they are equivalent to Jordan-Brans-Dicke (JBD) theories \cite{Sotiriou2006,Faraoni2007b,DeFelice2010}, which can be written in both the Jordan and Einstein frames, \textit{i.e.} with the scalar field coupled directly to the Ricci scalar, or not, respectively.
The first equivalence is drawn by rewriting the $f(R)$ action \eqref{eq:actionfR} as a function of an arbitrary scalar field $\chi$, \begin{equation}\label{eq:actionfchi} S=\int d^4x \sqrt{-g}\left[\kappa\left(f(\chi)+f'(\chi)(R-\chi)\right)+\mathcal{L}_\text{m}\right]\,, \end{equation} where primes represent differentiation with respect to $\chi$. Imposing a null variation of this action with respect to $\chi$ yields \begin{equation}\label{eq:r=chi} f''(\chi)(R-\chi)=0\,, \end{equation} which implies $\chi=R$ provided that $f''(\chi)\neq0$, and therefore the action \eqref{eq:actionfchi} takes the same form as Eq. \eqref{eq:actionfR}. Defining a new field $\varphi\equiv f'(\chi)$, Eq. \eqref{eq:actionfchi} can be expressed as \begin{equation}\label{eq:actionfrJBD} S=\int d^4x \sqrt{-g}\left[\kappa\varphi R-V(\varphi)+\mathcal{L}_\text{m} \right]\,, \end{equation} where $V(\varphi)$ is a field potential given by \begin{equation}\label{eq:potfrJBD} V(\varphi)=\kappa\left[\chi(\varphi)\varphi-f\left(\chi(\varphi)\right)\right]\,, \end{equation} which has the same form, up to a constant rescaling of $\varphi$, as the JBD action \begin{equation}\label{eq:actionJBD} S=\int d^4x \sqrt{-g}\left[\frac{1}{2}\varphi R-\frac{\omega_\text{JBD}}{ 2\varphi}g^{\mu\nu}\partial_\mu \varphi \partial_\nu\varphi-V(\varphi)+\mathcal{L}_\text{m} \right]\,, \end{equation} when the JBD parameter $\omega_\text{JBD}$ is null. It is also possible to write this action in the Einstein frame via a conformal transformation \begin{equation}\label{eq:confmetric} \tilde{g}_{\mu\nu}=\Omega^2g_{\mu\nu}\,, \end{equation} where $\Omega^2$ is the conformal factor and the tilde represents quantities in the Einstein frame. The Ricci scalars in each of the frames $R$ and $\tilde{R}$ have the relation \begin{equation}\label{eq:confRicci} R=\Omega^2\left(\tilde{R}+6\tilde{\square}\ln\Omega - 6\tilde{g}^{\mu\nu}\partial_\mu(\ln \Omega) \partial_\nu (\ln \Omega) \right)\,.
\end{equation} Substituting in the action \eqref{eq:actionfrJBD} and using the relation $\sqrt{-g}=\Omega^{-4}\sqrt{-\tilde{g}}$ we can rewrite it as \begin{align}\label{eq:actionfreins1} S = \int d^4x \sqrt{-\tilde{g}} \Big[\kappa \Omega^{-2} \varphi \Big(\tilde{R} &+6\tilde{\square}(\ln\Omega) - 6\tilde{g}^{\mu\nu}\partial_\mu(\ln \Omega) \partial_\nu (\ln \Omega) \Big) \nonumber \\ &-\Omega^{-4}\varphi^2 U+\Omega^{-4}\mathcal{L}_\text{m}(\Omega^{-2}\tilde{g}_{\mu\nu},\Psi_M) \Big]\,, \end{align} where now the matter Lagrangian is a function of the transformed metric $\tilde{g}_{\mu\nu}$ and the matter fields $\Psi_M$, and $U$ is a potential defined as \begin{equation}\label{eq:fRpotential} U=\kappa\frac{\chi(\varphi)\varphi-f\left(\chi(\varphi)\right) }{ \varphi^2}\,. \end{equation} Inspection of the action \eqref{eq:actionfreins1} shows that it takes the Einstein-frame form for the choice of conformal factor \begin{equation}\label{eq:conftransfR} \Omega^2=\varphi\,, \end{equation} where it is assumed that $\varphi>0$. We now rescale the scalar field as \begin{equation}\label{eq:fRfield} \phi\equiv \sqrt{3 \kappa} \ln\left(\varphi\right)\,. \end{equation} Since the integral $\int d^4x \sqrt{-\tilde{g}}\,\tilde{\square}(\ln\Omega)$ is a total divergence and vanishes by Gauss's theorem, we can finally write the action \eqref{eq:actionfreins1} in the Einstein frame \begin{equation} S = \int d^4x \sqrt{-\tilde{g}} \left[\kappa\tilde{R} - \frac{1}{2}\partial^\alpha\phi \partial_\alpha \phi - U(\phi)+e^{-2\frac{\phi }{ \sqrt{3\kappa}}}\mathcal{L}_\text{m}\left(e^{-\frac{\phi }{ \sqrt{3\kappa}}}\tilde{g}_{\mu\nu},\Psi_M\right) \right]\,, \end{equation} where $e^{-\frac{\phi }{ \sqrt{3\kappa}}}\tilde{g}_{\mu\nu}$ is the physical metric.
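The normalization in Eq. \eqref{eq:fRfield} is what renders the scalar kinetic term canonical. A minimal symbolic sketch (treating one component of $\partial_\mu\phi$ as a plain symbol $X$):

```python
# Check that the conformal transformation Omega^2 = varphi together with the
# rescaling phi = sqrt(3 kappa) ln(varphi) yields a canonical kinetic term.
import sympy as sp

kappa, phi, X = sp.symbols('kappa phi X', positive=True)

varphi = sp.exp(phi / sp.sqrt(3 * kappa))    # inverse of Eq. (fRfield)
Omega2 = varphi                              # conformal factor, Eq. (conftransfR)
dlnOmega = X / (2 * sp.sqrt(3 * kappa))      # d(ln Omega) = d(phi)/(2 sqrt(3 kappa))

# coefficient of the gradient-squared term in Eq. (actionfreins1):
kinetic = kappa * varphi / Omega2 * 6 * dlnOmega**2
print(sp.simplify(kinetic - X**2 / 2))   # 0
```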
These three representations are equivalent at the classical level, as long as one takes care to scale the fundamental and derived units when performing conformal transformations \cite{Faraoni2007c}, so one can choose to work in the most convenient representation. The scalar-field representation may be more familiar to particle physicists, whereas the geometric nature of $f(R)$ may appeal more to mathematicians and relativists. \section{The weak-field limit and local constraints} Modern measurements of Solar System dynamics allow one to derive local constraints on modified gravity. This is usually done by performing a weak-field expansion of the field equations on a perturbed Minkowski or FLRW metric. In general, one must always ensure two conditions: \textit{Condition 1:} $f(R)$ is analytical at the background curvature $R=R_0$. \textit{Condition 2:} The pressure of the local star-like object is approximately null, $p\simeq 0$. This implies that the trace of the energy-momentum tensor is simply $T\simeq -\rho$. Additionally, a third condition is necessary if one wishes to avoid non-Newtonian terms in the weak-field expansion: \textit{Condition 3:} $mr\ll1$, where $m$ is the effective mass of the scalar degree of freedom of the theory (defined in Eq. \eqref{eq:fR-mass-param}) and $r$ is the distance to the local star-like object. If condition 3 is satisfied, then the extra scalar degree of freedom (the Ricci scalar) will have a range longer than the size of the Solar System. In this case, one can perform a parametrized post-Newtonian (PPN) expansion \cite{Eddington:1922,Robertson:1962,Nordtvedt:1968b,Will:1972,Will:1973} around a background FLRW metric, and the resulting expansion will allow one to derive constraints from experimental measurements of the PPN parameters \cite{Chiba2007}.
On the other hand, if the range of the scalar degree of freedom is shorter than the typical distances in the Solar System, then the weak-field expansion will feature Yukawa-like terms, which can be constrained by experiment \cite{Naf2010}. \subsection[Post-Newtonian Expansion]{Post-Newtonian expansion} The PPN formalism provides a systematic approach in which one can analyse and compare different metric theories of gravity in the weak-field slow-motion limit. Essentially, this relies on performing a linear expansion of the metric and the field equations of the modified gravity theory and then comparing them against the post-Newtonian parameters, assigned to different gravitational potentials, whose values can be constrained by experiment. For $f(R)$ gravity, one assumes that the scalar curvature can be expressed as the sum of two components \begin{equation} R(r,t)\equiv R_0(t)+R_1(r) \,, \end{equation} where $R_0(t)$ is the background spatially homogeneous scalar curvature, and $R_1(r)$ is a time-independent perturbation to the background curvature. Since the timescales considered in Solar System dynamics are much shorter than cosmological ones, we can take the background curvature to be constant, \textit{i.e.} $R_0= \text{const.}$. We can therefore separate the source for the Ricci scalar into two different components, one cosmological and another local, so that the trace equation \eqref{eq:R-TfR} reads \begin{equation}\label{eq:R-Tf2R-sep} 3\square f'-2f+Rf'=\frac{1}{2\kappa}(T^\text{cos}+T^\text{s})\,, \end{equation} where $T^\text{cos}$ and $T^\text{s}$ are the cosmological and local contributions, respectively. If one takes into account that $R_1\ll R_0$ (see \cite{Chiba2007} for more details) and that $R_0$ solves the terms of the expansion of Eq.
\eqref{eq:R-Tf2R-sep} that are independent of $R_1$, one can write it as \begin{equation} \label{eq:fR-trace-exp} 3f''_0\,\square R_1(r)-\left(f'_0-f''_0 R_0\right)R_1= \frac{1}{2\kappa}T^\text{s} \,, \end{equation} where $f_0\equiv f(R_0)$. For a static, spherically symmetric body, one can further write this as \begin{equation} \label{eq:fR-trace-exp-2} \nabla^2R_1-m^2 R_1 = -\frac{1}{6\kappa} \frac{\rho^\text{s}}{f''_0} \,, \end{equation} where $\rho^\text{s}$ is the body's density and $m$ is the mass parameter, defined as \begin{equation} \label{eq:fR-mass-param} m^2 \equiv \frac{1}{3} \left(\frac{f'_0}{f''_0}-R_0-3\frac{\square f''_0}{f''_0}\right) \,. \end{equation} If $mr\ll1$, one can solve Eq. \eqref{eq:fR-trace-exp-2} outside the star to obtain \begin{equation} \label{eq:fR-R1-sol} R_1=\frac{1}{24\kappa\pi f''_0}\frac{M}{r}\,, \end{equation} where $M$ is the total mass of the source. By considering a flat FLRW metric with a spherically symmetric perturbation \begin{equation} \label{eq:pert-flrw} ds^2=-\left[1+2\Psi(r)\right] dt^2 + a^2(t)\left\{\left[1+2\Phi(r)\right]dr^2 +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right\} \,, \end{equation} and solving the linearized field equations for $\Psi(r)$ and $\Phi(r)$ with the solution obtained for $R_1$ \eqref{eq:fR-R1-sol} and $a(t=t_0)=1$, one obtains \cite{Chiba2007} \begin{equation} \label{eq:pert-flrw-2} ds^2=-\left(1-\frac{2GM}{r}\right) dt^2 + \left(1+\frac{GM}{r}\right)\left(dr^2 +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right) \,, \end{equation} where $M$ is the total mass of the central source. Comparing Eq. \eqref{eq:pert-flrw-2} with the equivalent PPN metric \begin{equation} \label{eq:metric-ppn-fr} ds^2=-\left(1-\frac{2GM}{r}\right) dt^2 + \left(1+\gamma\frac{2GM}{r}\right)\left(dr^2 +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right) \,, \end{equation} where $\gamma$ is a PPN parameter, one can see that in $f(R)$ gravity $\gamma=1/2$.
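As a concrete illustration of Eq. \eqref{eq:fR-mass-param} (a sketch, with the quadratic model chosen as an assumed example), for $f(R)=R+aR^2$ the $\square f''_0$ term drops out and the mass parameter is independent of the background curvature:

```python
# Mass parameter of Eq. (fR-mass-param) for the quadratic model f = R + a R^2,
# for which f'' = 2a is constant and the box f''_0 term vanishes.
import sympy as sp

R, a = sp.symbols('R a', positive=True)
f = R + a * R**2
fp = sp.diff(f, R)       # f' = 1 + 2aR
fpp = sp.diff(f, R, 2)   # f'' = 2a

m2 = sp.Rational(1, 3) * (fp / fpp - R)
print(sp.simplify(m2))   # 1/(6*a)
```

The result, $m^2=1/(6a)$, coincides with the parameter $\alpha^2$ of the Yukawa-type expansion discussed in the next subsection, so the two treatments agree on the scalar mass.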
The tightest bound on this parameter comes from the tracking of the Cassini probe, where it was determined that $\gamma-1= (2.1\pm2.3)\times 10^{-5}$. Thus $f(R)$ gravity is generally incompatible with Solar System tests, provided that the linearized limit is valid and that $m r\ll 1$ (see \cite{Chiba2007} for a more in-depth look at this constraint). Nevertheless, some $f(R)$ models can feature a so-called ``chameleon'' mechanism which hides the scalar degree of freedom in regions with high densities, and thus allows the theory to be constrained by local tests, rather than outright excluded \cite{Faulkner2007,Capozziello2008}. \subsection{``Post-Yukawa'' expansion} In \cite{Naf2010} the authors perform a more general weak-field expansion (\textit{i.e.} accounting for Yukawa terms) of the field equations on a perturbed asymptotically flat metric with components $g_{\mu\nu}$ given by \begin{align} \label{eq:fR-asymp-flat-metric} g_{00} &= -1+ {}^{(2)}h_{00} + {}^{(4)} h_{00} + \mathcal{O}(c^{-6}) \,, \nonumber \\ g_{0i} &= {}^{(3)}h_{0i}+\mathcal{O}(c^{-5}) \,, \nonumber \\ g_{ij} &= \delta_{ij} + {}^{(2)} h_{ij} + \mathcal{O}(c^{-4}) \,, \end{align} where ${}^{(n)}h_{\mu\nu}$ denotes a quantity of order $\mathcal{O}(c^{-n})$. Since $R\sim \mathcal{O}(c^{-2})$ and assuming $f$ to be analytic at $R=0$ with $f'(0)=1$, it is sufficient to consider the expansion \begin{equation} \label{eq:fR-Yuk} f(R) = R + aR^2 \,, \qquad a\neq 0 \,, \end{equation} where the presence of a cosmological constant is ignored due to considering an asymptotically flat background. Introducing a scalar field $\varphi$ defined as \begin{equation} \label{eq:phi-fr-yuk} f'(R) = 1+2a\varphi \,, \end{equation} one can rewrite the trace equation \eqref{eq:R-TfR} as \begin{equation} \label{eq:fR-trace-phi} \square\varphi= \frac{1}{12a\kappa}T+\frac{1}{6a}\varphi \,.
\end{equation} If one expands the Ricci tensor $R_{\mu\nu}$, the scalar field $\varphi$ and the energy-momentum tensor $T^{\mu\nu}$ to the necessary order, then Eq. \eqref{eq:fR-trace-phi} can be rewritten up to leading order as \begin{equation} \label{eq:fR-trace-yuk} \nabla^2 {}^{(2)}\varphi -\alpha^2 {}^{(2)}\varphi= -\frac{\alpha^2}{2\kappa} {}^{(-2)}T^{00}\,, \end{equation} where $\alpha^2\equiv 1/(6a)$, which has the solution \begin{equation} {}^{(2)}\varphi(\vec{x},t)=\frac{1}{c^2}V(\vec{x},t) \,, \end{equation} where \begin{equation} V(\vec{x},t)\equiv\frac{2G\alpha^2}{c^2}\int\frac{{}^{(-2)}T^{00}(\vec{x}',t)e^{-\alpha|\vec{x}-\vec{x}'|}}{|\vec{x}-\vec{x}'|}d^3x' \,. \end{equation} One can then use the field equations \eqref{eq:fieldfR} to obtain the solution for ${}^{(2)}h_{00}$ \begin{equation} {}^{(2)}h_{00}(\vec{x},t)=\frac{1}{c^2}\left[2U(\vec{x},t)-W(\vec{x},t)\right] \,, \end{equation} with the potentials \begin{equation} \label{eq:fr-U} U(\vec{x},t)\equiv\frac{4G}{3c^2}\int\frac{{}^{(-2)}T^{00}(\vec{x}',t)}{|\vec{x}-\vec{x}'|}d^3x' \,, \end{equation} \begin{equation} \label{eq:fr-W} W(\vec{x},t)\equiv\frac{1}{12\pi}\int\frac{V(\vec{x}',t)}{|\vec{x}-\vec{x}'|}d^3x' \,. \end{equation} Whereas the potential $U$ corresponds to the standard Newtonian term, $W$ contains the Yukawa term in $V$. The remaining components of the perturbed metric can be calculated similarly (we direct the reader to \cite{Naf2010} for the complete calculation). Yukawa corrections to the Newtonian potential are already tightly constrained in the literature, and in this case, the Eöt-Wash experiment \cite{Kapner2007} constrains the model parameter $a=1/(6\alpha^2)$ to \begin{equation} a\lesssim 10^{-10}\text{ m}^2\,. \end{equation} \chapter[Nonminimally Coupled $f(R)$ Theories]{Nonminimally Coupled $f(R)$ Theories} \label{chapter_nmc} A popular class of modified gravity theories is that featuring a nonminimal coupling (NMC) between the matter fields and gravity or another field.
As mentioned in Chapter \ref{chapter_lag}, this leads to the appearance of the on-shell Lagrangian of the matter fields in the equations of motion (EOM), in addition to the usual energy-momentum tensor (EMT). In this chapter, we will analyse the impact of this modification in the background dynamics of the Universe, its thermodynamics and the behaviour of particles and fluids. While these theories have been studied before in the literature, the use of the appropriate Lagrangian for the matter fields and the study of its consequences are presented in Sections \ref{sec.cosmo}, \ref{sec:emNMC}, \ref{sec.scalar}, \ref{sec.firstlaw}, \ref{sec.seclaw} and \ref{sec.htheo}, and published in Refs. \cite{Avelino2018,Azevedo2019b,Avelino2020,Azevedo2020,Avelino2022}. \section{Action and equations of motion} The family of theories investigated in this thesis is $f(R)$-inspired, featuring an NMC between curvature and matter. It can be defined through the action \begin{equation}\label{eq:actionf1f2} S = \int d^4x \sqrt{-g} \left[\kappa f_1(R)+f_2(R)\mathcal{L}_\text{m} \right] \,, \end{equation} where $f_1(R)$ and $f_2(R)$ are arbitrary functions of the Ricci scalar $R$, $\mathcal{L}_\text{m}$ is the Lagrangian of the matter fields and $\kappa = c^4/(16\pi G)$ \cite{Allemandi2005,Nojiri2004,Bertolami2007,Sotiriou2008,Harko2010,Harko2011,Nesseris2009,Thakur2013,Bertolami2008a,Harko2013,Harko2015}. One recovers GR by taking $f_1(R)=R-2\Lambda$ and $f_2(R)=1$. 
The field equations are obtained as usual by imposing a null variation of the action with respect to the metric, \begin{equation}\label{eq:fieldNMC} FG_{\mu\nu}=\frac{1}{2}f_2 T_{\mu\nu} + \Delta_{\mu\nu}F+\frac{1}{2}g_{\mu\nu}\kappa f_1 - \frac{1 }{2}g_{\mu\nu} RF \,, \end{equation} where \begin{equation} \label{eq:F} F=\kappa f'_1+f'_2\mathcal{L}_\text{m}\,, \end{equation} the primes denote a differentiation with respect to the Ricci scalar, $G_{\mu\nu}$ is the Einstein tensor and $\Delta_{\mu\nu}\equiv\nabla_\mu \nabla_\nu-g_{\mu\nu}\square$, with $\square=\nabla_\mu \nabla^\mu$ being the D'Alembertian operator, and $T_{\mu\nu}$ are the components of the EMT given by \begin{equation} \label{eq:energy-mom3} T_{\mu\nu} = - \frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\mathcal{L}_\text{m}\right)}{\delta g^{\mu\nu}}\,. \end{equation} An alternative way to write the field equations is \begin{equation} \label{eq:eff_eom} G_{\mu\nu}=\lambda\left(T_{\mu\nu}+\hat{T}_{\mu\nu}\right)\,, \end{equation} where $\lambda$ is an effective coupling and $\hat{T}_{\mu\nu}$ is an effective EMT. Comparing Eqs. \eqref{eq:fieldNMC} and \eqref{eq:eff_eom} immediately sets \begin{equation} \lambda = \frac{f_2}{2\kappa f'_1+2\mathcal{L}_\text{m} f'_2}\,, \end{equation} and \begin{equation} \hat{T}_{\mu\nu} = \left(\frac{\kappa f_1}{f_2}-\frac{RF}{f_2}\right)g_{\mu\nu}+\frac{2\Delta_{\mu\nu} F}{f_2}\,. \end{equation} Demanding that gravity remain attractive requires a positive effective coupling, that is \cite{Bertolami2009} \begin{equation} \label{eq:pos_grav} \lambda=\frac{f_2}{2\kappa f'_1+2\mathcal{L}_\text{m} f'_2} > 0\,. \end{equation} A common requirement for the NMC functions $f_1$ and $f_2$ is that the theory remain free of Dolgov-Kawasaki instabilities. This criterion is similar to the one found for minimally-coupled $f(R)$ theories and was initially determined in \cite{Faraoni2007} and later extended in \cite{Bertolami2009}.
It is expressed by \begin{equation} \label{eq:dolgov} \kappa f''_1+f''_2\mathcal{L}_\text{m}\geq0\,. \end{equation} A crucial feature of these theories is that the EMT is no longer covariantly conserved: in fact, applying the Bianchi identities to the EOM \eqref{eq:fieldNMC} leads to \begin{equation}\label{eq:conservNMC} \nabla^\mu T_{\mu\nu} = \frac{f'_2}{ f_2}\left(g_{\mu\nu}\mathcal{L}_\text{m}-T_{\mu\nu}\right)\nabla^\mu R \,. \end{equation} This energy-momentum nonconservation associated with the NMC to gravity suggests that a more general definition of the EMT, including a yet to be defined gravitational contribution, may be worthy of consideration. This has not yet been sufficiently explored in the context of NMC theories, but has proven to be quite problematic in the context of GR \cite{Bonilla1997,Clifton2013,Sussman2014}. Likewise, the fluids will suffer an additional acceleration due to the NMC to gravity. Consider a perfect fluid with EMT \begin{equation} \label{eq:EMT_fluid} T^{\mu\nu}=(\rho+p)U^\mu U^\nu + p g^{\mu\nu} \,, \end{equation} where $\rho$ and $p$ are the proper energy density and pressure of the fluid, respectively, and $U^\mu$ is the 4-velocity of a fluid element, satisfying $U_\mu U^\mu = -1$. The 4-acceleration of a perfect fluid element may be written as \cite{Bertolami2007,Bertolami2008a} \begin{align} \label{eq:nmc_fluid_acc} \mathfrak{a}^\mu_\text{[fluid]}&=\frac{dU^\mu}{d\tau}+\Gamma^\mu_{\alpha\beta}U^\alpha U^\beta \nonumber\\ &=\frac{1}{\rho+p}\left[\frac{f'_2}{ f_2}\left(\mathcal{L}_\text{m}-p\right)\nabla_\nu R + \nabla_\nu p\right]h^{\mu\nu}_\text{[fluid]}\,, \end{align} where $h^{\mu\nu}_\text{[fluid]}\equiv g^{\mu\nu}+U^\mu U^\nu$ is the projection operator. It is then clear that the knowledge of the Lagrangian of the matter fields, regardless of their macroscopic or microscopic description, is fundamental to the study of these theories. 
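As a consistency check of Eqs. \eqref{eq:eff_eom}--\eqref{eq:pos_grav} (a sketch treating the tensor components $g_{\mu\nu}$ and $T_{\mu\nu}$ as scalar placeholders), the GR limit $f_1(R)=R-2\Lambda$, $f_2(R)=1$ recovers the Einstein equations with a cosmological constant:

```python
# GR limit of the NMC effective field equations G = lambda (T + T_hat).
import sympy as sp

R, kappa, Lam, Lm = sp.symbols('R kappa Lambda L_m')
g, T = sp.symbols('g T')    # placeholders for g_{mu nu} and T_{mu nu}

f1 = R - 2 * Lam
f2 = sp.Integer(1)
F = kappa * sp.diff(f1, R) + sp.diff(f2, R) * Lm   # Eq. (F): here F = kappa

lam = f2 / (2 * kappa * sp.diff(f1, R) + 2 * Lm * sp.diff(f2, R))
# Delta_{mu nu} F vanishes since F is constant, leaving only:
T_hat = (kappa * f1 / f2 - R * F / f2) * g
G = sp.expand(lam * (T + T_hat))
print(G)   # -> T/(2*kappa) - Lambda*g, i.e. GR with a cosmological constant
```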
\section{Equivalence with scalar field theories}\label{subsec:scalarNMC} As shown before for $f(R)$ theories, one can similarly rewrite NMC theories as an action with two scalar fields, and perform a conformal transformation to the Einstein frame \cite{Bertolami2008b}. For completeness, we make a brief detour to show explicitly how this equivalence manifests. For the former, it is enough to write the action \begin{equation}\label{eq:actionscalarNMC1} S=\int d^4x \sqrt{-g}\left[\kappa f_1(\chi)+\varphi(R-\chi)+f_2(\chi)\mathcal{L}_\text{m} \right]\,, \end{equation} where null variations with respect to $\varphi$ and $\chi$ yield, respectively, \begin{equation}\label{eq:NMCphi} \chi=R\,, \end{equation} \begin{equation}\label{eq:NMCpsi} \varphi=\kappa f'_1(\chi)+f'_2(\chi)\mathcal{L}_\text{m}\,. \end{equation} We can rewrite the action \eqref{eq:actionscalarNMC1} in the form of a JBD-type theory with $\omega_{\rm JBD}=0$, \begin{equation}\label{eq:actionscalarNMC2} S=\int d^4x \sqrt{-g}\left[\varphi R-V(\chi,\varphi)+f_2(\chi)\mathcal{L}_\text{m} \right]\,, \end{equation} with a potential \begin{equation}\label{eq:potentialscalarNMC} V(\chi,\varphi)=\varphi\chi-\kappa f_1(\chi)\,. \end{equation} Alternatively, one can make a conformal transformation $g_{\mu\nu}\rightarrow \tilde{g}_{\mu\nu}=\Omega^2 g_{\mu\nu}$ so that the action \eqref{eq:actionscalarNMC2} reads \begin{align}\label{eq:actionscalarNMC3} S = \int d^4x \sqrt{-\tilde{g}} \Big[\kappa \Omega^{-2}\varphi \Big(&\tilde{R} +6\tilde{\square}(\ln\Omega) - 6\tilde{g}^{\mu\nu}\partial_\mu(\ln \Omega) \partial_\nu (\ln \Omega) \Big) \nonumber \\ &-\Omega^{-4}\varphi^2 U+\Omega^{-4}f_2(\chi)\mathcal{L}_\text{m}(\Omega^{-2}\tilde{g}_{\mu\nu},\Psi_M) \Big]\,, \end{align} where the potential $U$ is given by \begin{equation}\label{eq:potentialscalarNMC2} U=\kappa\frac{\varphi\chi-\kappa f_1(\chi) }{ \varphi^2}\,.
\end{equation} To express the action in the Einstein frame, the conformal factor must now obey \begin{equation}\label{eq:conftransNMC} \Omega^2=\varphi\,, \end{equation} where it is assumed that $\varphi>0$. We now redefine the two scalar fields as \begin{equation}\label{eq:NMCfield1} \phi\equiv \sqrt{3 \kappa} \ln\left(\varphi\right)\,, \end{equation} \begin{equation}\label{eq:NMCfield2} \psi\equiv f_2(\chi)\,. \end{equation} Once again Gauss's theorem allows us to finally write the action \eqref{eq:actionscalarNMC3} as \begin{align} S = \int d^4x \sqrt{-\tilde{g}} \bigg[\kappa\tilde{R} &- \frac{1}{2}\partial^\alpha\phi \partial_\alpha \phi - U(\phi,\psi) \nonumber \\ &+\psi e^{-2\frac{\phi }{ \sqrt{3\kappa}}}\mathcal{L}_\text{m}\left(e^{-\frac{\phi }{ \sqrt{3\kappa}}}\tilde{g}_{\mu\nu},\Psi_M\right) \bigg]\,, \end{align} where $e^{-\frac{\phi }{ \sqrt{3\kappa}}}\tilde{g}_{\mu\nu}$ is the physical metric. \section{NMC cosmology} \label{sec.cosmo} Consider a homogeneous and isotropic universe described by a Friedmann-Lemaître-Robertson-Walker (FLRW) metric, represented by the line element \begin{equation} \label{eq:line-nmc} ds^2=-dt^2+a^2(t)\left[\frac{dr^2}{1-kr^2} +r^2 d\theta^2 +r^2\sin^2\theta d\phi^2\right]\,, \end{equation} where $a(t)$ is the scale factor, $k$ is the spatial curvature of the universe (which is observationally constrained to be very close to zero), $t$ is the cosmic time, and $r$, $\theta$ and $\phi$ are polar comoving coordinates, filled by a collection of perfect fluids, with EMT given by Eq. \eqref{eq:EMT_fluid}.
The modified Friedmann equation (MFE) becomes \begin{equation}\label{eq:fried-f1f2-1} H^2+\frac{k}{a^2}=\frac{1}{6F}\left[FR- \kappa f_1+f_2\sum_i\rho_i-6H\dot{F}\right]\,, \end{equation} and the modified Raychaudhuri equation (MRE) becomes \begin{equation}\label{eq:ray-f1f2-1} 2\dot{H}+3H^2+\frac{k}{a^2}=\frac{1}{2F}\left[FR-\kappa f_1-f_2 \sum_i p_i-4H\dot{F}-2\ddot{F}\right] \,, \end{equation} where $\rho_i$ and $p_i$ are the energy density and pressure of each of the fluids, respectively, a dot represents a derivative with respect to the cosmic time, $H\equiv\dot{a}/a$ is the Hubble factor, $F'\equiv \kappa f''_1+f''_2 \mathcal{L}_\text{m}$, and $\mathcal{L}_\text{m}$ is now the on-shell matter Lagrangian. It should be noted that the presence of both standard and NMC $f(R)$ terms can produce very interesting dynamics, both at late and early times. It was shown in Chapter \ref{chapter_lag} that the on-shell Lagrangian of a perfect fluid composed of particles with fixed rest mass and structure is given by \begin{equation} \label{eq:lag-nmc} \mathcal{L}_\text{m}=T=T^{\mu\nu}g_{\mu\nu}=3p-\rho \,. \end{equation} Therefore, the $t$ component of Eq. \eqref{eq:conservNMC} becomes \begin{equation} \label{eq:dens-cons_nmc} \dot{\rho}+3H(\rho+p)=-(\mathcal{L}_\text{m}+\rho)\frac{\dot{f}_2}{ f_2}=-3p\frac{\dot{f}_2}{ f_2}\,, \end{equation} where the last equality follows from $\mathcal{L}_\text{m}+\rho=3p$. For a single fluid $i$ with constant equation of state (EOS) parameter $w_i\equiv p_i/\rho_i$, Eq. \eqref{eq:dens-cons_nmc} can be directly integrated to give \begin{equation} \label{eq:densityevo} \rho_i=\rho_{i,0} a^{-3(1+w_i)} f_2^{-3w_i}\,, \end{equation} where $\rho_{i,0}$ is the energy density at the present time, when $a(t)=a_0=1$.
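The integration leading to Eq. \eqref{eq:densityevo} is easily verified symbolically. A minimal sketch, leaving $a(t)$ and $f_2(t)$ as arbitrary positive functions:

```python
# Verify that rho = rho_0 a^{-3(1+w)} f2^{-3w} solves the t component of the
# nonconservation law, Eq. (dens-cons_nmc), for constant EOS parameter w.
import sympy as sp

t = sp.symbols('t')
w, rho0 = sp.symbols('w rho_0', positive=True)
a = sp.Function('a', positive=True)(t)
f2 = sp.Function('f_2', positive=True)(t)

H = sp.diff(a, t) / a
rho = rho0 * a**(-3 * (1 + w)) * f2**(-3 * w)   # Eq. (densityevo) for one fluid
p = w * rho

residual = sp.diff(rho, t) + 3 * H * (rho + p) + 3 * p * sp.diff(f2, t) / f2
print(sp.simplify(residual))   # 0
```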
It is then immediate to show that in the case of dust ($w=0$) the usual conservation law $\rho \propto a^{-3}$ holds, while in the case of photons ($w=1/3$) the NMC generally leads to a significant change to the evolution of the photon energy density ($\rho \propto a^{-4}f_2^{-1}$ instead of $\rho \propto a^{-4}$). The relative change to the conservation laws of photons and baryons may be used to derive strong constraints on the form of $f_2$, as we shall see in Chapter \ref{chapter_constr}. Taking into account that the proper pressure of the fluid is given by $p=\rho v^2/3$ (assuming, for simplicity, that $v$ is the same for all particles), requiring that the number of particles per comoving volume be conserved (or, equivalently, that $\rho \propto \gamma a^{-3}$, where $\gamma \equiv 1/\sqrt{1-v^2}$), and substituting into Eq. \eqref{eq:dens-cons_nmc}, we find that the velocity of fluid particles in FLRW spacetime is given by \begin{equation} \label{eq:dotvel} {\dot v} +\left( H + \frac{{\dot f}_2}{f_2} \right) (1-v^2) v =0 \,. \end{equation} Hence, the momentum of such a particle evolves as \cite{Avelino2018} \begin{equation} \label{eq:momev} m \gamma v \propto (a f_2)^{-1}\,. \end{equation} \section{Energy-momentum constraints in general relativity} \label{sec.em-cons} It is worth taking a closer look at how the nonconservation of the EMT influences the dynamics of particles and fluids, and how such dynamics constrain the allowed on-shell Lagrangians. In preparation for a subsequent analysis in the broader context of NMC gravity, in this section we shall present five different derivations of the equation for the evolution of the speed of free localized particles of fixed mass and structure in a homogeneous and isotropic FLRW universe, relying solely on the conservation of linear momentum and energy in the context of GR.
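Before working through those derivations, one can verify symbolically that the momentum law \eqref{eq:momev} of the previous section indeed solves Eq. \eqref{eq:dotvel} (a sketch; $C$ denotes the constant of proportionality):

```python
# Check that gamma*v = C/(a f2), i.e. Eq. (momev), satisfies Eq. (dotvel).
import sympy as sp

t, C = sp.symbols('t C', positive=True)
a = sp.Function('a', positive=True)(t)
f2 = sp.Function('f_2', positive=True)(t)

H = sp.diff(a, t) / a
u = C / (a * f2)               # u = gamma*v, the scaling of Eq. (momev)
v = u / sp.sqrt(1 + u**2)      # invert gamma*v to recover the velocity

residual = sp.diff(v, t) + (H + sp.diff(f2, t) / f2) * (1 - v**2) * v
print(sp.simplify(residual))   # 0
```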
Let us start by considering the Einstein-Hilbert action \begin{equation} S=\int {\sqrt {-g}}(\kappa R+{\mathcal L}_{\rm m}) d^4 x \,. \end{equation} In general relativity the EMT of the matter fields, whose components are given by Eq. \eqref{eq:energy-mom3}, is covariantly conserved, so that \begin{equation} \nabla_{\nu} {T{^\nu}}_\mu= 0\,. \label{emconservation} \end{equation} Throughout this section we shall consider either the EMT of the individual particles with components ${T_*}^{\mu \nu}$ or the EMT of a perfect fluid composed of many such particles. The components of the latter are given by Eq. \eqref{eq:EMT_fluid} --- notice the use of $*$ to identify the EMT of a single particle. Energy-momentum conservation implies that \begin{equation} h^{\mu \beta} \nabla_{\alpha} {T{^\alpha}}_\beta = (\rho+p)U^\nu \nabla_\nu U^\mu + h^{\mu \beta} \nabla_\beta p= 0 \,, \end{equation} where $h^{\mu \nu}=g^{\mu \nu}+ U^\mu U^\nu$ is the projection operator. In the case of dust $p=0$ and, therefore, \begin{equation} U^\nu \nabla_\nu U^\mu = 0 \,. \end{equation} \subsection{Particles in a Minkowski spacetime} Consider a single particle and a rest frame where its EMT is static. Assuming that the gravitational interaction plays a negligible role in the particle structure, the spacetime in and around the particle may be described by the Minkowski line element \begin{equation} ds^2=-dt^2 +d {\vb r} \cdot d {\vb r} =-dt^2 +dx^2+dy^2+dz^2 \,, \end{equation} where $t$ is the physical time and ${\vb r} = (x,y,z)$ are Cartesian coordinates.
The particle's proper frame is defined by \begin{equation} \int {{T_*}^i}_{0[\rm prop]} \, d^3 r_{[\rm prop]}= -\int {{T_*}^0}_{i[\rm prop]} \, d^3 r_{[\rm prop]} =0 \,, \label{Ti0} \end{equation} where Latin indices take the values $1,2,3$, with \begin{equation} E_{[\rm prop]} = - \int {{T_*}^0}_{0[\rm prop]} \, d^3 r_{[\rm prop]} \label{Eprop} \end{equation} being the proper energy of the particle (the subscript $[\rm prop]$ is used to designate quantities evaluated in the proper frame). On the other hand, the generalized von Laue conditions \cite{doi:10.1002/andp.19113400808,Avelino2018a}, \begin{equation} \int {{T_*}^1}_{1[\rm prop]} \, d^3 r_{[\rm prop]} = \int {{T_*}^2}_{2[\rm prop]} \, d^3 r_{[\rm prop]} = \int {{T_*}^3}_{3[\rm prop]} \, d^3 r _{[\rm prop]} = 0 \,, \label{vonlaue} \end{equation} are required for particle stability. Consider a Lorentz boost in the $x$ direction defined by \begin{eqnarray} t&=&\gamma(t_{[\rm prop]}+vx_{[\rm prop]})\,,\\ x&=&\gamma(x_{[\rm prop]}+vt_{[\rm prop]})\,,\\ y&=&y_{[\rm prop]}\,,\\ z&=&z_{[\rm prop]}\,, \end{eqnarray} where $\gamma=\left(1-v^{2}\right)^{-1/2}$ is the Lorentz factor and $v$ is the particle velocity. Under this boost, the components ${{T_*}^{\mu}}_{\nu}$ of the EMT transform as \begin{equation} {{T_*}^{\mu}}_{\nu}={\Lambda^{\mu}}_{\alpha}\,{\Lambda_{\nu}}^{\beta}\,{{T_*}^{\alpha}}_{\beta[\rm prop]}\,,\label{emtransf} \end{equation} where the non-zero components of ${\Lambda^\mu}_\alpha$ and ${\Lambda_\nu}^\beta$ are \begin{eqnarray} {\Lambda^0}_0&=&{\Lambda^1}_1={\Lambda_0}^0={\Lambda_1}^1=\gamma\,,\label{lorentz1}\\ {\Lambda^0}_1&=&{\Lambda^1}_0=-{\Lambda_0}^1=-{\Lambda_1}^0=\gamma v\,,\label{lorentz2}\\ {\Lambda^2}_2&=&{\Lambda^3}_3={\Lambda_2}^2={\Lambda_3}^3=1\label{lorentz3}\,. \end{eqnarray}
In the moving frame the energy and linear momentum of the particle are given, respectively, by \begin{eqnarray} E &=& - \int {{T_*}^0}_0 \, d^3 r =E_{[\rm prop]} \gamma \,, \label{Eeq}\\ \mathfrak{p} &=& \int {{T_*}^0}_1 \, d^3 r=E_{[\rm prop]} \gamma v =E v \label{peq}\,, \end{eqnarray} where Eqs. (\ref{Ti0}), (\ref{Eprop}), (\ref{emtransf}), (\ref{lorentz1}), (\ref{lorentz2}), as well as Lorentz contraction, have been taken into account in the derivation of Eqs. (\ref{Eeq}) and (\ref{peq}). These two equations imply that $E^2-\mathfrak{p}^2=E_{[\rm prop]}^2$ and \begin{equation} \dot{\mathfrak{p}} = \dot E \frac{E}{\mathfrak{p}}= \frac{\dot E}{v}=E_{[\rm prop]} \dot v \gamma^3\label{dotp}\,. \end{equation} On the other hand, using Eqs. (\ref{emtransf}), (\ref{lorentz2}) and (\ref{lorentz3}) one finds \begin{eqnarray} \int {{T_*}^1}_1 \, d^3 r&=& E_{[\rm prop]} \gamma v^2 = E v^2\,,\label{T11}\\ \int {{T_*}^2}_2 \, d^3 r &=& \int {{T_*}^3}_3 \, d^3 r = 0 \,, \end{eqnarray} so that \begin{equation} \int {{T_*}^i}_i\, d^3 r=E_{[\rm prop]}\gamma v^2 = E v^2\,. \label{trace} \end{equation} Also notice that \begin{equation} \int {T_*} \, d^3 r= \int {{T_*}^\mu}_\mu\, d^3 r=-\frac{E_{[\rm prop]}}{\gamma} = -\frac{E}{\gamma^2}\,. \label{traceT} \end{equation} \subsection{Free particles in an FLRW spacetime} In a flat, homogeneous and isotropic universe, described by the FLRW metric, the line element may be written as \begin{equation} ds^2=a^2(\zeta)(-d\zeta^2+d \vb{q} \cdot d \vb{q}\,) \,, \end{equation} where $a(\zeta)$ is the scale factor, $\zeta = \int a^{-1} dt$ is the conformal time and $\vb q$ are comoving Cartesian coordinates.
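The chain of identities above can be confirmed with a few lines of computer algebra. The sketch below (an illustrative check under the stated conventions; \texttt{E\_prop} denotes $E_{[\rm prop]}$) boosts the proper-frame volume integrals of ${{T_*}^{\alpha}}_{\beta}$ and reproduces Eqs. (\ref{Eeq}), (\ref{peq}), (\ref{T11}) and (\ref{traceT}), including the $1/\gamma$ factor from Lorentz contraction of the volume element:

```python
import sympy as sp

v, E_prop = sp.symbols('v E_prop', positive=True)
gamma = 1 / sp.sqrt(1 - v**2)

# Proper-frame volume integrals of T^alpha_beta: only the 00 component
# survives, by Eqs. (Ti0), (Eprop) and the von Laue conditions (vonlaue)
T_prop = sp.diag(-E_prop, 0, 0, 0)

# Lam[mu, alpha] = Lambda^mu_alpha and Minv[nu, beta] = Lambda_nu^beta
Lam = sp.Matrix([[gamma, gamma*v, 0, 0],
                 [gamma*v, gamma, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1]])
Minv = sp.Matrix([[gamma, -gamma*v, 0, 0],
                  [-gamma*v, gamma, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])

# Moving-frame volume integrals of T^mu_nu; the overall 1/gamma accounts
# for the Lorentz contraction of the integration volume d^3r
T = (Lam * T_prop * Minv.T) / gamma

E = -T[0, 0]   # energy
p = T[0, 1]    # linear momentum
assert sp.simplify(E - gamma * E_prop) == 0
assert sp.simplify(p - gamma * v * E_prop) == 0
assert sp.simplify(T[1, 1] - E * v**2) == 0          # Eq. (T11)
assert sp.simplify(T.trace() + E_prop / gamma) == 0  # Eq. (traceT)
```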
In an FLRW spacetime the nonvanishing components of the connection are given by \begin{equation} \Gamma_{0 0}^0=\mathscr{H}\,, \quad \Gamma_{i j}^0=\mathscr{H} \,\delta_{ij}\,, \quad \Gamma_{0 j}^i=\mathscr{H} \,{\delta^i}_j \,, \end{equation} where $\mathscr{H} = \grave{a}/a$ and a grave accent denotes a derivative with respect to the conformal time $\zeta$. \subsubsection*{Linear momentum conservation} Consider again a single free particle moving along the $x$-direction. The $x$-component of Eq. (\ref{emconservation}) describing momentum conservation in an FLRW spacetime then implies that \begin{equation} 0=\nabla_\nu {{T_*}^\nu}_1 = \partial_0 {{T_*}^0}_1 + \partial_i {{T_*}^i}_1 + 4 \mathscr{H} {{T_*}^0}_1\,. \end{equation} Integrating over the spatial volume one finds that \begin{equation} \grave{\mathfrak{p}}+ \mathscr{H} \mathfrak{p}=0\,, \label{pev} \end{equation} where \begin{equation} \mathfrak{p}=\int {{T_*}^0}_1 \, d^3 r=a^3 \int {{T_*}^0}_1 \, d^3 q \,. \end{equation} In this derivation we have assumed that the particle is isolated so that the EMT vanishes outside it. Hence, \begin{equation} \int \partial_i {{T_*}^\mu}_\nu \, d^3 q =0 \label{isolated} \end{equation} for any possible value of $\mu$, $\nu$ and $i$. Notice that Eq. (\ref{pev}) implies that $\mathfrak{p} = E_{[\rm prop]} \gamma v \propto a^{-1}$. Dividing Eq. (\ref{pev}) by $E_{[\rm prop]} $, taking into account Eq. (\ref{dotp}), one obtains the equation for the evolution of the free particle velocity in a homogeneous and isotropic FLRW universe: \begin{equation} {\grave v}+ \mathscr{H}(1-v^2) v=0\,. \label{vev} \end{equation} \subsubsection*{Energy conservation} Energy conservation, on the other hand, implies that \begin{equation} 0=\nabla_\nu {{T_*}^\nu}_0 = \partial_0 {{T_*}^0}_0 + \partial_i {{T_*}^i}_0 + 3 \mathscr{H} {{T_*}^0}_0 - \mathscr{H} {{T_*}^i}_i \,. \end{equation} Integrating over the spatial volume, and using Eqs. 
(\ref{trace}) and (\ref{isolated}), one finds that \begin{equation} {\grave E}+ \mathscr{H} v^2 E=0\,, \label{Eev} \end{equation} where \begin{equation} E=-\int {{T_*}^0}_0 \,d^3 r=-a^3 \int {{T_*}^0}_0 \,d^3 q \,. \end{equation} Dividing Eq. (\ref{Eev}) by $v$, taking into account Eq. (\ref{dotp}), once again one obtains Eq. (\ref{pev}) for the evolution of linear momentum in a homogeneous and isotropic FLRW universe. \subsection{Perfect fluids in an FLRW spacetime} We shall now derive the dynamics of free particles assuming that they are part of a homogeneous perfect fluid (see Eq. \eqref{eq:EMT_fluid}) with the proper energy density $\rho$ and the proper pressure $p$ depending only on time. \subsubsection*{Linear momentum conservation: dust} In the case of a perfect fluid with vanishing proper pressure $p$, the components of the EMT are \begin{equation} \label{eq:pfemt} T^{\mu\nu}=\rho \, U^\mu U^\nu \,. \end{equation} If the fluid moves in the positive $x$-direction, then \begin{eqnarray} U^0&=&\frac{d\zeta}{d\tau} = \frac{\gamma}{a}\,,\\ U^1&=&\frac{dq^1}{d\tau} = {\grave q}^1\frac{d\zeta}{d\tau}=v \, U^0 =v\frac{\gamma}{a} \,,\\ U^2&=&U^3=0\,. \end{eqnarray} The $x$-component of Eq. (\ref{emconservation}), describing momentum conservation, implies that \begin{equation} \grave U^1 U^0+ 2 \Gamma^1_{1 0} U^0 U^1=0 \,. \end{equation} Multiplying this equation by $E_{[\rm prop]} a/U^0$, taking into account that $U^1=\gamma v/a$ and that $\Gamma^1_{1 0}=\mathscr{H}$, one obtains once again Eq. (\ref{pev}) for the evolution of the linear momentum of a free particle in a homogeneous and isotropic FLRW universe. \subsubsection*{Energy conservation: dust} The time component of Eq. (\ref{emconservation}), describing energy conservation, is given by \begin{equation} \grave U^0 U^0+ \Gamma^0_{0 0} U^0 U^0 + \Gamma^0_{1 1} U^1 U^1=0 \,.
\end{equation} Multiplying this equation by $E_{[\rm prop]} a/U^0$, taking into account that $U^0=\gamma/a$, $U^1=\gamma v/a$, and that $\Gamma^0_{0 0}=\Gamma^0_{1 1}=\mathscr{H}$, one obtains once again Eq. (\ref{Eev}) for the evolution of the energy of a free particle in a homogeneous and isotropic FLRW universe, which has been shown to be equivalent to Eq. (\ref{pev}) for the evolution of the linear momentum. \subsubsection*{Energy conservation: homogeneous and isotropic fluid} \label{sec:emGRF} We shall now consider a homogeneous and isotropic perfect fluid (at rest in the comoving frame, so that $U^i=0$) made up of free particles all with the same speed $v$. This fluid can be pictured as the combination of six equal-density dust fluid components moving in the positive/negative $x$, $y$, and $z$ directions. The time component of Eq. (\ref{emconservation}), describing energy conservation, implies that \begin{equation} {\grave \rho} + 3 \mathscr{H} (\rho +p) =0\,. \label{rhoEQ} \end{equation} If the number $N$ of particles in a volume $V=a^3$ is conserved then \begin{equation} \rho=\frac{NE}{V} = \frac{N E}{a^3}=N E_{[\rm prop]} \frac{\gamma}{a^3}\propto \frac{\gamma}{a^3}\,. \label{rhoEV} \end{equation} On the other hand, if the perfect fluid is an ideal gas then its proper pressure is given by \begin{equation} p=\rho\frac{v^2}{3}\,. \label{Pideal} \end{equation} Substituting the conditions given in Eqs. (\ref{rhoEV}) and (\ref{Pideal}) into Eq. (\ref{rhoEQ}) multiplied by $a^3/N$, one again arrives at Eq. (\ref{Eev}), the same as the one derived considering energy conservation for individual free particles.
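The equivalence of the particle and fluid descriptions lends itself to a quick symbolic check. The sketch below (illustrative only; \texttt{zeta} is the conformal time, \texttt{N} the conserved particle number, and energies are in units of $E_{[\rm prop]}$) verifies that Eq. (\ref{pev}) implies Eq. (\ref{vev}), and that substituting Eqs. (\ref{rhoEV}) and (\ref{Pideal}) into Eq. (\ref{rhoEQ}) reduces it to $N/a^3$ times Eq. (\ref{Eev}):

```python
import sympy as sp

zeta, N = sp.symbols('zeta N', positive=True)
a = sp.Function('a', positive=True)(zeta)   # scale factor
v = sp.Function('v')(zeta)                  # particle velocity

H = sp.diff(a, zeta) / a                    # conformal Hubble parameter
gamma = 1 / sp.sqrt(1 - v**2)

# Free particle: Eq. (pev), p' + H p = 0 with p = gamma*v (units of E_prop)
p_mom = gamma * v
vprime = sp.solve(sp.Eq(sp.diff(p_mom, zeta) + H * p_mom, 0),
                  sp.Derivative(v, zeta))[0]
assert sp.simplify(vprime + H * (1 - v**2) * v) == 0   # Eq. (vev)

# Fluid: Eqs. (rhoEV) and (Pideal) substituted into Eq. (rhoEQ)
E = gamma                                   # particle energy, units of E_prop
rho = N * E / a**3                          # Eq. (rhoEV)
p = rho * v**2 / 3                          # Eq. (Pideal)
continuity = sp.diff(rho, zeta) + 3 * H * (rho + p)
Eev = sp.diff(E, zeta) + H * v**2 * E       # Eq. (Eev)
assert sp.simplify(continuity - (N / a**3) * Eev) == 0
```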
\section{Energy-momentum constraints in NMC gravity} \label{sec:emNMC} In this section we shall again present five different derivations of the equation for the evolution of the speed of individual localized particles of fixed mass and structure in a homogeneous and isotropic FLRW universe, but now considering the possibility of a coupling between gravity and the matter fields. We shall demonstrate that consistency between the results obtained uniquely defines the correct form of the corresponding on-shell Lagrangians \cite{Avelino2022}. Let us once again consider the action \eqref{eq:actionf1f2}, allowing for an NMC between gravity and the matter fields. In this and other NMC theories the energy-momentum tensor of the matter fields, whose components are given in Eq. (\ref{eq:energy-mom3}), is not in general covariantly conserved. Instead one has \begin{equation} \nabla_{\nu} {T_{\mu}}^\nu= S_{\mu}\,, \label{Tncons} \end{equation} where \begin{equation} S_{\mu} = ({\mathcal L}_{\rm m} \delta_{\mu}^\nu -{T_{\mu}}^\nu) \frac{\nabla_\nu f_2}{f_2} \,.\label{Tncons2} \end{equation} Here, we shall again consider either the EMT of the individual particles, with components ${T_*}^{\mu \nu}$, or the EMT of a perfect fluid composed of many such particles, whose components are given in Eq. \eqref{eq:EMT_fluid}. In the case of a perfect fluid Eq. (\ref{Tncons}), with $S_\mu$ given by Eq. (\ref{Tncons2}), implies that \cite{Bertolami2007} \begin{equation} U^\nu \nabla_\nu U^\mu = \frac{1}{\rho+p} \left[\left(\mathcal{L}_{\rm f} -p\right) \frac{\nabla_\nu f_2}{f_2} -\nabla_\nu p\right]h^{\mu \nu}\,, \label{UevNMC} \end{equation} where ${\mathcal L}_{\rm f}$ and $h^{\mu \nu}=g^{\mu \nu}+ U^\mu U^\nu$ are the on-shell Lagrangian of the perfect fluid and the projection operator, respectively.
In the following we shall also consider the particular case of dust with $\mathcal{L}_\text{f}=\mathcal{L}_\text{dust}$ (characterized by $p_{\rm dust}=0$ and $\rho_{\rm dust}=-T_{\rm dust}$), for which \begin{equation} U^\nu \nabla_\nu U^\mu = \frac{\mathcal{L}_{\rm dust} }{\rho_{\rm dust}} \frac{\nabla_\nu f_2}{f_2} h^{\mu \nu}\,. \label{UevNMCdust} \end{equation} \subsection{Free particles in FLRW spacetimes} Consider once again the motion of localized particles of fixed mass and structure in an FLRW background, but this time in the context of NMC gravity. Given that the EMT is no longer covariantly conserved, the presence of additional dynamical terms, dependent on the matter Lagrangian, will need to be taken into account. \subsubsection*{Linear momentum evolution} For a single isolated particle moving along the $x$-direction in an FLRW background, the $x$-component of Eq. (\ref{Tncons2}) implies that \begin{equation} \int S_1 \, d^3 r = -\mathfrak{p} \frac{\grave f_2}{f_2} \,. \label{S1int} \end{equation} Hence, considering the $x$-component of Eq. (\ref{Tncons}), and following the same steps as in the previous section, the equation for the evolution of the linear momentum of the particle can now be generalized to \begin{equation} \grave{\mathfrak{p}}+ \Theta \, \mathfrak{p}=0\,, \label{pev1} \end{equation} where $\Theta$ is defined by \begin{equation} \Theta = \frac{\grave b}{b}= \frac{\grave a}{a}+ \frac{\grave f_2}{f_2} = \mathscr{H} + \frac{\grave f_2}{f_2} \,, \label{pev1theta} \end{equation} and $b=a f_2$. Notice that Eq. (\ref{pev1}) was obtained without making any assumptions about the specific form of the on-shell Lagrangian. \subsubsection*{Energy evolution} Of course, one must be able to arrive at the same result using the time component of Eq. (\ref{Tncons}) --- otherwise there would be an inconsistency. The time component of Eq.
(\ref{Tncons2}) requires that \begin{equation} \int S_0 \, d^3 r = \left(\int {\mathcal L}_{\rm m} \, d^3 r +E\right) \frac{\grave f_2}{f_2} \,. \label{S0int} \end{equation} Following the same steps as in the previous section, but now using the time component of Eq. (\ref{Tncons}) and Eq. (\ref{S0int}), one obtains \begin{equation} {\grave E}+ \mathscr{H} v^2 E=-\left(\int {\mathcal L}_{\rm m} \, d^3 r +E\right) \frac{\grave f_2}{f_2} \,. \label{Eev1} \end{equation} Dividing Eq. (\ref{Eev1}) by $v$, taking into account Eqs. (\ref{peq}) and (\ref{dotp}), one finds that \begin{equation} \grave{\mathfrak{p}}+ \mathscr{H} \mathfrak{p}=-\frac{\int {\mathcal L}_{\rm m} \, d^3 r +E}{v} \frac{\grave f_2}{f_2} \,. \end{equation} Consistency with Eqs. (\ref{pev1}) and (\ref{pev1theta}) then requires that \begin{equation} \mathfrak{p}=\frac{\int {\mathcal L}_{\rm m} \, d^3 r +E}{v}\,. \end{equation} Taking into account that $\mathfrak{p}=v E=E_{[\rm prop]} \gamma v$ and Eq. (\ref{traceT}), this in turn implies that \begin{equation} \int {\mathcal L}_{\rm m} d^3 r = -\frac{E}{\gamma^2} = -\frac{E_{[\rm prop]}}{\gamma} = \int {T_*} \, d^3 r\,. \end{equation} Hence, the volume average of the on-shell Lagrangian of a particle of fixed mass and structure is equal to the volume average of the trace of its EMT, independently of the particle structure and composition. \subsection{Perfect fluids in FLRW spacetimes} Here, we shall derive the dynamics of moving localized particles with fixed proper mass and structure in an FLRW spacetime, assuming that they are part of a homogeneous perfect fluid, but now in the context of NMC gravity. \subsubsection*{Linear momentum constraints: dust} In the case of dust, a perfect fluid with $p_{\rm dust}=0$, the $x$-component of Eq. (\ref{UevNMC}) may be written as \begin{equation} \grave U^1 U^0+ 2 \Gamma^1_{1 0} U^0 U^1= \frac{\mathcal{L}_{\rm dust} }{\rho_{\rm dust}} \frac{\grave f_2}{f_2} U^0 U^1 \,.
\end{equation} Multiplying this equation by $E_{[\rm prop]} a/U^0$, taking into account that $U^1=\gamma v/a$ and that $\Gamma^1_{1 0}=\mathscr{H}$, one obtains \begin{equation} \grave{\mathfrak{p}} + \mathscr{H} \mathfrak{p} = \frac{\mathcal{L}_{\rm dust} }{\rho_{\rm dust}} \frac{\grave f_2}{f_2} \mathfrak{p} \,. \label{pevNMC} \end{equation} Consistency with Eqs. (\ref{pev1}) and (\ref{pev1theta}) requires that \begin{equation} \mathcal{L}_{\rm dust}=-\rho_{\rm dust}= T_{\rm dust} \,. \label{dustlag} \end{equation} \subsubsection*{Energy constraints: dust} The time component of Eq. (\ref{UevNMCdust}) is given by \begin{equation} \grave U^0 U^0+ \Gamma^0_{0 0} U^0 U^0 + \Gamma^0_{1 1} U^1 U^1=\frac{\mathcal{L}_{\rm dust} }{\rho_{\rm dust}} \frac{\grave f_2}{f_2} (g^{00}+U^0 U^0) \,. \end{equation} Multiplying this equation by $E_{[\rm prop]} a/U^0$, taking into account that $g^{00}=-1/a^2$, $U^0=\gamma/a$, $U^1=\gamma v/a$, $v^2\gamma^2=\gamma^2-1$, and that $\Gamma^0_{0 0}=\Gamma^0_{1 1}=\mathscr{H}$, one obtains \begin{equation} {\grave E}+ \mathscr{H} v^2 E=\frac{\mathcal{L}_{\rm dust} }{\rho_{\rm dust}} \frac{\grave f_2}{f_2} E v^2\,. \label{EevNMC} \end{equation} Dividing Eq. (\ref{EevNMC}) by $v$, taking into account Eqs. (\ref{peq}) and (\ref{dotp}), one again arrives at Eq. (\ref{pevNMC}) for the evolution of linear momentum. \subsubsection*{Energy constraints: homogeneous and isotropic fluid} Consider a homogeneous and isotropic perfect fluid (at rest in the comoving frame, so that $U^i=0$) made up of localized particles of fixed mass and structure all with the same speed $v$. The time component of Eq.
(\ref{Tncons}) is given by \begin{equation} {\grave \rho_{\rm f}} + 3 \mathscr{H} (\rho_{\rm f} +p_{\rm f}) = -({\mathcal L}_{\rm f} + \rho_{\rm f}) \frac{\grave f_2}{f_2}\,,\label{Tcons2} \end{equation} where ${\mathcal L}_{\rm f}$, $\rho_{\rm f}$ and $p_{\rm f}$ are the on-shell Lagrangian, proper energy density and proper pressure of the fluid, respectively. If the number of particles is conserved then Eq. (\ref{rhoEV}) is satisfied. On the other hand, if the perfect fluid is an ideal gas then its proper pressure is given by Eq. (\ref{Pideal}): $p_{\rm f}=\rho_{\rm f} v^2/3$. Substituting the conditions given in Eqs. (\ref{rhoEV}) and (\ref{Pideal}) into Eq. (\ref{Tcons2}) and multiplying it by $a^3/N$, one obtains \begin{equation} {\grave E}+ \mathscr{H} v^2 E= -\left(\frac{{\mathcal L}_{\rm f}}{\rho_{\rm f}} + 1\right) \frac{\grave f_2}{f_2}E\,. \label{Eevf} \end{equation} As in Sec. \ref{sec:emGRF}, this homogeneous and isotropic perfect fluid can be pictured as the combination of six equal-density dust fluid components moving in the positive/negative $x$, $y$, and $z$ directions. Therefore, in the proper frame of the resulting perfect fluid, the evolution of the particle energy and linear momentum of each of its dust components and of the total combined fluid must be the same, \textit{i.e.} Eqs. (\ref{EevNMC}) and (\ref{Eevf}) must result in the same equation of motion. This implies that \begin{equation} -\frac{\mathcal{L}_{\rm dust} }{\rho_\text{dust}}v^2=\frac{\mathcal{L}_{\rm f} }{\rho_\text{f}}+1 \,. \label{finald} \end{equation} We can therefore write the on-shell Lagrangian of the perfect fluid as \begin{align} \mathcal{L}_\text{f} &= -\rho_\text{f}\left(\frac{\mathcal{L}_\text{dust}}{\rho_{\rm dust}}v^2+1\right)=\rho_\text{f}\left(v^2-1\right) \nonumber \\ \Rightarrow \mathcal{L}_\text{f} &= 3p_{\rm f}-\rho_\text{f} = T_{\rm f} \,,\label{lagperf} \end{align} where we have taken into account Eqs. \eqref{Pideal} and \eqref{dustlag}.
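The consistency argument can again be verified mechanically; the sketch below (illustrative only) encodes Eq. (\ref{finald}) together with $\mathcal{L}_{\rm dust}=-\rho_{\rm dust}$ and the ideal-gas pressure, and recovers Eq. (\ref{lagperf}):

```python
import sympy as sp

v, rho_f = sp.symbols('v rho_f', positive=True)
p_f = rho_f * v**2 / 3            # ideal gas, Eq. (Pideal)
L_dust_over_rho = -1              # Eq. (dustlag): L_dust = -rho_dust

# Eq. (finald): -(L_dust/rho_dust) v^2 = L_f/rho_f + 1
L_f = rho_f * (-L_dust_over_rho * v**2 - 1)

assert sp.simplify(L_f - rho_f * (v**2 - 1)) == 0
assert sp.simplify(L_f - (3 * p_f - rho_f)) == 0   # L_f = T_f, Eq. (lagperf)
```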
Naturally, in the case of dust ($v=0$) Eq. \eqref{lagperf} again implies that $\mathcal{L}_\text{dust}=T_{\rm dust}=-\rho_\text{dust}$. In the derivation of this result the crucial assumption is that the fluid can be described by the ideal-gas equation of state --- no assumptions have been made regarding the role of gravity on the structure of the particles in this case. Notice that this result is not in contradiction with the findings of Refs. \cite{Harko2010a,Minazzoli2012}, according to which the on-shell Lagrangian of a fluid with 1) a conserved number of particles and 2) an off-shell Lagrangian dependent solely on the particle number density is ${\mathcal L}_{\rm m}^{\rm on}=-\rho$, since the second condition does not apply to an ideal gas. \section{Scalar matter fields in NMC gravity} \label{sec.scalar} It is interesting to analyse the case where the matter fields are given by a real scalar field $\phi$ governed by a generic Lagrangian of the form \begin{equation} \mathcal{L}_\text{m} =\mathcal{L}_\text{m}(\phi,X) \,, \end{equation} where \begin{equation} X= -\frac{1}{2}\nabla^\mu \phi \partial_\mu \phi\,, \end{equation} is the kinetic term. Eq. \eqref{eq:energy-mom3} then implies that the components of the EMT are given by \begin{equation} T_{\mu\nu}=\mathcal{L}_{\text{m},X}\partial_\mu \phi \partial_\nu \phi + \mathcal{L}_\text{m} g_{\mu\nu}\, . \end{equation} We can now look at a few matter Lagrangians and examine their significance in NMC gravity \cite{Avelino2018}. \subsection{Perfect fluid with $\mathcal{L}_\text{m}=p$} For timelike $\partial_\mu \phi$, it is possible to write the EMT in a perfect fluid form \begin{equation}\label{eq:fluid2} T^{\mu\nu} = (\rho + p) U^\mu U^\nu + p g^{\mu\nu} \,, \end{equation} by means of the following identifications \begin{equation}\label{eq:new_identifications} U_\mu = \frac{\partial_\mu \phi}{\sqrt{2X}} \,, \quad \rho = 2 X p_{,X} - p \, ,\quad p = \mathcal{L}_\text{m}(\phi,X)\, .
\end{equation} In Eq.~(\ref{eq:fluid2}), $U^\mu$ are the components of the 4-velocity field describing the motion of the fluid, while $\rho$ and $p$ are its proper energy density and pressure, respectively. Observe that in this case $\mathcal{L}_\text{m}=p$, which in the context of GR is one of the possible choices considered in the literature for the on-shell Lagrangian of a perfect fluid. Note that, since the 4-velocity is a timelike vector, the correspondence between scalar field models of the form $\mathcal{L}_\text{m}=\mathcal{L}_\text{m} (\phi,X)$ and perfect fluids breaks down whenever $\partial_\mu \phi$ is spacelike, as is the case for non-trivial static solutions. In the case of a homogeneous and isotropic universe filled with a perfect fluid with arbitrary density $\rho$ and $p=0$, Eqs. (\ref{eq:fluid2}) and (\ref{eq:new_identifications}) imply that the dynamics of this fluid may be described by a matter Lagrangian whose on-shell value is equal to zero everywhere (a simple realization of this situation would be to take $\mathcal{L}_\text{m}=X-V= {\dot \phi}^2/2-V(\phi)$ with the appropriate potential $V$ and initial conditions, so that $V(\phi)$ is always equal to ${\dot \phi}^2/2$). \subsection{Solitons in 1+1 dimensions: $\mathcal{L}_\text{m}=T$} In this section, our particle shall be modelled once again as a topological soliton of the field $\phi$, but in 1+1 dimensions. For concreteness, assume that the matter fields may be described by a real scalar field with Lagrangian \begin{equation} \mathcal{L}_\text{m}= -\frac12 \partial_\mu \phi \partial^\mu \phi - V(\phi)\,, \end{equation} where $V(\phi) \ge 0$ is the quartic potential \begin{equation} V(\phi)=\frac{\lambda}{4} \left(\phi^2-\varepsilon^2\right)^2\,, \end{equation} which has two degenerate minima at $\phi=\pm \varepsilon$.
In this case, ${\mathcal L}_{\text{m},X}=1$ and the EMT of the matter fields is given by \begin{equation} \label{eq:fluid1} T^{\mu\nu} = \partial^\mu \phi \partial^\nu \phi + \mathcal{L}_\text{m} g^{\mu \nu}\,. \end{equation} On the other hand, the equation of motion for the scalar field $\phi$ is \begin{equation} \Box \phi=- \frac{f'_2}{f_2} \partial_\mu R \partial^\mu \phi +V_{,\phi}\,. \label{dalphi} \end{equation} Multiplying Eq. \eqref{dalphi} by $\partial^\nu \phi$, and taking into account Eq. \eqref{eq:fluid1}, one recovers Eq. \eqref{eq:conservNMC}. \subsubsection*{Minkowski spacetime} In a $1+1$ dimensional Minkowski spacetime the line element can be written as $ds^2=-dt^2+dz^2$. Hence, neglecting the self-induced gravitational field, the Lagrangian and the equation of motion of the scalar field $\phi$ are given respectively by \begin{eqnarray} \mathcal{L}_\text{m}&=& \frac{\left(\phi_{,t}\right)^2}{2} -\frac{\left(\phi_{,z}\right)^2}{2}-V(\phi)\,, \label{Lagphi}\\ \phi_{,tt} - \phi_{,zz}&=& -\frac{dV}{d \phi}\,, \label{phieqmM} \end{eqnarray} where $\phi_{,t}$, $\phi_{,z}$ and $\phi_{,tt}$, $\phi_{,zz}$ represent the first and second derivatives of $\phi$ with respect to the physical time $t$ and the spatial coordinate $z$. The components of the EMT of the particle can now be written as \begin{eqnarray} \rho_\phi=-{T^{0}}_0&=&\frac{\left(\phi_{,t}\right)^2}{2}+\frac{\left(\phi_{,z}\right)^2}{2}+V(\phi)\,,\\ T^{0z}&=&-\phi_{,t}\phi_{,z}\,,\\ p_\phi={T^{z}}_z&=&\frac{\left(\phi_{,t}\right)^2}{2}+\frac{\left(\phi_{,z}\right)^2}{2}-V(\phi)\,, \end{eqnarray} so that the trace $T$ of the EMT is given by \begin{equation} T={T^\mu}_\mu= {T^{0}}_0+{T^{z}}_z =-\rho_\phi+p_\phi=-2 V(\phi)\,. \label{Ttrace} \end{equation} Consider a static soliton with $\phi=\phi(z)$. In this case Eq.
(\ref{phieqmM}) becomes \begin{equation} \phi_{,zz}= \frac{dV}{d\phi} \label{phieqmM1}\,, \end{equation} and it can be integrated to give \begin{equation} \frac{\left(\phi_{,z}\right)^2}{2} = V\,,\label{KeqU} \end{equation} assuming that $|\phi| \to \varepsilon$ for $z \to \pm \infty$. If the particle is located at $z=0$, Eq. (\ref{phieqmM1}) has the following solution \begin{equation} \phi = \pm \varepsilon \tanh\left(\frac{z}{{\sqrt 2}R}\right)\,, \end{equation} with \begin{equation} R=\lambda^{-1/2} \varepsilon^{-1}\,. \end{equation} The rest mass of the particle is given by \begin{eqnarray} m&=&\int_{- \infty}^{\infty} \rho dz = 2 \int_{- \infty}^{\infty} V dz = \frac{8 {\sqrt 2}}{3} V_\text{max} R = \nonumber \\ &=& \frac{2{\sqrt 2}}{3}\lambda^{1/2} \varepsilon^3\,, \end{eqnarray} where $V_\text{max} \equiv V(\phi=0) = \lambda \varepsilon^4/4$. Here we have taken into account that Eq. (\ref{KeqU}) implies that in the static case the total energy density is equal to $2V$. On the other hand, from Eqs. (\ref{Lagphi}) and (\ref{Ttrace}), one also has that \begin{equation} \mathcal{L}_\text{m}=T\,, \label{LT} \end{equation} where this equality is independent of the reference frame, and, consequently, it does not depend on whether the particle is moving or at rest. Also note that this result also applies to collections of particles and, in particular, to one which can be described as perfect fluid. However, unlike the result obtained for a homogeneous scalar field described by a matter Lagrangian of the form $\mathcal{L}_\text{m}(\phi,X)$, according to which the on-shell Lagrangian of a perfect fluid with proper pressure $p=0$ is $\mathcal{L}_\text{m}^{\rm on} = 0$ (independently of its proper density $\rho$), one finds that a perfect fluid with $p=0$ made of static solitonic particles would have an on-shell Lagrangian given by $\mathcal{L}_\text{m}^{\rm on} = T = -\rho$. 
This is an explicit demonstration that the Lagrangian of a perfect fluid depends on microscopic properties of the fluid not specified by its EMT. \subsubsection*{FLRW spacetime} Consider a $1+1$ dimensional FLRW spacetime with line element $ds^2=-dt^2+a^2(t) dq_z^2$, where $q_z$ is the comoving spatial coordinate and $a(t)$ is the scale factor. Taking into account that \begin{equation} { \phi^{,\mu}}_{,\mu} = \left(- \Gamma^\mu_{\mu \nu} + \frac{f'_2}{f_2} \partial_\nu R \right) \phi^{,\nu}\,, \end{equation} one obtains \begin{equation} {\ddot \phi} + \left(H + \frac{{\dot f}_2}{f_2}\right){\dot \phi} - \nabla^2 \phi= -\frac{dV}{d\phi}\,, \label{dynphi} \end{equation} where $H \equiv {\dot a} / a$ is the Hubble parameter and $\nabla^2 \equiv d^2 /dz^2 = a^{-2} d^2 / d q_z^2$ is the physical Laplacian. The dynamics of $p$-branes in $N+1$-dimensional FLRW universes has been studied in detail in \cite{Sousa2011a,Sousa2011b} (see also \cite{Avelino2016}). There, it has been shown that the equation for the velocity $v$ of a $0$-brane in a $1+1$-dimensional FLRW spacetime implied by Eq. (\ref{dynphi}) is given by \begin{equation} {\dot v} +\left( H + \frac{{\dot f}_2}{f_2} \right) (1-v^2) v =0 \,, \end{equation} which is exactly the same result we obtained in Eq.~\eqref{eq:dotvel} for the velocity of the fluid particles with the on-shell Lagrangian $\mathcal{L}_\text{m}^{\rm on}=T$. \section{The first law of thermodynamics and particle creation/decay}\label{sec.firstlaw} Naturally, the absence of energy-momentum conservation has significant implications for the laws of thermodynamics, namely the possibility of a violation of the second law of thermodynamics. To study these effects, we shall consider the thermodynamics of a universe filled with a perfect fluid, in the presence of an NMC between geometry and matter described by the action given in Eq.~\eqref{eq:actionf1f2}.
Particle creation or decay via an NMC to gravity would require significant perturbations to the FLRW geometry on the relevant microscopic scales, since the FLRW metric is essentially Minkowskian on such scales. The constraints on gravity on microscopic scales are extremely weak, and it might be possible to construct viable modified theories of gravity in which the gravitational interaction on such scales is significantly enhanced with respect to GR (see, for example, \cite{Avelino2012a,Avelino2012}). However, these small-scale perturbations have not been considered in the derivation of Eq.~\eqref{eq:dens-cons_nmc} and have not been explicitly taken into account in previous works when considering particle creation or decay via an NMC to gravity. Consequently, the only consistent interpretation for the change to the evolution of the energy density of a fluid made of soliton-like particles associated with the term on the right-hand side of Eq.~\eqref{eq:dens-cons_nmc} is the modification to the evolution of the linear momentum of such particles described by Eq.~\eqref{eq:momev}. Here, we shall start by considering the thermodynamics of a homogeneous and isotropic universe in the absence of significant small-scale perturbations, and then describe phenomenologically the case in which microscopic perturbations to the FLRW geometry result in particle creation or decay.
As previously mentioned, in the literature \cite{Prigogine1988, Prigogine1989, Lima2014, Harko2015} an adiabatic expansion ($dQ/dt=0$) is usually considered, and therefore an extra term associated with particle creation due to the NMC between the gravitational and matter fields is added to Eq.~\eqref{eq:conservenergy}. However, as in previous work, an FLRW metric is assumed. Hence, no perturbations to the background geometry that could be responsible for spontaneous particle creation are considered. In this scenario, we are left with associating the NMC with the non-adiabaticity of the expansion. Eq.~\eqref{eq:conservenergy} may be rewritten as \begin{equation} \label{eq:heat1} \dot{\rho}+3H(\rho+p)=\frac{{\dot{Q}}_{\rm NMC}}{ a^3} \, . \end{equation} Using Eq.~\eqref{eq:dens-cons_nmc}, one obtains the ``heat'' transfer rate for $\mathcal{L}_\text{m}=3p-\rho$: \begin{eqnarray} \label{eq:heat_transf} {\dot{Q}}_{\rm NMC}&=&-({\mathcal L}_\text{m} + \rho) a^3 \frac{\dot f_2}{f_2}=-3p a^3\frac{\dot f_2}{f_2} \nonumber \\ &=&-\rho v^2 a^3\frac{\dot f_2}{f_2} \, . \end{eqnarray} This implies that for non-relativistic matter ($v\ll1$), such as baryons and cold dark matter, ${\dot{Q}}_{\rm NMC}\sim 0$, so that the usual energy-momentum conservation approximately holds. On the other hand, relativistic matter is strongly impacted by this energy-momentum transfer. \subsection{Particle creation/decay and effective Lagrangians} Here, we consider the possibility that the perturbations to the FLRW geometry on microscopic scales may be responsible for particle creation or decay. Discussing particle creation/decay with the matter Lagrangian $\mathcal{L}_\text{m}=T$ in great detail would of course require a microscopic description of the particle structure, which we leave purposefully generic, and of its interaction with gravity on microscopic scales.
While such an analysis is beyond the scope of this thesis, we can treat particle creation/decay phenomenologically, by introducing a modification to the energy-momentum conservation equation. If particle number is not conserved due to the NMC, an additional term, associated with particle creation/decay, should therefore be added to the right-hand side of Eq.~\eqref{eq:dens-cons_nmc} \begin{equation} \label{eq:densitycreation} {\dot \rho} + 3 H (\rho +p) = -({\mathcal L}_\text{m} + \rho) \frac{\dot f_2}{f_2} -\mathcal{L}_\Gamma \frac{\dot f_2}{f_2} \, . \end{equation} Note that $\mathcal{L}_\Gamma$ is not a true Lagrangian, but rather a phenomenological term associated with the effect of the NMC between matter and gravity on microscopic scales. If the mass and structure of the particles do not change due to the NMC to gravity, except for (almost) instantaneous particle creation or decay, the on-shell Lagrangian of the perfect fluid is still well described by $\mathcal{L}_\text{m}^{\rm on}=T$ (we also allow for almost instantaneous scattering events, which do not have an impact on the form of the perfect-fluid Lagrangian). Hence, Eq.~\eqref{eq:momev} still describes the cosmological contribution to the evolution of the linear momentum of the particles. Equation~\eqref{eq:densitycreation} may then be rewritten as \begin{equation} \label{eq:densitycreation2} {\dot \rho} + 3 H (\rho +p) = -({\mathcal L}_\text{eff} + \rho) \frac{\dot f_2}{f_2} \, , \end{equation} where \begin{equation} \label{eq:efflagrangian} \mathcal{L}_\text{eff} = \mathcal{L}_\text{m} + \mathcal{L}_\Gamma \, . \end{equation} In this case Eq.~\eqref{eq:conservenergy} is changed to \cite{Prigogine1989} \begin{equation} \label{eq:conservenergycreation} d (\rho a^3)= dQ_{\rm NMC}-pd(a^3) + \frac{h}{n}d (n a^3) \, , \end{equation} where $n$ is the particle number density and $h=\rho+p$ is the enthalpy per unit volume.
For simplicity, we have also implicitly assumed that all particles are identical and that the corresponding perfect fluid is always in thermodynamic equilibrium. This is a natural assumption if the rate of particle creation/decay is much smaller than the particle scattering rate, a case in which thermalization following particle creation/decay occurs (almost) instantaneously. Equation~\eqref{eq:conservenergycreation} may be rewritten as \begin{equation} \label{eq:creation} \dot{\rho}+3H(\rho+p)=\frac{{\dot{Q}}_{\rm NMC}}{ a^3} + \frac{h}{n}(\dot{n}+3Hn) \, , \end{equation} and using Eq.~\eqref{eq:densitycreation2} one finds that \begin{equation} \label{eq:creation2} \frac{{\dot{Q}}_{\rm NMC}}{ a^3} + \frac{h}{n}(\dot{n}+3Hn)=-(\mathcal{L}_\text{eff}+\rho) \frac{\dot f_2}{f_2} \, . \end{equation} Equations~\eqref{eq:heat_transf}, \eqref{eq:efflagrangian} and \eqref{eq:creation2} also imply that \begin{equation} \label{eq:creation3} \frac{\rho+p}{n}(\dot{n}+3Hn)=-\mathcal{L}_{\Gamma} \frac{\dot f_2}{f_2}\, . \end{equation} Introducing the particle creation/decay rate \begin{equation} \label{eq:gamma} \Gamma = \frac{\dot{n}}{n}+3H \, , \end{equation} and using Eq. \eqref{eq:creation3} one obtains \begin{equation} \label{eq:creation4} \Gamma= -\frac{\mathcal{L}_\Gamma}{ \rho+p}\frac{\dot f_2}{f_2}\, . \end{equation} Alternatively, particle creation/decay may be described as an extra effective creation/decay pressure $p_\Gamma$ of the perfect fluid that must be included in the continuity equation \cite{Prigogine1988} \begin{equation} \label{eq:conteqpres} \dot{\rho} +3H(\rho+p+p_\Gamma)= -({\mathcal L}_\text{m} + \rho) \frac{\dot f_2}{f_2} \, , \end{equation} where the creation/decay pressure \begin{equation} \label{eq:creationpressure} p_\Gamma = \frac{\mathcal{L}_{\Gamma}}{3H} \frac{\dot{f}_2}{ f_2}\,, \end{equation} may be obtained by comparison with Eq. \eqref{eq:densitycreation2}.
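The equivalence between the effective-Lagrangian description, Eq.~\eqref{eq:densitycreation2}, and the creation-pressure description, Eqs.~\eqref{eq:conteqpres} and \eqref{eq:creationpressure}, is easily checked symbolically. The following minimal sketch (assuming the sympy library; all symbols are generic placeholders) also verifies the derived relation $p_\Gamma=-(\rho+p)\,\Gamma/(3H)$:

```python
# Symbolic consistency check of the two descriptions of particle creation/decay.
# x stands for (df2/dt)/f2; all symbols are generic placeholders.
import sympy as sp

rho, p, H, x, Lm, LGam = sp.symbols('rho p H x L_m L_Gamma')

Leff = Lm + LGam                               # Eq. (efflagrangian)
p_Gamma = LGam * x / (3*H)                     # Eq. (creationpressure)
Gamma = -LGam * x / (rho + p)                  # Eq. (creation4)

# rho_dot from the effective-Lagrangian form, Eq. (densitycreation2):
rho_dot_eff = -3*H*(rho + p) - (Leff + rho)*x
# rho_dot from the creation-pressure form, Eq. (conteqpres):
rho_dot_pres = -3*H*(rho + p + p_Gamma) - (Lm + rho)*x

assert sp.simplify(rho_dot_eff - rho_dot_pres) == 0
# creation pressure and creation/decay rate satisfy p_Gamma = -(rho+p) Gamma/(3H)
assert sp.simplify(p_Gamma + (rho + p)*Gamma/(3*H)) == 0
print('creation-pressure and effective-Lagrangian forms agree')
```
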
We have argued that the correct on-shell form of the Lagrangian of a perfect fluid composed of solitonic particles is $\mathcal{L}_\text{m}=T$, even in the presence of (almost) instantaneous particle scattering and/or particle creation/decay. When $\mathcal{L}_\text{eff}=\mathcal{L}_\text{m}$, one trivially recovers the results of the previous subsection. Nevertheless, one may ask whether or not the Lagrangians suggested in previous work to describe such a perfect fluid could play the role of effective Lagrangians. Let us then consider the particular cases with $\mathcal{L}_\text{eff}=-\rho$ and $\mathcal{L}_\text{eff}=p$. If $\mathcal{L}_\text{eff}=-\rho$ then \begin{equation} \label{eq:Lgammarho} \mathcal{L}_\Gamma = \mathcal{L}_\text{eff} - \mathcal{L}_\text{m} = -3p \, , \end{equation} where we have used Eq. \eqref{eq:efflagrangian} and taken into account that $\mathcal{L}_\text{m}=T=3p-\rho$. Hence, in this case \begin{equation} \label{eq:creationpressurerho} p_\Gamma = -\frac{p}{ H} \frac{\dot{f}_2}{f_2}\, , \end{equation} and there is a particle creation/decay rate given by \begin{equation} \label{eq:gammarho} \Gamma= \frac{3p}{\rho+p}\frac{\dot f_2}{f_2}\, . \end{equation} Notably, if $\mathcal{L}_\text{eff}=-\rho$ the standard conservation equation for the energy density is recovered. If $\mathcal{L}_\text{eff}=p$ then \begin{equation} \label{eq:Lgammap} \mathcal{L}_\Gamma = \rho-2p \, . \end{equation} In this case, the effective pressure is equal to \begin{equation} \label{eq:creationpressurep} p_\Gamma = \frac{\rho-2p}{ 3H} \frac{\dot{f}_2}{ f_2} \, , \end{equation} and the particle creation/decay rate is \begin{equation} \label{eq:gammap} \Gamma= -\frac{\rho-2p}{ \rho+p}\frac{\dot f_2}{f_2}\, . \end{equation} Note that if ${\mathcal L}_\text{eff}=p$ the standard evolution equation for the density is not recovered, unless $p=-\rho$.
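The creation rates and creation pressures quoted for the two particular cases follow directly from Eqs.~\eqref{eq:efflagrangian}, \eqref{eq:creation4} and \eqref{eq:creationpressure}; a minimal symbolic sketch (assuming sympy) confirms them:

```python
# Symbolic check of the special cases L_eff = -rho and L_eff = p.
# x stands for (df2/dt)/f2 and L_m = T = 3p - rho.
import sympy as sp

rho, p, H, x = sp.symbols('rho p H x')
Lm = 3*p - rho

def creation_quantities(Leff):
    LGam = Leff - Lm                 # from Eq. (efflagrangian)
    Gamma = -LGam*x/(rho + p)        # Eq. (creation4)
    p_Gamma = LGam*x/(3*H)           # Eq. (creationpressure)
    return Gamma, p_Gamma

# L_eff = -rho: reproduces Eqs. (gammarho) and (creationpressurerho)
G, pG = creation_quantities(-rho)
assert sp.simplify(G - 3*p*x/(rho + p)) == 0
assert sp.simplify(pG + p*x/H) == 0

# L_eff = p: reproduces Eqs. (gammap) and (creationpressurep)
G, pG = creation_quantities(p)
assert sp.simplify(G + (rho - 2*p)*x/(rho + p)) == 0
assert sp.simplify(pG - (rho - 2*p)*x/(3*H)) == 0
print('both special cases confirmed')
```
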
In both cases, $\mathcal{L}_\text{eff}=-\rho$ and $\mathcal{L}_\text{eff}=p$, the particle creation/decay rate $\Gamma$ would not in general be a constant. Rather than depending on the particle properties and on the way these are affected by the NMC to gravity on microscopic scales, for a given choice of the function $f_2$ the evolution of $\Gamma$ given in Eqs. \eqref{eq:gammarho} and \eqref{eq:gammap} would depend essentially on the cosmology and the macroscopic properties of the fluid. As discussed before, the FLRW metric is essentially Minkowski on the microscopic scales relevant to particle creation/decay. Consequently, one should not expect such a cosmological dependence of the particle creation/decay rate $\Gamma$, which calls into question the relevance of the effective Lagrangians $\mathcal{L}_\text{eff}=-\rho$ and $\mathcal{L}_\text{eff}=p$ in this context. \section{The second law of thermodynamics}\label{sec.seclaw} Consider the fundamental thermodynamic relation \begin{equation} \label{eq:ftr} \mathcal{T}dS = d(\rho a^3) + pda^3\, , \end{equation} where $S$ is the entropy of the matter content and $\mathcal{T}$ is the temperature. Equations \eqref{eq:dens-cons_nmc}, \eqref{eq:heat1} and \eqref{eq:ftr} imply that \cite{Azevedo2020} \begin{equation} \label{eq:entropy} \mathcal{T}dS = dQ_{\rm NMC}= -3p a^3 \frac{d f_2}{f_2} \, , \end{equation} as opposed to GR, where $dS = 0$. Consider a universe filled with dust and radiation, both represented by perfect fluids with proper energy density $\rho_{{\rm dust}}$ and $\rho_{{\rm r}}$, and pressure $p_{{\rm dust}}=0$ and $p_{{\rm r}}=\rho_{{\rm r}}/3$. Eq.~\eqref{eq:entropy} therefore implies that the comoving entropy of the dust component is conserved ($dS_{\rm dust}=0$). In this case the total comoving entropy $S$ is equal to the comoving entropy of the radiation component $S_{\rm r}$ ($S=S_{\rm r}$).
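The step from Eqs.~\eqref{eq:dens-cons_nmc} and \eqref{eq:ftr} to Eq.~\eqref{eq:entropy} can be checked symbolically; the sketch below (assuming sympy, with all functions of time left generic) reproduces it:

```python
# Symbolic derivation check of Eq. (entropy): combining the fundamental
# thermodynamic relation with the NMC continuity equation (dens-cons_nmc)
# and L_m = 3p - rho gives T dS = -3 p a^3 df2/f2.
import sympy as sp

t = sp.symbols('t')
a, rho, p, f2 = (sp.Function(n)(t) for n in ('a', 'rho', 'p', 'f2'))
H = a.diff(t)/a
Lm = 3*p - rho

# NMC continuity equation, solved for the density derivative:
rho_dot = -3*H*(rho + p) - (Lm + rho)*f2.diff(t)/f2

# T dS/dt = d(rho a^3)/dt + p d(a^3)/dt
TdS = (rho*a**3).diff(t).subs(rho.diff(t), rho_dot) + p*(a**3).diff(t)

assert sp.simplify(TdS + 3*p*a**3*f2.diff(t)/f2) == 0
print('T dS = -3 p a^3 (df2/dt)/f2 confirmed')
```
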
The proper pressure and energy density of the radiation component --- be it bosonic, fermionic or both --- satisfy $p_{\rm r}=\rho_{\rm r}/3$ and $\rho_{\rm r} \propto \mathcal{T}^4$, respectively. Here, the scattering timescale is implicitly assumed to be much smaller than the characteristic timescale of the change of the NMC coupling function $f_2$, so that the radiation component can always be taken to be in thermodynamic equilibrium at a temperature $\mathcal{T}$ (we are also assuming that the chemical potential is zero). Hence, in the case of radiation, Eq.~\eqref{eq:dens-cons_nmc} may be written as \begin{equation} \label{eq:conservenergypart} \frac{d\rho_{\rm r}}{d\mathcal{T}}\, \dot{\mathcal{T}}+3H(\rho_{\rm r}+p_{\rm r})=-3p_{\rm r}\frac{\dot{f}_2}{f_2} \,, \end{equation} or equivalently, \begin{equation} \label{eq:Tdot1} \dot{\mathcal{T}}=- \frac{3H\left(\rho_{\rm r}+p_{\rm r}\right)+3p_{\rm r}\frac{\dot{f}_2}{f_2}}{{\frac{d\rho_{\rm r}}{d\mathcal{T}}}} \,, \end{equation} with $p_{\rm r}=\rho_{\rm r}/3$ and $\rho_{\rm r} \propto \mathcal{T}^4$. Equation~\eqref{eq:Tdot1} is easily integrated, yielding \begin{equation} \label{eq:Tradeq} \mathcal{T} \propto a^{-1} f_2^{-1/4} \,, \end{equation} so that \begin{equation} \label{eq:densrad} \rho_{\rm r} \propto a^{-4}f_2^{-1} \,. \end{equation} Taking into account Eq. \eqref{eq:Tradeq} and the fact that $p_{\rm r}=\rho_{\rm r}/3 \propto \mathcal{T}^4$, Eq.~\eqref{eq:entropy} can be easily integrated to give \begin{equation} \label{eq:entropyevoeq} S\propto f_2^{-3/4} \,. \end{equation} Imposing the second law of thermodynamics \begin{equation} \label{eq:entropycreation} \mathcal{T}\dot{S} = \dot{Q}_{\rm NMC} = -3p a^3\frac{f'_2}{f_2}\dot{R} \geq 0\, , \end{equation} would prove quite restrictive, in particular in the case of a universe in which the time derivative of the Ricci scalar changes sign, as we will demonstrate in the next subsection. In fact, the only function $f_2$ that would verify Eq.
\eqref{eq:entropycreation} in full generality would be $f_2=\text{const.}$, which corresponds to the minimally coupled $f(R)$ limit. Indeed, in minimally coupled gravity $f_2=1$, and the right-hand side of Eq. \eqref{eq:entropycreation} vanishes, leaving the second law of thermodynamics unchanged. The preservation of the second law of thermodynamics would require its generalization to take into account a gravitational entropy contribution. Even though some work has been done in this context for GR \cite{Bonilla1997,Clifton2013,Sussman2014,Acquaviva2018}, it remains a subject of much discussion and debate. However, the need for such a generalized description appears to be much greater in modified gravity models that inherently violate the second law of thermodynamics in its standard form. \subsection{Entropy in a universe with positive curvature}\label{sec:background} Here, we shall consider a homogeneous and isotropic universe with positive curvature ($k=1$) filled with dust and radiation. This provides a fair representation of the energy content of the Universe from early post-inflationary times until the onset of dark energy domination. The addition of a positive curvature will allow us to consider expanding and contracting phases of the evolution of the universe and to contrast the behaviour of the comoving entropy of the matter fields in these periods. The total proper energy density and pressure are given, respectively, by \begin{equation} \label{eq:densitypressure} \rho_{\rm total}=\rho_{\rm r}+\rho_{\rm dust}\,, \qquad \qquad p_{\rm total} =\frac{1}{3}\rho_{\rm r} \,, \end{equation} with \begin{equation} \label{eq:realdensities} \rho_{\rm r} =\rho_{{\rm r},0} \, a^{-4}f_2^{-1}\,, \qquad \qquad \rho_{\rm dust} =\rho_{{\rm dust},0} \, a^{-3} \,. \end{equation} On the other hand, Eqs.
\eqref{eq:lag-nmc}, \eqref{eq:densitypressure} and \eqref{eq:realdensities} imply that the on-shell Lagrangian of the matter fields is equal to \begin{equation} \label{eq:lagrangianrpcdm} \mathcal{L}_\text{m}=3p_{\rm total}-\rho_{\rm total}=-\rho_{{\rm dust},0} \, a^{-3}\,. \end{equation} Here, the subscripts `$\rm r$' and `$\rm dust$' again denote the radiation and dust components, respectively, and the subscript `$0$' refers to an arbitrary initial time $t=0$. Using Eqs. \eqref{eq:F}, \eqref{eq:densitypressure}, \eqref{eq:realdensities}, \eqref{eq:lagrangianrpcdm}, along with \begin{align} \label{eq:deltaF} & \Delta_{tt} F = -3HF' \dot{R} - 9H^2f'_2 \rho_{\rm dust} \nonumber\\ &=-18HF' \left(\ddot{H}+4H\dot{H} - 2Ha^{-2} \right) - 9H^2f'_2 \rho_{\rm dust}\,,\\ &\Delta_{ii}F=g_{ii}\left(2H\dot{F}+\ddot{F}\right)\,, \end{align} it is straightforward to rewrite the MFE \eqref{eq:fried-f1f2-1} and the MRE as \begin{align} \label{eq:friedmann} \left(H^2 +a^{-2}\right)F =& \frac{1}{6}\left(f'_1 R-f_1\right)+ \frac{1}{6} \rho_{{\rm r},0}a^{-4}- HF'\dot{R}\nonumber\\ &+\frac{1}{6}\left[f_2-f'_2 \left(R+18H^2\right)\right] \rho_{{\rm dust},0}a^{-3} \,, \end{align} \begin{align} \label{eq:ray} \left(\dot{H}+H^2\right)F=&-2\left(H^2+a^{-2}\right)F+\frac{1}{2} f_1 \nonumber \\ &+\frac{1}{6}\rho_{{\rm r},0}a^{-4}f_2^{-1} +2H\dot{F} +\ddot{F} \,, \end{align} where we have chosen units such that $\kappa= 1$.
Since the time derivatives of $F$ are \begin{align} \label{eq:Fdot} \dot{F}&=F'\dot{R} + 3H f'_2 \rho_{\rm dust}\,, \\ \ddot{F} &= F'\ddot{R}+F''\dot{R}^2 +3\left(2H\dot{R}f''_2+\dot{H}f'_2-3H^2f'_2\right)\rho_{\rm dust}\,, \end{align} where \begin{align} \label{eq:R} R&=6\left(\dot{H}+2H^2+a^{-2}\right)\,, \\ \dot{R}&=6\left(\ddot{H}+4H\dot{H}-2Ha^{-2}\right)\,, \\ \ddot{R}&=6\left[\dddot{H}+4H\ddot{H}+4\dot{H}^2+2\left(2H^2-\dot{H}\right)a^{-2}\right] \,, \end{align} the MFE and MRE are third- and fourth-order nonlinear differential equations for the scale factor $a$ with respect to time, respectively. Here, we consider the functions $f_1 = R$ and $f_2 = \alpha R^\beta$, with constant $\alpha$ and $\beta$ (where $\alpha$ has units of $R^{-\beta}$), so that Eq. (\ref{eq:friedmann}) becomes \begin{align} \label{eq:friedmann1} &\left(H^2 +a^{-2}\right)F = \frac{1}{6} \rho_{{\rm r},0}a^{-4}\nonumber\\ &+ \frac{\alpha}{6} R^{\beta} \left(1-\beta- \frac{18H^2}{R}\right) \rho_{{\rm dust},0}a^{-3} \nonumber\\ & + 6 \alpha \beta (\beta-1) R^{\beta-2} \rho_{{\rm dust},0}a^{-3} H^2 \left(\frac{\ddot{H}}{H}+4\dot{H}-2a^{-2}\right)\,. \end{align} Starting from an arbitrary initial time ($t=0$), we integrate Eq. \eqref{eq:ray} using a fifth-order backward differentiation formula, first backward in time to the Big Bang and then forward to the Big Crunch. Since it is a fourth-order differential equation, it requires setting three further initial conditions ($H_0$, $\dot{H}_0$ and $\ddot{H}_0$) in addition to $a_0=1$ (as well as $\rho_{{\rm dust},0}$ and $\rho_{{\rm r},0}$). Notice that the comoving entropy may change only if $\rho_{\rm dust} \neq 0$ and $\rho_{\rm r} \neq 0$. If the universe was assumed to be filled entirely with cold dark matter, then the proper pressure would vanish and, therefore, so would the right-hand side of Eq.~\eqref{eq:entropy}. Hence, there would be no change to the comoving entropy content of the universe.
Conversely, if the universe was composed only of radiation, then Eq.~\eqref{eq:friedmann1} would reduce to the standard Friedmann equation found in GR. Hence, the Ricci scalar $R$ would vanish and Eq.~\eqref{eq:entropy} would again imply the conservation of the comoving entropy. In the remainder of this section we shall consider cosmologies with $\rho_{{\rm dust},0}=5.94$, $\rho_{{\rm r},0}=0.06$ and $H_0=0$ in the context either of GR or of NMC gravity models with $\alpha=0.95$ and $\beta=0.01$. In the case of GR ($\alpha=1$, $\beta=0$), these conditions are sufficient to determine the full evolution of the universe. In the context of NMC gravity, Eq.~\eqref{eq:friedmann1} acts as an additional constraint, and with $a_0=1$ and $H_0=0$ becomes \begin{equation} \label{eq:MFEcond} \rho_{{\rm r},0}+\alpha\left[R_0^\beta+\beta(6-R_0)R_0^{\beta-1} \right]\rho_{{\rm dust},0} =6 \,, \end{equation} and therefore sets $\dot{H}_0$ at the initial time, leaving only one additional initial condition, $\ddot{H}_0$. Fig.~\ref{fig:asyma} displays the evolution of the scale factor $a$ as a function of the physical time $t$ in the context of two distinct cosmological models computed assuming either GR (dashed blue line) or NMC gravity (solid orange line). In the context of GR one may observe the exact symmetry between the expanding and contracting phases of the universe, which is verified independently of the initial conditions. The orange solid line shows the evolution of $a$ with $t$ in the context of an NMC gravity model with $\ddot{H}_0=0.5$. Fig.~\ref{fig:asyma} shows that, in this case, the symmetry between the expanding and contracting phases of the universe is no longer preserved. It also reveals the presence of oscillations of variable amplitude and frequency in the evolution of the scale factor, as well as multiple local maxima of the scale factor (two, for this particular parameter choice).
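In the GR limit ($\alpha=1$, $\beta=0$) the evolution equations reduce to a second-order system, and the integration strategy just described can be sketched with standard tools. The following minimal sketch (assuming numpy and scipy; the full fourth-order NMC system is not reproduced here) integrates the closed dust and radiation universe with the above initial data, using the same BDF family of integrators, and checks both the Friedmann constraint and the symmetry of the expanding and contracting phases:

```python
# GR limit (f1 = R, f2 = 1) of the numerical setup: a closed dust+radiation
# universe in the text's units (kappa = 1), with a0 = 1, H0 = 0,
# rho_dust0 = 5.94 and rho_r0 = 0.06.  This is an illustrative sketch,
# not the full fourth-order NMC system.
import numpy as np
from scipy.integrate import solve_ivp

rho_d0, rho_r0 = 5.94, 0.06

def rhs(t, y):
    a, H = y
    # GR Raychaudhuri equation for a closed (k = 1) dust+radiation universe
    Hdot = -0.5*(3*H**2 + a**-2 + rho_r0*a**-4/6)
    return [a*H, Hdot]

def constraint(a, H):
    # Friedmann constraint: H^2 + a^-2 = (rho_r0 a^-4 + rho_d0 a^-3)/6
    return H**2 + a**-2 - (rho_r0*a**-4 + rho_d0*a**-3)/6

kw = dict(method='BDF', rtol=1e-10, atol=1e-12, dense_output=True)
fwd = solve_ivp(rhs, (0, 1.2), [1.0, 0.0], **kw)    # towards the Big Crunch
bwd = solve_ivp(rhs, (0, -1.2), [1.0, 0.0], **kw)   # towards the Big Bang

ts = np.linspace(0.1, 1.1, 11)
sym_err = np.max(np.abs(fwd.sol(ts)[0] - bwd.sol(-ts)[0]))
con_err = max(abs(constraint(*fwd.sol(tt))) for tt in ts)
print(sym_err, con_err)   # both at the integration-tolerance level
```

With $H_0=0$ the turnaround sits at $t=0$, so in GR the forward (contracting) and backward (expanding) branches mirror each other exactly, which is what `sym_err` quantifies; in NMC gravity the extra initial condition $\ddot{H}_0$ generically breaks this symmetry.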
These features are common in the context of NMC gravity and are associated with the increased complexity of the higher-order nonlinear equations which govern the evolution of the universe in that context. Moreover, many NMC models (such as the present one for the chosen parameters) are subject to the Dolgov-Kawasaki instability \cite{Dolgov2003,Faraoni2007,Bertolami2009}. However, this oscillatory behaviour in the cosmological evolution of the universe has also been previously discussed for $f(R)$ models \cite{Appleby2010,Motohashi2010,Motohashi2011,Motohashi2012}, even when they satisfy the former and other stability criteria. A detailed analysis of such oscillations was not a focus of this thesis, as they do not affect the central result --- that the second law of thermodynamics does not generally hold in the context of NMC gravity. Fig.~\ref{fig:asymH} displays the evolution of the Hubble parameter $H$ as a function of the physical time $t$ for the same models shown in Fig.~\ref{fig:asyma}. Notice the three zeros of $H$, as well as its sharp variations at specific values of the physical time $t$ in the case of NMC gravity. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{asym_a.pdf} \caption[Evolution of the scale factor $a$ as a function of the physical time $t$ in the context of GR and of an NMC model]{Evolution of the scale factor $a$ as a function of the physical time $t$ in the context of GR (dashed blue line) and of an NMC gravity model with $\alpha=0.95$ and $\beta=0.01$ (solid orange line), having $\rho_{{\rm dust},0}=5.94$, $\rho_{{\rm r},0}=0.06$ and $H_0=0$ as initial conditions. Notice the asymmetric evolution of the universe in the context of NMC gravity (in contrast with GR), and the presence of oscillations of variable amplitude and frequency, as well as two local maxima of $a$.
\label{fig:asyma}} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{asym_H.pdf} \caption[Evolution of the Hubble parameter $H$ as a function of the physical time $t$ in the context of GR and of an NMC model]{Same as in Fig.~\ref{fig:asyma} but for the evolution of the Hubble parameter $H$. Notice the three zeros of $H$, as well as its sharp variation at specific values of the physical time $t$ in the context of NMC gravity. \label{fig:asymH}} \end{figure} Although an asymmetry between the expanding and contracting phases is generic in the context of NMC gravity, one can use the freedom in the choice of initial conditions to impose a symmetric expansion and contraction by choosing $\ddot{H}_0 = 0$. The results for the symmetric case can be found in Figs.~\ref{fig:syma} through \ref{fig:symS}, which show, respectively, the evolution of the scale factor $a$, the Hubble parameter $H$, the Ricci scalar $R$ and the entropy $S$ as a function of the physical time $t$. The results presented in Figs.~\ref{fig:syma} and \ref{fig:symH} for the evolution of $a$ and $H$ with the physical time, display an exact symmetry between the expanding and contracting phases of the universe, both in the case of GR and NMC gravity. Also, note that in this case there is a single maximum of $a$ (zero of $H$). Otherwise, the results are similar to those shown in Figs.~\ref{fig:asyma} and \ref{fig:asymH} for an asymmetric evolution of the universe. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{sym_a.pdf} \caption[Evolution of the scale factor $a$ as a function of the physical time $t$ in the context of GR and of an NMC model]{Evolution of the scale factor $a$ as a function of the physical time $t$ in the context of GR (dashed blue line) and of an NMC gravity model with $\alpha=0.95$ and $\beta=0.01$ (solid orange line), having $\rho_{{\rm dust},0}=5.94$, $\rho_{{\rm r},0}=0.06$ and $H_0=0$ as initial conditions.
An extra initial condition is required in the context of NMC gravity, which we take to be ${\ddot H}_0=0$ in order to guarantee a symmetric evolution of the universe. \label{fig:syma}} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{sym_H.pdf} \caption[Evolution of the Hubble parameter $H$ as a function of the physical time $t$ in the context of GR and of an NMC model]{Same as in Fig.~\ref{fig:syma} but for the evolution of the Hubble parameter $H$. \label{fig:symH}} \end{figure} Figs.~\ref{fig:symR} and \ref{fig:symS} display the evolution of the Ricci scalar $R$ and of the comoving entropy $S$ again for the symmetric case. Apart from the oscillations of variable amplitude and frequency, Fig.~\ref{fig:symR} shows that, on average, $R$ decreases during the expanding phase and increases during the contracting phase, while Fig. \ref{fig:symS} shows that the comoving entropy has the opposite behaviour. This illustrates the coupling between the evolution of the comoving entropy and the dynamics of the universe, which generally exists in cosmological models with an NMC between gravity and the matter fields, linking the thermodynamic arrow of time to the cosmological evolution of the universe. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{sym_R.pdf} \caption[Evolution of the Ricci scalar $R$ as a function of the physical time $t$ in the context of GR and of an NMC model]{Same as in Fig.~\ref{fig:syma} but for the evolution of the Ricci scalar $R$. \label{fig:symR}} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{sym_S.pdf} \caption[Evolution of the comoving entropy $S$ as a function of the physical time $t$ in the context of GR and of an NMC model]{Same as in Fig.~\ref{fig:syma} but for the evolution of the comoving entropy $S$, normalized so that $S=1$ at $t=0$.
\label{fig:symS}} \end{figure} \section{Boltzmann's $\mathcal{H}$-theorem, entropy and the strength of gravity} \label{sec.htheo} In the late nineteenth century, Boltzmann almost single-handedly developed the foundations of modern statistical mechanics. One of his major contributions, Boltzmann's $\mathcal{H}$-theorem, implies that, under generic conditions, the entropy of a closed system is a non-decreasing function of time \cite{Jaynes1965}. However, Boltzmann's $\mathcal{H}$-theorem in its standard form does not necessarily hold in theories with an NMC to gravity, as we will show in this section \cite{Avelino2020}. \subsection{4-force on point particles} Consider the action of a single point particle \begin{equation} \label{eq:actionpp} S=-\int d \tau \, m \,, \end{equation} with energy momentum tensor \begin{equation} T^{\mu \nu} = \frac{m}{\sqrt {-g}}\int d \tau \, u^\mu u^\nu \delta^4(x^\sigma-\xi^\sigma(\tau)) \,, \end{equation} where $\delta^4(x^\sigma-\xi^\sigma(\tau))$ denotes the four-dimensional Dirac delta function, $\xi^\sigma(\tau)$ represents the particle worldline, $\tau$ is the proper time, $u^\mu$ are the components of the particle 4-velocity ($u^\mu u_\mu=-1$) and $m$ is the proper particle mass. Considering its trace $T$ and integrating over the whole of space-time, one obtains \begin{eqnarray} \int d^{4}x \sqrt{-g} \, T &=&- \int d^4x \,d\tau\, m\, \delta^4\left(x^\sigma-\xi^\sigma(\tau)\right) \nonumber\\ &=&- \int d\tau \,m \, , \end{eqnarray} which can be immediately identified as the action for a single massive particle, and therefore implies that \begin{equation} \label{eq:lag} {\mathcal L}_\text{m} = T= - \frac{m}{\sqrt {-g}}\int d \tau \, \delta^4(x^\sigma-\xi^\sigma) \,, \end{equation} is the particle Lagrangian, as we showed in Chapter \ref{chapter_lag}.
The covariant derivative of the EMT may be written as \begin{eqnarray} \label{eq:en_mom_part_cons} \nabla_\nu T^{\mu \nu} &=& \frac{1}{\sqrt {-g}} \partial_\nu \left(\sqrt {-g} T^{\mu \nu}\right)\nonumber\\ &=& \frac{m}{\sqrt {-g}}\int d \tau \left(\nabla_\nu u^\mu\right) u^\nu \delta^4(x^\sigma-\xi^\sigma(\tau))\,. \end{eqnarray} By using \eqref{eq:en_mom_part_cons} and \eqref{eq:lag} in Eq.~\eqref{eq:conservNMC} we obtain \begin{eqnarray} \frac{m}{\sqrt {-g}}\int d \tau \, \delta^4(x^\sigma-\xi^\sigma(\tau)) \qquad\qquad \nonumber\\ \times \left(\frac{d u^\mu}{d \tau} +\Gamma^\mu_{\alpha \beta} u^\alpha u^\beta+ \frac{f'_2}{ f_2} h^{\mu \nu} \nabla_\nu R \right)=0\,, \end{eqnarray} where $h^{\mu \nu}=g^{\mu \nu}+ u^\mu u^\nu$ is the projection operator. The equation of motion of the point particle is then given by \begin{equation} \label{eq:nmc_part_acc} \mathfrak{a}^\mu=\frac{d u^\mu}{d \tau} +\Gamma^\mu_{\alpha \beta} u^\alpha u^\beta=- \frac{f'_2}{ f_2} h^{\mu \nu} \nabla_\nu R \,, \end{equation} where \begin{equation} \label{eq:force} \mathfrak{f}^{\mu}=m \mathfrak{a}^\mu=-m \frac{f'_2}{ f_2} h^{\mu \nu} \nabla_\nu R \,, \end{equation} is the velocity-dependent 4-force on the particles associated with the NMC to gravity and $\mathfrak{a}^\mu$ is the corresponding 4-acceleration (see \cite{Ayaita2012} for an analogous calculation in the context of growing neutrino models where the neutrino mass is non-minimally coupled to a dark energy scalar field). It is important to note that if the particles are part of a fluid, then the 4-acceleration of the individual particles does not, in general, coincide with the 4-acceleration of the fluid element to which they belong, as can be clearly seen by comparing Eqs. \eqref{eq:nmc_fluid_acc} and \eqref{eq:nmc_part_acc} (this point is often overlooked, see \textit{e.g.} \cite{Bertolami2020}).
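A basic property of the 4-force of Eq.~\eqref{eq:force} is that it is orthogonal to the 4-velocity, since the projector annihilates $u_\mu$; the proper mass of the particle is therefore unaffected by the NMC. A quick numerical sketch (assuming numpy; Minkowski metric and arbitrary velocity and gradient values) illustrates this:

```python
# Numerical sanity check: the projector h^{mu nu} = g^{mu nu} + u^mu u^nu
# annihilates u_mu, so the NMC 4-force of Eq. (force) is orthogonal to the
# 4-velocity.  Flat (Minkowski) metric and arbitrary test values assumed.
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])        # metric, signature (-,+,+,+)
g_inv = np.linalg.inv(g)

v = np.array([0.3, -0.5, 0.1])             # arbitrary 3-velocity, |v| < 1
gamma = 1.0/np.sqrt(1.0 - v @ v)
u = gamma*np.array([1.0, *v])              # u^mu
u_low = g @ u                              # u_mu
assert abs(u @ u_low + 1.0) < 1e-12        # normalization u^mu u_mu = -1

h = g_inv + np.outer(u, u)                 # h^{mu nu}
grad_R = np.array([0.7, -1.2, 0.4, 2.0])   # arbitrary gradient (nabla R)_nu
f = -h @ grad_R                            # 4-force, up to the factor m f2'/f2
assert abs(f @ u_low) < 1e-12              # u_mu f^mu = 0
print('NMC 4-force is orthogonal to the 4-velocity')
```
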
However, in the case of dust $p=0$ and $\mathcal{L}_\text{m}=-\rho$, and the 4-acceleration of a fluid element and of its particles are both given by \begin{equation} \mathfrak{a}^\mu=- \frac{f'_2}{ f_2} h^{\mu \nu} \nabla_\nu R \,. \end{equation} \subsection{Boltzmann's $\mathcal{H}$-theorem} \label{subsec:boltzmann} The usual collisionless Boltzmann equation given by \begin{equation} \label{eq:boltzmann1} \frac{d \mathcal{F}}{dt}= \frac{\partial \mathcal{F}}{\partial t}+\nabla_{\vb r} \mathcal{F} \cdot \frac{d\vb r}{dt} + \nabla_{\vb p} \mathcal{F} \cdot {\vb F}=0 \,, \end{equation} expresses the constancy in time of a six-dimensional phase space volume element ${\mathcal V}_6$ containing a fixed set of particles in the absence of particle collisions. Here, $t$ is the physical time, the six-dimensional phase space is composed of the six position and momentum coordinates $({\vb r},{\vb p})$ of the particles, ${\vb F}=d{\vb p}/dt$ is the 3-force on the particles (assumed to be independent of ${\vb p}$), and $\mathcal{F}(t,{\vb r},{\vb p}) {\mathcal V}_6 $ is the number of particles in the six-dimensional infinitesimal phase space volume element ${\mathcal V}_6 = d^3 r \, d^3 p$. However, in the presence of an NMC to gravity ${\vb F}$ may depend on $\vb p$, and this volume is in general not conserved. In this case, phase-space continuity, expressing particle number conservation in six-dimensional phase space in the absence of collisions, \begin{equation} \label{eq:phase-space-cont} \frac{\partial \mathcal{F}}{\partial t} + \nabla_{\vb r}\cdot\left(\mathcal{F}\frac{d \vb{r}}{dt}\right) +\nabla_{\vb p}\cdot\left(\mathcal{F}\vb{F}\right)=0 \,, \end{equation} should be used rather than Eq. \eqref{eq:boltzmann1}. Here, $\vb{r}$ and $\vb{p}$ are independent variables, thus implying $\nabla_{\vb r}\cdot \vb{p}=0$. Note that no assumption has been made regarding the relativistic or non-relativistic nature of the particles (Eq.
\eqref{eq:phase-space-cont} is valid in both regimes). In a flat homogeneous and isotropic universe, described by the Friedmann-Lemaître-Robertson-Walker metric, the line element is given by \begin{equation} ds^2=-dt^2+d{\vb r} \cdot d {\vb r}= -dt^2 + a^2(t) d{\vb q} \cdot d{\vb q}\,, \end{equation} where $a(t)$ is the scale factor and ${\vb q}$ are comoving Cartesian coordinates. In this case, the Ricci scalar is a function of cosmic time alone [$R=R(t)$] and the $i0$ components of the projection operator may be written as $h^{i0}=\gamma^2 v^i$, where $\gamma=u^0=dt/d\tau$ and $v^i=u^i/\gamma$ are the components of the 3-velocity. Therefore, Eq.~\eqref{eq:force} implies that the 3-force on the particles is given by \begin{eqnarray} \label{eq:3-force} F^i=\frac{d {p}^i}{dt}&=& \frac{\mathfrak{f}^{i}}{\gamma} -\frac{d\ln a}{dt}p^i= -\left(\frac{d\ln a}{dt}+\frac{f'_2}{ f_2} \frac{d R} {dt} \right)p^i \nonumber \\ &=&- \left(\frac{d \ln a}{dt}+ \frac{d \ln f_2}{dt} \right) p^i \nonumber\\ &=& -\frac{d \ln \left(a f_2 \right)}{dt} p^i \,. \end{eqnarray} This in turn implies that $p^i \propto (f_2 a)^{-1}$, so that \begin{equation} \label{V6} {\mathcal V}_6 = d^3 r \, d^3 p \propto {f_2}^{-3}\,, \end{equation} where we have taken into account that $d^3 r= a^3 d^3 q$. Eq. \eqref{V6} explicitly shows that in the presence of an NMC to gravity the phase-space volume element ${\mathcal V}_6$ is, in general, not conserved. In a homogeneous and isotropic universe \begin{align} \label{eq:drdt} \frac{d\vb{r}}{dt} &= \frac{da}{dt}\vb{q}+a\frac{d\vb{q}}{dt} = \frac{d \ln a}{dt}\vb{r} + \vb{v} \nonumber\\ &= \frac{d \ln a}{dt} \vb{r} + \frac{\vb{p}}{\left(m^2+p^2\right)^{1/2}} \,, \end{align} where $m$ is the rest mass of the particles, thus implying that \begin{equation} \label{eq:nabla-r} \nabla_{\vb r}\cdot\left(\frac{d\vb{r}}{dt}\right)=3\frac{d \ln a}{dt}\,. \end{equation} Substituting Eqs. \eqref{eq:3-force} and \eqref{eq:nabla-r} into the phase-space continuity equation — note that Eq.
\eqref{eq:phase-space-cont} remains valid in a FLRW background — and taking into account that in a homogeneous universe $\mathcal{F}$ is independent of $\vb{r}$ [$\mathcal{F}=\mathcal{F}(t,\vb{p})$], one obtains \begin{eqnarray} \label{eq:boltzmann2} 0 &=& \frac{\partial \mathcal{F}}{\partial t} + \mathcal{F}\nabla_{\vb r}\cdot\left(\frac{d\vb{r}}{dt}\right) + \vb{F}\cdot\nabla_{\vb p}\mathcal{F} + \mathcal{F} \nabla_{\vb p} \cdot {\vb F}\nonumber \\ &=&\frac{\partial \mathcal{F}}{\partial t} - \frac{\partial \mathcal{F}}{\partial p^i}\frac{d \ln \left(a f_2\right)}{dt} p^i -3 \mathcal{F} \frac{d \ln f_2}{dt} \,. \end{eqnarray} Note that Eq. \eqref{eq:boltzmann2} does not include collision terms and, therefore, it only applies in the case of collisionless fluids. For example, after neutrino decoupling non-gravitational neutrino interactions may, in general, be neglected and, consequently, Eq. \eqref{eq:boltzmann2} may be used to determine the evolution of the neutrino phase-space distribution function for as long as the Universe remains approximately homogeneous and isotropic (the same applying to photons after recombination, although to a lesser extent). We shall defer to the following subsection a discussion of the impact of collisions in situations where they might be relevant. Let us start by explicitly verifying the conservation of the number of particles $N$ inside a constant comoving spatial volume $V_q$ defined by $\int d^3 r =a^3 \int d^3 q =a^3 V_q$. Since $N=\int d^3 r \, d^3 p \, \mathcal{F}=a^3 V_q \int \mathcal{F} d^3 p$, \begin{align} \label{eq:dotN} \frac{d N}{dt}&= 3 \frac{d\ln a}{dt} N+a^3 V_q \int d^3 p \frac{\partial \mathcal{F}}{\partial t} \nonumber\\ &= 3 \frac{d\ln (a f_2)}{dt} N + a^3 V_q \int d^3 p \frac{\partial \mathcal{F}}{\partial p^i}\frac{d \ln \left(a f_2\right)}{dt} p^i = 0\,. \end{align} Here, we have used Eq. 
\eqref{eq:boltzmann2} in order to evaluate $\partial \mathcal{F}/\partial t$, and performed the momentum integral by parts. Let us now consider Boltzmann's $\mathcal{H}$ defined by \begin{equation} \mathcal{H}= \int d^3r \, d^3 p \, \mathcal{F} \ln \mathcal{F} = a^3 V_q \int d^3 p \, \mathcal{F} \ln \mathcal{F} \,. \end{equation} Taking the derivative of $\mathcal{H}$ with respect to the physical time and using Eq.~\eqref{eq:dotN} one obtains \begin{eqnarray} \label{eq:dotH} \frac{d \mathcal{H}}{dt}&=& 3 \frac{d\ln a}{dt} \mathcal{H}+a^3 V_q \int \, d^3 p (1+\ln \mathcal{F}) \frac{\partial \mathcal{F}}{\partial t} \nonumber\\ &=& 3 \frac{d\ln a}{dt} (\mathcal{H}-N)+ a^3 V_q \int d^3 p \frac{\partial \mathcal{F}}{\partial t} \ln \mathcal{F} \,, \end{eqnarray} where again $\int d^3 r =a^3 \int d^3 q =a^3 V_q$ and $N$ is the number of particles inside $V_q$. Using Eq.~\eqref{eq:boltzmann2}, the integral which appears in the last term of Eq.~\eqref{eq:dotH} may be written as \begin{eqnarray} I &=& \int d^3 p \frac{\partial \mathcal{F}}{\partial t} \ln \mathcal{F}=3 \frac{d \ln f_2}{dt} \int d^3 p\mathcal{F} \ln \mathcal{F} \nonumber\\ &+& \int d^3 p \left(\frac{\partial \mathcal{F}}{\partial p^i}\frac{d \ln \left(af_2\right)}{dt}p^i\right) \ln \mathcal{F} \nonumber \\ &=& I_1 + I_2 \,, \end{eqnarray} where \begin{eqnarray} I_1&=& 3 (a^3 V_q)^{-1} \frac{d \ln f_2}{dt} \mathcal{H}\\ I_2&=& \int d^3 p \left(\frac{\partial \mathcal{F}}{\partial p^i}\frac{d \ln \left(af_2\right)}{dt}p^i\right) \ln \mathcal{F} \,.
\end{eqnarray} Integrating $I_2$ by parts one obtains \begin{eqnarray} \label{eq:integralI} I_2 &=& - \int d^3 p \mathcal{F}\frac{\partial }{\partial p^i} \left[\ln \mathcal{F}\frac{d \ln \left(a f_2\right)}{dt}p^i\right]\nonumber\\ &=&-3 (a^3 V_q)^{-1} \frac{d \ln \left(a f_2\right)}{dt} \mathcal{H} - \int d^3 p \frac{\partial \mathcal{F}}{\partial p^i} \frac{d \ln \left(a f_2\right)}{dt}p^i \nonumber \\ &=& 3 (a^3 V_q)^{-1} \frac{d \ln \left(a f_2\right)}{dt} (N-\mathcal{H}) \,. \end{eqnarray} Summing the various contributions, Eq.~\eqref{eq:dotH} finally becomes \begin{equation} \label{eq:dotH1} \frac{d \mathcal{H}}{dt} = 3 \frac{d\ln f_2}{dt} N \,. \end{equation} In general relativity $f_2$ is equal to unity and, therefore, Boltzmann's $\mathcal{H}$ is a constant in the absence of particle collisions. However, Eq.~\eqref{eq:dotH1} implies that this is no longer true in the context of NMC theories of gravity. In this case, the evolution of Boltzmann's $\mathcal{H}$ is directly coupled to the evolution of the universe. Boltzmann's $\mathcal{H}$ may either grow or decay, depending on whether $f_2$ is a growing or a decaying function of time, respectively. This provides an explicit demonstration that Boltzmann's $\mathcal{H}$-theorem --- which states that $d\mathcal{H}/dt \le 0$ --- may not hold in the context of NMC theories of gravity. \subsubsection*{An alternative derivation} Consider two instants of time $t_\text{A}$ and $t_\text{B}$, with $a_\text{A}=1$. According to Eq. \eqref{eq:3-force}, in the absence of collisions, $\vb{p}\propto(af_2)^{-1}$. Therefore, assuming that the number of particles is conserved, Eq. \eqref{V6} implies that \begin{equation} \frac{\mathcal{F}_\text{B}}{\mathcal{F}_\text{A}}\equiv\frac{\mathcal{F}(t_\text{B},f_{2,\text{A}}\vb{p}/(a_\text{B}f_{2,\text{B}}))}{\mathcal{F}(t_\text{A},\vb{p})} =\frac{{\mathcal V}_{6,\text{A}}}{{\mathcal V}_{6,\text{B}}}=\left(\frac{f_{2,\text{B}}}{f_{2,\text{A}}}\right)^3 \,.
\end{equation} Hence, \begin{align} \label{eq:alt-H} \mathcal{H}_\text{B}&=a^3_\text{B}V_q\int d^3p_\text{B}\mathcal{F}_\text{B}\ln \mathcal{F}_\text{B} \nonumber\\ &= a^3_\text{B}V_q\int d^3p_\text{A} \left(\frac{f_{2,\text{A}}}{a_\text{B}f_{2,\text{B}}}\right)^3 \mathcal{F}_\text{B}\ln \mathcal{F}_\text{B} \nonumber \\ &= V_q\int d^3p_\text{A} \mathcal{F}_\text{A} \ln\left[\mathcal{F}_\text{A}\left(\frac{f_{2,\text{B}}}{f_{2,\text{A}}}\right)^3 \right]\nonumber \\ &= V_q\int d^3p_\text{A} \mathcal{F}_\text{A} \ln\mathcal{F}_\text{A} + 3\ln\left(\frac{f_{2,\text{B}}}{f_{2,\text{A}}}\right)V_q\int d^3p_\text{A} \mathcal{F}_\text{A} \nonumber \\ &=\mathcal{H}_\text{A}+3\left(\ln f_{2,\text{B}}-\ln f_{2,\text{A}} \right)N \,. \end{align} If $t_\text{A}$ and $t_\text{B}$ are sufficiently close, one can write $t_\text{B}-t_\text{A}=dt$, $\mathcal{H}_\text{B}-\mathcal{H}_\text{A}=d\mathcal{H}$, and $\ln f_{2,\text{B}}-\ln f_{2,\text{A}}=d\ln f_2$. Then, dividing Eq.~\eqref{eq:alt-H} by $dt$ one obtains Eq.~\eqref{eq:dotH1}. This alternative derivation shows, perhaps even more explicitly, how the growth or decay of the magnitude of the linear momentum of the particles associated with the NMC to gravity may contribute, respectively, to a decrease or an increase of Boltzmann's $\mathcal{H}$. \subsection{Entropy} \label{subsec:entropy} Consider a fluid of $N$ point particles with Gibbs' and Boltzmann's entropies given respectively by \begin{eqnarray} S_{G}&=&-\int P_{N} \ln P_{N} \, d^{3} r_{1} d^{3} p_{1} \cdots d^{3} r_{N} d^{3} p_{N}\,, \\ S_{B}&=&-N \int P \ln P \, d^{3} r d^{3} p\,, \end{eqnarray} where $P_{N}\left({\vb r}_1, {\vb p}_1, \ldots, {\vb r}_N, {\vb p}_N,t\right)$ and $P\left({\vb r}, {\vb p},t\right)$ are, respectively, the $N$-particle probability density function in $6N$-dimensional phase space and the single-particle probability density function in 6-dimensional phase space.
$P$ and $P_N$ are related by \begin{equation} P\left({\vb r}, {\vb p},t\right)=\int P_N \, d^{3} r_{2} d^{3} p_{2} \cdots d^{3} r_{N} d^{3} p_{N}\,. \end{equation} These two definitions of the entropy have been shown to coincide only if \begin{equation} P_{N}\left({\vb r}_1, {\vb p}_1, \ldots, {\vb r}_N, {\vb p}_N,t\right)= \prod_{i=1}^N P\left({\vb r}_i, {\vb p}_i,t\right)\,, \end{equation} or, equivalently, if particle correlations can be neglected, as happens for an ideal gas \cite{Jaynes1965}. In the remainder of this section we shall assume that this is the case, so that $S=S_B=S_G$ (otherwise $S_G<S_B$ \cite{Jaynes1965}). We shall also consider a fixed comoving volume $V_q$. Close to equilibrium $\mathcal{F}\left({\vb r}, {\vb p},t\right)=NP\left({\vb r}, {\vb p},t\right)$ holds to an excellent approximation and, therefore, \begin{equation} \mathcal{H}=\int \mathcal{F} \ln \mathcal{F} \, d^3 r \, d^3 p = N\int P \ln P \, d^3 r \, d^3 p + N \ln N = -S + N \ln N\,. \end{equation} Again, assuming that the particle number $N$ is fixed, Eq.~\eqref{eq:dotH1} implies that \begin{equation} \label{eq:dotS} \frac{d S}{dt}=-\frac{d \mathcal{H}}{dt} = - 3 \frac{d \ln f_2}{dt} N\,. \end{equation} Hence, the entropy $S$ in a homogeneous and isotropic universe may decrease with cosmic time, as long as ${f_2}$ grows with time. This once again shows that the second law of thermodynamics does not generally hold in the context of modified theories of gravity with an NMC between the gravitational and the matter fields. \subsubsection*{The collision term} Under the assumption of molecular chaos, \textit{i.e.}, that the velocities of colliding particles are uncorrelated and independent of position, adding a two-particle elastic scattering term to Eq.~\eqref{eq:phase-space-cont} results in a non-negative contribution to the entropy increase with cosmic time --- this contribution vanishes for systems in thermodynamic equilibrium.
This result holds independently of the NMC coupling to gravity, as acknowledged in \cite{Bertolami2020}, where the standard calculation of the impact of the collision term was performed without taking into account the momentum-dependent forces on the particles due to the NMC to gravity. However, as demonstrated in this section, these momentum-dependent forces may be associated with a further decrease of the magnitude of the linear momentum of the particles (if $f_2$ grows with time), contributing to the growth of Boltzmann's $\mathcal{H}$ (or, equivalently, to a decrease of the entropy). The existence of particle collisions, although extremely relevant in most cases, does not change this conclusion. If the particles are non-relativistic, and assuming thermodynamic equilibrium, $\mathcal{F}(\vb p,t)$ follows a Maxwell-Boltzmann distribution. In an FLRW homogeneous and isotropic universe with an NMC to gravity, the non-relativistic equilibrium distribution is maintained even if particle collisions are switched off at some later time, since the velocity of the individual particles would simply evolve as ${\vb v} \propto (a f_2)^{-1}$ in the absence of collisions (see Eq.~\eqref{eq:3-force}) --- the temperature, in the case of non-relativistic particles, would evolve as $\mathcal{T}\propto v^2 \propto (af_2)^{-2}$. If the fluid is an ideal gas of relativistic particles (with $p =\rho/3$), each satisfying the equation of motion for a point particle derived in Eq.~\eqref{eq:nmc_part_acc} (except, possibly, at quasi-instantaneous scattering events), then its on-shell Lagrangian vanishes ($\mathcal{L}_\text{m,[fluid]}=T=-\rho+3p =0$), and we recover the results found in Section \ref{sec.seclaw}, namely that the entropy density $s$ evolves as $n(\mathcal{T}) \propto s(\mathcal{T}) \propto \mathcal{T}^3\propto a^{-3}f_2^{-3/4}$.
This implies that both the number of particles $N$ and the entropy $S$ in a fixed comoving volume are not conserved — they evolve as $N\propto S\propto na^3\propto f_2^{-3/4}$. Unless $f_2$ is a constant, the equilibrium distribution of the photons cannot be maintained after the Universe becomes transparent at a redshift $z\sim10^3$, given that the number of photons of the cosmic background radiation is essentially conserved after that. Hence, a direct identification of Boltzmann's $\mathcal{H}$ with the entropy should not be made in this case. The requirement that the resulting spectral distortions be compatible with observations has been used to put stringent limits on the evolution of $f_2$ after recombination \cite{Avelino2018}, as we will show in the next chapter. \subsubsection*{The strength of gravity} \label{subsubsec:gstrength} Existing cosmic microwave background and primordial nucleosynthesis constraints restrict the NMC theory of gravity studied in the present work (or its most obvious generalization) to be very close to General Relativity ($f_2=1$) at late times \cite{Avelino2018,Azevedo2018a}. Before big bang nucleosynthesis, the dynamics of $f_2$ is much less constrained on observational grounds, but it is reasonable to expect that the cosmological principle and the existence of stable particles — assumed throughout this work — would still hold (at least after primordial inflation). This requires the avoidance of pathological instabilities, such as the Dolgov-Kawasaki instability, \textit{i.e.}, $\kappa f''_1+f''_2\mathcal{L}_\text{m}\geq0$. Consider a scenario, free from pathological instabilities, in which the function $f_2$ was much larger at early times than at late times (here, early and late refer to times much before and after primordial nucleosynthesis, respectively).
In this scenario, the present value of Newton's gravitational constant is the result of a dynamical process associated with the decrease of $f_2$, perhaps by many orders of magnitude, from early to late times. More importantly, the high entropy of the Universe and the weakness of gravity would be interrelated in this scenario.
\section[Introduction]{Introduction}\label{introduction} Penalized regression has been a widely used technique for variable selection for many decades. Computation for such models has been a challenge due to the non-smooth properties of penalties such as the lasso \citep{tibshirani96}. A plethora of algorithms now exist, such as the Least Angle Regression (LARS) algorithm \citep{efron2004} and coordinate descent \citep{Tseng2001, friedman2010}, among many others. For these algorithms there exist even more packages implementing various penalized regression routines, such as \pkg{glmnet} \citep{glmnet}, \pkg{lars} \citep{lars}, \pkg{ncvreg} \citep{ncvreg}, \pkg{grpreg} \citep{grpreg}, and \pkg{gglasso} \citep{gglasso}, among countless others. Each of the above packages focuses on a narrow class of penalties, such as group regularization, non-convex penalties, or the lasso. The abundance of options often makes it hard to choose between packages and, furthermore, difficult to develop a consistent workflow when it is not clear which type of penalty is suitable for a given problem. Much focus has been placed on algorithms, such as LARS, for scenarios where the number of variables \(p\) is much larger than the number of observations \(n\), yet many ``internet scale'' applications typically involve an extraordinarily large number of observations and a moderate number of variables. In these applications, speed is crucial. The \pkg{oem} package is intended to provide a highly efficient framework for penalized regression in these tall data settings. It provides computation for a comprehensive selection of penalties, including the lasso and elastic net, group-wise penalties, and non-convex penalties, and allows for simultaneous computation of these penalties. Most of the algorithms and packages listed above, however, are more efficient than the \pkg{oem} package for scenarios when the number of variables is larger than the number of observations.
Roughly speaking, the \pkg{oem} package is most effective when the ratio of the number of variables to the number of observations is less than \(1/10\). This is ideal for data settings, such as in internet applications, where a large number of variables are available but the number of observations grows rapidly over time. Centered around the orthogonalizing expectation maximization (OEM) algorithm of \citet{xiong16}, the \pkg{oem} package provides a unified framework for computation for penalized regression problems, with a focus on big \emph{tall} data scenarios. The OEM algorithm is particularly useful for regression scenarios when the number of observations is significantly larger than the number of variables. It is efficient even when the number of variables is large (in the thousands), as long as the number of observations is yet larger (hundreds of thousands or more). The OEM algorithm is particularly well-suited for penalized linear regression scenarios when the practitioner must choose between a large number of potential penalties, as the OEM algorithm can compute full tuning parameter paths for multiple penalties nearly as efficiently as for just one penalty. The \pkg{oem} package also places an explicit emphasis on practical aspects of penalized regression model fitting, such as tuning parameter selection, in contrast to the vast majority of penalized regression packages. The most common approach for tuning parameter selection for penalized regression is cross validation. Cross validation is computationally demanding, yet there has been little focus on efficient implementations of it. Here we present a modification of the OEM algorithm that dramatically reduces the computational load for cross validation. Additionally, the \pkg{oem} package provides some extra unique features for very large-scale problems.
The \pkg{oem} package provides functionality for out-of-memory computation, allowing for fitting penalized regression models on data which are too large to fit into memory. Feasibly, one could use these routines to fit models on datasets hundreds of gigabytes in size on just a laptop. The \pkg{biglasso} package \citep{biglasso} also provides functionality for out-of-memory computation; however, its emphasis is on ultrahigh-dimensional data scenarios and it is limited to the lasso, elastic-net, and ridge penalties. Also provided are OEM routines based on the quantities \(\bfX^\top\bfX\) and \(\bfX^\top\mathbf{y}\) that may already be available to researchers from exploratory analyses. This can be especially useful for scenarios when data are stored across a large cluster, yet the sufficient quantities can be computed easily on the cluster, making penalized regression computation very simple and quick for datasets with an arbitrarily large number of observations. The core computation for the \pkg{oem} package is in \proglang{C++} using the \pkg{Eigen} numerical linear algebra library \citep{eigenweb} with an \proglang{R} interface via the \pkg{RcppEigen} \citep{RcppEigen} package. Out-of-memory computation capability is provided by interfacing to special \proglang{C++} objects for referencing objects stored on disk using the \pkg{bigmemory} package \citep{bigmemory}. In Section \ref{the-orthogonalizing-em-algorithm}, we provide a review of the OEM algorithm. In Section \ref{parallelization-and-fast-cross-validation} we present a new efficient approach for cross validation based on the OEM algorithm. In Section \ref{extension-to-logistic-regression} we show how the OEM algorithm can be extended to logistic regression using a proximal Newton algorithm. Section \ref{the-oem-package} provides an introduction to the package, highlighting useful features.
Finally, Section \ref{timings} demonstrates the computational efficiency of the package with some numerical examples. \section[The orthogonalizing EM algorithm]{The orthogonalizing EM algorithm}\label{the-orthogonalizing-em-algorithm} \subsection[Review of the OEM algorithm]{Review of the OEM algorithm}\label{review-of-the-oem-algorithm} The OEM algorithm is centered around the linear regression model: \begin{equation}\label{linear_model} \mathbf{y}=\bfX\boldsymbol{\beta}+\boldsymbol{\varepsilon}, \end{equation} where \(\bfX = (x_{ij})\) is an \(n \times p\) design matrix, \(\mathbf{y} \in \R^n\) is a vector of responses, \(\boldsymbol{\beta} = (\beta_1,\ldots,\beta_p)^\top\) is a vector of regression coefficients, and \(\boldsymbol{\varepsilon}\) is a vector of random error terms with mean zero. When the number of covariates is large, researchers often want or need to perform variable selection to reduce variability or to select important covariates. A sparse estimate \(\hat{\boldsymbol{\beta}}\) with some estimated components exactly zero can be obtained by minimizing a penalized least squares criterion: \begin{equation}\label{eqn:pen_loss} \hat{\boldsymbol{\beta}}=\argmin_{\boldsymbol{\beta}}\|\mathbf{y}-\bfX\boldsymbol{\beta}\|^2 + P_\lambda(\boldsymbol{\beta}), \end{equation} where the penalty term \(P_\lambda\) has a singularity at the zero point of its argument. Widely used examples of penalties include the lasso \citep{tibshirani96}, the group lasso \citep{yuan2006}, the smoothly clipped absolute deviation (SCAD) penalty \citep{fan01}, and the minimax concave penalty (MCP) \citep{zhang2010}, among many others.
The OEM algorithm can solve (\ref{eqn:pen_loss}) under a broad class of penalties, including all of those previously mentioned. The basic motivation of the OEM algorithm is the triviality of minimizing this loss when the design matrix \(\bfX\) is orthogonal. However, the majority of design matrices from observational data are not orthogonal. Instead, we seek to augment the design matrix with extra rows such that the augmented matrix is orthogonal. If the non-existent responses of the augmented rows are treated as missing, then we can embed our original minimization problem inside a missing data problem and use the EM algorithm. Let \(\boldsymbol{\Delta}\) be a matrix of pseudo observations whose response values \(\bfz\) are missing. If \(\boldsymbol{\Delta}\) is designed such that the augmented regression matrix \[\bfX_c = \begin{pmatrix}\bfX \\ {\bf \Delta}\end{pmatrix}\] is column orthogonal, an EM algorithm can be used to solve the augmented problem efficiently, similar to \cite{healy1956}. The OEM algorithm achieves this with two steps: \begin{description} \item[Step 1.] Construct an augmentation matrix ${\bf \Delta}$. \item[Step 2.] Iteratively solve the orthogonal design with missing data by the EM algorithm. \begin{description} \item[Step 2.1.] E-step: impute the missing responses $\bfz$ by $\bfz = \boldsymbol{\Delta}\boldsymbol{\beta}^{(t)}$, where $\boldsymbol{\beta}^{(t)}$ is the current estimate. \item[Step 2.2.] M-step: solve \begin{equation} \label{mstep} \boldsymbol{\beta}^{(t+1)} = \argmin_{\boldsymbol{\beta}} \frac{1}{2}\|\mathbf{y} - \bfX\boldsymbol{\beta}\|^2 + \frac{1}{2}\|\bfz - \boldsymbol{\Delta}\boldsymbol{\beta}\|^2 + P_\lambda(\boldsymbol{\beta}).
\end{equation} \end{description} \end{description} An augmentation matrix \({\bf \Delta}\) can be constructed using the active orthogonalization procedure. The procedure starts with any positive definite diagonal matrix \(\boldsymbol{S}\), and \({\bf \Delta}\) can be constructed conceptually by ensuring that \({\bf \Delta}^\top{\bf \Delta} = d\boldsymbol{S}^2 - \bfX^\top\bfX\) is positive semidefinite for some constant \(d \geq \lambda_1(\boldsymbol{S}^{-1}\bfX^\top\bfX\boldsymbol{S}^{-1})\), where \(\lambda_1(\cdot)\) is the largest eigenvalue. The term \({\bf \Delta}\) need not be explicitly computed, as the EM iterations in the second step result in closed-form solutions which only depend on \({\bf \Delta}^\top{\bf \Delta}\). To be more specific, let \({\bf A}={\bf \Delta}^\top{\bf \Delta}\) and \({\bf u}=\bfX^\top\mathbf{y}+{\bf A}\boldsymbol{\beta}^{(t)}\). Furthermore, assume the regression matrix \(\bfX\) is standardized so that \begin{equation} \sum_{i=1}^nx_{ij}^2=1, \; \mbox{for} \; j=1,\ldots,p.\nonumber \end{equation} Then the update for the regression coefficients when \(P_\lambda(\boldsymbol{\beta}) = 0\) has the form \(\beta_j^{(t+1)}=u_j/d_j\). When \(P_\lambda(\boldsymbol{\beta})\) is the \(\ell_1\) norm, corresponding to the lasso penalty, \begin{equation} \beta_j^{(t+1)}={\mathrm{sign}}(u_j)\left(\frac{|u_j|-\lambda}{d_j}\right)_+, \nonumber \end{equation} where \((a)_+\) denotes \(\max\{a,0\}\). \citet{xiong16} used a scalar value for \(d\) and for simplicity chose \(\boldsymbol{S}\) to be the identity matrix. Furthermore, they showed that while the above algorithm converges for all \(d \geq \lambda_1(\bfX^\top\bfX)\), using \(d = \lambda_1(\bfX^\top\bfX)\) results in the fastest convergence. Such a \(d\) can be computed efficiently using the Lanczos algorithm \citep{lanczos1950}.
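To make the iterations concrete, the OEM update for the lasso can be sketched in plain \proglang{R} (a minimal illustration of the updates above, not the package's \proglang{C++} implementation; it assumes the columns of \code{X} have been standardized as described above and uses unit penalty weights):

\begin{CodeChunk}
\begin{CodeInput}
soft <- function(u, lam, d) sign(u) * pmax(abs(u) - lam, 0) / d

oem_lasso <- function(X, y, lambda, maxit = 1000L, tol = 1e-7) {
  XtX <- crossprod(X)
  Xty <- drop(crossprod(X, y))
  ## d = largest eigenvalue of X'X gives the fastest convergence
  d <- max(eigen(XtX, symmetric = TRUE, only.values = TRUE)$values)
  A <- d * diag(ncol(X)) - XtX
  beta <- numeric(ncol(X))
  for (it in seq_len(maxit)) {
    u <- Xty + drop(A %*% beta)   # E-step folded into the update
    beta_new <- soft(u, lambda, d)
    if (max(abs(beta_new - beta)) < tol) break
    beta <- beta_new
  }
  beta
}
\end{CodeInput}
\end{CodeChunk}

Note that once \({\bf A}\) and \(\bfX^\top\mathbf{y}\) have been formed, each iteration requires only \(O(p^2)\) operations, which is what makes computing paths for several penalties simultaneously cheap.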
\subsection[Penalties]{Penalties}\label{penalties} The \pkg{oem} package uses the OEM algorithm to solve penalized least squares problems with the penalties outlined in Table \ref{tab:pen}. For a vector ${\boldsymbol u}$ of length $k$ and an index set $g \subseteq \{ 1, \dots, k \}$ we define the length $|g|$ subvector ${\boldsymbol u}_g$ of ${\boldsymbol u}$ as the elements of ${\boldsymbol u}$ indexed by $g$. Furthermore, for a vector $\boldsymbol u$ of length $k$ let $|| \boldsymbol u|| = \sqrt{\sum_{j=1}^k{\boldsymbol u}_j^2}$. \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \begin{table*}[h] \centering \ra{1.3} \begin{tabular}{@{}cc@{}}\toprule Penalty & Penalty form \\ \midrule Lasso & $\lambda \sum_{j = 1}^pw_j|\beta_j|$ \\ Elastic Net & $\alpha\lambda \sum_{j = 1}^pw_j|\beta_j| + \frac{1}{2}(1 - \alpha)\lambda \sum_{j = 1}^pw_j\beta_j^2$ \\ MCP & $\sum_{j = 1}^p P^{MCP}_{\lambda w_j,\gamma}(\beta_j)$ \\ SCAD & $\sum_{j = 1}^p P^{SCAD}_{\lambda w_j,\gamma}(\beta_j)$ \\ Group Lasso & $\lambda \sum_{k = 1}^Gc_k|| \boldsymbol{\beta}_{g_k} ||$ \\ Group MCP & $\lambda \sum_{k = 1}^G P^{MCP}_{\lambda c_k,\gamma}(||\boldsymbol{\beta}_{g_k}||) $ \\ Group SCAD & $\lambda \sum_{k = 1}^G P^{SCAD}_{\lambda c_k,\gamma}(||\boldsymbol{\beta}_{g_k}||) $ \\ Sparse Group Lasso & $\lambda(1 - \tau) \sum_{k = 1}^Gc_k|| \boldsymbol{\beta}_{g_k} || + \lambda \tau \sum_{j = 1}^pw_j|\beta_j|$\\ \bottomrule \end{tabular} \caption{Listed above are the penalties available in the \pkg{oem} package. In the group lasso, ${g_k}$ refers to the index set of the $k$th group.
The vector $\boldsymbol w \in \R^p$ is a set of variable-specific penalty weights and the vector $\boldsymbol c \in \R^G$ is a set of group-specific penalty weights.} \label{tab:pen} \end{table*} For $\lambda > 0$ let: \[ P_{\lambda, \gamma}^{SCAD}(\beta) = \left\{ \begin{array}{ll} \lambda|\beta| & |\beta| \leq \lambda ; \\ -\frac{|\beta|^2 - 2\gamma\lambda|\beta| + \lambda^2}{2(\gamma - 1)} & \lambda < |\beta| \leq \gamma\lambda ; \\ \frac{(\gamma + 1)\lambda^2}{2} & |\beta| > \gamma\lambda \\ \end{array} \right. \] for $\gamma > 2$ and \[ P_{\lambda, \gamma}^{MCP}(\beta) = \left\{ \begin{array}{ll} \lambda|\beta| - \frac{\beta^2}{2\gamma} & |\beta| \leq \gamma\lambda ; \\ \frac{\gamma\lambda^2}{2} & |\beta| > \gamma\lambda \\ \end{array} \right. \] for $\gamma > 1$. The updates for the above penalties are given below: \begin{description} \item[]1. \textbf{Lasso} \begin{equation}\label{lasso} \beta_j^{(t+1)}= S(u_j, w_j\lambda, d) = {\mathrm{sign}}(u_j)\left(\frac{|u_j|-w_j\lambda}{d}\right)_+. \end{equation} \item[]2. \textbf{Elastic Net} \begin{equation} \beta_j^{(t+1)}= {\mathrm{sign}}(u_j)\left(\frac{|u_j|-w_j\alpha\lambda}{d+w_j(1-\alpha)\lambda}\right)_+.\label{net} \end{equation} \item[]3. \textbf{MCP} \begin{equation} \beta_j^{(t+1)} = M(u_j, w_j\lambda, \gamma, d) =\left\{\begin{array}{ll}{\mathrm{sign}}(u_j)\frac{\gamma\big(|u_j|-w_j\lambda\big)_+}{(\gamma d-1)},\quad&\text{if}\ |u_j|\leq w_j\gamma\lambda d, \\ u_j/d,&\text{if}\ |u_j|>w_j\gamma\lambda d.\end{array}\right.\label{mcp} \end{equation} where $\gamma > 1$ \item[]4. \textbf{SCAD} \begin{equation}\label{scad} \beta_j^{(t+1)} = C(u_j, w_j\lambda, \gamma, d) =\left\{\begin{array}{ll}{\mathrm{sign}}(u_j)\big(|u_j|-w_j\lambda\big)_+/d,&\text{if}\ |u_j|\leq(d+1)w_j\lambda, \\{\mathrm{sign}}(u_j)\frac{\big[(\gamma-1)|u_j|-w_j\gamma\lambda\big]}{\big[(\gamma-1)d-1\big]},\quad&\text{if}\ (d+1)w_j\lambda<|u_j|\leq w_j\gamma\lambda d, \\ u_j/d,&\text{if}\ |u_j|>w_j\gamma\lambda d.\end{array}\right. 
\end{equation} where $\gamma > 2$. \item[]5. \textbf{Group Lasso} The update for the $k$th group is \begin{equation}\label{glasso} \boldsymbol{\beta}_{g_k}^{(t+1)} = G(\mathbf{u}_{g_k}, c_k\lambda, d) =\frac{\mathbf{u}_{g_k}}{d}\left(1 - \frac{c_k\lambda}{||\mathbf{u}_{g_k}||_2}\right)_+. \end{equation} \item[]6. \textbf{Group MCP} The update for the $k$th group is \begin{equation}\label{gmcp} \boldsymbol{\beta}_{g_k}^{(t+1)} = M(||\mathbf{u}_{g_k}||, c_k\lambda, \gamma, d)\frac{\mathbf{u}_{g_k}}{||\mathbf{u}_{g_k}||}. \end{equation} \item[]7. \textbf{Group SCAD} The update for the $k$th group is \begin{equation}\label{gscad} \boldsymbol{\beta}_{g_k}^{(t+1)} = C(||\mathbf{u}_{g_k}||, c_k\lambda, \gamma, d)\frac{\mathbf{u}_{g_k}}{||\mathbf{u}_{g_k}||}. \end{equation} \item[]8. \textbf{Sparse Group Lasso} The update for the $k$th group is \begin{equation}\label{sglasso} \boldsymbol{\beta}_{g_k}^{(t+1)} = G(\bfv_{g_k}, c_k\lambda(1 - \tau), 1), \end{equation} where the $j$th element of $\bfv$ is $S(u_j, w_j\lambda\tau, d)$. This is true because the thresholding operator of two nested penalties is the composition of the thresholding operators, where the innermost nested penalty's thresholding operator is evaluated first.
For more details and theoretical justification of the composition of proximal operators for nested penalties, see \citet{jenatton2010proximal}. \end{description} \section[Parallelization and fast cross validation]{Parallelization and fast cross validation}\label{parallelization-and-fast-cross-validation} The OEM algorithm lends itself naturally to efficient computation for cross validation for penalized linear regression models. When the number of variables is not too large relative to the number of observations (ideally \(n \gg p\)), computation for cross validation using the OEM algorithm is on a similar order of computational complexity as fitting one model for the full data. To see why this is the case, note that the key computational step in OEM is in forming the matrix \(\bf A\). Recall that \(\bf A = d\bfI_p - \bfX^\top\bfX\). In \(K\)-fold cross validation, the design matrix is randomly partitioned into \(K\) submatrices as \[ \bfX = \begin{pmatrix} \bfX_1 \\ \vdots \\ \bfX_K \end{pmatrix}. \] Then for the \(k^{th}\) cross validation model, the OEM algorithm requires the quantities \(\bf A_{(k)} = d_{(k)}\bfI_p - \bfX_{-k}^\top\bfX_{-k}\) and \(\bfX_{-k}^\top\mathbf{y}_{-k}\), where \(\bfX_{-k}\) is the design matrix \(\bfX\) with the \(k^{th}\) submatrix \(\bfX_{k}\) removed and \(\mathbf{y}_{-k}\) is the response vector with the elements from the \(k^{th}\) fold removed. One may take \(d_{(k)} = d\), since \(\lambda_1(\bfX_{-k}^\top\bfX_{-k}) \leq \lambda_1(\bfX^\top\bfX)\). Then trivially, we have that \(\bf A = d\bfI_p - \sum_{k = 1}^K\bfX_k^\top\bfX_k\), \(\bf A_{(k)} = d\bfI_p - \sum_{c = 1, \dots, K, c \neq k}\bfX_c^\top\bfX_c\), and \(\bfX_{-k}^\top\mathbf{y}_{-k} = \sum_{c = 1, \dots, K, c \neq k}\bfX_c^\top\mathbf{y}_c\).
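These identities suggest the following bookkeeping, sketched here in plain \proglang{R} (an illustration only; the objects \code{X}, \code{y}, and \code{K} are assumed to be defined, and the package performs this step in \proglang{C++}):

\begin{CodeChunk}
\begin{CodeInput}
folds <- sample(rep(seq_len(K), length.out = nrow(X)))
## per-fold cross products (computable independently, hence in parallel)
XtX_k <- lapply(seq_len(K), function(k)
  crossprod(X[folds == k, , drop = FALSE]))
Xty_k <- lapply(seq_len(K), function(k)
  crossprod(X[folds == k, , drop = FALSE], y[folds == k]))
## full-data quantities are just the sums over folds
XtX <- Reduce(`+`, XtX_k)
Xty <- Reduce(`+`, Xty_k)
## quantities for the model leaving out fold k
k <- 1
XtX_minus_k <- XtX - XtX_k[[k]]
Xty_minus_k <- Xty - Xty_k[[k]]
\end{CodeInput}
\end{CodeChunk}

The fold-wise cross products need only be computed once, after which both the full-data fit and all \(K\) leave-one-fold-out fits are available by simple subtraction.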
Then the main computational task, both for fitting a model on the entire training data and for fitting the \(K\) models for cross validation, is computing \(\bfX_k^\top\bfX_k\) and \(\bfX_{k}^\top\mathbf{y}_{k}\) for each \(k\), which has a total computational complexity of \(O(np^2 + np)\) across all \(k\). Hence, we can precompute these quantities, and the computation time for the entire cross validation procedure can be dramatically reduced relative to the naive procedure of fitting models for each cross validation fold individually. It is clear that \(\bfX_k^\top\bfX_k\) and \(\bfX_{k}^\top\mathbf{y}_{k}\) can be computed independently across all \(k\), and hence we can reduce the computational load even further by computing them in parallel. The techniques presented here are not applicable to models beyond the linear model, such as logistic regression. \section{Extension to logistic regression}\label{extension-to-logistic-regression} The logistic regression model is often used when the response of interest is a binary outcome. The OEM algorithm can be extended to handle logistic regression models by using a proximal Newton algorithm similar to that used in the \pkg{glmnet} package and described in \citet{friedman2010}. OEM can act as a replacement for coordinate descent in the inner loop of the algorithm described in Section 3 of \citet{friedman2010}. While we do not present any new algorithmic results here, for the sake of clarity we will outline the proximal Newton algorithm of \citet{friedman2010} that we use. The response variable \(Y\) takes values in \(\{0,1\}\). The logistic regression model posits the following model for the probability of an event conditional on predictors: \[ \mu(x) = \mbox{Pr}(Y = 1|x) = \frac{1}{1 + \exp\{-x^\top\boldsymbol{\beta}\}}.
\] Under this model, we compute penalized regression estimates by maximizing the following penalized log-likelihood with respect to \(\boldsymbol{\beta}\): \begin{equation} \frac{1}{n}\sum_{i = 1}^n \left\{ y_i x_i^\top\boldsymbol{\beta} - \log (1+ \exp(x_i^\top\boldsymbol{\beta}) ) \right\} - P_\lambda(\boldsymbol{\beta}). \label{eqn:pen_lik_logistic} \end{equation} Then, given a current estimate \(\hat{\boldsymbol{\beta}}\), we approximate (\ref{eqn:pen_lik_logistic}) with the following weighted penalized linear regression: \begin{equation} -\frac{1}{2n}\sum_{i = 1}^n w_i\{z_i - x_i^\top\boldsymbol{\beta} \}^2 - P_\lambda(\boldsymbol{\beta}), \label{eqn:approx_pen_lik_logistic} \end{equation} where \begin{align*} z_i = {} & x_i^\top\hat{\boldsymbol{\beta}} + \frac{y_i - \hat{\mu}(x_i)}{\hat{\mu}(x_i)(1 - \hat{\mu}(x_i))} \\ w_i = {} & \hat{\mu}(x_i)(1 - \hat{\mu}(x_i)) \end{align*} and \(\hat{\mu}(x_i)\) is evaluated at \(\hat{\boldsymbol{\beta}}\). Here, the \(z_i\) are the working responses and the \(w_i\) are the weights. For each iteration of the proximal Newton algorithm, we maximize (\ref{eqn:approx_pen_lik_logistic}) using the OEM algorithm. Similar to \citet{krishnapuram2005} and \citet{friedman2010}, we optionally employ an approximation to the Hessian using an upper bound of \(w_i = 0.25\) for all \(i\). This upper bound is often quite efficient for big tall data settings. \section[The oem package]{The \pkg{oem} package}\label{the-oem-package} \subsection[The oem() function]{The \code{oem()} function}\label{the-oem-function} The function \code{oem()} is the main workhorse of the \pkg{oem} package.
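Before turning to the interface, a single step of the proximal Newton scheme above can be sketched in plain \proglang{R}; here \code{penalized\_wls()} is a hypothetical stand-in for the inner weighted OEM solve and is not part of the package API:

\begin{CodeChunk}
\begin{CodeInput}
prox_newton_step <- function(X, y, beta, lambda, bound_hessian = TRUE) {
  eta <- drop(X %*% beta)
  mu  <- 1 / (1 + exp(-eta))
  ## optional Hessian upper bound w_i = 0.25
  w   <- if (bound_hessian) rep(0.25, length(y)) else mu * (1 - mu)
  z   <- eta + (y - mu) / w           # working responses
  ## inner solve of the weighted penalized least squares problem
  penalized_wls(X, z, weights = w, lambda = lambda)  # hypothetical helper
}
\end{CodeInput}
\end{CodeChunk}

Iterating this step to convergence yields the penalized logistic regression estimate; with the Hessian bound, the weights never need to be recomputed, which is what makes the bounded variant attractive for tall data.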
\begin{CodeChunk} \begin{CodeInput} nobs <- 1e4 nvars <- 25 rho <- 0.25 sigma <- matrix(rho, ncol = nvars, nrow = nvars) diag(sigma) <- 1 x <- mvrnorm(n = nobs, mu = numeric(nvars), Sigma = sigma) y <- drop(x %*% c(0.5, 0.5, -0.5, -0.5, 1, rep(0, nvars - 5))) + rnorm(nobs, sd = 3) \end{CodeInput} \end{CodeChunk} The group membership indices for each covariate must be specified for the group lasso via the argument \code{groups}. The argument \code{gamma} specifies the \(\gamma\) value for MCP. The function \code{plot.oemfit()} allows the user to plot the estimated coefficient paths. Its argument \code{which.model} allows the user to select the model to be plotted. \begin{CodeChunk} \begin{CodeInput} fit <- oem(x = x, y = y, penalty = c("lasso", "mcp", "grp.lasso"), gamma = 2, groups = rep(1:5, each = 5), lambda.min.ratio = 1e-3) \end{CodeInput} \end{CodeChunk} \begin{CodeChunk} \begin{CodeInput} par(mar=c(5, 5, 5, 3) + 0.1) layout(matrix(1:3, ncol = 3)) plot(fit, which.model = 1, xvar = "lambda", cex.main = 3, cex.axis = 1.25, cex.lab = 2) plot(fit, which.model = 2, xvar = "lambda", cex.main = 3, cex.axis = 1.25, cex.lab = 2) plot(fit, which.model = 3, xvar = "lambda", cex.main = 3, cex.axis = 1.25, cex.lab = 2) \end{CodeInput} \begin{figure}[H] {\centering \includegraphics{plot_path-1} } \caption[The plots above depict estimated coefficient paths for the lasso, MCP, and group lasso]{The plots above depict estimated coefficient paths for the lasso, MCP, and group lasso.}\label{fig:plot_path} \end{figure} \end{CodeChunk} To compute the loss function in addition to the estimated coefficients, the argument \code{compute.loss} must be set to \code{TRUE}, as in the following: \begin{CodeChunk} \begin{CodeInput} fit <- oem(x = x, y = y, penalty = c("lasso", "mcp", "grp.lasso"), gamma = 2, groups = rep(1:5, each = 5), lambda.min.ratio = 1e-3, compute.loss = TRUE) \end{CodeInput} \end{CodeChunk} By default, \code{compute.loss} is set to \code{FALSE} because it adds a large computational burden, especially when many penalties are
input. The function \code{logLik.oemfit()} can be used with objects fitted by \code{oem()} with \code{compute.loss = TRUE}, with the model specified via the \code{which.model} argument, like the following: \begin{CodeChunk} \begin{CodeInput} logLik(fit, which.model = 2)[c(1, 25, 50, 100)] \end{CodeInput} \begin{CodeOutput} [1] -14189.39 -13804.72 -13795.76 -13795.11 \end{CodeOutput} \end{CodeChunk} \subsection[Fitting multiple penalties]{Fitting multiple penalties}\label{fitting-multiple-penalties} The OEM algorithm is well-suited to quickly estimate a solution path for multiple penalties simultaneously for the linear model if the number of variables is not too large, often when the number of variables is several thousand or fewer, provided the number of observations is larger than the number of variables. Ideally, the number of observations should be at least ten times larger than the number of variables for best performance. Once the quantities \(\bf A\) and \(\bfX^\top\mathbf{y}\) are computed initially, the remaining computational complexity of OEM for a given model and tuning parameter is just \(O(p^2)\) per iteration.
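To make the per-iteration cost concrete, the following is a conceptual sketch in plain \proglang{R} of a single OEM update for the lasso; this is illustrative only and not the package's \proglang{C++} implementation. Here \code{A} \(= \bfX^\top\bfX/n\) and \code{xty} \(= \bfX^\top\mathbf{y}/n\) are assumed to be precomputed, \code{d} is an upper bound on the largest eigenvalue of \code{A}, and \code{lambda} and \code{beta.hat} denote the tuning parameter and the current iterate:

\begin{CodeChunk} \begin{CodeInput}
## one conceptual OEM update for the lasso (illustrative sketch only)
soft <- function(u, a) sign(u) * pmax(abs(u) - a, 0)
u <- drop(beta.hat + (xty - A %*% beta.hat) / d)  # O(p^2) matrix-vector product
beta.hat <- soft(u, lambda / d)                   # O(p) soft-thresholding
\end{CodeInput} \end{CodeChunk}

The only step whose cost grows quadratically in \(p\) is the matrix-vector product \code{A \%*\% beta.hat}, which is the source of the \(O(p^2)\) per-iteration complexity noted above.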
To demonstrate the efficiency, consider the following simulated example: \begin{CodeChunk} \begin{CodeInput} nobs <- 1e6 nvars <- 100 rho <- 0.25 sigma <- matrix(rho, ncol = nvars, nrow = nvars) diag(sigma) <- 1 x2 <- mvrnorm(n = nobs, mu = numeric(nvars), Sigma = sigma) y2 <- drop(x2 %*% c(-0.5, -0.5, 0.5, 0.5, 1, rep(0, nvars - 5))) + rnorm(nobs, sd = 5) \end{CodeInput} \end{CodeChunk} \begin{CodeChunk} \begin{CodeInput} mb <- microbenchmark( "oem[lasso]" = oem(x = x2, y = y2, penalty = c("lasso"), gamma = 3, groups = rep(1:20, each = 5)), "oem[all]" = oem(x = x2, y = y2, penalty = c("lasso", "mcp", "grp.lasso", "scad"), gamma = 3, groups = rep(1:20, each = 5)), times = 10L) print(mb, digits = 3) \end{CodeInput} \begin{CodeOutput} Unit: seconds expr min lq mean median uq max neval cld oem[lasso] 2.38 2.39 2.41 2.42 2.44 2.46 10 a oem[all] 2.88 2.89 2.92 2.91 2.93 3.00 10 b \end{CodeOutput} \end{CodeChunk} \subsection[Parallel support via OpenMP]{Parallel support via \pkg{OpenMP}}\label{parallel-support-via-openmp} As noted in Section \ref{parallelization-and-fast-cross-validation}, the key quantities necessary for the OEM algorithm can be computed in parallel. By specifying \code{ncores} to be a value greater than 1, the \code{oem()} function automatically employs \pkg{OpenMP} \citep{openmp15} to compute \(\bf A\) and \(\bfX^\top\mathbf{y}\) in parallel. Due to memory access inefficiencies in breaking up the computation of \(\bf A\) into pieces, using multiple cores does not speed up computation linearly. It is typical for \pkg{OpenMP} not to result in linear speedups, especially on Windows machines, due to its overhead costs. Furthermore, if a user does not have \pkg{OpenMP} available on their machine, the \pkg{oem} package will still run normally on one core. In the following example, we can see a slight benefit from invoking the use of extra cores.
\begin{CodeChunk} \begin{CodeInput} nobs <- 1e5 nvars <- 500 rho <- 0.25 sigma <- rho ** abs(outer(1:nvars, 1:nvars, FUN = "-")) x2 <- mvrnorm(n = nobs, mu = numeric(nvars), Sigma = sigma) y2 <- drop(x2 %*% c(-0.5, -0.5, 0.5, 0.5, 1, rep(0, nvars - 5))) + rnorm(nobs, sd = 5) mb <- microbenchmark( "oem" = oem(x = x2, y = y2, penalty = c("lasso", "mcp", "grp.lasso", "scad"), gamma = 3, groups = rep(1:20, each = 25)), "oem[parallel]" = oem(x = x2, y = y2, ncores = 2, penalty = c("lasso", "mcp", "grp.lasso", "scad"), gamma = 3, groups = rep(1:20, each = 25)), times = 10L) print(mb, digits = 3) \end{CodeInput} \begin{CodeOutput} Unit: seconds expr min lq mean median uq max neval cld oem 4.80 4.85 4.93 4.93 5.00 5.07 10 b oem[parallel] 3.72 3.84 3.98 3.95 4.11 4.31 10 a \end{CodeOutput} \end{CodeChunk} \subsection[The cv.oem() function]{The \code{cv.oem()} function}\label{the-cv.oem-function} The \code{cv.oem()} function is used for cross validation of penalized models fitted by the \code{oem()} function. It does not use the method described in Section \ref{parallelization-and-fast-cross-validation} and hence can be used for models beyond the linear model. It can also benefit from parallelization using either \pkg{OpenMP} or the \pkg{foreach} package. For the former, one need only specify the \code{ncores} argument of \code{cv.oem()}, and the key quantities for OEM are computed in parallel within each cross validation fold. With the \pkg{foreach} package \citep{foreach}, cores must be ``registered'' in advance using \pkg{doParallel} \citep{doParallel}, \pkg{doMC} \citep{doMC}, or otherwise. Each cross validation fold is computed on a separate core, which may be more efficient depending on the user's hardware.
\begin{CodeChunk} \begin{CodeInput} cvfit <- cv.oem(x = x, y = y, penalty = c("lasso", "mcp", "grp.lasso"), gamma = 2, groups = rep(1:5, each = 5), nfolds = 10) \end{CodeInput} \end{CodeChunk} The best performing model and its corresponding best tuning parameter can be accessed via: \begin{CodeChunk} \begin{CodeInput} cvfit$best.model \end{CodeInput} \begin{CodeOutput} [1] "mcp" \end{CodeOutput} \begin{CodeInput} cvfit$lambda.min \end{CodeInput} \begin{CodeOutput} [1] 0.0739055 \end{CodeOutput} \end{CodeChunk} A summary method is available as \code{summary.cv.oem()}, similar to the summary function of \code{cv.ncvreg()} of the \pkg{ncvreg} package, which prints output from all of the cross validated models. It can be used like the following: \begin{CodeChunk} \begin{CodeInput} summary(cvfit) \end{CodeInput} \begin{CodeOutput} lasso-penalized linear regression with n=10000, p=25 At minimum cross-validation error (lambda=0.0242): ------------------------------------------------- Nonzero coefficients: 13 Cross-validation error (Mean-Squared Error): 9.11 Scale estimate (sigma): 3.018 <===============================================> mcp-penalized linear regression with n=10000, p=25 At minimum cross-validation error (lambda=0.0739): ------------------------------------------------- Nonzero coefficients: 5 Cross-validation error (Mean-Squared Error): 9.10 Scale estimate (sigma): 3.016 <===============================================> grp.lasso-penalized linear regression with n=10000, p=25 At minimum cross-validation error (lambda=0.0242): ------------------------------------------------- Nonzero coefficients: 16 Cross-validation error (Mean-Squared Error): 9.10 Scale estimate (sigma): 3.017 \end{CodeOutput} \end{CodeChunk} Predictions can be made from any fitted model (\code{which.model = 2}) or from the best of all models (\code{which.model = "best.model"}) using \code{cv.oem} objects.
The tuning parameter is specified via the argument \code{s}, which can take numeric values, \code{"lambda.min"} for the best tuning parameter, or \code{"lambda.1se"} for a good but more conservative tuning parameter. \begin{CodeChunk} \begin{CodeInput} predict(cvfit, newx = x[1:3,], which.model = "best.model", s = "lambda.min") \end{CodeInput} \begin{CodeOutput} [,1] [1,] -0.2233264 [2,] 0.2211386 [3,] 0.4760399 \end{CodeOutput} \end{CodeChunk} \subsection[The xval.oem() function]{The \code{xval.oem()} function}\label{the-xval.oem-function} The \code{xval.oem()} function is much like \code{cv.oem()} but is limited to use for linear models only. It is significantly faster than \code{cv.oem()}, as it uses the method described in Section \ref{parallelization-and-fast-cross-validation}. Whereas \code{cv.oem()} functions by repeated calls to \code{oem()}, all of the primary computation in \code{xval.oem()} is carried out in \proglang{C++}. We chose to keep the \code{xval.oem()} function separate from \code{cv.oem()} because the underlying code between the two methods is vastly different and furthermore because \code{xval.oem()} is not available for logistic regression models.
\begin{CodeChunk} \begin{CodeInput} xvalfit <- xval.oem(x = x, y = y, penalty = c("lasso", "mcp", "grp.lasso"), gamma = 2, groups = rep(1:5, each = 5), nfolds = 10) yrng <- range(c(unlist(xvalfit$cvup), unlist(xvalfit$cvlo))) layout(matrix(1:3, ncol = 3)) par(mar=c(5, 5, 5, 3) + 0.1) plot(xvalfit, which.model = 1, ylim = yrng, cex.main = 3, cex.axis = 1.25, cex.lab = 2) plot(xvalfit, which.model = 2, ylim = yrng, cex.main = 3, cex.axis = 1.25, cex.lab = 2) plot(xvalfit, which.model = 3, ylim = yrng, cex.main = 3, cex.axis = 1.25, cex.lab = 2) \end{CodeInput} \begin{figure} {\centering \includegraphics{xvalone-1} } \caption[Depicted above are the cross validated mean squared prediction errors for paths of tuning parameters for the lasso, MCP, and group lasso]{Depicted above are the cross validated mean squared prediction errors for paths of tuning parameters for the lasso, MCP, and group lasso.}\label{fig:xvalone} \end{figure} \end{CodeChunk} \subsection[OEM with precomputation for linear models with the oem.xtx() function]{OEM with precomputation for linear models with the \code{oem.xtx()} function}\label{oem-with-precomputation-for-linear-models-with-the-oem.xtx-function} The key quantities \(\bfX^\top\bfX\) and \(\bfX^\top\mathbf{y}\) can be computed in parallel, and when data are stored across a large cluster, their computation can be performed in a straightforward manner. When they are available, the \code{oem.xtx()} function can be used to carry out the OEM algorithm based on these quantities instead of on the full design matrix \(\bfX\) and the full response vector \(\mathbf{y}\). All methods available to objects fitted by \code{oem()} are also available to objects fitted by \code{oem.xtx()}.
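For instance, when the rows of the data are stored in separate blocks (e.g., across machines in a cluster), these quantities can be accumulated block by block. The following is a minimal illustrative sketch, in which \code{chunks} is a hypothetical list of row index sets; with a single machine and in-memory data, the same quantities are computed directly below:

\begin{CodeChunk} \begin{CodeInput}
## accumulate X'X and X'y over row blocks (illustrative sketch)
p <- ncol(x)
xtx <- matrix(0, p, p)
xty <- numeric(p)
for (idx in chunks) {
  xb  <- x[idx, , drop = FALSE]
  xtx <- xtx + crossprod(xb)
  xty <- xty + drop(crossprod(xb, y[idx]))
}
xtx <- xtx / nrow(x)
xty <- xty / nrow(x)
\end{CodeInput} \end{CodeChunk}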
\begin{CodeChunk} \begin{CodeInput} xtx <- crossprod(x) / nrow(x) xty <- crossprod(x, y) / nrow(x) \end{CodeInput} \end{CodeChunk} \begin{CodeChunk} \begin{CodeInput} fitxtx <- oem.xtx(xtx, xty, penalty = c("lasso", "mcp", "grp.lasso"), gamma = 2, groups = rep(1:5, each = 5)) \end{CodeInput} \end{CodeChunk} \subsection[Out-of-Memory computation with the big.oem() function]{Out-of-Memory computation with the \code{big.oem()} function}\label{out-of-memory-computation-with-the-big.oem-function} Standard \proglang{R} objects are stored in memory and thus, when a design matrix is too large for memory, it cannot be used for computation in the standard way. The \pkg{bigmemory} package offers objects which point to data stored on disk, thus allowing users to bypass memory limitations. It also provides access to \proglang{C++} objects which do the same. These objects are highly efficient due to memory mapping, which is a method of mapping a data file to virtual memory and allows for efficient moving of data in and out of memory from disk. For further details on memory mapping, we refer readers to \citet{bovet2005, kane2013}. The \code{big.oem()} function allows for out-of-memory computation by linking \pkg{Eigen} matrix objects to data stored on disk via \pkg{bigmemory}.
The standard approach for loading the data from \code{SEXP} objects (\code{X_} in the example below) to \pkg{Eigen} matrix objects (\code{X} in the example below) in \proglang{C++} looks like: \begin{CodeChunk} \begin{CodeInput} using Eigen::Map; using Eigen::MatrixXd; const Map<MatrixXd> X(as<Map<MatrixXd> >(X_)); \end{CodeInput} \end{CodeChunk} To instead map from an object which is a pointer to data on disk, we first need to load the \pkg{bigmemory} headers: \begin{CodeChunk} \begin{CodeInput} #include <bigmemory/MatrixAccessor.hpp> #include <bigmemory/BigMatrix.h> \end{CodeInput} \end{CodeChunk} Then we link the pointer to the on-disk data (passed from \proglang{R} to \proglang{C++} as \code{X_} and set as \code{bigPtr} below) to an \pkg{Eigen} matrix object via: \begin{CodeChunk} \begin{CodeInput} XPtr<BigMatrix> bigPtr(X_); const Map<MatrixXd> X = Map<MatrixXd> ((double *)bigPtr->matrix(), bigPtr->nrow(), bigPtr->ncol() ); \end{CodeInput} \end{CodeChunk} The remaining computation for OEM is carried out similarly to \code{oem()}, yet here the object \code{X} is not stored in memory. To test out \code{big.oem()} and demonstrate its memory usage profile, we simulate a large dataset and save it as a ``filebacked'' \code{big.matrix} object from the \pkg{bigmemory} package. \begin{CodeChunk} \begin{CodeInput} nobs <- 1e6 nvars <- 250 bkFile <- "big_matrix.bk" descFile <- "big_matrix.desc" big_mat <- filebacked.big.matrix(nrow = nobs, ncol = nvars, type = "double", backingfile = bkFile, backingpath = ".", descriptorfile = descFile, dimnames = c(NULL, NULL)) for (i in 1:nvars) { big_mat[, i] = rnorm(nobs) } yb <- rnorm(nobs, sd = 5) \end{CodeInput} \end{CodeChunk} Using the \code{profvis()} function of the \pkg{profvis} package \citep{profvis}, we can see that no copies of the design matrix are made at any point. Furthermore, a maximum of 173 Megabytes is used by the \proglang{R} session during this simulation, whereas the size of the design matrix is 1.9 Gigabytes.
The following code generates an interactive \proglang{html} visualization of the memory usage of \code{big.oem()} line-by-line: \begin{CodeChunk} \begin{CodeInput} profvis::profvis({ bigfit <- big.oem(x = big_mat, y = yb, penalty = c("lasso", "grp.lasso", "mcp", "scad"), gamma = 3, groups = rep(1:50, each = 5)) }) \end{CodeInput} \end{CodeChunk} Here we save a copy of the design matrix in memory for use by \code{oem()}: \begin{CodeChunk} \begin{CodeInput} xb <- big_mat[,] print(object.size(xb), units = "Mb") \end{CodeInput} \begin{CodeOutput} 1907.3 Mb \end{CodeOutput} \begin{CodeInput} print(object.size(big_mat), units = "Mb") \end{CodeInput} \begin{CodeOutput} 0 Mb \end{CodeOutput} \end{CodeChunk} The following benchmark for on-disk computation is on a system with a hard drive with 7200 RPM, 16MB Cache, and SATA 3.0 Gigabytes per second (a quite modest setup compared with a system with a solid state drive). Even without a solid state drive we pay little time penalty for computing on disk over computing in memory. \begin{CodeChunk} \begin{CodeInput} mb <- microbenchmark( "big.oem" = big.oem(x = big_mat, y = yb, penalty = c("lasso", "grp.lasso", "mcp", "scad"), gamma = 3, groups = rep(1:50, each = 5)), "oem" = oem(x = xb, y = yb, penalty = c("lasso", "grp.lasso", "mcp", "scad"), gamma = 3, groups = rep(1:50, each = 5)), times = 10L) print(mb, digits = 3) \end{CodeInput} \begin{CodeOutput} Unit: seconds expr min lq mean median uq max neval cld big.oem 8.46 8.51 8.73 8.57 8.69 9.73 10 a oem 9.81 9.85 10.36 10.08 10.65 12.03 10 b \end{CodeOutput} \end{CodeChunk} \subsection[Sparse matrix support]{Sparse matrix support}\label{sparse-matrix-support} The \code{oem()} and \code{cv.oem()} functions can accept sparse design matrices as provided by the \code{CsparseMatrix} class of objects of the \pkg{Matrix} package \citep{Matrix}. 
If the design matrix provided has a high degree of sparsity, using a \code{CsparseMatrix} object can result in a substantial computational speedup and reduction in memory usage. \begin{CodeChunk} \begin{CodeInput} library(Matrix) n.obs <- 1e5 n.vars <- 200 true.beta <- c(runif(15, -0.25, 0.25), rep(0, n.vars - 15)) xs <- rsparsematrix(n.obs, n.vars, density = 0.01) ys <- rnorm(n.obs, sd = 3) + as.vector(xs %*% true.beta) x.dense <- as.matrix(xs) mb <- microbenchmark(fit = oem(x = x.dense, y = ys, penalty = c("lasso", "grp.lasso"), groups = rep(1:40, each = 5)), fit.s = oem(x = xs, y = ys, penalty = c("lasso", "grp.lasso"), groups = rep(1:40, each = 5)), times = 10L) print(mb, digits = 3) \end{CodeInput} \begin{CodeOutput} Unit: milliseconds expr min lq mean median uq max neval cld fit 669.9 672.5 679.3 680.3 682.2 690.6 10 b fit.s 63.2 64.1 65.5 65.3 66.8 68.2 10 a \end{CodeOutput} \end{CodeChunk} \subsection[API comparison with glmnet]{API comparison with \pkg{glmnet}}\label{api-comparison-with-glmnet} The application programming interface (API) of the \pkg{oem} package was designed to be familiar to users of the \pkg{glmnet} package. Data ready for use by \code{glmnet()} can be used directly by \code{oem()}. Most of the arguments are the same, except the \code{penalty} argument and other arguments relevant to the various penalties available in \code{oem()}.
Here we fit linear models with a lasso penalty using \code{oem()} and \code{glmnet()}: \begin{CodeChunk} \begin{CodeInput} oem.fit <- oem(x = x, y = y, penalty = "lasso") glmnet.fit <- glmnet(x = x, y = y) \end{CodeInput} \end{CodeChunk} Here we fit linear models with a lasso penalty using \code{oem()} and \code{glmnet()} with sparse design matrices: \begin{CodeChunk} \begin{CodeInput} oem.fit.sp <- oem(x = xs, y = ys, penalty = "lasso") glmnet.fit.sp <- glmnet(x = xs, y = ys) \end{CodeInput} \end{CodeChunk} Now we make predictions using the fitted model objects from both packages: \begin{CodeChunk} \begin{CodeInput} preds.oem <- predict(oem.fit, newx = x) preds.glmnet <- predict(glmnet.fit, newx = x) \end{CodeInput} \end{CodeChunk} We now plot the coefficient paths using both fitted model objects: \begin{CodeChunk} \begin{CodeInput} plot(oem.fit, xvar = "norm") plot(glmnet.fit, xvar = "norm") \end{CodeInput} \end{CodeChunk} We now fit linear models with a lasso penalty and select the tuning parameter with cross validation using \code{cv.oem()}, \code{xval.oem()}, and \code{cv.glmnet()}: \begin{CodeChunk} \begin{CodeInput} oem.cv.fit <- cv.oem(x = x, y = y, penalty = "lasso") oem.xv.fit <- xval.oem(x = x, y = y, penalty = "lasso") glmnet.cv.fit <- cv.glmnet(x = x, y = y) \end{CodeInput} \end{CodeChunk} We now plot the cross validation errors using all fitted cross validation model objects: \begin{CodeChunk} \begin{CodeInput} plot(oem.cv.fit) plot(oem.xv.fit) plot(glmnet.cv.fit) \end{CodeInput} \end{CodeChunk} We now make predictions using the best tuning parameter according to cross validation error using all fitted cross validation model objects: \begin{CodeChunk} \begin{CodeInput} preds.cv.oem <- predict(oem.cv.fit, newx = x, s = "lambda.min") preds.xv.oem <- predict(oem.xv.fit, newx = x, s = "lambda.min") preds.cv.glmnet <- predict(glmnet.cv.fit, newx = x, s = "lambda.min") \end{CodeInput} \end{CodeChunk} \section[Timings]{Timings}\label{timings} Extensive 
numerical studies are conducted in \citet{xiong16} regarding the computation time of the OEM algorithm for paths of tuning parameters for various penalties, so in the following simulation studies, we will focus on computation time for the special features of the \pkg{oem} package such as cross validation and sparse matrix support. All simulations are run on a 64-bit machine with an Intel Xeon E5-2470 2.30 GHz CPU and 128 Gigabytes of main memory and a Linux operating system. \subsection[Cross validation]{Cross validation}\label{cross-validation} In this section we will compare the computation time of the \code{cv.oem()} and \code{xval.oem()} functions for various penalties, both individually and simultaneously, with the cross validation functions from various other packages, including \pkg{glmnet}, \pkg{ncvreg}, \pkg{grpreg}, \pkg{gglasso}, and the \proglang{Python} package \pkg{sklearn} \citep{pedregosa2011}. The model class we use from \pkg{sklearn} is \code{LassoCV}, which performs cross validation for lasso linear models. In particular, we focus on the comparison with \pkg{glmnet}, as it has been carefully developed with computation time in mind and has long been the gold standard for computational performance. In the simulation setup, we generate the design matrix from a multivariate normal distribution with covariance matrix \((\sigma_{ij}) = 0.5 ^ {|i - j|}\). Responses are generated from the following model: \[ \mathbf{y} = \bfX\boldsymbol\beta + \boldsymbol\epsilon, \] where the first five elements of \(\boldsymbol\beta\) are \((-0.5, -0.5, 0.5, 0.5, 1)\), the remaining elements are zero, and \(\boldsymbol\epsilon\) is a vector of independent mean-zero normal random variables with standard deviation 2.
The total number of observations \(n\) is set to \(10^5\) and \(10^6\), the number of variables \(p\) is varied from 50 to 500, and the number of folds for cross validation is set to 10. For grouped regularization, groups of variables of size 25 are chosen contiguously. For the MCP regularization, the tuning parameter \(\gamma\) is chosen to be 3. Each method is fit using the same sequence of 100 values of the tuning parameter \(\lambda\), and convergence is specified to the same level of precision for each method. The computation times for the \code{glmnet} and \code{oem} functions without cross validation are given as a reference point. From the results in Figure \ref{fig:plotCV}, both the \code{xval.oem()} and \code{cv.oem()} functions are competitive with all other cross validation alternatives. Surprisingly, for the lasso penalty alone, the \code{xval.oem()} function is competitive with, and in many scenarios even faster than, \code{glmnet}, which does not perform cross validation. The \code{xval.oem()} function is clearly faster than \code{cv.oem()} in all scenarios; in many scenarios \code{cv.oem()} takes at least 6 times longer than \code{xval.oem()}. Both cross validation functions from the \pkg{oem} package are nearly as fast in computing for three penalties simultaneously as they are for just one. We have found in general that for any scenario where \(n \gg p\) and cross validation is required, it is worth considering \pkg{oem} as a fast alternative to other packages. In particular, a rough rule of thumb is that \pkg{oem} is advantageous when \(n > 10p\); however, \pkg{oem} may be advantageous with fewer observations than this for penalties other than the lasso, such as MCP, SCAD, and group lasso.
\begin{CodeChunk} \begin{figure} {\centering \includegraphics{plotCV-1} } \caption[Depicted above is the average computing time over ten runs in log(Seconds) for the functions that perform cross validation for selection of the tuning parameter for various penalized regression methods]{Depicted above is the average computing time over ten runs in log(Seconds) for the functions that perform cross validation for selection of the tuning parameter for various penalized regression methods.}\label{fig:plotCV} \end{figure} \end{CodeChunk} Now we test the impact of parallelization on the computation time of \code{xval.oem()} using the same simulation setup as above, except that we now additionally vary the number of cores used. From the results in Figure \ref{fig:plotCVPar}, we can see that using parallelization through \pkg{OpenMP} helps to some degree. However, it is important to note that, due to how the parallelization is implemented, using more cores than the number of cross validation folds is unlikely to yield better computation time than using as many cores as folds. \begin{CodeChunk} \begin{figure} {\centering \includegraphics{plotCVPar-1} } \caption{Depicted above is the average computing time over ten runs in log(Seconds) for the \code{xval.oem()} function with a varying number of cores. Results are shown for just one penalty (lasso) and three penalties fit simultaneously (lasso, group lasso, and MCP) with 100 values for the tuning parameter $\lambda$ for each penalty.}\label{fig:plotCVPar} \end{figure} \end{CodeChunk} \subsection[Sparse matrices]{Sparse matrices}\label{sparse-matrices} In the following simulated example, sparse design matrices are generated using the function \code{rsparsematrix()} of the \pkg{Matrix} package with nonzero entries generated from an independent standard normal distribution. The proportion of zero entries in the design matrix is set to 0.99, 0.95, and 0.9.
The total number of observations \(n\) is set to \(10^5\) and \(10^6\), and the number of variables \(p\) is varied from 250 to 1000. Responses are generated from the same model as in Section \ref{cross-validation}. Each method is fit using the same sequence of 100 values of the tuning parameter \(\lambda\), and convergence is specified to the same level of precision for each method. The authors are not aware of penalized regression packages other than \pkg{glmnet} that provide support for sparse matrices, so we compare \pkg{oem} only with \pkg{glmnet}. From the results in Figure \ref{fig:plotSPARSE}, it can be seen that \code{oem()} is superior in computation time for very tall data settings with a high degree of sparsity of the design matrix. \begin{CodeChunk} \begin{figure} {\centering \includegraphics{plotSPARSE-1} } \caption[Depicted above is the average computing time over ten runs in log(Seconds) for computation of lasso linear regression models using sparse matrices]{Depicted above is the average computing time over ten runs in log(Seconds) for computation of lasso linear regression models using sparse matrices.}\label{fig:plotSPARSE} \end{figure} \end{CodeChunk} \subsection[Penalized logistic regression]{Penalized logistic regression}\label{penalized-logistic-regression} In this simulation, we generate the design matrix from a multivariate normal distribution with covariance matrix \((\sigma_{ij}) = 0.5 ^ {|i - j|}\). Responses are generated from the following model: \[ \mbox{Pr}(Y_i = 1 \mid x_i) = \frac{1}{1 + \exp(-x_i^\top\boldsymbol\beta)}, \quad i = 1, \dots, n, \] where the first five elements of \(\boldsymbol\beta\) are \((-0.5, -0.5, 0.5, 0.5, 1)\) and the remaining elements are zero. The total number of observations \(n\) is set to \(10^4\) and \(10^5\), and the number of variables \(p\) is varied from 50 to 500.
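As an illustration only (the actual simulation code is not shown here), binary responses under the model above can be generated in \proglang{R} as follows, assuming \code{x} is a simulated design matrix and \code{nvars} its number of columns:

\begin{CodeChunk} \begin{CodeInput}
## illustrative generation of binomial responses (not the simulation code)
beta <- c(-0.5, -0.5, 0.5, 0.5, 1, rep(0, nvars - 5))
prob <- 1 / (1 + exp(-drop(x %*% beta)))
ybin <- rbinom(nrow(x), size = 1, prob = prob)
\end{CodeInput} \end{CodeChunk}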
For grouped regularization, groups of variables of size 25 are chosen contiguously. For the MCP regularization, the tuning parameter \(\gamma\) is chosen to be 3. Each method is fit using the same sequence of 25 values of the tuning parameter \(\lambda\). Both \code{glmnet()} and \code{oem()} allow for a Hessian upper-bound approximation for logistic regression models, so in this simulation we compare both the vanilla versions of each in addition to the Hessian upper-bound versions. For logistic regression models, unlike linear regression, the various functions compared use vastly different convergence criteria, so care was taken to set the convergence thresholds for the different methods to achieve a similar level of precision. The average computing times over 10 runs are displayed in Figure \ref{fig:plotBINOMIAL}. The Hessian upper-bound versions of \code{glmnet()} and \code{oem()} are given by the dashed lines. The \code{glmnet()} function is the fastest for lasso-penalized logistic regression, but \code{oem()} with and without the use of a Hessian upper-bound is nearly as fast as \code{glmnet()} when the sample size gets larger. For the MCP and group lasso, \code{oem()} with the Hessian upper-bound option is the fastest across the vast majority of simulation settings. In general, the Hessian upper-bound is most advantageous when the number of variables is moderately large.
\begin{CodeChunk} \begin{figure} {\centering \includegraphics{plotBINOMIAL-1} } \caption[Depicted above is the average computing time over ten runs in log(Seconds) for the functions that compute penalized estimates for binomial regression models]{Depicted above is the average computing time over ten runs in log(Seconds) for the functions that compute penalized estimates for binomial regression models.}\label{fig:plotBINOMIAL} \end{figure} \end{CodeChunk} \subsection[Quality of solutions]{Quality of solutions}\label{quality-of-solutions} In this section we present information about the quality of solutions provided by each of the compared packages. Using the same simulation setup and precisions for convergence as the cross validation and binomial simulations, we investigate the numerical precision of the methods by presenting the objective function values corresponding to the given solutions. Specifically, we look at the difference in objective function values between \code{oem()} and the comparative functions. A negative value here indicates that the \code{oem()} function results in solutions with a smaller value of the objective function. Since each simulation is run on a sequence of tuning parameter values, we present the differences in objective functions averaged over the tuning parameters. Since the objective function values across the different tuning parameter values are on the same scale, this comparison is reasonable. While the \code{grpreg()} function from the \pkg{grpreg} package provides solutions for the group lasso, it minimizes a slightly modified objective function, wherein the covariates are orthonormalized within groups as described in Section 2.1 of \citet{breheny2015}. Due to this fact we do not compare the objective function values associated with results returned by \code{grpreg()}. The objective function value differences for the linear model simulations are presented in Table \ref{table:loss_linear}.
We can see that the solutions provided by \code{oem()} are generally more precise than those of the other functions, except those of \code{ncvreg()} for the MCP. However, the differences between the solutions provided by \code{oem()} and \code{ncvreg()} for the MCP are close to machine precision. The objective function value differences for the logistic model simulations are presented in Tables \ref{table:loss_fullhessian} and \ref{table:loss_upperbound}. Table \ref{table:loss_fullhessian} refers to comparisons between \code{oem()} using the full Hessian and other methods, and Table \ref{table:loss_upperbound} refers to comparisons between \code{oem()} using the Hessian upper-bound and other methods. We can see that in both cases \code{oem()} results in solutions which are generally more precise than those of other methods. Interestingly, for the logistic regression simulations, unlike the linear regression simulations, we see that \code{oem()} provides solutions which are dramatically more precise than those of \code{ncvreg()} for the MCP. These results indicate that the nonconvexity of the MCP has more serious consequences for logistic regression in terms of algorithmic choices.
\begin{table}[b] \centering \begin{tabular}{llccc} \toprule\toprule $n$ & $p$ & \pkg{glmnet} & \pkg{gglasso} & \multicolumn{1}{c}{\pkg{ncvreg}} \\ \midrule \nopagebreak $10^5$ & \nopagebreak 50 & $-4.50^{-09}$ & $-3.46^{-06}$ & $ \hphantom{-}2.47^{-11}$ \\ & \nopagebreak 100 & $-5.38^{-09}$ & $-3.38^{-06}$ & $-1.46^{-12}$ \\ & \nopagebreak 250 & $-1.93^{-08}$ & $-3.37^{-06}$ & $-1.15^{-12}$ \\ & \nopagebreak 500 & $-3.58^{-08}$ & $-3.29^{-06}$ & $ \hphantom{-}7.11^{-12}$ \\ \rule{0pt}{1.7\normalbaselineskip} \nopagebreak $10^6$ & \nopagebreak 50 & $-2.00^{-09}$ & $-3.32^{-06}$ & $ \hphantom{-}7.00^{-14}$ \\ & \nopagebreak 100 & $-4.34^{-09}$ & $-3.30^{-06}$ & $-1.39^{-14}$ \\ & \nopagebreak 250 & $-8.82^{-09}$ & $-3.36^{-06}$ & $ \hphantom{-}6.76^{-12}$ \\ & \nopagebreak 500 & $-1.75^{-08}$ & $-3.34^{-06}$ & $ \hphantom{-}1.69^{-14}$ \\ \bottomrule \end{tabular} \caption{The results above are the differences in objective function values between the \code{oem()} function and other methods, averaged over all values of the tuning parameter for the linear regression simulations.
Negative here means the \code{oem()} function results in estimates with a lower objective function value.} \label{table:loss_linear} \end{table} \begin{table}[b] \centering \begin{tabular}{llcccc} \toprule\toprule $n$ & $p$ & \pkg{glmnet} & \pkg{glmnet} (ub) & \pkg{gglasso} & \multicolumn{1}{c}{\pkg{ncvreg}} \\ \midrule \nopagebreak $10^4$ & \nopagebreak 50 & $-2.79^{-13}$ & $-4.30^{-12}$ & $-4.33^{-07}$ & $-5.31^{-03}$ \\ & \nopagebreak 100 & $-1.82^{-13}$ & $-4.74^{-12}$ & $-4.95^{-07}$ & $-4.90^{-03}$ \\ & \nopagebreak 250 & $-6.52^{-13}$ & $-3.87^{-12}$ & $-5.06^{-07}$ & $-5.03^{-03}$ \\ & \nopagebreak 500 & $-4.80^{-13}$ & $-6.90^{-12}$ & $-5.30^{-07}$ & $-5.48^{-03}$ \\ \rule{0pt}{1.7\normalbaselineskip} \nopagebreak $10^5$ & \nopagebreak 50 & $-1.48^{-13}$ & $-2.72^{-12}$ & $-4.79^{-07}$ & $-3.05^{-03}$ \\ & \nopagebreak 100 & $-1.30^{-13}$ & $-2.17^{-12}$ & $-4.78^{-07}$ & $-4.55^{-03}$ \\ & \nopagebreak 250 & $-9.30^{-14}$ & $-2.78^{-12}$ & $-4.78^{-07}$ & $-4.47^{-03}$ \\ & \nopagebreak 500 & $-2.07^{-13}$ & $-4.49^{-12}$ & $-4.51^{-07}$ & $-3.07^{-03}$ \\ \bottomrule \end{tabular} \caption{The results above are the differences in objective function values between the full Hessian version of \code{oem()} and other methods, averaged over all values of the tuning parameter for the logistic regression simulations. Negative here means the \code{oem()} function results in estimates with a lower objective function value.
The heading ``\pkg{glmnet} (ub)'' corresponds to \code{glmnet()} with the Hessian upper bound option.} \label{table:loss_fullhessian} \end{table} \begin{table}[b] \center \begin{tabular}{llcccc} \toprule\toprule $n$ & $p$ & \pkg{glmnet} & \pkg{glmnet} (ub) & \pkg{gglasso} & \multicolumn{1}{c}{\pkg{ncvreg}} \\ \midrule \nopagebreak $10^4$ & \nopagebreak 50 & $-7.01^{-14}$ & $-4.09^{-12}$ & $-4.20^{-07}$ & $-5.31^{-03}$ \\ & \nopagebreak 100 & $ \hphantom{-}6.74^{-13}$ & $-3.88^{-12}$ & $-4.76^{-07}$ & $-4.90^{-03}$ \\ & \nopagebreak 250 & $-3.88^{-14}$ & $-3.25^{-12}$ & $-4.97^{-07}$ & $-5.03^{-03}$ \\ & \nopagebreak 500 & $-4.06^{-13}$ & $-6.82^{-12}$ & $-5.16^{-07}$ & $-5.47^{-03}$ \\ \rule{0pt}{1.7\normalbaselineskip} \nopagebreak $10^5$ & \nopagebreak 50 & $ \hphantom{-}3.16^{-12}$ & $ \hphantom{-}5.87^{-13}$ & $-4.71^{-07}$ & $-3.05^{-03}$ \\ & \nopagebreak 100 & $ \hphantom{-}3.06^{-12}$ & $ \hphantom{-}1.02^{-12}$ & $-4.61^{-07}$ & $-4.55^{-03}$ \\ & \nopagebreak 250 & $ \hphantom{-}2.72^{-12}$ & $ \hphantom{-}3.40^{-14}$ & $-4.71^{-07}$ & $-4.47^{-03}$ \\ & \nopagebreak 500 & $ \hphantom{-}1.09^{-12}$ & $-3.19^{-12}$ & $-4.31^{-07}$ & $-3.07^{-03}$ \\ \bottomrule \end{tabular} \caption{The results above are the average differences in objective function value between the upper bound version of \code{oem()} and other methods, averaged over all values of the tuning parameter for the logistic regression simulations. Negative here means the \code{oem()} function results in estimates with a lower objective function value. The heading ``\pkg{glmnet} (ub)'' corresponds to \code{glmnet()} with the Hessian upper bound option.} \label{table:loss_upperbound} \end{table} \section[Acknowledgments]{Acknowledgments}\label{acknowledgments} This material is based upon work supported by, or in part by, the U. S. Army Research Laboratory and the U. S.
Army Research Office under contract/grant number W911NF1510156, NSF Grants DMS 1055214 and DMS 1564376, and NIH grant T32HL083806. \bibliographystyle{jss}
\section*{Acknowledgements} Manfred Eppe, Phuong Nguyen and Stefan Wermter acknowledge funding by the German Research Foundation (DFG) through the projects IDEAS and LeCAREbot. Stefan Wermter and Matthias Kerzel acknowledge funding by the DFG through the transregio project TRR 169 Cross-modal Learning (CML). Christian Gumbsch and Martin Butz acknowledge funding by the DFG project number BU 1335/11-1 in the framework of the SPP program ``The Active Self'' (SPP 2134). Furthermore, Martin Butz acknowledges support from a Feodor-Lynen stipend of the Humboldt Foundation as well as from the DFG, project number 198647426, ``Research Training Group 1808: Ambiguity - Production and Perception'' and from the DFG-funded Cluster of Excellence ``Machine Learning -- New Perspectives for Science'', EXC 2064/1, project number 390727645. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Christian Gumbsch. \input{bibliography} \end{document} \section{Neurocognitive foundations} \label{sec:cognitive_background} The problem-solving abilities of a biological agent depend on key cognitive mechanisms that include abstraction, intrinsic motivation, and mental simulation. \autoref{fig:cog_prereq} shows how these mechanisms depend on forward and inverse models as neuro-functional prerequisites. Together, they enable cognitive abilities that we deem crucial for higher-level intelligence, including few-shot problem-solving and transfer learning. In the following, we start with a characterization of these crucial cognitive abilities and then exhibit some of the key mechanisms and prerequisites needed to enable them.
\smallskip \noindent \textbf{Few-shot problem-solving} is the ability to solve unknown problems with few ($\lessapprox 10$) trials.\footnote{Zero-shot problem-solving is a special case of few-shot problem-solving where no additional training at all is required to solve a new problem.} For simple problems, few-shot problem-solving is trivial. For example, catching a ball is usually a purely reactive behaviour that builds on a direct neural mechanism to map the observation of the flying ball to appropriate motor commands. Such problems are solvable with current computational RL methods\cite{Li2018_DeepRL_Overview}. However, there are also more difficult problems that require, e.g., using tools in new contexts. For example, consider the aforementioned problem-solving example of the crow\cite{Gruber2019_CrowMentalRep} (see \autoref{fig:crow_probsolving}). The crow knows how to use a stick as a tool without any further training, because it has previously used a stick for related problems. In our conceptual framework (see \autoref{fig:cog_prereq}), we identify two abilities that are most central to performing such problem-solving, namely transfer learning and planning. \smallskip \noindent \textbf{Transfer learning} allows biological agents to perform few-shot pro\-blem-solving by transferring the solution of a previously solved task to novel, previously unseen tasks\cite{Perkins1992}. This significantly reduces, and sometimes eliminates, the number of trials needed to solve a problem. \citet{Perkins1992} propose a distinction between near and far transfer. Near transfer refers to the transfer of skills between domains or situations that are rather similar, e.g. learning to catch a ball and then catching other objects. In contrast, far transfer requires the transfer of more abstract solutions between different situations, typically via abstract analogies. Such analogies are determined by mappings between problems from different domains.
For example, an object transport problem with a robotic hand that involves [\concept{grasp(object)}, \concept{moveArm(goal)}, \concept{release(object)}] is analogous to a file movement problem on a computer that involves [\concept{cutInFolder(file)}, \concept{navigate(targetFolder)}, \concept{pasteInFolder(file)}], with the obvious mappings between the action types and arguments. Cognitive theories consider analogical reasoning as a critical cornerstone of human cognition. For example, conceptual metaphor theory\cite{Feldman2009_neural_ECG} understands metaphors as mappings between analogous concepts in different domains. In addition to evidence from linguistics, there is also significant evidence from education theory and research on mechanical reasoning that analogical reasoning improves problem-solving: Education theory suggests that human transfer learning improves when people are explicitly trained to identify analogies between different problems\cite{Klauer1989_AnalogyTransfer}. The psychology of mechanical reasoning suggests that analogical knowledge about the dynamic properties of mechanical objects is often transferred between domains\cite{Hegarty2004_MechReasMentalSim}. For example, knowledge about what happens if somebody jumps into a pool can be transferred to other mechanical problems that involve over-flooding water containers\cite{Hegarty2004_MechReasMentalSim}. \smallskip \noindent \textbf{Goal-directed planning.} The behaviour of biological agents is traditionally divided into two categories: Stimulus-driven, habituated behaviour and goal-directed, planned behaviour\cite{Dolan2013, Friston2016}. Initially, stimulus-response theories dominated the field, suggesting that most behaviour was learned by forming associations that were previously reinforced \cite{Thorndike}. Edward Tolman was one of the main critics of stimulus-response theories\cite{Dolan2013}.
He showed that rats can find rewards in a maze faster when they have visited the maze before, even if their previous visit had not been rewarded \cite{Tolman1930}. The results suggest that the rats form a representation, or cognitive map, of the maze, which enables them to plan or mentally simulate their behaviour once a reward is detected. Habituation of behaviour, which is comparable to model-free policy optimization in reinforcement learning\cite{Botvinick2014,Dayan2009}, enables an agent to learn one particular skill in a controlled environment. However, complex problem-solving in a new or changing environment requires some form of prospective planning\cite{Butz2017_MindBeing, Dolan2013, Hoffmann:2003}. Over the last decades, more research has focused on understanding the mechanisms and representations involved in goal-directed planning. It is now generally agreed that, during planning, humans predict the effect of their actions, compare them to the desired effects, and, if required, refine their course of actions\cite{Hoffmann:2003,Kunde:2007}. This is deemed to be a hierarchical process where the effects of actions on different levels of abstraction are considered\cite{Botvinick2014, Tomov2020}. In doing so, humans tend to first plan high-level actions before considering actions at a finer granularity\cite{Wiener2003}. Hierarchical planning can dramatically decrease the computational cost involved in planning \cite{Botvinick2014, Dolan2013}. Additionally, hierarchical abstractions enable automatizing originally planned behaviour, thus further alleviating computational cost \cite{Dayan2009}. \subsection{Cognitive mechanisms for transfer learning and planning} Transfer learning and planning rely on shared and partially overlapping mechanisms and principles. Here, we focus on three mechanisms and principles that we identify as most critical, namely sensorimotor abstraction, intrinsic motivation, and mental simulation.
\smallskip \noindent \textbf{Sensorimotor abstraction.} According to the widely accepted embodiment theory, abstract mental concepts are inferred from our bodily-grounded sensorimotor environmental interaction experiences \cite{Barsalou2008_GroundedCognition,Butz2017_MindBeing,Butz2016,Pulvermuller:2010}. Cognitive theories often distinguish between \emph{action abstraction} and \emph{state abstraction}. Action abstractions refer to temporally extended sequences of primitive actions that are often referred to as options\cite{Sutton1999_options} or movement primitives\cite{Flash2005_AnimalMotorPrimitives, Schaal_2006}. For example, \emph{transporting an object} can be seen as an action abstraction since it is composed of a sequence of more primitive actions, such as \textit{reaching}, \textit{lifting}, and \textit{transporting} (see \autoref{fig:cog_comp}). The elementary actions that compose an action abstraction are typically rather loosely defined through their effects. For example, we can tell that a robot arm we have never seen before is grasping an object, even though the anatomy, the unfolding control algorithms, and the involved motor commands may largely vary from our human ones. State abstractions refer to encoding certain parts of the environment while abstracting away irrelevant details. A simple example of a state abstraction is a code that specifies whether a certain type of object is present in a scene. State abstractions can be hierarchically structured: A single object can be subdivided into its parts to form a partonomy, i.e., a hierarchical organization of its parts and their spatial relationship (see \autoref{fig:cog_comp}) \cite{Minsky1974, Zacks2001_EventStructure}. Additionally, objects can be organized taxonomically, reflecting a hierarchical relationship that focuses on particular conceptual aspects \cite{Minsky1974, Zacks2001_EventStructure}.
For example, an affordance-oriented \cite{Gibson:1979} taxonomy of beverage containers could classify \textit{wine glasses} and \textit{beer mugs} both as \textit{drinking vessels}, while a \textit{teapot} would be classified as a \textit{pot} instead. Meanwhile, \textit{beer mugs}, \textit{wine glasses}, and \textit{teapots} are all \textit{graspable containers}. Conceptual abstractions and cross-references to other conceptual abstractions within imply a key representational property: \textbf{compositionality}. Formal compositionality principles state that an expression or representation is compositional if it consists of subexpressions, and if there exist rules that determine how to combine the subexpressions \cite{Szabo:2020}. The Language of Thought theory\cite{Fodor:2001} transfers the compositionality principle from language to abstract mental representations, claiming that thought must also be compositional. For example, humans are easily able to imagine previously unseen situations and objects by composing their known constituents, such as the \emph{golden pavements} and \emph{ruby walls} imagined by Scottish philosopher David Hume in \emph{A Treatise of Human Nature}\cite{Frankland2020_SearchLoT}. On top of that, embodied world knowledge further constrains the formal semantics-based options to combine sub-expressions. In the remainder, we will refer to this type of compositionality as \textbf{common sense compositionality}, that is, compositional rules that are grounded in and flexibly inferred by our intuitive knowledge about the world, including other agents (humans, animals, robots, machines, etc.). This common sense compositionality essentially enforces that our minds attempt to form plausible environmental scenarios and events that unfold within given conceptual information about a particular situation \cite{Butz2016,Gaerdenfors:2014,Lakoff1999}.
Consider the processes that unfold when understanding a sentence like ``He sneezed the napkin off the table''. Most people have probably never heard or used this sentence before, but everyone with sufficient world and English grammar knowledge can correctly interpret the sentence, imagining a scene wherein the described event unfolds \cite{Butz2016}. Common sense compositionality makes abstractions applicable in meaningful manners, by constraining the filling in of an abstract component (e.g., the target of a grasp) towards applicable objects (e.g., a graspable object such as a teapot), as depicted in \autoref{fig:cog_comp}. Note the awkwardness -- and indeed the attempt of the brain to resolve it -- when disobeying common sense compositionality, such as when stating that `The dax fits into the air' or `grasping a cloud' \cite{Butz2016}. Common sense compositionality indeed may be considered a hallmark and cornerstone of human creativity, as has been (indirectly) suggested by numerous influential cognitivists \cite{Barsalou2008_GroundedCognition,Fillmore1985,Lake2017,Lakoff1999,Sugita:2011,Turner2014,Werning:2012}. In the neurosciences, recent neuroimaging techniques indeed revealed that the compositionality of mental representations can actually be observed in neural codes\cite{Frankland2020_SearchLoT}. For example, \citet{Haynes2015_CompositionalNeuroRep} and \citet{Reverberi2012_CompositionalFmrIRules} show that the neural codes in the ventrolateral prefrontal cortex representing compound behavioural rules can be decomposed into neural codes that encode their constituent rules. Similar results have been obtained for neural codes that represent food compositions\cite{Barron2013_CombinedChoicesConcepts}. 
\begin{figure} \begin{minipage}{.32\textwidth} \centering \includegraphics[width=\textwidth]{pics/ActionAbstraction_Box_a.pdf} \end{minipage} \hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=\textwidth]{pics/StateAbstraction_Box_b.pdf} \end{minipage} \hfill \begin{minipage}{.32\textwidth} \centering \includegraphics[width=\textwidth]{pics/SensorimotorAbstraction_Box_c.pdf} \end{minipage} \caption{Compositional action and state abstractions. \emph{Action abstractions} describe a sequence of more primitive actions (a.). \emph{State abstractions} encode certain properties of the state space (b.). Their \emph{compositionality} enables their general application by instantiating abstract definitions (the graspable target of reaching) with a specific object (teapot) (c.). \label{fig:cog_comp}} \end{figure} From an algorithmic perspective, common sense compositionality yields benefits for analogical reasoning and planning. It simplifies the identification of analogies because it enables compositional morphisms between representations. This advantage is well-known in cognitive theories of creativity, e.g., concept blending\cite{Eppe2018_AIJ_Blending,Turner2014}, where the search for analogies is a computational bottleneck. Similarly, compositional state- and action representations for goal-directed planning lead to a lower computational complexity, because action and object types can be flexibly combined as long as critical properties match. Along these lines, action grammars have been proposed, which systematically form an effect-oriented taxonomy of tool-object interactions \cite{Worgotter:2013}. As a result, common sense compositionality enables the utilisation of objects as tools in innovative manners---when, for example, utilising a stone as a hammer \cite{Butz2017_MindBeing,Lakoff1999}. Moreover, it enables drawing analogies across domains---when, for example, talking about ``holding onto a thought'' for later utilisation.
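From an algorithmic perspective, such constraint-based instantiation can be sketched in a few lines. The following is a minimal illustration; all property sets, action names, and objects are hypothetical assumptions for the sketch, not part of any architecture reviewed here:

```python
# Minimal sketch of common sense compositionality as constrained slot filling.
# All names (properties, actions, objects) are illustrative assumptions.

OBJECTS = {
    "teapot": {"graspable", "container"},
    "cloud":  {"visible"},
    "stone":  {"graspable", "rigid"},
}

# An abstract action defines its argument slot only via required properties.
ACTIONS = {
    "grasp":  {"graspable"},
    "hammer": {"graspable", "rigid"},
}

def instantiate(action, obj):
    """Fill the action's argument slot if the object satisfies its constraints."""
    required = ACTIONS[action]
    if required <= OBJECTS[obj]:  # set inclusion: all required properties present
        return (action, obj)
    raise ValueError(f"'{obj}' violates the constraints of '{action}'")

instantiate("grasp", "teapot")   # fine: a teapot is graspable
instantiate("hammer", "stone")   # innovative tool use: a stone as a hammer
```

Disobeying the constraints, as in `grasping a cloud', raises an error in the sketch, mirroring the awkwardness of such expressions noted in the text.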
Common sense compositionality thus seems to be key for truly intelligent artificial systems. We propose that apart from the addition of suitable inductive learning biases, such as event-segmentation tendencies \cite{Butz2016,Butz2017_MindBeing,Butz:2020,Shin:2020tsi,Zacks2007_EST}, suitably targeted intrinsic motivation and the ability to play out mental simulations are of critical importance. \smallskip \noindent \textbf{Intrinsic motivation} affects goal-directed reasoning and planning because it sets intrinsically motivated goals that an agent may aim to achieve. The most basic behaviour of biological agents purely strives for satisfying homeostatic needs, such as hunger or thirst. However, advanced behaviour, such as exploration and play, seems to be partially decoupled from the primary biological needs of an animal. From a cognitive development perspective, the term \emph{intrinsic motivation} was coined to describe the ``inherent tendency to seek out novelty, [...] to explore and to learn''\cite{RyanDeci2000}. Here, \emph{intrinsic} is used in opposition to extrinsic values that are directly associated with the satisfaction of needs or the avoidance of undesired outcomes \cite{Friston2015}. Intrinsic motivations induce general exploratory behaviour, curiosity, and playfulness. Simple exploratory behaviour can even be found in worms, insects, and crustaceans \cite{Pisula2008} and may be elicited by rather simple tendencies to wander around while being satiated or an inborn tendency to evade overcrowded areas. Curiosity refers to an epistemic drive that causes information gain-oriented exploratory behaviour\cite{Berlyne1966, Loewenstein1994, Oudeyer2007}. Combined with developing common sense compositionality, this drive can elicit hypothesis testing behaviour even by means of analogy making. 
The closely related playfulness is only exhibited in certain intelligent, mostly mammalian or avian species, such as dogs and corvids\cite{Pisula2008}, where scenarios and the events within them are played out in a hypothetical manner and novel interactions are tried out in a skill-improvement-oriented manner. \smallskip \noindent \textbf{Mental simulation}, meanwhile, enables biological agents to reflect on and to anticipate the dynamics of their environment on various representational levels in the past, present, and future, and even fully hypothetically. Actual motor-imagery has been shown to enable athletes to improve the execution of challenging body movements (e.g. a backflip), significantly reducing the number of actually executed trials\cite{Jeannerod1995_MentalImageryMotor}. On the reasoning side, consider human engineers who rely on mental simulation when developing mechanical systems, such as pulleys or gears. The simulation involves visual imaginations, but also includes non-visual properties like friction and force \cite{Hegarty2004_MechReasMentalSim}. Mental simulation also takes place on higher conceptual, compositional, and causal reasoning levels. For example, \citet{Kahnemann1982_MentalSim}, \citet{Wells1989_MentalSimCausality} and later \citet{Taylor1998_MentalSim} report how mental simulation improves the planning of future behaviour on a causal conceptual level, such as when planning a trip. \smallskip \subsection{Forward-inverse models as functional prerequisites} Sensorimotor abstraction, intrinsic motivation, and mental simulation are complex mechanisms that demand a suitable neuro-functional basis. We propose that this essential basis is constituted by forward and inverse models \cite{Wolpert1998_ForwardInverse}. Forward models predict how an action affects the world, while inverse models determine the actions that need to be executed to achieve a goal or to obtain a reward.
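As a minimal illustration of this division of labour, a forward model can verify candidate actions proposed by an inverse model before any action is physically executed. The toy transition table, state names, and goal below are assumptions made purely for the sketch:

```python
# Sketch of forward/inverse model interaction during mental simulation.
# Real models would be learned function approximators, not lookup tables.

# Forward model: predicts the next state for a (state, action) pair.
FORWARD = {
    ("ball_far", "reach"):  "ball_near",
    ("ball_near", "grasp"): "ball_held",
    ("ball_near", "push"):  "ball_far",
}

def inverse_model(state, goal):
    """Propose candidate actions for the current state (a heuristic, not a search)."""
    return [a for (s, a) in FORWARD if s == state]

def mentally_simulate(state, goal):
    """Verify candidate actions with the forward model; execute nothing."""
    for action in inverse_model(state, goal):
        if FORWARD.get((state, action)) == goal:
            return action  # predicted to achieve the goal
    return None  # no candidate survives mental verification

mentally_simulate("ball_near", "ball_held")  # selects 'grasp' without real trials
```

The inverse model keeps the candidate set small, while the forward model filters out candidates whose predicted outcome does not match the goal.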
Note that inverse models may be implicitly inferred from the available forward models, but more compact inverse models, which are closely related to habits and motion primitives, certainly foster the proficiency of particular behavioural skills further \cite{Barsalou2008_GroundedCognition,Butz2017_MindBeing,Dezfouli:2014,Schaal_2006}. \smallskip \noindent \textbf{Forward-inverse models for mental simulation.} To perform mental simulation, an agent needs a forward model to simulate how the dynamics of the environment behave, possibly conditioned on the agent's actions. However, even when equipped with a well-predicting forward model, the consideration of all possible actions and action sequences quickly becomes computationally intractable. Hence, action-selection requires more direct mappings from state and desired inner mental state (including an intention or a goal) to potential actions. This is accomplished by inverse models. In RL, inverse models are represented as behaviour policies. Selecting actions can happen in a reflex-like manner on the motor level but it can also serve as a heuristic to generate candidate actions for accomplishing a current goal. For example, when designing a new mechanical system, engineers have an intuition about how to put certain mechanical parts together, depending on the goal function they want to achieve with the system. Based on their intuition, they mentally simulate and verify the functioning of the mechanical system\cite{Hegarty2004_MechReasMentalSim}. \smallskip \noindent \textbf{Forward and inverse models for intrinsic motivation.} Forward and inverse models are also needed to trigger behaviour that is driven by intrinsic motivation. 
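One common computational reading of this idea treats the forward model's prediction error as an intrinsic reward bonus added to the extrinsic reward. The sketch below uses a deliberately poor toy forward model and an assumed scaling factor `beta`; both are illustrative, not taken from any specific architecture:

```python
# Sketch of prediction-error-based intrinsic motivation: the agent's reward is
# augmented by how surprising an observed transition is to its forward model.

def forward_model(state, action):
    """A (deliberately bad) predictor: assumes actions never change the state."""
    return state

def intrinsic_reward(state, action, next_state, beta=1.0):
    """Curiosity bonus: the forward model's prediction error, scaled by beta."""
    predicted = forward_model(state, action)
    return beta * abs(next_state - predicted)

def total_reward(extrinsic, state, action, next_state):
    return extrinsic + intrinsic_reward(state, action, next_state)

# A transition the model predicts badly yields a large curiosity bonus,
# driving the agent towards transitions it cannot yet predict.
total_reward(0.0, state=1.0, action=0, next_state=3.0)
```

Once the forward model learns to predict a transition, the bonus for that transition vanishes, so the agent's curiosity shifts elsewhere.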
For example, prediction error minimisation has been demonstrated to be a useful additional, intrinsic reward signal to model curiosity, inversely triggering behaviour and behavioural routines that are believed to cause particular environmental changes that are, in turn, believed to possibly result in knowledge gain \cite{Kaplan:2004, Schmidhuber2010, Schmidhuber:1991, Oudeyer2007}. Along similar lines, Friston et al.~ propose that intrinsically motivated behaviour emerges when applying active inference\cite{Friston2011, Friston2015}. Active inference describes the process of inferring actions to minimize expected free energy, which includes approximations of anticipated surprise\cite{Friston2010, Friston2016}. Free energy is decomposed into various sources of uncertainty. One part is uncertainty about future states or outcomes given a certain sequence of actions \cite{Friston2015}. The agent strives to reduce this uncertainty, and with it overall expected free energy, by activating the actions that cause the uncertainty. Hence, active inference can lead to epistemic exploration, where an agent acts out different behaviour to learn about its consequences\cite{Friston2015, Oudeyer2018_CompuCuriosityLearning}. Here, the forward model is required to predict future states given an action, and also to estimate the uncertainty of the prediction. \smallskip \noindent \textbf{Forward and inverse models for abstraction and grounding.} Over the last decade, various theories including predictive coding\cite{Huang2011}, the Free Energy Principle\cite{Friston2010}, and the Bayesian Brain hypothesis\cite{Knill2004}, have viewed the brain as a generative machine, which constantly attempts to match incoming sensory signals with its probabilistic predictions\cite{Clark2013}. 
Within these frameworks, prediction takes place on multiple processing hierarchies that interact bidirectionally: Top-down information per processing stage provides additional information to predict sensory signals, while bottom-up error information is used to correct the top-down predictions \cite{Clark2016_SurfUncertainty}. Meanwhile, future states are predicted in a probabilistic manner\cite{Clark2013}. On lower processing levels, actual incoming sensorimotor signals are predicted, while on higher levels more abstract, conceptual and compositional predictive encodings emerge\cite{Butz2016}. In this way, rather complex state abstractions, such as the aforementioned \concept{container}-concept, can form. As a result, high-level predictions, e.g., of an object being within a container, enable predictions on lower levels. For example, consider the prediction of sensory information about how an object's position will change over time while it is occluded. Event Segmentation Theory (EST)\cite{Zacks2007_EST} makes the role of forward predictions even more explicit: EST is concerned with why humans seem to perceive the continuous sensory activity in terms of discrete, conceptual events. The theory suggests that these segmentations mirror the internal representation of the unfolding experience. Internal representations of events, or \emph{event models}, are sensorimotor abstractions that encode entities with certain properties and their functional relations in a spatiotemporal framework\cite{Radvansky2011_EventPerception}. According to EST, these event models guide the perception by providing additional information that can be used for forward predictions\cite{Zacks2007_EST}. 
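In computational terms, such model-guided perception can be caricatured as follows. The one-dimensional signal, the constant-velocity forward prediction, and the threshold are all assumptions made for the sketch:

```python
# EST-inspired sketch: an event boundary is signalled whenever the currently
# active forward prediction produces a transient error above a threshold.

def segment_events(signal, threshold=1.0):
    """Return indices where the forward-prediction error spikes (event boundaries)."""
    boundaries = []
    for t in range(2, len(signal)):
        # simple forward prediction: extrapolate the most recent velocity
        predicted = signal[t - 1] + (signal[t - 1] - signal[t - 2])
        if abs(signal[t] - predicted) > threshold:
            boundaries.append(t)  # exchange the active event model here
    return boundaries

# Constant-velocity motion, then an abrupt change to a faster motion:
# the single prediction-error spike marks the event boundary.
segment_events([0, 1, 2, 3, 9, 15, 21])
```

The spike in prediction error, rather than the raw signal itself, marks the transition between events.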
While observing an event, a specific subset of event models is active until a transient prediction error is registered, resulting in an exchange of the currently active event models for a new subset that may be better suited for predicting the currently ongoing dynamics\cite{Kuperberg:2020tsi, Radvansky2011_EventPerception, Shin:2020tsi, Zacks2007_EST}. EST-inspired computational models demonstrate that such transient forward prediction errors can indeed be used to signal event transitions in video streams or self-explored sensorimotor data in an online fashion \cite{Franklin:2020, Gumbsch2019, Humaidan:2020}. \section{Conclusion} \label{sec:conclusion} Our review provides an overview of the cognitive foundations of hierarchical problem-solving and how these are implemented in current hierarchical reinforcement learning architectures. Herein, we focus on few-shot problem-solving -- the ability to learn to solve causally non-trivial problems with as few trials as possible or, ideally, without any trials at all (zero-shot problem-solving). As a benchmark example, we refer to recent results from animal cognition \cite{Gruber2019_CrowMentalRep}, demonstrating how a crow develops mental representations of tools to solve a previously unseen food-access problem on the first trial (see \autoref{fig:crow_probsolving}). As our main research question, we ask which computational prerequisites and mechanisms enable problem-solving for artificial computational agents on the level of intelligent animals. We provide two main contributions to address this question. First, we perform a literature survey and a top-down structuring of cognitive abilities, mechanisms, and prerequisites for few-shot problem-solving (see \autoref{fig:cog_prereq}).
Second, we perform a comprehensive review of recent hierarchical reinforcement learning architectures to identify whether and how the existing approaches implement the prerequisites, mechanisms, and abilities towards few-shot problem-solving (see \autoref{tab:hrl_properties}). Herein, we identify five major shortcomings in the state of the art of the computational approaches. These shortcomings are mostly based on the lack of hierarchical forward models and compositional abstractions. Nonetheless, we were able to identify several methods that address these gaps in the state of the art in isolation. Not seeing any major reason why these approaches could not be integrated, we suggest that most of the tools to realise higher levels of intelligent behaviour in artificial agents have already been investigated. The key is to combine them effectively. This demonstrates significant potential to develop intelligent agents that can learn to solve problems on the level of intelligent biological agents. \section{Computational realizations} \label{sec:hrlFinal} The abilities, mechanisms and prerequisites of computational hierarchical reinforcement learning systems (cf.~\autoref{fig:hrl}) are less sophisticated and integrated than those of biological agents. However, there are promising novel developments to potentially overcome the existing limitations. To identify the potential of the existing computational approaches, we provide an overview of the current state of the art on hierarchical reinforcement learning in \autoref{tab:hrl_properties}. \begin{figure} \centering \includegraphics[width=\textwidth]{pics/Figure3_smaller.pdf} \caption{A general hierarchical problem-solving architecture with a separate policy $\pi^i$ for each layer of abstraction. Most existing computational HRL approaches focus on two layers, where the high-level action $a^1$ is either a behavioural primitive (an option or sub-policy), or a subgoal. 
Only the low-level actions $a^0$ are motor commands that affect the environment directly. } \label{fig:hrl} \end{figure} \input{all_table} \subsection{Transfer learning and planning for few-shot abilities} \label{sec:hrl:abilities} Our survey of the neurocognitive foundations indicates that two foundational cognitive abilities for few-shot problem-solving are transfer learning and planning. \smallskip \noindent \textbf{Transfer learning} denotes the re-use of previously learned skills in new contexts and domains, where \emph{near} transfer learning denotes transfer between source and target tasks with similar contexts and domains, while \emph{far} transfer considers stronger dissimilarities \cite{Perkins1992}. A significant fraction of the existing near transfer approaches build on learning re-usable low-level skills, which are referred to as behavioural primitives, options, or sub-policies. For example, \citet{Li2020_SubPolicy} and \citet{Frans2018_MetaLearningHierarchies} present sub-policy-based hierarchical extensions to Proximal Policy Optimization (PPO)\cite{Schulman2017_PPO}. Their approaches enable the transfer of sub-policies to novel tasks within the same domain. \citet{Heess2016_transferModulControl} use Generalized Advantage Estimation (GAE)\cite{Schulman2015_GAE} to implement similar transferable sub-policies. However, the authors also consider transfer learning towards different types of tasks. For example, they transfer behaviour from a locomotion task to a more complex soccer task. \citet{Eysenbach2019_DiversityFunction} and \citet{Sharma2020_DADS} introduce diversity functions to learn diverse re-usable sub-policies that are independent of external rewards. \citet{Tessler2017_LifelongHRLMinecraft} focus on the transfer of skills in lifelong learning, and \citet{Wu2019_ModelPrimitives} propose a model-based approach, where only the forward models are transferred to new tasks, but not the policies.
\citet{Vezhnevets2016_STRAW} build on the automatic discovery of transferable macro-actions (plans) to solve problems in discrete 2D-environments, while \citet{Jiang2019_HRL_Language_Abstraction} use natural language action representations to perform near transfer learning. \citet{Qureshi2020_CompAgnosticPol} build on re-usable low-level policies for goal-conditioned hierarchical reinforcement learning. Research has not only considered transfer learning between different tasks but also between different robot and agent morphologies\cite{Devin2017_Transfer_RL,Frans2018_MetaLearningHierarchies,Hejna2020_MorphologicalTransfer}. We classify these approaches as \emph{far} transfer because the entire sensorimotor apparatus changes, which places the agent in a dissimilar context. Furthermore, the methods that feature such cross-morphological transfer also perform far transfer learning across different application domains. For example, \citet{Frans2018_MetaLearningHierarchies} transfer navigation skills acquired in a discrete-space grid maze environment to a continuous-space ant maze environment. \smallskip \noindent \textbf{Planning} is a highly overloaded term that has different meanings in different sub-disciplines of AI. In this paper, we refer to goal-directed planning in the sense of classical AI, as an abductive search for a sequence of actions that will lead to a specific goal by simulating the actions with an internal model of the domain dynamics. Planning enables one-shot problem-solving because the searching does not involve the physical execution of actions. The agent only executes the actions if the mental simulation verifies that the actions are successful. In this sense, planning differs from model-based reinforcement learning, which usually refers to training a policy by simulating actions using a predictive forward model\cite{Sutton1990_Dyna}. 
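A minimal sketch of this notion of planning is a breadth-first abductive search through a deterministic forward model, where the returned action sequence is only mentally verified, never executed during the search. All domain details below (states, actions, the toy forward model) are assumptions for illustration:

```python
# Goal-directed planning as search through a forward model: action sequences
# are simulated mentally; only a verified plan would be executed.

from collections import deque

def plan(start, goal, actions, forward_model, max_depth=10):
    """Breadth-first abductive search for an action sequence reaching the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, seq = frontier.popleft()
        if state == goal:
            return seq  # mentally verified, ready for execution
        if len(seq) >= max_depth:
            continue
        for a in actions:
            nxt = forward_model(state, a)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, seq + [a]))
    return None  # no plan found within the horizon

# Toy domain: move along a line towards position 3.
def step(s, a):
    return s + (1 if a == "right" else -1)

plan(0, 3, ["right", "left"], step)  # -> ['right', 'right', 'right']
```

Because the search only queries the forward model, no physical trials are consumed, which is exactly what distinguishes planning from trial-and-error policy optimization in this context.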
Hierarchical planning is a well-known paradigm in classical AI\cite{Nau2003_SHOP2}, but approaches that integrate planning with hierarchical reinforcement learning are rare. Some approaches integrate action planning with reinforcement learning by using an action planner for high-level decision-making and a reinforcement learner for low-level motor control\cite{Eppe2019_planning_rl,Lyu2019_SDRL,Ma2020_MultiAgentHRL,Sharma2020_DADS,Yamamoto2018_RL_Planning,Yang2018_PEORL}. These approaches underline that planning is especially useful for high-level inference in discrete state-action spaces. \subsection{Mechanisms behind transfer learning and planning} Our summary of the cognitive principles behind transfer learning and planning reveals three important mechanisms that are critical for the learning and problem-solving capabilities of biological agents, namely compositional sensorimotor abstraction, intrinsic motivation, and mental simulation. \smallskip \noindent \textbf{Compositional sensorimotor abstraction and grounding.} The temporal abstraction of actions is, by definition, an inherent property of hierarchical reinforcement learning, as it allows complex problems to be broken down into a temporal hierarchical structure of simpler problems. Another dimension of abstraction is representational abstraction. Existing hierarchical reinforcement learning approaches realise representational action abstraction in two different ways.
Probably the most influential method for action abstraction builds on behaviour primitives\cite{Bacon2017_OptionCritic,Dietterich2000_StateAbstraction_MAXQ,Frans2018_MetaLearningHierarchies,Kulkarni2016_HDQN,Li2020_SubPolicy,Li2017_efficient_learning,Ma2020_MultiAgentHRL,Sharma2020_DADS,Machado2017_PVF_Laplace_OptionDiscovery,Qiao_2020_HRL-Driving,Shankar2020_MotorPrimitives,Tessler2017_LifelongHRLMinecraft,Vezhnevets2020_OPRE}, including options\cite{Sutton1999_options}, sub-policies, or atomic high-level skills. Such behaviour primitives are represented in an abstract representational space, e.g. in a discrete finite space of action labels or indices, abstracting away from the low-level space of motor commands (see \autoref{fig:options_behaviour_primitives}, a.). A more recent way of producing high-level action representations is through subgoals in the low-level state space\cite{Eppe2019_planning_rl,Ghazanfari2020_AssociationRules,Levy2019_Hierarchical,Nachum2018_HIRO,Rafati2019_model-free_rep_learning,Hejna2020_MorphologicalTransfer,Roeder2020_CHAC,Zhang2020_AdjancentSubgoals} (see \autoref{fig:options_behaviour_primitives}, b.). Subgoals are abstract actions defined in the low-level state space, and the agent achieves the final goal by following a sequence of subgoals. There also exist methods that encode high-level actions as continuous vector representations. For example, \citet{Han2020_hierarchicalSelfOrga} use a multi-timescale RNN, where the high-level actions are encoded by the connections between the RNN layers, while others use latent vector representations to encode high-level behaviour\cite{Isele2016_FeatureZeroShotTransfer,Shankar2020_MotorPrimitives}. \begin{figure} \centering \includegraphics[width=.99\linewidth,trim={20pt 0pt 100pt 40pt},clip]{pics/OptionsSubgoals.pdf} {\sf \footnotesize ~~~~~~~ (a.) behaviour primitives ~~~~~~~~~~~~ (b.)
subgoals ~~~~~~~~~~~~ } \caption{Action abstraction through behaviour primitives (a) vs subgoals (b). With behaviour primitives, the agent determines the path to the final goal by selecting a sequence of high-level actions, but without specifying explicitly to which intermediate state each action leads. With subgoals, the agent determines the intermediate states as subgoals that lead to the final goal, but without specifying the actions to achieve the subgoals.} \label{fig:options_behaviour_primitives} \end{figure} Overall, in the hierarchical reinforcement learning literature, there exists a strong implicit focus on action abstraction. However, significant cognitive evidence demonstrates that representational \emph{state abstraction} is at least as important for realizing efficient problem-solving\cite{Butz2017_MindBeing,Lesort2018_state-rep-learning-control}. Yet, compared to action abstraction, there is considerably less research on state abstraction in hierarchical reinforcement learning. Cognitive state abstractions range from the mere preprocessing of sensor data, e.g. in the primary visual cortex, to the problem-driven grounding of signals in abstract compositional concept representations. Counterparts for some of these methods can also be found in computational architectures. For example, most current reinforcement learning approaches that process visual input use convolutional neural networks to preprocess the visual sensor data\cite{Jiang2019_HRL_Language_Abstraction,Lample2018_PlayingFPSGames,Oh2017_Zero-shotTaskGeneralizationDeepRL,Sohn2018_Zero-ShotHRL,Vezhnevets2017_Feudal,Wulfmeier2020_HierachicalCompositionalPolicies,Yang2018_HierarchicalControl}.
A problem with simple preprocessing is that it does not appreciate that different levels of inference require different levels of abstraction: For example, to transport an object from one location to another with a gripper, only the low-level motor control layer of a hierarchical approach needs to know the precise shape and weight of the object. A high-level planning or inference layer only needs to know abstract Boolean facts, such as whether the object to transport is initially within the gripper's reach or not, and whether the object is light enough to be carried. Therefore, we consider only those approaches as representational abstraction methods that involve layer-wise abstraction. Layer-wise state abstraction has been tackled, but most existing hierarchical reinforcement learning approaches perform the abstraction using manually defined abstraction functions\cite{Eppe2019_planning_rl,Kulkarni2016_HDQN,Ma2020_MultiAgentHRL,Toussaint2018}. There exist only a few exceptions where state abstractions are derived automatically in hierarchical reinforcement learning architectures\cite{Ghosh2019_ActionableRep,Vezhnevets2017_Feudal,Vezhnevets2020_OPRE}, e.g. through clustering\cite{Akrour2018_RL_StateAbstraction,Ghazanfari2020_AssociationRules}, with feature representations\cite{Schaul2013_Better_Generalization_with_Forecasts} or by factorisation \cite{Vezhnevets2020_OPRE}. Interestingly, non-hierarchical model-based reinforcement learning offers promising alternative prediction-based methods for state abstraction \cite{Hafner2020_Dreamer,Pathak2019_ICML_SelfSuperExploreDisa}, which show parallels to cognitive prediction-based abstraction theories. However, these have not yet been applied in a hierarchical architecture. As implied by the cognitive science literature, a key property of state and action representations is \emph{compositionality}.
For instance, a symbolic compositional action representation \concept{grasp(glass)} allows for modulating the action \emph{grasp} with the object \emph{glass}. Compositionality is not limited to symbolic expressions, but also applicable to distributed numerical expressions, such as vector representations. For example, a vector $v_1$ is a compositional representation if it is composed of other vectors, e.g., $v_2$ and $v_3$, and if there is a vector operation $\circ$ that implements interpretation rules, e.g., $v_1 = v_2 \circ v_3$. There is significant cognitive evidence that compositionality improves transfer learning\cite{Colas2020_LanguageGroundingRL}. This evidence is computationally plausible when considering that transfer learning relies on exploiting analogies between problems. The analogies between two or more problems in goal-conditioned reinforcement learning are defined by a multidimensional mapping between the initial state space, the goal space, and the action space of these problems. For example, given $n_a=4$ action types (e.g. \concept{``grasp'', ``push'', ``move'', ``release''}) and $n_o=4$ object types (e.g. \concept{``glass'', ``cup'', ``tea pot'', ``spoon''}), a non-compositional action representation requires one distinct symbol for each action-object combination, resulting in $n_o \cdot n_a/2$ possible action mappings. In contrast, an analogy mapping with compositional actions would require searching over possible mappings between action types and, separately, over mappings between objects. Hence, the size of the search space is $n_o/2 + n_a/2$. Evidence that a lower number of possible analogy mappings improves transferability is also provided by other cognitively inspired computational methods, such as concept blending\cite{Eppe2018_AIJ_Blending}. The work by \citet{Jiang2019_HRL_Language_Abstraction} provides further empirical evidence that compositional representations improve learning transfer.
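The counting argument above can be reproduced directly in a few lines of Python; the elementwise-addition composition is only a stand-in assumption for the interpretation rule $v_1 = v_2 \circ v_3$, and the counting follows the constants given in the text.

```python
def mapping_search_sizes(n_a, n_o):
    """Analogy-mapping search sizes following the counting in the text:
    non-compositional representations need one symbol per action-object
    combination, compositional ones map actions and objects separately."""
    non_compositional = n_o * n_a // 2
    compositional = n_o // 2 + n_a // 2
    return non_compositional, compositional

def compose(v2, v3):
    """A minimal distributed compositional operation: elementwise addition
    stands in for the interpretation rule v1 = v2 ∘ v3."""
    return tuple(a + b for a, b in zip(v2, v3))
```

With the paper's example of $n_a = n_o = 4$, `mapping_search_sizes(4, 4)` yields a search space of 8 mappings for the non-compositional case versus 4 for the compositional one.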
The authors use natural language as an inherently compositional representation to describe high-level actions in hierarchical reinforcement learning. In an ablation study, they find that the compositional natural language representation of actions improves transfer learning performance compared to non-compositional representations. Few other researchers use compositional representations in hierarchical reinforcement learning. \citet{Saxe2017_HRL_Multitask_LMDP} compose high-level actions from concurrent linearly solvable Markov decision processes (LMDPs) to guarantee optimal compositionality. Their method successfully executes compositional policies that it has never executed before, effectively performing zero-shot problem-solving. Zero-shot problem-solving has also been demonstrated by other related research that features compositional action representations, but that does not draw an explicit link between compositionality and zero-shot problem-solving\cite{Isele2016_FeatureZeroShotTransfer,Oh2017_Zero-shotTaskGeneralizationDeepRL,Sohn2018_Zero-ShotHRL}. Few symbolic compositional state representations exist, but these rely on manually defined abstraction functions\cite{Eppe2019_planning_rl,Kulkarni2016_HDQN,Lyu2019_SDRL,Toussaint2018} or they are very general and feature compositionality only as an optional property\cite{Rasmussen2017_NeuralHRL}. There also exist hierarchical reinforcement learning approaches where sub-symbolic compositional representations are learned\cite{Devin2017_Transfer_RL,Qureshi2020_CompAgnosticPol,Schaul2013_Better_Generalization_with_Forecasts,Vezhnevets2020_OPRE}. The compositionality of these representations is rather implicit and, with one exception \cite{Qureshi2020_CompAgnosticPol}, has not been investigated in the context of transfer learning. 
The exceptional approach by \citet{Qureshi2020_CompAgnosticPol} considers composable low-level policies and shows that compositionality significantly improves transfer between similar environments. \smallskip \noindent \textbf{Intrinsic motivation} is a useful method to stabilise reinforcement learning by supplementing sparse external rewards. It is also commonly used to incentivize exploration. Reinforcement learning typically models intrinsic motivation through intrinsic rewards. The most common way for hierarchical reinforcement learning to generate intrinsic rewards is to provide them when subgoals or subtasks are achieved\cite{Blaes2019_CWYC, Cabi2017_intentionalunintentional,Eppe2019_planning_rl,Haarnoja2018_LatentSpacePoliciesHRL,Jaderberg2017_unreal,Kulkarni2016_HDQN,Lyu2019_SDRL,Oh2017_Zero-shotTaskGeneralizationDeepRL,Qureshi2020_CompAgnosticPol,Rasmussen2017_NeuralHRL,Riedmiller2018_SAC-X,Roeder2020_CHAC,Saxe2017_HRL_Multitask_LMDP,Sohn2018_Zero-ShotHRL,Yamamoto2018_RL_Planning,Yang2018_PEORL}. Other approaches provide intrinsic motivation to identify a collection of behavioural primitives with a high diversity\cite{Blaes2019_CWYC, Eysenbach2019_DiversityFunction,Machado2017_PVF_Laplace_OptionDiscovery} and predictability\cite{Sharma2020_DADS}. This also includes the identification of primitives that are suited for re-composing high-level tasks\cite{Shankar2020_MotorPrimitives}. Another prominent intrinsic reward model that is commonly used in non-hierarchical reinforcement learning is based on surprise and curiosity\cite{Oudeyer2007,Pathak2017_forward_model_intrinsic,Schillaci2020_IntrinsicMotivation,Schmidhuber2010}. In these approaches, surprise is usually modelled as a function of the prediction error of a forward model, and curiosity is realised by providing intrinsic rewards if the agent is surprised. However, there is only limited work on modelling surprise and curiosity in hierarchical reinforcement learning.
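The surprise-based reward scheme just described can be sketched in a few lines; the squared-error surprise measure and the linear reward scaling are illustrative simplifications, not the formulation of any particular cited system.

```python
def surprise(forward_model, state, action, observed_next):
    """Surprise as the prediction error of a forward model (squared error here)."""
    predicted = forward_model(state, action)
    return sum((p - o) ** 2 for p, o in zip(predicted, observed_next))

def curiosity_reward(forward_model, state, action, observed_next, scale=1.0):
    """Intrinsic reward proportional to surprise: the agent is rewarded for
    visiting transitions that its forward model predicts poorly."""
    return scale * surprise(forward_model, state, action, observed_next)
```

The intrinsic reward vanishes exactly where the forward model is accurate, which is what drives the agent towards unfamiliar parts of the state space.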
Only \citet{Blaes2019_CWYC}, \citet{Colas2019_CURIOUS}, and \citet{Roeder2020_CHAC} use surprise in a hierarchical setting, showing that hierarchical curiosity leads to a significant improvement of the learning performance. These approaches train a high-level layer to determine explorative subgoals in hierarchical reinforcement learning. An alternative method to model curiosity is to perform hypothesis-testing for option discovery\cite{Chuck2020_HyPe}. \smallskip \noindent \textbf{Mental simulation} is a mechanism that allows an agent to anticipate the effects of its own and other actions. Therefore, it is a core mechanism to equip an intelligent agent with the ability to plan ahead. Cognitive theories about mental simulation involve predictive coding\cite{Huang2011} and mental motor imagery\cite{Jeannerod1995_MentalImageryMotor}, while computational approaches to mental simulation involve model-based reinforcement learning\cite{Sutton1990_Dyna}, action planning\cite{Nau2003_SHOP2}, or a combination of both \cite{Eppe2019_planning_rl,Hafez2020_dual-system}. However, even though there is significant cognitive evidence that mental simulation happens on multiple representational layers\cite{Butz2017_MindBeing,Frankland2020_SearchLoT}, there is a lack of approaches that use hierarchical mental simulation in hierarchical reinforcement learning. Only a few approaches that integrate planning with reinforcement learning build on mental simulation\cite{Eppe2019_planning_rl,Lyu2019_SDRL,Ma2020_MultiAgentHRL,Sharma2020_DADS,Yamamoto2018_RL_Planning,Yang2018_PEORL}, though the mental simulation mechanisms of these models are only implemented on the high-level planning layer. An exception is the work by \citet{Wu2019_ModelPrimitives}, who use mental simulation on the low-level layer to determine the sub-policies to be executed. Another exception is presented by \citet{Li2017_efficient_learning} who perform mental simulation for planning on multiple task layers. 
There exist several non-hierarchical model-based reinforcement learning methods\cite{Ha2018_WorldModels,Hafner2020_Dreamer,Sutton1990_Dyna} that function akin to mental motor imagery: In these approaches, the policy is trained by mentally simulating action execution instead of executing the actions in the real world. However, even though mental simulation is deemed to be a hierarchical process, we are not aware of any approach that performs hierarchical model-based reinforcement learning in the classical sense, where the policy is trained on the developing forward model. \subsection{Prerequisites for sensorimotor abstraction, intrinsic motivation and mental simulation} Reinforcement learning builds on policies that select actions based on the current observation and a goal or a reward function. Policies can be modelled directly, derived from value functions or combined with value functions. In all cases, a policy is an inverse model that predicts the actions to be executed in the current state to maximise reward or to achieve a goal state. In contrast, a forward model predicts a future world state based on the current state and a course of actions. Both kinds of models are critical prerequisites for the mechanisms that enable transfer learning, planning, and ultimately few-shot problem-solving. Our review shows that the vast majority of hierarchical reinforcement learning methods use inverse models for both the high-level and low-level layers. However, some approaches exist that use an inverse model only for the low-level layer\cite{Eppe2019_planning_rl,Lyu2019_SDRL,Ma2020_MultiAgentHRL,Sharma2020_DADS,Yamamoto2018_RL_Planning,Yang2018_PEORL}. These frameworks use a planning mechanism, driven by a forward model, to perform the high-level decision-making. Our review demonstrates that a forward model is required for several additional mechanisms that are necessary or at least highly beneficial for transfer learning, planning, and few-shot problem-solving. 
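The interplay between the two model types above can be made concrete with a tabular sketch in the spirit of Dyna\cite{Sutton1990_Dyna}: Q-values (from which an inverse model, i.e. a policy, is derived) are updated from transitions replayed out of a learned forward model rather than from real experience. The dictionary-based model and the simple replay schedule are our own illustrative assumptions.

```python
def dyna_updates(Q, model, actions, n_updates, alpha=0.5, gamma=0.9):
    """Dyna-style mental simulation: replay transitions stored in a learned
    model (state, action) -> (reward, next_state) to update Q-values without
    executing any action in the real environment."""
    transitions = list(model.items())
    for i in range(n_updates):
        (s, a), (r, s2) = transitions[i % len(transitions)]
        best_next = max(Q.get((s2, b), 0.0) for b in actions)
        td_target = r + gamma * best_next
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
    return Q
```

A hierarchical variant in this classical sense, where each layer's policy is trained on a developing forward model of that layer, is what the text identifies as missing.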
Some non-hierarchical approaches use a forward model to perform sensorimotor abstraction\cite{Hafner2020_Dreamer,Pathak2017_forward_model_intrinsic}. They achieve this with a self-supervised process where forward predictions are learned in a latent abstract space. However, we are not aware of any hierarchical method that exploits this mechanism. A forward model is also required to model curiosity as an intrinsic motivation mechanism. Cognitive science suggests that curiosity is one of the most important drives of human development, and it has been demonstrated to alleviate the sparse rewards problem in reinforcement learning\cite{Pathak2017_forward_model_intrinsic,Roeder2020_CHAC}. Curiosity is commonly modelled as a mechanism that rewards an agent for being surprised; surprise is formally a function of the error between the predicted dynamics of a system and the actual dynamics\cite{Friston2011}. To perform these predictions, a forward model is required. However, with few exceptions\cite{Colas2019_CURIOUS,Roeder2020_CHAC}, there is a lack of approaches that use a hierarchical forward model for generating hierarchical curiosity. \section{Introduction} Humans and several other higher level intelligent animal species have the ability to break down complex unknown problems into hierarchies of simpler previously learned sub-problems. This hierarchical approach allows them to solve previously unseen problems in a zero-shot manner, i.e., without any trial and error. For example, \autoref{fig:crow_probsolving} depicts how a New Caledonian crow solves a non-trivial food-access puzzle that consists of three causal steps: It first picks a stick, then uses the stick to access a stone, and then uses the stone to activate a mechanism that releases food \cite{Gruber2019_CrowMentalRep}. There exist numerous analogous experiments that attest similar capabilities to primates, octopuses, and, of course, humans \cite{Butz2017_MindBeing,Perkins1992}. 
This raises the question of how we can equip intelligent artificial agents and robots with similar problem-solving abilities. To answer this question, the involved computational mechanisms and algorithmic implementation options need to be identified. A very general computational framework for learning-based problem-solving is reinforcement learning (RL) \cite{Arulkumaran2017_DeepRL,Li2018_DeepRL_Overview,Sutton:2018}. Several studies suggest that RL is in many aspects biologically and cognitively plausible \cite{Neftci2019_RL_BioArtificial_NatureMI,Sutton:2018}. Existing RL-based methods are to some extent able to perform zero-shot problem-solving and transfer learning. However, this is currently only possible for minor variations of the same or a similar task \cite{Eppe2019_planning_rl} or in simple synthetic domains, such as a 2D gridworld \cite{Oh2017_Zero-shotTaskGeneralizationDeepRL,Sohn2018_Zero-ShotHRL}. A continuous-space problem-solving behaviour that is comparable with the crow's behaviour of \autoref{fig:crow_probsolving} has not yet been realised with any artificial system, based on RL or other approaches. Research in human and animal cognition strongly suggests that problem-solving and learning is hierarchical \cite{Botvinick2009_HierarchicalCognitionRL,Butz2017_MindBeing,Tomov2020}. We hypothesise that one reason why current machine learning systems fail is that the existing approaches underestimate the power of learning hierarchical abstractions. At the time of writing this article, we performed a comprehensive meta-search over RL review articles from 2015 to 2020 using the Microsoft Academic search engine. Our meta-survey has yielded the following results: The most cited review article on RL since 2015 \cite{Arulkumaran2017_DeepRL} dedicates 1/6th of a page out of 16 pages to hierarchical approaches. 
In the second most cited article \cite{Garcia2015_safeRLSurvey}, hierarchical RL is not considered at all, i.e., the word stem `hierarchic' does not appear anywhere in the text. The third most cited review article \cite{Li2018_DeepRL_Overview} dedicates 2/3rd of a page out of 85 total pages to hierarchical RL. From 37 RL reviews and surveys published since 2015, only two contain the word stem ``hierarchic'' in their abstract. The second edition of the popular RL book by \citet{Sutton:2018} only mentions hierarchies in the twenty-year-old options framework \cite{Sutton1999_options} on 2 pages in the final book chapter and briefly discusses automatic abstraction a few lines later in that chapter. It appears that researchers are struggling with automatically learning RL-suitable hierarchical structures. In this article, we address this gap by illuminating potential reasons for this struggle and by providing pointers to solutions. We show that most computational hierarchical RL approaches are model-free, whereas the results from our survey of biological cognitive models suggest that suitable predictive forward models are needed. Furthermore, we point out that existing hierarchical RL methods hardly address state abstraction and compositional representations. However, we also show that there exist exceptions where isolated relevant cognitive mechanisms, including forward models and compositional abstraction, have already been implemented, but not in combination with other important mechanisms. As a result, we conclude that the AI community already has most of the tools for building more intelligent agents at hand, but it currently lacks holistic efforts to integrate them appropriately.
\subsection{Representational state abstraction} Cognitive scientists often model abstract mental concepts, such as \emph{path} or \emph{container}, in terms of role-filler-based embodied semantic representations, such as semantic frames \cite{Barsalou1992} or image schemas \cite{Lakoff1988,Feldman2009_neural_ECG}. For example, a \concept{path} schema has a role \concept{trajector}, which is typically filled by an object or agent that moves along the path, and it has roles for \concept{start} and \concept{end}. In our previous work on human-robot dialog, we build on image schemas to perform semantic parsing \cite{Eppe2016_IROS_SemanticParsing,Eppe2016_GCAI_SemanticParsing} and show how embodied action-related representations facilitate semantic compositionality for human-robot interaction. Role-filler-based representations are compatible with action-related semantic representations used in Artificial Intelligence. For example, in action planning, actions are commonly represented by sets of pre- and postconditions (cf. our previous work on action theory \cite{Eppe2015_JAL,Eppe2015_JAL_ASP}): a \concept{move-object} action has the precondition that the agent that moves the object must hold the object, and it has the postcondition that the object is at the target location. In our previous work \cite{Eppe2016_IROS_SemanticParsing,Eppe2016_GCAI_SemanticParsing}, we have used semantic action templates that link this pre- and postcondition perspective with the role-filler semantics of a semantic language parsing process. For example, the \concept{move-object} action inherits structure from the compositional \concept{path} concept: the object to move fills the \concept{trajector} role of the concept, the current object location fills the \concept{start} role, and the target location fills the \concept{end} role.
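A minimal sketch of such a pre-/postcondition schema, with the role fillers of the \concept{path} concept passed as arguments, could look as follows; the dictionary-based state encoding and function names are our own illustrative assumptions.

```python
def move_object_applicable(state, obj, target):
    """Precondition of a 'move-object' schema: the agent must hold the object."""
    return state.get("holding") == obj

def move_object_apply(state, obj, target):
    """Postcondition: the object (filling the path schema's 'trajector' role)
    ends up at the target location (filling the path's 'end' role)."""
    assert move_object_applicable(state, obj, target)
    new_state = dict(state)
    new_state["at"] = {**state.get("at", {}), obj: target}
    return new_state
```

The schema is compositional in exactly the sense discussed above: the same pre-/postcondition template can be instantiated with any object and target filling its roles.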
The neurocognitive and psychological theories of event coding \cite{Hommel2015_TEC} and ideomotor theory \cite{Stock2004} are in line with the AI-based pre- and postcondition perspective on actions, as they consider associations between actions and effects at their core. However, these and other related theories do not provide a clear consensus on whether conceptual mental action representations are symbolic \cite{Barsalou1999} or distributed \cite{Hommel2015_TEC}. A unifying proposal of modern theories (e.g.~\cite{Hommel2010}) suggests that conceptual compositional representations are distributed, and that language serves as a symbolification mechanism for the underlying distributed representations. \emph{However, there exists no commonly agreed upon mathematical model of learning compositional action and state representations that is mechanistically compatible with the neurocognitive perspective of distributed representations that are potentially symbolifiable.} Humans abstract their sensorimotor data streams and map them to conceptual embodied representations \cite{Barsalou2008_GroundedCognition}. High-level cognitive representations and functions are algorithmically useful to arrange continuous low-level representations and functions into discrete representational spaces that are significantly smaller than continuous sensorimotor spaces. For example, on the lowest motor level, an \concept{open hand} action can be modelled as a sequence of continuous finger movements. The opening of the hand may be a semantic primitive within a high-level \concept{grasp} action that consists of \concept{[open hand, move hand to object, close hand]}. The grasp action itself may be a semantic primitive within another higher-level \concept{transport} behavior that consists of \concept{[grasp, move, release]}. This divide-and-conquer approach is also a prominent method to realize learning-based intelligent problem-solving behavior of robots and virtual agents.
This effectively reduces the branching factor of a problem-solving search process: for the high-level action selection, an agent only has to consider a small set of discrete high-level action primitives and not the large set of their possible low motor-level instantiations. This computational problem-solving perspective on sensorimotor abstraction is in line with \section{Results} \label{sec:results} \autoref{tab:hrl_properties} shows the results of our review on computational hierarchical reinforcement learning approaches, in alignment with our structuring of cognitive prerequisites and mechanisms for the problem-solving abilities of intelligent biological agents (see \autoref{fig:cog_prereq}). We summarise our main results as follows: \textbf{Most current few-shot problem-solving methods build on transfer learning but they do not leverage planning.} This result is interesting because any approach with a forward model could straightforwardly be extended to also consider planning\cite{Botvinick2014}, and planning can be leveraged for few-shot problem-solving\cite{Sharma2020_DADS}. Therefore, current model-based approaches do not exploit their full potential. \textbf{Current methods do not exploit the full potential of compositional abstract representations.} All hierarchical reinforcement learning methods inherently perform representational abstraction, e.g., through options or subgoals, but only a few methods consider compositionality. Exceptions include natural language-based representations of actions \cite{Jiang2019_HRL_Language_Abstraction} or symbolic logic-based compositional representations \cite{Oh2017_Zero-shotTaskGeneralizationDeepRL}.
None of these approaches ground these abstractions tightly to sensorimotor experiences. \textbf{State abstraction is significantly less researched than action abstraction.} Recent hierarchical actor-critic approaches \cite{Levy2019_Hierarchical,Roeder2020_CHAC} use the same state representation for each layer in the hierarchy, without performing any abstraction. There exist a few approaches that perform state abstraction\cite{Vezhnevets2017_Feudal}, and also some that involve compositional state representations\cite{Vezhnevets2020_OPRE}. However, most of these build on hand-crafted abstraction functions that are not learned \cite{Eppe2019_planning_rl}. \textbf{Curiosity and diversity, as intrinsic motivation methods, are underrepresented.} When assessing the intrinsic motivation mechanisms of the existing approaches, we distinguish between the three most frequently used kinds of intrinsic motivation: curiosity, diversity, and subgoal discovery. \autoref{tab:hrl_properties} indicates that curiosity and diversity are underrepresented intrinsic motivation methods, even though there is significant cognitive evidence that these mechanisms are critical for intelligent behaviour in humans \cite{Oudeyer2018_CompuCuriosityLearning}. Finally, \textbf{there is a lack of hierarchical mental simulation methods.} While numerous non-hierarchical methods show how mental simulation can improve sample efficiency\cite{Deisenroth2011,Ha2018_WorldModels}, our summary shows that reinforcement learning with hierarchical mental simulation is yet to be explored. As a result, inter-dependencies between sub-tasks are hard to detect.
\section{Introduction} Determining the computational complexity of the Graph Isomorphism Problem is a long-standing open question in theoretical computer science (see, e.g., \cite{Karp72}). The problem is easily seen to be contained in NP, but it is neither known to be in PTIME nor known to be NP-complete. In a breakthrough result, Babai \cite{Babai16} recently obtained a quasipolynomial-time algorithm for testing isomorphism of graphs (i.e., an algorithm running in time $n^{{\mathcal O}((\log n)^c)}$ where $n$ denotes the number of vertices of the input graphs, and $c$ is some constant), achieving the first improvement over the previous best algorithm running in time $n^{{\mathcal O}(\sqrt{n / \log n})}$ \cite{BabaiKL83} in over three decades. However, it remains wide open whether GI can be solved in polynomial time. In this work, we are concerned with the parameterized complexity of isomorphism testing. While polynomial-time isomorphism tests are known for a large variety of restricted graph classes (see, e.g., \cite{Bodlaender90,GroheM15,GroheS15,HopcroftT72,Luks82,Ponomarenko91}), for several important structural parameters such as maximum degree or the Hadwiger number\footnote{The Hadwiger number of a graph $G$ is the maximum number $h$ such that $K_h$ is a minor of $G$.}, it is still unknown whether isomorphism testing is fixed-parameter tractable (i.e., whether there is an isomorphism algorithm running in time $f(k)n^{{\mathcal O}(1)}$ where $k$ denotes the graph parameter in question, $n$ the number of vertices of the input graphs, and $f$ is some function). On the other hand, there has also been significant progress in recent years. In 2015, Lokshtanov et al.\ \cite{LokshtanovPPS17} obtained the first fpt isomorphism test parameterized by the tree-width $k$ of the input graph running in time $2^{{\mathcal O}(k^5 \log k)}n^5$. 
This algorithm was later improved by Grohe et al.\ \cite{GroheNSW20} to a running time of $2^{{\mathcal O}(k \cdot (\log k)^c)}n^3$ (for some constant $c$). In the same year, Kawarabayashi \cite{Kawarabayashi15} obtained the first fpt isomorphism test parameterized by the Euler genus $g$ of the input graph running in time $f(g)n$ for some function $f$. While Kawarabayashi's algorithm achieves optimal dependence on the number of vertices of the input graphs, it is also extremely complicated and it provides no explicit upper bound on the function $f$. Indeed, the algorithm spans over multiple papers \cite{Kawarabayashi15,KawarabayashiM08,KawarabayashiMNZ20} and builds on several deep structural results for graphs of bounded genus. In this work, we present an alternative isomorphism test for graphs of Euler genus $g$ running in time $2^{{\mathcal O}(g^4 \log g)}n^{{\mathcal O}(1)}$. In contrast to Kawarabayashi's algorithm, our algorithm does not require any deep graph-theoretic insights, but rather builds on an elegant combination of well-established and simple group-theoretic, combinatorial, and graph-theoretic ideas. In particular, this enables us to provide the first explicit upper bound on the dependence on $g$ for an fpt isomorphism test. Actually, the only property our algorithm exploits is that graphs of genus $g$ exclude $K_{3,h}$ as a minor for $h \geq 4g+3$ \cite{Ringel65}. In other words, our main result is an fpt isomorphism test for graphs excluding $K_{3,h}$ as a minor. \begin{theorem} The Graph Isomorphism Problem for graphs excluding $K_{3,h}$ as a minor can be solved in time $2^{{\mathcal O}(h^4 \log h)}n^{{\mathcal O}(1)}$. \end{theorem} For this class of graphs, the best existing algorithm runs in time $n^{{\mathcal O}((\log h)^c)}$ for some constant $c$ \cite{Neuen20}, and no fpt isomorphism test was known prior to this work. For the algorithm, we combine different approaches to the Graph Isomorphism Problem.
On a high-level, our algorithm follows a simple decomposition strategy which decomposes the input graph $G$ into pieces such that the interplay between the pieces is simple. Here, the main idea is to define the pieces in such a way that, after fixing a small number of vertices, the automorphism group of $G$ restricted to a piece $D \subseteq V(G)$ is similar to the automorphism groups of graphs of maximum degree $3$. This allows us to test isomorphism between the pieces using the group-theoretic graph isomorphism machinery dating back to Luks's polynomial-time isomorphism test for graphs of bounded maximum degree \cite{Luks82}. In order to capture the restrictions on the automorphism group, we introduce the notion of \emph{$(t,k)$-WL-bounded graphs} which generalize so-called $t$-CR-bounded graphs. The class of $t$-CR-bounded graphs was originally defined by Ponomarenko \cite{Ponomarenko89} and was recently rediscovered in \cite{Neuen20,GroheNW20,Neuen21} in a series of works eventually leading to an algorithm testing isomorphism of graphs excluding $K_h$ as a topological subgraph in time $n^{{\mathcal O}((\log h)^c)}$ (for some constant $c$). Intuitively speaking, a graph $G$ is \emph{$t$-CR-bounded} if an initially uniform vertex-coloring $\chi$ can be turned into a discrete coloring (i.e., a coloring where every vertex has its own color) by repeatedly (a) applying the standard Color Refinement algorithm, and (b) splitting all color classes of size at most $t$. We define \emph{$(t,k)$-WL-bounded} graphs in the same way, but replace the Color Refinement algorithm by the well-known Weisfeiler-Leman algorithm of dimension $k$ (see, e.g., \cite{CaiFI92,ImmermanL90}). 
Maybe surprisingly, this natural extension of $t$-CR-boundedness has not been considered so far in the literature, and we start by building a polynomial-time isomorphism test for such graphs using the group-theoretic methods developed by Luks \cite{Luks82} as well as a simple extension due to Miller \cite{Miller83b}. Actually, it turns out that isomorphism of $(t,k)$-WL-bounded graphs can even be tested in time $n^{{\mathcal O}(k \cdot (\log t)^c)}$ using recent extensions \cite{Neuen20} of Babai's quasipolynomial-time isomorphism test. However, since we only apply these methods for $t=k=2$, there is no need for our algorithm to rely on such sophisticated subroutines. Now, as the main structural insight, we prove that each $3$-connected graph $G$ that excludes $K_{3,h}$ as a minor admits (after fixing $3$ vertices) an isomorphism-invariant rooted tree decomposition $(T,\beta)$ such that the adhesion width (i.e., the maximum size of the intersection of two adjacent bags) is bounded by $h$. Additionally, each bag $\beta(t)$, $t \in V(T)$, can be equipped with a set $\gamma(t) \subseteq \beta(t)$ of size $|\gamma(t)| \leq h^4$ such that, after fixing all vertices in $\gamma(t)$, $G$ restricted to $\beta(t)$ is $(2,2)$-WL-bounded. Given such a decomposition, isomorphisms can be computed by a simple bottom-up dynamic programming strategy along the tree decomposition. For each bag, isomorphism is tested by first individualizing all vertices from $\gamma(t)$ at an additional factor of $|\gamma(t)|! = 2^{{\mathcal O}(h^4 \log h)}$ in the running time. Following the individualization of these vertices, our algorithm can then simply rely on a polynomial-time isomorphism test for $(2,2)$-WL-bounded graphs. Here, we incorporate the partial solutions computed in the subtree below the current bag via a simple gadget construction. To compute the decomposition $(T,\beta)$, we also build on the notion of $(2,2)$-WL-bounded graphs.
Given a set $X \subseteq V(G)$, we define the \emph{$(2,2)$-closure} to be the set $D = \cl_{2,2}^G(X)$ of all vertices appearing in a singleton color class after artificially individualizing all vertices from $X$, and performing the $(2,2)$-WL procedure. As one of the main technical contributions, we can show that the interplay between $D$ and its complement in $G$ is simple (assuming $G$ excludes $K_{3,h}$ as a minor). To be more precise, building on various properties of the $2$-dimensional Weisfeiler-Leman algorithm, we show that $|N_G(Z)| < h$ for every connected component $Z$ of $G - D$. This allows us to choose $D = \cl_{2,2}^G(X)$ as the root bag of $(T,\beta)$ for some carefully chosen set $X$, and obtain the decomposition $(T,\beta)$ by recursion. \medskip The remainder of this work is structured as follows. In the next section, we give the necessary preliminaries. In Section \ref{sec:t-k-wl} we introduce $(t,k)$-WL-bounded graphs and provide a polynomial-time isomorphism test for such graphs. Next, we give a more detailed overview on our fpt algorithm in Section \ref{sec:overview}. The tree decomposition is then computed in Sections \ref{sec:disjoint-subtrees} and \ref{sec:decomposition}. Finally, we assemble the main algorithm in Section \ref{sec:main-algorithm}. \section{Preliminaries} \label{sec:preliminaries} \subsection{Graphs} A \emph{graph} is a pair $G = (V(G),E(G))$ consisting of a \emph{vertex set} $V(G)$ and an \emph{edge set} $E(G)$. All graphs considered in this paper are finite and simple (i.e., they contain no loops or multiple edges). Moreover, unless explicitly stated otherwise, all graphs are undirected. For an undirected graph $G$ and $v,w \in V(G)$, we write $vw$ as a shorthand for $\{v,w\} \in E(G)$. The \emph{neighborhood} of a vertex $v \in V(G)$ is denoted by $N_G(v)$. The \emph{degree} of $v$, denoted by $\deg_G(v)$, is the number of edges incident with $v$, i.e., $\deg_G(v)=|N_G(v)|$. 
For $X \subseteq V(G)$, we define $N_G[X] \coloneqq X \cup \bigcup_{v \in X}N_G(v)$ and $N_G(X) \coloneqq N_G[X] \setminus X$. If the graph $G$ is clear from context, we usually omit the index and simply write $N(v)$, $\deg(v)$, $N[X]$ and $N(X)$. We write $K_{\ell,h}$ to denote the complete bipartite graph with $\ell$ vertices on the left side and $h$ vertices on the right side. A graph is \emph{regular} if every vertex has the same degree. A bipartite graph $G=(V_1,V_2,E)$ is called \emph{$(d_1,d_2)$-biregular} if all vertices $v_i \in V_i$ have degree $d_i$ for both $i \in \{1,2\}$. In this case $d_1 \cdot |V_1| = d_2 \cdot |V_2| = |E|$. By a double edge counting argument, for each subset $S \subseteq V_i$, $i\in\{1,2\}$, it holds that $|S| \cdot d_i \leq |N_G(S)| \cdot d_{3-i}$. A bipartite graph is \emph{biregular} if there are $d_1,d_2 \in {\mathbb N}$ such that $G$ is $(d_1,d_2)$-biregular. Each biregular graph satisfies the Hall condition, i.e., for all $S \subseteq V_1$ it holds that $|S| \leq |N_G(S)|$ (assuming $|V_1| \leq |V_2|$). Thus, by Hall's Marriage Theorem, each biregular graph contains a matching of size $\min(|V_1|,|V_2|)$. A \emph{path of length $k$} from $v$ to $w$ is a sequence of distinct vertices $v = u_0,u_1,\dots,u_k = w$ such that $u_{i-1}u_i \in E(G)$ for all $i \in [k] \coloneqq \{1,\dots,k\}$. The \emph{distance} between two vertices $v,w \in V(G)$, denoted by $\dist_G(v,w)$, is the length of a shortest path between $v$ and $w$. As before, we omit the index $G$ if it is clear from context. For two sets $A,B\subseteq V(G)$, we define $E_G(A,B) \coloneqq \{vw\in E(G)\mid v\in A,w\in B\}$. Also, $G[A,B]$ denotes the graph with vertex set $A\cup B$ and edge set $E_G(A,B)$. For $A \subseteq V(G)$, we denote by $G[A] \coloneqq G[A,A]$ the induced subgraph on $A$. Moreover, $G - A$ denotes the subgraph induced by the complement of $A$, that is, the graph $G - A \coloneqq G[V(G) \setminus A]$.
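As an aside, the matching guaranteed by Hall's Marriage Theorem for biregular graphs can be computed by a standard augmenting-path search. The following sketch (in Python; all names are ours and serve illustration only) finds a maximum matching and confirms the bound $\min(|V_1|,|V_2|)$ on a small biregular example:

```python
def maximum_matching(left, adj):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).

    left: the vertices of one side; adj[u] lists the neighbours of u
    on the other side.  Returns a dict mapping right-side vertices to
    their matched partner on the left.
    """
    match = {}  # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is unmatched, or its current partner can be rerouted
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    for u in left:
        augment(u, set())
    return match

# K_{2,3} is (3,2)-biregular, so Hall's condition holds and a
# matching of size min(|V_1|, |V_2|) = 2 must exist.
adj = {"a": ["x", "y", "z"], "b": ["x", "y", "z"]}
assert len(maximum_matching(["a", "b"], adj)) == 2
```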
A graph $H$ is a \emph{subgraph} of $G$, denoted by $H \subseteq G$, if $V(H) \subseteq V(G)$ and $E(H) \subseteq E(G)$. A graph $H$ is a \emph{minor} of $G$ if $H$ can be obtained from $G$ by deleting vertices and edges, as well as contracting edges. The graph $G$ \emph{excludes $H$ as a minor} if it does not have a minor isomorphic to $H$. A \emph{tree decomposition} of a graph $G$ is a pair $(T,\beta)$ where $T$ is a tree and $\beta\colon V(T) \rightarrow 2^{V(G)}$ ($2^{V(G)}$ denotes the power set of $V(G)$) such that \begin{enumerate}[label = (T.\arabic*)] \item for every $vw \in E(G)$ there is some $t \in V(T)$ such that $v,w \in \beta(t)$, and \item for every $v \in V(G)$ the set $\{t \in V(T) \mid v \in \beta(t)\}$ is non-empty and connected in $T$. \end{enumerate} The \emph{width} of the decomposition $(T,\beta)$ is $\max_{t \in V(T)} |\beta(t)| - 1$. Also, the \emph{adhesion width} of $(T,\beta)$ is $\max_{st \in E(T)} |\beta(t) \cap \beta(s)|$. An \emph{isomorphism} from $G$ to a graph $H$ is a bijection $\varphi\colon V(G) \rightarrow V(H)$ that respects the edge relation, that is, for all~$v,w \in V(G)$, it holds that~$vw \in E(G)$ if and only if $\varphi(v)\varphi(w) \in E(H)$. Two graphs $G$ and $H$ are \emph{isomorphic}, written $G \cong H$, if there is an isomorphism from~$G$ to~$H$. We write $\varphi\colon G\cong H$ to denote that $\varphi$ is an isomorphism from $G$ to $H$. Also, $\Iso(G,H)$ denotes the set of all isomorphisms from $G$ to $H$. The automorphism group of $G$ is $\Aut(G) \coloneqq \Iso(G,G)$. Observe that, if $\Iso(G,H) \neq \emptyset$, it holds that $\Iso(G,H) = \Aut(G)\varphi \coloneqq \{\gamma\varphi \mid \gamma \in \Aut(G)\}$ for every isomorphism $\varphi \in \Iso(G,H)$. A \emph{vertex-colored graph} is a tuple $(G,\chi_V)$ where $G$ is a graph and $\chi_V\colon V(G) \rightarrow C$ is a mapping into some set $C$ of colors, called \emph{vertex-coloring}. 
Similarly, an \emph{arc-colored graph} is a tuple $(G,\chi_E)$, where $G$ is a graph and $\chi_E\colon\{(u,v) \mid \{u,v\} \in E(G)\} \rightarrow C$ is a mapping into some color set $C$, called \emph{arc-coloring}. Observe that colors are assigned to directed edges, i.e., the directed edge $(v,w)$ may obtain a different color than $(w,v)$. We also consider vertex- and arc-colored graphs $(G,\chi_V,\chi_E)$ where $\chi_V$ is a vertex-coloring and $\chi_E$ is an arc-coloring. Typically, $C$ is chosen to be an initial segment $[n]$ of the natural numbers. To be more precise, we generally assume that there is a linear order on the set of all potential colors which, for example, allows us to identify a minimal color appearing in a graph in a unique way. Isomorphisms between vertex- and arc-colored graphs have to respect the colors of the vertices and arcs. \subsection{Weisfeiler-Leman Algorithm} The Weisfeiler-Leman algorithm, originally introduced by Weisfeiler and Leman in its $2$-dimensional version \cite{WeisfeilerL68}, forms one of the most fundamental subroutines in the context of isomorphism testing. Let~$\chi_1,\chi_2\colon V^k \rightarrow C$ be colorings of the~$k$-tuples of vertices of~$G$, where~$C$ is a finite set of colors. We say $\chi_1$ \emph{refines} $\chi_2$, denoted $\chi_1 \preceq \chi_2$, if $\chi_1(\bar v) = \chi_1(\bar w)$ implies $\chi_2(\bar v) = \chi_2(\bar w)$ for all $\bar v,\bar w \in V^{k}$. The colorings $\chi_1$ and $\chi_2$ are \emph{equivalent}, denoted $\chi_1 \equiv \chi_2$, if $\chi_1 \preceq \chi_2$ and $\chi_2 \preceq \chi_1$. We describe the \emph{$k$-dimensional Weisfeiler-Leman algorithm} ($k$-WL) for all $k \geq 1$. For an input graph $G$ let $\WLit{k}{0}{G}\colon (V(G))^{k} \rightarrow C$ be the coloring where each tuple is colored with the isomorphism type of its underlying ordered subgraph. 
More precisely, $\WLit{k}{0}{G}(v_1,\dots,v_k) = \WLit{k}{0}{G}(v_1',\dots,v_k')$ if and only if, for all $i,j \in [k]$, it holds that $v_i = v_j \Leftrightarrow v_i'= v_j'$ and $v_iv_j \in E(G) \Leftrightarrow v_i'v_j' \in E(G)$. If the graph is equipped with a coloring, the initial coloring $\WLit{k}{0}{G}$ also takes the input coloring into account. More precisely, for a vertex-coloring $\chi_V$, it additionally holds that $\chi_V(v_i) = \chi_V(v_i')$ for all $i \in [k]$. For an arc-coloring $\chi_E$, it is the case that $\chi_E(v_i,v_j) = \chi_E(v_i',v_j')$ for all $i,j \in [k]$ such that $v_iv_j \in E(G)$. We then recursively define the coloring $\WLit{k}{i}{G}$ obtained after $i$ rounds of the algorithm. For $k \geq 2$ and $\bar v = (v_1,\dots,v_k) \in (V(G))^k$ let $\WLit{k}{i+1}{G}(\bar v) \coloneqq \Big(\WLit{k}{i}{G}(\bar v), {\mathcal M}_i(\bar v)\Big)$ where \[{\mathcal M}_i(\bar v) \coloneqq \Big\{\!\!\Big\{\big(\WLit{k}{i}{G}(\bar v[w/1]),\dots,\WLit{k}{i}{G}(\bar v[w/k])\big) \;\Big\vert\; w \in V(G) \Big\}\!\!\Big\}\] and $\bar v[w/j] \coloneqq (v_1,\dots,v_{j-1},w,v_{j+1},\dots,v_k)$ is the tuple obtained from $\bar v$ by replacing the $j$-th entry by $w$ (and $\{\!\{\dots\}\!\}$ denotes a multiset). For $k = 1$ the definition is similar, but we only iterate over neighbors of $v$, i.e., $\WLit{1}{i+1}{G}(v) \coloneqq \Big(\WLit{1}{i}{G}(v), {\mathcal M}_i(v)\Big)$ where \[{\mathcal M}_i(v) \coloneqq \Big\{\!\!\Big\{\WLit{1}{i}{G}(w) \;\Big\vert\; w \in N_G(v) \Big\}\!\!\Big\}.\] By definition, $\WLit{k}{i+1}{G} \preceq \WLit{k}{i}{G}$ for all $i \geq 0$. Hence, there is a minimal~$i_\infty$ such that $\WLit{k}{i_{\infty}}{G} \equiv \WLit{k}{i_{\infty}+1}{G}$ and for this $i_\infty$ the coloring $\WL{k}{G} \coloneqq \WLit{k}{i_\infty}{G}$ is the \emph{$k$-stable} coloring of $G$. The $k$-dimensional Weisfeiler-Leman algorithm takes as input a (vertex- or arc-)colored graph $G$ and returns (a coloring that is equivalent to) $\WL{k}{G}$.
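For $k = 1$, the refinement above is precisely the classical Color Refinement procedure. A minimal sketch of this fixed-point computation (in Python; the function and variable names are ours; colors are renamed to integers after every round, which leaves the induced partition unchanged) could look as follows:

```python
def color_refinement(adj, coloring=None):
    """Iterate 1-WL until the vertex coloring is stable.

    adj: dict mapping each vertex to a list of neighbours.
    coloring: optional initial vertex coloring; defaults to uniform.
    """
    col = dict(coloring) if coloring else {v: 0 for v in adj}
    while True:
        # new color = old color together with the multiset of neighbour colors
        signatures = {
            v: (col[v], tuple(sorted(col[w] for w in adj[v])))
            for v in adj
        }
        rename = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        new_col = {v: rename[signatures[v]] for v in adj}
        # refinement only ever splits classes, so an unchanged number
        # of classes means the partition is stable
        if len(set(new_col.values())) == len(set(col.values())):
            return col
        col = new_col

# On a path with 4 vertices, the two endpoints end up in one color
# class and the two inner vertices in another.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
stable = color_refinement(path)
assert stable[0] == stable[3] and stable[1] == stable[2]
assert stable[0] != stable[1]
```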
The algorithm can be implemented in time ${\mathcal O}(n^{k+1}\log n)$ (see \cite{ImmermanL90}). \subsection{Group Theory} \label{sec:group-theory} We introduce the group-theoretic notions required in this work. For a general background on group theory we refer to \cite{Rotman99}, whereas background on permutation groups can be found in \cite{DixonM96}. \subparagraph{Permutation groups} A \emph{permutation group} acting on a set $\Omega$ is a subgroup $\Gamma \leq \Sym(\Omega)$ of the symmetric group. The size of the permutation domain $\Omega$ is called the \emph{degree} of $\Gamma$. If $\Omega = [n]$, then we also write $S_n$ instead of $\Sym(\Omega)$. For $\gamma \in \Gamma$ and $\alpha \in \Omega$ we denote by $\alpha^{\gamma}$ the image of $\alpha$ under the permutation $\gamma$. For $A \subseteq \Omega$ and $\gamma \in \Gamma$ let $A^{\gamma} \coloneqq \{\alpha^{\gamma} \mid \alpha \in A\}$. The set $A$ is \emph{$\Gamma$-invariant} if $A^{\gamma} = A$ for all $\gamma \in \Gamma$. For a partition ${\mathcal P}$ of $\Omega$ let ${\mathcal P}^\gamma \coloneqq \{A^\gamma \mid A \in {\mathcal P}\}$. Observe that ${\mathcal P}^\gamma$ is again a partition of $\Omega$. We say ${\mathcal P}$ is \emph{$\Gamma$-invariant} if ${\mathcal P}^{\gamma} = {\mathcal P}$ for all $\gamma \in \Gamma$. For $A \subseteq \Omega$ and a bijection $\theta\colon \Omega \rightarrow \Omega'$ we denote by $\theta[A]$ the restriction of $\theta$ to the domain $A$. For a $\Gamma$-invariant set $A \subseteq \Omega$, we denote by $\Gamma[A] \coloneqq \{\gamma[A] \mid \gamma \in \Gamma\}$ the induced action of $\Gamma$ on $A$, i.e., the group obtained from $\Gamma$ by restricting all permutations to $A$. More generally, for every set $\Lambda$ of bijections with domain $\Omega$, we define $\Lambda[A] \coloneqq \{\theta[A] \mid \theta \in \Lambda\}$.
Similarly, for a partition ${\mathcal P}$ of $\Omega$, we denote by $\theta[{\mathcal P}]\colon {\mathcal P} \rightarrow {\mathcal P}'$ the mapping defined via $\theta[{\mathcal P}](A) \coloneqq \{\theta(\alpha) \mid \alpha \in A\}$ for all $A \in {\mathcal P}$, where ${\mathcal P}' \coloneqq \{\theta[{\mathcal P}](A) \mid A \in {\mathcal P}\}$ is the corresponding partition of $\Omega'$. As before, $\Lambda[{\mathcal P}] \coloneqq \{\theta[{\mathcal P}] \mid \theta \in \Lambda\}$. \subparagraph{Algorithms for permutation groups} Next, let us review some basic facts about algorithms for permutation groups. More details can be found in \cite{Seress03}. In order to perform computational tasks for permutation groups efficiently, the groups are represented by generating sets of small size. Indeed, most algorithms are based on so-called strong generating sets, which can be chosen of size quadratic in the size of the permutation domain of the group and can be computed in polynomial time given an arbitrary generating set (see, e.g., \cite{Seress03}). \begin{theorem}[cf.\ \cite{Seress03}] \label{thm:permutation-group-library} Let $\Gamma \leq \Sym(\Omega)$ and let $S$ be a generating set for $\Gamma$. Then the following tasks can be performed in time polynomial in $|\Omega|$ and $|S|$: \begin{enumerate} \item compute the order of $\Gamma$, \item given $\gamma \in \Sym(\Omega)$, test whether $\gamma \in \Gamma$, \item compute the orbits of $\Gamma$, and \item given $A \subseteq \Omega$, compute a generating set for the pointwise stabilizer $\Gamma_{(A)} \coloneqq \{\gamma \in \Gamma \mid \alpha^{\gamma} = \alpha \text{ for all } \alpha \in A\}$. \end{enumerate} \end{theorem} \subparagraph{Groups with restricted composition factors} In this work, we shall be interested in a particular subclass of permutation groups, namely groups with restricted composition factors. Let $\Gamma$ be a group. A \emph{subnormal series} is a sequence of subgroups $\Gamma = \Gamma_0 \geq \Gamma_1 \geq \dots \geq \Gamma_k = \{\id\}$ such that $\Gamma_i$ is a normal subgroup of $\Gamma_{i-1}$ for all $i \in [k]$. The length of the series is $k$ and the groups $\Gamma_{i-1} / \Gamma_{i}$ are the factor groups of the series, $i \in [k]$.
A \emph{composition series} is a strictly decreasing subnormal series of maximal length. For every finite group $\Gamma$ all composition series have the same family (considered as a multiset) of factor groups (cf.\ \cite{Rotman99}). A \emph{composition factor} of a finite group $\Gamma$ is a factor group of a composition series of $\Gamma$. \begin{definition} For $d \geq 2$ let $\mgamma_d$ denote the class of all groups $\Gamma$ for which every composition factor of $\Gamma$ is isomorphic to a subgroup of $S_d$. A group $\Gamma$ is a \emph{$\mgamma_d$-group} if it is contained in the class $\mgamma_d$. \end{definition} Let us point out that there are two similar classes of groups usually referred to as $\Gamma_d$ in the literature. The first is the class denoted by $\mgamma_d$ here, originally introduced by Luks \cite{Luks82}, while the second one, for example used in \cite{BabaiCP82}, in particular allows composition factors that are simple groups of Lie type of bounded dimension. \subparagraph{Group-Theoretic Tools for Isomorphism Testing} In this work, the central group-theoretic subroutine is an isomorphism test for hypergraphs where the input group is a $\mgamma_d$-group. Two hypergraphs ${\mathcal H}_1 = (V_1,{\mathcal E}_1)$ and ${\mathcal H}_2 = (V_2,{\mathcal E}_2)$ are isomorphic if there is a bijection $\varphi\colon V_1 \rightarrow V_2$ such that $E \in {\mathcal E}_1$ if and only if $E^{\varphi} \in {\mathcal E}_2$ for all $E \in 2^{V_1}$ (where $E^\varphi \coloneqq \{\varphi(v) \mid v \in E\}$ and $2^{V_1}$ denotes the power set of $V_1$). We write $\varphi\colon {\mathcal H}_1 \cong {\mathcal H}_2$ to denote that $\varphi$ is an isomorphism from ${\mathcal H}_1$ to ${\mathcal H}_2$. Consistent with previous notation, we denote by $\Iso({\mathcal H}_1,{\mathcal H}_2)$ the set of isomorphisms from ${\mathcal H}_1$ to ${\mathcal H}_2$.
More generally, for $\Gamma \leq \Sym(V_1)$ and a bijection $\theta\colon V_1 \rightarrow V_2$, we define \[\Iso_{\Gamma\theta}({\mathcal H}_1,{\mathcal H}_2) \coloneqq \{\varphi \in \Gamma\theta \mid \varphi\colon {\mathcal H}_1 \cong {\mathcal H}_2\}.\] In this work, we define the Hypergraph Isomorphism Problem to take as input two hypergraphs ${\mathcal H}_1 = (V_1,{\mathcal E}_1)$ and ${\mathcal H}_2 = (V_2,{\mathcal E}_2)$, a group $\Gamma \leq \Sym(V_1)$ and a bijection $\theta\colon V_1 \rightarrow V_2$, and the goal is to compute a representation of $\Iso_{\Gamma\theta}({\mathcal H}_1,{\mathcal H}_2)$. The following algorithm forms a crucial subroutine. \begin{theorem}[Miller \cite{Miller83b}] \label{thm:hypergraph-isomorphism-gamma-d} Let ${\mathcal H}_1 = (V_1,{\mathcal E}_1)$ and ${\mathcal H}_2 = (V_2,{\mathcal E}_2)$ be two hypergraphs and let $\Gamma \leq \Sym(V_1)$ be a $\mgamma_d$-group and $\theta\colon V_1 \rightarrow V_2$ a bijection. Then $\Iso_{\Gamma\theta}({\mathcal H}_1,{\mathcal H}_2)$ can be computed in time $(n+m)^{{\mathcal O}(d)}$ where $n \coloneqq |V_1|$ and $m \coloneqq |{\mathcal E}_1|$. \end{theorem} \begin{theorem}[Neuen \cite{Neuen20}] \label{thm:hypergraph-isomorphism-gamma-d-fast} Let ${\mathcal H}_1 = (V_1,{\mathcal E}_1)$ and ${\mathcal H}_2 = (V_2,{\mathcal E}_2)$ be two hypergraphs and let $\Gamma \leq \Sym(V_1)$ be a $\mgamma_d$-group and $\theta\colon V_1 \rightarrow V_2$ a bijection. Then $\Iso_{\Gamma\theta}({\mathcal H}_1,{\mathcal H}_2)$ can be computed in time $(n+m)^{{\mathcal O}((\log d)^c)}$ for some constant $c$ where $n \coloneqq |V_1|$ and $m \coloneqq |{\mathcal E}_1|$. \end{theorem} Observe that both algorithms given by the two theorems tackle the same problem. The second algorithm is asymptotically much faster, but it is also much more complicated and the constant factors in the exponent of the running time are likely to be much higher. 
Since this paper only applies either theorem for $d=2$, it seems to be preferable to use the first algorithm. Indeed, the first result is a simple extension of Luks's well-known isomorphism test for bounded-degree graphs \cite{Luks82}, and thus the underlying algorithm is fairly simple. For all these reasons, we mostly build on Theorem \ref{thm:hypergraph-isomorphism-gamma-d}. However, for future applications of the techniques presented in this work, it might be necessary to build on Theorem \ref{thm:hypergraph-isomorphism-gamma-d-fast} to benefit from the improved running time bound. For this reason, we shall provide variants of our results building on Theorem \ref{thm:hypergraph-isomorphism-gamma-d-fast} wherever appropriate. For the sake of completeness, we remark that there is another algorithm tackling the problem covered by both theorems running in time $n^{{\mathcal O}(d)}m^{{\mathcal O}(1)}$ \cite{SchweitzerW19}. However, this variant is not beneficial to us since $m = {\mathcal O}(n)$ for all applications considered in this paper. \section{Allowing Weisfeiler and Leman to Split Small Color Classes} \label{sec:t-k-wl} In this section, we introduce the concept of \emph{$(t,k)$-WL-bounded graphs} and provide a polynomial-time isomorphism test for such graphs for all constant values of $t$ and $k$. The final fpt isomorphism test for graphs excluding $K_{3,h}$ as a minor builds on this subroutine for $t = k = 2$. The concept of $(t,k)$-WL-bounded graphs is a natural extension of $t$-CR-bounded graphs which were already introduced by Ponomarenko in the late 1980s \cite{Ponomarenko89} and which were recently rediscovered in \cite{Neuen20,GroheNW20,Neuen21}.
Intuitively speaking, a graph $G$ is \emph{$t$-CR-bounded}, $t \in {\mathbb N}$, if an initially uniform vertex-coloring $\chi$ (i.e., all vertices receive the same color) can be turned into the discrete coloring (i.e., each vertex has its own color) by repeatedly \begin{itemize} \item performing the Color Refinement algorithm (expressed by the letters `CR'), and \item taking a color class $[v]_\chi \coloneqq \{w \in V(G) \mid \chi(w) = \chi(v)\}$ of size $|[v]_\chi| \leq t$ and assigning each vertex from the class its own color. \end{itemize} A very natural extension of this idea is to replace the Color Refinement algorithm by the Weisfeiler-Leman algorithm for some fixed dimension $k$. This leads us to the notion of \emph{$(t,k)$-WL-bounded graphs} (the letters `CR' are replaced by `$k$-WL'). In particular, $(t,1)$-WL-bounded graphs are exactly the $t$-CR-bounded graphs. Maybe surprisingly, it seems that this simple extension has not been considered so far in the literature. \begin{definition} \label{def:t-cr-bounded} A vertex- and arc-colored graph $G = (V,E,\chi_V,\chi_E)$ is \emph{$(t,k)$-WL-bounded} if the sequence $(\chi_i)_{i \geq 0}$ reaches a discrete coloring where $\chi_0 \coloneqq \chi_V$, \[\chi_{2i+1}(v) \coloneqq \WL{k}{V,E,\chi_{2i},\chi_E}(v,\dots,v)\] and \[\chi_{2i+2}(v) \coloneqq \begin{cases} (v,1) & \text{if } |[v]_{\chi_{2i+1}}| \leq t\\ (\chi_{2i+1}(v),0) & \text{otherwise} \end{cases}\] for all $i \geq 0$. Also, for the minimal $i_\infty \geq 0$ such that $\chi_{i_\infty} \equiv \chi_{i_\infty+1}$, we refer to $\chi_{i_\infty}$ as the \emph{$(t,k)$-WL-stable} coloring of $G$ and denote it by $\tWL{t}{k}{G}$. \end{definition} At this point, the reader may wonder why $(\chi_i)_{i \geq 0}$ is chosen as a sequence of vertex-colorings and not a sequence of colorings of $k$-tuples of vertices (since $k$-WL also colors $k$-tuples of vertices). While such a variant certainly makes sense, it still leads to the same class of graphs.
Let $G$ be a graph and let $\chi \coloneqq \WL{k}{G}$. The main insight is that, if there is some color $c \in \im(\chi)$ for which $|\chi^{-1}(c)| \leq t$, then there is also a color $c' \in \im(\chi)$ for which $|\chi^{-1}(c')| \leq t$ and $\chi^{-1}(c') \subseteq \{(v,\dots,v) \mid v \in V(G)\}$. In other words, one cannot achieve any additional splitting of color classes by also considering non-diagonal color classes. We also need to extend several notions related to $t$-CR-bounded graphs. Let $G$ be a graph and let $X \subseteq V(G)$ be a set of vertices. Let $\chi_V^*\colon V(G) \rightarrow C$ be the vertex-coloring obtained from individualizing all vertices in the set $X$, i.e., $\chi_V^*(v) \coloneqq (v,1)$ for $v \in X$ and $\chi_V^*(v) \coloneqq (0,0)$ for $v \in V(G) \setminus X$. Let $\chi \coloneqq \tWL{t}{k}{G,\chi_V^*}$ denote the $(t,k)$-WL-stable coloring with respect to the input graph $(G,\chi_V^*)$. We define the \emph{$(t,k)$-closure} of the set $X$ (with respect to $G$) to be the set \[\cl_{t,k}^G(X) \coloneqq \left\{v \in V(G) \mid |[v]_{\chi}| = 1\right\}.\] Observe that $X \subseteq \cl_{t,k}^G(X)$. For $v_1,\dots,v_\ell \in V(G)$ we also use $\cl_{t,k}^G(v_1,\dots,v_\ell)$ as a shorthand for $\cl_{t,k}^G(\{v_1,\dots,v_\ell\})$. If the input graph is equipped with a vertex- or arc-coloring, all definitions are extended in the natural way. For the remainder of this section, we concern ourselves with designing a polynomial-time isomorphism test for $(t,k)$-WL-bounded graphs. Actually, we shall prove a slightly stronger result which turns out to be useful later on. The main idea for the algorithm is to build a reduction to the isomorphism problem for $(t,1)$-WL-bounded graphs for which such results are already known \cite{Ponomarenko89,Neuen20}. Indeed, isomorphism of $(t,1)$-WL-bounded graphs can be reduced to the Hypergraph Isomorphism Problem for $\mgamma_t$-groups.
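To make the preceding definitions concrete, here is a small sketch of the $(t,1)$-case, i.e., the $t$-CR-bounded refinement and the associated closure $\cl_{t,1}^G(X)$ (in Python; the function names are ours, and the canonical renaming of colors after a splitting step is handled naively via string representations rather than in an isomorphism-invariant way):

```python
def refine(adj, col):
    """Run Color Refinement (1-WL) to its stable partition."""
    while True:
        sig = {v: (col[v], tuple(sorted(col[w] for w in adj[v]))) for v in adj}
        rename = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: rename[sig[v]] for v in adj}
        # refinement only splits classes: equal class counts mean stability
        if len(set(new.values())) == len(set(col.values())):
            return col
        col = new

def t1_closure(adj, X, t):
    """Sketch of cl_{t,1}: individualize X, then alternately refine and
    split color classes of size at most t; return the set of vertices
    that end up in singleton color classes."""
    col = {v: 0 for v in adj}
    for i, v in enumerate(sorted(X), start=1):
        col[v] = i                      # individualize the vertices of X
    while True:
        col = refine(adj, col)
        classes = {}
        for v, c in col.items():
            classes.setdefault(c, []).append(v)
        small = [vs for vs in classes.values() if 1 < len(vs) <= t]
        if not small:
            return {v for v in adj if len(classes[col[v]]) == 1}
        for vs in small:
            for v in vs:
                col[v] = ("split", v)   # give each vertex its own color
        rename = {c: i for i, c in enumerate(sorted(set(col.values()), key=repr))}
        col = {v: rename[col[v]] for v in col}

# On a 6-cycle with one individualized vertex, plain refinement leaves
# the classes {1,5} and {2,4} unsplit, so cl_{1,1}(0) = {0,3}; allowing
# splits of classes of size 2 makes the coloring discrete.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
assert t1_closure(cycle, {0}, 1) == {0, 3}
assert t1_closure(cycle, {0}, 2) == set(range(6))
```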
Since one may be interested in using different subroutines for solving the Hypergraph Isomorphism Problem for $\mgamma_t$-groups (see the discussion at the end of Section \ref{sec:group-theory}), most results are stated via an oracle for the Hypergraph Isomorphism Problem on $\mgamma_t$-groups. \begin{theorem}[\mbox{\cite[Lemma 20]{Neuen20}}] \label{thm:isomorphism-t-cr-bounded} Let $G_1,G_2$ be two vertex- and arc-colored graphs that are $(t,1)$-WL-bounded. Then there is an algorithm computing $\Iso(G_1,G_2)$ in polynomial time using oracle access to the Hypergraph Isomorphism Problem for $\mgamma_t$-groups. Moreover, $\Aut(G_1) \in \mgamma_t$. \end{theorem} \begin{theorem} \label{thm:bounding-group-t-k-wl} Let $G_1,G_2$ be two vertex- and arc-colored graphs and let $\chi_i \coloneqq \tWL{t}{k}{G_i}$. Also let ${\mathcal P}_i = \{[v]_{\chi_i} \mid v \in V(G_i)\}$ be the partition into color classes of $\chi_i$. Then ${\mathcal P}_1^\varphi = {\mathcal P}_2$ for all $\varphi \in \Iso(G_1,G_2)$. Moreover, using oracle access to the Hypergraph Isomorphism Problem for $\mgamma_t$-groups, in time $n^{{\mathcal O}(k)}$ one can compute a $\mgamma_t$-group $\Gamma \leq \Sym({\mathcal P}_1)$ and a bijection $\theta \colon {\mathcal P}_1 \rightarrow {\mathcal P}_2$ such that \[\Iso(G_1,G_2)[{\mathcal P}_1] \subseteq \Gamma\theta.\] In particular, $\Aut(G_1)[{\mathcal P}_1] \in \mgamma_t$. \end{theorem} \begin{proof} Let $G_1,G_2$ be two vertex- and arc-colored graphs. The algorithm first inductively computes the following sequence of colorings $\chi_{j,r}^i \colon (V(G_i))^k \rightarrow C$ for all $j \in [j_\infty]$ and $r \in [r_\infty(j)]$ (where $C$ is a suitable set of colors). We set \[\chi_{1,1}^i \coloneqq \WL{k}{G_i}\] and set $r_\infty(1) \coloneqq 1$. Now, suppose that $j > 1$.
If $j$ is even, then we define \[\chi_{j,1}^i(v_1,\dots,v_k) \coloneqq \begin{cases} (v_1,1) & \text{if } v_1 = v_2 = \dots = v_k \text{ and }\\ &|C(v_1;\chi_{j-1,r_\infty(j-1)}^i)| \leq t\\ (\chi_{j-1,r_\infty(j-1)}^i(v_1,\dots,v_k),0) & \text{otherwise} \end{cases}\] where \[C(v;\chi_{j-1,r_\infty(j-1)}^i) \coloneqq \{w \in V(G_i) \mid \chi_{j-1,r_\infty(j-1)}^i(w,\dots,w) = \chi_{j-1,r_\infty(j-1)}^i(v,\dots,v)\}.\] Also, we set $r_\infty(j) \coloneqq 1$. If $\chi_{j,1}^i \equiv \chi_{j-1,r_\infty(j-1)}^i$, the algorithm terminates with $j_\infty \coloneqq j-1$. Now suppose $j$ is odd. For notational convenience, we set $\chi_{j,0}^i \coloneqq \chi_{j-1,r_\infty(j-1)}^i$. For $r \geq 1$ and $\bar v = (v_1,\dots,v_k) \in (V(G_i))^k$ let \[\chi_{j,r+1}^i(\bar v) \coloneqq \Big(\chi_{j,r}^i(\bar v), {\mathcal M}_r(\bar v)\Big)\] where \[{\mathcal M}_r(\bar v) \coloneqq \Big\{\!\!\Big\{\big(\chi_{j,r}^i(\bar v[w/1]),\dots,\chi_{j,r}^i(\bar v[w/k])\big) \;\Big\vert\; w \in V(G_i) \Big\}\!\!\Big\}\] and $\bar v[w/s] \coloneqq (v_1,\dots,v_{s-1},w,v_{s+1},\dots,v_k)$ is the tuple obtained from replacing the $s$-th entry by $w$. We set $r_\infty(j) \coloneqq r$ for the minimal $r \geq 0$ such that $\chi_{j,r}^i \equiv \chi_{j,r+1}^i$. If $r_\infty(j) = 0$, then the algorithm terminates and $j_\infty \coloneqq j-1$. This completes the description of the colorings. The definition of the sequence of colorings clearly follows the definition of the $(t,k)$-WL-stable coloring where the single refinement steps of the WL-algorithm are given explicitly. Hence, \begin{equation} \tWL{t}{k}{G_i} \equiv \chi_{j_\infty,r_\infty(j_\infty)}^i. \end{equation} Now, the basic idea is to define isomorphism-invariant vertex- and arc-colored graphs $(H_i,\lambda_V^i,\lambda_E^i)$ such that $\im(\chi_{j,r}^i)$ corresponds to a subset of $V(H_i)$ for all $j \in [j_\infty]$ and $r \in [r_\infty(j)]$, and $(H_i,\lambda_V^i,\lambda_E^i)$ is $t$-CR-bounded.
Towards this end, we partition the vertex set of $H_i$ into \emph{layers} each of which corresponds to one of the colorings. To be more precise, we define \[V(H_i) \coloneqq \bigcup_{j \in [j_\infty]} \bigcup_{r \in [r_\infty(j)]} V_{j,r}^i\] for some suitable sets $V_{j,r}^i$ to be described below. Let $j \in [j_\infty]$ and $r \in [r_\infty(j)]$. We define \[W_{j,r}^i \coloneqq \{(i,j,r,c) \mid c \in \im(\chi_{j,r}^i)\} \subseteq V_{j,r}^i\] and set \[\lambda_V^i(i,j,r,c) \coloneqq \begin{cases} (j,r,{\sf col}) &\text{if } (j,r) \neq (1,1)\\ (j,r,{\sf col},c) &\text{otherwise} \end{cases}.\] Here, ${\sf col}$ is a special value indicating that $(i,j,r,c)$ is a vertex that corresponds to a color of $\chi_{j,r}^i$. (In connecting the layers, we will need to add further auxiliary vertices to build certain gadgets. These vertices will be colored using a special value ${\sf aux}$ in the third component.) We now connect the layers in such a way that $(H_i,\lambda_V^i,\lambda_E^i)$ is $t$-CR-bounded. Towards this end, we build on the inductive definition of the colorings. Let \[X_i \coloneqq \{v \in V(H_i) \mid |[v]_{\tWL{t}{1}{H_i}}| = 1\}\] denote the set of vertices appearing in a singleton color class of the coloring $\tWL{t}{1}{H_i}$. Throughout the construction, we maintain the property that $V_{j,r}^i \subseteq X_i$ for all layers covered so far. For the first layer $j=1$, $r=1$ there is nothing more to be done. By the definition of $\lambda_V^i$, all vertices in $V_{1,1}^i \coloneqq W_{1,1}^i$ already form singleton color classes. So suppose $j > 1$. First suppose $j$ is even. In this case $r_\infty(j) = 1$.
We set $V_{j,1}^i \coloneqq W_{j,1}^i$. For each $c \in \im(\chi_{j,1}^i)$ we add an edge $(i,j,1,c)(i,j-1,r_\infty(j-1),c')$ where $c' \in \im(\chi_{j-1,r_\infty(j-1)}^i)$ is the unique color for which \[\big(\chi_{j,1}^i\big)^{-1}(c) \subseteq \big(\chi_{j-1,r_\infty(j-1)}^i\big)^{-1}(c').\] Also, we set \[\lambda_E^i((i,j,1,c)(i,j-1,r_\infty(j-1),c')) \coloneqq 0.\] By definition of the coloring $\chi_{j,1}^i$, every vertex $v \in V_{j-1,r_\infty(j-1)}^i$ has at most $t$ neighbors in $V_{j,1}^i$. Since $V_{j-1,r_\infty(j-1)}^i \subseteq X_i$ by induction, this implies that $V_{j,1}^i \subseteq X_i$. Next, suppose $j$ is odd. For notational convenience, we set $V_{j,0}^i \coloneqq V_{j-1,r_\infty(j-1)}^i$ as well as $(i,j,0,c) \coloneqq (i,j-1,r_\infty(j-1),c)$ for all vertices $(i,j-1,r_\infty(j-1),c) \in W_{j-1,r_\infty(j-1)}^i$. Fix some $r \in [r_\infty(j)]$. Suppose $\im(\chi_{j,r}^i) = \{c_1,\dots,c_\ell\}$. For every $p \in [\ell]$ we have that $c_p = (c_p',{\mathcal M}_p)$ where $c_p' \in \im(\chi_{j,r-1}^i)$, and ${\mathcal M}_p$ is a multiset over elements from $\big(\im(\chi_{j,r-1}^i)\big)^k$. Let ${\mathcal M} \coloneqq \bigcup_{p \in [\ell]} {\mathcal M}_p$. For each $\bar c = (\bar c_1,\dots,\bar c_k) \in {\mathcal M}$ we introduce a vertex $(i,j,r,\bar c)$. We set $\lambda_V^i(i,j,r,\bar c) \coloneqq (j,r,{\sf aux})$ and connect $(i,j,r,\bar c)$ to the vertices $(i,j,r-1,\bar c_q)$ for all $q \in [k]$. We set \[\lambda_E^i((i,j,r,\bar c)(i,j,r-1,\bar c_q)) \coloneqq q\] for all $q \in [k]$. Next, we connect $(i,j,r,c_p)$ to $(i,j,r-1,c_p')$ as well as to $(i,j,r,\bar c)$ for all $\bar c \in {\mathcal M}_p$ and $p \in [\ell]$. We set \[\lambda_E^i((i,j,r,c_p)(i,j,r-1,c_p')) \coloneqq 0\] and $\lambda_E^i((i,j,r,c_p)(i,j,r,\bar c))$ to the multiplicity of $\bar c$ in the multiset ${\mathcal M}_p$. It is easy to see that $V_{j,r}^i \subseteq X_i$ using that $V_{j,r-1}^i \subseteq X_i$ by induction.
Indeed, once all vertices from $V_{j,r-1}^i$ are individualized, it suffices to apply the Color Refinement algorithm for every vertex in $V_{j,r}^i$ to be assigned a distinct color. This completes the description of $H_i$. In total, $X_i = V(H_i)$, which means that $H_i$ is $(t,1)$-WL-bounded. Using Theorem \ref{thm:isomorphism-t-cr-bounded}, it is possible to compute $\Iso(H_1,H_2)$ in time polynomial in the size of $H_1$ and $H_2$ using oracle access to the Hypergraph Isomorphism Problem for $\mgamma_t$-groups. So let us analyze the size of the graph $H_i$. Since each coloring $\chi_{j,r}^i$ refines the previous coloring, we conclude that there are at most $n^k$ layers. Also, $|W_{j,r}^i| = |\im(\chi_{j,r}^i)| \leq n^k$. So it remains to bound the number of auxiliary vertices in a given layer. Towards this end, observe that $|{\mathcal M}| \leq \ell \cdot n \leq n^{k+1}$. Hence, $|V(H_i)| \leq n^k(n^{k+1} + n^k) = n^{{\mathcal O}(k)}$. This means it is possible to compute $\Iso(H_1,H_2)$ in time $n^{{\mathcal O}(k)}$ using oracle access to the Hypergraph Isomorphism Problem for $\mgamma_t$-groups. Now, the vertices with color $(j_\infty,r_\infty(j_\infty),{\sf col})$ exactly correspond to the color classes of $\chi_i = \tWL{t}{k}{G_i}$. We set \[\Gamma\theta \coloneqq \Iso(H_1,H_2)[W_{j_\infty,r_\infty(j_\infty)}^1].\] By renaming the elements of the domain in the natural way, we get $\Gamma \leq \Sym({\mathcal P}_1)$ and $\theta \colon {\mathcal P}_1 \rightarrow {\mathcal P}_2$. Since $H_i$ is defined in an isomorphism-invariant manner, it follows that $\Iso(G_1,G_2)[{\mathcal P}_1] \subseteq \Gamma\theta$. Finally, $\Gamma \in \mgamma_t$ by Theorem \ref{thm:isomorphism-t-cr-bounded}. \end{proof} \begin{corollary} Let $G_1,G_2$ be two $(t,k)$-WL-bounded graphs. Then a representation for $\Iso(G_1,G_2)$ can be computed in time $n^{{\mathcal O} (k \cdot (\log t)^c)}$ for some absolute constant $c$.
\end{corollary} \begin{proof} Let ${\mathcal P}_1 \coloneqq \{\{v\} \mid v \in V(G_1)\}$ and ${\mathcal P}_2 \coloneqq \{\{v\} \mid v \in V(G_2)\}$. By Theorems \ref{thm:bounding-group-t-k-wl} and \ref{thm:hypergraph-isomorphism-gamma-d-fast}, there is an algorithm, running in time $n^{{\mathcal O} (k \cdot (\log t)^c)}$, that computes a $\mgamma_t$-group $\Gamma \leq \Sym({\mathcal P}_1)$ and a bijection $\theta\colon {\mathcal P}_1 \rightarrow {\mathcal P}_2$ such that \[\Iso(G_1,G_2)[{\mathcal P}_1] \subseteq \Gamma\theta.\] Identifying the singleton set $\{v\}$ with the element $v$, one may assume $\Gamma \leq \Sym(V(G_1))$ and $\theta\colon V(G_1) \rightarrow V(G_2)$ such that \[\Iso(G_1,G_2) \subseteq \Gamma\theta.\] In particular, \[\Iso(G_1,G_2) = \Iso_{\Gamma\theta}(G_1,G_2).\] Interpreting both input graphs as hypergraphs, the latter can be computed in time $n^{{\mathcal O} ((\log t)^c)}$ using Theorem \ref{thm:hypergraph-isomorphism-gamma-d-fast}. \end{proof} \section{Structure Theory and Small Color Classes} \label{sec:overview} Having established the necessary tools, we can now turn to the isomorphism test for graphs excluding $K_{3,h}$ as a minor. We start by giving a high-level overview of the algorithm. The main idea is to build on the isomorphism test for $(2,2)$-WL-bounded graphs described in the last section. Let $G_1$ and $G_2$ be two (vertex- and arc-colored) graphs that exclude $K_{3,h}$ as a minor. Using well-known reduction techniques building on isomorphism-invariant decompositions into triconnected\footnote{A triconnected component is either $3$-connected or a cycle.} components (see, e.g., \cite{HopcroftT72}), we may assume without loss of generality that $G_1$ and $G_2$ are $3$-connected. The algorithm starts by individualizing three vertices. To be more precise, the algorithm picks three distinct vertices $v_1,v_2,v_3 \in V(G_1)$ and iterates over all choices of potential images $w_1,w_2,w_3 \in V(G_2)$ under some isomorphism between $G_1$ and $G_2$.
Let $X_1 \coloneqq \{v_1,v_2,v_3\}$ and $X_2 \coloneqq \{w_1,w_2,w_3\}$. Also, let $D_i \coloneqq \cl_{2,2}^{G_i}(X_i)$ denote the $(2,2)$-closure of $X_i$, $i \in \{1,2\}$. Observe that $D_i$ is defined in an isomorphism-invariant manner given the initial choice of $X_i$. Building on Theorems \ref{thm:bounding-group-t-k-wl} and \ref{thm:hypergraph-isomorphism-gamma-d}, it can be checked in polynomial time whether $G_1$ and $G_2$ are isomorphic restricted to the sets $D_1$ and $D_2$. Now, the central idea is to follow a decomposition strategy. Let $Z_1^i,\dots,Z_\ell^i$ denote the vertex sets of the connected components of $G_i - D_i$, and let $S_j^i \coloneqq N_{G_i}(Z_j^i)$ for $j \in [\ell]$ and $i \in \{1,2\}$. We recursively compute isomorphisms between all pairs of graphs $G_i[Z_j^i \cup S_j^i]$ for all $j \in [\ell]$ and $i \in \{1,2\}$. To be able to determine whether all these partial isomorphisms can be combined into a global isomorphism, the crucial insight is that $|S_j^i| < h$ for all $j \in [\ell]$ and $i \in \{1,2\}$. \begin{lemma} \label{la:small-separator} Let $G$ be a graph that excludes $K_{3,h}$ as a minor. Also let $X \subseteq V(G)$ and define $D \coloneqq \cl_{2,2}^G(X)$. Let $Z$ be a connected component of $G - D$. Then $|N_G(Z)| < h$. \end{lemma} Indeed, this lemma forms one of the main technical contributions of the paper. We remark that similar statements are exploited in \cite{GroheNW20,Neuen20,Neuen21}, eventually leading to an isomorphism test running in time $n^{{\mathcal O}((\log h)^c)}$ for all graphs excluding $K_h$ as a topological subgraph. However, all these variants require the $(t,k)$-closure to be taken for non-constant values of $t$ (i.e., $t = \Omega(h)$). For the design of an fpt-algorithm, this is infeasible since we can only afford to apply Theorems \ref{thm:bounding-group-t-k-wl} and \ref{thm:hypergraph-isomorphism-gamma-d} for constant values of $t$ and $k$ (since the set $D_i$ might be the entire vertex set of $G_i$).
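To make the decomposition step concrete, the following is a minimal sketch of computing the component sets $Z_j$ of $G - D$ together with their separators $S_j = N_G(Z_j) \subseteq D$. The adjacency-dict representation and the function name are illustrative assumptions; computing the $(2,2)$-closure $D$ itself is not shown here.

```python
from collections import deque

def components_and_separators(adj, D):
    """Given a graph as an adjacency dict `adj` (vertex -> set of neighbors)
    and a vertex set D (standing in for the (2,2)-closure), return pairs
    (Z, S) where Z is the vertex set of a connected component of G - D and
    S = N_G(Z) is its separator inside D.  Illustrative names only."""
    seen = set(D)
    result = []
    for s in adj:
        if s in seen:
            continue
        comp, sep = set(), set()
        queue = deque([s])
        seen.add(s)
        while queue:  # BFS inside G - D, collecting neighbors that lie in D
            v = queue.popleft()
            comp.add(v)
            for w in adj[v]:
                if w in D:
                    sep.add(w)
                elif w not in seen:
                    seen.add(w)
                    queue.append(w)
        result.append((comp, sep))
    return result
```

On a path $1$-$2$-$3$-$4$-$5$ with $D = \{3\}$, this yields the components $\{1,2\}$ and $\{4,5\}$, each with separator $\{3\}$; Lemma \ref{la:small-separator} asserts that every such separator has size less than $h$.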
The lemma above implies that the interplay between $D_i$ and $V(G_i) \setminus D_i$ is simple which allows for a dynamic programming approach. To be more precise, we can recursively list all elements of the set $\Iso((G_i[Z_j^i \cup S_j^i],S_j^i),(G_{i'}[Z_{j'}^{i'} \cup S_{j'}^{i'}],S_{j'}^{i'}))[S_j^i]$ for all $j,j' \in [\ell]$ and $i,i' \in \{1,2\}$ (i.e., we list all bijections $\sigma\colon S_j^i \rightarrow S_{j'}^{i'}$ that can be extended to an isomorphism between the corresponding subgraphs). To incorporate this information, we extend the graph $G_i[D_i]$ by simple gadgets obtaining graphs $H_i$ that are $(2,2)$-WL-bounded and such that $G_1 \cong G_2$ if and only if $H_1 \cong H_2$. (For technical reasons, the algorithm does not exactly implement this strategy, but closely follows the general idea.) In order to realize this recursive strategy, it remains to ensure that the algorithm makes progress when performing a recursive call. Actually, this turns out to be a non-trivial task. Indeed, it may happen that $D_i = X_i$, there is only a single component $Z_1^i$ of $G_i - D_i$, and $N_{G_i}(Z_1^i) = D_i$. To circumvent this problem, the idea is to compute an isomorphism-invariant extension $\gamma(X_i) \supsetneq X_i$ such that $|\gamma(X_i)| \leq h^4$. Assuming such an extension can be computed, we simply extend the set $X_i$ until the algorithm arrives in a situation where the recursive scheme discussed above makes progress. Observe that this is guaranteed to happen as soon as $|X_i| \geq h$ building on Lemma \ref{la:small-separator}. Also note that we can still artificially individualize all vertices from $X_i$ at a cost of $2^{{\mathcal O}(h^4\log h)}$ (since any isomorphism can only map vertices from $X_1$ to vertices from $X_2$). To compute the extension, we exploit the fact that $G_i$ is $(h-1,1)$-WL-bounded by \cite[Corollary 24]{Neuen20} (after individualizing $3$ vertices). 
Simply speaking, for every choice of $3$ distinct vertices in $X_i$, after individualizing these vertices and performing the $1$-dimensional Weisfeiler-Leman algorithm, we can identify a color class of size at most $h-1$ to be added to the set $X_i$. Overall, assuming $|X_i| \leq h$, this gives an extension $\gamma(X_i)$ of size at most $h+h^3(h-1) \leq h^4$. To implement this high-level strategy, we first prove Lemma \ref{la:small-separator} in the next section. Afterwards, we compute the entire decompositions of the input graphs in Section \ref{sec:decomposition}. Finally, the dynamic programming strategy along the computed decomposition is realized in Section \ref{sec:main-algorithm}. \section{Finding Disjoint and Connected Subgraphs} \label{sec:disjoint-subtrees} In this section, we give a proof of Lemma \ref{la:small-separator}. Towards this end, we first require some additional notation and basic tools for the $2$-dimensional Weisfeiler-Leman algorithm. Let $G$ be a graph and let $\chi \coloneqq \WL{2}{G}$ be the coloring computed by the $2$-dimensional Weisfeiler-Leman algorithm. We denote by $C_V = C_V(G,\chi) \coloneqq \{\chi(v,v) \mid v \in V(G)\}$ the set of \emph{vertex colors} under the coloring $\chi$. Also, for $c \in C_V$, $V_c \coloneqq \{v \in V(G) \mid \chi(v,v) = c\}$ denotes the set of all vertices of color $c$. Next, consider a set of colors $C \subseteq \{\chi(v,w) \mid v \neq w\}$. We define the graph $G[C]$ with vertex set \[V(G[C]) \coloneqq \{v_1,v_2 \mid \WL{2}{G}(v_1,v_2) \in C\}\] and edge set \[E(G[C]) \coloneqq \{v_1v_2 \mid \WL{2}{G}(v_1,v_2) \in C\}.\] In case $C = \{c\}$ consists of a single color we also write $G[c]$ instead of $G[\{c\}]$. Moreover, let $A_1,\dots,A_\ell$ be the vertex sets of the connected components of $G[C]$. We also define the graph $G/C$ as the graph obtained from $G$ by contracting every set $A_i$ to a single vertex.
Formally, $G/C$ has vertex set \[V(G/C) \coloneqq \{\{v\} \mid v \in V(G) \setminus V(G[C])\} \cup \{A_1,\dots,A_\ell\}\] and edge set \[E(G/C) \coloneqq \{X_1X_2 \mid \exists v_1 \in X_1,v_2 \in X_2\colon v_1v_2 \in E(G)\}.\] As before, if $C = \{c\}$ consists of a single edge-color, we write $G/c$ instead of $G/C$. \begin{lemma}[see {\cite[Theorem 3.1.11]{ChenP19}}] \label{la:factor-graph-2-wl} Let $G$ be a graph and $C \subseteq \{\chi(v,w) \mid v \neq w\}$ a set of colors of the stable coloring $\chi \coloneqq \WL{2}{G}$. Define \[(\chi/C)(X_1,X_2) \coloneqq \{\!\{\chi(v_1,v_2) \mid v_1 \in X_1, v_2 \in X_2\}\!\}\] for all $X_1,X_2 \in V(G/C)$. Then $\chi/C$ is a stable coloring of the graph $G/C$ with respect to the $2$-dimensional Weisfeiler-Leman algorithm. Moreover, for all $X_1,X_2,X_1',X_2' \in V(G/C)$, it holds that either $(\chi/C)(X_1,X_2) = (\chi/C)(X_1',X_2')$ or $(\chi/C)(X_1,X_2) \cap (\chi/C)(X_1',X_2') = \emptyset$. \end{lemma} Next, let $X$ be a set and let ${\mathcal P}$ be a partition of $X$. We define the corresponding equivalence relation $\sim_{\mathcal P}$ on the set $X$ via $x \sim_{\mathcal P} y$ if there is a set $P \in {\mathcal P}$ such that $x,y \in P$. Observe that the equivalence classes of $\sim_{\mathcal P}$ are exactly the blocks of ${\mathcal P}$. Assuming $X \subseteq V(G)$, we also define $G/{\mathcal P}$ to be the graph obtained from $G$ by contracting all blocks $P \in {\mathcal P}$ into single vertices. Let $c \in C_V$. We say a partition ${\mathcal P}$ of the set $V_c$ is \emph{$\chi$-definable} if there is a set of colors $C_{\mathcal P} \subseteq \{\chi(v,w) \mid v \neq w \in V_c\}$ such that \[v \sim_{\mathcal P} w \;\;\Leftrightarrow\;\; \chi(v,w) \in C_{\mathcal P}\] for all $v,w \in V_c$. Observe that $\chi/C_{\mathcal P}$ is a $2$-stable coloring for $G/{\mathcal P}$ in this case by Lemma \ref{la:factor-graph-2-wl}. \begin{lemma} \label{la:partition-overlap} Let $G$ be a graph and $\chi$ a $2$-stable coloring.
Let $c \in C_V$ and let ${\mathcal P},{\mathcal Q}$ be two $\chi$-definable partitions of $V_c$ such that $|{\mathcal P}| \leq |{\mathcal Q}|$. Let $P_1,\dots,P_\ell \in {\mathcal P}$ be distinct blocks. Then there are $v_i \in P_i$, $i \in [\ell]$, such that $v_i \not\sim_{\mathcal Q} v_j$ for all distinct $i,j \in [\ell]$. \end{lemma} \begin{proof} Consider the bipartite graph $B = ({\mathcal P},{\mathcal Q},E_B)$ where \[E_B \coloneqq \{PQ \mid P \in {\mathcal P}, Q \in {\mathcal Q}, P \cap Q \neq \emptyset\}.\] Since $\chi$ is $2$-stable the graph $B$ is biregular. Let $d_{\mathcal P} \coloneqq \deg_B(P)$ for $P \in {\mathcal P}$, and $d_{\mathcal Q} \coloneqq \deg_B(Q)$ for $Q \in {\mathcal Q}$. Then $|E_B| = d_{\mathcal P} \cdot |{\mathcal P}| = d_{\mathcal Q} \cdot |{\mathcal Q}|$. Let ${\mathcal P}' \subseteq {\mathcal P}$. Then $d_{\mathcal Q} \cdot |N_B({\mathcal P}')|\geq |{\mathcal P}'| \cdot d_{\mathcal P}$ and hence, $|N_B({\mathcal P}')| \geq \frac{d_{\mathcal P}}{d_{\mathcal Q}}|{\mathcal P}'| = \frac{|{\mathcal Q}|}{|{\mathcal P}|}|{\mathcal P}'| \geq |{\mathcal P}'|$. So by Hall's Marriage Theorem, there is a perfect matching $M$ of $B$. Let $Q_1,\dots,Q_\ell \in {\mathcal Q}$ be those sets matched to $P_1,\dots,P_\ell$ by the matching $M$. Pick an arbitrary element $v_i \in P_i \cap Q_i$ for all $i \in [\ell]$. Clearly, $v_i \not\sim_{\mathcal Q} v_j$ for all distinct $i,j \in [\ell]$. \end{proof} Let $G$ be a graph and let $\chi$ be a $2$-stable coloring. We define the graph $G[[\chi]]$ with vertex set $V(G[[\chi]]) \coloneqq C_V(G,\chi)$ and edges \[E(G[[\chi]]) \coloneqq \{c_1c_2 \mid \exists v_1 \in V_{c_1}, v_2 \in V_{c_2} \colon v_1v_2 \in E(G)\}.\] Having established all the necessary basic tools, we are now ready to prove the main technical statement of this section. \begin{lemma} \label{la:disjoint-trees} Let $G$ be a graph and let $\chi$ be a $2$-stable coloring. 
Suppose that $G[[\chi]]$ is connected and $|V_c| \geq 3$ for every $c \in C_V$. Then there are vertex-disjoint, connected subgraphs $H_1,H_2,H_3 \subseteq G$ such that $V(H_r) \cap V_c \neq \emptyset$ for all $r \in \{1,2,3\}$ and $c \in C_V$. \end{lemma} \begin{proof} Let $F$ be a spanning tree of $G[[\chi]]$. Let $c_0 \in C_V$ be an arbitrary color and fix $c_0$ as the root of the tree $F$. Also, we direct all edges in $F$ away from the root. We denote by $L(F)$ the set of leaves of $F$, i.e., all vertices without outgoing edges. Also, for $c \neq c_0$ we denote by $\parent(c)$ the unique parent node in the tree $F$. Finally, for three graphs $H_1,H_2,H_3 \subseteq G$, we define $V(H_1,H_2,H_3) \coloneqq V(H_1) \cup V(H_2) \cup V(H_3)$. We prove the following statement by induction on $|V(F)| = |C_V|$. The input consists of the following objects: \begin{enumerate}[label = (\roman*)] \item\label{item:disjoint-trees-input-1} a graph $G$ and a $2$-stable coloring $\chi$ such that $|V_c| \geq 3$ for every $c \in C_V$, \item\label{item:disjoint-trees-input-2} a (rooted) spanning tree $F$ for $G[[\chi]]$, \item\label{item:disjoint-trees-input-3} a collection $({\mathcal P}_c)_{c \in C_R}$ of $\chi$-definable partitions ${\mathcal P}_c = \{P_1^c,P_2^c\}$ (i.e., each partition consists of exactly two blocks) where $C_R \subseteq L(F)$ (i.e., only leaves of $F$ may be equipped with an additional partition), and \item\label{item:disjoint-trees-input-4} a collection $(f_c)_{c \in C_R}$ of functions $f_c$ that map each triple $v_1,v_2,v_3 \in V_c$ of distinct vertices such that $1 \leq |\{v_1,v_2,v_3\} \cap P_1^c| \leq 2$ to a partition $f_c(v_1,v_2,v_3) = {\mathcal Q}_c^{v_1,v_2,v_3} = (Q_1,Q_2,Q_3) \preceq {\mathcal P}_c$ into three blocks such that $v_r \in Q_r$ for all $r \in \{1,2,3\}$.
\end{enumerate} Then there are $H_1,H_2,H_3 \subseteq G$ such that \begin{enumerate}[label = (\alph*)] \item\label{item:disjoint-trees-output-1} $H_1,H_2,H_3$ are pairwise vertex-disjoint, \item\label{item:disjoint-trees-output-2} $V(H_r) \cap V_c \neq \emptyset$ for all $c \in C_V$, \item\label{item:disjoint-trees-output-3} $|V(H_r) \cap V_c| = 1$ for all leaves $c \in L(F)$, \item\label{item:disjoint-trees-output-4} $|V(H_1,H_2,H_3) \cap P| \leq 2$ for all $P \in {\mathcal P}_c$ and $c \in C_R$, and \item\label{item:disjoint-trees-output-5} $H_r'$ is connected where $V(H_r') \coloneqq V(H_r) \cup \bigcup_{c \in C_R}Q_r^c$, $f_c(v_1^c,v_2^c,v_3^c) = (Q_1^c,Q_2^c,Q_3^c)$ and $\{v_r^c\} = V(H_r) \cap V_c$ for all $c \in C_R$, and \[E(H_r') \coloneqq E(G[V(H_r')]) \cup \bigcup_{c \in C_R} \binom{Q_r^c}{2}.\] \end{enumerate} Before diving into the proof, let us give some intuition on this complicated statement. First of all, observe that Requirements \ref{item:disjoint-trees-input-1} and \ref{item:disjoint-trees-input-2} cover the prerequisites of the lemma, and Properties \ref{item:disjoint-trees-output-1} and \ref{item:disjoint-trees-output-2} provide two of the three guarantees listed in the lemma. Also, for $C_R = \emptyset$, we recover the statement of the lemma (in a slightly stronger form due to Property \ref{item:disjoint-trees-output-3}). Now, the basic idea of the inductive proof is to consider a leaf $c$ and construct the graphs $H_1,H_2,H_3$ from a solution $H_1',H_2',H_3'$ obtained by induction after removing all vertices of color $c$ (i.e., removing the leaf $c$ from the tree $F$). Let $d$ denote the parent of $c$. A particularly difficult case occurs if $G[V_d,V_c]$ is isomorphic to $2K_{2,h}$, i.e., it is the disjoint union of two copies of $K_{2,h}$, where $|V_c| = 4$ and $h \geq 3$ (see Figure \ref{fig:find-subgraphs}). Here, it may happen that $V(H_1',H_2',H_3') \cap V_d$ is a subset of one of the two components of $G[V_d,V_c]$.
In this case, it is impossible to extend all three graphs to the color $c$ while preserving connectedness. To resolve this problem, we introduce a partition ${\mathcal P}_d$ which propagates the requirement that both connected components must be covered by $V(H_1',H_2',H_3')$ up the tree $F$ (see Property \ref{item:disjoint-trees-output-4}). Unfortunately, this additional requirement introduces further challenges for other cases. To compensate for this, we exploit the fact that we can potentially use vertices from $V_c$ to connect parts of $H_r$ that cannot be connected in any other way. To be more precise, if $V(H_r') \cap V_d$ were to contain vertices $v_r,v_r'$ from the same connected component of $G[V_d,V_c]$, it would be acceptable if these vertices were not connected in $H_r'$ since, as soon as we add some vertex $w_r \in V_c$, this vertex will establish a connection between $v_r$ and $v_r'$. This simple idea is captured by Requirement \ref{item:disjoint-trees-input-4} and Property \ref{item:disjoint-trees-output-5}. If we construct $H_r'$ in such a way that $V(H_r') \cap V_d = \{v_r\}$, then there is a partition ${\mathcal Q}_d^{v_1,v_2,v_3} = (Q_1,Q_2,Q_3) \preceq {\mathcal P}_d$ which has the property that all vertices within one block may be connected by a vertex from the class $V_c$. Hence, it is sufficient if $H_r'$ is connected after adding all the vertices from $Q_r$ to $H_r'$ and turning $Q_r$ into a clique. Now, this is precisely what Property \ref{item:disjoint-trees-output-5} guarantees.
\begin{figure} \centering \begin{tikzpicture}[scale = 1.4] \draw[line width = 1.6pt, color = mRed] (0,0) ellipse (1.8cm and 0.6cm); \draw[line width = 1.6pt, color = mBlue] (4,0) ellipse (1.8cm and 0.6cm); \draw[line width = 1.6pt, color = lipicsYellow] (2,2) ellipse (1.8cm and 0.6cm); \draw[line width = 1.6pt, color = mGreen] (6,2) ellipse (1.8cm and 0.6cm); \draw[line width = 1.6pt, color = mTurquoise] (4,4) ellipse (1.8cm and 0.6cm); \node at (5.7,4.5) {$c_0$}; \node at (0.3,2.5) {$c_1$}; \node at (7.7,2.5) {$c_2$}; \node at (-1.7,-0.5) {$c_3$}; \node at (5.7,-0.5) {$c_4$}; \node[vertex, fill = mRed] (r1) at (-1.2,0) {}; \node[vertex, fill = mRed] (r2) at (-0.4,0) {}; \node[vertex, fill = mRed] (r3) at (0.4,0) {}; \node[vertex, fill = mRed] (r4) at (1.2,0) {}; \node[vertex, fill = mBlue] (b1) at (3.2,0) {}; \node[vertex, fill = mBlue] (b2) at (4.0,0) {}; \node[vertex, fill = mBlue] (b3) at (4.8,0) {}; \node[vertex, fill = lipicsYellow] (y1) at (0.75,2) {}; \node[vertex, fill = lipicsYellow] (y2) at (1.25,2) {}; \node[vertex, fill = lipicsYellow] (y3) at (1.75,2) {}; \node[vertex, fill = lipicsYellow] (y4) at (2.25,2) {}; \node[vertex, fill = lipicsYellow] (y5) at (2.75,2) {}; \node[vertex, fill = lipicsYellow] (y6) at (3.25,2) {}; \node[vertex, fill = mGreen] (g1) at (4.8,2) {}; \node[vertex, fill = mGreen] (g2) at (5.6,2) {}; \node[vertex, fill = mGreen] (g3) at (6.4,2) {}; \node[vertex, fill = mGreen] (g4) at (7.2,2) {}; \node[vertex, fill = mTurquoise] (t1) at (2.75,4) {}; \node[vertex, fill = mTurquoise] (t2) at (3.25,4) {}; \node[vertex, fill = mTurquoise] (t3) at (3.75,4) {}; \node[vertex, fill = mTurquoise] (t4) at (4.25,4) {}; \node[vertex, fill = mTurquoise] (t5) at (4.75,4) {}; \node[vertex, fill = mTurquoise] (t6) at (5.25,4) {}; \foreach \v/\w in {r1/y1, b1/y1, y1/t1, t1/g1}{ \draw[line width = 2.4pt, lipicsGray] (\v) edge (\w); } \foreach \v/\w in {r2/y2,r2/y3, b2/y2, y2/t2,y3/t3, t2/g3}{ \draw[line width = 3.2pt, lipicsGray!70] (\v) edge (\w); } 
\foreach \v/\w in {r3/y4,r3/y5,r3/y6, b3/y6, y4/t4,y5/t5,y6/t6, t5/g4}{ \draw[line width = 4.0pt, lipicsGray!40] (\v) edge (\w); } \foreach \v/\w in {r1/y1,r1/y2,r1/y3,r2/y1,r2/y2,r2/y3,r3/y4,r3/y5,r3/y6,r4/y4,r4/y5,r4/y6, b1/y1,b1/y4,b2/y2,b2/y5,b3/y3,b3/y6, y1/t1,y1/t3,y2/t1,y2/t2,y3/t2,y3/t3,y4/t4,y4/t6,y5/t4,y5/t5,y6/t5,y6/t6, t1/g1,t1/g2,t2/g1,t2/g3,t3/g2,t3/g3,t4/g3,t4/g4,t5/g2,t5/g4,t6/g1,t6/g4}{ \draw (\v) edge (\w); } \end{tikzpicture} \caption{Visualization for the construction of $H_1,H_2,H_3$ in Lemma \ref{la:disjoint-trees}. The sets $E(H_1),E(H_2),E(H_3)$ are marked in gray. Observe that the edge $(c_0,c_2) \in E(F)$ allows expansion, which means the color class $c_2 \in V(F)$ is removed first in the inductive process. Afterwards, the leaves $c_3$ and $c_4$ are removed, and the color $c_1$ is added to the set $C_R$ since $G[V_{c_1},V_{c_3}]$ is isomorphic to $2K_{2,3}$.} \label{fig:find-subgraphs} \end{figure} \medskip Now, let us turn to the inductive proof of the above statement. The base case $|C_V| = 1$ is trivial. Each of the graphs $H_r$, $r \in \{1,2,3\}$, consists of a single vertex and one can clearly ensure that all properties are satisfied. For the inductive step we distinguish several cases. A visualization is given in Figure \ref{fig:find-subgraphs}. Let $c$ be a leaf of $F$ and let $d$ be its parent. We say that the edge $(d,c)$ \emph{allows expansion} if \begin{enumerate} \item\label{item:allow-expansion-1} $|N_G(U) \cap V_c| \geq |U|$ for all $U \subseteq V_d$ for which $|U| \leq 3$, and \item\label{item:allow-expansion-2} if $c \in C_R$ then $N_G(v) \cap P_1^c \neq \emptyset$ as well as $N_G(v) \cap P_2^c \neq \emptyset$ for all $v \in V_d$. \end{enumerate} \begin{claim} \label{claim:expand-trees} Suppose $(d,c) \in E(F)$ allows expansion. Let $v_1,v_2,v_3 \in V_{d}$ be distinct vertices. Then there are distinct $w_1,w_2,w_3 \in V_{c}$ such that $v_rw_r \in E(G)$ for all $r \in \{1,2,3\}$.
Moreover, if $c \in C_R$ then $|\{w_1,w_2,w_3\} \cap P_1^c| \leq 2$ and $|\{w_1,w_2,w_3\} \cap P_2^c| \leq 2$. \end{claim} \begin{claimproof} Consider the bipartite graph $B \coloneqq G[\{v_1,v_2,v_3\},V_c]$. Since $(d,c)$ allows expansion, the graph $B$ satisfies the requirements of Hall's Marriage Theorem. Hence, there are distinct $w_1,w_2,w_3 \in V_c$ such that $v_rw_r \in E(G)$ for all $r \in \{1,2,3\}$. So it only remains to ensure the second property. Suppose that $c \in C_R$ and $w_1,w_2,w_3$ are in the same block of ${\mathcal P}_c$ (otherwise there is nothing to be done). Without loss of generality suppose that $\{w_1,w_2,w_3\} \subseteq P_1^c$. Then one can simply replace $w_1$ by an arbitrary vertex from $N_G(v_1) \cap P_2^c$. Observe that this set cannot be empty since $(d,c)$ allows expansion. \end{claimproof} First, suppose there is a leaf $c$ with parent $d$ such that $(d,c)$ allows expansion. Let $G' \coloneqq G - V_c$ and $F' \coloneqq F - c$. Also, let $C_R' \coloneqq C_R \setminus \{c\}$. Clearly, the input $(G',F',({\mathcal P}_c)_{c \in C_R'},(f_c)_{c \in C_R'})$ satisfies Requirements \ref{item:disjoint-trees-input-1}--\ref{item:disjoint-trees-input-4}. By the induction hypothesis, there are subgraphs $H_1',H_2',H_3' \subseteq G'$ satisfying \ref{item:disjoint-trees-output-1}--\ref{item:disjoint-trees-output-5}. We pick arbitrary elements $v_r \in V(H_r') \cap V_d$, $r \in \{1,2,3\}$ (these sets are non-empty by Property \ref{item:disjoint-trees-output-2}). Let $w_1,w_2,w_3$ be the vertices provided by Claim \ref{claim:expand-trees}. We define the graphs $H_r$, $r \in \{1,2,3\}$, via $V(H_r) \coloneqq V(H_r') \cup \{w_r\}$ and $E(H_r) \coloneqq E(H_r') \cup \{v_rw_r\}$. It is easy to verify that Properties \ref{item:disjoint-trees-output-1}--\ref{item:disjoint-trees-output-5} are satisfied. Observe that, if $c \in C_R$, the graphs are connected even without the additional vertices and edges coming from the partition $f_c(w_1,w_2,w_3)$.
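The selection of distinct neighbors $w_1,w_2,w_3$ via Hall's Marriage Theorem amounts to finding a system of distinct representatives, which a standard augmenting-path matching computes. A minimal sketch under that reading (names are ad hoc; the additional adjustment for the blocks of ${\mathcal P}_c$ is omitted):

```python
def distinct_neighbors(neigh):
    """Given a list `neigh` of neighbor sets N_G(v_r) in V_c for vertices
    v_1,...,v_m, return distinct w_1,...,w_m with w_r in neigh[r-1], or
    None if Hall's condition fails.  Plain Kuhn-style augmenting paths."""
    match = {}  # right-side vertex w -> index r currently matched to w

    def augment(r, avoid):
        # Try to match index r, possibly re-matching earlier indices.
        for w in neigh[r]:
            if w in avoid:
                continue
            avoid.add(w)
            if w not in match or augment(match[w], avoid):
                match[w] = r
                return True
        return False

    for r in range(len(neigh)):
        if not augment(r, set()):
            return None  # some subset violates Hall's condition
    chosen = {r: w for w, r in match.items()}
    return [chosen[r] for r in range(len(neigh))]
```

For instance, `distinct_neighbors([{'a'}, {'a', 'b'}, {'b', 'c'}])` returns `['a', 'b', 'c']`, the unique system of distinct representatives, while `distinct_neighbors([{'a'}, {'a'}])` returns `None`.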
\medskip Next, consider a color $d \in V(F)$ such that all children of $d$ are leaves in $F$. Let $c_1,\dots,c_\ell$ be the children of $d$. We assume that $(d,c_i)$ does not allow expansion for all $i \in [\ell]$ (otherwise the previous case applies). We start by defining a sequence of $\chi$-definable partitions ${\mathcal A}_1,\dots,{\mathcal A}_\ell$ of the set $V_d$. Fix some $i \in [\ell]$. Since $(d,c_i)$ does not allow expansion, Item \ref{item:allow-expansion-1} or \ref{item:allow-expansion-2} is violated. First suppose Item \ref{item:allow-expansion-2} is violated. This means that $c_i \in C_R$. Let $A_1^i \coloneqq \{v \in V_d \mid N_G(v) \cap V_{c_i} \subseteq P_1^{c_i}\}$ and $A_2^i \coloneqq \{v \in V_d \mid N_G(v) \cap V_{c_i} \subseteq P_2^{c_i}\}$. Since $\chi$ is $2$-stable and ${\mathcal P}_{c_i}$ is $\chi$-definable, we conclude that ${\mathcal A}_i \coloneqq \{A_1^i,A_2^i\}$ is a partition of $V_d$. Moreover, $|A_1^i| = |A_2^i|$ and ${\mathcal A}_i$ is $\chi$-definable. So suppose Item \ref{item:allow-expansion-2} is not violated, meaning Item \ref{item:allow-expansion-1} is violated. So there is some set $U_i \subseteq V_d$ such that $|U_i| \leq 3$ and $|N_G(U_i) \cap V_{c_i}| < |U_i|$. If $|N_G(v) \cap V_{c_i}| = 1$ for all $v \in V_d$, then $G[V_d,V_{c_i}]$ is isomorphic to the disjoint union of $s$ copies of $K_{1,h}$ for some $h \geq 2$ and $s \geq 3$. Otherwise $|N_G(v) \cap V_{c_i}| \geq 2$ for all $v \in V_d$. This implies that $|U_i| = 3$ and $|N_G(v) \cap V_{c_i}| = 2$ for all $v \in V_d$. Moreover, $N_G(v) \cap V_{c_i} = N_G(v') \cap V_{c_i}$ for all $v,v' \in U_i$. We say that $v,v' \in V_d$ are \emph{$c_i$-twins} if $N_G(v) \cap V_{c_i} = N_G(v') \cap V_{c_i}$. Let $A_1^i,\dots,A_{q_i}^i$ denote the equivalence classes of $c_i$-twins, $i \in [\ell]$. In both cases, $q_i \geq 2$ since $|V_{c_i}| \geq 3$. Since $\chi$ is $2$-stable, we have that $|A_j^i| = |A_{j'}^i|$ for all $j,j' \in [q_i]$.
Moreover, the partition ${\mathcal A}_i \coloneqq \{A_1^i,\dots,A_{q_i}^i\}$ is $\chi$-definable. This completes the description of the partitions ${\mathcal A}_1,\dots,{\mathcal A}_\ell$. Let $q_i \coloneqq |{\mathcal A}_i|$ denote the number of blocks of ${\mathcal A}_i$. Without loss of generality assume that $q_1 \leq q_i$ for all $i \in [\ell]$. For ease of notation, define $c \coloneqq c_1$, ${\mathcal A} \coloneqq {\mathcal A}_1$ and $q \coloneqq q_1$. Recall that $q \geq 2$. We distinguish two cases. First suppose that $q \geq 3$. Let $F' \coloneqq F - \{c_i \mid i \in [\ell]\}$. Observe that $d$ is a leaf of $F'$. Also define \[G' \coloneqq \left(G - \bigcup_{i \in [\ell]} V_{c_i}\right)/{\mathcal A},\] i.e., $G'$ is the graph obtained from $G$ by deleting all vertices from $V_{c_i}$, $i \in [\ell]$, and contracting the sets $A_1^1,\dots,A_q^1$ to single vertices. Let $C_R' \coloneqq C_R \setminus \{c_i \mid i \in [\ell]\}$. Using Lemma \ref{la:factor-graph-2-wl} and $q \geq 3$, the input $(G',F',({\mathcal P}_c)_{c \in C_R'},(f_c)_{c \in C_R'})$ satisfies Requirements \ref{item:disjoint-trees-input-1}--\ref{item:disjoint-trees-input-4}. By the induction hypothesis, there are subgraphs $H_1',H_2',H_3' \subseteq G'$ satisfying \ref{item:disjoint-trees-output-1}--\ref{item:disjoint-trees-output-5}. Let $\{A_r\} = V(H_r') \cap V_d$, $r \in \{1,2,3\}$ (recall that $|V(H_r') \cap V_d| = 1$ by Property \ref{item:disjoint-trees-output-3}). \begin{claim} \label{claim:expand-modulo-twins} Let $i \in [\ell]$ and $v_1,v_2,v_3 \in V_d$ such that $v_r$ and $v_{r'}$ are not $c_i$-twins for all distinct $r,r' \in \{1,2,3\}$. Then there are distinct vertices $w_1,w_2,w_3 \in V_{c_i}$ such that $v_rw_r \in E(G)$ for all $r \in \{1,2,3\}$. Moreover, if $c_i \in C_R$, then $|\{w_1,w_2,w_3\} \cap P_1^{c_i}| \leq 2$ and $|\{w_1,w_2,w_3\} \cap P_2^{c_i}| \leq 2$. \end{claim} \begin{claimproof} Consider the bipartite graph $B \coloneqq G[\{v_1,v_2,v_3\},V_{c_i}]$.
Since no two of $v_1,v_2,v_3$ are $c_i$-twins, the graph $B$ satisfies the requirements of Hall's Marriage Theorem. Hence, there are distinct $w_1,w_2,w_3 \in V_{c_i}$ such that $v_rw_r \in E(G)$ for all $r \in \{1,2,3\}$. So it only remains to ensure the second property. Suppose that $c_i \in C_R$ and $w_1,w_2,w_3$ are in the same block of ${\mathcal P}_{c_i}$ (otherwise there is nothing to be done). Without loss of generality suppose that $\{w_1,w_2,w_3\} \subseteq P_1^{c_i}$. Then one can simply replace $w_1$ by an arbitrary vertex from $N_G(v_1) \cap P_2^{c_i}$. Observe that this set cannot be empty since $(d,c_i)$ satisfies Property \ref{item:allow-expansion-2}. Indeed, if it did not satisfy Property \ref{item:allow-expansion-2}, then $q_i = 2$ by construction. But $q_i \geq q \geq 3$. \end{claimproof} Now, we construct $H_r$ from $H_r'$ as follows. First, we uncontract the set $A_r$, i.e., we remove the vertex $A_r$ from $H_r'$ and add all vertices in $A_r$. Let $v_r \in A_r$ be an arbitrary vertex, $r \in \{1,2,3\}$. By definition, no two of $v_1,v_2,v_3$ are $c$-twins. Hence, by Claim \ref{claim:expand-modulo-twins}, there are distinct $w_1,w_2,w_3 \in V_{c}$ such that $v_rw_r \in E(G)$ for all $r \in \{1,2,3\}$. Since all vertices in $A_r$ are $c$-twins by definition, it holds that $A_r \subseteq N_G(w_r)$ for $r \in \{1,2,3\}$. We add $w_r$ as well as all edges $w_rv_r'$, $v_r' \in A_r$, to the graph $H_r$. Now, it only remains to cover the classes $V_{c_i}$, $2 \leq i \leq \ell$. Fix some $2 \leq i \leq \ell$. By Lemma \ref{la:partition-overlap}, there are $v_1^i \in A_1$, $v_2^i \in A_2$ and $v_3^i \in A_3$ such that no two of them are $c_i$-twins. Thus, by Claim \ref{claim:expand-modulo-twins}, there are distinct $w_1^i, w_2^i,w_3^i \in V_{c_i}$ such that $v_r^iw_r^i \in E(G)$ for all $r \in \{1,2,3\}$. We add these vertices and edges to the graph $H_r$. This completes the description of $H_r$.
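The $c_i$-twin classes used throughout this case can be computed by grouping the vertices of $V_d$ according to their neighborhoods in $V_{c_i}$. A small illustrative sketch, with an ad-hoc adjacency-dict representation and hypothetical names:

```python
def twin_classes(V_d, V_c, adj):
    """Partition V_d into c-twin classes: v and v' are c-twins iff
    N_G(v) & V_c == N_G(v') & V_c, where `adj` maps each vertex to its
    neighbor set.  Illustrative helper, not the paper's notation."""
    classes = {}
    for v in V_d:
        key = frozenset(adj[v] & V_c)  # neighborhood in V_c as hashable key
        classes.setdefault(key, set()).add(v)
    return list(classes.values())
```

By the $2$-stability of $\chi$, all classes returned for a fixed color pair have equal size, matching the statement $|A_j^i| = |A_{j'}^i|$ above.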
It is easy to see that Properties \ref{item:disjoint-trees-output-1}--\ref{item:disjoint-trees-output-3} are satisfied. Property \ref{item:disjoint-trees-output-4} is satisfied if $c_i \in C_R$, $i \in [\ell]$, by Claim \ref{claim:expand-modulo-twins}. For colors $c \in C_R \setminus \{c_i \mid i \in [\ell]\}$, Property \ref{item:disjoint-trees-output-4} follows directly from the induction hypothesis. Finally, for Property \ref{item:disjoint-trees-output-5}, observe that $H_r[V_d \cup \bigcup_{i \in [\ell]}V_{c_i}]$ is connected for every $r \in \{1,2,3\}$. This follows from the fact that $A_r \subseteq N_G(w_r)$. \medskip So it only remains to cover the case $q = 2$. At this point, we need to define another partition ${\mathcal A}^*$ of $V_d$ to cover a certain special case. If $|N_G(v) \cap V_c| = 1$ for all $v \in V_d$, then we define ${\mathcal A}^*$ to be the partition into the equivalence classes of the $c$-twin relation. Observe that $|{\mathcal A}^*| = |V_c|$ in this case. Otherwise, ${\mathcal A}^* \coloneqq \{\{v\} \mid v \in V_d\}$ is defined to be the trivial partition. In this case $|{\mathcal A}^*| = |V_d|$. Observe that, in both cases, $|{\mathcal A}^*| \geq 3$, ${\mathcal A}^*$ is $\chi$-definable and ${\mathcal A}^* \preceq {\mathcal A}$. In particular, since ${\mathcal A}^* \preceq {\mathcal A}$, we can interpret ${\mathcal A}$ as a partition of ${\mathcal A}^*$. Let $F' \coloneqq F - \{c_i \mid i \in [\ell]\}$. Observe that $d$ is a leaf of $F'$. Also define \[G' \coloneqq \left(G - \bigcup_{i \in [\ell]} V_{c_i}\right)/{\mathcal A}^*,\] i.e., $G'$ is the graph obtained from $G$ by deleting all vertices from $V_{c_i}$, $i \in [\ell]$, and contracting all sets $A^* \in {\mathcal A}^*$ to a single vertex. Here, ${\mathcal A}^*$ forms the class of vertices of color $d$ in the graph $G'$. Moreover, let $C_R' \coloneqq (C_R \setminus \{c_i \mid i \in [\ell]\}) \cup \{d\}$.
Also, let ${\mathcal P}_d \coloneqq {\mathcal A}$ (where ${\mathcal A}$ is interpreted as a partition of ${\mathcal A}^*$). Recall that $|{\mathcal A}| = q = 2$ and ${\mathcal A}$ is $\chi$-definable. It remains to define $f_d$. Observe that $f_d$ needs to be defined on triples of vertices $A_1^*,A_2^*,A_3^* \in {\mathcal A}^*$. Let $A_1^*,A_2^*,A_3^* \in {\mathcal A}^*$ be distinct such that $1 \leq |\{A_1^*,A_2^*,A_3^*\} \cap P_1^d| \leq 2$. To define $f_d(A_1^*,A_2^*,A_3^*)$ we distinguish two further subcases. First, suppose that $(d,c)$ satisfies Property \ref{item:allow-expansion-2}, meaning that it violates Property \ref{item:allow-expansion-1} (recall that $(d,c)$ does not allow expansion). Then $G[V_d,V_c]$ is isomorphic to a disjoint union of two copies of $K_{2,h}$ for some number $h \geq 3$, and $|V_c| = 4$. In this case, we pick an arbitrary partition according to the requirements of Item \ref{item:disjoint-trees-input-4}. In the other case, $(d,c)$ violates Property \ref{item:allow-expansion-2}. In particular, $c \in C_R$. This means that there are only edges between $P_1^d$ and $P_1^c$ as well as between $P_2^d$ and $P_2^c$ (here, we interpret ${\mathcal P}_d = {\mathcal A}$ as a partition of $V_d$ in the graph $G$). By the definition of ${\mathcal A}^*$, there are $w_1,w_2,w_3 \in V_c$ such that $E_G(A_r^*,\{w_r\}) \neq \emptyset$, and $|\{w_1,w_2,w_3\} \cap P_1^c| = |\{A_1^*,A_2^*,A_3^*\} \cap P_1^d|$ as well as $|\{w_1,w_2,w_3\} \cap P_2^c| = |\{A_1^*,A_2^*,A_3^*\} \cap P_2^d|$. Let ${\mathcal Q}_c^{w_1,w_2,w_3} = f_c(w_1,w_2,w_3) = (Q_1^c,Q_2^c,Q_3^c)$. We define ${\mathcal Q}_d^{A_1^*,A_2^*,A_3^*} = f_d(A_1^*,A_2^*,A_3^*) = (Q_1^d,Q_2^d,Q_3^d)$ in such a way that $A_r^* \in Q_r^d$, and every $A^* \in Q_r^d$ contains an element that has a neighbor in the set $Q_r^c$ in the graph $G$. Clearly, such a partition exists since $G[V_d,V_c]$ is biregular. Moreover, it is easy to see that ${\mathcal Q}_d^{A_1^*,A_2^*,A_3^*} \preceq {\mathcal P}_d$. 
Using Lemma \ref{la:factor-graph-2-wl}, we can apply the induction hypothesis to $(G',F',({\mathcal P}_c)_{c \in C_R'},(f_c)_{c \in C_R'})$. This results in subgraphs $H_1',H_2',H_3' \subseteq G'$ satisfying \ref{item:disjoint-trees-output-1} - \ref{item:disjoint-trees-output-5}. Let $A_r^*$ be the unique vertex in the set $V(H_r') \cap {\mathcal A}^*$, $r \in \{1,2,3\}$ (recall that $|V(H_r') \cap {\mathcal A}^*| = 1$ by Property \ref{item:disjoint-trees-output-3}). Observe that $1 \leq |\{A_1^*,A_2^*,A_3^*\} \cap P_1^d| \leq 2$ and $1 \leq |\{A_1^*,A_2^*,A_3^*\} \cap P_2^d| \leq 2$ by Property \ref{item:disjoint-trees-output-4}. Also let $f_d(A_1^*,A_2^*,A_3^*) = (Q_1^d,Q_2^d,Q_3^d)$. Note that $A_r^* \in Q_r^d$. We perform several steps to obtain $H_r$ from the graph $H_r'$. As a first step, all vertices from the set $\bigcup_{A^* \in Q_r^d}A^*$ are added to the vertex set of $H_r$. Next, consider the color $c = c_1$. We again need to distinguish between the two cases already used for the definition of $f_d$. First, suppose that $(d,c)$ satisfies Property \ref{item:allow-expansion-2}, meaning that it violates Property \ref{item:allow-expansion-1}. Recall that, in this case, $G[V_d,V_c]$ is isomorphic to a disjoint union of two copies of $K_{2,h}$ for some number $h \geq 3$, and $|V_c| = 4$. Also, ${\mathcal A}^* = \{\{v\} \mid v \in V_d\}$, i.e., there is a natural one-to-one correspondence between $V_d$ and ${\mathcal A}^*$. Let $v_r \in V_d$ be such that $A_r^* = \{v_r\}$. Observe that $P_1^d$ and $P_2^d$ correspond to the two connected components of $G[V_d,V_c]$ (here, we again interpret ${\mathcal P}_d = {\mathcal A}$ as a partition of $V_d$). Hence, $v_1,v_2,v_3$ cover both connected components. It is easy to see that there are distinct $w_1,w_2,w_3 \in V_c$ such that $v_rw_r \in E(G)$ for all $r \in \{1,2,3\}$. We add vertex $w_r$ as well as the edge $v_rw_r$ to the graph $H_r$.
Recall that $P_1^d$ and $P_2^d$ are defined as the equivalence classes of the $c$-twin relation, and that $\{Q_1^d,Q_2^d,Q_3^d\} \preceq \{P_1^d,P_2^d\}$. This implies that $\bigcup_{A^* \in Q_r^d}A^* \subseteq N_G(w_r)$, $r \in \{1,2,3\}$. In particular, $H_r[V_d \cup V_c]$ is connected. In the other case, $(d,c)$ violates Property \ref{item:allow-expansion-2}. Let $w_1,w_2,w_3$ be the vertices used for the definition of $f_d(A_1^*,A_2^*,A_3^*) = (Q_1^d,Q_2^d,Q_3^d)$. Also let $f_c(w_1,w_2,w_3) = (Q_1^c,Q_2^c,Q_3^c)$. Recall that $E_G(A_r^*,\{w_r\}) \neq \emptyset$, and $|\{w_1,w_2,w_3\} \cap P_1^c| = |\{A_1^*,A_2^*,A_3^*\} \cap P_1^d|$ as well as $|\{w_1,w_2,w_3\} \cap P_2^c| = |\{A_1^*,A_2^*,A_3^*\} \cap P_2^d|$. Again, we add vertex $w_r$ as well as all edges $v_rw_r$, $v_r \in A_r^*$, to the graph $H_r$ (observe that $w_rv_r \in E(G)$ for all $v_r \in A_r^*$ by the definition of the partition ${\mathcal A}^*$). Observe that Property \ref{item:disjoint-trees-output-4} for color $d$ (ensured by the induction hypothesis), together with the conditions above, implies Property \ref{item:disjoint-trees-output-4} for color $c$. Also observe that $H_r[V_d \cup V_c]$ is connected after adding all vertices from $Q_r^c$ and turning $Q_r^c$ into a clique, as all vertices from $\bigcup_{A^* \in Q_r^d}A^*$ have a neighbor in the set $Q_r^c$ by definition. Now, we turn to the other leaves $c_2,\dots,c_\ell$. Observe that, up to this point and ignoring the leaves $c_2,\dots,c_\ell$, the graphs $H_1,H_2,H_3$ satisfy Properties \ref{item:disjoint-trees-output-1} - \ref{item:disjoint-trees-output-5}. So let $i \in \{2,\dots,\ell\}$ and consider the leaf $c_i$. \begin{claim} \label{claim:expand-with-two-twins} There are distinct vertices $w_1,w_2,w_3 \in V_{c_i}$ such that, for every $r \in \{1,2,3\}$, there is some $v_r' \in \bigcup_{A^* \in Q_r^d}A^*$ such that $v_r'w_r \in E(G)$.
Moreover, if $c_i \in C_R$, then $|\{w_1,w_2,w_3\} \cap P_1^{c_i}| \leq 2$ and $|\{w_1,w_2,w_3\} \cap P_2^{c_i}| \leq 2$. \end{claim} \begin{claimproof} Let $\widehat{Q}_r \coloneqq \bigcup_{A^* \in Q_r^d}A^*$. Observe that $(\widehat{Q}_1,\widehat{Q}_2,\widehat{Q}_3)$ forms a partition of $V_d$ that refines ${\mathcal A}$. Consider the bipartite graph $B = (\{\widehat{Q}_1,\widehat{Q}_2,\widehat{Q}_3\},V_{c_i},E_B)$ where \[E_B \coloneqq \{\widehat{Q}_rw \mid \exists v \in \widehat{Q}_r \colon vw \in E(G)\}.\] Suppose towards a contradiction that $B$ does not satisfy Hall's condition, i.e., there is a set $U \subseteq \{\widehat{Q}_1,\widehat{Q}_2,\widehat{Q}_3\}$ such that $|N_B(U)| < |U|$. First observe that $N_B(\{\widehat{Q}_1,\widehat{Q}_2,\widehat{Q}_3\}) = V_{c_i}$ and $|V_{c_i}| \geq 3$ by Condition \ref{item:disjoint-trees-input-1}. So $U \neq \{\widehat{Q}_1,\widehat{Q}_2,\widehat{Q}_3\}$, which means that $|U| \leq 2$. Next, observe that there are no isolated vertices in $B$, i.e., $|N_B(U)| \geq 1$. Together, this means that $|U| = 2$ and $|N_B(U)| = 1$ (since $|N_B(U)| < |U|$ by assumption). Without loss of generality suppose that $U = \{\widehat{Q}_1,\widehat{Q}_2\}$ and pick $w \in V_{c_i}$ such that $N_B(U) = \{w\}$. Then $N_G(v) \cap V_{c_i} \subseteq \{w\}$ for all $v \in \widehat{Q}_1 \cup \widehat{Q}_2$. Since $G[V_d,V_{c_i}]$ is biregular and contains at least one edge, it follows that $|N_G(v) \cap V_{c_i}| = 1$ for every $v \in V_d$. Hence, $G[V_d,V_{c_i}]$ is isomorphic to $m$ disjoint copies of $K_{1,s}$ for some $m,s \geq 1$. Also, $|V_{c_i}| = m$ and $|V_d| = m \cdot s$. Now, observe that $N_G(v) \cap V_{c_i} = \{w\}$ for all $v \in \widehat{Q}_1 \cup \widehat{Q}_2$. This means that $s \geq |\widehat{Q}_1 \cup \widehat{Q}_2| = |\widehat{Q}_1| + |\widehat{Q}_2|$.
Since $\{\widehat{Q}_1,\widehat{Q}_2,\widehat{Q}_3\} \preceq {\mathcal A}$ and ${\mathcal A}$ forms an equipartition into two blocks, we get that $|\widehat{Q}_1| + |\widehat{Q}_2| \geq \frac{1}{2}|V_d|$. Together, this implies that $s \geq \frac{1}{2}|V_d|$ and thus $m \leq 2$. But this is a contradiction, since $m = |V_{c_i}| \geq 3$ by Condition \ref{item:disjoint-trees-input-1}. So $B$ satisfies Hall's condition which, by Hall's Marriage Theorem, means that $B$ contains a perfect matching $\{\widehat{Q}_1w_1,\widehat{Q}_2w_2,\widehat{Q}_3w_3\}$. Next, suppose $c_i \in C_R$ and $w_1,w_2,w_3$ are in the same block of ${\mathcal P}_{c_i}$ (otherwise there is nothing to be done). Without loss of generality suppose that $\{w_1,w_2,w_3\} \subseteq P_1^{c_i}$. Since $N_B(\{\widehat{Q}_1,\widehat{Q}_2,\widehat{Q}_3\}) = V_{c_i}$, there is some $r \in \{1,2,3\}$ such that $N_B(\widehat{Q}_r) \cap P_2^{c_i} \neq \emptyset$. Hence, one can simply replace $w_r$ by an arbitrary element $w_r' \in N_B(\widehat{Q}_r) \cap P_2^{c_i}$. This ensures that $|\{w_1,w_2,w_3\} \cap P_1^{c_i}| \leq 2$ and $|\{w_1,w_2,w_3\} \cap P_2^{c_i}| \leq 2$. To complete the proof, for every $r \in \{1,2,3\}$, we pick some element $v_r' \in \widehat{Q}_r$ such that $v_r'w_r \in E(G)$ (the existence of such elements follows directly from the definition of $E_B$). \end{claimproof} Let $w_1,w_2,w_3$ and $v_1',v_2',v_3'$ be the vertices from Claim \ref{claim:expand-with-two-twins}. We add vertex $w_r$ as well as the edge $v_r'w_r$ to the graph $H_r$. Applying this procedure for all $i \in \{2,\dots,\ell\}$ completes the description of the graphs $H_1,H_2,H_3$. Building on the comments above, it is easy to see that the graphs satisfy Properties \ref{item:disjoint-trees-output-1} - \ref{item:disjoint-trees-output-5}. This completes the final case. \end{proof} Building on the last lemma, we can now prove Lemma \ref{la:small-separator}.
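The matching arguments above repeatedly ask for a system of distinct representatives $w_1,w_2,w_3$ of the three neighborhood sets in the bipartite graph $B$; by Hall's Marriage Theorem such a system exists exactly when Hall's condition holds. For three sets, a plain backtracking search suffices. A sketch (the helper name is hypothetical, not from the paper):

```python
def distinct_representatives(neighborhoods):
    # Try to pick pairwise distinct w_r with w_r in neighborhoods[r],
    # i.e., a system of distinct representatives; one exists iff
    # Hall's condition holds, and backtracking finds it if so.
    def backtrack(r, chosen):
        if r == len(neighborhoods):
            return list(chosen)
        for w in neighborhoods[r]:
            if w not in chosen:
                res = backtrack(r + 1, chosen + [w])
                if res is not None:
                    return res
        return None
    return backtrack(0, [])
```

When Hall's condition fails (two of the three sets share a single common neighbor, as in the case ruled out in the claimproof), the search correctly reports that no choice exists.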
\begin{proof}[Proof of Lemma \ref{la:small-separator}] Let $\chi$ be a $2$-stable coloring such that $|[v]_\chi| = 1$ for all $v \in D$ and $|[w]_\chi| \geq 3$ for all $w \in V(G) \setminus D$. Suppose towards a contradiction that $|N_G(Z)| \geq h$, and pick $v_1,\dots,v_h \in N_G(Z)$ to be distinct vertices. Let $C \coloneqq \{\chi(v,v) \mid v \in Z\}$ be the set of vertex colors appearing in the set $Z$. Note that $(G[[\chi]])[C]$ is connected, and $|V_c| \geq 3$ for all $c \in C$. Let $W \coloneqq \{w \in V(G) \mid \chi(w,w) \in C\}$. Observe that $W \cap D = \emptyset$. By Lemma \ref{la:disjoint-trees}, there are connected, vertex-disjoint subgraphs $H_1,H_2,H_3 \subseteq G[W]$ such that $V(H_r) \cap V_c \neq \emptyset$ for all $r \in \{1,2,3\}$ and $c \in C$. Now let $i \in [h]$. Since $v_i \in N_G(Z)$, there is some vertex $w_i \in Z \subseteq W$ such that $v_iw_i \in E(G)$. Let $c_i \coloneqq \chi(w_i,w_i)$. Observe that $c_i \in C$. Also, $V_{c_i} \subseteq N_G(v_i)$ since $|[v_i]_\chi| = 1$ and $\chi$ is $2$-stable. This implies that $N_G(v_i) \cap V(H_r) \neq \emptyset$ for all $r \in \{1,2,3\}$, because $V(H_r) \cap V_{c_i} \neq \emptyset$. But this results in a minor isomorphic to $K_{3,h}$ with vertices $v_1,\dots,v_h$ on the right side and vertices $V(H_1),V(H_2),V(H_3)$ on the left side, a contradiction. \end{proof} Besides Lemma \ref{la:small-separator}, we also require a second tool, which is used to define the extension sets $\gamma(X_i)$ needed to ensure that the recursive algorithm makes progress. \begin{lemma} \label{la:find-small-color-class} Let $G$ be a graph that excludes $K_{3,h}$ as a minor. Also let $X \subseteq V(G)$ and define $D \coloneqq \cl_{h-1,1}^G(X)$. Let $Z$ be a connected component of $G - D$. Then $|N_G(Z)| < 3$. \end{lemma} The lemma essentially follows from \cite[Lemma 23]{Neuen20}. For the sake of completeness, and due to its simplicity, we nevertheless give a complete proof below.
\begin{proof} Let $\chi$ be a $1$-stable coloring such that $|[v]_\chi| = 1$ for all $v \in D$ and $|[w]_\chi| \geq h$ for all $w \in V(G) \setminus D$. Suppose towards a contradiction that $|N_G(Z)| \geq 3$, and pick $v_1,v_2,v_3 \in N_G(Z)$ to be distinct vertices. Let $C \coloneqq \{\chi(v) \mid v \in Z\}$, and define $H$ to be the graph with $V(H) \coloneqq C$ and \[E(H) \coloneqq \{c_1c_2 \mid \exists v_1 \in \chi^{-1}(c_1), v_2 \in \chi^{-1}(c_2) \colon v_1v_2 \in E(G)\}.\] Let $T$ be a spanning tree of $H$. Also, for each $i \in \{1,2,3\}$, fix a color $c_i \in C$ such that $N_G(v_i) \cap \chi^{-1}(c_i) \neq \emptyset$. Let $T'$ be the induced subtree obtained from $T$ by repeatedly removing all leaves distinct from $c_1,c_2,c_3$. Finally, let $T''$ be the tree obtained from $T'$ by adding three fresh vertices $v_1,v_2,v_3$ where $v_i$ is connected to $c_i$. Observe that $v_1,v_2,v_3$ are precisely the leaves of $T''$. Now, $T''$ contains a unique node $c$ of degree three (possibly $c = c_i$ for some $i \in \{1,2,3\}$). Observe that $|\chi^{-1}(c)| \geq h$. We define $C_i$ to be the set of all internal vertices which appear on the unique path from $v_i$ to $c$ in the tree $T''$. Finally, define $U_i \coloneqq \{v_i\} \cup \bigcup_{c' \in C_i} \chi^{-1}(c')$. Since $\chi$ is $1$-stable and $|[v_i]_{\chi}| = 1$, we get that $G[U_i]$ is connected for all $i \in \{1,2,3\}$. Also, $E_G(U_i,\{w\}) \neq \emptyset$ for all $w \in \chi^{-1}(c)$ and $i \in \{1,2,3\}$. But this provides a minor isomorphic to $K_{3,h}$ with vertices $U_1,U_2,U_3$ on the left side and the vertices from $\chi^{-1}(c)$ on the right side, contradicting the assumption that $G$ excludes $K_{3,h}$ as a minor. \end{proof} \section{A Decomposition Theorem} \label{sec:decomposition} In the following, we use the insights gained in the last section to prove a decomposition theorem for graphs that exclude $K_{3,h}$ as a minor.
In the remainder of this work, all tree decompositions are rooted, i.e., there is a designated root node and we generally assume all edges to be directed away from the root. \begin{theorem} \label{thm:decomposition-into-2-2-bounded-parts} Suppose $h \geq 3$. Let $G$ be a $3$-connected graph, and suppose $S \subseteq V(G)$ such that \begin{enumerate}[label = (\Alph*)] \item $G - E(S,S)$ excludes $K_{3,h}$ as a minor, \item $3 \leq |S| \leq h$, \item $G - S$ is connected, and \item $S = N_G(V(G) \setminus S)$. \end{enumerate} Then there is a (rooted) tree decomposition $(T,\beta)$ of $G$, a function $\gamma\colon V(T) \rightarrow 2^{V(G)}$, and a vertex-coloring $\lambda$ such that \begin{enumerate}[label = (\Roman*)] \item\label{item:decomposition-output-1} $|V(T)| \leq 2 \cdot |V(G)|$, \item\label{item:decomposition-output-2} the adhesion width of $(T,\beta)$ is at most $h-1$, \item\label{item:decomposition-output-3} for every $t \in V(T)$ with children $t_1,\dots,t_\ell$, one of the following options holds: \begin{enumerate}[label = (\alph*)] \item $\beta(t) \cap \beta(t_i) \neq \beta(t) \cap \beta(t_j)$ for all distinct $i,j \in [\ell]$, or \item $\beta(t) = \beta(t) \cap \beta(t_i)$ for all $i \in [\ell]$, \end{enumerate} \item\label{item:decomposition-output-4} $S \subsetneq \gamma(r)$ where $r$ denotes the root of $T$, \item\label{item:decomposition-output-5} $|\gamma(t)| \leq h^4$ for every $t \in V(T)$, \item\label{item:decomposition-output-6} $\beta(t) \cap \beta(s) \subseteq \gamma(t) \subseteq \beta(t)$ for all $t \in V(T) \setminus \{r\}$, where $s$ denotes the parent of $t$, and \item\label{item:decomposition-output-7} $\beta(t) \subseteq \cl_{2,2}^{(G,\lambda)}(\gamma(t))$ for all $t \in V(T)$. \end{enumerate} Moreover, the decomposition $(T,\beta)$, the function $\gamma$, and the coloring $\lambda$ can be computed in polynomial time, and the output is isomorphism-invariant with respect to $(G,S,h)$. 
\end{theorem} \begin{proof} We give an inductive construction for the tree decomposition $(T,\beta)$ as well as the function $\gamma$ and the coloring $\lambda$. We start by arguing how to compute the set $\gamma(r)$. \begin{claim} Let $v_1,v_2,v_3 \in S$ be three distinct vertices, and define $\chi\coloneqq \WL{1}{G,S,v_1,v_2,v_3}$. Then there exists some $v \in V(G) \setminus S$ such that $|[v]_\chi| < h$. \end{claim} \begin{claimproof} Let $H \coloneqq (G - (S \setminus \{v_1,v_2,v_3\})) - E(\{v_1,v_2,v_3\},\{v_1,v_2,v_3\})$. It is easy to see that $\chi|_{V(H)}$ is $1$-stable for the graph $H$. Observe that $H - \{v_1,v_2,v_3\} = G - S$ is connected. Suppose there is no vertex $v \in V(G) \setminus S$ such that $|[v]_\chi| < h$. Then $\chi$ is $(h-1)$-CR-stable which implies that $\cl_{h-1,1}^G(v_1,v_2,v_3) = \{v_1,v_2,v_3\}$. On the other hand, $Z \coloneqq V(H) \setminus \{v_1,v_2,v_3\}$ induces a connected component of $H - \{v_1,v_2,v_3\}$, and $N_H(Z) = \{v_1,v_2,v_3\}$ since $S = N_G(V(G) \setminus S)$. But this contradicts Lemma \ref{la:find-small-color-class}. \end{claimproof} Let $v_1,v_2,v_3 \in S$ be distinct. We define $\chi[v_1,v_2,v_3] \coloneqq \WL{1}{G,S,v_1,v_2,v_3}$. Also, let $c[v_1,v_2,v_3]$ denote the unique color such that \begin{enumerate} \item $c[v_1,v_2,v_3] \notin \{\chi[v_1,v_2,v_3](v) \mid v \in S\}$, and \item $|(\chi[v_1,v_2,v_3])^{-1}(c[v_1,v_2,v_3])| \leq h-1$ \end{enumerate} and which is minimal with respect to the linear order on the colors in the image of $\chi[v_1,v_2,v_3]$. Let $\gamma(v_1,v_2,v_3) \coloneqq (\chi[v_1,v_2,v_3])^{-1}(c[v_1,v_2,v_3])$. Observe that $\gamma(v_1,v_2,v_3)$ is defined in an isomorphism-invariant manner given $(G,S,h,v_1,v_2,v_3)$. Now, define \[\gamma(r) \coloneqq S \cup \bigcup_{v_1,v_2,v_3 \in S \text{ distinct}} \gamma(v_1,v_2,v_3).\] Clearly, $\gamma(r)$ is defined in an isomorphism-invariant manner given $(G,S,h)$. 
Moreover, \[|\gamma(r)| \leq |S| + |S|^3 \cdot (h-1) \leq |S|^3 \cdot h \leq h^4.\] Finally, define $\beta(r) \coloneqq \cl_{2,2}^G(\gamma(r))$. Let $Z_1,\dots,Z_\ell$ be the connected components of $G - \beta(r)$. Also, let $S_i \coloneqq N_G(Z_i)$ and $G_i$ be the graph obtained from $G[S_i \cup Z_i]$ by turning $S_i$ into a clique, $i \in [\ell]$. We have $|S_i| < h$ by Lemma \ref{la:small-separator}. Also, $|S_i| \geq 3$ and $G_i$ is $3$-connected since $G$ is $3$-connected. Clearly, $G_i - S_i$ is connected and $S_i = N_{G_i}(V(G_i) \setminus S_i)$. Finally, $G_i - E(S_i,S_i)$ excludes $K_{3,h}$ as a minor because $G - E(S,S)$ excludes $K_{3,h}$ as a minor. We wish to apply the induction hypothesis to the triples $(G_i,S_i,h)$. If $|V(G_i)| = |V(G)|$ then $\ell = 1$ and $S \subsetneq S_i$. In this case the algorithm still makes progress since the size of $S$ can be increased at most $h-3$ times. By the induction hypothesis, there are tree decompositions $(T_i,\beta_i)$ of $G_i$ and functions $\gamma_i\colon V(T_i) \rightarrow 2^{V(G_i)}$ satisfying Properties \ref{item:decomposition-output-1} - \ref{item:decomposition-output-7}. We define $(T,\beta)$ to be the tree decomposition where $T$ is obtained from the disjoint union of $T_1,\dots,T_\ell$ by adding a fresh root vertex $r$ which is connected to the root vertices of $T_1,\dots,T_\ell$. Also, $\beta(r)$ is defined as above and $\beta(t) \coloneqq \beta_i(t)$ for all $t \in V(T_i)$ and $i \in [\ell]$. Finally, $\gamma(r)$ is again defined as above, and $\gamma(t) \coloneqq \gamma_i(t)$ for all $t \in V(T_i)$ and $i \in [\ell]$. The algorithm clearly runs in polynomial time and the output is isomorphism-invariant (the coloring $\lambda$ is defined below). We need to verify that Properties \ref{item:decomposition-output-1} - \ref{item:decomposition-output-7} are satisfied. 
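The splitting step in the induction can be sketched concretely: compute the connected components $Z_i$ of $G - \beta(r)$, read off the separators $S_i = N_G(Z_i)$, and form the child graphs $G_i = G[S_i \cup Z_i]$ with $S_i$ turned into a clique. A simplified Python sketch over an adjacency dictionary (function and variable names are our own illustration):

```python
def split_at_bag(vertices, adj, bag):
    # Return, for each connected component Z of G - bag, the triple
    # (Z, S = N_G(Z), edge set of G[S ∪ Z] with S turned into a clique).
    seen, parts = set(), []
    for v in vertices:
        if v in bag or v in seen:
            continue
        comp, stack = {v}, [v]  # DFS inside G - bag
        seen.add(v)
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in bag and w not in seen:
                    seen.add(w)
                    comp.add(w)
                    stack.append(w)
        sep = {w for u in comp for w in adj[u] if w in bag}
        edges = {frozenset((u, w)) for u in comp for w in adj[u]
                 if w in comp | sep}
        # turn the separator into a clique
        edges |= {frozenset((a, b)) for a in sep for b in sep if a != b}
        parts.append((comp, sep, edges))
    return parts
```

On a path $1-2-3-4-5$ with bag $\{3\}$, this yields two components $\{1,2\}$ and $\{4,5\}$, each with separator $\{3\}$.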
Using the comments above and the induction hypothesis, it is easy to verify that Properties \ref{item:decomposition-output-2}, \ref{item:decomposition-output-4}, \ref{item:decomposition-output-5} and \ref{item:decomposition-output-6} are satisfied. For Property \ref{item:decomposition-output-7} it suffices to ensure that $\cl_{2,2}^{(G_i,\lambda)}(\gamma(t)) \subseteq \cl_{2,2}^{(G,\lambda)}(\gamma(t))$. Towards this end, it suffices to ensure that $\lambda(v) \neq \lambda(w)$ for all $v \in \beta(r)$ and $w \in V(G) \setminus \beta(r)$. To ensure this property holds on all levels of the tree, we can simply define $\lambda(v) \coloneqq \{\dist_T(r,t) \mid t \in V(T), v \in \beta(t)\}$. Next, we modify the tree decomposition in order to ensure Property \ref{item:decomposition-output-3}. Consider a node $t \in V(T)$ with children $t_1,\dots,t_\ell$. We say that $t_i \sim t_j$ if $\beta(t) \cap \beta(t_i) = \beta(t) \cap \beta(t_j)$. Let $A_1,\dots,A_k$ be the equivalence classes of the equivalence relation $\sim$. For every $i \in [k]$ we introduce a fresh node $s_i$. Now, every $t_j \in A_i$ becomes a child of $s_i$, and $s_i$ becomes a child of $t$. Finally, we set $\beta(s_i) = \gamma(s_i) \coloneqq \beta(t) \cap \beta(t_j)$ for some $t_j \in A_i$. Observe that after this modification, Properties \ref{item:decomposition-output-2} and \ref{item:decomposition-output-4} - \ref{item:decomposition-output-7} still hold. Finally, it remains to verify Property \ref{item:decomposition-output-1}. Before the modification described in the last paragraph, we have that $|V(T)| \leq |V(G)|$. Since the modification process at most doubles the number of nodes in $T$, the bound follows. \end{proof} \section{An FPT Isomorphism Test for Graphs of Small Genus} \label{sec:main-algorithm} Building on the decomposition theorem given in the last section, we can now prove the main result of this paper.
\begin{theorem} Let $G_1,G_2$ be two (vertex- and arc-colored) graphs that exclude $K_{3,h}$ as a minor. Then one can decide whether $G_1$ is isomorphic to $G_2$ in time $2^{{\mathcal O}(h^4 \log h)}n^{{\mathcal O}(1)}$. \end{theorem} \begin{proof} Suppose $G_i = (V(G_i),E(G_i),\chi_V^i,\chi_E^i)$ for $i \in \{1,2\}$. Using standard reduction techniques (see, e.g., \cite{HopcroftT72}) we may assume without loss of generality that $G_1$ and $G_2$ are $3$-connected. Pick an arbitrary set $S_1 \subseteq V(G_1)$ such that $|S_1| = 3$ and $G_1 - S_1$ is connected. For every $S_2 \subseteq V(G_2)$ such that $|S_2| = 3$ and $G_2 - S_2$ is connected, the algorithm tests whether there is an isomorphism $\varphi\colon G_1 \cong G_2$ such that $S_1^\varphi = S_2$. Observe that $S_i = N_{G_i}(V(G_i) \setminus S_i)$ for both $i \in \{1,2\}$ since $G_1$ and $G_2$ are $3$-connected. This implies that the triple $(G_i,S_i,h)$ satisfies the requirements of Theorem \ref{thm:decomposition-into-2-2-bounded-parts}. Let $(T_i,\beta_i)$ be the tree decomposition, $\gamma_i\colon V(T_i) \rightarrow 2^{V(G_i)}$ be the function, and $\lambda_i$ be the vertex-coloring computed by Theorem \ref{thm:decomposition-into-2-2-bounded-parts} on input $(G_i,S_i,h)$. Now, the basic idea is to compute isomorphisms between $(G_1,S_1)$ and $(G_2,S_2)$ using dynamic programming along the tree decompositions. More precisely, we aim at recursively computing the set \[\Lambda \coloneqq \Iso((G_1,\lambda_1,S_1),(G_2,\lambda_2,S_2))[S_1]\] (here, $\Iso((G_1,\lambda_1,S_1),(G_2,\lambda_2,S_2))$ denotes the set of isomorphisms $\varphi\colon G_1 \cong G_2$ which additionally respect the vertex-colorings $\lambda_i$ and satisfy $S_1^\varphi = S_2$). Throughout the recursive algorithm, we maintain the property that $|S_i| \leq h$.
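The outer loop enumerating the candidate sets $S_2$ with $|S_2| = 3$ and $G_2 - S_2$ connected is a brute-force pass over all $\binom{n}{3}$ triples, each checked with a single graph search. A sketch (the adjacency-dictionary representation and helper names are our own, not from the paper):

```python
from itertools import combinations

def candidate_triples(vertices, adj):
    # All 3-element sets S with G - S connected: the anchor sets the
    # algorithm iterates over for the second input graph.
    def connected(rest):
        rest = set(rest)
        if not rest:
            return True
        start = next(iter(rest))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in rest and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == rest
    return [set(S) for S in combinations(vertices, 3)
            if connected(set(vertices) - set(S))]
```

On a 5-cycle, removing three vertices leaves two vertices, which form a connected remainder exactly when they are adjacent; this happens for 5 of the 10 triples.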
Also, we may assume without loss of generality that $S_i$ is $\lambda_i$-invariant (otherwise, we replace $\lambda_i$ by $\lambda_i'$ defined via $\lambda_i'(v) \coloneqq (1,\lambda_i(v))$ for all $v \in S_i$, and $\lambda_i'(v) \coloneqq (0,\lambda_i(v))$ for all $v \in V(G_i) \setminus S_i$). Let $r_i$ denote the root node of $T_i$. Let $\ell$ denote the number of children of $r_i$ in the tree $T_i$ (if the number of children is not the same for both trees, the algorithm concludes that $\Iso((G_1,\lambda_1,S_1),(G_2,\lambda_2,S_2)) = \emptyset$). Let $t_1^i,\dots,t_\ell^i$ be the children of $r_i$ in $T_i$, $i \in \{1,2\}$. For $i \in \{1,2\}$ and $j \in [\ell]$ let $V_j^i$ denote the set of vertices appearing in bags below (and including) $t_j^i$. Also let $S_j^i \coloneqq \beta_i(r_i) \cap \beta_i(t_j^i)$ be the adhesion set to the $j$-th child, and define $G_{j}^i \coloneqq G_i[V_j^i]$. Finally, let $T_j^i$ denote the subtree of $T_i$ rooted at node $t_j^i$, and $\beta_j^i \coloneqq \beta_i|_{V(T_j^i)}$, $\gamma_j^i \coloneqq \gamma_i|_{V(T_j^i)}$ and $\lambda_j^i \coloneqq \lambda_i|_{V_j^i}$. For every $i,i' \in \{1,2\}$, and every $j,j' \in [\ell]$, the algorithm recursively computes the set \[\Lambda_{j,j'}^{i,i'} \coloneqq \Iso((G_j^i,\lambda_j^i,S_j^i),(G_{j'}^{i'},\lambda_{j'}^{i'},S_{j'}^{i'}))[S_j^i].\] We argue how to compute the set $\Lambda$. Building on Theorem \ref{thm:decomposition-into-2-2-bounded-parts}, Item \ref{item:decomposition-output-3}, we may assume that \begin{enumerate}[label = (\alph*)] \item\label{item:option-all-adhesion-sets-distinct} $S_j^i \neq S_{j'}^i$ for all distinct $j,j' \in [\ell]$ and $i \in \{1,2\}$, or \item\label{item:option-all-adhesion-sets-equal} $\beta_i(r_i) = S_j^i$ for all $j \in [\ell]$ and $i \in \{1,2\}$ \end{enumerate} (if $r_1$ and $r_2$ do not satisfy the same option, then $\Iso((G_1,\lambda_1,S_1),(G_2,\lambda_2,S_2)) = \emptyset$). We first cover Option \ref{item:option-all-adhesion-sets-equal}.
In this case $|\beta_i(r_i)| = |S_j^i| \leq h-1$ by Theorem \ref{thm:decomposition-into-2-2-bounded-parts}, Item \ref{item:decomposition-output-2}. The algorithm iterates over all bijections $\sigma \colon \beta_1(r_1) \rightarrow \beta_2(r_2)$. Now, \[\sigma \in \Iso((G_1,\lambda_1,S_1),(G_2,\lambda_2,S_2))[\beta_1(r_1)] \;\;\;\Leftrightarrow\;\;\; \exists \rho \in \Sym([\ell])\; \forall j \in [\ell]\colon \sigma \in \Lambda_{j,\rho(j)}^{1,2}.\] To test whether $\sigma$ satisfies the right-hand side condition, the algorithm constructs an auxiliary graph $H_\sigma$ with vertex set $V(H_\sigma) \coloneqq \{1,2\} \times [\ell]$ and edge set \[E(H_\sigma) \coloneqq \{(1,j)(2,j') \mid \sigma \in \Lambda_{j,j'}^{1,2}\}.\] Observe that $H_\sigma$ is bipartite with bipartition $(\{1\} \times [\ell], \{2\} \times [\ell])$. Now, \[\sigma \in \Iso((G_1,\lambda_1,S_1),(G_2,\lambda_2,S_2))[\beta_1(r_1)] \;\;\;\Leftrightarrow\;\;\; H_\sigma \text{ has a perfect matching}.\] It is well-known that the latter can be checked in polynomial time. This completes the description of the algorithm in case Option \ref{item:option-all-adhesion-sets-equal} is satisfied. Next, suppose Option \ref{item:option-all-adhesion-sets-distinct} is satisfied. Here, the central idea is to construct auxiliary vertex- and arc-colored graphs $H_i = (V(H_i),E(H_i),\mu_V^i,\mu_E^i)$ and sets $A_i \subseteq V(H_i)$ such that \begin{enumerate} \item $\beta_i(r_i) \subseteq A_i$ and $A_i \subseteq \cl_{2,2}^{H_i}(\gamma_i(r_i))$, and \item $\Iso(H_1,H_2)[S_1] = \Iso(H_1[A_1],H_2[A_2])[S_1] = \Lambda$.
\end{enumerate} Towards this end, we set \[V(H_i) \coloneqq V(G_i) \uplus \{(S_j^i,\gamma) \mid j \in [\ell], \gamma \in \Lambda_{j,j}^{i,i}\}\] and \[E(H_i) \coloneqq E(G_i) \cup \{(S_j^i,\gamma)v \mid j \in [\ell], \gamma \in \Lambda_{j,j}^{i,i}, v \in S_j^i\}.\] Also, we set \[A_i \coloneqq \beta(r_i) \cup \{(S_j^i,\gamma) \mid j \in [\ell], \gamma \in \Lambda_{j,j}^{i,i}\}.\] The main idea is to use the additional vertices attached to the set $S_j^i$ to encode the isomorphism type of the graph $(G_j^i,\lambda_j^i,S_j^i)$. This information is encoded by the vertex- and arc-coloring building on sets $\Lambda_{j,j'}^{i,i'}$ already computed above. Let ${\mathcal S} \coloneqq \{S_j^i \mid i \in \{1,2\}, j \in [\ell]\}$, and define $S_j^i \sim S_{j'}^{i'}$ if $\Lambda_{j,j'}^{i,i'} \neq \emptyset$. Observe that $\sim$ is an equivalence relation. Let $\{{\mathcal P}_1,\dots,{\mathcal P}_k\}$ be the partition of ${\mathcal S}$ into the equivalence classes. We set \[\mu_V^i(v) \coloneqq (0,\chi_V^i(v),\lambda_i(v))\] for all $v \in S_i$, \[\mu_V^i(v) \coloneqq (1,\chi_V^i(v),\lambda_i(v))\] for all $v \in \gamma_i(r_i) \setminus S_i$, \[\mu_V^i(v) \coloneqq (2,\chi_V^i(v),\lambda_i(v))\] for all $v \in \beta_i(r_i) \setminus \gamma_i(r_i)$, \[\mu_V^i(v) \coloneqq (3,\chi_V^i(v),\lambda_i(v))\] for all $v \in V(G_i) \setminus \beta_i(r_i)$, and \[\mu_V^i(S_j^i,\gamma) \coloneqq (4,q,q)\] for all $q \in [k]$, $S_j^i \in {\mathcal P}_q$, and $\gamma \in \Lambda_{j,j}^{i,i}$. For every $q \in [k]$ fix some $i(q) \in \{1,2\}$ and $j(q) \in [\ell]$ such that $S_{j(q)}^{i(q)} \in {\mathcal P}_q$ (i.e., for each equivalence class, the algorithm fixes one representative). Also, for every $q \in [k]$ and $S_j^i \in {\mathcal P}_q$, fix a bijection $\sigma_j^i \in \Lambda_{j(q),j}^{i(q),i}$ such that $\sigma_{j(q)}^{i(q)}$ is the identity mapping. Finally, for $q \in [k]$, fix a numbering $S_{j(q)}^{i(q)} = \{u_1^q,\dots,u_{s(q)}^q\}$. 
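The perfect-matching test on $H_\sigma$ used for Option \ref{item:option-all-adhesion-sets-equal} above is a standard routine, not specific to this paper; for completeness, a compact sketch using Kuhn's augmenting-path algorithm (function names are hypothetical):

```python
def has_perfect_matching(left, right_of):
    # Kuhn's augmenting-path algorithm on a bipartite graph given as
    # right_of[l] = set of right vertices adjacent to left vertex l.
    match = {}  # right vertex -> matched left vertex

    def augment(l, visited):
        for r in right_of[l]:
            if r in visited:
                continue
            visited.add(r)
            # r is free, or its partner can be rematched elsewhere
            if r not in match or augment(match[r], visited):
                match[r] = l
                return True
        return False

    return all(augment(l, set()) for l in left)
```

Each augmentation takes time linear in the number of edges, so the overall test runs in polynomial time, as used in the proof.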
With this, we are ready to define the arc-coloring $\mu_E^i$. First, we set \[\mu_E^i(v,w) \coloneqq (0,\chi_E^i(v,w))\] for all $vw \in E(G_i)$. Next, consider an edge $(S_j^i,\gamma)v$ where $j \in [\ell]$, $\gamma \in \Lambda_{j,j}^{i,i}$, and $v \in S_j^i$. Suppose $S_j^i \in {\mathcal P}_q$. We set \[\mu_E^i(v,(S_j^i,\gamma)) = \mu_E^i((S_j^i,\gamma),v) \coloneqq (1,c)\] for the unique $c \in [s(q)]$ such that \[v = (u_c^q)^{\sigma_j^i\gamma}.\] This completes the description of the graphs $H_i$ and the sets $A_i$, $i \in \{1,2\}$. Next, we verify that they indeed have the desired properties. \begin{claim} \label{claim:closure-contains-root-bag} $\beta_i(r_i) \subseteq A_i$ and $A_i \subseteq \cl_{2,2}^{H_i}(\gamma_i(r_i))$. \end{claim} \begin{claimproof} The first part holds by definition of the set $A_i$. So let us verify the second part. We have that $V(G_i) \subseteq V(H_i)$ by definition. Also, $V(G_i)$ is $\mu_V^i$-invariant and $(\mu_V^i)[V(G_i)] \preceq \lambda_i$. This implies that \[\beta_i(r_i) \subseteq \cl_{2,2}^{(G_i,\lambda_i)}(\gamma(r_i)) \subseteq \cl_{2,2}^{H_i}(\gamma_i(r_i))\] using Theorem \ref{thm:decomposition-into-2-2-bounded-parts}, Item \ref{item:decomposition-output-7}. By definition of the set $A_i$, it remains to prove that \[\{(S_j^i,\gamma) \mid j \in [\ell], \gamma \in \Lambda_{j,j}^{i,i}\} \subseteq \cl_{2,2}^{H_i}(\gamma_i(r_i)).\] Actually, it suffices to prove that \[\{(S_j^i,\gamma) \mid j \in [\ell], \gamma \in \Lambda_{j,j}^{i,i}\} \subseteq \cl_{2,2}^{H_i}(\beta_i(r_i)).\] Let $\chi_i^*$ denote the $(2,2)$-WL-stable coloring after individualizing all vertices from $\beta_i(r_i)$. We have that $N_{H_i}(S_j^i,\gamma) = S_j^i \subseteq \beta_i(r_i)$ for all $j \in [\ell]$ and $\gamma \in \Lambda_{j,j}^{i,i}$. Recall that $S_j^i \neq S_{j'}^i$ for all distinct $j,j' \in [\ell]$. 
This implies that $\chi_i^*(S_j^i,\gamma) \neq \chi_i^*(S_{j'}^i,\gamma')$ for all distinct $j,j' \in [\ell]$, $\gamma \in \Lambda_{j,j}^{i,i}$ and $\gamma' \in \Lambda_{j',j'}^{i,i}$. So pick $\gamma,\gamma' \in \Lambda_{j,j}^{i,i}$ to be distinct. We need to show that $\chi_i^*(S_j^i,\gamma) \neq \chi_i^*(S_j^i,\gamma')$. Here, we use the arc-coloring $\mu_E^i$. Indeed, by definition, all edges incident to the vertex $(S_j^i,\gamma)$ receive pairwise different colors. Now pick $w \in S_j^i$ such that $w^{\gamma} \neq w^{\gamma'}$ and suppose that $w = (u_c^q)^{\sigma_j^i}$ where $q \in [k]$ is the unique number for which $S_j^i \in {\mathcal P}_q$. Then $(S_j^i,\gamma)$ and $(S_j^i,\gamma')$ reach different neighbors via $(1,c)$-colored edges. Since all neighbors of $(S_j^i,\gamma)$ and $(S_j^i,\gamma')$ are already individualized in $\chi_i^*$, we conclude that $\chi_i^*(S_j^i,\gamma) \neq \chi_i^*(S_j^i,\gamma')$. This implies that \[\{(S_j^i,\gamma) \mid j \in [\ell], \gamma \in \Lambda_{j,j}^{i,i}\} \subseteq \cl_{2,2}^{H_i}(\beta_i(r_i))\] as desired. \end{claimproof} \begin{claim} \label{claim:equal-isomorphism-sets} $\Iso(H_1,H_2)[S_1] = \Iso(H_1[A_1],H_2[A_2])[S_1] = \Iso((G_1,S_1),(G_2,S_2))[S_1]$. \end{claim} \begin{claimproof} We show the claim by proving the following three inclusions. \begin{description} \item[{$\Iso(H_1,H_2)[S_1] \subseteq \Iso(H_1[A_1],H_2[A_2])[S_1]$:}] Observe that $A_i$ is $\mu_V^i$-invariant. Hence, this inclusion trivially holds. \item[{$\Iso(H_1[A_1],H_2[A_2])[S_1] \subseteq \Iso((G_1,S_1),(G_2,S_2))[S_1]$:}] Let $\psi \in \Iso(H_1[A_1],H_2[A_2])$. First observe that, by the vertex-coloring of $H_i$, we have that $S_1^\psi = S_2$. 
From the structure of $H_i[A_i]$ it follows that there is some permutation $\rho \in \Sym([\ell])$ such that, for every $j \in [\ell]$ and $\gamma \in \Lambda_{j,j}^{1,1}$ there is some $\gamma' \in \Lambda_{\rho(j),\rho(j)}^{2,2}$ such that \[\psi(S_j^1,\gamma) = (S_{\rho(j)}^2,\gamma').\] Moreover, $S_j^1 \sim S_{\rho(j)}^2$ for all $j \in [\ell]$. Now, fix some $j \in [\ell]$. We argue that $\psi[S_j^1] \in \Lambda_{j,\rho(j)}^{1,2}$. Towards this end, pick $q \in [k]$ such that $S_j^1 \in {\mathcal P}_q$. Observe that also $S_{\rho(j)}^2 \in {\mathcal P}_q$. Let $\gamma \in \Lambda_{j,j}^{1,1}$ be the identity mapping, and pick $\gamma' \in \Lambda_{\rho(j),\rho(j)}^{2,2}$ such that $\psi(S_j^1,\gamma) = (S_{\rho(j)}^2,\gamma')$. Let $v_1 \in S_j^1$ and pick $c \in [s(q)]$ such that $\mu_E^1(v_1,(S_j^1,\gamma)) = (1,c)$. Also, let $v_2 \coloneqq \psi(v_1)$. Since $\psi$ is an isomorphism, we conclude that $\mu_E^2(v_2,(S_{\rho(j)}^2,\gamma')) = (1,c)$. By definition of the arc-colorings, this means that \[v_1 = (u_c^q)^{\sigma_j^1\gamma} = (u_c^q)^{\sigma_j^1}\] and \[v_2 = (u_c^q)^{\sigma_{\rho(j)}^2\gamma'}.\] Together, this means that \[v_2 = v_1^{(\sigma_j^1)^{-1}\sigma_{\rho(j)}^2\gamma'}.\] In particular, $\psi[S_j^1] = (\sigma_j^1)^{-1}\sigma_{\rho(j)}^2\gamma' \in \Lambda_{j,\rho(j)}^{1,2}$. So let \[\varphi_{j} \in \Iso((G_j^1,S_j^1),(G_{\rho(j)}^{2},S_{\rho(j)}^{2}))\] such that $\varphi_j[S_j^1] = \psi[S_j^1]$ for all $j \in [\ell]$. Finally, define $\varphi\colon V(G_1) \rightarrow V(G_2)$ via $\varphi(v) \coloneqq \psi(v)$ for all $v \in \beta_1(r_1)$, and $\varphi(v) \coloneqq \varphi_j(v)$ for all $v \in V_j^1$. It is easy to see that $\varphi \in \Iso((G_1,S_1),(G_2,S_2))$ and $\varphi[S_1] = \psi[S_1]$. \item[{$\Iso((G_1,S_1),(G_2,S_2))[S_1] \subseteq \Iso(H_1,H_2)[S_1]$:}] Let $\varphi \in \Iso((G_1,S_1),(G_2,S_2))$ be an isomorphism.
We have that $(T_i,\beta_i)$, the function $\gamma_i$, and the coloring $\lambda_i$ are computed in an isomorphism-invariant fashion. Hence, $\varphi\colon H_1[V(G_1)] \cong H_2[V(G_2)]$. Also, there is a permutation $\rho \in \Sym([\ell])$ such that $(S_j^1)^\varphi = S_{\rho(j)}^2$. We now define a bijection $\psi\colon V(H_1) \rightarrow V(H_2)$ as follows. First of all, $\psi(v) \coloneqq \varphi(v)$ for all $v \in V(G_1) \subseteq V(H_1)$. Now, let $j \in [\ell]$ and $\gamma \in \Lambda_{j,j}^{1,1}$. Observe that $\varphi[S_j^1] \in \Lambda_{j,\rho(j)}^{1,2}$. Pick $q \in [k]$ such that $S_j^1,S_{\rho(j)}^2 \in {\mathcal P}_q$. We have that \[\varphi[S_j^1] = \gamma^{-1}(\sigma_j^1)^{-1}\sigma_{\rho(j)}^2\gamma'\] for some $\gamma' \in \Lambda_{\rho(j),\rho(j)}^{2,2}$. We set \[\psi(S_j^1,\gamma) \coloneqq (S_{\rho(j)}^2,\gamma').\] By definition, $\mu_V^1(S_j^1,\gamma) = (4,q,q) = \mu_V^2(S_{\rho(j)}^2,\gamma')$. It only remains to verify that, for all $v \in S_j^1$, we have that \[\mu_E^1((S_j^1,\gamma),v) = \mu_E^2((S_{\rho(j)}^2,\gamma'),\varphi(v)).\] Suppose that $\mu_E^1((S_j^1,\gamma),v) = (1,c)$. Then $v = (u_c^q)^{\sigma_j^1\gamma}$. On the other hand, \[(u_c^q)^{\sigma_{\rho(j)}^2\gamma'} = v^{\gamma^{-1}(\sigma_j^1)^{-1}\sigma_{\rho(j)}^2\gamma'} = v^{\varphi}.\] This implies that $\mu_E^2((S_{\rho(j)}^2,\gamma'),\varphi(v)) = (1,c)$. Overall, we get that $\psi \colon H_1 \cong H_2$. \end{description} \end{claimproof} Recall that the algorithm aims at computing the set $\Lambda$. Building on the previous claim, we can simply compute $\Iso(H_1[A_1],H_2[A_2])[S_1]$. Towards this end, the algorithm iterates through all bijections $\tau\colon\gamma_1(r_1)\rightarrow\gamma_2(r_2)$, and wishes to test whether there is an isomorphism $\varphi \in \Iso(H_1[A_1],H_2[A_2])$ such that $\varphi[\gamma_1(r_1)] = \tau$. Note that, since $\gamma_i(r_i)$ is $\mu_V^i$-invariant, it now suffices to solve this latter problem.
So fix a bijection $\tau\colon\gamma_1(r_1)\rightarrow\gamma_2(r_2)$ (if $|\gamma_1(r_1)| \neq |\gamma_2(r_2)|$ then the algorithm returns $\Lambda = \emptyset$). Let $\mu_1^*(v) \coloneqq (1,v)$ for all $v \in \gamma_1(r_1)$, $\mu_1^*(v) \coloneqq (0,\mu_V^1(v))$ for all $v \in V(H_1) \setminus \gamma_1(r_1)$, and $\mu_2^*(v) \coloneqq (1,\tau^{-1}(v))$ for all $v \in \gamma_2(r_2)$ and $\mu_2^*(v) \coloneqq (0,\mu_V^2(v))$ for all $v \in V(H_2) \setminus \gamma_2(r_2)$. Intuitively speaking, $\mu_1^*$ and $\mu_2^*$ are obtained from $\mu_V^1$ and $\mu_V^2$ by individualizing all vertices from $\gamma_1(r_1)$ and $\gamma_2(r_2)$ according to the bijection $\tau$. Now, we can apply Theorem \ref{thm:bounding-group-t-k-wl} on input graphs $H_1^* = (H_1,\mu_1^*)$ and $H_2^* = (H_2,\mu_2^*)$, and parameters $t = k \coloneqq 2$. Building on Claim \ref{claim:closure-contains-root-bag}, we obtain a $\mgamma_2$-group $\Gamma \leq \Sym(A_1)$ and a bijection $\theta\colon A_1 \rightarrow A_2$ such that $\Iso(H_1^*[A_1],H_2^*[A_2]) \subseteq \Gamma\theta$. Now, we can determine whether $H_1^*[A_1] \cong H_2^*[A_2]$ using Theorem \ref{thm:hypergraph-isomorphism-gamma-d}. Using Claim \ref{claim:equal-isomorphism-sets}, this provides the answer to whether $\tau[S_1] \in \Lambda$ (recall that $S_1 \subseteq \gamma_1(r_1)$ by Theorem \ref{thm:decomposition-into-2-2-bounded-parts}, Items \ref{item:decomposition-output-4} and \ref{item:decomposition-output-6}). Overall, this completes the description of the algorithm. It only remains to analyse its running time. Let $n$ denote the number of vertices of $G_1$ and $G_2$. The algorithm iterates over at most $n^3$ choices for the initial set $S_2$, and computes the decompositions $(T_i,\beta_i)$, the functions $\gamma_i$, and the colorings $\lambda_i$ in polynomial time.
For the dynamic programming tables, the algorithm needs to compute ${\mathcal O}(n^2)$ many $\Lambda$-sets (using Theorem \ref{thm:decomposition-into-2-2-bounded-parts}, Item \ref{item:decomposition-output-1}), each of which contains at most $h! = 2^{{\mathcal O}(h \log h)}$ many elements by Theorem \ref{thm:decomposition-into-2-2-bounded-parts}, Item \ref{item:decomposition-output-2}. Hence, it remains to analyse the time required to compute the set $\Lambda$ given the $\Lambda_{j,j'}^{i,i'}$-sets. For Option \ref{item:option-all-adhesion-sets-equal}, this can clearly be done in time $2^{{\mathcal O}(h \log h)}n^{{\mathcal O}(1)}$. So consider Option \ref{item:option-all-adhesion-sets-distinct}. The graph $H_i$ can clearly be computed in time polynomial in its size. We have that $|V(H_i)| = 2^{{\mathcal O}(h \log h)}n$. Afterwards, the algorithm iterates over $|\gamma_1(r_1)|!$ many bijections $\tau$. By Theorem \ref{thm:decomposition-into-2-2-bounded-parts}, Item \ref{item:decomposition-output-5}, we have that $|\gamma_1(r_1)|! = 2^{{\mathcal O}(h^4 \log h)}$. For each bijection, the algorithm then requires polynomial computation time by Theorems \ref{thm:bounding-group-t-k-wl} and \ref{thm:hypergraph-isomorphism-gamma-d}. Overall, this proves the bound on the running time. \end{proof} \begin{remark} The algorithm from the last theorem can be extended in two directions. First, if one of the input graphs does not exclude $K_{3,h}$ as a minor, it can be modified to either correctly conclude that $G_1$ has a minor isomorphic to $K_{3,h}$, or to correctly decide whether $G_1$ is isomorphic to $G_2$. Indeed, the only part of the algorithm that exploits that the input graphs do not have a minor isomorphic to $K_{3,h}$ is the computation of the tree decompositions $(T_i,\beta_i)$ from Theorem \ref{thm:decomposition-into-2-2-bounded-parts}. In turn, this theorem only exploits forbidden minors via Lemmas \ref{la:small-separator} and \ref{la:find-small-color-class}.
An algorithm can easily detect if one of the implications of those two statements is violated, in which case it can infer the existence of a minor $K_{3,h}$. Secondly, using standard reduction techniques (see, e.g., \cite{Mathon79}), one can also compute a representation of the set of all isomorphisms $\Iso(G_1,G_2)$ in the same time. \end{remark} Since every graph $G$ of Euler genus $g$ excludes $K_{3,4g+3}$ as a minor \cite{Ringel65}, we obtain the following corollary. \begin{corollary} Let $G_1,G_2$ be two (vertex- and arc-colored) graphs of Euler genus at most $g$. Then one can decide whether $G_1$ is isomorphic to $G_2$ in time $2^{{\mathcal O}(g^4 \log g)}n^{{\mathcal O}(1)}$. \end{corollary} \section{Conclusion} We presented an isomorphism test for graphs excluding $K_{3,h}$ as a minor running in time $2^{{\mathcal O}(h^4 \log h)}n^{{\mathcal O}(1)}$. For this, we provided a polynomial-time isomorphism algorithm for $(t,k)$-WL-bounded graphs and argued that graphs excluding $K_{3,h}$ as a minor can be decomposed into parts that are $(2,2)$-WL-bounded after individualizing a small number of vertices. Still, several questions remain open. Probably one of the most important questions in the area is whether isomorphism testing for graphs excluding $K_h$ as a minor is fixed-parameter tractable with parameter $h$. As graphs of bounded genus form an important subclass of graphs excluding $K_h$ as a minor, the techniques developed in this paper might also prove helpful in resolving this question. As an intermediate step, one can also ask for an isomorphism test for graphs excluding $K_{\ell,h}$ as a minor running in time $f(h,\ell)n^{g(\ell)}$ for some functions $f,g$. Observe that this paper provides such an algorithm for $\ell = 3$. Indeed, combining ideas from \cite{GroheNW20,Neuen21} with the approach taken in this paper, it seems the only hurdle towards such an algorithm is a generalization of Lemma \ref{la:disjoint-trees}. 
Given a connected graph $G$ for which $|V_c| \geq \ell$ for all $c \in C_V(G,\WL{2}{G})$, is it always possible to find vertex-disjoint, connected subgraphs $H_1,\dots,H_\ell \subseteq G$ such that $V(H_r) \cap V_c \neq \emptyset$ for all $r \in [\ell]$ and $c \in C_V(G,\WL{2}{G})$? As another intermediate problem, one can also consider the class ${\mathcal G}_h$ of all graphs $G$ for which there is a set $X \subseteq V(G)$ of size $|X| \leq h$ such that $G - X$ is planar. Is isomorphism testing fixed-parameter tractable on ${\mathcal G}_h$ parameterized by $h$?
\section*{Introduction} P.~Berglund and T.~H\"ubsch \cite{BH1} proposed a method to construct some mirror symmetric pairs of manifolds. Their construction involves a polynomial $f$ of a special form, a so-called {\em invertible} one, and its {\em Berglund--H\"ubsch transpose} $\widetilde f$. In \cite{BH1} these polynomials appeared as potentials of Landau--Ginzburg models. This construction was generalized in \cite{BH2} to orbifold Landau--Ginzburg models described by pairs $(f, G)$, where $f$ is an invertible polynomial and $G$ is a (finite) abelian group of symmetries of $f$. For a pair $(f, G)$ one defines the dual pair $(\widetilde{f}, \widetilde{G})$. In \cite{BH2, KY}, some symmetries between invariants of the pairs $(f, G)$ and $(\widetilde{f}, \widetilde{G})$ corresponding to the orbifolds defined by the equations $f=0$ and $\widetilde{f}=0$ in weighted projective spaces were described. Some duality (symmetry) properties of the singularities defined by $f$ and $\widetilde f$ were observed in \cite{MMJ, BLMS, ET2, Taka}. In particular, in \cite{MMJ} it was shown that the reduced orbifold Euler characteristics of the Milnor fibres of $f$ and $\widetilde f$ with the actions of the groups $G$ and $\widetilde G$ respectively coincide up to sign. Here we consider the (reduced) orbifold zeta function defined in \cite{ET2}. One can say that it collects information about the eigenvalues of monodromy operators modified by so-called age (or fermion) shifts. We show that the (reduced) orbifold zeta functions of Berglund--H\"ubsch--Henningson dual pairs $(f, G)$ and $(\widetilde{f}, \widetilde{G})$ either coincide or are inverse to each other depending on the number $n$ of variables. This is a refinement of the above mentioned result of \cite{MMJ} which means that the degrees of these zeta functions coincide up to sign.
\section{Invertible polynomials}\label{Invertible} A quasihomogeneous polynomial $f$ in $n$ variables is called {\em invertible} (see \cite{Kreuzer}) if it contains $n$ monomials, i.e.\ it is of the form \begin{equation}\label{inv} f(x_1, \ldots, x_n)=\sum\limits_{i=1}^n a_i \prod\limits_{j=1}^n x_j^{E_{ij}} \end{equation} for $a_i\in{\mathbb C}^\ast={\mathbb C}\setminus\{0\}$, and the matrix $E=(E_{ij})$ (with non-negative integer entries) is non-degenerate: $\det E\ne 0$. Without loss of generality we may assume that $a_i=1$ for $i=1, \ldots, n$ and that $\det E>0$. The {\em Berglund-H\"ubsch transpose} $\widetilde{f}$ of the invertible polynomial (\ref{inv}) is $$ \widetilde{f}(x_1, \ldots, x_n)=\sum\limits_{i=1}^n a_i \prod\limits_{j=1}^n x_j^{E_{ji}}\,, $$ i.e.\ it is defined by the transpose $E^T$ of the matrix $E$. The (diagonal) {\em symmetry group} of the invertible polynomial $f$ is the group $G_f$ of diagonal linear transformations of ${\mathbb C}^n$ preserving $f$: $$ G_f=\{(\lambda_1, \ldots, \lambda_n)\in ({\mathbb C}^*)^n: f(\lambda_1 x_1, \ldots, \lambda_n x_n)= f(x_1, \ldots, x_n)\}\,. $$ This group is finite and its order $\vert G_f\vert$ is equal to $d=\det E$ \cite{Kreuzer,ET2}. The polynomial $f$ is quasihomogeneous with respect to the rational weights $q_1$, \dots, $q_n$ defined by the equation $$ E(q_1, \ldots, q_n)^T=(1, \ldots, 1)^T\,, $$ i.e.\ $$ f(\exp(2\pi i q_1\tau) x_1, \ldots, \exp(2\pi i q_n\tau) x_n)= \exp(2\pi i\tau) f(x_1, \ldots, x_n)\,. $$ The {\em Milnor fibre} of the polynomial $f$ is the manifold $$ V_f=\{(x_1, \ldots, x_n)\in{\mathbb C}^n: f(x_1, \ldots, x_n)=1\}\,. $$ The {\em monodromy transformation} (see below) is induced by the element $$ g_0=(\exp(2\pi i q_1), \ldots, \exp(2\pi i q_n))\in G_f\,. $$ (In \cite{Krawitz} the element $g_0$ is called the ``exponential grading operator''.) For a finite abelian group $G$, let $G^*={\rm Hom\,}(G,{\mathbb C}^*)$ be its group of characters. 
(The groups $G$ and $G^*$ are isomorphic, but not in a canonical way.) One can show that the symmetry group $G_{\widetilde{f}}$ of the Berglund-H\"ubsch transpose $\widetilde{f}$ of an invertible polynomial $f$ is canonically isomorphic to $G_f^*$ (see, e.g., \cite{BLMS}). The duality between $G_f$ and $G_{\widetilde{f}}$ is defined by the pairing $$ \langle\underline{\lambda}, \underline{\mu}\rangle_E=\exp(2\pi i(\underline{\alpha}, \underline{\beta})_E)\,, $$ where $\underline{\lambda} = (\exp(2\pi i\,\alpha_1), \ldots, \exp(2\pi i\,\alpha_n))\in G_{\widetilde{f}}$, $\underline{\mu} = (\exp(2\pi i\,\beta_1), \ldots, \exp(2\pi i\,\beta_n))\in G_f$, $\underline{\alpha}=(\alpha_1, \ldots, \alpha_n)$, $\underline{\beta}=(\beta_1, \ldots, \beta_n)$, $$ (\underline{\alpha}, \underline{\beta})_E:=(\alpha_1, \ldots, \alpha_n)E(\beta_1, \ldots, \beta_n)^T $$ (see \cite{BLMS}). \begin{definition} (\cite{BH2}) For a subgroup $H\subset G_f$ its {\em dual} $\widetilde H\subset G_{\widetilde{f}}=G_f^*$ is the kernel of the natural map $i^*:G_f^*\to H^*$ induced by the inclusion $i:H\hookrightarrow G_f$. \end{definition} One can see that $\vert H\vert\cdot\vert\widetilde H\vert=\vert G_f\vert=\vert G_{\widetilde{f}}\vert$. \begin{lemma}\label{SL} Let $G_{f,0}=\langle g_0\rangle$ be the subgroup of $G_f$ generated by the monodromy transformation. One has $\widetilde{G_{f,0}}= G_{\widetilde{f}} \cap {\rm SL\,}(n,{\mathbb C})$. \end{lemma} \begin{proof} For $\underline{\lambda} = (\exp(2\pi i\,\alpha_1), \ldots, \exp(2\pi i\,\alpha_n))\in G_{\widetilde{f}}$, $\langle\underline{\lambda}, g_0\rangle_E=1$ if and only if $$ (\alpha_1, \ldots, \alpha_n)E(q_1, \ldots, q_n)^T\in {\mathbb Z}\,. $$ One has $E(q_1, \ldots, q_n)^T=(1, \ldots, 1)^T$. Therefore $(\alpha_1, \ldots, \alpha_n)(1, \ldots, 1)^T\in {\mathbb Z}$, i.e.\ $\sum\limits_i \alpha_i\in{\mathbb Z}$, $\prod\limits_i \lambda_i=1$. This means that $\underline{\lambda}\in {\rm SL\,}(n,{\mathbb C})$. 
\end{proof} \section{Orbifold zeta function}\label{sect-orbifold-zeta} The {\em zeta function} $\zeta_h(t)$ of a (proper, continuous) transformation $h:X\to X$ of a topological space $X$ is the rational function defined by \begin{equation} \label{Defzeta} \zeta_h(t)=\prod\limits_{q\ge 0} \left(\det({\rm id}-t\cdot h^{*}{\rm \raisebox{-0.5ex}{$\vert$}}{}_{H^q_c(X;{\mathbb R})})\right)^{(-1)^q}\,, \end{equation} where $H^q_c(X;{\mathbb R})$ denotes the cohomology with compact support. The degree of the zeta function $\zeta_h(t)$, i.e.\ the degree of the numerator minus the degree of the denominator, is equal to the Euler characteristic $\chi(X)$ of the space $X$ (defined via cohomology with compact support). \begin{remark}\label{free} If a transformation $h:X\to X$ defines on $X$ a free action of the cyclic group of order $m$, (i.e.\ if $h^m(x)=x$ for $x\in X$, $h^k(x)\ne x$ for $0<k<m$, $x\in X$), then $\zeta_h(t)=(1-t^m)^{\chi(X)/m}$. \end{remark} The {\em monodromy zeta function}, i.e.\ the zeta function of a monodromy transformation is of the form $\prod\limits_{m\ge1}(1-t^m)^{s_m}$, where $s_m$ are integers such that only finitely many of them are different from zero (see, e.g., \cite{AGV}). In particular, all roots and/or poles of the monodromy zeta function are roots of unity. The {\em orbifold} (monodromy) {\em zeta function} was essentially defined in \cite{ET2}. Let $G$ be a finite group acting on the space ${\mathbb C}^n$ by a representation and let $f:({\mathbb C}^n,0)\to({\mathbb C},0)$ be a $G$-invariant germ of a holomorphic function. The Milnor fibre $V_f$ of the germ $f$ is the manifold $\{f=\varepsilon\}\cap B_\delta^{2n}$, where $B_\delta^{2n}$ is the ball of radius $\delta$ centred at the origin in ${\mathbb C}^n$, $0<\vert\varepsilon\vert\ll\delta$, $\delta$ is small enough. One may assume the monodromy transformation $h_f$ of the germ $f$ \cite{AGV} to be $G$-invariant. 
For an element $g\in G$, its {\em age} \cite{Ito-Reid} (or fermion shift number: \cite{Zaslow}) is defined by ${\rm age\,}(g):=\sum\limits_{i=1}^n\alpha_i$, where in a certain basis in ${\mathbb C}^n$ one has $g={\rm diag\,}(\exp(2\pi i\,\alpha_1), \ldots, \exp(2\pi i\,\alpha_n))$ with $0\le\alpha_i < 1$. \begin{remark} The map $\exp(2\pi i\,{\rm age\,}(\cdot)) : G \to {\mathbb C}^\ast$ is a group homomorphism. If $f$ is an invertible polynomial and $G$ is the group $G_f$ of its symmetries, then it is an element of $G_f^\ast=G_{\widetilde{f}}$. \end{remark} For a rational function $\varphi(t)$ of the form $\prod\limits_i(1-\alpha_i t)^{r_i}$ with only finitely many of the exponents $r_i\in{\mathbb Z}$ different from zero, its {\em $g$-age shift} is defined by $$ \left(\varphi(t)\right)_g=\prod\limits_i(1-\alpha_i \exp(-2\pi i\,{\rm age\,}(g)) t)^{r_i}\,, $$ i.e.\ all its roots and/or poles are multiplied by $\exp(2\pi i\,{\rm age\,}(g))\in{\mathbb C}^*$. Let ${\rm Conj\,}G$ be the set of conjugacy classes of elements of $G$. For a class $[g]\in{\rm Conj\,}G$, let $g\in G$ be a representative of it. Let $C_G(g)$ be the centralizer of the element $g$ in $G$. Let $({\mathbb C}^n)^g$ be the fixed point set of the element $g$, let $V_f^g=V_f\cap ({\mathbb C}^n)^g$ be the corresponding part of the Milnor fibre, and let $\widehat{V}_f^g= V_f^g/C_G(g)$ be the corresponding quotient space (the ``twisted sector'' in terms of \cite{Chen_Ruan}). One may assume that the monodromy transformation preserves $V_f^g$ for each $g$. Let $\widehat{h}_f^g:\widehat{V}_f^g\to \widehat{V}_f^g$ be the corresponding map (monodromy) on the quotient space. Its zeta function $\zeta_{\widehat{h}_f^g}(t)$ depends only on the conjugacy class of $g$. \begin{definition} The {\em orbifold zeta function} of the pair $(f,G)$ is defined by \begin{equation}\label{orbifold-zeta} \zeta^{{\rm orb}}_{f,G}(t)= \prod\limits_{[g]\in {\rm Conj\,}G}\left(\zeta_{\widehat{h}_f^g}(t)\right)_g\,. 
\end{equation} \end{definition} One can see that the degree of $\zeta^{{\rm orb}}_{f,G}(t)$ is equal to the orbifold Euler characteristic of $(V_f, G)$ (see, e.g., \cite{HH}, \cite{MMJ}). For an abelian $G$, $\widehat{V}_f^g=V_f^g/G$ and the product in (\ref{orbifold-zeta}) runs over all elements $g\in G$. \begin{definition} The {\em reduced orbifold zeta function} $\overline{\zeta}^{{\rm orb}}_{f,G}(t)$ is defined by $$ \overline{\zeta}^{{\rm orb}}_{f,G}(t)= \zeta^{{\rm orb}}_{f,G}(t)\left/\prod\limits_{[g]\in {\rm Conj\,}G}(1-t)_g \right. $$ (cf. (\ref{orbifold-zeta})). \end{definition} Now let $G$ be abelian. One can assume that the action of $G$ on ${\mathbb C}^n$ is diagonal and therefore it respects the decomposition of ${\mathbb C}^n$ into the coordinate tori. For a subset $I\subset I_0=\{1, 2, \ldots, n\}$, let $$ ({\mathbb C}^*)^I:= \{(x_1, \ldots, x_n)\in {\mathbb C}^n: x_i\ne 0 {\rm \ for\ }i\in I, x_i=0 {\rm \ for\ }i\notin I\} $$ be the corresponding coordinate torus. Let $V_f^I=V_f\cap ({\mathbb C}^*)^I$. One has $V_f=\coprod\limits_{I\subset I_0}V_f^I$. Let $G^I\subset G$ be the isotropy subgroup of the action of $G$ on the torus $({\mathbb C}^*)^I$. (All points of the torus $({\mathbb C}^*)^I$ have one and the same isotropy subgroup.) The monodromy transformation $h_f$ is assumed to respect the decomposition of the Milnor fibre $V_f$ into the parts $V_f^I$. Let $h_f^I$ and $\widehat{h}_f^I$ be the corresponding (monodromy) transformations of $V_f^I$ and $V_f^I/G$ respectively. One can define in the same way as above the orbifold zeta function corresponding to the part $V_f^I$ of the Milnor fibre: \begin{equation}\label{zeta-part1} \zeta^{{\rm orb},I}_{f,G}(t)= \prod\limits_{g\in G}\left(\zeta_{\widehat{h}_f^{I,g}}(t)\right)_g\,. \end{equation} One has $$ \zeta^{{\rm orb}}_{f,G}(t)= \prod\limits_{I\subset I_0}\zeta^{{\rm orb},I}_{f,G}(t)\,.
$$ Since the isotropy subgroups of all points of $({\mathbb C}^*)^I$ are the same (equal to $G^I$), the equation (\ref{zeta-part1}) reduces to \begin{equation}\label{zeta-part2} \zeta^{{\rm orb},I}_{f,G}(t)= \prod\limits_{g\in G^I}\left(\zeta_{\widehat{h}_f^I}(t)\right)_g\,. \end{equation} The (monodromy) zeta function $\zeta_{\widehat{h}_f^I}(t)$ has the form $\prod\limits_{m\ge 0}(1-t^m)^{s_m}$ with only a finite number of the exponents $s_m$ different from zero. Let us compute $\prod\limits_{g\in G^I}\left(1-t^m\right)_g$. \begin{lemma}\label{1-tm} One has $$ \prod\limits_{g\in G^I}\left(1-t^m\right)_g= \left(1-t^{{\rm lcm\,}(m,k)}\right)^{\frac{m\vert G^I\vert}{{\rm lcm\,}(m,k)}}\,, $$ where $k=\vert G^I/ G^I \cap {\rm SL}(n,{\mathbb C})\vert$, ${\rm lcm\,}(\cdot,\cdot)$ denotes the least common multiple. \end{lemma} \begin{proof} The roots of the binomial $(1-t^m)$ are all the $m$th roots of unity. The map $\exp(2\pi i\,{\rm age}(\cdot)): G^I\to {\mathbb C}^*$ is a group homomorphism. Its kernel coincides with $G^I\cap {\rm SL}(n,{\mathbb C})$. Therefore its image consists of all the $k$th roots of unity (each one corresponds to $\vert G^I\cap {\rm SL}(n,{\mathbb C})\vert$ elements of $G^I$). Thus the roots of $\prod\limits_{g\in G^I}\left(1-t^m\right)_g$ are all the roots of unity of degree ${\rm lcm\,}(m,k)$ with equal multiplicities. This means that $\prod\limits_{g\in G^I}\left(1-t^m\right)_g=(1-t^{{\rm lcm\,}(m,k)})^s$. The exponent $s$ is determined by the number of roots. \end{proof} \section{Orbifold zeta functions for invertible polynomials}\label{main} Let $(f,G)$ be a pair consisting of an invertible polynomial $f$ in $n$ variables and a group $G\subset G_f$ of its (diagonal) symmetries and let $(\widetilde{f},\widetilde{G})$ be the Berglund--H\"ubsch--Henningson dual pair ($\widetilde{G}\subset G_{\widetilde{f}}$). (We do not assume that the invertible polynomials are non-degenerate, i.e.\ that they have isolated critical points at the origin.) 
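For instance, for the chain-type polynomial $f(x_1,x_2)=x_1^3x_2+x_2^2$ one has $$ E=\left(\begin{array}{cc} 3 & 1\\ 0 & 2 \end{array}\right), \qquad \widetilde{f}(x_1,x_2)=x_1^3+x_1x_2^2\,, $$ the weights are $(q_1,q_2)=\left(\frac{1}{6},\frac{1}{2}\right)$, and $G_f$ is the cyclic group of order $d=\det E=6$ generated by $g_0=(\exp(\pi i/3),-1)$; taking $G=\{1\}$, the dual pair is $(\widetilde{f},G_{\widetilde{f}})$.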
\begin{theorem} One has \begin{equation}\label{main-eq} \overline{\zeta}^{{\rm orb}}_{\widetilde{f},\widetilde{G}}(t)=\left( \overline{\zeta}^{{\rm orb}}_{f,G}(t) \right)^{(-1)^n}\,. \end{equation} \end{theorem} \begin{proof} We use the notations from Section~\ref{sect-orbifold-zeta}. One has \begin{equation}\label{product} \overline{\zeta}^{{\rm orb}}_{f,G}(t)=\prod\limits_{I\subset I_0} \zeta^{{\rm orb}, I}_{f,G}(t) \left/\prod\limits_{[g]\in {\rm Conj\,}G}(1-t)_g\,. \right. \end{equation} Let ${\mathbb Z}^n$ be the lattice of monomials in the variables $x_1$, \dots, $x_n$ ($(k_1, \ldots, k_n)\in {\mathbb Z}^n$ corresponds to the monomial $x_1^{k_1}\cdots x_n^{k_n}$) and let ${\mathbb Z}^I:=\{(k_1, \ldots, k_n)\in {\mathbb Z}^n: k_i=0 \mbox{ for }i\notin I\}$. For a polynomial $F$ in the variables $x_1$, \dots, $x_n$, let $\mbox{supp\,} F\subset {\mathbb Z}^n$ be the set of monomials (with non-zero coefficients) in $F$. The elements of the subgroup $G_{f,0}\cap G_f^I$ act on $V_f^I$ trivially. The monodromy transformation defines a free action of the cyclic group $G_{f,0}/(G_{f,0}\cap G_f^I)$ on $V_f^I$. Therefore the monodromy transformation on $V_f^I/G$ defines an action of the cyclic group $G_{f,0}/\left(G_{f,0}\cap (G+G_f^I)\right)$ which is also free. According to the remark at the beginning of Section~\ref{sect-orbifold-zeta}, the zeta function is given by \begin{equation}\label{zeta-quotient} \zeta_{\widehat{h}_f^I}(t)=(1-t^{m_I})^{s_I}\,, \end{equation} where $$ m_I=\vert G_{f,0}/\left(G_{f,0}\cap (G+G_f^I)\right)\vert= \frac{\vert G+G_f^I+G_{f,0}\vert}{\vert G+G_f^I\vert}\,, $$ $s_I=\chi(V_f^I/G)/m_I=\chi(V_f^I)/(m_I\vert G/G\cap G_f^I\vert)$. Let $I$ be a proper subset of $I_0=\{1, \cdots, n\}$ (i.e.\ $I\ne \emptyset$, $I\ne I_0$), and let $\overline{I}=I_0\setminus I$. 
If $({\rm supp\,}f)\cap {\mathbb Z}^I$ consists of less than $\vert I\vert$ points, i.e.\ if $f$ has less than $\vert I\vert$ monomials in the variables $x_i$ with $i\in I$, then $\chi(V_f^I)=0$ (e.g.\ due to the Varchenko formula \cite{Varch}) and therefore $\zeta_{\widehat{h}_f^I}(t)=1$, $\zeta^{{\rm orb}, I}_{f,G}(t)=1$. In this case $({\rm supp\,}\widetilde{f})\cap {\mathbb Z}^{\overline{I}}$ consists of less than $\vert \overline{I}\vert$ points and therefore $\zeta^{{\rm orb}, \overline{I}}_{\widetilde{f},\widetilde{G}}(t)=1$. Let $\vert({\rm supp\,}f)\cap {\mathbb Z}^I\vert=\vert I\vert$. From Equation~(\ref{zeta-quotient}) and Lemma~\ref{1-tm} it follows that \begin{equation} \zeta^{{\rm orb}, I}_{f,G}(t)=\left(1-t^{{\rm lcm}(m_I,k_I)}\right)^{s'_I}\,, \end{equation} where $$ k_I=\frac{\vert G\cap G_f^I\vert}{\vert G\cap G_f^I\cap {\rm SL}(n,{\mathbb C})\vert}\,. $$ Therefore \begin{equation} \zeta^{{\rm orb}, I}_{f,G}(t)=\left(1-t^{\ell_I}\right)^{s'_I}\,, \end{equation} where \begin{equation}\label{lcm} \ell_I={\rm lcm}\left(\frac{\vert G+ G_f^I+ G_{f,0}\vert}{\vert G+ G_f^I\vert}, \frac{\vert G\cap G_f^I\vert}{\vert G\cap G_f^I\cap {\rm SL}(n,{\mathbb C})\vert}\right)\,. \end{equation} In this case $\vert({\rm supp\,}\widetilde{f})\cap {\mathbb Z}^{\overline{I}}\vert= \vert \overline{I}\vert$ and therefore $$ \zeta^{{\rm orb}, \overline{I}}_{\widetilde{f},\widetilde{G}}(t)= \left(1-t^{\widetilde{\ell}_{\overline{I}}}\right)^{\widetilde{s}'_{\overline{I}}}\,, $$ where \begin{equation}\label{lcm2} \widetilde{\ell}_{\overline{I}}= {\rm lcm} \left( \frac{\vert \widetilde{G}+ G_{\widetilde{f}}^{\overline{I}}+ G_{\widetilde{f},0}\vert} {\vert \widetilde{G}+ G_{\widetilde{f}}^{\overline{I}}\vert}, \frac{\vert \widetilde{G}\cap G_{\widetilde{f}}^{\overline{I}}\vert} {\vert \widetilde{G}\cap G_{\widetilde{f}}^{\overline{I}}\cap {\rm SL}(n,{\mathbb C})\vert} \right)\,.
\end{equation} According to \cite[Lemma 1]{BLMS}, one has $G_{\widetilde{f}}^{\overline{I}}=\widetilde{G_f^I}$; by Lemma~\ref{SL}, one has $\widetilde{G_{\widetilde{f},0}}= G_f \cap{\rm SL}(n,{\mathbb C})$ and $\widetilde{G_{f,0}}= G_{\widetilde{f}} \cap {\rm SL}(n,{\mathbb C})$. This means that the subgroup $G+ G_f^I+ G_{f,0}\subset G_f$ is dual to $\widetilde{G}\cap G_{\widetilde{f}}^{\overline{I}}\cap {\rm SL}(n,{\mathbb C})\subset G_{\widetilde{f}}$ and the subgroup $G+ G_f^I\subset G_f$ is dual to $\widetilde{G}\cap G_{\widetilde{f}}^{\overline{I}}\subset G_{\widetilde{f}}$. Therefore $$ \frac{\vert G+ G_f^I+ G_{f,0}\vert}{\vert G+ G_f^I\vert}= \frac{\vert \widetilde{G}\cap G_{\widetilde{f}}^{\overline{I}}\vert} {\vert \widetilde{G}\cap G_{\widetilde{f}}^{\overline{I}}\cap {\rm SL}(n,{\mathbb C})\vert}\,. $$ In the same way $$ \frac{\vert G\cap G_f^I\vert}{\vert G\cap G_f^I\cap {\rm SL}(n,{\mathbb C})\vert}= \frac{\vert \widetilde{G}+ G_{\widetilde{f}}^{\overline{I}}+ G_{\widetilde{f},0}\vert} {\vert \widetilde{G}+ G_{\widetilde{f}}^{\overline{I}}\vert} $$ and therefore $\ell_I=\widetilde{\ell}_{\overline{I}}$. In \cite{MMJ} it was shown that $\ell_I s'_I=(-1)^n\widetilde{\ell}_{\overline{I}}{\widetilde{s}_{\overline{I}}}'$. Thus $s'_I=(-1)^n \widetilde{s}'_{\overline{I}}$. Therefore the factor $\zeta^{{\rm orb}, I}_{f,G}(t)$ in Equation~(\ref{product}) for $\overline{\zeta}^{{\rm orb}}_{f,G}(t)$ is equal to the factor $\left(\zeta^{{\rm orb}, \overline{I}}_{\widetilde{f},\widetilde{G}}(t)\right)^{(-1)^n}$ in the corresponding equation for $\left(\overline{\zeta}^{{\rm orb}}_{\widetilde{f},\widetilde{G}}(t)\right)^{(-1)^n}$. Now let $I=I_0$. One has $G_f^{I_0}=\{0\}$ and therefore $\zeta^{{\rm orb}, I_0}_{f,G}(t)=\zeta_{\widehat{h}_f^{I_0}}(t)=\left(1-t^{m_{I_0}}\right)^{s_{I_0}}$, where $m_{I_0}=\vert G_{f,0}/G\cap G_{f,0}\vert=\frac{\vert G+G_{f,0}\vert}{\vert G\vert}$. 
On the other hand, by Lemma~\ref{1-tm}, one has $$ \prod\limits_{g\in\widetilde{G}}(1-t)_g=(1-t^{\widetilde{k}})^{\widetilde{r}}, $$ where $$ \widetilde{k}=\frac{\vert\widetilde{G}\vert}{\vert \widetilde{G}\cap {\rm SL}(n,{\mathbb C})\vert}. $$ Due to Lemma~\ref{SL}, the subgroup $\widetilde{G}\cap {\rm SL}(n,{\mathbb C})\subset G_{\widetilde{f}}$ is dual to the subgroup $G+G_{f,0}\subset G_f$. Therefore $m_{I_0}=\widetilde{k}$. In~\cite{MMJ} it was shown that $m_{I_0}s_{I_0}=(-1)^{n-1}\widetilde{k}\widetilde{r}$. Therefore $s_{I_0}=(-1)^{n-1}\widetilde{r}$ and the factor $\zeta^{{\rm orb}, I_0}_{f,G}(t)$ in Equation~(\ref{product}) for $\overline{\zeta}^{{\rm orb}}_{f,G}(t)$ is equal to the factor $(\prod\limits_{g\in\widetilde{G}}(1-t)_g)^{(-1)^{n-1}}$ in the corresponding equation for $(\overline{\zeta}^{{\rm orb}}_{\widetilde{f},\widetilde{G}}(t))^{(-1)^n}$. \end{proof} \begin{remark} In Equation~(\ref{lcm}) for the exponent $\ell_I$, the first argument of the least common multiple is connected with the monodromy action and the second one with the age shift. The duality interchanges these numbers. The number for the pair $(f,G)$ connected with the monodromy action is equal to the number for the dual pair $(\widetilde{f},\widetilde{G})$ connected with the age shift, and vice versa (see~(\ref{lcm2})). \end{remark}
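\begin{remark} For $n=1$, $f(x)=x^d$ and $G=\{1\}$, Equation~(\ref{main-eq}) can be verified directly. One has $\widetilde{f}(x)=x^d$ and $\widetilde{G}=G_{\widetilde{f}}$ is the group of $d$th roots of unity. The Milnor fibre $V_f$ consists of $d$ points permuted cyclically by the monodromy, so that, by Remark~\ref{free}, $\zeta^{{\rm orb}}_{f,G}(t)=1-t^d$ and $\overline{\zeta}^{{\rm orb}}_{f,G}(t)=\frac{1-t^d}{1-t}$. On the dual side, $V_{\widetilde{f}}/\widetilde{G}$ is one point with trivial monodromy and therefore $\zeta^{{\rm orb}}_{\widetilde{f},\widetilde{G}}(t)=1-t$, whereas Lemma~\ref{1-tm} with $m=1$ and $k=d$ gives $\prod\limits_{g\in\widetilde{G}}(1-t)_g=1-t^d$. Thus $$ \overline{\zeta}^{{\rm orb}}_{\widetilde{f},\widetilde{G}}(t)=\frac{1-t}{1-t^d}= \left(\overline{\zeta}^{{\rm orb}}_{f,G}(t)\right)^{-1} $$ in accordance with Equation~(\ref{main-eq}). \end{remark}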
\section{Introduction} In perturbative electroweak (EW) theory, higher-order corrections are given by the exchange of virtual gauge bosons and the emission of real ones and are known to have a large effect in the hard tail of observables at the L\protect\scalebox{0.8}{HC}\xspace and future colliders~\cite{Mishra:2013una}. In contrast to massless gauge theories, where real emission terms are necessary to regulate infrared divergences, in EW theory the weak gauge bosons are massive and therefore provide a natural lower scale cut-off, such that finite logarithms appear instead. Moreover, since the emission of an additional weak boson leads to an experimentally different signature with respect to the Born process, virtual logarithms are of physical significance without the inclusion of real radiation terms. The structure of such logarithmic contributions, referred to as Sudakov logarithms~\cite{Sudakov:1954sw}, and their factorisation properties were derived in full generality by Denner and Pozzorini~\cite{Denner:2000jv,Denner:2002gd,Denner:2001gw,Denner:2004iz} at leading and next-to-leading logarithmic accuracy at both one- and two-loop order in the EW coupling expansion. In particular, they have shown that in the high-energy scattering limit, at least at one-loop order, these logarithmic corrections can be factored as a sum over pairs of external legs in an otherwise process-independent way, hence providing a straightforward algorithm for computing them for any given process. The high-energy limit requires all invariants formed by pairs of particles to be large compared to the scale set by the weak gauge boson masses. In this sense we expect electroweak corrections in general, and Sudakov logarithms in particular, to give rise to large effects in the hard tail of observables that have the dimension of energy, such as, for example, the transverse momentum of a final-state particle.
In this work we present an implementation for computing EW Sudakov corrections in the form presented in~\cite{Denner:2000jv} within the \SHERPA event generator framework~\cite{Bothmann:2019yzt}. While Sudakov corrections have been computed for a variety of processes (e.g.\ \cite{Kuhn:2004em,Kuhn:2005gv,Accomando:2006hq,Kuhn:2007cv,Chiesa:2013yma}), a complete and public implementation is, to our knowledge, missing.\footnote{An implementation is reported to be available in~\cite{Chiesa:2013yma}. However, contrary to the one presented here, it has two limitations: it is only available for the processes hard-coded in A\protect\scalebox{0.8}{LP}G\protect\scalebox{0.8}{EN}\xspace, and it does not include corrections for longitudinal external gauge bosons.} The one we present here is fully general and automated; it is limited only by computing resources and will be made public in the upcoming major release of \SHERPA. It relies on the internal matrix element generator \COMIX~\cite{Gleisberg:2008fv} to compute all double and single logarithmic contributions at one-loop order for any user-specified Standard Model process. The correction is calculated for each event individually, and is therefore fully differential in phase space, such that the logarithmic corrections for any observable are available from the generated event sample. The event-wise correction factors are written out as additional weights~\cite{Dobbs:2001ck} along with the main event weight, such that complete freedom is left to the user on how to use and combine these Sudakov correction weights.
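As an illustration of how such per-event correction weights might be combined with the nominal weight downstream of the event generation, consider the following minimal sketch; the function name and the choice of exposing the relative correction $\delta$ directly are ours for illustration and are not part of the \SHERPA interface:

```python
import math

def apply_ew_sudakov(nominal_weight, delta_ew, exponentiate=False):
    """Combine a nominal event weight with a relative one-loop EW
    Sudakov correction delta_ew.

    exponentiate=False: fixed-order combination  w * (1 + delta_ew)
    exponentiate=True:  exponentiated weight     w * exp(delta_ew)
    """
    factor = math.exp(delta_ew) if exponentiate else 1.0 + delta_ew
    return nominal_weight * factor
```

Filling an observable histogram once with the nominal weights and once with the corrected ones then directly yields the size of the Sudakov correction for that observable.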
Indeed, while NLO EW corrections are now becoming a standard in all general-purpose Monte Carlo event generators (see~\cite{Schonherr:2017qcj} for \SHERPA, \cite{Frederix:2018nkq} for M\protect\scalebox{0.8}{AD}G\protect\scalebox{0.8}{RAPH}5\_\protect\scalebox{0.8}{A}MC@NLO\xspace and process-specific implementations for P\protect\scalebox{0.8}{OWHEG}-B\protect\scalebox{0.8}{OX}\xspace~\cite{Alioli:2010xd}, e.g.\ \cite{Jager:2011ms,Bernaciak:2012hj,Barze:2012tt}), the full set of NLO EW corrections for complicated final states may not be available yet or might not be numerically tractable due to the calculational complexity for a given application. For these reasons, an approximate scheme for the NLO EW virtual corrections, dubbed EW$_\text{virt}$\xspace, was devised in~\cite{Kallweit:2015dum}. Within this approximation NLO EW real emission terms are neglected, and one only includes virtual contributions together with the minimal set of counter-terms needed to render them finite. This approach greatly simplifies the integration of such corrections with more complex event generation schemes, such as matching and merging with a QCD parton shower and higher-order QCD matrix elements. Compared to the Sudakov EW corrections used as an approximation of the NLO EW virtual corrections, the EW$_\text{virt}$\xspace scheme differs by the inclusion of exact virtual corrections and integrated subtraction terms. These differences are either subleading or constant at high energy in the logarithmic counting, but they can lead to numerically sizeable effects, depending on the process, the observable and the applied cuts.
Alternatively, one can exploit the fact that EW Sudakov logarithms have been shown to exponentiate up to next-to-leading logarithmic accuracy, using IR evolution equations~\cite{Fadin:1999bq,Melles:2001ye,Melles:2001dh,Jantzen:2005xi} and Soft-Collinear Effective Theory~\cite{Chiu:2007yn,Chiu:2007dg,Chiu:2008vv,Chiu:2009mg,Chiu:2009ft,Fuhrer:2010eu}, and exponentiate the event-wise correction factor to obtain a fully differential next-to-leading logarithmic resummation, as explored e.g.\ in~\cite{Lindert:2017olm}.\footnote{The fact that the all-order resummation, at NLL accuracy, can be obtained by exponentiating the NLO result was first derived by Yennie, Frautschi and Suura (YFS)~\cite{Yennie:1961ad} in the case of a massless gauge boson in abelian gauge theories.} This can lead to particularly important effects e.g.\ at a \SI{100}{\TeV} proton-proton collider~\cite{Manohar:2014vxa}, where (unresummed) next-to-leading logarithmic corrections can approach $\mathcal{O}(1)$ in the high-energy tail due to the increased kinematic reach, hence even requiring resummation to obtain a valid prediction. The outline of this paper is as follows. In Sec.~\ref{sec:imple}, we present how the algorithm is implemented in the \SHERPA event generator, while in Sec.~\ref{sec:examples-testin} we show a selection of applications of the method used to test our implementation, namely on-shell $W$ boson pair production, EW-induced dijet production, and electron-positron production in association with four jets, providing an estimate of EW corrections for the latter process for the first time and at the same time demonstrating the viability of the method for high-multiplicity final states. Finally, we report our conclusions in Sec.~\ref{sec:conclusions}. \section{\SHERPA implementation} \label{sec:imple} Electroweak Sudakov logarithms arise from the infrared region of the integration over the loop momentum of a virtual gauge boson exchange, with the lower integration boundary set by the gauge boson mass.
If the boson mass is small compared to the process energy scales, they exhibit the same structure of soft and collinear divergences one encounters in massless gauge theories. As such, the dominant contributions are given by double logarithms (DL), which arise from the soft and collinear limit, and single logarithms (SL), coming instead from less singular regions, as we describe in more detail in the rest of this section. Virtual EW corrections also contain in general non-logarithmic terms (either constant or power suppressed), which are however beyond the accuracy of this work. \begin{figure}[b] \begin{center} \begin{tikzpicture} \begin{feynman} \vertex [large, pattern color=white!75!black, blob] (a) {$\mathcal M$}; \vertex [node distance=1.0, above right=of a] (b); \vertex [node distance=1.0, below right=of a] (c); \vertex [node distance=0.6, above right=of b] (d) {$i$}; \vertex [node distance=0.6, below right=of c] (e) {$j$}; \diagram* { (a) -- (b), (a) -- (c), (b) -- (d), (c) -- (e), (b) -- [quarter left, photon, edge label=$V$] (c), }; \end{feynman} \end{tikzpicture} \caption{\label{fig:vloop} Feynman diagram contributing to double-logarithmic Sudakov corrections.} \end{center} \end{figure}% To give a more explicit example, let us consider the DL and SL contributions coming from a diagram of the type shown in Fig.~\ref{fig:vloop}. The former correspond to the case of the virtual gauge boson $V$ becoming soft and collinear to one of the external legs~$i$ or~$j$ of the matrix element $\mathcal M$, i.e.\ when $|r_{ij}|\equiv|(p_i+p_j)^2|\gg m_W^2$. Moreover, as $(p_i+p_j)^2\sim2\,p_i\cdot p_j$, this limit also encodes the case of a collinear, non-soft (and vice-versa) gauge boson, which instead gives rise to SL contributions. These must also be included, since they can numerically be of a size similar to the DL ones, see the discussion and the detailed derivation in~\cite{Denner:2000jv,Denner:2001gw}.
The reason for this can also be understood intuitively: while the diagram in Fig.~\ref{fig:vloop} is the only source of double logarithms, there are several contributions giving rise to single logarithms, which we discuss later in this section. In all cases, following the conventions in~\cite{Denner:2000jv}, we write DL and SL terms in the following way, respectively: \begin{align} L\left(\left|r_{i j}\right|\right) \equiv\frac{\alpha}{4 \pi} \log ^{2} \frac{|r_{i j}|}{m_W^{2}} ,\quad l\left(r_{i j}\right) \equiv\frac{\alpha}{4 \pi} \log \frac{r_{i j}}{m_W^{2}} . \end{align} Note that the use of the $W$ mass as a reference scale is only a convention and the same form is indeed also used for contributions coming from $Z$ bosons and photons. Remaining logarithms containing ratios of $m_W$ and $m_Z$, or the vanishing photon mass, are taken care of by additional logarithmic terms, as explained in more detail below. After having evaluated all relevant diagrams, and using Ward identities, one can show that the DL and SL contributions factorise off the Born matrix element $\mathcal M$, and can be written in terms of a sum over pairs of external legs only. Hence, we define these corrections as a multiplicative factor~$K_{\text{NLL}}$ to the squared matrix element $|\mathcal M(\Phi)|^2$ at a given phase-space point~$\Phi$: \begin{equation} \label{eq:kfactor} K_{\text{NLL}}\left(\Phi\right) = 1 + \sum_c \frac{\sum_h 2\,{\rm Re}\left\{ (\delta^c \mathcal M_h) \mathcal M_h^* \right\}}{\sum_h \left| \mathcal M_h \right|^2}, \end{equation} where we have written $\mathcal M_h \equiv \mathcal M_h(\Phi)$ and the sums run over helicity configurations $h$ and the different types of logarithmic contributions $c$. Following the original notation, these contributions are divided into Leading and Subleading Soft-Collinear (LSC/SSC) logarithms, Collinear or Soft (C) logarithms and those arising from Parameter Renormalisation (PR).
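For orientation on the size of these terms, the two logarithm types can be evaluated numerically; the following short Python sketch (not part of the \SHERPA implementation) uses the $\alpha(m_Z)$ value and $W$ mass quoted in the results section below.

```python
import math

# alpha(m_Z) and m_W as quoted in the results section of this paper
ALPHA = 1 / 128.802
MW2 = 80.385**2  # GeV^2

def L(abs_r):
    """Double logarithm L(|r_ij|) = alpha/(4 pi) * log^2(|r_ij| / m_W^2)."""
    return ALPHA / (4 * math.pi) * math.log(abs_r / MW2) ** 2

def l(r):
    """Single logarithm l(r_ij) = alpha/(4 pi) * log(r_ij / m_W^2)."""
    return ALPHA / (4 * math.pi) * math.log(r / MW2)
```

At $\sqrt{|r_{ij}|}=\SI{2}{\TeV}$ this gives $L\approx 0.026$, i.e.\ already a few-percent double-logarithmic coefficient per leg pair before any group-theory factors are applied.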
More details about their respective definitions are given below. Since $\delta^c$ is in general a tensor in $\text{SU}(2)\times\text{U}(1)$, the $\delta^c \mathcal M_h$ are usually not proportional to the original matrix element $\mathcal M_h$. The tensor structure comes from the fact that in general the various $\delta^c$ are proportional to EW vertices, which in turn means that a single leg or pairs of legs can get replaced with particles of a different kind, according to the relevant Feynman rules. As in~\cite{Denner:2000jv}, we denote the leg index $i$ as a shorthand for the external fields $\phi_{i}$. Denoting with $\{i\}$ the set of all external fields, we therefore have $\delta^c \mathcal M_h^{\{i\}} \propto \mathcal M_h^{\{i'\}}$. In our implementation, the construction and evaluation of these additional amplitudes is taken care of by an interface to the automated tree-level \COMIX matrix element generator~\cite{Gleisberg:2008fv}, which is available within the \SHERPA framework. Before evaluating such amplitudes, the energy is re-distributed among the particles to put all (replaced) external legs on-shell again. The required auxiliary processes are automatically set up during the initialisation of the framework. Since the construction of additional processes can be computationally non-trivial, we have taken care to reduce their number by re-using processes that are shared between different contributions $c$, and by using crossing relations when only the order of external fields differs. In our implementation we consistently neglect purely electromagnetic logarithms, which arise from the gap between zero (or the fictitious photon mass) and $m_W$. In \SHERPA such logarithms can be treated in either of two ways. First, one can compute the purely electromagnetic NLO correction to the desired process, consisting of both virtual corrections and real photon emission, which gives the desired logarithm at fixed order.
Alternatively, one can resum soft-photon emission to all orders, which in \SHERPA is achieved through the formalism of Yennie, Frautschi and Suura (YFS)~\cite{Yennie:1961ad} or by a QED parton shower, as is discussed e.g.\ in~\cite{Kallweit:2017khh}. In contrast, logarithms originating from the running between the $W$ mass and the $Z$ mass are included, as we explain in the next paragraphs, where we discuss the individual contributions $\delta^c$. \paragraph{\bf Leading Soft-Collinear logarithms: LSC} The leading corrections are given by the soft and collinear limit of a virtual gauge boson and are proportional to $L(|r_{kl}|)$. By writing \begin{equation}\label{eq:DL} L\left(\left|r_{k l}\right|\right)=L\left(s\right)+2\,l(s) \log \frac{\left|r_{k l}\right|}{s}+L\left(\left|r_{k l}\right|, s\right), \end{equation} and neglecting the last term on the right-hand side, which is $\mathcal{O}(1)$, one has split the soft-collinear contribution into a leading soft-collinear (LSC) and an angular-dependent subleading soft-collinear (SSC) one, corresponding to the first and second term on the right-hand side, respectively. The full LSC correction is then given by \begin{align}\label{eq:LSC} \delta^{\mathrm{LSC}} \mathcal{M}^{i_{1} \ldots i_{n}} =\sum_{k=1}^{n} \delta_{i_{k}^{\prime} i_{k}}^{\mathrm{LSC}}(k) \mathcal{M}_{0}^{i_{1} \ldots i_{k}^{\prime} \ldots i_{n}} ,\end{align} where the sum runs over the $n$ external legs, and the coefficient matrix on the right-hand side is given by \begin{align} \label{eq:LSC_coeff} \delta_{i_{k}^{\prime} i_{k}}^{\mathrm{LSC}}(k) =-\frac{1}{2}\left[C_{i_{k}^{\prime} i_{k}}^{\mathrm{ew}}(k) L(s)-2\left(I^{Z}(k)\right)_{i_{k}^{\prime} i_{k}}^{2} \log \frac{M_Z^{2}}{M_W^{2}} l(s) \right] = \delta_{i_{k}^{\prime} i_{k}}^{\overline{\mathrm{LSC}}}(k) + \delta_{i_{k}^{\prime} i_{k}}^Z(k). \end{align} $C^\text{ew}$ and $I^Z$ are the electroweak Casimir operator and the $Z$ gauge coupling, respectively.
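The decomposition of the double logarithm into LSC, SSC and remainder pieces is an exact algebraic identity when the neglected remainder is $L(|r_{kl}|,s)=\frac{\alpha}{4\pi}\log^2\frac{|r_{kl}|}{s}$, which can be verified numerically; the following Python sketch uses arbitrary invariants purely for illustration.

```python
import math

ALPHA = 1 / 128.802
MW2 = 80.385**2

def L(x):
    # L(x) = alpha/(4 pi) log^2(x / m_W^2)
    return ALPHA / (4 * math.pi) * math.log(x / MW2) ** 2

def l(x):
    # l(x) = alpha/(4 pi) log(x / m_W^2)
    return ALPHA / (4 * math.pi) * math.log(x / MW2)

def L_rem(abs_r, s):
    # remainder term L(|r_kl|, s), which is O(1) in the logarithmic counting
    return ALPHA / (4 * math.pi) * math.log(abs_r / s) ** 2

def dl_split(abs_r, s):
    # right-hand side of the splitting: LSC piece + angular SSC piece + remainder
    return L(s) + 2 * l(s) * math.log(abs_r / s) + L_rem(abs_r, s)
```

Since $\log^2\frac{|r|}{m_W^2}=\big(\log\frac{s}{m_W^2}+\log\frac{|r|}{s}\big)^2$, the two sides agree identically for any positive $|r_{kl}|$ and $s$.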
The second term $\delta^Z$ appears as an artefact of writing $L(s)$ and $l(s)$ in terms of the $W$ mass even for $Z$ boson loops, and hence the inclusion of this term takes care of the gap between the $Z$ and the $W$ mass at NLL accuracy. We denote the remaining terms with the superscript ``$\overline{\text{LSC}}$''. Note that this term is in general non-diagonal, since $C^\text{ew}$ mixes photons and $Z$ bosons. As is explicit in Eq.~\eqref{eq:kfactor}, coefficients need to be computed per helicity configuration; in the case of longitudinally polarised vector bosons appearing as external particles, these need to be replaced with the corresponding Goldstone bosons using the Goldstone Boson Equivalence Theorem. This is a consequence of the scaling of longitudinal polarisation vectors with the mass of the particle, which prevents the direct application of the eikonal approximation used to evaluate the high-energy limit, as is detailed in~\cite{Denner:2000jv}. To calculate such matrix elements we have extended the default Standard Model implementation in \SHERPA with a full Standard Model including Goldstone bosons, generated through the UFO interface of \SHERPA~\cite{Degrande:2011ua,Hoche:2014kca}. We have tested this implementation thoroughly against R\protect\scalebox{0.8}{ECOLA}\xspace~\cite{Uccirati:2014fda}.\footnote{Note that in particular this adds the possibility for \SHERPA users to compute Goldstone boson contributions to any desired process, independently of whether this is done in the context of calculating EW Sudakov corrections.} \paragraph{\bf Subleading Soft-Collinear logarithms: SSC} The second term in Eq.~\eqref{eq:DL} gives rise to the angular-dependent subleading part of the corrections from soft-collinear gauge-boson loops.
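As an illustration, a diagonal entry of the LSC coefficient can be evaluated as in the following sketch; the electroweak Casimir and squared $Z$-coupling values are deliberately left as user inputs, since their particle-specific values are tabulated in~\cite{Denner:2000jv} and not reproduced here.

```python
import math

ALPHA = 1 / 128.802
MW, MZ = 80.385, 91.1876
MW2 = MW**2

def L(s): return ALPHA / (4 * math.pi) * math.log(s / MW2) ** 2
def l(s): return ALPHA / (4 * math.pi) * math.log(s / MW2)

def delta_LSC_diag(s, C_ew, IZ_sq):
    """Diagonal LSC coefficient:
    -1/2 [ C^ew L(s) - 2 (I^Z)^2 log(M_Z^2/M_W^2) l(s) ].
    C_ew and IZ_sq are the particle's electroweak Casimir and squared
    Z coupling, to be taken from the literature (user-supplied here)."""
    delta_bar = -0.5 * C_ew * L(s)                       # the "LSC-bar" part
    delta_Z = IZ_sq * math.log(MZ**2 / MW2) * l(s)       # the Z/W mass-gap part
    return delta_bar + delta_Z
```

The split into the two return terms mirrors the decomposition into $\delta^{\overline{\mathrm{LSC}}}$ and $\delta^{Z}$ described above.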
It can be written as a sum over pairs of external particles: \begin{align}\label{eq:deltaSSC} \delta^{\mathrm{SSC}} \mathcal{M}^{i_{1} \ldots i_{n}} =\sum_{k=1}^{n} \sum_{l<k} \sum_{V_{a}=A, Z, W^{\pm}} \delta_{i_{k}^{\prime} i_{k} i_{l}^{\prime} i_{l}}^{V_{a}, \mathrm{SSC}}(k, l) \mathcal{M}_{0}^{i_{1} \ldots i_{k}^{\prime} \ldots i_{l}^{\prime} \ldots i_{n}} ,\end{align} where the coefficient matrices for the different gauge bosons $V_a=A,Z,W^\pm$ are \begin{align*} \delta_{i_{k}^{\prime} i_{k} i_{l}^{\prime} i_{l}}^{V_a, \mathrm{SSC}}(k, l) &=2 I_{i_{k}^{\prime} i_{k}}^{V_a}(k) I_{i_{l}^{\prime} i_{l}}^{\bar{V}_a}(l) \, \log \frac{\left|r_{k l}\right|}{s} l(s) .\end{align*} Note that while the photon couplings $I^A$ are diagonal, the coupling matrices $I^Z$ and $I^\pm \equiv I^{W^\pm}$ can be non-diagonal, leading again to replacements of external legs, as described in the LSC case. \paragraph{\bf Collinear or soft single logarithms: C} The two sources that provide either collinear or soft logarithms are the running of the field renormalisation constants, and the collinear limit of the loop diagrams where one external leg splits into two internal lines, one of which is a vector boson $V_a$.
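Structurally, the SSC term is a double sum over ordered pairs of legs; the following hedged Python sketch shows this pair sum for a single, diagonally coupling gauge boson species (the coupling eigenvalues and invariants are user inputs, not values taken from this paper).

```python
import math

ALPHA = 1 / 128.802
MW2 = 80.385**2

def l(s): return ALPHA / (4 * math.pi) * math.log(s / MW2)

def delta_SSC_diag(couplings, abs_r, s):
    """Diagonal part of the pair sum for one gauge boson species:
    sum_{k} sum_{l<k} 2 I(k) I(l) log(|r_kl|/s) l(s).
    couplings : list of diagonal coupling eigenvalues, one per external leg
    abs_r     : dict mapping the ordered pair (l, k), l < k, to |r_kl|"""
    total = 0.0
    for k in range(len(couplings)):
        for j in range(k):
            total += 2 * couplings[k] * couplings[j] \
                     * math.log(abs_r[(j, k)] / s) * l(s)
    return total
```

When all pair invariants equal $s$, every angular logarithm vanishes, so the SSC piece drops out and only the LSC piece of the split double logarithm survives.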
The ensuing correction factor can be written as a sum over external legs: \begin{align}\label{eq:deltaC} \delta^{\mathrm{C}} \mathcal{M}^{i_{1} \ldots i_{n}} =\sum_{k=1}^{n} \delta_{i_{k}^{\prime} i_{k}}^{\mathrm{C}}(k) \mathcal{M}_{0}^{i_{1} \ldots i_{k}^{\prime} \ldots i_{n}} .\end{align} For chiral fermions $f$, the coefficient matrix is given by \begin{equation}\label{eq:C_coeff_fermions} \delta_{f_{\sigma}f_{\sigma^{\prime}}}^{\mathrm{C}}\left(f^{\kappa}\right) =\delta_{\sigma \sigma^{\prime}}\left[\frac{3}{2} C_{f^{\kappa}}^{\mathrm{ew}}-\frac{1}{8 s_{\mathrm{w}}^{2}}\left(\left(1+\delta_{\kappa \mathrm{R}}\right) \frac{m_{f_{\sigma}}^{2}}{M_W^{2}}+\delta_{\kappa \mathrm{L}} \frac{m_{f_{-\sigma}}^{2}}{M_W^{2}}\right)\right] l(s) = \delta_{f_{\sigma}f_{\sigma^{\prime}}}^{\mathrm{\overline{C}}}\left(f^{\kappa}\right) + \delta_{f_{\sigma}f_{\sigma^{\prime}}}^{\mathrm{Yuk}}\left(f^{\kappa}\right) .\end{equation} We label the chirality of the fermion by $\kappa$, which can take either the value $L$ (left-handed) or $R$ (right-handed). The label $\sigma$ specifies the isospin; its values $\pm$ refer to up-type quarks/neutrinos and down-type quarks/leptons, respectively. The sine (cosine) of the Weinberg angle is denoted by $s_w$ ($c_w$). Note that we further subdivide the collinear contributions for fermion legs into ``Yukawa'' terms, formed by the terms proportional to the ratio of the squared fermion masses over the squared $W$ mass, which we denote by the superscript ``Yuk'', and the remaining collinear contributions, denoted by the superscript ``$\overline{\text{C}}$''. These Yukawa terms only appear for external fermions, such that for all other external particles $\varphi$, we have $\delta^\text{C}_{\varphi'\varphi} = \delta^{\overline{\text{C}}}_{\varphi'\varphi}$.
For charged and neutral transverse gauge bosons, \begin{align} \delta_{W^{\sigma} W^{\sigma^{\prime}}}^{C}\left(V_{\mathrm{T}}\right) = \delta_{\sigma \sigma^{\prime}}\frac{1}{2} b_{W}^{\mathrm{ew}} l(s) \quad\text{and}\quad \delta_{N^{\prime}N}^{\mathrm{C}}\left(V_{\mathrm{T}}\right) =\frac{1}{2}\left[E_{N^{\prime} N} b_{A Z}^{\mathrm{ew}}+b_{N^{\prime} N}^{\mathrm{ew}}\right] l(s) \quad\text{with}\quad E\equiv\left(\begin{array}{cc}{0} & {1} \\ {-1} & {0}\end{array}\right) \end{align} must be used, where the $b^\text{ew}$ are combinations of Dynkin operators that are proportional to the one-loop coefficients of the $\beta$-function for the running of the gauge-boson self-energies and mixing energies. Their values are given in terms of $s_w$ and $c_w$ in~\cite{Denner:2000jv}. Longitudinally polarised vector bosons are again replaced with Goldstone bosons. When using the matrix element on the right-hand side of Eq.~\eqref{eq:deltaC} in the physical EW phase, the following (diagonal) coefficient matrices must be used for charged and neutral longitudinal gauge bosons: \begin{align} \delta_{W^{\sigma} W^{\sigma^{\prime}}}^{C}\left(V_{\mathrm{L}}\right) \to \delta_{\phi^{\pm} \phi^{\pm}}^{\mathrm{C}}(\Phi) &=\left[2 C_{\Phi}^{\mathrm{ew}}-\frac{N_{\mathrm{C}}}{4 s_{\mathrm{w}}^{2}} \frac{m_t^{2}}{M_W^{2}}\right] l(s) ,\\ \delta_{N^{\prime}N}^{\mathrm{C}}\left(V_{\mathrm{L}}\right) \to \delta_{\chi \chi}^{\mathrm{C}}(\Phi) &=\left[2 C_{\Phi}^{\mathrm{ew}}-\frac{N_{\mathrm{C}}}{4 s_{\mathrm{w}}^{2}} \frac{m_t^{2}}{M_W^{2}}\right] l(s) , \end{align} where $N_\text C=3$ is the number of colour charges. \paragraph{\bf Parameter Renormalisation logarithms: PR} The last contribution we consider is the one coming from the renormalisation of EW parameters, such as boson and fermion masses and the QED coupling $\alpha$, and all derived quantities.
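The longitudinal (Goldstone) coefficient above can be evaluated directly; in the following sketch the Weinberg angle is derived from the tree-level relation $s_w^2=1-m_W^2/m_Z^2$ (consistent with the parameter setup described in the results section), while the top-mass value and the scalar Casimir $C^\text{ew}_\Phi$ are placeholders supplied by the user, not values quoted in this paper.

```python
import math

ALPHA = 1 / 128.802
MW, MZ = 80.385, 91.1876
MT = 173.0                      # placeholder top mass, not quoted in the text
MW2 = MW**2
SW2 = 1.0 - MW2 / MZ**2         # tree-level s_w^2
NC = 3                          # number of colour charges

def l(s): return ALPHA / (4 * math.pi) * math.log(s / MW2)

def delta_C_longitudinal(s, C_ew_Phi):
    """Diagonal coefficient [2 C^ew_Phi - N_C/(4 s_w^2) m_t^2/M_W^2] l(s)
    for external longitudinal W/Z legs (Goldstone bosons phi^pm, chi).
    C_ew_Phi must be supplied from the literature."""
    return (2.0 * C_ew_Phi - NC / (4.0 * SW2) * MT**2 / MW2) * l(s)
```

The large top-Yukawa term makes this coefficient negative at high energies unless the Casimir term dominates, reflecting the sizeable Yukawa-induced corrections for longitudinal bosons.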
The way we extract these terms is by running all EW parameters up to a given scale, $\mu_\text{EW}$, which in all cases presented here corresponds to the partonic centre-of-mass energy, and re-evaluating the matrix element with these evolved parameters. We then take the ratio of this `High Energy' (HE) matrix element to the original value, such that, calling $\{p_{\text{ew}}\}$ the complete set of EW parameters, \begin{equation} \label{eq:deltapr} \delta^\text{PR}_{i_1\dots i_n} = \left(\frac{\mathcal{M}_{\text{HE}}^{i_1\dots i_n}(\{p_{\text{ew}}\}(\mu_{\text{EW}}))}{\mathcal{M}^{i_1\dots i_n}(\{p_{\text{ew}}\})} - 1 \right) \sim \frac{1}{\mathcal{M}^{i_1\dots i_n}}\sum_{p\in\{p_{\text{ew}}\}}\frac{\delta \mathcal{M}^{i_1\dots i_n}}{\delta p}\,\delta p. \end{equation} The evolution of each EW parameter is obtained through \begin{equation} p_{\text{ew}}(\mu_{\text{EW}}) = p_{\text{ew}}\left( 1 + \frac{\delta p_{\text{ew}}}{p_{\text{ew}}} \right), \end{equation} and the exact expressions for $\delta p_{\text{ew}}$ can be found in Eqs.~(5.4--5.22) of \cite{Denner:2000jv}. The linearised sum on the right of the $\sim$ sign in Eq.~\eqref{eq:deltapr} corresponds to the original derivation of Denner and Pozzorini, while the exact ratio on its left is the actual implementation we have used. The two differ by terms that are formally of higher order, $(\alpha\,\log\mu_{\text{EW}}^2/m^2_W)^2$. In fact, although these are logarithmically enhanced, they are suppressed by an additional power of $\alpha$ with respect to the leading terms considered here, $\alpha\,\log^2\mu_{\text{EW}}^2/m^2_W$. \paragraph{\bf Generating event samples with EW Sudakov logarithmic contributions} After having discussed the individual contributions $c$, we can return to~Eq.~\eqref{eq:kfactor}, which we now have all the ingredients to evaluate for an event at a given phase-space point $\Phi$.
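The implemented (ratio) form of the PR contribution amounts to one extra matrix-element evaluation with shifted parameters. The following toy Python sketch, with an amplitude simply proportional to $\alpha$, illustrates the mechanism; the $-2\%$ shift is an invented number for illustration, not a real running effect.

```python
def delta_PR(amp, params, evolved_params):
    """PR contribution in the ratio form: re-evaluate the amplitude with
    parameters evolved to mu_EW and compare to the original value."""
    return amp(evolved_params) / amp(params) - 1.0

# Toy amplitude proportional to alpha: then delta^PR = delta_alpha/alpha exactly.
amp = lambda p: 5.0 * p["alpha"]
base = {"alpha": 1 / 128.802}
evolved = {"alpha": base["alpha"] * (1 - 0.02)}  # invented -2% running shift
```

For an amplitude that is a monomial in the parameters, the ratio form and the linearised derivative form agree up to the quadratic terms discussed above.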
Defining the relative logarithmic contributions $\Delta^c$, we can rewrite it as \begin{equation} \label{eq:kfactor_contribs} K_{\text{NLL}}\left(\Phi\right) = 1 + \sum_c \Delta^c = 1 + \Delta^{\overline{\text{LSC}}} + \Delta^\text{Z} + \Delta^\text{SSC} + \Delta^{\overline{\text{C}}} + \Delta^\text{Yuk} + \Delta^\text{PR}. \end{equation} In the event sample, the relative contributions $\Delta^c$ are given in the form of named H\protect\scalebox{0.8}{EP}MC\xspace weights~\cite{Dobbs:2001ck} (details on the naming will be given in the manual of the upcoming \SHERPA release). This is done to leave the user the freedom to decide how to combine such weights with the main event weight. In the results section, Sec.~\ref{sec:examples-testin}, we employ these corrections in either of two ways. One way is to include them at fixed order, \begin{equation} \label{eq:kfactor_fixed_order} \text{d}\sigma^\text{LO + NLL}\left(\Phi\right) = \text{d}\Phi\,\mathcal{B}\left(\Phi\right) \, K_{\text{NLL}}\left(\Phi\right)\, , \end{equation} where $\mathcal{B}$ is the Born contribution. The alternative is to exploit the fact that Sudakov EW logarithms exponentiate (see Refs.~\cite{Fadin:1999bq}--\cite{Fuhrer:2010eu}) to construct a resummed fully differential cross section \begin{equation} \label{eq:kfactor_resum} \text{d}\sigma^\text{LO + NLL (resum)}\left(\Phi\right) = \text{d}\Phi\, \mathcal{B}\left(\Phi\right) \, K^\text{resum}_{\text{NLL}}\left(\Phi\right) = \text{d}\Phi\, \mathcal{B}\left(\Phi\right) \,e^{K_\text{NLL}\left( \Phi \right) - 1}\, , \end{equation} following the approach discussed in~\cite{Lindert:2017olm}. \paragraph{\bf Matching to higher orders and parton showers} \SHERPA internally provides the possibility to the user to obtain NLO corrections, both of QCD~\cite{Gleisberg:2007md} and EW~\cite{Schonherr:2017qcj} origin, and to further generate fully showered~\cite{Schumann:2007mg,Hoche:2015sya} and hadronised events~\cite{Bothmann:2019yzt}.
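The two ways of applying the per-event contributions can be sketched as follows; the $\Delta^c$ values below are invented for illustration, and the resummed variant exponentiates the one-loop correction $K_{\text{NLL}}-1$ itself, which reproduces the reduced suppression discussed in the results below.

```python
import math

def k_nll(deltas):
    """Fixed-order NLL K factor: K = 1 + sum of the relative contributions."""
    return 1.0 + sum(deltas.values())

def k_nll_resummed(deltas):
    """Resummed variant: exponentiate the one-loop correction,
    K_resum = exp(K_NLL - 1) = exp(sum of Delta^c)."""
    return math.exp(sum(deltas.values()))

# Example event: hypothetical per-event contributions (illustrative numbers only).
deltas = {"LSCbar": -0.30, "Z": 0.01, "SSC": 0.08,
          "Cbar": 0.10, "Yuk": -0.01, "PR": 0.02}
```

Since $e^{\Delta}>1+\Delta$ for any non-zero $\Delta$, the resummed factor always sits above the fixed-order one for a net suppression, consistent with the reduced suppression seen for the resummed curves in the results section.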
In addition, one is able to merge samples with higher multiplicities in QCD through the CKKW~\cite{Catani:2001cc} or the M\protect\scalebox{0.8}{EPS@}N\protect\scalebox{0.8}{LO}\xspace~\cite{Hoeche:2012yf} algorithms. In all of the above cases (except the NLO EW one), the corrections implemented here can simply be applied using the K-factor methods in \SHERPA to one or all of the desired processes of the calculation, as there is no double counting between EW Sudakov corrections and pure QCD ones. A similar reasoning applies to the combination with a pure QCD parton shower. Although we do not report the result of this additional check here, we have tested the combination of the EW Sudakov corrections with the default shower of \SHERPA. Technical checks and physics applications for the combination with matching and merging schemes, and for the combination with QED logarithms, are left for a future publication. If one aims at matching Sudakov logarithms to higher-order EW effects, such as for example combining fixed-order NLO EW results with a resummed NLL Sudakov correction, it is for now up to the user to manually subtract the double counting one encounters in these cases. An automation of such a scheme is also outside the scope of this publication, and will be explored in the future. \section{Results} \label{sec:examples-testin} Before discussing our physics applications, we report a number of exact comparisons to other existing calculations of NLL EW Sudakov logarithms. A subset of the results used for this comparison for $pp \to Vj$ processes is shown in App.~\ref{app:vj}. They all agree with the reference results at the sub-percent level over the entire probed transverse momentum range from $p_{T,V}=\SI{100}{\GeV}$ to \SI{2000}{\GeV}.
In addition to this, the implementation has been validated by a number of tests based on direct comparisons to tabulated numerical values for each contribution discussed in Sec.~\ref{sec:imple} in the high-energy limit, which are given for several electron-positron collision processes in~\cite{Denner:2000jv} and for the $pp \to WZ$ and $pp \to W\gamma$ processes in~\cite{Pozzorini:2001rs}. Our final implementation passes all these tests. In the remainder of this section, we present a selection of physics results obtained using our implementation and, where possible, show comparisons with existing alternative calculations. The aim is twofold: first, we wish to show the variety of processes that can be computed with our implementation; second, we want to compare to existing alternative methods to obtain EW corrections, to further study the quality of the approximation. In particular, where available, we compare our NLL predictions to full NLO EW corrections, as well as to the EW$_\text{virt}$\xspace approximation defined in~\cite{Kallweit:2015dum}. It is important to note that we only consider EW corrections to purely EW processes here, such that there are no subleading Born contributions that might otherwise complicate the comparison to a full NLO EW calculation or to the EW$_\text{virt}$\xspace approximation. We consider diboson production, dijet production, and $Z$ production in association with four jets. In all cases, we focus on the discussion of (large) transverse momentum distributions, since they are directly sensitive to the growing effect of the logarithmic contributions when approaching the high-energy limit. The logarithmic contributions are applied to parton-level LO calculations provided by the \COMIX matrix element generator implemented in \SHERPA. The parton shower and non-perturbative parts of the simulation are disabled, including the simulation of multiple interactions, beam remnants, and hadronisation.
Higher-order QED corrections, both to the matrix element and from YFS-type resummation, are also turned off. The contributions to the NLL corrections as defined in Sec.~\ref{sec:imple} are calculated individually and combined a posteriori, such that we can study their effects both individually and combined. We consider the fixed-order and the resummed option for the combination, as detailed in Eqs.~\eqref{eq:kfactor_fixed_order} and~\eqref{eq:kfactor_resum}, respectively. For the analysis, we use the Rivet 2~\cite{Buckley:2010ar} framework, and events are passed to the analysis using the H\protect\scalebox{0.8}{EP}MC\xspace event record library. Unless otherwise specified, simulations are obtained using the NNPDF3.1 next-to-next-to-leading-order PDF set~\cite{Ball:2017nwa}, while for processes where we include photon-initiated channels we instead use the NNPDF3.1 LUX PDF set~\cite{Bertone:2017bme}. In all cases, PDFs are obtained through the LHAPDF~\cite{Buckley:2014ana} interface implemented in \SHERPA. When jets appear in the final state, we cluster them using the anti-$k_T$ algorithm~\cite{Cacciari:2008gp} with a jet radius parameter of $R=0.4$, through an interface to F\protect\scalebox{0.8}{AST}J\protect\scalebox{0.8}{ET}\xspace~\cite{Cacciari:2011ma}. The CKM matrix in our calculation is equal to the unit matrix, i.e.\ no mixing of the quark generations is allowed. Electroweak parameters are determined from tree-level relations, using a QED coupling value of $\alpha(m_Z) = 1 / 128.802$ and the following set of masses and decay widths, if not explicitly mentioned otherwise: \begin{align*} m_W&=\SI{80.385}{\GeV} & m_Z&=\SI{91.1876}{\GeV} & m_h&=\SI{125}{\GeV} \\ \Gamma_W&=\SI{2.085}{\GeV} & \Gamma_Z&=\SI{2.4952}{\GeV} & \Gamma_h&=\SI{0.00407}{\GeV}. \end{align*} Note that $\alpha$ is not running in the nominal calculation, as running effects are all accounted for by the PR contributions.
\subsection{$WW$ production in $pp$ collisions at 13 and 100 TeV} Our first application is the calculation of EW Sudakov effects in on-shell $W$ boson pair production at hadron colliders, which has recently been experimentally probed at the L\protect\scalebox{0.8}{HC}\xspace~\cite{Sirunyan:2017bey,Aaboud:2019nkz}, e.g.\ to search for anomalous gauge couplings. We compare the Sudakov EW approximation to both the full NLO EW calculation and the EW$_\text{virt}$\xspace approximation. The latter has been applied to $WW$ production in~\cite{Kallweit:2017khh,Brauer:2020kfv}. In addition, EW corrections for $WW$ production have also been studied in~\cite{Kuhn:2011mh,Bierweiler:2012kw,Baglio:2013toa,Gieseke:2014gka,Li:2015ura,Biedermann:2016guo}, and NNLL EW Sudakov corrections have been calculated in~\cite{Kuhn:2011mh}. We have performed this study at current L\protect\scalebox{0.8}{HC}\xspace energies, $\sqrt s = \SI{13}{\TeV}$, as well as at a possible future hadron collider with $\sqrt s = \SI{100}{\TeV}$. In all cases, we include photon-induced channels (they can be sizeable at large energies~\cite{Bierweiler:2012kw}), and we set the renormalisation and factorisation scales to $\mu_{R,F}^2 = \frac{1}{2}( 2m_W^2 + \sum_i p^2_{T,i})$, where the sum runs over the final-state $W$ bosons, following the choice for gauge-boson pair production in~\cite{Accomando:2001fn}. Lastly, we set the widths of the $Z$ and $W$ bosons consistently to zero, as the $W$ bosons are kept on-shell in the matrix elements. \begin{figure}[!t] \centering \includegraphics[width=0.49\linewidth]{figures/ww_13tev_W_pT.pdf} \hfill \includegraphics[width=0.49\linewidth]{figures/ww_W_pT.pdf} \caption{The transverse momentum of the individual $W$ bosons in $W$ boson pair production from proton-proton collisions (including photon-induced channels) at $\sqrt s = \SI{13}{\TeV}$ (left) and $\SI{100}{\TeV}$ (right).
The baseline LO and NLO EW calculations are compared with the results of the LO+NLL calculation and its variant, where the logarithmic corrections are resummed (``LO+NLL (resum)''). In addition, the virtual approximation EW$_\text{virt}$\xspace and a variant of the NLO EW calculation with additional jets vetoed are also included. The ratio plots show the ratios to the LO and to the EW$_\text{virt}$\xspace calculation, and the relative size of each NLL contribution.} \label{fig:ww} \end{figure} In Fig.~\ref{fig:ww} we report results for the transverse momentum of each $W$ boson in the final state for both centre-of-mass energies. The plots are divided into four panels, the first of which collects the results for the various predictions. The second and third panels report the ratio to the leading order and to the EW$_\text{virt}$\xspace approximation, respectively. The aim is to show the general behaviour of EW corrections in the tail of distributions in the former, while the latter serves as a direct comparison between the Sudakov and the EW$_\text{virt}$\xspace approximations. Finally, the fourth panel shows the relative impact of the individual contributions $\Delta^c$ appearing in Eq.~\eqref{eq:kfactor_contribs}. Looking at the second panel, we see that all but the full NLO EW calculation show a strong suppression in the $p_T > m_W$ region, reaching between $\SI{-70}{\percent}$ and $\SI{-90}{\percent}$ at $p_T=\SI{2}{\TeV}$. This is the main effect we discuss in this work, and it is referred to as Sudakov suppression. To explicitly confirm that this behaviour originates from virtual contributions of EW nature, we compare the Sudakov LO+NLL curve to the NLO EW and EW$_\text{virt}$\xspace approximations. Indeed, the latter only takes into account virtual corrections and the minimal set of integrated counter-terms needed to render the cross section finite.
The Sudakov approximation is close to the EW$_\text{virt}$\xspace result for both centre-of-mass energies (see also the third panel), with deviations of the order of a few percent. It begins to deviate more at the end of the spectrum, with similar behaviour observed for both collider setups. However, with a Sudakov suppression of $\SI{70}{\percent}$ and more, we are already in a regime where the relative corrections become $\mathcal{O}(1)$ and a fixed-logarithmic-order description becomes invalid. In fact, at $p_{T,W} \gtrsim \SI{3}{\TeV}$, the LO+NLL prediction becomes negative both at $\sqrt s = \SI{13}{\TeV}$ and \SI{100}{\TeV}, the same being true for the EW$_\text{virt}$\xspace approximation. Note, in particular, that this is the main reason for choosing to show the $p_T$ distribution only up to $p_T = \SI{2}{\TeV}$ in Fig.~\ref{fig:ww} for both collider setups. It is also clear from the second panel that for both setups the full NLO EW calculation does not show such large suppressions, and in the context of the question of whether to use EW Sudakov logarithms or EW$_\text{virt}$\xspace to approximate the full corrections, this may be worrisome. However, in the full calculation we have included the real emission matrix elements, which also show a logarithmic enhancement at high $p_T$, as e.g.\ discussed in~\cite{Baglio:2013toa}. In this case, the real emission contribution almost entirely cancels the Sudakov suppression at $\sqrt{s}=\SI{13}{\TeV}$, and even overcompensates it by about \SI{20}{\percent} in the $p_{T,W}\lesssim\SI{1}{\TeV}$ region for $\sqrt{s}=\SI{100}{\TeV}$. To show that this is the case, we have also included a jet-vetoed NLO EW simulation, which indeed again shows the expected high-$p_T$ suppression. Moreover, we have included a prediction labelled ``LO+NLL (resum)'', where we exponentiate the Sudakov contribution using Eq.~\eqref{eq:kfactor_resum}.
It is similar to the NLL approximation, but resumming these logarithms leads to a smaller suppression in the large $p_T$ tail, reduced by about \SI{20}{\percent} at $p_T=\SI{2}{\TeV}$, thus increasing the range of validity compared to the non-exponentiated prediction. Moreover, this agrees qualitatively well with the NNLL result reported in~\cite{Kuhn:2011mh}, and suggests that even higher-order logarithmic effects should be rather small in comparison in the considered observable range. Lastly, we compare the individual LL and NLL contributions. As expected, we find that the double logarithmic $\overline{\text{LSC}}$ term is the largest contribution and drives the Sudakov suppression. Some single logarithmic terms, in particular the $\overline{\text{C}}$ terms, also give a sizeable contribution, reducing the net suppression. This confirms that the inclusion of single logarithmic terms is needed in order to provide accurate predictions in the Sudakov approximation, with deviations of $\mathcal{O}(\SI{10}{\percent})$ with respect to the EW$_\text{virt}$\xspace calculation. Comparing the individual contributions for the two collider energies, we see qualitatively similar effects. As can be expected from a larger admixture of $b\bar b$ initial states, the $\Delta^\text{Yuk}$ contribution is enhanced at the larger collider energy. \subsection{EW-induced dijet production in $pp$ collisions at 13 TeV} For the second comparison in this section, we simulate purely EW dijet production in hadronic collisions at $\sqrt s = \SI{13}{\TeV}$ at the Born-level perturbative order $\mathcal{O}(\alpha^2\alpha_s^0)$. As for the case of diboson production, we add photon-initiated channels, and we also include photons in the set of final-state partons, such that $\gamma\gamma$ and $\gamma j$ production is also part of our sample. Partons are thus clustered into jets which are then sorted by their $p_T$.
We select events by requiring at least two jets, with the leading jet (in $p_T$) required to have $p_T > \SI{60}{\GeV}$ and the subleading jet $p_T > \SI{30}{\GeV}$. Note in particular that for all but the real emission case in the full NLO EW simulation, this corresponds to imposing a $p_T$ cut on the two generated partons. The renormalisation and factorisation scales are set to $\mu_{R,F}^2 = \hat{H}^2_T = (\sum_i p_{T,i})^2$, where the sum runs over final-state partons. We compare our LO+NLL EW results, as in the previous subsection, with the LO, the NLO EW and the EW$_\text{virt}$ predictions. NLO EW corrections for dijet production were first discussed in~\cite{Dittmaier:2012kx,Frederix:2016ost}, while~\cite{Reyer:2019obz} discusses those corrections in the context of the 3-to-2-jet ratio $R_{32}$. Here, we only consider EW corrections to the Born process described above, i.e.\ we consider the $\mathcal{O}(\alpha_s^0\alpha^3)$ contributions, which is a subset of the contributions considered in the above references. In Fig.~\ref{fig:ew_dijets_and_Z4jets} (left), we present the transverse momentum distribution of the leading jet given by the LO+NLL EW calculation, again both at fixed order and resummed, see Eqs.~\eqref{eq:kfactor_fixed_order} and~\eqref{eq:kfactor_resum}. As for $W$ pair production, the plot is divided into four panels. However, this time we do not include jet-vetoed NLO results, as in this case we do not observe large real emission contributions. The panels below the main plot give the ratio to the LO calculation, the ratio to the EW$_\text{virt}$\xspace approximation, and the ratio of each NLL contribution to the LO calculation. Compared to $W$ pair production, we observe a smaller but still sizeable Sudakov suppression, reaching approximately $-\SI{40}{\percent}$ at $p_T=\SI{2}{\TeV}$.
The NLO contributions not included in the NLO EW$_\text{virt}$ (i.e.\ mainly real emission terms) are small and cancel the Sudakov suppression only by a few percent, and as such both the Sudakov and the EW$_\text{virt}$\xspace approximations agree well with the full NLO calculation throughout the $p_T$ spectrum. The same is true for the resummed NLL case, which gives a Sudakov suppression of $\SI{-30}{\percent}$ for $p_T=\SI{2}{\TeV}$. Note that a small step can be seen for $p_T\sim m_W$ in the Sudakov approximation. The reason for this is that we force, during the simulation, all Sudakov contributions to be zero when at least one of the invariants formed by the scalar product of the external momenta is below the $W$ mass, as Sudakov corrections are technically only valid in the high-energy limit. This threshold behaviour can be disabled, giving a smoother transition between the LO and the LO+NLL, as can be seen for example in Fig.~\ref{fig:ww}. Comparing the individual NLL contributions we find again that the double logarithmic $\overline{\text{LSC}}$ term is the largest contribution; its size is, however, reduced by a third compared to the diboson case, since the prefactor $C^\text{ew}$ in Eq.~\eqref{eq:LSC_coeff} is smaller for quarks and photons compared to $W$ bosons. Among the single logarithmic terms the $\overline{\text{C}}$ and PR terms are the most sizeable over the whole range, and are of a similar size compared to the diboson case, while the SSC terms give only a small contribution. Again, subleading terms must be included to approximate the NLO calculation at the observed \SI{10}{\percent} level, although in this case we observe an almost complete accidental cancellation of the SL terms.
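The interplay between the fixed-order and resummed $K$ factors compared above can be illustrated with a short sketch. It assumes that the resummed variant of Eq.~\eqref{eq:kfactor_resum} simply exponentiates the sum of the per-event Sudakov weights $\Delta^c$ entering the fixed-order $K$ factor of Eq.~\eqref{eq:kfactor_fixed_order}; the numerical weights below are purely illustrative:

```python
import math

def k_factor_fixed_order(weights):
    """Fixed-order NLL K factor: K = 1 + sum of the per-event
    Sudakov weights Delta^c (LSC, SSC, C, PR, ...)."""
    return 1.0 + sum(weights)

def k_factor_resummed(weights):
    """Resummed variant: the same sum, exponentiated."""
    return math.exp(sum(weights))

# Illustrative (hypothetical) weights for a high-pT configuration:
# a large negative double log, partly cancelled by single logs.
deltas = {"LSC": -0.60, "SSC": 0.05, "C": 0.15, "PR": 0.05}
k_fo = k_factor_fixed_order(deltas.values())    # 1 - 0.35 = 0.65
k_res = k_factor_resummed(deltas.values())      # exp(-0.35) ~ 0.70
```

Once the summed correction becomes $\mathcal{O}(1)$, the fixed-order $K$ factor can turn negative while the exponentiated one stays positive, mirroring the behaviour noted above for $p_{T,W}\gtrsim\SI{3}{\TeV}$.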
\subsection{Off-shell $Z$ production in association with 4 jets in $pp$ collisions at 13 TeV} \label{neat-example} As a final example, we present for the first time the LO+NLL calculation for $e^+ e^-$ production in association with four additional jets from proton-proton collisions at $\sqrt s = \SI{13}{\TeV}$. The process is one of the key benchmark processes at the L\protect\scalebox{0.8}{HC}\xspace for precision tests of perturbative QCD and is a prominent background constituent for several Standard Model and New Physics processes. In this case, we neglect photon-induced contributions, to better compare to the NLL effects in $Z$ plus one-jet production presented in App.~\ref{app:vj}, which in turn is set up as a direct comparison to~\cite{Kuhn:2004em}. For the same reason, we only consider QCD partons in the final state. The factorisation and renormalisation scales are set to $\mu_F^2 = \mu_R^2 = \hat{H}_T^{\prime 2} = (M_{T,\ell\ell} + \sum_i p_{T,i})^2/4$, where $M_{T,\ell\ell}$ is the transverse mass of the electron-positron pair and the sum runs over the final-state partons. This choice is inspired by~\cite{Anger:2017nkq}, where the full next-to-leading order QCD calculation is presented. \begin{figure}[!b] \centering \includegraphics[width=0.49\linewidth]{figures/ew_dijet_jet_pT_1.pdf} \hfill \includegraphics[width=0.50\linewidth]{figures/Z4j_Z_pT.pdf} \caption{The transverse momentum of the leading jet in EW-induced dijet production in proton-proton collisions (including photon channels), and of the reconstructed $Z$ boson in $e^+ e^-$ plus four jets production. For the dijet production, LO and NLO calculations are shown, whereas for the $Z$ plus jets production only the LO is shown. These baseline calculations are compared with the results of the LO+NLL calculation, both at fixed order and resummed. In the dijet case, the virtual approximation EW$_\text{virt}$\xspace is shown in addition.
The ratio plots show the ratios to the LO and the EW$_\text{virt}$\xspace calculations, and the relative size of each NLL contribution.} \label{fig:ew_dijets_and_Z4jets} \end{figure} As we consider here an off-shell $Z$, the invariant mass formed by its decay products is on average only slightly above the $W$ mass threshold. This may cause an issue, as this is one of the invariants considered in the definition of the high-energy limit discussed in Sec.~\ref{sec:imple}, and therefore this limit is strictly speaking not fulfilled. In turn, this can introduce sizeable logarithms, in particular in Eq.~\eqref{eq:deltaSSC}. However, in practice we only see a small number of large $K$ factors, which contribute only a negligible fraction of the overall cross section. For set-ups at a larger centre of mass energy, one should monitor this behaviour closely, as the average value of $s$ in Eq.~\eqref{eq:deltaSSC} would increase too. We therefore foresee that a more careful treatment might be required then, e.g.\ by vetoing EW Sudakov $K$ factors whenever $|r_{kl}| \ll s$. In Fig.~\ref{fig:ew_dijets_and_Z4jets} (right), we present the LO+NLL EW calculation of the transverse momentum distribution of the reconstructed $Z$ boson. To our knowledge, there is no existing NLO EW calculation to compare it against\footnote{Note that while such a calculation has not yet been published, existing tools, including \SHERPA in conjunction with OpenLoops~\cite{Buccioni:2019sur}, are in principle able to produce such a set-up.}, hence we do not have a ratio-to-NLO plot in this case. However, we do show the ratio-to-NLL plot and the plot that shows the different contributions of the LO+NLL calculation, as for the other processes. The sizeable error bands give the MC errors of the LO calculation.
Note that the errors of the LO and the LO+NLL calculation are fully correlated, since the NLL terms are completely determined by the phase-space point of the LO event, and the same LO event samples are used for both predictions. Hence the reported ratios are in fact very precise. With the aim of additionally making a comparison to the $Z+j$ result reported in Fig.~\ref{fig:vj} (right panel), we opt for a linear $x$ axis, in contrast with the other results of this section. In both cases we see a similar overall LO+NLL effect, reaching approximately $\SI{-40}{\percent}$ for $p_T\lesssim\SI{2}{\TeV}$. This in turn implies that considering an off-shell $Z$ (as opposed to an on-shell decay) as well as adding further jets has very little effect on the size of the Sudakov corrections. Finding similar EW corrections for processes that only differ in their number of additional QCD emissions can be explained by the fact that the sum of EW charges of the external lines is equal in this case. As has recently been noted in~\cite{Brauer:2020kfv}, this can be deduced from the general expressions for one-loop corrections in~\cite{Denner:2000jv} and from soft-collinear effective theory~\cite{Chiu:2008vv,Chiu:2009ft}. Although the overall effect for $Zj$ and $Z+4j$ is found to be very similar here, the individual contributions partly exhibit a different behaviour between the two, with the SSC terms becoming negative in the four-jet case and thus switching sign, and the C terms becoming a few percent smaller. It is in general noticeable that the SSC terms exhibit the strongest shape differences among all processes considered in this study. Finally, similarly to the previously studied cases, the resummed result gives a slightly reduced Sudakov suppression, reaching approximately $\SI{-30}{\percent}$ for $p_T\lesssim\SI{2}{\TeV}$, implying that in this case higher-order logarithmic contributions should be small.
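The observation that $Zj$ and $Z+4j$ receive nearly identical corrections can be made plausible with a minimal numerical sketch of the leading (LSC-type) double logarithm, $-(\alpha/4\pi)\sum_i C^{\rm ew}_i\,\log^2(s/M_W^2)$, summed over the external legs. The EW Casimir values below are purely illustrative placeholders; the only structural input is that gluons carry no EW charge, so additional QCD legs do not change the sum:

```python
import math

ALPHA = 1.0 / 128.0   # EW coupling at the weak scale (assumption)
MW = 80.4             # W mass in GeV

def delta_ll(sqrt_s, cew_legs):
    """Leading (LSC-type) Sudakov double logarithm for one hard
    configuration: -(alpha / 4 pi) * sum_i C_i^ew * log^2(s / M_W^2),
    summed over the EW Casimirs of the external legs."""
    big_log = math.log(sqrt_s**2 / MW**2)
    return -ALPHA / (4.0 * math.pi) * sum(cew_legs) * big_log**2

# Hypothetical EW Casimir values; only their pattern matters here.
cew_quark, cew_lepton, cew_gluon = 1.3, 1.2, 0.0
z1j = [cew_quark] * 3 + [cew_lepton] * 2     # quark legs plus e+ e-
z4j = z1j + [cew_gluon] * 3                  # three extra gluon legs
# Gluons carry no EW charge: the double-log estimate is identical.
```

With any such charge assignment the estimate is the same for the two processes, while its magnitude grows with the hard scale.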
\section{Conclusions} \label{sec:conclusions} We have presented for the first time a complete, automatic and fully general implementation of the algorithm presented in~\cite{Denner:2000jv} to compute double and single logarithmic contributions of one-loop EW corrections, dubbed EW Sudakov logarithms. These corrections can give rise to large shape distortions in the high-energy tail of distributions, and are therefore important for improving the accuracy of predictions in this region. Sudakov logarithms can provide a good approximation of the full next-to-leading order EW corrections. An exponentiation of the corrections can be used to resum the logarithmic effects and extend the region of validity of the approximation. In our implementation, each term contributing to the Sudakov approximation is returned to the user in the form of an additional weight, such that the user can study them individually, add their sum to the nominal event weight, or exponentiate them first. Our implementation will be made available with the upcoming major release of the \SHERPA Monte Carlo event generator. We have tested our implementation against an array of existing results in the same approximation, and for a variety of processes against full NLO EW corrections and the virtual approximation EW$_\text{virt}$\xspace, which is also available in \SHERPA. A selection of such tests is reported in this work, where we see that EW Sudakov logarithms indeed give rise to large contributions and model the full NLO corrections well when real emissions are small. We stress that our implementation is not limited to the examples shown here, but automatically computes such corrections for any Standard Model process. In terms of final-state multiplicity it is only limited by the available computing resources.
In a future publication we plan to apply the new implementation in the context of state-of-the-art event generation methods, in particular to combine them with the matching and merging of higher-order QCD corrections and the QCD parton shower, while also taking into account logarithmic QED corrections using the YFS or QED parton shower implementation in \SHERPA. This will allow for an automated use of the method for the generation of event samples in any L\protect\scalebox{0.8}{HC}\xspace or future collider context, in the form of optional additive weights. As discussed towards the end of Sec.~\ref{sec:imple}, we foresee that this is a straightforward exercise, since there is no double counting among the different corrections, and applying differential K factors correctly within these methods is already established within the \SHERPA framework. The result should also allow for the inclusion of subleading Born contributions, as the EW$_\text{virt}$\xspace scheme in \SHERPA does. The automated combination of (exponentiated) EW Sudakov logarithms with fixed-order NLO EW corrections is another possible follow-up, given the presence of phenomenologically relevant applications. \section*{Acknowledgements} We wish to thank Stefano Pozzorini for the help provided at various stages with respect to his original work, as well as Marek Sch\"onherr, Steffen Schumann, Stefan H\"oche and Frank Krauss and all our \SHERPA colleagues for stimulating discussions, technical help and comments on the draft. We also thank Jennifer Thompson for the collaboration in the early stage of this work. The work of DN is supported by the ERC Starting Grant 714788 REINVENT. \clearpage
\section*{Acknowledgments} We are grateful to Weiyao Ke, Guang-You Qin, Yasuki Tachibana and Daniel Pablos for helpful discussions. We thank Wei Chen, Yayun He, Tan Luo and Yuanyuan Zhang for their collaboration on many results presented in this review. This work is supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, of the U.S. Department of Energy under grant Nos. DE-SC0013460 and DE-AC02-05CH11231, the National Science Foundation (NSF) under grant Nos. ACI-1550300 and ACI-1550228 within the framework of the JETSCAPE Collaboration, and the National Natural Science Foundation of China (NSFC) under grant Nos. 11935007, 11221504 and 11890714. \section*{References} \bibliographystyle{iopart-num} \section{Introduction} \label{sec:introduction} It has been established through quantum chromodynamics (QCD) calculations on the lattice \cite{Ding:2015ona} that strongly interacting matter governed by QCD under extreme conditions of high temperature and density will go through a transition from a hadronic resonance gas to a quark-gluon plasma (QGP). The transition is a rapid crossover for matter with nearly zero net baryon density. Many theoretical model studies indicate the transition becomes first order at high baryon density~\cite{BraunMunzinger:2008tz}, which is yet to be confirmed through first-principles calculations with lattice QCD. While the experimental search for the first-order phase transition and the existence of a critical endpoint (CEP) is being carried out in the beam energy scan (BES) program at the Relativistic Heavy-Ion Collider (RHIC) \cite{Bzdak:2019pkr}, many of the current experimental efforts at RHIC and the Large Hadron Collider (LHC) focus on exploring and extracting properties of the QGP with negligible baryon density formed in high-energy heavy-ion collisions \cite{Gyulassy:2004zy,Muller:2012zq}.
These include the extraction of bulk transport properties such as the shear and bulk viscosities of the QGP through the study of collective phenomena~\cite{Romatschke:2009im,Braun-Munzinger:2015hba} and the jet transport coefficient through jet quenching~\cite{Majumder:2010qh,Qin:2015srf}. All of these efforts involve systematic comparisons of experimental data and theoretical calculations with ever more realistic simulations of the dynamical evolution of the QGP in high-energy heavy-ion collisions. Jet quenching is a collection of phenomena in high-energy heavy-ion collisions caused by interactions between energetic jet partons and the QGP medium~\cite{Bjorken:1982tu,Gyulassy:1990ye}. Because of the large transverse momentum transfer in the hard processes, the cross section of the initial jet production can be calculated with perturbative QCD (pQCD), which has been shown to agree with experimental data on jet production in proton-proton (p+p) collisions~\cite{Eskola:1995cj}. These pQCD calculations of the jet production rate can be extended to proton-nucleus (p+A) collisions within collinear factorized pQCD and agree with experimental data after the nuclear modification of the parton distribution functions is taken into account~\cite{Albacete:2013ei}. Such calculations for nucleus-nucleus (A+A) collisions can be used as baselines for the initial jet production, against which the medium modification of jet production due to jet quenching can be obtained and compared to experimental data. These include suppression of single hadron, dihadron and $\gamma$-hadron spectra as well as single jet, dijet and $\gamma$-jet spectra. Shortly after its initial production, a hard jet parton has to propagate through the dense QGP medium formed in heavy-ion collisions.
During the propagation, it will go through multiple interactions with thermal partons in the QGP and experience both energy loss and transverse momentum broadening before hadronization, leading to jet quenching or suppression of the final jet and hadron spectra. For energetic partons, the energy loss is dominated by radiative processes \cite{Gyulassy:1993hr} and is directly proportional to the jet transport coefficient \cite{Baier:1996sk}, which is defined as the averaged transverse momentum broadening squared per unit length of propagation, \begin{equation} \label{qhat} \hat q_a=\sum_{b,(cd)} \int{dq_{\bot }^{2} \, \frac{d\sigma_{ab\rightarrow cd}}{dq_{\bot }^2}}\rho_b \, q_{\bot }^{2}, \end{equation} where $\sigma_{ab\rightarrow cd}$ is the partonic scattering cross section between the jet parton $(a)$ and the medium parton $(b)$ with local density $\rho_b$ (which should contain a degeneracy factor or number of degrees of freedom) and the sum is over the flavor of the thermal parton ($b$) and all possible scattering channels $a+b\rightarrow c+d $ for different flavors of the final partons ($c$, $d$). This jet transport coefficient can be defined for a jet parton propagating in a non-thermal medium and can be related to the general gluon distribution density of the QGP medium~\cite{CasalderreySolana:2007sw} and used to characterize a variety of properties of the QGP at finite temperature~\cite{Majumder:2007zh} as well as cold nuclear matter~\cite{Osborne:2002st}. Jet quenching can therefore be used as a hard probe of the QGP properties in high-energy heavy-ion collisions.
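For a rough feel of the magnitudes involved, Eq.~\eqref{qhat} can be evaluated for a quark jet using the Debye-screened small-angle cross section of Eq.~\eqref{eq-small-el} and ideal-gas thermal parton densities. The sketch below is only an order-of-magnitude estimate; the fixed coupling $\alpha_{\rm s}=0.3$ and the kinematic cutoff $q^2_{\rm max}=3ET$ are assumptions made here purely for illustration:

```python
import math

ZETA3 = 1.2020569     # Riemann zeta(3)
ALPHA_S = 0.3         # fixed strong coupling (assumption)

def qhat_over_T3(E, T):
    """Order-of-magnitude estimate of qhat/T^3 for a quark jet:
    the qhat integral with the Debye-screened small-angle cross
    section d(sigma)/d(qT^2) = C_ab 2 pi alpha_s^2/(qT^2 + muD^2)^2,
    integrated up to an assumed cutoff qmax^2 = 3 E T (E, T in GeV)."""
    mu2 = 6.0 * math.pi * ALPHA_S * T**2        # Debye mass^2, 3 flavours
    qmax2 = 3.0 * E * T                         # assumed kinematic cutoff
    # closed form of  int_0^{qmax2} dq2 q2 / (q2 + mu2)^2
    integral = math.log((qmax2 + mu2) / mu2) - qmax2 / (qmax2 + mu2)
    rho_g = 16.0 * ZETA3 / math.pi**2 * T**3    # thermal gluon density
    rho_q = 27.0 * ZETA3 / math.pi**2 * T**3    # quarks+antiquarks, 3 flavours
    c_qg, c_qq = 1.0, 4.0 / 9.0                 # colour factors for a quark jet
    return (2.0 * math.pi * ALPHA_S**2
            * (c_qg * rho_g + c_qq * rho_q) * integral / T**3)

# E = 10 GeV, T = 0.4 GeV gives qhat/T^3 of order a few, the
# ballpark of the extraction shown in Fig. 1.
```

The estimate grows only logarithmically with the jet energy, through the upper limit of the $q_\perp^2$ integration.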
Since the first discovery of the jet quenching phenomena at RHIC~\cite{Gyulassy:2004zy,Wang:2004dn} and the confirmation at the LHC~\cite{Jacobs:2004qv,Muller:2012zq}, the focus of jet quenching studies has shifted to the precision extraction of jet transport coefficients~\cite{Eskola:2004cr,Armesto:2009zi,Chen:2010te} through systematic comparisons of experimental data with theoretical calculations and phenomenological analyses that incorporate up-to-date theories of parton propagation and interaction with the dense medium, the state-of-the-art model for the dynamical evolution of the QGP medium and modern statistical analysis tools. Such a systematic study of the suppression of the single hadron spectra in heavy-ion collisions at both RHIC and the LHC has been carried out by the JET Collaboration~\cite{Burke:2013yra}. The extracted jet transport coefficient in the range of the initial temperature achieved at RHIC and the LHC, as shown in Fig.~\ref{fig:qhat}, is about 2 orders of magnitude larger than that inside a cold nucleus, indicating the extremely high temperature and density of the QGP formed in high-energy heavy-ion collisions. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{JET-qhat} \caption{(Color online) The scaled jet transport parameter $\hat q/T^3$ as a function of the initial temperature $T$ for an initial quark jet with energy $E=10$~GeV at the center of the most central A+A collisions at an initial time $\tau_0=0.6$~fm/$c$ extracted by the JET Collaboration from the experimental data on hadron suppression.
See Ref.~\cite{Burke:2013yra} for details.} \label{fig:qhat} \end{figure} While further phenomenological studies on jet quenching in the final hadron spectra will continue to shed light on the detailed properties of the QGP medium, in particular with the combined experimental data on hadron spectra and azimuthal anisotropy~\cite{Kumar:2017des,Shi:2018vys,Andres:2019eus}, the experimental data on fully reconstructed jets available at RHIC and the LHC will provide unprecedented opportunities to explore the QGP properties with jet quenching. The large rate of jet production in a wide range of transverse momentum at the LHC will allow studies on the energy dependence of jet quenching and the jet transport coefficient in correlation with many other aspects of heavy-ion collisions, such as the initial geometry through the final-state multiplicity and the bulk hadron azimuthal anisotropy, the energy flow of associated soft hadrons and the flavor tagging. The medium modification of jet substructures such as the jet transverse profile and the jet fragmentation function can further elucidate the QGP properties through parton-medium interaction and parton transport. Since fully reconstructed jets are collections of collimated clusters of hadrons within a given jet cone, they consist of not only leading hadrons from the fragmentation of jet partons but also hadrons from the bulk medium within the jet cone that become correlated with jets through jet-medium interactions. Therefore, the medium modification of jets in heavy-ion collisions is not only determined by the energy loss of leading partons but also influenced by the transport of the energy lost by jet partons in the dynamically evolving medium through radiated gluons and recoil partons in the form of jet-induced medium excitation. It is therefore necessary to include the transport of recoil partons in the study of jet quenching with the full jet suppression and medium modification.
This will be the focus of this review on the recent developments of jet quenching studies. The outline of this review is as follows. We will first give a brief review and update on theories of parton propagation and medium-induced energy loss in Sec.~\ref{sec:theory} in a generalized high-twist framework, followed by a review of models of jet evolution in the QGP medium in Sec.~\ref{sec:models} with emphasis on the multi-scale nature of the evolution processes. Then, we will review the description of the jet-induced medium excitation through the transport of recoil partons and the hydrodynamic response to jet-medium interaction in Sec.~\ref{sec:medium_response}. Effects of jet quenching and medium response on hadron and jet spectra and jet substructures will be discussed in Secs.~\ref{sec:hadron_spectra}, \ref{sec:jet_spectra} and~\ref{sec:jet_substructures}. A summary and outlook will be given in Sec.~\ref{sec:summary}. \section{Parton energy loss in QCD medium} \label{sec:theory} During the propagation of an energetic parton inside a QCD medium, it will experience interaction with constituents of the medium involving both elastic and inelastic processes. Within pQCD, one can calculate both elastic and inelastic energy loss of the propagating parton inside the QCD medium when it interacts with the medium partons. 
\subsection{Elastic energy loss} The elastic energy loss can be calculated from the elastic scattering cross section~\cite{Bjorken:1982tu,Thoma:1990fm}, \begin{equation} \frac{dE^a_{\rm el}}{dx}=\int\frac{d^3k}{(2\pi)^3}\sum_{b,(cd)} \gamma_bf_b(k) \int{dq_{\bot }^{2} \, \frac{d\sigma_{ab\rightarrow cd}}{dq_{\bot }^2}} \, \nu, \end{equation} where $f_b(k)$ is the phase-space distribution (Fermi-Dirac for quarks and Bose-Einstein for gluons) for thermal partons with degeneracy $\gamma_b$, and $\nu$ is the energy transfer from the jet parton to the thermal medium parton which depends on the propagating parton energy $E$, the thermal parton energy $\omega$ and the transverse momentum transfer $q_\perp$. In the high energy limit $E\gg T$, the $t$-channel gluon and quark scattering cross sections can be approximated by their small-angle limits \begin{equation} \frac{d\sigma_{ab}}{dq_\perp^2} = 2\frac{C_2(a)C_2(b)}{N_c^2-1}\frac{2\pi\alpha_\mathrm{s}^2}{(q_\perp^2+\mu^2_{\rm D})^2}, \label{eq-small-el} \end{equation} where $C_2$ is the quadratic Casimir factor of parton $a$/$b$ -- $C_2(q)=C_F=(N_c^2-1)/2N_c=4/3$ for quarks and $C_2(g)=C_A=N_c=3$ for gluons. The color factors $C_{ab}= 2C_2(a)C_2(b)/(N_c^2-1)$ are $C_{gg}=9/4$, $C_{qg}=1$ and $C_{qq}=4/9$ for gluon-gluon, gluon-quark and quark-quark scattering, respectively. This cross section is divergent at small angle or transverse momentum, which can be regularized by the Debye screening mass $\mu_{\rm D}^2=6\pi\alpha_{\rm s} T^2$ in a medium with 3 flavors of massless quarks. Under the small-angle approximation for the parton-parton scattering, $\nu\approx q_\perp^2$, the elastic parton energy loss is \begin{equation} \frac{dE^a_{\rm el}}{dx}\approx C_2(a)\frac{3\pi}{2}\alpha_{\rm s}^2 T^2\ln\left(\frac{2.6ET}{4\mu_{\rm D}^2}\right).
\end{equation} \subsection{Inelastic parton energy loss} The first calculation of radiative parton energy loss was attempted by Gyulassy and Wang~\cite{Gyulassy:1993hr,Wang:1994fx} within a static potential model of multiple parton interaction for an energetic parton in a thermal medium. However, an important contribution from multiple interactions between the gluon cloud and the medium was not taken into account in this first study; it turns out to be the most important contribution in the soft radiation limit. The first calculation that took such interactions into account was by Baier-Dokshitzer-Mueller-Peigne-Schiff (BDMPS)~\cite{Baier:1994bd,Baier:1996sk,Baier:1996kr} in the limit of soft gluon radiation. Later on, Zakharov, using the path-integral formalism~\cite{Zakharov:1996fv}, as well as Gyulassy-Levai-Vitev (GLV)~\cite{Gyulassy:1999zd,Gyulassy:2000fs} and Wiedemann~\cite{Wiedemann:2000za}, using the opacity expansion method, also calculated the radiative energy loss for a fast parton in a QCD medium. All these calculations modeled the medium as a collection of static scattering centers, as in the first study by Gyulassy and Wang~\cite{Gyulassy:1993hr}. A separate method to calculate parton evolution and energy loss in a thermal QGP medium was developed by Arnold, Moore and Yaffe (AMY)~\cite{Arnold:2002ja} within the framework of hard-thermal-loop resummed pQCD at finite temperature. In the high-twist (HT) approach~\cite{Guo:2000nz,Wang:2001ifa,Zhang:2003yn,Schafer:2007xh}, the twist-expansion technique was used in a collinear factorization formalism to calculate the medium-induced gluon spectrum and parton energy loss. Within this approach the information about the medium is embedded in the high-twist parton correlation matrix elements. The relations between these different studies of parton propagation and energy loss have been discussed in detail in Refs.~\cite{Arnold:2008iy,CaronHuot:2010bp,Mehtar-Tani:2019tvy} and numerically compared in Ref.~\cite{Armesto:2011ht}.
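For orientation on magnitudes, the closed-form elastic estimate of the previous subsection is straightforward to evaluate. In the sketch below, the fixed coupling $\alpha_{\rm s}=0.3$ is an assumption, and $\hbar c\simeq 0.1973$~GeV\,fm converts the result to GeV/fm:

```python
import math

ALPHA_S = 0.3          # fixed strong coupling (assumption)
HBARC = 0.1973         # GeV fm, converts GeV^2 to GeV/fm
CF, CA = 4.0 / 3, 3.0  # quadratic Casimirs for quark and gluon

def debye_mass2(T):
    """Debye screening mass squared for 3 massless flavours."""
    return 6.0 * math.pi * ALPHA_S * T**2

def dedx_elastic(E, T, casimir):
    """Small-angle elastic energy loss per unit length (in GeV^2),
    dE/dx = C_2 (3 pi / 2) alpha_s^2 T^2 ln(2.6 E T / 4 mu_D^2),
    valid in the high-energy limit E >> T (E, T in GeV)."""
    log_arg = 2.6 * E * T / (4.0 * debye_mass2(T))
    return casimir * 1.5 * math.pi * ALPHA_S**2 * T**2 * math.log(log_arg)

# A 20 GeV quark in a T = 0.4 GeV plasma loses of the order of
# 1 GeV/fm elastically; a gluon loses exactly C_A/C_F = 9/4 more,
# since the Casimir enters only as an overall prefactor.
quark_loss = dedx_elastic(20.0, 0.4, CF) / HBARC   # GeV/fm
gluon_loss = dedx_elastic(20.0, 0.4, CA) / HBARC
```

The Casimir scaling between quark and gluon jets is exact in this formula, while the energy dependence is only logarithmic.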
In the latest study, referred to as the SCET\textsubscript{G} formalism~\cite{Ovanesyan:2011xy,Ovanesyan:2011kn}, the soft collinear effective theory (SCET) is supplemented with the Glauber modes of the gluon field for the interaction between a fast parton and static scattering centers. In this subsection, we will briefly review the calculation of induced gluon spectra and parton energy loss within the high-twist framework and its connection to the results in the opacity expansion approach. \subsubsection{Vacuum bremsstrahlung} Consider the semi-inclusive deeply inelastic scattering (SIDIS) of a lepton off a large nucleus, \begin{equation} e(l_1) + A(p) \rightarrow e(l_2) + h(l_h) +\mathcal{X}. \nonumber \end{equation} The cross section of this SIDIS process can be expressed as \begin{equation} d \sigma = \frac{e^4}{2s}\frac{\sum_q e_q^2}{q^4}\int \frac{d^4 l_2}{(2\pi)^4} 2\pi \delta(l_2^2) L_{\mu\nu} W^{\mu\nu}, \end{equation} where $s = (l_1 + p)^2$ is the invariant center-of-mass energy squared for the lepton-nucleon scattering, $p$ is the four-momentum per nucleon in the large nucleus with atomic number $A$ and $q$ is the four-momentum of the intermediate virtual photon. The leptonic tensor is \begin{equation} L_{\mu\nu} = \frac{1}{2}{\rm Tr}[\gamma\cdot l_1 \gamma_{\mu} \gamma\cdot l_2 \gamma_{\nu} ], \end{equation} and the semi-inclusive hadronic tensor is, \begin{equation} \begin{split} E_{l_h} \frac{d W^{\mu\nu}}{d^3 l_h} &=\int d^4y e^{-iq\cdot y} \nonumber \\ &\times \sum_\mathcal{X} \langle A| J^{\mu}(y)|\mathcal{X}, h \rangle \langle h, \mathcal{X}|J^{\nu}(0)|A\rangle, \end{split} \end{equation} where $J^{\mu}(0)=\sum_q\bar{\psi}_q(0)\gamma^{\mu}\psi_q(0)$ is the hadronic vector current. The four-momentum of the virtual photon and the initial nucleon are $q=[-Q^2/2q^-, q^-,\vec{0}_{\perp}]$ and $p=[p^+,0,\vec{0}_{\perp}]$, respectively.
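The light-cone conventions just introduced are easy to check explicitly. The sketch below (with arbitrary numerical values) verifies that the photon momentum given above is spacelike with $q^2=-Q^2$, and evaluates the Bjorken variable $x_{\rm B}=Q^2/2p^+q^-$ used in the following:

```python
def lc_dot(a, b):
    """Minkowski product in light-cone coordinates [a+, a-, aT],
    with the convention a.b = a+ b- + a- b+ - aT bT (1-d transverse
    placeholder for simplicity)."""
    return a[0] * b[1] + a[1] * b[0] - a[2] * b[2]

# Arbitrary illustrative values in GeV units.
Q2, qminus, pplus = 4.0, 10.0, 5.0
q = (-Q2 / (2.0 * qminus), qminus, 0.0)   # virtual photon momentum
p = (pplus, 0.0, 0.0)                     # target nucleon momentum

assert abs(lc_dot(q, q) + Q2) < 1e-12     # q^2 = -Q^2 (spacelike)
x_B = Q2 / (2.0 * lc_dot(p, q))           # x_B = Q^2 / (2 p.q) = Q^2 / (2 p+ q-)
```

Since the nucleon has no minus or transverse components, $p\cdot q=p^+q^-$ and the two forms of $x_{\rm B}$ coincide.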
The lowest order (LO) contribution to the semi-inclusive hadronic tensor is from the process as shown in Fig.~\ref{fig:dis1} where a quark from the target nucleus is struck by the virtual photon through a single scattering and fragments into a hadron, \begin{equation} \frac{d W^{\mu\nu}_{S(0)}}{d z_h} = \int dx f_q^A(x) H^{\mu\nu}_{(0)}(x) D_{q\rightarrow h}(z_h), \end{equation} where the nuclear quark distribution function is defined as, \begin{equation} f_q^A(x) = \int \frac{dy^-}{2\pi} e^{-ixp^+y^-} \frac{1}{2} \langle A|\bar{\psi}_q(y^-)\gamma^+ \psi_q(0)|A \rangle , \label{eq:dis0} \end{equation} and the quark fragmentation function is, \begin{eqnarray} D_{q \rightarrow h} (z_h) = & \dfrac{z_h}{2}\sum_{\mathcal{S}}\int \dfrac{dy^+}{2\pi} e^{il_h^-y^+/z_h} \nonumber\\ &\hspace{-0.05in}\times {\rm Tr}\left[\dfrac{\gamma^{-}}{2}\langle 0|\psi(y^+)|h,\mathcal{S}\rangle \langle \mathcal{S},h|\bar{\psi}(0)|0 \rangle\right]. \label{eq:qfrag} \end{eqnarray} The hard partonic part is, \begin{eqnarray} H^{\mu\nu}_{(0)} (x)& = \frac{1}{2} {\rm Tr}[\gamma\cdot p \gamma^{\mu} \gamma \cdot (q+xp)\gamma^{\nu} ] \nonumber \\ &\times \frac{2\pi}{2p^+q^-} \delta(x-x_{\rm B}), \;\; x_{\rm B} = \frac{Q^2}{2p^+q^-}, \end{eqnarray} where $x_{\rm B}$ is the Bjorken variable. The momentum fraction of the struck quark is $x=x_{\rm B}$ in this LO process. The fraction of the light-cone momentum carried by the observed hadron with momentum $l_h$ is $z_h = l_h^-/q^-$.
\begin{figure} \begin{center} \includegraphics[scale=0.85]{dis1} \caption{The lowest order and leading twist contribution to the hadronic tensor in the SIDIS process.} \label{fig:dis1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.75]{dis2} \caption{The next-to-leading order and leading twist contribution to the hadronic tensor in the SIDIS process.} \label{fig:dis2} \end{center} \end{figure} At next-to-leading order (NLO), the struck quark can undergo vacuum gluon bremsstrahlung after the photon-quark scattering, as shown in Fig.~\ref{fig:dis2}. The hard partonic part of this process is \begin{eqnarray} &H^{\mu\nu}_{(1)}(x,p,q,z) = H^{\mu\nu}_{(0)}(x) \frac{\alpha_\mathrm{s}}{2\pi} \int \frac{d l_{\perp}^2}{l_{\perp}^2} P_{q\rightarrow qg}(z), \\ &P_{q\rightarrow qg}(z) = C_F \frac{1+z^2}{1-z}, \end{eqnarray} where $P_{q\rightarrow qg}(z)$ is the quark splitting function and $z = l_q^-/q^-$ is the quark momentum fraction after the gluon bremsstrahlung. Considering hadrons from the fragmentation of both the final quark and gluon, the NLO correction to the hadronic tensor is then \begin{eqnarray} \frac{d W^{\mu\nu}_{S(1)}}{d z_h} = &\int dx f_q^A(x) H^{\mu\nu}_{(0)}(x) \frac{\alpha_\mathrm{s}}{2\pi} \int_{z_h}^{1}\frac{dz}{z} \int \frac{d l_{\perp}^2}{l_{\perp}^2} \\ &\hspace{-0.6in}\times \left[ P_{q\rightarrow qg}(z) D_{q\rightarrow h}(\frac{z_h}{z}) + P_{q\rightarrow qg}(1-z) D_{g\rightarrow h}(\frac{z_h}{z})\right], \nonumber \label{eq:dw1q} \end{eqnarray} where the gluon fragmentation function is defined as, \begin{eqnarray} D_{g \rightarrow h} (z_h) &= - \dfrac{z_h^2}{2l_h^-} \sum_{\mathcal{S}}\int \dfrac{dy^+}{2\pi} e^{il_h^- y^+/z_h} \nonumber \\ &\times \langle 0|F^{-\alpha}(y^+)|h,\mathcal{S} \rangle \langle \mathcal{S},h|F^{-}_{\quad \alpha}(0)|0 \rangle.
\label{eq:gfrag} \end{eqnarray} At NLO, one also has to include the virtual corrections to the hadronic tensor, \begin{eqnarray} \dfrac{dW^{\mu \nu}_{S(v)}}{dz_h} &=& -\int dx f_q^A(x) H^{\mu\nu}_{(0)} (x) \dfrac{\alpha_\mathrm{s}}{2\pi} \nonumber \\ & \times& \int_0^1 dz \int \frac{d l_{\perp}^2}{ l_{\perp}^2} P_{q\rightarrow qg}(z) D_{q \rightarrow h} (z_h) , \end{eqnarray} which can also be obtained from the unitarity requirement on the radiative process. When summed together, the radiative corrections can be organized into the renormalized fragmentation function $D_{q \rightarrow h} (z_h, \mu^2)$ in the hadronic tensor, \begin{equation} \dfrac{dW^{\mu \nu}_{S}}{dz_h} = \int dx f_q^A(x) H^{\mu\nu}_{(0)} D_{q \rightarrow h} (z_h, \mu^2), \end{equation} \begin{eqnarray} &D_{q \rightarrow h} (z_h, \mu^2) = D_{q \rightarrow h} (z_h) + \frac{\alpha_\mathrm{s}}{2\pi} \int_{z_h}^{1} \frac{dz}{z} \int_0^{\mu^2} \frac{d l_{\perp}^2}{l_{\perp}^2} \\ &\hspace{0.1in}\times \left[ P_{q\rightarrow qg}^+(z)D_{q\rightarrow h}(\frac{z_h}{z})+P_{q\rightarrow qg}(1-z) D_{g\rightarrow h}(\frac{z_h}{z}) \right], \nonumber\\ &P_{q\rightarrow qg}^+(z)=C_F\left[\frac{1+z^2}{(1-z)_{+}} + \frac{3}{2}\delta(1-z)\right], \end{eqnarray} where the $+$ function is defined as \begin{equation} \int_0^1 dz \frac{F(z)}{(1-z)_+}\equiv \int_0^1 dz \frac{F(z)-F(1)}{(1-z)}, \end{equation} for any function $F(z)$ which is sufficiently smooth at $z=1$. Note that the above renormalized fragmentation function is free of infrared divergence due to the cancellation between the radiative and virtual corrections and it satisfies the DGLAP equation~\cite{Gribov:1972ri,Dokshitzer:1977sg,Altarelli:1977zs}. \subsubsection{Medium induced gluon spectra} When the initial jet production occurs inside a medium in DIS off a large nucleus or amid the formation of the dense QCD matter in high-energy heavy-ion collisions, the jet parton will have to interact with other partons as it propagates through the medium. 
Such final-state interactions with medium partons lead to both elastic energy loss and inelastic energy loss through induced gluon radiation or quark-antiquark pair production. Induced gluon bremsstrahlung turns out to be the dominant energy loss mechanism. For a parton propagating in a medium with a large path length, one has to consider multiple scatterings and the corresponding induced gluon bremsstrahlung. Taking into account the Landau-Pomeranchuk-Migdal (LPM) interference effect \cite{Landau:1953gr,Migdal:1956tc}, one can calculate the energy loss (per unit length) in the limit of soft gluon radiation, which has a unique energy and length dependence. For a medium with finite length, one can consider an expansion in the opacity of the medium as seen by the propagating parton: the leading-order contribution comes from processes with one secondary scattering after the initial parton production and the interference between zero and two secondary scatterings. This is also equivalent to the twist expansion in the high-twist approach. \begin{figure} \begin{center} \includegraphics[scale=0.75]{dis3} \includegraphics[scale=0.75]{dis4} \caption{Two example Feynman diagrams for gluon radiation induced by double scattering that contribute to the hadronic tensor in the SIDIS process.} \label{fig:dis3} \end{center} \end{figure} Under the opacity or twist expansion, the leading contribution to the medium-induced gluon spectrum is from processes in which the jet parton undergoes a secondary scattering with the medium after its initial production. In DIS off a large nucleus, these correspond to gluon radiation induced by a secondary scattering of the quark with another gluon from a different nucleon inside the nucleus after it is knocked out by the hard photon-quark scattering, as illustrated by the two example Feynman diagrams in Fig.~\ref{fig:dis3}. There are generally two types of processes represented by these Feynman diagrams.
The first one corresponds to final-state gluon radiation induced by the hard photon-quark scattering, after which the final quark or gluon has a secondary scattering with another gluon from the nucleus. In the second type of processes, the gluon radiation is induced by the secondary scattering between the final quark and the nucleus after the initial photon-quark interaction. The interferences between these two types of processes give rise to the LPM interference effect, which suppresses gluon radiation whose formation time is longer than the distance between the hard photon-quark scattering and the secondary scattering. We only illustrate two symmetrical processes with different cuts (left, central and right) in Fig.~\ref{fig:dis3}. One should also consider asymmetrical cut diagrams from interferences of different radiative processes (initial and final radiation from quark and radiation from gluon lines). In recent studies, the radiative gluon spectrum from double scattering in DIS off a large nucleus has been calculated \cite{Zhang:2018kkn,Zhang:2018nie,Zhang:2019toi}, in which both the finite momentum fraction of the gluon and the energy-momentum transfer from the medium are taken into account. Under the static-interaction approximation (neglecting energy-momentum transfer from the medium), the final gluon spectrum induced by a secondary scattering with a medium gluon from a nucleon at $(y^-_1,\vec y_\perp)$ after the initial photon-quark scattering at $(y^-,\vec y_\perp)$ can be expressed as, \begin{eqnarray} \dfrac{d N_g }{d l_{\perp}^2 dz}&= \dfrac{\pi \alpha_\mathrm{s}^2}{N_c}\dfrac{1+(1-z)^2}{z} \int \frac{d y^- d^2y_\perp}{A} \rho_A(y^-,\vec{y}_{\perp}) \nonumber\\ &\hspace{-0.3in}\times \int \dfrac{d ^2k_{\perp}}{(2\pi)^2} \int d y_1^- \rho_A(y_1^-,\vec{y}_{\perp}) \left\lbrace \frac{C_F}{(\vec{l}_{\perp}-z\vec{k}_{\perp})^2} - \frac{C_F}{{l}_{\perp}^2} \right.
\nonumber\\ &\hspace{-0.3in}+C_A \left[ \frac{2}{(\vec{l}_{\perp}-\vec{k}_{\perp})^2} - \frac{\vec{l}_{\perp}\cdot(\vec{l}_{\perp}-\vec{k}_{\perp})}{l_{\perp}^2(\vec{l}_{\perp}-\vec{k}_{\perp})^2} \right. \nonumber\\ &\hspace{-0.3in} \left. - \frac{(\vec{l}_{\perp} - \vec{k}_{\perp})\cdot (\vec{l}_{\perp} -z\vec{k}_{\perp})}{(\vec{l}_{\perp}-\vec{k}_{\perp})^2(\vec{l}_{\perp}-z\vec{k}_{\perp})^2}\right] (1-\cos[x_{\rm LD}p^+y_1^-]) \nonumber \\ &\hspace{-0.3in} \left.+ \frac{1}{N_c}\left[ \frac{\vec{l}_{\perp}\cdot (\vec{l}_{\perp} -z\vec{k}_{\perp})}{l_{\perp}^2(\vec{l}_{\perp}-z\vec{k}_{\perp})^2} - \frac{1}{l_{\perp}^2} \right] \left(1- \cos[x_{\rm L}p^+y_1^-]\right) \right\rbrace \nonumber \\ &\times \frac{\phi(0,\vec{k}_{\perp})}{{k}_{\perp}^2}. \label{eq:gluonspectra} \end{eqnarray} The nucleon density inside the nucleus is normalized as $\int dy^-d^2y_\perp \rho_A(y^-,\vec y_\perp)=A$. The medium gluon in the secondary scattering carries transverse momentum $\vec k_\perp$ and $\phi(0,\vec{k}_{\perp})$ is the transverse-momentum-dependent (TMD) gluon distribution function (with momentum fraction $x\approx 0$) inside the nucleon. It can be related to the jet transport coefficient \begin{equation} \hat{q}_R(y) \approx \frac{4\pi^2 \alpha_\mathrm{s} C_2(R)}{N_c^2-1} \rho_A(y) \int \frac{d^2 k_{\perp}}{(2\pi)^2} \phi(0, \vec{k}_{\perp}), \label{eq:qhat_nucleon} \end{equation} for a propagating parton in the color representation $R$. Note that the effective quark-nucleon scattering cross section in terms of the TMD gluon distribution is \begin{equation} \sigma_{qN}=\dfrac{2\pi^2\alpha_\mathrm{s}}{N_c} \int \dfrac{d ^2k_{\perp}}{(2\pi)^2} \frac{\phi(0,\vec{k}_{\perp})}{{k}_{\perp}^2}.
\end{equation} The following quantity in the above induced gluon spectra, \begin{equation} \dfrac{2\pi^2\alpha_\mathrm{s}}{N_c}\rho_A(y^-,\vec y_\perp) \int \dfrac{d ^2k_{\perp}}{(2\pi)^2} \frac{\phi(0,\vec{k}_{\perp})}{{k}_{\perp}^2}, \end{equation} or $\sigma_{qN}\rho_A$ can also be interpreted as the inverse of quark's mean-free-path due to quark-gluon scattering inside the nucleus. The integration $\int d y^- d^2y_\perp/A$ is the average over the photon-quark scattering position inside the nucleus. The third and fourth terms in the curly bracket correspond to gluon radiation from the initial gluon (with three-gluon vertex) and quark (both initial and final) during the secondary quark-gluon interaction. There are LPM interference effects for these two kinds of induced gluon radiation that limit the radiation within the formation time \begin{eqnarray} \tau_{1f}&= \frac{1}{x_{\rm LD}p^+}, x_{\rm LD}=\frac{(\vec l_\perp -\vec k_\perp)^2}{2p^+q^-z(1-z)}; \\ \tau_{2f}&=\frac{1}{x_{\rm L}p^+}, x_{\rm L}=\frac{l_\perp^2}{2p^+q^-z(1-z)}. \end{eqnarray} The first two terms in the curly bracket are the remaining contributions from the final and initial state gluon radiation from the quark line during the secondary quark-gluon interaction that are not affected by the LPM interference. 
In the limit of soft gluon radiation $z\approx 0$, the induced gluon spectrum becomes much simpler, \begin{eqnarray} \dfrac{d N_g}{d l_{\perp}^2 dz}&\approx C_A \dfrac{\alpha_\mathrm{s}}{2\pi}\dfrac{1+(1-z)^2}{z}\dfrac{2\pi^2\alpha_\mathrm{s}}{N_c} \int \frac{d y^- d^2y_\perp}{A} \nonumber\\ &\hspace{-0.3in}\times \int d y_1^- \int \dfrac{d ^2k_{\perp}}{(2\pi)^2} \rho_A(y^-,\vec{y}_{\perp}) \rho_A(y_1^-,\vec{y}_{\perp}) \nonumber \\ &\hspace{-0.3in} \times \frac{2\vec{k}_{\perp}\cdot\vec{l}_{\perp}}{l_\perp^2(\vec{l}_{\perp}-\vec{k}_{\perp})^2} \left( 1-\cos[x_{\rm LD}p^+y_1^-]\right) \frac{\phi(0,\vec{k}_{\perp})}{{k}_{\perp}^2}, \label{eq:GLV} \end{eqnarray} which is equivalent to the result of the GLV calculation. \subsubsection{High-twist expansion versus GLV result} In the original calculation of parton energy loss in the high-twist formalism, one first expands the scattering amplitude in the transverse momentum of the initial gluon, assuming it is much smaller than the transverse momentum of the radiated gluon $k_\perp \ll l_\perp$. One can perform the same procedure of collinear expansion here for the radiative gluon spectra induced by a secondary scattering. Keeping the following expansion to the quadratic order in $k_\perp$, \begin{eqnarray} &\frac{2\vec{k}_{\perp}\cdot\vec{l}_{\perp}}{l_\perp^2(\vec{l}_{\perp}-\vec{k}_{\perp})^2} \left( 1-\cos[x_{\rm LD}p^+y_1^-]\right) \approx 4\frac{(\vec k_\perp\cdot\vec l_\perp)^2}{l_\perp^6} \nonumber \\ &\times\left( 1-\cos[x_{\rm L}p^+y_1^-]-x_{\rm L}p^+y_1^-\sin[x_{\rm L}p^+y_1^-]\right) +{\cal O}(k_\perp^4), \nonumber \end{eqnarray} one can factor out the TMD gluon distribution from the scattering amplitude. The integration of the TMD gluon distribution over $k_\perp$ leads to the quark transport coefficient $\hat q_F$ as defined in Eq.~(\ref{eq:qhat_nucleon}).
The radiative gluon spectrum in the soft gluon approximation in Eq.~(\ref{eq:GLV}) becomes \begin{eqnarray} \dfrac{d N_g }{d l_{\perp}^2 dz}&= \int \frac{d y^- d^2y_\perp}{A} \rho_A(y^-,\vec{y}_{\perp}) \int d y_1^- \hat q_F(y_1^-,\vec y_\perp) \nonumber\\ &\times \frac{C_A \alpha_\mathrm{s}}{\pi} \dfrac{1+(1-z)^2}{z}\frac{1}{l_\perp^4} \left( 1-\cos[x_{\rm L}p^+y_1^-] \right. \nonumber\\ &\left.- x_{\rm L}p^+y_1^-\sin[x_{\rm L}p^+y_1^-]\right), \label{eq:gluonspectra_ht} \end{eqnarray} which is the original high-twist result when the last term from the derivative of the phase factor in the scattering amplitude is dropped. In the above collinear approximation one assumes $k_\perp \ll l_\perp$ for the initial gluon. Such an approximation, however, misses the contribution to the gluon spectrum from the region $\vec k_\perp \approx \vec l_\perp$ when the formation time $\tau_{1f}= 1/(x_{\rm LD}p^+)$ becomes large. This contribution can be recovered approximately when the last term in Eq.~(\ref{eq:gluonspectra_ht}) from the derivative of the phase factor in the scattering amplitude is dropped. Considering the Gyulassy-Wang (GW) static potential model of the scattering centers inside a medium as in the GLV calculation, the cross section of the quark-medium scattering is given by Eq.~(\ref{eq-small-el}), \begin{equation} \frac{d\sigma}{dk_\perp^2} = \frac{C_2(R)C_2(T)}{N_c^2-1}\frac{4\pi\alpha_\mathrm{s}^2}{(k_\perp^2+\mu^2_{\rm D})^2}, \end{equation} where $C_2(R)$ and $C_2(T)$ are the quadratic color Casimir factors of the propagating parton and scattering center, respectively, and $\mu_{\rm D}$ is the Debye screening mass. According to the two definitions of the jet transport coefficient $\hat q_R$ in Eqs.~(\ref{qhat}) and (\ref{eq:qhat_nucleon}), the TMD gluon distribution of the medium under the GW static potential model is then \begin{equation} \frac{\phi(0, \vec{k}_{\perp})}{k_{\perp}^2} = C_2(T) \frac{4 \alpha_\mathrm{s}}{(k_{\perp}^2+\mu^2_{\rm D})^2}.
\label{eq:static} \end{equation} The radiative gluon spectra without the average over the initial quark production point can be expressed as \begin{eqnarray} \dfrac{d N_g}{d l_{\perp}^2 dz}&\approx C_A \dfrac{\alpha_\mathrm{s}}{2\pi}\dfrac{1+(1-z)^2}{z}\dfrac{2\pi^2\alpha_\mathrm{s}}{N_c} \nonumber\\ &\hspace{-0.3in}\times \int d y_1^- \int \dfrac{d ^2k_{\perp}}{(2\pi)^2} \rho_A(y_1^-,\vec{y}_{\perp}) \frac{2\vec{k}_{\perp}\cdot\vec{l}_{\perp}}{l_\perp^2(\vec{l}_{\perp}-\vec{k}_{\perp})^2} \nonumber \\ &\hspace{-0.3in} \times\left( 1-\cos[x_{\rm LD}p^+y_1^-]\right) C_2(T) \frac{4 \alpha_\mathrm{s}}{(k_{\perp}^2+\mu^2_{\rm D})^2}. \label{eq:GLV2} \end{eqnarray} One can make a variable change $\vec l^\prime_\perp=\vec l_\perp - \vec k_\perp$ and then integrate over $\vec k_\perp$, \begin{eqnarray} \int d^2k_\perp & \frac{k_\perp l^\prime_\perp\cos\phi+k_\perp^2}{(k_\perp^2+{l^\prime}^2_\perp+2k_\perp l^\prime_\perp\cos\phi)(k_\perp^2+\mu^2_{\rm D})^2} \nonumber \\ &=\pi \int_{{l^\prime}^2_\perp}^\infty \frac{dk_\perp^2}{(k_\perp^2+\mu^2_{\rm D})^2}=\frac{\pi}{{l^\prime}^2_\perp+\mu^2_{\rm D}}, \end{eqnarray} where the angular integration yields $2\pi\,\theta(k_\perp^2-{l^\prime}^2_\perp)$, restricting the $k_\perp^2$ integration to the region $k_\perp^2>{l^\prime}^2_\perp$. The radiative gluon spectra can then be expressed as \begin{eqnarray} \dfrac{d N_g}{d l_{\perp}^2 dz}&\approx C_A \dfrac{\alpha_\mathrm{s}}{2\pi}\dfrac{1+(1-z)^2}{z}\dfrac{2\pi\alpha_\mathrm{s}^2}{N_c} \nonumber\\ &\hspace{-0.3in}\times \int d y_1^- \rho_A(y_1^-,\vec{y}_{\perp}) C_2(T) \frac{1}{l_\perp^2(l_\perp^2+\mu^2_{\rm D})} \nonumber \\ &\hspace{-0.3in} \times\left( 1-\cos[x_{\rm L}p^+y_1^-]\right), \label{eq:GLV3} \end{eqnarray} after relabeling $l^\prime_\perp\rightarrow l_\perp$. Note that the quark transport coefficient in this static potential model is \begin{equation} \hat q_F=\frac{2\pi\alpha_\mathrm{s}^2}{N_c}C_2(T)\rho_A\left[\log(\frac{Q^2}{\mu^2_{\rm D}}+1)-\frac{Q^2}{Q^2+\mu^2_{\rm D}}\right].
\end{equation} The above induced gluon spectrum is equivalent to the original high-twist result [Eq.~(\ref{eq:gluonspectra_ht})] without the term from the derivative of the phase factor when one rescales the jet transport coefficient by a factor of $2\log(Q^2/\mu^2_{\rm D})$. The calculation of induced gluon spectrum and parton radiative energy loss considers only the first order in the opacity expansion and is only valid for a small number of parton rescatterings in the medium. This is the general assumption for the calculations by Gyulassy-Levai-Vitev (GLV) and Wiedemann~\cite{Gyulassy:1999zd,Gyulassy:2000fs,Wiedemann:2000za}. In the limit of many soft scatterings, one has to take the approach by Baier-Dokshitzer-Mueller-Peigne-Schiff and Zakharov (BDMPS-Z)~\cite{Zakharov:1996fv,Baier:1996kr,Baier:1996sk} when one considers soft gluon radiation as a result of multiple scatterings. All of these studies model the medium as a series of static scattering centers as in the Gyulassy-Wang (GW) model \cite{Gyulassy:1993hr}. Alternatively, Arnold, Moore and Yaffe (AMY)~\cite{Arnold:2001ba,Arnold:2002ja} employed the hard thermal loop improved pQCD at finite temperature to calculate the scattering and gluon radiation rate in a weakly coupled thermal QGP medium. In the recent SCET\textsubscript{G} formalism~\cite{Ovanesyan:2011xy,Ovanesyan:2011kn}, the standard soft collinear effective theory (SCET) is supplemented with the Glauber modes of gluon exchange for the interaction between a fast parton and a static scattering medium to study multiple parton scattering and medium-induced gluon splitting. The relations between some of these different studies of parton propagation and energy loss have been discussed in detail in Refs.~\cite{Arnold:2008iy,CaronHuot:2010bp,Mehtar-Tani:2019tvy}.
One should bear in mind that in most of the past calculations of radiative parton energy loss and gluon spectrum, one assumes the eikonal approximation for the fast parton propagation, in which the energy of the propagating parton $E$ and the radiated gluon $\omega$ are assumed to be much larger than the transverse momentum transfer $k_{\perp}$ in the scattering: $E,\omega \gg k_{\perp}$. The energy of the radiated gluon is also assumed to be much larger than its transverse momentum $\omega \gg l_{\perp}$ in the collinear radiation. In addition, the soft gluon approximation $E \gg \omega$ is also assumed in BDMPS-Z and GLV studies. In the models of static scattering centers, interactions between the propagating parton and the medium do not involve energy and longitudinal momentum transfer. The elastic scattering, the radiative process and the corresponding energy loss are calculated separately. To improve these calculations, the GLV calculation has been extended beyond the soft radiation approximation~\cite{Blagojevic:2018nve} and the first order in the opacity expansion~\cite{Sievert:2019cwq}, and to a dynamical medium through the hard thermal loop resummed gluon propagator~\cite{Djordjevic:2008iz}. The HT approach has also been extended to include the longitudinal momentum diffusion~\cite{Majumder:2009ge,Qin:2014mya}. Further improvements such as effects of color (de)coherence, angular ordering~\cite{MehtarTani:2011tz,Armesto:2011ir,Caucal:2018dla} and overlapping formation time in sequential gluon emissions~\cite{Arnold:2015qya} have also been studied. \section{Jet evolution models} \label{sec:models} \subsection{Vacuum parton showers} \label{subsec:vac_showers} High-energy partons produced via hard collisions usually start with large virtuality scales (or off-shellness). Then they evolve toward their mass shells through successive splittings, each of which reduces the virtualities of the daughter partons compared to the mother.
The parton/hadron fragmentation function $D_a (z, Q^2)$ at a given scale $Q^2$ is typically described by the DGLAP evolution equation~\cite{Gribov:1972ri,Lipatov:1974qm,Dokshitzer:1977sg,Altarelli:1977zs}, where $z$ is the fractional momentum ($p^+$ in the light-cone coordinate if not otherwise specified in this review) of the daughter parton/hadron taken from the initial parton with flavor $a$. The DGLAP evolution equation can be rewritten in a more convenient fashion using the Sudakov form factor~\cite{Hoche:2014rga}, \begin{align} \label{eq:sudakov} \Delta _a(&Q_\mathrm{max}^2, Q^2_a) = \prod_i \Delta_{ai} (Q_\mathrm{max}^2, Q^2_a)\\ =&\prod_i \exp \left[-\int\limits_{Q^2_a}^{Q_\mathrm{max}^2}\frac{d{Q}^2}{{Q}^2}\frac{\alpha_\mathrm{s}({Q}^2)}{2\pi}\int\limits_{z_\mathrm{min}}^{z_\mathrm{max}}dzP_{ai}(z,{Q}^2)\right], \nonumber \end{align} which represents the probability of no splitting between scales $Q^2_\mathrm{max}$ and $Q^2_a$, where the former denotes the maximum possible virtuality of the given parton. In Eq.~(\ref{eq:sudakov}), $P_{ai}$ is the parton splitting function in vacuum for a particular channel $i$, and $z_\mathrm{min}$ and $z_\mathrm{max}$ are the lower and upper kinematic limits of the fractional momentum $z$. Note that there is no unique prescription for a ``best'' choice of $Q^2_\mathrm{max}$. It depends on different model assumptions or is treated as a parameter to fit hadron/jet spectra in proton-proton collisions. In addition, the values of $z_\mathrm{min}$ and $z_\mathrm{max}$ depend on the definition of $z$ (momentum fraction or energy fraction), as well as the coordinate one uses. More details on these kinematic cuts will be discussed later when different models are presented. With this Sudakov form factor, one may construct an event generator to simulate parton showers.
For an $a\rightarrow bc$ splitting process, if a random number $r\in (0,1)$ is smaller than $\Delta_a(Q_\mathrm{max}^2, Q_\mathrm{min}^2)$, parton $a$ is considered stable (no splitting happens) with its virtuality set to $Q_a^2=Q^2_\mathrm{min}$. Here $Q_\mathrm{min}^2$ denotes the minimum allowed virtuality scale of a parton, which is usually taken as 1~GeV$^2$ (nucleon mass scale) in vacuum parton showers. On the other hand, if $r>\Delta_a(Q_\mathrm{max}^2, Q_\mathrm{min}^2)$, the equation $r = \Delta_a(Q_\mathrm{max}^2, Q_a^2)$ is solved to determine the virtuality scale $Q_a^2$ at which parton $a$ splits. The specific channel $i$ through which $a$ splits is then determined using the branching ratios: \begin{equation} \label{eq:branching} \mathrm{BR}_{ai}(Q_a^2)=\int\limits_{z_\mathrm{min}}^{z_\mathrm{max}} dz P_{ai} (z, Q_a^2). \end{equation} Through a particular channel, the longitudinal momentum fractions $z$ and $(1-z)$ of the two daughter partons are determined with the splitting function $P_{ai} (z, Q_a^2)$. Meanwhile, $z^2Q_a^2$ and $(1-z)^2Q_a^2$ are used as the new maximum possible virtualities $Q^2_\mathrm{max}$ of the two daughters respectively, with which their actual virtualities $Q_b^2$ and $Q_c^2$ can be calculated again using Eq.~(\ref{eq:sudakov}) with their own flavors. Finally, the transverse momentum of the daughters with respect to the mother reads \begin{equation} \label{eq:transverse} k_\perp^2=z(1-z)Q_a^2-(1-z)Q_b^2-zQ_c^2, \end{equation} which completes one splitting process \begin{align} \label{eq:splittingProcess} \biggl[p^+,\frac{Q_a^2}{2p^+},0 \biggr] &\rightarrow\biggl[zp^+,\frac{Q_b^2+k_\perp^2}{2zp^+},\vec{k}_\perp\biggr] \nonumber\\ &+\biggl[(1-z)p^+,\frac{Q_c^2+k_\perp^2}{2(1-z)p^+},-\vec{k}_\perp\biggr].
\end{align} One may iterate this process until all daughter partons reach $Q^2_\mathrm{min}$, generating a virtuality-ordered parton shower where the virtuality scales of single partons decrease through the successive splittings. For simplicity, we have neglected the rest mass of partons in the above discussion. To take this into account, one may replace $Q^2_a$ with $M^2_a=Q^2_a+m^2_a$ in Eqs.~(\ref{eq:transverse}) and (\ref{eq:splittingProcess}), where $m_a$ represents the rest mass of parton $a$. This formalism has been successfully implemented in event generators such as \textsc{Pythia}~\cite{Sjostrand:2006za} for proton-proton (p+p) collisions. Note that in \textsc{Pythia}, $z$ is defined as the fractional energy $E_b/E_a$ instead of the fractional light-cone momentum. With this definition, the kinematic cuts of $z$ in Eqs.~(\ref{eq:sudakov}) and (\ref{eq:branching}) read \begin{align} \label{eq:cutWithE} z_\mathrm{max/min}&=\frac{1}{2}\Bigg[1+\frac{M_b^2-M_c^2}{M_a^2}\nonumber\\ \pm &\frac{|\vec{p}_a|}{E_a}\frac{\sqrt{(M_a^2-M_b^2-M_c^2)^2-4M_b^2M_c^2}}{M_a^2}\Bigg]. \end{align} This can be obtained by solving for the momentum $p$ of parton $b/c$ in the rest frame of parton $a$ via $M_a=\sqrt{p^2+M_b^2}+\sqrt{p^2+M_c^2}$ and then boosting it collinearly with $\pm |\vec{p}_a|/E_a$. Before $M_b$ and $M_c$ are determined, one may assume they are negligible compared to $M_a$ (or $Q_a$) and use \begin{equation} \label{eq:cutWithES} z_\mathrm{max/min}=\frac{1}{2}\left(1\pm \frac{|\vec{p}_a|}{E_a}\right) \end{equation} in Eqs.~(\ref{eq:sudakov}) and (\ref{eq:branching}). In \textsc{Pythia}, the initial maximum possible virtuality scale of each parton is set as $Q_\mathrm{max}^2 = 4Q_\mathrm{hard}^2$ by default, where $Q_\mathrm{hard}^2$ is the squared transverse momentum exchange in the initial hard scattering. One may modify this pre-factor of 4 using the \textsc{parp(67)} parameter inside the \textsc{Pythia 6} code.
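The sampling procedure described above (draw $r$, solve $r=\Delta_a(Q^2_\mathrm{max},Q_a^2)$, then sample $z$ from the splitting function) can be sketched in a few lines of Python. This is a minimal toy with a fixed coupling and fixed $z$ limits, not any generator's actual implementation; all function names and numerical values are our own illustrative assumptions.

```python
import math
import random

ALPHA_S = 0.3    # fixed coupling, a toy assumption (real showers run alpha_s)
CF = 4.0 / 3.0

def integrated_kernel(z_min, z_max):
    """Analytic integral of P_{q->qg}(z) = C_F (1+z^2)/(1-z) over [z_min, z_max]."""
    def iP(z):  # antiderivative of (1+z^2)/(1-z) = -(1+z) + 2/(1-z)
        return -z - z * z / 2.0 - 2.0 * math.log(1.0 - z)
    return CF * (iP(z_max) - iP(z_min))

def sample_splitting_scale(q2_max, q2_min, z_min=0.01, z_max=0.99):
    """Solve r = Delta(q2_max, Q^2) for Q^2; no splitting if Q^2 falls below q2_min.
    With fixed z limits and coupling, Delta = (Q^2/q2_max)^c, so the inversion
    is closed-form instead of numerical."""
    c = (ALPHA_S / (2.0 * math.pi)) * integrated_kernel(z_min, z_max)
    q2 = q2_max * random.random() ** (1.0 / c)
    return max(q2, q2_min)

def sample_z(z_min=0.01, z_max=0.99):
    """Sample z from P_{q->qg} by rejection against the overestimate 2C_F/(1-z)."""
    while True:
        u = random.random()  # inverse transform for the overestimate
        z = 1.0 - (1.0 - z_min) * ((1.0 - z_max) / (1.0 - z_min)) ** u
        if random.random() < (1.0 + z * z) / 2.0:  # accept with P/overestimate
            return z
```

Iterating these two draws on each daughter, with the daughters' maximum virtualities set to $z^2Q_a^2$ and $(1-z)^2Q_a^2$, reproduces the virtuality-ordered cascade outlined above.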
To take into account the coherence effect on parton splittings, \textsc{Pythia} implements the angular-ordering cut on the virtuality-ordered parton showers. After parton $a$ splits, its daughters $b$ and $c$ are also allowed to further split independently. However, if the opening angle of the splitting from $b$ is larger than the angle between $b$ and $c$, the soft gluon ($b'$) emitted from $b$ can interfere with $c$. In other words, a soft gluon emitted at a large angle has a large transverse wavelength and cannot resolve the separate color charges of $b$ and $c$. Therefore, it is treated as being emitted from $a$ directly rather than from $b$ independently. To reduce such soft emission at large angles from the daughter partons, \textsc{Pythia} requires that the splitting angle of $b$ (same for $c$) be smaller than the splitting angle of $a$. Using Eq.~(\ref{eq:transverse}) with $Q_a \rightarrow M_a$ and $M_b\approx M_c\approx 0$, one obtains \begin{equation} \label{eq:angleA} \theta_a\approx \frac{k_\perp}{E_b} + \frac{k_\perp}{E_c} \approx \frac{1}{\sqrt{z_a(1-z_a)}}\frac{M_a}{E_a}, \end{equation} where $E_b = z_a E_a$ and $E_c = (1-z_a)E_a$ are used. Therefore, $\theta_b < \theta_a$ yields \begin{equation} \label{eq:angularOrderingCut} \frac{z_b(1-z_b)}{M_b^2} > \frac{1-z_a}{z_a M_a^2}. \end{equation} \subsection{Medium-modified parton showers} \label{subsec:med_showers_virtual} While this virtuality-ordered (or angular-ordered) formalism has been generally accepted for generating vacuum parton showers in p+p collisions, different approaches have been developed to modify this formalism to include medium effects in heavy-ion collisions. In general, approaches in the literature can be categorized into two groups: (1) directly modifying the splitting function $P_{ai}(z,Q^2)$ in Eqs.~(\ref{eq:sudakov}) and (\ref{eq:branching}) with medium effects; and (2) applying medium modification on the parton kinematics between vacuum-like splittings.
The former includes \textsc{Matter}~\cite{Majumder:2013re,Cao:2017qpx}, \textsc{Q-Pythia}~\cite{Armesto:2009fj} and \textsc{Yajem-fmed}~\cite{Renk:2009nz}; and the latter includes \textsc{Jewel}~\cite{Zapp:2011ya,Zapp:2012ak,Zapp:2013vla}, \textsc{Hybrid}~\cite{Casalderrey-Solana:2014bpa,Casalderrey-Solana:2016jvj,Hulcher:2017cpt} and \textsc{Yajem-rad/drag/de}~\cite{Renk:2008pp,Renk:2009nz,Renk:2010mf,Renk:2013pua}. Within the first group, the medium-modified splitting function is written as \begin{equation} \label{eq:splitPtot} P_{ai}(z,Q^2)=P_{ai}^\mathrm{vac}(z)+P_{ai}^\mathrm{med}(z,Q^2), \end{equation} where the first term on the right-hand side is the standard vacuum splitting function, and the second term is known as the medium-induced splitting function. This method is expected to be valid when the medium-induced part can be treated as a small correction to the vacuum part. In the \textsc{Matter} model~\cite{Majumder:2013re,Cao:2017qpx}, the latter is taken from the high-twist energy loss calculation~\cite{Guo:2000nz,Majumder:2009ge}, \begin{align} \label{eq:medP} P_{ai}^\mathrm{med}&(z,Q^2)=\frac{C_A}{C_2(a)}\frac{P_{ai}^\mathrm{vac}(z)}{z(1-z)Q^2}\nonumber\\ \times&\int\limits_0^{\tau_f^+}d\zeta^+ \hat{q}_a\left(\vec{r}+\hat{n}\frac{\zeta^+}{\sqrt{2}}\right)\left[2-2\cos \left(\frac{\zeta^+}{\tau_f^+}\right)\right]. \end{align} Note that the color factor for the medium-induced gluon emission is always $C_A$ which is different from $C_2(a)$ in $P_{ai}^\mathrm{vac}(z)$ for gluon emission in vacuum. Here, $\hat{q}_a$ is the parton jet transport parameter that denotes its transverse momentum broadening square per unit time/length due to elastic scatterings with the medium. It depends on the medium properties -- density and flow velocity -- at the location $\vec{r}+\hat{n}\zeta^+/\sqrt{2}$, where $\vec{r}$ is the production point of the given parton and $\hat{n}=\vec{p}/|\vec{p}|$ denotes its propagation direction. 
In addition, $\tau_f^+=2p^+/Q^2$ is the mean formation time of the splitting process. Compared to vacuum parton showers, one needs to track not only the virtuality scale of each parton, but also its spacetime information. For heavy-ion collisions, the production vertices of high-energy partons in initial hard scatterings are usually distributed according to the binary collision points from the Glauber model~\cite{Miller:2007ri}. After that, each parton is assumed to stream freely for $\tau^+\approx \tau_f^+$ between its production and splitting vertices. One may either directly use the mean formation time $\tau^+ = \tau_f^+$ for each splitting, or sample the splitting time using a Gaussian distribution with a mean value of $\tau^+_f$~\cite{Majumder:2013re,Cao:2017qpx}: \begin{equation} \label{eq:tau} \rho(\xi^+)=\frac{2}{\tau^+_f\pi}\exp\left[-\left(\frac{\xi^+}{\tau^+_f\sqrt{\pi}}\right)^2\right]. \end{equation} The latter introduces additional effects of quantum fluctuations on parton energy loss into event generator simulations. In the \textsc{Matter} model, the spacetime profile of the QGP medium is taken from the (2+1)-dimensional viscous hydrodynamic model \textsc{Vishnew}~\cite{Song:2007fn,Song:2007ux,Qiu:2011hf}. The entropy density distribution of the QGP fireball is initialized with the Monte-Carlo Glauber model. The starting time of the hydrodynamical evolution is set to $\tau_0=0.6$~fm and the specific shear viscosity is tuned to $\eta/s=0.08$ to describe the soft hadron spectra at RHIC and the LHC. This hydrodynamic simulation then provides the spacetime distributions of the temperature ($T$), entropy density ($s$) and flow velocity ($u$) of the QGP. During the QGP stage -- after $\tau_0=0.6$~fm and before jet partons exit the medium (a critical temperature $T_\mathrm{c}\approx 160$~MeV is used for identifying the QGP boundary) -- the splitting function in Eq.~(\ref{eq:splitPtot}) contains both vacuum and medium-induced parts.
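The Gaussian splitting-time sampling of Eq.~(\ref{eq:tau}) amounts to taking the absolute value of a normal deviate, since the half-Gaussian $\rho(\xi^+)$ equals $|N(0,\sigma)|$ with $\sigma=\tau^+_f\sqrt{\pi/2}$ and has mean exactly $\tau^+_f$. A short stdlib-only sketch (the function name is our own):

```python
import math
import random

def sample_splitting_time(tau_f):
    """Draw xi+ from rho(xi) = 2/(pi*tau_f) * exp(-(xi/(tau_f*sqrt(pi)))^2).
    This half-Gaussian equals |N(0, sigma)| with sigma = tau_f*sqrt(pi/2),
    and its mean is exactly tau_f."""
    return abs(random.gauss(0.0, tau_f * math.sqrt(math.pi / 2.0)))
```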
Before and after the QGP stage, however, it has only the vacuum contribution. As a jet parton travels inside the QGP, its transport coefficient $\hat{q}_a$ in the local rest frame of the fluid cell is calculated using $\hat{q}_{a,\mathrm{local}} = \hat{q}_{a0} \cdot s/s_0$, where a minimal assumption of its proportionality to the local density of scattering centers (or entropy density) is adopted. Here, $s_0$ is a reference point (e.g. $s_0 \approx 96$~fm$^{-3}$ at the center of the QGP fireballs produced in central $\sqrt{s_\mathrm{NN}}=200$~GeV Au+Au collisions at RHIC), and $\hat{q}_{a0}$ is the jet transport parameter at this reference point. The path length integral in Eq.~(\ref{eq:medP}) is calculated in the center-of-mass frame of collisions. Therefore, one should take into account the effects of the local fluid velocity of the expanding medium by using the rescaled jet transport coefficient $\hat{q}_a=\hat{q}_{a,\mathrm{local}}\cdot p^\mu u_\mu/p^0$~\cite{Baier:2006pt} in Eq.~(\ref{eq:medP}). Compared to \textsc{Pythia}, \textsc{Matter} simulates parton showers starting from each individual parton rather than from the entire proton-proton collision system. Without knowledge of the hard scattering scale, \textsc{Matter}~\cite{Cao:2017qpx} uses the initial parton energy squared, $E^2$, as its maximum possible virtuality scale $Q_\mathrm{max}^2$ to initiate the time-like parton showers. In a more recent version of \textsc{Matter} that is embedded into the \textsc{Jetscape} framework~\cite{Putschke:2019yrg}, $Q_\mathrm{max}^2=p_\mathrm{T}^2/4$ is set by default to describe the hadron/jet spectra in proton-proton collisions. In addition, unlike \textsc{Pythia}, \textsc{Matter} uses the light-cone momentum $p^+$ to define the momentum fraction $z$ in the splitting function. This leads to a slightly different form of kinematic cuts $z_\mathrm{max/min}$ compared to Eqs.~(\ref{eq:cutWithE}) and (\ref{eq:cutWithES}).
The general idea remains the same: the lower and upper limits of $z$ are obtained when parton $b$ is collinear with parton $a$, i.e., $k_\perp^2=0$ in Eq.~(\ref{eq:transverse}). This yields \begin{align} \label{eq:cutWithP} z_\mathrm{max/min}&=\frac{1}{2}\Bigg[1+\frac{M_b^2-M_c^2}{M_a^2}\nonumber\\ \pm &\frac{\sqrt{(M_a^2-M_b^2-M_c^2)^2-4M_b^2M_c^2}}{M_a^2}\Bigg]. \end{align} Since the $|\vec{p}_a|/E_a$ factor of Eq.~(\ref{eq:cutWithE}) is absent here, one cannot further simplify the expression by taking $M_b=M_c=0$, which would lead to $z_\mathrm{max}=1$ and $z_\mathrm{min}=0$, where the splitting function may diverge. A natural alternative approximation would be setting $M_b^2=M_c^2=Q_\mathrm{min}^2$, which gives \begin{align} \label{eq:cutWithPS} z_\mathrm{max/min}&=\frac{1}{2}\left[1 \pm \sqrt{1-\frac{4Q_\mathrm{min}^2}{M_a^2}}\right]\nonumber\\ &\approx \frac{1}{2}\left[1 \pm \left(1-2Q_\mathrm{min}^2/M_a^2\right)\right], \end{align} i.e., $z_\mathrm{max}=1-Q_\mathrm{min}^2/M_a^2$ and $z_\mathrm{min}=Q_\mathrm{min}^2/M_a^2$. As a simplification for both Eq.~(\ref{eq:cutWithES}) and Eq.~(\ref{eq:cutWithPS}), the rest masses $m_b$ and $m_c$ have been neglected. If necessary, $M^2_{b/c} = Q_\mathrm{min}^2+m^2_{b/c}$ should be applied instead, especially for heavy quarks. Similar to \textsc{Matter}, \textsc{Q-Pythia}~\cite{Armesto:2009fj} also introduces medium effects on parton showers by modifying the splitting function in the Sudakov form factor. The \textsc{Q-Pythia} model directly modifies the parton shower routine \textsc{pyshow} in the \textsc{Pythia 6} code by introducing the medium-induced splitting function from the BDMPS energy loss formalism~\cite{Baier:1996sk,Baier:1996kr,Zakharov:1996fv}. In \textsc{Q-Pythia}, $Q_\mathrm{max}^2=2E^2$ is used as the initial maximum possible virtuality, where $E$ is the energy of the parton that initiates the showers.
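As a numerical cross-check of Eqs.~(\ref{eq:cutWithP}) and (\ref{eq:cutWithPS}), the following sketch (our own illustration, not part of any of the models discussed) evaluates the exact limits and compares them to the leading approximation $z_\mathrm{min}\approx Q_\mathrm{min}^2/M_a^2$:

```python
import math

def z_limits(Ma2, Mb2, Mc2):
    """Kinematic limits of the light-cone momentum fraction z from
    Eq. (cutWithP), obtained by setting k_perp^2 = 0."""
    disc = (Ma2 - Mb2 - Mc2) ** 2 - 4.0 * Mb2 * Mc2
    root = math.sqrt(max(disc, 0.0)) / Ma2
    base = 0.5 * (1.0 + (Mb2 - Mc2) / Ma2)
    return base - 0.5 * root, base + 0.5 * root

# Eq. (cutWithPS): set Mb^2 = Mc^2 = Qmin^2 (illustrative values, GeV^2)
Qmin2, Ma2 = 1.0, 100.0
zmin, zmax = z_limits(Ma2, Qmin2, Qmin2)
# leading order: zmin ~ Qmin^2/Ma^2, zmax ~ 1 - Qmin^2/Ma^2
```

For $Q_\mathrm{min}^2/M_a^2=0.01$ the exact and approximate limits agree to about $10^{-4}$, confirming the expansion in Eq.~(\ref{eq:cutWithPS}).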
At the moment, \textsc{Q-Pythia} uses a simplified model of the QGP medium that is mainly characterized by two parameters: the jet transport coefficient $\hat{q}_a$ and the medium length $L$. One more example within this first group of medium modification approaches is \textsc{Yajem-fmed}~\cite{Renk:2009nz}. Similar to \textsc{Q-Pythia}, \textsc{Yajem} is also based on modifying the \textsc{Pythia 6} shower routine \textsc{pyshow}. Although most modes in \textsc{Yajem} belong to the second group that will be discussed below, its \textsc{fmed} mode implements a factor $(1+f_\mathrm{med})$ to enhance the singular part of the splitting function in the presence of a medium. For example, \begin{align} \label{eq:yajem-fmed} P_{q\rightarrow qg}&(z) = C_F \frac{1+z^2}{1-z} \nonumber\\ &\Rightarrow C_F \left[\frac{2(1+f_\mathrm{med})}{1-z}-(1+z)\right]. \end{align} The factor $f_\mathrm{med}$ is assumed to be proportional to the density of scattering centers inside the medium, or $\epsilon^{3/4}$, where $\epsilon$ is the local energy density of the QGP that is simulated with a (3+1)-D ideal hydrodynamic model~\cite{Nonaka:2006yn}: \begin{align} \label{eq:yajemKf} f_\mathrm{med}&=K_f\int d\zeta \left[\epsilon(\zeta)\right]^{3/4}\nonumber\\ &\times \left[\cosh\rho(\zeta)-\sinh\rho(\zeta)\cos\psi\right], \end{align} with $K_f$ as a model parameter. Here, $\rho$ represents the local flow rapidity of the medium and $\psi$ represents the angle between the medium flow and the jet parton momentum. When the jet parton travels at the speed of light, the $[\cosh\rho(\zeta)-\sinh\rho(\zeta)\cos\psi]$ term is the same as the $p^\mu u_\mu/p^0$ factor that takes into account the medium flow effect on jet transport coefficients, as discussed for the \textsc{Matter} model above. This factor is exact for rescaling $\hat{q}_a$, since the transverse momentum broadening square is boost invariant~\cite{Baier:2006pt}, but this may not hold for other coefficients.
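The equivalence between the flow factor in Eq.~(\ref{eq:yajemKf}) and $p^\mu u_\mu/p^0$ for a massless parton can be verified directly (an illustrative check with our own variable names):

```python
import math

def pu_over_p0(p, u):
    """p^mu u_mu / p^0 with metric signature (+,-,-,-)."""
    return (p[0]*u[0] - p[1]*u[1] - p[2]*u[2] - p[3]*u[3]) / p[0]

# massless parton along +z; fluid flow at rapidity rho, at angle psi
# to the parton momentum (flow direction placed in the x-z plane)
rho, psi = 0.6, 0.8
u = (math.cosh(rho), math.sinh(rho)*math.sin(psi), 0.0,
     math.sinh(rho)*math.cos(psi))          # normalized: u.u = 1
p = (10.0, 0.0, 0.0, 10.0)                  # light-like: p.p = 0
lhs = pu_over_p0(p, u)
rhs = math.cosh(rho) - math.sinh(rho)*math.cos(psi)
```

The two expressions coincide to machine precision for any light-like $p$, but differ once the parton has a nonzero mass, which is the caveat noted above.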
The second group of approaches introduces medium modifications of parton kinematics between vacuum-like splittings without changing the splitting function itself. The aforementioned \textsc{Yajem} model~\cite{Renk:2008pp,Renk:2009nz,Renk:2010mf,Renk:2013pua} provides several modes that belong to this group. For instance, the \textsc{Yajem-rad} mode enhances the parton virtuality before its next splitting based on its scattering with the medium: \begin{equation} \label{eq:yajemQ} \Delta Q^2_a = \int_{\tau_a^0}^{\tau_a^0+\tau_a^f}d\zeta \hat{q}_a(\zeta), \end{equation} where $\tau_a^0$ is the production time of parton $a$ and $\tau_a^f$ is its splitting time $2E_a/Q_a^2$. This virtuality increase $\Delta Q^2_a$, induced by transverse momentum broadening, will shorten the splitting time $\tau_a^f$. Thus, one may need to adjust ($\tau_a^f$, $\Delta Q_a^2$) such that a self-consistent pair is obtained. The \textsc{Yajem-drag} mode applies a drag to reduce the energy of parton $a$ before its next splitting: \begin{equation} \label{eq:yajemE} \Delta E_a = \int_{\tau_a^0}^{\tau_a^0+\tau_a^f}d\zeta D_a(\zeta), \end{equation} where $D_a(\zeta)$ can be understood as a drag coefficient due to parton-medium interactions. Both $\hat{q}_a$ and $D_a$ are assumed to be proportional to $\epsilon^{3/4}[\cosh\rho(\zeta)-\sinh\rho(\zeta)\cos\psi]$, with constant scaling factors in front as model parameters. In Ref.~\cite{Renk:2009nz}, different implementations of medium modification are systematically compared within the \textsc{Yajem} model, where no obvious difference in the final state observables has been found between enhancing the parton virtuality (the \textsc{Yajem-rad} mode) and directly modifying the parton splitting function (the \textsc{Yajem-fmed} mode). On the other hand, applying energy loss on jet partons (the \textsc{Yajem-drag} mode) leads to different fragmentation functions and angular distributions of final-state charged hadrons coming from a single quark.
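Returning to Eq.~(\ref{eq:yajemQ}): for a static, uniform medium (constant $\hat{q}_a$), the self-consistent pair $(\tau_a^f,\Delta Q_a^2)$ can be found by fixed-point iteration, as in this sketch (our simplification with hypothetical parameter values; \textsc{Yajem} integrates $\hat{q}_a(\zeta)$ along the actual path):

```python
def self_consistent_virtuality(E, Q2_vac, qhat, tol=1e-10, itmax=1000):
    """Iterate Q^2 = Q2_vac + qhat * tau_f with tau_f = 2E/Q^2
    until the pair (tau_f, Delta Q^2) is self-consistent."""
    Q2 = Q2_vac
    for _ in range(itmax):
        Q2_new = Q2_vac + qhat * 2.0 * E / Q2
        if abs(Q2_new - Q2) < tol:
            Q2 = Q2_new
            break
        Q2 = Q2_new
    return Q2, 2.0 * E / Q2

# illustrative natural units: E in GeV, Q^2 in GeV^2, qhat in GeV^3
E, Q2_vac, qhat = 100.0, 4.0, 0.1
Q2, tau_f = self_consistent_virtuality(E, Q2_vac, qhat)
# at the fixed point, Q^4 - Q2_vac*Q^2 = 2*E*qhat holds exactly
```

In this constant-$\hat{q}$ case the fixed point is simply the positive root of $Q^4 - Q^2_\mathrm{vac}Q^2 - 2E\hat{q}=0$; the iteration is shown because the path-dependent $\hat{q}_a(\zeta)$ of the actual model admits no closed form.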
In a later work~\cite{Renk:2013pua}, the $\Delta Q^2_a$ enhancement and $\Delta E_a$ drag methods are combined into a \textsc{Yajem-de} mode. Drag and transverse momentum broadening on jet partons have also been applied in the \textsc{Hybrid} model~\cite{Casalderrey-Solana:2014bpa,Casalderrey-Solana:2016jvj,Hulcher:2017cpt} in a similar way. In this model, \textsc{Pythia}~8 is first applied to generate a chain of vacuum parton showers. To construct the spacetime structure of this chain inside the QGP, the location of the first pair of partons produced from the initial hard collision is distributed according to the Glauber model, after which each parton propagates for a lifetime of $2E/Q^2_a$ before it splits. During its propagation, a drag is applied to the parton, where the form of the drag is taken from holographic calculations of parton energy loss in a strongly coupled plasma using the gauge/gravity duality~\cite{Chesler:2014jva}: \begin{equation} \label{eq:adsDrag} \frac{1}{E_\mathrm{in}}\frac{dE}{dx}=-\frac{4}{\pi}\frac{x^2}{x^2_\mathrm{stop}}\frac{1}{\sqrt{x^2_\mathrm{stop}-x^2}}, \end{equation} where $E_\mathrm{in}$ is the initial parton energy, $x_\mathrm{stop}$ is the stopping distance given by~\cite{Chesler:2008uy,Gubser:2008as} \begin{equation} \label{eq:xStop} x_\mathrm{stop}=\frac{1}{2\kappa_\mathrm{sc}}\frac{E_\mathrm{in}^{1/3}}{T^{4/3}}, \end{equation} with $\kappa_\mathrm{sc}$ as a model parameter. Equations~(\ref{eq:adsDrag}) and~(\ref{eq:xStop}) were derived for energetic quarks propagating through the strongly coupled plasma. In the \textsc{Hybrid} model, gluons are assumed to follow the same equation, but with a rescaled parameter $\kappa_\mathrm{sc}^G = \kappa_\mathrm{sc} (C_A/C_F)^{1/3}$ so that within the string-based picture a gluon has a stopping distance similar to that of a quark with half of the gluon energy~\cite{Casalderrey-Solana:2014bpa}.
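A quick numerical check of Eqs.~(\ref{eq:adsDrag}) and (\ref{eq:xStop}): integrating the drag over $0<x<x_\mathrm{stop}$ returns the full initial energy, i.e., the parton indeed stops exactly at $x_\mathrm{stop}$ (a sketch with illustrative parameter values, not the \textsc{Hybrid} code):

```python
import math

def x_stop(E_in, T, kappa_sc):
    """Stopping distance, Eq. (xStop)."""
    return E_in ** (1.0/3.0) / (2.0 * kappa_sc * T ** (4.0/3.0))

def frac_drag(x, X):
    """-(1/E_in) dE/dx from Eq. (adsDrag), with X = x_stop."""
    return (4.0 / math.pi) * x * x / (X * X * math.sqrt(X * X - x * x))

X = x_stop(E_in=100.0, T=0.3, kappa_sc=0.4)  # illustrative values
n = 200000
# midpoint rule avoids evaluating the integrable singularity at x = X
loss = sum(frac_drag((i + 0.5) * X / n, X) for i in range(n)) * X / n
# loss -> 1: the full initial energy is drained over the stopping distance
```

Analytically, the substitution $x=x_\mathrm{stop}\sin\theta$ reduces the integral to $(4/\pi)\int_0^{\pi/2}\sin^2\theta\,d\theta=1$, which the midpoint sum reproduces to a few per mille.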
The transverse momentum kicks on jet partons from the medium are applied with $dk^2_\perp=\hat{q}_a dt$ (sampled with a Gaussian distribution with $dk^2_\perp$ as its width) for each time step $dt$, where $\hat{q}_a=K_aT^3$ is assumed with $K_a$ as a model parameter. In the \textsc{Hybrid} model, the local temperature $T$ in the QGP is provided by hydrodynamic models. Both boost invariant ideal hydrodynamic simulations~\cite{Hirano:2010je} and viscous hydrodynamic simulations~\cite{Shen:2014vra} have been used. To take into account the local velocity of the expanding QGP during each time step (or unit length), a given jet parton is first boosted into the local rest frame of the hydrodynamic fluid cell, in which its momentum is updated with both longitudinal drag and transverse momentum broadening. It is then boosted back into the center-of-mass frame of collisions for propagation to the next time step. Upon each splitting, the \textsc{Hybrid} model assumes that the two daughter partons share the updated momentum of the mother according to the fraction $z$ pre-assigned in the splitting chain based on the \textsc{Pythia} vacuum showers. A more elaborate treatment of parton-medium scatterings between vacuum-like splittings is implemented in \textsc{Jewel}~\cite{Zapp:2011ya,Zapp:2012ak,Zapp:2013vla}. To take into account the transverse phase in the LPM interference accumulated through multiple scattering, a radiated gluon is allowed to scatter with the medium during its formation time $\tau_f=2\omega/k^2_\perp$, where $\omega$ is the gluon energy and $k_\perp$ is its transverse momentum with respect to its mother. Scatterings between the jet parton and thermal partons inside the medium are described using $2 \rightarrow 2$ pQCD matrix elements. Detailed implementation of such perturbative scatterings will be discussed in the next subsection.
When an additional scattering happens, the total momentum transfer between the jet parton and the medium increases the virtuality scale of the mother, as well as the accumulated $k_\perp$ of the emitted gluon. Its formation time will also be updated accordingly. If this scattering is still within the updated formation time, it is accepted; otherwise it is rejected. In the end, when there is no more scattering with the medium, this medium-modified gluon emission will be accepted with the probability $1/N_\mathrm{scat}$, where $N_\mathrm{scat}$ is the number of scatterings within the formation time. Note that this $1/N_\mathrm{scat}$ probability is based on the assumption of the ``totally coherent limit"~\cite{Zapp:2011ya}. How to implement the LPM effect between the incoherent and totally coherent limits is still challenging. Apart from modifying existing vacuum-like splittings, these parton-medium interactions can also raise the scales and trigger additional splittings for those partons that otherwise cannot split in vacuum when their scales drop below $Q_\mathrm{min}^2$. In \textsc{Jewel}, thermal partons inside the QGP can be scattered out of the medium by jets. They are then referred to as ``recoil" partons. In principle, these recoil partons can continue interacting with the medium in the same way as jet partons. However, these rescatterings are not yet implemented in the default setting of \textsc{Jewel}. To ensure the energy-momentum conservation of the entire jet-medium system, the distributions of the thermal partons before being scattered also need to be recorded and subtracted from the final state parton spectra. So far, most calculations using \textsc{Jewel} apply a simplified hydrodynamic model that only describes the boost-invariant longitudinal expansion~\cite{Bjorken:1982qr} of an ideal QGP. In principle, more sophisticated medium profiles can also be applied as long as they provide the phase space density of scattering centers to \textsc{Jewel}.
\subsection{Parton transport} \label{subsec:transport} As the virtuality scale of jet partons approaches the scale of parton-medium interactions, the virtuality-ordered parton shower model will no longer be applicable. Instead, transport models become a better choice to describe jet-medium scatterings in this region of the phase space. A linear Boltzmann transport (\textsc{Lbt}) model is developed to describe jet parton scatterings through a thermal medium~\cite{Wang:2013cia,Cao:2016gvr,Cao:2017hhk,Chen:2017zte,Luo:2018pto,He:2018xjv}, in which the phase space distribution of a jet parton $a$ with $p_a^\mu = (E_a, \vec{p}_a)$ evolves according to the Boltzmann equation, \begin{equation} \label{eq:boltzmann1} p_a\cdot\partial f_a(x_a,p_a)=E_a (\mathcal{C}_a^\mathrm{el}+\mathcal{C}_a^\mathrm{inel}). \end{equation} On the right-hand side of the above equation, $\mathcal{C}_a^\mathrm{el}$ and $\mathcal{C}_a^\mathrm{inel}$ are the collision integrals for elastic and inelastic scatterings, respectively. For the elastic scattering ($ab\leftrightarrow cd$) process, the collision integral reads \begin{align} \label{eq:collision0} \mathcal{C}_a^\mathrm{el} &= \sum_{b,(cd)} \int \prod_{i=b,c,d}\frac{d[p_i]}{2E_a}(2\pi)^4\delta^4(p_a+p_b-p_c-p_d) \nonumber\\ &\times \left(\frac{\gamma_c \gamma_d}{\gamma_a}f_c f_d \left|\mathcal{M}_{cd\rightarrow ab}\right|^2 - \gamma_b f_a f_b \left|\mathcal{M}_{ab\rightarrow cd}\right|^2 \right) \nonumber\\ &\times S_2(\hat{s},\hat{t},\hat{u}), \end{align} with a gain term subtracted by a loss term of $f_a$ in the second line. Here, $\sum_{b,(cd)}$ sums over the flavors of parton $b$, and different scattering channels with final parton flavors $c$ and $d$, $d[p_i]\equiv d^3p_i/[2E_i(2\pi)^3]$, $\gamma_i$ is the spin-color degeneracy (6 for a quark and 16 for a gluon), and $f_i$ is the phase space distribution of each parton with a given spin and color. 
For thermal partons inside the QGP ($i=b, d$), $f_i = 1/(e^{E_i/T}\pm 1)$ with $T$ being the local temperature of the fluid cell; for a jet parton ($i=a, c$) at $(\vec{x},\vec{p})$, $f_i = (2\pi)^3\delta^3(\vec{p}_i-\vec{p})\delta^3(\vec{x}_i-\vec{x})$ is taken. Note that in the gain term, only the production of $a$ from the jet-medium parton scattering between $c$ and $d$ is considered, while that from the thermal-thermal or jet-jet scattering is neglected. The scattering matrix $|\mathcal{M}_{ab\rightarrow cd}|^2$ (see Ref.~\cite{Eichten:1984eu} for massless partons and Ref.~\cite{Combridge:1978kx} for heavy quarks) has been summed over the spin-color degrees of freedom of the final state ($cd$) and averaged over the initial state ($ab$), and similarly for $|\mathcal{M}_{cd\rightarrow ab}|^2$. A double step function $S_2(\hat{s},\hat{t},\hat{u})=\theta(\hat{s}\ge 2\mu_\mathrm{D}^2)\theta(-\hat{s}+\mu^2_\mathrm{D}\le \hat{t} \le -\mu_\mathrm{D}^2)$ is introduced~\cite{Auvinen:2009qm} to regulate the collinear divergence in the leading-order (LO) elastic scattering matrices implemented in \textsc{Lbt}, where $\hat{s}$, $\hat{t}$ and $\hat{u}$ are Mandelstam variables, and $\mu_\mathrm{D}^2=g^2T^2(N_c+N_f/2)/3$ is the Debye screening mass inside the QGP. An alternative method to regulate the divergence is to replace $\hat{t}$ by ($\hat{t}-\mu_\mathrm{D}^2$) in the denominators of $|\mathcal{M}_{ab\rightarrow cd}|^2$ (same for $\hat{u}$). Quantum statistics of the final states of $ab\leftrightarrow cd$ scatterings are neglected in Eq.~(\ref{eq:collision0}). 
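The ingredients above can be written down compactly; the following sketch (our own function names, not \textsc{Lbt} code) implements the thermal distributions $f_i$, the Debye mass $\mu_\mathrm{D}^2$, and the $S_2$ regulator:

```python
import math

def f_thermal(E, T, boson):
    """f_i = 1/(exp(E/T) -/+ 1): Bose for gluons, Fermi for quarks."""
    return 1.0 / (math.exp(E / T) - (1.0 if boson else -1.0))

def debye_mass2(alpha_s, T, Nc=3, Nf=3):
    """mu_D^2 = g^2 T^2 (Nc + Nf/2)/3, with g^2 = 4 pi alpha_s."""
    return 4.0 * math.pi * alpha_s * T * T * (Nc + Nf / 2.0) / 3.0

def S2(s, t, muD2):
    """Double step function regulating the collinear divergence:
    requires s >= 2 mu_D^2 and -s + mu_D^2 <= t <= -mu_D^2."""
    return 1.0 if (s >= 2.0 * muD2 and -s + muD2 <= t <= -muD2) else 0.0

muD2 = debye_mass2(alpha_s=0.3, T=0.3)  # ~0.5 GeV^2 (illustrative)
```

With these pieces, the integrand of the rate formula below can be assembled directly; the $S_2$ cut removes the small-$|\hat{t}|$ region where the LO matrix elements diverge.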
Since the detailed balance requires $\gamma_c\gamma_d \left|\mathcal{M}_{cd\rightarrow ab}\right|^2 = \gamma_a\gamma_b \left|\mathcal{M}_{ab\rightarrow cd}\right|^2$, Eq.~(\ref{eq:collision0}) can be simplified as \begin{align} \label{eq:collision} \mathcal{C}_a^\mathrm{el} &= \sum_{b,(cd)} \int \prod_{i=b,c,d}d[p_i]\frac{\gamma_b}{2E_a}(f_c f_d - f_a f_b) S_2(\hat{s},\hat{t},\hat{u})\nonumber\\ & \times (2\pi)^4\delta^4(p_a+p_b-p_c-p_d) \left|\mathcal{M}_{ab\rightarrow cd}\right|^2. \end{align} Examining Eq.~(\ref{eq:boltzmann1}) and the loss term in Eq.~(\ref{eq:collision}), one obtains the following elastic scattering rate (number of scatterings per unit time) for parton $a$, \begin{align} \label{eq:rate} \Gamma_a^\mathrm{el}&(\vec{p}_a,T)=\sum_{b,(cd)}\frac{\gamma_b}{2E_a}\int \prod_{i=b,c,d}d[p_i] f_b S_2(\hat{s},\hat{t},\hat{u})\nonumber\\ &\times (2\pi)^4\delta^{(4)}(p_a+p_b-p_c-p_d)|\mathcal{M}_{ab\rightarrow cd}|^2. \end{align} With the assumption of zero mass for thermal partons $b$ and $d$, this can be further simplified as \begin{align} \label{eq:rate2} \Gamma_a^\mathrm{el}(\vec{p}_a,&T) = \sum_{b,(cd)} \frac{\gamma_b}{16E_a(2\pi)^4}\int dE_b d\theta_b d\theta_d d\phi_{d}\nonumber\\ &\times f_b(E_b,T) S_2(\hat{s},\hat{t},\hat{u})|\mathcal{M}_{ab\rightarrow cd}|^2\nonumber\\ &\times \frac{E_b E_d \sin \theta_b \sin \theta_d}{E_a-|\vec{p}_a| \cos\theta_{d}+E_b-E_b\cos\theta_{bd}}, \end{align} where \begin{align} \label{eq:E4} \cos\theta_{bd}&=\sin\theta_b\sin\theta_d\cos\phi_{d}+\cos\theta_b\cos\theta_d,\\[10pt] E_d=&\frac{E_aE_b-p_aE_b\cos\theta_{b}}{E_a-p_a\cos\theta_{d}+E_b-E_b\cos\theta_{bd}}. \end{align} Here, we let parton $a$ move in the $+z$ direction, and $b$ in the $x-z$ plane with $\theta_b$ as its polar angle. The outgoing momentum of $d$ has $\theta_d$ and $\phi_d$ as its polar and azimuthal angles, and $\theta_{bd}$ is the angle between $b$ and $d$. 
Within a time interval $\Delta t$, the average number of elastic scatterings is then $ \Gamma_a^\mathrm{el}\Delta t$. One may allow multiple scatterings between the jet parton and the medium by assuming the number of independent scatterings $n$ to obey the Poisson distribution, \begin{equation} \label{eq:poission} P(n)=\frac{(\Gamma_a^\mathrm{el}\Delta t)^n}{n!}e^{-(\Gamma_a^\mathrm{el}\Delta t)}. \end{equation} Thus, the probability of scattering is $P_a^\mathrm{el}=1-\exp(-\Gamma_a^\mathrm{el}\Delta t)$, or just $\Gamma_a^\mathrm{el}\Delta t$ if it is much smaller than 1. One may extend Eq.~(\ref{eq:rate2}) to the average of a general quantity $X$ per unit time: \begin{align} \label{eq:defX} \langle\langle X&(\vec{p}_a,T) \rangle\rangle = \sum_{b,(cd)} \frac{\gamma_b}{16E_a(2\pi)^4}\int dE_b d\theta_b d\theta_d d\phi_{d}\nonumber\\ &\times X(\vec{p}_a,T) f_b(E_b,T) S_2(\hat{s},\hat{t},\hat{u})|\mathcal{M}_{ab\rightarrow cd}|^2\nonumber\\ &\times \frac{E_b E_d \sin \theta_b \sin \theta_d}{E_a-|\vec{p}_a| \cos\theta_{d}+E_b-E_b\cos\theta_{bd}}. \end{align} Therefore, we have $\Gamma_a^\mathrm{el} = \langle\langle 1 \rangle\rangle$ and \begin{align} \hat{q}_a&=\langle\langle \left[\vec{p}_c - (\vec{p}_c \cdot \hat{p}_a)\hat{p}_a\right]^2\rangle\rangle, \label{eq:22qhat}\\ \hat{e}_a&=\langle\langle E_a-E_c\rangle\rangle, \label{eq:22ehat} \end{align} where $\hat{q}_a$ and $\hat{e}_a$ denote the transverse momentum broadening square and elastic energy loss of the jet parton $a$, respectively, per unit time due to elastic scattering. 
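The Poisson treatment of multiple scatterings in Eq.~(\ref{eq:poission}) above can be sketched by counting exponential waiting times within $\Delta t$ (an illustration with our own function names):

```python
import math
import random

def n_scatterings(rate, dt, rng):
    """Number of independent scatterings in dt: exponential waiting
    times with mean 1/rate yield the Poisson count of Eq. (poission)."""
    n, t = 0, 0.0
    while True:
        t += rng.expovariate(rate)
        if t > dt:
            return n
        n += 1

rng = random.Random(3)
rate, dt = 0.5, 1.0   # illustrative: Gamma_el in 1/fm, dt in fm
ns = [n_scatterings(rate, dt, rng) for _ in range(100000)]
p_scatter = sum(1 for n in ns if n > 0) / len(ns)  # -> 1 - exp(-rate*dt)
mean_n = sum(ns) / len(ns)                          # -> rate*dt
```

The sampled frequency of at least one scattering reproduces $P_a^\mathrm{el}=1-\exp(-\Gamma_a^\mathrm{el}\Delta t)$, and the mean count reproduces $\Gamma_a^\mathrm{el}\Delta t$, as stated in the text.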
For high energy jet partons with the small-angle-scattering approximation, one may obtain~\cite{Wang:1996yf,He:2015pra}: \begin{align} \Gamma_a^\mathrm{el} &= C_2(a)\frac{42\zeta(3)}{\pi}\frac{\alpha_\mathrm{s}^2 T^3}{\mu_\mathrm{D}^2}, \label{eq:gammaAnalytic}\\ \hat{q}_a&=C_2(a)\frac{42\zeta(3)}{\pi}\alpha_\mathrm{s}^2 T^3 \ln\left(\frac{C_{\hat{q}} E_a T}{4\mu_\mathrm{D}^2}\right), \label{eq:qhatAnalytic}\\ \hat{e}_a&=C_2(a)\frac{3\pi}{2}\alpha_\mathrm{s}^2 T^2 \ln\left(\frac{C_{\hat{e}} E_a T}{4\mu_\mathrm{D}^2}\right), \label{eq:ehatAnalytic} \end{align} where $C_2(a)$ is the quadratic Casimir color factor of parton $a$, and $C_{\hat{q}}$ and $C_{\hat{e}}$ are constants depending on the kinematic cuts adopted in the calculations. With the implementations discussed above, comparisons between numerical evaluations and these analytical formulae suggest $C_{\hat{q}}\approx 5.7$ and $C_{\hat{e}}\approx 2.6$~\cite{He:2015pra}. Apart from elastic scattering, the inelastic process, or medium-induced gluon radiation, is included in the \textsc{Lbt} model by relating the inelastic scattering rate to the average number of emitted gluons from parton $a$ per unit time, which is evaluated as \cite{Cao:2013ita,Cao:2015hia,Cao:2016gvr} \begin{equation} \label{eq:gluonnumber} \Gamma_a^\mathrm{inel} (E_a,T,t) = \frac{1}{1+\delta_g^a}\int dzdk_\perp^2 \frac{dN_g^a}{dz dk_\perp^2 dt}, \end{equation} in which the Kronecker delta function $\delta_g^a$ is imposed to avoid double counting in the $g\rightarrow gg$ process, and the medium-induced gluon spectrum in the fluid comoving frame is taken from the high-twist energy loss calculation~\cite{Guo:2000nz,Majumder:2009ge,Zhang:2003wk}, \begin{equation} \label{eq:gluondistribution} \frac{dN_g^a}{dz dk_\perp^2 dt}=\frac{2C_A\alpha_\mathrm{s} P^\mathrm{vac}_a(z) k_\perp^4}{\pi C_2(a) (k_\perp^2+z^2 m_a^2)^4}\,\hat{q}_a\, {\sin}^2\left(\frac{t-t_i}{2\tau_f}\right), \end{equation} where $z$ and $k_\perp$ are the fractional energy and transverse
momentum of the emitted gluon with respect to its parent parton $a$, $P^\mathrm{vac}_a(z)$ is the vacuum splitting function of $a$ (note again it contains the color factor $C_2(a)$ by our convention here), $\hat{q}_a$ is the parton transport coefficient taken from the elastic scattering process Eq.~(\ref{eq:22qhat}), $t_i$ represents the production time of parton $a$, and $\tau_f={2E_a z(1-z)}/{(k_\perp^2+z^2m_a^2)}$ is the formation time of the emitted gluon with $m_a$ being the mass of parton $a$. In the current \textsc{Lbt} model, zero mass is taken for light flavor quarks and gluons, while 1.3~GeV is taken for the charm quark mass and 4.2~GeV for the beauty quark mass. The lower and upper limits of $z$ are taken as $z_\mathrm{min}=\mu_\mathrm{D}/E_a$ and $z_\mathrm{max}=1-\mu_\mathrm{D}/E_a$ respectively. Note that the medium-induced spectrum Eq.~(\ref{eq:gluondistribution}) here is consistent with the medium-induced splitting function used in the \textsc{Matter} model in Eqs.~(\ref{eq:sudakov}) and (\ref{eq:medP}). Multiple gluon emissions within each time interval $\Delta t$ are allowed in the \textsc{Lbt} model. Similar to the elastic scattering process, the number of emitted gluons obeys a Poisson distribution with a mean value of $\Gamma_a^\mathrm{inel}\Delta t$. Thus, the probability of inelastic scattering is $P_a^\mathrm{inel}=1-\exp(-\Gamma_a^\mathrm{inel}\Delta t)$. Note that both multiple elastic scatterings and multiple emitted gluons are assumed to be incoherent; possible interference among them has not been taken into account in \textsc{Lbt} yet. To combine elastic and inelastic scattering processes, the total scattering probability is divided into two parts: pure elastic scattering without gluon emission $P_a^\mathrm{el}(1-P_a^\mathrm{inel})$ and inelastic scattering $P_a^\mathrm{inel}$. The total probability is then $P_a^\mathrm{tot}=P_a^\mathrm{el}+P_a^\mathrm{inel}-P_a^\mathrm{el} P_a^\mathrm{inel}$.
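The probability bookkeeping just described can be sketched as follows (an illustration with our own function names; in \textsc{Lbt} the rates come from Eqs.~(\ref{eq:rate2}) and (\ref{eq:gluonnumber})):

```python
import math
import random

def scattering_channel(gamma_el, gamma_inel, dt, rng):
    """Select the channel in a time step dt: pure elastic with
    probability P_el*(1-P_inel), inelastic with P_inel, so that
    P_tot = P_el + P_inel - P_el*P_inel."""
    p_el = 1.0 - math.exp(-gamma_el * dt)
    p_inel = 1.0 - math.exp(-gamma_inel * dt)
    r = rng.random()
    if r < p_el * (1.0 - p_inel):
        return "elastic"
    if r < p_el + p_inel - p_el * p_inel:
        return "inelastic"
    return "none"

rng = random.Random(5)
gamma_el, gamma_inel, dt = 1.0, 0.5, 0.5  # illustrative rates (1/fm), fm
counts = {"elastic": 0, "inelastic": 0, "none": 0}
for _ in range(100000):
    counts[scattering_channel(gamma_el, gamma_inel, dt, rng)] += 1
frac_inel = counts["inelastic"] / 100000  # -> P_inel
```

Sampling a single uniform number against the two cumulative thresholds reproduces exactly the stated splitting: inelastic events occur with frequency $P_a^\mathrm{inel}$ and pure elastic ones with $P_a^\mathrm{el}(1-P_a^\mathrm{inel})$.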
Based on these probabilities, the Monte Carlo method is applied to determine whether a given jet parton $a$ with momentum $\vec{p}_a$ scatters with the thermal medium with local temperature $T$, and whether the scattering is pure elastic or inelastic. With a selected scattering channel, as well as the number of elastic scatterings or emitted gluons given by the Poisson distribution, the energies and momenta of the outgoing partons are sampled using the corresponding differential spectra given by either Eq.~(\ref{eq:rate2}) or Eq.~(\ref{eq:gluondistribution}). In the \textsc{Lbt} model, the emitted gluons are induced by scatterings between jet partons and the thermal medium. Therefore, for an inelastic scattering process, a $2\rightarrow2$ scattering is generated first, after which the four-momenta of the two outgoing partons are adjusted together with the $n$ emitted gluons so that the energy-momentum conservation of the $2 \rightarrow 2 + n$ process is respected. For realistic heavy-ion collisions, the initial momenta of jet partons are either sampled using spectra from pQCD calculations for initial hard collisions or generated with a pQCD Monte Carlo generator such as \textsc{Pythia} or other programs for vacuum showers. Their initial positions are either sampled with the Monte-Carlo Glauber models or taken from the \textsc{Ampt}~\cite{Lin:2004en} simulations for the early time evolution. Different hydrodynamic models, (2+1)-D viscous \textsc{Vishnew}~\cite{Song:2007fn,Song:2007ux,Qiu:2011hf} and (3+1)-D viscous \textsc{Clvisc}~\cite{Pang:2014ipa,Pang:2012he} with Monte-Carlo Glauber or \textsc{Ampt} initial conditions, are used to generate the spacetime evolution profiles of the QGP. At the beginning of a given time step, each jet parton is boosted into the rest frame of its local fluid cell, in which its scattering with the medium is simulated using the linear Boltzmann equation. 
The outgoing partons after scatterings are then boosted back to the global center-of-mass frame of collisions, in which they propagate to the next time step. On the freeze-out hypersurface of the QGP ($T_\mathrm{c}=165$~MeV), jet partons are converted into hadrons using either the \textsc{Pythia} simulation or the recombination model~\cite{Han:2016uhh}. In the \textsc{Lbt} model, all partons in jet showers are fully tracked, including energetic jet partons and their emitted gluons, as well as ``recoil" partons, which are thermal medium partons in the final state of the elastic scattering. All these partons are treated on the same footing and are allowed to re-scatter with the medium. When a recoil parton is generated, it leaves a hole behind in the phase space inside the medium. These holes are labeled as ``negative" partons in \textsc{Lbt}, denoting the back-reaction of the QGP response to jet propagation. Their energy-momentum will be subtracted from the final-state jet spectra to ensure the energy-momentum conservation of the jet-medium system. A more rigorous treatment of this back-reaction, as well as the subsequent evolution of those soft partons (at or below the thermal scale of the medium) produced in jet-medium scatterings, will be discussed in Sec.~\ref{subsec:concurrent} using the \textsc{CoLbt-Hydro} model. Another example of a jet transport model is \textsc{Martini}~\cite{Schenke:2009gb,Park:2018acg}, which implements the AMY energy loss formalism for the radiative energy loss rates~\cite{Arnold:2002ja,Arnold:2002zm} combined with elastic scattering processes~\cite{Schenke:2009ik}.
The medium-induced parton splitting processes are realized by solving a set of coupled rate equations for the time evolution of the energy distribution of quark/gluon jet partons $f_{q/g}(p)$: \begin{align} \label{eq:rateMAR} \frac{df_q(p)}{dt} &= \int_k f_q(p+k)\frac{d\Gamma^{q}_{qg}(p+k,k)}{dkdt}\\ -&f_q(p)\frac{d\Gamma^{q}_{qg}(p,k)}{dkdt} +2f_g(p+k)\frac{d\Gamma^g_{q\bar{q}}(p+k,k)}{dkdt},\nonumber\\[10pt] \frac{df_g(p)}{dt} &= \int_k f_q(p+k)\frac{d\Gamma^{q}_{qg}(p+k,p)}{dkdt}\nonumber\\ +&f_g(p+k)\frac{d\Gamma^{g}_{gg}(p+k,p)}{dkdt}\\ - &f_g(p)\left[\frac{d\Gamma^g_{q\bar{q}}(p,k)}{dkdt} + \frac{d\Gamma^g_{gg}(p,k)}{dkdt}\theta(2k-p)\right].\nonumber \end{align} Here, $d\Gamma^a_{bc}(p,k)/dkdt$ is the transition rate taken from the AMY formalism for parton $a$ (with energy $p$) to split into parton $b$ (with energy $p-k$) and parton $c$ (with energy $k$). The factor of 2 in front of the $g\rightarrow q\bar{q}$ rate takes into account the fact that $q$ and $\bar{q}$ are distinguishable; and the $\theta$ function after the $g \rightarrow gg$ rate is introduced to avoid double counting of its final state. The integration range with $k<0$ corresponds to energy gain of jet partons from the thermal medium; and the range with $k>p$ for the $q \rightarrow qg$ process corresponds to the quark annihilating with an anti-quark with energy $k-p$ from the medium into the gluon. The AMY formalism describes the energy loss of hard jets as partons split in a thermal medium. The radiation rates are calculated by means of integral equations~\cite{Arnold:2002ja} with the assumptions that quarks and gluons in the medium are well defined (hard) quasi-particles with momenta much larger than the medium temperature $T$ and thermal masses of the order of $gT$~\cite{Arnold:2002zm}. 
In the current \textsc{Martini} model, the radiative energy loss mechanism has been improved by implementing the effects of finite formation time~\cite{CaronHuot:2010bp,Park:2018acg} and running coupling~\cite{Young:2012dv}. The formation time of the radiation process is set as $\sim p/p_\mathrm{T}^2$, within which the hard parton and the emitted parton are considered as a coherent state. This effectively reduces the radiation rate at early times after the hard parton is produced. The coupling constant $\alpha_\mathrm{s}(\mu^2)$ runs with the scale of the average momentum transfer square between the jet parton and the medium \begin{equation} \label{eq:muMartini} \mu^2 = {\langle p^2_\perp \rangle} \sim \sqrt{\hat{q}p}, \end{equation} where $\hat{q}$ is the jet transport parameter for the average momentum transfer square per mean-free path. The daughter partons are strictly collinear with their mother at the splitting vertex; additional transverse momentum broadening is introduced by elastic scattering processes. For realistic simulations of jet propagation in heavy-ion collisions within the \textsc{Martini} model, the medium background is provided by the (3+1)-D viscous hydrodynamic model \textsc{Music} with IP-Glasma initial conditions~\cite{Schenke:2010nt,McDonald:2016vlt}. Jet partons are initialized with \textsc{Pythia} vacuum showers. After their evolution through \textsc{Martini}, they are fed back to \textsc{Pythia} for hadronization. A recoil parton can be generated within \textsc{Martini} if its momentum -- the sum of its original thermal momentum and the momentum transfer from the scattering -- is larger than a certain kinematic scale (e.g. $4T$). These recoil partons continue scattering with the medium in the same way as high-energy jet partons.
The soft recoil partons below the $4T$ threshold, as well as the back-reaction to the medium due to the generation of recoil partons, are expected to be deposited into the subsequent QGP evolution in future work, and thus have not been included in the \textsc{Martini} model yet. In both \textsc{Lbt} and \textsc{Martini}, while jet parton evolution is described using the linear Boltzmann equation or rate equation, the QGP medium evolution is described using hydrodynamic models. In the literature, an alternative approach is to apply a full Boltzmann simulation for both jet partons and medium partons, although it is still under debate whether the strongly coupled QGP can be modeled with quasi-particles within the perturbative framework. One example is the \textsc{Ampt} model~\cite{Lin:2004en,Ma:2013pha,Ma:2013bia,Ma:2013gga,Ma:2013uqa,Nie:2014pla,Gao:2016ldo}. In \textsc{Ampt}, the initial spatial and momentum information of high-$p_\mathrm{T}$ jets, mini-jets and soft string excitations is taken from \textsc{Hijing} simulations~\cite{Wang:1991hta,Gyulassy:1994ew}, which are further converted into individual quarks via the string melting model~\cite{Lin:2004en}. These quarks, including both jet partons and medium partons, then scatter with each other through elastic collisions whose interaction strength is controlled by a set of partonic cross sections $\sigma$ that are treated as model parameters. Note that gluon components and inelastic scatterings have not been included in the current \textsc{Ampt} model. At the end of the partonic collisions, a simple coalescence model is applied to convert the two/three nearest quarks in space into mesons/baryons. These hadrons can further scatter with each other within the \textsc{Art} model~\cite{Li:1995pra}.
Another example of full Boltzmann transport is the \textsc{Bamps} model~\cite{Xu:2004mz,Xu:2007aa,Senzel:2013dta,Uphoff:2014cba,Senzel:2016qau}, in which both elastic and inelastic scatterings between partons during the QGP phase are simulated with the leading-order perturbative QCD cross sections. For inelastic scatterings, the Gunion-Bertsch approximation is adopted~\cite{Gunion:1981qs,Fochler:2013epa}, and the LPM effect is modeled with a theta function $\theta(\lambda-\tau X_\mathrm{LPM})$ that requires the mean free path of the parent parton, $\lambda$, to be larger than the formation time of the emitted gluon, $\tau$, scaled with an $X_\mathrm{LPM}$ parameter between 0 and 1. In \textsc{Bamps}, initial partons, including both jet partons and medium partons, are generated by \textsc{Pythia}. At the partonic freeze-out energy density ($\epsilon = 0.6$~GeV/fm$^3$), jet partons are converted into hadrons using the Albino-Kniehl-Kramer (AKK) fragmentation functions~\cite{Albino:2008fy}. \subsection{Multi-scale jet evolution} \label{subsec:multi-scale} Interactions between jets and the QGP differ at various scales. Thus, it is incomplete to apply a single formalism through the entire spacetime history of jet evolution. A first attempt to combine different and complementary theoretical approaches into a multi-scale jet evolution model was implemented in the \textsc{Jetscape} framework~\cite{Cao:2017zih,Putschke:2019yrg,Park:2019sdn,Tachibana:2018yae}, in which medium-modified splittings of jet partons are described using the \textsc{Matter} model at high virtualities, as discussed in Sec.~\ref{subsec:med_showers_virtual}, while their subsequent transport via jet-medium interactions at low virtualities is described using either the \textsc{Lbt} model or the \textsc{Martini} model, as discussed in Sec.~\ref{subsec:transport}. One crucial quantity in this combined approach is the separation scale $Q_0^2$ between the DGLAP-type parton showers and the in-medium transport.
In Ref.~\cite{Cao:2017zih}, two different schemes, fixed $Q_0^2$ and dynamical $Q_0^2$, are investigated within a static medium. For the former, \textsc{Matter} is used to simulate parton splittings when the virtualities are above a fixed value of $Q_0^2$ ($1$, $4$ or $9$~GeV$^2$) while either \textsc{Lbt} or \textsc{Martini} is used to simulate the parton scatterings with the medium when the virtualities are below that separation scale. For the latter, $Q_0^2=\hat{q}\tau_f$ is defined for each parton, in the sense that the virtuality-ordered parton showers should switch to transport when the parton virtuality scale is comparable to the virtuality gain (or transverse momentum broadening square) from scatterings with the medium. With $\tau_f=2E/Q_0^2$, one can obtain $Q_0^2=\sqrt{2E\hat{q}}$, in which $\hat{q}$ can be taken from Eq.~(\ref{eq:qhatAnalytic}) within the picture of perturbative scatterings of jet partons inside a thermal medium. Note that $Q_0^2$ is only applied when it is larger than $Q_\mathrm{min}^2$ (taken as 1~GeV$^2$ in most models) above which the DGLAP evolution is reliable. If not, $Q_\mathrm{min}^2$ is used as the lower boundary for the \textsc{Matter} model. In addition, to calculate the nuclear modification of jets in realistic heavy-ion collisions, if the virtuality scale of a given parton is still larger than $Q_\mathrm{min}^2$ when it exits the QGP, this parton should then continue splitting within \textsc{Matter} with only the vacuum splitting function until all its daughter partons reach $Q_\mathrm{min}^2$. This guarantees a meaningful comparison to the baseline spectra in p+p collisions, which is obtained with vacuum showers in \textsc{Matter} directly down to $Q_\mathrm{min}^2$ for each parton. Within the \textsc{Jetscape} framework, it is found that energetic partons spend finite times within both the DGLAP-splitting stage and the transport stage. 
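As a small numerical illustration of the dynamical scheme described above, the following sketch (with hypothetical function and argument names, and units assumed to be GeV-based) evaluates $Q_0^2=\sqrt{2E\hat{q}}$ and applies the $Q_\mathrm{min}^2$ floor:

```python
import math

def switching_scale_sq(energy, qhat, q_min_sq=1.0):
    """Dynamical separation scale Q0^2 = sqrt(2 E qhat) between the
    DGLAP shower stage and the transport stage, floored at Q_min^2
    (default 1 GeV^2) below which DGLAP evolution is unreliable.
    Units assumed: energy in GeV, qhat in GeV^3; returns GeV^2."""
    q0_sq = math.sqrt(2.0 * energy * qhat)
    return max(q0_sq, q_min_sq)
```

For example, a 100~GeV parton with $\hat{q}=0.02$~GeV$^3$ switches at $Q_0^2=2$~GeV$^2$, while a 1~GeV parton falls back to the $Q_\mathrm{min}^2=1$~GeV$^2$ floor.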
The switching time is delayed as the parton energy increases or the switching scale $Q_0^2$ decreases. Thus, the \textsc{Matter} model dominates the medium modification of parton spectra at high $p_\mathrm{T}$, while \textsc{Lbt}/\textsc{Martini} dominates at low $p_\mathrm{T}$. A larger value of $Q_0^2$ typically weakens the medium modification from the \textsc{Matter} stage, and thus enhances the relative contribution from the subsequent \textsc{Lbt}/\textsc{Martini} stage. Increasing the medium size also extends the in-medium path length of partons after they reach $Q_0^2$, and thus raises the contribution from \textsc{Lbt}/\textsc{Martini}. To date, there is no theoretical determination of the exact value of the switching scale $Q_0^2$ between different implementations of jet energy loss theory. It is therefore treated as a parameter in this multi-scale jet evolution model when comparing with experimental data~\cite{Park:2019sdn,Tachibana:2018yae}. A similar multi-scale approach has also been applied to the investigation of heavy quark energy loss~\cite{Cao:2017crw}, where the medium-modified parton fragmentation functions at low virtualities (heavy meson mass scales) are first extracted from an in-medium transport model and then evolved up to high virtualities (heavy meson $p_\mathrm{T}$ scales) using the medium-modified DGLAP equation. \section{Medium response to jet propagation} \label{sec:medium_response} \subsection{Hydrodynamic evolution of bulk medium} \label{subsec:hydro} Numerical calculations of jet quenching, and in particular the simulation of jet transport, require the space-time profile of the bulk medium evolution in high-energy nuclear collisions. Along the propagation path, one needs the information on the local temperature (or thermal parton density) and fluid velocity in order to evaluate the scattering rate and gluon radiation spectra.
Many hydrodynamic models have been developed for this purpose~\cite{Kolb:2000sd,Huovinen:2001cy,Nonaka:2006yn,Romatschke:2007mq,Song:2007fn,Qiu:2011hf,Petersen:2010cw,Werner:2010aa,Schenke:2010nt,Pang:2014ipa,Pang:2012he}. Here we briefly review the \textsc{CLVisc} hydrodynamic model~\cite{Pang:2012he,Pang:2014ipa} which is used for jet transport simulations within the \textsc{Lbt} model discussed in this review. The second-order hydrodynamic equations for the evolution of QGP with zero baryon density are given by \begin{equation} \partial_\mu T^{\mu\nu} = 0, \label{eqn:hydro} \end{equation} with the energy-momentum tensor \begin{equation} T^{\mu\nu}=\varepsilon u^{\mu}u^{\nu}-(P+\Pi) \Delta^{\mu\nu} + \pi^{\mu\nu}, \label{eqn:tmunu} \end{equation} where $\varepsilon$ is the energy density, $P$ the pressure, $u^{\mu}$ the fluid four-velocity, $\pi^{\mu\nu}$ the shear stress tensor, $\Pi$ the bulk pressure and $\Delta^{\mu\nu}=g^{\mu\nu}-u^{\mu}u^{\nu}$ the projection operator which is orthogonal to the fluid velocity. In the Landau frame, the shear stress tensor is traceless ($\pi_{\mu}^{\mu}=0$) and transverse $(u_{\mu}\pi^{\mu\nu}=0)$. In the Milne coordinates, $\tau=\sqrt{t^2 - z^2}$ is the proper time and $\eta_s = (1/2)\ln [(t+z)/(t-z)]$ the space-time rapidity. The \textsc{CLVisc} hydrodynamic model uses the Kurganov-Tadmor algorithm \cite{KURGANOV2000241} to solve the hydrodynamic equation for the bulk medium and the Cooper-Frye particlization for hadron freeze-out with GPU parallelization using the Open Computing Language (OpenCL). With GPU parallelization and Single Instruction Multiple Data (SIMD) vector operations on modern CPUs, \textsc{CLVisc} achieves the best computing performance so far for event-by-event (3+1)D hydrodynamic simulations on heterogeneous computing devices. 
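As a small illustration of the Milne coordinates defined above (a standalone sketch, not part of \textsc{CLVisc}), the map from Cartesian $(t,z)$ to $(\tau,\eta_s)$ reads:

```python
import math

def to_milne(t, z):
    """Milne coordinates for t > |z|: proper time
    tau = sqrt(t^2 - z^2) and space-time rapidity
    eta_s = (1/2) ln[(t+z)/(t-z)]."""
    tau = math.sqrt(t * t - z * z)
    eta_s = 0.5 * math.log((t + z) / (t - z))
    return tau, eta_s
```

A fluid element on the boost-invariant trajectory $t=\tau_0\cosh\eta_s$, $z=\tau_0\sinh\eta_s$ maps back to $(\tau_0,\eta_s)$, as one can verify directly.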
The initial energy-momentum density distributions for the event-by-event \textsc{CLVisc} hydrodynamic simulations are obtained from partons given by the \textsc{Ampt} model~\cite{Lin:2004en} with a Gaussian smearing, \begin{eqnarray} T^{\mu\nu} (\tau_{0},x,y,\eta_{s}) & = K\sum_{i} \frac{p^{\mu}_{i}p^{\nu}_{i}}{p^{\tau}_{i}}\frac{1}{\tau_{0}\sqrt{2\pi\sigma_{\eta_{s}}^{2}}}\frac{1}{2\pi\sigma_{r}^{2}} \nonumber\\ &\hspace{-0.4in} \times \exp \left[-\frac{(\vec x_\perp-\vec x_{\perp i})^{2}}{2\sigma_{r}^{2}} - \frac{(\eta_{s}-\eta_{i s})^{2}}{2\sigma_{\eta_{s}}^{2}}\right], \label{eq:Pmu} \end{eqnarray} where $p^{\tau}_{i}=m_{i\mathrm{T}}\cosh(Y_{i}-\eta_{i s})$, $\vec p^{\perp}_{i}=\vec p_{i \perp}$, $p^{\eta}_{i}=m_{i \mathrm{T}}\sinh(Y_{i}-\eta_{i s})/\tau_{0}$, $m_{i \mathrm{T}}=\sqrt{p_{i \perp}^2+m_i^2}$ and the summation runs over all partons $(i)$ produced from the \textsc{Ampt} model simulations. The scale factor $K$ and the initial time $\tau_{0}$ are two parameters that are adjusted to fit the experimental data on the central rapidity density of produced hadrons. A parametrized equation of state (EoS) s95p-v1\cite{Huovinen:2009yb} is used in \textsc{CLVisc}. \begin{figure} \centering \includegraphics[width=8.0cm]{He-geo5020} \caption{(Color online) The transverse distributions of mini-jets in a typical \textsc{Ampt} event of Pb+Pb collisions with different centralities at $\sqrt{s}=5.02$~TeV. The figure is from Ref.~\cite{He:v2-private}} \label{fig:geo} \end{figure} The \textsc{Ampt} model employs the \textsc{Hijing} model~\cite{Wang:1991hta,Gyulassy:1994ew} to generate initial bulk partons as well as jets according to the Glauber model of nuclear collisions with the Woods-Saxon nuclear distribution. Bulk parton transport is simulated in \textsc{Ampt} for the whole evolution history in the Cartesian coordinates. 
Partons at the fixed initial time $\tau_0$ in the Milne coordinates are used to evaluate the initial condition for the \textsc{CLVisc} hydrodynamic evolution according to Eq.~(\ref{eq:Pmu}). The centrality classes of heavy-ion collisions are defined according to the initial parton multiplicity distribution, and the average number of participant nucleons $\langle N_{\rm part}\rangle$ in each centrality class is computed accordingly. In the event-by-event simulation of jet transport through the \textsc{CLVisc} hydrodynamic background, a fixed number of \textsc{CLVisc} hydrodynamic events (e.g. 200) is used for each centrality bin of heavy-ion collisions. For each of these hydrodynamic events, a large number of triggered jets (e.g. 10000) in each bin of transverse momentum and rapidity are simulated, whose initial transverse positions are sampled according to the transverse distribution of mini-jets in the same \textsc{Ampt} event that provides the initial condition for the bulk medium evolution in \textsc{CLVisc}. Shown in Fig.~\ref{fig:geo} are the transverse distributions of mini-jets in a typical \textsc{Ampt} event of Pb+Pb collisions at $\sqrt{s}=5.02$~TeV (per nucleon pair) for different centralities. \subsection{Medium response in parton transport} \label{subsec:recoil} Jet-medium interactions lead not only to medium modification of jets, but also medium response to the energy-momentum deposition from jets. One way to propagate the lost energy and momentum of jets through the medium is via recoil partons. Recoil partons are the final-state partons that are scattered out of their initial phase-space in the thermal background medium by jets. They are fully tracked and allowed to rescatter with the thermal medium in the same way as jet partons do within the \textsc{Lbt} model~\cite{Wang:2013cia,He:2015pra,Luo:2018pto,He:2018xjv}. When these recoil partons are produced, they leave ``holes" in the phase-space of the thermal medium. 
These ``holes'' that carry the energy-momentum of the initial-state thermal partons of the scattering are labeled as back-reaction or ``negative" partons, and are propagated inside \textsc{Lbt} as well. Shown in Fig.~\ref{fig:LBTcone} are the energy density distributions of the medium response from the \textsc{Lbt} simulation~\cite{He:2015pra} of a gluon propagating along the $+z$-direction, starting at $z=0$ and transverse position $r=0$ with an initial energy $E_0=100$~GeV, after 4~fm/$c$ (upper panel) and 8~fm/$c$ (lower panel) of propagation time in a uniform QGP medium at a constant temperature $T=400$~MeV, averaged over many events. One can clearly see the formation and propagation of a Mach-cone-like shock wave induced by parton-medium interactions. The shock wave is rapidly diffused during its propagation because of the dissipation caused by the large shear viscosity that results from the pQCD parton-parton collisions implemented in the \textsc{Lbt} model. One can also see the depletion of the energy density behind the propagating parton as a diffusion wake induced by the jet-medium interaction. In realistic calculations, both jet partons and recoil partons are utilized to reconstruct jets, with the energy-momentum of ``negative" partons subtracted. This ensures the energy-momentum conservation of the entire jet-medium system. \begin{figure} \centering \includegraphics[width=6.5cm]{He-LBTcone1}\\ \vspace{-0.4 cm} \includegraphics[width=6.5cm]{He-LBTcone2} \caption{(Color online) The energy density distribution of the jet-induced medium response by a gluon with an initial energy $E_0=100$~GeV after (a) 4 and (b) 8 fm/$c$ of propagation time in a uniform QGP medium at a constant temperature $T=400$~MeV. The gluon propagates along the $+z$-direction from $z=0$ and transverse position $r=0$.
The figures are from Ref.~\cite{He:2015pra}.} \label{fig:LBTcone} \end{figure} As discussed in the previous section, this treatment of recoil partons in the jet-induced medium excitation has also been applied in \textsc{Martini}~\cite{Schenke:2009gb,Park:2018acg} and \textsc{Jewel}~\cite{Zapp:2011ya,Zapp:2012ak,Zapp:2013vla}. In \textsc{Martini}, only recoil partons above a certain kinematic threshold (e.g. $4T$) are kept in simulations and are allowed to rescatter with the medium. Recoil partons below the threshold, as well as the back-reaction or ``negative" partons, are regarded as part of the medium evolution and have not been included in \textsc{Martini} yet. In \textsc{Jewel}, both recoil and ``negative" partons are included. However, rescatterings of recoil partons with the medium are not implemented yet. A similar method, though not exactly through recoil particles, has been applied in the \textsc{Hybrid} model~\cite{Casalderrey-Solana:2016jvj} to take into account the jet-induced medium response. The energy-momentum loss from each jet parton $\Delta p^\mu = (\Delta E, \Delta \vec{p})$ is assumed to thermalize instantaneously with the medium background and is directly converted to hadrons using the Cooper-Frye formula. The additional particle production $d\Delta N/d^3p$ due to this energy-momentum deposition is positive along the direction of jet propagation, while it can be negative in the opposite direction. Similar to \textsc{Lbt} and \textsc{Jewel}, the latter part is treated as ``negative" particles or back-reaction, representing the diffusion wake behind the jet propagation. To ensure energy-momentum conservation of each jet event, an independent list of hadrons is first sampled using the Cooper-Frye formula until their total energy reaches the lost energy $\Delta E$ of the jet parton.
Then the four-momentum of each hadron is re-assigned based on the Cooper-Frye formula again, if this re-assignment improves the four-momentum conservation according to the Metropolis algorithm, until the total four-momentum of the hadron ensemble is sufficiently close to that lost by the jet parton ($\Delta p^\mu$). Among these different implementations of recoil and back-reaction, \textsc{Jewel} represents the limit where recoil partons do not rescatter with the medium, while \textsc{Hybrid} represents the opposite limit of sudden thermalization and hadronization of the energy-momentum transfer from jet to medium. In between, \textsc{Lbt} assumes perturbative rescatterings of these recoil partons through the thermal medium before they are converted into hadrons at the QGP boundary. A further improved treatment of the energy-momentum deposition from the jet into the medium is to evolve this deposition within the hydrodynamic model before hadronization, as will be discussed below. \subsection{Hydrodynamic response} \label{subsec:hydroResponse} One can assume that the deposited energy and momentum from jets become locally thermalized and evolve hydrodynamically together with the background QGP medium. In this scenario, the medium response to jet propagation is described by solving the hydrodynamic equation with a source term, \begin{equation} \label{eq:hydroSource} \partial_\mu T^{\mu\nu}(x)=J^\nu(x), \end{equation} where the source term $J^\nu(x)=[dE/d^4x,d{\vec p}/d^4x]$ represents the space-time profile of the energy-momentum deposition from jets into the medium.
Analytical solutions to Eq.~(\ref{eq:hydroSource}) exist under the assumption that the energy-momentum deposition from jets is a small perturbation on top of the medium evolution, so that the medium response can be linearized as~\cite{CasalderreySolana:2004qm,Neufeld:2008fi,Neufeld:2008dx} \begin{equation} \label{eq:linearHydro} T^{\mu\nu}\approx T^{\mu\nu}_0+\delta T^{\mu\nu};\;\; \partial_\mu T^{\mu\nu}_0=0, \;\; \partial_\mu \delta T^{\mu\nu}=J^\nu. \end{equation} Here $T^{\mu\nu}_0$ is the energy-momentum tensor of the unperturbed medium, and $\delta T^{\mu\nu}$ from the jet-induced medium excitation can be further decomposed as \begin{equation} \label{eq:linearHydroDecomp} \begin{split} &\delta T^{00}\equiv\delta \epsilon,\;\; \delta T^{0i}\equiv g^i, \\ &\delta T^{ij}=\delta^{ij}c_s^2\delta\epsilon - \frac{3}{4}\Gamma_s \left(\partial^i g^j + \partial^j g^i - \frac{2}{3} \delta^{ij} \nabla\cdot{\vec g}\right), \end{split} \end{equation} where $\delta\epsilon$ is the excess energy density, ${\vec g}$ is the momentum current, $c_s$ denotes the speed of sound, and $\Gamma_s\equiv 4\eta/[3(\epsilon_0+p_0)]$ is the sound attenuation length, with $\epsilon_0$ and $p_0$ being the unperturbed local energy density and pressure, respectively. Note that in Eq.~(\ref{eq:linearHydroDecomp}), the metric convention $g^{\mu\nu}=\mathrm{diag}\, (1, -1, -1, -1)$ is used as in Ref.~\cite{Song:2007ux}, and $\delta^{ij}=\mathrm{diag}\, (1, 1, 1)$ is the Kronecker delta.
With Fourier transformation, one may rewrite the last part of Eq.~(\ref{eq:linearHydro}) in the momentum space as \begin{equation} \label{eq:linearHydroPSpace} \begin{split} &J^0=-i\omega\delta\epsilon+i{\vec k}\cdot{\vec g}, \\ &{\vec J}=-i\omega{\vec g}+i{\vec k}c_s^2\delta\epsilon+\frac{3}{4}\Gamma_s\left[k^2{\vec g}+\frac{{\vec k}}{3}({\vec k}\cdot{\vec g})\right], \end{split} \end{equation} which yields \begin{align} \label{eq:linearHydroSoln1} \delta\epsilon({\vec k},\omega) &= \frac{(i\omega-\Gamma_s k^2)J^0({\vec k},\omega)+ikJ_L({\vec k},\omega)}{\omega^2-c_s^2 k^2 + i\Gamma_s\omega k^2},\\ \label{eq:linearHydroSoln2} {\vec g}_L({\vec k},\omega) &= \frac{i c_s^2 {\vec k} J^0({\vec k},\omega)+i\omega \hat{k} J_L({\vec k},\omega)}{\omega^2-c_s^2 k^2 + i\Gamma_s\omega k^2},\\ \label{eq:linearHydroSoln3} {\vec g}_T({\vec k},\omega) &= \frac{i {\vec J}_T({\vec k},\omega)}{\omega+\frac{3}{4}i\Gamma_s k^2}. \end{align} Here both the source term and the perturbed momentum current are divided into transverse and longitudinal components in the momentum space: ${\vec J}=\hat{k}{J}_L+{\vec J}_T$ and ${\vec g}={\vec g}_L+{\vec g}_T$. Therefore, with the knowledge of the source term, one may obtain the variation of the energy-momentum tensor of the medium after Fourier transforming Eqs.~(\ref{eq:linearHydroSoln1})-(\ref{eq:linearHydroSoln3}) back to the coordinate space. \begin{figure}[tbp] \centering \includegraphics[width=0.40\textwidth]{Neufeld-linearResponse1} \includegraphics[width=0.40\textwidth]{Neufeld-linearResponse2} \caption{(Color online) The perturbed energy density for different values of the shear-viscosity-to-entropy-density ratio $\eta/s$ when an energetic gluon propagates along the $+z$ direction through a static QGP medium with temperature $T = 350$~MeV and sound velocity $c_s=1/3$. The figures are from Ref.~\cite{Neufeld:2008dx}.} \label{fig:linearResponse-Neufeld} \end{figure} Many studies have been implemented using this linear response approach. 
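The momentum-space solutions Eqs.~(\ref{eq:linearHydroSoln1})-(\ref{eq:linearHydroSoln3}) are simple enough to evaluate directly; the following minimal sketch uses plain Python complex arithmetic, natural units, and constant $c_s^2$ and $\Gamma_s$ (an illustration only, not any of the cited codes):

```python
def linear_response(omega, kvec, J0, Jvec, cs2=1.0/3.0, Gs=0.1):
    """Linearized hydro response in momentum space: returns
    (delta_eps, g_L, g_T) for a source (J0, Jvec) at frequency omega
    and wave vector kvec (a 3-tuple, assumed nonzero).
    Gs is the sound attenuation length; all inputs in natural units."""
    k2 = sum(c * c for c in kvec)
    k = k2 ** 0.5
    khat = tuple(c / k for c in kvec)
    JL = sum(kh * Jc for kh, Jc in zip(khat, Jvec))          # longitudinal source
    JT = tuple(Jc - JL * kh for Jc, kh in zip(Jvec, khat))   # transverse source
    denom = omega**2 - cs2 * k2 + 1j * Gs * omega * k2       # sound-mode denominator
    d_eps = ((1j * omega - Gs * k2) * J0 + 1j * k * JL) / denom
    gL = tuple((1j * cs2 * kc * J0 + 1j * omega * kh * JL) / denom
               for kc, kh in zip(kvec, khat))
    gT = tuple(1j * JTc / (omega + 0.75j * Gs * k2) for JTc in JT)
    return d_eps, gL, gT
```

A purely transverse source ($J^0=0$, ${\vec J}\perp{\vec k}$) excites only the diffusive ${\vec g}_T$ mode, leaving $\delta\epsilon$ and ${\vec g}_L$ identically zero, which is a quick consistency check of the decomposition.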
For instance, the Mach cone structure of the perturbed medium induced by jet propagation has been proposed in Refs.~\cite{CasalderreySolana:2004qm,Neufeld:2008fi,Neufeld:2008dx,Yan:2017rku}, as demonstrated in Fig.~\ref{fig:linearResponse-Neufeld}, where the intensity of the Mach cone is found to be weaker with growing kinematic viscosity. If observed in heavy-ion collisions, these Mach cone structures may provide more direct constraints on the QGP properties, such as the shear viscosity and the speed of sound. However, the strong collective motion of the dynamically evolving QGP with realistic initial geometry may destroy the Mach cone pattern~\cite{Renk:2005si,Bouras:2014rea}, and no sensitive experimental observables have been found so far. Using the linear response theory, different structures of jet-induced excitations inside weakly coupled and strongly coupled QGP are also compared in Ref.~\cite{Ruppert:2005uz}. The relation between the hard ($\hat{q}$) and soft ($\eta/s$) transport parameters and jet energy loss is also investigated in Ref.~\cite{Ayala:2016pvm}. \begin{figure}[tbp] \centering \includegraphics[width=0.40\textwidth]{Qin-quark_coll_xe} \includegraphics[width=0.40\textwidth]{Qin-quark_shower_xe} \caption{(Color online) The linear hydrodynamical response to energy deposition from a single quark (top) vs. a quark-initiated jet shower (bottom). The figures are from Ref.~\cite{Qin:2009uh}.} \label{fig:Qin-shower-source} \end{figure} In earlier studies, a simplified model for constructing the source term was usually applied, where the energy-momentum deposition was assumed to come from a single parton. However, jets are collimated showers of partons that may transfer energy into the medium via a combination of elastic and inelastic processes of all constituent partons. More realistic modelings of the source term have been proposed in Refs.~\cite{Neufeld:2009ep,Qin:2009uh}.
As shown in Fig.~\ref{fig:Qin-shower-source}, a significantly enhanced conical pattern of the hydrodynamical response can be observed when depositing energy with realistic jet parton showers as compared to using energy loss from a single parton. Moreover, quantum interference effects between the primary parton and the radiated gluons within jet showers have been investigated in Ref.~\cite{Neufeld:2011yh} and shown to enhance the energy transfer from jet showers to the QGP and destroy the Mach cone structure of medium response when the gluon emission angle is large. \begin{figure}[tbp] \centering \includegraphics[width=0.48\textwidth]{Tachibana_profile} \includegraphics[width=0.48\textwidth]{Tachibana_profile_sub} \caption{(Color online) The energy density distribution of the QGP in the transverse plane at mid-rapidity in 2.76~TeV central Pb+Pb collisions, with the presence of a single jet initiated at position $(x=0~\textrm{fm},\,y=6.54~\textrm{fm})$ with momentum $(p_\mathrm{T}=150~\textrm{GeV},\, \phi_p=5\pi/8)$, before (top) and after the background subtraction (bottom). The figures are from Ref.~\cite{Tachibana:2017syd}.} \label{fig:Tachibana-jime-simulation} \end{figure} When the local energy density deposition from jets is comparable to that of the unperturbed medium, linearized hydrodynamic equations [Eq.~(\ref{eq:linearHydro})] are no longer valid. Without these approximations, full solutions to hydrodynamic equations with source terms [Eq.~(\ref{eq:hydroSource})] were provided in Ref.~\cite{Floerchinger:2014yqa} using a (1+1)-D hydrodynamic model, Ref.~\cite{Chaudhuri:2005vc} using a (2+1)-D hydrodynamic model, and Refs.~\cite{Betz:2010qh,Tachibana:2014lja,Tachibana:2020mtb} using full (3+1)-D hydrodynamic models. 
Within such an improved framework, it is found in Ref.~\cite{Tachibana:2014lja} that while the jet-induced Mach cone is easily distorted in the transverse plane, its pattern remains in the longitudinal direction in the reaction plane due to the expanding $(\tau,\eta)$ coordinates. To obtain the net effects of jet-induced medium excitation, one can subtract the energy density profile from hydrodynamic calculations without the presence of the source term from that with the source, as discussed in Ref.~\cite{Tachibana:2017syd}. Figure~\ref{fig:Tachibana-jime-simulation} shows a snapshot of the QGP evolution that is being disturbed by a single propagating jet before (top) and after (bottom) subtraction of the background medium without energy deposition from jets. One can clearly see a Mach cone induced by the energy and momentum deposition from the jet, as well as a region of energy depletion right behind the wave front, known as the diffusion wake. During the propagation, the wave front is distorted by the radial flow of the medium. Since the jet travels through an off-central path, the Mach cone is deformed asymmetrically in this event. Unfortunately, these typical conic structures of the jet-induced Mach cone are still hard to identify in current experimental measurements of the final hadron observables, as we will discuss in the next few sections. \subsection{Coupled parton transport and hydrodynamics} \label{subsec:concurrent} The parton transport and hydrodynamic descriptions of jet-induced medium response, as presented in Secs.~\ref{subsec:recoil} and \ref{subsec:hydroResponse}, can be considered as two different approaches to modeling how the lost energy-momentum from jets evolves inside the medium, in the limits of a weakly and a strongly coupled system, respectively. The real scenario may be something in between. Furthermore, neither of them considers how the modified medium in turn affects the subsequent evolution of jets.
This could be important when there are multiple jets within one event where one jet travels through the region disturbed by another jet, or when a slowly moving heavy quark may interact with the medium excitation induced by itself. To take these effects into account and bridge the parton transport and hydrodynamic approach, one can develop a coupled approach with concurrent evolution of jets and the QGP medium. The coupled \textsc{Lbt} and hydrodynamics (\textsc{CoLbt-Hydro}) model~\cite{Chen:2017zte,Chen:2018azc} combines the \textsc{Lbt} model for jet transport and the (3+1)-D viscous hydrodynamic model \textsc{CLVisc}~\cite{Pang:2012he,Pang:2014ipa} for the QGP expansion, and realizes the first concurrent simulation of transport of energetic partons and evolution of the thermal medium. The real-time coupling between parton transport and hydrodynamic evolution of the medium is through a source term that is updated at each time-step of the \textsc{Lbt} and \textsc{CLVisc} simulation. To achieve this, the Boltzmann equation in \textsc{Lbt} is re-written in the Milne coordinates as in the \textsc{CLVisc} hydrodynamic model. At each step of the proper time $(\tau,\tau+\Delta \tau)$, \textsc{CLVisc} provides the temperature and flow velocity information of the local fluid cell for the simulations of the elastic and inelastic scatterings of hard partons, including both jet shower and recoil partons, with the background medium. 
Among the final-state partons during this step of the proper time, jet and recoil partons below a given energy scale ($p\cdot u < p^0_\mathrm{cut}$), together with all ``negative" partons in the back-reaction, are transferred from the \textsc{Lbt} module to \textsc{CLVisc} in a source term constructed as \begin{equation} \begin{split} J^\nu = &\sum_i \frac{\theta(p^0_\mathrm{cut}-p_i \cdot u)dp_i^\nu/d\tau}{\tau (2\pi)^{3/2}\sigma_r^2\sigma_{\eta_s}}\\ &\times \exp\left[-\frac{({\vec x}_\perp-{\vec x}_{\perp i})^2}{2\sigma_r^2}-\frac{(\eta_s-\eta_{si})^2}{2\sigma_{\eta_s}^2}\right]. \end{split} \end{equation} Here, an instantaneous thermalization of low-energy partons ($p\cdot u < p^0_\mathrm{cut}$) in the source term for the hydrodynamic evolution of the medium is assumed, and the energy-momentum deposition from each parton is smeared in the coordinate space with Gaussian widths $\sigma_r=0.6$~fm and $\sigma_{\eta_s}=0.6$. This source term enters the \textsc{CLVisc} hydrodynamic evolution [Eq.~(\ref{eq:hydroSource})] for the next step of the proper time. Iteration of this algorithm provides a simultaneous evolution of jets, medium and their interactions. As discussed in Sec.~\ref{subsec:hydro}, in order to break the bottleneck of the computing speed for concurrent simulation of parton transport and hydrodynamic evolution, \textsc{CoLbt-Hydro} parallelizes the hydrodynamic calculations on GPUs, including both the Kurganov-Tadmor algorithm for the space-time evolution of the QGP and the Cooper-Frye particlization, using the Open Computing Language (OpenCL). Benefiting from the large number of computing elements on GPUs and the Single Instruction Multiple Data (SIMD) vector operations on modern CPUs, \textsc{CLVisc} brings the fastest (3+1)D hydrodynamics calculations so far and makes event-by-event \textsc{CoLbt-Hydro} simulations possible. 
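The construction of the source term above can be sketched as follows; the parton record fields are hypothetical names chosen for illustration, and consistent GeV/fm units are assumed:

```python
import math

def source_term(partons, x_perp, eta_s, tau,
                sigma_r=0.6, sigma_eta=0.6, p0_cut=2.0):
    """Gaussian-smeared energy-momentum deposition J^nu at a fluid
    cell (x_perp, eta_s) and proper time tau, summed over soft
    partons (p.u < p0_cut). Each parton is a dict with hypothetical
    keys 'dp' (four-vector dp^nu/dtau), 'p_dot_u', 'x_perp', 'eta_s'."""
    norm = tau * (2.0 * math.pi) ** 1.5 * sigma_r**2 * sigma_eta
    J = [0.0, 0.0, 0.0, 0.0]
    for p in partons:
        if p['p_dot_u'] >= p0_cut:   # hard partons stay in the transport module
            continue
        dr2 = sum((a - b) ** 2 for a, b in zip(x_perp, p['x_perp']))
        w = math.exp(-dr2 / (2 * sigma_r**2)
                     - (eta_s - p['eta_s'])**2 / (2 * sigma_eta**2)) / norm
        for nu in range(4):
            J[nu] += p['dp'][nu] * w
    return J
```

In a full simulation this quantity would be accumulated on the hydrodynamic grid at each proper-time step; here a single call evaluates it at one cell.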
\begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Chen_profile} \caption{(Color online) The energy density profiles of the QGP and the $\gamma$-jet evolution in the transverse plane at $\eta_s=0$, $\tau = 2.0$ (a, b) and 4.8 fm/$c$ (c, d) in 0-12\% Au+Au collisions at $\sqrt{s}=200$ GeV without (left) and with (right) background subtraction. Straight and wavy lines represent partons' and $\gamma$'s momenta, respectively. The figure is from Ref.~\cite{Chen:2017zte}.} \label{fig:Chen-jime-simulation} \end{figure} Figure~\ref{fig:Chen-jime-simulation} shows two snapshots, one at $\tau=2.0$~fm/$c$ (upper) and the other at $\tau=4.8$~fm/$c$ (lower), of a $\gamma$-triggered jet event in central Au+Au collisions at $\sqrt{s}=200$~GeV from \textsc{CoLbt-Hydro} simulations. The $\gamma$-jet is produced at the center of the collision and the photon propagates along the $+y$ direction, as indicated by the wavy lines. The left column displays the energy density profiles of the whole collision event, while the right shows the energy density after subtracting the background from the same hydrodynamic event without the presence of the $\gamma$-jet. In Fig.~\ref{fig:Chen-jime-simulation}, one can clearly observe both the medium modification of jet partons, including their splittings and energy loss as shown by the straight lines, and the jet-induced modification of the medium in the form of the Mach-cone-like wave fronts of energy deposition, followed by the energy depletion of the diffusion wake behind. As discussed in Sec.~\ref{subsec:transport}, full Boltzmann transport models have been used to simulate the QGP evolution and propagation of energetic partons through the medium despite the controversy over whether one may use the pQCD-driven transport to describe the strongly coupled QGP matter. Nevertheless, they provide an alternative method that naturally simulates jet and medium evolution concurrently.
For instance, by using the \textsc{Ampt} model, Ref.~\cite{Ma:2010dv} investigates how to isolate the effects of jet-induced Mach cone and expanding hot spots on the di-hadron vs. $\gamma$-hadron correlation; Ref.~\cite{Gao:2016ldo} studies how the lost energy from di-jets is redistributed in the lower $p_\mathrm{T}$ hadrons. Similar to Fig.~\ref{fig:linearResponse-Neufeld}, the viscous effects on the Mach cone structures have been studied within the \textsc{Bamps} model~\cite{Bouras:2012mh,Bouras:2014rea}, where effects of different energy loss rates have also been discussed. Moreover, while most studies to date assume instantaneous thermalization of the energy-momentum deposition from jets to the QGP, the detailed thermalization process has been explored within a Boltzmann-based parton cascade model in Ref.~\cite{Iancu:2015uja}. \section{Hadron spectra} \label{sec:hadron_spectra} \subsection{Single inclusive hadrons} \label{subsec:singleHadron} Nuclear modification of single inclusive hadrons is the most direct measure of the in-medium energy loss of energetic partons. The most frequently used observable is the nuclear modification factor first defined in Ref.~\cite{Wang:1998bha} for jet quenching as, \begin{equation} \label{eq:defRAA} R_\mathrm{AA}(p_\mathrm{T},y,\phi)\equiv\frac{1}{\langle N_\mathrm{coll} \rangle}\frac{\;\;\frac{dN_\mathrm{AA}}{dp_\mathrm{T}dyd\phi}\;\;}{\;\;\frac{dN_\mathrm{pp}}{dp_\mathrm{T}dyd\phi}\;\;}, \end{equation} where $\langle N_\mathrm{coll}\rangle$ is the average number of nucleon-nucleon collisions per nucleus-nucleus collision for a given range of centrality, which can be evaluated using the Glauber model. Note that while correctly evaluating $N_\mathrm{coll}$ is important to extract $R_\mathrm{AA}$ from experimental data, it is not necessary in theoretical calculations where the QGP effects are usually implemented on the p+p spectra that have been modified with cold nuclear matter effects. 
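Applied bin-by-bin to measured or computed $p_\mathrm{T}$ spectra, Eq.~(\ref{eq:defRAA}) amounts to a simple scaled ratio; a minimal sketch:

```python
def r_aa(dN_AA, dN_pp, n_coll):
    """Nuclear modification factor, Eq. (defRAA), applied bin-by-bin:
    R_AA = (1/<N_coll>) * (dN_AA/dpT) / (dN_pp/dpT).
    dN_AA and dN_pp are per-bin yields (equal-length sequences),
    n_coll the average number of binary collisions for the centrality."""
    return [na / (n_coll * npp) for na, npp in zip(dN_AA, dN_pp)]
```

Values below unity signal suppression of the A+A yield relative to the binary-scaled p+p baseline.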
The suppression or nuclear modification factor $R_\mathrm{AA}$ quantifies the variation of hadron spectra in A+A vs. p+p collisions, and has been investigated in nearly all theoretical studies on jets~\cite{Bass:2008rv,Wang:2001ifa,Vitev:2002pf,Salgado:2003gb,Dainese:2004te,Vitev:2004bh,Armesto:2005iq,Wicks:2005gt,Armesto:2009zi,Marquet:2009eq,Chen:2010te,Renk:2010mf,Renk:2011gj,Horowitz:2011gd,Chen:2011vt,Cao:2017hhk,Cao:2017qpx}. In general, the hadron spectra produced in high-energy nuclear collisions can be written as \begin{align} \label{eq:xsectionFactor} d\sigma_{pp(AA)\to hX} &= \sum_{abc} \int dx_a \int dx_b \int dz_c f_{a}(x_a) f_{b}(x_b) \nonumber\\ & \times d\hat{\sigma}_{ab\to c} D^\mathrm{vac(med)}_{h/c}(z_c), \end{align} where $\sum_{abc}$ sums over all possible parton flavors, $f_a(x_a)$ and $f_b(x_b)$ are parton distribution functions (PDFs) per nucleon for partons $a$ and $b$ from the two colliding protons (nuclei), $\hat{\sigma}_{ab\to c}$ is the partonic scattering cross section, and $D_{h/c}(z_c)$ is the parton-to-hadron fragmentation function (FF). The PDFs can be taken from CTEQ parameterizations~\cite{Pumplin:2002vw} for p+p collisions, but need to be convoluted with cold nuclear matter effects for A+A collisions, e.g. as implemented in the EPS parameterizations~\cite{Eskola:2009uj} for the nuclear modification of the PDFs. The vacuum (vac) FF is used in Eq.~(\ref{eq:xsectionFactor}) for p+p collisions, while the medium-modified (med) FF should be applied for A+A collisions, as discussed in detail in Sec.~\ref{sec:theory}. 
Neglecting hadron production from the hadronization of radiated gluons and recoil partons from the medium response, which contribute mostly to soft hadrons, the medium-modified FF can be approximated by shifting the momentum of the fragmenting parton~\cite{Wang:1996yh,Salgado:2003gb}, \begin{equation} \label{eq:medFF} D_{h/c}^\mathrm{med}(z_c)=\int d\epsilon \frac{P(\epsilon)}{1-\epsilon}D_{h/c}^\mathrm{vac}\left(\frac{z_c}{1-\epsilon}\right), \end{equation} where $\epsilon=\Delta E_c / E_c$ is the fractional energy loss of parton $c$ inside the medium, and $P(\epsilon)$ is the probability distribution of the energy loss $\epsilon$. In this approximation, one assumes that a high-energy parton in A+A collisions first loses energy inside the thermal medium, and then fragments into hadrons with its remaining fractional energy $1-\epsilon$ outside the medium (in vacuum). \begin{figure}[tbp] \centering \includegraphics[width=0.23\textwidth]{JET-CUJET-RHIC} \includegraphics[width=0.23\textwidth]{JET-CUJET-LHC} \includegraphics[width=0.23\textwidth]{JET-HTBW-RHIC} \includegraphics[width=0.23\textwidth]{JET-HTBW-LHC} \includegraphics[width=0.23\textwidth]{JET-HTM-RHIC} \includegraphics[width=0.23\textwidth]{JET-HTM-LHC} \includegraphics[width=0.23\textwidth]{JET-AMY-RHIC} \includegraphics[width=0.23\textwidth]{JET-AMY-LHC} \includegraphics[width=0.23\textwidth]{JET-MARTINI-RHIC} \includegraphics[width=0.23\textwidth]{JET-MARTINI-LHC} \caption{(Color online) Experimental data on the nuclear modification factor $R_\mathrm{AA}$ for single inclusive hadrons at RHIC (left columns) and the LHC (right columns), compared to different model calculations within the JET Collaboration -- (from top to bottom) CUJET-GLV, HT-BW, HT-M, McGill-AMY and MARTINI-AMY. 
The figures are from Ref.~\cite{Burke:2013yra}.} \label{fig:JET-RAA-all} \end{figure} In most semi-analytical calculations of the modified hadron spectra, one directly convolutes the production cross section of energetic partons ($\hat{\sigma}_{ab\to c}$) with their medium-modified FFs as shown in Eq.~(\ref{eq:xsectionFactor}). In this case, one can also include contributions from hadronization of radiated gluons by adding the gluon FF to the right side of Eq.~(\ref{eq:medFF}) with the average energy fraction $\epsilon_g=\epsilon/n$ for each of the $n$ radiated gluons~\cite{Wang:1996yh}. A more careful approach to the fragmentation of the radiated gluons is to convolute the vacuum gluon FF with the medium-induced splitting function as discussed in Sec.~\ref{sec:theory} within the high-twist approach~\cite{Guo:2000nz,Wang:2001ifa}. In most Monte-Carlo frameworks, one first generates energetic partons using $\hat{\sigma}_{ab\to c}$ from the initial hard scatterings, then simulates the elastic and inelastic energy loss of these partons through the hot nuclear matter, and in the end converts all partons into hadrons using vacuum FFs outside the medium. One can include both radiated gluons and medium recoil partons in the final hadronization in such Monte-Carlo calculations. The suppression factor $R_\mathrm{AA}$ of single inclusive hadrons helps to constrain the interaction strength between jet partons and the medium. As shown in Fig.~\ref{fig:JET-RAA-all}, it decreases with the increase of the jet transport parameter $\hat{q}$ or the strong coupling constant $\alpha_\mathrm{s}$. Through a systematic comparison between various model calculations and experimental data at RHIC and the LHC, the JET Collaboration has obtained the constraint on $\hat{q}$ (of quarks) as $\hat{q}/T^3\approx 4.6\pm 1.2$ at RHIC (in the center of central Au+Au collisions at 200~GeV) and $3.7\pm 1.4$ at the LHC (in the center of central Pb+Pb collisions at 2.76~TeV).
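Numerically, the convolution in Eq.~(\ref{eq:medFF}) above is a one-dimensional integral over the fractional energy loss. The sketch below uses a toy vacuum FF and a hypothetical exponential $P(\epsilon)$, both invented purely for illustration:

```python
import math

def d_vac(z):
    """Toy vacuum fragmentation function (power law, illustrative only)."""
    return (1.0 - z) ** 3 / z if 0.0 < z < 1.0 else 0.0

def p_eps(eps, mean=0.2):
    """Hypothetical energy-loss distribution P(eps): exponential with a
    made-up mean, normalized on 0 < eps < 1."""
    norm = 1.0 - math.exp(-1.0 / mean)
    return math.exp(-eps / mean) / (mean * norm)

def d_med(z, n=2000):
    """Medium-modified FF via the convolution of Eq. (medFF),
    evaluated with a midpoint rule in eps."""
    de = 1.0 / n
    total = 0.0
    for i in range(n):
        eps = (i + 0.5) * de  # midpoint avoids the eps = 0, 1 endpoints
        total += de * p_eps(eps) / (1.0 - eps) * d_vac(z / (1.0 - eps))
    return total
```

With these inputs the medium-modified FF is suppressed at every $z$, since the parton fragments with only its remaining energy fraction $1-\epsilon$.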
Details can be found in Fig.~\ref{fig:qhat} and Ref.~\cite{Burke:2013yra}. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{JETSCAPE-qhat} \caption{(Color online) The temperature scaled quark transport coefficient $\hat{q}/T^3$ as a function of the medium temperature, extracted by the JETSCAPE Collaboration using the \textsc{Matter}, \textsc{Lbt} and \textsc{Matter+Lbt} models, compared to the previous JET Collaboration result. The figure is from Ref.~\cite{Soltz:2019aea}.} \label{fig:JETSCAPE-qhat} \end{figure} This work of the JET Collaboration has recently been further improved by the JETSCAPE Collaboration~\cite{Soltz:2019aea} in several directions: (1) a single parameter (constant $\hat{q}$ or $\alpha_\mathrm{s}$) fit to a single data set has been extended to a simultaneous fit in a multi-dimensional parameter space to multiple data sets at RHIC and the LHC in order to obtain $\hat{q}$ as a continuous function of the parton energy and the medium temperature; (2) instead of using a single energy loss formalism through the entire evolution of jets in the medium, a multi-scale approach (\textsc{Matter+Lbt}) as discussed in Sec.~\ref{subsec:multi-scale} has been employed for the first time to describe the nuclear modification of jets; (3) machine learning and Bayesian analysis methods have been introduced to replace the traditional $\chi^2$ fits, which significantly increase the efficiency of calibrating sophisticated model calculations over a wide range of the parameter space against a vast amount of data.
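The calibration strategy in item (3) can be illustrated, in a deliberately oversimplified form, by a one-parameter grid posterior with a Gaussian ($\chi^2$-based) likelihood; the model, the pseudo-data and the error size below are all invented, and a realistic analysis would use emulators and Markov-chain sampling in many dimensions:

```python
import math

def log_likelihood(model, data, sigma):
    """Gaussian likelihood, ln L = -chi^2/2, for uncorrelated errors."""
    return -0.5 * sum(((m - d) / sigma) ** 2 for m, d in zip(model, data))

def toy_raa(qhat_norm, pt_bins):
    """Hypothetical one-parameter model: stronger quenching for larger
    qhat_norm (invented functional form)."""
    return [math.exp(-qhat_norm / pt) for pt in pt_bins]

pt_bins = [10.0, 20.0, 40.0, 80.0]
truth = 8.0
data = toy_raa(truth, pt_bins)           # noiseless pseudo-data
grid = [0.1 * i for i in range(1, 200)]  # flat prior on (0, 20)
logl = [log_likelihood(toy_raa(q, pt_bins), data, 0.05) for q in grid]
peak = max(logl)
weights = [math.exp(l - peak) for l in logl]
norm = sum(weights)
posterior = [w / norm for w in weights]
best = grid[max(range(len(grid)), key=lambda i: posterior[i])]
```

The posterior maximum recovers the input parameter; in the real calibration the width of the posterior, not just its peak, provides the quoted credible regions.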
Shown in Fig.~\ref{fig:JETSCAPE-qhat} is the temperature dependence of $\hat{q}/T^3$ obtained within this new framework, with a 4-parameter $(A, B, C, D)$ ansatz, \begin{align} \label{eq:JETSCAPE-qhat} \frac{\hat{q}}{T^3}&=C_2\frac{42\zeta(3)}{\pi}\left(\frac{4\pi}{9}\right)^2\\ \times&\left\{\frac{A\left[\ln\left(\frac{E}{\Lambda}\right)-\ln(B)\right]}{\left[\ln\left(\frac{E}{\Lambda}\right)\right]^2}+\frac{C\left[\ln\left(\frac{E}{T}\right)-\ln(D)\right]}{\left[\ln\left(\frac{ET}{\Lambda^2}\right)\right]^2}\right\}.\nonumber \end{align} The second term in the curly bracket of the above ansatz takes the form of Eq.~(\ref{eq:qhatAnalytic}) for $\hat{q}$'s dependence on the jet parton energy ($E$) and the medium temperature ($T$). A running coupling $\alpha_\mathrm{s}(Q^2)$ at the leading order is applied here with the scales set to $Q^2=ET$ and $\Lambda = 0.2$~GeV. Possible dependence of the constant factors on kinematic cuts in Eq.~(\ref{eq:qhatAnalytic}) is absorbed in the parameter $D$. This form of $\hat{q}$, based on perturbative scatterings between jet partons and medium partons, is usually applied in transport models for low virtuality jet partons. When their virtuality is much larger than the medium temperature scale, the scaled jet transport parameter $\hat{q}/T^3$ may only depend on the jet energy scale, giving rise to the first term of the ansatz in the curly bracket with a parameter $B$. Parameters $A$ and $C$ weigh the contributions from these two terms. This 4-parameter ansatz is used in the \textsc{Matter} and \textsc{Lbt} models, respectively, to describe experimental data. In the multi-scale \textsc{Matter+Lbt} model, the separation scale $Q_0$ between \textsc{Matter} and \textsc{Lbt} is introduced as the 5$^\mathrm{th}$ parameter.
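For reference, Eq.~(\ref{eq:JETSCAPE-qhat}) can be transcribed directly into a short function. The parameter values used below are placeholders, not the fitted JETSCAPE results, and the color factor $C_2$ is set to the quark Casimir $4/3$ by assumption:

```python
import math

ZETA3 = 1.2020569031595943  # Riemann zeta(3)

def qhat_over_t3(e_jet, temp, a, b, c, d, lam=0.2, c2=4.0 / 3.0):
    """Direct transcription of the 4-parameter ansatz, Eq. (JETSCAPE-qhat).
    e_jet, temp and lam are in GeV; c2 = 4/3 is assumed to be the quark
    Casimir factor.  The (a, b, c, d) values are placeholders."""
    prefac = c2 * (42.0 * ZETA3 / math.pi) * (4.0 * math.pi / 9.0) ** 2
    term1 = a * (math.log(e_jet / lam) - math.log(b)) / math.log(e_jet / lam) ** 2
    term2 = (c * (math.log(e_jet / temp) - math.log(d))
             / math.log(e_jet * temp / lam ** 2) ** 2)
    return prefac * (term1 + term2)
```

For fixed $(A,B,C,D)$ with $B=D=1$, the logarithms in the denominators make $\hat{q}/T^3$ fall slowly with the parton energy, as in the leading-order running-coupling picture described above.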
These setups are applied to calibrate the model calculations against the experimental data on $R_\mathrm{AA}$ for single inclusive hadrons in Au+Au collisions at 200~GeV and Pb+Pb collisions at 2.76 and 5.02~TeV simultaneously. Details can be found in Ref.~\cite{Soltz:2019aea} where different parameterizations of $\hat{q}$ are also discussed and compared. As shown in Fig.~\ref{fig:JETSCAPE-qhat}, when \textsc{Matter} and \textsc{Lbt} are applied separately, the 90\% credible regions of the jet transport parameter $\hat{q}$ extracted from model-to-data comparisons are still consistent with the previous JET Collaboration result~\cite{Burke:2013yra}. In contrast, combining \textsc{Matter} and \textsc{Lbt} leads to larger jet energy loss inside the QGP than using a single model, and thus yields a smaller value of the extracted $\hat{q}$. In addition, the separation scale $Q_0$ between the high-virtuality parton shower and the low-virtuality transport stage has been found to be around $2\sim3$~GeV, which reflects the virtuality scale of the QGP medium produced in heavy-ion collisions. The hadron suppression factor $R_\mathrm{AA}(p_\mathrm{T},y)$ quantifies the energy loss of hard partons inside the QGP averaged over the azimuthal angle in the transverse plane. A more differential observable is the elliptic flow coefficient $v_2$, which measures the energy loss anisotropy along different paths through the QGP \cite{Wang:2000fq,Gyulassy:2000gk}. It is defined as the second-order Fourier coefficient of the azimuthal angle distribution of particles in momentum space \begin{equation} \label{eq:def-v2} v_2(p_\mathrm{T},y)\equiv\frac{\int d\phi \cos(2\phi) \frac{dN}{dp_\mathrm{T} dy d\phi}}{\int d\phi \frac{dN}{dp_\mathrm{T} dy d\phi}}.
\end{equation} This $v_2$ coefficient can be analyzed for an ensemble of particles as \begin{equation} \label{eq:def-v2-simple} v_2=\left\langle \cos(2\phi) \right\rangle=\left\langle\frac{p_x^2-p_y^2}{p_x^2+p_y^2}\right\rangle, \end{equation} where $\langle \ldots \rangle$ denotes the ensemble average. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Cao-v2-Ch-1} \includegraphics[width=0.4\textwidth]{Cao-v2-Ch-2} \caption{(Color online) Effects of (a) different initial conditions and (b) initial state fluctuations on the hadron $v_2$. The figures are from Ref.~\cite{Cao:2017umt}.} \label{fig:Cao-hadron-v2} \end{figure} The elliptic flow coefficient is expected to place more stringent constraints on the path length dependence of jet energy loss. However, few models are able to provide satisfactory descriptions of both the hadron suppression factor $R_\mathrm{AA}$ and the elliptic flow coefficient $v_2$ simultaneously, as observed in experiments. As shown in Fig.~\ref{fig:Cao-hadron-v2}, while results from model calculations are consistent with experimental data at high $p_\mathrm{T}$, sizable deviations persist for $p_\mathrm{T}$ around and below 20~GeV. Considerable efforts have been devoted to solving this $v_2$ puzzle. For instance, it has been suggested~\cite{Noronha-Hostler:2016eow} that using a more geometrically anisotropic initial condition of QGP fireballs and including event-by-event fluctuations of the initial profiles can give rise to sufficiently large $v_2$ of energetic hadrons. However, as shown in Fig.~\ref{fig:Cao-hadron-v2} (a) and (b) respectively, both effects turn out to be small when coupling a realistic jet energy loss model (\textsc{Lbt}) to a (3+1)-D viscous hydrodynamic model (\textsc{CLVisc})~\cite{Cao:2017umt}.
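For completeness, the ensemble average in Eq.~(\ref{eq:def-v2-simple}) is straightforward to evaluate for any particle list; the sketch below checks it on a deterministic toy azimuthal distribution with a known input $v_2$ (all numbers illustrative):

```python
import math

def v2_from_momenta(momenta):
    """Ensemble average of Eq. (def-v2-simple): <(px^2-py^2)/(px^2+py^2)>."""
    vals = [(px * px - py * py) / (px * px + py * py) for px, py in momenta]
    return sum(vals) / len(vals)

# Deterministic check: a fine azimuthal grid weighted by
# dN/dphi ~ 1 + 2 v2_true cos(2 phi) must return v2_true.
v2_true, n = 0.1, 20000
phis = [2.0 * math.pi * (i + 0.5) / n for i in range(n)]
weights = [1.0 + 2.0 * v2_true * math.cos(2.0 * p) for p in phis]
v2_est = (sum(w * math.cos(2.0 * p) for w, p in zip(weights, phis))
          / sum(weights))
```

In event-by-event analyses the same average is taken with respect to the event-plane or via multi-particle correlations rather than the fixed axes used here.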
Other solutions, such as additional enhancement of the jet-medium interaction strength ($\alpha_\mathrm{s}$ or $\hat{q}$) near $T_\mathrm{c}$~\cite{Xu:2014tda,Das:2015ana}, or delaying the starting time of jet-medium interaction~\cite{Andres:2019eus}, have been proposed to increase the $v_2$ of hard probes while keeping the suppression factor $R_\mathrm{AA}$ fixed. Both ideas suggest that with a fixed amount of total energy loss, weighting more jet-medium interaction towards a later evolution time when the QGP collective flow is more anisotropic can effectively enhance the jet $v_2$. However, little agreement has been reached yet on the detailed mechanisms that shift more energy loss to a later time. Most of the current calculations do not take into account medium modification of the hadronization mechanism such as parton recombination, which could influence the flavor dependence of the hadron suppression factor and $v_2$ at low and intermediate $p_\mathrm{T}$~\cite{Greco:2003xt,Fries:2003rf,Fries:2003vb}. Such mechanisms, however, have less impact on the full jet spectra, which can be described well by many transport models as we will discuss later in this review. Recent observations of little or no suppression~\cite{Adam:2016ich} but large $v_2$~\cite{Acharya:2017tfn,Sirunyan:2018toe} of hard probes in small colliding (p+Pb) systems have urged us to revisit the parton recombination and initial state effects on jets, because hot nuclear matter effects require a sufficient amount of suppression to accompany a large $v_2$~\cite{Xu:2015iha,Du:2018wsj}. By contrast, this puzzle can be solved within a model based on the dilute-dense factorization in the Color Glass Condensate (CGC) framework~\cite{Zhang:2019dth}. It starts with a gluon and a quark from the projectile proton: the quark serves as the reference while the energetic gluon fragments into the final state hadron under consideration.
The interaction between the incoming partons and the dense gluons in the target nucleus generates correlations between the energetic gluon and the reference quark, leading to the finite $v_2$ of high-energy hadrons. So far, this framework has provided satisfactory descriptions of the $v_2$ of open heavy flavor meson and heavy quarkonium in p+Pb collisions. Further study in this direction may also be essential for solving the puzzle of hard probe $v_2$ in large nucleus-nucleus collisions. \subsection{Heavy flavor hadrons} \label{subsec:heavyHadron} Heavy quarks (charm and beauty quarks) are a special category of hard probe particles. The large mass of heavy quarks suppresses their thermal emission from the QGP, thus most heavy quarks are produced in the primordial hard collisions and then traverse and interact with the QGP with their flavor number conserved. Therefore, they serve as a clean probe of the QGP properties. At low $p_\mathrm{T}$, heavy quarks provide a unique opportunity to study the non-perturbative interaction between hard partons and the thermal medium; at intermediate $p_\mathrm{T}$, heavy quark observables help refine our understanding of the hadronization process from partons to color neutral hadrons; at high $p_\mathrm{T}$, heavy quarks allow us to study the mass and flavor hierarchy of parton energy loss inside the QGP. A more detailed review specializing in heavy quarks, especially their low $p_\mathrm{T}$ dynamics, can be found in Refs.~\cite{Dong:2019byy,Rapp:2018qla,Cao:2018ews,Xu:2018gux}. In this review, we concentrate on the flavor hierarchy of energy loss and heavy quark hadronization which are closely related to high-energy jets. Due to their different masses and color factors, one would expect the energy losses of beauty, charm, light quarks and gluons have the flavor hierarchy $\Delta E_b < \Delta E_c < \Delta E_q < \Delta E_g$. 
Therefore, the suppression factor $R_\mathrm{AA}$ of $B$ and $D$ mesons and light flavor hadrons should have the inverted hierarchy $R_\mathrm{AA}^B > R_\mathrm{AA}^D > R_\mathrm{AA}^h$. However, the LHC data~\cite{Khachatryan:2016odn,Sirunyan:2017xss,Sirunyan:2017oug,Sirunyan:2018ktu} reveal comparable $R_\mathrm{AA}$'s for $D$ mesons, $B$ mesons and charged hadrons above $p_\mathrm{T}\sim 8$~GeV. Over the past decade, many theoretical efforts have been devoted to investigating this flavor hierarchy of hadron $R_\mathrm{AA}$~\cite{Qin:2009gw,Buzzatti:2011vt,Djordjevic:2013pba,Cao:2017hhk,Xing:2019xae}. \begin{figure}[tbp] \centering \includegraphics[width=0.385\textwidth]{Xing-RAA-h-PbPb5020-0-10} \includegraphics[width=0.4\textwidth]{Xing-RAA-D0-PbPb5020-0-10} \caption{(Color online) The nuclear modification factors of (a) charged hadrons and (b) $D$ mesons in central 5.02~TeV Pb+Pb collisions, compared with contributions from quark and gluon fragmentations. The figures are from Ref.~\cite{Xing:2019xae}.} \label{fig:Xing-RAA-separate} \end{figure} A full understanding of heavy and light flavor parton energy loss requires a Monte-Carlo framework for realistic jet-medium interactions that treats different species of partons on the same footing. This has been realized in the \textsc{Lbt} model~\cite{Cao:2016gvr,Cao:2017hhk} in which elastic and inelastic energy loss of heavy and light flavor partons are simultaneously described using the Boltzmann transport through a hydrodynamic medium. Within this framework, a recent study~\cite{Xing:2019xae} further shows that the gluon splitting process in the next-to-leading-order (NLO) contribution to parton production is crucial for a simultaneous description of the $R_\mathrm{AA}$'s of different hadron species, which is usually ignored in heavy quark studies. While gluon fragmentation dominates the (light flavor) charged hadron production up to $p_\mathrm{T} \sim 50$~GeV, quark fragmentation starts to dominate beyond that. 
By contrast, gluon fragmentation contributes over 40\% of the $D$ meson yield up to 100~GeV. In Fig.~\ref{fig:Xing-RAA-separate}, the contributions from quark and gluon fragmentations to $R_\mathrm{AA}$ for charged hadrons and $D$ mesons are compared in detail. One observes that the $R_\mathrm{AA}$'s for gluon initiated hadrons and $D$ mesons are much smaller than those initiated by quarks, and the $R_\mathrm{AA}$ for light quark initiated hadrons is slightly smaller than that for charm quark initiated $D$ mesons. This supports the flavor hierarchy of parton energy loss -- $\Delta E_c < \Delta E_q < \Delta E_g$ -- as expected. On the other hand, we see that the $R_\mathrm{AA}$ for light hadrons originating from gluon fragmentation is larger than that of $D$ mesons from gluons due to different fragmentation functions. After combining contributions from both quarks and gluons, one obtains similar $R_\mathrm{AA}$ for both charged hadrons and $D$ mesons above $p_\mathrm{T} \sim 8$~GeV. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Xing-RAA-all-PbPb5020-0-100} \caption{(Color online) The nuclear modification factors of charged hadrons, direct $D$ mesons, $B$ mesons and $B$-decay $D$ mesons in minimum bias 5.02~TeV Pb+Pb collisions. The figure is from Ref.~\cite{Xing:2019xae}.} \label{fig:Xing-RAA-all} \end{figure} The above findings can be further verified by applying the same calculation to the $B$ meson $R_\mathrm{AA}$. As shown in Fig.~\ref{fig:Xing-RAA-all}, within this perturbative framework that combines the NLO production and fragmentation mechanism with the \textsc{Lbt} simulation of parton energy loss through the QGP, one can naturally obtain a simultaneous description of the $R_\mathrm{AA}$'s of charged hadrons, direct $D$ mesons, $B$ mesons and $D$ mesons from $B$-decay over a wide $p_\mathrm{T}$ region.
It also predicts that at intermediate $p_\mathrm{T}$, one should observe a larger $R_\mathrm{AA}$ of $B$ mesons compared to $D$ mesons and charged hadrons. However, this separation disappears above $\sim 40$~GeV. This is expected to be tested by future precision measurements, completing our understanding of the flavor hierarchy of jet quenching inside the QGP. While the clean perturbative framework is sufficient to describe the nuclear modification of hadrons at high $p_\mathrm{T}$, non-perturbative effects become important in the low $p_\mathrm{T}$ region. The difference between jet spectra at partonic and hadronic levels is non-negligible; it could be as large as the difference between the LO and NLO calculations in some regions of $p_\mathrm{T}$~\cite{Wobisch:1998wt,Abelev:2013fn,Kumar:2019bvr}. The hadronization mechanism is a challenging topic because of its non-perturbative nature. Nevertheless, heavy quarks provide a good opportunity to investigate how partons form hadrons at different momentum scales due to the feasibility of tracking their flavor identity during their evolution. While high $p_\mathrm{T}$ heavy quarks tend to fragment directly into hadrons, it is more probable for low $p_\mathrm{T}$ heavy quarks to combine with thermal partons from the QGP to form hadrons. The latter process is known as coalescence, or recombination, and was found in Ref.~\cite{Oh:2009zj} to significantly affect the charmed hadron chemistry (baryon-to-meson ratio) in relativistic heavy-ion collisions. This proposal has been qualitatively confirmed by the recent RHIC and LHC data on the $\Lambda_c/D^0$ ratio~\cite{Adam:2019hpq,Acharya:2018ckj}. Meanwhile, this coalescence model has also been quantitatively improved over the past few years in Refs.~\cite{Plumari:2017ntm,Cho:2019lxb,Cao:2019iqs}.
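The qualitative picture above, fragmentation dominating at high $p_\mathrm{T}$ and coalescence at low $p_\mathrm{T}$, can be sketched with a hypothetical momentum-dependent coalescence probability. The Gaussian fall-off and its width below are invented; the actual probability in the coalescence models is computed from wavefunction (Wigner-function) overlaps:

```python
import math

def p_coal(p, width=2.0):
    """Hypothetical coalescence probability for a charm quark with
    momentum p (GeV): unity at p = 0 and falling off with a made-up
    Gaussian width, so that high-p quarks predominantly fragment."""
    return math.exp(-p * p / (2.0 * width * width))

def p_frag(p, width=2.0):
    """A quark that does not coalesce fragments instead."""
    return 1.0 - p_coal(p, width)
```

Normalizing the probability to unity at zero momentum mirrors the physical argument quoted below that a zero-momentum charm quark cannot fragment.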
The coalescence probability from two (three) constituent quarks to a meson (baryon) is given by the wavefunction overlap between the free quark state and the hadronic bound state. If a heavy quark does not coalesce with thermal quarks from the QGP, it fragments. This coalescence formalism has recently been extended from $s$-wave hadronic states to both $s$ and $p$-wave states of charmed hadrons~\cite{Cao:2019iqs}. It is found that adding the $p$-wave contribution significantly increases the total coalescence probability of charm quarks and makes it possible to normalize this probability at zero momentum with a proper in-medium size of charmed hadrons ($r_{D^0}=0.97$~fm), considering that a zero-momentum charm quark is not energetic enough to fragment into hadrons. Additionally, including $p$-wave states naturally incorporates all major charmed hadron states listed in the Particle Data Group (PDG)~\cite{Tanabashi:2018oca} and enhances the $\Lambda_c/D^0$ ratio. A longstanding deficiency of this coalescence formalism is its violation of energy conservation. This has also been fixed in Ref.~\cite{Cao:2019iqs} by first coalescing multiple constituent quarks into an off-shell bound state and then decaying it into on-shell hadrons with the 4-momentum of the entire system strictly conserved. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Cao-ratio_LD_DsD0} \caption{(Color online) The charmed hadron chemistry from a coalescence-fragmentation hadronization model: (a) the $p_\mathrm{T}$ integrated $\Lambda_c/D^0$ ratio, (b) the $p_\mathrm{T}$ differentiated $\Lambda_c/D^0$ ratio, and (c) the $p_\mathrm{T}$ differentiated $D_s/D^0$ ratio.
The figure is from Ref.~\cite{Cao:2019iqs}.} \label{fig:Cao-hadronization} \end{figure} As shown in Fig.~\ref{fig:Cao-hadronization}, after combining this improved hadronization model with a transport-hydrodynamics model that provides the realistic heavy quark distribution after they traverse the QGP, one obtains a satisfactory description of the charmed hadron chemistry as observed in Au+Au collisions at $\sqrt{s}=200$~GeV, including both $p_\mathrm{T}$ integrated and differentiated $\Lambda_c/D^0$ and $D_s/D^0$ ratios. Effects of the QGP flow and fragmentation vs. coalescence mechanism on the charmed hadron chemistry have also been explored in Fig.~\ref{fig:Cao-hadronization}. The $p_\mathrm{T}$ boost from the QGP flow is stronger on heavier hadrons and thus significantly enhances the $\Lambda_c/D^0$ ratio. The coalescence also yields much larger baryon-to-meson ratio than fragmentation. Within this framework, it has been predicted that the in-medium charmed hadron size should be larger than that in vacuum, which may be tested by future hadronic model calculations. There might be other mechanisms affecting the charmed hadron chemistry, such as contributions from possible resonant states beyond the current PDG list~\cite{He:2019vgs} and the sequential coalescence of charmed hadrons at different temperatures~\cite{Zhao:2018jlw}. \subsection{Dihadron and $\gamma$/$Z^0$-triggered hadrons} \label{subsec:diHadron} In addition to single inclusive hadrons, dihadron~\cite{Majumder:2004pt,Zhang:2007ja,Renk:2008xq,Cao:2015cba} and $\gamma$/$Z^0$-triggered hadrons~\cite{Zhang:2009rn,Qin:2009bk,Chen:2017zte} provide additional tools to place more stringent constraints on our understanding of parton energy loss inside the QGP. For instance, one may measure the medium modification of the momentum imbalance between the associated hadron and the triggered hadron or $\gamma$/$Z^0$ in A+A collisions relative to that in p+p. 
Such observables are also independent of $\langle N_\mathrm{coll} \rangle$ in Eq.~(\ref{eq:defRAA}) for the $R_\mathrm{AA}$ of single inclusive hadrons and therefore are free of the associated systematic uncertainties. The $\gamma$/$Z^0$-triggered hadrons/jets are in particular considered ``golden channels'' for the study of jet quenching since the triggered $\gamma$/$Z^0$ does not lose energy inside the medium, and therefore serves as an ideal reference for the energy loss of the associated jet partons. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Zhang-photon-hadron-IAA} \caption{(Color online) The participant number dependence of the $\gamma$-triggered hadron suppression factor $I_\mathrm{AA}$ in Au+Au collisions at RHIC. Curves from top to bottom correspond to the hadron $p_\mathrm{T}$ at 2, 4, 6, 8 and 10~GeV, respectively. The figure is from Ref.~\cite{Zhang:2009rn}.} \label{fig:Zhang-photon-hadron-IAA} \end{figure} One commonly investigated quantity is the triggered nuclear modification factor $I_\mathrm{AA}$ \cite{Adler:2002tq} defined as \begin{equation} \label{eq:defIAA} I_\mathrm{AA}(z_\mathrm{T})\equiv\frac{\frac{1}{N_\mathrm{trig}}\frac{dN^\mathrm{asso}}{dz_\mathrm{T}}|_\mathrm{AA}}{\frac{1}{N_\mathrm{trig}}\frac{dN^\mathrm{asso}}{dz_\mathrm{T}}|_\mathrm{pp}}, \end{equation} where $z_\mathrm{T}=p^{\rm asso}_\mathrm{T}/p_\mathrm{T}^{\rm trig}$ is the $p_\mathrm{T}$ ratio between the associated hadron and the triggered particle (hadron, $\gamma$ or $Z^0$ boson), measuring the transverse momentum imbalance between them. The numerator and denominator in the above equation are both normalized to yields of triggered particles, and are called hadron or $\gamma$/$Z^0$-triggered fragmentation functions in the literature.
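Eq.~(\ref{eq:defIAA}) can be sketched with toy per-trigger $z_\mathrm{T}$ lists (all numbers invented), where the A+A list is depleted at large $z_\mathrm{T}$ to mimic energy loss of the associated parton:

```python
def i_aa(n_trig_aa, zt_aa, n_trig_pp, zt_pp, zt_lo, zt_hi):
    """Per-trigger yield ratio of Eq. (defIAA) in one z_T window;
    zt_aa and zt_pp are lists of z_T = pT_asso / pT_trig values."""
    def per_trigger(n_trig, zts):
        return sum(1 for z in zts if zt_lo <= z < zt_hi) / n_trig
    return per_trigger(n_trig_aa, zt_aa) / per_trigger(n_trig_pp, zt_pp)

# Toy associated-particle lists for one trigger each (invented numbers):
# energy loss shifts associated particles from high to low z_T in A+A.
zt_pp = [0.1, 0.2, 0.3, 0.5, 0.7, 0.9]
zt_aa = [0.1, 0.15, 0.2, 0.3, 0.5]
iaa_high = i_aa(1, zt_aa, 1, zt_pp, 0.5, 1.0)  # suppressed window
iaa_low = i_aa(1, zt_aa, 1, zt_pp, 0.0, 0.3)   # enhanced window
```

Because both yields are normalized per trigger, the $\langle N_\mathrm{coll}\rangle$ scaling of Eq.~(\ref{eq:defRAA}) indeed never enters.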
One would expect that with an increase of parton energy loss inside the QGP, there will be a suppression of the associated hadrons at high $p_\mathrm{T}$ ($z_\mathrm{T}$), thus larger imbalance between the associated and the triggered particles. In Fig.~\ref{fig:Zhang-photon-hadron-IAA}, we present the $z_\mathrm{T}$-integrated $I_\mathrm{AA}$ of $\gamma$-triggered hadrons as a function of the participant number ($N_\mathrm{part}$) in Au+Au collisions at $\sqrt{s}=200$~GeV. For a fixed $p_\mathrm{T}$ range of the triggered photon, a stronger suppression of the associated hadron is observed at larger $p_\mathrm{T}$~\cite{Zhang:2009rn}. Due to larger energy loss of the associated jet parton in more central collisions, the $I_\mathrm{AA}$ also decreases with the increase of $N_\mathrm{part}$. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Cao-initXY-zT} \caption{(Color online) The density distribution of the initial $c\bar{c}$ production positions in the transverse ($x$-$y$) plane for different $z_\mathrm{T}$ values between the final state $D\overline{D}$ pairs: (a) $z_\mathrm{T}\in (0.2, 0.4)$, (b) $z_\mathrm{T}\in (0.4, 0.6)$, (c) $z_\mathrm{T}\in (0.6, 0.8)$, and (d) $z_\mathrm{T}\in (0.8, 1.0)$. The triggered $D$ or $\overline{D}$ mesons are taken along the out-of-plane directions ($|\phi_\mathrm{trig} - \pi/2| < \pi/6$) for central Au+Au collisions at 200 GeV ($p_\mathrm{T,trig} > 4$~GeV and $p_\mathrm{T,asso} > 2$~GeV). The figure is from Ref.~\cite{Cao:2015cba}.} \label{fig:Cao-initXY-zT} \end{figure} It has also been found in Refs.~\cite{Zhang:2007ja,Zhang:2009rn,Qin:2012gp,Cao:2015cba} that the $z_\mathrm{T}$ value can help identify the position from which the jet event is produced in the initial nucleus-nucleus collision. 
This is demonstrated in Fig.~\ref{fig:Cao-initXY-zT} by the transverse distribution of the initial hard processes leading to heavy flavor meson pairs as an example, where within the 0-10\% centrality of Au+Au collisions at $\sqrt{s}=200$~GeV, $D$ or $\overline{D}$ mesons with $p_\mathrm{T} > 4$~GeV are triggered, and $p_\mathrm{T} > 2$~GeV is required for their associated anti-particles. One may observe that, for smaller values of $z_\mathrm{T}$, the initial charm quark pairs are more biased toward the edge of the overlap region of heavy-ion collisions so that the difference in the path lengths and thus energy loss is larger between triggered and associated particles. By contrast, for larger $z_\mathrm{T}$ values, initial charm quark pairs are more likely to spread smoothly over the whole overlap region. Similar analyses have been done for dihadron~\cite{Zhang:2007ja}, $\gamma$-hadron~\cite{Zhang:2009rn} and $\gamma$-jet~\cite{Qin:2012gp} events. This allows us to use the momentum imbalance of triggered particle/jet pairs to probe different regions of the hot nuclear matter and also to obtain a better understanding of the path length dependence of parton energy loss inside the QGP. Apart from the momentum imbalance, the angular correlation between the associated hadrons and the triggered particle is another interesting observable for quantifying the transverse momentum broadening of jet partons. This may provide a more direct constraint on $\hat{q}$. In Ref.~\cite{Chen:2016vem}, a systematic resummation formalism has been employed for the first time to calculate the dihadron and hadron-jet angular correlation in p+p and peripheral A+A collisions. With a global $\chi^2$ fit to experimental data, the authors obtain the medium-induced broadening of a quark jet of around $\langle p_\perp^2\rangle\sim 13$~GeV$^2$, and the jet transport parameter $\hat{q}_0=3.9^{+1.5}_{-1.2}$~GeV$^2$/fm at the top RHIC temperature.
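The two quoted numbers are mutually consistent with the simple random-walk estimate $\langle p_\perp^2\rangle \approx \hat{q}\,L$ for a constant $\hat{q}$; the sketch below applies this estimate only as an order-of-magnitude cross-check, ignoring the temperature profile of the medium:

```python
def pt_broadening(qhat, length):
    """Accumulated broadening <p_perp^2> ~ qhat * L for a constant qhat
    (GeV^2/fm) along a path length L (fm); an order-of-magnitude
    estimate that ignores the temperature profile of the medium."""
    return qhat * length

def path_length(pt2, qhat):
    """Effective path length implied by a measured <p_perp^2>."""
    return pt2 / qhat
```

With $\hat{q}_0=3.9$~GeV$^2$/fm, a broadening of $\langle p_\perp^2\rangle\sim 13$~GeV$^2$ corresponds to an effective path length of roughly 3.3~fm, a reasonable scale for peripheral A+A collisions.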
In addition, Ref.~\cite{Cao:2015cba} proposes that using the angular correlation between heavy meson pairs can help constrain the detailed energy loss mechanism of heavy quarks, which cannot be uniquely identified with the single inclusive hadron observables. It has been found that with the same $R_\mathrm{AA}$ factor of $D$ mesons, collisional energy loss is much more effective in smearing the angular distribution between the $D\overline{D}$ pairs compared to energy loss from collinear gluon radiation. In this work, different momentum imbalance ($z_\mathrm{T}$) cuts have been applied to separate the energy loss effect from the momentum broadening effect on the angular correlation, so that the best kinematic region has been suggested for future experimental measurements on constraining the heavy quark dynamics in the QGP. \begin{figure}[tbp] \centering \includegraphics[width=0.4\textwidth]{Chen-gamma-hadron-IAA-xi} \caption{(Color online) Nuclear modification of the $\gamma$-triggered hadron yield in different $p_\mathrm{T}^\gamma$ regions in Au+Au collisions at 200~GeV and Pb+Pb collisions at 5.02~TeV, with $|\eta_{h,\gamma}|<0.35$. The figure is from Ref.~\cite{Chen:2017zte}.} \label{fig:Chen-gamma-hadron-IAA-xi} \end{figure} While most earlier work concentrated on the suppression of the high $z_\mathrm{T}$ hadron yield, a recent study~\cite{Chen:2017zte} has found that the enhancement of the soft hadron production at low $z_\mathrm{T}$ in $\gamma$-triggered hadron events could serve as a smoking-gun signal of the QGP response to jet propagation. Such soft hadron enhancement is investigated within the \textsc{CoLbt-Hydro} model that realizes an event-by-event concurrent simulation of jet and QGP evolution in relativistic nuclear collisions as discussed in Sec.~\ref{subsec:concurrent}. 
The nuclear modification of the $\gamma$-triggered hadron yield in different $p_\mathrm{T}$ ranges of the triggered photon from this study is shown in Fig.~\ref{fig:Chen-gamma-hadron-IAA-xi}. To compare to different data sets, the 0-40\% centrality bin is used in panels~(a) and (b), 0-12\% is used in panel~(c), and 0-10\% is used in panel~(d). In order to investigate $I_\mathrm{AA}$ at low $z_\mathrm{T}$, the variable $\xi=\ln(1/z_\mathrm{T})$ is used for the horizontal axis. From Fig.~\ref{fig:Chen-gamma-hadron-IAA-xi}, one clearly observes a suppression of $I_\mathrm{AA}$ at small $\xi$ (large $z_\mathrm{T}$) due to parton energy loss before fragmenting into hadrons. On the other hand, jet-induced medium excitation is clearly shown to lead to an enhancement of the soft hadron yield at large $\xi$ (small $z_\mathrm{T}$). The onset of the soft hadron enhancement ($I_\mathrm{AA} \ge 1$) shifts towards a larger $\xi$ value with the increase of $p_\mathrm{T}^\gamma$ [from Fig.~\ref{fig:Chen-gamma-hadron-IAA-xi} (a) to (d)], corresponding to a fixed hadron transverse momentum $p_\mathrm{T}^h = z_\mathrm{T}p_\mathrm{T}^\gamma\sim 2$~GeV. This scale reflects the thermal momentum of hadrons from the jet-induced medium response in the QGP, which is approximately independent of the jet energy. This is a unique feature of the jet-induced medium response from the \textsc{CoLbt-Hydro} model. The \textsc{CoLbt-Hydro} predictions for high $p_\mathrm{T}^\gamma$ $\gamma$-triggered jets have also been confirmed by the recent LHC data~\cite{Aaboud:2019oac}. Jet-induced medium response in \textsc{CoLbt-Hydro} also explains the enhancement of the jet fragmentation function at small $z_\mathrm{T}$ in $\gamma$-jet events, as we will show in the discussion about medium modification of jet substructures in Sec.~\ref{subsec:jetFragmentation}.
Similar experimental measurements~\cite{ATLAS:2019gif} together with new theoretical calculations~\cite{Casalderrey-Solana:2015vaa,Zhang:2018urd} on $Z^0$-triggered hadrons/jets have also become available. \begin{figure}[tbp] \centering \includegraphics[width=0.45\textwidth]{Chen-gamma-hadron-angular-corr} \caption{(Color online) The $\gamma$-hadron azimuthal correlation in different $p_\mathrm{T}^h$ regions in p+p and 0-12\% Au+Au collisions at 200~GeV with $|\eta_{h,\gamma}|<1.0$ and $12<p_\mathrm{T}^\gamma < 20$~GeV, normalized per triggered photon. The half width $\sigma$ is obtained from a Gaussian fit within $|\Delta \phi_{\gamma h}-\pi| < 1.4$. The figure is from Ref.~\cite{Chen:2017zte}.} \label{fig:Chen-gamma-hadron-angular-corr} \end{figure} The effect of jet-induced medium response can also be investigated in the $\gamma$-hadron angular correlation as shown in Fig.~\ref{fig:Chen-gamma-hadron-angular-corr}, where results for central Au+Au collisions at 200~GeV with and without contributions from the jet-induced medium excitation are compared to that in p+p collisions. One can see that the contribution from medium response is negligible at high $p_\mathrm{T}^h$. The widths of the angular correlation $\sigma$ in Au+Au and p+p collisions from fitting to a Gaussian function within the $|\Delta \phi_{\gamma h}-\pi| < 1.4$ region are comparable, though there is an obvious suppression of the hadron yield at large $p_\mathrm{T}^h$ [Fig.~\ref{fig:Chen-gamma-hadron-angular-corr} (d)]. At low $p_\mathrm{T}^h$ [Fig.~\ref{fig:Chen-gamma-hadron-angular-corr} (a)], on the other hand, there is a significant enhancement of the hadron yield and broadening of their angular distribution in Au+Au collisions due to jet-induced medium excitation. The most interesting feature in the angular distribution of soft hadrons is their depletion near $\Delta \phi_{\gamma h}=0$, along the direction of the triggered photon, due to the diffusion wake left behind the jet.
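The half width $\sigma$ quoted in the figure comes from a Gaussian fit of the azimuthal correlation within $|\Delta\phi_{\gamma h}-\pi|<1.4$. For a pedestal-free Gaussian peak, such a fit can be done in closed form by fitting a parabola to the logarithm of the yield; the numpy sketch below is a simplified illustration of this procedure (real analyses fit the histogrammed, background-subtracted yield directly, and the toy numbers are invented):

```python
import numpy as np

def gaussian_width(dphi, yield_per_trigger, window=1.4):
    """Fit ln(yield) = c0 + c1*x + c2*x^2 with x = dphi - pi inside |x| < window
    and return sigma = sqrt(-1/(2*c2)).  Assumes a pedestal-free Gaussian peak."""
    x = np.asarray(dphi) - np.pi
    y = np.asarray(yield_per_trigger)
    mask = (np.abs(x) < window) & (y > 0)
    c2, c1, c0 = np.polyfit(x[mask], np.log(y[mask]), 2)
    return np.sqrt(-1.0 / (2.0 * c2))

# toy correlation with a known width sigma = 0.35
x = np.linspace(0.0, 2.0 * np.pi, 200)
toy = 0.8 * np.exp(-0.5 * ((x - np.pi) / 0.35) ** 2)
sigma_fit = gaussian_width(x, toy)
```

In the absence of noise the fit recovers the input width exactly; with a soft-hadron pedestal (as at low $p_\mathrm{T}^h$ in Au+Au collisions) a constant term must be included in the fit model.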
This is consistent with the snapshots of the \textsc{CoLbt-Hydro} simulation in Fig.~\ref{fig:Chen-jime-simulation}. Experimental verification of such depletion will be an unambiguous signal of jet-induced medium response. \section{Jet spectra} \label{sec:jet_spectra} In the study of the suppression of jet spectra with a given jet-cone size $R$ in heavy-ion collisions, one should consider not only the jet energy loss due to transport of partons to the outside of the jet-cone through elastic scattering and induced gluon radiation, but also the effect of jet-induced medium response that can also contribute to the total energy inside the jet-cone as constructed by a jet-finding algorithm. This contribution from jet-induced medium response will affect the transverse momentum and jet-cone size dependence of jet energy loss and thus jet spectrum suppression. In this section, we will review the suppression of single and $\gamma$/$Z$-triggered jets in heavy-ion collisions and effects of jet-induced medium response. \subsection{Single inclusive jets} \label{subsec:singleJet} To calculate the suppression of single inclusive jet spectra in high-energy heavy-ion collisions, we can first use \textsc{Pythia~8}~\cite{Sjostrand:2006za} or other Monte Carlo programs to generate the initial jet shower parton distributions from elementary nucleon-nucleon collisions and then use transport models such as \textsc{Lbt} to simulate the transport of these jet shower partons through the bulk medium that evolves according to a hydrodynamic model. The \textsc{Fastjet} program~\cite{Cacciari:2011ma}, modified to include the subtraction of ``negative" partons from the total energy inside a jet-cone, is utilized with the anti-$k_\mathrm{T}$ algorithm to reconstruct jets and calculate the final single inclusive jet spectra. 
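The role of the ``negative'' partons in jet reconstruction can be illustrated with a toy cone sum: each final particle carries a weight $+1$ (shower partons, radiated gluons, recoil partons) or $-1$ (``negative'' partons, i.e.\ energy depleted from the background), and the reconstructed jet $p_\mathrm{T}$ inside $\Delta R < R$ is the weighted sum. This is only a conceptual sketch of the subtraction, not the actual modified \textsc{Fastjet} anti-$k_\mathrm{T}$ implementation; the particle list is invented:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in the (eta, phi) plane, with the phi difference wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def cone_pt(particles, jet_eta, jet_phi, radius=0.4):
    """Weighted p_T sum inside the cone; weight = -1 flags 'negative' partons."""
    return sum(w * pt for pt, eta, phi, w in particles
               if delta_r(eta, phi, jet_eta, jet_phi) < radius)

# (pt, eta, phi, weight): two shower partons, one recoil, one "negative" parton
particles = [(60.0, 0.0, 0.0, +1), (25.0, 0.1, 0.2, +1),
             (5.0, -0.2, 0.1, +1), (3.0, 0.05, -0.1, -1)]
pt_with_subtraction = cone_pt(particles, 0.0, 0.0)                            # 87.0
pt_without_negative = cone_pt([p for p in particles if p[3] > 0], 0.0, 0.0)   # 90.0
```

Omitting the ``negative'' partons overestimates the in-cone energy, which is why their subtraction increases the net jet energy loss discussed below.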
In practice, as we discuss in this section, we use \textsc{Pythia~8} to generate the initial jet shower partons (with both initial and final state radiation) for a given number of events within the interval of the transverse momentum transfer $p_{\mathrm{T}c} \in (p_{\mathrm{T} c}-d p_{\mathrm{T} c}/2, p_{\mathrm{T} c}+dp_{\mathrm{T} c}/2)$ and the cross section $d\sigma_{\rm LO}^{{\rm pp}(c)}/dp_{\mathrm{T} c}$ in the leading-order (LO) perturbative QCD (pQCD) in p+p collisions. Using \textsc{Fastjet} with a given jet-cone radius $R$, one can get an event-averaged single inclusive jet spectrum $dN^{\rm jet}_{c}(p_\mathrm{T},p_{\mathrm{T}c})/dydp_\mathrm{T}$, where $p_\mathrm{T}$ and $y$ are the transverse momentum and rapidity of the final jet, respectively. The final single inclusive jet cross section in p+p collisions is given by \begin{equation} \frac{d^2\sigma^{\rm jet}_{\rm pp}}{dp_\mathrm{T}dy} = \sum_c\int dp_{\mathrm{T}c} \frac{d\sigma_{\rm LO}^{{\rm pp}(c)} }{dp_{\mathrm{T}c}} \frac{d^2N^{\rm jet}_{c}(p_\mathrm{T}, p_{\mathrm{T}c})} {dp_\mathrm{T} dy}, \label{eq-jetcrs} \end{equation} where the LO pQCD cross section for the production of the initial hard parton $c$ in p+p collisions is given by \begin{eqnarray} \frac{d \sigma^{{\rm pp}(c)}_{\rm LO}}{dp_{\mathrm{T}c}} & = & 2 p_{\mathrm{T}c}\sum_{a,b,d} \int dy_c dy_d x_a f_{a/p} (x_a, \mu^2) \nonumber\\ & & \times x_b f_{b/p} (x_b, \mu^2) \frac{d\hat\sigma_{ab\to cd}}{dt}, \label{eq:cs.pp} \end{eqnarray} where $y_c$ and $y_d$ are rapidities of the final hard partons in the $a+b\rightarrow c+d$ processes, $x_a=x_{\mathrm{T}c}(e^{y_c}+e^{y_d})/2$ and $x_b=x_{\mathrm{T}c}(e^{-y_c}+e^{-y_d})/2$ are the momentum fractions carried by the initial partons from the two colliding protons with $x_{\mathrm{T}c}=2p_{\mathrm{T}c}/\sqrt{s}$, $f_{a/p}(x,\mu^2)$ is the parton distribution inside a proton at the scale $\mu^2=p_{\mathrm{T}c}^2$ and $d\hat\sigma_{ab\to cd}/dt$ is the parton level LO cross section which depends on the
Mandelstam variables $\hat s=x_ax_bs$, $\hat t=-p_{\mathrm{T}c}^2(1+e^{y_d-y_c})$ and $\hat u=-p_{\mathrm{T}c}^2(1+e^{y_c-y_d})$. Because of the initial and final state radiations, there can be more than two jets in the final state and the transverse momentum $p_\mathrm{T}$ of the final leading jet is normally different from the trigger $p_{\mathrm{T}c}$. \begin{figure}[tbp] \centering \includegraphics[width=7.5cm]{He-jetCS} \caption{(Color online) The inclusive jet cross section from \textsc{Pythia~8} as a function of the jet transverse momentum $p_\mathrm{T}$ in different rapidity bins in p+p collisions at $\sqrt{s} = 5.02$ TeV (solid) and 2.76~TeV (dashed), using the anti-$k_\mathrm{T}$ algorithm with the jet cone radius $R = 0.4$ as compared to the ATLAS experimental data~\cite{Aaboud:2018twu}. Results for different rapidities are scaled by successive powers of 100. The figure is from Ref.~\cite{He:2018xjv}.} \label{jetCS} \end{figure} Shown in Fig.~\ref{jetCS} are the single inclusive jet cross sections as a function of the final jet transverse momentum $p_\mathrm{T}$ in different rapidity bins of p+p collisions at $\sqrt{s}=2.76$ and 5.02~TeV from \textsc{Pythia~8} as compared to the ATLAS experimental data~\cite{Aad:2014bxa,Aaboud:2018twu}. We see that \textsc{Pythia~8} describes the experimental data well. The single inclusive jet spectra at $\sqrt{s}=5.02$~TeV are much flatter than those at 2.76~TeV, as determined mainly by the parton distribution functions. To calculate the jet spectra in heavy-ion collisions, one first needs to consider the nuclear modification of the initial parton distributions~\cite{Eskola:2009uj,Ru:2016wfx}. One then lets the initial jet shower partons propagate through the QGP medium within the \textsc{Lbt} model.
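The LO kinematics above can be checked numerically. With the standard normalization of the momentum fractions, $x_{a,b}=x_{\mathrm{T}c}(e^{\pm y_c}+e^{\pm y_d})/2$, the massless Mandelstam variables satisfy $\hat s+\hat t+\hat u=0$ for arbitrary rapidities; the sketch below verifies this (the numerical inputs $p_{\mathrm{T}c}$, $y_c$, $y_d$, $\sqrt{s}$ are illustrative):

```python
import math

def lo_kinematics(pt_c, y_c, y_d, sqrt_s):
    """Momentum fractions and Mandelstam variables for a + b -> c + d at LO.
    x_{a,b} = (x_Tc/2)(e^{+-y_c} + e^{+-y_d}) with x_Tc = 2 p_Tc / sqrt(s)."""
    x_tc = 2.0 * pt_c / sqrt_s
    x_a = 0.5 * x_tc * (math.exp(y_c) + math.exp(y_d))
    x_b = 0.5 * x_tc * (math.exp(-y_c) + math.exp(-y_d))
    s_hat = x_a * x_b * sqrt_s ** 2
    t_hat = -pt_c ** 2 * (1.0 + math.exp(y_d - y_c))
    u_hat = -pt_c ** 2 * (1.0 + math.exp(y_c - y_d))
    return x_a, x_b, s_hat, t_hat, u_hat

x_a, x_b, s_hat, t_hat, u_hat = lo_kinematics(50.0, 0.5, -0.3, 5020.0)
# consistency check for massless 2 -> 2 kinematics
assert abs(s_hat + t_hat + u_hat) < 1e-6 * abs(s_hat)
```

The same routine also supplies the factorization scale $\mu^2=p_{\mathrm{T}c}^2$ at which the parton distributions in Eq.~(\ref{eq:cs.pp}) are evaluated.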
Using \textsc{Fastjet} for jet reconstruction, one gets an event-averaged final single inclusive jet distribution $d\widetilde{N}^{\rm jet}_{(c)}(p_\mathrm{T}, p_{\mathrm{T}c},\phi_c,{\vec r},{\vec b})/dydp_\mathrm{T}$ for a given initial production point $\vec r$, azimuthal angle $\phi_c$ of the initially produced hard parton $c$ and impact parameter $\vec b$ of the nucleus-nucleus collisions. The cross section for single inclusive jet production in A+A collisions is then given by \begin{eqnarray} \frac{d \sigma^{\rm jet}_{\rm AA}}{dp_\mathrm{T}dy} & = &\sum_{a,b,c,d} \int d^2{r} d^2{b} t_A(r) t_A(|{\vec b}-{\vec r}|) \frac{d\phi_c}{\pi} dy_c dy_d \nonumber\\ && \times \int dp_{\mathrm{T}c} p_{\mathrm{T}c} x_a f_{a/A} (x_a, \mu^2) x_b f_{b/B} (x_b, \mu^2) \nonumber \\ && \times \frac{d\hat\sigma_{ab\to cd}}{dt} \frac{d\widetilde{N}^{\rm jet}_{(c)}(p_\mathrm{T},p_{\mathrm{T}c},\phi_c,{\vec r},{\vec b})}{dydp_\mathrm{T}}, \label{eq:cs.aa} \end{eqnarray} where $t_{A}(r)$ is the nuclear thickness function with normalization $\int d^2{r} t_A(r)=A$ and $f_{a/A}(x,\mu^2)$ is the nuclear modified parton distribution function \cite{Eskola:2009uj,Ru:2016wfx} per nucleon. The range of the impact parameter $b$ is determined by the centrality of the nucleus-nucleus collisions according to experimental measurements. The suppression factor due to interactions between shower and medium partons in heavy-ion collisions is given by the ratio of the jet cross sections for A+A and p+p collisions normalized by the averaged number of binary nucleon-nucleon collisions, \begin{equation} R_{\rm AA}=\frac{1}{\int d^2rd^2b t_A(r) t_A(|{\vec b}-{\vec r}|)} \frac{d\sigma^{\rm jet}_{\rm AA}}{d\sigma^{\rm jet}_{\rm pp}}. \label{eq:raa} \end{equation} In the jet reconstruction using \textsc{Fastjet}, one should also subtract the underlying event (UE) background.
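The nuclear thickness function entering Eqs.~(\ref{eq:cs.aa}) and (\ref{eq:raa}) can be evaluated from a Woods-Saxon density. A numpy sketch for Pb (the radius $R\approx 6.62$~fm and diffuseness $a\approx 0.546$~fm are standard Woods-Saxon parameters, assumed here rather than taken from the text; the density is rescaled so that $\int d^2r\, t_A(r) = A$ as stated above):

```python
import numpy as np

A, R_ws, a_ws = 208, 6.62, 0.546   # Pb: mass number, WS radius and diffuseness (fm)

z = np.linspace(-20.0, 20.0, 801)
r = np.linspace(0.0, 20.0, 801)
dz, dr = z[1] - z[0], r[1] - r[0]

# t_A(r) = int dz rho_WS(sqrt(r^2 + z^2)), with rho normalized afterwards
rho = 1.0 / (1.0 + np.exp((np.sqrt(r[:, None]**2 + z[None, :]**2) - R_ws) / a_ws))
t_raw = rho.sum(axis=1) * dz
norm = (2.0 * np.pi * r * t_raw).sum() * dr   # int d^2r t_A before rescaling
t_A = A * t_raw / norm                        # now int d^2r t_A(r) = A
```

The central value $t_A(0)\approx 2.1$~fm$^{-2}$ follows from the saturated density $\rho_0\approx 0.16$~fm$^{-3}$; the overlap function $\int d^2r\, t_A(r)\, t_A(|\vec b-\vec r|)$ in Eq.~(\ref{eq:raa}) is then a 2D convolution of this profile.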
In the \textsc{Lbt} study presented here, a scheme inspired by the method in the experimental studies~\cite{Aad:2012vca} is used. In \textsc{Lbt} simulations, only jet shower partons, radiated gluons and recoil medium partons including ``negative" partons are used for jet reconstruction. The UE background is very small as compared to the full hydrodynamic UE background. The contribution of UE to the jet energy before the subtraction in \textsc{Lbt} simulations is only a few percent in central Pb+Pb collisions and much smaller in p+p collisions. \begin{figure}[tbp] \centering \includegraphics[width=7.5cm]{He-RAA4in1} \caption{(Color online) The suppression factor $R_{\rm AA}$ of single inclusive jet spectra in the central rapidity $|y|<2.1$ region of 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$~TeV from \textsc{Lbt} simulations with fixed $\alpha_{\rm s}=0.15$ as compared to the ATLAS data at the LHC \cite{Aad:2014bxa}. The jet reconstruction with $R=0.4$ and anti-$k_\mathrm{T}$ algorithm includes four different options on ``negative" partons and UES: (a) with both ``negative" partons and UES, (b) with ``negative" partons but without UES, (c) with UES but without ``negative" partons, and (d) without ``negative" partons and UES. The figure is from Ref.~\cite{He:2018xjv}.} \label{RAA_4opts} \end{figure} Shown in Fig.~\ref{RAA_4opts} are the suppression factors $R_{\rm AA}(p_\mathrm{T})$ in the central rapidity $|y|<2.1$ region of 0-10\% central Pb+Pb collisions at $\sqrt{s}=2.76$~TeV from \textsc{Lbt} simulations with different options on ``negative" partons and UE subtraction (UES) as compared to the ATLAS data. The fixed value of $\alpha_{\rm s}=0.15$ is used which minimizes the $\chi^2$/d.o.f. from fitting to the ATLAS data when ``negative" partons and UES are both considered [Fig.~\ref{RAA_4opts} (a)]. 
The fixed value of $\alpha_{\rm s}$ is only an effective strong coupling constant in the \textsc{Lbt} model, in which the perturbative Debye screening mass is used to regularize the collinear divergence in elastic scattering and the radiative gluon spectrum. Other non-perturbative physics such as chromo-magnetic monopoles can effectively increase the screening mass~\cite{Liao:2008jg,Liao:2008dk,Xu:2015bbz,Xu:2014tda}. Furthermore, the thermal mass of medium partons can also reduce the effective thermal parton density significantly in the interaction rate. These can all increase the value of the effective strong coupling constant needed in \textsc{Lbt} to fit experimental data. As we discussed before, the inclusion of recoil partons contributes to the total energy inside the jet-cone and therefore significantly reduces the final jet energy loss. The ``negative" partons from the diffusion wake of the jet-induced medium response, however, will reduce the energy inside the jet-cone. One can consider this as a jet-induced modification of the background. It will increase the net jet energy loss. The UES similarly increases the net jet energy loss. Therefore, they both lead to smaller values of the suppression factor as seen in Fig.~\ref{RAA_4opts}, though the effect of ``negative" partons is relatively larger. The effect of UES is also larger without ``negative" partons than with them. \begin{figure}[tbp] \centering \includegraphics[width=7.5cm]{He-pTloss_recoil} \caption{(Color online) The average jet transverse momentum loss as a function of its initial $p_\mathrm{T}$, with and without the contributions of recoil/``negative" partons.
The figure is from Ref.~\cite{He:2018xjv}.} \label{fig:plot-pTloss_recoil} \end{figure} To illustrate the effect of jet-induced medium response on the suppression of single inclusive jet spectra, one can examine the colliding energy and transverse momentum dependence of the jet energy loss as shown in Fig.~\ref{fig:plot-pTloss_recoil} for central Pb+Pb collisions at both $\sqrt{s}=2.76$~TeV and 5.02~TeV. Apparently, the average transverse momentum loss of jets depends on whether the jet-induced medium excitation is taken into account. At both colliding energies, one observes that including recoil partons in jet reconstruction significantly enhances the final jet $p_\mathrm{T}$ as compared to that without, and thus reduces the $p_\mathrm{T}$ loss. On the other hand, the subtraction of ``negative" partons reduces the jet $p_\mathrm{T}$, and thus enhances the $p_\mathrm{T}$ loss. The time (or path length) dependence of these medium response effects on jet energy loss is also interesting~\cite{He:2015pra}. One observes that the contribution from ``negative" partons is negligible at the early stage of parton propagation. However, as the number of partons within the jet shower grows with time, so do the number of jet-medium scatterings and the number of ``negative" partons. At later times, the ``negative" partons significantly deplete the thermal medium behind the propagating jet and effectively modify the background underlying the jet. Only with the subtraction of the ``negative" parton energy can one obtain the expected linear increase of elastic energy loss with the path length. Therefore, a proper treatment of these medium response effects is crucial for correctly describing jet observables.
\begin{figure}[tbp] \centering \includegraphics[width=7.5cm]{He-RAA_twoEnergy} \caption{(Color online) The \textsc{Lbt} results on $R_{\rm AA}(p_\mathrm{T})$ in the central rapidity $|y|<2.1$ for single inclusive jet spectra in 0-10\% central Pb+Pb collisions at $\sqrt{s} = 2.76$ (red dashed line) and 5.02~TeV (blue solid line) as compared to the ATLAS data~\cite{Aad:2014bxa,Aaboud:2018twu}. The figure is from Ref.~\cite{He:2018xjv}.} \label{RAA_twoEnergy} \end{figure} The relatively weak transverse momentum dependence of the jet energy loss in Fig.~\ref{fig:plot-pTloss_recoil} is also shown \cite{He:2018xjv} to be influenced by the combination of many interplaying effects such as radial flow and medium-modified flavor composition in addition to jet-induced medium response. Since the initial parton density increases with the colliding energy as reflected in the 20\% increase of the measured hadron multiplicity in the central rapidity~\cite{Abbas:2013bpa,Adam:2016ddh}, the net jet energy loss at $\sqrt{s}=5.02$~TeV is indeed about 15\% larger than at $\sqrt{s}=2.76$~TeV in the $p_\mathrm{T}=50-400$~GeV range when the medium response is taken into account as shown in Fig.~\ref{fig:plot-pTloss_recoil}. Assuming the effective strong coupling constant in \textsc{Lbt} is independent of the local temperature, the predicted suppression factor for single inclusive jet spectra in Pb+Pb collisions at $\sqrt{s}=5.02$~TeV, shown in Fig.~\ref{RAA_twoEnergy} together with the ATLAS data~\cite{Aad:2014bxa,Aaboud:2018twu}, is almost the same as that at 2.76~TeV. The transverse momentum dependence of the jet suppression factor in this range of $p_\mathrm{T}$ is also very weak, which is very different from the suppression factor of single inclusive charged hadrons~\cite{Aamodt:2010jd,CMS:2012aa,Khachatryan:2016odn,Acharya:2018qsh}. Both features are consequences of the energy and transverse momentum dependence of the jet energy loss and the initial jet spectra.
The increased jet energy loss at higher colliding energy is offset by the flatter initial jet spectra (see Fig.~\ref{jetCS}) to give rise to almost the same $R_\mathrm{AA}(p_\mathrm{T})$. Since the net jet energy loss decreases with the jet-cone size, as a bigger cone size includes more medium recoil partons and radiated gluons, inclusion of medium response should lead to a unique cone size dependence of the jet suppression. Shown in Fig.~\ref{RAA-cone} are the jet suppression factors from \textsc{Lbt} with (solid) and without medium recoil (dashed) in the central rapidity region of 0-10\% Pb+Pb collisions at $\sqrt{s}=5.02$~TeV for different jet-cone sizes, $R$=0.5, 0.4, 0.3 and 0.2. As expected, the suppression factor increases with the jet-cone size as the net jet energy loss gets smaller for bigger jet-cone size. Without medium response, the suppression factors are significantly smaller due to increased energy loss and much less sensitive to the jet-cone size. Similar behavior was also predicted in Refs.~\cite{Vitev:2008rz,Vitev:2009rd,Zapp:2012ak,Kang:2017frl} but with different $p_\mathrm{T}$-dependence because they lack the influence of the medium response and the radial expansion of the QGP. The systematic uncertainties of the CMS data~\cite{Khachatryan:2016jfl} in Fig.~\ref{RAA-cone} for Pb+Pb collisions at $\sqrt{s}=2.76$~TeV are too big to indicate any jet-cone size dependence. \begin{figure}[tbp] \centering \includegraphics[width=7.5cm]{He-RAA-cone} \caption{(Color online) The jet suppression factor $R_{\rm AA}$ as a function of $p_\mathrm{T}$ in 0-10\% Pb+Pb collisions at $\sqrt{s}=5.02$~TeV from \textsc{Lbt} with (solid) and without medium response (dashed) for different jet-cone sizes, $R$=0.5, 0.4, 0.3 and 0.2 as compared to the CMS data~\cite{Khachatryan:2016jfl} in 0-5\% Pb+Pb collisions at $\sqrt{s}=2.76$~TeV.
The figure is from Ref.~\cite{He:2018xjv}.} \label{RAA-cone} \end{figure} \begin{figure*}[tbp] \centering \includegraphics[width=12.0cm]{CMS-RAA-cone-new} \caption{(Color online) The preliminary CMS data on the jet suppression factor $R_{\rm AA}(p_\mathrm{T})$ (red solid circles) in 0-10\% central Pb+Pb collisions at $\sqrt{s} = 5.02$~TeV for different jet cone sizes as compared to the ATLAS data~\cite{Aaboud:2018twu} (blue solid circles) and results from different transport model calculations. The figure is from Ref.~\cite{CMS:2019btm}.} \label{RAA-cone-new} \end{figure*} The preliminary CMS data on the jet-cone size dependence of the jet suppression factor with high precision in 0-10\% Pb+Pb collisions at $\sqrt{s}=5.02$~TeV became available recently~\cite{CMS:2019btm}. They are compared with many transport model simulations in Fig.~\ref{RAA-cone-new}. While most of the transport models fail to describe the preliminary CMS data, \textsc{Lbt} (with medium response) and \textsc{Martini} results agree with the data well for large jet-cone sizes ($R$=0.6, 0.8, 1.0). However, the \textsc{Lbt} and \textsc{Martini} results on $R_\mathrm{AA}$ continue to decrease slightly with smaller jet-cone sizes ($R$=0.4, 0.3, 0.2) while the CMS data remain the same and even increase slightly. Other theory calculations without medium response appear to agree with the preliminary data even for small jet-cone sizes~\cite{CMS:2019btm}. It is important to note that the ATLAS data for $R=0.4$ agree with the \textsc{Lbt} and \textsc{Martini} results and are systematically smaller than the preliminary CMS data. Therefore, in order to verify the effect of medium response on the cone-size dependence of jet suppression, it is necessary for CMS and ATLAS to reconcile the discrepancy between their measurements by extending CMS's coverage to small $p_\mathrm{T}$ and ATLAS's analyses to different (both small and large) jet-cone sizes.
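The near-equality of $R_\mathrm{AA}$ at the two colliding energies discussed above can be understood with a simple spectrum-shift estimate: for a power-law spectrum $dN/dp_\mathrm{T}\propto p_\mathrm{T}^{-n}$, a jet energy loss $\Delta p_\mathrm{T}$ gives $R_\mathrm{AA}\approx(1+\Delta p_\mathrm{T}/p_\mathrm{T})^{-n}$ (neglecting the Jacobian of the shift). The powers and energy-loss values below are illustrative choices, not fitted numbers from the text:

```python
def raa_shift(pt, dpt, n):
    """R_AA from shifting a power-law spectrum p_T^{-n} by the energy loss dpt."""
    return (1.0 + dpt / pt) ** (-n)

pt = 100.0
raa_276 = raa_shift(pt, dpt=10.0, n=6.0)   # steeper spectrum, smaller loss
raa_502 = raa_shift(pt, dpt=11.5, n=5.2)   # flatter spectrum, ~15% larger loss
```

Both choices give $R_\mathrm{AA}\approx 0.57$: the roughly 15\% larger energy loss at 5.02~TeV is compensated by the flatter spectrum, mirroring the offset described in the text.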
\subsection{$\gamma$/$Z^0$-jet correlation} \label{subsec:dijet} Similar to the $\gamma$-hadron correlation, $\gamma/Z^0$-jet correlations are excellent probes to study jet quenching and jet-induced medium response because the energy of the triggered $\gamma$/$Z^0$ boson provides an approximate proxy of the initial jet energy before its propagation and transport through the QGP medium. Jet yields per trigger are also free of the uncertainties related to the estimate of the number of binary collisions in the study of quenching of single inclusive hadrons and jets. One can also measure the jet transport coefficient directly through di-hadron, $\gamma$-hadron, or $\gamma/Z^0$-jet correlations in the azimuthal angle~\cite{Appel:1985dq,Blaizot:1986ma}. Even though the Sudakov form factor from initial state radiation dominates the azimuthal angle correlation between $\gamma/Z^0$ and jets with large transverse momentum~\cite{Mueller:2016gko,Chen:2016vem}, the large angle correlation could be influenced by large angle scattering between jet shower and medium partons~\cite{DEramo:2012uzl}, in particular when the transverse momentum scale is not too large. There have been many theoretical studies on jet quenching with $\gamma/Z^0$-jets in heavy-ion collisions~\cite{Li:2010ts,Dai:2012am,Wang:2013cia,Kang:2017xnc}. We will review here recent work on $\gamma/Z^0$-jets using the \textsc{Lbt} model with special emphasis on the effects of medium response, multiple jet production and suppression \cite{Luo:2018pto,Zhang:2018urd}. Similar to the simulation of single inclusive jets, \textsc{Pythia~8}~\cite{Sjostrand:2006za} is used to generate the initial jet shower partons in $\gamma$-jet events in p+p collisions with a minimum transverse momentum transfer that is half of the transverse momentum of the triggered photons. Events with bremsstrahlung photons from QCD processes are also included.
The distribution of the initial $\gamma$-jet production in the transverse plane is sampled according to the distribution of hard processes in the initial condition for the underlying hydrodynamic evolution of the bulk matter. Parton transport is simulated within the \textsc{Lbt} model until the hadronic phase of the bulk matter. The \textsc{Fastjet}~\cite{Cacciari:2011ma} package is again used to reconstruct jets from the final partons. \begin{figure}[tbp] \centering \includegraphics[width=8.0cm]{Luo-gamma-jet-dndpt30100}\\ \vspace{-0.35in} \includegraphics[width=8.0cm]{Luo-gamma-jet-dndpt030} \caption{(Color online) The transverse momentum distribution of $\gamma$-jet in (a) peripheral (30-100\%) and (b) central (0-30\%) Pb+Pb (red) and p+p collisions (blue) at $\sqrt{s}=2.76$~TeV from \textsc{Lbt} simulations as compared to the CMS experimental data~\cite{Chatrchyan:2012gt}. Dashed lines are the transverse momentum distributions for leading jets only. The figures are from Ref.~\cite{Luo:2018pto}.} \label{ptjet} \end{figure} Shown in Fig.~\ref{ptjet} are the distributions of the associated jets from \textsc{Lbt} simulations as a function of $p_\mathrm{T}^{\rm jet}$ for fixed $p_\mathrm{T}^\gamma \ge 80$ GeV in both p+p and Pb+Pb collisions at $\sqrt{s}=2.76$~TeV which compare fairly well with the experimental data from CMS~\cite{Chatrchyan:2012gt}. The same kinematic cuts $|\eta_\gamma|<1.44$, $|\eta_{\rm jet}|<1.6$ and $|\phi_\gamma-\phi_{\rm jet}|>(7/8)\pi$ are imposed as in the CMS experiments. A lower threshold of the transverse momentum of reconstructed jets is set at $p_\mathrm{T}^{\rm jet}>30$~GeV. To compare to the CMS data, the \textsc{Lbt} results in both Pb+Pb and p+p collisions are convoluted with a Gaussian smearing with the same jet energy resolution in each centrality class of Pb+Pb collisions as in the CMS data. A complete subtraction of the uncorrelated underlying event background is assumed for the \textsc{Lbt} results. 
In the $\gamma$-jet events, the transverse momentum of the triggered photon is not completely balanced by the jet because of the initial-state gluon bremsstrahlung. However, the peak position of the jet $p_\mathrm{T}^{\rm jet}$ distribution in p+p collisions reflects the average initial $p_\mathrm{T}^{\rm jet}$ value of the $\gamma$-jet. From Fig.~\ref{ptjet} one can clearly see the shift of the peak position to a smaller $p_\mathrm{T}^{\rm jet}$ value due to jet energy loss in Pb+Pb collisions. To illustrate this shift in detail, we show in Fig.~\ref{ptloss} the average transverse momentum loss of the leading jet in $\gamma$-jet events in two centrality classes of Pb+Pb collisions as a function of the initial transverse momentum of the leading jet in p+p collisions. As one can see, including recoil and ``negative" partons from medium response in jet reconstruction reduces the net jet energy loss. The jet transverse momentum loss increases with the initial jet transverse momentum and the dependence is slightly weaker than a linear increase due to a combined effect of the jet energy loss for a given jet flavor (quark or gluon) and the transverse momentum dependence of the initial jet flavor composition. The initial quark fraction increases with transverse momentum in $\gamma$-jets and the energy loss of a gluon jet is found to be about 1.5 times bigger than that of a quark for a jet-cone size $R=0.3$ in this range of $p_\mathrm{T}^{\rm jet}$~\cite{He:2018xjv}. By shifting the transverse momentum distribution of $\gamma$-jet in p+p collisions, one can approximately describe the modification of the $\gamma$-jet spectra in Pb+Pb collisions~\cite{Luo:2018pto}. 
\begin{figure}[tbp] \centering \includegraphics[width=8.0cm]{Luo-gamma-jet-ptloss} \\ \caption{(Color online) The average transverse momentum loss of the leading $\gamma$-jet in Pb+Pb collisions at $\sqrt{s}=2.76$~TeV calculated within \textsc{Lbt} as a function of the initial jet transverse momentum with (solid) and without (dashed) contributions from recoil and ``negative" partons. The figure is from Ref.~\cite{Luo:2018pto}.} \label{ptloss} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=6.9cm]{Zhang-Zjet-dphi} \caption{(Color Online) The $Z^0$-jet correlation in the azimuthal angle $\Delta \phi_{\rm jZ}$ from \textsc{Lbt} simulations in p+p (blue) and Pb+Pb collisions (red) at $\sqrt{s} = 5.02$~TeV as compared to the CMS data~\cite{Sirunyan:2017jic}. The dotted (dash-dotted) lines show the contributions from $Z^0+1$ jets ($Z^0+(\ge 2)$ jets). The figure is from Ref.~\cite{Zhang:2018urd}.} \label{zjet_deltaphi} \end{figure} In Fig.~\ref{ptjet}, the \textsc{Lbt} results for the associated leading jet are shown (dashed lines) to deviate from the inclusive associated jet yields at small values of $p_\mathrm{T}^{\rm jet}$. The difference at low $p_\mathrm{T}^{\rm jet}<p_\mathrm{T}^\gamma$ is mainly caused by secondary jets associated with the triggered photon. Energy loss and suppression of the sub-leading jets lead to medium modification of the $\gamma$-jet correlation at lower $p_\mathrm{T}^{\rm jet}$ in addition to the modification caused by energy loss of the leading jets in $\gamma$-jet events. Multiple jets are produced from the large angle radiative processes in the initial hard processes. Their contributions can become significant in the region of large momentum imbalance $p_\mathrm{T}^{\rm jet}<p_\mathrm{T}^\gamma$ and even become dominant at large azimuthal angle difference $|\phi^\gamma-\phi^{\rm jet}-\pi|$. 
One can see this from the $Z^0$-jet correlation in the azimuthal angle $\Delta \phi_{\rm jZ}$ in p+p and Pb+Pb collisions at $\sqrt{s} = 5.02$~TeV as compared to the CMS data~\cite{Sirunyan:2017jic} in Fig.~\ref{zjet_deltaphi}. Here, the initial $Z^0$-jet showers in p+p collisions are simulated with the \textsc{Sherpa} Monte Carlo program~\cite{Gleisberg:2008ta} that combines the NLO pQCD with resummation of a matched parton shower. This model has a much better agreement with the experimental data on the large angle $Z^0$-jet correlation~\cite{Zhang:2018urd} in p+p collisions. As one can see, contributions from $Z^0$+($\ge 2$) jets from the NLO processes are much broader than that of $Z^0$+1 jets and dominate in the large angle $|\Delta \phi_{\rm jZ}-\pi| $ region. The $Z^0$+1 jets contribute mostly in the small angle $|\Delta \phi_{\rm jZ}-\pi|$ region where soft/collinear radiation from parton shower dominates. Jet quenching has negligible effects on the azimuthal correlation contributed by $Z^0$+1 jets because of the trigger bias. However, it suppresses the contribution from $Z^0$+($\ge 2$) jets and therefore leads to the suppression of the $Z^0$-jet correlation at large angle $|\Delta \phi_{\rm jZ}-\pi|$. Similar effects are also seen in the $\gamma$-jet correlation in the azimuthal angle~\cite{Luo:2018pto}. \section{Jet substructures} \label{sec:jet_substructures} \subsection{Jet fragmentation function} \label{subsec:jetFragmentation} \begin{figure*}[tbp] \centering \includegraphics[width=0.9\textwidth]{JETSCAPE-jetFF} \caption{(Color online) Nuclear modification of the jet fragmentation function from (a) \textsc{Matter+Lbt}, (b) \textsc{Matter+Martini} and (c) \textsc{Matter}+AdS/CFT calculations, compared to experimental data. The solid and dashed lines in (a) and (b) are with and without medium effects in \textsc{Matter} respectively, both with separation scale $Q_0=2$~GeV. Different separation scales in \textsc{Matter}+AdS/CFT are compared in (c). 
The figure is from Ref.~\cite{Tachibana:2018yae}.} \label{fig:JETSCAPE-jetFF} \end{figure*} Recent developments on experimental and computational techniques allow us to measure/calculate not only the total energy loss of full jets, but also how energy-momentum is distributed within jets. The latter is known as the jet substructure (or inner structure), which helps place more stringent constraints on our knowledge of parton-medium interactions, especially how jet-induced medium excitation modifies the soft hadron distribution within jets. The first observable of jet substructure is the jet fragmentation function~\cite{Chatrchyan:2012gw,Chatrchyan:2014ava,Aaboud:2017bzv,Aaboud:2018hpb}: \begin{equation} \label{eq:defJetFF} D(z)\equiv\frac{1}{N_\mathrm{jet}}\frac{dN_h}{dz}, \end{equation} which quantifies the hadron number ($N_h$) distribution as a function of their longitudinal momentum fraction ($z$) with respect to the jet ($z={\vec p}_\mathrm{T}^{\;h}\cdot {\vec p}_\mathrm{T}^\mathrm{\; jet}/|{\vec p}_\mathrm{T}^\mathrm{\; jet}|^2$), normalized to one jet. Note that the fragmentation function defined this way should not be confused with that for hadron distributions from the fragmentation of a single parton as defined in Eqs.~(\ref{eq:qfrag}) and (\ref{eq:gfrag}) and used to calculate single inclusive hadron spectra from hard processes [see e.g., Eqs.~(\ref{eq:dis0}) and (\ref{eq:xsectionFactor})]. Shown in Fig.~\ref{fig:JETSCAPE-jetFF} is a comparison between model calculations and experimental data on the nuclear modification factor of the jet fragmentation function within the \textsc{Jetscape} framework~\cite{Tachibana:2018yae}. Here, charged particles (tracks) with $p_\mathrm{T}^\mathrm{trk}$ are used to calculate the fragmentation function of jets with $R=0.4$, $100<p_\mathrm{T}^\mathrm{jet}<398$~GeV and $|y^\mathrm{jet}|<2.1$ in central Pb+Pb collisions at 2.76~TeV. 
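The momentum fraction in Eq.~(\ref{eq:defJetFF}) is the projection of the hadron transverse momentum onto the jet axis. A minimal sketch computing $z$ for individual hadrons and filling $D(z)$ normalized per jet (the jet and hadron vectors below are toy inputs, not data):

```python
import numpy as np

def z_frac(pt_h, jet_pt):
    """z = (vec p_T^h . vec p_T^jet) / |vec p_T^jet|^2 for 2D transverse vectors."""
    jet_pt = np.asarray(jet_pt, dtype=float)
    return float(np.dot(pt_h, jet_pt) / np.dot(jet_pt, jet_pt))

def frag_function(jets, bins):
    """D(z) = (1/N_jet) dN_h/dz from a list of (jet vector, [hadron vectors])."""
    z_values = [z_frac(h, jet) for jet, hadrons in jets for h in hadrons]
    counts, edges = np.histogram(z_values, bins=bins)
    return counts / (len(jets) * np.diff(edges))

jet = (100.0, 0.0)
hadrons = [(50.0, 0.0), (30.0, 5.0), (10.0, -3.0)]
z_leading = z_frac(hadrons[0], jet)   # 0.5: this hadron carries half the jet p_T
D = frag_function([(jet, hadrons)], bins=np.linspace(0.0, 1.0, 11))
```

Integrating $D(z)$ over $z$ recovers the number of hadrons per jet, which is how the per-jet normalization in Eq.~(\ref{eq:defJetFF}) is checked in practice.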
A general feature of this nuclear modification factor is its enhancement at both small and large $z$ values with a suppression in the intermediate region of $z$. The enhancement near $z\rightarrow 1$ results from the different energy loss mechanisms between full jets and leading partons inside the jets after their transport through the QGP. For a full jet with a given cone size $R$, its energy loss mainly arises from losing soft components either through transport outside the jet-cone or ``absorption" of soft partons by the medium. The absorption occurs when soft hadrons are excluded from jets either through background subtraction or $p_\mathrm{T}$ cuts on particle tracks in the jet reconstruction. Consequently, even after losing energy, a full jet in A+A collisions can contain more leading partons at large $z$ than a jet in p+p collisions with the same $p_\mathrm{T}^{\rm jet}$. Of course, this enhancement at large $z$ depends on both the jet-cone size and the lower $p_\mathrm{T}^\mathrm{trk}$ cut for soft particles that are used for jet reconstruction. This should be further investigated with more detailed calculations and measurements. As we will show later, such an enhancement at large $z$ disappears and is replaced by a suppression if one defines the momentum fraction by the transverse momentum of a triggered particle (e.g. $\gamma/Z^0$), which is approximately the initial jet energy before jet propagation through the QGP medium. Meanwhile, the medium-induced parton splitting and jet-induced medium response generate a large number of low $p_\mathrm{T}$ hadrons, leading to the low $z$ enhancement of the jet fragmentation function. Due to energy conservation, enhancements at both large and small $z$ must be compensated by a depletion in the intermediate region of $z$. Different model implementations of jet transport in QCD are compared in Fig.~\ref{fig:JETSCAPE-jetFF}.
In Figs.~\ref{fig:JETSCAPE-jetFF}~(a) and (b), results with and without medium modification of jets at high virtualities are compared, where the separation scale between the high-virtuality \textsc{Matter} shower and the low-virtuality \textsc{Lbt}/\textsc{Martini} transport is set as $Q_0=2$~GeV. The medium-modified splittings in \textsc{Matter} (solid lines) lead to both suppression of leading hadrons at large $z$ and enhancement of soft hadrons at small $z$ relative to the results with only the vacuum splittings in \textsc{Matter} (dashed lines). The medium modification in the \textsc{Matter} phase has a larger impact on the hard cores (leading partons) of jets than on the soft coronas. Including this modification is more effective in increasing the energy loss of leading partons than that of the full jets. It therefore suppresses the enhancement of the jet fragmentation function at large $z$. For the same reason, reducing $Q_0$ from 2 to 1~GeV in Fig.~\ref{fig:JETSCAPE-jetFF}~(c) increases the \textsc{Matter} contribution in this multi-scale jet transport and hence also suppresses the large $z$ enhancement. Comparing results from the three different combinations of models within the \textsc{Jetscape} framework in Figs.~\ref{fig:JETSCAPE-jetFF}~(a), (b) and (c), one can see a much stronger low-$z$ enhancement of the jet fragmentation function from \textsc{Matter}+\textsc{Lbt} simulations in Fig.~\ref{fig:JETSCAPE-jetFF}~(a) because all recoil partons and their transport are fully tracked within the \textsc{Lbt} model to account for jet-induced medium excitation. These are not included in the results from \textsc{Matter}+\textsc{Martini} and \textsc{Matter}+AdS/CFT in Figs.~\ref{fig:JETSCAPE-jetFF}~(b) and (c). Similar effects from medium response on soft hadron production have also been found in Ref.~\cite{Casalderrey-Solana:2016jvj} within the \textsc{Hybrid} model and Ref.~\cite{KunnawalkamElayavalli:2017hxo} within the \textsc{Jewel} model.
\begin{figure}[t] \centering \includegraphics[scale=0.36]{Chen-dndxi_jet_all} \caption{(Color online) The $\gamma$-jet fragmentation function as a function of $\xi^{\rm jet}$ in Pb+Pb collisions at $\sqrt{s}=5.02$~TeV for different centrality classes (upper panel) and the corresponding ratio of the Pb+Pb to p+p results (lower panel) for $p_\mathrm{T}^\gamma>60$~GeV, $p_\mathrm{T}^{\rm jet}>30$~GeV, $|\phi^{\rm jet}-\phi^\gamma|<7\pi/8$ and jet cone-size $R=0.3$. The figure is from Ref.~\cite{Chen:2018azc}.} \label{fig-ratio-jet} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.36]{Chen-dndxi_gamma_all} \caption{(Color online) The $\gamma$-jet fragmentation function as a function of $\xi^{\gamma}$ in Pb+Pb collisions at $\sqrt{s}=5.02$~TeV for different centrality classes (upper panel) and the corresponding ratio of the Pb+Pb to p+p results (lower panel) for $p_\mathrm{T}^\gamma>60$~GeV, $p_\mathrm{T}^{\rm jet}>30$~GeV, $|\phi^{\rm jet}-\phi^\gamma|<7\pi/8$ and jet cone-size $R=0.3$. The figure is from Ref.~\cite{Chen:2018azc}.} \label{fig-ratio-gamma} \end{figure} The jet fragmentation function has also been studied within many other theoretical approaches. For instance, a simplified energy loss model~\cite{Spousta:2015fca} that shifts the jet $p_\mathrm{T}$ in A+A collisions can also capture certain features of the in-medium jet fragmentation function at high $z$, though it fails at low $z$. Using the \textsc{Yajem} model~\cite{Perez-Ramos:2014mna}, one finds that the data on the jet fragmentation function prefer the \textsc{Yajem-rad} module to the \textsc{Yajem-fmed} module of jet energy loss -- the former assumes virtuality enhancement of parton splitting inside the QGP while the latter assumes a simple parametrization of the medium-modified splitting function. Within the \textsc{Pyquen} model~\cite{Lokhtin:2014vda}, the experimental data are found to favor wide angle over small angle radiation in parton showers.
The jet fragmentation function is also suggested as a valuable tool to extract the color (de)coherence effects in jet-QGP interactions in Ref.~\cite{Mehtar-Tani:2014yea}. Apart from single inclusive jets, the fragmentation function has also been explored for $\gamma$-triggered jets~\cite{Aaboud:2019oac}, with which one can test sophisticated modeling of both parton energy loss and jet-induced medium excitation to fully understand the $z$-dependence of the nuclear modification factor. Shown in Fig.~\ref{fig-ratio-jet} are the hadron yields of $\gamma$-jets as a function of $\xi^{\rm jet}=\ln(|{\vec p}_\mathrm{T}^{\rm\; jet}|^2/{\vec p}_\mathrm{T}^{\rm\; jet}\cdot{\vec p}_\mathrm{T}^{\;h})$ and their nuclear modification factors in Pb+Pb collisions with different centralities at $\sqrt{s}=5.02$~TeV from \textsc{CoLbt-Hydro} simulations~\cite{Chen:2018azc} as compared to the CMS data~\cite{Sirunyan:2018qec}. One can see that there is a significant enhancement of the hadron yields at large $\xi^{\rm jet}$ (or small momentum fraction) due to the contribution from medium response. However, there is little change of hadron yields at small $\xi^{\rm jet}$ (or large momentum fraction). This is because of the trigger bias in the selection of jets with fixed values of $p_\mathrm{T}^{\rm jet}$, as discussed above. One can also calculate the hadron yields per jet as a function of $\xi^\gamma=\ln(|{\vec p}_\mathrm{T}^{\;\gamma}|^2/{\vec p}_\mathrm{T}^{\;\gamma}\cdot{\vec p}_\mathrm{T}^{\;h})$ without fixing the transverse momentum of jets as shown in Fig.~\ref{fig-ratio-gamma}. In this case, there is a suppression of the hard (leading) hadrons at small $\xi^\gamma$ (or large momentum fraction) due to jet quenching and jet energy loss, as well as an enhancement of soft hadrons at large $\xi^\gamma$ (or small momentum fraction) due to medium response.
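The variables $\xi^{\rm jet}$ and $\xi^{\gamma}$ above are essentially $\ln(1/z)$, differing only in the reference transverse-momentum vector (jet or photon). A minimal sketch, assuming simple 2-vector inputs purely for illustration:

```python
import math


def xi(pt_ref, pt_h):
    """xi = ln(|pT_ref|^2 / (pT_ref . pT_h)).

    pt_ref is the jet (for xi^jet) or the photon (for xi^gamma)
    transverse-momentum 2-vector (px, py); pt_h is the hadron's.
    """
    dot = pt_ref[0] * pt_h[0] + pt_ref[1] * pt_h[1]
    norm2 = pt_ref[0] ** 2 + pt_ref[1] ** 2
    return math.log(norm2 / dot)
```

A hadron collinear with the reference axis at half its $p_\mathrm{T}$ has $\xi=\ln 2$, i.e. large $\xi$ corresponds to small momentum fraction.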
The enhancement is much bigger at large $\xi^\gamma$ than at large $\xi^{\rm jet}$ due to the fluctuation of the initial jet energy when $p_\mathrm{T}^\gamma$ is fixed. Note that it has been suggested in Ref.~\cite{Zhang:2015trf} that our current poor knowledge of the hadronization process limits the precision of our description of the hadron number within jets, and thus also the fragmentation function. As a result, sub-jet structures are proposed, which measure the distribution of small-cone sub-jets, instead of hadrons, within large-cone jets. These new observables are expected to provide a larger discriminating power between different theoretical models and lead to the discovery of new features in medium modification of jet structures. \subsection{Jet shape} \label{subsec:jetShape} While the jet fragmentation function measures the longitudinal momentum distribution within a full jet, a complementary observable is the jet shape~\cite{Chatrchyan:2013kwa,Khachatryan:2016tfj,Sirunyan:2018ncy}. It measures the momentum distribution transverse to the jet axis and should be sensitive to jet-induced medium response. It is also known as the jet energy density profile defined as \begin{equation} \label{eq:defJetShape} \rho(r)\equiv\frac{1}{\Delta r}\frac{1}{N_\mathrm{jet}}\sum_\mathrm{jet}\frac{p_\mathrm{T}^\mathrm{jet}(r-\Delta r/2,r+\Delta r/2)}{p_\mathrm{T}^\mathrm{jet}(0,R)}, \end{equation} where $r=\sqrt{(\eta-\eta_\mathrm{jet})^2+(\phi-\phi_\mathrm{jet})^2}$ is the radius to the center of the jet located at $(\eta_\mathrm{jet},\phi_\mathrm{jet})$, and \begin{equation} \label{eq:defJetShapePT} p_\mathrm{T}^\mathrm{jet}(r_1,r_2)=\sum_{\mathrm{trk}\in(r_1, r_2)} p_\mathrm{T}^\mathrm{trk} \end{equation} represents the summed energy of particle tracks within the circular annulus between $(r_1,r_2)$. The jet profile is normalized by the total energy within $(0,R)$ of each jet, with $R$ being the cone size utilized to reconstruct the jet.
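The annulus sum of Eq.~(\ref{eq:defJetShape}) can be sketched as follows; the track/jet input layout is a hypothetical convention for this sketch only, and the $\phi$ wrap-around at $\pm\pi$ is ignored for brevity:

```python
import math


def jet_shape(jets, R=0.4, dr=0.05):
    """rho(r): average fraction of in-cone jet pT in annuli of width dr.

    `jets` is a list of (eta_jet, phi_jet, tracks), each track being
    (pt, eta, phi) -- an illustrative input layout, not a standard API.
    """
    n_bins = int(round(R / dr))
    rho = [0.0] * n_bins
    for eta_j, phi_j, tracks in jets:
        # normalization: summed track pT inside the full cone (0, R)
        pt_in_cone = sum(pt for pt, eta, phi in tracks
                         if math.hypot(eta - eta_j, phi - phi_j) < R)
        if pt_in_cone == 0.0:
            continue
        for pt, eta, phi in tracks:
            r = math.hypot(eta - eta_j, phi - phi_j)
            if r < R:
                rho[int(r // dr)] += pt / pt_in_cone
    # average over jets and divide by the annulus width dr
    n_jet = len(jets)
    return [x / (n_jet * dr) for x in rho]
```

By construction $\sum_i \rho(r_i)\,\Delta r = 1$ for each jet, which is the self-normalization property discussed below.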
The quantity in Eq.~(\ref{eq:defJetShape}) is also normalized per jet. Note that the study of the jet shape can be extended to the $r>R$ region. \begin{figure}[tbp] \centering \includegraphics[width=0.43\textwidth]{Tachibana-dijet_shape_leading}\\ \vspace{-0.3cm} \includegraphics[width=0.43\textwidth]{Tachibana-dijet_shape_subleading} \caption{(Color online) Jet shape of (a) the leading and (b) subleading jets in dijet events in 2.76~TeV p+p and central Pb+Pb collisions, compared between contributions from jet shower and jet-induced medium excitation in Pb+Pb collisions. The figures are from Ref.~\cite{Tachibana:2017syd}.} \label{fig:Tachibana-dijet_shape} \end{figure} Shown in Fig.~\ref{fig:Tachibana-dijet_shape} are jet shapes of both leading and subleading jets in dijet events from a coupled jet-fluid model~\cite{Tachibana:2017syd}, in which elastic and inelastic energy loss of jets is calculated semi-analytically using transport (rate) equations and the lost energy is deposited into the QGP via a source term in a (3+1)-D hydrodynamic model simulation of the bulk medium. One observes that at small $r$, the energy density profile is dominated by energetic partons initiated from jets (labeled as ``shower''). At large $r$, however, soft hadrons produced from the jet-induced hydrodynamic response in the QGP (labeled as ``hydro'') dominate the energy density profile. The contributions from jet showers and QGP response to the jet shape cross around $r=0.4$, just outside the jet cone. To better illustrate the effects of jet-induced medium response on the nuclear modification of the jet shape, the ratio between jet profiles in Pb+Pb and p+p collisions is shown in Fig.~\ref{fig:Tachibana-nm_jet_shape}. One observes a significant enhancement of this nuclear modification factor at large $r$ after including contributions from jet-induced medium excitation.
Since the jet shape defined in Eq.~(\ref{eq:defJetShape}) is a self-normalized quantity, an enhancement at large $r$ will be accompanied by a suppression at smaller $r$. This happens at intermediate $r$ inside the jet-cone since the jet shape at around $r=0$ is dominated by leading hadrons that actually should be enhanced slightly according to what we have seen in the previous subsection on the jet fragmentation function. Similar effects of medium response on the jet shape have also been found in Refs.~\cite{Casalderrey-Solana:2016jvj,KunnawalkamElayavalli:2017hxo,Tachibana:2018yae,Park:2018acg} with different treatments of jet-induced medium excitation as discussed in Sec.~\ref{sec:medium_response}. It is worth noting that even without jet-induced medium response, one can still observe an enhancement of the jet shape at large $r$ since jet-medium interactions, both elastic scatterings and medium-induced splittings, broaden the energy distribution towards larger $r$. In fact, as shown in Fig.~\ref{fig:Tachibana-nm_jet_shape}, current experimental data within $r<0.3$ are incapable of precisely constraining the effects of medium response, and several calculations without medium response~\cite{Ramos:2014mba,Lokhtin:2014vda,Chien:2015hda} can provide reasonable descriptions of the data as well. Much stronger effects of medium response are expected at $r>R$ where the energy density profile is dominated by soft hadron production from jet-induced medium excitation. Thus, more sophisticated comparisons between data and theoretical calculations in this region are required in the future. One bottleneck in current model calculations is the lack of a reliable event generator for p+p collisions. As shown in Ref.~\cite{Kumar:2019bvr}, while \textsc{Pythia}-based event generators produce satisfactory $p_\mathrm{T}$ spectra of inclusive hadrons and jets, clear deviations from experimental data are observed in jet substructures, especially in the kinematic regions dominated by soft hadrons.
This problem needs to be solved before one can expect more accurate predictions of the nuclear modification of the jet shape. \begin{figure}[tbp] \centering \includegraphics[width=0.43\textwidth]{Tachibana-nm_jet_shape} \caption{(Color online) The nuclear modification factor of the single inclusive jet shape in central Pb+Pb collisions at 2.76~TeV. The figure is from Ref.~\cite{Tachibana:2017syd}.} \label{fig:Tachibana-nm_jet_shape} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.43\textwidth]{Luo-rhoAApp0} \caption{(Color online) The nuclear modification factor of the $\gamma$-triggered jet shape in central Pb+Pb collisions at 2.76~TeV, comparing results with and without medium response, and in central Pb+Pb collisions at 5.02~TeV compared to the CMS data. The figure is from Refs.~\cite{Luo:2018pto,Luo:private}.} \label{fig:Luo_jet_shape} \end{figure} The nuclear modification factor of the $\gamma$-triggered jet shape can also be calculated as shown in Fig.~\ref{fig:Luo_jet_shape} from the \textsc{Lbt} model simulations~\cite{Luo:2018pto} in which effects of medium response are modeled through propagation of recoil and ``negative'' partons. Similar to the single inclusive jet shape discussed above, one can see that jet-medium interactions transport energy towards the outer layer of the jet cone and result in an enhancement of this nuclear modification factor at large $r$. Comparing the results for 2.76~TeV Pb+Pb collisions with and without medium response, it is clear that the medium response significantly increases the energy density near the edge of the jet cone while it has negligible impact on the hard core ($r<0.05$) of the jet. The \textsc{Lbt} result for 5.02~TeV Pb+Pb collisions, including contributions from medium response, is consistent with the CMS data~\cite{Sirunyan:2018ncy}.
Another interesting observation is that the small enhancement of the jet shape at very small $r$ observed for single inclusive jets at $p_\mathrm{T}^\mathrm{jet}>100$~GeV~\cite{Chatrchyan:2013kwa} does not exist for $\gamma$-triggered jets at $p_\mathrm{T}^\mathrm{jet}>30$~GeV~\cite{Sirunyan:2018ncy}. This was first expected to result from the different broadening of quark and gluon jets: the hard core of a quark jet (which dominates $\gamma$-triggered jets) is less broadened inside the QGP than that of a gluon jet (which contributes most to single inclusive jets). However, this has been shown to be incorrect in recent studies~\cite{Chang:2016gjp,Chang:2019sae}. Although the shape of a quark jet is narrower than that of a gluon jet, the shape of their mixture is not necessarily in between due to its definition [Eq.~(\ref{eq:defJetShape})]: it is self-normalized to each jet event separately. Rather than the jet flavor (quark vs. gluon), the different $p_\mathrm{T}$ regimes are found to be the main reason for the different jet shapes. For both single inclusive and $\gamma$-triggered jets, a small enhancement of the jet shape exists at small $r$ for high $p_\mathrm{T}$ jets but not for low $p_\mathrm{T}$ jets. This can be easily tested with future measurements of single inclusive jets at lower $p_\mathrm{T}$ or $\gamma$-jets at higher $p_\mathrm{T}$. \begin{figure}[tbp] \centering \includegraphics[width=0.43\textwidth]{Luo-rhopp}\\ \vspace{-0.4cm} \includegraphics[width=0.43\textwidth]{Luo-rhoAA} \caption{(Color online) Contributions to the jet shape of $\gamma$-jets in (a) p+p and (b) 0-30\% Pb+Pb collisions at 2.76~TeV from partons with different $p_\mathrm{T}$. The solid circles in the lower panel show the total contribution from medium response.
The figures are from Ref.~\cite{Luo:2018pto}.} \label{fig:Luo-energyFlow} \end{figure} For a more detailed investigation of the jet shape, one may separate contributions from particles within different $p_\mathrm{T}$ bins~\cite{Khachatryan:2016tfj}, as illustrated in Fig.~\ref{fig:Luo-energyFlow} from the \textsc{Lbt} model calculation for $\gamma$-jets~\cite{Luo:2018pto}. Comparing p+p and central Pb+Pb collisions, we observe a suppression of high $p_\mathrm{T}$ particles at large $r$ and a significant enhancement of low $p_\mathrm{T}$ particles over the full $r$ range. This comparison demonstrates how the jet energy is transported from energetic partons to soft ones via scatterings and medium-induced splittings. This is referred to as the ``energy flow'' within jets. In the lower panel, the total contribution from jet-induced medium excitation, shown as solid circles, constitutes a significant amount of the energy density, especially at large $r$. As a result, medium response is crucial for understanding how the jet energy loss is re-distributed when jets propagate through the QGP. A similar observable for studying this energy re-distribution is known as the ``missing $p_\mathrm{T}$'' in dijet events~\cite{Khachatryan:2015lha}. One may study the imbalance of the summed particle $p_\mathrm{T}$ between the leading jet hemisphere and the subleading jet hemisphere and learn how this imbalanced (missing) $p_\mathrm{T}$ is recovered with the increase of the cone size around the dijet axis. As suggested in Ref.~\cite{Hulcher:2017cpt}, including jet-induced medium excitation is essential for recovering the lost energy from dijet systems and obtaining balanced transverse momenta between the two hemispheres at large cone size.
\subsection{Jet splitting function} \label{subsec:jetSplitting} As discussed in Secs.~\ref{sec:theory} and~\ref{sec:models}, parton energy loss and nuclear modification of jets are closely related to the medium-induced parton splitting functions. None of the observables discussed so far provides direct constraints on the splitting function. However, with the introduction of the soft drop jet grooming algorithm~\cite{Larkoski:2014wba,Larkoski:2015lea,Dasgupta:2013ihk,Frye:2016aiz}, we are now able to eliminate soft hadrons at wide angles and identify the hard splitting within a groomed jet, enabling a direct study of the splitting function~\cite{Sirunyan:2017bsd,Kauder:2017cvz,D.CaffarrifortheALICE:2017xsw}. Contributions from hadronization and underlying events are suppressed by the soft drop procedure, and a more direct comparison between data and perturbative QCD calculations becomes possible. \begin{figure}[tbp] \centering \includegraphics[width=0.43\textwidth]{Chang-Pzg-CMS} \caption{(Color online) Nuclear modification of the groomed jet $z_g$ distribution in 0-10\% Pb+Pb collisions at 5.02~TeV, compared for different jet $p_\mathrm{T}$ intervals. The figure is from Ref.~\cite{Chang:2017gkt}.} \label{fig:Chang-Pzg-CMS} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.45\textwidth]{Chang-Pzg-STAR} \caption{(Color online) Nuclear modification of the groomed jet $z_g$ distribution in 0-20\% Au+Au collisions at 200~GeV, compared for different jet $p_\mathrm{T}$ intervals.
The figure is from Ref.~\cite{Chang:2017gkt}.} \label{fig:Chang-Pzg-STAR} \end{figure} In the soft drop procedure, as adopted by the CMS and STAR Collaborations for analyses of heavy-ion collisions, a full jet constructed using radius $R$ via the anti-$k_\mathrm{T}$ algorithm is first re-clustered using the Cambridge-Aachen (C/A) algorithm and then de-clustered in the reverse order by dropping the softer branch until two hard branches are found to satisfy the following condition: \begin{equation} \label{eq:defSoftDrop} \frac{\min({p_{\mathrm{T}1},p_{\mathrm{T}2}})}{p_{\mathrm{T}1}+p_{\mathrm{T}2}} \equiv z_g > z_\mathrm{cut}\left(\frac{\Delta R}{R}\right)^{\beta}, \end{equation} where $p_\mathrm{T1}$ and $p_\mathrm{T2}$ are the transverse momenta of the two subjets at a particular step of declustering, $\Delta R$ is their angular separation, and $z_\mathrm{cut}$ is the lower cutoff of the momentum sharing $z_g$. Both CMS~\cite{Sirunyan:2017bsd} and STAR~\cite{Kauder:2017cvz} measurements take $z_\mathrm{cut}=0.1$ and $\beta=0$; the former also requires $\Delta R\ge 0.1$ while the latter does not. With this setup, the self-normalized momentum sharing distribution \begin{equation} \label{eq:defPzg} p(z_g)\equiv\frac{1}{N_\mathrm{evt}}\frac{dN_\mathrm{evt}}{dz_g} \end{equation} is used to characterize the parton splitting function, with $N_\mathrm{evt}$ being the number of jet events in which two qualified hard subjets are found. \begin{figure}[tbp] \centering \includegraphics[width=0.43\textwidth]{Chang-Pzg-coherence} \caption{(Color online) Effects of coherent vs. decoherent jet energy loss on the nuclear modification factor of the groomed jet $z_g$ distribution in 0-10\% Pb+Pb collisions at 5.02~TeV.
The figure is from Ref.~\cite{Chang:2017gkt}.} \label{fig:Chang-Pzg-coherence} \end{figure} \begin{figure*}[tbp] \centering \includegraphics[width=0.65\textwidth]{Park-MARTINI-mass} \includegraphics[width=0.26\textwidth]{Park-MARTINI-mass-pT} \caption{(Color online) (a)-(c) The jet mass distribution in 0-10\% Pb+Pb collisions at 2.76~TeV, comparing results with and without recoil contributions for different jet $p_\mathrm{T}$ intervals. (d) The average jet mass as a function of the jet $p_\mathrm{T}$. The figures are from Ref.~\cite{Park:2018acg}.} \label{fig:Park-MARTINI-mass} \end{figure*} This momentum sharing distribution has been investigated with different theoretical approaches to jet quenching in heavy-ion collisions~\cite{Chien:2016led,Mehtar-Tani:2016aco,KunnawalkamElayavalli:2017hxo,Chang:2017gkt,Milhano:2017nzm,Li:2017wwc,Caucal:2019uvr}. For example, as shown in Figs.~\ref{fig:Chang-Pzg-CMS} and~\ref{fig:Chang-Pzg-STAR}, a simultaneous description of the nuclear modification factor of $p(z_g)$ can be obtained based on the high-twist energy loss formalism~\cite{Chang:2017gkt}. One interesting observation is that the nuclear modification of the splitting function is sizable for high-$p_\mathrm{T}$ jets at the LHC and increases as the jet $p_\mathrm{T}$ decreases, but becomes rather weak again at the lower jet $p_\mathrm{T}$ probed at RHIC. Apart from the different $\Delta R$ cuts in the implementations of soft drop in the CMS and STAR measurements, the authors of Ref.~\cite{Chang:2017gkt} find a non-monotonic dependence of this nuclear modification factor on the jet $p_\mathrm{T}$ even when they use the same $\Delta R$ cut. Considering that the extracted splitting function has contributions from both the vacuum and medium-induced splitting [shown by Eq.~(\ref{eq:splitPtot})], one expects vacuum splitting to become more dominant as the jet $p_\mathrm{T}$ increases, and therefore a weaker medium modification as observed in Fig.~\ref{fig:Chang-Pzg-CMS}.
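The iterative declustering behind Eq.~(\ref{eq:defSoftDrop}) can be sketched on a toy clustering tree. In a real analysis the jet is reclustered with the C/A algorithm (e.g. in FastJet) before declustering, so the minimal tree layout below is purely an illustrative assumption:

```python
def soft_drop_zg(node, z_cut=0.1, beta=0.0, R=0.4):
    """Decluster a toy C/A tree, dropping the softer branch until the
    soft drop condition z_g > z_cut * (dR/R)^beta is met.

    A node is either a leaf pT (float) or a tuple (left, right, dR)
    holding the two subtrees and their angular separation -- a
    hypothetical minimal representation for this sketch.
    Returns z_g of the first qualifying splitting, or None.
    """
    def total_pt(n):
        return n if isinstance(n, float) else total_pt(n[0]) + total_pt(n[1])

    while not isinstance(node, float):
        left, right, dR = node
        pt1, pt2 = total_pt(left), total_pt(right)
        zg = min(pt1, pt2) / (pt1 + pt2)
        if zg > z_cut * (dR / R) ** beta:
            return zg
        # groom away the softer branch and keep declustering the harder one
        node = left if pt1 >= pt2 else right
    return None  # no qualifying hard splitting found
```

With $z_\mathrm{cut}=0.1$ and $\beta=0$ (the CMS/STAR setup quoted above), a soft 5~GeV branch against a 90~GeV branch fails the condition and is dropped, while a 60/30 split passes with $z_g=1/3$.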
On the other hand, since the medium-induced splitting $P_\mathrm{med}(z)$ and the vacuum one $P_\mathrm{vac}(z)$ share a similar $1/z$ dependence in the low energy limit, the medium modification of the observed splitting function vanishes as the jet $p_\mathrm{T}$ approaches this limit. Since $p(z_g)$ is defined as a self-normalized quantity, its nuclear modification effects appear to decrease with decreasing jet $p_\mathrm{T}$ at RHIC, as seen in Fig.~\ref{fig:Chang-Pzg-STAR}. The competition between these two effects results in the non-monotonic $p_\mathrm{T}$ dependence as observed in the current data. Note that both effects are seen in the calculations within the high-twist energy loss formalism and could be model dependent. Therefore, this non-monotonic behavior should be further tested with high $p_\mathrm{T}$ jets at RHIC or low $p_\mathrm{T}$ jets at the LHC. The nuclear modification of $p(z_g)$ has also been proposed as a possible probe of the color (de)coherence effects in jet-medium interactions~\cite{Mehtar-Tani:2016aco,Chang:2017gkt} as illustrated in Fig.~\ref{fig:Chang-Pzg-coherence}, where different assumptions of nuclear modification are compared with the CMS data. In this study, ``vac+med, CEL'' denotes medium-modified parton splitting functions with coherent energy loss for the two subjets; ``vac+med, IEL'' denotes the medium-modified splitting function with incoherent (or independent) energy loss for the two subjets within a jet; and ``vac, IEL'' denotes vacuum-like splittings of partons followed by incoherent energy loss of subjets. Figure~\ref{fig:Chang-Pzg-coherence} shows that only results with the ``vac+med, CEL'' splitting can describe the experimental data. Within the high-twist formalism~\cite{Chang:2017gkt}, partons lose a smaller fractional energy at higher energy (the energy dependence of the energy loss is less than linear) after traversing the QGP.
As a result, independent energy loss of the two subjets in a given jet event leads to a larger $z_g$ fraction compared to coherent energy loss. After the resulting shift and re-normalization of the self-normalized $p(z_g)$, incoherent energy loss produces an enhancement of the nuclear modification factor at large $z_g$. Current experimental data seem to prefer the assumption of coherent energy loss within the high-twist formalism. Note that such a conclusion is specific to an energy loss theory and depends on the extent of the $z_g$ shift caused by incoherent energy loss. According to Ref.~\cite{Caucal:2019uvr}, a reasonable description of the data may also be obtained using the BDMPS energy loss formalism with an incoherent energy loss assumption. Although implementing soft drop jet grooming is expected to suppress contributions from soft hadrons within jets, Refs.~\cite{KunnawalkamElayavalli:2017hxo,Milhano:2017nzm} still show a sizable effect of medium response on the $z_g$ distribution. Therefore, variations of the soft drop grooming parameters $z_\mathrm{cut}$, $\beta$, $R$ and $\Delta R$ in Eq.~(\ref{eq:defSoftDrop}) need to be investigated in more detail to better separate hard and soft contributions. \subsection{Jet mass} \label{subsec:jetMass} In some jet-quenching models, such as \textsc{Q-Pythia} and \textsc{Yajem-rad} as discussed in Sec.~\ref{subsec:med_showers_virtual}, the parton splitting functions are not directly modified by the medium. Instead, an enhanced parton virtuality from parton-medium scatterings is assumed, which induces additional vacuum-like splittings, leading to an effective jet energy loss. The variation of partons' virtuality can be explored using the jet mass~\cite{Acharya:2017goa} defined as \begin{equation} \label{eq:defJetMass} M=\sqrt{E^2-p_\mathrm{T}^2-p_z^2}, \end{equation} where $E$, $p_\mathrm{T}$ and $p_z$ are the total energy, transverse momentum and longitudinal momentum of a given jet, respectively.
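Applied to the summed four-momenta of a jet's constituents, Eq.~(\ref{eq:defJetMass}) amounts to the following minimal sketch (the `(E, px, py, pz)` tuple layout is an illustrative assumption):

```python
import math


def jet_mass(constituents):
    """M = sqrt(E^2 - pT^2 - pz^2) from constituent four-momenta (E, px, py, pz).

    The max(..., 0.0) guard only protects against tiny negative values
    from floating-point rounding for massless inputs.
    """
    E = sum(p[0] for p in constituents)
    px = sum(p[1] for p in constituents)
    py = sum(p[2] for p in constituents)
    pz = sum(p[3] for p in constituents)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))
```

A single massless particle gives $M=0$; two massless particles at a relative angle acquire an invariant mass, which is why additional (vacuum-like or medium-induced) splittings and recoil partons tend to increase the jet mass.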
There are several theoretical studies searching for medium effects on the jet mass distribution~\cite{KunnawalkamElayavalli:2017hxo,Park:2018acg,Chien:2019lfg,Casalderrey-Solana:2019ubu}. Figure~\ref{fig:Park-MARTINI-mass} presents the \textsc{Martini} model calculation of the jet mass distribution in central Pb+Pb collisions at 2.76~TeV~\cite{Park:2018acg} with and without medium response. Compared to the p+p baseline, jet energy loss without recoil partons from the medium response shifts the mass distribution towards smaller values, while medium recoil partons tend to increase the jet mass. As shown in panel (c), no significant variation of the mass distribution relative to that in p+p has been found in the experimental data in Pb+Pb collisions. In panel (d) of Fig.~\ref{fig:Park-MARTINI-mass}, one observes that the average jet mass and the contribution from recoil partons both increase with the jet $p_\mathrm{T}$. Similar findings have been seen in Refs.~\cite{KunnawalkamElayavalli:2017hxo,Casalderrey-Solana:2019ubu} as well. \begin{figure}[tbp] \centering \includegraphics[width=0.43\textwidth]{Luo-groomed-jet-mass} \caption{(Color online) The nuclear modification factor of the groomed jet mass distribution in 0-10\% Pb+Pb collisions at 5.02~TeV, from \textsc{Lbt} simulations with and without contributions from recoil partons. The figure is from Ref.~\cite{Luo:2019lgx}.} \label{fig:Luo-groomed-jet-mass} \end{figure} To suppress contributions from soft hadrons and study the virtuality scale of hard splittings, the mass distributions of groomed jets have been measured recently~\cite{Sirunyan:2018gct}. However, as shown by the \textsc{Lbt} model calculation~\cite{Luo:2019lgx} in Fig.~\ref{fig:Luo-groomed-jet-mass}, the effects of jet-induced medium excitation may still be large in jets with large masses. Without the contribution from recoil partons, a suppression of the mass distribution due to parton energy loss is observed at large groomed mass.
In contrast, taking recoil partons into account results in an enhancement. Again, one should note that while the importance of incorporating jet-induced medium excitation for understanding jet-medium interactions has been commonly recognized, its quantitative contribution in theoretical calculations depends on the detailed model implementations. One should also keep in mind that its signal in experimental data will also depend on the choices of various kinematic cuts. \subsection{Heavy flavor jets} \label{subsec:HFJets} As discussed in Sec.~\ref{subsec:heavyHadron}, heavy quarks are valuable probes of the mass effects on parton energy loss. However, due to the NLO (or gluon fragmentation) contribution to heavy flavor production, it is hard to observe the mass hierarchy directly in the suppression factors $R_\mathrm{AA}$ of light and heavy flavor hadrons at high $p_\mathrm{T}$. Similarly, after including the gluon fragmentation process, the suppression factors $R_\mathrm{AA}$ of single inclusive jets and heavy-flavor-tagged jets are also found to be similar~\cite{Huang:2013vaa}. Photon-tagged or $B$-meson-tagged $b$-jets are proposed in Ref.~\cite{Huang:2015mva} to increase the fraction of $b$-jets that originate from prompt $b$-quarks relative to single inclusive $b$-jets. These await future experimental investigations. \begin{figure}[tbp] \centering \includegraphics[width=0.5\textwidth]{Cunqueiro-HF-Lund} \caption{(Color online) The relative difference between $b$-jets and inclusive jets on the correlation between the splitting angle and the radiator energy. The figure is from Ref.~\cite{Cunqueiro:2018jbh}.} \label{fig:Cunqueiro-HF-Lund} \end{figure} Instead of the integrated spectra of heavy flavor jets, recent studies indicate that one can use their substructures to provide more direct insights into the ``dead cone effect''~\cite{Dokshitzer:2001zm} that suppresses the radiative energy loss of massive quarks.
Shown in Fig.~\ref{fig:Cunqueiro-HF-Lund} is the relative difference between $b$-jets and inclusive jets on the correlation between the splitting angle and the radiator energy~\cite{Cunqueiro:2018jbh}. In this work, \textsc{Pythia} is used to generate heavy and light flavor jet events and the soft drop jet grooming is applied. For heavy flavor jets, the branch containing the heavy flavor is always followed when one moves backwards through the jet clustering history. The relative transverse momentum $k_\mathrm{T}$ and angle $\theta$ of the other branch, together with the energy $E_\mathrm{radiator}$ of the parent (radiator) that splits into these two branches, are mapped into the density distribution shown in Fig.~\ref{fig:Cunqueiro-HF-Lund}. With a proper kinematic cut, e.g. $\ln(k_\mathrm{T})>0$, one can clearly observe a suppression of gluon radiation from heavy quarks between the lower boundary given by the kinematic cut $\theta=2k_\mathrm{T}^\mathrm{min}/E_\mathrm{radiator}$ and the dead cone size $\theta_c=m_b/E_\mathrm{radiator}$ defined with the $b$-quark mass $m_b$. This simulation result serves as promising guidance for analyzing experimental data in order to directly observe the dead cone of heavy quarks. Similar studies should also be extended to both event generators and experimental analyses of A+A collisions to understand the medium modification of the dead cone, which is the origin of the mass hierarchy of parton energy loss inside the QGP. We can also study the angular distributions of gluon radiation from heavy and light flavor partons using the jet shape. A similar quantity -- the angular distribution of $D$ mesons with respect to the jet axis -- has been explored in both experimental measurements~\cite{Sirunyan:2019dow} and theoretical calculations~\cite{Wang:2019xey}.
Comparing p+p and Pb+Pb collisions at the LHC, one observes an enhancement of the nuclear modification factor of this angular distribution at large distance to the jet axis. By utilizing the jet as a reference for the heavy quark motion, this provides a novel direction to learn how heavy quarks diffuse and lose energy inside the QGP. To further understand how mass and flavor affect parton-medium interactions and obtain better constraints on jet-quenching theories, it is necessary to continue searching for observables of jet substructures with which heavy and light flavor jets can be directly compared. \section{Summary and outlook} \label{sec:summary} During the last two decades of experimental and theoretical studies of high-energy heavy-ion collisions at both RHIC and the LHC, jet quenching has provided a powerful tool to explore the properties of the dense QCD matter formed during the violent collisions at unprecedented energies. The extraction of the jet transport coefficient, which is about 2 (1) orders of magnitude larger than that in cold nuclear (hot hadronic) matter, provided quantitative evidence for the formation of the QGP in the center of heavy-ion collisions at RHIC and the LHC. We have entered an era of quantitative study of the properties of the QGP through a wide variety of probes, including jet quenching and associated phenomena. We have presented in this review recent progress in theoretical and phenomenological studies of jet quenching and jet-induced medium response in heavy-ion collisions. We have reviewed new developments in theoretical calculations of large angle gluon radiation induced by jet-medium interactions and the connection to existing results under different approximations in the high-twist approach. The implementation of the induced gluon radiation under different approximations in jet transport models is also discussed.
We have reviewed phenomenological studies within these jet transport models on suppression of single inclusive light and heavy flavor hadrons, single inclusive jets, modification of $\gamma$-hadron, $\gamma/Z^0$-jet correlations and jet substructures due to jet quenching. Special emphasis has been given to effects of jet-induced medium response in current experimental measurements of these observables. Though an unambiguous signal of the Mach-cone caused by the jet-induced medium response is still elusive in current experimental measurements, there is a wide variety of phenomena in the experimental observations that point to the effects of jet-induced medium response during the jet propagation in the QGP medium. \begin{itemize} \item The most striking phenomenon caused by the medium response is the enhancement of soft hadrons in the $\gamma$-hadron correlation and fragmentation functions of single inclusive jets and $\gamma$-jets, especially when the fractional momentum in the fragmentation function is defined by the momentum of the triggered photon. The onset of the enhancement occurs at a momentum fraction that decreases with the photon's momentum, indicating an intrinsic scale related to the medium, not the jet energy. \item Another striking effect of medium response is the enhancement of the jet profile toward the edge of the jet-cone and at large angles outside the jet-cone. This enhancement is found to be mostly contributed by soft hadrons/partons from jet-induced medium response. Recoil partons from the medium response also enhance the jet mass distribution in the large mass region. \item The most unique feature of jet-induced medium response is the depletion of soft $\gamma$-hadron correlation in the direction of the trigger photon due to the diffusion wake of jet-induced medium response. Measurement of such depletion, however, requires statistical subtraction of a large background.
\item The indirect effect of jet-induced medium response is the jet energy dependence of the net jet energy loss and the $p_\mathrm{T}$ dependence of the jet suppression factor $R_\mathrm{AA}$. Inclusion of medium response significantly reduces and changes the jet energy dependence of the net jet energy loss. Therefore, without jet-induced medium response, the $p_\mathrm{T}$ dependence of $R_\mathrm{AA}$ would be different. Including medium response also changes the cone-size dependence of the jet $R_\mathrm{AA}$. \end{itemize} Some of the above listed phenomena may be explained by some alternative mechanisms such as large angle and de-coherent emission of radiated gluons~\cite{MehtarTani:2011tz,Mehtar-Tani:2014yea,Mehtar-Tani:2016aco}. However, it is important to have a coherent and systematic explanation of all the observed phenomena in jet quenching measurements. Since jet-induced medium response is partly caused by elastic scatterings between jet shower and medium partons, which are a well established mechanism of jet-medium interactions in most theoretical descriptions, any established model of jet transport should include them. Only then can one look for additional mechanisms that can also contribute to these jet quenching phenomena. Experimentally, one still needs data with higher precision, since some unique features of medium response need to be confirmed with better background subtractions. The tension between the ATLAS and CMS experimental data on the cone-size dependence of jet suppression still needs to be resolved, in particular for smaller cone sizes and lower jet $p_\mathrm{T}$. We also need to explore more novel measurements that can identify medium response, especially the long-sought-after Mach-cone, which can help provide more direct constraints on the EoS of the QGP matter produced in relativistic heavy-ion collisions.
\section{Observations and reductions} Throughout this paper, we specify all times of observations using reduced heliocentric Julian dates \smallskip \centerline{RJD = HJD -2400000.0\,.} \smallskip \subsection{Spectroscopy} The star was observed at the Ond\v{r}ejov (OND), and Dominion Astrophysical (DAO) observatories. The majority of the spectra cover the red spectral region around H$\alpha$, but we also obtained some spectra in the blue region around H$\gamma$ and infrared spectra in the region also used for the Gaia satellite. The journal of spectral observations is in Table~\ref{jourv}. The methods of spectra reductions and measurements were basically the same as in the previous paper~30 of this series devoted to BR~CMi \citep{zarfin30}. See also Appendix~\ref{apa} for details. \subsection{Photometry} The star was observed at Hvar over several observing seasons, first in the \hbox{$U\!B{}V$}, and later in the \hbox{$U\!B{}V\!R$}\ photometric system relative to HD~82861. The check star HD~81772 was observed as frequently as the variable. All data were corrected for differential extinction and carefully transformed to the standard systems via non-linear transformation formulae. We also used the Hipparcos \hbox{$H_{\rm p}$}\ observations, transformed to the Johnson $V$ magnitude after \citet{hpvb} and recent ASAS-SN Johnson $V$ survey observations \citep{asas2014,asas2017}. Journal of all observations is in Table~\ref{jouphot} and the individual Hvar observations are tabulated in Appendix~\ref{apb}, where details on individual data sets are also given. \begin{table} \caption[]{Journal of available photometry.}\label{jouphot} \begin{center} \begin{tabular}{rcrccl} \hline\hline\noalign{\smallskip} Station&Time interval& No. of &Passbands&Ref.\\ &(RJD)&obs. 
& \\ \noalign{\smallskip}\hline\noalign{\smallskip} 61&47879.03--48974.17& 128&$H_{\rm p}$ & 1 \\ 01&55879.62--56086.36& 36&\hbox{$U\!B{}V$} & 2\\ 01&56747.35--57116.36& 57&\hbox{$U\!B{}V\!R$} & 2\\ 93&56003.85--58452.08& 209&$V$ & 3\\ \noalign{\smallskip}\hline \end{tabular}\\ \tablefoot{{\sl Column ``Station":} The running numbers of individual observing stations in the Prague / Zagreb photometric archives:\\ 01~\dots~Hvar 0.65-m, Cassegrain reflector, EMI9789QB tube;\\ 61~\dots~Hipparcos all-sky $H_{\rm p}$ photometry transformed to Johnson $V$.\\ 93~\dots~ASAS-SN all-sky Johnson $V$ photometry.\\ {\sl Column ``Ref."} gives the source of data:\\ 1~\dots~\cite{esa97}; 2~\dots~this paper; 3~\dots~\citet{asas2014,asas2017}. } \end{center} \end{table} \section{Iterative approach to data analysis} Binary systems in the phase of mass transfer between the components usually display rather complex spectra; besides the lines of both binary components, there are also some spectral lines related to the circumstellar matter around the mass-gaining star \citep[see e.g.][]{desmet2010}. Consequently, a straightforward analysis of the spectra based on one specific tool (e.g. 2-D cross-correlation or spectra disentangling) to obtain RVs and orbital elements cannot be applied. One has to combine several different techniques and data sets to obtain the most consistent solution. This is what we tried, guided by our experience from the previous paper of this series \citep{zarfin30}. We analysed the spectroscopic and photometric observations iteratively in the following steps, which we then discuss in the subsections below. \begin{enumerate} \item We derived and analysed RVs in several different ways to verify and demonstrate that HD~81357 is indeed a double-lined spectroscopic binary and ellipsoidal variable, as first reported by \citet{koubskytatry}.
\item We disentangled the spectra of both binary components in a blue spectral region free of emission lines with the program {\tt KOREL} \citep{korel1,korel2,korel3} to find out that -- like in the case of BR~CMi \citep{zarfin30} -- a rather wide range of mass ratios gives comparably good fits. \item Using the disentangled component spectra as templates, we derived 2-D cross-correlated RVs of both components with the help of the program {\tt asTODCOR} \citep[see][for the details]{desmet2010, zarfin30} for the best and two other plausible values of the mass ratio. We verified that the {\tt asTODCOR} RVs of the hotter component define an RV curve, which is in an~almost exact anti-phase to that based on the lines of the cooler star. A disappointing finding was that the resulting RVs of the hot component differ for the three different {\tt KOREL} templates derived for the plausible range of mass ratios. \item We used the {\tt PYTERPOL} program kindly provided by J.~Nemravov\'a to estimate the radiative properties of both binary components from the blue spectra, which are the least affected by circumstellar emission. The function of the program is described in detail in \citet{jn2016}.\footnote{The program {\tt PYTERPOL} is available with a~tutorial at \\ https://github.com/chrysante87/pyterpol/wiki\,.} \item Keeping the effective temperatures obtained with {\tt PYTERPOL} fixed, and using all RVs for the cooler star from all four types of spectra together with all photometric observations, we started to search for a~plausible combined light-curve and orbital solution. To this end we used the latest version of the program {\tt PHOEBE} 1.0 \citep{prsa2005,prsa2006} and tried to constrain the solution also by the accurate Gaia parallax of the binary. \item As an additional check, we also used the program {\tt BINSYN} \citep{linnell1994} for an independent verification of our results.
\end{enumerate} \subsection{The overview of available spectra} Examples of the red, infrared and blue spectra at our disposal are in Fig.~\ref{samples}. In all spectra, numerous spectral lines typical of a late spectral type are present. In addition, one can see several \ion{H}{i} lines affected by emission, and a few absorption lines belonging to the hotter component. \begin{figure} \centering \includegraphics[width=9.0cm]{xd300027.pdf} \includegraphics[width=9.0cm]{dao05005.pdf} \includegraphics[width=9.0cm]{dao_det.pdf} \includegraphics[width=9.0cm]{ud110012.pdf} \caption{Examples of available blue, red, and infrared spectra. From top to bottom: blue spectrum, red spectrum, enlarged part of a red spectrum near to H$\alpha$, and the infrared spectrum. All three regions contain numerous lines of the cool component.} \label{samples} \end{figure} \subsection{Light and colour changes} All light curves at our disposal exhibit double-wave ellipsoidal variations with the 33\fd8 orbital period. Their amplitude decreases from the $R$ to the $U$ band, in which the changes are only barely visible. We show the light curves later, along with the model curves. In Fig.~\ref{ubv} we compare the colour changes of HD~81357 in the \hbox{$U\!-\!B$}\ vs. \hbox{$B\!-\!V$}\ diagram with those known for several other well observed Be binary stars. One can see that its colour changes are remarkably similar to those for another ellipsoidal variable, BR~CMi, while the other objects shown are representatives of the positive and inverse correlation between the light and emission strength as defined by \citet{hvar83} \citep[see also][]{bozic2013}. In particular, it is seen that for KX~And the dereddened colours exhibit an inverse correlation, with the object moving along the main-sequence line in the colour-colour diagram from B1V to about B7V. On the other hand, CX~Dra seems to exhibit a positive correlation after dereddening, moving from B3\,V to B3\,I.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{hd-hr.pdf}} \caption{Variations of HD~81357 in the colour--colour diagram are compared to those known for some other Be stars observed at Hvar.}\label{ubv} \end{figure} \subsection{Reliable radial velocities} \subsubsection{\tt phdia RVs \label{sssec:spefo}} Using the program {\tt phdia} (see Appendix~A), we measured RVs of a number of unblended metallic lines in all blue, red and infrared spectra at our disposal. A period analysis of these RVs confirmed and reinforced the preliminary result of \citet{koubskytatry} that these RVs follow a~sinusoidal RV curve with a~period of 33\fd77 and a semi-amplitude of 81~km~s$^{-1}$. As recently discussed in paper~30 \citep{zarfin30}, some caution must be exercised when one analyses binaries with clear signatures of the presence of circumstellar matter in the system. Experience shows that the RV curve of the Roche-lobe filling component is usually clean (with a possible presence of a Rossiter effect) and defines its true orbit quite well, while many of the absorption lines of the mass-gaining component are affected by the presence of circumstellar matter, and their RV curves do not describe the true orbital motion properly. It is therefore advisable to select suitable spectral lines in the blue spectral region, free of such effects. \subsubsection{{\tt KOREL} maps} In the next step, we therefore derived a map of plausible solutions using a~Python program kindly provided by J.A.~Nemravov\'a. The program employs {\tt KOREL}, calculates true $\chi^2$ values and maps the parameter space to find the optimal values of the semiamplitude $K_2$ and the mass ratio $M_1/M_2$. In our application to HD~81357, we omitted the H$\gamma$ line and used the spectral region 4373 -- 4507~\AA, which contains several \ion{He}{i} and metallic lines of the B-type component~1.
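The character of such a map can be reproduced with a toy model: when the RV amplitude of the broad-lined primary is small and noisy, the $\chi^2$ valley is narrow in $K_2$ but shallow along the mass-ratio axis. The sketch below uses purely synthetic radial velocities with made-up noise levels; it is not the {\tt KOREL}-based mapping program itself.

```python
import numpy as np

# Synthetic circular-orbit RVs: a strong, clean curve for the cool star and a
# weak, noisy one for the broad-lined primary (all numbers are illustrative).
rng = np.random.default_rng(1)
phase = rng.uniform(0.0, 1.0, 25)
snphi = np.sin(2.0 * np.pi * phase)
K2_true, q_true = 82.0, 9.75
rv2 = K2_true * snphi + rng.normal(0.0, 1.0, phase.size)
rv1 = -(K2_true / q_true) * snphi + rng.normal(0.0, 2.0, phase.size)

def chi2(K2, q):
    """Chi^2 of both anti-phased RV curves for a trial (K2, M1/M2)."""
    # weights inversely proportional to the squared errors (1 and 2 km/s)
    return (np.sum((rv2 - K2 * snphi) ** 2)
            + np.sum((rv1 + (K2 / q) * snphi) ** 2) / 4.0)

K2_grid = np.linspace(75.0, 90.0, 61)
q_grid = np.linspace(5.0, 20.0, 61)
surface = np.array([[chi2(K2, q) for q in q_grid] for K2 in K2_grid])
iK, iq = np.unravel_index(np.argmin(surface), surface.shape)
K2_best, q_best = K2_grid[iK], q_grid[iq]
```

On such a surface $K_2$ is recovered to a fraction of a km~s$^{-1}$, while a broad interval of mass ratios fits almost equally well -- the behaviour found for the real data.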
As in the case of a similar binary BR~CMi \citep{zarfin30}, we found that disentangling is not the optimal technique to derive the most accurate orbital solution, since the interplay between a~small RV amplitude of the broad-lined B primary and its disentangled line widths results in comparably good fits for a~rather wide range of mass ratios. The lowest $\chi^2$ was obtained for a mass ratio of 9.75, but there are two other shallow minima near to the mass ratios 7 and 16. The optimal value of $K_2$ remained stable near to 81 -- 82 km~s$^{-1}$. \subsubsection{Velocities derived via 2-D cross-correlation} As mentioned earlier, the spectrum of the mass-gaining component is usually affected by the presence of some contribution from circumstellar matter, having a slightly lower temperature than the star itself \citep[e.g.][]{desmet2010}. This must have an impact on {\tt KOREL}, which disentangles the composite spectra on the premise that all observed spectroscopic variations arise solely from the orbital motion of two binary components. Although we selected a blue spectral region with the exclusion of the H$\gamma$ line (in the hope of minimizing the effect of circumstellar matter), it is probable that {\tt KOREL} solutions returned spectra, which -- for the mass-gaining star -- average the true stellar spectrum and a contribution from the circumstellar matter. As an alternative, we decided to derive RVs, which we needed anyway to be able to combine photometry and spectroscopy in the {\tt PHOEBE} program, with 2-D cross-correlation. We used the {\tt asTODCOR} program written by one of us (YF) to this end. The software is based on the method outlined by \citet{todcor1} and has already been applied in similar cases \citep[see][for the details]{desmet2010,zarfin30}. It performs a 2-D cross-correlation between the composite observed spectra and template spectra.
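The principle of the 2-D cross-correlation can be sketched with synthetic spectra on a logarithmic wavelength grid, where a Doppler shift becomes a uniform pixel shift. The line positions, depths, luminosity ratio and pixel shifts below are invented for the illustration, and the brute-force grid search is a toy, not the {\tt asTODCOR} implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 2000
x = np.arange(npix)

def make_template(centres, depth, sigma=3.0):
    """Unit continuum with Gaussian absorption lines."""
    spec = np.ones(npix)
    for c in centres:
        spec -= depth * np.exp(-0.5 * ((x - c) / sigma) ** 2)
    return spec

t1 = make_template([300, 700, 1200, 1600], depth=0.3)   # hot primary
t2 = make_template([450, 900, 1350, 1750], depth=0.5)   # cool secondary
s1_true, s2_true = 4, -20                               # input pixel shifts
obs = 0.9 * np.roll(t1, s1_true) + 0.1 * np.roll(t2, s2_true)
obs += rng.normal(0.0, 0.001, npix)                     # photon noise

def corr(a, b):
    """Normalised cross-correlation of two spectra."""
    a = a - a.mean()
    b = b - b.mean()
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

# Correlate the observed composite against the pair of independently
# shifted templates and read the best shifts off the 2-D grid.
shifts = range(-30, 31)
best = max((corr(obs, 0.9 * np.roll(t1, s1) + 0.1 * np.roll(t2, s2)), s1, s2)
           for s1 in shifts for s2 in shifts)
_, s1_fit, s2_fit = best
```

The global maximum of the correlation over the (shift$_1$, shift$_2$) grid recovers both input shifts, even though the secondary contributes only ten per cent of the light.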
The accuracy and precision of such measurements depend both on the quality of the observations and on the choice of templates representing the contributions of the two stars. In what follows, we used the observed blue spectra over the wavelength range from 4370 to 4503~\AA, mean resolution 0.12 \ANG~mm$^{-1}$, and a luminosity ratio $L_2/L_1=0.1$. We adopted the spectra disentangled by {\tt KOREL} for the optimal mass ratio 9.75 as the templates for the 2-D cross-correlation. We attempted to investigate the effect of circumstellar matter on the RVs derived with {\tt asTODCOR} for the primary using also the template spectra for the other two mass ratios derived with {\tt KOREL}. The RVs are compared in Fig.~\ref{minmax}. From it we estimate that, depending on the orbital phase, the systematic error due to the presence of circumstellar matter may vary from 0 to 3~km~s$^{-1}$. The resulting {\tt asTODCOR} RVs for the optimal mass ratio 9.75 are listed, with their corresponding random error bars, in Table~\ref{rvtod} in the Appendix, while the {\tt SPEFO} RVs of the cooler star are in Table~\ref{rvspefo}. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{minmax.pdf}} \caption{Comparison of {\tt asTODCOR} RVs of star~1 for the optimal mass ratio q=9.75 (solid line) and for the {\tt KOREL} templates with the two extreme mass ratios (q=16 as filled circles, and q=7 as open circles). Typical errors of {\tt asTODCOR} individual RVs are close to 1.0~km~s$^{-1}$ -- see Table~\ref{rvtod}.} \label{minmax} \end{figure} Assigning the three sets of {\tt asTODCOR} RVs from the blue spectral region weights inversely proportional to the square of their rms errors, we derived orbital solutions for them. The solutions were derived separately for the hot component~1 and the cool component~2 to verify whether those for the hot star describe its true orbital motion properly. We used the program {\tt FOTEL} \citep{fotel1,fotel2}.
The results are summarised in Table~\ref{trial}. We note that the values of the superior conjunction of component~1 from the separate solutions for component~1 and component~2 agree within their estimated errors, but in all three cases the epochs from component~1 slightly precede those from component~2. This might be another indirect indication that the RVs of component~1 are not completely free of the effects of circumstellar matter. A disappointing conclusion of this whole exercise is that there is no reliable way to derive a~unique mass ratio from the RVs. One has to find additional constraints. \subsection{Trial orbital solutions} \begin{table} \caption[]{Trial {\tt FOTEL} orbital solutions based on {\tt asTODCOR} RVs from the 18 blue spectra, separately for star~1, and star~2. A circular orbit was assumed and the period was kept fixed at 33\fd773. The solutions were derived using three different RV sets based on the {\tt KOREL} template spectra for the optimal mass ratio $q$ of 9.75, and for two other plausible mass ratios 7, and 16.
The epoch of the superior conjunction $T_{\rm super.\, c.}$ is in RJD-55487 and rms is the rms error of 1 measurement of unit weight.} \label{trial} \begin{center} \begin{tabular}{lcccccl} \hline\hline\noalign{\smallskip} Element &$q=9.75$ &$q=7$ &$q=16$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} star 1 \\ \noalign{\smallskip}\hline\noalign{\smallskip} $T_{\rm super.\, c.}$ &0.24(39) &0.50(27) &0.36(64)\\ $K_1$ (km~s$^{-1}$) &8.04(57) &11.09(42) &4.78(92) \\ rms$_1$ (km~s$^{-1}$) &1.451 &1.505 &1.441 \\ \noalign{\smallskip}\hline\noalign{\smallskip} star 2 \\ \noalign{\smallskip}\hline\noalign{\smallskip} $T_{\rm super.\, c.}$ &0.617(24) &0.615(24) &0.621(25)\\ $K_2$ (km~s$^{-1}$) &81.82(32) &81.89(32) &81.82(34) \\ rms$_2$ (km~s$^{-1}$) &0.884 &0.887 &0.965 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \end{center} \tablefoot{We note that due to the use of the {\tt KOREL} template spectra, the {\tt asTODCOR} RVs are referred to zero systemic velocity.} \end{table} In the next step, we derived another orbital solution based on 151 RVs (115 {\tt SPEFO} RVs, 18 {\tt asTODCOR} RVs of component~2 and 18 {\tt asTODCOR} RVs of component~1). As the spectra are quite crowded with numerous lines of component~2, we were unable to use our usual practice of correcting the zero point of the velocity scale via measurements of suitable telluric lines \citep{sef0}. That is why we allowed for the determination of individual systemic velocities for the four subsets of spectra defined in Table~\ref{jourv}. All RVs were used with the weights inversely proportional to the square of their rms errors. This solution is in Table~\ref{fotelsol}. There is a very good agreement in the systemic velocities from all individual data subsets, even for the {\tt phdia} RVs from blue spectra, where only four spectral lines could be measured and averaged.
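The core of such a solution -- a circular orbit with one systemic velocity per subset of spectra -- reduces to linear least squares once the period is fixed, since ${\rm RV} = \gamma_j + A\cos(2\pi t/P) + B\sin(2\pi t/P)$ with $K=\sqrt{A^2+B^2}$. A minimal sketch on synthetic data (not the {\tt FOTEL} code and not the actual measurements):

```python
import numpy as np

rng = np.random.default_rng(3)
P = 33.77458                                    # fixed orbital period (d)
K_true = 81.75
gammas_true = np.array([-13.5, -13.0, -13.7])   # one gamma per subset
subset = rng.integers(0, 3, 120)                # which spectrograph/subset
t = rng.uniform(0.0, 600.0, 120)                # epochs (days)
phi = 2.0 * np.pi * t / P
rv = gammas_true[subset] + K_true * np.sin(phi) + rng.normal(0.0, 1.0, 120)

# Design matrix: one column per subset gamma, plus the cos and sin terms.
A = np.zeros((120, 5))
A[np.arange(120), subset] = 1.0
A[:, 3] = np.cos(phi)
A[:, 4] = np.sin(phi)
coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
gammas_fit, K_fit = coef[:3], np.hypot(coef[3], coef[4])
```

With 120 synthetic RVs of 1~km~s$^{-1}$ scatter, the fit recovers the semi-amplitude and the three systemic velocities to a few tenths of a km~s$^{-1}$.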
\begin{table} \caption[]{Trial {\tt FOTEL} orbital solutions based on all 151 {\tt phdia} and {\tt asTODCOR} RVs and a solution for the 65 RVs of the H$\alpha$ emission wings measured in {\tt SPEFO}. The epoch of superior conjunction of star~1 is in RJD, rms is the rms error of 1 observation of unit weight.} \label{fotelsol} \begin{center} \begin{tabular}{lcccccl} \hline\hline\noalign{\smallskip} Element & Binary & emis. wings H$\alpha$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $P$ (d) &33.77458\p0.00079&33.77458 fixed \\ $T_{\rm super.\, c.}$ &55487.657\p0.020 &55492.53\p0.49 \\ $e$ & 0.0 assumed & 0.0 assumed \\ $K_1$ (km~s$^{-1}$) &7.76\p0.56 & 9.47\p0.88 \\ $K_2$ (km~s$^{-1}$) &81.75\p0.17 & -- \\ $K_1/K_2$ &0.0949\p0.0063 & -- \\ $\gamma_1$ (km~s$^{-1}$) &$-$13.46\p0.18 &$-$11.33\p0.85 \\ $\gamma_2$ (km~s$^{-1}$) &$-$13.04\p0.24 & -- \\ $\gamma_3$ (km~s$^{-1}$) &$-$13.10\p1.38 & -- \\ $\gamma_4$ (km~s$^{-1}$) &$-$13.69\p0.25 &$-$13.13\p0.74 \\ $\gamma_{\rm 3T1}$ (km~s$^{-1}$)&$-$0.39\p0.37$^*$&-- \\ $\gamma_{\rm 3T2}$ (km~s$^{-1}$)&$-$0.20\p0.33$^*$&-- \\ rms (km~s$^{-1}$) & 1.138 & 2.71 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \end{center} \tablefoot{$^*)$ We note that the {\tt asTODCOR} RVs derived using the {\tt KOREL} templates refer to zero systemic velocity.} \end{table} \subsection{Radiative properties of binary components}\label{synt} To determine the radiative properties of the two binary components, we used the Python program {\tt PYTERPOL}, which interpolates in a pre-calculated grid of synthetic spectra. Using a set of observed spectra, it tries to find the optimal fit between the observed and interpolated model spectra with the help of a~simplex minimization technique. It returns the radiative properties of the system components such as $T_{\rm eff}$, $v$~sin~$i$ or $\log~g$, as well as the relative luminosities of the stars and the RVs of individual spectra.
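A schematic one-parameter version of this grid-interpolation fit can be written in a few lines; the "grid" below is a fake family of Gaussian line profiles with a made-up depth--temperature law, and a plain $\chi^2$ scan replaces the simplex, purely for illustration:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 201)

def model(teff):
    """Fake synthetic spectrum: the line depth varies with the parameter."""
    depth = 0.8 * np.exp(-teff / 20000.0)       # made-up depth-temperature law
    return 1.0 - depth * np.exp(-0.5 * x ** 2)

grid_teff = np.linspace(8000.0, 18000.0, 11)    # "pre-calculated" grid nodes
grid_spec = np.array([model(T) for T in grid_teff])

def interp_spec(teff):
    """Linear interpolation between the two bracketing grid spectra."""
    i = np.clip(np.searchsorted(grid_teff, teff) - 1, 0, len(grid_teff) - 2)
    w = (teff - grid_teff[i]) / (grid_teff[i + 1] - grid_teff[i])
    return (1.0 - w) * grid_spec[i] + w * grid_spec[i + 1]

rng = np.random.default_rng(4)
observed = model(12930.0) + rng.normal(0.0, 0.002, x.size)

trial = np.linspace(9000.0, 17000.0, 401)
chi2 = np.array([np.sum((observed - interp_spec(T)) ** 2) for T in trial])
teff_fit = trial[np.argmin(chi2)]
```

The recovered temperature agrees with the input value to well within the spacing of the grid nodes, which is the point of interpolating between the pre-calculated spectra.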
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{prof1.pdf}} \resizebox{\hsize}{!}{\includegraphics{prof2.pdf}} \caption{Example of the comparison of an~observed blue spectrum in two selected spectral regions with a~combination of two synthetic spectra. The residuals in the sense observed minus synthetic are also shown on the same flux scale. To save space, the spectra in the bottom panel were linearly shifted from 1.0 to 0.3\,. The H$\gamma$ emission clearly stands out in the residuals in the first panel. See the text for details.} \label{synblue} \end{figure} \begin{table} \caption[]{Radiative properties of both binary components derived from a~comparison of selected wavelength segments of the observed and interpolated synthetic spectra in the blue spectral region.} \label{synpar} \begin{center} \begin{tabular}{rcccccl} \hline\hline\noalign{\smallskip} Element & Component 1 & Component 2 \\ \noalign{\smallskip}\hline\noalign{\smallskip} $T_{\rm eff}$ (K) &12930\p540 &4260\p24\\ {\rm log}~$g$ [cgs] &4.04\p0.13 &2.19\p0.14\\ $L_{4278-4430}$ &0.9570\p0.0092&0.0706\p0.0059\\ $L_{4438-4502}$ &0.9440\p0.0096&0.0704\p0.0075\\ $v$~sin~$i$ (km~s$^{-1}$) &166.4\p1.5 &19.7\p0.8\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \end{center} \end{table} In our particular application, two different grids of spectra were used: AMBRE grid computed by \citet{delaverny2012} was used for component~2, and the Pollux database computed by \citet{pala2010} was used for component~1. We used the 18 file D blue spectra from OND, which contain enough spectral lines of component~1. 
The following two spectral segments, avoiding the region of the diffuse interstellar band near to 4430~\AA, were modelled simultaneously: \smallskip \centerline{4278--4430~\AA, and 4438--4502~\AA.} Uncertainties of radiative properties were obtained through a Markov chain Monte Carlo (MCMC) simulation implemented within the {\tt emcee}\footnote{The library is available through GitHub~\url{https://github.com/dfm/emcee.git} and its thorough description is at~\url{http://dan.iel.fm/emcee/current/}.} Python library by~\citet{fore2013}. They are summarised in Table~\ref{synpar} and an example of the fit is in Fig.~\ref{synblue}. We note that {\tt PYTERPOL} derives the RV from individual spectra without any assumption about orbital motion. It thus represents an independent test of the results obtained in the previous analysis based on {\tt asTODCOR} RVs. \begin{table} \centering \caption{Combined radial-velocity curve and light curve solution with {\tt PHOEBE}. The optimised parameters are given with only their formal errors derived from the covariance matrix.
} \label{phoebe} \begin{tabular}{lrlrl} \hline Element& \multicolumn{4}{c}{Orbital properties}\\ \hline\hline $P$\,(d)& \multicolumn{4}{c}{33.77445$\pm0.00041$}\\ $T_{\rm sup.c.}$\,(RJD)& \multicolumn{4}{c}{55487.647$\pm0.010$}\\ $M_1/M_2$\,& \multicolumn{4}{c}{10.0\p0.5}\\ $i$\,(deg)& \multicolumn{4}{c}{63$\pm5$}\\ $a$\,(R$_{\odot}$)& \multicolumn{4}{c}{63.01\p0.09}\\ \hline \hline & \multicolumn{4}{c}{Component properties}\\ & \multicolumn{2}{c}{Component 1}& \multicolumn{2}{c}{Component 2}\\ \hline $T_{\rm eff}$ \,(K)& \multicolumn{2}{c}{12930\p540}& \multicolumn{2}{c}{4260\p24}\\ $\log g_{\rm \left[cgs\right]}$& 3.75\p0.15& &1.71\p0.25\\ $M$\,(\hbox{$\mathcal{M}^{\mathrm N}_\odot$})$^{*}$& 3.36\p0.15& & 0.34\p0.04\\ $R$\,(\hbox{$\mathcal{R}^{\mathrm N}_\odot$})$^{*}$& 3.9\p0.2& &14.0\p0.7\\ $F$& 30.7\p1.9 & &1.0 fixed\\ $L_{\rm U}$ &$0.992\pm0.003$&&$0.008\pm0.003$\\ $L_{\rm B}$ &$0.956\pm0.003$&&$0.044\pm0.003$\\ $L_{\rm Hp}$ &$0.902\pm0.004$&&$0.098\pm0.004$\\ $L_{\rm V}$ &$0.853\pm0.005$&&$0.147\pm0.005$\\ $L_{\rm V(ASAS)}$&$0.853\pm0.005$&&$0.147\pm0.005$\\ $L_{\rm R}$ &$0.749\pm0.007$&&$0.251\pm0.007$\\ \hline \end{tabular} \tablefoot{ $^*$ Masses and radii are expressed in nominal solar units, see \citet{prsa2016}. } \end{table} \begin{table} \caption{$\chi^2$ values for individual data sets for the adopted combined {\tt PHOEBE} solution presented in Table~\ref{phoebe}.}\label{chi2} \centering \begin{tabular}{crccrrrlc} \hline\hline Data set & No. of &Original & Normalised \\ & obs. 
& $\chi^2$ & $\chi^2$ \\ \hline Hvar $U$ & 27 & 36.1 & 1.34 \\ Hvar $B$ & 27 & 33.0 & 1.22 \\ Hvar $V$ & 27 & 62.1 & 2.30 \\ ASAS-SN $V$ & 209 & 209.9 & 1.00 \\ Hipparcos $Hp$& 128 & 176.3 & 1.38 \\ Hvar $R$ & 15 & 22.7 & 1.51 \\ $RV_2$ & 116 & 304.1 & 2.62 \\ \hline Total & 549 & 1063.9 & 1.94 \\ \hline \end{tabular} \end{table} \subsection{Combined light-curve and orbital solution in {\tt PHOEBE}}\label{psol} To obtain the system properties and to derive the final ephemeris, we used the program {\tt PHOEBE}~1 \citep{prsa2005,prsa2006} and applied it to all photometric observations listed in Table~\ref{jouphot} and the {\tt phdia} and {\tt asTODCOR} RVs for star~2. Since {\tt PHOEBE} cannot treat different systemic velocities, we actually used RVs minus the respective $\gamma$ velocities from the solution listed in Table~\ref{fotelsol}. For the OND blue spectra (file~D of Table~\ref{jourv}), we omitted the less accurate {\tt SPEFO} RVs and used only {\tt asTODCOR} RVs. Bolometric albedos for stars~1 and 2 were estimated from Fig.~7 of \citet{claret2001} as 1.0 and 0.5, respectively. The coefficients of the gravity darkening $\beta$ were similarly estimated as 1.0 and 0.6 from Fig.~7 of \citet{claret98}. When we tried to model the light curves on the assumption that the secondary is detached from the Roche lobe, we were unable to reproduce the light-curve amplitudes. We therefore conclude that HD~81357 is a semi-detached binary in the phase of mass transfer between the components. It is not possible to calculate the solution in the usual way. One parameter that comes into play is the synchronicity parameter $F$, the ratio between the orbital and rotational period for each component.
While it is safe to adopt $F_2=1.0$ for the Roche-lobe filling star~2, the synchronicity parameter $F_1$ must be re-calculated after each iteration as \begin{equation} F_1=P_{\rm orbital}{v_1\sin i\over{50.59273R_1^{\rm e}\sin i}}\, \, , \end{equation} \noindent where the equatorial radius $R_1^{\rm e}$ is again in the nominal solar radius \hbox{$\mathcal{R}^{\mathrm N}_\odot$}, the orbital period in days, and the projected rotational velocity in km~s$^{-1}$. We adopt the value of 166.4~km~s$^{-1}$ for $v_1\sin i$ from the {\tt PYTERPOL} solution. It is usual that there is a very strong parameter degeneracy for an ellipsoidal variable. To treat the problem, we fixed the $T_{\rm eff}$\ of both components obtained from the {\tt PYTERPOL} solution and used the very accurate parallax of HD~81357, $p=0\farcs0016000 \pm 0\farcs0000345$, from the second data release of the Gaia satellite \citep{gaia2, dr2sum} to restrict a~plausible range of the solutions. From a~{\tt PHOEBE} light-curve solution for the Hvar \hbox{$U\!B{}V$}\ photometry in the standard Johnson system, it was possible to estimate the following \hbox{$U\!B{}V$}\ magnitudes of the binary at light maxima \smallskip \centerline{$V_{1+2}=8$\m321, $B_{1+2}=8$\m491, and $U_{1+2}=8$\m300\, .} \smallskip \noindent We calculated a number of trial {\tt PHOEBE} solutions, keeping the parameters of the solution fixed and mapping a~large parameter space. For each such solution we used the resulting relative luminosities to derive the \hbox{$U\!B{}V$}\ magnitudes of the hot star~1, dereddened them in a standard way and, interpolating the bolometric correction from the tabulation of \citet{flower96}, we derived the range of possible values for the mean radius $R_1$ for the Gaia parallax and its range of uncertainty from the formula \begin{equation} M_{\rm bol\,1}=42.35326 - 5\log R_1 - 10\log T_{\rm eff\,1} \end{equation} \noindent \citep{prsa2016}.
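Putting numbers into the two relations above gives a quick consistency check. The extinction and the bolometric correction below are rough, assumed values ($A_V \approx 0.2$, ${\rm BC} \approx -0.9$ near 13000~K, roughly the Flower 1996 scale), so this is only an illustrative sketch, not the actual dereddening procedure:

```python
import numpy as np

P_orb, vsini = 33.77445, 166.4   # orbital period (d), v sin i (km/s)
incl = np.radians(63.0)
parallax = 0.0016000             # arcsec, Gaia DR2
V12, L_V1 = 8.321, 0.853         # system V at maxima, V-light fraction of star 1
A_V, BC = 0.2, -0.9              # ASSUMED extinction and bolometric correction

d_pc = 1.0 / parallax                              # distance in pc
V1 = V12 - 2.5 * np.log10(L_V1)                    # hot-star V magnitude
M_bol = V1 - A_V - 5.0 * np.log10(d_pc) + 5.0 + BC
R1 = 10.0 ** ((42.35326 - M_bol - 10.0 * np.log10(12930.0)) / 5.0)

# Synchronicity parameter of star 1, using R1 for the equatorial radius:
F1 = P_orb * vsini / (50.59273 * R1 * np.sin(incl))
```

These crude inputs give $R_1$ near 3.7 and $F_1$ near 34, in reasonable accord with $R_1=3.9\pm0.2$ and $F=30.7\pm1.9$ of the adopted solution.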
This range of the radius was compared to the mean radius $R_1$ obtained from the corresponding {\tt PHOEBE} solution. We found that the agreement between these two radius determinations could only be achieved for a very limited range of mass ratios, actually quite close to the mass ratio from the optimal {\tt KOREL} solution. The resulting {\tt PHOEBE} solution is listed in Table~\ref{phoebe} and defines the following linear ephemeris, which we shall adopt in the rest of this study \begin{equation} T_{\rm super.conj.}={\rm RJD}\,55487.647(10)+33\fd77445(41)\times\ E\,. \label{efe} \end{equation} \noindent The fit of the individual light curves is shown in Fig.~\ref{lcphoebe}, while in Fig.~\ref{rvc} we compare the fit of the RV curve of component 2 and also the model RV curve of component~1 with the optimal {\tt asTODCOR} RVs, which, however, were not used in the {\tt PHOEBE} solution. The combined solutions of the light and RV curves demonstrated a~strong degeneracy among individual parameters. In Table~\ref{chi2} we show the original and normalised $\chi^2$ values for the individual data sets used. It is seen that the contributions of the photometry and the RVs to the total sum of squares are comparable. A higher $\chi^2$ for the RVs might be related to the fact that we were unable to compensate perfectly for small systematic differences in the zero points of RVs between individual spectrographs and/or spectral regions, having no control via telluric lines. The degeneracy of the parameter space is illustrated by the fact that over a~large range of inclinations and a~tolerable range of mass ratios the program was always able to converge, with the total sum of squares differing by less than three per cent. \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics[angle=-90]{ubvr.pdf}} \caption{\hbox{$U\!B{}V\!R$}\ light curves modelled with {\tt PHOEBE}.
The abscissa is labelled with orbital phases according to ephemeris (\ref{efe}). The same scale on the ordinate was used to show the changes of the amplitude with passband. For all curves, the rms errors of individual data points are also shown.}\label{lcphoebe} \end{figure*} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics[angle=-90]{rvc2.pdf}} \resizebox{\hsize}{!}{\includegraphics[angle=-90]{rvc1.pdf}} \caption{Top panels: RV curve of star~2 and the residuals from the {\tt PHOEBE} solution. The rms errors are comparable to the symbols' size. Bottom panels: {\tt asTODCOR} RV curve of star~1, not used in the {\tt PHOEBE} solution, and its residuals from that solution.} \label{rvc} \end{figure} In passing we note that we also independently tested the results from {\tt PHOEBE} using the {\tt BINSYN} suite of programs \citep{Linnell1984, Linnell1996, Linnell2000} with a steepest-descent method to optimise the parameters of the binary system \citep{Sudar2011}. This basically confirmed the results obtained with {\tt PHOEBE}. \section{H$\alpha$ profiles} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{rvh3e.pdf}} \caption{{\tt SPEFO} RVs of the H$\alpha$ emission wings with their estimated rms errors plotted vs. orbital phase.} \label{RV_em_Ha} \end{figure} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{h3dyn.png}} \caption{ Dynamical spectra in the neighbourhood of the H$\alpha$ profile in a grey-scale representation created in the program {\tt phdia}. Left-hand panel: observed spectra; right-hand panel: difference spectra after the subtraction of a synthetic spectrum (including the H$\alpha$ profile) of star~2. The phases shown on both sides of the figure correspond to ephemeris (\ref{efe}).
The spectra are repeated over two orbital cycles to show the continuity of the orbital phase changes.} \label{h3dyn} \end{figure} The strength of the H$\alpha$ emission peaks ranges from 1.6 to 2.0 of the continuum level and varies with the orbital phase (cf. Fig.~\ref{h3dyn}). A central reversal in the emission is present in all spectra, but in many cases the absorption structure is quite complicated. The RV curve of the H$\alpha$ emission wings is well defined, sinusoidal and has a~phase shift of some 5 days relative to the RV curve based on the lines of component~1 (see Table~\ref{fotelsol} and Fig.~\ref{RV_em_Ha}). This must be caused by some asymmetry in the distribution of the circumstellar material producing the emission. In principle, however, one can conclude that the bulk of the material responsible for the H$\alpha$ emission moves in orbit with component~1, as is seen from the semi-amplitude and systemic velocity of the H$\alpha$ emission RV curve, which are similar to those of component~1. We note that in both the magnitude and the sense of the phase shift this behaviour is remarkably similar to that of another emission-line semi-detached binary, AU~Mon \citep{desmet2010}. The orbital variation of the H$\alpha$ emission-line profiles in the spectrum of HD~81357 is illustrated in the dynamical spectra created with the program {\tt phdia} -- see the left panel of Fig.~\ref{h3dyn}. The lines of the cool component are seen both shortward and longward of the H$\alpha$ emission. These lines can be used to trace the motion of H$\alpha$ absorption originating from star~2. It is readily seen that the behaviour of the central absorption in the H$\alpha$ emission is more complicated than what would correspond to the orbital motion of star~2. Three stronger absorption features can be distinguished: region 1 near to phase 0.0, region 2 visible from phase 0.4 to 0.5, and a somewhat fainter feature 3 present between phases 0.65 and 0.85.
The absorption in region 2 follows the motion of star~2, while the absorption line in region 3 moves in antiphase. In the right panel of Fig.~\ref{h3dyn} we show the dynamical spectra of the difference profiles resulting after subtraction of an interpolated synthetic spectrum of star~2 (properly shifted in RV according to the orbital motion of star~2) from the observed H$\alpha$ profiles. We note that the lines of star~2 disappeared, but otherwise no pronounced changes in the H$\alpha$ profiles occurred in comparison to the original ones. Thus, the regions of enhanced absorption 1, 2, and 3, already seen in the original profiles, are phase-locked and must be connected with the distribution of circumstellar matter in the system. There are two principal possible geometries of the regions responsible for the H$\alpha$ emission: either an accretion disk, the radius of which would be limited by the dimension of the Roche lobe around the hot star, $\sim 37$~\hbox{$\mathcal{R}^{\mathrm N}_\odot$}\ for our model; or a bipolar jet perpendicular to the orbital plane, known, for instance, for $\beta$~Lyr \citep{hec96}, V356~Sgr, TT~Hya, and RY~Per \citep{peters2007} or V393~Sco \citep{men2012b}, originating from the interaction of the gas stream encircling the hot star with the denser stream flowing from the Roche-lobe filling star~2 towards star~1 \citep{bis2000}. In passing we note that no secular change in the strength of the H$\alpha$ emission over the interval covered by the data could be detected. \section{Stellar evolution of HD~81357 with mass exchange} Given the rather low mass and large radius of the secondary, we wished to test whether stellar evolution with mass exchange in a binary can produce a~system similar to HD~81357. We used the binary stellar evolution program MESA \citep{pax2011,pax2015} to test a certain range of input parameters.
We tried the initial masses in the intervals $M_1 \in (1.0; 1.5)\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}$, $M_2 \in (2.2; 2.7)\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}$, and the initial binary period $P \in (2; 10)\,{\rm days}$. Hereinafter, we use the same notation as in the preceding text, so that $M_1$ is the original secondary, which becomes the primary during the process of mass exchange. The mass transfer was computed with the explicit scheme of \citet{Ritter_1988A&A...202...93R}, with the rate limited to $\dot M_{\rm max} = 10^{-7}\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}\,{\rm yr}^{-1}$, and the magnetic braking of \citet{Rappaport_1983ApJ...275..713R}, with the exponent $\gamma = 3$. For simplicity, we assumed zero eccentricity, conservative mass transfer, no tidal interactions, and no irradiation. We used the standard time step controls. An example for the initial masses $M_1 = 1.5\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}$, $M_2 = 2.2\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}$, the initial period $P = 2.4\,{\rm d}$ and the mass transfer beginning on the SGB is presented in Fig.~\ref{HRD}. We obtained a result which matches the observations surprisingly well, namely the final semimajor axis $a_{\rm syn} = 66.04\,\hbox{$\mathcal{R}^{\mathrm N}_\odot$}$, which corresponds to the period $P_{\rm syn} \doteq 32.60\,{\rm d}$ (while $P_{\rm obs} = 33.77\,{\rm d}$), the final masses $M_1 = 3.37\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}$ ($3.36\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}$), $M_2 = 0.33\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}$ ($0.34\,\hbox{$\mathcal{M}^{\mathrm N}_\odot$}$), and the maximum secondary radius $R_2 = 13.1\,\hbox{$\mathcal{R}^{\mathrm N}_\odot$}$ ($14.0\,\hbox{$\mathcal{R}^{\mathrm N}_\odot$}$); the exception is the primary radius $R_1 = 2.3\,\hbox{$\mathcal{R}^{\mathrm N}_\odot$}$ (cf. $3.9\,\hbox{$\mathcal{R}^{\mathrm N}_\odot$}$).
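As a quick consistency sketch (illustrative only, not part of the MESA run; the nominal solar values below are our assumptions), Kepler's third law applied to the final configuration recovers the quoted synthetic period to within about one per cent:

```python
import math

# nominal solar radius expressed in astronomical units (assumed values)
R_SUN_AU = 6.957e8 / 1.495978707e11

def kepler_period_days(a_rsun, m_total_msun):
    """Orbital period from Kepler's third law; a in solar radii, mass in solar masses."""
    a_au = a_rsun * R_SUN_AU
    return math.sqrt(a_au**3 / m_total_msun) * 365.25  # P[yr]^2 = a[au]^3 / M[Msun]

# final MESA configuration quoted above: a_syn = 66.04 R_sun, M1 + M2 = 3.37 + 0.33 Msun
P_syn = kepler_period_days(66.04, 3.37 + 0.33)  # ~32.3 d, cf. P_syn = 32.60 d
```

The small residual difference reflects the particular solar constants adopted.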
Alternatively, solutions can also be found for different ratios of the initial masses $M_1$, $M_2$, and later phases of mass transfer (RGB), although they are sometimes preceded by an overflow. An advantage may be an even better fit of the final~$R_1$, and a relatively longer duration of the inflated~$R_2$, which makes such systems more likely to be observed. Consequently, we interpret the secondary as a low-mass star with a still inflated envelope, close to the end of the mass transfer phase. We demonstrated that a binary with an ongoing mass transfer is a reasonable explanation for both components of the HD~81357 system. A more detailed modelling including an accretion disk, as carried out by \citet{VanRensbergen_2016A&A...592A.151V} for other Algols, would be desirable. \begin{figure} \centering \includegraphics[width=9.0cm]{hrd.pdf} \includegraphics[width=9.0cm]{rm.pdf} \includegraphics[width=9.0cm]{at.pdf} \caption{Long-term evolution of the HD~81357 binary as computed by MESA \citep{pax2015}. The initial masses were $M_1 = 1.5\,M_\odot$, $M_2 = 2.2\,M_\odot$, and the initial period $P = 2.4\,{\rm d}$. Top: the HR diagram with the (resulting) primary denoted as 1 (dashed black line) and the secondary as 2 (solid orange line) at the beginning of the evolutionary tracks (ZAMS). Middle: the radius~$R$ vs mass~$M$; the corresponding observed values are also indicated (filled circles). Bottom: the semimajor axis~$a$ of the binary orbit (green) and the radii~$R_1$, $R_2$ of the components vs time~$t$. The observed value $a = 63.95\,R_\odot$ and the time $t \doteq 7.6\cdot10^8\,{\rm yr}$ with the maximal~$R_2$ are indicated (thin dotted). Later evolutionary phases are also plotted (solid grey line). } \label{HRD} \end{figure} \section{Discussion} Quite a few similar systems with a~hot mass-gaining component have been found to exhibit cyclic long-term light and colour variations, with cycles an order of magnitude longer than the respective orbital periods \citep[cf.
a~good recent review by][]{mennic2017}, while others seem to have a~constant brightness outside the eclipses. The latter seems to be the case for HD~81357, for which we have not found any secular changes. It is not yet clear what the principal factor causing the presence of the cyclic secular changes is \citep[see also the discussion in][]{zarfin30}. A few yellow-band photoelectric observations of HD~81357 found in the literature all give 8\m3 to 8\m4. It is notable, however, that both magnitudes given in the HD catalogue are 7\m9. Future monitoring of the system brightness thus seems desirable. We note that we have not found any indication of a secular period change, and also our modelling of the system evolution and its low mass ratio $M_2/M_1$ seem to indicate that HD~81357 is close to the end of the mass exchange process. The same is also true for BR~CMi, studied by \citet{zarfin30}. We can thus conjecture that the cyclic light variations mentioned above do not occur close to the end of the mass exchange. \begin{acknowledgements} We gratefully acknowledge the use of the latest publicly available versions of the programs {\tt FOTEL} and {\tt KOREL} written by P.~Hadrava. Our sincere thanks are also due to A.~Pr\v{s}a, who provided us with a modified version of the program {\tt PHOEBE} 1.0 and frequent consultations on its usage, and to the late A.P.~Linnell for his program suite {\tt BINSYN}. We also thank J.A.~Nemravov\'a, who left the astronomical research in the meantime, for her contributions to this study and for the permission to use her Python programs, {\tt PYTERPOL} and several auxiliary ones, and to C.S.~Kochanek for the advice on the proper use of the ASAS-SN photometry. Suggestions and critical remarks by an anonymous referee helped us to re-think the paper, improve and extend the analyses, and clarify some formulations.
A.~Aret, \v{S}.~Dvo\v{r}\'akov\'a, R.~K\v{r}i\v{c}ek, J.A.~Nemravov\'a, P.~Rutsch, K.~\v{S}ejnov\'a, and P.~Zasche obtained a few Ond\v{r}ejov spectra, which we use. The research of PK was supported by the ESA PECS grant 98058. The research of PH and MB was supported by the grants GA15-02112S and GA19-01995S of the Czech Science Foundation. HB, DR, and DS acknowledge financial support from the Croatian Science Foundation under the project 6212 ``Solar and Stellar Variability'', while the research of D.~Kor\v{c}\'akov\'a is supported by the grant GA17-00871S of the Czech Science Foundation. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. We gratefully acknowledge the use of the electronic databases: SIMBAD at CDS, Strasbourg, and NASA/ADS, USA. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Black holes play an important role in string theory and recent developments (for a review, see \cite{Horowitz}) have shown that string theory makes it possible to address their microscopic properties, in particular the statistical origin of the black hole entropy and possibly issues of information loss. The starting point in such investigations is the classical black hole solution and the aim of this paper is to find such solutions in toroidally compactified string theories in dimensions $4\le D \le 9$. The general solutions are found from a particular black hole ``generating solution'', which is specified by a canonical choice of the asymptotic values of the scalar fields, the ADM mass, $[{{D-1}\over 2}]$-components of angular momentum and a (minimal) number of charge parameters. The most general black hole, compatible with the ``no-hair theorem'', is then obtained by acting on the generating solution with classical duality transformations. These are symmetries of the supergravity equations of motion, and so generate new solutions from old. They do not change the $D$-dimensional Einstein-frame metric but do change the charges and scalar fields. We first consider transformations, belonging to the maximal compact subgroup of duality transformations, which preserve the canonical asymptotic values of the scalar fields and show that all charges are generated in this way. Another duality transformation can be used to change the asymptotic values of the scalar fields. For the toroidally compactified heterotic string such a program is now close to completion. Particular examples of solutions had been obtained in a number of papers (for a recent review and references, see \cite{Horowitz}). In dimensions $D=4$, $D=5$ and $6\le D\le 9$ the generating solution has five, three and two charge parameters, respectively. 
The charge parameters of the generating solution are associated with the $U(1)$ gauge fields arising from Kaluza-Klein (momentum modes) and two-form (winding modes) sectors for at most two toroidally compactified directions. The general black hole solution is then obtained by applying to the generating solution a subset of transformations, belonging to the maximal compact subgroup of the $T$- and $S$-duality transformations \cite{SEN}. The explicit expression for the generating solution has been obtained in $D=5$ \cite{CY5r} and $D\ge 6$\cite{CYNear,Llatas}, however, in $D=4$ only the five charge static generating solution \cite{CY4s} (see also \cite{JMP}) and the four charge rotating solutions \cite{CY4r} were obtained. The BPS-saturated solutions of the toroidally compactified heterotic string have non-singular horizons only for $D=4$ static black holes \cite{CY,CTII} and $D=5$ black holes with one non-zero angular momentum component \cite{TMpl}. In $6\le D\le 9$ the BPS-saturated black holes have singular horizons with zero area. The explicit $T$- and $S$-duality invariant formulae for the area of the horizon and the ADM mass for the general BPS-saturated black holes were given for $D=4$ in \cite{CY,CTII} and for $D=5$ in \cite{TMpl}. In particular, the area of the horizon of the BPS-saturated black holes {\it does not} depend on the asymptotic values of the scalar fields \cite{LWI,CTII,S,FK}, suggesting a microscopic interpretation. The purpose of this paper is to study properties of the classical black hole solutions of {\it toroidally compactified Type II string theory or ${\rm M}$-theory}, in dimensions $4\le D\le 9$, thus completing the program for the toroidally compactified superstring vacua. We identify the minimum number of charge parameters for the generating solutions, which fully specifies the space-time metric of the {\it general} black hole in $D$-dimensions. 
The ``toroidally'' compactified sector of the heterotic string and the Neveu-Schwarz-Neveu-Schwarz (NS-NS) sector of the toroidally compactified Type II string have the {\it same} effective action and so have the same classical solutions. In this paper we shall show that the generating solutions for black holes in the toroidally compactified Type II string theory (or M-theory) are the {\it same} as those of the toroidally compactified heterotic string. Note that it could have been the case that a more general generating solution with one or more RR charges was needed. Applying $U$-duality transformations to the generating solution generates all black holes of the toroidally compactified Type II string theory (or M-theory). We further address the BPS-saturated solutions, identify the $U$-duality invariant expression for the area of the horizon, i.e. the Bekenstein-Hawking (BH) entropy, for the general BPS-saturated black holes, and outline a procedure to obtain the manifestly $U$-duality invariant mass formulae. The paper is organized as follows. In Section II we summarize the symmetries of the effective action of the toroidally compactified Type II string and obtain the general solution by applying a compact subgroup of duality transformations to the generating solution. In Section III we concentrate on general static BPS-saturated black holes in $4\le D\le 9$ and derive the $U$-duality invariant expression for the area of the horizon. In Appendix A the effective action of the NS-NS sector of the toroidally compactified Type II string is given. The explicit form of some of the generating solutions is given in Appendix B. \section{ Toroidally Compactified Type II String Theory } \subsection{Symmetries} The low-energy effective action for the Type II string or M-theory, toroidally compactified to $D$ dimensions, is the maximal supergravity theory, which has a continuous duality symmetry $U$ of its equations of motion \cite{CJ} (see Table I, first column).
This has a maximal compact subgroup $C_U$ (second column in Table I). In the quantum theory the continuous classical symmetry $U$ is broken to a discrete subgroup ${ Q}_{ U}$ \cite{HT} (third column in Table I) which is the $U$-duality symmetry of the string theory. However, we will sometimes refer to the group $U$ as the classical $U$-duality. \subsection{Solution Generating Technique} The general black hole solution is obtained by acting on generating solutions with $U$-duality transformations. The scalar fields take values in the coset $U/C_U$ and can be parameterised by a $U$-valued matrix ${\cal V}(x)$ which transforms under rigid $U$-transformations from the right and local $C_U$ transformations from the left \cite{CJ}. The Kaluza-Klein and antisymmetric tensor $U(1)$ gauge fields also transform under $U$. It is convenient to define ${\cal M}={\cal V}^t{\cal V}$ which is inert under $C_U$ and transforms under $U$ as ${\cal M}{} \to { \Omega} {\cal M}{}{ \Omega}^T $ (${\Omega} \in { U}$). The asymptotic value ${\cal M}_{\infty} $ of ${\cal M}$ can be brought to the canonical value ${\cal M}_{0\, \infty}= {\bf 1}$ by a suitable $U$-duality transformation $\Omega_0$. The canonical value ${\cal M}_{0\, \infty}$ is preserved by $C_U$ and the most general solution with the asymptotic behaviour ${\cal M}_{\infty} ={\cal M}_{0\, \infty}$ is obtained by acting on the generating solution with a subset of $C_U$ transformations, i.e. the $C_U$ orbits which are of the form $C_U/C_0$ where $C_0$ is the subgroup preserving the generating solution. In particular, with this procedure the complete set of charges is obtained. Indeed, the generating solution is labelled by $n_0$ charges ($n_0=5,3,2$ for $D=4,5,\ge 6$, respectively) and if the dimension of the $C_U$ orbits is $n_1$, then $n_0+n_1$ is the correct dimension of the vector space of charges for the general solution, as we shall check in the following Section.
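The counting $n_0+n_1$ can be tabulated explicitly. The following sketch (an illustration added here, using the orbit cosets $C_U/C_0$ listed in the following Section) checks that the generating charges plus the orbit dimensions reproduce the full number of charges in each dimension:

```python
# (n0, dim C_U, dim C_0, total number of charges) for each D;
# C_U and the stabiliser C_0 are the groups given in the next Section.
charge_count = {
    4: (5, 63, 12, 56),  # SU(8),          SO(4)_L x SO(4)_R
    5: (3, 36, 12, 27),  # USp(8),         SO(4)_L x SO(4)_R
    6: (2, 20,  6, 16),  # SO(5) x SO(5),  SO(3)_L x SO(3)_R
    7: (2, 10,  2, 10),  # SO(5),          SO(2)_L x SO(2)_R
    8: (2,  4,  0,  6),  # SO(3) x U(1),   trivial stabiliser
    9: (2,  1,  0,  3),  # U(1),           trivial stabiliser
}

def check_counting(D):
    n0, dim_cu, dim_c0, n_total = charge_count[D]
    n1 = dim_cu - dim_c0  # dimension of the orbit C_U / C_0
    return n0 + n1 == n_total

all_dimensions_ok = all(check_counting(D) for D in charge_count)
```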
Black holes with arbitrary asymptotic values of scalar fields ${\cal M}_{\infty}$ can then be obtained from these by acting with $\Omega_0$. We shall seek the general charged rotating black hole solutions. In $D=4$, such solutions are specified by electric {\it and} magnetic charges, while in $D>4$ they carry electric charges only (once all $(D-3)$-form gauge fields have been dualised to vector fields). \section{Black Holes in Various Dimensions} We will first propose generating solutions for Type II string (or M-theory) black holes in dimensions $4\le D\le 9$ and then go on to show that the action of duality transformations generates all solutions. Remarkably, the generating solutions are the same as those used for the heterotic string. We will discuss only the charge assignments here, and give the explicit solutions in Appendix B. \subsection{Charge Assignments for the Generating Solution}\label{cgs} \subsubsection{D=4} The generating solution is specified in terms of {\it five} charge parameters. It is convenient to choose these to arise in the NS-NS sector of the compactified Type II string as follows. We choose two of the toroidal dimensions labelled by $i=1,2$ and let $A_{\mu\, i}^{(1)}$ be the two graviphotons (corresponding to $G_{\mu i}$) and $A_{\mu\, i}^{(2)}$ the two $U(1)$ gauge fields coming from the antisymmetric tensor (corresponding to $B_{\mu i}$) (see Appendix A). Corresponding to these four $U(1)$ gauge fields there are four electric charges $Q^{(1),(2)}_i$ and four magnetic ones $P^{(1),(2)}_i$. The generating solution, however, carries the following five charges: $ Q_1\equiv Q_1^{(1)},\ Q_2\equiv Q_1^{(2)}, \ P_1\equiv P_2^{(1)}, P_2\equiv P_2^{(2)}$ and $q\equiv Q_2^{(1)}=-Q_2^{(2)}$. It will be useful to define the left-moving and right-moving charges $Q_{i\, L,R}\equiv Q_i^{(1)}\mp Q_i^{(2)}$ and $P_{i\, L,R}\equiv P_i^{(1)}\mp P_i^{(2)}$ ($i=1,2$).
The generating solution then carries five charges associated with the first two compactified toroidal directions of the NS-NS sector, where the dyonic charges are subject to the constraint ${\vec P}_R{\vec Q}_R=0$. We choose the convention that all the five charge parameters are positive. \subsubsection{D=5} In $D=5$ the generating solution is parameterised by three (electric) charge parameters: $Q_1\equiv Q_1^{(1)}, \ Q_2\equiv Q_1^{(2)},$ and $ {\tilde Q}$. Here the electric charges $Q_i^{(1),(2)}$ arise respectively from the graviphoton $A_{\mu\, i}^{(1)}$ and antisymmetric tensor $A_{\mu\, i}^{(2)}$ $U(1)$ gauge fields of the $i$-th toroidally compactified direction of the NS-NS sector, and $\tilde Q$ is the electric charge of the gauge field, whose field strength is related to the field strength of the two-form field $B_{\mu\nu}$ by a duality transformation (see Appendix A). Again we choose the convention that all three charges are positive. \subsubsection{$6\le D\le 9$} In $6\le D\le 9$ the generating solution is parameterised by two electric charges: $Q_1\equiv Q_1^{(1)}, \ Q_2\equiv Q_1^{(2)}$. Again, the electric charges $Q_i^{(1),(2)}$ arise respectively from the graviphoton $A_{\mu\, i}^{(1)}$ and antisymmetric tensor $A_{\mu\, i}^{(2)}$ $U(1)$ gauge fields of the $i$-th toroidally compactified direction and we use the convention that both charges are positive. Note that the explicit form of the generating solutions with the above charge assignments is the {\it same} as the one of the toroidally compactified heterotic string, since the corresponding NS-NS sector of the toroidally compactified Type II string and the ``toroidal'' sector of the heterotic string are the same. \subsection{Action of Duality Transformations on Generating Solution} \subsubsection{D=4} The $N=8$ supergravity has 28 abelian gauge fields and so the general black hole solution carries 56 charges (28 electric and 28 magnetic).
The ${ U}$-duality group is $E_{7(7)}$, the maximal compact subgroup ${ C_U}$ is $SU(8)$ and the $T$-duality subgroup is $SO(6,6)$. We use the formulation with rigid $E_7$ symmetry and local $SU(8)$ symmetry \cite{CJ}. The 56 charges fit into a vector ${\cal Z}$ transforming as a {\bf 56} of $E_7$. In the quantum theory, ${\cal Z}$ is constrained to lie in a lattice by charge quantisation \cite{HT}. This ``bare'' charge vector can be ``dressed'' with the asymptotic value ${\cal V}_\infty$ of the scalar field matrix ${\cal V}$ to form \begin{equation} \bar {\cal Z}={\cal V}_\infty{\cal Z}= \left ( \matrix{ q^{ab}\cr p_{ab}\cr} \right ) \ , \label{zv} \end{equation} which is invariant under $E_7$ but transforms under local $SU(8)$. The 28 electric and 28 magnetic dressed charges are $q_{ab}$ and $p_{ab}$ ($a,b=1, \cdots , 8$ and $q_{ab}=-q_{ba}$, $p_{ab}=-p_{ba}$). They can be combined to form the $Z_{4\,AB}$ matrix ($A,B=1, \cdots , 8$ are $SU(8)$ indices) transforming as the complex antisymmetric representation of $SU(8)$, by defining $ Z_{4\,AB}= (q^{ab}+ip_{ab}) (\Gamma^{ab})^B{}_A$ where $(\Gamma^{ab})^B{}_A$ are the generators of $SO(8)$ in the spinor representation \cite{CJ}. The matrix $Z_{4\,AB}$ appears on the right-hand side of the anticommutator of chiral two-component supercharges \begin{equation} \{Q_{A\alpha}, Q_{B\beta}\}= C_{\alpha\beta} Z_{4\,AB} \ , \end{equation} and thus corresponds to the matrix of 28 complex central charges. An $SU(8)$ transformation $ Z_4\rightarrow Z^0_4=({\cal U} Z_4{\cal U}^T)$ brings this charge matrix to the skew-diagonal form: \begin{equation} Z^0_4= \left ( \matrix{ 0& \lambda_1&0&0&0&0&0&0 \cr -\lambda_1&0&0&0&0&0&0&0 \cr 0&0&0& \lambda_2&0&0&0&0 \cr 0&0&-\lambda_2&0&0&0&0&0\cr 0&0&0&0&0& \lambda_3&0&0\cr 0&0&0&0&-\lambda_3&0&0&0\cr 0&0&0&0&0&0&0& \lambda_4\cr 0&0&0&0&0&0&-\lambda_4&0\cr} \right )\ , \label{diag4} \end{equation} where the complex $\lambda_{i}$ ($i=1,2,3,4$) are the skew eigenvalues.
For the generating solution with the five charge parameters $Q_{1,2}$, $P_{1,2}$ and $q$ (see Subsection \ref{cgs}) the eigenvalues are \footnote{For the four charges $Q_{1,2}$ and $P_{1,2}$ the eigenvalues were given in \cite{KK}.}: \begin{eqnarray} \lambda_1&=&Q_{1R} + P_{2R},\cr \lambda_2&=&Q_{1R} -P_{2R},\cr \lambda_3&=&Q_{1L} + P_{2L}+2iq,\cr \lambda_4&=&Q_{1L} - P_{2L}-2iq, \label{fivepar} \end{eqnarray} (recall $Q_{1\, L,R}\equiv Q_1\mp Q_2$ and $P_{2\, L,R}\equiv P_1\mp P_2$). We now consider the action of duality transformations on the generating solution and show that all $D=4$ black hole solutions are indeed generated. The $U$-duality group $E_7$ has a maximal subgroup $SO(6,6)\times SL(2,{\bf R})$ where $SO(6,6)$ is the $T$-duality group and $ SL(2,{\bf R})$ is the $S$-duality group. (Strictly speaking, the duality groups are discrete subgroups of these.) The {\bf 56} representation of $E_7$ decomposes as \begin{equation} {\bf 56} \to (12,2) \oplus (32,1) \end{equation} under $SO(6,6)\times SL(2,{\bf R})$ and thus the 56 charges ${\cal Z}$ decompose into $12$ electric and $12$ magnetic charges in the NS-NS sector, and 32 charges in the Ramond-Ramond (RR) sector. We choose for now the asymptotic value of the scalars to be the canonical one, i.e. ${\cal V}_\infty={\cal V}_{0\infty}\equiv {\bf 1}$. The maximal compact symmetry of the $T$-duality group is $SO(6)_L\times SO(6)_R\sim SU(4)_L\times SU(4)_R$, and under $ SU(4)_L\times SU(4)_R\subset SU(8)$ the complex representation ${\bf 28}$ decomposes into the complex representations $(6,1)+(1,6)+(4,4)$. This decomposition corresponds to splitting the $8\times 8$ matrix of charges $Z_4$ into $4\times 4$ blocks. The two $4\times 4$ diagonal blocks $Z_R$ and $Z_L$ transform respectively as the antisymmetric complex representations of $SU(4)_{R,L}\sim SO(6)_{R,L}$ and represent the $12+12$ real charges of the NS-NS sector. The off-diagonal blocks correspond to the 16 complex RR charges.
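As a small numerical illustration (a sketch added here, not part of the original derivation), the skew-diagonal matrix (\ref{diag4}) can be built from the five charges via (\ref{fivepar}); its singular values then come in the pairs $|\lambda_i|$, as expected for a skew-diagonalised complex antisymmetric matrix:

```python
import numpy as np

def eigenvalues_fivepar(Q1, Q2, P1, P2, q):
    """Skew eigenvalues lambda_i of eq. (fivepar); Q_{1L,R} = Q1 -+ Q2, P_{2L,R} = P1 -+ P2."""
    Q1R, Q1L = Q1 + Q2, Q1 - Q2
    P2R, P2L = P1 + P2, P1 - P2
    return np.array([Q1R + P2R, Q1R - P2R,
                     Q1L + P2L + 2j * q, Q1L - P2L - 2j * q])

def skew_diagonal(lams):
    """8x8 skew-diagonal central charge matrix Z_4^0 of eq. (diag4)."""
    Z = np.zeros((8, 8), dtype=complex)
    for i, lam in enumerate(lams):
        Z[2 * i, 2 * i + 1], Z[2 * i + 1, 2 * i] = lam, -lam
    return Z

lams = eigenvalues_fivepar(3.0, 1.0, 2.0, 1.0, 1.0)  # sample (illustrative) charges
Z = skew_diagonal(lams)
assert np.allclose(Z, -Z.T)  # antisymmetry of the central charge matrix
```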
The maximal compact subgroup of $SO(6,6)\times SL(2,{\bf R})$ is $SO(6)_L\times SO(6)_R\times SO(2)$ and it preserves ${\cal V}_{0\infty}$. The subgroup that preserves the charges of the generating solution is $SO(4)_L\times SO(4)_R$. Thus acting on the generating solution with $SO(6)_L\times SO(6)_R\times SO(2)$ gives orbits corresponding to the 19-dimensional space \begin{equation} {SO(6)_L\times SO(6)_R \over SO(4)_L\times SO(4)_R} \times SO(2)\ . \label{osp} \end{equation} As the generating solution has five charges, acting on the generating solution with $SO(6)_L\times SO(6)_R\times SO(2)$ gives the required $5+19=24$ NS-NS charges, i.e. the 24 NS-NS charges are parameterised in terms of the five charges of the generating solution and the 19 coordinates of the orbit space (\ref{osp}). The above procedure is closely related to that for $D=4$ toroidally compactified heterotic string vacua \cite{CY,CTII}, where the general black hole with the $5+51=56$ charges is obtained from the same five-parameter generating solution and the 51 coordinates of the orbit \begin{equation} {SO(22)_L\times SO(6)_R\over SO(20)_L\times SO(4)_R} \times SO(2) \ . \end{equation} We can now generalise this procedure to include the RR charges. The group $C_U=SU(8)$ preserves the canonical asymptotic values of the scalar fields and only the subgroup $SO(4)_{L}\times SO(4)_{R}$ leaves the generating solution invariant. Then acting with $SU(8)$ gives orbits \begin{equation} SU(8)/[SO(4)_L\times SO(4)_R] \label{4du} \end{equation} of dimension $63-6-6=51$. The $SU(8)$ action then induces $51$ new charge parameters, which along with the original five parameters provide charge parameters for the general solution with 56 charges. Finally, the general black hole with arbitrary asymptotic values of the scalars is obtained from these 56-parameter solutions by acting with an $E_7$ transformation.
This transformation leaves the central charge matrix $Z_4$ and its eigenvalues $\lambda_i$ invariant, but changes the asymptotic values of scalars and ``dresses'' the physical charges. The orbits under $E_7$ are the 70-dimensional coset $E_7/SU(8)$, as expected. The fact that the same five-parameter generating solution that was used for the $D=4$ toroidally compactified heterotic string should be sufficient to generate all black holes with NS-NS charges of the $D=4$ toroidally compactified Type II string is unsurprising, given the equivalence between the ``toroidal'' sector of the heterotic string and the NS-NS sector of the Type II string. However, it is interesting that the procedure outlined above is also sufficient to generate {\it all} RR charges of the general black hole solution, as it could have been the case that a more general generating solution carrying one or more RR charges was needed. \subsubsection{D=5} The ${ U}$-duality group is $E_{6(6)}$, the maximal compact subgroup ${ C_U}$ is $USp(8)$ and the $T$-duality group is $SO(5,5)$ with its maximal compact subgroup $SO(5)_L\times SO(5)_R$. In this case there are 27 abelian gauge fields and the 27 electric charges (dressed with asymptotic values of the scalar fields) transform as a {\bf 27} of $USp(8)$ and can be assembled into an $8\times 8$ matrix $Z_{5\,AB}$ ($A,B=1,\dots, 8$) with the properties \cite{Cremmer}: \begin{equation} Z_5^{AB\, *}=\Omega^{AC}\Omega^{BD} Z_{5\,CD}, \ \ \Omega^{AB}Z_{5\,AB}=0 , \label{Z5} \end{equation} where $\Omega$ is the $USp(8)$ symplectic invariant, which we take to be \begin{eqnarray} \Omega= \left ( \matrix{ 0&1&0&0&0&0&0&0 \cr -1&0&0&0&0&0&0&0 \cr 0&0&0&1&0&0&0&0 \cr 0&0&-1&0&0&0&0&0\cr 0&0&0&0&0& 1&0&0\cr 0&0&0&0&-1&0&0&0\cr 0&0&0&0&0&0&0& 1\cr 0&0&0&0&0&0&-1&0\cr} \right ) .
\label{explo}\end{eqnarray} With $\Omega$ given by (\ref{explo}), the $Z_5$ charge matrix can be written in the following form: \begin{eqnarray} Z_5= \left ( \matrix{ 0& z_{12}& z_{13}&z_{14}& \cdots \cr -z_{12}&0&-z_{14}^*&z_{13}^*&\cdots \cr -z_{13}^*&z_{14}&0&z_{34}&\cdots\cr -z_{14}^*&-z_{13}&-z_{34}&0&\cdots\cr \cdots&\cdots&\cdots&\cdots&\cdots} \right ). \label{explZ0}\end{eqnarray} Here $z_{12},\ z_{34}, \ z_{56}$ are real and satisfy $ z_{12}+ z_{34}+ z_{56}=0$. The matrix $Z_5$ occurs in the superalgebra and represents the 27 (real) central charges. It can be brought into a skew-diagonal form of the type (\ref{diag4}) by a $USp(8)$ transformation $ Z_5\rightarrow Z^0_5=({\cal U} Z_5 {\cal U}^T)$. The four real eigenvalues $\lambda_i$ are subject to the constraint: $\sum_{i=1}^4\lambda_i=0$. The generating solution is parameterised by three charges $Q_1\equiv Q_1^{(1)},\ Q_2\equiv Q_1^{(2)}$ and $\tilde Q$ (see Subsection \ref{cgs}). The four (constrained, real) eigenvalues $\lambda_i$ are then \begin{eqnarray} \lambda_1&=& \tilde Q+Q_{1R},\cr \lambda_2&=&\tilde Q-Q_{1R},\cr \lambda_3&=&-\tilde Q+Q_{1L},\cr \lambda_4&=&-\tilde Q-Q_{1L}, \label{threepar} \end{eqnarray} where $Q_{1\, L,R}\equiv Q_1\mp Q_2$. These indeed satisfy the constraint $\sum_{i=1}^4\lambda_i=0$. The three-parameter solution is thus the generating solution for black holes in $D=5$. The $USp(8)$ duality transformations preserve the canonical asymptotic values of the scalar fields and the subgroup $SO(4)_L\times SO(4)_R[\subset SO(5)_L\times SO(5)_R] \subset USp(8)$ preserves the generating solution. Acting with $USp(8)$ on the generating solution gives orbits \begin{equation} USp(8)/[SO(4)_L\times SO(4)_R]\ , \label{5du} \end{equation} and thus induces $36 -4\times 3=24$ new charge parameters, which along with the original three charge parameters provide 27 electric charges for the general solution in $D=5$.
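A similarly minimal numerical check (illustrative only, with arbitrary sample charges) confirms that the eigenvalues (\ref{threepar}) obey $\sum_i\lambda_i=0$, which for the skew-diagonal form is the tracelessness condition $\Omega^{AB}Z_{5\,AB}=0$ of eq. (\ref{Z5}):

```python
import numpy as np

def d5_eigenvalues(Q1, Q2, Qtilde):
    """Eigenvalues of eq. (threepar); Q_{1L,R} = Q1 -+ Q2."""
    Q1R, Q1L = Q1 + Q2, Q1 - Q2
    return np.array([Qtilde + Q1R, Qtilde - Q1R, -Qtilde + Q1L, -Qtilde - Q1L])

def omega_trace(lams):
    """Omega^{AB} Z_{AB} for the skew-diagonal Z_5 built from lams (equals 2*sum(lams))."""
    Z = np.zeros((8, 8))
    Om = np.zeros((8, 8))
    for i, lam in enumerate(lams):
        Z[2 * i, 2 * i + 1], Z[2 * i + 1, 2 * i] = lam, -lam
        Om[2 * i, 2 * i + 1], Om[2 * i + 1, 2 * i] = 1.0, -1.0
    return float(np.sum(Om * Z))

lams = d5_eigenvalues(5.0, 2.0, 3.0)  # sample charges
assert abs(lams.sum()) < 1e-12 and abs(omega_trace(lams)) < 1e-12
```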
\subsubsection{D=6} The ${ U}$-duality group is $SO(5,5)$, the maximal compact subgroup ${ C_U}$ is $SO(5)\times SO(5)$ and the $T$-duality group is $SO(4,4)$ which has maximal compact subgroup $SO(4)_L\times SO(4)_R\sim [SU(2)\times SU(2)]_L\times [SU(2)\times SU(2)]_R $. There are 16 abelian vector fields and the bare charges ${\cal Z}$ transform as a {\bf 16} (spinor) of $SO(5,5)$. The dressed charges transform as the $(4,4)$ representation of $SO(5)\times SO(5)$ and can be arranged into a $4 \times 4 $ charge matrix $Z_6$. Under $ [SU(2)\times SU(2)]_L\times [SU(2)\times SU(2)]_R\subset SO(5)\times SO(5)$ the $(4,4)$ decomposes into $(2,2,1,1) +(1,1,2,2)+(1,2,2,1)+(2,1,1,2)$. This decomposition corresponds to splitting the $4\times 4$ matrix of charges $Z_6$ into $2\times 2$ blocks. The two $2\times 2$ diagonal blocks $Z_R$ and $Z_L$ transform respectively as the $(2,2,1,1)$ and $(1,1,2,2)$ representations of $ [SU(2)\times SU(2)]_L\times [SU(2)\times SU(2)]_R$, representing the $4+4$ charges of the NS-NS sector. The off-diagonal blocks correspond to the $(1,2,2,1)+(2,1,1,2)$ representations of $ [SU(2)\times SU(2)]_L\times [SU(2)\times SU(2)]_R $ and represent 8 RR charges. The matrix $Z_6$ occurs in the superalgebra and represents the 16 (real) central charges. It can be brought into a skew-diagonal form of the type (\ref{diag4}) by an $SO(5)\times SO(5)$ transformation $ Z_6\rightarrow Z^0_6=({\cal U} Z_6 {\cal U}^T)$ with the two eigenvalues $\lambda_i$. The generating solution is parameterised by two charges $Q_1\equiv Q_1^{(1)}, \ Q_2\equiv Q_1^{(2)}$ (see Subsection \ref{cgs}). The two eigenvalues $\lambda_i$ are then \begin{eqnarray} \lambda_1&=& Q_{1R},\cr \lambda_2&=&Q_{1L}, \label{twopar} \end{eqnarray} where again $Q_{1\, L,R}\equiv Q_1\mp Q_2$.
The generating solution is preserved by $SO(3)_L\times SO(3)_R [\subset SO(4)_L\times SO(4)_R]\subset SO(5)\times SO(5)$ so acting with $SO(5)\times SO(5)$ gives \begin{equation} [SO(5)\times SO(5)]/[SO(3)_L\times SO(3)_R] \label{6dtr} \end{equation} orbits, and thus introduces $2 (10-3)=14$ charge parameters, which along with the two charges ($Q_{1,2}$) of the generating solution provide the 16 charge parameters of the general solution in $D=6$. \subsubsection{D=7} The ${ U}$-duality group is $SL(5,{\bf R})$, the maximal compact subgroup ${ C_U}$ is $SO(5)$ and the $T$-duality group is $SO(3,3)$ with its maximal compact subgroup $SO(3)_L\times SO(3)_R$. There are ten abelian vector fields and the ten bare electric charges transform as the $\bf 10$ representation of $SL(5, {\bf R})$. Dressing of these with asymptotic values of scalars gives the ten central charges which are inert under $SL(5, {\bf R})$ but transform as a $\bf 10$ under $SO(5)$. The dressed charges can be assembled into a real antisymmetric $5\times 5$ charge matrix $Z_{7ij}$, which appears in the superalgebra as the $4 \times 4$ central charge matrix $Z_{7AB}={1 \over 2}Z_{7ij}{\gamma ^{ij}}_{AB}$ where $ {\gamma ^{ij}}_{AB}$ are the generators of $SO(5)$ in the spinor ({\bf 4}) representation. The matrix $Z_{7ij}$ has two real skew eigenvalues, $\lambda_1, \lambda_2$, which for the generating solution correspond to the two charges $Q_{1\, L,R}$. The subgroup $SO(2)_L\times SO(2)_R\subset SO(3)_L\times SO(3)_R\subset O(5)$ preserves the generating solution, so that the action of $SO(5)$ gives orbits \begin{equation} SO(5)/[SO(2)_L\times SO(2)_R]\ , \label{6du} \end{equation} thus introducing $10-2=8$ charge parameters, which together with the two charges ($Q_{1,2}$) of the generating solution (see Subsection \ref{cgs}) provide the ten charges of the general solution in $D=7$. 
\subsubsection{D=8} The ${ U}$-duality group is $SL(3,{\bf R})\times SL(2, {\bf R})$, the maximal compact subgroup ${C_U}$ is $SO(3)\times U(1)$ and the $T$-duality group is $SO(2,2)$ with maximal compact subgroup $SO(2)_L\times SO(2)_R$. There are six abelian gauge fields and the six bare electric charges transform as $({\bf 3},{\bf 2})$ under $SL(3, {\bf R})\times SL(2,{\bf R})$. No $C_U$ transformations preserve the generating solution, so that the orbits are \begin{equation} C_U=SO(3)\times U(1)\ , \label{7du} \end{equation} and the $C_U$ transformations introduce $3+1=4$ charge parameters, which along with the two charges of the generating solution provide the six charges of the general solution in $D=8$. \subsubsection{D=9} The ${U}$-duality group is $SL(2,{\bf R})\times {\bf R}^+$ and the maximal compact subgroup ${ C_U}$ is $U(1)$. There are three abelian gauge fields and the three bare electric charges transform as $({\bf 3}, {\bf 1})$ under $SL(2,{\bf R})\times {\bf R}^+$. The action of \begin{equation} C_U=U(1), \label{9du} \end{equation} introduces one charge parameter, which along with the two charges of the generating solution provides the three charges of the general solution in $D=9$. \section{Entropy and Mass of BPS-Saturated Static Black Holes} We now study the properties of static BPS-saturated solutions. In the preceding Section we identified the charge assignments for the generating solutions, which fully specify the space-time of the general black hole solution in $D$ dimensions for the toroidally compactified Type II string (or M-theory) vacuum. The explicit form of these solutions has been given in the literature (with the exception of the rotating five-charge solution in $D=4$). The static generating solutions are given in Appendix B, where the area of the horizon for the BPS-saturated (as well as the non-extreme) solutions is also calculated explicitly.
In addition to static BPS-saturated solutions, we shall also consider near-BPS-saturated solutions in $4\le D\le 9$. \subsection{The Bogomol'nyi Bound} Consider first the $D=4$ case. Standard arguments \cite{GH} imply that the ADM mass $M$ is bounded below by the moduli of the eigenvalues $\lambda _i$ (\ref{diag4}) of the central charge matrix $Z_4$, i.e. $M \ge |\lambda _i|$, $i=1,...,4$. Without loss of generality the eigenvalues can be ordered in such a way that $|\lambda_i|\ge |\lambda_j|$ for $j\ge i$ . If $M$ is equal to $|\lambda _1|=\cdots =|\lambda_p|$, the solution preserves ${p\over 8}$ of $N=8$ supersymmetry\footnote{For a related discussion of the number of preserved supersymmetries see \cite{Kal,KK}.}. For example, if $M=|\lambda _1|>|\lambda _{2,3,4}|$ then ${\textstyle{1\over 8}}$ of the supersymmetry is preserved, while for $M=|\lambda _1|=|\lambda _2|=|\lambda _3|=|\lambda _4|$, ${\textstyle{1\over 2}}$ of the supersymmetry is preserved. The eigenvalues $\lambda _i$ are each invariant under $E_7$ and $SU(8)$. The physical quantities such as the Bekenstein-Hawking (BH) entropy and the ADM mass of BPS-saturated black holes, can then be written in terms of these quantities, which depend on both the bare charges $\cal Z$ and the asymptotic values of the scalar fields parameterised by ${\cal V}_{\infty}$. However, there are special combinations of these invariants for which the dependence on the asymptotic values of scalar fields drops out. In particular, such combinations play a special role in the BH entropy for the BPS-saturated black hole solutions as discussed in the following Subsection. Similar comments apply in $D>4$. \subsection{Bekenstein-Hawking Entropy} The BH entropy is defined as $S_{BH}= {\textstyle{1\over {4G_D}}} A_{h}$ where $G_D$ is the $D$-dimensional Newton's constant and $A_{h}$ is the area of the horizon. Since the Einstein metric is duality invariant, geometrical quantities such as $A_{h}$ should be too. 
Thus it should be possible to write $A_{h}$ in terms of duality invariant quantities such as the eigenvalues $\lambda _i$, the ADM mass (or the ``non-extremality'' parameter $m$ defined in Appendix B) and the angular momentum components. However, in the case of BPS-saturated black holes the BH entropy is, in addition, {\it independent} of the asymptotic values of scalar fields. This property was pointed out in \cite{LWI}, and further exhibited for the general BPS-saturated solutions of toroidally compactified heterotic string vacua in $D=4$ \cite{CTII} and $D=5$ \cite{CY5r}, as well as for certain BPS-saturated black holes of $D=4$ $N=2$ superstring vacua in \cite{FKS,S}. This property was explained in terms of ``supersymmetric attractors'' in \cite{FK}. When the above arguments are applied to BPS-saturated black holes of the toroidally compactified Type II string (or M-theory), the BH entropy can be written in terms of a $U$-duality invariant combination of {\it bare} charges alone, thus implying that only a very special combination of bare charges can appear in the BH entropy formula. One is then led to the remarkable result that the entropy must be given in terms of the quartic invariant of $E_7$ in $D=4$ and the cubic invariant of $E_6$ in $D=5$, as these are the only possible $U$-invariants of bare charges.\footnote{In $D=3$ there is a unique quintic $E_{8(8)}$-invariant which should play a similar role for $D=3$ black hole solutions.} This fact was first pointed out in \cite{KK} and \cite{FK}, respectively (and independently in \cite{CHU}). It has been checked explicitly for certain classes of $D=4$ BPS-saturated black holes \cite{KK}. Below we will extend the analysis to general BPS-saturated black holes in $D=4,5$.
For $6\le D\le 9$ there are {\it no} non-trivial $U$-invariant quantities that can be constructed from the bare charges alone, in agreement with the result that there are no BPS-saturated black holes with non-singular horizons and finite BH entropy in $D\ge 6$, as has been shown explicitly in \cite{KT,CYNear}. \subsubsection{D=4} The five-parameter static generating solution has the following BH entropy \cite{CTII}: \begin{equation} S_{BH}= 2\pi\sqrt{Q_1Q_2P_1P_2-{\textstyle{1\over 4}}q^2(P_1+P_2)^2}. \label{4ent} \end{equation} We shall now show that (\ref{4ent}) can be rewritten in terms of the $E_7$ quartic invariant (of bare charges). The quartic $E_{7(7)}$ invariant $J_4$, constructed from the charge matrix $Z_{4\,AB}$, is \cite{CJ}: \begin{eqnarray} {J_4}&= &{\rm Tr}({ Z_4}^\dagger{ Z_4})^2-{\textstyle{1\over 4}} ({\rm Tr}Z_4^\dagger{ Z_4})^2+\cr & &{\textstyle {1\over {96}}}(\epsilon_{ABCDEFGH}{ Z_4}^{AB\, *}{ Z_4}^{CD\, *}{ Z_4}^{EF\, *} { Z_4}^{GH\, *}+\epsilon^{ABCDEFGH}Z_{4\, AB}Z_{4\, CD} Z_{4\, EF} Z_{4\, GH})\ , \label{quartic} \end{eqnarray} which can be written in terms of the skew-eigenvalues $\lambda _i$ by substituting the skew-diagonalised matrix $Z^0_{4}$ (\ref{diag4}) in (\ref{quartic}) to give (as in \cite{KK}): \begin{eqnarray} {J_4}&=&\sum_{i=1}^4 |\lambda_i|^4 -2\sum_{j>i=1}^4 |\lambda_i|^2|\lambda_j|^2\cr &+&4({ \lambda}_1^*{\lambda}_2^*{\lambda}_3^*{ \lambda}_4^*+ \lambda_1\lambda_2\lambda_3\lambda_4)\ . \label{diagfour} \end{eqnarray} For the five parameter generating solution, the $\lambda _i$ are given by (\ref{fivepar}), so that (\ref{diagfour}) becomes: \begin{eqnarray} {J_4}&=&16[(Q_{1R}^2-Q_{1L}^2)(P_{2R}^2-P_{2L}^2)-4P_{2R}^2q^2]\cr &=&16^2[Q_1Q_2P_1P_2-{\textstyle{1\over 4}}q^2(P_1+P_2)^2] \ . \label{fourfivepar} \end{eqnarray} Comparing with (\ref{4ent}), we learn that for the five-parameter generating solution the BH entropy is given by \begin{equation} S_{BH}={\pi\over 8}\sqrt{J_4}. 
\label{4Jent} \end{equation} This result generalises the one in \cite{KK}, where the result for the four-parameter solution with $q=0$ was established. Acting on the generating solution with $SU(8)$ transformations to generate the general charged black hole, and then with a $E_7$ transformation to generate the solution with general asymptotic values of scalar fields, leaves the BH entropy (\ref{4Jent}) invariant, since $J_4$ is invariant. As the dressing of the charges is by an $E_7$ transformation, i.e. ${\bar {\cal Z}} ={{\cal V}_\infty {\cal Z}}$, the dependence on the asymptotic values of scalar fields ${\cal V}_\infty$ drops out of the BH entropy, which thus can be written in terms of the bare charges alone, as expected. \subsubsection{D=5} The BH entropy of the three-parameter static BPS-saturated generating solution is \cite{VS,TMpl,CY5r}: \begin{equation} S_{BH}= 2\pi\sqrt{Q_1Q_2\tilde Q} \ . \label{5ent} \end{equation} The cubic $E_{6(6)}$ invariant $J_3$ constructed from the charge matrix $Z_{5\,AB}$ is \cite{Cremmer}: \begin{equation} {J_3}=-\sum_{A,B,C,D,E,F=1}^{8}\Omega^{AB} Z_{5\,BC} \Omega^{CD} Z_{5\,DE} \Omega^{EF} Z_{5\,FA} \ . \label{cubic} \end{equation} By transforming $Z_5$ to a skew-diagonal matrix $Z^0_{5}$, given in terms of the four constrained real eigenvalues $\lambda_i$ (\ref{threepar}), and using (\ref{explo}) for $\Omega_{AB}$, the cubic form $J_3$ can be written as \begin{equation} {J_3}=2\sum_{i=1}^4 \lambda_i^3. \label{diagfive} \end{equation} For the (three charge parameter) generating solution the eigenvalues are (\ref{threepar}) so that (\ref{diagfive}) is: \begin{eqnarray} {J_3}&=&12(Q_{1R}^2-Q_{1L}^2){\tilde Q}\cr &=&48Q_1Q_2\tilde Q, \label{Jfivegen} \end{eqnarray} which together with (\ref{5ent}) implies \begin{equation} S_{BH}=\pi\sqrt{{\textstyle{1\over {12}}}J_3}. 
\label{5Jent} \end{equation} This result gives the entropy in terms of the cubic invariant for the generating solution, and so, by $U$-invariance, for all charged static BPS-saturated $D=5$ black holes, as conjectured in \cite{FK,Pol,CHU}. \subsubsection{$6 \le D \le 9$} A BPS-saturated black hole in $D\ge 6$ dimensions should have a horizon area that is an invariant under $U$-duality constructed from the bare charges alone, involving no scalars. This would involve, for example, constructing an $SO(5,5)$ singlet from tensor products of charges transforming as a chiral spinor {\bf 16} of $SO(5,5)$ in $D=6$ and constructing singlets of $SL(5,{\bf R})$ from tensor products of charges transforming as a {\bf 10}. There are no such non-trivial singlets in $D\ge 6$, so that the only invariant result for the area is zero, which is precisely what is found (see Appendix B). Indeed, the generating solution for BPS-saturated solution in $D\ge 6$ has {\it zero} horizon area \cite{CYNear} and so zero BH entropy. \subsubsection{Entropy of Non-Extreme Black Holes} We now comment on the BH entropy of non-extreme black holes, in particular, static black holes in $6\le D\le 9$. (For another approach to address the BH entropy of non-extreme black holes, where additional auxiliary charges are introduced, see \cite{HMS,KR}.) The non-extreme generating solutions are specified in terms of two electric charges $Q_{1,2}$ and a parameter $m$ which measures the deviation from extremality, i.e. the BPS-saturated limit is reached when $m=0$ while the charges $Q_{1,2}$ are kept constant (see Appendix B). The BH entropy is given by (\ref{DBHent}) in Appendix B, which in the near-BPS-saturated limit ($Q_{1,2}\gg m$) reduces to the form (\ref{DBPSBHent}): \begin{equation} S_{BH}=4\pi\sqrt{{\textstyle{1\over{(D-3)^2}}}Q_1Q_2{(2m)}^{2\over{D-3}}}. \label{Dent} \end{equation} The BH entropy of the non-extreme black holes can also be rewritten in a manifestly duality-invariant manner. 
We demonstrate this for static near-extreme black holes in $D=7$; examples of such black holes in other dimensions are similar. The $5 \times 5$ matrix of dressed charges $Z_{7ij}$ transforms as an antisymmetric tensor under the local $SO(5)$ symmetry but is invariant under the rigid $SL(5,{\bf R})$ duality symmetry. (The central charges are given by the $4 \times 4$ matrix $Z_{7ij} \gamma^{ij}$ where $\gamma^{ij}$ are the generators of $SO(5)$ in the 4-dimensional spinor representation.) The two skew eigenvalues $\lambda_i$ of $Z_{7ij}$ are given by $\lambda_{1,2}=Q_{1\,R,L}$ and these are invariant under $SO(5) \times SL(5,{\bf R})$. The BH entropy in the near-extreme case is then \begin{equation} S_{BH}= {{\pi}\over 2}\sqrt{ (|\lambda_1|^2- |\lambda_2|^2){(2m)}^{1\over{2}}}\ {}, \label{Denta} \end{equation} which can be rewritten in a manifestly $U$-duality invariant form as \begin{equation} S_{BH}= {{\pi}\over 2} \left( [2tr(Y_7^2)-{\textstyle {1\over 2}}(tr Y_7)^2]m \right )^{1\over 4} \ . \label{Dentb} \end{equation} Here $Y_7\equiv Z_7^tZ_7$ and we have used the relationship $tr(Y_7^m)=2\left[(-|\lambda_1|^2)^{m}+(-|\lambda_2|^2)^{m}\right]$. Note that now the entropy does depend on the asymptotic values of the scalar fields. \subsection{ADM Masses and Supersymmetry Breaking} We now comment on a $U$-duality invariant form of the black hole mass formula for BPS-saturated black holes with different numbers of preserved supersymmetries. We shall derive such expressions in $D=4$; examples in other dimensions are similar. As discussed in the beginning of this Section, in $D=4$ the BPS-saturated black holes will preserve ${p\over 8}$ of the $N=8$ supersymmetry if the BPS-saturated ADM mass $M$ is equal to $|\lambda _1|=\cdots =|\lambda_p|$, where $\lambda _i$ ($i=1,\cdots, 4$) are the eigenvalues (\ref{diag4}) of the central charge matrix $Z_4$. Without loss of generality one can order the eigenvalues in such a way that $|\lambda_i|\ge |\lambda_j|$ for $j\ge i$.
Note also that from (\ref{diag4}) \begin{equation} tr(Y_4^m)= 2\sum _{i=1}^4(-|\lambda_i|^2)^{m}\ , \end{equation} where $Y_4\equiv Z_4^\dagger Z_4$ and $m=1,\cdots, 5-p$. \subsubsection{p=4} These solutions preserve ${1\over 2}$ of $N=8$ supersymmetry and \begin{equation} M=|\lambda_1|=|\lambda_2|=|\lambda _3|=|\lambda_4|. \end{equation} Examples of such solutions are obtained from the generating solution with only one non-zero charge, e.g., those with only $Q_1\ne 0$. The mass can be written in a $U$-invariant form as \begin{equation} M^2=-{\textstyle{1\over 8}}tr Y_4\ . \end{equation} \subsubsection{p=3} These solutions preserve ${3\over 8}$ of $N=8$ supersymmetry and thus \begin{equation} M=|\lambda_1|=|\lambda_2|=|\lambda _3|>|\lambda_4|\ . \end{equation} An example of such a generating solution corresponds to the case with $(Q_1, Q_2, P_1=P_2)\ne 0$, while the non-zero $q$ is determined in terms of the other non-zero charges as $q={\textstyle{1\over 2}}[(Q_{1R}+P_{2R})^2-Q_{1L}^2]^{1/2}$. The $U$-duality invariant form of the BPS-saturated mass can now be written in terms of two invariants, \begin{equation} tr Y_4=-6|\lambda _1|^2-2|\lambda _4|^2, \qquad tr (Y_4^2)= 6|\lambda _1|^4+2|\lambda _4|^4, \end{equation} as the (larger) root of the quadratic equation \begin{equation} 48 M^4 +12 tr Y_4 M^2 + (tr Y_4)^2-2 tr(Y_4^2)=0 \ , \end{equation} namely \begin{equation} M^2=-{\textstyle{1\over 8}} tr Y_4 + \sqrt{ {\textstyle{1\over {24}} tr (Y_4^2) - {\textstyle{1\over {192}}}(tr Y_4)^2}}\ . \end{equation} \subsubsection{p=2} These solutions preserve ${1\over 4}$ of $N=8$ supersymmetry and have the mass \begin{equation} M=|\lambda_1|=|\lambda_2|>|\lambda _3|\ge|\lambda_4| \ . \end{equation} An example of such a generating solution is the case with only $(Q_1,Q_2)\ne 0$. The general mass can be written in terms of the three invariants $tr (Y_4^m)$ for $m=1,2,3$.
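The $p=3$ expressions above admit a quick numerical check. In the following plain-Python sketch, $a=|\lambda_1|=|\lambda_2|=|\lambda_3|$ and $b=|\lambda_4|<a$ are hypothetical values; one verifies both that $M=a$ solves the quadratic equation and that the quoted larger root reproduces $M^2=a^2$:

```python
import math

a, b = 2.0, 1.0              # hypothetical |lambda_1| = ... = |lambda_3| > |lambda_4|
trY  = -6 * a**2 - 2 * b**2  # tr Y_4, in the sign convention used above
trY2 =  6 * a**4 + 2 * b**4  # tr (Y_4^2)

M = a                        # BPS-saturated mass, M = |lambda_1|
assert abs(48 * M**4 + 12 * trY * M**2 + trY**2 - 2 * trY2) < 1e-9

# The quoted larger root of the quadratic equation for M^2.
M2 = -trY / 8 + math.sqrt(trY2 / 24 - trY**2 / 192)
assert abs(M2 - a**2) < 1e-9
```

The same algebra shows that the square root collapses to $(|\lambda_1|^2-|\lambda_4|^2)/4$, so the larger root is indeed $|\lambda_1|^2$.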
The $U$-duality invariant expression for the BPS-saturated mass formula is then given by the (largest) root of a cubic equation; we do not give it explicitly here. Some simplification is obtained if $|\lambda _3|=|\lambda_4|$, in which case only two invariants and a quadratic equation are needed. \subsubsection{p=1} These solutions preserve ${1\over 8}$ of the $N=8$ supersymmetry and the mass is \begin{equation} M=|\lambda_1|> |\lambda_2|\ge|\lambda _3|\ge|\lambda_4|\ . \end{equation} Examples of such generating solutions are the case with only $(Q_1,Q_2,P_1)\ne 0$ and the generating solution with all the five charges non-zero is also in this class. The $U$-invariant mass can be written in terms of the four invariants $tr (Y_4^m)$ for $m=1,2,3,4$ and involves the (largest) root of a quartic equation so we do not give it explicitly here. \acknowledgments We would like to thank K. Chan, S. Gubser, F. Larsen, A. Sen, A. Tseytlin and D. Youm for useful discussions. The work is supported by the Institute for Advanced Study funds and J. Seward Johnson foundation (M.C.), U.S. DOE Grant No. DOE-EY-76-02-3071 (M.C.), the NATO collaborative research grant CGR No. 940870 (M.C.) and the National Science Foundation Career Advancement Award No. PHY95-12732 (M.C.). The authors acknowledge the hospitality of the Institute for Theoretical Physics at the University of California, Santa Barbara, where the work was initiated, the hospitality of the Department of Applied Mathematics and Theoretical Physics of Cambridge (M.C.) and of the CERN Theory Division (M.C.). \vskip2.mm \newpage \section{Appendix A: Effective Action of the NS-NS Sector of Type II String on Tori} For the sake of completeness we briefly summarize the form of the effective action of the NS-NS sector for the toroidally compactified Type II string in $D$-dimensions (see, e.g., \cite{MS}.). The notation used is that of \cite{CY4r}. 
The compactification of the $(10-D)$ spatial coordinates on a $(10-D)$-torus is achieved by choosing the following abelian Kaluza-Klein Ansatz for the ten-dimensional metric \begin{equation} \hat{G}_{MN}=\left(\matrix{e^{a\varphi}g_{{\mu}{\nu}}+ G_{{m}{n}}A^{(1)\,m}_{{\mu}}A^{(1)\,n}_{{\nu}} & A^{(1)\,m}_{{\mu}} G_{{m}{n}} \cr A^{(1)\,n}_{{\nu}}G_{{m}{n}} & G_{{m}{n}}}\right), \label{4dkk} \end{equation} where $A^{(1)\,m}_{\mu}$ ($\mu = 0,1,...,D-1$; $m=1,...,10-D$) are $D$-dimensional Kaluza-Klein $U(1)$ gauge fields, $\varphi \equiv \hat{\Phi} - {1\over 2}{\rm ln}\,{\rm det}\, G_{mn}$ is the $D$-dimensional dilaton field, and $a\equiv {2\over{D-2}}$. The effective action is then specified by the following massless bosonic fields: the (Einstein-frame) graviton $g_{\mu\nu}$, the dilaton $e^{\varphi}$, $(20-2D)$ $U(1)$ gauge fields ${\cal A}^i_{\mu} \equiv (A^{(1)\,m}_{\mu},A^{(2)}_{\mu\,m})$ defined as $A^{(2)}_{\mu\,m} \equiv \hat{B}_{\mu m}+\hat{B}_{mn} A^{(1)\,n}_{\mu}$, and the following symmetric $O(10-D,10-D)$ matrix of the scalar fields (moduli): \begin{equation} M=\left ( \matrix{G^{-1} & -G^{-1}C \cr -C^T G^{-1} & G + C^T G^{-1}C} \right ), \label{modulthree} \end{equation} where $G \equiv [\hat{G}_{mn}]$ and $C \equiv [\hat{B}_{mn}]$ are defined in terms of the internal parts of the ten-dimensional fields.
Then the NS-NS sector of the $D$-dimensional effective action takes the form: \begin{eqnarray} {\cal L}&=&{1\over{16\pi G_D}}\sqrt{-g}[{\cal R}_g-{1\over (D-2)} g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi+{1\over 8} g^{\mu\nu}{\rm Tr}(\partial_{\mu}ML\partial_{\nu}ML)\cr&-&{1\over{12}} e^{-2a\varphi}g^{\mu\mu^{\prime}}g^{\nu\nu^{\prime}} g^{\rho\rho^{\prime}}H_{\mu\nu\rho}H_{\mu^{\prime}\nu^{\prime} \rho^{\prime}} -{1\over 4}e^{-a\varphi}g^{\mu\mu^{\prime}}g^{\nu\nu^{\prime}} {\cal F}^{i}_{\mu\nu}(LML)_{ij} {\cal F}^{j}_{\mu^{\prime}\nu^{\prime}}], \label{effaction} \end{eqnarray} where $g\equiv {\rm det}\,g_{\mu\nu}$, ${\cal R}_g$ is the Ricci scalar of $g_{\mu\nu}$, and ${\cal F}^i_{\mu\nu} = \partial_{\mu} {\cal A}^i_{\nu}-\partial_{\nu} {\cal A}^i_{\mu}$ are the $U(1)^{20-2D}$ gauge field strengths and $H_{\mu\nu\rho} \equiv (\partial_{\mu}B_{\nu\rho}-{1\over 2}{\cal A}^i_{\mu}L_{ij} {\cal F}^j_{\nu\rho}) + {\rm cyc.\ perms.\ of}\ \mu , \nu , \rho$ is the field strength of the two-form field $B_{\mu\nu}$. The $D$-dimensional effective action (\ref{effaction}) is invariant under the $O(10-D,10-D)$ transformations ($T$-duality): \begin{equation} M \to \Omega M \Omega^T ,\ \ \ {\cal A}^i_{\mu} \to \Omega_{ij} {\cal A}^j_{\mu}, \ \ \ g_{\mu\nu} \to g_{\mu\nu}, \ \ \ \varphi \to \varphi, \ \ \ B_{\mu\nu} \to B_{\mu\nu}, \label{tdual} \end{equation} where $\Omega$ is an $O(10-D,10-D)$ invariant matrix, {\it i.e.}, with the following property: \begin{equation} \Omega^T L \Omega = L ,\ \ \ L =\left ( \matrix{0 & I_{10-D}\cr I_{10-D} & 0 } \right ), \label{4dL} \end{equation} where $I_n$ denotes the $n\times n$ identity matrix. In $D=4$ the field strength of the abelian gauge field is self-dual, i.e. $\tilde{\cal F}^{i\,\mu\nu} = {1\over 2\sqrt{-g}} \varepsilon^{\mu\nu\rho\sigma}{\cal F}^i_{\rho\sigma}$, and thus the charged solutions are specified by the electric and magnetic charges. 
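The moduli matrix $M$ of (\ref{modulthree}) is itself a symmetric $O(10-D,10-D)$ element, i.e. $MLM=L$ with $L$ as in (\ref{4dL}). A minimal pure-Python sketch (hypothetical numerical values: internal dimension $d=2$, a diagonal $G$ so that $G^{-1}$ is immediate, and an antisymmetric $C=[\hat B_{mn}]$) verifies both properties:

```python
def mul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(A):
    """Transpose of a nested-list matrix."""
    return [list(r) for r in zip(*A)]

# Hypothetical internal data: diagonal G (so its inverse is immediate)
# and an antisymmetric internal B-field C = [B_{mn}].
G  = [[2.0, 0.0], [0.0, 3.0]]
Gi = [[0.5, 0.0], [0.0, 1.0 / 3.0]]          # G^{-1}
C  = [[0.0, 1.0], [-1.0, 0.0]]

GiC, CtGi = mul(Gi, C), mul(T(C), Gi)
CtGiC = mul(CtGi, C)
# Assemble M = [[G^{-1}, -G^{-1}C], [-C^T G^{-1}, G + C^T G^{-1} C]].
M = [Gi[i] + [-x for x in GiC[i]] for i in range(2)] + \
    [[-x for x in CtGi[i]] + [G[i][j] + CtGiC[i][j] for j in range(2)]
     for i in range(2)]
L = [[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]

MLM = mul(mul(M, L), M)
assert all(abs(MLM[i][j] - L[i][j]) < 1e-12 for i in range(4) for j in range(4))
assert all(abs(M[i][j] - M[j][i]) < 1e-12 for i in range(4) for j in range(4))
```

The check relies only on $G$ being symmetric and $C$ antisymmetric, which is exactly what the internal metric and $B$-field guarantee.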
In $D=5$ the effective action is specified by the graviton, 26 scalar fields (25 moduli fields in the matrix $M$ and the dilaton $\varphi$), 10 $U(1)$ gauge fields, and the field strength $H_{\mu\nu\rho}$ of the two-form field $B_{\mu\nu}$. By the duality transformation $ H^{\mu\nu\rho}=-{e^{4\varphi/3}\over{2!\sqrt{-g}}} \varepsilon^{\mu\nu\rho\lambda\sigma}{\tilde F}_{\lambda\sigma},$ $H_{\mu\nu\rho}$ can be related to the field strength $\tilde F_{\mu\nu}$ of the gauge field $ \tilde A_\mu$, which specifies an additional electric charge $\tilde Q$. In $D\ge 6$ the only allowed charges are the electric charges associated with the $(20-2D)$ NS-NS sector abelian gauge fields. \section{Appendix B: Static Generating Solutions} For the sake of simplicity we present only the explicit form of the non-extreme {\it static} generating solutions in $D=4$, $D=5$ and $6\le D \le 9$, with four \cite{CYI,HLMS}, three \cite{CY5r,HMS} and two \cite{Peet} charge parameters of the NS-NS sector, respectively. Note that the full generating solution in $D=4$ is parameterised by {\it five} charge parameters. For the explicit form of the rotating generating solution in $D=5$ see \cite{CY5r} and in $6\le D\le 9$ see \cite{CYNear,Llatas}, while in $D=4$ the four charge parameter rotating solution is given in \cite{CY4r} and the five charge static solution is given in \cite{CY5r}. The parameterisation there is given in terms of the ``toroidal'' sector of the toroidally compactified heterotic string. We choose to parameterise the generating solutions in terms of the mass $m$ of the $D$-dimensional Schwarzschild solution, and the boost parameters $\delta_i$, specifying the charges of the solution. The notation used is similar to that in \cite{CTIII}.
\subsection{$D=4$: Four Charge Static Solution} The expression for the non-extreme static dyonic black hole solution in terms of the (non-trivial) four-dimensional bosonic fields is of the following form\footnote{The four-dimensional Newton's constant is taken to be $G_N^{D=4}={1\over 8}$ and we follow the convention of \cite{MP} for the definitions of the ADM mass, charges, dipole moments and angular momenta.}: \begin{eqnarray} ds^2_{E}&=& -\lambda fdt^2+\lambda^{-1}[f^{-1}dr^2+r^2d\Omega_2^2 ],\cr G_{11}&=&{{T_1}\over{T_2}}, \ \ G_{22}={{F_2}\over {F_1}}, \ \ e^{2\varphi}={{F_1F_2}\over{T_1T_2}}, \label{4dsol} \end{eqnarray} where $ds^2_{E}$ specifies the Einstein-frame ($D$-dimensional) space-time line element, $G_{ij}$ correspond to the internal toroidal metric coefficients and $\varphi$ is the $D$-dimensional dilaton field (see Appendix A). Other scalar fields are constant and assume canonical values (one or zero). Here \begin{equation} f=1-{{2m}\over r}, \ \ \lambda=(T_1T_2F_1F_2)^{-{1\over 2}} \label{4fl} \end{equation} and \begin{eqnarray} T_1&=&1+{{2m{\rm sinh}^2 \delta_{e1}}\over r},\ \ T_2=1+{{2m{\rm sinh}^2 \delta_{e2}}\over r}, \cr F_1&=&1+{{2m{\rm sinh}^2 \delta_{m1}}\over r}, \ \ F_2=1+{{2m{\rm sinh}^2 \delta_{m2}}\over r}. \label{4dpar} \end{eqnarray} The ADM mass and the four $U(1)$ charges $Q_1,Q_2,P_1, P_2$, associated with the respective gauge fields $A^{(1)}_{1\mu},A^{(2)}_{1\mu},A^{(1)}_{2\mu},A^{(2)}_{2\mu}$, can be expressed in terms of $m$ and the four boosts $\delta_{e1,e2,m1,m2}$ in the following way: \begin{eqnarray} M&=&4m({\rm cosh}^2 \delta_{e1}+{\rm cosh}^2 \delta_{e2}+ {\rm cosh}^2 \delta_{m1}+{\rm cosh}^2 \delta_{m2})-8m, \cr Q_1&=&4m{\rm cosh}\delta_{e1}{\rm sinh}\delta_{e1},\ \ \ \ \ Q_2= 4m{\rm cosh}\delta_{e2}{\rm sinh}\delta_{e2}, \cr P_1&=&4m{\rm cosh}\delta_{m1}{\rm sinh}\delta_{m1},\ \ \ \ \ P_2 =4m{\rm cosh}\delta_{m2}{\rm sinh}\delta_{m2}.
\label{4dphys} \end{eqnarray} The BH entropy is of the form \cite{CYI,HLMS}: \begin{equation} S_{BH}=2\pi (4m)^2\cosh\delta_{e1}\cosh\delta_{e2}\cosh\delta_{m1}\cosh\delta_{m2}, \label{4BHent} \end{equation} which in the BPS-saturated limit ($m\to 0$, $\delta_{e1,e2,m1,m2}\to \infty$ while keeping $Q_{1,2}$, $P_{1,2}$ finite) reduces to the form \cite{CY}: \begin{equation} S_{BH}=2\pi\sqrt{Q_1Q_2P_1P_2}. \label{4BPSBHent} \end{equation} When the fifth charge parameter $q$ is added, the BH entropy of the BPS-saturated black holes becomes \cite{CTII}: \begin{equation} S_{BH}=2\pi\sqrt{Q_1Q_2P_1P_2-{\textstyle{1\over 4}}q^2(P_1+P_2)^2}. \label{4BPSBH5p} \end{equation} \subsection{$D=5$: Three Charge Static Solution} The expression for the non-extreme static black hole solution in terms of the (non-trivial) five-dimensional bosonic fields is of the following form \cite{CY5r,HMS}\footnote{The five-dimensional Newton's constant is taken to be $G_N^{D=5}={{2\pi}\over 8}$.}: \begin{eqnarray} ds^2_{E}&=& -\lambda^2 fdt^2+\lambda^{-1}[f^{-1}dr^2+r^2d\Omega_3^2 ],\cr G_{11}&=&{{T_1}\over{T_2}}, \ \ e^{2\varphi}={\tilde T^2\over{T_1T_2}}, \label{5dsol} \end{eqnarray} with other scalars assuming constant canonical values. Here \begin{equation} f=1-{{2m}\over r^2}, \ \ \lambda=(T_1T_2\tilde T)^{-{1\over 3}}, \label{5fl} \end{equation} and \begin{eqnarray} T_1&=&1+{{2m{\rm sinh}^2 \delta_{e1}}\over r^2}, \ \ T_2=1+{{2m{\rm sinh}^2 \delta_{e2}}\over r^2}, \ \ \tilde T=1+{{2m{\rm sinh}^2 \delta_{\tilde e}}\over r^2} \ .
\label{5dpar} \end{eqnarray} The ADM mass and the three charges $Q_{1,2},\tilde Q$, associated with the respective gauge fields $A_{1\mu}^{(1)}, \ A_{1\mu}^{(2)}$ and ${\tilde A}_\mu$ (the gauge field related to the two-form field $B_{\mu\nu}$ by a duality transformation), are expressed in terms of $m$ and the three boosts $\delta_{e1,e2,\tilde e}$ in the following way: \begin{eqnarray} M&=&2m({\rm cosh}^2 \delta_{e1}+{\rm cosh}^2 \delta_{e2}+ {\rm cosh}^2 \delta_{\tilde e})-3m, \cr Q_1&=&2m{\rm cosh}\delta_{e1}{\rm sinh}\delta_{e1},\ \ Q_2= 2m{\rm cosh}\delta_{e2}{\rm sinh}\delta_{e2},\ \ {\tilde Q}=2m{\rm cosh}\delta_{\tilde e}{\rm sinh}\delta_{\tilde e}. \label{5dphys} \end{eqnarray} The BH entropy is of the form \cite{HMS}: \begin{equation} S_{BH}=2\pi (2m)^{3\over 2}\cosh\delta_{e1}\cosh\delta_{e2}\cosh\delta_{\tilde e}, \label{5BHent} \end{equation} which in the BPS-saturated limit ($m\to 0$, $\delta_{e1,e2,\tilde e}\to \infty$ with $Q_{1,2}, \ \tilde Q$ finite) reduces to the form \cite{VS,TMpl}: \begin{equation} S_{BH}=2\pi\sqrt{Q_1Q_2\tilde Q}. \label{5BPSBHent} \end{equation} \subsection{$6\le D\le 9$: Two Charge Static Solution} The expression for the non-extreme static black hole solution in terms of the (non-trivial) $D$-dimensional bosonic fields is of the following form \cite{Peet,HSen,CYNear,Llatas}\footnote{The $D$-dimensional Newton's constant is taken to be $G_N^{D}={{(2\pi)^{D-4}}\over 8}$.}: \begin{eqnarray} ds^2_{E}&=& -\lambda^{D-3} fdt^2+\lambda^{-1}[f^{-1}dr^2+r^2d\Omega_{D-2}^2 ],\cr G_{11}&=&{{T_1}\over{T_2}}, \ \ e^{2\varphi}={1\over{T_1T_2}}, \label{Ddsol} \end{eqnarray} while other scalar fields are constant and assume canonical values. Here \begin{equation} f=1-{{2m}\over r^{D-3}}, \ \ \lambda=(T_1T_2)^{1\over{D-2}}, \label{Dfl} \end{equation} and \begin{equation} T_1=1+{{2m{\rm sinh}^2 \delta_{e1}}\over r^{D-3}}, \ \ T_2=1+{{2m{\rm sinh}^2 \delta_{e2}}\over r^{D-3}}.
\label{Ddpar} \end{equation} The ADM mass and the $U(1)$ charges $Q_1,Q_2$, associated with the respective gauge fields $A_{1\mu}^{(1)}$ and $A_{1\mu}^{(2)}$, are expressed in terms of $m$ and the two boosts $\delta_{e1,e2}$ in the following way: \begin{eqnarray} M&=&{{\omega_{D-2}m}\over{8\pi G_D}}[ (D-3)({\rm cosh}^2 \delta_{e1}+{\rm cosh}^2 \delta_{e2})-(D-4)], \cr Q_1&=&{{\omega_{D-2}m}\over{8\pi G_D}}(D-3){\rm cosh}\delta_{e1}{\rm sinh}\delta_{e1},\ Q_2={{\omega_{D-2}m}\over{8\pi G_D}}(D-3){\rm cosh}\delta_{e2}{\rm sinh}\delta_{e2},\ \ \label{Ddphys} \end{eqnarray} where $\omega_{D-2}={{2\pi^{{D-1}\over 2}}/ {\Gamma({{D-1}\over 2})}}$. The BH entropy is of the form: \begin{equation} S_{BH}={{\omega_{D-2}}\over {2G_D}} m^{{D-2}\over{D-3}}\cosh\delta_{e1}\cosh\delta_{e2}, \label{DBHent} \end{equation} which in the near-BPS-saturated limit ($Q_{1,2}\gg m$) reduces to the form: \begin{equation} S_{BH}=4\pi\sqrt{{\textstyle{1\over{(D-3)^2}}}Q_1Q_2{(2m)}^{2\over{D-3}}}. \label{DBPSBHent} \end{equation}
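The formulas above admit a quick numerical consistency check. The plain-Python sketch below (with hypothetical charge values; the identification $P_{2\,L,R}\equiv P_1\mp P_2$, analogous to $Q_{1\,L,R}$, is assumed) verifies that the quartic and cubic invariants reproduce the bare-charge entropy formulas (\ref{4Jent}) and (\ref{5Jent}), and that the non-extreme entropies (\ref{4BHent}) and (\ref{5BHent}) approach the BPS-saturated forms (\ref{4BPSBHent}) and (\ref{5BPSBHent}) as $m\to 0$ with the charges held fixed:

```python
import math

# (i) Invariant entropy formulas.  Hypothetical charges; Q_{1L,R} = Q1 -/+ Q2,
# and (assumed by analogy) P_{2L,R} = P1 -/+ P2.
Q1, Q2, P1, P2, q, Qt = 3.0, 2.0, 5.0, 4.0, 1.0, 1.5
Q1L, Q1R = Q1 - Q2, Q1 + Q2
P2L, P2R = P1 - P2, P1 + P2

# D=4 quartic invariant in eigenvalue form vs. the bare-charge entropy.
J4 = 16 * ((Q1R**2 - Q1L**2) * (P2R**2 - P2L**2) - 4 * P2R**2 * q**2)
S4_bps = 2 * math.pi * math.sqrt(Q1 * Q2 * P1 * P2 - 0.25 * q**2 * (P1 + P2)**2)
assert abs((math.pi / 8) * math.sqrt(J4) - S4_bps) < 1e-9

# D=5 constrained eigenvalues, cubic invariant and entropy.
lam = [Qt + Q1R, Qt - Q1R, -Qt + Q1L, -Qt - Q1L]
assert abs(sum(lam)) < 1e-12                 # the sum of eigenvalues vanishes
J3 = 2 * sum(l**3 for l in lam)
assert abs(J3 - 48 * Q1 * Q2 * Qt) < 1e-9
S5_bps = 2 * math.pi * math.sqrt(Q1 * Q2 * Qt)
assert abs(math.pi * math.sqrt(J3 / 12.0) - S5_bps) < 1e-9

# (ii) BPS limit of the non-extreme entropies; the boosts are solved from
# Q = 2m sinh(2 delta) in D=4 and Q = m sinh(2 delta) in D=5.
for m in (1e-3, 1e-6):
    d4 = [0.5 * math.asinh(Q / (2 * m)) for Q in (Q1, Q2, P1, P2)]
    S4 = 2 * math.pi * (4 * m)**2 * math.prod(map(math.cosh, d4))
    assert abs(S4 / (2 * math.pi * math.sqrt(Q1 * Q2 * P1 * P2)) - 1) < 100 * m

    d5 = [0.5 * math.asinh(Q / m) for Q in (Q1, Q2, Qt)]
    S5 = 2 * math.pi * (2 * m)**1.5 * math.prod(map(math.cosh, d5))
    assert abs(S5 / S5_bps - 1) < 100 * m
```

Setting $q=0$ in the first check recovers the four-charge result of \cite{KK}.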
\section{Introduction} \label{intro} This article outlines a simple approach to a general problem in text analysis, the selection of documents for costly annotation. We then show how inverse regression can be applied with variable interactions to obtain both generic and subject-specific predictions of document sentiment, our annotation of interest. We are motivated by the problem of design and analysis of a particular text mining experiment: the scoring of Twitter posts (`tweets') for positive, negative, or neutral sentiment directed towards particular US politicians. The contribution is structured first with a proposal for optimal design of text data experiments, followed by application of this technique in our political tweet case study and analysis of the resulting data through inverse regression. Text data are viewed throughout simply as counts, for each document, of phrase occurrences. These phrases can be words (e.g., {\it tax}) or word combinations (e.g. {\it pay tax} or {\it too much tax}). Although there are many different ways to process raw text into these {\it tokens}, perhaps using sophisticated syntactic or semantic rules, we do not consider the issue in detail and assume tokenization as given; our case study text processing follows a few simple rules described below. Document $i$ is represented as $\bm{x}_i = [x_{i1},\ldots,x_{ip}]'$, a sparse vector of counts for each of $p$ tokens in the vocabulary, and a document-term count matrix is written $\bm{X} = [\bm{x}_1 \cdots \bm{x}_n]'$, where $n$ is the number of documents in a given corpus. These counts, and the associated frequencies $\bm{f}_i = \bm{x}_i/m_i$ where $m_i = \sum_{j=1}^p x_{ij}$, are then the basic data units for statistical text analysis. Hence, text data can be characterized simply as exchangeable counts in a very large number of categories, leading to the common assumption of a multinomial distribution for each $\bm{x}_i$. 
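As a concrete illustration of this representation (a toy sketch only: naive whitespace tokenization on an invented three-document corpus, standing in for the tokenization rules used in the case study):

```python
from collections import Counter

docs = ["pay tax", "too much tax", "tax tax cut"]       # toy corpus
tokens = [d.split() for d in docs]                      # naive tokenization

vocab = sorted({w for doc in tokens for w in doc})      # the p tokens
counts = [Counter(doc) for doc in tokens]
X = [[c[w] for w in vocab] for c in counts]             # counts x_i (rows of X)
m = [sum(row) for row in X]                             # document totals m_i
F = [[x / mi for x in row] for row, mi in zip(X, m)]    # frequencies f_i = x_i/m_i

assert all(abs(sum(f) - 1.0) < 1e-12 for f in F)        # each f_i sums to one
```

In practice $\bm{X}$ is stored sparsely, since most of the $p$ counts in any given row are zero.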
We are concerned with predicting the {\it sentiment} $\bm{y} = [y_1,\ldots,y_n]'$ associated with documents in a corpus. In our main application, this is positive, neutral, or negative sentiment directed toward a given politician, as measured through a reader survey. More generally, sentiment can be replaced by any annotation that is correlated with document text. Text-sentiment prediction is thus just a very high-dimensional regression problem, where the covariates have the special property that they can be represented as draws from a multinomial distribution. Any regression model needs to be accompanied by data for training. In the context of sentiment prediction, this implies documents scored for sentiment. One can look to various sources of `automatic' scoring, and these are useful to obtain the massive amounts of data necessary to train high-dimensional text models. Section \ref{data} describes our use of emoticons for this purpose. However, such automatic scores are often only a rough substitute for the true sentiment of interest. In our case, generic happy/sad sentiment is not the same as sentiment directed towards a particular politician. It is then necessary to have a subset of the documents annotated with precise scores, and since this scoring will cost money we need to choose a subset of documents whose content is most useful for predicting sentiment from text. This is an application for {\it pool-based active learning}: there is a finite set of examples for which predictions are to be obtained, and one seeks to choose an optimal representative subset. There are thus two main elements to our study: design -- choosing the sub-sample of tweets to be sent for scoring -- and analysis -- using sentiment-scored tweets to fit a model for predicting Twitter sentiment towards specific politicians. This article is about both components.
As a design problem, text mining presents a difficult situation where raw space filling is impractical -- the dimension of $\bm{x}$ is so large that every document is very far apart -- and we argue in Section \ref{design} that it is unwise to base design choices on the poor estimates of predictive uncertainty provided by text regression. Our solution is to use a space-filling design, but in an estimated lower dimensional multinomial-factor space rather than in the original $\bm{x}$-sample. Section \ref{design}.1 describes a standard class of {\it topic models} that can be used to obtain low-dimensional factor representations for large document collections. The resulting unsupervised algorithm (i.e., sampling proceeds without regard to sentiment) can be combined with any sentiment prediction model. We use the multinomial inverse regression of \cite{Tadd2012a}, with the addition of politician-specific interaction terms, as described in Section \ref{mnir}. \subsection{Data application: political sentiment on Twitter} \label{data} The motivating case study for this article is an analysis of sentiment in tweets about US politicians on Twitter, the social blogging service, from January 27 to February 28, 2012, a period that included the Florida (1/31), Nevada (2/4), Colorado, Missouri, and Minnesota (2/7), Maine (2/11), and Michigan and Arizona (2/28) presidential primary elections. Twitter provides streaming access to a large subset of public (as set by the user) tweets containing terms in a short list of case-insensitive filters. We were interested in conversation on the leading candidates in the Republican presidential primary, as well as that concerning current president Barack Obama; our list of filter terms was {\smaller\sf obama}, {\smaller\sf romney}, {\smaller\sf gingrich}, {\smaller\sf ron paul}, and, from February 13 onward, {\smaller\sf santorum}.
Note that Romney, Gingrich, and Paul were the only front-runners at the beginning of our study, but Santorum gained rapidly in the polls following his surprise victories in three state votes on February 7: the Minnesota and Colorado caucuses and the Missouri Primary. Daily data collection is shown by politician-subject in Figure \ref{volume}; total counts are 10.2\e{5} for Obama, 5\e{5} for Romney, 2.2\e{5} for Gingrich, 2.1\e{5} for Santorum, and 1.5\e{5} for Paul, for a full sample of about 2.1 million tweets. In processing the raw text, we remove a limited set of stop words (terms that occur at a constant rate regardless of subject, such as {\it and} or {\it the}) and punctuation before converting to lowercase and stripping suffixes from roots according to the Porter stemmer \citep{Port1980}. The results are then tokenized into single terms based upon separating white-space, and we discard any tokens that occur in $<$ 200 tweets and are not in the list of tokens common in our generic emoticon-sentiment tweets, described in the next paragraph. This leads to 5532 unique tokens for Obama, 5352 for Romney, 5143 for Gingrich, 5131 for Santorum, and 5071 for Paul. \begin{figure}[t] \includegraphics[width=6.3in]{polsVolume} \caption{\label{volume} Tweet sample volume for political candidates. All are taken from the stream of public Twitter posts from Jan 27 through the end of February, except for Santorum, who was only tracked after Feb 13. } \end{figure} The primary analysis goal is to classify tweets by sentiment: positive, negative, or neutral. We have two data sources available: Twitter data that is scored for generic sentiment, and the ability to survey readers about sentiment in tweets directed at specific politicians.
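The text-processing rules above (stop-word removal, punctuation stripping, lowercasing, suffix stripping, white-space tokenization, and rare-token pruning) can be sketched as follows. The stop-word list and the crude suffix stripper are illustrative stand-ins (the paper uses the Porter stemmer), and the document-frequency threshold is scaled down from the 200-tweet rule.

```python
import re
from collections import Counter

STOPWORDS = {"and", "the", "a", "to"}          # illustrative subset

def strip_suffix(token):
    # Crude stand-in for the Porter stemmer: drop a few common suffixes.
    for suf in ("ing", "ed", "es", "s"):
        if token.endswith(suf) and len(token) > len(suf) + 2:
            return token[:-len(suf)]
    return token

def tokenize(text):
    text = re.sub(r"[^\w\s]", " ", text.lower())   # drop punctuation, lowercase
    return [strip_suffix(t) for t in text.split() if t not in STOPWORDS]

def prune(token_lists, min_docs=2):
    # Discard tokens appearing in fewer than min_docs documents
    # (the paper's threshold is 200 tweets on a far larger corpus).
    df = Counter(t for toks in token_lists for t in set(toks))
    keep = {t for t, n in df.items() if n >= min_docs}
    return [[t for t in toks if t in keep] for toks in token_lists]
```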
In the first case, 1.6 million tweets that had been automatically identified as positive or negative by the presence of an emoticon (symbols included by the author -- e.g., a happy face indicates a positive tweet and a sad face a negative tweet) were obtained from the website {\smaller\sf http://twittersentiment.appspot.com}. Tokenization for these tweets followed the same rules as for the political Twitter sample above, and we discard tokens that occur in less than 0.01\% of tweets. This leads to a vocabulary of 5412 `emoticon' tokens; due to considerable overlap, the combined vocabulary across all tweets (political and emoticon) is only 5690 tokens. As our second data source, we use the Amazon Mechanical Turk ({\smaller\sf https://www.mturk.com/}) platform for scoring tweet sentiment. Tweets are shown to anonymous workers for categorization as representing either positive (e.g., `support, excitement, respect, or optimism') or negative (e.g., `anger, distrust, disapproval, or ridicule') feelings or news towards a given politician, or as neutral if the text is `irrelevant, or not even slightly positive or negative'. Each tweet is seen by two independent workers, and it is only considered scored if the two agree on categorization. In addition, workers were pre-screened as `masters' by Amazon and we monitored submissions for quality control, blocking poor workers. Given the 2-3 cents per tweet paid to individual workers, as well as the overhead charged by Amazon, our worker agreement rates of around 80\% imply an average cost near \$0.075 per sentiment-scored tweet. \section{Sentiment prediction via multinomial inverse regression} \label{mnir} Sentiment prediction in this article follows the multinomial inverse regression (MNIR) framework described in \citet{Tadd2012a}. Section \ref{mnir}.1 summarizes that approach, while Section \ref{mnir}.2 discusses an adaptation specific to the main application of this paper.
Inverse regression as a general strategy looks to estimate the {\it inverse distribution} for covariates given response, and to use this as a tool in building a {\it forward model} for $y_i$ given $\bm{x}_i$. The specific idea of MNIR is to estimate a simple model for how the multinomial distribution on text counts changes with sentiment, and to derive from this model low dimensional text projections that can be used for predicting sentiment. \subsection{Single-factor MNIR} As a simple case, suppose that $y_i$ for document $i$ is a discrete ordered sentiment variable with support $\mc{Y}$ -- say $y_i \in \{-1,0,1\}$ as in our motivating application. Only a very complicated model will be able to capture the generative process for an individual's text, $\bm{x}_i |y_i$, which involves both heterogeneity between individuals and correlation across dimensions of $\bm{x}_i$. Thus estimating a model for $\bm{x}_i |y_i$ can be far harder than predicting $y_i$ from $\bm{x}_i$, and inverse regression does not seem a clever place to be starting analysis. However, we can instead concentrate on the {\it population average} effect of sentiment on text by modeling the conditional distribution for collapsed token counts $\bm{x}_{y} = \sum_{i:y_i=y} \bm{x}_i$. A basic MNIR model is then \begin{equation} \label{basic-mnir} \bm{x}_{y} \sim \mr{MN}(\bm{q}_{y}, m_{y})~~\text{with}~~ q_{yj} = \frac{\exp[\alpha_j + y\varphi_j]}{\sum_{l=1}^p \exp[\alpha_l + y\varphi_l ]},~~\text{for}~~j=1,\ldots,p,~~y \in \mc{Y} \end{equation} where each $\mr{MN}$ is a $p$-dimensional multinomial distribution with size $m_{y} = \sum_{i:y_i=y} m_i$ and probabilities $\bm{q}_{ y} = [q_{y1},\ldots,q_{yp}]'$ that are a linear function of $y$ through a logistic link. Although independence assumptions implied by (\ref{basic-mnir}) are surely incorrect, within-individual correlation in $\bm{x}_i$ is quickly overwhelmed in aggregation and the multinomial becomes a decent model for $\bm{x}_{y}$.
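A minimal sketch of the collapsed-count construction and the logistic link in (\ref{basic-mnir}); the arrays below are hypothetical, and only the functional form is taken from the model.

```python
import numpy as np

def collapse(X, y):
    """Collapsed counts x_y = sum_{i: y_i = y} x_i for each sentiment level."""
    y = np.asarray(y)
    return {int(c): X[y == c].sum(axis=0) for c in np.unique(y)}

def q(alpha, phi, y):
    """Token probabilities q_y = softmax(alpha + y * phi): the logistic link."""
    eta = alpha + y * phi
    e = np.exp(eta - eta.max())           # subtract max for numerical stability
    return e / e.sum()

# Hypothetical example: p = 3 tokens, phi loads token 0 positively.
X_demo = np.array([[1, 0, 1], [2, 1, 0], [0, 3, 1]])
x_by_y = collapse(X_demo, y=[-1, 1, 1])
alpha, phi = np.zeros(3), np.array([1.0, 0.0, -1.0])
q_pos, q_neg = q(alpha, phi, 1), q(alpha, phi, -1)
```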
(One could also argue against an equidistant three-point scale for $y$; however such a scale is useful to simplify inverse regression and we assume that misspecification here can be accommodated in forward regression). Given sentiment $y$ and counts $\bm{x}$ drawn from the multinomial distribution $\mr{MN}(\bm{q}_{y}, m)$ in (\ref{basic-mnir}), the projection $\bs{\varphi}'\bm{x}$ is {\it sufficient for sentiment} in the sense that $y \perp\!\!\!\perp \bm{x} \mid \bs{\varphi}'\bm{x}, m$. A simple way to demonstrate this is through application of Bayes rule (after assigning prior probabilities for each element of $\mc{Y}$). Then given $\bm{x}_i$ counts for an {\it individual} document, $\bs{\varphi}'\bm{x}_i$ seems potentially useful as a low-dimensional index for predicting $y_i$. More specifically, we normalize by document length in defining the {\it sufficient reduction} (SR) score \begin{equation}\label{basic-sr} z_i = \bs{\varphi}'\bm{f}_i = \bs{\varphi}'\bm{x}_i/m_i. \end{equation} Now, since (\ref{basic-mnir}) is a model for collapsed text counts rather than for $\bm{x}_i$ given $y_i$, the SR score in (\ref{basic-sr}) is {\it not} theoretically sufficient for that document's sentiment. \cite{Tadd2012a} describes specific random effects models for the information loss in regressing $y_i$ onto $z_i$ instead of $\bm{x}_i$, and under certain models the individual document regression coefficients approach $\bs{\varphi}$. However, in general this population average projection is {\it misspecified} as an individual document projection. Hence, instead of applying Bayes rule to invert (\ref{basic-mnir}) for sentiment prediction, $z_i$ is treated as an observable in a second-stage regression for $y_i$ given $z_i$. Throughout this article, where $y$ is always an ordered discrete sentiment variable, this {\it forward regression} applies logistic proportional-odds models of the form $\mr{p}(y_i < c) = \left(1 + \exp[ -(\gamma_c + \beta z_i)]\right)^{-1}$.
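The two-stage prediction can be sketched directly from these formulas: the SR score $z_i = \bs{\varphi}'\bm{x}_i/m_i$ of (\ref{basic-sr}) feeds the stated proportional-odds forward model, with two cutpoints for $y \in \{-1,0,1\}$. All numeric values here are made up for illustration.

```python
import numpy as np

def sr_score(phi, x):
    """Sufficient-reduction score z = phi' x / m from Eq. (basic-sr)."""
    return phi @ x / x.sum()

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def class_probs(z, beta, gammas):
    """Proportional odds on y in {-1, 0, 1}: p(y < c) = logistic(gamma_c + beta*z)
    for the two cutpoints c = 0 and c = 1 (gammas must be increasing)."""
    cum = logistic(np.asarray(gammas) + beta * z)   # p(y < 0), p(y < 1)
    return np.array([cum[0], cum[1] - cum[0], 1.0 - cum[1]])

# Hypothetical fitted values: negative beta means large z => positive sentiment.
phi = np.array([2.0, -1.0, 0.0])
z = sr_score(phi, np.array([3.0, 1.0, 1.0]))        # (2*3 - 1*1 + 0)/5 = 1.0
p = class_probs(z, beta=-2.0, gammas=(-1.0, 1.0))
```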
\subsection{MNIR with politician-interaction} In the political Twitter application, our approach needs to be adapted to allow different text-sentiment regression models for each politician, and also to accommodate positive and negative emoticon tweets, which are sampled from all public tweets rather than always being associated with a politician. This is achieved naturally within the MNIR framework by introducing interaction terms in the inverse regression. The data are now written with text in the i$^{th}$ tweet for politician $s$ as $\bm{x}_{si}$, containing a total of $m_{si}$ tokens and accompanied by sentiment $y_{si}\in \{-1,0,1\}$, corresponding to negative, neutral, and positive sentiment respectively. Collapsed counts for each politician-sentiment combination are obtained as $x_{syj} = \sum_{i: y_{si} = y} x_{sij}$ for each token $j$. This yields 17 `observations': each of three sentiments for five politicians, plus positive and negative emoticon tweets. The multinomial inverse regression model for sentiment-$y$ text counts directed towards politician $s$ is then $\bm{x}_{sy} \sim \mr{MN}(\bm{q}_{sy}, m_{sy})$, $q_{syj} = e^{\eta_{syj}}/\sum_{l=1}^p e^{\eta_{syl}}$ for $j=1\ldots p$, with linear equation \begin{equation}\label{pols-mnir} \eta_{syj} = \alpha_{0j} + \alpha_{sj} + y(\varphi_{0j} + \varphi_{sj}). \end{equation} Politician-specific terms are set to zero for emoticon tweets (which are not associated with a specific politician), say $s=e$, such that $\eta_{eyj} = \alpha_{0j} + y\varphi_{0j}$ as a generic sentiment model. Thus all text is centered on main effects in $\bs{\alpha}_{0}$ and $\bs{\varphi}_0$, while interaction terms $\bs{\alpha}_s$ and $\bs{\varphi}_s$ are identified only through their corresponding Turk-scored political sentiment sample. Results in \cite{Tadd2012a} show that $\bm{x}'[\bs{\varphi}_{0}, \bs{\varphi}_{s}]$ is sufficient for sentiment when $\bm{x}$ is drawn from the collapsed count model implied by (\ref{pols-mnir}).
Thus, following the same logic behind our univariate SR scores in (\ref{basic-sr}), $\bm{z}_{i} = [z_{i0},z_{is}] = \bm{f}_{i}'[\bs{\varphi}_{0}, \bs{\varphi}_{s}]$ is a bivariate sufficient reduction score for tweet $i$ on politician $s$. The forward model is again proportional-odds logistic regression, \begin{equation}\label{pols-fwd} \mr{p}( y_{i} \leq c) = 1/(1 + \exp[ \beta_0 z_{i0} + \beta_s z_{is} - \gamma_c ]), \end{equation} with main $\beta_0$ and subject $\beta_s$ effects. Note the absence of subject-specific $\gamma_{sc}$: a tweet containing no significant tokens (such that $z_{i0} = z_{is} = 0$) is assigned probabilities according to the overall aggregation of tweets. Such `empty' tweets have $\mr{p}(-1) = 0.25$, $\mr{p}(0) = 0.65$, and $\mr{p}(1) = 0.1$ in the fitted model of Section \ref{analysis}, and are thus all classified as `neutral'. \subsection{Notes on MNIR estimation} Estimation of MNIR models like those in (\ref{basic-mnir}) and (\ref{pols-mnir}) follows exactly the procedures of \cite{Tadd2012a}, and the interested reader should look there for detail. Briefly, we apply the {\it gamma lasso} estimation algorithm, which corresponds to MAP estimation under a hierarchical gamma-Laplace coefficient prior scheme. Thus, and this is especially important for the interaction models of Section \ref{mnir}.2, parameters are estimated as exactly zero until a large amount of evidence has accumulated. Optimization proceeds through coordinate descent and, along with the obvious efficiency derived from collapsing observations, allows for estimation of single-factor SR models with hundreds of thousands of tokens in mere seconds. The more complicated interaction model in (\ref{pols-mnir}) can be estimated in less than 10 minutes. To restate the MNIR strategy, we are using a simple but very high-dimensional (collapsed count) model to obtain a useful but imperfect text summary for application in low dimensional sentiment regression.
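As a quick numeric check of the empty-tweet classification above: at $z_{i0} = z_{is} = 0$, (\ref{pols-fwd}) reduces to $\mr{p}(y \leq c) = \mr{logistic}(\gamma_c)$, so the quoted probabilities pin down the cutpoints directly. The sketch below recovers them; nothing beyond the three quoted probabilities is taken from the fitted model.

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

# At z_{i0} = z_{is} = 0, Eq. (pols-fwd) gives p(y <= c) = logistic(gamma_c),
# so the empty-tweet probabilities 0.25 / 0.65 / 0.10 imply the cutpoints:
gamma_neg = np.log(0.25 / 0.75)     # from p(y <= -1) = 0.25
gamma_neu = np.log(0.90 / 0.10)     # from p(y <= 0) = 0.25 + 0.65 = 0.90

p_neg = logistic(gamma_neg)
p_neu = logistic(gamma_neu) - p_neg
p_pos = 1.0 - logistic(gamma_neu)
```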
MNIR works because the multinomial is a useful representation for token counts, and this model assumption increases efficiency by introducing a large amount of information about the functional relationship between text and sentiment into the prediction problem. Implicit here is an assumption that ad-hoc forward regression can compensate for mis-application of population-average summary projections to individual document counts. \cite{Tadd2012a} presents empirical evidence that this holds true in practice, with MNIR yielding higher quality prediction at lower computational cost when compared to a variety of text regression techniques. However, the design algorithms of this article are not specific to MNIR and can be combined with any sentiment prediction routine. \section{Topic-optimal design} \label{design} Recall the introduction's pool-based design problem: choosing from the full sample of 2.1 million political tweets a subset to be scored, on Mechanical Turk, as either negative, neutral, or positive about the relevant politician. A short review of some relevant literature on active learning and experimental design is in the appendix. In our specific situation of a very high-dimensional input space (i.e., a large vocabulary), effective experimental design is tough to implement. Space-filling is impractical since limited sampling will always leave a large distance between observations. Boundary selection -- where documents with roughly equal sentiment-class probabilities are selected for scoring -- leads to samples that are very sensitive to model fit and is impossible in early sampling where the meaning of most terms is unknown (such that the vast majority of documents lie on this boundary). Moreover, one-at-a-time point selection implies sequential algorithms that scale poorly for large applications, while more elaborate active learning routines which solve for optimal batches of new points tend to have their own computational limits in high dimension.
Finally, parameter and predictive uncertainty -- which are relied upon in many active learning routines -- are difficult to quantify in complicated text regression models; this includes MNIR, in which the posterior is non-smooth and is accompanied by an ad-hoc forward regression step. The vocabulary is also growing with sample size and a full accounting of uncertainty about sentiment in unscored texts would depend heavily on a prior model for the meaning of previously unobserved words. While the above issues make tweet selection difficult, we do have an advantage that can be leveraged in application: a huge pool of unscored documents. Our solution for text sampling is thus to look at space-filling or optimal design criteria (e.g., D-optimality) but on a reduced dimension factor decomposition of the covariate space rather than on $\bm{X}$ itself. That is, although the main goal is to learn $\bs{\Phi}$ for the sentiment projections of Section \ref{mnir}, this cannot be done until enough documents are scored and we instead look to space-fill on an {\it unsupervised} factor structure that can be estimated without labelled examples. This leads to what we call {\it factor-optimal design}. Examples of this approach include \citet{GalvMaccBezz2007} and \citet{ZhanEdga2008}, who apply optimal design criteria on principal components, and \citet{DavyLuz2007}, a text classification contribution that applies active learning criteria to principal components fit for word counts. The proposal here is to replace generic principal component analysis with text-appropriate topic model factorization.
\subsection{Multinomial topic factors} A $K$-topic model \citep{BleiNgJord2003} represents each vector of document token counts, $\bm{x}_i \in \{\bm{x}_{1}\ldots \bm{x}_n\}$ with total $m_i = \sum_{j=1}^p x_{ij}$, as a multinomial factor decomposition \begin{equation}\label{eq:tpc} \bm{x}_i \sim \mr{MN}(\omega_{i1} \bs{\theta}_{1} + \ldots + \omega_{iK} \bs{\theta}_{K}, m_i) \end{equation} where topics $\bs{\theta}_k = [\theta_{k1} \cdots \theta_{kp}]'$ and weights $\bs{\omega}_i$ are probability vectors. Hence, each topic $\bs{\theta}_k$ -- a vector of probabilities over words or phrases -- corresponds to factor `loadings' or `rotations' in the usual factor model literature. Documents are thus characterized through a mixed-membership weighting of topic factors and $\bs{\omega}_i$ is a reduced dimension summary for $\bm{x}_i$. Briefly, this approach assumes independent prior distributions for each probability vector, \begin{equation}\label{eq:prior} \bs{\omega}_i \stackrel{iid}{\sim} \mr{Dir}(1/K),~i=1\ldots n,~~\text{and}~~ \bs{\theta}_k \stackrel{iid}{\sim} \mr{Dir}(1/(Kp)),~k=1\ldots K, \end{equation} where $\bs{\theta} \sim \mr{Dir}(\alpha)$ indicates a Dirichlet distribution with concentration parameter $\alpha$ and density proportional to $\prod_{j=1}^{\mr{dim}(\bs{\theta})} \theta_j^{\alpha-1}$. These $\alpha < 1$ specifications encourage a few dominant categories among mostly tiny probabilities by placing weight at the edges of the simplex. The particular specification in (\ref{eq:prior}) is chosen so that prior weight, measured as the sum of concentration parameters multiplied by the dimension of their respective Dirichlet distribution, is constant in both $K$ and $p$ (although not in $n$). The model is estimated through posterior maximization as in \citet{Tadd2012b}, and we employ a Laplace approximation for simulation from the conditional posterior for $\bs{\Omega}$ given $\bs{\Theta} = [\bs{\theta}_1 \cdots \bs{\theta}_K]$.
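The generative structure of (\ref{eq:tpc})--(\ref{eq:prior}) can be sketched directly; the draw below is purely illustrative, with toy values of $K$, $p$, and $m_i$ rather than anything fit to the data.

```python
import numpy as np

rng = np.random.default_rng(0)
K, p, m_i = 3, 8, 50                   # toy sizes, not the paper's

# Priors from Eq. (prior): Dir(1/(Kp)) topics and Dir(1/K) weights.
Theta = rng.dirichlet(np.full(p, 1.0 / (K * p)), size=K)   # K x p topic rows
omega = rng.dirichlet(np.full(K, 1.0 / K))                 # document weights

# Eq. (tpc): x_i ~ MN(omega_1 theta_1 + ... + omega_K theta_K, m_i).
q_i = omega @ Theta
x_i = rng.multinomial(m_i, q_i)
```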
The same posterior approximation allows us to estimate Bayes factors for potential values of $K$, and we use this to {\it infer} the number of topics from the data. Details are in Appendix \ref{bayes}. \subsection{Topic D-optimal design} As a general practice, one can look to implement any space-filling design in the $K$ dimensional $\bs{\omega}$-space. For the current study, we focus on D-optimal design rules that seek to maximize the determinant of the information matrix for linear regression; the result is thus loosely optimal under the assumption that sentiment has a linear trend in this representative factor space. The algorithm tends to select observations that are at the edges of the topic space. An alternative option that may be more robust to sentiment-topic nonlinearity is to use a Latin hypercube design; this will lead to a sample that is spread evenly throughout the topic space. In detail, we seek to select a design of documents $\{i_1 \ldots i_T\} \subset \{1\ldots n\}$ to maximize the topic information determinant $D_T = |\bs{\Omega}_T'\bs{\Omega}_T|$, where $\bs{\Omega}_T = [\bs{\omega}_1 \cdots \bs{\omega}_T]'$ and $\bs{\omega}_t$ are topic weights associated with document $i_t$. Since construction of exact D-optimal designs is difficult and the algorithms are generally slow \citep[see][for an overview of both exact and approximate optimal design]{AtkiDone1992}, we use a simple greedy search to obtain an {\it ordered} list of documents for evaluation in a near-optimal design. Given $D_t = |\bs{\Omega}_t'\bs{\Omega}_t|$ for a current sample of size $t$, the topic information determinant after adding $i_{t+1}$ as an additional observation is \begin{equation}\label{dup} D_{t+1} = \left|\bs{\Omega}_t'\bs{\Omega}_t + \bs{\omega}_{t+1} \bs{\omega}_{t+1}'\right| = D_{t}\left( 1 + \bs{\omega}_{t+1}' \left(\bs{\Omega}_t'\bs{\Omega}_t\right)^{-1} \bs{\omega}_{t+1}\right), \end{equation} due to a standard linear algebra identity (the matrix determinant lemma).
This implies that, given $\bs{\Omega}_t$ as the topic matrix for the currently evaluated documents, $D_{t+1}$ is maximized simply by choosing $i_{t+1}$ such that \begin{equation}\label{max} \bs{\omega}_{t+1} = \mr{argmax}_{\{\bs{\omega}\in \bs{\Omega}/\bs{\Omega}_t\}} ~~\bs{\omega}' \left(\bs{\Omega}_t'\bs{\Omega}_t\right)^{-1} \!\!\bs{\omega}. \end{equation} Since the topic weights are a low ($K$) dimensional summary, the necessary inversion $\left(\bs{\Omega}_t'\bs{\Omega}_t\right)^{-1}$ is on a small $K\times K$ matrix and will not strain computing resources. This inverted matrix provides an operator that can quickly be applied to the pool of candidate documents (in parallel if desired), yielding a simple score for each that represents the proportion by which its inclusion increases our information determinant. For the recursive equation in (\ref{max}) to apply, the design must be initially seeded with at least $K$ documents, such that $\bs{\Omega}_t'\bs{\Omega}_t$ will be non-singular. We do this by starting from a simple random sample of the first $t=K$ documents (alternatively, one could use more principled space-filling in factor space, such as a Latin hypercube sample). Note again that topic-model dimension reduction is crucial: for our greedy algorithm to work in the full $p$-dimensional token space, we would need to sample $p$ documents before having an invertible information matrix. Since this would typically be a larger number of documents than desired for the full sample, such an approach would never move beyond the seeding stage. In execution of this design algorithm, the topic weights for each document must be estimated. In what we label MAP topic D-optimal design, each $\bs{\omega}_i$ for document $i$ is fixed at its MAP estimate as described in Section \ref{design}.1.
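The greedy search of (\ref{dup})--(\ref{max}) is easy to sketch. The implementation below assumes topic weights are supplied as rows of an $n \times K$ matrix, with at least $K$ seed indices from the caller (e.g., a simple random sample); the demonstration weights are simulated, not estimated from data.

```python
import numpy as np

def greedy_d_optimal(Omega, T, seed):
    """Return an ordered near-D-optimal design of T document indices.

    Omega: n x K matrix of (MAP) topic weights; seed: at least K starting
    indices, so that Omega_t' Omega_t is invertible."""
    chosen = list(seed)
    A = Omega[chosen].T @ Omega[chosen]          # K x K information matrix
    pool = np.array([i for i in range(len(Omega)) if i not in set(chosen)])
    while len(chosen) < T and len(pool):
        Ainv = np.linalg.inv(A)                  # cheap: K x K only
        # Eq. (max): score candidates by omega' (Omega_t' Omega_t)^{-1} omega.
        scores = np.einsum("ij,jk,ik->i", Omega[pool], Ainv, Omega[pool])
        b = int(np.argmax(scores))
        i_next = int(pool[b])
        chosen.append(i_next)
        pool = np.delete(pool, b)
        A += np.outer(Omega[i_next], Omega[i_next])  # rank-one update, Eq. (dup)
    return chosen

# Simulated weights standing in for MAP estimates.
rng = np.random.default_rng(1)
Omega_demo = rng.dirichlet(np.ones(3), size=40)
design = greedy_d_optimal(Omega_demo, T=12, seed=[0, 1, 2, 3])
```

By (\ref{dup}), each addition multiplies the determinant by $1 + \bs{\omega}'(\bs{\Omega}_t'\bs{\Omega}_t)^{-1}\bs{\omega} \geq 1$, so the information determinant is nondecreasing along the returned ordering.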
As an alternative, we also consider a {\it marginal} topic D-optimality wherein a set of topic weights $\{\bs{\omega}_{i1}\ldots \bs{\omega}_{iB}\}$ are sampled for each document from the approximate posterior in Appendix A.1, such that recursively D-optimal documents are chosen to maximize the {\it average} determinant multiplier over this set. Thus, instead of (\ref{max}), marginal D-optimal $i_{t+1}$ is selected to maximize $\frac{1}{B}\sum_b \bs{\omega}_{i_{t+1}b}' \left(\bs{\Omega}_t'\bs{\Omega}_t\right)^{-1} \!\!\bs{\omega}_{i_{t+1}b}$. \subsection{Note on the domain of factorization} The basic theme of this design framework is straightforward: fit an unsupervised factor model for $\bm{X}$ and use an optimal design rule in the resulting factor space. Given a single sentiment variable, as in examples of Section \ref{examples}, the $\bm{X}$ to be factorized is simply the entire text corpus. Our political twitter case study introduces the added variable of `politician', and it is no longer clear that a single shared factorization of all tweets is appropriate. Indeed, the interaction model of Section \ref{mnir}.2 includes parameters (the $\alpha_{sj}$ and $\varphi_{sj}$) that are only identified by tweets on the corresponding politician. Given the massive amount of existing data from emoticon tweets on the other model parameters, any parameter learning from new sampling will be concentrated on these interaction parameters. Our solution in Section \ref{analysis} is to apply stratified sampling: fit independent factorizations to each politician-specific sub-sample of tweets, and obtain D-optimal designs on each. Thus we ensure a scored sample of a chosen size for each individual politician. \section{Example Experiment} \label{examples} To illustrate this design approach, we consider two simple text-sentiment examples. Both are detailed in \cite{Tadd2012a,Tadd2012b}, and available in the {\smaller\sf textir} package for {\smaller\sf R}. 
{\it Congress109} contains 529 legislators' usage counts for each of 1000 phrases in the $109^{th}$ US Congress, and we consider party membership as the `sentiment' of interest: $y=1$ for Republicans and $0$ otherwise (two independents caucused with Democrats). {\it We8there} consists of counts for 2804 bigrams in 6175 online restaurant reviews, accompanied by restaurant {\it overall} rating on a scale of one to five. To mimic the motivating application, we group review sentiment as negative ($y=-1$) for ratings of 1-2, neutral ($y=0$) for 3-4, and positive ($y=1$) for 5 (average rating is 3.95, and the full 5-class analysis is in \citealt{Tadd2012a}). Sentiment prediction follows the single-factor MNIR procedure of Section \ref{mnir}, with binary logistic forward regression $\ds{E}[y_i] = \exp[ \gamma + \beta z_i]/(1 + \exp[ \gamma + \beta z_i] )$ for the congress data, and proportional-odds logistic regression $\mr{p}(y_i \leq c) = \exp[ \gamma_c - \beta z_i]/(1 + \exp[ \gamma_c - \beta z_i] )$, $c=-1,0,1$ for the we8there data. We fit $K =$ 12 and 20 topics respectively to the congress109 and we8there document sets. In each case, the number of topics is chosen to maximize the approximate marginal data likelihood, as detailed in the appendix and in \cite{Tadd2012b}. Ordered sample designs were then selected following the algorithms of Section \ref{design}.2: for MAP D-optimal, using MAP topic weight estimates, and for marginal D-optimal, based upon approximate posterior samples of 50 topic weights for each document. We also consider principal component D-optimal designs, built following the same algorithm but with topic weights replaced by the same number (12 or 20) of principal component directions fit on token frequencies $\bm{f}_i = \bm{x}_i/m_i$. Finally, simple random sampling is included as a baseline, and was used to seed each D-optimal algorithm with its first $K$ observations. Each random design algorithm was repeated 100 times.
\begin{figure}[t] \hskip -.4cm\includegraphics[width=6.6in]{LearningExperiment} \caption{\label{experiment} Average error rates on 100 repeated designs for the 109$^{th}$ congress and we8there examples. `MAP' is D-optimal search on MAP estimated topics; `Bayes' is our search for marginal D-optimality when sampling from the topic posterior; `PCA' is the same D-optimal search in principal components factor space; and `random' is simple random sampling. Errors are evaluated over the entire dataset.} \end{figure} Results are shown in Figure \ref{experiment}, with average error rates (misclassification for congress109 and mean absolute error for we8there) reported for maximum probability classification over the entire data set. The MAP D-optimal designs perform better than simple random sampling, in the sense that they provide faster reduction in error rates with increasing sample size. The biggest improvements are in early sampling and error rates converge as we train on a larger proportion of the data. There is no advantage gained from using a principal component (rather than topic) D-optimal design, illustrating that misspecification of factor models can impair or eliminate their usefulness in dimension reduction. Furthermore, we were surprised to find that, in contrast with some previous studies on active learning \citep[e.g.][]{TaddGramPols2011}, averaging over posterior uncertainty did not improve performance: the MAP D-optimal design does as well or better than the marginal alternative, which is even outperformed by random sampling in the we8there example. Our hypothesis is that, since conditioning on $\bs{\Theta}$ removes dependence across documents, sampling introduces Monte Carlo variance without providing any beneficial information about correlation in posterior uncertainty. 
Certainly, given that the marginal algorithm is also much more time-consuming (with every operation executed $B$ times in addition to the basic cost of sampling), it seems reasonable to focus on the MAP algorithm in application. \section{Analysis of Political Sentiment in Tweets} \label{analysis} This section describes selection of tweets for sentiment scoring from the political Twitter data described in Section \ref{data}, under the design principles outlined above, along with an MNIR analysis of the results and sentiment prediction over the full collection. \subsection{Topic factorization and D-optimal design} As the first step in experimental design, we apply the topic factorization of Section \ref{design}.1 independently to each politician's tweet set. Using the Bayes factor approach of \cite{Tadd2012b}, we tested $K$ of 10, 20, 30, and 40 for each collection and, in every case, selected the simple $K=10$ model as most probable. Although this is a smaller topic model than often seen in the literature, we have found that posterior evidence tends to favor such simple models in corpora with short documents \citep[see][for discussion of information increase with $m_i$]{Tadd2012b}. Across politicians, the most heavily used topic (accounting for about 20\% of words in each case) always had {\smaller\sf com}, {\smaller\sf http}, and {\smaller\sf via} among the top five tokens by topic lift -- the probability of a token within a topic over its overall usage proportion. Hence, these topics appear to represent a Twitter-specific list of stopwords. The other topics are a mix of opinion, news, or user-specific language.
For example, in the Gingrich factorization, one topic accounting for 8\% of text, with top tokens {\smaller\sf herman}, {\smaller\sf cain}, and {\smaller\sf endors}, is focused on Herman Cain's endorsement. Another 8\% topic has {\smaller\sf \#teaparty} as a top token and appears to contain language used by self-identified members of the Tea Party movement (this term loads heavily in a single topic for each politician we tracked). A third topic, with {\smaller\sf @danecook} as the top term, accounts for 10\% of traffic and is dominated by posts of unfavorable jokes and links about Gingrich by the comedian Dane Cook (and forwards, or `retweets', of these jokes by his followers). Viewing the sentiment collection problem through these interpreted topics can be useful: since a D-optimal design looks (roughly) for large variance in topic weights, it can be seen as favoring tweets on single topics (e.g., the Cain endorsement) or rare combinations of topics (e.g., a Tea Partier retweeting a Dane Cook joke). As a large proportion of our data are retweets (nearly 40\%), scoring those sourced from a single influential poster can yield a large reduction in predictive variance, and tweets containing contradictory topics help resolve the relative weighting of words. In the end, however, it is good to remember that the topics do not correspond to subjects in the common understanding, but are simply loadings in a multinomial factor model. The experimental design described in the next section treats the fitted topics as such. \subsection{Experimental design and sentiment collection} Using the MAP topic D-optimal algorithm of Section \ref{design}.2, applied to each politician's topic factorization, we built ordered lists of tweets to be scored on Mechanical Turk: 500 for each Republican primary candidate, and 750 for Obama.
Worker agreement rates varied from 78\% for Obama to 85\% for Paul, leading to sample sizes of 406 for Romney, 409 for Santorum, 418 for Gingrich, 423 for Paul, and 583 for Obama. Unlike the experiment of Section \ref{examples}, we have no ground truth for evaluating model performance across sample sizes, short of paying for a large amount of additional Turk scoring. Instead, we propose two metrics: the number of non-zero politician-specific loadings $\varphi_{sj}$, and the average entropy $-\sum_{c=-1,0,1} \mr{p}_{c} \log(\mr{p}_{c})$ across tweets for each politician, where $\mr{p}_{c} = \mr{p}(y =c)$ is based on the forward proportional-odds regression described below in \ref{analysis}.2. We prefer the former for measuring the {\it amount of sample evidence} -- the number of tokens estimated as significant for politician-specific sentiment in gamma-lasso penalized estimation -- as a standard statistical goal in design of experiments, but the latter corresponds to the more common machine learning metric of classification precision (indeed, entropy calculations inform many of the close-to-boundary active learning criteria in Appendix \ref{back}). \begin{figure}[t] \includegraphics[width=6.4in]{polsLearning} \caption{\label{learning} Learning under the MAP topic D-optimal design. For increasing numbers of scored tweets added from the ranked design, the left shows the number of significant (nonzero) loadings in the direction of politician-specific sentiment and the right shows mean entropy $-\sum \mr{p}_c \log(\mr{p}_{c})$ over the full sample. As in Figure \ref{volume}, blue is Obama, orange Romney, red Santorum, pink Gingrich, and green Paul.} \end{figure} Results are shown in Figure \ref{learning} for the sequential addition of scored tweets from the design-ranked Turk results (sentiment regression results are deferred until Section \ref{analysis}.3).
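Both evaluation metrics are straightforward to compute from fitted quantities. A minimal sketch (the helper names are ours; we assume gamma-lasso estimates are exactly sparse and that fitted three-class probabilities are available as an array):

```python
import numpy as np

def sample_evidence(phi_s, tol=0.0):
    """Number of non-zero politician-specific loadings (gamma-lasso
    estimates are exactly sparse, so a zero-tolerance count suffices)."""
    return int(np.sum(np.abs(phi_s) > tol))

def mean_entropy(probs):
    """Average three-class entropy -sum_c p_c log p_c across tweets.

    probs: (n_tweets, 3) array of fitted p(y=-1), p(y=0), p(y=1).
    """
    p = np.clip(probs, 1e-12, 1.0)       # guard against log(0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))
```

The first metric climbs as evidence accumulates, while the second falls as classification becomes more confident, mirroring the two panels of Figure \ref{learning}.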
On the left, we see that there is a steady climb in the number of nonzero politician-specific loadings as the sample sizes increase. Although the curves flatten with more sampling, it appears that continued spending on Turk scoring would have produced still larger politician-sentiment dictionaries. The right plot shows a familiar pattern of early overfitting (i.e., underestimated classification variance) before the mean entropy begins a slower steady decline from $t=200$ onwards. \subsection{MNIR for subject-specific sentiment analysis} After all Turk results are incorporated, we are left with 2242 scored political tweets, plus the 1.6 million emoticon tweets, and a 5566-token vocabulary. These data were used to fit the politician-interaction MNIR model detailed in Section \ref{mnir}.2. The top ten politician-specific loadings ($\varphi_{sj}$) by absolute value are shown in Table \ref{loadings} (recall that these are the effects on log odds for a unit increase in sentiment; thus, e.g., negatively loaded terms occur more frequently in negative tweets). This small sample shows some large coefficients, corresponding to indicators for users or groups, news sources and events, and various other labels. For example, the Obama column results suggest that his detractors prefer to use `GOP' as shorthand for the Republican party, while his supporters simply use `republican'. However, one should be cautious about interpretation: these coefficients correspond to the partial effects of sentiment on the usage proportion for a term {\it given} corresponding change in relative frequency for all other terms. Moreover, these are only estimates of average correlation; this analysis is not intended to provide a causal or long-term text-sentiment model. Summary statistics for fitted SR scores are shown in Table \ref{zsmry}.
Although we are not strictly forcing orthogonality on the factor directions -- $z_{0}$ and $z_s$, say, the emotional and political sentiment directions, respectively -- the political scores have only weak correlation (absolute value $<$ 0.2) with the generic emotional scores. This is due to an MNIR setup that estimates politician-specific loadings $\varphi_{sj}$ as the sentiment effect on language about a given politician {\it after} controlling for generic sentiment effects. Notice that there is greater variance in political scores than in emotional scores; this is due to a few large token loadings that arise from identifying particular tweets (that are heavily retweeted) or users that are strongly associated with positive or negative sentiment. However, since we have far fewer scored political tweets than there are emoticon tweets, fewer token loadings are non-zero in the politician-specific directions than in the generic direction: $\bs{\varphi}_0$ is only 7\% sparse, while the $\bs{\varphi}_s$ are an average of 97\% sparse.
\begin{table}[t] \vspace{.1cm} \centering\small \begin{tabular}{cl|cl|cl|cl|cl} \multicolumn{2}{c}{\normalsize \sc Obama} &\multicolumn{2}{c}{\normalsize \sc Romney} &\multicolumn{2}{c}{\normalsize \sc Santorum} &\multicolumn{2}{c}{\normalsize \sc Gingrich} &\multicolumn{2}{c}{\normalsize \sc Paul}\\ [-1.5ex]\\ \smaller\sf republican&15.5&\smaller\sf fu&-10&\smaller\sf @addthi&-11.5&\smaller\sf bold&10.6&\smaller\sf \#p2&11.1\\ \smaller\sf gop&-13.2&\smaller\sf 100\%&-9.6&\smaller\sf @newtgingrich&-9.9&\smaller\sf mash&-10&\smaller\sf \#teaparti&11\\ \smaller\sf \#teaparti&-12.7&\smaller\sf lover&-9.4&\smaller\sf clown&-9.4&\smaller\sf ap&9.9&\smaller\sf ht&10\\ \smaller\sf \#tlot&-11.9&\smaller\sf quot&-9.4&\smaller\sf @youtub&-9.2&\smaller\sf obama&9.9&\smaller\sf airplan&9.6\\ \smaller\sf economi&11&\smaller\sf anytim&-9.2&\smaller\sf polit&-8.7&\smaller\sf campaign&-9.9&\smaller\sf legal&-9.5\\ \smaller\sf cancer&10&\smaller\sf abt&-8.6&\smaller\sf speech&-8.6&\smaller\sf lesbian&-9.7&\smaller\sf paypal&7.4\\ \smaller\sf cure&9.6&\smaller\sf lip&-8.5&\smaller\sf opportun&-8.2&\smaller\sf pre&9.5&\smaller\sf flight&6.9\\ \smaller\sf ignor&9.2&\smaller\sf incom&-8.4&\smaller\sf disgust&-8.2&\smaller\sf bid&9.5&\smaller\sf rep&6.7\\ \smaller\sf wors&9.2&\smaller\sf januari&8.1&\smaller\sf threw&-7.4&\smaller\sf recip&-9.2&\smaller\sf everyth&-6.4\\ \smaller\sf campaign&9.2&\smaller\sf edg&8&\smaller\sf cultur&-7.3&\smaller\sf america&9.1&\smaller\sf debat&6 \end{tabular} \caption{ Top ten politician-specific token loadings $\varphi_{sj}$ by their absolute value in MNIR. \label{loadings}} \end{table} \begin{figure}[t] \begin{minipage}{3.2in} \includegraphics[width=3.2in]{polsFit} \caption{\label{fit} In-sample sentiment fit: the forward model probabilities for each observation's true category. 
} \end{minipage} ~~~~~ \begin{minipage}{2.8in}\small \vspace{1cm} \begin{tabular}{lccc} \multicolumn{1}{c}{} & $\mr{cor}(z_s,z_0)$ & $\mr{sd}(z_s)$ & $\bar z_s$ \\ \\[-1.5ex]\cline{2-4}\\[-1.5ex] \sf obama&-0.12&0.31&0.1\\ \sf romney&0.17&0.23&-0.07\\ \sf santorum&0.16&0.19&-0.19\\ \sf gingrich&-0.07&0.26&-0.06\\ \sf paul&0.07&0.16&0.1\\ \sf emoticons &---&0.06&0.006 \end{tabular} \vskip .25cm \captionof{table}{\label{zsmry} {\it Full} sample summary statistics for politician-specific sufficient reduction scores. } \end{minipage} \end{figure} Figure \ref{fit} shows fitted values in forward proportional-odds logistic regression for these SR scores. We observe some very high fitted probabilities for both true positive and negative tweets, indicating again that the analysis is able to identify a subset of similar tweets with easy sentiment classification. Tweet categorization as neutral corresponds to an absence of evidence in either direction, and neutral tweets have fitted $\mr{p}(0)$ with mean around 0.6. In other applications, we have found that a large number of `junk' tweets (e.g., selling an unrelated product) requires non-proportional-odds modeling to obtain high fitted neutral probabilities, but there appears to be little junk in the current sample. As an aside, we have experimented with adding `junk' as a fourth possible categorization on Mechanical Turk, but have been unable to find a presentation that avoids workers consistently getting confused between this and `neutral'. 
\begin{table}[t] \vspace{.2cm} \small\hspace{-.3cm} \begin{tabular}{lccccccccc} &\multicolumn{2}{c}{\normalsize \sc Intercepts $\gamma_c$} & &\multicolumn{6}{c}{\normalsize \sc SR score coefficients $\beta_0$, $\beta_s$} \\ \cline{2-3} \cline{5-10} \\[-2ex] & $\leq -1$ & \hspace{-.3cm}$\leq 0$ && \smaller\sf emoticons & \smaller\sf obama & \smaller\sf romney & \smaller\sf santorum & \smaller\sf gingrich & \smaller\sf paul\\ [-2ex]\\ Estimate & -1.1 {\smaller (0.1)}&\hspace{-.3cm} 2.2 {\smaller (0.1)}&&8.3 {\smaller (1.1)}&4.9 {\smaller (0.5)}& 5.6 {\smaller (0.5)}&5.8 {\smaller (0.5)}&7.9 {\smaller (1.0)}&11.9 {\smaller (1.1)}\\ [-2.5ex]\\ $\beta \times \bar z_s$ & & & & 0.0 & 0.5 & -0.4& -1.1& -0.5 & 1.2 \\ {$\exp[\beta \times \mr{sd}(z)]$}\hspace{-.2cm} & & & & 1.6 & 4.5 & 3.6 & 2.9 & 7.7 & 6.4 \end{tabular} \caption{ MAP estimated parameters and the conditional standard deviation (ignoring variability in $\bm{z}$) in the forward proportional-odds logistic regression $\mr{p}( y_{i} \leq c) = (1 + \exp[ \beta_0 z_{i0} + \beta_s z_{is} - \gamma_c])^{-1}$, followed by the average effect on log-odds for each sufficient reduction score and exponentiated coefficients scaled according to the corresponding full-sample score standard deviation. \label{coef}} \end{table} The forward parameters are MAP estimated, using the {\smaller\sf arm} package for {\smaller\sf R} \citep{GelmSuYajiHillPittKermZhen2012}, under diffuse $t$-distribution priors; these estimates are printed in Table \ref{coef}, along with some summary statistics for the implied effect on the odds of a tweet being at or above any given sentiment level. The middle row of this table contains the average effect on log-odds for each sufficient reduction score: for example, we see that Santorum tweet log-odds drop by an average of 1.1 ($e^{-1.1} \approx 0.3$) when his politician-specific tweet information is included.
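The forward model of Table \ref{coef} can be evaluated directly from its cut-points and coefficients. The sketch below (helper names are ours, not from the {\smaller\sf arm} package; the default cut-points $\gamma_{\leq -1} = -1.1$ and $\gamma_{\leq 0} = 2.2$ are the estimates from the table) computes the three class probabilities from the cumulative-odds formula:

```python
import numpy as np

def prob_leq(z0, zs, beta0, beta_s, gamma_c):
    """Cumulative probability in the proportional-odds model:
    p(y <= c) = 1 / (1 + exp[beta0*z0 + beta_s*zs - gamma_c])."""
    return 1.0 / (1.0 + np.exp(beta0 * z0 + beta_s * zs - gamma_c))

def class_probs(z0, zs, beta0, beta_s, gammas=(-1.1, 2.2)):
    """Return (p(-1), p(0), p(1)) from the cut-points gamma_{<=-1} < gamma_{<=0}."""
    p_neg = prob_leq(z0, zs, beta0, beta_s, gammas[0])
    p_leq0 = prob_leq(z0, zs, beta0, beta_s, gammas[1])
    return p_neg, p_leq0 - p_neg, 1.0 - p_leq0
```

Because a single linear predictor $\beta_0 z_{0} + \beta_s z_{s}$ shifts all cut-points simultaneously, a one-standard-deviation change in any SR score multiplies every cumulative odds ratio by the same factor $e^{\beta \times \mr{sd}(z)}$ reported in the bottom row of the table.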
The bottom row shows the implied effect on sentiment odds scaled for a standard deviation increase in each SR score direction: an extra deviation in emotional $z_0$ multiplies the odds by $e^{0.5} \approx 1.6$, while a standard deviation increase in political SR scores implies more dramatic odds multipliers of 3 (Santorum) to 8 (Gingrich). This agrees with the fitted probabilities of Figure \ref{fit}, and again indicates that political directions are identifying particular users or labels, and not `subjective language' in the general sense. \begin{figure}[t] \includegraphics[width=6.3in]{polsSentiment} \caption{\label{sentiment} Twitter sentiment regression full-sample predictions. Daily tweet count percentages by sentiment classification are shown with green for positive, grey neutral, and red negative. } \end{figure} Figure \ref{sentiment} shows predicted sentiment classification for each of our 2.1 million collected political tweets, aggregated by day for each politician-subject. In each case, the majority of traffic lacks enough evidence in either direction, and is classified as neutral. However, some clear patterns do arise. The three `mainstream' Republicans (Romney, Santorum, Gingrich) have far more negative than positive tweets, with Rick Santorum performing worst. Libertarian Ron Paul appears to be relatively popular on Twitter, while President Obama is the only other politician to receive (slightly) more positive than negative traffic. It is also possible to match sentiment classification changes to events; for example, Santorum's negative spike around Feb 20 comes after a weekend of new aggressive speeches in which he referenced Obama's `phony theology' and compared schools to `factories', among other lines that generated controversy. Finally, note for comparison that without the interaction terms (i.e., with only {\it score} as a covariate in inverse regression), the resulting univariate SR projection is dominated by emoticon-scored text.
These projections turn out to be a poor summary of sentiment in the political tweets: there is little discrimination between SR scores across sentiment classes, and the in-sample misclassification rate jumps to 42\% (from 13\% for the model that uses politician-specific intercepts). Fitted class probabilities are little different from overall class proportions, and with true neutral tweets being less common (at 22\% of our Turk-scored sample) the result is that all future tweets are unrealistically predicted as either positive or negative. \section{Discussion} \label{discussion} This article makes two simple proposals for text-sentiment analysis. First, looking to optimal design in topic factor space can be useful for choosing documents to be scored. Second, sentiment can be interacted with indicator variables in MNIR to allow subject-specific inference to complement information sharing across generic sentiment. Both techniques deserve some caution. Topic D-optimal design ignores document length, even though longer documents can be more informative; this is not a problem for the standardized Twitter format, and did not appear to harm design for our illustrative examples, but it could be an issue in other settings. In the MNIR analysis, we have observed that subject-specific sentiment loadings (driven in estimation by small sample subsets) can be dominated by news or authors specific to the given sample. While this is not overfitting in the technical sense, since these are persistent signals in the current time period, it indicates that one should constantly update models when using these techniques for longer-term prediction. A general lesson from this study is that traditional statistical techniques, such as experimental design and variable interaction, will apply in new areas like text mining when used in conjunction with careful dimension reduction.
Basic statistics principles can then be relied upon to build optimal predictive models and to assess their risk and sensitivity in application.
\section{Introduction}\label{intro} Investigating the Higgs-Yukawa model using Lattice Field Theory can provide input for physics at the Large Hadron Collider (LHC). In particular, it enables us to extend the study of the model to the non-perturbative regime, where a rich phase structure has been found~\cite{Hasenfratz:1992xs, Gerhold:2007yb, Gerhold:2007gx, Bulava:2012rb}. Such a phase structure can be employed to address the hierarchy problem and the issue of triviality. It can also provide insight into the nature of the electroweak thermal phase transition, which plays an important role in the phenomenology of baryogenesis. This article presents two projects on the Higgs-Yukawa model. First, we discuss the extension of the theory with a dimension-six operator that may lead to a strong first-order thermal phase transition~\cite{Grojean:2004xa, Huang:2015izx, Damgaard:2015con, Cao:2017oez, deVries:2017ncy}. Secondly, we show results from our study of finite-size scaling for the Higgs-Yukawa model near the Gaussian fixed point. Unlike the usual practice in finite-size scaling, where the scaling functions are unknown, in this project we are able to derive these functions. To our knowledge this is the first time such finite-size scaling functions have been determined and confronted with data from lattice simulations\footnote{In Ref.~\cite{Gockeler:1992zj}, the authors derived similar scaling functions for the pure scalar $O(4)$ model in the large-volume limit.}. Such a study is of interest in its own right and will allow us to develop useful tools in the search for possible non-trivial fixed points at strong coupling.
In this work, we investigate the Higgs-Yukawa model that is described by the continuum action \begin{eqnarray} \label{eq:cont_action} && S^{{\mathrm{cont}}}[\varphi,\bar{\psi},\psi] = \int \mathrm{d}^{4}x \left\{ \frac{1}{2} \left( \partial_{\mu}\varphi\right)^{\dagger} \left( \partial_{\mu}\varphi \right) + \frac{1}{2} m_{0}^2 \varphi^{\dagger} \varphi +\lambda \left( \varphi^{\dagger} \varphi \right)^2 \right\} \nonumber \\ && \hspace{3.0cm}+\int \mathrm{d}^{4}x \left\{ \bar{\Psi} \partial \hspace*{-1.8 mm} \slash \Psi + y \left( \bar{\Psi}_{L} \varphi b_{R} + \bar{\Psi}_{L} \tilde{\varphi} t_{R} + h.c. \right) \right\}, \nonumber\\ &&\hspace{0.5cm}{\mathrm{where}} \quad \varphi = \left ( \begin{array}{c} \phi_2+i\phi_1 \\ \phi_0 - i\phi_3 \end{array} \right ), \quad \tilde{\varphi}=i\tau_2\varphi, \quad \Psi = \left ( \begin{array}{c} t \\ b \end{array} \right ) , \quad \Psi_{L,R} = \frac{1\mp \gamma_{5}}{2} \Psi, \end{eqnarray} with $\phi_{i}$ being real scalar fields, $t$ and $b$ being the ``top'' and the ``bottom'' quark fields, and $\tau_2$ being the second Pauli matrix. Amongst the scalar fields, the component $\phi_{0}$ will develop a non-vanishing vacuum expectation value (vev) in the phase of spontaneously-broken $O(4)$ symmetry. The above action contains three bare couplings, $m_{0}$, $\lambda$ and $y$. Notice that we employ degenerate Yukawa couplings in this work. To discretise this action, we resort to overlap fermions that allow us to properly define the lattice version of the left- and right-handed fermions in the Yukawa terms. 
Furthermore, we follow the standard convention and write the bosonic part of the lattice action as \begin{equation} S_B[\Phi] = -\kappa \sum\limits_{x,\mu} \Phi_x^{\dagger} \left[\Phi_{x+\mu} + \Phi_{x-\mu}\right] + \sum\limits_{x} \Phi_x^{\dagger} \Phi_x + \hat{\lambda}\sum\limits_{x} \left[ \Phi_x^{\dagger} \Phi_x - 1 \right]^2 , \end{equation} where $\kappa$ is the hopping parameter, $x$ labels lattice sites, and $\mu$ specifies the space-time directions. The relationship between the lattice and the continuum bosonic fields and relevant couplings is \begin{equation} a \varphi = \sqrt{2 \kappa} \Phi_{x} \equiv \sqrt{2 \kappa} \left( \begin{array}{c} \Phi_{x}^2 + i\Phi_{x}^1 \\ \Phi_{x}^0 - i \Phi_{x}^3 \end{array} \right) ,\quad \lambda = \frac{\hat{\lambda}}{{4 \kappa^2}},\quad m_0^2 = \frac{1 - 2 \hat{\lambda} -8 \kappa}{\kappa} , \end{equation} where $a$ denotes the lattice spacing. All numerical work reported in this article has been performed with the choice of the bare Yukawa coupling, \begin{equation} \label{eq:choice_of_bare_y} y = 175/246 , \end{equation} as motivated by the physical values of the Higgs-field vev and the top-quark mass.
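The dictionary between lattice and continuum couplings above is easy to mechanise; a small sketch (the function name is ours) implements the map and the fixed bare Yukawa coupling:

```python
def continuum_couplings(kappa, lam_hat):
    """Map lattice (kappa, lambda-hat) to continuum couplings in lattice
    units: lambda = lambda-hat / (4 kappa^2),
    m0^2 = (1 - 2 lambda-hat - 8 kappa) / kappa."""
    lam = lam_hat / (4.0 * kappa**2)
    m0_sq = (1.0 - 2.0 * lam_hat - 8.0 * kappa) / kappa
    return m0_sq, lam

# bare Yukawa coupling fixed by the physical ratio m_t / v = 175/246
y = 175.0 / 246.0
```

At $\hat{\lambda}=0$ the free-field critical point $\kappa = 1/8$ gives $m_0^2 = 0$, a quick consistency check of the map.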
The addition of the dimension-six operator, $\left ( \varphi^{\dagger} \varphi \right )^{3}$, can enrich both thermal and non-thermal phase structures of the theory. Our main interest in the Higgs-Yukawa model with the operator in Eq.~(\ref{eq:dim6_op}) is the search for a viable scenario for a strong first-order thermal phase transition in the theory, while maintaining a second-order transition at zero temperature. In performing such search, we scan the bare-coupling space to identify choices of parameters such that \begin{enumerate} \item The cut-off is high enough compared to the renormalised Higgs-field vev, denoted as $\langle \varphi \rangle \equiv \langle \phi_{0} \rangle$. This means the condition, \begin{equation} \label{eq:high_cutoff} a \langle \varphi \rangle \ll 1 , \end{equation} has to be satisfied in our simulations. \item The ratio between $\langle \varphi \rangle$ and the Higgs-boson mass is compatible with experimental results, {\it i.e.}, \begin{equation} \label{eq:higgs_vev_mass_ratio} \frac{\langle \varphi \rangle}{m_{H}} \sim 2 . \end{equation} \item The thermal phase transition is first-order. \end{enumerate} To ensure that Eq.~(\ref{eq:high_cutoff}) is realised in our simulations, we have to examine the non-thermal phase structure of the model. In the phase where the $O(4)$ symmetry is spontaneously broken, this condition can be satisfied near any second-oder non-thermal phase transitions. In our previous work~\cite{Chu:2015nha}, a thorough investigation in this regard was conducted for two choices of $\lambda_{6}$ ($\lambda_{6} = 0.001$ and $\lambda_{6} = 0.1$), leading to useful information for the current study. In order to check the constraint of Eq.~(\ref{eq:higgs_vev_mass_ratio}), we determine the Higgs-boson mass from the momentum-space Higgs propagator. 
An important tool in our study of the phase structures is the constraint effective potential (CEP)~\cite{Fukuda:1974ey, ORaifeartaigh:1986axd}\footnote{An alternative, somewhat similar, tool for such studies is the extended mean-field theory~\cite{Akerlund:2015fya}.}. The CEP, $U(\hat{v})$, is a function of the Higgs-field zero mode, \begin{equation} \hat{v} = \frac{1}{V} \left | \sum_{x} \Phi^{0}_{x} \right |, \end{equation} where $V$ is the four-volume and the sum is over all lattice points. This effective potential can be calculated analytically using perturbation theory. It can also be extracted numerically through a histogramming procedure of $\hat{v}$ in Monte-Carlo simulations. Figure~\ref{fig:CEPlambda6euqls0point001} exhibits our results at $\lambda_{6} = 0.001$ and $\lambda = -0.008$. \begin{figure}[thb] \centering \includegraphics[width=7.5cm,clip]{magnetisation_from_both_CEP_and_simulations_lambda6_0point001_lambda_minus0point008.pdf} \includegraphics[width=6.5cm,clip]{finite_T_CEP_from_simulation_L20T4_lambda6_0point001_lambda_minus0point008_Kap_0point12289.pdf} \caption{Results for $\lambda_{6} = 0.001$ and $\lambda = -0.008$. In the left-hand panel, the errors are statistical from lattice computations, and symbols without errors represent results obtained in perturbative calculations using the constraint effective potential. ``Ratio'' means $\langle \varphi \rangle/m_{H}$, and ``PT'' stands for ``perturbation theory''. In this figure, the Higgs vev, $\langle \varphi \rangle$, is plotted in lattice units. The right-hand side is the constraint effective potential, obtained in the Monte-Carlo simulation at non-zero temperature, and at the four-volume $(L/a)^{3}\times (T/a) = 20^{3} \times 4$ and $\kappa = 0.12289$.} \label{fig:CEPlambda6euqls0point001} \end{figure} According to the study in Ref.~\cite{Chu:2015nha}, perturbative calculations of the CEP are reliable at these values of the couplings. 
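The histogramming procedure for extracting the CEP numerically can be sketched as follows. This is an illustrative reconstruction, not the production analysis code: the overall normalisation and the $1/V$ prefactor, which only shift and rescale the potential, are dropped, and the helper names are ours.

```python
import numpy as np

def zero_mode(phi0):
    """Magnitude of the Higgs-field zero mode, v-hat = |sum_x Phi^0_x| / V."""
    V = phi0.size
    return np.abs(phi0.sum()) / V

def constraint_effective_potential(vhat_samples, bins=50):
    """Estimate U(v-hat) from Monte-Carlo samples of the zero mode via a
    histogram: U is proportional to -log P(v-hat) up to an additive
    constant, where P is the sampled density of v-hat."""
    hist, edges = np.histogram(vhat_samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0                      # avoid log(0) in empty bins
    return centers[mask], -np.log(hist[mask])
```

A first-order transition then shows up directly as a double-well shape of the estimated $U(\hat{v})$, as in the right-hand panel of Figure~\ref{fig:CEPlambda6euqls0point001}.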
The plot in the left-hand panel demonstrates that this choice of the self couplings can lead to a second-order non-thermal and a first-order thermal phase transitions. The first-order transition is further evidenced by our numerical study of the CEP, as presented in the right-hand panel. On the other hand, results from perturbation theory show that the ratio $\langle \varphi \rangle /m_{H}$ does not satisfy the condition in Eq.~(\ref{eq:higgs_vev_mass_ratio}). To perform further search, we choose $\lambda_{6} = 0.1$ and $\lambda=-0.378$. Our work in Ref.~\cite{Chu:2015nha} shows that perturbation theory for the CEP is no longer reliable at this value of $\lambda_{6}$. Therefore we only resort to Monte-Carlo simulations on the lattice. Results for the CEP near the phase transitions are displayed in Fig.~\ref{fig:CEPlambda6euqls0point1}. \begin{figure}[thb] \centering \includegraphics[width=4.65cm,clip]{CEP_zero_T_simulation_lambda6_0point1_lambda_minus0point378_symm.pdf} \includegraphics[width=4.65cm,clip]{CEP_zero_T_simulation_lambda6_0point1_lambda_minus0point378_trans.pdf} \includegraphics[width=4.65cm,clip]{CEP_zero_T_simulation_lambda6_0point1_lambda_minus0point378_broken.pdf}\\ \includegraphics[width=4.65cm,clip]{CEP_finite_T_simulation_lambda6_0point1_lambda_minus0point378_symm.pdf} \includegraphics[width=4.65cm,clip]{CEP_finite_T_simulation_lambda6_0point1_lambda_minus0point378_trans.pdf} \includegraphics[width=4.65cm,clip]{CEP_finite_T_simulation_lambda6_0point1_lambda_minus0point378_broken.pdf} \caption{Results for $\lambda_{6} = 0.1$ and $\lambda = -0.378$. The plots on the first row are the CEP at three representative values of $\kappa$ near the non-thermal phase transition. 
The second row displays their counterparts at the finite-temperature transition.} \label{fig:CEPlambda6euqls0point1} \end{figure} Although it is not clear whether these transitions are first- or second-order, it can be concluded that the non-thermal and the thermal phase transitions exhibit almost the same properties. Hence we conclude that this choice of the bare scalar self-couplings does not lead to a viable scenario for a strong first-order thermal phase transition. Presently we are continuing this search with other choices of $\lambda_{6}$ and $\lambda$. \section{Finite-size scaling for the Higgs-Yukawa model near the Gaussian fixed point} \label{sec:FFS} In the second project, we study the Higgs-Yukawa model as described by the continuum action in Eq.~(\ref{eq:cont_action}), {\it i.e,}, we have \begin{equation} \lambda_{6} = 0 , \end{equation} and only include operators with dimension less than or equal to four in the action. The purpose of this investigation is to develop tools for exploring the scaling behaviour of the model near the Gaussian fixed point. These tools can be used to confirm the triviality of the Higgs-Yukawa theory, or to search for alternative scenarios where strong-coupling fixed points exist. Predictions from perturbation theory indicate the possible appearance of non-trivial fixed points in the Higgs-Yukawa model~\cite{Molgaard:2014mqa}. This issue was also examined with the approach of functional renormalisation group~\cite{Gies:2009hq}, where no non-Gaussian fixed point was found. Nevertheless, early lattice computations showed evidence for the opposite conclusion~\cite{Hasenfratz:1992xs}. We stress that an extensive, non-perturbative study of the Higgs-Yukawa theory from first-principle calculations is still wanting. This is in contrast to the situation of the pure-scalar models which are now widely believed to be trivial (see, {\it e.g.}, Refs.~\cite{Frohlich:1982tw, Luscher:1988uq, Hogervorst:2011zw, Siefert:2014ela}). 
In our previous attempt at addressing the issue of the triviality in the Higgs-Yukawa model, as reported in Ref.~\cite{Bulava:2012rb}, we employed the technique of finite-size scaling. The main finding in Ref.~\cite{Bulava:2012rb} is that one needs to understand the logarithmic corrections to the mean-field scaling behaviour, in order to draw concrete conclusions. In view of this, we developed a strategy, and worked out its analytic aspects, as reported in Refs.~\cite{Chu:2015jba, Chu:2016svq}. The main result in Refs.~\cite{Chu:2015jba, Chu:2016svq} is that, using the techniques established by Zinn-Justin and Brezin for scalar field theories~\cite{Brezin:1985xx}, we can derive finite-size scaling formulae for various quantities in one-loop perturbation theory near the Gaussian fixed point. It is natural to include the leading-order logarithmic corrections to the mean-field scaling law through this procedure. In this strategy, we first match correlators obtained with lattice regularisation to an on-shell renormalisation scheme, with the matching scale chosen to be the pole mass, $m_{P}$, of the scalar particle. This pole mass can be extracted by studying the scalar-field propagator on the lattice. Its relationship with the renormalised mass parameter [the renormalised counterpart of the bare coupling $m_{0}$ in Eq.~(\ref{eq:cont_action})], $m$, in the theory is \begin{eqnarray} \label{eq:pole_to_renorm_mass} && m^2 (m_P) = m_{P}^{2} \mbox{ }\mbox{ }{\mathrm{in}}\mbox{ }{\mathrm{the}}\mbox{ }{\mathrm{symmetric}}\mbox{ }{\mathrm{phase}} , \nonumber \\ && m^2 (m_P) = -\frac{1}{2} m_{P}^{2} \mbox{ }\mbox{ }{\mathrm{in}}\mbox{ }{\mathrm{the}}\mbox{ }{\mathrm{broken}}\mbox{ }{\mathrm{phase}} , \end{eqnarray} where the renormalisation scale is $m_{P}$. Notice that $m_{P}$ is the Higgs-boson pole mass in the broken phase. Under the assumption that we work closely enough to the critical surface of the Gaussian fixed point, the condition $m_{P} \ll 1/a$ is satisfied. 
We can then carry out one-loop running of the renormalised correlators from $m_{P}$ to another low-energy scale that is identified with the inverse lattice size, $L^{-1}$, which is of the same order as $m_{P}$ but with the constraint $m_{P} L > 1$. This leads to predictions of finite-size scaling behaviour of these correlators. In performing the above one-loop running, one has to solve the relevant renormalisation group equations, introducing integration constants. These constants will then be treated as fit parameters when confronting the scaling formulae with lattice numerical data. Up to the effect of wavefunction renormalisation, which results in additional $L{-}$dependence that can also be accounted for using one-loop perturbation theory, we have found that all the correlators containing only the zero mode of the scalar field can be expressed in terms of a class of functions \begin{eqnarray} \bar{\varphi}_0 (z) &=& \frac{\pi}{8} \exp \left( \frac{z^2}{32} \right) \sqrt{|z|} \left[ I_{-1/4}\left( \frac{z^2}{32} \right) - \mathrm{Sgn}(z) \, I_{1/4}\left( \frac{z^2}{32} \right) \right],\nonumber \\ \bar{\varphi}_1 (z) &=& \frac{\sqrt{\pi}}{8} \exp \left( \frac{z^2}{16} \right) \left[ 1-\mathrm{Sgn}(z) \, \mathrm{Erf} \left( \frac{|z|}{4} \right) \right], \mbox{ } \label{recursion_formula} \bar{\varphi}_{n+2} (z) = -2 \frac{\mathrm{d}}{\mathrm{d} z} \bar{\varphi}_n (z), \end{eqnarray} where the scaling variable, $z$, is \begin{equation} \label{scaling_variable} z=\sqrt{s}\, m^{2} \left( L^{-1} \right) L^{2} \left[ \lambda_{R} ( L^{-1} ) \right]^{-1/2}, \end{equation} with $\lambda_{R} (L^{-1})$ being the quartic coupling renormalised at the scale $L^{-1}$, and $s$ being the anisotropy of the four-volume, $L^{3}\times T = L^{3} \times sL$.
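The functions $\bar{\varphi}_{n}$ are straightforward to evaluate numerically. The sketch below (helper names are ours) implements $\bar{\varphi}_{0}$ and $\bar{\varphi}_{1}$ with standard special functions and obtains higher $n$ from the recursion $\bar{\varphi}_{n+2} = -2\,\mathrm{d}\bar{\varphi}_{n}/\mathrm{d}z$ via central finite differences, which is adequate for plotting, though analytic derivatives would be preferable for large $n$:

```python
import numpy as np
from scipy.special import iv, erf

def phi_bar0(z):
    """phi-bar_0(z) = (pi/8) e^{z^2/32} sqrt|z| [I_{-1/4}(z^2/32) - Sgn(z) I_{1/4}(z^2/32)]."""
    x = z * z / 32.0
    return (np.pi / 8.0) * np.exp(x) * np.sqrt(np.abs(z)) * (
        iv(-0.25, x) - np.sign(z) * iv(0.25, x))

def phi_bar1(z):
    """phi-bar_1(z) = (sqrt(pi)/8) e^{z^2/16} [1 - Sgn(z) Erf(|z|/4)]."""
    return (np.sqrt(np.pi) / 8.0) * np.exp(z * z / 16.0) * (
        1.0 - np.sign(z) * erf(np.abs(z) / 4.0))

def phi_bar(n, z, h=1e-4):
    """phi-bar_n via the recursion phi-bar_{n+2} = -2 d/dz phi-bar_n,
    using central finite differences (adequate for plotting)."""
    if n == 0:
        return phi_bar0(z)
    if n == 1:
        return phi_bar1(z)
    return -2.0 * (phi_bar(n - 2, z + h) - phi_bar(n - 2, z - h)) / (2.0 * h)
```

As a check, for $z>0$ one finds analytically $\bar{\varphi}_{3}(z) = 1/8 - (z/4)\,\bar{\varphi}_{1}(z)$, which the finite-difference recursion reproduces.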
Notice that the $L$-dependence in $\lambda_{R} (L^{-1})$ can be complicated, involving two integration constants resulting from solving the renormalisation group equations for the Yukawa and the quartic couplings~\cite{our_scaling_paper}. On the other hand, there is no unknown parameter in the renormalised mass, $m(L^{-1})$, because it is obtained from the scalar pole mass computed numerically on the lattice. To our knowledge, this is the first time the formulae in Eq.~(\ref{recursion_formula}) are derived, although similar results were obtained in the large-volume limit in Ref.~\cite{Gockeler:1992zj}. In this work, we study the Higgs-field vev, $\langle \varphi \rangle \equiv \langle \phi_{0} \rangle$, its susceptibility, $\chi$, and Binder's cumulant, $Q$. The finite-size scaling formulae for these quantities are found to be \begin{eqnarray} \label{eq:scaling_formulae} \langle \varphi \rangle &=& s^{-1/4} A^{(\varphi)} L^{-1} \left[ \lambda_{R} (L^{-1}) \right]^{-1/4} \frac{\bar{\varphi}_4(z)}{\bar{\varphi}_3(z)}, \nonumber \\ \chi &=& s L^4 \left( \langle \varphi^2 \rangle -\langle \varphi \rangle^2 \right) = s^{1/2} A^{(\chi)} L^{2} \left[ \lambda_{R} (L^{-1}) \right]^{-1/2} \left[ \frac{\bar{\varphi}_5(z)}{\bar{\varphi}_3(z)} - \left( \frac{\bar{\varphi}_4(z)}{\bar{\varphi}_3(z)} \right)^2 \right], \nonumber \\ Q &=& 1-\frac{\langle \varphi^4 \rangle}{3\langle \varphi^2 \rangle^2} = 1-\frac{\bar{\varphi}_7(z)\bar{\varphi}_3(z)}{3\bar{\varphi}_5(z)^2} , \end{eqnarray} where $\langle \varphi^{2} \rangle \equiv \langle \phi_{0}^{2} \rangle$ in the definition of the susceptibility, and $A^{(\varphi)}$ and $A^{(\chi)}$ are unknown constants resulting from integrating the renormalisation group equation for the wavefunction. As mentioned above, the formulae in Eq.~(\ref{eq:scaling_formulae}) can be complicated in the Higgs-Yukawa theory.
Therefore, as a first numerical test of our strategy, we resort to the pure-scalar $O(4)$ model, in which the one-loop $\lambda_{R} (L^{-1})$ takes the simple form, \begin{equation} \label{eq:lambda_R_O4} \lambda_{R} (L^{-1}) = \frac{\lambda_{m_{P}}}{1+\frac{6}{\pi^2}\log(m_{P}L)} , \end{equation} with only one integration constant, $\lambda_{m_{P}} \equiv \lambda_{R} (m_{P})$, to be fitted from numerical data. In this numerical test, the bare quartic coupling is chosen to be 0.15, to ensure that we are working in the perturbative regime. Throughout our analysis procedure, the scalar-particle pole mass is determined by fitting the four-momentum space propagator, and then extrapolated to the infinite-volume limit employing a ChPT-inspired formula. More details of this aspect of our work will be reported in a near-future publication~\cite{our_scaling_paper}. Figure~\ref{fig:Fit_scaling} shows the results of the fits for $\langle \varphi \rangle$, $\chi$ and $Q$ in the pure-scalar $O(4)$ model using the scaling formulae in Eqs.~(\ref{eq:scaling_formulae}) and~(\ref{eq:lambda_R_O4}). The fit parameters are $\lambda_{m_{P}}$ and $A^{(\varphi, \chi)}$. We distinguish them in the symmetric and the broken phases, since the numerical values of these parameters need not be the same in two different phases. Upon extracting these parameters from our lattice numerical result, we can then evaluate the scaling variable, $z$, for our data points, as well as remove (rescale away) the volume-dependence introduced {\it via} the effect of the wavefunction renormalisation in Eq.~(\ref{eq:scaling_formulae}). This allows us to test our finite-size scaling formulae by plotting the rescaled $\langle \varphi \rangle$, $\chi$ and $Q$ as functions of $z$. In Fig.~\ref{fig:Plot_scaling}, it is demonstrated that these rescaled quantities all lie on universal curves that are only functions of $z$.
Such behaviour, together with the good or reasonable values of the $\chi^{2}$, provides strong evidence that our formulae indeed capture the scaling properties of the theory as governed by the Gaussian fixed point. \begin{figure}[thb] \centering \includegraphics[width=4.65cm,clip]{MagFit.pdf} \includegraphics[width=4.65cm,clip]{SusFit.pdf} \includegraphics[width=4.65cm,clip]{BinFit.pdf} \caption{Results for the fit of the Higgs-field vev, its susceptibility and Binder's cumulant using the finite-size scaling formulae in the pure-scalar $O(4)$ model. The parameters $\lambda_{s,b}$ are $\lambda_{m_{P}}$ in the symmetric and the broken phases, with similar symbols indicating the relevant $A^{(\varphi)}$ and $A^{(\chi)}$ in these two phases. All dimensionful quantities are expressed in lattice units.} \label{fig:Fit_scaling} \end{figure} \begin{figure}[thb] \centering \includegraphics[width=4.65cm,clip]{MagScaling.pdf} \includegraphics[width=4.65cm,clip]{SusScaling.pdf} \includegraphics[width=4.65cm,clip]{BinScaling.pdf} \caption{Scaling behaviour of the Higgs-field vev, its susceptibility and Binder's cumulant in the pure-scalar $O(4)$ model. All dimensionful quantities are expressed in lattice units. The subscript $rs$ means these quantities are rescaled properly to remove the volume dependence introduced {\it via} the effect of wavefunction renormalisation. The parameters $\lambda_{s,b}$ are $\lambda_{m_{P}}$ in the symmetric and the broken phases, with similar symbols indicating the relevant $A^{(\varphi)}$ and $A^{(\chi)}$ in these two phases, and $\chi_{r}$ indicates the $\chi^{2}/{\mathrm{d.o.f.}}$ of the fit.} \label{fig:Plot_scaling} \end{figure} \section{Conclusion and outlook} In this article, we present two projects on lattice simulations for the Higgs-Yukawa model. The results from our study show that Lattice Field Theory can be employed for investigating non-perturbative aspects of the model.
Our first project is the search for phenomenologically viable scenarios for a strong first-order thermal phase transition in the Higgs-Yukawa theory with the addition of a dimension-six operator. This dimension-six operator can serve as a prototype of new physics. In this work, we demonstrate that such first-order transitions can indeed be observed when the cut-off scale is kept high in comparison to the Higgs-boson vev. However, we are yet to find a suitable choice of parameters that leads to a large enough Higgs-boson mass. Currently, we are performing more lattice computations to further scan the bare parameter space. The second project presented in this article is our work on the finite-size scaling behaviour of the Higgs-Yukawa model near the Gaussian fixed point. In this regard, we have derived the scaling formulae by solving the path integrals at one loop. These formulae are tested against lattice numerical data in the pure-scalar $O(4)$ model, where good agreement is found. Such formulae can be important tools for future work on confirming the triviality of the Higgs-Yukawa theory, or on searching for alternative scenarios where strong-coupling fixed points exist. \section*{Acknowledgments} DYJC and CJDL acknowledge the Taiwanese MoST grant number 105-2628-M-009-003-MY4. DYJC also thanks the Chen Cheng Foundation for financial support.
\section{Introduction} Support Vector Machines (Boser et al., 1992) are among the core machine learning techniques for binary classification. Given a large number of training samples characterized by a large number of features, a linear SVM is often the \textit{go-to} approach in many applications. A handy collection of software packages, e.g., \texttt{LIBLINEAR} (Fan et al., 2008), \texttt{Pegasos} (Shalev-Shwartz et al., 2011), \texttt{SVM$^{\texttt{perf}}$} (Joachims, 2006), \texttt{Scikit-learn} (Pedregosa et al., 2011), provides practitioners with efficient algorithms for fitting linear models to datasets. Finding optimal hyperparameters of these algorithms for model selection is, however, crucial for good performance at test time. A vanilla cross-validated grid search is the most common approach to choosing satisfactory hyperparameters. However, grid search scales exponentially with the number of hyperparameters, while the choice of the sampling scheme over the hyperparameter space impacts model performance (Bergstra \& Bengio, 2012). Linear SVMs typically require setting a single $C$ hyperparameter that uniformly weights the training loss of misclassified data. (Klatzer \& Pock, 2015) propose bi-level optimization for searching several hyperparameters of linear and kernel SVMs, and (Chu et al., 2015) use warm-start techniques to efficiently fit an SVM to large datasets, but both approaches only partially explore the hyperparameter regularization space. The algorithm proposed in (Hastie et al., 2004) builds the entire regularization path for linear and kernel SVMs that use a single, symmetric cost for misclassifying negative and positive data\footnote{Solution path algorithms are closely related to parametric programming techniques, which have traditionally been applied in optimization and control theory (Gartner et al., 2012) and have seen an independent revival in machine learning}.
The stability of the algorithm was improved in (Ong et al., 2010) by augmenting the search space of feasible event updates from a one- to a multi-dimensional hyperparameter space. In this paper, we also show that a one-dimensional path following method can arrive at a solution that is suboptimal with respect to the KKT conditions. Many problems require setting multiple hyperparameters (Karasuyama et al., 2012). They arise especially when dealing with imbalanced datasets (Japkowicz \& Stephen, 2002) and require training an SVM with two cost hyperparameters asymmetrically attributed to positive and negative examples. (Bach et al., 2006) build a pencil of one-dimensional regularization paths for asymmetric-cost SVMs. On the other hand, (Karasuyama et al., 2012) build a one-dimensional regularization path in a multidimensional hyperparameter space. In contrast to algorithms building one-dimensional paths in higher-dimensional hyperparameter spaces, we describe a solution path algorithm that explores the entire regularization path for asymmetric-cost linear SVMs. Hence, our path is a two-dimensional path in the two-dimensional hyperparameter space. Our main contributions include: \begin{itemize} \item development of the entire regularization path for the asymmetric-cost linear support vector machine (AC-LSVM) \item algorithm initialization at an arbitrary location in the $(C^+,C^-)$ hyperparameter space \item a computationally and memory-efficient algorithm amenable to local parallelization. \end{itemize} \section{Problem formulation} Our binary classification task requires a \textit{fixed} input set of $N$ training examples $\forall_{i\in \mathcal{N}_i} \mathbf{x}_i$, where $\mathbf{x}_i \in \mathbb{R}^{d\times 1}$, $\mathcal{N}_i = \lbrace i: i \in \mathbb{N}^+ \wedge i \in \langle 1,N \rangle \rbrace$, $d \in \mathbb{N}^+$, to be annotated with corresponding binary labels $y_i \in \lbrace -1, +1 \rbrace$ denoting either class.
Then, the objective is to learn a decision function $g(\mathbf{x}_t)$ that will allow its associated classifier $y_t\leftarrow\texttt{sign}\left[ g(\mathbf{x}_t)\right] $ to predict the label $y_t$ for a new sample $\mathbf{x}_t$ at test time. AC-LSVM learns the array of parameters $\boldsymbol{\beta} \in \mathbb{R}^{d\times 1}$ of the decision function $g(\mathbf{x}_i) = \boldsymbol{\beta}^T \mathbf{x}_i$ by solving the following, primal quadratic program (QP): \begin{align} \label{eq:linsvm} &\underset{\boldsymbol{\beta}, \mathbf{\xi}}{\text{argmin}}~~\dfrac{1}{2} \Vert \boldsymbol{\beta}\Vert_2^2 + C^+\sum_{i=1}^{N^+}\xi_i + C^-\sum_{i=1+N^{+}}^{N}\xi_i\\ &~~~~\text{s.t.} ~~ \forall_{i} ~ \boldsymbol{\beta}^T \mathbf{x}_i \geq 1 - \xi_i, ~~\xi_i \geq 0 \end{align} where we include the scalar-valued bias term $b$ in $\boldsymbol{\beta}^T$ and augment the data points $\mathbf{x}_i^T$ by some constant $B$: \begin{equation} \boldsymbol{\beta}^T \leftarrow \left[ \widehat{\boldsymbol{\beta}}^T, b\right], ~~~~ \mathbf{x}_i^T \leftarrow y_i \left[ \widehat{\mathbf{x}}_i^T, B\right] \label{eq:bias_augm} \end{equation} where $B$ is defined by the user (Hsieh, 2008). The above formulation should learn $\boldsymbol{\beta}$ to assign scores higher than the margin $1$ to positive examples $\left\lbrace x_i,y_i=+1\right\rbrace $ and lower than the margin $-1$ to negative examples $\left\lbrace x_i,y_i=-1\right\rbrace $. As the data may be inseparable in $\mathbb{R}^{d\times 1}$, the objective function (1) penalizes violations of these constraints (2) with slack variables $\xi_i \geq 0$, asymmetrically weighted by the constants $C^+$ and $C^-$. \paragraph{Active sets} Solving the primal QP \eqref{eq:linsvm} is often approached with the help of Lagrange multipliers $\boldsymbol{\alpha} \in \mathbb{R}^N$, where $\boldsymbol{\alpha} = \left[ \alpha_1,\ldots,\alpha_N \right]^T$, which are associated with the $N$ constraints in \eqref{eq:linsvm}.
Let $\mathbf{X}_{d \times N} = \left[ \mathbf{x}_1,\ldots,\mathbf{x}_N \right]$ and $\mathbf{1} = \left[ 1, \ldots, 1\right]^T $. Then, the dual problem takes the familiar form: \begin{align} &\underset{\boldsymbol{\alpha}}{\text{argmin}}~~\dfrac{1}{2} \boldsymbol{\alpha}^T\mathbf{X}^T\mathbf{X}\boldsymbol{\alpha} - \boldsymbol{\alpha}^T \mathbf{1}\\ &\text{s.t.} ~~ \forall_{i\in \langle 1,N^+ \rangle} ~~~~~ 0 \leq \alpha_i \leq C^+ \\ &~~~~~~\forall_{i\in \langle 1+N^+,N \rangle} ~ 0 \leq \alpha_i \leq C^- \end{align} The immediate consequence of applying the Lagrange multipliers is the expression for the LSVM parameters $\boldsymbol{\beta}=\mathbf{X}\boldsymbol{\alpha}$ yielding the decision function $g(\mathbf{x}_i) = \boldsymbol{\beta}^T \mathbf{x}_i = \mathbf{x}_i^T\mathbf{X}\boldsymbol{\alpha}$. The optimal solution $\boldsymbol{\alpha}$ of the dual problem is dictated by satisfying the usual Karush-Kuhn-Tucker (KKT) conditions. Notably, the KKT conditions can be algebraically rearranged giving rise to the following \textit{active sets}: \begin{equation} \mathcal{M}^{+/-} = \lbrace i: g(\mathbf{x}_i) = \mathbf{x}_i^T\mathbf{X}\boldsymbol{\alpha} = 1, 0 \leq \alpha_i \leq C^{+/-} \rbrace \label{eq:Mset} \end{equation} \begin{equation} \mathcal{I}^{+/-} = \lbrace i: g(\mathbf{x}_i) = \mathbf{x}_i^T\mathbf{X}\boldsymbol{\alpha} < 1, \alpha_i = C^{+/-} \rbrace \label{eq:Lset} \end{equation} \begin{equation} \mathcal{O} = \lbrace i: g(\mathbf{x}_i) = \mathbf{x}_i^T\mathbf{X}\boldsymbol{\alpha} > 1, \alpha_i = 0 \rbrace \label{eq:Rset} \end{equation} Firstly, the $\mathcal{M}\mathcal{I}\mathcal{O}$ sets \eqref{eq:Mset}$-$\eqref{eq:Rset} cluster data points $\mathbf{x}_i$ to the margin $\mathcal{M}$, to the left $\mathcal{I}$, and to the right $\mathcal{O}$ of the margin along with their associated scores $g(\mathbf{x}_i)$. 
Secondly, the sets indicate the region of the $(C^+,C^-)$ space over which the Lagrange multipliers $\boldsymbol{\alpha}$ are allowed to vary, thereby giving rise to a convex polytope in that space. \paragraph{Convex polytope} A unique region in $(C^+,C^-)$ satisfying a particular configuration of the $\mathcal{M}\mathcal{I}\mathcal{O}$ sets is bounded by a convex polytope. The first task in path exploration is thus to obtain the boundaries of the convex polytope. Following (Hastie, 2004), we obtain\footnote{A similar derivation appeared in (Bach et al., 2006).} linear inequality constraints from \eqref{eq:Mset}$-$\eqref{eq:Rset}: \begin{equation} \mathbf{h}_{\alpha_0^{+/-}}^T=\left[ -\mathbf{X}_{\mathcal{M}}^*\mathbf{X}_{\mathcal{I}_{+}} \mathbf{1}~~ -\mathbf{X}_{\mathcal{M}}^*\mathbf{X}_{\mathcal{I}_{-}} \mathbf{1}~~ (\mathbf{X}_{\mathcal{M}}^T\mathbf{X}_{\mathcal{M}})^{-1}\mathbf{1}\right] \label{eq:h_alpha0} \end{equation} \begin{equation} \mathbf{h}_{\alpha_C^+}^T= \mathbf{h}_{\alpha_0^+}^T + \left[\mathbf{1} ~~ \mathbf{0} ~~ \mathbf{0}\right]^T \label{eq:h_alphaC1} \end{equation} \begin{equation} \mathbf{h}_{\alpha_C^-}^T= \mathbf{h}_{\alpha_0^-}^T + \left[\mathbf{0} ~~ \mathbf{1} ~~ \mathbf{0}\right]^T \label{eq:h_alphaC2} \end{equation} \begin{equation} \mathbf{h}_{\mathcal{I}}^T=\left[ -\mathbf{x}_{\mathcal{I}}^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}}}~\mathbf{X}_{\mathcal{I}_+} \mathbf{1}~~ -\mathbf{x}_{\mathcal{I}}^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}}}~\mathbf{X}_{\mathcal{I}_-} \mathbf{1}~~ \mathbf{1}-\mathbf{x}_{\mathcal{I}}^T~\mathbf{X}_{\mathcal{M}}^{\star T} \mathbf{1}\right] \label{eq:h_L} \end{equation} \begin{equation} \mathbf{h}_{\mathcal{O}}^T=\left[ \mathbf{x}_{\mathcal{O}}^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}}}~\mathbf{X}_{\mathcal{I}_{+}} \mathbf{1}~~ \mathbf{x}_{\mathcal{O}}^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}}}~\mathbf{X}_{\mathcal{I}_{-}} \mathbf{1}~~ -(\mathbf{1}-\mathbf{x}_{\mathcal{O}}^T~\mathbf{X}_{\mathcal{M}}^{\star T} \mathbf{1})\right]
\label{eq:h_R} \end{equation} where $\mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}}}=\mathbf{I} - \mathbf{X}_{\mathcal{M}} \mathbf{X}_{\mathcal{M}}^*$ is the orthogonal projector onto the orthogonal complement of the subspace spanned by $\mathbf{X}_{\mathcal{M}}$ and $\mathbf{X}_{\mathcal{M}}^*=(\mathbf{X}_{\mathcal{M}}^T\mathbf{X}_{\mathcal{M}})^{-1} \mathbf{X}_{\mathcal{M}}^T$ is the Moore-Penrose pseudoinverse if $\mathbf{X}_{\mathcal{M}}$ has full column rank. Specifically, let $\mathbf{H}$ be a matrix composed of the constraints \eqref{eq:h_alpha0}$-$\eqref{eq:h_R}. \begin{equation} \mathbf{H}=\left[ \mathbf{h}_{\alpha_0^+} ~~ \mathbf{h}_{\alpha_0^-} ~~ \mathbf{h}_{\alpha_C^+} ~~ \mathbf{h}_{\alpha_C^-} ~~ \mathbf{h}_{\mathcal{I}} ~~ \mathbf{h}_{\mathcal{O}} \right] \label{eq:H} \end{equation} Then, the boundaries of the convex polytope in the $(C^+,C^-)$ space are indicated by a subset of active constraints in $\mathbf{c}^T \mathbf{H} \geq \mathbf{0}^T$, which evaluate to $0$ for some $\mathbf{c}^T = \left[C^+ ~~ C^- ~~ 1\right]$. The boundaries can be determined in linear time $\mathcal{O}(N)$ with efficient convex hull (\texttt{CH}) routines (Avis et al., 1997). Now, in order to grow the entire regularization path in $(C^+,C^-)$, the $\mathcal{M}\mathcal{I}\mathcal{O}$ sets have to be updated at the $l$-th step such that the KKT conditions will hold, thereby determining the $(l+1)$-th convex polytope. The polytope constraints $\mathbf{h}_{\alpha_0^{+/-}}$ for which $\alpha_i = 0$ indicate that a point $\mathbf{x}_i$ has to go from $\mathcal{M}$ to $\mathcal{O}$ in order to satisfy the KKT conditions. Likewise, $\mathbf{h}_{\alpha_C^{+/-}}$ for which $\alpha_i = C^{+/-}$ indicate updating the point from $\mathcal{M}$ to $\mathcal{I}$, $\mathbf{h}_{\mathcal{O}}$ indicates a point transition from $\mathcal{O}$ to $\mathcal{M}$, and $\mathbf{h}_{\mathcal{I}}$ indicates a point transition from $\mathcal{I}$ to $\mathcal{M}$.
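Checking $\mathbf{c}^T \mathbf{H} \geq \mathbf{0}^T$ and collecting the active constraints at a given $(C^+,C^-)$ can be sketched as follows (a minimal illustration; the columns of $\mathbf{H}$ are passed as triples and the function name is ours):

```python
def polytope_check(H_cols, Cp, Cm, tol=1e-9):
    """Evaluate c^T H for c = [C+, C-, 1]^T over the constraint columns of H,
    Eq. (eq:H). Returns (feasible, active): whether (C+, C-) lies inside the
    polytope, and the indices of active constraints with c^T h_k = 0."""
    vals = [Cp * h[0] + Cm * h[1] + h[2] for h in H_cols]
    feasible = all(v >= -tol for v in vals)
    active = [k for k, v in enumerate(vals) if abs(v) <= tol]
    return feasible, active
```

With this representation, a boundary of the polytope is simply a constraint column that is active along a segment of the $(C^+,C^-)$ plane, and a vertex is a point where at least two columns are active simultaneously.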
These set transitions are usually called events, while the activated constraints are called breakpoints. Therefore, at a breakpoint, we determine the event for the $i$-th point by a function that updates the $\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l$ set from $l$ to $l+1$ as: \begin{equation} u(i,t,l) = \begin{cases} \mathcal{M}_{l+1}= \mathcal{M}_{l} \setminus i \wedge \mathcal{O}_{l+1}= \mathcal{O}_{l} \cup i,& t = 0 \\ \mathcal{O}_{l+1}= \mathcal{O}_{l} \setminus i \wedge \mathcal{M}_{l+1}= \mathcal{M}_{l} \cup i,& t = 0 \\ \mathcal{M}_{l+1}= \mathcal{M}_{l} \setminus i \wedge \mathcal{I}_{l+1}= \mathcal{I}_{l} \cup i,& t = 1 \\ \mathcal{I}_{l+1}= \mathcal{I}_{l} \setminus i \wedge \mathcal{M}_{l+1}= \mathcal{M}_{l} \cup i,& t = 1 \\ \end{cases} \label{eq:update} \end{equation} where the direction of the transition depends on the current $\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l$ set configuration. Following (Hastie et al., 2004), our algorithm requires the $\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l$ set to proceed to the next set by \eqref{eq:update}. However, unlike (Hastie et al., 2004), the constraints \eqref{eq:h_alpha0}$-$\eqref{eq:h_R} are independent of the previous computations of $\alpha_i$ and $\boldsymbol{\beta}$. This has several implications. Firstly, our algorithm does not accumulate potential numerical errors in these parameters. Secondly, the algorithm can be initialized from an arbitrary location in the $(C^+,C^-)$ space. \section{Proposed method} \label{sec:algo} The evolution of $\boldsymbol{\alpha}$ is continuous and piecewise linear in the $(C^+,C^-)$ space (Bach et al., 2006). An immediate consequence is that the active constraints have to flip during the set update \eqref{eq:update}. \paragraph{Flipping constraints} Suppose we have a single event that $i \in \mathcal{M}_l^+$ goes to $\mathcal{I}_{l+1}^+$, thereby forcing the set update rule $u(i,t=1,l)$.
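To make the set bookkeeping concrete before rearranging the constraints, here is a minimal sketch (our own simplified representation, with points stored as lists and labels and bias already absorbed as in \eqref{eq:bias_augm}) of the KKT partition \eqref{eq:Mset}$-$\eqref{eq:Rset} and of a single update $u(i,t,l)$ from \eqref{eq:update}:

```python
def mio_sets(X, alpha, C, tol=1e-8):
    """KKT partition of Eqs. (eq:Mset)-(eq:Rset). X: list of N points, each a
    list of d floats; alpha: N dual values; C: N per-point costs (C+ or C-)."""
    d = len(X[0])
    beta = [sum(a * x[r] for a, x in zip(alpha, X)) for r in range(d)]  # beta = X alpha
    g = [sum(br * xr for br, xr in zip(beta, x)) for x in X]            # g(x_i) = beta^T x_i
    M = {i for i, gi in enumerate(g) if abs(gi - 1.0) <= tol}
    I = {i for i, gi in enumerate(g) if gi < 1.0 - tol and abs(alpha[i] - C[i]) <= tol}
    O = {i for i, gi in enumerate(g) if gi > 1.0 + tol and abs(alpha[i]) <= tol}
    return M, I, O

def update_sets(M, I, O, i, t):
    """Event update u(i, t, l) of Eq. (eq:update): move point i between the
    margin set M and either I (t = 1) or O (t = 0), in the direction dictated
    by where i currently resides."""
    M, I, O = set(M), set(I), set(O)
    other = I if t == 1 else O
    if i in M:
        M.remove(i)
        other.add(i)
    else:
        other.remove(i)
        M.add(i)
    return M, I, O
```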
Then, the $i$-th constraint in \eqref{eq:h_alpha0} can be rearranged using the \textit{matrix inversion lemma} with respect to $\mathbf{x}_i$ as: \begin{equation} \begin{bmatrix} \mathbf{x}_i^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}_l \setminus i}} \mathbf{X}_{\mathcal{I}^{+}_{l}} \mathbf{1} + \mathbf{x}_i^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}_l \setminus i}} \mathbf{x}_i\\ \mathbf{x}_i^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}_l \setminus i}} \mathbf{X}_{\mathcal{I}^{-}_{l}} \mathbf{1} \\ -1+\mathbf{x}_i^T~\mathbf{X}_{\mathcal{M}_l \setminus i}^{\star T} \mathbf{1} \end{bmatrix} \label{eq:h_alpha0_inv} \end{equation} where $\mathbf{x}_i^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}_l \setminus i}} \mathbf{x}_i \geq 0 $. This constraint is equal to its corresponding, sign-flipped counterpart in \eqref{eq:h_L} at $l+1$ as: \begin{equation} \begin{bmatrix} -\mathbf{x}_i^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}_{l+1}}} (\mathbf{X}_{\mathcal{I}^{+}_{l}} \mathbf{1} + \mathbf{x}_i)\\ -\mathbf{x}_i^T \mathbf{P}_{\mathbf{X}^{\perp}_{\mathcal{M}_{l+1}}} \mathbf{X}_{\mathcal{I}^{-}_{l+1}} \mathbf{1} \\ 1-\mathbf{x}_i^T~\mathbf{X}_{\mathcal{M}_{l+1}}^{\star T} \mathbf{1} \end{bmatrix} \label{eq:h_L_inv} \end{equation} with $\mathcal{I}^{-}_{l} = \mathcal{I}^{-}_{l+1}$. The same argument holds for update type $t=0$. Furthermore, (Hastie et al., 2004) express the evolution of $\boldsymbol{\alpha}$ with a single cost parameter $C$. This case is equivalent to $C^-=C^+$, which yields the identity line in the $(C^+,C^-)$ space. (Ong et al., 2010) observe that one-dimensional path exploration over a line can lead to incorrect results and resort to searching for alternative set updates over a higher-dimensional hyperparameter space. Notably, when two points hit the margin at the same time at $l$, the matrix updated by both points $\mathbf{X}_{\mathcal{M}_{l+1}}^T\mathbf{X}_{\mathcal{M}_{l+1}}$ does not necessarily become singular.
However, the $\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l$ sets can be incorrectly updated. We formalize this by introducing the notion of \textit{joint events} that may co-occur at some point on the line. In our setting of the $2$D path exploration, this is always the case when a vertex of a polytope coincides with a line in the $(C^+,C^-)$ space. \paragraph{Joint events} At the vertices of the convex polytope at least two events occur concurrently. In this case, the $\mathcal{M}\mathcal{I}\mathcal{O}$ set can be updated twice from $l$ to $l+1$. Hence, this vertex calls $3$ different updates of the $\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l$ set, \textit{i.e.} two single updates for both edges and a joint update. Note that the piecewise continuous $2$D path of $\boldsymbol{\alpha}$ also implies piecewise continuous $1$D path of the events. Moreover, as each vertex is surrounded by $4$ different $\mathcal{M}\mathcal{I}\mathcal{O}$ sets, two events at the vertex have to satisfy the following \textit{vertex loop property}: \begin{align} \begin{array}{cccc} &\multicolumn{1}{r}{\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l} & {\xrightarrow{\quad u(i_1,t_1,l) }} & \multicolumn{1}{c}{\mathcal{M}_{l+1}\mathcal{I}_{l+1}\mathcal{O}_{l+1}} \\ &&&\\ \multicolumn{2}{l} {\text{{\scriptsize{$u(i_2,t_2,l+3)$}} } \uparrow \quad } & \multicolumn{2}{r}{\quad \quad \quad\downarrow \text{\scriptsize{$u(i_2,t_2,l+1)$}}} \\ &&&\\ &\multicolumn{1}{l}{\mathcal{M}_{l+3}\mathcal{I}_{l+3}\mathcal{O}_{l+3} } & \xleftarrow{u(i_1,t_1,l+2)} &\multicolumn{1}{c}{\mathcal{M}_{l+2}\mathcal{I}_{l+2}\mathcal{O}_{l+2}} \end{array} \label{eq:vloop} \end{align} stating that the events have to flip at the vertex such that a sequence of up to $3$ single updates reaches each $\mathcal{M}_{l_1}\mathcal{I}_{l_1}\mathcal{O}_{l_1}$ set from any other $\mathcal{M}_{l_2}\mathcal{I}_{l_2}\mathcal{O}_{l_2}$ set associated with that vertex. \subsection{AC-LSVMPath algorithm} We now describe our algorithm. 
We represent the entire regularization path for the AC-LSVM by the set of vertices $\mathcal{V}$, edges $\mathcal{E}$, and facets $\mathcal{F}$. Let $j,k,l \in \mathbb{N}^+$. Then: \begin{equation} \mathcal{V}=\lbrace v_j : v_j = (C^+,C^-), \langle k \rangle\rbrace \end{equation} \begin{equation} \mathcal{E}=\lbrace e_k : e_k = (i,t), \langle j \rangle, \langle l \rangle\rbrace \end{equation} \begin{equation} \mathcal{F}=\lbrace f_l : f_l = (\mathcal{M}_l \mathcal{I}_l \mathcal{O}_l), \langle k \rangle\rbrace \end{equation} where $(\cdot)$ and $\langle \cdot \rangle$ denote attribute and connectivity, respectively, of each element in the $\mathcal{V}\mathcal{E}\mathcal{F}$ sets. \paragraph{Ordering} The $\mathcal{V}\mathcal{E}\mathcal{F}$ sets admit the following connectivity structure. Let $\mathcal{V} = \lbrace \mathcal{V}_m \rbrace_{m=1}^M$, $\mathcal{E} = \lbrace \mathcal{E}_m \rbrace_{m=1}^M$, and $\mathcal{F} = \lbrace \mathcal{F}_m \rbrace_{m=1}^M$ be partitioned into $M$ subsets where $v_{j \in j_m} \in \mathcal{V}_m$, $e_{k \in k_m} \in \mathcal{E}_m$, and $f_{l \in l_m} \in \mathcal{F}_m$. The subsets admit a sequential ordering, where $j_m < j_{m+1}$, $k_m < k_{m+1}$, and $l_m < l_{m+1}$, such that edges $e_{k} \in \mathcal{E}_m$ determine the adjacency of facet pairs\footnote{This is also known as \textit{facet-to-facet} property in parametric programming literature (Spjotvold, 2008)} $f_{l_{1,2}} \in \mathcal{F}_m$ or $f_{l_{1}} \in \mathcal{F}_m \wedge f_{l_{2}} \in \mathcal{F}_{m+1}$ while vertices $v_{j} \in \mathcal{V}_m$ determine the intersection of edges $e_{k} \in \mathcal{E}_m$ or $e_k \in \mathcal{E}_m \wedge e_{k} \in \mathcal{E}_{m+1}$. In effect, our algorithm orders facets into a \textit{layer}-like structure. 
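One possible in-memory representation of the $\mathcal{V}\mathcal{E}\mathcal{F}$ sets, with attributes $(\cdot)$ and connectivities $\langle\cdot\rangle$ stored as plain fields (a sketch only; the field names are ours, and the open/closed predicate anticipates the definitions given next):

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Tuple

@dataclass
class Vertex:                        # v_j = (C+, C-), <k>
    C: Tuple[float, float]           # attribute: location in the (C+, C-) space
    edges: List[int] = field(default_factory=list)      # connectivity <k>

@dataclass
class Edge:                          # e_k = (i, t), <j>, <l>
    event: Tuple[int, int]           # attribute: (point index i, event type t)
    vertices: List[int] = field(default_factory=list)   # connectivity <j>
    facets: List[int] = field(default_factory=list)     # connectivity <l>

@dataclass
class Facet:                         # f_l = (M I O), <k>
    M: FrozenSet[int]
    I: FrozenSet[int]
    O: FrozenSet[int]
    edges: List[int] = field(default_factory=list)      # connectivity <k>

def is_open_edge(e):
    # an edge connected to exactly one vertex is open
    return len(e.vertices) == 1
```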
We define a vertex $v_j$ as an \textit{open vertex} $v_j^o$ when $\vert v_j^o\langle k_m \rangle \vert = 2$ or $\vert v_j^o\langle k_m \rangle \vert = 3$, where $\vert \cdot \vert$ is set cardinality, provided that the vertex does not lie on either $C$ axis. We define a \textit{closed vertex} when $\vert v_j^c\langle k_m \rangle \vert = 4$. When $\vert v_j^c\langle k_m \rangle \vert = 3$, the vertex is also closed if it lies on either $C$ axis. Similarly, an edge $e_k$ is called an \textit{open edge} $e_k^{o}$ when $\vert e_k^o\langle j_m \rangle \vert = 1$ and a \textit{closed edge} $e_k^{c}$ when $\vert e_k^c\langle j_m \rangle \vert = 2$. Then, a facet $f_l$ is called an \textit{open facet} when the first and last edges in $f_l\langle k_m \rangle$ are unequal; otherwise it is a \textit{closed facet}. Finally, $v_j, e_k, f_l$ are called either \textit{single} $v_j^s, e_k^s, f_l^s$ when they are unique, or \textit{replicated} $v_j^r, e_k^r, f_l^r$ otherwise. We propose to explore the AC-LSVM regularization path in a sequence of layers $m=1,\ldots,M$. We require that the $\mathcal{F}_1(\mathcal{M}\mathcal{I}\mathcal{O})$ facet attributes are given at the beginning, where $\vert\mathcal{F}_1 \vert \geq 1 $, $\vert \mathcal{E}_1 \vert \geq 0$, and $\vert \mathcal{V}_1 \vert \geq 0$. The $m$-th layer is then composed of four successive steps. \textbf{1. Closing open edges and facets} - \texttt{CEF} For each $\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l$ set, which is attributed to $f_l \in \mathcal{F}_m$, the algorithm separately calls a convex hull routine \texttt{CH}. The routine uses \eqref{eq:H} to compute linear inequality constraints $H_l \geq 0$ creating a convex polytope at $l$. The ordered set of edges $f_l \langle k_m \rangle $, where the first and last edges are open, serves as initial, mandatory constraints in the \texttt{CH} routine. After completion, the routine augments the set $\mathcal{E}_m$ by closed edges $e_{k_m}^c$ and the set $\mathcal{V}_m$ by open vertices $v_{j_m}^o$.
\textbf{2. Merging closed edges and open vertices} - \texttt{MEV} As the \texttt{CH} routine runs for each facet $f_l \in \mathcal{F}_m$ separately, some edges $e_{k_m}^c$ and/or vertices $v_{j_m}^o$ may be replicated, thereby yielding $e_{k_m}^c = e_{k_m}^{rc} \cup e_{k_m}^{sc}$ and $v_{j_m}^o = v_{j_m}^{ro} \cup v_{j_m}^{so}$. Notably, a vertex $v_{j_1} \in v_{j_m}^o$ is replicated when another vertex $v_{j_2} \in v_{j_m}^o$ (or other vertices) has the same attribute, i.e. $v_{j_1}(C^+,C^-) = v_{j_2}(C^+,C^-)$. However, we argue that merging vertices into a single vertex based on the distance between them in some metric space may affect the numerical stability of the algorithm. On the other hand, a closed edge $e_{k_1} \in e_{k_m}^c$ is replicated by another closed edge $e_{k_2} \in e_{k_m}^c$, when both edges connect a pair of vertices that are both replicated. Replicated edges cannot merge solely by comparing their event attributes $e_k(i,t)$. As they are piecewise continuous in the $(C^+,C^-)$ space, they are not unique. Similarly to vertices though, the edges might be merged by numerically comparing their associated linear constraints, which are only sign-flipped versions of each other, as shown in \eqref{eq:h_alpha0_inv}$-$\eqref{eq:h_L_inv}. However, this again raises concerns about the potential numeric instability of such a merging procedure. In view of this, we propose a sequential merging procedure that leverages $f_{l \in l_m}(\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l)$ sets, which are both unique in $\mathcal{F}_m$ and discrete. To this end, we first introduce two functions that act on attributes and connectivity of objects $f_l, e_k, v_j$. 
Let $I(\mathcal{Q}_1,\mathcal{Q}_2,p)$ be an indexing function that groups $\mathcal{Q}_2$ by assigning labels from the set $\mathcal{N}_{q \in \mathcal{Q}} = \left\lbrace q : q \in \mathcal{Q} \right\rbrace$ to $q \in \mathcal{Q}_2$ based on $p(q)$ indexed over $q \in \mathcal{Q}_1 \cup \mathcal{Q}_2$: \begin{equation} I(\mathcal{Q}_1,\mathcal{Q}_2,p): \bigcup_{q \in \mathcal{Q}_1 \cup \mathcal{Q}_2} p(q) \rightarrow \mathcal{N}_{q \in \mathcal{Q}_1}^{\vert \mathcal{Q}_2 \vert \times 1} \end{equation} Let $R(\mathcal{Q}_1,\mathcal{Q}_2,p)$ then be a relabeling function that assigns labels from $\mathcal{N}(q_2)$ to $p(q_1)$ indexed over $q_1 \in \mathcal{Q}_1$: \begin{equation} R(\mathcal{Q}_1,\mathcal{Q}_2,p): \forall_{q_1 \in \mathcal{Q}_1} \forall_{q_2 \in \mathcal{Q}_2} ~~\mathcal{N}(q_2) \rightarrow p(q_1) \end{equation} The algorithm commences the merging procedure by populating the initially empty set $\mathcal{F}_{m+1}$ with facets $f_{l \in l_{m+1}}$ that are obtained by separately updating the facets $f_{l \in l_m}$ via \eqref{eq:update} through the events attributed to each edge $e_k \in e_{k_m}^c$. Note, however, that replicated edges $e_{k_{m}}^{rc}$ will produce facet attributes in $\mathcal{F}_{m+1}$ that replicate facet attributes from the preceding layer. Moreover, single edges $e_{k_{m}}^{sc}$ may produce replicated facet attributes in the current layer as well. Hence, we have that $f_{l \in l_{m+1}}^r(\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l) \subseteq f_{l \in l_{m}}^s(\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l) \cup f_{l \in l_{m+1}}^s(\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l)$. In order to group facets into single and replicated $\mathcal{M}\mathcal{I}\mathcal{O}$ sets, the algorithm indexes facet attributes with $I(l_m \cup l_{m+1}^s, l_{m+1}^s \cup l_{m+1}^r, f_l(\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l))$ based on their equality.
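The grouping performed by $I(\mathcal{Q}_1,\mathcal{Q}_2,p)$ can be sketched as a dictionary-based first-occurrence labelling, assuming the attribute values $p(q)$ are hashable (e.g. the $\mathcal{M}\mathcal{I}\mathcal{O}$ sets encoded as frozensets of point indices):

```python
def index_by_attribute(Q1, Q2, p):
    """Sketch of the indexing function I(Q1, Q2, p): assign to each object in
    Q2 the id of the first object over Q1 u Q2 carrying an equal attribute
    value p(q), so replicated objects receive the label of their single
    representative while genuinely new objects keep their own id."""
    first = {}
    for q in list(Q1) + list(Q2):
        first.setdefault(p(q), q)   # remember the first carrier of each attribute
    return {q: first[p(q)] for q in Q2}
```

Under this sketch, any object in $\mathcal{Q}_2$ whose label differs from its own id is replicated, and objects carrying their own id are single.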
Then, relabeling the facet-edge connectivities of edges $e_{k_m}^c$ with $R(k_m^c, l_{m+1}^s \cup l_{m+1}^r, e_k\langle l \rangle)$ allows for indexing the connectivities with $I(k_m^{sc},k_m^{sc} \cup k_m^{rc},e_k\langle l \rangle)$, also based on their equality. Having indicated single and replicated edges, the algorithm relabels the edge-vertex connectivities of vertices $v_{j_m}^o$ with $R(j_m^o,k_{m}^c, v_j\langle k \rangle)$. Note that there are two general vertex replication schemes. Vertices, which indicate the occurrence of joint events, can be replicated when two facets connect (i) through an edge (i.e., the vertices share a replicated edge) or (ii) through a vertex (i.e., the vertices connect only to their respective, single edges). At this point of the merging procedure, a vertex $v_{j_m}^o$ is associated indirectly through edges $e_{k_m}^{sc}$ with two facets $f_l \in f_{l \in l_{m}}^s \cup f_{l \in l_{m+1}}^s$, when it lies on either $C$ axis, or three facets $f_l \in f_{l \in l_{m}}^s \cup f_{l_{1,2} \in l_{m+1}}^s$, otherwise. Two vertices lying on, say, the $C^+$ axis are replicated when their respective edges share a facet and are attributed events that refer to the same negative point $\lbrace x_i, y_i=-1\rbrace$, but have opposite event types, i.e. $t_1 = \neg~t_2$. This condition follows directly from \eqref{eq:h_alpha0}$-$\eqref{eq:h_L}, as $\alpha_i=C^-=0$ at $(C^+,0)$. Conversely, when vertices lie on the $C^-$ axis, they are replicated when their edges have events referring to the same positive point, $\lbrace x_i, y_i=+1\rbrace$. Then, two vertices lying on neither $C$ axis are replicated when their respective edges are associated with two common facets and equal joint events. Hence, vertices are indexed with $I(j_m^{so},j_m^{so} \cup j_m^{ro}, v_j\langle k \rangle\rightarrow e_k \langle l \rangle \cup e_k(i,t))$ based on the equality of edge-facet connectivities along with edge attributes.
Alternatively, should the joint events be unique, the vertices could then be merged solely by comparing these events. Determining whether joint events are indeed unique, i.e. whether two $1$D event paths can intersect only at a single point in the entire $(C^+,C^-)$ space, is interesting future work. Having grouped facets $f_{l \in l_{m+1}}$, edges $e_{k \in k_{m}}^c$, and vertices $v_{j \in j_{m}}^o$, the algorithm can now merge facet-edge connectivities $f_l \langle k \rangle$ of the replicated facets, prune replicated edges $e_k^r$, and merge vertex-edge connectivities $v_j \langle k \rangle$ of the replicated vertices. Being left with only single facets $f_{l \in l_{m+1}}^s$, edges $e_{k \in k_{m}}^{sc}$, and vertices $v_{j \in j_{m}}^{so}$, the algorithm relabels with $R(k_m^{so} \cup k_m^{sc}, j_m^o, e_k\langle j \rangle)$ the edge-vertex connectivities of single, open edges $e_{k_m}^{so}$ and single, closed edges $e_{k_m}^{sc}$ intersecting with $e_{k_m}^{so}$. Finally, the algorithm relabels with $R(l_m \cup l_{m+1}, k_m^c, f_l\langle k \rangle)$ the facet-edge connectivities of facets from the preceding and current layer. \textbf{3. Closing open vertices} - \texttt{CV} In this step, the algorithm closes the vertices $v_{j_m}^{so}$ by attaching open edges. Specifically, by exploiting the piecewise continuity of events at vertices, the algorithm populates the $\mathcal{E}_{m+1}$ set with open edges $e_{k_{m+1}}^o$, such that a vertex $v_{j_m}^{sc}$ now connects either to (i) $3$ edges, when it lies on, say, the $C^+$ axis and connects to an event edge with an associated positive point, or to (ii) $4$ edges, when it lies on neither axis.
Using the vertex loop property \eqref{eq:vloop}, the algorithm then augments the set $\mathcal{F}_{m+1} = \mathcal{F}_{m+1} \cup f_{l_{m+1}}^a$ with additional facets $f_{l_{m+1}}^a$ such that the closed vertex $v_{j_m}^{sc}$ now connects indirectly through its edges $e_k \in e_{k_m}^o \cup e_{k_m}^c \cup e_{k_{m+1}}^o$ to facets $f_l \in f_{l_m} \cup f_{l_{m+1}}$ and additionally to up to $1$ new facet $f_l \in f_{l_{m+1}}^a$. There are several advantages to generating open edges. Firstly, augmenting the initialization list of edges during the call to the \texttt{CH} routine reduces the number of points for processing, with computational load $\mathcal{O}(N-\vert f_l\langle k_m \rangle\vert)$. Secondly, each vertex generates up to two single open edges. However, there can be two single vertices that generate the same open edge, thereby merging the $1$D path of an event. In this case, both open edges are merged into a single closed edge and the facet is closed without processing it with the \texttt{CH} routine. This merging step is described next. \textbf{4. Merging open edges and facets} - \texttt{MEF} As open edges and their facets, which are generated in step $3$, can also be single or replicated, step $4$ proceeds similarly to step $2$. The algorithm indexes additional facets with $I(l_{m+1}^s \cup l_{m+1}^{sa}, l_{m+1}^{sa} \cup l_{m+1}^{ra}, f_l(\mathcal{M}_l\mathcal{I}_l\mathcal{O}_l))$ and relabels the open edge connectivities with $R(k_{m+1}^o, l_{m+1}^{sa} \cup l_{m+1}^{ra}, e_k\langle l \rangle)$. Then, the algorithm indexes these connectivities with $I(k_{m+1}^{so},k_{m+1}^{so} \cup k_{m+1}^{ro},e_k\langle l \rangle)$ and merges edge-vertex $e_k \langle j \rangle$ and facet-edge $f_l \langle k \rangle$ connectivities. Finally, the algorithm relabels with $R(l_{m+1} \cup l_{m+1}^a, k_{m+1}^o, f_l\langle k \rangle)$ the facet-edge connectivity of all facets in $\mathcal{F}_{m+1}$ and returns to step $1$.
\paragraph{Termination} The algorithm terminates at the $M$-th layer, in which the \texttt{CH} routine for the $\mathcal{M}\mathcal{I}\mathcal{O}$ sets of all facets in $\mathcal{F}_M$, where $\vert\mathcal{F}_M\vert \geq1$, produces \textit{open polytopes} in the $(C^+,C^-)$ space. \paragraph{Special cases} As mentioned in (Hastie et al., 2004), two special case events may occur after closing facet $f_l$ and updating the sets \eqref{eq:update}. When (i) replicated data points $\left\lbrace \mathbf{x}_i,y_i\right\rbrace$ exist in the dataset and enter the margin, or (ii) single points simultaneously project onto the margin such that $\vert \mathcal{M}_{l+1}\vert > d$, then the matrix $\mathbf{X}_{\mathcal{M}_{l+1}}^T\mathbf{X}_{\mathcal{M}_{l+1}}$ becomes singular and thus not invertible, yielding non-unique paths for some $\alpha_i$. In contrast to (Hastie et al., 2004), note that case (ii) is likely to occur in the considered LSVM formulation (1)$-$\eqref{eq:bias_augm}, as the positive and negative data points span a subspace of dimension up to $d-1$ after being affine transformed, yielding e.g. parameters $\boldsymbol{\beta}= \left[ \mathbf{0} ,1/B \right]$. In the context of our algorithm, both cases (i)$-$(ii) are detected at $f_l$ when the matrix $\widehat{\mathbf{H}}_{n \times 3}$ formed of $n \geq 3$ constraints \eqref{eq:h_alpha0}$-$\eqref{eq:h_L} associated with these points either has $\mathrm{rank}(\widehat{\mathbf{H}}) = 1$, producing multiple events at an edge denoted by constraints that are identical up to a positive scale factor, or has $\mathrm{rank}(\widehat{\mathbf{H}}) = 2$, producing multiple joint events at a vertex denoted by constraints that intersect at the same point. We propose the following procedure for handling both special cases.
Namely, when some facets $f_l \in f_{l_m}$ close with edges having multiple events or with vertices having multiple joint events that would lead to cases (i)$-$(ii), the algorithm moves to step $2$, as it can obtain facet updates in these special cases. However, it skips step $3$ for these particular facets. While we empirically observed that such vertices close with $2$ edges having multiple joint events, it is an open issue how to generate open edges in this case. Instead, during successive layers, step $2$ augments the list of facets, edges, and vertices with the ones associated with (i)$-$(ii), indexing and relabeling them with respect to successive objects that become replicated in further layers. In effect, our algorithm ``goes around'' these special case facets and attempts to close them by computing adjacent facets. However, the path for $\boldsymbol{\alpha}$ in these cases is not unique and remains unexplored. Nevertheless, our experiments suggest that the unexplored regions occupy a relatively small area in the hyperparameter space. When the algorithm starts with all points in $\mathcal{I}$ and either case (i)$-$(ii) occurs at the initial layers, the exploration of the path may halt\footnote{The path exploration will halt when these cases occur as the data points that are the first to enter the margin; and more generally, when $1$D multiple event paths referring to these cases go to both $C$ axes, instead of to one $C$ axis and to infinity.} due to the piecewise continuity of the (multiple) events. A workaround can then be to run a regular LSVM solver at a yet unexplored point $(C^+,C^-)$, obtain the $\mathcal{M}\mathcal{I}\mathcal{O}$ sets, and extract a convex polytope to restart the algorithm. Our future work will focus on improving our tactics for special cases. We posit that one worthy challenge in this regard is to efficiently build the entire regularization path in the $N$-dimensional hyperparameter space.
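The rank-based detection of the special cases can be sketched numerically as follows. The rank criteria follow the text, while the example constraint rows are made up for illustration and are not taken from the paper.

```python
import numpy as np

# Sketch of detecting special cases (i)-(ii) from the rank of the constraint
# matrix H-hat (n >= 3 stacked constraint rows over (C+, C-, 1)).
def special_case(H):
    r = np.linalg.matrix_rank(H)
    if r == 1:
        return "multiple events at an edge"         # rows equal up to scale
    if r == 2:
        return "multiple joint events at a vertex"  # rows meet in one point
    return None                                     # generic facet

# Rows identical up to a positive scale factor -> rank 1.
H_edge = np.array([[1.0, -2.0, 3.0],
                   [2.0, -4.0, 6.0],
                   [0.5, -1.0, 1.5]])

# Rows all passing through the point (C+, C-) = (1, 1) -> rank 2.
H_vertex = np.array([[1.0, 0.0, -1.0],
                     [0.0, 1.0, -1.0],
                     [1.0, 1.0, -2.0]])
```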
\paragraph{Computational complexity} Let $\vert \widehat{\mathcal{M}} \vert$ be the average size of a margin set over all $l$, and let $\vert \widehat{\mathcal{F}} \vert$ be the average size of $\mathcal{F}_m$. Then, the complexity of our algorithm is $\mathcal{O}(M \vert \widehat{\mathcal{F}} \vert (N + \vert \widehat{\mathcal{M}} \vert^3))$, where $\vert \widehat{\mathcal{M}} \vert^3$ is the number of computations for solving \eqref{eq:h_alpha0} (without inverse updating/downdating (Hastie et al., 2004)) and we hide a constant factor of $2$ related to the convex hull computation. However, note that typically we have $\vert \widehat{\mathcal{M}} \vert \ll N$. In addition, we \textit{empirically} observed that $M \approx N$ (but cf. (Gartner et al., 2012)), so that the number of layers approximates the dataset size. Our algorithm is sequential in $M$ but parallel in $\vert \widehat{\mathcal{F}} \vert$. Therefore, the complexity of a parallel implementation of the algorithm can drop to $\mathcal{O}(N^2 + N\vert \widehat{\mathcal{M}} \vert^3)$. Finally, at each facet it is necessary to evaluate \eqref{eq:h_alpha0}, but the evaluation of the constraints \eqref{eq:h_alphaC1}$-$\eqref{eq:h_R} can then be computed in parallel as well. While this would further reduce the computational burden, memory transfer remains the main bottleneck on modern computer architectures. Our algorithm partitions the sets $\mathcal{F}$, $\mathcal{E}$, $\mathcal{V}$ into a \textit{layer}-like structure such that our two-step merging procedure requires access to objects only from layer pairs $m$ and $m+1$ and not to preceding layers\footnote{When the algorithm encounters special cases at $m$, it requires access to the $f_l$, $e_k$, $v_j$ objects related to these cases even after $m_1 > m+1$ layers, but the number of these objects is typically small.}.
In effect, the algorithm only requires $\mathcal{O}(\vert \widehat{\mathcal{F}} \vert + \vert \widehat{\mathcal{E}} \vert + \vert \widehat{\mathcal{V}} \vert)$ memory to cache the sets at $m$, where $\vert \widehat{\mathcal{E}} \vert$ and $\vert \widehat{\mathcal{V}} \vert$ are the average edge and vertex subset sizes of $\mathcal{E}_m$ and $\mathcal{V}_m$, respectively. \begin{figure*}[h] \vspace{.3in} \centerline{\fbox{ \includegraphics[width=\linewidth, height=8.0cm]{im/Exper1234} }} \vspace{.3in} \caption{Visualization of the entire regularization path for the AC-LSVM. Experiments (i)-(iv) are shown in counterclockwise order. In (i) we show a portion of the entire regularization path, where red dots indicate facet means. In (ii) we show intertwined layers of facets up to some layer $m$ (blue and green) and 1D event paths of several points (cyan - event $t=0$ and red - event $t=1$). In (iii) and (iv) we show the entire regularization path.} \end{figure*} \begin{figure*}[h] \vspace{.3in} \centerline{\fbox{ \includegraphics[width=\linewidth, height=8.0cm]{im/RegP12} }} \vspace{.3in} \caption{Visualization of the entire regularization paths for several Lagrange multipliers $\alpha_i$ for experiment (iv).} \end{figure*} \section{Numerical experiments} In this section, we evaluate our AC-LSVMPath algorithm described in section \ref{sec:algo}. We conduct three numerical experiments exploring the two-dimensional path of asymmetric-cost LSVMs on synthetic data. We generate samples from a Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$ for (i) a small dataset with a large number of features $N \ll d$, (ii) a large dataset with a small number of features $N \gg d$, and (iii) a moderately sized dataset with a moderate number of features $N = d$. We also build the two-dimensional regularization path when the input features are sparse (iv).
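The synthetic data generation for settings (i)-(iii) can be sketched as follows. The class means and the variance are our own assumptions for illustration; the paper only states that samples are drawn from $\mathcal{N}(\mu, \sigma^2)$.

```python
import numpy as np

# Sketch of generating N labeled samples with d Gaussian features and
# asymmetric class counts (N_pos positives, N - N_pos negatives).
def make_dataset(N_pos, N, d, mu=1.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    N_neg = N - N_pos
    X = np.vstack([rng.normal(+mu, sigma, size=(N_pos, d)),   # positive class
                   rng.normal(-mu, sigma, size=(N_neg, d))])  # negative class
    y = np.concatenate([np.ones(N_pos), -np.ones(N_neg)])
    return X, y

# Setting (ii): a large dataset with a small number of features (N >> d).
X, y = make_dataset(N_pos=50, N=100, d=2)
```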
We use an off-the-shelf algorithm for training the flexible part mixtures model (Yang \& Ramanan, 2013), which uses positive examples from the Parse dataset and negative examples from INRIA's Person dataset (Dalal \& Triggs, 2005). The model is iteratively trained with hundreds of positive examples and millions of hard-mined negative examples. We keep the original settings. The hyperparameters are set to $C^+=0.004$ and $C^-=0.002$ to compensate for imbalanced training (Akbani et al., 2004). For experiments (i)$-$(iv), we have the following settings: (i) $d=10^6$, $N^+=25$, $N=50$, (ii) $d=2$, $N^+=50$, $N=100$, (iii) $d=10^2$, $N^+=50$, $N=100$, (iv) $d\approx 10^5$, $N^+=10$, $N=50$. We set $B=0.01$ in all experiments, as in (Yang \& Ramanan, 2013). The results are shown in Fig. 1 and Fig. 2. \section{Conclusions} This work proposed an algorithm that explores the entire regularization path of asymmetric-cost linear support vector machines. The events of data points concurrently projecting onto the margin are usually considered as special cases when building one-dimensional regularization paths, while they happen repeatedly in the two-dimensional setting. To this end, we introduced the notion of joint events and illustrated the set update scheme with the vertex loop property to efficiently exploit their occurrence during our iterative path exploration. Furthermore, as we structure the path into successive layers of sets, our algorithm has modest memory requirements and can be locally parallelized at each layer of the regularization path. Finally, we posit that extending our algorithm to the entire $N$-dimensional regularization path would facilitate processing of further special cases. \subsubsection*{References} Japkowicz, N., Stephen, S. (2002). The class imbalance problem: A systematic study. {\it Intelligent Data Analysis}, 6(5), 429-449 Bergstra, J., Bengio, Y. (2012). Random search for hyper-parameter optimization. {\it The Journal of Machine Learning Research}, 13(1), 281-305 Bach, F.
R., Heckerman, D., Horvitz, E. (2006). Considering cost asymmetry in learning classifiers. {\it The Journal of Machine Learning Research}, 1713-1741 Hastie, T., Rosset, S., Tibshirani, R., Zhu, J. (2004). The entire regularization path for the support vector machine. {\it The Journal of Machine Learning Research}, 1391-1415 Hsieh, C. J., Chang, K. W., Lin, C. J., Keerthi, S. S., Sundararajan, S. (2008). A dual coordinate descent method for large-scale linear SVM. In {\it Proceedings of the 25th International Conference on Machine Learning}, 408-415 Spjotvold, J. (2008). Parametric programming in control theory. {\it PhD Thesis} Fan, R. E., Chang, K. W., Hsieh, C. J., Wang, X. R., Lin, C. J. (2008). LIBLINEAR: A library for large linear classification. {\it The Journal of Machine Learning Research}, 9, 1871-1874 Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A. (2011). Pegasos: Primal estimated sub-gradient solver for SVM. {\it Mathematical Programming}, 127(1), 3-30 Joachims, T. (2006). Training linear SVMs in linear time. In {\it ACM SIGKDD International Conference on Knowledge Discovery and Data Mining}, 217-226 Gartner, B., Jaggi, M., Maria, C. (2012). An exponential lower bound on the complexity of regularization paths. {\it Journal of Computational Geometry}, 3(1), 168-195 Karasuyama, M., Harada, N., Sugiyama, M., Takeuchi, I. (2012). Multi-parametric solution-path algorithm for instance-weighted support vector machines. {\it Machine Learning}, 88(3), 297-330 Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., et al., Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. {\it The Journal of Machine Learning Research}, 12, 2825-2830 Klatzer, T., Pock, T. (2015). Continuous Hyper-parameter Learning for Support Vector Machines. In {\it Computer Vision Winter Workshop} Chu, B. Y., Ho, C. H., Tsai, C. H., Lin, C. Y., Lin, C. J. (2015). Warm Start for Parameter Selection of Linear Classifiers.
{\it ACM SIGKDD International Conference on Knowledge Discovery and Data Mining}, 149-158 Snoek, J., Larochelle, H., Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In {\it Advances in Neural Information Processing Systems}, 2951-2959 Ong, C. J., Shao, S., Yang, J. (2010). An improved algorithm for the solution of the regularization path of support vector machine. {\it IEEE Transactions on Neural Networks}, 21(3), 451-462 Akbani, R., Kwek, S., Japkowicz, N. (2004). Applying support vector machines to imbalanced datasets. In {\it Machine Learning: ECML}, 39-50 Dalal, N., Triggs, B. (2005). Histograms of oriented gradients for human detection. In {\it Computer Vision and Pattern Recognition}, 886-893 Boser, B. E., Guyon, I. M., Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In {\it Proceedings of the fifth annual workshop on Computational learning theory}, 144-152 Avis, D., Bremner, D., Seidel, R. (1997). How good are convex hull algorithms? {\it Computational Geometry}, 7(5), 265-301 \end{document}
\section{Introduction} Neural architecture search deals with the task of identifying the best neural network architecture for a given prediction or classification problem. Treating the network architecture as a black box that can be evaluated by first training and then testing it, a systematic trial-and-error process has been demonstrated to discover architectures that outperform the best human-designed ones for a number of standard benchmark datasets~\citep{zoph2017neural}. As each single black-box evaluation of an architecture involves training a deep neural network until convergence, it is attractive to use a faster heuristic to predict the performance of candidate architectures. The ability to still find a near-optimal solution depends on the correlation between the predicted and the actual performance of the candidates. A remarkably simple and attractive performance estimation method has been recently proposed by~\cite{pham2018efficient}. In their method, called \emph{Efficient Neural Architecture Search (ENAS)}, the whole search space of possible architectures utilizes a common set of shared weights. These weights are trained simultaneously with a controller learning to generate architectures. As the ENAS search process has been demonstrated to find very good architectures within very reasonable computational constraints, we use it as the basis for our studies, whose goal is to further reduce the search time. As a starting point, we evaluate the architectures generated by ENAS over the course of the search process, in order to receive an indication of when the \emph{exact} architecture performance converges. In this work, the exact performance of an architecture is defined as its test accuracy after having trained its weights for a sufficient number of epochs on the full training set. As a very surprising result, we did not find any measurable improvement in terms of the exact performance over the course of the search.
In other words, the architectures generated by the ENAS controller at the beginning of the search performed just as well as the architectures generated after hundreds of controller training epochs. Our observation has been consistently confirmed across runs of the experiment for the CIFAR-10 and CIFAR-100 datasets \citep{Krizhevsky2009learning} with the two distinct search spaces described by \cite{pham2018efficient}. While our results are inconsistent with the ablation studies presented in \citep{pham2018efficient}, they are in line with a recent series of findings about the search performance of ENAS \citep{Li2019random, Adam2019understanding}. Our results support the hypothesis that excellent neural architectures can be obtained by a one-shot approach, and that, with an appropriate design space and probability distribution, one-shot methods are a serious alternative to ENAS-like search methods. We remark that ENAS itself could already be seen as a one-shot method, because during its search it never evaluates the exact performance of any candidate architecture before making its final decision. Our work demonstrates that skipping the ENAS search process, which takes several GPU-hours, does not diminish the quality of the result. A brief overview of related work is given in the subsequent section. In Section~\ref{sec:background} we outline the ENAS method, which is re-evaluated in this work, using the experimental setup described in Section~\ref{sec:experiments}. The results of our experiments are also presented and discussed in Section~\ref{sec:experiments}, and we conclude this work in Section~\ref{sec:conclusion}.
The most obvious drawback of the original NAS method is its resource requirement, as the learning process involves generating a large number of deep neural networks, whose performance is evaluated by fully training them with the given training dataset and evaluating them with the test set, which takes several hours or even days per candidate architecture. There is a large body of follow-up work studying attempts to speed up the search. Much attention has been attracted by~\cite{pham2018efficient}, who have proposed the Efficient Neural Architecture Search method based on the idea of designing the search space such that each candidate network is a subgraph of a joint network. The weights of the joint network are trained while an architecture generator (controller) is trained to select the best sub-network for the given dataset. Using this approach, the authors were able to improve the time complexity from thousands of GPU days to less than a single GPU day for the CIFAR-10 dataset while reaching a performance similar to the original NAS results. Numerous alternative methods to ENAS have been presented as well. \cite{Suganuma2018genetic} are among the authors who have provided evolutionary methods for finding neural architectures. The search method described by \cite{Liu2018progressive} starts with evaluating simpler architectures and gradually makes them more complex, using a surrogate function to predict the performance of candidate architectures. \cite{Liu2008darts} employ a continuous relaxation of the network selection problem, which enables the search procedure to use gradients to guide the search. A continuous search space is also employed by \cite{Luo2018neural}. Here a model is trained to estimate the performance of an architecture based on a representation in a continuous space, providing a gradient based on which the architecture can be improved. A comprehensive survey of neural architecture search methods has been published by \cite{Elsken2019neural}.
One-shot approaches for finding neural architectures have been described in several recent works. \cite{Brock2018one} have proposed to have the weights of candidate neural architectures generated by a hypernetwork that is trained only once. A simpler mechanism inspired by the ENAS approach has been presented by \cite{Bender2018understanding}. Here the joint network containing all weights is first trained, and then the validation accuracy using these weights is used to predict the performance of candidate architectures. Instead of using a reinforcement learning based controller, the candidate architectures are generated by random search. The latter two approaches apply search techniques which enumerate many possible architectures, but they can be considered one-shot methods in the sense that only one architecture is fully trained. In this sense, ENAS itself can also be considered a one-shot method. In contrast, our results support the hypothesis that a single architecture generated at random from an appropriate search space - thus applying one-shot learning in a stricter sense - has competitive performance to architectures resulting from search processes. Our findings are consistent with the very recent results of \cite{Adam2019understanding}, who have analyzed the behavior of the ENAS controller during search and found that its hidden state does not encode any properties of the generated architecture, providing an explanation for the observation made by \cite{Li2019random} that ENAS does not perform better than simple random architecture search. Our work complements that line of research by providing an experimental study of the learning progress of ENAS, demonstrating that good performance is already achieved before any search has taken place.
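The strict one-shot hypothesis can be sketched as follows: sample a single architecture uniformly at random from an ENAS-like macro search space and fully train only that network. The operation set mirrors the macro search space of ENAS, while the sampler itself and the skip-connection probability of $0.5$ are our own illustrative assumptions, not the ENAS controller.

```python
import random

# ENAS-style macro search space operations (per layer).
OPS = ["conv3x3", "conv5x5", "sep_conv3x3", "sep_conv5x5",
       "avg_pool", "max_pool"]

def sample_macro_architecture(num_layers, seed=None):
    """Return a list of (operation, skip_connections) pairs, one per layer."""
    rng = random.Random(seed)
    arch = []
    for layer in range(num_layers):
        op = rng.choice(OPS)
        # each lower layer is an independent coin flip for a skip connection
        skips = [l for l in range(layer) if rng.random() < 0.5]
        arch.append((op, skips))
    return arch

# One-shot selection: this single sample is the architecture to be trained.
arch = sample_macro_architecture(num_layers=12, seed=0)
```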
This is also consistent with recent results of \cite{Xie2019Exploring}, who have experimented with neural network generators for image recognition and have found that, without search, the randomly generated networks achieve state-of-the-art performance. \section{Background} \label{sec:background} In this work we study the learning progress of the ENAS procedure as presented by~\cite{pham2018efficient}, systematically evaluating early and late architectures that are generated during the search process. Candidate architectures, so-called \emph{child models}, are generated by ENAS using a \emph{controller}, which is modeled as an LSTM with 400 hidden units. The architecture search space is designed such that all child models are subgraphs of a \emph{joint network}; each child model is defined as a subgraph containing a subset of the joint network's nodes and edges. Importantly, the trainable weights are shared across the child models. Whenever a child model has been generated by the controller, the weights contained in it are updated using the gradient from a single training minibatch. The performance of the child model on the validation set is then used as a feedback signal to improve the controller. For the CIFAR-10 dataset, \cite{pham2018efficient} experiment with two distinct search spaces. In the \emph{macro search space} the controller samples architectures layer by layer, deciding about the layer type (3x3 or 5x5 convolution, 3x3 or 5x5 depthwise-separable convolution, average pooling or max pooling) and the subset of lower layers to form skip connections with. The set of available operations as well as the number of layers is defined by hand. In the \emph{micro search space}, the network macro-architecture in terms of interconnected convolutional and reduction \emph{cells} is defined by hand, while the controller decides about the computations that happen inside each cell.
Inside a convolutional cell, there is a fixed number of nodes, where each node can be connected with two previous nodes chosen by the controller and performs an operation among identity, 3x3 or 5x5 separable convolution, average pooling, or max pooling. Reduction cells are generated using the same search space, but with the nodes applying their operations with a stride of 2. The cell output is obtained by concatenating the output of all nodes that have not been selected as inputs to other nodes. \cite{pham2018efficient} also design a dedicated search space for designing recurrent neural networks for language processing tasks. We do not include this search space in our experiments, but note that similar findings can be expected in light of recent studies of the ENAS search process in \citep{Li2019random, Adam2019understanding}. \section{Experiments} \label{sec:experiments} For our experiments we use the original implementation of ENAS as provided by \cite{pham2018efficient} at \texttt{https://github.com/melodyguan/enas/}. This repository provides an implementation ready to be used to search for a CNN for the CIFAR-10 dataset as well as an implementation of ENAS search for an RNN architecture. To search for a CNN architecture for CIFAR-100, we only modified the input data pipeline. The existing implementation also included no support for realizing average pooling and max pooling operations in the final training of architectures generated using the macro search space. We have added this feature in our version, which is available at \texttt{https://github.com/prabhant/ENAS-cifar100}. The experiments were executed on an Nvidia 1080Ti GPU. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{scatterplot.pdf} \caption{ \label{fig:scatterplot} Scatter plot showing the validation accuracy using the shared weights (x-axis) and the test accuracy after re-training the architecture for 260 epochs (y-axis).
Each point represents a neural architecture generated by the ENAS controller after having been trained for 155 epochs using the macro search space. } \end{center} \end{figure} \begin{table}[h] \centering \caption{ \label{tab:cifar} Performance of architectures generated by ENAS for CIFAR-10 and CIFAR-100. For the macro search space, the controller was trained for 310 epochs. For the micro search space, it was trained for 150 epochs. Before testing, all architectures were re-trained for 310 epochs. Table cells with multiple numbers correspond to multiple separate runs of the same experiment. } \begin{tabular}{|c|c||c|c|} \hline data & search space & initial accuracy & final accuracy \\ \hline CIFAR-10 & macro & {96.69\% / 95.80\% / 95.71\%} & {95.38\% / 95.81\% / 95.70\%} \\ CIFAR-10 & micro & {94.56\%} & {94.59\%} \\ CIFAR-100 & macro & {77.12\% / 80.75\% / 80.55\%} & {80.07\% / 80.39\% / 80.47\%} \\ CIFAR-100 & micro & {79.59\% / 77.69\%} & {80.20\% / 80.50\%} \\ \hline \end{tabular} \end{table} In Figure~\ref{fig:scatterplot} we take a snapshot of the macro search space controller for CIFAR-100 at training epoch 155 (i.e. after half of the training has been done) and compare the validation accuracies (used for further controller training) of some generated architectures with the test accuracies of the same architectures after having re-trained them. We find that in this sample there is no positive correlation between these two metrics, and thus we cannot expect that the validation accuracy with shared weights represents a useful training feedback for controller improvement. Table~\ref{tab:cifar} shows the test accuracy after having re-trained architectures generated by the ENAS controller before and after the architecture search procedure. In each experiment run, we have trained the controller using the ENAS procedure.
In each row we compare the pair of architectures achieving the best validation accuracy at the first and after the last epoch of controller training, but we perform the comparison in terms of the test accuracy after re-training. Note that the validation accuracy in the first epoch is computed with the randomly initialized shared weights, while the validation accuracy after the last epoch is computed using the shared weights that have been trained during the search process. The way of selecting the architecture after the last epoch corresponds to the final architecture selection method proposed by~\cite{pham2018efficient}. Most of the experiments were done for the CIFAR-100 dataset, which is more challenging than CIFAR-10 and thus should have a higher potential to show benefits of architecture search. However, the numbers in the table show that the improvement is below 1\% in the majority of tested configurations. Thus, the search process, which takes several GPU-hours, can often be skipped without sacrificing any accuracy. \section{Conclusion} \label{sec:conclusion} We have evaluated the architecture improvement achieved during the search progress of ENAS, and we found that high-quality architectures are already found before any training of the controller and shared weights has started. Future work will include a reproduction of our results in more experiment runs, using further datasets and including recurrent neural network design. Furthermore, a systematic study of the prediction quality of the validation accuracy (when using tentative weights as in ENAS) will shed more light on the effectiveness of this speed-up technique in general. Finally, the methodology applied in this work can be utilized to evaluate the learning progress of other neural architecture search methods as well. \bibliographystyle{plainnat}
\subsection{Overview} First, we denote TKG as $G_{KG} = \{(v, r, u, t)\} \subseteq V_{KG} \times \set{R}_{KG} \times \set{V}_{KG} \times \set{T}_{KG}$, where $\set{V}_{KG}$ is a set of entities, $\set{R}_{KG}$ is a set of relations, and $\set{T}_{KG}$ is a set of timestamps associated with the relations. Given the graph $G_{KG}$ and query $\vect{q} = (v_{query}, r_{query}, ?, t_{query})$, TKG completion is formulated as predicting $u \in \set{V}_{KG}$ most probable to fill in the query. We also denote $\IN{i}$ as a set of incoming neighbor nodes of $v_i$, i.e. nodes posessing edges toward $v_i$, and $\ON{i}$ as a set of outgoing neighbor nodes of $v_i$. Figure \ref{fig:architecture} illustrates an overview of \method's reasoning process. \method consists of 4 sub-modules: Preliminary GNN (PGNN), Subgraph GNN (SGNN), Subgraph Sampling, and Attention Flow. Given $G_{KG}$ and the query, in the encoding phase, \method first uses PGNN to create preliminary node feature $\vect{h}_i$ for all entities in the $G_{KG}$. Next, at each decoding step $t = 1, \cdots, T$, \method iteratively samples a subgraph $G^{(t)}_{sub}$ from $G_{KG}$, that consists only of query-relevant nodes and edges. For each entity $i$ included in $G^{(t)}_{sub}$, SGNN creates query-dependent node feature $\vect{g}_i^{(t)}$, incorporating the query vector $\vect{q}$ and the preliminary feature $\vect{h}_i$. As noted in \citet{dpmpn}, this additional encoding of induced subgraph not only helps in extracting only the query-relevant information from the original graph, but also scales up GNN by input-dependent pruning of irrelevant edges. Using both $\vect{h}_i$ and $\vect{g}_i^{(t)}$, Attention Flow computes transition probability to propagate the attention value of each node to its reachable neighbor nodes, creating the next step's node attention distribution $a_i^{(t+1)}$. 
After the final propagation step $T$, the answer to the input query is inferred as the node with the highest attention value $a^{(T)}_i$. \subsection{Preliminary GNN} Given $G_{KG}$, \method first randomly initializes the node feature $\vect{h}_i$ for all $v_i \in \set{V}_{KG}$. Then, to contextualize the representation of entities in $G_{KG}$ with the graph structure, each layer in PGNN updates the node feature $\vect{h}_i$ of entity $v_i$ by attentively aggregating $v_i$'s neighborhood information. The important intuition underlying PGNN is that the temporal displacement between the timestamps of the query and each event is integral to capturing the time-related dynamics of each entity. Therefore, for each timestamp $t_e$ of edge $e$ in $G_{KG}$, we separately encode the sign and magnitude of the temporal displacement $\triangle t_{e} = t_e - t_{query}$. Concretely, PGNN computes the message $\vect{m}_{ij}$ from entity $v_i$ to $v_j$ as follows: \begin{equation} \label{eq:1} \begin{split} &\vect{m}_{ij} = \mat{W}_{\lambda(\triangle t_{ij})}(\vect{h}_i + \gvect{\rho}_{ij} + \gvect{\tau}_{|\triangle t_{ij}|}) \\ &\text{where} \,\, \mat{W}_{\lambda(\triangle t_{ij})} = \begin{cases} \mat{W}_{past} & \text{if } \triangle t_{ij} < 0\\ \mat{W}_{present} & \text{if } \triangle t_{ij} = 0\\ \mat{W}_{future} & \text{if } \triangle t_{ij} > 0 \end{cases} \end{split} \end{equation} $\gvect{\rho}_{ij}$ is a relation-specific parameter associated with $r_{ij}$, which denotes the relation that connects $v_i$ to $v_j$. In addition to the entity and relation embeddings, we learn a discretized embedding of the magnitude of the temporal displacement, i.e., $\gvect{\tau}_{|\triangle t_{ij}|}$. We take account of the sign of the displacement by applying a sign-specific weight matrix to each event.
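As an illustration, the sign- and magnitude-aware message of Eq.~\ref{eq:1} can be sketched in a few lines of NumPy. This is our own toy sketch, not the released implementation; the dimension, parameter names, and random initialization are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy embedding dimension

# Sign-specific weight matrices W_past, W_present, W_future of Eq. (1)
W = {sign: rng.normal(scale=0.1, size=(D, D))
     for sign in ("past", "present", "future")}

def message(h_i, rho_ij, tau, dt):
    """m_ij = W_{lambda(dt)} (h_i + rho_ij + tau_{|dt|}).

    The sign of the displacement dt selects the weight matrix, while its
    magnitude |dt| indexes the displacement embedding tau (passed in here).
    """
    sign = "past" if dt < 0 else ("present" if dt == 0 else "future")
    return W[sign] @ (h_i + rho_ij + tau)

h_i = rng.normal(size=D)   # preliminary feature of entity v_i
rho = rng.normal(size=D)   # relation embedding rho_ij
tau = rng.normal(size=D)   # embedding of the displacement magnitude |dt|
m_past = message(h_i, rho, tau, dt=-3)  # event 3 steps before the query
m_now = message(h_i, rho, tau, dt=0)    # event at the query timestamp
```

The same inputs produce different messages for different displacement signs, which is exactly the asymmetry between past, present, and future events that PGNN exploits.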
Next, the new node feature $\vect{h}'_j$ is computed as an attention-weighted sum of all incoming messages to $v_j$: \begin{equation} \label{eq:2} \begin{split} &\vect{h}'_j = \sum_{i \in \IN{j}} a_{ij} \vect{m}_{ij}, \\ &a_{ij} = \texttt{softmax}_i(\alpha_{ij}),\\ &\alpha_{ij} = \texttt{LeakyReLU}\left((\mat{W}_Q \vect{h}_j)^{\top} (\mat{W}_K \vect{m}_{ij})\right) \end{split} \end{equation} The attention values are computed by applying softmax over all incoming edges of $v_j$, with $\vect{h}_j$ as query and $\vect{m}_{ij}$ as key. In addition, we extend this attentive aggregation scheme to multi-headed attention, which helps to stabilize the learning process and jointly attend to different representation subspaces~\cite{gat}. Hence our message aggregation scheme is modified to: \begin{equation} \label{eq:3} \vect{h}'_j = \overset{K}{\underset{k=1}{\big{\|}}} \sum_{i \in \IN{j}} a^{k}_{ij} \vect{m}^{k}_{ij} \end{equation} concatenating the independently aggregated neighborhood features from each attention head, where $K$ is a hyperparameter indicating the number of attention heads. \subsection{Subgraph GNN} At each decoding step $t$, SGNN updates the node feature $\vect{g}_i$ for all entities included in the induced subgraph of the current step, $G^{(t)}_{sub}$. We present the detailed procedure of subgraph sampling in an upcoming section. Essentially, SGNN not only contextualizes $\vect{g}_i$ with the respective incoming edges, but also infuses the query context vector into the entity representation. First, the subgraph features for entities newly added to the subgraph are initialized to their corresponding preliminary features $\vect{h}_j$. Next, SGNN performs message propagation, using the same message computation and aggregation scheme as PGNN (Eq.
\ref{eq:1}-\ref{eq:3}), but with separate parameters: \begin{equation} \vect{\widetilde{g}}'_j = \overset{K}{\underset{k=1}{\big{\|}}} \sum_{i \in \IN{j}} a^{k}_{ij} \vect{m}^{k}_{ij} \end{equation} This creates an intermediate node feature $\vect{\widetilde{g}}'_j$. The intermediate features are then concatenated with the query context vector $\vect{q}$, and linear-transformed back to the node embedding dimension, creating the new feature $\vect{g}'_j$: \begin{equation} \vect{g}'_j = \mat{W}_g[\vect{\widetilde{g}}'_j\, \| \,\vect{q}] \end{equation} \begin{equation} \vect{q} \! = \! \mat{W}_c \! \times \! \texttt{LeakyReLU}\!\left(\mat{W}_{present}(\vect{h}_{query} \! + \! \gvect{\rho}_{query})\right) \end{equation} where $\vect{h}_{query}$ is the preliminary feature of $v_{query}$, and $\gvect{\rho}_{query}$ is the relation parameter for $r_{query}$. \subsection{Attention Flow} \method models path traversal with the soft approximation of attention flow, iteratively propagating the attention value of each node to its outgoing neighbor nodes. The node attention is initialized to 1 for $v_{query}$ and to 0 for all other entities. Thereafter, at each step $t$, Attention Flow propagates edge attentions $\widetilde{a}^{(t)}_{ij}$ and aggregates them into node attentions $a^{(t)}_j$: \begin{equation*} \begin{split} &\widetilde{a}^{(t+1)}_{ij} = \T^{(t+1)}_{ij} a^{(t)}_i, \,\, a^{(t+1)}_j = \sum_{i \in \IN{j}}\widetilde{a}^{(t+1)}_{ij}\\ &s.t. \,\, \sum_i a^{(t+1)}_i = 1, \,\, \sum_{ij} \widetilde{a}^{(t+1)}_{ij} = 1 \end{split} \end{equation*} The key here is the transition probability $\T_{ij}$.
In this work, we define $\T_{ij}$ by applying softmax over the sum of two scoring terms, regarding both the preliminary feature $\vect{h}$ and the subgraph feature $\vect{g}$: \begin{equation*} \begin{split} \T_{ij}^{(t+1)} = \texttt{softmax}_j(&score(\vect{g}^{(t)}_i, \vect{g}^{(t)}_j, \gvect{\rho}_{ij}, \gvect{\tau}_{|\triangle t_{ij}|})\, +\\ &score(\vect{g}^{(t)}_i, \vect{h}_j, \gvect{\rho}_{ij}, \gvect{\tau}_{|\triangle t_{ij}|})),\\ \end{split} \end{equation*} \begin{equation*} score(\vect{i}, \vect{j}, \vect{r}, \gvect{\tau}) = \sigma\left((\mat{W}_Q \vect{i})^{\top}(\mat{W}_K (\vect{j} + \vect{r} + \gvect{\tau}))\right) \end{equation*} The first scoring term accounts only for the subgraph features $\vect{g}_i$ and $\vect{g}_j$, giving an additional score to entities that are already included in the subgraph (note that $\vect{g}_i$ is initialized to zero for entities not yet included in the subgraph). Meanwhile, the second scoring term could be regarded as an \textit{exploring term}, as it relatively prefers entities not included in the subgraph, by modeling the interaction between $\vect{g}_i$ and $\vect{h}_j$. As \method consists only of differentiable operations, the path traversal of \method can be trained end-to-end by directly supervising the node attention distribution after $T$ propagation steps. We train \method to maximize the log probability of the answer entity $u_{label}$ at step $T$. \begin{equation*} \mathcal{L} = -\log{a^{(T)}_{u_{label}}} \end{equation*} \input{table/performance} \subsection{Subgraph Sampling} The decoding process of \method depends on the iterative sampling of a query-relevant subgraph $G^{(t)}_{sub}$. The initial subgraph $G^{(0)}_{sub}$ before the first propagation step contains only one node, $v_{query}$. As the propagation step proceeds, edges with high relevance to the input query, measured by the size of the attention value assigned to the edges, are added to the previous step's subgraph.
Specifically, the subgraph sampling at step $t$ proceeds as follows: \begin{itemize} \item Find the $x$ \textit{core nodes} with the highest (nonzero) node attention values $a^{(t-1)}_i$ at the previous step. \item For each of the core nodes, sample $y$ edges that originate from the node. \item Among the $x \cdot y$ sampled edges, find the $z$ edges with the highest edge attention values $\widetilde{a}^{(t)}_{ij}$ at the current step. \item Add the $z$ edges to $G^{(t-1)}_{sub}$. \end{itemize} In this module, $x, y, z$ are hyperparameters. Intuitively, we only collect `important' events that originate from `important' entities (core nodes) with respect to the query, while keeping the subgraph size under control (edge sampling). We provide an illustrative example of subgraph sampling in Appendix A. Note that although edge sampling brings stochasticity into \method's inference, this does not hinder the end-to-end training of the model. Since the sampling is not parameterized and we only use the node features $\vect{g}$ from the sampled subgraph, gradients back-propagate through $\vect{g}$, not through the sampling operation. \subsection{Datasets} We evaluate our proposed method on three benchmark datasets for TKG completion: ICEWS14, ICEWS05-15, and Wikidata11k, all suggested by \citet{ta-distmult}. ICEWS14 and ICEWS05-15 are subsets of ICEWS\footnote{https://dataverse.harvard.edu/dataverse/icews}, containing socio-political events from 2014, and from 2005 to 2015, respectively. Wikidata11k is a subset of Wikidata\footnote{https://www.wikidata.org/wiki/Wikidata:Main\_Page}, possessing facts with various timestamps that span from A.D. 20 to 2020. All facts in Wikidata11k are annotated with an additional temporal modifier, \textit{occurSince} or \textit{occurUntil}. For the sake of consistency and simplicity, we follow \citet{ta-distmult} and merge the modifiers into predicates rather than modeling them in a separate dimension (e.g.
(A, loves, B, since, 2020) transforms to (A, loves-since, B, 2020)). Detailed statistics of the three datasets are provided in Appendix B. \subsection{Baselines} We compare \method with representative static KG embeddings, TransE and DistMult, and with state-of-the-art embedding-based baselines on temporal KGs, including ConT \cite{cont}, TTransE \cite{ttranse}, HyTE \cite{hyte}, TA \cite{ta-distmult}, DE-SimplE \cite{de-simple}, and T(NT)ComplEx \cite{tcomplex}. \subsection{Experimental Setting} For each dataset, we create $G_{KG}$ with only the triples in the train set. We add inverse edges to $G_{KG}$ for proper path-based inference on reciprocal relations. Also, we follow \citet{dpmpn} in adding self-loops to all entities in the graph, allowing the model to stay at the `answer node' if it reaches an optimal entity in $t < T$ steps. To measure \method's performance in head entity prediction, we add reciprocal triples to the valid and test sets as well. For all datasets, we find through empirical evaluation that setting the maximal path length $T = 3$ results in the best performance. Following previous works, we fix the dimension of the entity / relation / displacement embeddings to 100. Except for the embedding size, we search for the best set of hyperparameters using grid-based search, choosing the value with the best \textit{Hits@1} while all other hyperparameters are fixed. We implement \method with PyTorch and DGL, and plan to make the code publicly available. We provide further implementation details, including hyperparameter search bounds and the best configuration, in Appendix C. \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{FIG/distribution.png} \caption{Edge attention distribution over different temporal displacements. Note that the y-axis is in log scale. For better visualization, we cut the range of temporal displacement to $[-100, 100]$ in all charts.
{\method learns to effectively attend to events with a specific temporal distance from the query time, depending on the relation type of the query.} } \label{fig:disp_distribution} \end{figure*} \subsection{Results} Table \ref{tab:performance} shows the overall evaluation results of \method against the baseline methods. Along with Hits@1, 3, 10, we report the MRR of the ground-truth entity, compared to the baseline methods. As seen in the table, \method outperforms the baseline models on all benchmarks, improving relative performance by up to 10\% consistently over all metrics. We find through an ablation study that the improvements mainly come from resolving the two shortcomings of pre-existing TKG embeddings which we indicated in earlier sections: the absence of 1) a neighborhood encoding scheme, and 2) path-based inference with scalable subgraph sampling. \input{table/ablation} \input{table/generalize} \subsubsection{Model Variants and Ablation} To further examine the effect of our proposed method in solving the aforementioned two problems, we conduct an ablation study as shown in Table~\ref{tab:ablation}. First, we consider \method without temporal displacement encoding. In both PGNN and SGNN, we do not consider the sign and magnitude of the temporal displacement, and simply learn the embedding of each timestamp as is. While computing the message $\vect{m}_{ij}$, the two GNNs simply add the time embedding $\gvect{\tau}_{t_{ij}}$ to $\vect{h}_i + \gvect{\rho}_{ij}$. No sign-specific weight is multiplied, and all edges are linear-transformed with the same weight matrix. In this setting, \method's performance on ICEWS14 degrades by about 30\% in Hits@1, and is similar to that of TA-DistMult in Table~\ref{tab:performance}. The result attests to the importance of temporal displacement for neighborhood encoding in temporal KGs.
Next, to analyze the effect of subgraph sampling on the overall performance, we resort to a new configuration of \method where no subgraph sampling is applied, and SGNN creates the node feature $\vect{g}_i$ for all entities in $G_{KG}$. Here, \method's performance degrades slightly, by about 1\% in all metrics. This implies the importance of subgraph sampling in pruning query-irrelevant edges, helping \method concentrate on the plausible substructure of the input graph. Finally, we analyze the effect of PGNN by training \method with different numbers of PGNN layers. We find that \method trained with a 1-layer PGNN performs superior to the model without PGNN by an absolute gain of 1\% in MRR. However, adding more layers to PGNN gives only a minor gain, or even degrades the test set accuracy, mainly owing to early overfitting on the triples in the train set. \subsubsection{Generalizing to Unseen Timestamps} We conduct an additional study that measures the performance of \method in generalizing to queries with unseen timestamps. Following \citet{de-simple}, we modify ICEWS14 by including in the train set all triples except those on the $5^{th}$, $15^{th}$, and $25^{th}$ days of each month, and creating the valid and test sets using only the excluded triples. The performance of \method against the strongest baselines on this dataset is presented in Table~\ref{tab:generalize}. In this setting, DE-SimplE and T(NT)ComplEx perform much more similarly to each other than in Table~\ref{tab:performance}, while \method performs superior to all baselines. DE-SimplE shows strength in generalizing over time, as it represents each entity as a continuous function over the temporal dimension. However, the model is weak when the range of timestamps is large and sparse, as shown for Wikidata in Table~\ref{tab:performance}. Meanwhile, TComplEx and TNTComplEx show fair performance on Wikidata, but infer poorly for unseen timestamps, as they only learn independent embeddings of discrete timestamps.
In contrast to these models, \method not only shows superior performance on all benchmarks but is also robust to unseen timestamps, by accounting for the temporal displacement rather than independent time tokens. \subsection{Interpretation} We provide a detailed analysis of the interpretability of \method in its relational reasoning process. \subsubsection{Relation Type and Temporal Displacement} Intuitively, the query relation type and the temporal displacement between relevant events and the query are closely correlated. For a query such as \textit{(PersonX, member\_of\_sports\_team, ?, $t_1$)}, events that happened 100 years before \textit{$t_1$} or 100 years after \textit{$t_1$} will most likely be irrelevant. On the contrary, for a query given as \textit{(NationX, wage\_war\_against, ?, $t_2$)}, one might have to consider events far off the time of interest. To verify whether \method understands this implicit correlation, we analyze the attention distribution over edges with different temporal displacements when \method is given input queries with a specific relation type. The visualization of the distributions for three relation types is presented in Figure \ref{fig:disp_distribution}. For all queries in the test set of WikiData11k with a specific relation type, we visualize the average attention value assigned to edges with each temporal displacement (red bars). We compare this with the original distribution of temporal displacements, counted over all edges reachable in $T$ steps from the head entity $v_{query}$ (blue bars). Remarkably, in contrast to the original distribution, which has high variance over a wide range of displacements, \method tends to focus most of the attention on edges with a specific temporal displacement, depending on the relation type. For queries with relation \textit{award\_won}, the attention distribution is extremely skewed, focusing over 90\% of the attention on events with displacement $= 0$ (i.e.
events in the same year as the query). Note that we have averaged the distribution over all queries with \textit{award\_won}, including both temporal modifiers \textit{occurSince} and \textit{occurUntil}. The skewed distribution mainly results from the fact that the majority of the `award' entities in Wikidata11k are annual awards, such as the \textit{Latin Grammy Award} or the \textit{Emmy Award}. The annual property of the candidate entities naturally makes \method focus on clues such as the nomination of awardees, or significant achievements of awardees, in the year of interest. \input{table/case_study} Next, we test \method on queries with relation \textit{member\_of\_sports\_team-occurUntil}. In this case, the attention is more evenly distributed than in the former case, but slightly biased toward past events. We find that this phenomenon is mainly due to the existence of temporally reciprocal edges in $G_{KG}$, which are a crucial key to solving the given query. Here, \method sends more than half of the attention value (on average) to an event with relation \textit{member\_of\_sports\_team-occurSince} that happened a few years before the time of interest. The inference follows our intuition to look for the last sports club that the player became a member of before the timestamp of the query. The third case, with relation \textit{educated\_at-occurSince}, is the opposite of the second case. The majority of the attention is concentrated on events 1-5 years after the query time, searching for the first event with relation \textit{educated\_at-occurUntil} that happened after the time of interest. As the analysis suggests, \method discovers important clues for each relation type, adequately accounting for the temporal displacement between the query and related events, while aligning with human intuition. \subsubsection{Case Study} We resort to a case study to provide a detailed view of \method's attention-based traversal.
In this study, our model is given the input query \textit{(North\_Korea, threaten, ?, 2014/04/29)} in ICEWS14, where the correct answer is \textit{South\_Korea}. For each propagation step, we list the top-5 edges that received the highest attention value in the step. The predominant edges and their associated attention values are shown in Table \ref{tab:case_study}. In the first step, \method attends to various events pertinent to \textit{North\_Korea}, which mostly include negative predicates against other nations. As seen in the table, the two plausible query-filling candidates are \textit{Japan} and \textit{South\_Korea}. \textit{Japan} receives slightly more attention than \textit{South\_Korea}, as it is associated with more relevant facts such as ``\textit{North\_Korea} threatened \textit{Japan} on May 12th''. In the second step, however, \method discovers additional relevant facts that could be crucial in answering the given query. As these facts have either \textit{Japan} or \textit{South\_Korea} as head entity, they could not be discovered in the first propagation step, which only propagates the attention from the query head \textit{North\_Korea}. \method attends to the events \textit{(South\_Korea, threaten / criticize\_or\_denounce, North\_Korea)} that happened only a few days before our time of interest. These facts imply the strained relationship between the two nations around the query time. Also, \method finds that most of the edges that span from \textit{Japan} to \textit{North\_Korea} within a few months before or after the time of interest tend to be positive events. As a result, in the last step, \method propagates most of the node attention of \textit{North\_Korea} to the events associated with \textit{South\_Korea}. The highest attention is assigned to the relation \textit{make\_statement}.
Although the relation itself does not hold a negative meaning, in ICEWS14, \textit{make\_statement} is typically accompanied by \textit{threaten}, as entities do formally threaten other entities by making statements. Through the case study, we find that \method leverages the propagation-based decoding as a tool to fix its traversal over wrongly-selected entities. Although \textit{Japan} seemed like an optimal answer in the first step, it understands through the second step that the candidate was sub-optimal with respect to the query, propagating the attention assigned to \textit{Japan} back to \textit{North\_Korea}. \method fixes its attention propagation in the last step, resulting in a completely different set of attended events compared to the first step. Such an amendment would not have been possible with conventional approaches to path-based inference, which greedily select an optimal entity to traverse at each decoding step. \section{\raggedright{A \quad Subgraph Sampling Example}} \label{sec:app_subgraph_sampling} Figure~\ref{fig:subgraph_example} is an illustrative example for the subgraph sampling procedure in \method. The hyperparameters for the example are as follows: $x = 2$ (the maximum number of core nodes), $y = 3$ (the maximum number of candidate edges, considered by each core node), and $z = 2$ (the number of sampled edges added to the subgraph). In the initial state, the only node associated with nonzero attention $a^{(0)}_i$ is the query head $v_{query}$. Also, the initial subgraph $G^{(0)}_{sub}$ consists only of the node $v_{query}$. After the first propagation step $t=1$, \method first finds top-$x$ core nodes (where $x=2$) w.r.t. nonzero node attention scores $a^{(0)}_i$ of the previous step $t=0$. Since the only node with nonzero attention value is $v_{query}$, it is retrieved as the core node. Next, \method randomly samples at most $y = 3$ edges that originate from the core node (e.g. dashed edges with weights). 
Among the sampled edges, it selects the top-$z$ (where $z=2$) edges in the order of edge attention values $\widetilde{a}^{(1)}_{ij}$ at the current step; these are then added to $G^{(0)}_{sub}$, resulting in the new subgraph $G^{(1)}_{sub}$. After the second propagation step $t=2$, \method again finds the $x$ core nodes corresponding to the highest attention values $a^{(1)}_i$ (e.g. nodes annotated with $0.1$ and $0.2$, respectively). Then, $y$ outgoing edges for each core node are sampled; among the $x \cdot y$ sampled edges, the $z$ edges with the highest edge attention values $\widetilde{a}^{(2)}_{ij}$ are added to $G^{(1)}_{sub}$, creating the new subgraph $G^{(2)}_{sub}$. As seen in the figure, the incremental subgraph sampling scheme allows our model to iteratively expand the range of nodes and edges to attend to, while guaranteeing that the critical nodes and edges from the previous steps are kept in the later subgraphs. By flexibly adjusting the subgraph-related hyperparameters $x, y$, and $z$, \method is readily calibrated between reducing computational complexity and optimizing the predictive performance. Intuitively, with more core nodes, more sampled edges, and more edges added to the subgraph, \method can better attend to substructures of the TKG that otherwise might have been discarded. Meanwhile, with small $x, y$, and $z$, \method can easily scale up to large graphs by reducing the number of message-passing operations in SGNN.
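The expansion step walked through above can be condensed into a short sketch. This is an illustrative implementation of ours, not the released code; the toy graph, attention values, and function names are made up for the example.

```python
import random

def expand_subgraph(node_att, edge_att, out_edges, subgraph, x, y, z, rng):
    """One subgraph-expansion step: take the x nodes with the highest
    nonzero attention as core nodes, sample at most y outgoing edges per
    core node, and add the z sampled edges with the highest edge
    attention to the previous subgraph."""
    core = sorted((v for v, a in node_att.items() if a > 0),
                  key=node_att.get, reverse=True)[:x]
    candidates = []
    for v in core:
        edges = out_edges.get(v, [])
        candidates += rng.sample(edges, min(y, len(edges)))
    best = sorted(candidates, key=edge_att.get, reverse=True)[:z]
    return subgraph | set(best)

rng = random.Random(0)
# Toy graph: the query head "q" has four outgoing edges.
out_edges = {"q": [("q", "a"), ("q", "b"), ("q", "c"), ("q", "d")]}
edge_att = {("q", "a"): 0.5, ("q", "b"): 0.3,
            ("q", "c"): 0.15, ("q", "d"): 0.05}
node_att = {"q": 1.0}          # all attention starts on v_query
sub = expand_subgraph(node_att, edge_att, out_edges,
                      subgraph=set(), x=2, y=3, z=2, rng=rng)
```

Whatever triple of edges the sampler draws, the lowest-attention edge among them is discarded by the top-$z$ cut, which mirrors how the scheme keeps only query-relevant edges while the random sampling bounds the cost.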
\newpage \section{\raggedright{B \quad Dataset Statistics}} \label{sec:app_dataset_stat} \input{table/dataset_stat} \clearpage \onecolumn \section{\raggedright{C \quad Additional Implementation Detail}} \label{sec:app_impl_detail} \input{table/impl_detail} \section{Introduction} \label{sec:intro} \input{010intro} \section{Related Work} \label{sec:related} \input{020related} \section{Proposed Method} \label{sec:method} \input{030method} \section{Experiment} \label{sec:experiment} \input{040experiment} \section{Conclusion} \label{sec:conclusion} \input{050conclusion}
\section{Introduction} Many systems out of equilibrium \cite{haken,cross} exhibit spatial striped patterns on macroscopic scales. These are often caused by transport of matter or charge induced by a drive, which leads to heterogeneous ordering. Such phenomenology occurs in flowing fluids \cite{rheology2}, and during phase separation in colloidal \cite{loewen3}, granular \cite{reis,sanchez}, and liquid--liquid \cite{liqliq} mixtures. Further examples are wind ripples in sand \cite{dunes3}, trails by animals and pedestrians \cite{helbing}, and the anisotropies observed in high temperature superconductors \cite{cuprates0,cuprates1} and in two--dimensional electron gases \cite{2deg1,mosfet}. Studies of these situations, often described as nonequilibrium phase transitions, have generally focused on lattice systems \cite{Liggett,Privman,Zia,Marro,Odor}, i.e., models based on a discretization of space and on interacting particles that move according to simple local rules. Such simplicity sometimes allows for exact calculations and is easy to implement on a computer. Moreover, some powerful techniques have been developed to deal with these situations, including nonequilibrium statistical field theory. However, lattice models are perhaps too crude an oversimplification of fluid systems, so that the robustness of such an approach merits a detailed study. The present paper describes Monte Carlo (MC) simulations and field theoretical calculations that aim at illustrating how slight modifications of dynamics at the microscopic level may influence, even quantitatively, the resulting (nonequilibrium) steady state. We are also concerned, in particular, with the influence of dynamics on criticality. With this objective, we take as a reference the \textit{driven lattice gas} (DLG), namely, a kinetic nonequilibrium Ising model with conserved dynamics.
This system has become a prototype for anisotropic behavior, and it has been useful to model, for instance, ionic currents \cite{Marro} and traffic flows \cite{antal}. In fact, in certain aspects, this model is more realistic for traffic flows than the standard \textit{asymmetric simple exclusion process} \cite{Liggett,Privman}. Here we compare the transport and critical properties of the DLG with those of apparently close lattice and off--lattice models. There is some related previous work addressing the issue of how minor variations in the dynamics may induce dramatic morphological changes both in the early--time kinetics and in the stationary state \cite{valles,rutenberg,manolo0}. However, these papers focus neither on transport nor on critical properties. We here investigate, in particular, the question of how the lattice itself may condition transport, structural, and critical properties and, with this aim, we consider nearest--neighbor (NN) and next--nearest--neighbor (NNN) interactions. We also compare with a microscopically off--lattice representation of the \textit{driven lattice gas} in which the particles' spatial coordinates vary continuously. A principal conclusion is that spatial discretization may change significantly not only morphological and early--time kinetic properties, but also critical properties. This is in contrast with the concept of universality in equilibrium systems, where critical properties are independent of dynamic details. \begin{figure}[b] \begin{center} \includegraphics[scale=0.245]{fig1.eps} \end{center} \caption[]{Schematic comparison of the sites a particle (at the center, marked with a dot) may occupy (if the corresponding site is empty) for nearest--neighbor (NN) and next--nearest--neighbor (NNN) hops at equilibrium (left) and in the presence of an \textquotedblleft infinite\textquotedblright\ horizontal field (right).
The particle--hole exchange between neighbors is either forbidden ($\times$) or allowed ($\surd$), depending on the field value.} \label{fig1} \end{figure} \section{Driven Lattice Gases} The \textit{driven lattice gas}, initially proposed by Katz, Lebowitz, and Spohn \cite{KLS}, is a nonequilibrium extension of the Ising model with conserved dynamics. The DLG consists of a \textit{d}-dimensional square lattice gas in which pairs of particles interact via an attractive and short--range Ising--like Hamiltonian, \[ H=-4J\sum_{\langle j,k \rangle} \sigma_j \sigma_k \: . \label{eq:ham} \] Here $\sigma_k=0(1)$ is the lattice occupation number at site $k$ for an empty (occupied) state and the sum runs over all NN pairs of sites (the accessible sites are depicted in Fig.~\ref{fig1}). Dynamics is induced by the competition between a heat bath at temperature $T$ and an external driving field $E$ which favors particle hops along one of the principal lattice directions, say horizontally ($\hat{x}$), as if the particles were positively charged.
It was numerically shown that this belongs to a universality class other than the Onsager one, e.g., MC data indicates that the order parameter critical exponent is $\beta_{\mathrm{DLG}} \simeq 1/3$ \cite{Marro,achahbar} (instead of the Onsager value $1/8$). \begin{figure}[b] \begin{center} \includegraphics[width=6cm]{fig2a.eps} \includegraphics[width=6cm]{fig2b.eps} \end{center} \caption[]{Parallel (squares) and transverse (triangles) components of the two--point correlation function (left) and the structure factor (right) above criticality with NN (filled symbols) and NNN (empty symbols) interactions for a $128 \times 128$ half filled lattice. The inset shows the $x^{-2}$ power law decay in $C_{\parallel}$ for both discrete cases: DLG ($\circ$) and NDLG ($\times$).} \label{fig2} \end{figure} Other key features concern the two--particle correlation function $C(x,y)$ and its Fourier transform $S(k_{x},k_{y})$, i.e., the structure factor. As depicted in the left graph of Fig.~\ref{fig2}, correlations are favored (inhibited) along (against) the field direction. In fact, the DLG shows a slow decay of the two--point correlations due to the spatial anisotropy associated with the dynamics \cite{pedro}. This long range behavior translates into a characteristic discontinuity singularity at the origin ($\lim_{k_{x}\rightarrow 0}S_{\parallel}\neq \lim_{k_{y}\rightarrow 0}S_{\perp}$) in the structure factor \cite{Zia}, which is confirmed in Fig.~\ref{fig2}. How do all these features depend on the number of neighbor sites to which a particle can hop? Or in other words, how robust is the behavior when extending interactions and accessible sites to the NNN? 
Previous work has shown that extending hopping in the DLG to NNN leads to an inversion of triangular anisotropies during the formation of clusters \cite{rutenberg}, and also that dramatic changes occur in the steady state, including the fact that, contrary to the DLG with NN interactions, the critical temperature decreases with increasing $E$ \cite{manolo0}. However, other important features such as correlations and criticality seem to remain invariant. Analysis of the parallel ($C_{\parallel}$) and transverse ($C_{\perp}$) components reveals that correlations are quantitatively similar for the DLG and for the DLG with NNN interactions (henceforth NDLG) ---although somewhat weaker in the latter case. The slow decay of correlations, which gives rise to the discontinuity at the origin of $S(k_x,k_y)$, also persists. These facts are shown in Fig.~\ref{fig2}. On the other hand, recent MC simulations of the NDLG indicate that the order parameter critical exponent is $\beta_{\mathrm{NDLG}}\approx 1/3$ \cite{achahbar2}, as for the DLG. The \textit{anisotropic diffusive system} approach \cite{paco}, which is a Langevin--type (mesoscopic) description, predicts this critical behavior. In both cases, DLG and NDLG, the Langevin equations, as derived by coarse graining the master equation, lead to $\beta=1/3$. These two Langevin equations are identical, except for new entropic terms in the NDLG due to the presence of additional neighbors \cite{manolo2}. The fact that extending particle hops and interaction to the diagonal sites leaves invariant both correlations and criticality seems to indicate that the two systems, DLG and NDLG, belong to the same universality class. \section{A Driven Off-lattice Gas} In order to delve deeper into this interesting issue, we studied to what extent the DLG behavior depends on the lattice itself.
With this aim, we considered a driven system with continuous variation of the particles' spatial coordinates ---instead of the discrete variations in the DLG--- which follows the DLG strategy as closely as possible. In particular, we analyzed an off--lattice, microscopically--continuum analog of the DLG with the symmetries and short--range interaction of this model.\\ \subsection{The Model} Consider a \textit{fluid} consisting of $N$ interacting particles of mass $m$ confined in a two--dimensional box of size $L\times L$ with periodic (toroidal) boundary conditions. The particles interact via a truncated and shifted Lennard--Jones (LJ) pair potential \cite{Allen}:% \begin{equation} \phi(r)\equiv \left\{ \begin{array}{cl} \phi_{LJ}(r)-\phi_{LJ}(r_{c}), & \mathit{if} \: r<r_{c} \\ 0, & \mathit{if} \: r\geq r_{c},% \end{array}% \right. \label{pot} \end{equation}% where $\phi_{LJ}(r)=4\epsilon \left[ (\sigma /r)^{12}-(\sigma /r)^{6}\right]$ is the LJ potential, $r$ is the interparticle distance, and $r_{c}$ is the \textit{cut-off}, which we shall set at $r_{c}=2.5\sigma$. The parameters $\sigma$ and $\epsilon$ are, respectively, the characteristic length and energy. For simulations, all the quantities were reduced according to $\epsilon$ and $\sigma$, and $k_{B}$ and $m$ were set to unity. The uniform (in space and time) external driving field $E$ is implemented by assuming a preferential hopping in the horizontal direction. This favors particle jumps along the field, as if the particles were positively charged; see dynamic details in Fig.~\ref{fig3}. As in the lattice counterpart, we consider the large field limit $E\rightarrow \infty$. This is the most interesting case because, as the strength of the field is increased, one eventually reaches saturation, i.e., particles cannot jump against the field.
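As a minimal numerical sketch, the truncated and shifted potential of Eq.~(\ref{pot}) in reduced units ($\epsilon=\sigma=1$, $r_c=2.5$, as in the text) reads:

```python
import numpy as np

SIGMA, EPS, RCUT = 1.0, 1.0, 2.5   # reduced units, r_c = 2.5*sigma

def phi_lj(r):
    """Bare LJ potential: phi_LJ(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (sr6 ** 2 - sr6)

def phi(r):
    """Truncated and shifted LJ potential of Eq. (pot)."""
    r = np.asarray(r, dtype=float)
    return np.where(r < RCUT, phi_lj(r) - phi_lj(RCUT), 0.0)
```

The shift $-\phi_{LJ}(r_c)$ makes the potential continuous at the cut-off, so no impulsive force arises at $r=r_c$.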
This situation may be formalized by defining the transition probability per unit time (\textit{rate}) as \begin{equation} \omega(\eta \rightarrow \eta'; E, T)=\frac{1}{2}\left[ 1+\tanh(E \cdot \delta) \right] \cdot \min \left\lbrace 1,\exp(-\Delta \Phi/T)\right\rbrace . \label{rate} \end{equation} Here, any configuration is specified by $\eta\equiv\left\{ \mathbf{r}_{1},\cdots,\mathbf{r}_{N}\right\}$, where $\mathbf{r}_i$ is the position of particle $i$, which can move anywhere in the torus, $\Phi(\eta)=\sum_{i<j}\phi(|\mathbf{r}_i-\mathbf{r}_j|)$ stands for the energy of $\eta$, and $\delta=(x_{i}^{\prime }-x_{i})$ is the displacement corresponding to a single MC trial move along the field direction, which generates an increment of energy $\Delta \Phi =\Phi(\eta^{\prime})-\Phi(\eta)$. The biased hopping entering the first term of Eq.~(\ref{rate}) makes the \textit{rate} asymmetric under $\eta \leftrightarrow \eta^{\prime}$. Consequently, Eq.~(\ref{rate}), in the presence of toroidal boundary conditions, violates detailed balance. This condition is only recovered in the absence of the driving field. In this limit the \textit{rate} reduces to the Metropolis one, and the system corresponds to the familiar \textit{truncated and shifted two--dimensional LJ fluid} \cite{smit,Allen}. Note that each trial move of any particle satisfies $0<|\mathbf{r}_{i}^{\prime}-\mathbf{r}_{i}|<\delta_{max}$, where $\delta_{max}$ is the maximum displacement in the radial direction (fixed at $\delta_{max}=0.5$ in our simulations). \begin{figure}[b] \begin{center} \includegraphics[width=9cm]{fig3.eps} \end{center} \caption[]{Schematic representation of the accessible (shaded) region for a particle trial move at equilibrium (left) and out-of-equilibrium (right), assuming the field points along the horizontal direction ($\hat x$).
The right hand side shows typical steady state configurations above (upper snapshot) and below (lower snapshot) criticality in the large field limit.} \label{fig3} \end{figure} MC simulations using the \textit{rate} defined in Eq.~(\ref{rate}) show highly anisotropic states (see Fig.~\ref{fig3}) below a critical point which is determined by the pair of values $(\rho_{\infty},T_{\infty})$. A linear interface forms between a high density phase and its vapor: a single strip with high density extending horizontally along $\hat{x}$ throughout the system separates from a lower density phase (vapor). The local structure of the anisotropic condensate changes from a strictly hexagonal packing of particles at low temperature (below $T=0.10$), to a polycrystalline--like structure with groups of defects and vacancies showing a varied morphology (e.g., at $T=0.12$), to a fluid--like structure (e.g., at $T=0.30$) and, finally, to a disordered state as the temperature is increased further. This phenomenology makes our model useful for interpreting structural and phase properties of nonequilibrium fluids, in contrast with lattice models, which are unsuitable for this purpose. Microscopic structural details aside, however, the stationary striped state is similar to the one in lattice models. \begin{figure}[b] \begin{center} \includegraphics[width=6cm]{fig4a.eps} \includegraphics[width=6cm]{fig4b.eps} \end{center} \caption[]{Left graph: Temperature dependence of the net current for the driven LJ fluid. Right graph: Transverse--to--the--field current profiles below criticality. The shaded (full) line corresponds to the current (velocity) profile of the off--lattice model. For comparison we also show the current profile of the DLG with NN interactions (circle--dotted line).
Since each distribution is symmetric with respect to the system center of mass (located here at $L/2$) we only show their right halves.} \label{fig4} \end{figure} \subsection{Transport Properties} Regarding the comparison between off--lattice and lattice transport properties, the left graph in Fig.~\ref{fig4} shows the net current $j$ as a function of temperature. Saturation is only reached at $j_{max}=4\delta_{max}/3\pi$ when $T\rightarrow \infty$. The current approaches its maximal value logarithmically, i.e., slower than the exponential behavior predicted by the Arrhenius law. The sudden rise of the current as $T$ is increased can be interpreted as a transition from a poor--conductor (low--temperature) phase to a rich--conductor (high--temperature) phase, which is reminiscent of ionic currents \cite{Marro}. This behavior of the current also occurs in the DLG. Revealing the persistence of correlations, the current is nonzero at any low $T$, though very small in the solid--like phase. From the temperature dependence of $j$ one may estimate the transition points between the different phases, in particular, as the condensed strip changes from solid to liquid ($T\approx 0.15$) and finally to a fully disordered state ($T\approx 0.31$). The current is highly sensitive to the anisotropy. The most relevant information is carried by the transverse--to--the--field current profile $j_{\perp}$, which shows the differences between the two coexisting phases (right graph in Fig.~\ref{fig4}). Above criticality, where the system is homogeneous, the current profile is flat on average. Otherwise, the condensed phase exhibits a higher current (lower mean velocity) than its mirror phase, which exhibits a lower current (higher mean velocity). Both the transversal current and velocity profiles are shown in Fig.~\ref{fig4}.
The current and the density vary in a strongly correlated manner: the high current phase corresponds to the condensed (high density) phase, whereas the low current phase corresponds to the vapor (low density) phase. This is expected, since there are many carriers in the condensed phase, which allow for a higher current than in the vapor phase. However, the mobility of the carriers is much larger in the vapor phase. The maximal current occurs at the interface, where there is still a considerable amount of carriers, but they are less bound than the particles well inside the \textit{bulk} and, therefore, the field drives those particles more easily. This enhanced current effect along the interface is more prominent in the lattice models (notice the large peak in the current profile in Fig.~\ref{fig4}). Moreover, in both lattice cases, DLG and NDLG, there is no difference between the currents displayed by the coexisting phases because of the particle--hole symmetry. Such a symmetry derives from the Ising--like Hamiltonian in Eq.~(\ref{eq:ham}) and is absent in the off--lattice model. \begin{figure}[b] \begin{center} \includegraphics[width=6cm]{fig5a.eps} \includegraphics[width=6cm]{fig5b.eps} \end{center} \caption[]{The temperature--density phase diagram (left graph) was obtained from the transversal density profile (right graph) for $N=7000$, $\protect\rho =0.35$, and different temperatures. The coexistence curve separates the liquid--vapor region (shaded area) and the liquid phase (unshaded area). The diamond represents the critical point, which has been estimated using the scaling law and the rectilinear diameter law (as defined in the main text).} \label{fig5} \end{figure} \subsection{Critical Properties} A main issue is the (nonequilibrium) liquid--vapor coexistence curve and the associated critical behavior. The coexistence curve may be determined from the density profile transverse to the field, $\rho_{\perp}$.
This is illustrated in Fig.~\ref{fig5}. At temperatures well above the critical one, the local density is roughly constant around the mean system density ($\rho=0.35$ in Fig.~\ref{fig5}). As $T$ is lowered, the profile accurately describes the striped phase of density $\rho _{+}$ which coexists with its vapor of density $\rho _{-}$ ($\rho _{-}\leq\rho _{+}$). The interface becomes thinner and less rough, and $\rho _{+}$ increases while $\rho_{-}$ decreases, as $T$ is decreased. As an order parameter for the second order phase transition one may use the difference between the coexisting densities, $\rho _{+}-\rho _{-}$. The result of plotting $\rho _{+}$ and $\rho _{-}$ at each temperature is shown in Fig.~\ref{fig5}. The same behavior is obtained from the transversal current profiles (Fig.~\ref{fig4}). It is worth noticing that the estimate of the coexisting densities $\rho_{\pm}$ is favored by the existence of a linear interface, which is simpler here than in equilibrium. This is remarkable because we can therefore get closer to the critical point than in equilibrium. Lacking a \textit{thermodynamic} theory for \textquotedblleft phase transitions\textquotedblright\ in non--equilibrium liquids, other approaches have to be considered in order to estimate the critical parameters. Consider the rectilinear diameter law $(\rho _{+}+\rho _{-})/2=\rho _{\infty }+b_{0}(T_{\infty }-T)$, which is an empirical fit extensively used for fluids in equilibrium. This, in principle, has no justification out of equilibrium. However, we found that our MC data nicely fit the diameter equation. We use this fact together with a universal scaling law $\rho _{+}-\rho _{-}=a_{0}(T_{\infty }-T)^{\beta }$ to accurately estimate the critical parameters. The simulation data in Fig.~\ref{fig5} thus yield $\rho_{\infty}=0.321(5)$, $T_{\infty}=0.314(1)$, and $\beta=0.10(8)$, where the estimated errors in the last digit are shown in parentheses.
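The two-step estimate sketched below mirrors the procedure described above: scan trial values of $T_{\infty}$ against the scaling law in log--log form, then read $\rho_{\infty}$ off the diameter law. The grid-scan strategy is an illustrative assumption, not necessarily the authors' actual fitting procedure:

```python
import numpy as np

def estimate_critical_params(T, rho_p, rho_m, Tc_grid):
    """For each trial T_inf, fit log(rho_+ - rho_-) = log(a0) + beta*log(T_inf - T)
    and keep the T_inf minimizing the residual; then obtain rho_inf as the
    intercept of the diameter law (rho_+ + rho_-)/2 = rho_inf + b0*(T_inf - T)."""
    best = None
    for Tc in Tc_grid:
        if Tc <= T.max():
            continue  # T_inf must lie above all sampled temperatures
        (beta, _), res, *_ = np.polyfit(np.log(Tc - T), np.log(rho_p - rho_m),
                                        1, full=True)
        r = res[0] if len(res) else 0.0
        if best is None or r < best[0]:
            best = (r, Tc, beta)
    _, Tc, beta = best
    b0, rho_inf = np.polyfit(Tc - T, 0.5 * (rho_p + rho_m), 1)
    return Tc, beta, rho_inf
```

With noise-free synthetic data the routine recovers the input parameters; on real MC data the residual scan would be replaced by a proper error analysis.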
These values are confirmed by the familiar log--log plots. Compared to the equilibrium case \cite{smit}, one has that $T_{0}/T_{\infty }\approx 1.46$. This confirms the intuitive observation above that the field acts in this system favoring disorder. On the other hand, our estimate for the order--parameter critical exponent is fully consistent with both the extremely flat coexistence curve which characterizes the equilibrium two--dimensional LJ fluids and the equilibrium Ising value, $\beta_{\mathrm{Ising}}=1/8$ (non--mean--field value). Although the error bar is large, one may discard with confidence the DLG value $\beta_{\mathrm{DLG}} \approx 1/3$ as well as the mean field value. This result is striking because our model \textit{seems} to have the symmetries and short--range interactions of the DLG. Further understanding of this difference will perhaps come from statistical field theory. \section{Final Comments} In summary, we reported MC simulations and field theoretical calculations to study the effect of discretization in \textit{driven diffusive systems}. In particular, we studied structural, transport, and critical properties of the \textit{driven lattice gas} and related non--equilibrium lattice and off--lattice models. Interestingly, the present \textit{Lennard--Jones} model, in which particles are subject to a constant driving field, is a computationally convenient prototypical model for anisotropic behavior, and reduces to the familiar LJ case for zero field. Otherwise, it exhibits some arresting behavior, including currents and striped patterns, as do many systems in nature. We have shown that the additional spatial freedom that our fluid model possesses, compared with its lattice counterpart, is likely to matter more than suggested by some naive intuition. In fact, it is surprising that its critical behavior is consistent with the one for the Ising equilibrium model but not with the one for the \textit{driven lattice gas}.
The main reason for this disagreement might be the particle--hole symmetry violation in the driven \textit{Lennard--Jones} fluid. However, confirming this statement will require further study. It also seems to be implied that neither the current nor the inherent anisotropy is the most relevant feature (at least regarding criticality) in these driven systems. Indeed, the question of which are the most relevant ingredients and symmetries that unambiguously determine the universal properties in driven diffusive systems is still open. In any case, the above important difference between the lattice and the off--lattice cases is most interesting as an unquestionable nonequilibrium effect; as is well known, such microscopic detail is irrelevant to universality in equilibrium critical phenomena.\\ We acknowledge very useful discussions with F. de los Santos and M. A. Mu\~{n}oz, and financial support from MEyC and FEDER (project FIS2005-00791).
\section{Abstract}\label{Abs} We provide a brief survey of our current developments in the simulation-based design of novel families of mesoscale porous materials using computational kinetic theory. Prospective applications on exascale computers are also briefly discussed and commented on, with reference to two specific examples of soft mesoscale materials: microfluidic crystals and bi-continuous gels. \section{Introduction} \label{Introd} Complex fluid-interface dynamics \cite{fernandez2016fluids,piazza2011soft,doi2013soft,stratford2005colloidal}, disordered liquid-liquid emulsions \cite{frijters2012effects,costantini2014highly}, soft-flowing microfluidic crystals \cite{raven2009microfluidic,marmottant2009microfluidics,montessori2019mesoscale, montessori2019modeling}, all stand as complex states of matter which, besides raising new challenges to modern non-equilibrium thermodynamics, pave the way to many engineering applications, such as combustion and food processing \cite{cant2008introduction,muschiolik2017double}, as well as to questions in fundamental biological and physiological processes, like blood flows and protein dynamics \cite{bernaschi2019mesoscopic}. In particular, this novel state of soft matter opens up exciting prospects for the design of new materials whose effective building blocks are droplets instead of molecules \cite{garstecki2006flowing,raven2009microfluidic,abate2009high,beatus2012physics}. The design of new materials has traditionally provided a relentless stimulus to the development of computational schemes spanning the full spectrum of scales, from electrons to atoms and molecules, to supramolecular structures all the way up to the continuum, encompassing over ten decades in space (say from Angstroms to meters) and at least fifteen in time (from femtoseconds to seconds, just to fix ideas).
Of course, no single computational model can handle such a huge spectrum of scales, each region being treated by dedicated and suitable methods, such as electronic structure simulations, ab-initio molecular dynamics, classical molecular dynamics, stochastic methods, lattice kinetic theory, finite difference and finite elements for continuum fields. Each of these methods keeps expanding its range of applicability, to the point of generating overlap regions which enable the development of powerful multiscale procedures \cite{weinan2011principles,levitt2014birth,yip2013multiscale,broughton1999concurrent,de2011coarse,succi2008lattice,falcucci2011lattice}.
Mesoscale particle methods \cite{gompper2009multi,noguchi2007particle} and especially lattice Boltzmann methods \cite{succi2018lattice,kruger2017lattice,dorazio2003} stand out as major techniques in point. In this paper we focus precisely on the recent extensions of the latter which are providing a versatile tool for the computational study of such droplet-based mesoscale materials. The main issue of this class of problems is that they span six or more orders of magnitude in space (nm to mm) and nearly twice as many in time, thus making the direct simulation of their dynamics unviable even on most powerful present-day supercomputers. Why six orders? Simply because the properties of such materials are to a large extent controlled by nanoscale interactions between near-contact fluid-fluid interfaces, which affect the behaviour of the entire device, typically extending over millimiter-centimeters scales. Why twice as many in time? Typically, because the above processes are driven by capillary forces, resulting in very slow motion and long equilibration times, close to the diffusive scaling $t_{dif} \sim L^2$, $t_{dif}$ being the diffusive equilibration time of a device of size $L$. Notwithstanding such huge computational demand, a clever combination of large-scale computational resources and advanced multiscale modelling techniques may provide decisive advances in the understanding of the aforementioned complex phenomena and on the ensuing computational design of new mesoscale materials. Many multiscale techniques have emerged in the past two decades, based on static or moving grid methods, as well as various forms of particle dynamics, both at the atomistic and mesoscale levels \cite{feng2004immersed,praprotnik2006adaptive,tiribocchi2019curvature}. 
In this paper, we shall be concerned mostly with a class of mathematical models known as Lattice Boltzmann (LB) methods, namely a lattice formulation of a Boltzmann's kinetic equation, which has found widespread use across a broad range of problems involving complex states of flowing matter at all scales, from macroscopic fluid turbulence, to micro and nanofluidics \cite{succi2018lattice}. Among others, one of the main perks of the LB method is its excellent amenability to massively parallel implementations, including forthcoming Exascale platforms. From the computational point of view an Exascale system (the supercomputer class foreseen in 2022) will be able to deliver a peak performance of more then one billion of billions floating-point operations per second ($10^{18}$), an impressive figure that, once properly exploited, can benefit immensely soft matter simulations. As a matter of fact, attaining Exaflop performance on these applications is an open challenge, as it requires a synergic integration of many different skills and a performance-oriented match between system architecture and the algorithmic formulation of the mathematical model. In this paper we provide a description of the different key points that need to be addressed to achieve Exascale performance in Lattice Boltzmann simulations of soft mesoscale materials. The article is organized as follows. In section \ref{LBE_intro}, for the sake of self-consistency, we provide a brief introduction to the main features of the LB method. In section \ref{Exascale} a qualitative description of the up-to-date performance of pre-exascale computers is discussed, together with an eye at LB performances on such pre-exascale (Petascale) computers {\it for macroscopic hydrodynamics}. In section \ref{par:Microfluidics} a concrete LB application to microfluidic problems is presented. In section \ref{par:Colloidal} a new high-perfomance lattice Boltzmann code for colloidal flows is presented. 
Finally, in section \ref{par:Conclusions} partial conclusions are drawn together with an outlook of future developments \section{Basics of Lattice Boltzmann Method} \label{LBE_intro} The LB method was developed in the late 1980's in a (successful) attempt to remove the statistical noise hampering the lattice gas cellular automata (LGCA) approach to fluid dynamic simulations \cite{succi2018lattice}.\\ Over the subsequent three decades, it has witnessed an impressive boost of applications across a remarkably broad spectrum of complex flow problems, from fully developed turbulence to relativistic flow all the way down to quark-gluon plasmas \cite{succi2008lattice,succi2015lattice}.\\ The key idea behind LB is to solve a minimal Boltzmann Kinetic Equation (BKE) on a suitable phase-space-time crystal. This means tracing the dynamics of a set of discrete distribution functions (often named \textit{populations}) $f_i(\vec{x};t)$, expressing the probability of finding a particle at position $\vec{x}$ and time $t$ with a discrete velocity $\vec{v}=\vec{c}_i$. \begin{figure*} \begin{center} \includegraphics[width=0.5\linewidth]{fig1.png} \end{center} \caption{Left: D3Q27 lattice, a three dimensional mesh composed of a set of 27 discrete velocities. Right: D3Q19 lattice, a three dimensional mesh composed of a set of 19 discrete velocities.} \label{fig:D3Q27} \end{figure*} To correctly solve the Boltzmann kinetic equation, the set of discrete velocities must be chosen in order to secure enough symmetry to comply with mass-momentum-energy conservation laws of macroscopic hydrodynamics as well as with rotational symmetry. In Fig. \ref{fig:D3Q27}, two typical 3D lattices used for current LB simulations are shown, one with a set of 27 velocities (D3Q27, left) and the other one with 19 discrete velocities (D3Q19, right). 
In its simplest and most compact form, the LB equation reads as follows: \begin{equation} \label{eq:eq1} f_i(\vec{x}+\vec{c}_i;t+1) = f'_i(\vec{x};t) \equiv (1-\omega) f_i(\vec{x};t) + \omega f_i^{eq}(\vec{x};t) + S_i,\;\;i=1,b \end{equation} where $\vec{x}$ and $\vec{c}_i$ are 3D vectors in ordinary space, $ f_i^{eq} $ are the equilibrium distribution functions and $S_i$ is a source term. This equation represents the following situation: the populations at site $\vec{x}$ at time $t$ collide (a.k.a. collision step) and produce a post-collision state $f'_i(\vec{x};t)$, which is then scattered away to the corresponding neighbour (a.k.a. propagation step) at $\vec{x}_i$ at time $t+1$. The lattice time step is made unitary, so that $\vec{c}_i$ spans the link connecting a generic lattice node $\vec{x}$ to its $b$ neighbors, located at $\vec{x}_i = \vec{x}+\vec{c}_i$. In the D3Q19 lattice, for example, the index $i$ runs from $1$ to $19$, hence there are $19$ directions of propagation (i.e. neighbors) for each grid-point $\vec{x}$. The local equilibrium populations are provided by a lattice truncation, to second order in the Mach number $M=u/c_s$, of the Maxwell-Boltzmann distribution, namely \begin{equation} \label{eq:equil} f_i^{eq}(\vec{x};t) = w_i \rho (1 + u_i + q_i) \end{equation} where $w_i$ is a set of weights normalized to unity, $u_i = \frac{\vec{u} \cdot \vec{c}_i}{c_s^2}$ and $q_{i}=(c_{ia}c_{ib}-c_{s}^{2}\delta_{ab})u_{a}u_{b}/2c_{s}^{4}$, with $c_s$ equal to the speed of sound in the lattice, and an implied sum over repeated latin indices $a,b=x,y,z$. The source term $S_i$ of Eq.~\ref{eq:eq1} typically accounts for the momentum exchange between the fluid and external (or internal) fields, such as gravity or self-consistent forces describing potential energy interactions within the fluid.
By defining fluid density and velocity as \begin{equation} \label{eq:eq2} \rho = \sum_i f_i \hspace{2cm} \vec{u}=(\sum_i f_i \vec{c}_i)/\rho, \end{equation} the Navier-Stokes equations for an isothermal quasi-incompressible fluid can be recovered in the continuum limit, provided the lattice has the aforementioned symmetries and the local equilibria are chosen according to Eq.~\ref{eq:equil}. Finally, the relaxation parameter $\omega$ in Eq.~\ref{eq:eq1} controls the viscosity of the lattice fluid according to \begin{equation} \label{eq:eq3} \nu = c_s^2 (\omega^{-1}-1/2). \end{equation} Further details about the method can be found in Ref.~\cite{kruger2017lattice,montessori2018lattice}. One of the main strengths of the LB scheme is that, unlike advection in continuum solvers, streaming is i) exact, since it occurs along straight lines defined by the lattice velocity vectors $\vec{c}_i$, regardless of the complex structure of the fluid flow, and ii) implemented via a memory shift without any floating-point operation. This also allows one to handle fairly complex boundary conditions \cite{leclaire2017generalized} in a more {\it conceptually} transparent way with respect to other mesoscale simulation techniques \cite{schiller2018mesoscopic}. \subsection{Multi-component flows} The LB method successfully extends to the case of multi-component and multi-phase fluids. In a binary fluid, for example, each component (denoted by $r$ and $b$ for red and blue, respectively) comes with its own populations, plus a term modeling the interactions between the fluids.
In this case the equations of motion read as follows: \begin{eqnarray}\label{eq:twobgk} f^r_{i}(\vec{x}+\vec{c}_{i};t+1)=(1-\omega_{eff})f^r_{i}(\vec{x};t)+\omega_{eff} f_{i}^{eq,r}(\rho^r;\vec{u})+S^r_{i}(\vec{x};t) \\ f^b_{i}(\vec{x}+\vec{c}_{i};t+1)=(1-\omega_{eff})f^b_{i}(\vec{x};t)+\omega_{eff} f_{i}^{eq,b}(\rho^b;\vec{u})+S^b_{i}(\vec{x};t) \\ \omega_{eff}=2c_s^2/(2 \bar{\nu} +c_s^2) \\ \frac{1}{\bar{\nu}}=\frac{\rho_k}{(\rho_k+\rho_{\bar{k}})}\frac{1}{\nu_k} + \frac{\rho_{\bar{k}}}{(\rho_k+\rho_{\bar{k}})}\frac{1}{\nu_{\bar{k}}} \end{eqnarray} where $\omega_{eff}$ is related, consistently with Eq.~\ref{eq:eq3}, to the kinematic viscosity $\bar{\nu}$ of the mixture of the two fluids. \\ The extra term $S^r_{i}(\vec{x};t)$ of Eq.~\ref{eq:twobgk} (and similarly for $S^b$) can be computed as the difference between the local equilibrium population, calculated at a shifted fluid velocity, and the one taken at the effective velocity of the mixture \cite{kupershtokh2004new}, namely: \begin{equation} S^r_{i}(\vec{x};t)=f_{i}^{eq,r}(\rho^r,\vec{u}+\frac{\vec{F}^{r} \Delta t}{\rho^r})-f_{i}^{eq,r}(\rho^r,\vec{u}).\label{eq:edm} \end{equation} Here $\vec{F}^r$ is an extra cohesive force, usually defined as \cite{shan1993lattice} \begin{equation}\label{shanchen_force} \vec{F}^{r}(\vec{x},t)=\rho^{r}(\vec{x},t)G_C\sum_{i}w_{i}\rho^{b}(\vec{x}+\vec{c}_{i},t)\vec{c_{i}}, \end{equation} capturing the interaction between the two fluid components. In Eq. \ref{shanchen_force}, $G_C$ is a parameter tuning the strength of this intercomponent force, which takes positive values for repulsive interactions and negative ones for attractive interactions. This formalism has proved extremely valuable for the simulation of a broad variety of multiphase and multi-component fluids and represents a major mainstream of current LB research.
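A minimal sketch of the scheme of Eqs.~(\ref{eq:eq1})--(\ref{eq:eq3}) and of the Shan--Chen force of Eq.~(\ref{shanchen_force}), written on a D2Q9 lattice for compactness (production runs use the D3Q19/D3Q27 lattices of Fig.~\ref{fig:D3Q27}); the periodic `np.roll` streaming and the array layout are implementation choices, not prescriptions from the text:

```python
import numpy as np

# D2Q9 discrete velocities and weights; c_s^2 = 1/3 in lattice units
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
CS2 = 1.0 / 3.0

def equilibrium(rho, u):
    """Second-order equilibrium of Eq. (eq:equil): w_i * rho * (1 + u_i + q_i)."""
    cu = np.einsum('ia,xya->ixy', c, u)                  # c_i . u
    uu = np.einsum('xya,xya->xy', u, u)                  # u . u
    return w[:, None, None] * rho * (1 + cu / CS2
                                     + 0.5 * (cu / CS2) ** 2 - 0.5 * uu / CS2)

def lb_step(f, omega):
    """One LB update of Eq. (eq:eq1) without forcing: collide, then stream."""
    rho = f.sum(axis=0)
    u = np.einsum('ixy,ia->xya', f, c) / rho[..., None]
    f = (1 - omega) * f + omega * equilibrium(rho, u)    # local BGK collision
    for i in range(9):                                   # exact streaming
        f[i] = np.roll(f[i], shift=c[i], axis=(0, 1))
    return f

def shan_chen_force(rho_r, rho_b, G_C):
    """Intercomponent force of Eq. (shanchen_force) acting on the r component."""
    F = np.zeros(rho_r.shape + (2,))
    for i in range(1, 9):
        rho_b_nb = np.roll(rho_b, shift=-c[i], axis=(0, 1))  # rho^b at x + c_i
        F += w[i] * rho_b_nb[..., None] * c[i]
    return G_C * rho_r[..., None] * F
```

A uniform mixture yields zero force, while gradients of $\rho^b$ generate a force on the $r$ component, which then enters the populations through the source term of Eq.~(\ref{eq:edm}).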
\section{LB on Exascale class computers} \label{Exascale} Exascale computers are the next major step in the high performance computing arena, the deployment of the first Exascale class computers being planned, at the moment, for 2022. To break the barrier of $10^{18}$ floating point operations per second, this class of machines will be based on a hierarchical structure of thousands of nodes, each with up to hundreds of cores, using many accelerators like GPGPUs (General Purpose GPUs or similar devices) per node. Indeed, a CPU-only Exascale computer is not feasible due to heat dissipation constraints, as it would demand more than 100 MW of electric power! As a reference, the current top-performing supercomputer (Summit) in 2019 is composed of 4608 nodes, with 44 cores per node and 6 Nvidia V100 GPUs per node, falling short of the Exascale target by a factor five, namely 200 Petaflops (see online at Top500, \cite{top500}), with over 90\% of the computational performance being delivered by the GPUs.\\ Since no major improvement of single-core clock time is planned, due again to heat power constraints, crucial to achieve exascale performance is the ability to support the concurrent execution of many tasks in parallel, from $O(10^4)$ for hybrid parallelization (e.g. OpenMP+MPI or OpenAcc+MPI) up to $ O(10^7) $ for a pure MPI parallelization.\\ Hence, three different levels of \textit{parallelism} have to be implemented for an efficient Exascale simulation: i) at core/floating point unit level (instruction level parallelism, vectorization), ii) at node level (i.e., shared memory parallelization with CPU threads or, for GPUs, CUDA threads), iii) at cluster level (i.e., distributed memory parallelization among tasks). All these levels have to be efficiently orchestrated by the user's implementation to achieve performance, together with the tools (compilers, libraries), the network technology and topology, and the efficiency of the communication software.
Not to mention other important issues, like reliability, requiring a fault tolerant simulation environment \cite{da2015exascale,snir2014addressing}.\\ How does LB score in this prospective scenario?\\ In this respect, the main LB features are as follows: \begin{itemize} \item Streaming step is exact (zero round-off) \item Collision step is completely local (zero communication) \item First-neighbors communication (possibly second-neighbors for high-order formulations) \item Conceptually easy-to-manage boundary conditions (e.g. porous media) \item Both pressure and fluid stress tensor are locally available in space and time \item Emergent interfaces (no front-tracking) for multi-phase/species simulation \end{itemize} All features above are expected to facilitate exascale implementations \cite{succi2019towards}. In Ref.~\cite{liu2019sunwaylb}, an overview of different LB code implementations on a variety of large-scale HPC machines is presented, and it is shown that LB is, as a matter of fact, in a remarkably good position to exploit Exascale systems.\\ In the following we shall present some performance figures that LB codes can reach on exascale systems. \\ According to the roofline model \cite{williams2009roofline}, achievable performance can be ranked in terms of Operational Intensity (OI), defined as the ratio between the flops performed and the data that need to be loaded/stored from/to memory. At low OI ($ < 10 $), the performance is limited by the memory bandwidth, while for higher values the limitation comes from the availability of floating point units. It is well known that LB is a bandwidth limited numerical scheme, like any other CFD model.
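The OI figures quoted below for a D3Q19 lattice follow from simple counting, sketched here (the function name is ours; $F \simeq 225$ is the mid-range of the per-site flop count quoted in the footnote, and the traffic is one load plus one store of the 19 populations):

```python
def operational_intensity(flops_per_site=225.0, populations=19, bytes_per_word=8):
    """OI = flops / (load + store memory traffic) per lattice site and step."""
    traffic = populations * 2 * bytes_per_word  # bytes per site per step
    return flops_per_site / traffic
```

With these numbers, double precision gives $225/304 \approx 0.74$ flop/byte and single precision $225/152 \approx 1.48$, matching the $\sim 0.7$ and $1.4$ values cited in the text.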
Indeed, the OI index for LB schemes is around 0.7 for double precision (DP) simulations using a D3Q19 lattice\footnote{For a single fluid, if $F \simeq 200 \div 250 $ is the number of floating point operations per lattice site and time step and $B= 19 \times 2 \times 8=304 $ is the load/store demand in bytes (using double precision), the operational intensity is $F/B \sim 0.7$. For single-precision simulations it is about 1.4.} (see Fig.~\ref{fig:roof} where the roofline for LB is shown\footnote{Bandwidth and floating point computation limits are obtained by running the {\em stream} and HPL benchmarks.}).\\ In Fig.~\ref{fig:CYL}, a 2D snapshot of the vorticity of the flow around a 3D cylinder at $ Re=2000 $ is shown (\cite{DSFD2020}, in preparation).\\ This petascale class simulation was performed with a hybrid MPI-OpenMP parallelization (using 128 tasks and 64 threads per task), on a single-phase, single time relaxation, 3D lattice. Using $O(10^4)$ present-day GPUs\footnote{At this time, GPUs seem to be the only devices mature enough for an Exascale machine.}, a hundred-billion-site lattice simulation would complete one million time-steps in between $1$ and $3$ hours, corresponding to about 3 to 10 TLUPS (1 Tera LUPS is a trillion Lattice site Updates Per Second). Thus, on an Exascale machine, more realistic structures, both in terms of size, complexity (i.e. decorated structures) and simulated time, become accessible \cite{AmatiFalcucci2018}. To achieve this we must be able to handle on the order of $ 10^4 $ tasks, each split into many, order $10^3$, GPU thread-like processes. \\ \begin{figure*} \begin{center} \includegraphics[width=0.7\linewidth]{fig2.jpeg} \end{center} \caption{Roofline model for single-phase, single time relaxation, Lattice Boltzmann.
The vertical line indicates the performance range using double precision.} \label{fig:roof} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.7\linewidth]{fig3.png} \end{center} \caption{Vorticity of a flow around a cylinder at Reynolds number $=2000$. The color map indicates the direction of rotation, blue for clockwise and red for counterclockwise. The flow was simulated with an optimised LB code for macroscopic hydrodynamics \cite{succi2019towards}.} \label{fig:CYL} \end{figure*} What does this mean in terms of the multiscale problems sketched in the opening of this article? With a one micron lattice spacing and one nanosecond timestep, this means simulating a cubic box 5 mm on a side over one millisecond in time. Although this does not cover the full six orders in space, from, say, 10 nanometers to centimeters, which characterize most meso-materials, it offers nonetheless a very valuable order of magnitude boost in size as compared to current applications. \section{LB method for microfluidic crystals} \label{par:Microfluidics} As mentioned in the Introduction, many soft matter systems host concurrent interactions encompassing six or more decades in space and nearly twice as many in time. Two major directions can be pursued to face this situation: the first consists in developing sophisticated multiscale methods capable of covering five to six spatial decades through a clever combination of advanced computational techniques, such as local grid-refinement, adaptive grids, or grid-particle combinations \cite{mehl2019adaptive,lahnert2016towards,lagrava2012advances,dupuis2003theory,filippova1998grid}.
The second avenue consists in developing suitable coarse-grained models, operating at the mesoscale, say microns, through the incorporation of effective forces and potentials designed in such a way as to retain the essential effects of the fine-grain scales on the coarse-grained ones, often providing dramatic computational savings. Of course, the two strategies are not mutually exclusive; on the contrary, they should be combined in a synergistic fashion, typically employing as much coarse-graining as possible to reduce the need for high-resolution techniques. In \cite{montessori2019mesoscale}, the second strategy has been successfully applied to the simulation of microdevices. Experiments \cite{raven2009microfluidic,marmottant2009microfluidics} have shown that a soft flowing microfluidic crystal can be designed by air dispersion in the fluid with a flow focuser: the formation of drops is due to the balance between the pressure drop, due to the sudden expansion of the channel, and the shear stress exerted by the continuous phase inside the nozzle. In Fig.~\ref{fig:micro1} (top), we show the typical experimental setup for the production of an ordered dispersion of mono-disperse air droplets. \begin{figure*} \begin{center} \includegraphics[width=.45\linewidth]{fig5.png} \end{center} \caption{(Top). Flow focuser used for the production of air bubbles \cite{raven2009microfluidic,marmottant2009microfluidics}. Gas is injected from the horizontal branch at pressure $P_g$ while the liquid enters at flow rate $Q_l/2$ from the two vertical branches. They are focused into a constriction of width $\simeq 100$ $\mu$m, and the resulting air bubbles are collected downstream in the outlet chamber. (Bottom) Lattice Boltzmann simulation showing the production of roughly mono-disperse fluid droplets within a microfluidic flow focuser.
The red phase represents the dispersed fluid (oil) while the black phase the continuous one (water).} \label{fig:micro1} \end{figure*} In our LB experiments (Fig.~\ref{fig:micro1}, bottom), droplet formation is controlled by tuning i) the dispersed-to-continuous flow ratio $\alpha$ (defined as $\alpha=u_d/2u_c$, where $u_d$ and $u_c$ are the speeds of the dispersed and the continuous phase at the inlet channel) and ii) the mesoscopic force $F_{rep}$ modeling the repulsive effect of a surfactant confined at the fluid interface. The dispersed phase (in red in Fig.~\ref{fig:micro1}, bottom) is pumped with a predefined speed $u_d$ within the horizontal branch, whereas the continuous phase (black) comes from the two vertical branches at speed $u_c$. They are driven into the orifice, where the droplets form, and are finally collected in the outlet chamber. A schematic representation of $F_{rep}$ on the lattice is reported in Fig.~\ref{fig:micro0}. This term enters the LB equation as a forcing contribution acting solely at the fluid interfaces when in close contact. Its analytical expression is \begin{equation} F_{rep} = -A_h [h(x)] n \delta_I, \end{equation} where $\delta_I\propto \nabla\psi$ is a function, proportional to the fluid concentration $\psi$, confining the near-contact force at the fluid interface, while $A_h[h(x)]$ sets the strength of the near-contact interactions. It is equal to a positive constant $A$ if $h<h_{min}$ and it decays as $h^{-3}$ if $h>h_{min}$, with $h_{min}=3 \div 4$ lattice units. Although other functional forms of $A_h[h(x)]$ are certainly possible, this one proves sufficient to capture the effects occurring at the sub-micron scale, such as the stabilization of the fluid film formed between the interfaces and the inhibition of droplet merging.
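A minimal sketch of the near-contact strength $A_h[h]$, together with the dimensionless number $N_c = A\,\Delta x/\sigma$ used below, is given here. Matching the $h^{-3}$ tail continuously at $h_{min}$ is our own assumption, since the text only prescribes the constant plateau and the power-law decay:

```python
def A_h(h, A=1.0, h_min=3.0):
    """Near-contact strength: constant A below h_min, decaying as h^-3
    above it (matched at h_min so A_h is continuous -- our assumption)."""
    return A if h < h_min else A * (h_min / h) ** 3

def N_c(A, dx, sigma):
    """Dimensionless near-contact number N_c = A * dx / sigma:
    N_c << 1 -> capillarity wins, drops merge; N_c ~ 1 -> merging inhibited."""
    return A * dx / sigma
```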
\begin{figure}[h] \begin{center} \centerline{\resizebox{.4\linewidth}{!}{\includegraphics{fig4.pdf}}} \end{center} \caption{Close-up view of the modelling of the near interaction between two droplets. $F_{rep}$ is the repulsive force and $n$ is the unit vector perpendicular to the interfaces, while $x$ and $y$ indicate the positions, at distance $h$, located within the fluid interface.} \label{fig:micro0} \end{figure} An appropriate dimensionless number capturing the competition between the surface tension $\sigma$ and the near-contact forces $F_{rep}$ can be defined as $N_c = A \Delta{x} / \sigma$, where $\Delta{x}$ is the lattice spacing. Usually, if $N_c \ll 1$, capillary effects dominate and drops merge, whereas if $N_c \sim 1$, close contact interactions prevail and droplet fusion is inhibited. A typical arrangement reproducing the latter case is shown in Fig.~\ref{fig:micro1}, obtained for $A_h = 1$ and $N_c = 0.1$. These results suggest that satisfactory compliance with experimental results can be achieved by means of suitable coarse-grained models, which offer dramatic computational savings over grid-refinement methods. However, success or failure of coarse graining must be tested case by case, since approximations which hold for some materials, say dilute microfluidic crystals, may not necessarily apply to other materials, say dense emulsions in which all the droplets are in near-touch with their neighbours. In this respect, significant progress might be possible by resorting to Machine-Learning techniques \cite{lecun2015deep,goodfellow2016deep}, the idea being to semi-automate the development of customized coarse-grained models, as detailed in the next section. \subsection{Machine-learning for LB microfluidics} Machine learning has taken modern science and society by storm.
Even discounting bombastic claims mostly devoid of scientific value, the fact remains that the idea of automating difficult tasks through the aid of properly trained neural networks may add a new dimension to the space of scientific investigation \cite{CARLEO,BRENNER,SINAI}. For a recent critical review, see for instance \cite{BIGDATA}. In the following, we portray a prospective machine-assisted procedure to facilitate the computational design of microfluidic devices for soft mesoscale materials. The idea is to ``learn'' the most suitable coarse-grained expression of the Korteweg tensor, the crucial quantity controlling non-ideal interactions in soft mesoscale materials. The procedure develops through three basic steps: 1) Generate high-resolution data via direct microscale simulations; 2) Generate coarse-grained data upon projection (averaging) of the high-resolution data; 3) Derive the coarse-grained Korteweg tensor using machine learning techniques fed with the data from step 2). The first step consists of performing very high resolution simulations of a microfluidic device, delivering the fluid density $\rho_f(\vec{x}_f,t_f)$ and the flow field $\vec{u}_f(\vec{x}_f,t_f)$ at each lattice location $\vec{x}_f$ and given time instant $t_f$ of a fine-grid simulation with $N_f$ lattice sites and $M_f$ timesteps, for a total of $D_f= 4 N_f M_f$ degrees of freedom in three spatial dimensions. Note that such fine-grain information may also be the result of an underlying molecular dynamics simulation. The second step consists of coarse-graining the high-resolution data to generate the corresponding ``exact'' coarse-grained data. Upon suitable projection, for instance averaging over blocks of $B=b^4$ fine-grain variables, $b$ being the spacetime blocking factor, this provides the corresponding (``exact'') coarse-grained density and velocity $\rho_c$ and $\vec{u}_c$, for a total of $D_c=D_f/B \ll D_f$ degrees of freedom.
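The projection of step 2) is a plain block average; a generic sketch (our own illustrative code) that works for any number of space-time axes reads:

```python
import numpy as np

def project(fine, b):
    """Block-average a fine-grid field over b x ... x b blocks: the
    projection P used to generate the 'exact' coarse-grained data.
    Each axis length must be divisible by the blocking factor b."""
    shape = []
    for n in fine.shape:
        assert n % b == 0, "axis length not divisible by blocking factor"
        shape += [n // b, b]
    # reshape to (n1/b, b, n2/b, b, ...) and average over the b-sized axes
    return fine.reshape(shape).mean(axis=tuple(range(1, 2 * fine.ndim, 2)))
```

For a $d$-dimensional block the degrees of freedom drop by a factor $b^d$, i.e. $D_c = D_f/B$ with $B=b^4$ in the space-time case discussed above.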
The third step is to devise a suitable model for the Korteweg tensor at the coarse scale $x_c = b x_f$. A possible procedure is to postulate parametric expressions of the coarse-grained Korteweg tensor, run the coarse-grained simulations and perform a systematic search in parameter space, by varying the parameters in such a way as to minimize the departure between the ``exact'' expression $K_c = \mathcal{P} K_f$ obtained by projection of the fine-grain simulations and the parametric expression $K_c[\rho_c;\lambda]$, where $\lambda$ denotes the parameters of the coarse-grained model, typically the amplitude and range of the coarse-grained forces. The optimization problem reads as follows: find $\lambda$ such as to minimise the error, \begin{equation} e[\lambda] = \left\lVert K_c - K_c[\lambda] \right\rVert \end{equation} where $ \left\lVert ..\right\rVert$ denotes a suitable metric in the $D_c$-dimensional functional space of solutions. This is a classical and potentially expensive optimization procedure. A possible way to reduce its complexity is to leverage the machine learning paradigm by instructing a suitably designed neural network (NN) to ``learn'' the expression of $K_c$ as a functional of the coarse-grained density field $\rho_c$. Formally: \begin{equation} \label{ML} K_c^{ml} = \sigma_L [W \rho_c] \end{equation} where $\rho_c = \mathcal{P} \rho_f$ is the ``exact'' coarse-grained density and $\sigma_L$ denotes the set of activation functions of an $L$-level deep neural network, with weights $W$. Note that the left hand side is an array of $6 D_c$ values, six being the number of independent components of the Korteweg tensor in three dimensions, while the input array $\rho_c$ contains only $D_c$ entries. Hence, the set of weights in a fully connected NN contains $36 D_c ^2$ entries.
This looks utterly unfeasible, until one recalls that the Korteweg tensor only involves the Laplacian and gradient products of the density field $$ K_{ab}=\lambda [\rho \Delta \rho+\frac{1}{2}(\nabla \rho)^2] \delta_{ab} -\lambda \nabla_a \rho \nabla_b \rho $$ where $a,b=x,y,z$ and $\lambda$ controls the surface tension. The coarse-grained tensor is likely to expose additional nonlocal terms, but certainly no global dependence, meaning that the set of weights connecting the value of the $K$-tensor at a given lattice site to the density field should be of order $O(100)$ at most. For the sake of generality, one may wish to express it in integral form \begin{equation} K_c(\vec{x}_c)= \rho(\vec{x}_c) \int G(\vec{x}_c,\vec{y}_c) \rho(\vec{y}_c) d \vec{y}_c \end{equation} and instruct the machine to learn the kernel $G$. This is still a huge learning task, but not an unfeasible one, the number of parameters being comparable to similar efforts in ab-initio molecular dynamics, whereby the machine learns multi-parameter coarse-grained potentials \cite{CAR,PAR}. Further speed can be gained by postulating the functional expression of the coarse-grained $K$-tensor in formal analogy with recent work on turbulence modelling \cite{RUI}, i.e. based on the basic symmetries of the problem, which further constrain the functional dependence of $K_c$ on the density field. Work is currently in progress to implement the aforementioned ideas. \section{High-performance LB code for bijel materials} \label{par:Colloidal} For purely illustrative purposes, in this section we discuss a different class of complex flowing systems made up of colloidal particles suspended in a binary fluid mixture, such as oil and water.
A notable example of such materials is offered by {\it bijels} \cite{stratford2005colloidal}, soft materials consisting of a pair of bi-continuous fluid domains, frozen into a permanent porous matrix by a dense monolayer of colloidal particles adsorbed onto the fluid-fluid interface. The mechanical properties of such materials, such as elasticity and pore size, can be fine-tuned through the radius and the volume fraction of the particles, typically in the range $0.01 < \phi < 0.1$, corresponding to a number $$ N = \frac{3 \phi}{4 \pi} \left(\frac{L}{R}\right)^3 $$ of colloids of radius $R$ in a box of volume $L^3$. Since the key mechanisms for arrested coarsening are i) the sequestration of the colloids around the interface and ii) the replacement of the interface by the colloid itself, the colloidal radius should be significantly larger than the interface width, $R/w >2$. Given that LB is a diffuse-interface method and the interfaces span a few lattice spacings (say three to five), the spatial hierarchy for a typical large scale simulation with, say, one billion grid points reads as follows: $$ dx=1,\; w=3 \div 5, \; R=10 \div 50,\; L = 1000 $$ With a typical volume fraction $\phi=0.1$, these parameters correspond to about $N \sim 10^3 \div 10^5$ colloids. In order to model bijels, colloidal particles are represented as rigid spheres of radius $R$, moving under the effect of a total force ($\vec{F}_p$) and torque ($\vec{T}_p$), both acting on the center of mass $\vec{r}_p$ of the $p$-th particle. The particle dynamics obeys Newton's equations of motion (EOM): \begin{equation}\label{newton} \frac{d \vec{r}_p}{d t}=\vec{u}_p,\hspace{0.5cm} m_p \frac{d \vec{u}_p}{d t}=\vec{F}_p,\hspace{0.5cm}I_p \frac{d \vec{\omega}_p}{d t}=\vec{T}_p, \end{equation} where $\vec{u}_p$ is the particle velocity and $\vec{\omega}_p$ is the corresponding angular velocity.
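These EOM are advanced with a leapfrog scheme, as discussed below; a minimal kick-drift-kick sketch of the translational part (our own illustrative code, omitting torques, quaternions and the fluid coupling) reads:

```python
import numpy as np

def leapfrog(r, u, force, m, dt, n_steps):
    """Kick-drift-kick leapfrog for dr/dt = u, m du/dt = F(r):
    the translational part of the particle EOM."""
    u = u + 0.5 * dt * force(r) / m          # initial half kick
    for _ in range(n_steps):
        r = r + dt * u                       # drift
        u = u + dt * force(r) / m            # full kick
    u = u - 0.5 * dt * force(r) / m          # re-synchronise velocity with r
    return r, u
```

For a constant force the scheme is exact, which gives a convenient correctness check against the analytic quadratic trajectory.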
The force on each particle takes into account the interactions with the fluid and the inter-particle forces, including the lubrication term \cite{ladd1994numerical,nguyen2002lubrication}. The EOM are solved by a leapfrog approach, using quaternion algebra for the rotational component \cite{rozmanov2010robust,svanberg1997research}. The computation is parallelised using MPI. While for the hydrodynamic part the domain is equally distributed (according to either a 1D, 2D or 3D domain decomposition), for the particles a complete replication of their physical coordinates (position, velocity, angular velocity) is performed. More specifically, $\vec{r}_p$, $\vec{v}_p$ and $\vec{\omega}_p$ are allocated and maintained in all MPI tasks, but each MPI task solves the EOM only for the particles whose centers of mass lie in its sub-domain, defined by the fluid partition. Whenever a particle crosses two or more sub-domains, the force and torque are computed with an MPI reduction and, once the time step is completed, the new values of $\vec{r}_p$, $\vec{v}_p$ and $\vec{\omega}_p$ are broadcast to all tasks. Even at sizeable volume fractions (such as $\phi=0.2$), the number of particles is of the order of ten to twenty thousand, hence much smaller than the number of grid points of the simulation box ($L^3=1024^3$), which is why the ``replicated data'' strategy \cite{smith1991molecular,smith1993molecular} does not significantly affect the MPI communication time (see Fig.~\ref{fig6}). \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{fig6.png} \end{center} \caption{Left: GLUPS versus number of cores measured in a cubic box of linear size $L=1024$, in which spherical colloids are dispersed in a bicontinuous fluid. From the top to the bottom, the box was filled with $N=0$, $N=15407$, $N=154072$, and $N=308144$ colloids, corresponding to a particle volume fraction $\phi$ equal to 0\%, 1\%, 10\% and 20\%, respectively.
Right: Run (wall-clock) time, in seconds, per single time step iteration, $t_{\text{s}}$, with corresponding GLUPS and parallel efficiency, $E_p$, versus the number of computing cores, $n_p$, for the same system. Note that the parallel efficiency is reported in percentage.} \label{fig6} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=.45\linewidth]{fig7.png} \end{center} \caption{Typical morphology of a bijel material obtained by dispersing rigid colloids (of radius $R=5.5$~lu) in a bicontinuous fluid. Particles are adsorbed at the fluid interface and, if their volume fraction is sufficiently high, domain coarsening arrests. Red and blue colors indicate the two fluid components while grey spheres are colloids.} \label{FigLadd} \end{figure*} Results from a typical simulation of a bijel are shown in Fig.~\ref{FigLadd}, in which solid particles accumulate at the interface leading to the arrest of the domain coarsening of the bi-continuous fluid, which in turn supports the formation of a soft and highly porous fluid matrix. Further results are shown in Fig.~\ref{FigShearBijel}, in which the bijel is confined within solid walls moving with opposite speeds $\pm U$, $U = 0.01$, hence subject to a shear $S=2U/H$, $H$ being the channel width. The complex and rich rheology of such confined bijels is a completely open research item, which we are currently pursuing through extensive simulations using LBsoft, an open-source software package for soft glassy emulsion simulations \cite{bonaccorso2020lbsoft}. Figure \ref{FigShearBijel} reports the bijel before and after applying the shear over half a million timesteps. From this figure, it is apparent that the shear breaks the isotropy of the bijel, leading to a highly directional structure aligned along the shear direction. Further simulations show that releasing the shear for another half-million steps does not recover the starting condition, giving evidence of hysteresis.
This suggests the possibility of controlling the final shape of the bijel by properly fine-tuning the magnitude of the applied shear, thereby opening the possibility to exploit the shear to imprint the desired shape to bijels within controlled manufacturing processes. Systematic work along these lines is currently in progress. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\linewidth]{fig8_start.png} \end{center} \begin{center} \includegraphics[width=0.5\linewidth]{fig8.png} \end{center} \caption{Bijel in a channel of $128 \times 128 \times 1024$ lu: top and bottom walls slide in opposite directions (aligned with the mainstream axis $z$). Left-right walls are bounce-back while the mainstream axis $z$ is periodic. The main parameters are as follows: wall speed $U=0.01$, volume fraction $\phi=0.15$, sphere radius $R=5.5$ lu. Top: Initial configuration. Bottom: Elongated structures after half a million steps with applied shear.} \label{FigShearBijel} \end{figure} As to performance, LBsoft delivers GLUPS on one-billion gridpoint configurations on large-scale parallel platforms, with a parallel efficiency ranging from above ninety percent for plain LB (no colloids), down to about fifty percent at twenty percent colloidal volume fraction (see Fig.~\ref{fig6}). Assuming parallel performance can be preserved up to TLUPS on an Exascale computer, one could run a trillion gridpoint simulation (four decades in space) over one million timesteps in about two weeks of wall-clock time. Setting the lattice spacing at $1$ nm to fully resolve the colloidal diameter ($10-100$ nm), this corresponds to a sample of material $10$ microns on a side, over a time span of about one microsecond. By leveraging dynamic grid refinement around the interface, both space and time spans could be boosted by two orders of magnitude. However, the programming burden appears fairly significant, especially due to the presence of the colloidal particles.
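The two-week wall-clock figure above follows from a one-line estimate (illustrative code, function name ours):

```python
def wall_clock_days(sites, steps, lups):
    """Ideal runtime, in days, of an LB run performing `sites * steps`
    site updates at a sustained rate of `lups` updates per second."""
    return sites * steps / lups / 86400.0
```

At 1 TLUPS, $10^{12}$ gridpoints over $10^6$ timesteps require $10^{18}/10^{12} = 10^6$ seconds, i.e. about 11.6 days of ideal wall-clock time.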
Work along these lines is also in progress. \newpage \section{Summary and outlook} \label{par:Conclusions} Summarising, we have discussed some of the main challenges and prospects of Exascale LB simulations of soft flowing systems for the design of novel soft mesoscale materials, such as microfluidic crystals and colloidal bijels. Despite major differences in the basic physics, both systems raise a major challenge to computer modelling, due to the coexistence of dynamic interactions over about six decades in space, from tens of nanometers for near-contact interactions, up to the centimeter scale of the experimental device. Covering six spatial decades by direct numerical simulations is beyond reach even for Exascale computers, which permit spanning basically four (a trillion gridpoints). The remaining two decades can either be simulated via local-refinement methods, such as multigrid or grid-particle hybrid formulations, or by coarse-graining, i.e. subgrid modeling of the near-contact interactions acting below the micron scale. Examples of both strategies have been discussed and commented on. We conclude with two basic take-home messages: 1) Extracting exascale performance from exascale computers requires a concerted multi-parallel approach, no ``magic paradigm'' is available; 2) even with exascale performance secured, only four spatial decades can be directly simulated, hence many problems in soft matter research will still require an appropriate blend of grid-refinement techniques and coarse-grained models. The optimal combination of such two strategies is most likely problem-dependent, and in some fortunate instances, the latter alone may suffice. However, in general, future generations of computational soft matter scientists should be prepared to devise imaginative and efficient ways of combining the two. \section{Acknowledgements} S. S., F. B., M. L., A. M. and A. T.
acknowledge funding from the European Research Council under the European Union's Horizon 2020 Framework Programme (No. FP/2014-2020) ERC Grant Agreement No.739964 (COPMAT). \bibliographystyle{elsarticle-num}
\section{Introduction} We are concerned here with the global analytic hypoellipticity of second order operators of the form \begin{equation} P = \sum_{j,k = 1}^ma_{jk}(x,t)X_jX_k + \sum_{j = 1}^mb_j(x,t)X_j + b_0(t)X_0 + c(x,t)\label{1}\end{equation} on a product of tori, \begin{equation}{\Bbb T}^N = {\Bbb T}^m \times {\Bbb T}^n,\label{2}\end{equation} where $x \in {\Bbb T}^m, t \in {\Bbb T}^n.$ Here the functions $a_{jk},b_j \,$ and $c(x,t)$ may be complex valued, but the `rigid' vector fields \begin{equation} X_j = \sum_{k=1}^m d_{jk}(x) {\partial\over \partial x_k} + \sum_{k=1}^n e_{jk}(x){\partial \over \partial t_k}\label{3}\end{equation} are real. The hypotheses we make are that \begin{equation}\{X_j^\prime = \sum_{k=1}^m d_{jk}(x) {\partial\over \partial x_k}\}\label{4}\end{equation} are independent, $j=1,\ldots,m,$ and that there exists a constant $C$ such that for all smooth $v$, \begin{equation}\sum_{j=0}^m\normL2{X_jv}^2 + \normL2{v}^2 \leq C\{|\Re\ip{Pv}{v}| + \norms{v}{-1}\}.\label{5}\end{equation} For example, if the vector fields $\{X_j\}_{j=0,...,m}$ satisfy the H\"ormander condition that their iterated commutators span the whole tangent space and the matrix $A=(a_{jk})$ is the identity, then one even has a subelliptic estimate, which implies {\sl arbitrary positivity} (an arbitrarily large multiple of the second term on the left, provided one adds a correspondingly large multiple of the negative norm on the right). The positivity of the self-adjoint matrix $A = (a_{jk})$ alone will give an estimate of this form without the second term on the left and with the norm on the right replaced by the $L^2$ norm, but we need very slightly more. For example, the positivity of $A$ together with \begin{equation} -\sum (X_ja_{jk})X_k-\sum ((X_ja_{jk})X_k)^* +\sum b_j(x,t)X_j + (\sum b_j(x,t)X_j)^* + X_0 + X_0^* + c(x,t) > 0\label{6}\end{equation} will suffice, and this in turn would follow from sufficient positivity of the zero order term $c(x,t).$ \par This class of operators generalizes that given in \cite{Cordaro-Himonas 1994}, and in our opinion simplifies the proof. The more flexible proof techniques we employed in \cite{Tartakoff 1976} and \cite{Tartakoff 1978} allow us to handle this broader class of operators. At one point in \cite{Cordaro-Himonas 1994} the authors also prove a theorem of analyticity that is global in some variables and local in others for operators like \begin{equation} P= \left({\partial \over {\partial x_1}}\right)^2 + \left({\partial \over {\partial x_2}}\right)^2 + \left(a(x){\partial \over {\partial t}}\right)^2.\end{equation} Our methods apply to these operators as well (cf. Theorems 2, 3 and 4). \par Our interest in these problems was stimulated by the work of Cordaro and Himonas \cite{Cordaro-Himonas 1994}. \section{Statement and Proofs of the Theorems} \begin{theorem} Let $P$ be a partial differential operator of the form (1) above with real analytic coefficients $a_{jk}(x,t), b_k(x,t)$ and $c(x,t),$ where the real analytic vector fields $\{X_j\}_{j=0,\ldots ,m}$ are `rigid' in the sense of (4). Assume that $P$ satisfies the {\it a priori} estimate (5) for some $C\geq 0.$ Then $P$ is globally analytic hypoelliptic; that is, if $v$ is a distribution on ${\Bbb T} ^N,$ with $Pv$ analytic on ${\Bbb T} ^N,$ then $v$ itself is analytic on ${\Bbb T} ^N.$ \end{theorem} We also state three theorems which are local in some variables and global in others. In so doing, we hope to elucidate the distinction between local and global analyticity. These results are stated for rather explicit, low dimensional operators for easy reading.
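Before proceeding, it may help to recall how an estimate of type (5) arises in the model sum-of-squares case $P=\sum_{j=1}^m X_j^2$: since the $X_j$ are real, $X_j^* = -X_j - \operatorname{div}X_j$, so that (this is the standard computation, sketched here rather than taken from the text)

```latex
-\Re\ip{Pv}{v}
  = -\Re\sum_{j=1}^m\ip{X_j^2v}{v}
  = \sum_{j=1}^m\normL2{X_jv}^2
    + \Re\sum_{j=1}^m\ip{X_jv}{(\operatorname{div}X_j)\,v}.
```

Cauchy-Schwarz then absorbs the last sum, giving $\sum_j\normL2{X_jv}^2 \leq C\{|\Re\ip{Pv}{v}| + \normL2{v}^2\}$; the stronger form (5), with $\normL2{v}^2$ on the left and only the $-1$ norm on the right, is precisely the extra gain supplied by the subelliptic estimate under H\"ormander's condition mentioned above.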
For much fuller and more general results, the reader is referred to the forthcoming paper of Bove and Tartakoff (\cite{Bove-Tartakoff 1994}). The restriction to second order operators is undoubtedly artificial, as the methods of our recent paper with Popivanov \cite{Popivanov-Tartakoff 1994} suggest. First we assume that $x \in {\Bbb T} ^2,$ but that $t \in I,$ where $I$ is an open interval: \begin{theorem} Let the operator $P$ be given by \begin{equation} P= \left({\partial \over {\partial x_1}}\right)^2 + \left({\partial \over {\partial x_2}}\right)^2 + \left(a(x_1,x_2){\partial \over {\partial t}}\right)^2=\sum_1^3 X_j^2.\end{equation} with $x\in {\Bbb T} ^2$ but $t\in I, I$ an interval. Then if $a(x_1,x_2)$ is analytic, zero at $0$ but not identically zero (so that the H\"ormander condition is satisfied for $P$), and $Pu=f$ with $f$ real analytic on ${\Bbb T}^2 \times I,$ then $u$ is also analytic on ${\Bbb T} ^2 \times I.$ \end{theorem} \par\noindent {\bf Remark.} Theorem 2 holds for a wide class of operators of this type. For example, if we denote by $Y_j$ the vector fields $$Y_1 = {\partial \over {\partial x_1}},\,\, Y_2 = {\partial \over {\partial x_2}}, \hbox{ and } Y_3=a(x_1,x_2){\partial \over {\partial t}}$$ then Theorem 2 holds for any second order polynomial in the $Y_j$ \begin{equation} P = \sum_{|\alpha|\leq 2}b_\alpha (x,t) Y_{I_\alpha} \end{equation} with (non-rigid) variable coefficients $b_\alpha (x,t)$ such that (5) holds with $X_j$ replaced by $Y_j.$ Next we look at what happens with $x_1 \in I_1,$ $x_2 \in {\Bbb T} ^1,$ and $t \in I_2,$ when the coefficient $a(x) = a(x_1),$ where the $I_j$ are open intervals: \begin{theorem} Let the operator $P$ be given by \begin{equation} P= \left({\partial \over {\partial x_1}}\right)^2 + \left({\partial \over {\partial x_2}}\right)^2 + \left(a(x_1){\partial \over {\partial t}}\right)^2=\sum_1^3 X_j^2\end{equation} with $x_1 \in I_1,$ $x_2 \in {\Bbb T} ^1,$ and $t \in I_2,$ the $I_j$ being intervals.
Then if $a(x_1)$ is analytic, zero at $0$ but not identically zero (so that the H\"ormander condition is satisfied for $P$), and $Pu=f$ with $f$ real analytic on $I_1\times{\Bbb T} ^1 \times I_2,$ then $u$ is also analytic on $I_1\times{\Bbb T} ^1 \times I_2.$ \end{theorem} Finally we consider the case where $a(x) = a(x_1,x_2),$ not identically zero, has the form $$a^2(x) = a_1^2(x_1)+a^2_2(x_2),$$ \begin{theorem} Let the operator $P$ be given by \begin{equation} P= \left({\partial \over {\partial x_1}}\right)^2 + \left({\partial \over {\partial x_2}}\right)^2 + \left(a_1^2(x_1)+a_2^2(x_2)\right)\left({\partial \over {\partial t}}\right)^2=\sum_1^4 X_j^2\end{equation} with $x\in {\Bbb T} ^m$ but $t\in I, I$ an interval. Then if $a(x)$ is analytic, zero at $0$ but not identically zero (so that the H\"ormander condition is satisfied for $P$), and $Pu=f$ with $f$ real analytic near $0,$ then $u$ is real analytic near $0.$ \end{theorem} \par\noindent {\bf Remark } These theorems have evident microlocal versions and allow suitable variable coefficient combinations of the appropriate vector fields as well as the addition of lower order terms in these vector fields (${\partial \over {\partial x_1}}, {\partial \over {\partial x_2}},$ and $a(x){\partial \over {\partial t}}$ for Theorem 2, $a_1(x_1){\partial \over {\partial t}}$ in the case of Theorem 3, and $a_1(x_1){\partial \over {\partial t}}$ {\it and} $a_2(x_2){\partial \over {\partial t}}$ in the case of the Theorem 4). \section{Proofs of the Theorems} For the moment we shall assume that $v$ and $u$ are known to belong to $C^\infty ,$ and at the end make some comments about the $C^\infty$ regularity of the solutions. \subsection{Proof of Theorem 1} Using well known results, it suffices to show that, in $L^2$ norm, we have Cauchy estimates on derivatives of $v$ of the form \begin{equation}\normL2{X^\alpha T^\beta v} \leq C^{|\alpha| + |\beta| +1}|\alpha|! |\beta|!\label{7}\end{equation} for all $\alpha$ and $\beta$. 
And microlocally, since the operator is elliptic in the complement of the span $W$ of the vector fields $\partial \over \partial t_j ,$ it suffices to look near $W$, and there all derivatives are bounded by powers of the $\partial \over \partial t_j$ alone. That is, modulo analytic errors, we may take $\alpha = 0$ and indeed $\beta = (0,\ldots,0,b,0,\ldots,0),$ as follows by integration by parts and a simple induction. Here $T = (T_1,\ldots,T_m)$ with \begin{equation} T_k = {\partial \over {\partial t_k}},\label{8}\end{equation} $k = 1, \ldots, m,$ and we shall take $T^\beta = T_1^b$ for simplicity. In particular, note that \begin{equation} [T_l, P] = \sum_{j,k = 1}^ma_{jk}^\prime(x,t)X_jX_k + \sum_{j = 1}^mb_j^\prime(x,t)X_j + c^\prime(x,t)\label{9}\end{equation} and thus \begin{equation} |\Re\ip{[P,T_1^b]v}{T_1^bv}| \leq C|\sum_{b^\prime \geq 1}{b\choose{b^\prime}} \ip{c^{(b^\prime)}X^2T_1^{b-b^\prime}v} {T_1^bv}| \label{10}\end{equation} $$\leq |\sum_{b^\prime \geq 1}{b\choose{b^\prime}} \ip{c^{(b^\prime +1(-1))}XT_1^{b-b^\prime}v} {(X)T_1^bv}|,$$ $$\leq l.c.\sum_{b^\prime \geq1}b^{2b^\prime} C_c^{2(b^\prime +1)} \normL2{XT_1^{b-b'}v}^2 + s.c.\normL2{(X)T_1^bv}^2,$$ where the $(X)$ on the right represents the fact that this $X$ may or may not be present, depending on whether $c^{(b')}$ received one more derivative or not. We have written $X^2$ for a generic $X_jX_k$ as well as $C_c$ for the largest of the constants which appear in the Cauchy estimates for the analytic coefficients in $P.$ The large constant ($l.c.$) and small constant ($s.c.$) are independent of $b$, of course, the small constant being small enough to allow this term to be absorbed on the left hand side of the inequality.
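The binomial structure in (10) comes from iterating (9); as a sketch (ours, using that by (9) each application of $\operatorname{ad} T_1$ falls only on the coefficients):

```latex
% With (\operatorname{ad} T_1)(Q) := [T_1, Q], the Leibniz rule for
% iterated commutators gives
\[
T_1^{\,b}\, P \;=\; \sum_{b'=0}^{b} {b \choose b'}\,
(\operatorname{ad} T_1)^{b'}(P)\; T_1^{\,b-b'},
\qquad
(\operatorname{ad} T_1)^{b'}(P) \;=\; \sum_{j,k} a_{jk}^{(b')} X_j X_k
+ \sum_j b_j^{(b')} X_j + c^{(b')},
\]
% where the superscript (b') denotes b' derivatives in t_1 of the
% coefficient. The b' = 0 term is P T_1^b itself; the terms with
% b' \ge 1 constitute [T_1^b, P] and produce the sum estimated in (10).
```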
Absorbing yields: $$\sum_j\normL2{X_jT_1^bu}^2 + \normL2{T_1^bu}^2 \leq C\{\normL2{T_1^bPu}^2 + \sum_{b\geq b'\geq 1} b^{2b^\prime} C_c^{2(b^\prime +1)} \normL2{XT_1^{b-b'}u}^2\}$$ which, iterated until the last term on the right is missing, gives $$\sum_j\normL2{X_jT_1^bu}^2 + \normL2{T_1^bu}^2 \leq C_{(Pu)}^{b+1}b!,$$ which implies the analyticity of $u.$ Finally, we have taken the solution to belong to $C^\infty;$ for the $C^\infty$ behavior of the solution, the methods of \cite{Tartakoff 1973} which utilize the hypoellipticity techniques of \cite{Hormander 1963} will lead quickly to the $C^\infty$ result. \subsection{Proof of Theorem 2} We recall that we are (for simplicity) taking the operator $P$ to have the particular form \begin{equation} P= \left({\partial \over {\partial x_1}}\right)^2 + \left({\partial \over {\partial x_2}}\right)^2 + \left(a(x){\partial \over {\partial t}}\right)^2.\end{equation} Again we take $u \in C^\infty,$ since subellipticity is a local phenomenon and the operators we are dealing with are clearly subelliptic under our hypotheses. And again it suffices to estimate derivatives in $L^2$ norm, i.e., to show that with $\phi , \psi$ of compact support, and \begin{equation} X_j = {\partial \over {\partial x_1}}, {\partial \over {\partial x_2}}, {\hbox{ or }} a(x){\partial \over {\partial t}},\label{def:X_j}\end{equation} we will have \begin{equation} \sum_j\normL2{X_j\psi (x)\phi (t) Z^p u} \leq C_u^{p+1}p!
\end{equation} where each $Z$ is (also) of the form $Z={\partial \over {\partial x_1}}, {\partial \over {\partial x_2}},$ or $a(x){\partial \over {\partial t}}.$ That this will suffice is due to a result of Helffer and Mattera (\cite{Helffer-Mattera 1980}), but it won't save us any work, as we find (\cite{Tartakoff 1976}, \cite{Tartakoff 1978}) that in trying to bound powers of the first two types of $Z$ we are led to establish analytic type growth of derivatives measured by powers of ${{\partial}\over{\partial t}}$ itself. Actually we shall show that for any given $N,$ there exist a localizing function $\phi_N(t) \in C_0^{\infty}$ and a function $\psi (x)$ (independent of $N$) with \begin{equation} \sum_j\normL2{X_j\psi (x)\phi_N(t) Z^p u} \leq C_u^{p+1}N^p, \qquad p \leq N. \end{equation} And in fact the functions $\phi_N(t)$ will be chosen to satisfy \begin{equation} |D^r \phi_N(t)| \leq C_u^{r+1}N^r, \qquad r\leq N \end{equation} uniformly in $N.$ \par The philosophy of all $L^2$ proofs is to replace $v$ in (5) by $\psi (x)\phi (t) \tilde{Z}^p u$ with $\tilde{Z}={\partial \over {\partial x_1}}, {\partial \over {\partial x_2}},$ or ${\partial \over {\partial t}}$ and commute $\psi (x)\phi (t) \tilde{Z}^p$ past the differential operator $P$.
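Localizing functions with the bounds (16) can be produced by the classical Ehrenpreis construction; here is a sketch (ours; the particular mollifier $\rho$ is a choice made for illustration):

```latex
% Ehrenpreis-type localization. Fix \rho \ge 0 smooth, supported in
% |t| \le 1, with \int \rho = 1, and set \rho_\epsilon(t) =
% \epsilon^{-1}\rho(t/\epsilon). For a slightly smaller set E, with
% d the distance from E to the complement of the target neighborhood,
% define the N-fold convolution
\[
\phi_N \;=\; \chi_E * \underbrace{\rho_{d/2N} * \cdots *
\rho_{d/2N}}_{N \text{ factors}}\, .
\]
% For r \le N, each of the r derivatives may be placed on a different
% factor, and \|\rho'_{d/2N}\|_{L^1} \le C N/d, so
\[
|D^r\phi_N(t)| \;\le\; \Bigl(\frac{CN}{d}\Bigr)^{r}
\;\le\; C_1^{\,r+1} N^r, \qquad r \le N,
\]
% which is (16); moreover \phi_N \equiv 1 on E and \phi_N is supported
% in the fixed neighborhood, uniformly in N.
```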
For argument's sake, and since everything else is simpler, we may restrict ourselves to the worst case which is given by $\tilde{Z}={\partial \over {\partial t}}.$ In doing so, we encounter the errors \begin{equation} [{\partial\over {\partial x_1}},\psi \phi ({\partial \over {\partial t}})^p], \,\, [{\partial\over {\partial x_2}},\psi\phi({\partial \over {\partial t}})^p], {\hbox{ and }} [a(x){\partial\over {\partial t}},\psi\phi({\partial \over {\partial t}})^p] \end{equation} Thus, starting with a given value of $p$, the left hand side of the {\it a priori} inequality (5) will bound $X_j\psi (x)\phi (t) ({\partial \over {\partial t}})^pu$ in $L^2$ norm (after taking the inner product on the right and integrating by parts one derivative to the right) by \begin{equation} \normL2{{{\partial \psi (x)\phi (t)}\over {\partial x}} ({\partial \over {\partial t}})^pu} {\hbox{ and }} \normL2{a(x){{\partial \psi (x)\phi (t)} \over {\partial t}} ({\partial \over {\partial t}})^pu}, \end{equation} (and related terms arising from the integration by parts, terms which exhibit the same qualitative behavior as these). So, at the very least, we have bounded $ \normL2{X_j\psi (x)\phi (t) ({\partial \over {\partial t}})^pu}$ by \begin{equation} \normL2{\psi' (x)\phi (t) ({\partial \over {\partial t}})^pu} {\hbox{ and }} \normL2{a(x)\psi (x)\phi' (t)({\partial \over {\partial t}})^pu}. 
\label{baderrors}\end{equation} In the first term, we have lost the `good' derivative ${\partial \over {\partial x}}$ and seen it appear on the localizing function, but we cannot iterate this process, since the {\it a priori} estimate (5) is only truly effective when a `good' derivative is preserved. When no `good' derivative is preserved, as we have seen often enough, analyticity (in {\em that} variable) can be obtained only in the global situation, where the derivative that appeared on the localizing function can be absorbed into a {\em constant} by the introduction of a partition of unity (in that variable; in this case it is the $x,$ or toroidal, variable). The second type of term in (\ref{baderrors}) is actually good, since the factor $a(x)$ will combine with one of the `bad' derivatives ${\partial \over {\partial t}}$ to give a `good' derivative $Z=a(x){\partial \over{\partial t}}$ which may be iterated under (5). That is, modulo terms which lead to global analyticity in $x,$ we have the iteration schema \begin{equation}\sum_j\normL2{X_j\psi (x)\phi (t) ({\partial \over {\partial t}})^pu} \rightarrow C\sum_j\normL2{X_j\psi (x)\phi' (t) ({\partial \over {\partial t}})^{p-1}u}\end{equation} which may indeed be iterated.
The result of multiple iterations is, for $\phi(t) =\phi_N(t)$ satisfying the estimates (16) but the localizations in $x$ merely smooth and subject to $\sum_k\psi_k(x) = 1$ (none will ever receive more than a couple of derivatives), $$\sum_{j,k}\normL2{X_j\psi_k (x)\phi_N (t) ({\partial \over {\partial t}})^pu}\leq \sum_{{j,k}\atop{p'\leq p}} C^{p'}\normL2{\psi_k (x)\phi_N^{(p-p')}(t)({\partial \over {\partial t}})^{p'}Pu} + \sum_{{j,k}\atop{p'\leq p}} C^pN^{p'}\normL2{(X_j)\psi_k (x)\phi_N^{(p')}(t)u}.$$ Since $p\leq N$ and $N^N \leq C^{N+1}N!$ by Stirling's formula, under the bounds (16) this yields the desired analyticity, which is local in $t$ but global in $x.$ \subsection{Proof of Theorem 3} The new ingredient in Theorem 3 is that the function $a(x)$ is now of a more special form. Thus it is only on the hypersurface $x_1=0$ that the operator $P$ is not elliptic; if in the above proof we replace the compactly supported function $\psi (x)$ by a product: $$\psi (x) = \psi_1(x_1)\psi_2(x_2),$$ with both $\psi_j(s)$ equal to one near $s=0,$ then when derivatives enter on $\psi_1(x_1),$ the support of $\psi_1^\prime $ is contained in the elliptic region, and only in $x_2$ does one need to pass to further and further patches, ultimately using a (finite) partition of unity on the torus in $x_2.$ \subsection{Proof of Theorem 4} The new ingredient in Theorem 4 is that there are four vector fields, $${\partial \over{\partial x_1}}, {\partial \over{\partial x_2}}, a_1(x_1){\partial \over{\partial t}}, {\hbox{ and }} a_2(x_2){\partial \over{\partial t}}.$$ The above considerations now apply to $x_1$ and $x_2$ separately, since if {\em either} $x_1\neq 0$ {\em or} $x_2\neq 0$ we are in the elliptic region where the solution is known to be analytic. \par\noindent{\bf Remark.} It is not hard to see that derivatives in $x_1$ and $x_2$ {\em always} behave well, i.e.
that $(x_1, x_2, t; \xi_1, \xi_2, \tau)$ is never in the analytic wave front set $WF_A(u)$ for $(\xi_1, \xi_2) \neq (0,0)$ whenever this is true (punctually) of $Pu,$ since only points of the form $(x_1, x_2, t; 0,0,\tau)$ are characteristic for $P.$ Hence the above theorems are actually ``microlocal(-global)'' in a sense which is fairly evident, much as in \cite{Derridj-Tartakoff 1993c}.
\section{Introduction} This is a continuation of our previous work \cite{ot} on the Haagerup approximation property (HAP) for a von Neumann algebra. The origin of the HAP is the remarkable paper \cite{haa3}, where U. Haagerup proved that the reduced group $\mathrm{C}^*$-algebra of the non-amenable free group has Grothendieck's metric approximation property. After his work, M. Choda \cite{cho} showed that a discrete group has the HAP if and only if its group von Neumann algebra has a certain von Neumann algebraic approximation property with respect to the natural faithful normal tracial state. Furthermore, P. Jolissaint \cite{jol} studied the HAP in the framework of finite von Neumann algebras. In particular, it was proved that it does not depend on the choice of a faithful normal tracial state. In the last few years, the Haagerup type approximation property for quantum groups with respect to the Haar states has been actively investigated by many researchers (e.g.\ \cite{br1,br2,dfsw,cfy,kv,le}). The point here is that the Haar state on a quantum group is not necessarily tracial, and so to fully understand the HAP for quantum groups, we need to characterize this property in the framework of arbitrary von Neumann algebras. In the former work \cite{ot}, we introduced the notion of the HAP for arbitrary von Neumann algebras in terms of the standard form. Namely, the HAP means the existence of contractive completely positive compact operators on the standard Hilbert space approximating the identity. In \cite{cs}, M. Caspers and A. Skalski independently introduced the notion of the HAP based on the existence of completely positive maps approximating the identity with respect to a given faithful normal semifinite weight such that the associated implementing operators on the GNS Hilbert space are compact. Now one may wonder whether these two approaches are different or not.
Actually, by combining several results in \cite{ot} and \cite{cs}, it is possible to show that these two formulations are equivalent. (See \cite{cost}, \cite[Remark 5.8]{ot} for details.) This proof, however, relies on the permanence results of the HAP for a core von Neumann algebra. One of our purposes in the present paper is to give a simple and direct proof of the above mentioned equivalence. Our strategy is to use the positive cones due to H. Araki. He introduced in \cite{ara} a one-parameter family of positive cones $P^\alpha$ with a parameter $\alpha$ in the interval $[0, 1/2]$ that is associated with a von Neumann algebra admitting a cyclic and separating vector. This family ``interpolates'' the three distinguished cones $P^0$, $P^{1/4}$ and $P^{1/2}$, which are also denoted by $P^\sharp$, $P^\natural$ and $P^\flat$ in the literature \cite{t2}. Among them, the positive cone $P^\natural$ at the middle point plays remarkable roles in the theory of the standard representation \cite{ara,co1,haa1}. See \cite{ara,ko1,ko2} for comprehensive studies of that family. In view of the positive cones $P^\alpha$, on the one hand, our definition of the HAP is, of course, related to $P^\natural$. On the other hand, the associated $L^2$-GNS implementing operators in the definition due to Caspers and Skalski are, in fact, ``completely positive'' with respect to $P^\sharp$. Motivated by these facts, we will introduce the notion of the ``interpolated'' HAP called $\alpha$-HAP and prove the following result (Theorem \ref{thm:equiv}): \begin{thme} A von Neumann algebra $M$ has the $\alpha$-HAP for some $\alpha\in[0, 1/2]$ if and only if $M$ has the $\alpha$-HAP for all $\alpha\in[0, 1/2]$. \end{thme} As a consequence, this gives a direct proof that the two definitions of the HAP introduced in \cite{cs,ot} are equivalent.
In the second part of the present paper, we discuss the Haagerup approximation property for non-commutative $L^p$-spaces ($1<p<\infty$) \cite{am,haa2,han,izu,ko3,te1,te2}. One can introduce the natural notion of the complete positivity of operators on $L^p(M)$, and hence we will say that $M$ has the $L^p$-HAP when there exists a net of completely positive compact operators on $L^p(M)$ approximating the identity. Since $L^2(M)$ is the standard form of $M$, it follows from the definition that a von Neumann algebra $M$ has the HAP if and only if $M$ has the $L^2$-HAP. Furthermore, by using the complex interpolation method due to A. P. Calder\'{o}n \cite{ca}, we can show the following result (Theorem \ref{thm:L^p-HAP3}): \begin{thme} Let $M$ be a von Neumann algebra. Then the following statements are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{{\rm (\arabic{enumi})}}\renewcommand{\itemsep}{0pt} \item $M$ has the HAP; \item $M$ has the $L^p$-HAP for all $1<p<\infty$; \item $M$ has the $L^p$-HAP for some $1<p<\infty$. \end{enumerate} \end{thme} We remark that a von Neumann algebra $M$ has the completely positive approximation property (CPAP) if and only if $L^p(M)$ has the CPAP for some/all $1\leq p<\infty$. In the case where $p=1$, this was proved by E. G. Effros and E. C. Lance in \cite{el}. In general, this is due to M. Junge, Z-J. Ruan and Q. Xu in \cite{jrx}. Therefore Theorem B is the HAP version of this result. \vspace{10pt} \noindent \textbf{Acknowledgments.} The authors would like to thank Marie Choda and Yoshikazu Katayama for their encouragement and fruitful discussion, and Martijn Caspers and Adam Skalski for valuable comments on our work. They also would like to thank Yoshimichi Ueda for stimulating discussion. \section{Preliminaries} We first fix the notation and recall several facts studied in \cite{ot}. Let $M$ be a von Neumann algebra.
We denote by $M_{\mathrm{sa}}$ and $M^+$ the sets of all self-adjoint elements and all positive elements in $M$, respectively. We also denote by $M_*$ and $M_*^+$ the space of all normal linear functionals and all positive normal linear functionals on $M$, respectively. The set of faithful normal semifinite (f.n.s.) weights is denoted by $W(M)$. Recall the definition of a standard form of a von Neumann algebra. \begin{definition}[{\cite[Definition 2.1]{haa1}}] Let $(M, H, J, P)$ be a quadruple, where $M$ denotes a von Neumann algebra, $H$ a Hilbert space on which $M$ acts, $J$ a conjugate-linear isometry on $H$ with $J^2=1_H$, and $P\subset H$ a closed convex cone which is self-dual, i.e., $P=P^\circ$, where $P^\circ :=\{\xi\in H \mid \langle\xi, \eta\rangle\geq 0 \ \text{for all } \eta\in P\}$. Then $(M, H, J, P)$ is called a {\em standard form} if the following conditions are satisfied: \begin{enumerate}\renewcommand{\labelenumi}{{\rm (\arabic{enumi})}}\renewcommand{\itemsep}{0pt} \item $J M J= M'$; \item $J\xi=\xi$ for any $\xi\in P$; \item $aJaJ P\subset P$ for any $a\in M$; \item $JcJ=c^*$ for any $c\in \mathcal{Z}(M):= M\cap M'$. \end{enumerate} \end{definition} \begin{remark} In \cite{ah1}, Ando and Haagerup proved that the condition (4) in the above definition can be removed. \end{remark} We next recall how each f.n.s.\ weight $\varphi$ gives rise to a standard form. We refer readers to the book of Takesaki \cite{t2} for details. Let $M$ be a von Neumann algebra with $\varphi\in W(M)$. We write \[ n_\varphi :=\{x\in M \mid \varphi(x^*x)<\infty\}. \] The Hilbert space $H_\varphi$ is the completion of $n_\varphi$ with respect to the norm \[ \|x\|_\varphi^2 :=\varphi(x^*x)\quad \text{for } x\in n_\varphi. \] We write the canonical injection $\Lambda_\varphi\colon n_\varphi\to H_\varphi$.
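For orientation, here is a standard illustrative example of this construction (ours, not part of the original text): the canonical trace on $\mathbb{B}(\ell^2)$.

```latex
% Example (standard). Let M = \mathbb{B}(\ell^2) and
% \varphi = \operatorname{Tr}, the canonical trace. Then
%   n_\varphi = \{x \in M : \operatorname{Tr}(x^*x) < \infty\}
% is the ideal of Hilbert--Schmidt operators, H_\varphi = HS(\ell^2)
% with \langle x, y \rangle = \operatorname{Tr}(y^*x), and
% \Lambda_\varphi is the inclusion map. Since \operatorname{Tr} is a
% trace, the involution x \mapsto x^* is already isometric, so
\[
S_\varphi x = x^*, \qquad \Delta_\varphi = 1_{H_\varphi}, \qquad
J_\varphi x = x^*,
\]
% and the self-dual cone is the cone of positive Hilbert--Schmidt
% operators; thus (\mathbb{B}(\ell^2), HS(\ell^2), {}^*, HS(\ell^2)^+)
% is a standard form.
```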
Then \[ \mathcal{A}_\varphi :=\Lambda_\varphi(n_\varphi\cap n_\varphi^*) \] is an achieved left Hilbert algebra with the multiplication \[ \Lambda_\varphi(x)\cdot\Lambda_\varphi(y) :=\Lambda_\varphi(xy)\quad \text{for } x, y\in n_\varphi\cap n_\varphi^* \] and the involution \[ \Lambda_\varphi(x)^\sharp :=\Lambda_\varphi(x^*)\quad \text{for } x\in n_\varphi\cap n_\varphi^*. \] Let $\pi_\varphi$ be the corresponding representation of $M$ on $H_\varphi$. We always identify $M$ with $\pi_\varphi(M)$. We denote by $S_\varphi$ the closure of the conjugate-linear operator $\xi\mapsto\xi^\sharp$ on $H_\varphi$, which has the polar decomposition \[ S_\varphi =J_\varphi\Delta_\varphi^{1/2}, \] where $J_\varphi$ is the modular conjugation and $\Delta_\varphi$ is the modular operator. The modular automorphism group $(\sigma_t^\varphi)_{t\in\mathbb{R}}$ is given by \[ \sigma_t^\varphi(x):=\Delta_\varphi^{it}x\Delta_\varphi^{-it} \quad\text{for } x\in M. \] For $\varphi\in W(M)$, we denote the centralizer of $\varphi$ by \[ M_\varphi:=\{x\in M \mid \sigma_t^\varphi(x)=x\ \text{for } t\in\mathbb{R}\}. \] Then we have a self-dual positive cone \[ P_\varphi^\natural :=\overline{\{\xi(J_\varphi\xi) \mid \xi\in\mathcal{A}_\varphi\}} \subset H_\varphi. \] Note that $P_\varphi^\natural$ is given by the closure of the set of $\Lambda_\varphi(x\sigma_{i/2}^\varphi(x)^*)$, where $x\in n_\varphi\cap n_\varphi^*$ is entire with respect to $\sigma^\varphi$. Therefore the quadruple $(M, H_\varphi, J_\varphi, P_\varphi^\natural)$ is a standard form. Thanks to \cite[Theorem 2.3]{haa1}, a standard form is, in fact, unique up to a spatial isomorphism, and so it is independent of the choice of an f.n.s.\ weight $\varphi$. Let us consider the $n\times n$ matrix algebra $\mathbb{M}_n$ and the normalized trace $\mathrm{tr}_n$. The algebra $\mathbb{M}_n$ becomes a Hilbert space with the inner product $\langle x, y\rangle:=\mathrm{tr}_n(y^*x)$ for $x,y\in\mathbb{M}_n$.
We write the canonical involution $J_{\mathrm{tr}_n}\colon x\mapsto x^*$ for $x\in \mathbb{M}_n$. Then the quadruple $(\mathbb{M}_n, \mathbb{M}_n, J_{\mathrm{tr}_n}, \mathbb{M}_n^+)$ is a standard form. In the following, for a Hilbert space $H$, $\mathbb{M}_n(H)$ denotes the tensor product Hilbert space $H\otimes \mathbb{M}_n$. \begin{definition}[{\cite[Definition 2.2]{mt}}] Let $(M, H, J, P)$ be a standard form and $n\in\mathbb{N}$. A matrix $[\xi_{i, j}]\in\mathbb{M}_n(H)$ is said to be {\em positive} if \[ \sum_{i, j=1}^nx_iJx_jJ\xi_{i, j}\in P \quad \text{for all } x_1,\dots,x_n\in M. \] We denote by $P^{(n)}$ the set of all positive matrices $[\xi_{i, j}]$ in $\mathbb{M}_n(H)$. \end{definition} \begin{proposition}[{\cite[Proposition 2.4]{mt}}, {\cite[Lemma 1.1]{sw1}}] Let $(M, H, J, P)$ be a standard form and $n\in\mathbb{N}$. Then $(\mathbb{M}_n(M), \mathbb{M}_n(H), J\otimes J_{\mathrm{tr}_n}, P^{(n)})$ is a standard form. \end{proposition} Next, we will introduce the complete positivity of a bounded operator between standard Hilbert spaces. \begin{definition} \label{defn:cpop} Let $(M_1, H_1, J_1, P_1)$ and $(M_2, H_2, J_2, P_2)$ be two standard forms. We will say that a bounded linear (or conjugate-linear) operator $T\colon H_1\to H_2$ is {\em completely positive} if $(T\otimes1_{\mathbb{M}_n})P_1^{(n)}\subset P_2^{(n)}$ for all $n\in\mathbb{N}$. \end{definition} \begin{definition}[{\cite[Definition 2.7]{ot}}] \label{defn:HAP} A W$^*$-algebra $M$ has the {\em Haagerup approximation property} (HAP) if there exists a standard form $(M, H, J, P)$ and a net of contractive completely positive (c.c.p.) compact operators $T_n$ on $H$ such that $T_n\to 1_{H}$ in the strong topology. \end{definition} Thanks to \cite[Theorem 2.3]{haa1}, this definition does not depend on the choice of a standard form. We also remark that the weak convergence of a net $T_n$ in the above definition is sufficient.
In fact, we can arrange a net $T_n$ such that $T_n\to 1_H$ in the strong topology by taking suitable convex combinations. Consider the case where $M$ is $\sigma$-finite with a faithful state $\varphi\in M_*^+$. We denote by $(H_\varphi, \xi_\varphi)$ the GNS Hilbert space with the cyclic and separating vector associated with $(M, \varphi)$. If $M$ has the HAP, then we can recover a net of c.c.p.\ maps on $M$ approximating the identity with respect to $\varphi$ such that the associated implementing operators on $H_\varphi$ are compact. \begin{theorem}[{\cite[Theorem 4.8]{ot}}] \label{thm:sigma-finite} Let $M$ be a $\sigma$-finite von Neumann algebra with a faithful state $\varphi\in M_*^+$. Then $M$ has the HAP if and only if there exists a net of normal c.c.p.\ maps $\Phi_n$ on $M$ such that \begin{itemize} \item $\varphi\circ\Phi_n\leq \varphi$; \item $\Phi_n\to\mathrm{id}_{M}$ in the point-ultraweak topology; \item The operator $T_n$ defined below is a c.c.p.\ compact operator on $H_\varphi$ and $T_n\to 1_{H_\varphi}$ in the strong topology: \[ T_n(\Delta_\varphi^{1/4}x\xi_\varphi) =\Delta_\varphi^{1/4}\Phi_n(x)\xi_\varphi \ \text{for } x\in M. \] \end{itemize} \end{theorem} This translation of the HAP looks similar to the following HAP introduced by Caspers and Skalski in \cite{cs}. \begin{definition}[{\cite[Definition 3.1]{cs}}] \label{defn:CS1} Let $M$ be a von Neumann algebra with $\varphi\in W(M)$. We will say that $M$ has the \emph{Haagerup approximation property with respect to $\varphi$} in the sense of \cite{cs} (CS-HAP$_\varphi$) if there exists a net of normal c.p.\ maps $\Phi_n$ on $M$ such that \begin{itemize} \item $\varphi\circ\Phi_n\leq\varphi$; \item The operator $T_n$ defined below is compact and $T_n\to 1_{H_\varphi}$ in the strong topology: \[ T_n\Lambda_\varphi(x):=\Lambda_\varphi(\Phi_n(x)) \quad \mbox{for } x\in n_\varphi.
\] \end{itemize} \end{definition} Here are two apparent differences between Theorem \ref{thm:sigma-finite} and Definition \ref{defn:CS1}, namely, the presence of $\Delta_\varphi^{1/4}$, and the contractivity assumption on the $\Phi_n$'s. Actually, it is possible to show that the notion of the CS-HAP$_\varphi$ does not depend on the choice of $\varphi$ \cite[Theorem 4.3]{cs}. Furthermore, the $\Phi_n$'s can be taken to be contractive. (See Theorem \ref{thm:cpmap}.) The proof of the weight-independence presented in \cite{cs} relies on crossed product techniques. Here, let us present a direct proof of the weight-independence of the CS-HAP. \begin{lemma}[{\cite[Theorem 4.3]{cs}}] \label{lem:CS-indep} The CS-HAP does not depend on the choice of a weight. Namely, let $\varphi, {\psi}\in W(M)$. Then $M$ has the CS-HAP$_\varphi$ if and only if $M$ has the CS-HAP$_{\psi}$. \end{lemma} \begin{proof} Suppose that $M$ has the CS-HAP$_\varphi$. Let $\Phi_n$ and $T_n$ be as in the statement of Definition \ref{defn:CS1}. Note that an arbitrary ${\psi}\in W(M)$ is obtained from $\varphi$ by combining the following four operations: \begin{enumerate} \item $\varphi\mapsto \varphi\otimes\Tr$, where $\Tr$ denotes the canonical tracial weight on $\mathbb{B}(\ell^2)$; \item $\varphi\mapsto \varphi_e$, where $e\in M_\varphi$ is a projection; \item $\varphi\mapsto \varphi\circ\alpha$, $\alpha\in \Aut(M)$; \item $\varphi\mapsto \varphi_h$, where $h$ is a non-singular positive operator affiliated with $M_\varphi$ and $\varphi_h(x):=\varphi(h^{1/2}xh^{1/2})$ for $x\in M^+$. \end{enumerate} For its proof, see the proof of \cite[Th\'{e}or\`{e}me 1.2.3]{co1} or \cite[Corollary 5.8]{st}. Hence it suffices to consider each operation. (1) Let ${\psi}:=\varphi\otimes\Tr$. Take an increasing net of finite rank projections $p_n$ on $\ell^2$. Then $\Phi_n\otimes (p_n\cdot p_n)$ does the job, where $p_n\cdot p_n$ means the map $x\mapsto p_n xp_n$. (2) Let $e\in M_\varphi$ be a projection.
Set ${\psi}:=\varphi_e$ and $\Psi_n:=e\Phi_n(e\cdot e)e$. Then we have ${\psi}\circ\Psi_n\leq{\psi}$. Indeed, for $x\in (eMe)_+$, we obtain \[ {\psi}(x) = \varphi(exe) \geq \varphi(\Phi_n(exe)) \geq \varphi(e\Phi_n(exe)e) = {\psi}(\Psi_n(x)). \] Moreover for $x\in n_\varphi$, we have \begin{align*} \Lambda_{\varphi_e}(\Psi_n(exe)) &= eJeJ \Lambda_\varphi(\Phi_n(exe)) \\ &= eJeJ T_n \Lambda_\varphi(exe) \\ &= eJeJ T_neJeJ \Lambda_{\varphi_e}(exe). \end{align*} Since $eJeJ T_n eJeJ$ is compact, we are done. (3) Let ${\psi}:=\varphi\circ\alpha$. We identify $H_\psi$ with $H_\varphi$ by putting $\Lambda_{\psi}=\Lambda_\varphi\circ\alpha$. Then we obtain the canonical unitary implementation $U_\alpha$ which maps $\Lambda_\varphi(x)\mapsto\Lambda_{\psi}(\alpha^{-1}(x))$ for $x\in n_\varphi$. Set $\Psi_n:=\alpha^{-1}\circ\Phi_n\circ\alpha$. Then we have \[ {\psi}(x)=\varphi(\alpha(x)) \geq\varphi(\Phi_n(\alpha(x))) ={\psi}(\Psi_n(x)) \quad\text{for } x\in M^+, \] and \[ U_\alpha T_nU_\alpha^*\Lambda_{\psi}(x) =U_\alpha T_n\Lambda_\varphi(\alpha(x)) =U_\alpha \Lambda_\varphi(\Phi_n(\alpha(x))) =\Lambda_{\psi}(\Psi_n(x)) \quad\text{for } x\in n_\varphi. \] Since $U_\alpha T_nU_\alpha^*$ is compact, we are done. (4) This case is proved in \cite[Proposition 4.2]{cs}. Let us sketch out its proof for readers' convenience. Let $e(\cdot)$ be the spectral resolution of $h$ and put $e_n:=e([1/n,n])$ for $n\in\mathbb{N}$. Considering $\varphi_{he_n}$, we may and do assume that $h$ is bounded and invertible by \cite[Lemma 4.1]{cs}. Put $\Psi_n(x):=h^{-1/2}\Phi_n(h^{1/2}xh^{1/2})h^{-1/2}$ for $x\in M$. Then we have $\varphi_h\circ\Psi_n\leq\varphi_h$, and the associated implementing operator is given by $h^{-1/2}T_nh^{1/2}$, which is compact. \end{proof} \section{Haagerup approximation property and positive cones} In this section, we generalize the HAP using a one-parameter family of positive cones parametrized by $\alpha\in [0, 1/2]$, which was introduced by Araki in \cite{ara}.
Let $M$ be a von Neumann algebra and $\varphi\in W(M)$. \subsection{Complete positivity associated with positive cones} Recall that $\mathcal{A}_\varphi$ is the associated left Hilbert algebra. Let us consider the following positive cones: \[ P_\varphi^\sharp :=\overline{\{ \xi\xi^\sharp \mid \xi\in \mathcal{A}_\varphi\}}, \quad P_\varphi^\natural :=\overline{\{\xi(J_\varphi\xi) \mid \xi\in\mathcal{A}_\varphi\}}, \quad P_\varphi^\flat:=\overline{\{ \eta\eta^\flat \mid \eta\in \mathcal{A}_\varphi'\}}. \] Then $P_\varphi^\sharp$ is contained in $D(\Delta_\varphi^{1/2})$, the domain of $\Delta_\varphi^{1/2}$. \begin{definition}[cf. {\cite[Section 4]{ara}}] For $\alpha\in[0, 1/2]$, we define the positive cone $P_\varphi^\alpha$ by the closure of $\Delta_\varphi^\alpha P_\varphi^\sharp$. \end{definition} Then $P_\varphi^\alpha$ has the same properties as in \cite[Theorem 3]{ara}: \begin{enumerate}\renewcommand{\labelenumi}{{\rm (\arabic{enumi})}}\renewcommand{\itemsep}{0pt} \item $P_\varphi^\alpha$ is the closed convex cone invariant under $\Delta_\varphi^{it}$; \item $P_\varphi^\alpha \subset D(\Delta_\varphi^{1/2-2\alpha})$ and $J_\varphi\xi=\Delta_\varphi^{1/2-2\alpha}\xi$ for $\xi\in P_\varphi^\alpha$; \item $J_\varphi P_\varphi^\alpha=P_\varphi^{\hat{\alpha}}$, where $\hat{\alpha}:=1/2-\alpha$; \item $P_\varphi^{\hat{\alpha}} =\{\eta\in H_\varphi \mid \langle \eta, \xi\rangle\geq 0 \ \text{for } \xi\in P_\varphi^\alpha\}$; \item $P_\varphi^\alpha=\Delta_\varphi^{\alpha-1/4}(P_\varphi^{1/4}\cap D(\Delta_\varphi^{\alpha-1/4}))$; \item $P_\varphi^\natural=P_\varphi^{1/4}$ and $P_\varphi^\flat=P_\varphi^{1/2}$. \end{enumerate} The condition (4) means the duality between $P_\varphi^\alpha$ and $P_\varphi^{1/2-\alpha}$. As for the modular involution, we have $J_\varphi\xi = \Delta_\varphi^{1/2-2\alpha}\xi$ for $\xi\in P_\varphi^\alpha$. This shows that $J_\varphi P_\varphi^\alpha=P_\varphi^{1/2-\alpha}$, that is, $J_\varphi$ induces an inversion in the middle point $1/4$.
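To make the family $P_\varphi^\alpha$ concrete, here is the finite-dimensional example (our illustration; the identification made below is spelled out in the comments):

```latex
% Example. Let M = \mathbb{M}_n and \varphi = \tr_n(h\,\cdot) with h
% positive and invertible. Identify H_\varphi with \mathbb{M}_n via
% \Lambda_\varphi(x) = x h^{1/2}; then \Delta_\varphi \xi = h \xi h^{-1}
% and J_\varphi \xi = \xi^*, so
\[
P_\varphi^\sharp = \mathbb{M}_n^+\, h^{1/2}, \qquad
P_\varphi^\alpha = \Delta_\varphi^\alpha P_\varphi^\sharp
= h^{\alpha}\, \mathbb{M}_n^+\, h^{1/2-\alpha}, \qquad
P_\varphi^\flat = h^{1/2}\, \mathbb{M}_n^+ .
\]
% At the middle point, P_\varphi^{1/4} = h^{1/4}\,\mathbb{M}_n^+\,h^{1/4}
% = \mathbb{M}_n^+ is exactly the cone of positive matrices and does not
% depend on h, in accordance with the uniqueness of the standard form,
% while the endpoint cones P^\sharp and P^\flat do depend on h.
```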
(See also \cite{miura} for details.) We set $\mathbb{M}_n(\mathcal{A}_\varphi):=\mathcal{A}_\varphi\otimes\mathbb{M}_n$ and $\varphi_n:=\varphi\otimes\tr_n$. Then $\mathbb{M}_n(\mathcal{A}_\varphi)$ is a full left Hilbert algebra in $\mathbb{M}_n(H_\varphi)$. The multiplication and the involution are given by \[ [\xi_{i, j}]\cdot[\eta_{i, j}]:=\Bigl[\sum_{k=1}^n \xi_{i, k}\eta_{k, j}\Bigr]_{i, j} \quad\text{and}\quad [\xi_{i, j}]^\sharp:=[\xi_{j, i}^\sharp]_{i, j}. \] Then we have $S_{\varphi_n}=S_\varphi\otimes J_{\mathrm{tr}_n}$. Hence the modular operator is $\Delta_{\varphi_n}=\Delta_\varphi\otimes\mathrm{id}_{\mathbb{M}_n}$. Denote by $P_{\varphi_n}^{\alpha}$ the positive cone in $\mathbb{M}_n(H_\varphi)$ for $\alpha\in[0, 1/2]$. We generalize the complete positivity presented in Definition \ref{defn:cpop}. \begin{definition} Let $\alpha\in [0, 1/2]$. A bounded linear operator $T$ on $H_\varphi$ is said to be {\em completely positive} {\em with respect to $P_\varphi^\alpha$} if $(T\otimes1_{\mathbb{M}_n})P_{\varphi_n}^\alpha\subset P_{\varphi_n}^\alpha$ for all $n\in\mathbb{N}$. \end{definition} \subsection{Completely positive operators from completely positive maps} Let $M$ be a von Neumann algebra and $\varphi\in W(M)$. Let $C>0$ and let $\Phi$ be a normal c.p.\ map on $M$ such that \begin{equation} \label{eq:cp} \varphi\circ\Phi(x)\leq C\varphi(x) \quad \text{for } x\in M^+. \end{equation} In this subsection, we will show that $\Phi$ extends to a c.p.\ operator on $H_\varphi$ with respect to $P_\varphi^\alpha$ for each $\alpha\in[0, 1/2]$. We use the following fact, which is folklore among specialists. (See, for example, \cite[Lemma 4]{ara} for its proof.) \begin{lemma} \label{lem:ara} Let $T$ be a positive self-adjoint operator on a Hilbert space. For $0\leq r\leq 1$ and $\xi\in D(T)$, the domain of $T$, we have $\|T^r\xi\|^2\leq\|\xi\|^2+\|T\xi\|^2$. \end{lemma} The proof of the following lemma is inspired by arguments due to Hiai and Tsukada in \cite[Lemma 2.1]{ht}.
\begin{lemma}\label{lem:ht} For $\alpha\in [0, 1/2]$, one has \[ \|\Delta_\varphi^\alpha\Lambda_\varphi(\Phi(x))\| \leq C^{1/2}\|\Phi\|^{1/2} \|\Delta_\varphi^\alpha \Lambda_\varphi(x)\| \quad \text{for } x\in n_\varphi\cap n_\varphi^*. \] \end{lemma} \begin{proof} Note that if $x\in n_\varphi$, then $\Phi(x)\in n_\varphi$ because \[ \varphi(\Phi(x)^*\Phi(x)) \leq \|\Phi\|\varphi(\Phi(x^*x)) \leq C\|\Phi\|\varphi(x^*x)<\infty \] Let $x, y\in n_\varphi$ be entire elements with respect to $\sigma^\varphi$. We define the entire function $F$ by \[ F(z):= \langle \Lambda_\varphi(\Phi(\sigma_{iz/2}^\varphi(x))), \Lambda_\varphi(\sigma_{-i\overline{z}/2}^\varphi(y)) \rangle \quad \mbox{for } z\in \mathbb{C}. \] For any $t\in \mathbb{R}$, we have \begin{align*} |F(it)| &= |\langle \Lambda_\varphi(\Phi(\sigma_{-t/2}^\varphi(x))), \Lambda_\varphi(\sigma_{-t/2}^\varphi(y)) \rangle| \\ &\leq \|\Lambda_\varphi(\Phi(\sigma_{-t/2}^\varphi(x)))\| \cdot\|\Lambda_\varphi(\sigma_{-t/2}^\varphi(y))\| \\ &= \varphi ( \Phi(\sigma_{-t/2}^\varphi(x))^*\Phi(\sigma_{-t/2}^\varphi(x)) )^{1/2} \cdot\|\Lambda_\varphi(y)\| \\ &\leq C^{1/2}\|\Phi\|^{1/2}\|\Lambda_\varphi(x)\|\|\Lambda_\varphi(y)\|, \end{align*} and \begin{align*} |F(1+it)| &= |\langle \Delta_\varphi^{1/2} \Lambda_\varphi(\Phi(\sigma_{(i-t)/2}^\varphi(x))), \Delta_\varphi^{-it/2}\Lambda_\varphi(y) \rangle| \\ &= |\langle J_\varphi \Lambda_\varphi(\Phi(\sigma_{(i-t)/2}^\varphi(x))^*), \Delta_\varphi^{-it/2}\Lambda_\varphi(y) \rangle| \\ &\leq \|\Lambda_\varphi(\Phi(\sigma_{(i-t)/2}^\varphi(x))^*)\| \cdot\|\Lambda_\varphi(y)\| \\ &= \varphi( \Phi(\sigma_{(i-t)/2}^\varphi(x))\Phi(\sigma_{(i-t)/2}^\varphi(x))^*)^{1/2} \cdot\|\Lambda_\varphi(y)\| \\ &\leq C^{1/2}\|\Phi\|^{1/2} \varphi( \sigma_{(i-t)/2}^\varphi(x)\sigma_{(i-t)/2}^\varphi(x)^* )^{1/2} \cdot \|\Lambda_\varphi(y)\| \quad\text{by (\ref{eq:cp})} \\ &=C^{1/2}\|\Phi\|^{1/2} \|\Lambda_\varphi(\sigma_{(i-t)/2}^\varphi(x)^*)\| \cdot\|\Lambda_\varphi(y)\| \\ 
&=C^{1/2}\|\Phi\|^{1/2} \|J_\varphi\Lambda_\varphi(\sigma_{-t/2}^\varphi(x))\| \cdot\|\Lambda_\varphi(y)\| \\ &=C^{1/2}\|\Phi\|^{1/2} \|\Lambda_\varphi(x)\|\|\Lambda_\varphi(y)\|. \end{align*} Hence the three-lines theorem implies the following inequality for $0\leq s\leq 1$: \[ |\langle \Delta_\varphi^{s/2} \Lambda_\varphi(\Phi(\sigma_{is/2}^\varphi(x))), \Lambda_\varphi(y) \rangle| =|F(s)| \leq C^{1/2}\|\Phi\|^{1/2} \|\Lambda_\varphi(x)\|\|\Lambda_\varphi(y)\|. \] By replacing $x$ by $\sigma_{-is/2}^\varphi(x)$, we obtain \[ |\langle \Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x)), \Lambda_\varphi(y) \rangle| \leq C^{1/2}\|\Phi\|^{1/2} \|\Lambda_\varphi(\sigma_{-is/2}^\varphi (x))\| \|\Lambda_\varphi(y)\|. \] Since $y$ is an arbitrary entire element of $M$ with respect to $\sigma^\varphi$, we have \begin{equation} \label{eq:delta} \|\Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x))\| \leq C^{1/2}\|\Phi\|^{1/2} \|\Lambda_\varphi(\sigma_{-is/2}^\varphi(x))\| =C^{1/2}\|\Phi\|^{1/2} \|\Delta_\varphi^{s/2}\Lambda_\varphi(x)\|. \end{equation} For $x\in \mathcal{A}_\varphi$, take a sequence of entire elements $x_n$ of $M$ with respect to $\sigma^\varphi$ such that \[ \|\Lambda_\varphi(x_n)-\Lambda_\varphi(x)\|\to 0 \text{ and } \|\Lambda_\varphi(x_n^*)-\Lambda_\varphi(x^*)\|\to 0 \quad (n\to\infty). \] Then we also have \begin{align*} \|\Delta_\varphi^{s/2} \Lambda_\varphi(x_n-x)\|^2 &\leq\|\Lambda_\varphi(x_n-x)\|^2 +\|\Delta_\varphi^{1/2}\Lambda_\varphi(x_n-x)\|^2 \quad\text{by Lemma \ref{lem:ara}} \\ &=\|\Lambda_\varphi(x_n-x)\|^2+\|\Lambda_\varphi(x_n^*-x^*)\|^2 \\ &\to 0. 
\end{align*} Since \begin{align*} \|\Lambda_\varphi(\Phi(x_n))-\Lambda_\varphi(\Phi(x))\|^2 &=\|\Lambda_\varphi(\Phi(x_n-x))\|^2 \\ &\leq C\|\Phi\| \|\Lambda_\varphi(x_n-x)\|^2\to 0, \end{align*} we have \begin{equation}\label{eq:weak} \langle \Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x_n)), \Lambda_\varphi(y) \rangle \to \langle \Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x)), \Lambda_\varphi(y) \rangle \quad \text{for } y\in n_\varphi. \end{equation} Moreover, since \begin{align*} \|\Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x_m)) -\Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x_n))\| &\leq C^{1/2}\|\Phi\|^{1/2}\|\Delta_\varphi^{s/2}\Lambda_\varphi(x_m-x_n)\| \quad\text{by (\ref{eq:delta})}\\ &\to 0\quad (m, n\to\infty), \end{align*} the sequence $\Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x_n))$ is a Cauchy sequence. Thus $\Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x_n))$ converges to $\Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x))$ in norm by (\ref{eq:weak}). Therefore, we have \begin{align*} \|\Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x))\| &= \lim_{n\to\infty}\|\Delta_\varphi^{s/2}\Lambda_\varphi(\Phi(x_n))\| \\ &\leq C^{1/2}\|\Phi\|^{1/2} \lim_{n\to\infty}\|\Delta_\varphi^{s/2}\Lambda_\varphi(x_n)\| \\ &=C^{1/2}\|\Phi\|^{1/2} \|\Delta_\varphi^{s/2}\Lambda_\varphi(x)\|. \end{align*} \end{proof} \begin{lemma} \label{lem:cp-operator} Let $M$ be a von Neumann algebra with $\varphi\in W(M)$ and $\Phi$ be a normal c.p.\ map on $M$. Suppose $\varphi\circ\Phi\leq C\varphi$ as before. Then for $\alpha\in[0, 1/2]$, one can define the bounded operator $T_\Phi^\alpha$ on $H_\varphi$ with $\|T_\Phi^\alpha\|\leq C^{1/2}\|\Phi\|^{1/2}$ by \[ T_\Phi^\alpha(\Delta_\varphi^\alpha \Lambda_\varphi(x)) :=\Delta_\varphi^\alpha \Lambda_\varphi(\Phi(x)) \quad\text{for } x\in n_\varphi\cap n_\varphi^*. 
\] \end{lemma} It is not hard to see that $T_\Phi^\alpha$ in the above is c.p.\ with respect to $P_\varphi^\alpha$ since $T_\Phi^\alpha\oti1_{\mathbb{M}_n}=T_{\Phi\otimes\id_{\mathbb{M}_n}}^\alpha$ preserves $P_{\varphi_n}^\alpha$. \subsection{Haagerup approximation property associated with positive cones} We will introduce the ``interpolated'' HAP for a von Neumann algebra. \begin{definition}\label{def:HAP} Let $\alpha\in [0, 1/2]$ and let $M$ be a von Neumann algebra with $\varphi\in W(M)$. We will say that $M$ has the $\alpha$-{\em Haagerup approximation property with respect to $\varphi$} ($\alpha$-HAP$_\varphi$) if there exists a net of compact contractive operators $T_n$ on $H_\varphi$ such that $T_n\to 1_{H_{\varphi}}$ in the strong topology and each $T_n$ is c.p.\ with respect to $P_\varphi^\alpha$. \end{definition} In what follows, we will show that the above approximation property is actually a weight-free notion. \begin{lemma} \label{lem:reduction} Let $\alpha\in [0, 1/2]$. Then the following statements hold: \begin{enumerate}\renewcommand{\labelenumi}{{\rm (\arabic{enumi})}}\renewcommand{\itemsep}{0pt} \item Let $e\in M_\varphi$ be a projection. If $M$ has the $\alpha$-HAP$_\varphi$, then $eMe$ has the $\alpha$-HAP$_{\varphi_e}$; \item If there exists an increasing net of projections $e_i$ in $M_\varphi$ such that $e_i\to1$ in the strong topology and $e_i Me_i$ has the $\alpha$-HAP$_{\varphi_{e_i}}$ for all $i$, then $M$ has the $\alpha$-HAP$_\varphi$. \end{enumerate} \end{lemma} \begin{proof} (1) We will regard $H_{\varphi_e}=eJeJ H_\varphi$, $J_{\varphi_e}=eJe$ and $\Delta_{\varphi_e}=eJeJ\Delta_\varphi$ as usual. Then it is not so difficult to show that $P_{\varphi_e}^\alpha=eJeJ P_\varphi^\alpha$. Take a net $T_n$ as in Definition \ref{def:HAP}. Then the net $eJeJT_n eJeJ$ does the job. (2) Let $\mathcal{F}$ be a finite subset of $H_\varphi$ and $\varepsilon>0$.
Take $i$ such that \[ \|e_i J_\varphi e_i J_\varphi\xi-\xi\|<\varepsilon/2 \quad \mbox{for all } \xi\in\mathcal{F}. \] We identify $H_{\varphi_{e_i}}$ with $e_i J_\varphi e_i J_\varphi H_\varphi$ as usual. Then take a compact contractive operator $T$ on $H_{\varphi_{e_i}}$ such that it is c.p.\ with respect to $P_{\varphi_{e_i}}^\alpha$ and satisfies \[ \|Te_i J_\varphi e_i J_\varphi\xi-e_i J_\varphi e_i J_\varphi\xi\| <\varepsilon/2 \quad \mbox{for all } \xi\in \mathcal{F}. \] Thus we have $\|Te_iJ_\varphi e_i J_\varphi\xi-\xi\|<\varepsilon$ for $\xi\in \mathcal{F}$. It is straightforward to show that $Te_iJ_\varphi e_i J_\varphi$ is a compact contractive operator that is c.p.\ with respect to $P_\varphi^\alpha$, and we are done. \end{proof} \begin{lemma} \label{lem:HAPindep} The approximation property introduced in Definition \ref{def:HAP} does not depend on the choice of an f.n.s.\ weight. Namely, let $M$ be a von Neumann algebra and $\varphi,{\psi}\in W(M)$. If $M$ has the $\alpha$-HAP$_\varphi$, then $M$ has the $\alpha$-HAP$_{\psi}$. \end{lemma} \begin{proof} As in the proof of Lemma \ref{lem:CS-indep}, it suffices to check that each operation below inherits the approximation property introduced in Definition \ref{def:HAP}. \begin{enumerate} \item $\varphi\mapsto \varphi\otimes\Tr$, where $\Tr$ denotes the canonical tracial weight on $\mathbb{B}(\ell^2)$; \item $\varphi\mapsto \varphi_e$, where $e\in M_\varphi$ is a projection; \item $\varphi\mapsto \varphi\circ\alpha$, $\alpha\in \Aut(M)$; \item $\varphi\mapsto \varphi_h$, where $h$ is a non-singular positive operator affiliated with $M_\varphi$. \end{enumerate} (1) Let $N:=M\otimes \mathbb{B}(\ell^2)$ and ${\psi}:=\varphi\otimes\Tr$. Take an increasing sequence of finite rank projections $e_n$ on $\ell^2$ such that $e_n\to1$ in the strong topology. Then $f_n:=1\otimes e_n$ belongs to $N_{\psi}$ and $f_n Nf_n=M\otimes e_n \mathbb{B}(\ell^2)e_n$, which has the $\alpha$-HAP$_{{\psi}_{f_n}}$.
By Lemma \ref{lem:reduction} (2), $N$ has the $\alpha$-HAP$_{\psi}$. (2) This is nothing but Lemma \ref{lem:reduction} (1). (3) Let ${\psi}:=\varphi\circ\alpha$. We regard $H_{{\psi}}=H_\varphi$ by putting $\Lambda_{\psi}=\Lambda_\varphi\circ\alpha$. We denote by $U_\alpha$ the canonical unitary implementation, which maps $\Lambda_\varphi(x)$ to $\Lambda_{\psi}(\alpha^{-1}(x))$ for $x\in n_\varphi$. Then it is direct to see that $\Delta_{\psi}=U_\alpha\Delta_\varphi U_\alpha^*$, and $P_{\psi}^\alpha=U_\alpha P_\varphi^\alpha$. We can show $M$ has the $\alpha$-HAP$_{\psi}$ by using $U_\alpha$. (4) Our proof requires a preparation. We will give a proof after proving Lemma \ref{lem:Th}. \end{proof} Let $\alpha\in[0,1/2]$ and $\varphi\in W(M)$. Note that for an entire element $x\in M$ with respect to $\sigma^\varphi$, the operator $xJ_\varphi \sigma_{i(\alpha-{\hat{\al}})}^\varphi(x)J_\varphi$ is c.p.\ with respect to $P_\varphi^\alpha$. \begin{lemma} \label{lem:eiejT} Let $T$ be a c.p.\ operator with respect to $P_\varphi^\alpha$ and $\{e_i\}_{i=1}^m$ a partition of unity in $M_\varphi$. Then the operator $\sum_{i,j=1}^m e_i J_\varphi e_j J_\varphi T e_i J_\varphi e_j J_\varphi$ is c.p.\ with respect to $P_\varphi^\alpha$. \end{lemma} \begin{proof} Let $E_{ij}$ denote the matrix units of $\mathbb{M}_m(\mathbb{C})$. Set $\rho:=\sum_{i=1}^m e_i\otimes E_{1i}$. Note that $\rho$ belongs to $(M\otimes \mathbb{M}_m(\mathbb{C}))_{\varphi\otimes\tr_m}$. Then the operator \[ \rho J_{\varphi\otimes\tr_m}\rho J_{\varphi\otimes\tr_m} (T\oti1_{\mathbb{M}_m}) \rho^* J_{\varphi\otimes\tr_m}\rho^* J_{\varphi\otimes\tr_m} \] on $H_\varphi\otimes\mathbb{M}_m(\mathbb{C})$ is positive with respect to $P_{\varphi\otimes\tr_m}^\alpha$ since so is $T\oti1_{\mathbb{M}_m}$. By direct calculation, this operator equals $\sum_{i,j=1}^m e_i J_\varphi e_j J_\varphi T e_i J_\varphi e_j J_\varphi \otimes E_{11}J_{\tr} E_{11} J_{\tr}$. Thus we are done.
\end{proof} Let $h\in M_\varphi$ be positive and invertible. We can put $\Lambda_{\varphi_h}(x):=\Lambda_\varphi(xh^{1/2})$ for $x\in n_{\varphi_h}=n_\varphi$. This immediately implies that $\Delta_{\varphi_h}=hJ_\varphi h^{-1} J_\varphi \Delta_\varphi$, and $P_{\varphi_h}^\alpha=h^\alpha J_\varphi h^{{\hat{\al}}} J_\varphi P_\varphi^\alpha$. Thus we have the following result. \begin{lemma} \label{lem:Th} Let $h\in M_\varphi$ be positive and invertible. If $T$ is a c.p.\ operator with respect to $P_\varphi^\alpha$, then \[ T_h:=h^\alpha J_\varphi h^{{\hat{\al}}} J_\varphi T h^{-\alpha} J_\varphi h^{-{\hat{\al}}} J_\varphi \] is c.p.\ with respect to $P_{\varphi_h}^\alpha$. \end{lemma} \begin{proof}[Resumption of Proof of Lemma \ref{lem:HAPindep}] Let ${\psi}:=\varphi_h$ and $e(\cdot)$ the spectral resolution of $h$. Put $e_n:=e([1/n, n])\in M_\varphi$ for $n\in\mathbb{N}$. Note that $M_\varphi=M_{\psi}$. Since $e_n\to1$ in the strong topology, by Lemma \ref{lem:reduction} (2) it suffices to show that $e_nMe_n$ has the $\alpha$-HAP$_{\varphi_{he_n}}$. Thus we may and do assume that $h$ is bounded and invertible. Let us identify $H_{\psi}=H_\varphi$ by putting $\Lambda_{\psi}(x):=\Lambda_\varphi(xh^{1/2})$ for $x\in n_\varphi$ as usual, where we should note that $n_\varphi=n_{\psi}$. Then we have $\Delta_{\psi}=hJ_\varphi h^{-1} J_\varphi\Delta_\varphi$ and $P_{\psi}^\alpha=h^\alpha J_\varphi h^{\hat{\alpha}}J_\varphi P_\varphi^\alpha$ as well. Let $\mathcal{F}$ be a finite subset of $H_\varphi$ and $\varepsilon>0$. Take $\delta>0$ so that $1-(1+\delta)^{-1/2}<\varepsilon/2$. Let $\{e_i\}_{i=1}^m$ be spectral projections of $h$ such that $\sum_{i=1}^m e_i=1$ and $h e_i \leq \lambda_i e_i \leq (1+\delta)he_i$ for some $\lambda_i>0$. Note that $e_i$ belongs to $M_\varphi\cap M_{\varphi_h}$.
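For completeness, here is one concrete choice of such spectral projections (recall that $h$ is now bounded and invertible, so $\mathrm{sp}(h)\subset[\|h^{-1}\|^{-1},\|h\|]$): put $\lambda_0:=\|h^{-1}\|^{-1}$ and $\lambda_i:=(1+\delta)^{i}\lambda_0$, and set \[ e_1:=e\big([\lambda_0,\lambda_1]\big), \qquad e_i:=e\big((\lambda_{i-1},\lambda_i]\big) \quad\text{for } i=2,\dots,m, \] where $m$ is chosen so that $\lambda_m\geq\|h\|$. Discarding the vanishing projections, we get $\sum_{i=1}^m e_i=1$ and $he_i\leq\lambda_i e_i\leq(1+\delta)he_i$ for all $i$.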
For a c.p.\ operator $T$ with respect to $P_\varphi^\alpha$, we put \begin{align*} T_{h,\delta} &:= \sum_{i,j=1}^m e_i J_\varphi e_j J_\varphi T_h e_i J_\varphi e_j J_\varphi \\ &= \sum_{i,j=1}^m h^\alpha e_i J_\varphi h^{\hat{\al}} e_j J_\varphi T h^{-\alpha} e_i J_\varphi h^{-{\hat{\al}}}e_j J_\varphi, \end{align*} which is c.p.\ with respect to $P_{\varphi_h}^\alpha$ by Lemma \ref{lem:eiejT} and Lemma \ref{lem:Th}. The norm of $T_{h,\delta}$ equals the maximum of $\|h^\alpha e_i Jh^{\hat{\al}} e_j J\,T h^{-\alpha} e_i Jh^{-{\hat{\al}}}e_j J\|$. Since we have \begin{align*} \|h^\alpha e_i Jh^{\hat{\al}} e_j J\,T h^{-\alpha} e_i Jh^{-{\hat{\al}}}e_j J\| &\leq \|h^\alpha e_i\| \|h^{\hat{\al}} e_j\| \|T\| \|h^{-\alpha} e_i\| \|h^{-{\hat{\al}}}e_j \| \\ &\leq \lambda_i^\alpha\lambda_j^{\hat{\al}} ((1+\delta)\lambda_i^{-1})^{\alpha} ((1+\delta)\lambda_j^{-1})^{{\hat{\al}}} \|T\| \\ &= (1+\delta)^{1/2}\|T\|, \end{align*} we get $\|T_{h,\delta}\|\leq (1+\delta)^{1/2}\|T\|$. Since $M$ has the $\alpha$-HAP$_\varphi$, we can find a compact c.c.p.\ operator $T$ with respect to $P_\varphi^\alpha$ such that $\|T_{h,\delta}\xi-\xi\|<\varepsilon/2$ for all $\xi\in\mathcal{F}$. Then $\widetilde{T}:=(1+\delta)^{-1/2}T_{h,\delta}$ is a c.c.p.\ operator with respect to $P_{\varphi_h}^\alpha$, which satisfies $\|\widetilde{T}\xi-\xi\|<\varepsilon$ for all $\xi\in \mathcal{F}$. Thus we are done. \end{proof} Therefore, the $\alpha$-HAP$_\varphi$ does not depend on the choice of $\varphi\in W(M)$. So, we will simply say $\alpha$-HAP for $\alpha$-HAP$_\varphi$. Now we are ready to state the main theorem of this section. \begin{theorem}\label{thm:equiv} Let $M$ be a von Neumann algebra.
Then the following statements are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{{\rm (\arabic{enumi})}}\renewcommand{\itemsep}{0pt} \item $M$ has the HAP, i.e., the $1/4$-HAP; \item $M$ has the $0$-HAP; \item $M$ has the $\alpha$-HAP for any $\alpha\in[0,1/2]$; \item $M$ has the $\alpha$-HAP for some $\alpha\in[0,1/2]$; \item $M$ has the CS-HAP. \end{enumerate} \end{theorem} We will prove the above theorem in several steps. \begin{proof}[Proof of {\rm (1)$\Rightarrow$(2)} in Theorem \ref{thm:equiv}] Suppose that $M$ has the $1/4$-HAP. Take an increasing net of $\sigma$-finite projections $e_i$ in $M$ such that $e_i\to1$ in the strong topology. Thanks to Lemma \ref{lem:reduction}, it suffices to show that $e_i M e_i$ has the $0$-HAP. Hence we may and do assume that $M$ is $\sigma$-finite. Let $\varphi\in M_*^+$ be a faithful state. By Theorem \ref{thm:sigma-finite}, we can take a net of normal c.c.p.\ maps $\Phi_n$ on $M$ with $\varphi\circ\Phi_n\leq\varphi$ such that the following implementing operator $T_n$ is compact and $T_n\to 1_{H_\varphi}$ in the strong topology: \[ T_n(\Delta_\varphi^{1/4}x\xi_\varphi) = \Delta_\varphi^{1/4}\Phi_n(x)\xi_\varphi \quad \mbox{for } x\in M. \] Let $T_{\Phi_n}^0$ be the closure of $\Delta_\varphi^{-1/4}T_n\Delta_\varphi^{1/4}$ as in Lemma \ref{lem:cp-operator}. Recall that $T_{\Phi_n}^0$ satisfies \[ T_{\Phi_n}^0( x\xi_\varphi) = \Phi_n(x)\xi_\varphi \quad \mbox{for } x\in M. \] However, the compactness of $T_{\Phi_n}^0$ is not clear. Thus we will perturb $\Phi_n$ by averaging with respect to $\sigma^\varphi$. Let us put \[ g_\beta(t):=\sqrt{\frac{\beta}{\pi}}\exp(-\beta t^2) \quad\text{for } \beta>0 \ \text{and}\ t\in\mathbb{R}, \] and \[ U_\beta:=\int_\mathbb{R} g_\beta(t)\Delta_\varphi^{it}\,dt = \widehat{g}_\beta(-\log\Delta_\varphi), \] where \[ \widehat{g}_\beta(t) := \int_\mathbb{R} g_\beta(s)e^{-ist}\,ds = \exp(-t^2/(4\beta)) \quad \text{for } t\in\mathbb{R}. \] Then $U_\beta\to1$ in the strong topology as $\beta\to\infty$.
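Indeed, the stated formula for $\widehat{g}_\beta$ is the classical Gaussian integral: completing the square in the exponent gives \[ \widehat{g}_\beta(t) =\sqrt{\frac{\beta}{\pi}}\int_\mathbb{R} e^{-\beta s^2-ist}\,ds =\sqrt{\frac{\beta}{\pi}}\,e^{-t^2/(4\beta)} \int_\mathbb{R} e^{-\beta(s+it/(2\beta))^2}\,ds =e^{-t^2/(4\beta)}, \] since shifting the contour back to the real axis shows that the last integral equals $\sqrt{\pi/\beta}$. In particular, $\widehat{g}_\beta\to1$ pointwise and boundedly as $\beta\to\infty$, which gives the strong convergence $U_\beta\to1$ by the spectral theorem.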
For $\beta,\gamma>0$, we define \[ \Phi_{n,\beta, \gamma}(x):= (\sigma_{g_\beta}^\varphi\circ\Phi_n\circ\sigma_{g_\gamma}^\varphi)(x) \quad\text{for } x\in M. \] Since $\int_\mathbb{R} g_\gamma(t)\,dt=1$ and $g_\gamma\geq 0$, the map $\Phi_{n, \beta, \gamma}$ is a normal c.c.p.\ map such that $\varphi\circ\Phi_{n, \beta, \gamma}\leq \varphi$. By Lemma \ref{lem:cp-operator}, we obtain the associated operator $T_{\Phi_{n,\beta, \gamma}}^0$, which is given by \[ T_{\Phi_{n,\beta, \gamma}}^0 (x\xi_\varphi) = \Phi_{n,\beta, \gamma}(x)\xi_\varphi \quad\text{for } x\in M. \] Moreover, we have $T_{\Phi_{n,\beta, \gamma}}^0 = U_\beta T_{\Phi_n}^0 U_\gamma = U_\beta\Delta_\varphi^{-1/4} T_n \Delta_\varphi^{1/4}U_\gamma $. Hence $T^0_{\Phi_{n,\beta, \gamma}}$ is compact, because $e^{-t/4}\widehat{g}_\beta(t)$ and $e^{t/4}\widehat{g}_\gamma(t)$ are bounded functions on $\mathbb{R}$. Thus we have shown that $\left( T_{\Phi_{n,\beta, \gamma}}^0 \right)_{(n,\beta,\gamma)}$ is a net of contractive compact operators. Moreover, $T^0_{\Phi_{n,\beta, \gamma}}\to1_{H_\varphi}$ in the weak topology, because $U_\beta,U_\gamma\to1_{H_\varphi}$ as $\beta,\gamma\to\infty$ and $T_n\to1_{H_\varphi}$ as $n\to\infty$ in the strong topology. Finally, by a standard convexity argument, we may replace these operators with suitable convex combinations so that the resulting net converges to $1_{H_\varphi}$ in the strong topology, and hence $M$ has the $0$-HAP. \end{proof} In order to prove Theorem \ref{thm:equiv} (2)$\Rightarrow$(3), we need a few lemmas. In what follows, let $M$ be a von Neumann algebra with $\varphi\in W(M)$. \begin{lemma}\label{lem:JTJ} Let $\alpha\in[0,1/2]$. Then $M$ has the $\alpha$-HAP$_\varphi$ if and only if $M$ has the $\hat{\alpha}$-HAP$_\varphi$. \end{lemma} \begin{proof} It immediately follows from the fact that $T$ is c.p.\ with respect to $P_\varphi^\alpha$ if and only if $J_\varphi TJ_\varphi$ is c.p.\ with respect to $P_\varphi^{\hat{\alpha}}$. \end{proof} \begin{lemma}\label{lem:uniform} Let $(U_t)_{t\in \mathbb{R}}$ be a one-parameter unitary group and $T$ a compact operator on a Hilbert space $H$.
If a sequence $(\xi_n)$ in $H$ converges to $0$ weakly, then $(TU_t\xi_n)$ converges to $0$ in norm, uniformly for $t$ in compact subsets of $\mathbb{R}$. \end{lemma} \begin{proof} Since $T$ is compact, the map $\mathbb{R}\ni t\mapsto TU_t\in \mathbb{B}(H)$ is norm continuous. In particular, for any $R>0$, the set $\{TU_t\mid t\in[-R,R]\}$ is norm compact. Since $(\xi_n)$ converges weakly, it is uniformly norm bounded. Thus the statement follows by covering $\{TU_t\mid t\in[-R,R]\}$ with finitely many small balls. \end{proof} \begin{lemma} \label{lem:Pbe} Let $\alpha\in[0,1/4]$ and $\beta\in[\alpha,\hat{\alpha}]$. Then $P_\varphi^\alpha\subset D(\Delta_\varphi^{\beta-\alpha})$ and $P_\varphi^\beta=\overline{\Delta_\varphi^{\beta-\alpha}P_\varphi^\alpha}$. \end{lemma} \begin{proof} Since $P_\varphi^\alpha\subset D(\Delta_\varphi^{1/2-2\alpha})$ and $0\leq \beta-\alpha\leq 1/2-2\alpha$, it turns out that $P_\varphi^\alpha\subset D(\Delta_\varphi^{\beta-\alpha})$. Let $\xi\in P_\varphi^\alpha$ and take a sequence $\xi_n\in P_\varphi^\sharp$ such that $\Delta_\varphi^\alpha\xi_n\to\xi$. Then we have \begin{align*} \|\Delta_\varphi^{\beta}(\xi_m-\xi_n)\|^2 &= \|\Delta_\varphi^{\beta-\alpha} \Delta_\varphi^\alpha (\xi_m-\xi_n)\|^2 \\ &\leq \|\Delta_\varphi^0\cdot\Delta_\varphi^\alpha(\xi_m-\xi_n)\|^2 \\ &\quad +\|\Delta_\varphi^{1/2-2\alpha}\cdot \Delta_\varphi^\alpha(\xi_m-\xi_n)\|^2 \quad\text{by Lemma \ref{lem:ara}} \\ &= \|\Delta_\varphi^\alpha(\xi_m-\xi_n)\|^2 +\|J_\varphi \Delta_\varphi^\alpha S_\varphi(\xi_m-\xi_n)\|^2 \\ &=2\|\Delta_\varphi^\alpha (\xi_m-\xi_n)\|^2 \to 0. \end{align*} Hence $\Delta_\varphi^{\beta}\xi_n$ converges to a vector $\eta$ which belongs to $P_\varphi^\beta$. Since $\Delta_\varphi^{\beta-\alpha}(\Delta_\varphi^\alpha \xi_n) =\Delta_\varphi^{\beta} \xi_n\to \eta$ and $\Delta_\varphi^{\beta-\alpha}$ is closed, $\Delta_\varphi^{\beta-\alpha}\xi=\eta\in P_\varphi^{\beta}$. Hence $P_\varphi^\beta\supset \overline{\Delta_\varphi^{\beta-\alpha}P_\varphi^\alpha}$.
The converse inclusion is obvious since $\Delta_\varphi^\beta P_\varphi^\sharp =\Delta_\varphi^{\beta-\alpha}(\Delta_\varphi^\alpha P_\varphi^\sharp)$. \end{proof} Note that the real subspace $R_\varphi^\alpha:=P_\varphi^\alpha-P_\varphi^\alpha$ in $H_\varphi$ is closed and the mapping \[ S_\varphi^\alpha\colon R_\varphi^\alpha+iR_\varphi^\alpha\ni\xi+i\eta \mapsto \xi-i\eta\in R_\varphi^\alpha+iR_\varphi^\alpha \] is a conjugate-linear closed operator which has the polar decomposition \[ S_\varphi^\alpha=J_\varphi\Delta_\varphi^{1/2-2\alpha}. \] (See \cite[Proposition 2.4]{ko1} in the case where $M$ is $\sigma$-finite.) \begin{lemma} \label{lem:onlyif} Let $\alpha\in [0, 1/4]$ and $T\in \mathbb{B}(H_\varphi)$ a c.p.\ operator with respect to $P_\varphi^\alpha$. Let $\beta\in[\alpha,\hat{\alpha}]$. Then the following statements hold: \begin{enumerate}\renewcommand{\labelenumi}{{\rm (\arabic{enumi})}}\renewcommand{\itemsep}{0pt} \item The operator $\Delta_\varphi^{\beta-\alpha}T\Delta_\varphi^{\alpha-\beta}$ extends to a bounded operator on $H_\varphi$, which is denoted by $T^\beta$ in what follows, so that $\|T^\beta\|\leq\|T\|$. Also, $T^\beta$ is a c.p.\ operator with respect to $P_\varphi^{\beta}$; \item If a bounded net of c.p.\ operators $T_n$ with respect to $P_\varphi^\alpha$ weakly converges to $1_{H_\varphi}$, then so does the net $T_n^\beta$; \item If $T$ in {\rm (1)} is non-zero compact, then so is $T^\beta$. \end{enumerate} \end{lemma} \begin{proof} (1) Let $\zeta\in P_\varphi^{\sharp}$ and $\eta:=\Delta_\varphi^\beta \zeta$, which belongs to $P_\varphi^\beta$. We put $\xi:= T\Delta_\varphi^{\alpha-\beta} \eta $. Since $\Delta_\varphi^{\alpha-\beta}\eta=\Delta_\varphi^\alpha\zeta\in P_\varphi^\alpha$ and $T$ is c.p.\ with respect to $P_\varphi^\alpha$, we obtain $\xi \in P_\varphi^\alpha$. By Lemma \ref{lem:Pbe}, we know that $\Delta_\varphi^{\beta-\alpha}\xi\in P_\varphi^\beta$.
Thus $\Delta_\varphi^{\beta-\alpha}T\Delta_\varphi^{\alpha-\beta}$ maps $\Delta_\varphi^\beta P_\varphi^\sharp$ into $P_\varphi^{\beta}$. Hence the complete positivity with respect to $P_\varphi^{\beta}$ immediately follows when we prove the norm boundedness of that map. The proof given below is quite similar to that of Lemma \ref{lem:ht}. Recall the associated Tomita algebra $\mathcal{T}_\varphi$. Let $\xi,\eta\in \mathcal{T}_\varphi$. We define the entire function $F$ by \[ F(z):= \langle T \Delta_\varphi^{-z}\xi, \Delta_\varphi^{\overline{z}}\eta\rangle \quad\text{for } z\in\mathbb{C}. \] For any $t\in\mathbb{R}$, we have \begin{align*} |F(it)| &= |\langle T\Delta_\varphi^{-it}\xi, \Delta_\varphi^{-it}\eta\rangle| \leq \|T\|\|\xi\|\|\eta\|. \end{align*} Note that \begin{align*} \Delta_\varphi^{-(\hat{\alpha}-\alpha+it)}\xi &= \Delta_\varphi^\alpha \Delta_\varphi^{-(\hat{\alpha}+it)}\xi \\ &= \Delta_\varphi^\alpha \xi_1+i\Delta_\varphi^\alpha \xi_2 \in R_\varphi^\alpha+i R_\varphi^\alpha, \end{align*} where $\xi_1, \xi_2\in R_\varphi^\alpha$ satisfy $\Delta_\varphi^{-(\hat{\alpha}+it)}\xi=\xi_1+i\xi_2$. Note that $\xi_1$ and $\xi_2$ also belong to $\mathcal{T}_\varphi$. Since $T$ is c.p.\ with respect to $P_\varphi^\alpha$, we see that $T R_\varphi^\alpha\subset R_\varphi^\alpha$. Then we have \begin{align*} \Delta_\varphi^{\hat{\alpha}-\alpha}T\Delta_\varphi^{-(\hat{\alpha}-\alpha+it)}\xi &= \Delta_\varphi^{1/2-2\alpha} (T\Delta_\varphi^\alpha \xi_1+i T\Delta_\varphi^\alpha \xi_2) \\ &=J_\varphi(T\Delta_\varphi^\alpha \xi_1-iT\Delta_\varphi^\alpha \xi_2) \\ &=J_\varphi T S_\varphi^\alpha (\Delta_\varphi^\alpha \xi_1+i\Delta_\varphi^\alpha \xi_2) \\ &=J_\varphi T J_\varphi\Delta_\varphi^{1/2-2\alpha} \Delta_\varphi^{-(\hat{\alpha}-\alpha+it)}\xi\\ &=J_\varphi T J_\varphi \Delta_\varphi^{-it}\xi.
\end{align*} In particular, $\Delta_\varphi^{\hat{\alpha}-\alpha}T\Delta_\varphi^{-(\hat{\alpha}-\alpha)}$ is norm bounded, and its closure is $J_\varphi T J_\varphi$. Hence \begin{align*} \left|F(\hat{\alpha}-\alpha+it)\right| &= |\langle T\Delta_\varphi^{-(\hat{\alpha}-\alpha+it)}\xi, \Delta_\varphi^{\hat{\alpha}-\alpha-it}\eta\rangle| \\ &= |\langle J_\varphi T J_\varphi \Delta_\varphi^{-it}\xi,\Delta_\varphi^{it}\eta\rangle| \\ &\leq \|T\|\|\xi\|\|\eta\|. \end{align*} Applying the three-lines theorem to $F(z)$ at $z=\beta-\alpha\in[0,\hat{\alpha}-\alpha]$, we obtain \begin{equation} \label{eq:Tal} |\langle \Delta_\varphi^{\beta-\alpha}T\Delta_\varphi^{\alpha-\beta} \xi,\eta\rangle| =|F(\beta-\alpha)| \leq\|T\|\|\xi\|\|\eta\|. \end{equation} This implies \[ \|(\Delta_\varphi^{\beta-\alpha}T\Delta_\varphi^{\alpha-\beta}) \xi\| \leq \|T\|\|\xi\| \quad \mbox{for all }\xi\in \mathcal{T}_\varphi. \] Therefore $\Delta_\varphi^{\beta-\alpha}T\Delta_\varphi^{\alpha-\beta}$ extends to a bounded operator, which we denote by $T^\beta$, on $H_\varphi$ such that $\|T^\beta\|\leq \|T\|$. (2) By (1), we have $\|T_n^\beta\|\leq\|T_n\|$, and thus the net $(T_n^\beta)_n$ is also bounded. Hence the statement follows from the following identity for all $\xi,\eta\in \mathcal{T}_\varphi$: \[ |\langle (T_n^\beta-1_{H_\varphi}) \xi,\eta\rangle| = |\langle (T_n-1_{H_\varphi}) \Delta_\varphi^{\alpha-\beta} \xi,\Delta_\varphi^{\beta-\alpha}\eta\rangle|. \] (3) Suppose that $T$ is compact. Let $(\eta_n)$ be a sequence in $H_\varphi$ with $\eta_n\to 0$ weakly. Take $\xi_n\in \mathcal{T}_\varphi$ such that $\|\xi_n-\eta_n\|<1/n$ for $n\in\mathbb{N}$. It suffices to check that $\|T^\beta\xi_n\|\to 0$. Since the sequence $\xi_n$ is weakly converging, there exists $D>0$ such that \begin{equation}\label{eq:bound} \|\xi_n\|\leq D \quad\text{for all } n\in\mathbb{N}. \end{equation} Let $\eta\in \mathcal{T}_\varphi$.
For each $n\in\mathbb{N}$, we define the entire function $F_n$ by \[ F_n(z):=\exp(z^2) \langle T\Delta_\varphi^{-z}\xi_n, \Delta_\varphi^{\overline{z}}\eta \rangle. \] Let $\varepsilon>0$. Take $t_0>0$ such that \begin{equation}\label{eq:exp} e^{-t^2}\leq\frac{\varepsilon}{D\|T\|} \quad\text{for } |t|>t_0. \end{equation} We let $I:=[-t_0,t_0]$. Since $T$ is compact, Lemma \ref{lem:uniform} yields $n_0\in\mathbb{N}$ such that \begin{equation}\label{eq:compact} \|T\Delta_\varphi^{-it}\xi_n\| \leq \varepsilon \quad\text{and}\quad \|J_\varphi TJ_\varphi \Delta_\varphi^{-it}\xi_n\|\leq \varepsilon \quad\text{for } n\geq n_0 \ \text{and}\ t\in I. \end{equation} Then for $n\geq n_0$ we have \begin{align*} |F_n(it)| &=e^{-t^2} |\langle T\Delta_\varphi^{-it}\xi_n, \Delta_\varphi^{-it}\eta \rangle| \\ &\leq e^{-t^2} \|T\Delta_\varphi^{-it}\xi_n\| \|\eta\|. \end{align*} Hence if $t\not\in I$, then \begin{align*} |F_n(it)| &\leq e^{-t^2}\|T\| \|\xi_n\| \|\eta\| \\ &\leq e^{-t^2}D\|T\| \|\eta\| \quad\text{by}\ (\ref{eq:bound}) \\ &\leq \varepsilon\|\eta\| \quad\text{by}\ (\ref{eq:exp}), \end{align*} and if $t\in I$, then \begin{align*} |F_n(it)| &\leq\|T\Delta_\varphi^{-it}\xi_n\| \|\eta\| \\ &\leq \varepsilon\|\eta\| \quad\text{by}\ (\ref{eq:compact}). \end{align*} We similarly obtain \[ |F_n(\hat{\alpha}-\alpha+it)|\leq \varepsilon\|\eta\| \quad\text{for } n\geq n_0 \ \text{and}\ t\in\mathbb{R}. \] Therefore the three-lines theorem implies \[ e^{(\beta-\alpha)^2} |\langle T^\beta \xi_n, \eta\rangle| =\left|F_n\left(\beta-\alpha\right)\right| \leq\varepsilon\|\eta\| \quad\text{for } n\geq n_0. \] Hence we have $\|T^\beta \xi_n\|\leq\varepsilon$ for $n\geq n_0$. Therefore $T^\beta$ is compact. \end{proof} \begin{lemma} \label{lem:albe} Let $M$ be a von Neumann algebra and $\alpha\in[0,1/4]$. If $M$ has the $\alpha$-HAP, then $M$ also has the $\beta$-HAP for all $\beta\in[\alpha,\hat{\alpha}]$.
\end{lemma} \begin{proof} Take a net of compact c.c.p.\ operators $T_n$ with respect to $P_\varphi^\alpha$ as before. By Lemma \ref{lem:onlyif}, we obtain a net of compact c.c.p.\ operators $T_n^\beta$ with respect to $P_\varphi^\beta$ such that $T_n^\beta$ converges to $1_{H_\varphi}$ in the weak topology. By a standard convexity argument, we may arrange that the convergence is strong. Thus we are done. \end{proof} We now resume the proof of Theorem \ref{thm:equiv}. \begin{proof}[Proof of {\rm (2)$\Rightarrow$(3)} in Theorem \ref{thm:equiv}] It follows from Lemma \ref{lem:albe}. \end{proof} \begin{proof}[Proof of {\rm (3)$\Rightarrow$(4)} in Theorem \ref{thm:equiv}] This is a trivial implication. \end{proof} \begin{proof}[Proof of {\rm (4)$\Rightarrow$(1)} in Theorem \ref{thm:equiv}] Suppose that $M$ has the $\alpha$-HAP for some $\alpha\in[0, 1/2]$. By Lemma \ref{lem:JTJ}, we may and do assume that $\alpha\in[0, 1/4]$. By Lemma \ref{lem:albe}, $M$ has the $1/4$-HAP. \end{proof} Therefore, conditions (1)--(4) are equivalent. Finally, we check that condition (5) is equivalent to the others. \begin{proof}[Proof of {\rm (1)$\Rightarrow$(5)} in Theorem \ref{thm:equiv}] It also follows from the proof of {\rm (1)$\Rightarrow$(2)}. \end{proof} \begin{proof}[Proof of {\rm (5)$\Rightarrow$(1)} in Theorem \ref{thm:equiv}] We may assume that $M$ is $\sigma$-finite by \cite[Lemma 4.1]{cs} and \cite[Proposition 3.5]{ot}. Let $\varphi\in M_*^+$ be a faithful state. For every finite subset $F\subset M$, we denote by $M_F$ the von Neumann subalgebra generated by $1$ and \[ \{\sigma_t^\varphi(x)\mid x\in F, t\in \mathbb{Q}\}. \] Then $M_F$ is separable, $\sigma^\varphi$-invariant and contains $F$. By \cite[Theorem IX.4.2]{t2}, there exists a normal conditional expectation $\mathcal{E}_F$ of $M$ onto $M_F$ such that $\varphi\circ\mathcal{E}_F=\varphi$. As in the proof of \cite[Theorem 3.6]{ot}, the projection $E_F$ on $H_\varphi$ defined below is a c.c.p.\ operator: \[ E_F(x\xi_\varphi)=\mathcal{E}_F(x)\xi_\varphi \quad\text{for}\ x\in M.
\] It is easy to see that $M_F$ has the CS-HAP. It can also be checked that if $M_F$ has the HAP for every $F$, then $M$ has the HAP. Hence we can further assume that $M$ is separable. Since $M$ has the CS-HAP, there exists a {\em sequence} of normal c.p.\ maps $\Phi_n$ with $\varphi\circ\Phi_n\leq\varphi$ such that the following implementing operator $T_n^0$ is compact and $T_n^0\to 1_{H_\varphi}$ strongly: \[ T_n^0(x\xi_\varphi):=\Phi_n(x)\xi_\varphi \quad \mbox{for}\ x\in M. \] In particular, $T_n^0$ is a c.p.\ operator with respect to $P_\varphi^\sharp$. By the principle of uniform boundedness, the sequence $(T_n^0)$ is uniformly norm-bounded. By Lemma \ref{lem:onlyif}, we have a uniformly norm-bounded sequence of compact operators $T_n$ such that each $T_n$ is c.p.\ with respect to $P_\varphi^{1/4}$ and $T_n$ weakly converges to $1_{H_\varphi}$. By a convexity argument, we may assume that $T_n\to1_{H_\varphi}$ strongly. It turns out from \cite[Theorem 4.9]{ot} that $M$ has the HAP. \end{proof} Therefore we have finished proving Theorem \ref{thm:equiv}. We will close this section with the following result, which is the contractive map version of Definition \ref{defn:CS1}. \begin{theorem} \label{thm:cpmap} Let $M$ be a von Neumann algebra. Then the following statements are equivalent: \begin{enumerate} \item[{\rm (1)}] $M$ has the HAP; \item[{\rm (2)}] For any $\varphi\in W(M)$, there exists a net of normal c.c.p.\ maps $\Phi_n$ on $M$ such that \begin{itemize} \item $\varphi\circ\Phi_n\leq\varphi$; \item $\Phi_n\to\mathrm{id}_{M}$ in the point-ultraweak topology; \item For all $\alpha\in[0,1/2]$, the associated c.c.p.\ operators $T_n^\alpha$ on $H_{\varphi}$ defined below are compact and $T_n^\alpha\to1_{H_{\varphi}}$ in the strong topology: \begin{equation} \label{eq:PhTnal} T_n^\alpha\Delta_{\varphi}^\alpha \Lambda_\varphi(x) =\Delta_{\varphi}^\alpha\Lambda_\varphi(\Phi_n(x)) \quad \mbox{for all } x\in n_\varphi.
\end{equation} \end{itemize} \item[{\rm (3)}] For some $\varphi\in W(M)$ and some $\alpha\in[0,1/2]$, there exists a net of normal c.c.p.\ maps $\Phi_n$ on $M$ such that \begin{itemize} \item $\varphi\circ\Phi_n\leq\varphi$; \item $\Phi_n\to\mathrm{id}_{M}$ in the point-ultraweak topology; \item The associated c.c.p.\ operators $T_n^\alpha$ on $H_{\varphi}$ defined below are compact and $T_n^\alpha\to1_{H_{\varphi}}$ in the strong topology: \begin{equation} \label{eq:PhTn} T_n^\alpha\Delta_{\varphi}^\alpha \Lambda_\varphi(x) =\Delta_{\varphi}^\alpha\Lambda_\varphi(\Phi_n(x)) \quad \mbox{for all } x\in n_\varphi. \end{equation} \end{itemize} \end{enumerate} \end{theorem} First, we will show that the second statement does not depend on the choice of $\varphi$. So let us denote this approximation property by the approximation property $(\alpha, \varphi)$ for now, and simply by the approximation property $(\alpha)$ afterwards. \begin{lemma} The approximation property $(\alpha, \varphi)$ does not depend on the choice of $\varphi\in W(M)$. \end{lemma} \begin{proof} Suppose that $M$ has the approximation property $(\alpha, \varphi)$. It suffices to show that each operation listed in the proof of Lemma \ref{lem:CS-indep} inherits the property $(\alpha,\varphi)$. It is relatively easy to treat the first three operations, so we omit the proofs for them. Also, we can show that if $e_i$ is a net as in the statement of Lemma \ref{lem:reduction} (2) and $e_i Me_i$ has the approximation property $(\alpha, \varphi_{e_i})$ for each $i$, then $M$ has the approximation property $(\alpha, \varphi)$. Thus it suffices to treat ${\psi}:=\varphi_h$ for a positive invertible element $h\in M_\varphi$. Our idea is similar to that of the proof of Lemma \ref{lem:HAPindep}. Let $\varepsilon>0$. Take $\delta>0$ so that $2\delta/(1+\delta)<\varepsilon$.
Let $\{e_i\}_{i=1}^m$ be spectral projections of $h$ such that $\sum_{i=1}^m e_i=1$ and $h e_i \leq \lambda_i e_i \leq (1+\delta)he_i$ for some $\lambda_i>0$. For a normal c.c.p.\ map $\Phi$ on $M$ such that $\varphi\circ\Phi\leq\varphi$, we let $\Phi_h(x):=h^{-1/2}\Phi(h^{1/2}xh^{1/2})h^{-1/2}$ for $x\in M$. Then $\Phi_h$ is a normal c.p.\ map satisfying ${\psi}\circ\Phi_h\leq{\psi}$. Next we let $\Phi_{(h, \delta)}(x):=\sum_{i,j=1}^m e_i\Phi_h(e_ixe_j)e_j$ for $x\in M$. For $x\in M^+$, we have \begin{align*} {\psi}(\Phi_{(h,\delta)}(x)) &= \sum_{i=1}^m {\psi}(e_i\Phi_h(e_ixe_i)) \leq \sum_{i=1}^m {\psi}(\Phi_h(e_ixe_i)) \\ &\leq \sum_{i=1}^m {\psi}(e_ixe_i) ={\psi}(x). \end{align*} Also, we obtain \[ \Phi_{(h,\delta)}(1) = \sum_{i=1}^m e_i\Phi_h(e_i)e_i = \sum_{i=1}^m e_ih^{-1/2}\Phi(he_i)h^{-1/2}e_i, \] and the norm of $\Phi_{(h,\delta)}(1)$ equals the maximum of that of $e_ih^{-1/2}\Phi(he_i)h^{-1/2}e_i$. Since \begin{align*} \|e_ih^{-1/2}\Phi(he_i)h^{-1/2}e_i\| &\leq \|e_ih^{-1/2}\|^2\|he_i\| \leq (1+\delta)\lambda_i^{-1}\cdot\lambda_i \\ &= 1+\delta, \end{align*} we have $\|\Phi_{(h,\delta)}\|\leq1+\delta$. Now let $\mathcal{F}$ be a finite subset in the norm unit ball of $M$ and $\mathcal{G}$ a finite subset in $M_*$. Let $\alpha\in[0,1/2]$. By the property $(\alpha, \varphi)$, we can take a normal c.c.p.\ map $\Phi$ on $M$ such that $\varphi\circ\Phi\leq\varphi$, $|\omega(\Phi_{(h,\delta)}(x)-x)|<\delta$ for all $x\in\mathcal{F}$ and $\omega\in\mathcal{G}$ and the implementing operator $T^\alpha$ of $\Phi$ with respect to $P_\varphi^\alpha$ is compact. Put $\Psi_{(h,\delta)}:=(1+\delta)^{-1}\Phi_{(h,\delta)}$, which is a normal c.c.p.\ map satisfying ${\psi}\circ\Psi_{(h,\delta)}\leq{\psi}$. Then we have $|\omega(\Psi_{(h,\delta)}(x)-x)|<2\delta/(1+\delta)<\varepsilon$ for all $x\in \mathcal{F}$ and $\omega\in \mathcal{G}$.
By direct computation, we see that the implementing operator of $\Psi_{(h,\delta)}$ with respect to $P_\varphi^\alpha$ is equal to the following operator: \[ \widetilde{T}:= (1+\delta)^{-1} \sum_{i,j=1}^m h^\alpha e_i J_\varphi h^{\hat{\al}} e_j J_\varphi T h^{-\alpha} e_i J_\varphi h^{-{\hat{\al}}}e_j J_\varphi. \] Thus $\widetilde{T}$ is compact, and we are done. (See also $\widetilde{T}$ in the proof of Lemma \ref{lem:HAPindep}.) \end{proof} \begin{proof}[Proof of Theorem \ref{thm:cpmap}] (1)$\Rightarrow$(2). Take $\varphi_0\in W(M)$ such that there exists a partition of unity $\{e_i\}_{i\in I}$ of projections in $M_{\varphi_0}$, the centralizer of $\varphi_0$, such that ${\psi}_i:=\varphi_0 e_i$ is a faithful normal state on $e_i M e_i$ for each $i\in I$. Then we have an increasing net of projections $f_j$ in $M_{\varphi_0}$ such that $f_j\to1$. Thus we may and do assume that $M$ is $\sigma$-finite as usual. Employing Theorem \ref{thm:sigma-finite}, we obtain a net of normal c.c.p.\ maps $\Phi_n$ on $M$ such that \begin{itemize} \item $\varphi\circ\Phi_n\leq \varphi$; \item $\Phi_n\to\mathrm{id}_{M}$ in the point-ultraweak topology; \item The operator $T_n$ defined below is a compact c.c.p.\ operator on $H_\varphi$: \[ T_n(\Delta_\varphi^{1/4}x\xi_\varphi) =\Delta_\varphi^{1/4}\Phi_n(x)\xi_\varphi \ \text{for } x\in M. \] \end{itemize} Now recall our proof of Theorem \ref{thm:equiv} (1)$\Rightarrow$(2). After averaging $\Phi_n$ by $g_\beta(t)$ and $g_\gamma(t)$, we obtain a normal c.c.p.\ map $\Phi_{n,\beta,\gamma}$ which satisfies $\varphi\circ\Phi_{n,\beta,\gamma}\leq\varphi$ and $\Phi_{n,\beta,\gamma}\to\id_M$ in the point-ultraweak topology. For $\alpha\in[0,1/2]$, we define the following operator: \[ T_{n,\beta,\gamma}^\alpha\Delta_\varphi^\alpha\Lambda_\varphi(x) := \Delta_\varphi^\alpha\Lambda_\varphi(\Phi_{n,\beta,\gamma}(x)) \quad \mbox{for } x\in n_\varphi.
\] Then we can show the compactness of $T_{n,\beta,\gamma}^\alpha$ in a similar way to the proof of Theorem \ref{thm:equiv} (1)$\Rightarrow$(2), and we are done. (2)$\Rightarrow$(3). This implication is trivial. (3)$\Rightarrow$(1). By our assumption, we have a net of c.c.p.\ compact operators $T_n^\alpha$ with respect to some $P_\varphi^\alpha$ such that $T_n^\alpha\to1$ in the strong operator topology. Namely $M$ has the $\alpha$-HAP, and thus $M$ has the HAP by Theorem \ref{thm:equiv}. \end{proof} \section{Haagerup approximation property and non-commutative $L^p$-spaces} In this section, we study some relations between the Haagerup approximation property and non-commutative $L^p$-spaces associated with a von Neumann algebra. \subsection{Haagerup's $L^p$-spaces} We begin with Haagerup's $L^p$-spaces in \cite{haa2}. (See also \cite{te1}.) Throughout this subsection, we fix an f.n.s.\ weight $\varphi$ on a von Neumann algebra $M$. We denote by $R$ the crossed product $M\rtimes_\sigma\mathbb{R}$ of $M$ by the $\mathbb{R}$-action $\sigma:=\sigma^\varphi$. Via the natural embedding, we have the inclusion $M\subset R$. Then $R$ admits the canonical faithful normal semifinite trace $\tau$ and there exists the dual action $\theta$ satisfying $\tau\circ\theta_s=e^{-s}\tau$ for $s\in\mathbb{R}$. Note that $M$ is equal to the fixed point algebra $R^\theta$, that is, $M=\{y\in R \mid \theta_s(y)=y\ \text{for } s\in\mathbb{R}\}$. We denote by $\widetilde{R}$ the set of all $\tau$-measurable closed densely defined operators affiliated with $R$. The set of positive elements in $\widetilde{R}$ is denoted by $\widetilde{R}^+$. For $\psi\in M_*^+$, we denote by $\hat{\psi}$ its dual weight on $R$ and by $h_\psi$ the element of $\widetilde{R}^+$ satisfying $ \hat{\psi}(y)=\tau(h_\psi y) $ for all $y\in R$. Then the map $\psi\mapsto h_\psi$ is extended to a linear bijection of $M_*$ onto the subspace \[ \{h\in \widetilde{R} \mid \theta_s(h)=e^{-s}h \ \text{for } s\in\mathbb{R}\}. 
\] Let $1\leq p< \infty$. The $L^p$-space of $M$ due to Haagerup is defined as follows: \[ L^p(M):= \{ a\in\widetilde{R} \mid \theta_s(a)=e^{-\frac{s}{p}}a \ \text{for } s\in \mathbb{R} \}. \] Note that the spaces $L^p(M)$ and their relations are independent of the choice of an f.n.s. weight $\varphi$, and thus canonically associated with a von Neumann algebra $M$. Denote by $L^p(M)^+$ the cone $L^p(M)\cap\widetilde{R}^+$. Recall that if $a\in\widetilde{R}$ with the polar decomposition $a=u|a|$, then $a\in L^p(M)$ if and only if $|a|^p\in L^1(M)$. The linear functional $\mathrm{tr}$ on $L^1(M)$ is defined by \[ \mathrm{tr}(h_\psi):=\psi(1) \quad\text{for } \psi\in M_*. \] Then $L^p(M)$ becomes a Banach space with the norm \[ \|a\|_p:=\mathrm{tr}(|a|^p)^{1/p} \quad\text{for } a\in L^p(M). \] In particular, $M_*\simeq L^1(M)$ via the isometry $\psi\mapsto h_\psi$. For non-commutative $L^p$-spaces, the usual H\"older inequality also holds. Namely, let $q>1$ with $1/p+1/q=1$, and we have \[ |\tr(ab)|\leq\|ab\|_1\leq\|a\|_p\|b\|_q \quad\text{for } a\in L^p(M), b\in L^q(M). \] Thus the form $(a, b)\mapsto\tr(ab)$ gives a duality between $L^p(M)$ and $L^q(M)$. Moreover the functional $\tr$ has the ``tracial'' property: \[ \tr(ab)=\tr(ba) \quad\text{for } a\in L^p(M), b\in L^q(M). \] Among non-commutative $L^p$-spaces, $L^2(M)$ becomes a Hilbert space with the inner product \[ \langle a, b\rangle:=\mathrm{tr}(b^*a) \quad\text{for } a, b\in L^2(M). \] The Banach space $L^p(M)$ has the natural $M$-$M$-bimodule structure as defined below: \[ x\cdot a\cdot y:=xay \quad \mbox{for } x, y\in M,\ a\in L^p(M). \] The conjugate-linear isometric involution $J_p$ on $L^p(M)$ is defined by $a\mapsto a^*$ for $a\in L^p(M)$. Then the quadruple $(M, L^2(M), J_2, L^2(M)^+)$ is a standard form. \subsection{Haagerup approximation property for non-commutative $L^p$-spaces} We consider the f.n.s.\ weight $\varphi^{(n)}:=\varphi\otimes\mathrm{tr}_n$ on $\mathbb{M}_n(M):=M\otimes\mathbb{M}_n$. 
Since $\sigma_t^{(n)}:=\sigma_t^{\varphi^{(n)}}=\sigma_t\otimes\mathrm{id}_n$, we have \[ R^{(n)}:=\mathbb{M}_n(M)\rtimes_{\sigma^{(n)}}\mathbb{R} =(M\rtimes_\sigma\mathbb{R})\otimes\mathbb{M}_n=\mathbb{M}_n(R). \] The canonical f.n.s.\ trace on $R^{(n)}$ is given by $\tau^{(n)}=\tau\otimes\mathrm{tr}_n$. Moreover $\theta^{(n)}:=\theta\otimes\mathrm{id}_n$ is the dual action on $R^{(n)}$. Since $\widetilde{R^{(n)}}=\mathbb{M}_n(\widetilde{R})$, we have \[ L^p(\mathbb{M}_n(M))=\mathbb{M}_n(L^p(M)) \quad\text{and}\quad \mathrm{tr}^{(n)}=\mathrm{tr}\otimes\mathrm{tr}_n. \] \begin{definition} Let $M$ and $N$ be two von Neumann algebras with f.n.s.\ weights $\varphi$ and $\psi$, respectively. For $1\leq p\leq \infty$, a bounded linear operator $T\colon L^p(M)\to L^p(N)$ is {\em completely positive} if $T^{(n)}\colon L^p(\mathbb{M}_n(M))\to L^p(\mathbb{M}_n(N))$ is positive for every $n\in\mathbb{N}$, where $T^{(n)}[a_{i, j}]=[Ta_{i, j}]$ for $[a_{i, j}]\in L^p(\mathbb{M}_n(M))=\mathbb{M}_n(L^p(M))$. \end{definition} In the case where $M$ is $\sigma$-finite, the following result gives a construction of a c.p.\ operator on $L^p(M)$ from a c.p.\ map on $M$. \begin{theorem}[cf. {\cite[Theorem 5.1]{hjx}}] \label{thm:hjx} If $\Phi$ is a c.c.p.\ map on $M$ with $\varphi\circ\Phi\leq C\varphi$, then one obtains a c.p.\ operator $T^p_\Phi$ on $L^p(M)$ with $\|T^p_\Phi\|\leq C^{1/p}\|\Phi\|^{1-1/p}$, which is defined by \begin{equation}\label{eq:L^p-op} T^p_\Phi(h_\varphi^{1/2p}xh_\varphi^{1/2p}) :=h_\varphi^{1/2p}\Phi(x)h_\varphi^{1/2p} \quad\text{for } x\in M. \end{equation} \end{theorem} Let $M$ be a $\sigma$-finite von Neumann algebra with a faithful state $\varphi\in M_*^+$.
Since \[ \|h_\varphi^{1/4} x h_\varphi^{1/4}\|_2^2 =\mathrm{tr}(h_\varphi^{1/4}x^*h_\varphi^{1/2}xh_\varphi^{1/4}) =\|\Delta_\varphi^{1/4}x\xi_\varphi\|^2 \quad\text{for } x\in M, \] we have the isometric isomorphism $L^2(M)\simeq H_\varphi$ defined by $h_\varphi^{1/4}xh_\varphi^{1/4} \mapsto \Delta_\varphi^{1/4}x\xi_\varphi$ for $x\in M$. Therefore under this identification, the above operator $T_\Phi^2$ is nothing but $T_\Phi^{1/4}$, which is given in Lemma \ref{lem:cp-operator}. \begin{definition} Let $1<p<\infty$ and $M$ be a von Neumann algebra. We will say that $M$ has the $L^p$-\emph{Haagerup approximation property} ($L^p$-HAP) if there exists a net of c.c.p.\ compact operators $T_n$ on $L^p(M)$ such that $T_n\to1_{L^p(M)}$ in the strong topology. \end{definition} Note that a von Neumann algebra $M$ has the HAP if and only if $M$ has the $L^2$-HAP, because $(M, L^2(M), J_2, L^2(M)^+)$ is a standard form as mentioned previously. \subsection{Kosaki's $L^p$-spaces} We assume that $\varphi$ is a faithful normal state on a $\sigma$-finite von Neumann algebra $M$. For each $\eta\in[0,1]$, $M$ is embedded into $L^1(M)$ by $M\ni x\mapsto h_\varphi^\eta x h_\varphi^{1-\eta}\in L^1(M)$. We define the norm $\|h_\varphi^\eta x h_\varphi^{1-\eta}\|_{\infty, \eta} :=\|x\|$ on $h_\varphi^\eta Mh_\varphi^{1-\eta}\subset L^1(M)$, i.e., $M\simeq h_\varphi^\eta Mh_\varphi^{1-\eta}$. Then $(h_\varphi^\eta Mh_\varphi^{1-\eta}, L^1(M))$ becomes a compatible pair of Banach spaces in the sense of A. P. Calder\'{o}n \cite{ca}. For $1<p<\infty$, Kosaki's $L^p$-space $L^p(M; \varphi)_\eta$ is defined as the complex interpolation space $C_\theta(h_\varphi^\eta Mh_\varphi^{1-\eta}, L^1(M))$ equipped with the complex interpolation norm $\|\cdot\|_{p, \eta}:=\|\cdot\|_{C_\theta}$, where $\theta=1/p$. In particular, $L^p(M; \varphi)_0$, $L^p(M; \varphi)_1$ and $L^p(M; \varphi)_{1/2}$ are called the left, the right and the symmetric $L^p$-spaces, respectively.
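As a quick consistency check (a restatement of identities already appearing in the text, not a new result), take the symmetric case $\eta=1/2$ with $p=q=2$, so $\theta=1/2$: combining the interpolation-norm identity $\|h_\varphi^{1/2q}ah_\varphi^{1/2q}\|_{p,1/2}=\|a\|_p$ recalled below with the isometry $L^2(M)\simeq H_\varphi$ above, one obtains for $x\in M$

```latex
\|h_\varphi^{1/2}xh_\varphi^{1/2}\|_{2,1/2}
  =\|h_\varphi^{1/4}xh_\varphi^{1/4}\|_{2}
  =\|\Delta_\varphi^{1/4}x\xi_\varphi\|_{H_\varphi},
```

so that $L^2(M;\varphi)_{1/2}\simeq L^2(M)\simeq H_\varphi$ isometrically, matching the identification used for $T^2_\Phi$ and $T^{1/4}_\Phi$.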
Note that the symmetric $L^p$-space $L^p(M; \varphi)_{1/2}$ is exactly the $L^p$-space studied in \cite{te2}. From now on, we assume that $\eta=1/2$, and we will use the notation $L^p(M; \varphi)$ for the symmetric $L^p$-space $L^p(M; \varphi)_{1/2}$. Note that $L^p(M; \varphi)$ is exactly $h_\varphi^{1/2q}L^p(M)h_\varphi^{1/2q}$, where $1/p+1/q=1$, and \[ \|h_\varphi^{1/2q}ah_\varphi^{1/2q}\|_{p, 1/2}=\|a\|_p \quad\text{for } a\in L^p(M). \] Namely, we have $L^p(M; \varphi) =h_\varphi^{1/2q}L^p(M)h_\varphi^{1/2q} \simeq L^p(M)$. Furthermore, we have \[ h_\varphi^{1/2}Mh_\varphi^{1/2} \subset L^p(M; \varphi)\subset L^1(M), \] and $h_\varphi^{1/2}Mh_\varphi^{1/2}$ is dense in $L^p(M; \varphi)$. Let $\Phi$ be a c.p.\ map on $M$ with $\varphi\circ\Phi\leq \varphi$. Note that $T^2_\Phi$ in Theorem \ref{thm:hjx} equals $T^{1/4}_\Phi$ in Lemma \ref{lem:cp-operator} under the identification $L^2(M; \varphi)=H_\varphi$. By the reiteration theorem for the complex interpolation method in \cite{bl,ca}, we have \begin{equation}\label{eq:p>2} L^p(M; \varphi) =C_{2/p}(h_\varphi^{1/2} Mh_\varphi^{1/2}, L^2(M; \varphi)) \quad\text{for } 2<p<\infty, \end{equation} and \begin{equation}\label{eq:p<2} L^p(M; \varphi) =C_{\frac{2}{p}-1}(L^2(M; \varphi), L^1(M)) \quad\text{for } 1<p<2. \end{equation} (See also \cite[Section 4]{ko3}.) Thanks to \cite{lp}, if $T^2_\Phi=T^{1/4}_\Phi$ is compact on $L^2(M; \varphi)=H_\varphi$, then $T^p_\Phi$ is also compact on $L^p(M; \varphi)$ for $1<p<\infty$. \subsection{The equivalence between the HAP and the $L^p$-HAP} We first show that the HAP implies the $L^p$-HAP in the case where $M$ is $\sigma$-finite. \begin{theorem}\label{thm:L^p-HAP} Let $M$ be a $\sigma$-finite von Neumann algebra with a faithful state $\varphi\in M_*^+$. 
Suppose that $M$ has the HAP, i.e., there exists a net of normal c.c.p.\ maps $\Phi_n$ on $M$ with $\varphi\circ\Phi_n\leq\varphi$ satisfying the following: \begin{itemize} \item $\Phi_n\to\mathrm{id}_M$ in the point-ultraweak topology; \item the associated operators $T^2_{\Phi_n}$ on $L^2(M)$ defined below are compact and $T^2_{\Phi_n}\to 1_{L^2(M)}$ in the strong topology: \[ T^2_{\Phi_n}(h_\varphi^{1/4}xh_\varphi^{1/4}) =h_\varphi^{1/4}\Phi_n(x)h_\varphi^{1/4} \quad\text{for } x\in M. \] \end{itemize} Then $T^p_{\Phi_n}\to 1_{L^p(M)}$ in the strong topology on $L^p(M)$ for $1<p<\infty$. In particular, $M$ has the $L^p$-HAP for all $1<p<\infty$. \end{theorem} \begin{proof} We will freely use notation and results from \cite{ko3}. First we consider the case where $p>2$. By (\ref{eq:p>2}) we have \[ L^p(M; \varphi) =C_\theta(h_\varphi^{1/2} Mh_\varphi^{1/2}, L^2(M; \varphi)) \quad\text{with}\ \theta:=2/p. \] Let $a\in L^p(M; \varphi)$ with $\|a\|_{L^p(M; \varphi)}=\|a\|_{C_\theta}\leq 1$ and $0<\varepsilon<1$. By the definition of the interpolation norm, there exists $f\in F(h_\varphi^{1/2} Mh_\varphi^{1/2}, L^2(M; \varphi))$ such that $a=f(\theta)$ and $|\!|\!| f |\!|\!|_F\leq 1+\varepsilon/3$. By \cite[Lemma 4.2.3]{bl} (or \cite[Lemma 1.3]{ko3}), there exists $g\in F_0(h_\varphi^{1/2} Mh_\varphi^{1/2}, L^2(M; \varphi))$ such that $|\!|\!| f-g |\!|\!|_F\leq \varepsilon/3$ and $g(z)$ is of the form \[ g(z)=\exp(\lambda z^2)\sum_{k=1}^K\exp(\lambda_kz) h_\varphi^{1/2}x_kh_\varphi^{1/2}, \] where $\lambda>0$, $K\in\mathbb{N}$, $\lambda_1,\dots,\lambda_K\in\mathbb{R}$ and $x_1,\dots,x_K\in M$. Then \[ \|f(\theta)-g(\theta)\|_\theta\leq|\!|\!| f-g |\!|\!|_F\leq \varepsilon/3. \] Since \[ \lim_{t\to\pm\infty}\|g(1+it)\|_{L^2(M; \varphi)}=0, \] the subset $\{g(1+it) \mid t\in\mathbb{R}\}$ of $L^2(M; \varphi)$ is relatively compact in norm.
Hence there exists $n_0\in\mathbb{N}$ such that \[ \|T^2_{\Phi_n} g(1+it)-g(1+it)\|_{L^2(M; \varphi)} \leq\left(\frac{\varepsilon}{4^{1-\theta}3}\right)^{1/\theta} \quad\text{for } n\geq n_0 \ \text{and}\ t\in\mathbb{R}. \] Moreover, \begin{align*} \|\Phi_n(g(it))-g(it)\| &\leq\|\Phi_n-\mathrm{id}_M\|\|g(it)\| \\ &\leq 2|\!|\!| g |\!|\!|_F \\ &\leq 2\left(|\!|\!| f |\!|\!|_F+\varepsilon/3\right) \\ &\leq 2\left(1+2\varepsilon/3\right)<4. \end{align*} We put \[ T_{\Phi_n} g(z) :=\exp(\lambda z^2)\sum_{k=1}^K\exp(\lambda_kz) h_\varphi^{1/2}\Phi_n(x_k)h_\varphi^{1/2} \in F_0(h_\varphi^{1/2}Mh_\varphi^{1/2}, L^2(M; \varphi)). \] Then $T^p_{\Phi_n} g(\theta)=T_{\Phi_n} g(\theta)\in L^p(M; \varphi)$. Hence by \cite[Lemma 4.3.2]{bl} (or \cite[Lemma A.1]{ko3}), we have \begin{align*} \|(T^p_{\Phi_n} g)(\theta)-g(\theta)\|_\theta &\leq \left( \int_\mathbb{R}\|\Phi_n(g(it))-g(it)\|P_0(\theta, t) \, \frac{dt}{1-\theta} \right)^{1-\theta} \\ &\quad\times\left( \int_\mathbb{R}\|T^2_{\Phi_n} g(1+it)-g(1+it)\|_{L^2(M; \varphi)}P_1(\theta, t) \,\frac{dt}{\theta} \right)^\theta \\ &\leq 4^{1-\theta}\cdot \varepsilon/(4^{1-\theta}3) =\varepsilon/3. \end{align*} Therefore since $T^p_{\Phi_n}$ are contractive on $L^p(M; \varphi)$, we have \begin{align*} \|T^p_{\Phi_n} f(\theta)-f(\theta)\|_\theta &\leq \|T^p_{\Phi_n} f(\theta)-T^p_{\Phi_n} g(\theta)\|_\theta +\|T^p_{\Phi_n} g(\theta)-g(\theta)\|_\theta \\ &\quad +\|g(\theta)-f(\theta)\|_\theta \\ &<\varepsilon. \end{align*} Hence $T^p_{\Phi_n}\to 1_{L^p(M; \varphi)}$ in the strong topology. In the case where $1<p<2$, the same argument also works. \end{proof} We continue further investigation of the $L^p$-HAP. \begin{lemma}\label{lem:dual} Let $1<p, q<\infty$ with $1/p+1/q=1$. Then $M$ has the $L^p$-HAP if and only if $M$ has the $L^q$-HAP. \end{lemma} \begin{proof} Suppose that $M$ has the $L^p$-HAP, i.e., there exists a net of c.c.p.\ compact operators $T_n$ on $L^p(M)$ such that $T_n\to1_{L^p(M)}$ in the strong topology. 
Then we consider the transpose operators ${}^tT_n$ on $L^q(M)$, which are defined by \[ \mathrm{tr}({}^tT_n(b)a)=\mathrm{tr}(bT_n(a)) \quad\text{for } a\in L^p(M), b\in L^q(M). \] It is easy to check that ${}^tT_n$ is c.c.p.\ compact and ${}^tT_n\to 1_{L^q(M)}$ in the weak topology. By taking suitable convex combinations, we have a net of c.c.p.\ compact operators $\widetilde{T}_n$ on $L^q(M)$ such that $\widetilde{T}_n\to1_{L^q(M)}$ in the strong topology. Hence $M$ has the $L^q$-HAP. \end{proof} We use the following folklore among specialists. (See \cite[Proposition 7.6]{pt}, \cite[Proposition 3.1]{ko2}.) \begin{lemma}\label{lem:pt} Let $h$ and $k$ be $\tau$-measurable self-adjoint operators such that $h$ is non-singular. Then there exists $x\in M^+$ such that $k=h^{1/2}xh^{1/2}$ if and only if $k\leq ch$ for some $c\geq 0$. In this case, we have $\|x\|\leq c$. \end{lemma} In the case where $p=2$, the following lemma is proved in \cite[Lemma 4.1]{ot}. \begin{lemma} \label{lem:order} Let $1<p<\infty$ and $M$ be a $\sigma$-finite von Neumann algebra with $h_0\in L^1(M)^+$ such that $h_0^{1/2}$ is cyclic and separating in $L^2(M)$. Then $\Theta_{h_0}^p\colon M_{\mathrm{sa}}\to L^p(M)$, which is defined by \[ \Theta_{h_0}^p(x):=h_0^{1/2p}xh_0^{1/2p} \quad\text{for } x\in M_{\mathrm{sa}}, \] induces an order isomorphism between $\{x\in M_{\mathrm{sa}}\mid -c1\leq x\leq c1\}$ and $K_{h_0}^p:=\{a\in L^p(M)_{\mathrm{sa}}\mid -ch_0^{1/p}\leq a\leq ch_0^{1/p}\}$ for each $c>0$. Moreover $\Theta_{h_0}^p$ is $\sigma(M, M_*)$-$\sigma(L^p(M), L^q(M))$ continuous. \end{lemma} \begin{proof} Suppose that $p>2$ and take $q>1$ with $1/p+1/q=1$. First we will show that $\Theta_{h_0}^p$ is $\sigma(M, M_*)$-$\sigma(L^p(M), L^q(M))$ continuous. If $x_n\to 0$ in $\sigma(M, M_*)$, then for $b\in L^q(M)$ we have \[ \tr(\Theta_{h_0}^p(x_n)b) =\tr((h_0^{1/2p}x_nh_0^{1/2p})b) =\tr(x_n(h_0^{1/2p}bh_0^{1/2p}))\to 0, \] because $h_0^{1/2p}bh_0^{1/2p}\in L^1(M)=M_*$.
Next we will prove that $\Theta_{h_0}^p$ is an order isomorphism between $\{x\in M\mid 0\leq x\leq 1\}$ and $\{a\in L^p(M)\mid 0\leq a\leq h_0^{1/p}\}$. If $x\in M$ with $0\leq x\leq 1$, then \[ \tr((h_0^{1/p}-h_0^{1/2p}xh_0^{1/2p})b) =\tr((1-x)h_0^{1/2p}bh_0^{1/2p})\geq 0 \quad\text{for } b\in L^q(M)^+. \] Hence $h_0^{1/p}\geq \Theta_{h_0}^p(x)=h_0^{1/2p}xh_0^{1/2p}\geq 0$. Conversely, let $a\in L^p(M)$ with $0\leq a\leq h_0^{1/p}$. By Lemma \ref{lem:pt}, there exists $x\in M$ with $0\leq x\leq 1$ such that $a=h_0^{1/2p}xh_0^{1/2p}$. \end{proof} We will use the following results. \begin{lemma}[{\cite[Theorem 4.2]{ko4}}] \label{thm:norm-conti} For $1\leq p, q<\infty$, the map \[ L^p(M)^+\ni a\mapsto a^{\frac{p}{q}}\in L^q(M)^+ \] is a homeomorphism with respect to the norm topologies. \end{lemma} In \cite{ko5}, it was proved that Furuta's inequality \cite{fu} remains valid for $\tau$-measurable operators. In particular, the L\"{o}wner--Heinz inequality holds for $\tau$-measurable operators. \begin{lemma} \label{lem:LH-ineq} If $\tau$-measurable positive self-adjoint operators $a$ and $b$ satisfy $a\leq b$, then $a^r\leq b^r$ for $0<r<1$. \end{lemma} The following lemma can be proved similarly to the proof of \cite[Lemma 4.2]{ot}. \begin{lemma} \label{lem:cp-ex} Let $1\leq p<\infty$. If $a\in L^p(M)^+$, then \begin{enumerate}\renewcommand{\labelenumi}{{\rm (\arabic{enumi})}}\renewcommand{\itemsep}{0pt} \item The functional $f_a\colon L^q(M)\to\mathbb{C}$, $b\mapsto\mathrm{tr}(ba)$ is a c.p.\ operator; \item The operator $g_a\colon\mathbb{C}\to L^p(M)$, $z\mapsto z a$ is a c.p.\ operator. \end{enumerate} \end{lemma} In the case where $p=2$, the following lemma is also proved in \cite[Lemma 4.3]{ot}. We give a proof for the reader's convenience. \begin{lemma} \label{lem:cyc-sep} Let $1< p<\infty$ and $M$ be a $\sigma$-finite von Neumann algebra with a faithful state $\varphi\in M_*^+$.
If $M$ has the $L^p$-HAP, then there exists a net of c.c.p.\ compact operators $T_n$ on $L^p(M)$ such that $T_n\to1_{L^p(M)}$ in the strong topology, and $(T_nh_\varphi^{1/p})^{p/2}\in L^2(M)^+$ is cyclic and separating for all $n$. \end{lemma} \begin{proof} Since $M$ has the $L^p$-HAP, there exists a net of c.c.p.\ compact operators $T_n$ on $L^p(M)$ such that $T_n\to1_{L^p(M)}$ in the strong topology. Set $a_n^{1/p}:=T_nh_\varphi^{1/p}\in L^p(M)^+$. Then $a_n\in L^1(M)^+$. If we set \[ h_n:=a_n+(a_n-h_\varphi)_-\in L^1(M)^+, \] then $h_n\geq h_\varphi$. By Lemma \ref{lem:LH-ineq}, we obtain $h_n^{1/2}\geq h_\varphi^{1/2}$. It follows from \cite[Lemma 4.3]{co2} that $h_n^{1/2}\in L^2(M)^+$ is cyclic and separating. Now we define a compact operator $T_n'$ on $L^p(M)$ by \[ T_n'a:=T_na+\mathrm{tr}(ah_\varphi^{1/q}) (h_n^{1/p}-a_n^{1/p}) \quad\text{for } a\in L^p(M). \] Since $h_n^{1/p}\geq a_n^{1/p}$ by Lemma \ref{lem:LH-ineq}, each $T_n'$ is a c.p.\ operator, because of Lemma \ref{lem:cp-ex}. Note that \[ T_n'h_\varphi^{1/p}= T_nh_\varphi^{1/p}+\mathrm{tr}(h_\varphi) (h_n^{1/p}-a_n^{1/p}) =h_n^{1/p}. \] Since $a_n^{1/p}=T_nh_\varphi^{1/p}\to h_\varphi^{1/p}$ in norm, we have $a_n\to h_\varphi$ in norm by Lemma \ref{thm:norm-conti}. Since \[ \|h_n-a_n\|_1 =\|(a_n-h_\varphi)_-\|_1 \leq\|a_n-h_\varphi\|_1 \to 0, \] we obtain $\|h_n^{1/p}-a_n^{1/p}\|_p\to 0$ by Lemma \ref{thm:norm-conti}. Therefore $\|T_n'a-a\|_p\to 0$ for any $a\in L^p(M)$. Since $\|T_n'-T_n\|\leq\|h_n^{1/p}- a_n^{1/p}\|_p\to 0$, we get $\|T_n'\|\to 1$. Then the operators $\widetilde{T}_n:=\|T_n'\|^{-1}T_n'$ give the desired net. \end{proof} If $M$ is $\sigma$-finite and has the $L^p$-HAP for some $1<p<\infty$, then we can recover a net of normal c.c.p.\ maps on $M$ approximating the identity such that the associated implementing operators on $L^p(M)$ are compact. In the case where $p=2$, this is nothing but \cite[Theorem 4.8]{ot} (or Theorem \ref{thm:cpmap}).
\begin{theorem} \label{thm:L^p-HAP2} Let $1<p<\infty$ and let $M$ be a $\sigma$-finite von Neumann algebra with a faithful state $\varphi\in M_*^+$. If $M$ has the $L^p$-HAP, then there exists a net of normal c.c.p.\ maps $\Phi_n$ on $M$ with $\varphi\circ\Phi_n\leq\varphi$ satisfying the following: \begin{itemize} \item $\Phi_n\to\mathrm{id}_M$ in the point-ultraweak topology; \item the associated c.c.p.\ operators $T^p_{\Phi_n}$ on $L^p(M)$ defined below are compact and $T^p_{\Phi_n}\to1_{L^p(M)}$ in the strong topology: \[ T^p_{\Phi_n}(h_\varphi^{1/2p}xh_\varphi^{1/2p}) =h_\varphi^{1/2p}\Phi_n(x)h_\varphi^{1/2p} \quad\text{for } x\in M. \] \end{itemize} \end{theorem} \begin{proof} The case where $p=2$ is nothing but \cite[Theorem 4.8]{ot}. Let $p\ne 2$. Take $q>1$ such that $1/p+1/q=1$. By Lemma \ref{lem:cyc-sep}, there exists a net of c.c.p.\ compact operators $T_n$ on $L^p(M)$ such that $T_n\to1_{L^p(M)}$ in the strong topology, and $h_n^{1/2}:=(T_nh_\varphi^{1/p})^{p/2}$ is cyclic and separating in $L^2(M)$ for all $n$. Let $\Theta_{h_\varphi}^p$ and $\Theta_{h_n}^p$ be the maps given in Lemma \ref{lem:order}. For each $x\in M_{\mathrm{sa}}$, take $c>0$ such that $-c1\leq x\leq c1$. Then \[ -ch_\varphi^{1/p} \leq h_\varphi^{1/2p}x h_\varphi^{1/2p} \leq ch_\varphi^{1/p}. \] Since $T_n$ is positive and $T_nh_\varphi^{1/p}=h_n^{1/p}$, we have \[ -ch_n^{1/p} \leq T_n(h_\varphi^{1/2p}x h_\varphi^{1/2p}) \leq ch_n^{1/p}. \] From Lemma \ref{lem:order}, the operator $(\Theta_{h_n}^p)^{-1}(T_n(h_\varphi^{1/2p}x h_\varphi^{1/2p}))$ in $M$ is well-defined. Hence we can define a linear map $\Phi_n$ on $M$ by \[ \Phi_n:=(\Theta_{h_n}^p)^{-1}\circ T_n\circ \Theta_{h_\varphi}^p. \] In other words, \[ T_n(h_\varphi^{1/2p}x h_\varphi^{1/2p}) =h_n^{1/2p}\Phi_n(x) h_n^{1/2p} \quad\text{for } x\in M. \] One can easily check that $\Phi_n$ is a normal u.c.p.\ map. \vspace{5pt} \noindent \textbf{ Step 1.} We will show that $\Phi_n\to\mathrm{id}_M$ in the point-ultraweak topology.
\vspace{5pt} Since $h_\varphi^{1/2}Mh_\varphi^{1/2}$ is dense in $L^1(M)$, it suffices to show that \[ \mathrm{tr}(\Phi_n(x)h_\varphi^{1/2}yh_\varphi^{1/2})\to \mathrm{tr}(xh_\varphi^{1/2}yh_\varphi^{1/2}) \quad\text{for } x, y\in M. \] Indeed, \begin{align*} |\mathrm{tr}((\Phi_n(x)-x)h_\varphi^{1/2}yh_\varphi^{1/2})| &=|\mathrm{tr}(h_\varphi^{1/2p}(\Phi_n(x)-x)h_\varphi^{1/2p}\cdot h_\varphi^{\frac{1}{2q}}yh_\varphi^{\frac{1}{2q}})| \\ &=|\mathrm{tr}((T_n-1_{L^p(M)})(h_\varphi^{1/2p}xh_\varphi^{1/2p})\cdot h_\varphi^{\frac{1}{2q}}yh_\varphi^{\frac{1}{2q}})| \\ &\leq\|(T_n-1_{L^p(M)})(h_\varphi^{1/2p}xh_\varphi^{1/2p})\|_p\|h_\varphi^{\frac{1}{2q}}yh_\varphi^{\frac{1}{2q}}\|_q \\ &\to 0. \end{align*} \vspace{5pt} \noindent \textbf{ Step 2.} We will make a small perturbation of $\Phi_n$. \vspace{5pt} By Lemma \ref{thm:norm-conti}, we have $\|h_n-h_\varphi\|_1\to 0$, i.e., $\|\varphi_n-\varphi\|\to 0$, where $\varphi_n\in M_*^+$ is the unique element with $h_n=h_{\varphi_n}$. By a similar argument to the one in the proof of \cite[Theorem 4.8]{ot}, one can obtain normal c.c.p.\ maps $\widetilde{\Phi}_n$ on $M$ with $\widetilde{\Phi}_n\to\mathrm{id}_M$ in the point-ultraweak topology, and c.c.p.\ compact operators $\widetilde{T}_n$ on $L^p(M)$ with $\widetilde{T}_n\to 1_{L^p(M)}$ in the strong topology such that $\varphi\circ\widetilde{\Phi}_n\leq \varphi$ and \[ \widetilde{T}_n(h_\varphi^{1/2p}xh_\varphi^{1/2p}) =h_\varphi^{1/2p}\widetilde{\Phi}_n(x)h_\varphi^{1/2p} \quad\text{for } x\in M. \] Moreover, the operators $\widetilde{T}_n$ are nothing but $T^p_{\widetilde{\Phi}_n}$. \end{proof} Recall that $M$ has the completely positive approximation property (CPAP) if and only if $L^p(M)$ has the CPAP for some/all $1\leq p<\infty$. This result is proved in \cite[Theorem 3.2]{jrx}. The following is the HAP version of this fact. \begin{theorem}\label{thm:L^p-HAP3} Let $M$ be a von Neumann algebra.
Then the following are equivalent: \begin{enumerate}\renewcommand{\labelenumi}{{\rm (\arabic{enumi})}}\renewcommand{\itemsep}{0pt} \item $M$ has the HAP; \item $M$ has the $L^p$-HAP for all $1<p<\infty$; \item $M$ has the $L^p$-HAP for some $1<p<\infty$. \end{enumerate} \end{theorem} \begin{proof} We first reduce to the case where $M$ is $\sigma$-finite by the following elementary fact, similarly to the proof of \cite[Theorem 3.2]{jrx}. Take an f.n.s.\ weight $\varphi$ on $M$ and an increasing net of projections $e_n$ in $M$ with $e_n\to 1_M$ in the strong topology such that $\sigma_t^\varphi(e_n)=e_n$ for all $t\in\mathbb{R}$ and $e_nMe_n$ is $\sigma$-finite for all $n$. Then we can identify $L^p(e_nMe_n)$ with a subspace of $L^p(M)$ and there exists a completely positive projection from $L^p(M)$ onto $L^p(e_nMe_n)$ via $a\mapsto e_nae_n$. Moreover the union of these subspaces is norm dense in $L^p(M)$. Therefore it suffices to prove the theorem in the case where $M$ is $\sigma$-finite. (1)$\Rightarrow$(2). It is nothing but Theorem \ref{thm:L^p-HAP}. (2)$\Rightarrow$(3). It is trivial. (3)$\Rightarrow$(1). Suppose that $M$ has the $L^p$-HAP for some $1<p<\infty$. We may and do assume that $p<2$ by Lemma \ref{lem:dual}. Let $\varphi\in M_*$ be a faithful state. By Theorem \ref{thm:L^p-HAP2}, there exists a net of normal c.c.p.\ maps $\Phi_n$ on $M$ with $\varphi\circ\Phi_n\leq \varphi$ such that $\Phi_n\to\mathrm{id}_M$ in the point-ultraweak topology and the net of associated compact operators $T^p_{\Phi_n}$ converges to $1_{L^p(M)}$ in the strong topology. By the reiteration theorem for the complex interpolation method, we have $L^2(M; \varphi) =C_\theta(h_\varphi^{1/2}Mh_\varphi^{1/2}, L^p(M; \varphi))$ for some $0<\theta<1$. By \cite{lp}, the operators $T^2_{\Phi_n}$ are also compact. Moreover, by the same argument as in the proof of Theorem \ref{thm:L^p-HAP}, we have $T^2_{\Phi_n}\to 1_{L^2(M)}$ in the strong topology.
\end{proof} \begin{remark} In the proof of \cite[Theorem 3.2]{jrx}, it is shown that c.p.\ operators on $L^p(M)$ give c.p.\ maps on $M$ by using the result of L. M. Schmitt in \cite{sch}. See \cite[Theorem 3.1]{jrx} for more details. However, our approach is quite different and is based on the technique of A. M. Torpe in \cite{tor}. \end{remark}
\section*{Acknowledgments} The authors wish to thank Basescu Cristina and Simone Colombo for their extremely helpful comments and suggestions. This research was supported in part by U.S. Office of Naval Research grant N00014-19-1-2361 and the AXA Research Fund. \section{Background} In this section, we present a brief background on blockchain and smart contracts for definition purposes, introduce background on front-running attacks and various mitigation strategies, and describe a real-world DeFi actor that is significantly vulnerable to these attacks: the Automated Market Maker (AMM). \subsection{Blockchain \& Transaction Ordering} A blockchain is an immutable append-only ledger of ordered transactions. However, its transactions go through a series of stages before they are finally committed---irreversibly ordered---on the blockchain. After a sender creates a transaction, it must be propagated among the consensus nodes, who then place the transaction in a pool of pending transactions, most commonly known as the mempool. Notably, these transactions are \emph{not yet irreversibly ordered}, opening up the possibility for front-running attacks. Further, under certain probabilistic consensus algorithms, such as Proof-of-Work (PoW) or Proof-of-Stake (PoS), a transaction inserted onto the blockchain can still be reordered by inducing a fork of the underlying block\-chain\xspace. Hence, to guarantee irreversible ordering for probabilistic consensus algorithms, a transaction must receive enough block confirmations---the number of blocks succeeding the block containing the transaction~\cite{nakamoto2008bitcoin, blockconfirmationsKraken, blockconfirmationsCoinbase}. \subsection{Smart Contract \& Decentralized Exchange} A smart contract is an executable computer program modeled after a contract or an agreement that executes automatically~\cite{savelyev2017contract}.
A natural fit for smart contracts is on top of fault-tolerant consensus algorithms, such as PBFT-style, PoW, and PoS algorithms, ensuring their execution and integrity by all consensus nodes~\cite{wood2014ethereum, nakamoto2008bitcoin, kogias2016enhancing}. While Bitcoin utilizes smart contracts~\cite{nakamoto2008bitcoin}, it was not until Ethereum's introduction that the blockchain space realized Turing-complete smart contracts, the backbone necessary for the creation of complex decentralized exchanges. To operate complex smart contracts, users need to pay \textit{gas}, a pseudo-currency representing the execution cost incurred by miners. However, given the expressive nature of smart contracts, they come with significant risks, from inadvertent vulnerabilities to front-running. Front-running is made possible by the lack of ordering guarantees that the underlying block\-chain\xspace provides while transactions wait to be committed. \subsection{Front-running Attacks \& Mitigation} The practice of front-running involves benefiting from advanced knowledge of pending transactions~\cite{eskandari2019sok, bernhardt2008front, NasdaqFrontRunning}. In itself, knowledge of pending transactions is harmless, but having the ability to act on this information is where the true problem lies. In the context of blockchains, a front-running attack is performed by an adversary who can influence the order of transactions, provided that transactions in the mempool are completely in the clear. Cryptocurrencies mainly suffer from three types of front-running attacks~\cite{eskandari2019sok}: \emph{displacement}: displacing a targeted transaction with a new transaction, \emph{insertion}: inserting a transaction before the targeted transaction, and \emph{suppression}: delaying a targeted transaction indefinitely.
A recent study of the past five years of Ethereum's history found that the most common attack is insertion, the most profitable is displacement, and the most revenue-generating is suppression~\cite{torres2021frontrunner}. In an ideal world, front-running protection would consist of an \emph{immediate} global ordering of each transaction as it is created, preventing attackers from changing their order. In reality, such global ordering is practically impossible even if all participants were honest, due to clock-synchronization issues~\cite{dalwadi2017comparative} and consistency problems when two transactions carry exactly the same timestamp. With malicious participants, timings can easily be manipulated. A more practical solution is to encrypt transactions so that the consensus group\xspace has no knowledge of their contents when ordering them. This solution mitigates front-running attacks, as an attacker cannot benefit from pending transactions if they are encrypted. \subsection{Automated Market Maker}\label{sec:background:AMM} An AMM is a decentralized exchange built on liquidity pools---pools of assets held by the exchange---rather than conventional order books. An AMM allows any user to trade between different assets using the liquidity pool as the counterpart. Consider an AMM's liquidity pool holding two assets, $\tau_1$ and $\tau_2$, whose balances are $r_1$ and $r_2$, respectively. Under the constant-product model, the product $r_1 r_2$ remains constant when carrying out a trade between these two assets. This rule implies that the larger the trade input $\Delta r_1$, the worse the effective exchange rate between $\tau_1$ and $\tau_2$. We can intuitively see that transactions that are large with respect to the size of the liquidity pool cause significant price slippage, indicated by the difference between expected and executed prices.
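The price-slippage effect of the constant-product rule can be illustrated with a short sketch in Go (the language of our prototype); the pool balances and trade sizes below are hypothetical:

```go
package main

import "fmt"

// swapOutput returns the amount of τ2 received for an input dr1 of τ1
// under the constant-product rule r1*r2 = const (fees ignored).
func swapOutput(r1, r2, dr1 float64) float64 {
	k := r1 * r2
	return r2 - k/(r1+dr1)
}

func main() {
	r1, r2 := 1000.0, 1000.0 // hypothetical pool balances
	small := swapOutput(r1, r2, 10)  // trade small relative to the pool
	large := swapOutput(r1, r2, 500) // trade large relative to the pool
	fmt.Printf("small trade: %.4f τ2 per τ1\n", small/10)  // ≈0.9901
	fmt.Printf("large trade: %.4f τ2 per τ1\n", large/500) // ≈0.6667
}
```

The larger trade executes at a markedly worse rate, which is exactly the slippage that sandwich attackers exploit.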
By launching a sandwich attack, {\em i.e.}, an attack consisting of one front-run and one back-run transaction with respect to the targeted transaction, front-running actors can profit from this price-slippage effect~\cite{baum2021sok}. Fundamentally, the front-run transaction worsens the exchange rate for the victim, while the back-run transaction benefits from the improved exchange rate caused by the execution of the victim's transaction. To provide long-term orders, Paradigm introduces the notion of a \textit{scheduled AMM}, allowing a trade to be executed not immediately but in the future at a particular block height~\cite{Paradigm}. Smart contracts, however, then store these scheduled AMM inputs, making it easier for an adversary to launch sandwich attacks. A natural approach to hinder adversaries' exploitation of front-running is to blind transaction inputs, ensuring that the adversary does not know the direction of the exchange: $\tau_1$ to $\tau_2$ or $\tau_2$ to $\tau_1$. Nevertheless, these actors can still launch a \textit{speculative sandwich attack} using side channels, such as determining the sender's balance of $\tau_1$ and $\tau_2$ and examining past transaction history, albeit with increased difficulty and cost, while remaining profitable~\cite{baum2021sok}. A further improvement is to encrypt the entire transaction, not only the transaction inputs, effectively hiding the smart contract address involved and rendering the speculative attack even harder, since an attacker must first infer the smart contract's address. \section{Conclusion} This paper introduces F3B\xspace, a blockchain architecture that addresses front-running attacks by encrypting pending transactions with threshold encryption. We performed an exploration demonstrating that F3B\xspace can protect the Ethereum blockchain with a low latency overhead.
While acknowledging that widespread deployment of F3B\xspace is challenging given the need for modifications to the blockchain workflow, F3B\xspace provides substantial benefits: the blockchain, by default, would contain standard front-running protection for all applications at once, without requiring any modifications to the smart contracts themselves. \section{Deployment} This section discusses a few considerations for deploying F3B\xspace on real-world blockchains. \paragraph{Settling on block confirmations:} To reliably provide front-running protection, we must select a reasonable $m$: the number of block confirmations. For a PBFT-style consensus, a transaction is irreversible once placed inside a block, and thus $m$ can safely be set to 1. However, for probabilistic consensus algorithms ({\em e.g.}, PoW, PoS), choosing $m$ is more precarious. On the one hand, a larger $m$ suggests a better security level, rendering a successful front-running attack less likely. On the other hand, a larger $m$ means more time to commit a transaction, raising the minimum baseline time until commitment and execution across the entire blockchain network. Unlike blockchains without F3B\xspace protection, the public cannot observe transactions until the consensus group\xspace commits them and the secret-management committee\xspace releases their corresponding keys. The exact formula for choosing $m$ is out of scope for this paper, but Gervais et al.'s framework for systematically analyzing the security and performance of proof-of-work blockchains~\cite{gervais2016security} offers a reference for choosing $m$.
\paragraph{Storing encrypted transactions and delayed execution:} Since all transactions are encrypted, the consensus group\xspace cannot verify or execute them until the secret-management committee\xspace releases their corresponding keys, which happens only after the consensus group\xspace commits a transaction on the underlying block\-chain\xspace. This means that the consensus group\xspace must store the transaction on the blockchain without any regard to its content. Unfortunately, this brings a notable challenge, as attackers can spam the blockchain by fabricating invalid transactions, causing a backlog of pending transactions. Nonetheless, we present a mitigation technique in \S\ref{sec:spamming} involving a storage fee. \paragraph{A hard fork on an existing blockchain:} The easiest way to deploy F3B\xspace is on a new blockchain. If an existing blockchain like Ethereum decides to deploy F3B\xspace, this would require a scheduled hard fork, given that all nodes need to adopt the new rules due to the different execution workflow. \label{sec:SMCdiscussion} \paragraph{Reconfiguration of the secret-management committee\xspace:} The membership of the secret-management committee\xspace needs to be reconfigured at some predefined interval (each epoch) to allow for new trustees and the removal of others. In addition, this helps to prevent a silent violation of our threat model---the assumption that $f+1$ trustees cannot collude---over a long period. Care must be taken when transitioning between two epochs so that pending transactions are unaffected by the transition. This could involve having the decommissioned secret-management committee\xspace continue operating into the new epoch until the remaining pending transactions are committed and revealed.
\paragraph{Multiple secret-management committees\xspace:} The secret-management committee\xspace may inadvertently become the bottleneck of the underlying consensus protocol, given its inability to achieve the same throughput. Yet, a secret-management committee\xspace is agnostic to the underlying blockchain layer, allowing for independently operated secret-management committees\xspace. We thus suggest adopting a \emph{sharding} strategy by allowing numerous secret-management committees\xspace to operate in parallel, balancing pending transactions between them. Each secret-management committee\xspace needs to run the DKG protocol to set up its keys, store the public key \cpk{} and verification keys $h_i$ on the underlying block\-chain\xspace, and define its security parameters by choosing, for example, its committee size and $n$-bit security level. One option would then be for secret-management committees\xspace to form at will and let users freely choose one secret-management committee\xspace over another according to their preferences or needs. This would naturally introduce a free market, where secret-management committees\xspace compete with one another to provide quality service at low cost. Another option is to pool all qualifying nodes and randomly sample them into multiple shards at the beginning of each epoch using bias-resistant, distributed randomness~\cite{syta2017scalable}. The aggregated sampling changes the threat model from an individual secret-management committee\xspace to all nodes. Under the assumption of a \textit{mildly adaptive} adversary~\cite{luu2016secure, kiayias2017ouroboros}, who can compromise a number of trustees only with a time delay, random sampling helps ensure that the compromise of any single secret-management committee\xspace is harder to accomplish.
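The random-sampling option can be sketched as follows; the seed stands in for the output of a bias-resistant distributed-randomness beacon, and the node and committee counts are hypothetical:

```go
package main

import (
	"fmt"
	"math/rand"
)

// assignShards partitions node indices 0..nNodes-1 into nShards
// secret-management committees using a shared unbiased seed, so every
// participant derives the same assignment.
func assignShards(nNodes, nShards int, seed int64) [][]int {
	perm := rand.New(rand.NewSource(seed)).Perm(nNodes)
	shards := make([][]int, nShards)
	for i, node := range perm {
		shards[i%nShards] = append(shards[i%nShards], node)
	}
	return shards
}

func main() {
	for i, s := range assignShards(12, 3, 42) {
		fmt.Printf("committee %d: %v\n", i, s)
	}
}
```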
While this does introduce additional complexity for the setup of secret-management committees\xspace, this overhead does not affect transaction-processing latency, as the setup can be bootstrapped during the previous epoch. Overall, we expect that, by adopting a sharding strategy, the throughput of F3B\xspace increases linearly with the number of trustees in secret-management committees\xspace. \paragraph{Multiple blockchains:} A secret-management committee\xspace can also provide threshold-encryption services for multiple blockchains if it can monitor all the blockchains it supports and follow their sets of rules ({\em e.g.}, the blockchains can have different values of $m$). Recall that each blockchain is labeled with a distinct $L$ ({\em e.g.}, the genesis block's hash), which allows the sender to designate which blockchain their encrypted transactions belong to. \section{Optimization Discussion} This section discusses some possible directions for optimizing the protocol. We leave the detailed analysis of these optimizations as future work. \paragraph{Not every node needs to do the reconstruction:} Under our proposed protocol, every consensus node needs to fetch the shares and run the Lagrange interpolation to reconstruct the key. Would it instead be possible for one of the consensus nodes to reconstruct the symmetric key $k$ from the secret shares and propagate it to the other consensus nodes with a succinct proof? We thus propose a solution that requires additional storage overhead in exchange for faster verification: instead of constructing their encrypted transaction as ($c_k, c_{\ensuremath{\mathrm{tx}}\xspace}$), the sender additionally adds a hash of the symmetric key $h_k=H(k)$ as the third entry, creating the following signed write transaction: $\ensuremath{\mathrm{tx_w}}\xspace = [c_k, c_{\ensuremath{\mathrm{tx}}\xspace}, h_k]_{\sign{\sk_A}}$.
During key reconstruction, after recovering or receiving $k$, consensus nodes only need to verify that the hash of $k$ is consistent with the one ($h_k$) published on the ledger. If it is consistent, consensus nodes can continue to decrypt the transaction and propagate the key $k$ to others. If it is inconsistent, the provided key $k$ is incorrect and is discarded. \paragraph{Reducing storage overhead:} In the TDH2 cryptosystem, the label $L$ is attached to the ciphertext during the encryption process and includes the information that a secret-management committee\xspace can use to determine whether the decryption is authorized~\cite{shoup1998securing}. While we cannot remove $L$, since it is used for protection against replay attacks (\S\ref{sec:ReplayAttack}), each party (miners, secret-management committee\xspace, sender) knows $L$ implicitly and can insert $L$ into its computation and verification steps, allowing Alice to exclude $L$ from \ensuremath{\mathrm{tx_w}}\xspace. \paragraph{Metadata leakage:} \label{sec:metadata} In our architecture, adversaries can only observe encrypted transactions until they are committed, thus preventing the revelation of transaction contents to launch front-running attacks. Nevertheless, adversaries can rely on side channels such as transaction metadata to launch speculative attacks. Concretely, since the sender needs to pay the storage fee (\S\ref{sec:spamming}) for publishing an encrypted transaction to the underlying blockchain, this leaks the sender's address. Knowledge of the sender's address can help in launching a front-running attack, since an adversary may be able to predict the sender's behavior based on history. To prevent this second-order front-running attack, the underlying blockchain needs to offer anonymous payments to users, such as Zerocash~\cite{sasson2014zerocash} or a mixing service~\cite{ziegeldorf2018secure}.
Another side-channel leakage is the size of the encrypted transaction or the time at which the transaction is propagated. A possible remedy for mitigating metadata leakage is PURBs~\cite{nikitin2019reducing}. \paragraph{Encryption based on block:} Under our F3B\xspace architecture, each transaction is encrypted with its own key. However, this might not be necessary, as transactions are committed and executed on a per-block basis; thus, transactions in the same block can be encrypted with the same key by leveraging identity-based encryption~\cite{boneh2001identity}. By adopting this batching approach, we expect to reduce latency and increase throughput; we leave this approach's actual performance and trade-offs as future work. \paragraph{Key storage and node catchup:} In our protocol, if a new node wants to join the consensus group, it cannot execute the historical transactions to catch up unless it obtains all decryption keys. The secret-management committee\xspace or consensus group can store those keys independently from the blockchain, but this requires them to maintain an additional immutable ledger. Since consensus nodes already maintain one immutable storage medium, namely the underlying blockchain, the keys can be stored on this medium as metadata, and the blockchain rules can require storing valid keys to produce blocks. However, this optimization brings about a timing issue, {\em i.e.}, when should the blockchain require the consensus group to store keys in a block? In our protocol, the transaction is committed at block height $n$ and revealed at block height $n+m$, making $n+m+1$ the earliest block height at which the key can be written. With respect to the latest block height at which to write the key, there is much more flexibility, and one needs to balance the delay tolerance for all consensus nodes to retrieve the key against the time that consensus nodes must retain the key.
If we assume that the key-reconstruction step takes up to $\delta$ block times, the key should be written in or before block $n+m+\delta$. While this setup would work well for a blockchain with a fixed block time, care must be taken with blockchains where the block time is probabilistic, since the key may not have been replicated to all consensus nodes at block height $n+m+\delta$, inducing some artificial delay for new blocks. \section{Evaluation} \label{sec:evaluation} \begin{figure} \centering \includegraphics[scale=0.53]{resources/latency.pdf} \caption{Overhead of F3B\xspace for varying sizes of the secret-management committee\xspace. } \label{Fig:latency} \end{figure} We implemented a prototype of F3B\xspace in Go~\cite{Go}, built on Calypso~\cite{kokoris2018calypso} and supported by Kyber~\cite{kyber}, an advanced cryptographic library. We use ByzCoin~\cite{kogias2016enhancing}, a PBFT-style consensus protocol, as our underlying blockchain. We instantiate our cryptographic primitives using the Edwards25519 elliptic curve with 128-bit security. We ran our experiment on a server with 32GB of memory and 40 CPU cores running at 2.1GHz. \subsection{Latency} In Figure \ref{Fig:latency}, while varying the number of secret-management committee\xspace trustees from 8 to 128 nodes, we present the latency a) for setting up a DKG, b) for sending a transaction to the ByzCoin blockchain (Step \ref{item:sharing}), and c) for reconstructing the key (Step \ref{item:reconstruction}). Recall that the DKG setup is a one-time operation per epoch that can be bootstrapped during the previous epoch; thus, b) and c) represent the true transaction latency of our solution. \input{resources/latencyTable} We express the transaction's overall latency in F3B\xspace as $mL_b+L_r$ (\S\ref{sec:protocol}). To evaluate F3B\xspace with Ethereum's consensus model, we adopt 13 seconds as the expected block time~\cite{ethereumBlock}.
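The overall-latency formula $mL_b+L_r$ can be checked with a small sketch; the 1.77-second key-reconstruction latency is the value measured for a committee of 128, and the block time is the assumed Ethereum average:

```go
package main

import "fmt"

func main() {
	const blockTime = 13.0 // s, assumed Ethereum block time L_b
	const m = 20.0         // block confirmations
	const lr = 1.77        // s, measured key-reconstruction latency L_r

	baseline := m * blockTime // commitment latency without F3B
	total := m*blockTime + lr // F3B: m*L_b + L_r
	fmt.Printf("baseline: %.0f s, F3B: %.2f s, overhead: %.2f%%\n",
		baseline, total, (total-baseline)/baseline*100) // overhead ≈ 0.68%
}
```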
Since there is no standard number of confirmations before a block is considered committed, we vary both the number of confirmations and the size of the secret-management committee\xspace, as summarized in Table \ref{tab:latency}. Our results show that, for example, with a committee size of 128 and 20 block confirmations, F3B\xspace brings a 0.68\% latency overhead. Figure \ref{Fig:comparison} presents the latency comparison between the baseline protocol (\S\ref{sec:baseline}), the F3B\xspace protocol---varying the size of the secret-management committee\xspace stated after ``F3B-''---and a sender-only commit-and-reveal protocol as presented in Strawman 1 (\S\ref{sec:StrawmanI}). Our simulation adopts a fixed 13-second block time with 20 block confirmations to commit a transaction, consistent across the different comparisons; thus, any data written into the blockchain needs $13*20=260$ seconds. As for the baseline protocol, sending a transaction only needs to write data to the blockchain once. Hence, the total latency for the baseline is $260$ seconds. Recall that in a sender-only commit-and-reveal approach, the sender needs first to commit a hash to the blockchain and then reveal the transaction, where each step takes 260 seconds. These two steps cannot be parallelized, since the hash must be committed on the blockchain before sending the reveal transaction. Otherwise, an adversary can read the reveal transaction and create a fork of the blockchain, thwarting any front-running protection. In practice, the protocol needs to offer enough buffer time to allow the reveal transaction to be included in the blockchain, inducing additional latency. This buffer time needs to be conservative in case a CryptoKitties storm or a blockchain DoS attack were to happen. The Fomo3D attack blocked the Ethereum blockchain for about three minutes~\cite{eskandari2019sok}, suggesting a reference for choosing the buffer time. Hence, $260*2=520$ seconds is a lower bound for this approach.
Further, this approach also suffers from the leakage of the smart contract address when submitting the commit transaction to the blockchain. Submarine is a more advanced approach that hides the smart contract address, but it requires the sender to send three different transactions to the blockchain, suffering a 200\% latency overhead compared to the baseline~\cite{libsubmarine, breidenbach2018enter}. In F3B\xspace, by contrast, the reveal phase (the Key Reconstruction step) does not require writing any data onto the blockchain. We therefore emphasize a significant difference between F3B and other commit-and-reveal approaches: F3B only requires sending one transaction to the underlying block\-chain\xspace. This advantage translates to a low latency overhead for Ethereum. \begin{figure}[h] \centering \includegraphics[scale=0.5]{resources/latencyComparison.pdf} \caption{Latency comparison of F3B with the sender-only commit-and-reveal approach, if they were deployed on Ethereum. Because F3B needs only one round of interaction with the underlying block\-chain\xspace, it induces a small latency overhead with respect to Ethereum. } \label{Fig:comparison} \end{figure} \subsection{Throughput} With respect to throughput, we only focus on Key Reconstruction (Step 3), as Write Transaction (Step 1) is identical to sending a transaction to the underlying block\-chain\xspace, except for some negligible overhead due to the additional size of an encrypted transaction. Figure \ref{Fig:throughput} presents the throughput of key reconstruction with a secret-management committee\xspace consisting of 128 trustees. If the keys are individually reconstructed, F3B\xspace can only provide limited throughput, due to the per-transaction network-transmission overhead and non-parallel execution. Instead, we can batch the keys, reconstruct them concurrently, and present them in one network transmission.
We present this batching effect in Figure \ref{Fig:throughput}, varying the batching size to measure the throughput and the corresponding latency. Increasing the batching size to 128 improves throughput from 0.56 txns/sec to 13.2 txns/sec. Our results show that the marginal improvement is negligible when further increasing the batching size to 256 or 512. The increased throughput comes at the cost of higher latency: with a batching size of 128, the key reconstruction takes 9.7 seconds to process, equivalent to a 3.7\% latency overhead under 20 block confirmations on the Ethereum blockchain. \section{Achieving the System Goals} In this section, we present how F3B\xspace achieves the system goals set forth in \S\ref{sec:goals}. \textbf{Front-running protection:} \emph{Preventing entities from launching front-running attacks.} We reason about the protection offered by F3B\xspace from the definition of front-running: if an adversary cannot benefit from pending transactions, they cannot launch front-running attacks. In F3B\xspace, the sole entity that knows the content of a pending transaction is the sender, who is financially incentivized \emph{not} to release its contents. The key is released only when a transaction is committed; thus, by definition, the attacker has no chance to launch a front-running attack. However, we acknowledge that attackers may use metadata---such as the sender's address and transaction size---of the encrypted transaction to launch \emph{speculative} front-running attacks, as discussed in~\S\ref{sec:threatmodel} and~\S\ref{sec:metadata}. We present a more comprehensive security analysis in \S\ref{sec:securityAnalysis}.
\textbf{Decentralization:} \emph{Mitigating a single point of failure or compromise.} Due to the properties of the DKG~\cite{gennaro1999secure} and TDH2~\cite{shoup1998securing} cryptosystems, the secret-management committee\xspace can handle up to $t-1$ malicious trustees and up to $n-t$ offline trustees. \textbf{Confidentiality:} \emph{Revealing a transaction only after the underlying consensus layer commits it. } Each transaction is encrypted with the sender's generated symmetric key, and this key is encrypted with the secret-management committee\xspace's public key. Per our threat model, only $f$ trustees may behave maliciously, ensuring that the symmetric key cannot be revealed, since $f+1$ trustees follow the protocol. We outline a more detailed security analysis in \S\ref{sec:securityAnalysis}. \textbf{Compatibility:} \emph{Remaining agnostic to the underlying consensus algorithm and smart contract implementation.} Since F3B\xspace encrypts the entire transaction and requires only a modification at the blockchain layer, it does not require modifications to existing smart contract implementations and supports various consensus algorithms (PBFT-style, PoW, PoS). \textbf{Low latency overhead}: \emph{Exhibiting low transaction-pro\-cessing latency overhead.} Since F3B requires writing only one transaction onto the underlying block\-chain\xspace, as in the baseline protocol, F3B achieves a low latency overhead in comparison to front-running protection protocols that require multiple transactions. We present an evaluation of this latency overhead in~\S\ref{sec:evaluation}. \section{Incentive Analysis} Each actor must be incentivized to operate and follow the protocol honestly. In this section, we address some of the key incentives in F3B\xspace that ensure protection against unsolicited transactions and prevent the premature release of shares.
\subsection{Spamming protection} \label{sec:spamming} As transactions are encrypted, the consensus group\xspace cannot verify or execute them, opening up an availability attack that would otherwise not be present in an open system. A malicious adversary could spam the blockchain with inexecutable transactions ({\em e.g.}, inadequate fees, malformed transactions), significantly hindering the throughput of honest transactions. To prevent such an attack, we introduce a storage fee alongside the traditional execution fee ({\em e.g.}, Ethereum gas) that makes it costly for an attacker to mount this attack. The storage fee, paid to miners, covers the placement of the transaction on the blockchain and can either vary based on some parameter ({\em e.g.}, the size of the transaction) or be a flat fee. The execution fee is not calculated until after the secret-management committee\xspace reveals the transaction, given the lack of knowledge of the transaction's contents. \subsection{The operational incentive for an SMC} Similar to how consensus nodes are incentivized to maintain the blockchain with block rewards and transaction fees, we need a similar incentive structure for the secret-management committees\xspace. In a permissioned blockchain system, serving as trustees of a secret-management committee\xspace could be an added responsibility of the consensus nodes, who in exchange receive storage fees as additional income. In a permissionless blockchain system, such as a PoW blockchain, the secret-management committee\xspace might differ from the miners, meaning that a payment channel needs to be established between senders and the secret-management committee\xspace. Now, how should a secret-management committee\xspace set its price? We believe free-market forces will automatically solve this problem when multiple secret-management committees\xspace compete with each other to provide cheap and quality service.
On the one hand, if a secret-management committee\xspace sets the price too high, this will discourage users from using this secret-man\-age\-ment committee\xspace. On the other hand, if a secret-management committee\xspace sets the price too low, the trustees may not be able to cover the cost of running a secret-management committee\xspace or handle the volume of incoming transactions. In general, we can expect a price equilibrium to be reached, with some price fluctuations depending on the volume of transactions. \subsection{The incentive for not leaking shares} A corrupted secret-management committee\xspace might silently collaborate with the consensus group\xspace to front-run users' transactions by releasing the shares prematurely. Since this behavior is difficult to detect---it is out-of-band collusion---we must provide an incentive structure that (significantly) rewards anyone who can prove misbehavior by a trustee or the entire secret-management committee\xspace, to disincentivize such malicious activity. At the same time, we do not want anyone to be able to accuse a secret-management committee\xspace or a specific trustee without repercussions if the secret-man\-age\-ment committee\xspace or trustee did not actually misbehave. To accomplish our objective, each trustee of each secret-management committee\xspace must stake some amount of cryptocurrency in a smart contract that handles disputes between a defendant (the entire secret-management committee\xspace or a particular trustee) and a plaintiff. To start a dispute, the plaintiff invokes the smart contract with the correct decryption share for a currently pending transaction and their own stake. If the smart contract validates that this is a correct decryption share and that the dispute started before the transaction in question was revealed by the secret-management committee\xspace, then the defendant's stake is forfeited and sent to the plaintiff.
At a protocol level, to prove a correct decryption share, the plaintiff submits $[u_i, e_i, f_i]$ such that $e_i=\hash_2\left(u_i,\hat{u_i},\hat{h_i}\right)$, where $\hat{u_i}=\frac{u^{f_i}}{{u_i}^{e_i}}$ and $\hat{h_i}=\frac{g^{f_i}}{{h_i}^{e_i}}$. Deploying such a mechanism would require the smart contract to access the ciphertext of a transaction ({\em e.g.}, $u$ is necessary to verify the submitted share), suggesting a modification of the smart contract virtual machine. \section{Introduction} Front-running is the practice of benefiting from advanced knowledge of pending transactions by entering into a financial position~\cite{eskandari2019sok, bernhardt2008front, NasdaqFrontRunning}. While benefiting some of the entities involved, this practice puts others at a significant financial disadvantage, leading regulators to judge this behavior as illegal (and unethical) in traditional markets~\cite{eskandari2019sok}. Once a significant problem for traditional (centralized) markets that was addressed via regulation, front-running has now become a significant problem for decentralized finance (DeFi), given blockchain transactions' openness and pseudonymous nature and the difficulty of pursuing miscreants across numerous jurisdictions~\cite{lewis2014flash, eskandari2019sok, daian2019flash}. Since transactions are transparent before commitment, adversaries can take advantage of the contents of those pending transactions for their own financial gain by, for example, creating their own transactions and positioning them in a specific sequence with respect to the targeted transaction~\cite{baum2021sok, daian2019flash, eskandari2019sok}. Front-running negatively impacts all honest DeFi actors, but automated market makers (AMMs) are particularly vulnerable due to price slippage~\cite{baum2021sok}, {\em i.e.}, the difference between expected and executed prices.
An estimate shows that front-running attacks amount to \$280 million in losses for DeFi actors each month~\cite{losedefi}. In addition, front-running attacks threaten the underlying consensus layer's security by incentivizing unnecessary forks~\cite{dapp2020uniswap, daian2019flash}. While there are existing approaches to addressing front-running attacks, they are either inefficient or restricted to a specific application or consensus algorithm. Namecoin, an early example of using a commit-and-reveal approach to mitigate front-running attacks, requires two rounds of communication with the underlying block\-chain\xspace~\cite{kalodner2015empirical}. Submarine further improves the commit-and-reveal approach by hiding the addresses of the smart contracts involved, but induces three rounds of communication with the underlying block\-chain\xspace~\cite{libsubmarine, kalodner2015empirical}, bringing significant latency overhead. Other solutions focus on specific applications, {\em e.g.}, FairMM applies only to market-maker-based exchanges~\cite{ciampi2021fairmm}. Other approaches apply only to a specific consensus algorithm~\cite{noah2021secure, stathakopoulou2021adding}. We present Flash Freezing Flash Boys\footnote{ The name \emph{Flash Boys} comes from a popular book revealing this aggressive market-exploiting strategy on Wall Street in 2014~\cite{lewis2014flash}. } (F3B\xspace), a new blockchain architecture with front-running protection that exhibits low transaction-processing latency overhead while remaining legacy-compatible with both the underlying consensus algorithm and existing smart contract implementations. F3B\xspace addresses front-running by adopting a commit-and-reveal design that involves encrypting transactions and revealing them after they are committed by the consensus group, as presented in Figure \ref{Fig:architecture}.
By encrypting transactions, adversaries no longer have foreknowledge of their contents, hindering their ability to launch successful front-running attacks. While adversaries may launch \emph{speculative} front-running attacks, where the adversary guesses the contents of a transaction based on side-channel information, these attacks have a greater chance of failure and may prove unprofitable~\cite{baum2021sok}. F3B\xspace builds on Calypso~\cite{kokoris2018calypso}, an architecture that enables on-chain secrets by introducing a secret-management committee\xspace (SMC) that reveals these secrets when designated. In our setup, the on-chain secrets are the encrypted transactions, and the designated reveal time is when the consensus group irreversibly commits the transaction on the blockchain. Once a transaction is revealed, the consensus nodes can proceed with verifying and executing it. At this point, it is too late for malicious actors to launch a front-running attack, since the consensus group has already irreversibly ordered the transaction on the blockchain. To achieve practical front-running protection, the F3B architecture must address several challenges: a) preventing a single point of failure or compromise in the decryption process, b) mitigating the spamming of inexecutable encrypted transactions onto the underlying block\-chain\xspace, and c) limiting latency overhead. To withstand a single point of failure or compromise, we adopt threshold cryptosystems. To mitigate spamming, we introduce a storage fee for storing encrypted transactions, alongside the traditional execution fee ({\em e.g.}, Ethereum gas), and let free-market forces determine the price. To limit the latency overhead, we require F3B\xspace to write data only once onto the underlying block\-chain\xspace by delaying the transaction's execution until commitment. We implemented a prototype of F3B\xspace in Go~\cite{Go}.
We use ByzCoin~\cite{kogias2016enhancing}, a state-of-the-art PBFT-style consensus protocol, as the underlying blockchain for our experiment. We model the Ethereum block\-chain with a 13-second block time~\cite{ethereumBlock} and vary the number of block confirmations before a transaction is considered irreversible. Our analysis shows that for a committee size of 128, the latency overhead is 1.77 seconds, equivalent to a low latency increase of 0.68\% with respect to the Ethereum blockchain under 20 block confirmations; in comparison, Submarine exhibits a 200\% latency overhead since it requires three rounds of communication with the underlying block\-chain\xspace~\cite{libsubmarine,breidenbach2018enter}. Our key contributions are: \begin{enumerate} \item To our knowledge, this is the first work to systematically evaluate threshold encryption at the blockchain architecture layer for front-running protection at this level of scalability. \item An experimental analysis shows F3B\xspace's viability by demonstrating low latency overhead with respect to Ethereum. \end{enumerate} \begin{figure}[t] \centering \includegraphics[scale=0.26]{resources/architecture.pdf} \caption{Senders publish encrypted transactions to the consensus group. Once the transactions are no longer pending, the secret-management committee\xspace releases the keys.} \label{Fig:architecture} \end{figure} \section{Mitigation at Different Layers} \label{sec:layer} Based on previous front-running mitigation work~\cite{breidenbach2018enter,libsubmarine,ankalkoti2017relative, kursawe2020wendy,kursawe2021wendy,saad2020frontrunning,ciampi2021fairmm}, we categorize mitigation approaches into two layers---the blockchain layer and the application layer. Based on the layer at which one chooses to address front-running, we analyze the trade-offs inherent in each approach and present an in-depth discussion of these trade-offs.
We consider the blockchain layer as the storage and consensus element of what makes a blockchain operable and regard the application layer as the program element, such as smart contracts, which are agnostic to the underlying blockchain layer. \begin{table*}[t] \centering \caption{Comparison of front-running protection at different layers} \label{tab:comparsion} \begin{tabular}{l|c|c} \toprule & Application Layer & Blockchain Layer \\ \midrule Definition of pending transaction & \makecell{Transactions that wait to be executed\\ by a smart contract} & \makecell{Transactions that wait to be committed \\ by the consensus group} \\ \hline Front-running attack vector & \makecell{Benefiting from pending transactions \\ in the application layer} & \makecell{Benefiting from pending transactions \\ in the blockchain layer} \\ \hline Protection for scheduled AMM? & Yes & No \\ \hline Protection scope & Specific smart contract & All smart contracts \\ \hline Modification of smart contract & Yes & Not Necessarily \\ \hline Modification of blockchain & Not Necessarily & Yes \\ \hline Compatibility & Various blockchains & Existing smart contracts \\ \hline Smart contract address & Hard to hide & Easy to hide \\ \hline Examples& Submarine~\cite{breidenbach2018enter, libsubmarine}, Namecoin~\cite{kalodner2015empirical}, FairMM~\cite{ciampi2021fairmm}& Wendy~\cite{kursawe2020wendy,kursawe2021wendy}, CodeChain~\cite{saad2020frontrunning}, F3B \\ \bottomrule \end{tabular} \end{table*} Table \ref{tab:comparsion} demonstrates the key differences between the mitigation strategies at the two layers. We now unpack the differences in detail. \paragraph{Definition of Pending Transaction} We define \emph{pending transactions} as transactions that the relevant layer has not yet processed.
At the blockchain layer, this means that the transaction is not yet committed---either sitting in the mempool or waiting for a certain number of block confirmations (for probabilistic consensus algorithms). At the application layer, this means that the smart contract has not yet finalized the transaction. A pending transaction can look different in these two layers, for example, when a smart contract defines an explicit queue of transactions meant to be executed sometime in the future: a prominent application that utilizes such pending-transaction queues is scheduled AMMs~\cite{baum2021sok}. \paragraph{Front-running Attack Vectors} Given the different definitions of pending transactions, there are two attack vectors: an attacker can launch front-running attacks based on pending transactions in either the application layer or the blockchain layer. In most cases, front-running attacks apply to both layers, but scheduled AMMs are an exception: a scheduled AMM transaction may be committed on the blockchain while its smart contract has not yet executed it due to the delayed execution; thus, the attacker benefits from a transaction pending in the application layer, but not in the blockchain layer. \paragraph{Mitigation Approaches:} As each application is independent, a mitigation solution at the application layer can only protect the specific smart contract that adopts it, while a solution at the blockchain layer protects all smart contracts at once. \paragraph{Modification and Compatibility:} A mitigation solution at the application layer requires smart contract modification, possibly without the need to fork the underlying blockchain, thus providing compatibility with various blockchains. In some cases, when a solution requires additional functionality, the blockchain needs a fork to provide the new function~\cite{breidenbach2018enter, libsubmarine}.
On the other hand, a mitigation solution at the blockchain layer requires only a fork of the underlying block\-chain\xspace, possibly without the need to alter smart contracts, thus providing compatibility with existing smart contracts. \paragraph{Side Channels:} At the application layer, an adversary can gain information about the smart contract that a pending transaction invokes, while at the blockchain layer, an adversary is incapable of knowing which smart contract will be executed if the whole transaction is encrypted. This knowledge gap is significant because, even though an approach can provide front-running protection, the adversary can launch \emph{speculative} attacks using the information at its disposal. Such information may include on-chain data such as the sender's balance for a given asset and the invoked smart contract~\cite{baum2021sok}. Notably, front-running protection at the blockchain layer can easily ensure that the invoked smart contract is hidden from public view by encrypting all transactions. In contrast, at the application layer, this protection is much harder and more costly to deploy; {\em e.g.}, Submarine requires sending three transactions to the blockchain to hide the application's smart contract address~\cite{breidenbach2018enter, libsubmarine}. This information may unsuspectingly provide a significant advantage to an adversary---in the case of an AMM with only two assets, the adversary has at least a 50\% chance of choosing the right asset-exchange direction, and by checking the sender's balance for these assets, this chance could increase to 100\%~\cite{baum2021sok}. \\ F3B\xspace focuses on front-running protection at the blockchain layer given its numerous benefits. First, F3B has access to all transaction fields, ensuring that it can hide fields that cannot be hidden from the application layer.
Second, all existing and new applications immediately benefit from front-running protection implemented at the blockchain layer, although additional front-running protections may still be necessary for certain applications such as scheduled AMMs. Finally, any update to the front-running protection immediately applies to all applications without their intervention. } }} \input{introduction} \input{background} \input{layer} \input{strawman} \input{overview} \input{system} \input{goal} \input{deployment} \input{incentive} \input{security} \input{discussion} \input{evaluation} \input{related} \input{conclusion} \input{acknowledgment} \input{main.bbl} \end{document} \endinput \section{System Overview} This section presents F3B\xspace's system goals, architecture, and models. \subsection{System Goals} \label{sec:goals} Our system goals, inspired by our strawman protocols (\S\ref{sec:Strawman}), are: \begin{itemize} \item \textbf{Front-running protection:} Preventing entities from launching front-running attacks. \item \textbf{Decentralization:} Mitigating a single point of failure or compromise. \item \textbf{Confidentiality:} Revealing a transaction only after the underlying consensus layer commits it. \item \textbf{Compatibility:} Remaining agnostic to the underlying consensus algorithm and smart contract implementation. \item \textbf{Low latency overhead:} Exhibiting low transaction-processing latency overhead.
\end{itemize} \subsection{Architecture Overview} F3B\xspace, shown in Figure \ref{Fig:architecture}, mitigates front-running attacks by working with a secret-management committee\xspace to manage the storage and release of on-chain secrets. Instead of propagating their transactions in cleartext, senders now encrypt their transactions and store the corresponding secret keys with the secret-management committee\xspace. Once a transaction is committed, the secret-management committee\xspace releases the secret key so that consensus nodes of the underlying blockchain can verify and execute the transaction. Overall, the state machine replication of the underlying block\-chain\xspace is achieved in two steps: the first orders transactions, and the second executes them. As long as a majority of trustees in the secret-management committee\xspace are secure and honest and keys are revealed to the public when appropriate, each consensus node can always maintain the same state. Notably, F3B\xspace encrypts the entire transaction, not just its inputs, as other information, such as the smart contract address, may provide enough information to launch a probabilistic front-running attack, such as the Fomo3D attack~\cite{eskandari2019sok} or a speculative attack~\cite{baum2021sok}. \begin{figure} \centering \includegraphics[scale=0.66]{resources/protocol.pdf} \caption{F3B\xspace per-transaction protocol steps: (1) sending an encrypted transaction to the underlying blockchain, (2) waiting for the transaction commitment, (3) releasing the key and executing the transaction.} \label{Fig:protocol} \end{figure} \subsection{System and Network Model} F3B\xspace's architecture consists of three components: \emph{senders} that publish (encrypted) transactions, the \emph{secret-management committee\xspace} that manages and releases secrets, and the \emph{consensus group} that maintains the underlying block\-chain\xspace.
While F3B\xspace supports various consensus algorithms, such as PoW (as in Ethereum) and PBFT-style consensus algorithms (as in ByzCoin~\cite{kogias2016enhancing}), F3B\xspace does require a forked instance of the underlying blockchain to allow the consensus group to commit encrypted transactions and process them after revealing their contents. In a permissioned blockchain, the secret-management committee\xspace and the consensus nodes can consist of the same set of servers. For clarity, however, we discuss them as separate entities. In addition, we use the name ``Alice'' to represent a generic sender. We assume that the membership of the secret-management committee\xspace is static, but we discuss group membership reconfiguration in \S\ref{sec:SMCdiscussion}. For the underlying network, we assume that all honest blockchain nodes and trustees of the secret-management committee\xspace are well connected and that their communication channels are synchronous, {\em i.e.}, if an honest node or trustee broadcasts a message, all honest nodes and trustees receive the message within a known maximum delay~\cite{pass2017analysis}. \subsection{Threat Model} \label{sec:threatmodel} We assume that the adversary is computationally bounded, that the cryptographic primitives we use are secure, and that the Diffie-Hellman problem is hard. We further assume that all messages are digitally signed, and that the consensus nodes and the secret-management committee\xspace only process correctly signed messages. The secret-management committee\xspace has $n$ trustees, of which $f$ can fail or behave maliciously. We require $n \geq 2f + 1$ and set the secret-recovery threshold to $t = f + 1$. We assume that the underlying blockchain is secure: {\em e.g.}, that at most $f'$ of $3f'+1$ consensus nodes fail or misbehave in a PBFT-style blockchain, or that the adversary controls less than $50\%$ of the computational power in a PoW blockchain like Bitcoin or Ethereum.
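The relations $n \geq 2f + 1$ and $t = f + 1$ fix the fault tolerance for any committee size; the following minimal sketch (the function name is ours, not from the paper) derives the tolerated number of faulty trustees and the recovery threshold:

```python
# Threshold parameters from the threat model: n >= 2f + 1 and t = f + 1.
# For n trustees, the largest tolerable number of faulty trustees is
# f = floor((n - 1) / 2); t shares then suffice to reconstruct a key.

def smc_thresholds(n: int) -> tuple[int, int]:
    """Return (f, t) for an n-trustee secret-management committee."""
    if n < 1:
        raise ValueError("committee needs at least one trustee")
    f = (n - 1) // 2  # maximum faulty trustees satisfying n >= 2f + 1
    t = f + 1         # secret-recovery threshold
    return f, t

# Example: the 128-trustee committee used in the paper's evaluation.
print(smc_thresholds(128))  # (63, 64)
```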
We assume that attackers do not launch speculative front-running attacks~\cite{baum2021sok}, but we discuss some mitigation strategies to reduce side-channel leakage in \S\ref{sec:metadata}. \subsection{Blockchain Layer Approach} We can categorize front-running mitigation approaches into two layers---the blockchain layer~\cite{kursawe2020wendy,saad2020frontrunning} and the application layer~\cite{breidenbach2018enter,libsubmarine,kalodner2015empirical,ciampi2021fairmm}. F3B\xspace is a systematic approach at the blockchain layer, which exhibits some fundamental trade-offs. On the one hand, blockchain layer approaches a) offer protection to all smart contracts at once, whereas an application layer approach must protect each smart contract individually~\cite{libsubmarine, kalodner2015empirical}, b) have access to all transaction fields, ensuring that they can hide fields that are not accessible from the application layer ({\em e.g.}, the smart contract address), and c) provide compatibility with existing smart contracts. On the other hand, blockchain layer approaches a) cannot deal with front-running attacks that benefit from time-delayed transactions at the application layer ({\em e.g.}, scheduled AMMs) and b) require modification of the underlying block\-chain\xspace ({\em e.g.}, a scheduled hard fork). \section{Related Work} Namecoin, a decentralized name service, is an early example of using a commit-and-reveal approach to mitigate front-running attacks~\cite{kalodner2015empirical}. In Namecoin, a user first sends a salted hash to the blockchain and, after a certain number of blocks, broadcasts the actual name when it is too late for an attacker to front-run the user, similar to our Strawman I protocol (\S\ref{sec:Strawman}). Later, Eskandari et al.~\cite{eskandari2019sok} systematized front-running attacks on blockchains by presenting three types of front-running attacks: displacement, insertion, and suppression.
Daian et al.~\cite{daian2019flash} first quantified front-running attacks from an economic point of view, determining that front-running attacks can pose a security problem for the underlying consensus layer by incentivizing unnecessary forks driven by miner extractable value (MEV). Previous works~\cite{noah2021secure, saad2020frontrunning} explore the idea of applying threshold encryption at the blockchain layer to mitigate front-running attacks. Schmid~\cite{noah2021secure} proposed a secure causal atomic broadcast protocol with threshold encryption to prevent front-running attacks, but neither provided a solution for integrating with a PoW blockchain nor scaled to a large number of trustees. CodeChain, a commercial crypto project, proposed revealing transactions via its trustees using the Shamir secret sharing scheme, but offered no implementation or experimental details~\cite{saad2020frontrunning}. Submarine, an application layer front-running protection approach, uses a commit-and-reveal strategy that hides the smart contract address, but it requires three rounds of communication with the underlying block\-chain\xspace, inducing a high latency overhead~\cite{libsubmarine, breidenbach2018enter}. To our knowledge, our work is the first solution to achieve legacy compatibility with both the consensus algorithm and smart contract implementation while achieving low latency overhead. Other research adopts different approaches to mitigate front-running. A series of recent studies focus on fair ordering~\cite{kelkar2021order, kursawe2020wendy, kursawe2021wendy}, but fair ordering alone cannot prevent a rushing adversary~\cite{baum2021sok}. Wendy explores the possibility of combining fair ordering with commit-and-reveal~\cite{kursawe2021wendy} but does not present a quantitative overhead analysis. Furthermore, F3B\xspace hides the content of all transactions before commitment, unlike Wendy, which only guarantees the fairness of transactions with the same label.
Although our approach has a higher latency overhead than Wendy's, we believe this is necessary, as the label can provide useful information for adversaries to launch a front-running attack, such as the Fomo3D attack~\cite{eskandari2019sok}. Some research adopts time-lock puzzles~\cite{rivest1996time} to blind transactions. The Injective protocol~\cite{chen2018injective} deploys a verifiable delay function~\cite{boneh2018verifiable} as a proof of elapsed time to prevent front-running attacks. However, it remains an open challenge to link time-lock puzzle parameters to an actual real-world delay~\cite{baum2021sok}. Tesseract, proposed by Bentov et al., is an exchange that prevents front-running attacks by leveraging a trusted execution environment~\cite{bentov2019tesseract}. However, this approach introduces a centralized component subject to a single point of failure and compromise~\cite{ragab2021crosstalk,van2018foreshadow}. Calypso is a framework that enables on-chain secrets by adopting threshold encryption governed by a secret-management committee\xspace~\cite{kokoris2018calypso}. Calypso allows ciphertexts to be stored on the blockchain and collectively decrypted by trustees according to a pre-defined policy. F3B\xspace builds on Calypso to mitigate front-running attacks. \begin{figure}[t] \centering \includegraphics[scale=0.5]{resources/throughput.pdf} \caption{Throughput performance of the key reconstruction with 128 trustees in the secret-management committee\xspace for varying batch sizes.} \label{Fig:throughput} \end{figure} \section{Security Analysis} \label{sec:securityAnalysis} This section presents a security analysis of F3B\xspace's protocol. \subsection{Front-running protection} We reason, from our threat model, why an attacker can no longer launch front-running attacks with absolute certainty of a financial reward, even with the collaboration of at most $f$ malicious trustees.
As we assume that the attacker does not launch speculative attacks based on the metadata of encrypted transactions, the only way the attacker can front-run transactions is by using the plaintext content of a transaction. If the attacker cannot access the content of a transaction before it is committed on the underlying block\-chain\xspace, the attacker cannot benefit from the pending transaction, thus preventing front-running attacks (by the definition of front-running). Since we assume that the symmetric encryption we use is secure, the attacker cannot decrypt a transaction based on its ciphertext. In addition, based on the properties of the TDH2 cryptosystem~\cite{shoup1998securing} and DKG~\cite{gennaro1999secure} and on our threat model, the attacker can neither obtain the private key nor reconstruct the symmetric key: the attacker can only collude with at most $f$ trustees, while $f+1$ are required to recover, or gain any information about, the symmetric key. \subsection{Replay attack} \label{sec:ReplayAttack} We consider another scenario, in which an adversary copies a pending (encrypted) transaction and submits it as their own transaction so as to reveal the transaction's contents before the victim's transaction is committed. By revealing the contents of the copied transaction, the attacker could then trivially launch a front-running attack. We now show why the adversary is unable to benefit from such a strategy. In the first scenario, the adversary copies the ciphertext $c_k$ and the encrypted transaction $c_{\ensuremath{\mathrm{tx}}\xspace}$ from \ensuremath{\mathrm{tx_w}}\xspace and creates their own write transaction \ensuremath{\mathrm{tx_w'}}\xspace, digitally signed with their own signature. However, when this transaction is sent to the underlying block\-chain\xspace, the adversary's \ensuremath{\mathrm{tx_w'}}\xspace is decrypted no earlier than the victim's transaction \ensuremath{\mathrm{tx_w}}\xspace.
In the second scenario, the adversary instead sends the transaction to a blockchain that requires fewer block confirmations. Consider two blockchains $b_1$ and $b_2$ whose required numbers of confirmation blocks are $m_1$ and $m_2$, with $m_1 > m_2$. If the adversary changes the label $L$ to $L'$ so that it refers to blockchain $b_2$ instead of blockchain $b_1$, the secret-management committee\xspace will decrypt the transaction after only $m_2$ block confirmations. However, we argue that it is hard for the adversary to form a valid write transaction with $L'$. The adversary would need to generate $e'=\hash_1\left(c,u,\bar{u},w,\bar{w},L'\right)$ and $f=s+re'$ without knowing the random parameters $r$ and $s$. Suppose the adversary generates $u=g^r, \bar{u}=\bar{g}^{r'}$ with $r \neq r'$ and $w=g^s, \bar{w}=\bar{g}^{s'}$ with $s \neq s'$. For \ensuremath{\mathrm{tx_w'}}\xspace to be valid, we must have $g^f = wu^{e'}$ and $\bar{g}^f = \bar{w}\bar{u}^{e'}$, which implies that $(s-s')+e'(r-r')=0$. Since $r \neq r'$, the adversary only has a negligible chance of having \ensuremath{\mathrm{tx_w'}}\xspace pass verification. \section{Strawman Protocols} \label{sec:Strawman} To explore the challenges inherent in building a framework like F3B, we first examine a couple of promising but inadequate strawman approaches, simplified from state-of-the-art front-running protection techniques~\cite{libsubmarine,breidenbach2018enter}. \subsection{Strawman I: Sender Commit-and-Reveal} \label{sec:StrawmanI} The first strawman has the sender create two transactions: a \emph{commit} and a \emph{reveal} transaction. The reveal transaction contains the sensitive inputs that could give an adversary the information necessary to launch a front-running attack, while the commit transaction contains a commitment ({\em e.g.}, a hash) of the reveal transaction, proving the sender's intent to transact at a specific time without giving the sender the ability to change the contents of the reveal transaction.
The sender propagates the commit transaction and waits until it is included in a block by the consensus group before releasing the reveal transaction. Once the reveal transaction is propagated, the consensus group can proceed to verify and execute the transaction in the order in which the commit transaction was committed on the blockchain. This simple strawman mitigates front-running attacks, since the execution order is determined by the commit transaction and the contents of the commit transaction do not expose the contents of the reveal transaction. However, this strawman presents some notable challenges: a) the sender must continuously monitor the blockchain to determine when to reveal her transaction, b) the sender may not be able to reveal her transaction due to a CryptoKitties storm or a blockchain DoS attack~\cite{eskandari2019sok}, c) this approach is subject to output bias, as consensus nodes or the sender may deliberately choose not to reveal certain transactions after commitment~\cite{baum2021sok}, and d) this approach has a significant latency overhead of over 100\%, given that the sender must now send two transactions instead of one. \subsection{Strawman II: The Trusted Custodian} A straightforward method to remove the sender from the equation after sending the first commit transaction is to employ a trusted custodian. The commit transaction then contains the necessary information for the trusted custodian to reveal the transaction after the consensus group has committed it on the underlying block\-chain\xspace. This strawman also mitigates front-running attacks, as the nodes cannot read the content of the transaction before ordering, and it significantly improves upon the challenges presented in the previous strawman. However, the trusted custodian presents a single point of failure: consensus nodes cannot decrypt and execute a transaction if the custodian crashes.
In addition, the trusted custodian represents a single point of compromise, as the custodian may secretly act maliciously, for example by colluding with front-running actors for their own share of the profit. Instead, by employing a \emph{decentralized} custodian, we can mitigate the single points of failure and compromise and make collusion significantly more difficult. \section{F3B Protocol} We introduce F3B\xspace's protocol in this section, starting with preliminaries that offer the necessary background and then presenting the full protocol that captures F3B\xspace's detailed design. \subsection{Preliminaries} This subsection outlines preliminaries, such as the definition of transaction commitment and the cryptographic primitives used in F3B\xspace. \paragraph{Modeling the Underlying Blockchain} \label{sec:baseline} To quantify F3B's impact, we model the underlying blockchain as running a consensus protocol that commits transactions into blocks, each linked to its predecessor. We define the underlying blockchain's block time as $L_b$ seconds; {\em e.g.}, on average, Ethereum has a block time of 13 seconds~\cite{ethereumBlock}. For PBFT-style consensus algorithms, a transaction is committed once included in a block. For probabilistic consensus algorithms ({\em e.g.}, PoW or PoS), a transaction is committed only after a certain number of additional blocks have been added to the chain (also known as block confirmations\footnote{ For example, for Ethereum transactions, exchanges such as Kraken and Coinbase require 20 and 35 block confirmations, respectively, before the funds are credited to users' accounts~\cite{blockconfirmationsCoinbase,blockconfirmationsKraken}.}). We thus say that a transaction is committed after $m$ block confirmations.
Thus, in our baseline, the transaction latency\footnote{ For simplicity, we leave out the time for blockchain nodes to verify and execute transactions, and we assume that a transaction propagates through the network within one block time, so that propagation does not contribute to transaction-processing latency.} is $mL_b$. For PBFT-style consensus, $m = 1$. Further, we assume that the underlying blockchain has a throughput of $T_b$ tps. \paragraph{Shamir Secret Sharing} A $(t, n)$-threshold secret sharing scheme enables a dealer to share a secret $s$ among $n$ trustees such that any group of $t \leq n$ or more trustees can reconstruct $s$, while any group of fewer than $t$ trustees learns no information about $s$. While a simple secret sharing scheme assumes an honest dealer, verifiable secret sharing (VSS) allows the trustees to verify that the shares they received are valid~\cite{feldman1987practical}. Publicly verifiable secret sharing (PVSS) further improves VSS by allowing any third party to check all shares~\cite{schoenmakers1999simple}. \paragraph{Distributed Key Generation (DKG)} DKG is a multi-party $(t, n)$ key-generation process that collectively generates a private-public key pair $(sk, pk)$ without relying on a single trusted dealer; each trustee $i$ obtains a share $sk_i$ of the secret key $sk$, and the trustees collectively obtain a public key $pk$~\cite{gennaro1999secure}. Any client can now use $pk$ to encrypt a secret, and at least $t$ trustees must cooperate to retrieve it~\cite{shoup1998securing}. \subsection{Protocol Outline} \label{sec:protocol} \subsubsection{Setup} At each epoch, the secret-management committee\xspace runs a DKG protocol to generate a private key share $sk^i_{smc}$ for each trustee and a collective public key $pk_{smc}$. To offer protection against chosen-ciphertext attacks and to verify the correctness of secret shares, we utilize the TDH2 cryptosystem, which includes NIZK proofs~\cite{shoup1998securing}.
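To make the $(t,n)$-threshold mechanism concrete, here is a toy Shamir split-and-reconstruct over a small prime field. It is illustrative only: it uses a trusted dealer and no share verification, whereas F3B relies on DKG and the TDH2 cryptosystem.

```python
import random

P = 2**61 - 1  # a Mersenne prime; toy field, not a production parameter

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n Shamir shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```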
\subsubsection{Per-Transaction Protocol}
We unpack the per-transaction protocol (Figure \ref{Fig:protocol}) as follows:
\begin{enumerate}
\item \textit{Write Transaction:} Alice, as the sender, first generates a symmetric key $k$ and encrypts it with $pk_{smc}$, obtaining the resulting ciphertext $c_k$. %
Next, Alice creates her transaction and symmetrically encrypts it using $k$, denoted as $c_{tx}=enc_k(tx)$. %
Finally, Alice sends $(c_{tx},c_k)$ to the consensus group, which writes the pair onto the blockchain. %
\label{item:sharing}
\item \textit{Wait for Confirmations:} The secret-management committee\xspace waits for Alice's transaction $(c_{tx},c_k)$ to be committed on the blockchain (after $m$ block confirmations). \label{item:waiting}
\item \textit{Key Reconstruction:} Once the transaction is committed, each secret-management committee\xspace trustee reads $c_k$ from Alice's transaction and releases their decrypted share of $k$, along with a NIZK proof of correctness for the decryption process. %
Consensus nodes then verify the decrypted shares and, once at least $t$ valid shares are available, reconstruct $k$ by Lagrange interpolation. %
The consensus group finally symmetrically decrypts the transaction $tx=dec_k(c_{tx})$ using $k$, allowing them to verify and execute $tx$. %
\label{item:reconstruction}
\end{enumerate}
\subsection{Overhead Analysis}
In the per-transaction protocol, steps (\ref{item:sharing}) and (\ref{item:waiting}) involve committing a transaction on the underlying blockchain and waiting until it is committed, which takes $mL_b$ time in our baseline model (\S\ref{sec:baseline}). Compared with the baseline, step (\ref{item:reconstruction}) is an additional step, and we denote its overhead by $L_r$.
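The latency model can be checked numerically. Plugging in the numbers reported in the evaluation ($L_b = 13$ s, $m = 20$, $L_r = 1.77$ s) reproduces the reported overhead of roughly 0.68\%:

```python
# Latency model from the overhead analysis: the baseline commits a
# transaction after m * L_b seconds; F3B adds one key-reconstruction
# step of L_r seconds on top of that.

def latency(m: int, block_time: float, l_r: float = 0.0) -> float:
    return m * block_time + l_r

L_b, m, L_r = 13.0, 20, 1.77      # Ethereum-like block time, 20 confirmations
baseline = latency(m, L_b)        # 260.0 s
with_f3b = latency(m, L_b, L_r)   # 261.77 s
overhead = (with_f3b - baseline) / baseline
print(f"{overhead:.2%}")          # ~0.68%, matching the reported figure
```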
The secret-management committee\xspace actor may cause bottlenecks with respect to the system's throughput, so we model the throughput of our proposed protocol as $\min(T_{b},T_{smc})$, assuming the secret-management committee\xspace's throughput is $T_{smc}$. \begin{figure} \centering \begin{subfigure}[t]{0.48\textwidth} \includegraphics[scale=0.32]{resources/executiona.pdf} \caption{Execution and commitment time in Ethereum} \label{fig:execitiona} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[scale=0.32]{resources/executionb.pdf} \caption{Execution and commitment time in F3B} \label{fig:execitionb} \end{subfigure} \caption{In Ethereum, transactions are executed once inserted in the blockchain and committed after receiving $m$ block confirmations, while in F3B\xspace, transactions are encrypted, and their execution is postponed until after receiving $m$ block confirmations, when the secret-management committee\xspace releases the encryption keys. Both scenarios have a similar commitment time. \com{Change the order of commitment and execution} } \label{fig:execition} \end{figure} Figure \ref{fig:execition} demonstrates the conceptual difference in commitment and execution time between F3B and the baseline protocol. Since the secret-management committee\xspace releases the secret keys with a delay of $m$ blocks, this introduces an execution delay of $m$ blocks. However, for both PBFT-style consensus algorithms ($m = 1$) and probabilistic consensus algorithms, the recipient should not accept a transaction until it is committed, for a variety of reasons, such as double-spending.
This indicates that, in practical terms, F3B is comparable to the baseline protocol, since it is the commitment time of a transaction that matters to the recipient\footnote{In F3B, transaction finalization is slower due to the key reconstruction and delayed execution after transaction commitment, but the overhead is insignificant compared with the commitment time, as we discuss in \S\ref{sec:evaluation}.}. \subsection{Protocol} \label{sec:fullprotocol} In this section, we present the in-depth F3B\xspace protocol. We define a cyclic group $\group$ of prime order $q$ with generators $g$ and $\bar{g}$ that are known to all parties, and define the following two hash functions: $\hash_1: \group^5 \times {\left \{0,1\right \}}^l \rightarrow \ensuremath{\mathbb{Z}}_q$ and $\hash_2: \group^3 \rightarrow \ensuremath{\mathbb{Z}}_q$. \paragraph{Step 0: DKG Setup.} For a given epoch, the secret-management committee\xspace runs a DKG protocol to generate a shared public key $\cpk{} = g^{\csk{}}$ and, for each trustee, a share of the private key denoted as $\sk_i$. The corresponding private key \csk{} can only be reconstructed by combining $t$ private key shares. All trustees also know the verification keys $h_i=g^{\sk_i}$. We assume that \cpk{} and $h_i$ are written into the blockchain as metadata, {\em e.g.}, in the first block denoting the start of this epoch. Given the rarity of this reconfiguration, we adopt the synchronous DKG protocol proposed by Gennaro et al.~\cite{gennaro1999secure}, using the underlying block\-chain\xspace to emulate synchronous communication. \paragraph{Step 1: Write Transaction.} For the write transaction step, we use the encryption protocol presented by the TDH2 cryptosystem~\cite{shoup1998securing}. Alice\xspace and the consensus group\xspace execute the following protocol to write the \ensuremath{\mathrm{tx_w}}\xspace on the underlying block\-chain\xspace.
Alice\xspace then starts the protocol by performing the following steps to create the transaction \ensuremath{\mathrm{tx_w}}\xspace: \begin{enumerate} \item Obtain the secret-management committee\xspace threshold collective public key \cpk{} from the underlying block\-chain\xspace. \item Generate a symmetric key $k$ and encrypt the transaction \ensuremath{\mathrm{tx}}\xspace using authenticated symmetric encryption as $c_{\ensuremath{\mathrm{tx}}\xspace} = \enc{k}(\ensuremath{\mathrm{tx}}\xspace)$. \item Embed $k$ as a point $k' \in \group$, and choose $r,s \in \ensuremath{\mathbb{Z}}_q$ at random. \item Compute: \begin{align*} c=\cpk{r}k', u=g^r, w=g^s, \bar{u}=\bar{g}^r, \bar{w}=\bar{g}^s, \\ e=\hash_1\left(c,u,\bar{u},w,\bar{w},L\right), f=s+re, \end{align*} where $L$ is the label of the underlying block\-chain\xspace\footnote{This can be the hash of the genesis block.}. \item Finally, form the ciphertext $c_k=\left(c,L,u,\bar{u},e,f\right)$ and construct the write transaction as $\ensuremath{\mathrm{tx_w}}\xspace = [c_k, c_{\ensuremath{\mathrm{tx}}\xspace}]_{\sign{\sk_A}}$, signed with Alice's private key ${\sk_A}$. \item Send \ensuremath{\mathrm{tx_w}}\xspace to the consensus group\xspace. \end{enumerate} Upon receiving the \ensuremath{\mathrm{tx_w}}\xspace, the consensus group\xspace commits it onto the blockchain following its defined consensus rules. \paragraph{Step 2: Wait for confirmations.} Each trustee of the secret-management committee\xspace monitors the transaction $\ensuremath{\mathrm{tx_w}}\xspace$ by determining in which block the transaction is placed on the blockchain and counting the number of blocks that have passed since then. Once the number of block confirmations equals $m$, indicating that the consensus group\xspace has committed the transaction, each trustee may proceed with the following step.
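Alice's ciphertext construction above, together with the NIZK validity check that trustees later apply to it, can be sketched over a toy group (insecure toy parameters; the helper names are ours, not part of F3B or TDH2):

```python
import hashlib, random

# Toy group (NOT secure): p = 2q + 1 is a safe prime; g and g_bar generate the
# order-q subgroup of squares. Variable names follow the protocol text.
q, p = 1019, 2039
g, g_bar = 4, 9

def H1(*vals):
    return int.from_bytes(hashlib.sha256(repr(vals).encode()).digest(), 'big') % q

def inv(x):
    return pow(x, p - 2, p)  # modular inverse, p prime

sk_smc = random.randrange(1, q)
pk_smc = pow(g, sk_smc, p)            # collective public key from the DKG

def encrypt_key(k_point, label):
    r, s = random.randrange(q), random.randrange(q)
    c = pow(pk_smc, r, p) * k_point % p
    u, w = pow(g, r, p), pow(g, s, p)
    u_bar, w_bar = pow(g_bar, r, p), pow(g_bar, s, p)
    e = H1(c, u, u_bar, w, w_bar, label)
    f = (s + r * e) % q
    return (c, label, u, u_bar, e, f)

def verify_ck(ck):
    # the trustees' NIZK check: recompute w = g^f / u^e and w_bar = g_bar^f / u_bar^e
    c, label, u, u_bar, e, f = ck
    w = pow(g, f, p) * inv(pow(u, e, p)) % p
    w_bar = pow(g_bar, f, p) * inv(pow(u_bar, e, p)) % p
    return e == H1(c, u, u_bar, w, w_bar, label)

k_point = pow(g, 123, p)              # the symmetric key k embedded as a group element
ck = encrypt_key(k_point, label=42)
assert verify_ck(ck)
c, _, u, *_ = ck
assert c * inv(pow(u, sk_smc, p)) % p == k_point   # c / pk^r recovers k'
```

The check succeeds because $g^f/u^e = g^{s+re-re} = g^s = w$, so an honest ciphertext always reproduces the hashed challenge $e$.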
\paragraph{Step 3: Key Reconstruction.} For the key reconstruction step, each trustee of the secret-management committee\xspace must provide its decryption share along with a proof of correctness to the consensus group, which then reconstructs the key from the shares. Each trustee $i$ performs the following steps to release its decryption share along with a proof of correctness. \begin{enumerate} \item Extract $L$ from $c_k$ and verify that $L$ is consistent with the underlying block\-chain\xspace's metadata. \item Verify the correctness of the ciphertext $c_k$ using the NIZK proof by checking: \begin{align*} e=\hash_1\left(c,u,\bar{u},w,\bar{w},L\right) , \end{align*} where $w=\frac{g^f}{u^e}$ and $\bar{w}=\frac{\bar{g}^f}{\bar{u}^e}$. \label{step:KeyReconstructionProof} \item If the \ensuremath{\mathrm{tx_w}}\xspace is valid, choose $s_i \in \ensuremath{\mathbb{Z}}_q$ at random and compute: \begin{align*} u_i=u^{\sk_i}, \hat{u_i}=u^{s_i}, \hat{h_i}=g^{s_i}, \\ e_i=\hash_2\left(u_i,\hat{u_i},\hat{h_i}\right), f_i=s_i+\sk_i e_i . \end{align*} \item Create and sign the share: ${\mathrm{share}_i} = [u_i, e_i, f_i]_{\sign{\sk_i}},$ and send it to the consensus group\xspace. \end{enumerate} In stage (\ref{step:KeyReconstructionProof}), the NIZK proof ensures that $\log_g u=\log_{\bar{g}} \bar{u}$, guaranteeing that whoever generated the \ensuremath{\mathrm{tx_w}}\xspace knows the random value $r$. Anyone who knows the value of $r$ is capable of decrypting the transaction; since it is impossible to generate \ensuremath{\mathrm{tx_w}}\xspace without knowing the plaintext transaction, this property prevents the replay attacks mentioned in \S\ref{sec:ReplayAttack}. The following steps describe the operation of each node in the consensus group\xspace.
\begin{enumerate} \item Upon receiving a share, each node in the consensus group\xspace verifies the share by checking: \begin{align*} e_i=\hash_2\left(u_i,\hat{u_i},\hat{h_i}\right), \end{align*} where $\hat{u_i}=\frac{u^{f_i}}{{u_i}^{e_i}}$ and $\hat{h_i}=\frac{g^{f_i}}{{h_i}^{e_i}}$. \label{step:VerifyShare} \item After receiving $t$ valid shares, the set of decryption shares is of the form: \begin{align*} \{(i,u_i): i \in S\}, \end{align*} where $S \subset \{1,...,n\}$ has a cardinality of $t$. Each node then executes the recovery algorithm, which performs Lagrange interpolation on the shares: $$ \cpk{r}= \prod_{i \in S} {u_i}^{\lambda_i} ,$$ where $\lambda_i$ is the $i^{th}$ Lagrange coefficient. \item Recover the encoded encryption key: \begin{align*} k' = c (\cpk{r})^{-1} = (\cpk{r} k') (\cpk{r})^{-1}. \end{align*} \item Retrieve $k$ from $k'$ and decrypt the transaction $tx=\dec{k}(c_{tx})$. \item Execute the transaction following the consensus group\xspace's defined rules. \end{enumerate} In stage (\ref{step:VerifyShare}), the NIZK proof ensures that ($u,h_i,u_i$) is a Diffie-Hellman triple, {\em i.e.}, that $u_i=u^{\sk_i}$, guaranteeing the correctness of the share.
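The share release, share verification, and Lagrange reconstruction in the exponent can be sketched end-to-end over the same kind of toy group (insecure parameters; the Shamir setup stands in for the DKG, and all names are ours):

```python
import hashlib, random

# Toy group (NOT secure): p = 2q + 1 is a safe prime, g has prime order q.
q, p = 1019, 2039
g = 4

def H2(*vals):
    return int.from_bytes(hashlib.sha256(repr(vals).encode()).digest(), 'big') % q

def inv(x, m):
    return pow(x % m, m - 2, m)  # modular inverse, m prime

# Stand-in for the DKG: Shamir-share sk among n trustees with threshold t
t, n = 3, 5
coeffs = [random.randrange(q) for _ in range(t)]
sk = coeffs[0]
sk_shares = {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q
             for i in range(1, n + 1)}
h = {i: pow(g, sk_shares[i], p) for i in sk_shares}   # verification keys h_i = g^{sk_i}

u = pow(g, random.randrange(1, q), p)                 # u = g^r from some ciphertext

def release_share(i):
    s_i = random.randrange(q)
    u_i, u_hat, h_hat = pow(u, sk_shares[i], p), pow(u, s_i, p), pow(g, s_i, p)
    e_i = H2(u_i, u_hat, h_hat)
    f_i = (s_i + sk_shares[i] * e_i) % q
    return (i, u_i, e_i, f_i)

def verify_share(share):
    i, u_i, e_i, f_i = share
    u_hat = pow(u, f_i, p) * inv(pow(u_i, e_i, p), p) % p   # u^f / u_i^e
    h_hat = pow(g, f_i, p) * inv(pow(h[i], e_i, p), p) % p  # g^f / h_i^e
    return e_i == H2(u_i, u_hat, h_hat)

def reconstruct(shares):
    # Lagrange interpolation in the exponent: prod_i u_i^{lambda_i} = u^{sk} = pk^r
    out, xs = 1, [s[0] for s in shares]
    for i, u_i, _, _ in shares:
        lam = 1
        for j in xs:
            if j != i:
                lam = lam * j % q * inv(j - i, q) % q
        out = out * pow(u_i, lam, p) % p
    return out

shares = [release_share(i) for i in (1, 2, 4)]
assert all(verify_share(s) for s in shares)
assert reconstruct(shares) == pow(u, sk, p)
```

Since $u_i = u^{\sk_i}$ and the Lagrange coefficients recover $\sk = \sum_i \lambda_i \sk_i \bmod q$, the product of verified shares yields $u^{\sk} = \cpk{r}$ without any party ever holding $\sk$.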
\section*{\Large{Supplemental Material}} \section{Self-duality} \begin{figure}[b] \centering \includegraphics[width=0.3\linewidth]{fig_s1a.pdf} \includegraphics[width=0.4\linewidth]{fig_s1b.pdf} \caption{(a) An illustration of the lattice setup and the $\sigma$--$\tau$ self-duality. (b) An illustration of the mapping onto the 3D classical model on the layered triangular lattice, where the black and red lines denote the triangular prism cage and the dual bonds, respectively. } \label{fig_s1} \end{figure} The quantum Newman-Moore model exhibits a Kramers--Wannier self-duality \cite{fracton_rev_2}. To elucidate this duality formalism, we define an effective spin-$1/2$ $\tau_\alpha$ at the center of each downward triangle on the dual lattice (Fig.~\ref{fig_s1}a) with \begin{equation} \tau^x_\alpha=\sigma_i^z\sigma_j^z\sigma_k^z. \end{equation} In this way, we can rewrite the Hamiltonian in the dual variables as follows. The $\sigma_i^z\sigma_j^z\sigma_k^z$ term becomes the onsite Zeeman field $\tau^x_\alpha$, while the spin-flip term $\sigma^x$ is dual to the three-body interaction among the three spins on the upward triangle of the dual lattice, $\sigma^x_i=\tau^z_\alpha \tau^z_\beta \tau^z_\gamma$, \begin{equation} H=-J\sum_\alpha\tau^x_\alpha-\Gamma\sum_\triangle \tau^z_\alpha\tau^z_\beta\tau^z_\gamma. \end{equation} This is exactly the same as the original model with the coupling constants $J$ and $\Gamma$ interchanged. The self-duality can be verified numerically by measuring the polarizations $m_\triangle=\langle\tau^x\rangle$ and $m^x=\langle\sigma^x\rangle$. We find that these two curves are symmetric with respect to the $\Gamma=\Gamma_c$ vertical line in Fig.~\ref{fig_s2}.
\begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{fig_s2.pdf} \caption{The polarizations $m_\triangle=\langle\tau^x\rangle$ and $m^x=\langle\sigma^x\rangle$ for different system sizes.} \label{fig_s2} \end{figure} The self-duality is also manifested by mapping the path integral of this 2D quantum model to the partition function of a 3D classical model on the layered triangular lattice at finite temperature \cite{duality} \begin{equation} S=-\tilde{J}\sum_{(ijk)_\triangledown,\tau}\sigma_{i\tau}^z\sigma_{j\tau}^z\sigma_{k\tau}^z-\tilde{\Gamma}\sum_{i,\tau}\sigma_{i\tau}^z\sigma_{i,\tau+1}^z, \end{equation} where $\tilde{J}=J\delta\tau$ and $\mathrm e^{-2\tilde{\Gamma}}=\tanh(\Gamma\delta\tau)$, $\delta\tau$ being the imaginary-time slice. The partition function of the above model is \begin{equation} Z=\sum_{\{\sigma\}}\mathrm e^{-S}=\sum_{\{\sigma\}}\prod_\triangledown\left(\cosh\tilde{J}+\sigma_1^\triangledown\sigma_2^\triangledown\sigma_3^\triangledown\sinh\tilde{J}\right)\times\prod_b\left(\cosh\tilde{\Gamma}+\sigma_1^b\sigma_2^b\sinh\tilde{\Gamma}\right) . \end{equation} We introduce triangle variables $k_\triangledown=0,1$ and bond variables $k_b=0,1$ and define $c_0=\cosh\tilde{J}$, $c_1=\sinh\tilde{J}$, $d_0=\cosh\tilde{\Gamma}$, $d_1=\sinh\tilde{\Gamma}$. The partition function can then be written as \begin{equation} Z=\sum_{\{\sigma\}}\sum_{\{k_\triangledown,k_b\}}\prod_\triangledown c_{k_\triangledown}(\sigma_1^\triangledown\sigma_2^\triangledown\sigma_3^\triangledown)^{k_\triangledown}\times\prod_bd_{k_b}(\sigma_1^b\sigma_2^b)^{k_b} . \end{equation} First, take the spin sum: each spin sums to two if raised to an even power and to zero otherwise. This results in a constrained sum over the $k$ variables: \begin{equation} Z=2^N\sum'_{\{k_\triangledown,k_b\}}\prod c_{k_\triangledown}\prod d_{k_b}.
\end{equation} Each site of the original lattice is connected to three downward triangles and two $z$-bonds, with the constraint that the sum of the five $k$ variables is even for all sites. We introduce the dual spins $\tau$ located at the center of each cage-unit on the layered triangular lattice. For site $i$ on the original lattice, its three neighboring down-pointing triangles are pierced by vertical bonds of the dual lattice. For each piercing bond $b$ of the dual lattice, we set $k_\triangledown=(1-\tau^b_1\tau^b_2)/2$. Similarly, each of the two vertical bonds $b$ containing site $i$ pierces a down-pointing triangle of the dual lattice, and we set $k_b=(1-\tau_1^\triangledown\tau_2^\triangledown\tau_3^\triangledown)/2$. The $k$ variables given by the $\tau$ dual spins automatically satisfy the constraints. Now we calculate the dual couplings, \begin{equation} \begin{aligned} c_{k_\triangledown}&=k_\triangledown\sinh\tilde{J}+(1-k_\triangledown)\cosh\tilde{J}\\ &=\frac{1+\tau_1^b\tau_2^b}{2}\cosh\tilde{J}+\frac{1-\tau_1^b\tau_2^b}{2}\sinh\tilde{J}\\ &=\frac{\mathrm e^{\tilde{J}}}{2}(1+\tau_1^b\tau_2^b\tanh\tilde{\Gamma}^*)\\ &=(2\sinh(2\tilde{\Gamma}^*))^{-1/2}\mathrm e^{\tilde{\Gamma}^*\tau_1^b\tau_2^b}, \end{aligned} \end{equation} where we have defined $\tanh\tilde{\Gamma}^*=\mathrm e^{-2\tilde{J}}$. By the same procedure, \begin{equation} d_{k_b}=(2\sinh(2\tilde{K}^*))^{-1/2}\mathrm e^{\tilde{K}^*\tau_1^\triangledown\tau_2^\triangledown\tau_3^\triangledown} \end{equation} with $\tanh\tilde{K}^*=\mathrm e^{-2\tilde{\Gamma}}$. These relations determine the self-dual point at $\tilde{J}=\tilde{\Gamma}=\frac{1}{2}\log(1+\sqrt{2})$. \section{Fractal symmetry and polynomial representation} When periodic boundary conditions are imposed, the ground state degeneracy oscillates with the system size. The symmetry operator which connects these degenerate ground states is known as a ``subsystem symmetry''.
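The dual-coupling relations of the previous section can be sanity-checked numerically: $\tanh\tilde{\Gamma}^*=e^{-2\tilde{J}}$ implies $\sinh(2\tilde{J})\sinh(2\tilde{\Gamma}^*)=1$, the map is an involution, and its fixed point is the self-dual coupling. A quick sketch (function name is ours):

```python
import math

def dual(J):
    # dual coupling Gamma* defined by tanh(Gamma*) = exp(-2 J)
    return math.atanh(math.exp(-2.0 * J))

for J in (0.3, 0.7, 1.2):
    G = dual(J)
    assert abs(math.sinh(2 * J) * math.sinh(2 * G) - 1.0) < 1e-12  # sinh identity
    assert abs(dual(G) - J) < 1e-12                                # involution

J_sd = 0.5 * math.log(1.0 + math.sqrt(2.0))
assert abs(dual(J_sd) - J_sd) < 1e-12                              # self-dual point
```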
A subsystem symmetry operation acts only on a subset of the lattice sites with a subextensive number of degrees of freedom. Unlike global symmetries which act on the whole system, here the subsystem symmetry acts on a fractal part of the lattice with the shape of a Sierpinski triangle. A more elegant formulation of such subsystem symmetries is provided in terms of a polynomial representation. We give here a minimal description and refer to Ref.~\cite{fracton_rev_2} for more details. Every site $(i,j)$ is represented by a term $x^iy^j$. The coefficients of the terms are $\mathbb{Z}_2$ numbers, which take values only in $\{0,1\}$, with $1+1=0$. The Sierpinski triangle, for example, can be expressed as \begin{equation} F(x,y)=\sum_{k=0}^{\infty}(y+xy)^k. \end{equation} The terms in the Hamiltonian $\sigma^z_{ij}\sigma^z_{i,j-1}\sigma^z_{i-1,j-1}$ can be written as $\mathscr{Z}(x^iy^j(1+y^{-1}+x^{-1}y^{-1}))$, where $\mathscr{Z}$ maps a polynomial into the product of a set of Pauli-$z$ matrices, \begin{equation} \mathscr{Z}\left(\sum_{ij}c_{ij}x^iy^j\right)=\prod_{ij}(\sigma_{ij}^z)^{c_{ij}} \end{equation} and $\mathscr{X}$ is similarly defined. In this way we can express the Hamiltonian as \begin{equation} H=-J\sum_{ij}\mathscr{Z}(x^iy^j(1+y^{-1}+x^{-1}y^{-1}))-h\sum_{ij}\mathscr{X}(x^iy^j) \end{equation} and the symmetry operator can be written as \begin{equation} S=\mathscr{X}(q(x)F(x,y)). \end{equation} For a half-infinite plane, $q(x)$ is an arbitrary polynomial; on a torus, $q(x)$ has to satisfy the additional condition \begin{equation} q(x)(1+x)^L=q(x) \end{equation} with $x^L=1$. This condition reduces to a system of linear equations over $\mathbb{Z}_2$, from which we can obtain the degeneracy, \begin{equation} \log_2N(L)=\left\{\begin{array}{ll} 2^m(2^n-2),&L=C\times 2^m(2^n-1)\\ 0,&\textrm{otherwise.} \end{array}\right.
\end{equation} \section{Many-body correlation and order parameter} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig_s3.pdf} \caption{An illustration of the $\Pi_i$ operator as a generator of the many-body correlation, where the points of different colors denote the $\sigma^z$'s multiplied in the $\Pi_i$ operator.} \label{fig_s3} \end{figure} The many-body correlation operator can characterize the quantum phase transition between the fractal symmetry breaking phase and the paramagnetic phase~\cite{duality,fracton_rev_2} \begin{equation} C(r)=\mathscr{Z}(1+(y^{-1}+x^{-1}y^{-1})^r). \end{equation} This polynomial representation of the correlator can be interpreted as the product of the spin at the corner together with all the spins in the $r$-th line of a down-pointing Sierpinski triangle as shown in Fig.~\ref{fig_s3}, \textit{e.g.}, $C(r=3)=\sigma_{0,0}^z\sigma_{\bar 3,0}^z\sigma_{\bar 3,\bar 1}^z\sigma_{\bar 3,\bar 2}^z\sigma_{\bar 3,\bar 3}^z$ and $C(r=4)=\sigma_{0,0}^z\sigma_{\bar 4,0}^z\sigma_{\bar 4,\bar 4}^z$, where the bar denotes a negative sign. This operator reduces to a three-body correlation when $r=2^k$. The operator is expected to be non-vanishing in the fractal symmetry breaking phase and to decay hyper-exponentially as $C(r)\sim\mathrm e^{-(r/\xi)^d}$ \cite{fracton_rev_2} in the paramagnetic phase, where $d=\ln 3/\ln 2$ is the fractal dimension. The nonvanishing expectation value of $C(r)$ in the symmetry-breaking phase implies quantum correlations among the spins living at the corners of the Sierpinski triangle. It is worth noting that $C(r)$ depends on the number $N_r$ of $\sigma^z$ factors in the product, which can be regarded as the number of ``corners'' on the fractal boundary. Such fractal scaling can be visualized from the dual picture.
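The spin content of $C(r)$ follows directly from the parity of binomial coefficients, since expanding $(y^{-1}+x^{-1}y^{-1})^r$ keeps exactly the terms with $\binom{r}{k}$ odd. A quick check of the examples above (the helper name is ours):

```python
from math import comb

def correlator_size(r):
    # number of spins in C(r): the corner spin plus one spin for every odd
    # binomial coefficient C(r, k) in the r-th row of Pascal's triangle mod 2
    return 1 + sum(comb(r, k) % 2 for k in range(r + 1))

assert correlator_size(3) == 5                 # the five-spin example in the text
assert correlator_size(4) == 3                 # the three-spin example in the text
# C(r) reduces to a three-body correlation whenever r is a power of two
assert all(correlator_size(2 ** k) == 3 for k in range(1, 10))
```

By Lucas' theorem the count is $1+2^{\mathrm{popcount}(r)}$, which grows on average as the number of "corners" of the fractal.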
As $C(r)$ is the product of all $\tau_x=\sigma_i^z\sigma_j^z\sigma_k^z$ operators inside the Sierpinski fractal, it counts the total number of dual $Z_2$ charges ($\tau_x$) inside the fractal region. The transverse field $\sigma_i^x$ creates three fracton defects with $\tau_x=\sigma_i^z\sigma_j^z\sigma_k^z=-1$ in the adjacent downward triangles. In the fractal symmetry breaking phase, such a fracton defect is confined and forms a three-fracton bound state, so the defects are always created in triads, as shown in Fig.~\ref{fig_s1}. These triple defect excitations could reverse the dual charge inside the Sierpinski triangle if and only if they live at the corner of the fractal. Thus, the scaling of the many-body operator $C(r)$ obeys a ``corner-law''. Likewise, in the paramagnetic phase, isolated defects proliferate and permeate the fractal. Each isolated defect with $\tau_x=-1$ inside the fractal flips the parity of the dual charges, so the expectation value of the dual charge inside the fractal exhibits hyper-exponential scaling $C(r)\sim\mathrm e^{-(r/\xi)^d}$, with $r^d$ being the number of sites inside the fractal. At the quantum critical point, the many-body operator $C(r)$ displays short-ranged correlations. In particular, $C(r)$ decreases with distance $r$ in a manner that is faster than any power-law function but slower than the exponential function. In Fig.~\ref{qcpfig}, we plot the correlation function on a log–log graph, and finite-size scaling implies $C(r)\sim e^{-C_1(\ln(r))^a}$ with $a=2.7(1)$. \begin{figure}[ht] \centering \includegraphics[width=0.5\linewidth]{qcp.pdf} \caption{The many-body correlation function $C(r)$ at the quantum critical point plotted on a log–log graph; red points --- Monte Carlo results; black dashed line --- fitting in the form $C(r)=C_0e^{-C_1(\ln(r))^a}$.} \label{qcpfig} \end{figure} We can generalize the expression by introducing another subscript \begin{equation} C_{rl}=\mathscr{Z}((y^{-1}+x^{-1}y^{-1})^r+(y^{-1}+x^{-1}y^{-1})^l).
\end{equation} Taking the summation over $r$ and $l$, \begin{equation} \sum_{rl}C_{rl}=\sum_{rl}\mathscr{Z}((y^{-1}+x^{-1}y^{-1})^r)\mathscr{Z}((y^{-1}+x^{-1}y^{-1})^l)=L^2|\psi|^2 \end{equation} the summation can be decomposed into the square of an ``order parameter'', defined as \begin{equation} \psi=\frac{1}{L}\sum_{r=0}^{L-1}\mathscr{Z}((y^{-1}+x^{-1}y^{-1})^r). \end{equation} This quantity genuinely distinguishes the ordered phase from the paramagnetic phase. \section{UV-IR mixing at criticality --- how renormalization breaks down} In the manuscript, we build a Gaussian theory to explain the fractal scaling dimension at the phase transition point \begin{align}\label{ga} & \mathscr L=K(\partial_t \theta)^2+K (D_i \theta)^2\nonumber\\ & D_i=a_0\nabla_x \nabla_y+1/a_0+\nabla_y. \end{align} Here $a_0$ is the lattice spacing and $D_i$ is a lattice differential polynomial that creates the three-body interaction. We illustrate this differential-polynomial formalism in Fig.~\ref{th} on the tilted triangular lattice. Each site contains an Ising degree of freedom $\sigma^z=e^{i\theta},\sigma^x=e^{i \pi n}$. Here $\theta,n$ are a conjugate pair with discrete values $\theta=0,\pi$ and $n=0,1$. \begin{figure}[t] \centering \includegraphics[width=0.35\linewidth]{th.png} \caption{The three-body interaction on a triangle can be expressed as $\cos(\theta_{i,j}+\theta_{i+1,j}-\theta_{i,j+1})$.} \label{th} \end{figure} The three-body coupling can be written as \begin{align} \cos(\theta_{i,j}+\theta_{i+1,j}-\theta_{i,j+1}). \end{align} Expanding the cosine term to quadratic order and replacing lattice differences with differentials, \begin{align} &(\theta_{i,j}+\theta_{i+1,j}-\theta_{i,j+1})^2\nonumber\\ &=[(\theta_{i,j}-\theta_{i+1,j}-\theta_{i,j+1}+\theta_{i+1,j+1})+(\theta_{i+1,j}-\theta_{i+1,j+1})+\theta_{i+1,j}]^2\nonumber\\ &=((a_0\nabla_x \nabla_y+\nabla_y+1/a_0) \theta)^2\equiv (D_i\theta)^2.
\end{align} The Newman-Moore model becomes \begin{align} &\mathcal{L}=n \partial_t \theta+\Gamma (D_i \theta)^2+J n^2. \end{align} Integrating out the density fluctuation $n$, we get \begin{align} &\mathcal{L}=\frac{1}{4J} (\partial_t \theta)^2+\Gamma (D_i \theta)^2. \end{align} Here we keep the value of $4J/\Gamma$ fixed and tune the strength of $\Gamma$ throughout the phase transition. After a rescaling of time, we get \begin{align} &\mathcal{L}=K (\partial_t \theta)^2+K (D_i \theta)^2. \end{align} Notably, this differential polynomial $D_i$ contains a constant piece, which implies that global charge conservation is broken by the three-body interaction while the fractal symmetry is still respected. At the critical point, the gapless excitation has a dispersion $\epsilon(k)=|a_0 k_x k_y+1/a_0+k_y|$. Here the lattice regularization $a_0\sim 1/\Lambda$ is essential due to subsystem symmetry, and we will keep the momentum cutoff as our UV regulator. Peculiarly, the energy minimum does not appear at the $\Gamma$ point. This is a manifestation of the absence of a global $Z_2$ symmetry. In addition, there should exist a large region at finite momentum with zero energy. This agrees with our numerical result for the static structure factor \begin{align} G_x(\mathbf{q})=\int d\omega ~~\omega^2 G(q,\omega) =\int d\omega ~~\frac {\omega^2}{K(\omega^2+\epsilon^2(q))}=\epsilon^2(q)/K. \end{align} Unlike other critical phenomena, where the low-energy excitations are ascribed to long-wavelength modes and the structure-factor minimum appears at the $\Gamma$ point, the structure factor $G_x(\mathbf{q})$ at the fractal symmetric critical point contains an extensive region in momentum space with low intensity, corresponding to the strong fluctuations of defects at short wavelengths. On the other hand, the zero-momentum point has a higher intensity since zero-momentum modes cost finite energy.
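Both features of the dispersion, the gapped $\Gamma$ point and an entire curve of exactly gapless modes at finite momentum, can be checked directly (a quick sketch with $a_0=1$):

```python
a0 = 1.0  # lattice spacing kept as the UV regulator

def eps(kx, ky):
    # lattice dispersion from D_i: eps(k) = |a0 kx ky + 1/a0 + ky|
    return abs(a0 * kx * ky + 1.0 / a0 + ky)

assert eps(0.0, 0.0) == 1.0                   # the Gamma point costs finite energy
for kx in (0.5, 1.0, 2.0):
    ky = -1.0 / (a0 * (a0 * kx + 1.0))        # solve a0 kx ky + 1/a0 + ky = 0
    assert abs(eps(kx, ky)) < 1e-12           # a whole curve of gapless modes at finite k
```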
Therefore the low-energy dynamics at the phase transition does not occur in the long-wavelength region, and the short-wavelength physics controls the critical behavior. Quantum critical points connecting different phases have broad implications from various perspectives. The divergence of the correlation length in the critical region implies that the effective interaction in the IR becomes highly non-local, and hence requires us to view the system at larger scales and longer wavelengths. A complete understanding of critical phenomena was accomplished via the development of renormalization group (RG) theory. The RG theory of critical phenomena provides the natural framework for defining quantum field theories at a nonperturbative level and hence has broad implications for strong coupling problems. In particular, the universal properties of a broad class of random or statistical systems can be understood by coarse-graining the short-wavelength physics with local fluctuations and focusing on the long-wavelength behavior. Based on this observation, critical phenomena share many universal properties that are independent of the UV Hamiltonian and rely only on symmetry and dimensionality. For instance, the scaling dimension and the correlation function of the energy density at a critical point (or phase) have a universal power-law form $S(r)=r^{-D}$ that depends on the space-time dimension $D$. Such a universal power law stems from the fact that the low-energy part of the spectrum comes from long-wavelength fluctuations at small momentum. Based on this assumption, the field patterns at low energy, which control the IR behavior, are determined by the spatial dimension and dynamical exponent, which are insensitive to UV cut-offs. However, the fractal symmetry breaking phase transition we demonstrate here is a peculiar example that escapes the renormalization paradigm.
The essence of such RG breakdown lies in the fact that the subsystem symmetry in the many-body system engenders a large number of collective excitations that fluctuate strongly at short wavelengths but still survive in the low-energy spectrum close to the phase transition point. The fractal scaling dimension of the energy density correlation and the subextensive number of zero-energy modes at finite momentum are a direct manifestation of such UV-IR mixing. In contrast to the spontaneous breaking of global symmetries, our phase transition theory is dominated by short-wavelength physics with local fluctuations. The low-energy modes at the critical point contain a large number of rough field configurations connected by the fractal symmetry. The survival of these rough patterns at the critical point engenders the ``UV-IR mixing'', as the low-energy degrees of freedom are governed by the physics at short wavelengths and the traditional renormalization group paradigm does not apply. In particular, we can neither simply coarse-grain the local fluctuations nor change the UV cut-off, as the high-momentum modes with zero energy can bring additional singularities and hence qualitatively change the universal behavior. \section{Numerical method} For the Monte Carlo simulations, we use the stochastic series expansion (SSE) method. We adapt the transverse field Ising model update strategy \cite{qmc_2} for the three-body Ising operator. In quantum statistics, the measurement of observables is closely related to the calculation of the partition function $Z$ \begin{equation} \langle\mathscr{O}\rangle=\mathrm{tr}\:\left(\mathscr{O}\exp(-\beta H)\right)/Z,\quad Z=\mathrm{tr}\exp(-\beta H), \end{equation} where $\beta=1/T$ is the inverse temperature, $H$ is the Hamiltonian of the system and $\mathscr{O}$ is an arbitrary observable.
Typically, in order to evaluate ground-state properties, one takes a sufficiently large $\beta$ such that $\beta\sim L^z$, where $L$ is the system scale and $z$ is the dynamical exponent. In SSE, such evaluation of $Z$ is done by a Taylor expansion of the exponential, and the trace is taken by summing over a suitably chosen complete basis \begin{equation} Z=\sum_\alpha\sum_{n=0}^\infty\frac{\beta^n}{n!}\langle\alpha|(-H)^n|\alpha\rangle. \end{equation} We then write the Hamiltonian as the sum of a set of operators whose matrix elements are easy to calculate \begin{equation} H=-\sum_iH_i. \end{equation} In practice, we truncate the Taylor expansion at a sufficiently large cutoff $M$, and it is convenient to fix the sequence length by introducing an identity operator $H_0=1$ to fill all the empty positions, even though it is not part of the Hamiltonian \begin{equation} (-H)^n=\sum_{\{i_p\}}\prod_{p=1}^nH_{i_p}=\sum_{\{i_p\}}\frac{(M-n)!n!}{M!}\prod_{p=1}^nH_{i_p} \end{equation} and \begin{equation} Z=\sum_\alpha\sum_{\{i_p\}}\beta^n\frac{(M-n)!}{M!}\langle\alpha|\prod_{p=1}^nH_{i_p}|\alpha\rangle. \end{equation} To carry out the summation, a Monte Carlo procedure can be used to sample the operator sequence $\{i_p\}$ and the trial state $\alpha$ according to their relative weight \begin{equation} W(\alpha,\{i_p\})=\beta^n\frac{(M-n)!}{M!}\langle\alpha|\prod_{p=1}^nH_{i_p}|\alpha\rangle. \end{equation} For the sampling, we adopt a Metropolis algorithm where each configuration is generated by updating the previous one, and the update is accepted with probability \begin{equation} P(\alpha,\{i_p\}\rightarrow\alpha',\{i'_p\})=\min\left(1,\frac{W(\alpha',\{i'_p\})}{W(\alpha,\{i_p\})}\right). \end{equation} Diagonal updates, where diagonal operators are inserted into and removed from the operator sequence, and cluster updates, where diagonal and off-diagonal operators convert into each other, are adopted in the update strategy.
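As a much simpler illustration of the Metropolis rule above (this is a classical single-spin-flip sketch of the two-dimensional Newman-Moore model at zero transverse field, not the full SSE cluster algorithm; all names and parameters are ours):

```python
import math, random

def triangles(i, j, L):
    # the three downward-triangle terms s_{a,b} s_{a,b-1} s_{a-1,b-1} containing site (i,j)
    m = lambda a, b: (a % L, b % L)
    return [(m(i, j), m(i, j - 1), m(i - 1, j - 1)),
            (m(i, j + 1), m(i, j), m(i - 1, j)),
            (m(i + 1, j + 1), m(i + 1, j), m(i, j))]

def delta_E(s, i, j, L, J=1.0):
    # flipping s_{i,j} negates every triangle product it enters: dE = 2J * (sum of products)
    return 2.0 * J * sum(s[a] * s[b] * s[c] for a, b, c in triangles(i, j, L))

def sweep(s, L, beta):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = delta_E(s, i, j, L)
        if dE <= 0 or random.random() < math.exp(-beta * dE):  # Metropolis rule
            s[(i, j)] = -s[(i, j)]

L, beta = 6, 5.0
s = {(i, j): 1 for i in range(L) for j in range(L)}
assert delta_E(s, 0, 0, L) == 6.0   # in the all-up state each of the 3 triangles costs 2J
for _ in range(5):
    sweep(s, L, beta)
assert sum(s.values()) == L * L     # at this low temperature the ordered state is stable
```

The restricted mobility of the triangle defects is exactly what makes such local updates glassy at low temperature, motivating the cluster strategy described next.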
For the transverse field Ising model $H=J\sum_bS_{i_b}^zS_{j_b}^z-h\sum_i\sigma_i^x$, we write the Hamiltonian as the sum of the following operators \begin{equation} \begin{aligned} H_0&=1\\ H_i&=h(S_i^++S_i^-)/2\\ H_{i+n}&=h/2\\ H_{b+2n}&=J(1/4-S_{i_b}^zS_{j_b}^z), \end{aligned} \end{equation} where a constant is added to the Hamiltonian for convenience. For the non-local update, a branching cluster update strategy is constructed, where a cluster is formed in $(D+1)$ dimensions by grouping spins and operators together. Each cluster terminates on site operators and includes bond operators. All the spins in each cluster are flipped together with probability $1/2$ after all clusters are identified. In our transverse field Newman-Moore model, we also assign weights to the triangular operators on which the constraint is broken \begin{equation} \langle\uparrow\uparrow\uparrow|H_i|\uparrow\uparrow\uparrow\rangle=J+\epsilon, \langle\downarrow\downarrow\downarrow|H_i|\downarrow\downarrow\downarrow\rangle=\epsilon. \end{equation} On each triangular operator, one special site is chosen (denoted as $A$, the other two sites as $B,C$). When an update line enters at site $B$ or $C$, it exits from the other three legs at sites $B$ and $C$; when an update line enters at site $A$, it exits only from the \textit{one} other leg at site $A$ (Fig. \ref{fig_s5}a). This update process will change the weight of the configuration, so each cluster is flipped with probability $P_\mathrm{acc}=(J/\epsilon+1)^{\Delta N}$, calculated after the cluster is formed, where $\Delta N$ denotes the change in the number of energetically favorable operators. \begin{figure} \centering \includegraphics[width=0.3\linewidth]{fig_s4a.pdf} \includegraphics[width=0.5\linewidth]{fig_s4b.pdf} \caption[An illustration of the algorithm.]{(a) An illustration of our update strategy. (b) The Edwards-Anderson order parameter as a function of transverse field $\Gamma$ for different system sizes and temperatures.
} \label{fig_s5} \end{figure} \section{Glassiness and verification of ergodicity} Given the restricted mobility of excitations, Monte Carlo simulations of such a system easily run into the problem of glassiness \cite{newman_2}, which may undermine the ergodicity of our algorithm. To evaluate the glassiness, we calculate the Edwards-Anderson order parameter \cite{glass_order} \begin{equation} q_\mathrm{EA}=\frac{1}{N}\sum_i\left(\langle S_i^z\rangle\right)^2. \end{equation} The limits $q_\mathrm{EA}=0$ and $1$ signify that the spins are completely flippable and completely frozen, respectively. We test three sizes $L=8,16$ and $32$ at different temperatures $\beta=L/2$ and $L/8$ (Fig. \ref{fig_s5}b). When $\beta=L/8$, the system shows no glassiness for the small sizes $L=8,16$. For $L=32$, the ergodicity of Monte Carlo updates seems acceptable in the vicinity of the transition point $\Gamma=1$, while glassiness shows up deep in the confined phase. When $\beta=L/2$ and $L=16$, the glassiness reaches a high level once we enter the confined phase. We therefore confine the following calculations to $\beta=L/8$ unless otherwise indicated. In order to reach the ground state in the ordered phase in spite of the glassiness, we adopt an annealing process by adding a longitudinal field \begin{equation} H'=-h\sum_i\sigma^z_i. \end{equation} The field is lowered to zero during the thermalization process to restore the original Hamiltonian. \section{Finite size scaling} To determine the critical exponents, we perform a finite-size scaling analysis. It is worth noting that the order parameter we define is non-local, so that the exponents can only be formally defined. Their true significance remains to be studied. We choose 8 sizes between $8$ and $32$ at which the ground state is unique, $L=8,11,13,16,20,23,26,32$, to perform the finite-size scaling.
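Whether a given $L$ has a unique ground state can be checked from the torus condition $q(x)(1+x)^L=q(x)$: the solution space is trivial exactly when $(1+x)^L-1$ is a unit modulo $x^L+1$ over $\mathbb{Z}_2$. A quick sketch using bitmask polynomial arithmetic (function names are ours; the size lists are examples we verified, not an exhaustive check):

```python
def gf2_mod(a, m):
    # remainder of polynomial a modulo m, both encoded as bitmasks over GF(2)
    d = m.bit_length()
    while a.bit_length() >= d:
        a ^= m << (a.bit_length() - d)
    return a

def gf2_mulmod(a, b, m):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return gf2_mod(r, m)

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def pow_1px(L, m):
    # (1 + x)^L mod m by square-and-multiply
    r, base, e = 1, 0b11, L
    while e:
        if e & 1:
            r = gf2_mulmod(r, base, m)
        base = gf2_mulmod(base, base, m)
        e >>= 1
    return r

def unique_ground_state(L):
    # q(x)((1+x)^L - 1) = 0 mod (x^L + 1) has only q = 0
    # iff (1+x)^L - 1 is coprime to x^L + 1
    m = (1 << L) | 1          # x^L + 1
    a = pow_1px(L, m) ^ 1     # (1+x)^L - 1, reduced
    return gf2_gcd(a, m) == 1

assert all(unique_ground_state(L) for L in (8, 11, 13, 16, 20, 26, 32))
assert not any(unique_ground_state(L) for L in (3, 6, 7, 12, 14, 15))
```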
The scaling hypothesis states that for any quantity $Q$, the finite-size behavior should follow \begin{equation} x_L(\Gamma)=(\Gamma/\Gamma_c-1)L^{1/\nu},\ y_L(\Gamma)=Q_L(\Gamma)L^{-\kappa/\nu}, \end{equation} where $y$ is a universal function of $x$ independent of the system size, and $\kappa$ denotes the exponent associated with the quantity $Q$. \textit{E.g.}, for the heat capacity $C$, $\kappa=\alpha$; for the order parameter $\psi$, $\kappa=\beta$; for the susceptibility $\chi$, $\kappa=\gamma$; and for the dimensionless Binder ratio, $\kappa=0$. Since large sizes are beyond our reach, correction terms are needed in some cases \begin{equation} x_L(\Gamma)=(\Gamma/\Gamma_c-1)L^{1/\nu},\ y_L(\Gamma)=Q_L(\Gamma)\frac{L^{-\kappa/\nu}}{1+cL^{-\omega}}. \end{equation} To carry out the analysis in practice, we retain only the first few terms of the Taylor expansion \begin{equation} y=C_0+C_1x+C_2x^2+C_3x^3. \end{equation} The finite-size scaling of the Binder ratio $U_\psi$ gives $\nu=0.50(2)$; the order parameter $\psi$ gives $\beta/\nu=0.448(8)$; the susceptibility gives $\gamma/\nu=2.19(4)$; and the heat capacity $C$ gives $\alpha/\nu=0.58(3)$. With the scaling relation \begin{equation} \gamma/\nu=2-\eta, \end{equation} we obtain a negative anomalous dimension $\eta=-0.19(4)$. All of our results on the critical exponents are collected in Table \ref{tbl_1}. \begin{table}[t] \caption{All the critical exponents calculated. } \begin{tabular}{c|c} \hline Exponent&Value\\ \hline $\nu$&$0.50(2)$\\ $\beta$&$0.224(5)$\\ $\gamma$&$1.10(5)$\\ $\alpha$&$0.28(2)$\\ \hline $\eta$&$-0.19(4)$\\ \hline \end{tabular} \label{tbl_1} \end{table} \end{document}
\section{Introduction} The canonical compressed sensing problem can be formulated as follows. The signal is a random $n$-dimensional vector $X^n = (X_1,\ldots,X_n)$ whose entries are drawn independently from a common distribution $P_X$ with finite variance. The signal is observed using noisy linear measurements of the form \begin{align} Y_k = \langle A_k, X^n \rangle + W_k,\notag \end{align} where $\{A_k\}$ is a sequence of $n$-dimensional measurement vectors, $\{W_k\}$ is a sequence of standard Gaussian random variables, and $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product between vectors. The primary goal is to reconstruct $X^n$ from the set of $m$ measurements $\{(Y_k, A_k)\}_{k=1}^m$. Since the reconstruction problem is symmetric under simultaneous scaling of $X^n$ and $\{W_k\}$, the unit-variance assumption on $\{W_k\}$ incurs no loss of generality. In matrix form, the relationship between the signal and a set of $m$ measurements is given by \begin{align} Y^m = A^m X^n + W^m \label{eq:model} \end{align} where $A^m$ is an $m \times n $ measurement matrix whose $k$-th row is $A_k$. This paper analyzes the minimum mean-square error (MMSE) reconstruction in the asymptotic setting where the number of measurements $m$ and the signal length $n$ increase to infinity. The focus is on scaling regimes in which the measurement ratio $\delta_n = m/n$ converges to a number $\delta \in (0,\infty)$. The objective is to show that the normalized mutual information (MI) and MMSE converge to limits, \begin{align*} \mathcal{I}_n (\delta_n) & \triangleq \frac{1}{n} I\left(X^n ; Y^{m} \! \mid \! A^{m} \right) \to \mathcal{I}(\delta) \\ \mathcal{M}_n (\delta_n) & \triangleq \frac{1}{n} \mathsf{mmse}\left(X^n \! \mid \! Y^{m} , A^m \right) \to \mathcal{M}(\delta), \end{align*} almost everywhere and to characterize these limits in terms of the measurement ratio $\delta$ and the signal distribution $P_X$. 
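As a purely illustrative aside (not part of the formal development), the measurement model \eqref{eq:model} is straightforward to simulate; the sparse Bernoulli--Gaussian prior below is an arbitrary example choice for $P_X$ and is not prescribed by the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

n, delta = 500, 1.5            # signal length and measurement ratio
m = int(delta * n)             # number of measurements

# Signal: i.i.d. entries from a Bernoulli-Gaussian (sparse) prior,
# chosen here purely as an example of a distribution P_X.
sparsity = 0.1
x = rng.normal(size=n) * (rng.random(n) < sparsity)

# Measurement matrix: i.i.d. Gaussian rows with covariance (1/n) I_n,
# as in Assumption 1.
A = rng.normal(scale=1.0 / np.sqrt(n), size=(m, n))

# Noisy linear measurements Y^m = A^m X^n + W^m with unit-variance noise.
y = A @ x + rng.normal(size=m)
print(y.shape)
```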
We note that all mutual informations are computed in nats.

Using the replica method from statistical physics, Guo and Verd\'{u}~\cite{guo:2005} provide an elegant characterization of these limits in the setting of i.i.d.\ measurement matrices. Their result was stated originally as a generalization of Tanaka's replica analysis of code-division multiple-access (CDMA) with binary signaling \cite{tanaka:2002}. The replica method was also applied specifically to compressed sensing in~\cite{kabashima:2009,guo:2009, rangan:2012, reeves:2012,reeves:2012a, tulino:2013}. The main issue, however, is that the replica method is not rigorous. It requires an exchange of limits that is unjustified, and it requires the assumption of replica symmetry, which is unproven in the context of the compressed sensing problem. The main result of this paper is that the replica prediction is correct for i.i.d.\ Gaussian measurement matrices provided that the signal distribution, $P_X$, has a bounded fourth moment and satisfies a certain `single-crossing' property. The proof differs from previous approaches in that we first establish some properties of the finite-length MMSE and MI sequences, and then use these properties to uniquely characterize their limits. \subsection{The Replica-Symmetric Prediction} We now describe the results predicted by the replica method. For a signal distribution $P_X$, the function $R : \mathbb{R}_+^2 \goto \mathbb{R}_+$ is defined as \begin{align} R(\delta, z) &= I_X\left( \frac{\delta}{1+z} \right) + \frac{\delta}{2}\left[ \log\left( 1+ z\right) - \frac{ z}{ 1 + z} \right], \notag \end{align} where $I_X(s) = I\left( X ; \sqrt{s} X + N \right)$ is the scalar mutual information function (in nats) of $X \sim P_X$ under independent Gaussian noise $N \sim \mathcal{N}(0, 1)$ with signal-to-noise ratio $s \in \mathbb{R}_+$~\cite{guo:2005,guo:2009}.
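As a concrete illustration of the single-letter function $I_X$, consider a uniform $\pm 1$ input, for which the standard closed forms $I_X(s) = s - \ensuremath{\mathbb{E}}[\log\cosh(s + \sqrt{s}\,Z)]$ and $\mathsf{mmse}_X(s) = 1 - \ensuremath{\mathbb{E}}[\tanh(s + \sqrt{s}\,Z)]$ with $Z \sim \mathcal{N}(0,1)$ can be evaluated by Gauss--Hermite quadrature. The sketch below (an aside, not from the paper) also checks the well-known I-MMSE relation $\frac{\mathrm{d}}{\mathrm{d}s} I_X(s) = \frac{1}{2}\mathsf{mmse}_X(s)$ numerically.

```python
import numpy as np

# Gauss-Hermite quadrature for E[f(Z)], Z ~ N(0,1):
# E[f(Z)] ≈ sum_k (w_k / sqrt(pi)) f(sqrt(2) t_k)
t, w = np.polynomial.hermite.hermgauss(80)
z, wz = np.sqrt(2.0) * t, w / np.sqrt(np.pi)

def I_X(s):
    # I_X(s) = s - E[log cosh(s + sqrt(s) Z)], in nats, for X uniform on {-1,+1}
    return s - np.sum(wz * np.log(np.cosh(s + np.sqrt(s) * z)))

def mmse_X(s):
    # mmse_X(s) = 1 - E[tanh(s + sqrt(s) Z)] for the same input
    return 1.0 - np.sum(wz * np.tanh(s + np.sqrt(s) * z))

# Numerical check of the I-MMSE relation d/ds I_X(s) = (1/2) mmse_X(s)
s, h = 1.0, 1e-5
deriv = (I_X(s + h) - I_X(s - h)) / (2.0 * h)
print(deriv, 0.5 * mmse_X(s))
```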
\begin{definition}\label{def:rs_limits} The replica-MI function $\cI_{\mathrm{RS}}\colon \mathbb{R}_{+} \to \mathbb{R}_{+}$ and the replica-MMSE function $\cM_{\mathrm{RS}}\colon \mathbb{R}_{+} \to \mathbb{R}_{+}$ are defined as \begin{align} \cI_{\mathrm{RS}} (\delta) &= \min_{z \ge 0} R(\delta, z) \notag\\
\cM_{\mathrm{RS}} (\delta) & \in \arg\min_{z \ge 0 } R(\delta, z). \notag \end{align} \end{definition} \vspace{-0.5mm} The function $\cI_{\mathrm{RS}}(\delta)$ is increasing because $R(\delta,z)$ is increasing in $\delta$, and it is concave because it is the pointwise minimum of concave functions. The concavity implies that $\cI_{\mathrm{RS}}'(\delta)$ exists almost everywhere and is decreasing. It can also be shown that $\cM_{\mathrm{RS}}(\delta)$ is decreasing and, thus, continuous almost everywhere. If the minimizer is not unique, then $\cM_{\mathrm{RS}}(\delta)$ may have jump discontinuities and may not be uniquely defined at those points; see Figure~\ref{fig:fp_curve}.
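As a sanity check on Definition~\ref{def:rs_limits}, the minimization can be carried out numerically. The sketch below assumes a standard Gaussian prior, for which $I_X(s) = \frac{1}{2}\log(1+s)$ and $\mathsf{mmse}_X(s) = 1/(1+s)$ are available in closed form, so the numerical minimizer of $R(\delta, z)$ can be compared against the explicit solution of $z = \mathsf{mmse}_X(\delta/(1+z))$.

```python
import numpy as np

def R(delta, z):
    # R(delta, z) specialized to a standard Gaussian prior,
    # for which I_X(s) = 0.5 * log(1 + s) in closed form.
    return 0.5 * np.log(1.0 + delta / (1.0 + z)) \
        + 0.5 * delta * (np.log(1.0 + z) - z / (1.0 + z))

delta = 1.0
zs = np.linspace(0.0, 5.0, 200001)      # fine grid over z >= 0
z_hat = zs[np.argmin(R(delta, zs))]     # numerical replica-MMSE

# For a Gaussian prior, mmse_X(s) = 1/(1+s), so the stationarity
# condition z = mmse_X(delta/(1+z)) reduces to z**2 + delta*z - 1 = 0.
z_star = (-delta + np.sqrt(delta**2 + 4.0)) / 2.0
print(z_hat, z_star)   # both ≈ 0.618 for delta = 1
```

Here the minimizer is unique, consistent with the fact that a Gaussian prior exhibits no jump discontinuity.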
\begin{figure*}[!ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0) {\includegraphics{mmse_fig_A_plain.pdf}}; \draw [<-,color = {rgb:red,0;green,0.447;blue,0.741}] (3.9, 4.2) -- (4.4,4.2); \node[ align =center, text = {rgb:red,0;green,0.447;blue,0.741}] at (5.6,4.2) {\small fixed-point \\ \small information curve}; \draw [<-,color = {rgb:red,0.85;green,0.325;blue,0.098}] (2.3, 2.8) -- (2.8,2.8); \node[ align =center, text = {rgb:red,0.85;green,0.325;blue,0.098}] at (4.3,2.8) { \small $\frac{1}{2}\log(1 +\cM_{\mathrm{RS}}(\delta))$}; \end{tikzpicture} \hspace{.5cm} \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0) {\includegraphics{mmse_fig_B_alt.pdf}}; \draw [<-,color = {rgb:red,0;green,0.447;blue,0.741}] (3.8, 4.3) -- (4.3,4.3); \node[ align =center, text = {rgb:red,0;green,0.447;blue,0.741}] at (5.5,4.3) {\small fixed-point \\ \small information curve}; \draw [<-,color = {rgb:red,0.85;green,0.325;blue,0.098}] (2.65, 3.3) -- (3.15,3.3); \node[ align =center, text = {rgb:red,0.85;green,0.325;blue,0.098}] at (4.7,3.2) {\small $\frac{1}{2}\log(1 +\cM_{\mathrm{RS}}(\delta))$}; \draw [<-,color = {rgb:red,0.929;green,0.694;blue,0.125}] (2.9 , 2) -- (3.4,2); \node[align =center, text ={rgb:red,0.929;green,0.694;blue,0.125}] at (5.1,2) {\small $\frac{1}{2}\log(1 +g(\delta))$ for a \\ {\small different function $g \in \mathcal{G}$}}; \end{tikzpicture} \caption{\label{fig:fp_curve} Plot of the replica-MMSE as a function of the measurement ratio $\delta$. The signal distribution is given by a three-component Gaussian mixture of the form $P_X = 0.4 \mathcal{N}(0, 5) + \alpha \mathcal{N}(40,5) + (0.6-\alpha) \mathcal{N}(220,5)$. In the left panel, $\alpha = 0.1$ and the distribution satisfies the single-crossing property. In the right panel, $\alpha = 0.3$ and the distribution does not satisfy the single-crossing property. 
The fixed-point information curve (dashed blue line) is given by $\frac{1}{2} \log(1 + z)$ where $z$ satisfies the fixed-point equation $R_z(\delta,z) = 0$.} \vspace*{-.25cm} \end{figure*} \vspace{-0.5mm} \subsection{Statement of Main Result} In order to state our results, we need some further definitions. Let $R_z(\delta,z) = \frac{\partial}{\partial z} R(\delta, z)$ denote the partial derivative of $R(\delta, z)$ with respect to $z$. The \emph{fixed-point curve} FP is the set of $(\delta, z)$ pairs where $z$ is a stationary point of $R(\delta, z)$, i.e., \begin{align} \mathrm{FP} =\left \{ (\delta, z) \in \mathbb{R}_+^{2} \, : \, R_z(\delta, z) = 0 \right\}. \notag \end{align} To emphasize the connection with mutual information, we often plot this curve using the change of variables $z\mapsto \frac{1}{2}\log(1+z)$. The resulting curve, $(\delta,\frac{1}{2}\log(1+z))$, is called the \emph{fixed-point information curve}; see Figure~\ref{fig:fp_curve}. \begin{definition}[Single-Crossing Property] Informally, a signal distribution $P_X$ has the single-crossing property if the replica-MMSE crosses the fixed-point curve FP at most once. A formal definition of the single-crossing property is given in Section~\ref{sec:single_crossing}. \end{definition} \begin{assumption}[IID Gaussian Measurements] The rows of the measurement matrix $\{A_k\}$ are independent Gaussian vectors with mean zero and covariance $n^{-1} I_n$. Furthermore, the noise $\{W_k\}$ is i.i.d.\ Gaussian with mean zero and variance one. \end{assumption} \begin{assumption}[IID Signal Entries] The signal entries $\{X_i\}$ are independent copies of a random variable $X \sim P_X$ with bounded fourth moment $\ex{ X^4} \le B$. \end{assumption} \begin{assumption}[Single-Crossing Property] The signal distribution $P_X$ satisfies the single-crossing property. 
\end{assumption} \begin{theorem} \label{thm:limits} Under Assumptions 1-3, we have \begin{enumerate}[(i)] \item The sequence of MI functions $\mathcal{I}_n (\delta)$ converges to the replica prediction. In other words, for all $\delta \in \mathbb{R}_{+}$, \begin{align} \lim_{n \goto \infty} \mathcal{I}_n (\delta ) = \cI_{\mathrm{RS}}(\delta). \notag \end{align} \item The sequence of MMSE functions $\mathcal{M}_n (\delta)$ converges almost everywhere to the replica prediction. In other words, for all continuity points of $\cM_{\mathrm{RS}}(\delta)$, \begin{align} \lim_{n \goto \infty} \mathcal{M}_n(\delta )= \cM_{\mathrm{RS}}(\delta). \notag \end{align} \end{enumerate} \end{theorem} \begin{remark} The primary contribution of Theorem~\ref{thm:limits} is for the case where $\cM_{\mathrm{RS}}(\delta)$ has a discontinuity. This occurs, for example, in applications such as compressed sensing with sparse priors and CDMA with finite-alphabet signaling. For the special case where $\cM_{\mathrm{RS}}(\delta)$ is continuous, the validity of the replica prediction can also be established by combining the AMP analysis with the I-MMSE relationship~\cite{guo:2006, donoho:2009a,donoho:2011, bayati:2011,bayati:2012a}. \end{remark} \begin{remark} For a given signal distribution $P_X$, the single-crossing property can be verified by numerically evaluating the replica-MMSE and checking whether it crosses the fixed-point curve more than once. \end{remark} \subsection{Related Work} The replica method was developed originally to study mean-field approximations in spin glasses~\cite{edwards:1975,mezard:2009}. It was first applied to linear estimation problems in the context of CDMA wireless communication \cite{tanaka:2002, muller:2003, guo:2005}, with subsequent work focusing on compressed sensing directly \cite{guo:2009, kabashima:2009, rangan:2012, reeves:2012,reeves:2012a, tulino:2013}.
Within the context of compressed sensing, the results of the replica method have been proven rigorously in a number of settings. One example is given by message passing on matrices with special structure, such as sparsity \cite{montanari:2006, guo:2006, baron:2010} or spatial coupling~\cite{kudekar:2010, krzakala:2012, donoho:2013}. However, in the case of i.i.d.\ matrices, the results are limited to signal distributions with a unique fixed point \cite{donoho:2009a, bayati:2011} (e.g., Gaussian inputs~\cite{verdu:1999,tse:1999}). For the special case of i.i.d.\ matrices with binary inputs, it has also been shown that the replica prediction provides an upper bound for the asymptotic mutual information \cite{korada:2010}. Bounds on the locations of discontinuities in the MMSE with sparse priors have also been obtained by analyzing the problem of approximate support recovery~\cite{reeves:2012, reeves:2012a, tulino:2013}. Recent work by Huleihel and Merhav \cite{huleihel:2016} addresses the validity of the replica MMSE directly in the case of Gaussian mixture models, using tools from statistical physics and random matrix theory \cite{merhav:2010,merhav:2011a}. \subsection{Notation} We use $C$ to denote an absolute constant and $C_\theta$ to denote a number that depends on a parameter $\theta$. In all cases, the numbers $C$ and $C_\theta$ are positive and finite, although their values change from place to place. The Euclidean norm is denoted by $\|\cdot \|$. The positive part of a real number $x$ is denoted by $(x)_+ \triangleq \max(x,0)$. The nonnegative real line $[0,\infty)$ is denoted by $\mathbb{R}_+$ and the positive integers $\{1,2,\cdots\}$ are denoted by $\mathbb{N}$. For each $n \in \mathbb{N}$ the set $\{1,2,\cdots , n\}$ is denoted by $[n]$. The joint distribution of the random variables $X,Y$ is denoted by $P_{X,Y}$ and the marginal distributions are denoted by $P_X$ and $P_Y$.
The conditional distribution of $X$ given $Y= y$ is denoted by $P_{X\mid Y=y}$ and the conditional distribution of $X$ corresponding to a random realization of $Y$ is denoted by $P_{X| Y}$. The expectation over a single random variable is denoted by $\ensuremath{\mathbb{E}}_{X}$. For example, this implies that $\ensuremath{\mathbb{E}} \left[ f(X,Y) | Y \right] = \ensuremath{\mathbb{E}}_X \left[ f(X,Y) \right]$. Using this notation, the mutual information between $X$ and $Y$ can be expressed in terms of the expected Kullback-Leibler divergence as follows: \begin{align} I(X;Y) &= \DKL{P_{X,Y}}{P_X \times P_Y} \notag \\ & = \ensuremath{\mathbb{E}}\left[ \DKL{P_{X\mid Y}}{ P_X} \right] \notag\\ & = \ensuremath{\mathbb{E}}\left[ \DKL{P_{Y\mid X}}{ P_Y} \right],\notag \end{align} where the expectation in the second line is with respect to $Y$ and the expectation in the third line is with respect to $X$. The conditional variance of a random variable $X$ given $Y$ is denoted by \begin{align} \var(X \! \mid \! Y) &= \ex{ ( X - \ex{X \! \mid \! Y})^2\; \middle | \; Y}, \notag \end{align} and the conditional covariance matrix of a random vector $X$ given $Y$ is denoted by \begin{align} \cov(X \! \mid \! Y) &= \ex{ ( X - \ex{X \! \mid \! Y}) ( X - \ex{X \! \mid \! Y})^T \; \middle | \; Y}. \notag \end{align} The conditional variance and conditional covariance matrix are random because they are functions of the random conditional distribution $P_{X\mid Y}$. The minimum mean-square error (MMSE) of $X$ given $Y$ is defined to be the expected squared difference between $X$ and its conditional expectation and is denoted by \begin{align} \mathsf{mmse}(X \! \mid \! Y) & = \ex{ \| X - \ex{X \! \mid \! Y}\|^2}. \notag \end{align} Since the expectation is taken with respect to both $X$ and $Y$, the MMSE is a deterministic function of the joint distribution $P_{X,Y}$.
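As a small numerical illustration of these definitions (an aside, not part of the formal development), consider the scalar Gaussian channel $Y = \sqrt{s}\,X + N$ with $X \sim \mathcal{N}(0,1)$. For this jointly Gaussian pair, $\ensuremath{\mathbb{E}}[X \mid Y] = \sqrt{s}\,Y/(1+s)$ and $\mathsf{mmse}(X \mid Y) = 1/(1+s)$ in closed form, which a Monte Carlo estimate reproduces.

```python
import numpy as np

rng = np.random.default_rng(1)
s = 2.0                                   # signal-to-noise ratio
x = rng.normal(size=1_000_000)            # X ~ N(0, 1)
y = np.sqrt(s) * x + rng.normal(size=x.size)

# Jointly Gaussian case: E[X|Y] = sqrt(s) * Y / (1 + s),
# so mmse(X|Y) = 1/(1+s) exactly; the sample average should agree.
x_hat = np.sqrt(s) * y / (1.0 + s)
mmse_mc = np.mean((x - x_hat) ** 2)
print(mmse_mc, 1.0 / (1.0 + s))
```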
The MMSE can also be expressed in terms of the expected trace of the conditional covariance matrix: \begin{align} \mathsf{mmse}(X \! \mid \! Y) & = \ensuremath{\mathbb{E}}\left[ \gtr\left( \cov(X \! \mid \! Y)\right) \right]. \notag \end{align} \section{Overview of Main Steps in Proof} We begin with some additional definitions. The finite-length MI sequence $I : \mathbb{N}^2 \to \mathbb{R}_+$ and MMSE sequence $M : \mathbb{N}^2 \to \mathbb{R}_+$ are defined according to \begin{align} I_{m,n} &\triangleq I(X^n; Y^m \! \mid \! A^m), \notag\\ M_{m,n} &\triangleq \frac{1}{n} \mathsf{mmse}(X^n \! \mid \! Y^m, A^m ),\notag \end{align} where the relationship between the $n$-dimensional signal $X^n$, $m$-dimensional measurements $Y^m$, and $m \times n$ measurement matrix $A^m$ is given by the measurement model \eqref{eq:model}. Furthermore, the first and second order MI difference sequences are defined according to \begin{align} I'_{m,n} &\triangleq I_{m+1,n} - I_{m,n} \notag\\ I''_{m,n} &\triangleq I'_{m+1,n} - I'_{m,n}. \notag \end{align} To simplify notation we will often drop the explicit dependence on the signal length $n$ and simply write $I_m$, $M_m$, $I'_m$, and $I''_m$. \subsection{Properties of Finite-Length Sequences} At its foundation, our proof relies on certain relationships between the MI and MMSE sequences. Observe that by the chain rule for mutual information, \begin{align} \underbrace{I(X^n; Y^m \! \mid \! A^m)}_{I_{m,n}} & = \sum_{k=0}^{m-1} \underbrace{I(X^n ; Y_{k+1} \! \mid \! Y^k, A^{k+1})}_{I'_{k,n}}. \notag \end{align} Here, the conditioning in the mutual information on the right-hand side depends only on $A^{k+1}$ because the measurement vectors are generated independently of the signal. The above decomposition shows that the MI difference is given by $I'_{m,n} = I(X^n ; Y_{m+1} \! \mid \! Y^m, A^{m+1})$.
In other words, it is the mutual information between the signal and a new measurement $Y_{m+1}$, conditioned on the previous data $(Y^m, A^{m+1})$. One of the key steps in our proof is to show that the MI difference and MMSE satisfy the following relationship almost everywhere \begin{align} I'_{m,n} \approx \frac{1}{2} \log\left(1 + M_{m,n}\right). \label{eq:Ip_MMSE_inq} \end{align} Our approach relies on the fact that the gap between the right and left sides of \eqref{eq:Ip_MMSE_inq} can be viewed as a measure of the non-Gaussianness of the conditional distribution of the new measurement. By relating this non-Gaussianness to certain properties of the posterior distribution, we are able to show that \eqref{eq:Ip_MMSE_inq} is tight whenever the second order MI difference sequence is small. The details of these steps are given in Section~\ref{proof:thm:I_MMSE_relationship}. Another important relationship that is used in our proof is the following fixed-point identity for the MMSE \begin{align} M_{m,n} \approx \mathsf{mmse}_X\left( \frac{m/n}{1 + M_{m,n}} \right) . \label{eq:M_fixed_point} \end{align} In words, this says that the MMSE of the compressed sensing problem is approximately equal to that of a scalar problem whose signal-to-noise ratio depends on both the measurement ratio and the MMSE itself. In Section~\ref{proof:thm:MMSE_fixed_point} it is shown that the gap in \eqref{eq:M_fixed_point} can be bounded in terms of the gap in \eqref{eq:Ip_MMSE_inq}. \subsection{Asymptotic Constraints} The previous subsection focused on relationships between the finite-length MI and MMSE sequences.
To characterize these relationships in the asymptotic setting of interest, the finite-length sequences are extended to functions of a continuous parameter $\delta \in \mathbb{R}_+$ according to \begin{align} \mathcal{I}'_n(\delta) & = I'_{\lfloor \delta n \rfloor, n} \notag\\ \mathcal{I}_n(\delta) & = \int_0^\delta \mathcal{I}'_n(\gamma)\, \mathrm{d} \gamma \notag\\ \mathcal{M}_n(\delta) &= M_{\lfloor \delta n \rfloor,n}. \notag \end{align} This choice of interpolation has the convenient property that the MI function $\mathcal{I}_n$ is continuous and differentiable almost everywhere. Furthermore, by construction, $\mathcal{I}_n$ corresponds to the normalized mutual information and satisfies $\mathcal{I}_n\left( \tfrac{m}{n} \right) = \frac{1}{n} I_{m, n}$ for all integers $m$ and $n$. With this notation in hand, we are now ready to state two of the main theorems in this paper. These theorems provide precise bounds on the relationships given in \eqref{eq:Ip_MMSE_inq} and \eqref{eq:M_fixed_point}. The proofs are given in Section~\ref{proof:thm:I_MMSE_relationship} and Section~\ref{proof:thm:MMSE_fixed_point}. \begin{theorem}\label{thm:I_MMSE_relationship} Under Assumptions 1 and 2, the MI and MMSE functions satisfy \begin{align} \int_{0}^{\delta} \left| \mathcal{I}_{n}'(\gamma) - \frac{1}{2} \log\left(1 + \mathcal{M}_{n}(\gamma) \right) \right| \mathrm{d} \gamma & \le C_{B, \delta} \cdot n^{-r}, \notag \end{align} for all $n \in \mathbb{N}$ and $\delta \in \mathbb{R}_+$ where $r \in (0,1)$ is a universal constant. \end{theorem} \begin{theorem}\label{thm:MMSE_fixed_point} Under Assumptions 1 and 2, the MMSE function satisfies \begin{align} \int_{0}^{\delta} \left| \mathcal{M}_{n}(\gamma) - \mathsf{mmse}_X\left( \frac{\gamma }{1 + \mathcal{M}_n(\gamma) } \right) \right| \mathrm{d} \gamma& \le C_{B,\delta} \cdot n^{-r}, \notag \end{align} for all $n \in \mathbb{N}$ and $\delta \in \mathbb{R}_+$ where $r \in (0,1)$ is a universal constant. 
\end{theorem} The bounds given in Theorems~\ref{thm:I_MMSE_relationship} and \ref{thm:MMSE_fixed_point} are with respect to the integrals of $\mathcal{I}'_n$ and $\mathcal{M}_n$, and thus prove convergence in $L_1$ over bounded intervals. This is sufficient to show that the relationships hold almost everywhere in the limit. Importantly, though, these bounds still allow for the possibility that the relationships do not hold at countably many points, and thus allow for the possibility of phase transitions. For distributions that have a phase transition, our proof technique requires a boundary condition for the mutual information. This boundary condition is used to determine the location of the phase transition. The next result shows that the replica-MI is equal to the MI function in the limit as the measurement rate increases to infinity, and thus the replica-MI can be used as a boundary condition. The proof is given in Section~\ref{proof:thm:I_m_boundary}. \begin{theorem}\label{thm:I_m_boundary} Under Assumptions 1 and 2, the MI function satisfies \begin{align} \left | \mathcal{I}_{n}(\delta) - \cI_{\mathrm{RS}}\left( \delta \right) \right | \le C \cdot \delta^{-\frac{1}{2}}, \notag \end{align} for all $n \in \mathbb{N}$ and $\delta \ge 4$. \end{theorem} At first glance, it may seem surprising that Theorem~\ref{thm:I_m_boundary} holds for all signal lengths. However, this bound is tight in the regime where the number of measurements is much larger than the number of signal entries. From the rotational invariance of the Gaussian noise and monotonicity of the mutual information with respect to the signal-to-noise ratio, one can obtain the sandwiching relation \begin{align} n\, \ex{ I_X\left( \sigma^2_\text{min}(A^m) \right)} \le I_{m,n} \le n\, \ex{ I_X\left( \sigma^2_\text{max}(A^m) \right)},\notag \end{align} where the upper and lower bounds depend only on the minimum and maximum singular values of the random $m \times n$ measurement matrix.
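The concentration of this singular-value ratio can be seen in a quick simulation (illustrative only, not part of the proof): for i.i.d.\ $\mathcal{N}(0, 1/n)$ entries, the extreme singular values are close to $(\sqrt{m} \pm \sqrt{n})/\sqrt{n}$, so the ratio approaches one as $m/n$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Ratio sigma_min/sigma_max for m x n matrices with i.i.d. N(0, 1/n)
# entries; it should increase toward 1 as m grows with n fixed.
ratios = []
for m in (100, 1000, 10000):
    A = rng.normal(scale=1.0 / np.sqrt(n), size=(m, n))
    svals = np.linalg.svd(A, compute_uv=False)   # sorted descending
    ratios.append(svals[-1] / svals[0])
print(ratios)
```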
For fixed $n$, it is well known that the ratio of these singular values converges to one almost surely in the limit as $m$ increases to infinity. Our proof of Theorem~\ref{thm:I_m_boundary} uses a more refined analysis, based on the QR decomposition, but the basic idea is the same. \subsection{Uniqueness of Limit}\label{sec:properties_replica_prediction} The final step of the proof is to make the connection between the asymptotic constraints on the MI and MMSE described in the previous subsection and the replica-MI and replica-MMSE functions given in Definition~\ref{def:rs_limits}. We begin by describing two functional properties of the replica limits. The first property follows from the fact that the replica-MMSE is a global minimizer of the function $R(\delta, z)$ with respect to its second argument. Since $R(\delta,z)$ is differentiable for all $\delta,z \in \mathbb{R}_+$, any minimizer $z^*$ of $R(\delta, z)$ must satisfy the equation $R_z (\delta, z^*) = 0$, where $R_z (\delta,z) = \frac{\partial}{\partial z} R(\delta,z)$. Using the I-MMSE relationship \cite{guo:2005a}, it can be shown that, for each $\delta \in \mathbb{R}_+$, the replica-MMSE $\cM_{\mathrm{RS}}(\delta)$ satisfies the fixed-point equation \begin{align} \cM_{\mathrm{RS}}(\delta) = \mathsf{mmse}_X\left( \frac{\delta}{1 + \cM_{\mathrm{RS}}(\delta) } \right) . \label{eq:fixed_point_replica} \end{align} Note that if this fixed-point equation has a unique solution, then it provides an equivalent definition of the replica-MMSE. However, in the presence of multiple solutions, only the solutions that correspond to global minimizers of $R(\delta, z)$ are valid. The second property relates the derivative of the replica-MI to the replica-MMSE.
Specifically, as a consequence of the envelope theorem~\cite{milgrom:2002} and \eqref{eq:fixed_point_replica}, it can be shown that the derivative of the replica-MI satisfies \begin{align} \cI_{\mathrm{RS}}'(\delta) = \frac{1}{2} \log\left( 1 + \cM_{\mathrm{RS}}(\delta) \right), \label{eq:derivative_replica} \end{align} almost everywhere. These properties show that the replica limits $\cI_{\mathrm{RS}}$ and $\cM_{\mathrm{RS}}$ satisfy the relationships given in Theorems~\ref{thm:I_MMSE_relationship} and \ref{thm:MMSE_fixed_point} with equality. In order to complete the proof we need to show that, in conjunction with a boundary condition imposed by the large measurement rate limit, the constraints \eqref{eq:fixed_point_replica} and \eqref{eq:derivative_replica} provide an equivalent characterization of the replica limits. \begin{definition} \label{lem:replica_mmse_set} For a given signal distribution $P_X$, let $\mathcal{V}$ be the subset of non-increasing functions from $\mathbb{R}_{+} \to \mathbb{R}_{+}$ such that all $g\in \mathcal{V}$ satisfy the fixed-point condition \begin{align} g(\delta) = \mathsf{mmse}_X \left(\frac{\delta}{1+g(\delta)} \right) \label{eq:fixed_point_g}. \end{align} Furthermore, let $\mathcal{G} \subseteq \mathcal{V}$ be the subset such that all $g \in \mathcal{G}$ also satisfy the large measurement rate boundary condition \begin{align} \lim_{\delta \to \infty} \left| \int_{0}^{\delta}\frac{1}{2} \log \left(1 + g(\gamma) \right) \mathrm{d} \gamma - \cI_{\mathrm{RS}}(\delta) \right| = 0. \notag \end{align} \end{definition} In Section~\ref{sec:single_crossing}, it is shown that if the signal distribution $P_{X}$ has the single-crossing property, then $\cM_{\mathrm{RS}}(\delta)$ has at most one discontinuity and all $g\in \mathcal{G}$ satisfy $g(\delta)=\cM_{\mathrm{RS}}(\delta)$ almost everywhere.
In other words, the single-crossing property provides a sufficient condition under which the replica limits can be obtained uniquely from \eqref{eq:derivative_replica} and \eqref{eq:fixed_point_g}. A graphical illustration is provided in Figure~\ref{fig:fp_curve}. \section{Properties of MI and MMSE}\label{sec:MI_MMSE_bounds} \subsection{Single-Letter Functions} The single-letter MI and MMSE functions corresponding to a real valued input distribution $P_X$ under additive Gaussian noise are defined by \begin{align} I_X(s) &\triangleq I(X; \sqrt{s} X + N) \notag \\ \mathsf{mmse}_X(s) & \triangleq \mathsf{mmse}(X\mid \sqrt{s} X + N), \notag \end{align} where $X \sim P_X$ and $N \sim \mathcal{N}(0,1)$ are independent and $s \in \mathbb{R}_+$ parametrizes the signal-to-noise ratio. Many properties of these functions have been studied in the literature \cite{guo:2005a,guo:2011,wu:2011,wu:2012a}. The function $I_X(s)$ is concave and non-decreasing with $I_X(0) = 0$. If $I_X(s)$ is finite for some $s > 0$ then it is finite for all $s \in \mathbb{R}_+$ \cite[Theorem~6]{wu:2012a}. The MMSE function is non-increasing with $\mathsf{mmse}_X(0) = \var(X)$. Both $I_X(s)$ and $\mathsf{mmse}_X(s)$ are infinitely differentiable on $(0,\infty)$ \cite[Proposition~7]{guo:2011}. The so-called I-MMSE relationship \cite{guo:2005a} states that \begin{align} \frac{\mathrm{d}}{ \mathrm{d} s} I_X(s) = \frac{1}{2} \mathsf{mmse}_X(s). \label{eq:I-MMSE} \end{align} This relationship was originally stated for input distributions with finite second moments \cite[Theorem~1]{guo:2005a}, and was later shown to hold for any input distributions with finite mutual information \cite[Theorem~6]{wu:2012a}. This relationship can also be viewed as a consequence of De-Bruijn's identity \cite{stam:1959}. Furthermore, it is well known that under a second moment constraint, the MI and MMSE are maximized when the input distribution is Gaussian. 
This yields the following inequalities \begin{align} I_X(s) &\le \frac{1}{2} \log(1 + s \var(X)) \label{eq:IX_Gbound}\\ \mathsf{mmse}_X(s) &\le \frac{ \var(X)}{ 1 + s \var(X)}. \label{eq:mmseX_Gbound} \end{align} More generally, the MMSE function satisfies the upper bound $\mathsf{mmse}_X(s) \le 1/s$, for every input distribution $P_X$ and $s > 0$ \cite[Proposition~4]{guo:2011}. Combining this inequality with \eqref{eq:I-MMSE} leads to \begin{align} I_X(t ) - I_X(s) \le \frac{1}{2} \log\left( \frac{t}{s}\right), \quad 0 < s \le t, \label{eq:IX_smooth} \end{align} which holds for any input distribution with finite mutual information. Finally, the derivative of the MMSE with respect to $s$ is given by the second moment of the conditional variance \cite[Proposition~9]{guo:2011}, \begin{align} \frac{\mathrm{d}}{\mathrm{d} s} \mathsf{mmse}_X(s) = - \ex{ \left(\var(X \! \mid \! Y) \right)^2}. \label{eq:mmseXp} \end{align} This relationship shows that the slope of $\mathsf{mmse}_X(s)$ at $s=0$ is finite if and only if $X$ has bounded fourth moment. It also leads to the following result, which is proved in Appendix~\ref{proof:lem:mmseX_bound}. \begin{lemma}\label{lem:mmseX_bound} The single-letter MMSE function satisfies the following bounds: \begin{enumerate}[(i)] \item For any input distribution $P_X$ with finite fourth moment and $s,t \in \mathbb{R}_+$, \begin{align} \left | \mathsf{mmse}_X\left(s \right ) - \mathsf{mmse}_X\left(t \right) \right| &\le 4 \ex{X^4} |s - t|. \label{eq:mmseX_smooth_a} \end{align} \item For every input distribution $P_X$ and $s,t \in (0,\infty)$, \begin{align} \left | \mathsf{mmse}_X\left(s \right ) - \mathsf{mmse}_X\left(t \right) \right| &\le 12\, \left | \frac{1}{s} - \frac{1}{t} \right| .
\label{eq:mmseX_smooth_b} \end{align} \end{enumerate} \end{lemma} \subsection{Multivariate MI and MMSE}\label{sec:multivariate_MI_MMSE} From the chain rule for mutual information, we see that the MI difference sequence is given by \begin{align} I'_{m,n} & = I(X^n ; Y_{m+1} \! \mid \! Y^m, A^{m+1}). \label{eq:Ip_alt} \end{align} By the non-negativity of mutual information, this establishes that the MI sequence is non-decreasing in $m$. Alternatively, by the data-processing inequality for MMSE \cite[Proposition 4]{rioul:2011}, we also see that $M_{m,n}$ is non-increasing in $m$ and satisfies \begin{align} M_{m,n} \le \var(X). \notag \end{align} The next result shows that the second order MI difference can also be expressed in terms of the mutual information. The proof is given in Appendix~\ref{proof:lem:Ip_and_Ipp}. \begin{lemma}\label{lem:Ip_and_Ipp} Under Assumption 1, the second order MI difference sequence satisfies \begin{align} I''_{m,n} & = - I(Y_{m+1} ; Y_{m+2} \mid Y^m, A^{m+2}). \label{eq:Ipp_alt} \end{align} \end{lemma} One consequence of Lemma~\ref{lem:Ip_and_Ipp} is that the first order MI difference sequence is non-increasing in $m$, and thus \begin{align} I'_{m,n} \le I'_{0,n} = I_{1,n}. \label{eq:Ip_bound} \end{align} This inequality plays an important role later on in our proof, when we show that certain terms of interest are bounded by the magnitude of the second order MI difference. The next result provides non-asymptotic bounds in terms of the single-letter MI and MMSE functions corresponding to the signal distribution $P_X$.
The proof is given in Appendix~\ref{proof:lem:Im_bounds}. \begin{lemma}\label{lem:Im_bounds} Under Assumptions 1 and 2, the MI and MMSE sequences satisfy \begin{align} & \sum_{k=1}^{\min(n,m)} \hspace{-.2cm} \ex{ I_X\left( \tfrac{1}{n} \chi^2_{m-k+1} \right)} \le I_{m,n} \le n \, \ex{ I_X\left( \tfrac{1}{n} \chi^2_{m} \right)} \label{eq:Im_bounds} \\ & \ex{ \mathsf{mmse}_X\left( \tfrac{1}{n} \chi^2_m \right)} \le M_{m,n} \le \ex{ \mathsf{mmse}_X\left( \tfrac{1}{n} \chi^2_{m - n + 1} \right) }, \label{eq:Mm_bounds} \end{align} where $\chi_k^2$ denotes a chi-squared random variable with $k$ degrees of freedom and the upper bound on $M_{m,n}$ is valid for all $m \ge n$. \end{lemma} \begin{remark} The proof of Lemma~\ref{lem:Im_bounds} does not require the assumption that the signal distribution has bounded fourth moment. In fact, \eqref{eq:Im_bounds} holds for any signal distribution with finite mutual information and \eqref{eq:Mm_bounds} holds for any signal distribution. \end{remark} \begin{remark} The upper bound in \eqref{eq:Im_bounds} and lower bound in \eqref{eq:Mm_bounds} are not new and are special cases of results given in \cite{tulino:2013}. \end{remark} Combining Lemma~\ref{lem:Im_bounds} with Inequalities \eqref{eq:IX_Gbound} and \eqref{eq:mmseX_Gbound} leads to upper bounds on the MI and MMSE that depend only on the variance of the signal distribution. Alternatively, combining Lemma~\ref{lem:Im_bounds} with the smoothness of the single-letter functions given in \eqref{eq:IX_smooth} and \eqref{eq:mmseX_smooth_b} leads to the following characterization, which is tight whenever $m$ is much larger than $n$. The proof is given in Appendix~\ref{proof:lem:Im_bounds_gap}. \begin{lemma}\label{lem:Im_bounds_gap} Under Assumptions 1 and 2, the MI and MMSE sequences satisfy, for all $m \ge n + 2$, \begin{align} \left| \tfrac{1}{n} I_{m,n} - I_X(\tfrac{m}{n}) \right| & \le \tfrac{1}{2} \left[ \tfrac{n + 1}{m - n - 1} + \sqrt{ \tfrac{2}{m\!-\!
2 } } \right] \label{eq:Im_bounds_gap}\\ \left| M_{m,n} - \mathsf{mmse}_X\left( \tfrac{m}{n} \right) \right| & \le \tfrac{ 12\, n}{m} \left[ \tfrac{ n+1}{m - n - 1} + \sqrt{ \tfrac{2}{m -2}} \right]. \label{eq:Mm_bounds_gap} \end{align} \end{lemma} For any fixed $n$, the right-hand sides of \eqref{eq:Im_bounds_gap} and \eqref{eq:Mm_bounds_gap} converge to zero as $m$ increases to infinity. Consequently, the large $m$ behavior of the MI sequence is given by \begin{align} \lim_{m \to \infty} I_{m,n} & = \begin{cases} H(X^n) , & \text{if $P_X$ has finite entropy}\\ + \infty, &\text{otherwise}. \end{cases} \notag \end{align} \subsection{Properties of Replica Prediction} Using the I-MMSE relationship, the partial derivative of $R(\delta, z)$ with respect to its second argument is given by \begin{align} R_z(\delta, z) & = \frac{\delta}{ 2 (1 + z)^2 } \left[ z - \mathsf{mmse}_X\left( \frac{ \delta}{ 1 + z} \right) \right]. \label{eq:R_z} \end{align} From this expression, we see that $R_z(\delta, z) = 0$ is equivalent to the fixed-point condition \begin{align} z = \mathsf{mmse}_X\left( \frac{ \delta}{1 + z} \right) . \notag \end{align} Furthermore, since $\cI_{\mathrm{RS}}(\delta)$ is concave, it is differentiable almost everywhere. For all $\delta$ where $\cI_{\mathrm{RS}}'(\delta)$ exists, it follows from the envelope theorem~\cite{milgrom:2002} that $\cI_{\mathrm{RS}}'(\delta) = R_{\delta}\left(\delta, \cM_{\mathrm{RS}}(\delta)\right)$, where $R_\delta(\delta, z)$ is the partial derivative of $R(\delta, z)$ with respect to its first argument. Direct computation yields \begin{align} R_\delta(\delta, z) & = \frac{1}{2} \log(1 \!+\! z) + \frac{1}{ 2 (1 \!+\! z) } \left[ \mathsf{mmse}_X\left( \frac{ \delta}{ 1 \!+\! z} \right) - z \right].
\notag \end{align} Finally, noting that the second term on the right-hand side is equal to zero whenever $z = \cM_{\mathrm{RS}}(\delta)$ leads to \begin{align} \cI_{\mathrm{RS}}'(\delta) & = \frac{1}{2} \log\left(1 + \cM_{\mathrm{RS}}(\delta) \right). \notag \end{align} The proof of the next result is given in Appendix~\ref{proof:lem:IR_boundary}. \begin{lemma} \label{lem:IR_boundary} The Replica-MI and Replica-MMSE functions satisfy, for all $\delta \ge 1$, \begin{align} I_X(\delta-1) &\le \cI_{\mathrm{RS}}(\delta) \le I_X(\delta), \label{eq:IRS_bounds}\\ \mathsf{mmse}_X(\delta) &\le \cM_{\mathrm{RS}}(\delta) \le \mathsf{mmse}_X(\delta-1). \label{eq:MRS_bounds} \end{align} \end{lemma} It is interesting to note the parallels between the bounds on the MI and MMSE sequences in Lemma~\ref{lem:Im_bounds} and the bounds on the replica functions in Lemma~\ref{lem:IR_boundary}. Combining Lemma~\ref{lem:IR_boundary} with the smoothness of the single-letter functions given in \eqref{eq:IX_smooth} and \eqref{eq:mmseX_smooth_b} leads to \begin{align} \left| \cI_{\mathrm{RS}}(\delta) - I_X(\delta) \right| &\le \frac{ 1} {2(\delta -1)} \notag\\ \left| \cM_{\mathrm{RS}}(\delta) - \mathsf{mmse}_X(\delta) \right| &\le \frac{1}{ \delta( \delta -1)} .\notag \end{align} \subsection{Proof of Theorem~\ref{thm:I_m_boundary}}\label{proof:thm:I_m_boundary} This proof follows from combining Lemmas~\ref{lem:Im_bounds_gap} and \ref{lem:IR_boundary}. Fix any $n \in \mathbb{N}$ and $\delta \ge 4$ and let $m = \lfloor \delta n \rfloor$ and $\lambda = m + 1 - \delta n$.
The MI function obeys the upper bound \begin{align} \mathcal{I}_n(\delta) &= \frac{1}{n} \left[ \lambda I_{m,n} + (1-\lambda) I_{m+1,n} \right] \notag\\ & \overset{(a)}{\le} \lambda \ex{ I_X\left(\tfrac{1}{n} \chi^2_m \right)} + (1-\lambda) \ex{ I_X\left(\tfrac{1}{n} \chi^2_{m+1} \right)} \notag\\ & \overset{(b)}{\le} I_X(\delta), \label{eq:I_m_boundary_b} \end{align} where: (a) follows from \eqref{eq:Im_bounds}; and (b) follows from Jensen's inequality and the concavity of $I_X(s)$. The MI function also obeys the lower bound \begin{align} \mathcal{I}_n(\delta) &\overset{(a)}{\ge} \frac{1}{n} I_{m,n} \notag\\ & \overset{(b)}{\ge } I_X\left(\delta \right) - \tfrac{1}{2 (\delta - 1) } - \tfrac{1}{2} \left[ \tfrac{n+1}{(\delta -1) n - 2} + \sqrt{ \tfrac{2}{\delta-3} } \right] ,\label{eq:I_m_boundary_c} \end{align} where (a) follows from the fact that $I_{m,n} $ is non-decreasing in $m$ and (b) follows from \eqref{eq:Im_bounds_gap}, \eqref{eq:IX_smooth}, and the fact that $m \ge \delta n - 1 \ge \delta - 1$. Finally, we have \begin{align} \left| \mathcal{I}_n(\delta) - \cI_{\mathrm{RS}}(\delta) \right| &\overset{(a)}{\le} \left| \mathcal{I}_n(\delta) - I_X(\delta) \right| + \left| I_X(\delta) - \cI_{\mathrm{RS}}(\delta) \right| \notag\\ & \overset{(b)}{\le} \tfrac{1}{(\delta -1) } + \tfrac{1}{2} \tfrac{n+1}{(\delta -1) n - 2} + \tfrac{1}{2} \sqrt{ \tfrac{2}{\delta-3} } \notag\\ &\overset{(c)}{\le} \left( 4 + \sqrt{2} \right) \delta^{-\frac{1}{2}}, \notag \end{align} where: (a) follows from the triangle inequality; (b) follows from \eqref{eq:IRS_bounds}, \eqref{eq:I_m_boundary_b}, and \eqref{eq:I_m_boundary_c}; and (c) follows from the assumption $\delta \ge 4$. This completes the proof of Theorem~\ref{thm:I_m_boundary}.
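As a concrete illustration of the fixed-point condition $z = \mathsf{mmse}_X(\delta/(1+z))$ and of the sandwich bounds in Lemma~\ref{lem:IR_boundary}, the sketch below (illustrative only) assumes a unit-variance Gaussian input, for which $\mathsf{mmse}_X(s) = 1/(1+s)$ in closed form and the fixed point is unique:

```python
import math

def mmse_gauss(s):
    # single-letter MMSE of a unit-variance Gaussian input: 1/(1 + s)
    return 1.0 / (1.0 + s)

def replica_fixed_point(delta, iters=200):
    """Iterate z <- mmse_X(delta / (1 + z)), i.e. the stationarity
    condition R_z(delta, z) = 0 of the replica potential."""
    z = 1.0
    for _ in range(iters):
        z = mmse_gauss(delta / (1.0 + z))
    return z

delta = 2.0
z_star = replica_fixed_point(delta)

# For a Gaussian input the fixed point solves z^2 + delta*z - 1 = 0.
z_exact = (math.sqrt(delta ** 2 + 4.0) - delta) / 2.0
assert abs(z_star - z_exact) < 1e-9

# Sandwich bounds of Lemma IR_boundary: mmse_X(delta) <= M_RS <= mmse_X(delta - 1).
assert mmse_gauss(delta) - 1e-9 <= z_star <= mmse_gauss(delta - 1.0) + 1e-9
print(f"M_RS({delta}) = {z_star:.6f}")
```

For non-Gaussian inputs the same iteration applies with the corresponding $\mathsf{mmse}_X$, although the fixed point need not be unique in general.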
\begin{figure*}[!ht] \centering \tikzstyle{gblock} = [rectangle, minimum width=2.75cm, minimum height=1cm, text centered, text width=2.75cm, draw=black, fill=gray!05] \tikzstyle{arrow} = [thick,->,>=stealth] \begin{tikzpicture}[node distance=1.6cm] \footnotesize \node (dist_ident) [gblock]{Posterior distribution identities \\ (Lemma~\ref{lem:post_dist_to_id})}; \node (var_imi) [gblock, below of = dist_ident, yshift = -0.8cm] {Concentration of MI density\\ (Lemmas~\ref{lem:Im_var} and \ref{lem:IMI_var})}; \node (post_dist) [gblock, right of = dist_ident, xshift=1.9cm, yshift =.8cm] {Weak decoupling \\ (Lemma~\ref{lem:SE_bound_1})}; \node (smooth) [gblock, below of = post_dist] {Smoothness of posterior variance\\ (Lemma~\ref{lem:post_var_smoothness})}; \node (pmid) [gblock, below of = smooth] {Smoothness of posterior MI difference \\ (Lemma~\ref{lem:PMID_var})}; \node (cclt) [gblock, right of = smooth, xshift=1.9cm, yshift = 0cm] {Posterior Gaussianness of new measurements \\ (Lemma~\ref{lem:DeltaP_bound})}; \node (cclt0) [gblock, above of = cclt] {Conditional CLT \cite{reeves:2016b} \\ (Lemma~\ref{lem:cclt})}; \node (dev_var) [gblock, below of= cclt] {Concentration of posterior variance\\ (Lemma~\ref{lem:post_var_dev_bound})}; \node (cclt2) [gblock, right of = cclt, xshift=1.9cm, yshift = -0cm] {Gaussianness of new measurements \\ (Lemma~\ref{lem:Delta_bound})}; \node (fixed_point) [gblock, below of = cclt2] {MMSE fixed-point constraint \\ (Lemmas~\ref{lem:M_to_M_aug} and \ref{lem:MMSE_aug_bound})}; \node (derivative) [gblock, right of = cclt2, xshift = 1.9cm, yshift = .8cm] {Asymptotic derivative constraint\\ (Theorem~\ref{thm:I_MMSE_relationship})}; \node (mmse) [gblock, below of = derivative] {Asymptotic fixed-point constraint\\ (Theorem~\ref{thm:MMSE_fixed_point})}; \draw [arrow] (dist_ident) -- (post_dist); \draw [arrow] (dist_ident) -- (smooth); \draw [arrow] (post_dist) -- (cclt); \draw [arrow] (smooth) -- (dev_var); \draw [arrow] (var_imi) -- (pmid);
\draw [arrow] (pmid) -- (dev_var); \draw [arrow] (cclt) -- (dev_var); \draw [arrow] (cclt0) -- (cclt); \draw [arrow] (cclt) -- (cclt2); \draw [arrow] (dev_var) -- (cclt2); \draw [arrow] (cclt2) -- (derivative); \draw [arrow] (cclt2) -- (mmse); \draw [arrow] (fixed_point) -- (mmse); \end{tikzpicture} \caption{Outline of the main steps in the proofs of Theorem~\ref{thm:I_MMSE_relationship} and Theorem~\ref{thm:MMSE_fixed_point}.\label{fig:proof_outline_a}} \end{figure*} \subsection{Concentration of MI Density}\label{sec:IMI_concentration} In order to establish the MI and MMSE relationships used in our proof, we need to show that certain functions of the random tuple $(X^n, W^n, A^n)$ concentrate about their expectations. Our first result bounds the variation in the mutual information corresponding to the measurement matrix. Let $I_{m,n}(A^m)$ denote the MI sequence as a function of the random matrix $A^m$. The following result follows from the Gaussian Poincar\'e inequality and the multivariate I-MMSE relationship. The proof is given in Appendix~\ref{proof:lem:Im_var}. \begin{lemma}\label{lem:Im_var} Under Assumptions 1 and 2, the variance of the MI with respect to the measurement matrix satisfies \begin{align} \var\left(I_{m,n}(A^m)\right) & \le \min\left\{ \left(\var(X)\right)^2\, m , \frac{n^2}{(m-n-1)_+} \right\}. \notag \end{align} \end{lemma} An important consequence of Lemma~\ref{lem:Im_var} is that the variance of the normalized mutual information $\frac{1}{n} I_{m,n}(A^m)$ converges to zero as $\max(m,n)$ increases to infinity. Next, we focus on the concentration of the mutual information density, which is a random variable whose expectation is equal to the mutual information. \begin{definition}\label{def:intant_MI} Given a distribution $P_{X,Y,Z}$, the \textit{conditional mutual information density} between $X$ and $Y$ given $Z$ is defined as \begin{align} \imath(X; Y\! \mid \!
Z) & \triangleq\log\left(\frac{ \mathrm{d} P_{X, Y \mid Z} (X, Y \! \mid \! Z)}{ \mathrm{d} \left( P_{X \mid Z}(X \! \mid \! Z) \times P_{Y \mid Z}(Y \! \mid \! Z) \right)} \right), \notag \end{align} where $(X,Y,Z) \sim P_{X,Y,Z}$. This is well-defined because the joint distribution is absolutely continuous with respect to the product of its marginals. \end{definition} The mutual information density satisfies many of the same properties as mutual information, such as the chain rule and invariance to one-to-one transformations; see \cite[Chapter~5.5]{gray:2013}. For this compressed sensing problem, the mutual information density can be expressed in terms of the density functions $f_{Y^m|X^n, A^m}$ and $f_{Y^m|A^m}$, which are guaranteed to exist because of the additive Gaussian noise: \begin{align} \imath(X^n ; Y^m \! \mid \! A^m) = \log\left( \frac{f_{Y^m \mid X^n, A^m}( Y^m \! \mid \! X^n, A^m )}{f_{Y^m \mid A^m}(Y^m \! \mid \! A^m )} \right).\notag \end{align} The next result bounds the variance of the mutual information density in terms of the fourth moment of the signal distribution and the problem dimensions. The proof is given in Appendix~\ref{proof:lem:IMI_var}. \begin{lemma}\label{lem:IMI_var} Under Assumptions 1 and 2, the variance of the MI density satisfies \begin{align} \var\left(\imath(X^n ; Y^m \mid A^m) \right) \le C_B \cdot \left( 1+ \frac{m}{n} \right)^2 n .\notag \end{align} \end{lemma} \section{Proof of Theorem~\ref{thm:I_MMSE_relationship}}\label{proof:thm:I_MMSE_relationship} This section describes the proof of Theorem~\ref{thm:I_MMSE_relationship}. An outline of the dependencies between the various steps is provided in Figure~\ref{fig:proof_outline_a}. \subsection{Further Definitions} The conditional distribution induced by the data $(Y^m, A^m)$ plays an important role and is referred to throughout as the \textit{posterior distribution}.
The optimal signal estimate with respect to squared error is given by the mean of the posterior distribution, and the squared error associated with this estimate is denoted by \begin{align} \cE_{m,n} & = \frac{1}{n} \left\| X^n - \ex{ X^n \! \mid \! Y^m, A^m} \right\|^2. \notag \end{align} The conditional expectation of the squared error with respect to the posterior distribution is referred to as the \textit{posterior variance} and is denoted by \begin{align} V_{m, n} & = \ex{ \mathcal{E}_{m,n} \! \mid \! Y^m, A^m } . \notag \end{align} Both $\cE_{m,n}$ and $V_{m,n}$ are random variables. By construction, their expectations are equal to the MMSE, that is \begin{align} M_{m,n} = \ex{ V_{m,n}} = \ex{\cE_{m,n}}.\notag \end{align} Next, recall that the MI difference sequence can be expressed in terms of the mutual information between the signal and a new measurement: \begin{align} I'_{m,n} & = I(X^n; Y_{m+1} \! \mid \! Y^m, A^{m+1}).\notag \end{align} The \textit{MI difference density} is defined to be the random variable \begin{align} \cJ_{m,n} & = \imath(X^n; Y_{m+1} \! \mid \! Y^m, A^{m+1}),\notag \end{align} where the mutual information density is defined in Definition~\ref{def:intant_MI}. The conditional expectation of the MI difference density with respect to the posterior distribution is referred to as the \textit{posterior MI difference} and is denoted by \begin{align} J_{m,n} & = \ex{ \imath(X^n; Y_{m+1} \! \mid \! Y^m, A^{m+1}) \; \middle | \; Y^m, A^m}.\notag \end{align} Both $\cJ_{m,n} $ and $J_{m,n}$ are random variables. By construction, their expectations are equal to the MI difference, that is \begin{align} \ex{\cJ_{m,n}}= \ex{J_{m,n}} = I'_{m,n}. \notag \end{align} A summary of this notation is provided in Table~\ref{tab:notation}. The next result shows that the moments of the squared error and MI difference density can be bounded in terms of the fourth moment of the signal distribution. The proof is given in Appendix~\ref{proof:lem:moment_bounds}.
\begin{lemma}\label{lem:moment_bounds} Under Assumptions 1 and 2, \begin{align} \ex{\left| \cE_{m,n} \right|^2} & \le C \cdot B \label{eq:SE_moment_2}\\ \ex{\left| \cJ_{m,n} \right|^2} & \le C\cdot (1 + B) \label{eq:IMID_moment_2}\\ \ex{\left| Y_{m+1} \right|^4 } & \le C \cdot (1 +B) \label{eq:Y_moment_4} \\ \ex{\left|Y_{m+1} - \widehat{Y}_{m+1} \right|^4} & \le C \cdot (1 + B) , \label{eq:Ybar_moment_4} \end{align} where $\widehat{Y}_{m+1} = \ex{ Y_{m+1} \! \mid \! Y^m, A^m}$. \end{lemma} \begin{table*}[htbp] \centering \caption{\label{tab:notation} Summary of notation used in the proofs of Theorem~\ref{thm:I_MMSE_relationship} and Theorem~\ref{thm:MMSE_fixed_point} } \begin{tabular}{@{} clll @{}} \toprule & Random Variable & Posterior Expectation & Expectation \\ \midrule Squared Error & $ \cE_{m} = \frac{1}{n} \| X^n-\ex{X^n \! \mid \! Y^m, A^m }\|^2$ & $ V_{m} = \ex{\cE_{m} \! \mid \! Y^m, A^m}$ & $M_{m} = \ex{\cE_m}$ \\[3pt] MI Difference & $\cJ_m =\imath\left(X^n ; Y_{m+1} \! \mid \! Y^{m}, A^{m+1}\right) $ & $J_{m} = \ex{ \cJ_m \! \mid \! Y^m, A^m}$ & $I'_{m} = \ex{ \cJ_m }$ \\ \bottomrule \end{tabular} \end{table*} \subsection{Weak Decoupling}\label{sec:posterior_distribution} The posterior distribution of the signal cannot, in general, be expressed as the product of its marginals since the measurements introduce dependence between the signal entries. Nevertheless, it has been observed that in some cases, the posterior distribution satisfies a \textit{decoupling principle} \cite{guo:2005}, in which the posterior distribution on a small subset of the signal entries is well approximated by the product of the marginals on that subset.
One way to express this decoupling is to say that, for any fixed index set $\{i_1, \cdots, i_L\}$ with $L \ll n$, the random posterior distribution satisfies \begin{align} P_{X_{i_1}, \cdots, X_{i_L} \mid Y^m, A^m}\approx \prod_{\ell=1}^L P_{X_{i_\ell} \mid Y^m, A^m}, \notag \end{align} with high probability with respect to the data $(Y^m, A^m)$. One of the main ideas in our proof is to use decoupling to show that the MI and MMSE sequences satisfy certain relationships. For the purposes of our proof, it is sufficient to work with a weaker notion of decoupling that depends only on the statistics of the pairwise marginals of the posterior distribution. \begin{definition} The posterior signal distribution $P_{X^n \mid Y^m, A^m}$ satisfies \textit{weak decoupling} if \begin{gather} \ex{ \left | \cE_{m,n} - V_{m,n} \right |} \to 0 \notag\\ \frac{1}{n} \ex{ \left \| \cov(X^n \! \mid \! Y^{m}, A^{m} ) \right\|_F } \to 0 \notag \end{gather} as $m$ and $n$ increase to infinity. \end{definition} The first condition in the definition of weak decoupling says that the magnitude of the squared error must concentrate about its conditional expectation. The second condition says that the average correlation between the signal entries under the posterior distribution is converging to zero. Note that both of these conditions are satisfied a priori in the case of $m=0$ measurements, because the prior signal distribution is i.i.d.\ with finite fourth moments. The next result provides several key identities, which show that certain properties of the posterior distribution can be expressed in terms of the second order statistics of new measurements. This result does not require Assumption~2, and thus holds generally for any prior distribution on $X^n$.
The proof is given in Appendix~\ref{proof:lem:post_dist_to_id}. \begin{lemma}\label{lem:post_dist_to_id} Under Assumption 1, the following identities hold for all integers $m < i < j$: \begin{enumerate}[(i)] \item The posterior variance satisfies \begin{align} V_{m,n} = \ensuremath{\mathbb{E}}_{A_{i}}\left[ \var\left(Y_{i} \! \mid \! Y^m, A^{m} , A_i \right) \right] - 1. \label{eq:id_var} \end{align} \item The posterior covariance matrix satisfies \begin{multline} \frac{1}{n^2} \left \| \cov(X^n \! \mid \! Y^{m}, A^{m} ) \right\|^2_F\\ = \ensuremath{\mathbb{E}}_{A_{i}, A_{j}}\left[ \left| \cov\left(Y_{i}, Y_{j} \! \mid \! Y^m\!, A^{m}\!, A_i, A_j \right) \right|^2 \right] .\label{eq:id_cov} \end{multline} \item The conditional variance of $\sqrt{1 + \cE_{m,n}}$ satisfies \begin{multline} \var\left( \sqrt{1 + \cE_{m,n}} \; \middle | \; Y^m ,A^m \right) \\ = \frac{\pi}{2} \cov\left( \big| Y_{i} - \widehat{Y}_{i} \big | , \big | Y_{j} - \widehat{Y}_{j} \big| \, \middle| \, Y^m\!, A^{m} \right), \label{eq:id_post_var} \end{multline} where $\widehat{Y}_{i} = \ex{ Y_{i} \mid Y^m, A^{m}, A_i} $. \end{enumerate} \end{lemma} Identity \eqref{eq:id_cov} relates the correlation of the signal entries under the posterior distribution to the correlation of new measurements. Identity \eqref{eq:id_post_var} relates the deviation of the squared error under the posterior distribution to the correlation between new measurements. Combining these identities with the bounds on the relationship between covariance and mutual information given in Appendix~\ref{sec:cov_to_MI} leads to the following result. The proof is given in Appendix~\ref{proof:lem:SE_bound_1}.
\begin{lemma}\label{lem:SE_bound_1} Under Assumptions 1 and 2, the posterior variance and the posterior covariance matrix satisfy \begin{align} \ex{ \left| \cE_{m,n} - V_{m,n} \right|} & \le C_B \cdot \left| I''_{m,n} \right|^{\frac{1}{4}} \label{eq:weak_dec_1}\\ \frac{1}{n} \ex{ \left \| \cov(X^n \mid Y^{m}, A^{m} ) \right\|_F } & \le C_B \cdot \left| I''_{m,n} \right|^{\frac{1}{4}}. \label{eq:weak_dec_2} \end{align} \end{lemma} \subsection{Gaussianness of New Measurements}\label{sec:Gaussiannness} The \textit{centered measurement} $\bar{Y}_{m+1}$ is defined to be the difference between a new measurement and its conditional expectation given the previous data: \begin{align} \bar{Y}_{m+1} & \triangleq Y_{m+1} - \ex{Y_{m+1} \mid Y^m, A^{m+1}}. \notag \end{align} Conditioned on the data $(Y^m, A^{m+1})$, the centered measurement provides the same information as $Y_{m+1}$, and thus the posterior MI difference and the MI difference can be expressed equivalently as \begin{align} J_m &= \ex{ \imath(X^n; \bar{Y}_{m+1} \mid Y^m, A^{m+1} ) \; \middle | \; Y^m ,A^m } \notag\\ I'_m & = I(X^n; \bar{Y}_{m+1} \mid Y^m, A^{m+1} ). \notag \end{align} Furthermore, by the linearity of expectation, the centered measurement can be viewed as a noisy linear projection of the signal error: \begin{align} \bar{Y}_{m+1} & = \langle A_{m+1}, \bar{X}^n \rangle + W_{m+1}, \label{eq:Ybar_alt} \end{align} where $\bar{X}^n = X^n - \ex{X^n \! \mid \! Y^m, A^m}$. Since the measurement vector $A_{m+1}$ and noise term $W_{m+1}$ are independent of everything else, the variance of the centered measurement can be related directly to the posterior variance $V_m$ and the MMSE $M_m$ via the following identities: \begin{align} \var(\bar{Y}_{m+1} \! \mid \! Y^m, A^m) &= 1 +V_{m,n} \label{eq:Ybar_var_cond} \\ \var(\bar{Y}_{m+1} ) &= 1 +M_{m,n}.\label{eq:Ybar_var} \end{align} Identity \eqref{eq:Ybar_var_cond} follows immediately from Lemma~\ref{lem:post_dist_to_id}.
Identity \eqref{eq:Ybar_var} follows from the fact that the centered measurement has zero mean, by construction, and thus its variance is equal to the expectation of \eqref{eq:Ybar_var_cond}. At this point, the key question for our analysis is the extent to which the conditional distribution of the centered measurement can be approximated by a zero-mean Gaussian distribution. We focus on two different measures of non-Gaussianness. The first measure, which is referred to as the \textit{posterior non-Gaussianness}, is defined by the random variable \begin{align} \Delta^P_{m,n} \triangleq \ensuremath{\mathbb{E}}_{A_{m+1}} \left[ D_\mathrm{KL}\left( P_{\bar{Y}_{m+1} \mid Y^m, A^{m+1}} \, \middle \|\, \mathcal{N}(0, 1 + V_m ) \right)\right]. \notag \end{align} This is the Kullback--Leibler divergence with respect to the Gaussian distribution whose variance is matched to the conditional variance of $\bar{Y}_{m+1}$ given the data $(Y^m, A^m)$. The second measure, which is referred to simply as the \textit{non-Gaussianness}, is defined by \begin{align} \Delta_{m,n} & \triangleq \ex{ D_\mathrm{KL}\left( P_{\bar{Y}_{m+1} \mid Y^m, A^{m+1}} \, \middle \|\, \mathcal{N}(0, 1 + M_m ) \right)}.\notag \end{align} Here, the expectation is taken with respect to the tuple $(Y^m, A^{m+1})$ and the comparison is with respect to the Gaussian distribution whose variance is matched to the marginal variance of $\bar{Y}_{m+1}$. The connection between the non-Gaussianness of the centered measurement and the relationship between the mutual information and MMSE sequences is given by the following result. The proof is given in Appendix~\ref{proof:lem:Delta_alt}. \begin{lemma}\label{lem:Delta_alt} Under Assumption 1, the posterior non-Gaussianness and the non-Gaussianness satisfy the following identities: \begin{align} \Delta^P_{m,n} & = \frac{1}{2} \log(1 + V_{m,n}) - J_{m,n} \label{eq:DeltaP_alt}\\ \Delta_{m,n} & = \frac{1}{2} \log(1 + M_{m,n}) - I'_{m,n} \label{eq:Delta_alt} .
\end{align} \end{lemma} Identity~\eqref{eq:Delta_alt} shows that the integral relationship between mutual information and MMSE in Theorem~\ref{thm:I_MMSE_relationship} can be stated equivalently in terms of the non-Gaussianness of the centered measurements. Furthermore, by combining \eqref{eq:Delta_alt} with \eqref{eq:DeltaP_alt}, we see that the non-Gaussianness can be related to the expected posterior non-Gaussianness using the following decomposition: \begin{align} \Delta_{m,n} &= \ex{ \Delta^P_{m,n}} + \frac{1}{2} \ex{\log\left( \frac{ 1 + M_{m,n}}{ 1 + V_{m,n}} \right)}.\label{eq:Delta_decomp} \end{align} The rest of this subsection is focused on bounding the expected posterior non-Gaussianness. The second term on the right-hand side of \eqref{eq:Delta_decomp} corresponds to the deviation of the posterior variance and is considered in the next subsection. The key step in bounding the posterior non-Gaussianness is provided by the following result, which bounds the expected Kullback--Leibler divergence between the conditional distribution of a random projection and a Gaussian approximation~\cite{reeves:2016b}. \begin{lemma}[{\!\cite{reeves:2016b}}]\label{lem:cclt} Let $U$ be an $n$-dimensional random vector with mean zero and $\ex{ \|U\|^4 } < \infty$, and let $Y = \langle A, U \rangle + W$, where $A \sim \mathcal{N}(0, \frac{1}{n} I_n)$ and $W \sim \mathcal{N}(0,1)$ are independent. Then, the expected KL divergence between $P_{Y|A}$ and the Gaussian distribution with the same mean and variance as $P_Y$ satisfies \begin{align} \MoveEqLeft \ex{ \DKL{P_{Y\mid A} }{\mathcal{N}(0, \var(Y)) }} \notag\\ &\le \frac{1}{2} \ex{ \left| \tfrac{1}{n} \|U\|^2 -\tfrac{1}{n} \ex{ \|U\|^2} \right| } \notag\\ & \quad + C \cdot \left| \tfrac{1}{n} \|\!
\cov(U )\|_F \left( 1 + \tfrac{1}{n}\sqrt{\ex{ \|U\|^4 }} \right) \right|^\frac{2}{5} .\notag \end{align} \end{lemma} Combining Lemma~\ref{lem:cclt} with Lemma~\ref{lem:SE_bound_1} leads to the following result, which bounds the expected posterior non-Gaussianness in terms of the second order MI difference. The proof is given in Appendix~\ref{proof:lem:DeltaP_bound}. \begin{lemma}\label{lem:DeltaP_bound} Under Assumptions 1 and 2, the expected posterior non-Gaussianness satisfies \begin{align} \ex{ \Delta^P_{m,n}} &\le C_B\cdot \left| I''_{m,n} \right|^{\frac{1}{10}} .\notag \end{align} \end{lemma} \subsection{Concentration of Posterior Variance} We now turn our attention to the second term on the right-hand side of \eqref{eq:Delta_decomp}. By the concavity of the logarithm, this term is nonnegative and measures the deviation of the posterior variance about its expectation. We begin with the following result, which provides useful bounds on the deviation of the posterior variance. The proof is given in Appendix~\ref{proof:lem:Vdev_to_logVdev}. \begin{lemma}\label{lem:Vdev_to_logVdev} Under Assumption 2, the posterior variance satisfies the following inequalities: \begin{align} \ex{\log\left( \frac{ 1 + M_{m,n}}{ 1 + V_{m,n}} \right)} & \le \ex{\left| V_{m,n} - M_{m,n} \right|} \notag \\ & \le C_B \cdot \sqrt{ \inf_{t \in \mathbb{R}} \ex{ \left| \tfrac{1}{2} \log(1 + V_{m,n}) - t \right| } } \label{eq:Vdev_to_logVdev} . \end{align} \end{lemma} The next step is to bound the right-hand side of \eqref{eq:Vdev_to_logVdev}. Observe that by Lemma~\ref{lem:Delta_alt}, the term $\frac{1}{2} \log(1 + V_{m,n})$ can be expressed in terms of the posterior non-Gaussianness and the posterior MI difference. Accordingly, the main idea behind our approach is to show that the deviation of this term can be upper bounded in terms of the deviation of the posterior MI difference.
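The first inequality in Lemma~\ref{lem:Vdev_to_logVdev} already holds pointwise: concavity of the logarithm gives $\log(1+M) - \log(1+V) \le (M-V)/(1+V) \le |V-M|$, since $V \ge 0$. A toy numerical check of the averaged statement (using an exponential distribution purely as a stand-in for the posterior-variance distribution):

```python
import math
import random

random.seed(1)

# Stand-in draws for the posterior variance V; M plays the role of E[V].
samples = [random.expovariate(1.0) for _ in range(100_000)]
M = sum(samples) / len(samples)

lhs = sum(math.log((1.0 + M) / (1.0 + v)) for v in samples) / len(samples)
rhs = sum(abs(v - M) for v in samples) / len(samples)

# E[log((1+M)/(1+V))] <= E[|V - M|], and the left side is nonnegative by Jensen.
assert 0.0 <= lhs <= rhs
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
```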
Rather than working with the sequences $V_{m,n}$ and $J_{m,n}$ directly, however, we bound the averages of these terms corresponding to a sequence of $\ell$ measurements, where $\ell$ is an integer that is chosen at the end of the proof to yield the tightest bounds. The main reason that we introduce this averaging is so that we can take advantage of the bound on the variance of the mutual information density given in Lemma~\ref{lem:IMI_var}. The key technical results that we need are given below. Their proofs are given in Appendices~\ref{proof:lem:post_var_smoothness} and \ref{proof:lem:PMID_var}. \begin{lemma}\label{lem:post_var_smoothness} Under Assumptions 1 and 2, the posterior variance satisfies \begin{align} \frac{1}{\ell} \sum_{k=m}^{m+\ell-1} \ex{ \left| V_{m} - V_{k} \right|} & \le C_B \cdot \left| I'_{m,n} - I'_{m+\ell -1,n} \right|^{\frac{1}{2}}, \notag \end{align} for all $(\ell, m,n)\in \mathbb{N}^3$. \end{lemma} \begin{lemma}\label{lem:PMID_var} Under Assumptions 1 and 2, the posterior MI difference satisfies \begin{align} \inf_{t \in \mathbb{R}} \ex{ \left| \frac{1}{\ell} \! \sum_{k =m}^{m+ \ell-1}\!\! J_k - t \right|} & \le C_B \cdot \left[ \left( 1 + \frac{m}{n} \right) \frac{\sqrt{n}}{\ell} + \frac{1}{\sqrt{n}} \right], \notag \end{align} for all $(\ell,m,n) \in \mathbb{N}^3$. \end{lemma} Finally, combining Lemmas~\ref{lem:DeltaP_bound}, \ref{lem:post_var_smoothness}, and \ref{lem:PMID_var} leads to the following result, which bounds the deviation of the posterior variance in terms of differences of the MI difference sequence. The proof is given in Appendix~\ref{proof:lem:post_var_dev_bound}. \begin{lemma}\label{lem:post_var_dev_bound} Under Assumptions 1 and 2, the posterior variance satisfies \begin{multline} \ex{ |V_{m,n} - M_{m,n}|} \le C_B \cdot \Big[ \left| I'_{m,n} - I'_{m+\ell-1,n} \right|^\frac{1}{4} + \ell^{-\frac{1}{20}} \\ + \left( 1 + \tfrac{m}{n} \right)^\frac{1}{2} n^{\frac{1}{4}} \ell^{-\frac{1}{2}} + n^{-\frac{1}{4}}\Big].
\notag \end{multline} for all $(\ell,m,n) \in \mathbb{N}^3$. \end{lemma} \subsection{Final Steps in Proof of Theorem~\ref{thm:I_MMSE_relationship}} The following result is a straightforward consequence of Identity~\eqref{eq:Delta_decomp} and Lemmas~\ref{lem:DeltaP_bound} and \ref{lem:post_var_dev_bound}. The proof is given in Appendix~\ref{proof:lem:post_var_dev_bound}. \begin{lemma}\label{lem:Delta_bound} Under Assumptions 1 and 2, the non-Gaussianness of new measurements satisfies the upper bound \begin{multline} \Delta_{m,n} \le C_B \cdot \Big[ \left|I''_{m,n} \right|^{\frac{1}{10}} + \left| I'_{m,n} - I'_{m+\ell_n, n} \right|^\frac{1}{4} \\ +\left(1 + \tfrac{m}{n} \right)^\frac{1}{2} n^{-\frac{1}{24}} \Big], \notag \end{multline} where $\ell_n = \lceil n^\frac{5}{6} \rceil$. \end{lemma} We now show how the proof of Theorem~\ref{thm:I_MMSE_relationship} follows as a consequence of Identity~\eqref{eq:Delta_alt} and Lemma~\ref{lem:Delta_bound}. Fix any $n\in \mathbb{N}$ and $\delta \in \mathbb{R}_+$ and let $m = \lceil \delta n \rceil$ and $\ell = \lceil n^\frac{5}{6} \rceil$.
Then, we can write \begin{align} \MoveEqLeft \int_{0}^{\delta} \left| \mathcal{I}_{n}'(\gamma) - \frac{1}{2} \log\left(1 + \mathcal{M}_{n}(\gamma) \right) \right| \mathrm{d} \gamma \notag\\ & \le \sum_{k=0}^{m- 1} \int_{\frac{k}{n} }^{\frac{k+1}{n}} \left| \mathcal{I}_{n}'(\gamma) - \frac{1}{2} \log\left(1 + \mathcal{M}_{n}(\gamma) \right) \right| \mathrm{d} \gamma \notag\\ &\overset{(a)}{=} \frac{1}{n}\sum_{k=0}^{m- 1} \left| I'_{k,n} - \frac{1}{2} \log\left(1 + M_{k,n} \right) \right| \notag\\ & \overset{(b)}{=} \frac{1}{n}\sum_{k=0}^{m-1} \Delta_{k,n} \notag \\ & \!\begin{multlined}[b] \overset{(c)}{\le} \frac{ C_B }{n}\sum_{k=0}^{m-1} \Big[ \left| I''_{k,n} \right|^\frac{1}{10} + \left| I'_{k+\ell, n} - I'_{k,n} \right|^\frac{1}{4} \\ + \left( 1 + \tfrac{k}{n}\right)^\frac{1}{2} n^{-\frac{1}{24}} \Big] , \label{eq:I_MMSE_relationship_b} \end{multlined} \end{align} where: (a) follows from the definitions of $\mathcal{I}'_n(\delta)$ and $\mathcal{M}_n(\delta)$; (b) follows from Identity~\eqref{eq:Delta_alt}; and (c) follows from Lemma~\ref{lem:Delta_bound}. To further bound the right-hand side of \eqref{eq:I_MMSE_relationship_b}, observe that \begin{align} \frac{1}{n} \sum_{k=0}^{m-1} \left| I''_{k,n} \right|^\frac{1}{10} &\overset{(a)}{\le} \frac{m^\frac{9}{10}}{n} \left( \sum_{k=0}^{m- 1}\left| I''_{k,n} \right|\right)^\frac{1}{10} \notag \\ & \overset{(b)}{=} \frac{m^\frac{9}{10}}{n} \left| I'_{1,n} -I'_{m,n} \right|^\frac{1}{10} \notag \\ & \overset{(c)}{\le} \frac{m^\frac{9}{10}}{n} \left( I_{1,n}\right)^\frac{1}{10} \notag \\ & \overset{(d)}{\le} C_B \cdot \left( 1+ \delta\right)^\frac{9}{10} \, n^{-\frac{1}{10}} , \label{eq:I_MMSE_relationship_c} \end{align} where: (a) follows from H\"older's inequality; (b) follows from the fact that $I''_{m,n}$ is non-positive; (c) follows from the fact that $I'_{m,n}$ is non-increasing in $m$; and (d) follows from the fact that $I_{1,n}$ is upper bounded by a constant that depends only on $B$.
Along similar lines, \begin{align} \frac{1}{n} \sum_{k=0}^{m-1} \left| I'_{k,n} - I'_{k+\ell, n} \right|^\frac{1}{4} &\overset{(a)}{\le} \frac{m^\frac{3}{4}}{n} \left( \sum_{k=0}^{m- 1}\left| I'_{k,n} - I'_{k + \ell, n} \right|\right)^\frac{1}{4} \notag \\ & \overset{(b)}{=} \frac{m^\frac{3}{4}}{n} \left( \sum_{k=0}^{\ell-1} ( I'_{k,n} - I'_{k + m, n}) \right)^\frac{1}{4} \notag \\ & \overset{(c)}{\le} \frac{m^\frac{3}{4}}{n} \left( \ell \cdot I_{1,n}\right)^\frac{1}{4} \notag \\ & \overset{(d)}{\le} C'_B \cdot \left( 1+ \delta\right)^\frac{3}{4} \, n^{-\frac{1}{24}} , \label{eq:I_MMSE_relationship_d} \end{align} where: (a) follows from H\"older's inequality; (b) and (c) follow from the fact that $I'_{m,n}$ is non-increasing in $m$; and (d) follows from the fact that $I_{1,n}$ is upper bounded by a constant that depends only on $B$. Finally, \begin{align} \frac{1}{n} \sum_{k=0}^{m-1} \left( 1 + \tfrac{k}{n}\right)^\frac{1}{2} n^{-\frac{1}{24}} & \le \tfrac{m}{n} \left(1 + \tfrac{m-1}{n} \right)^\frac{1}{2}\, n^{-\frac{1}{24}} \notag \\ & \le \left( 1+ \delta\right)^\frac{3}{2}\, n^{-\frac{1}{24}} . \label{eq:I_MMSE_relationship_e} \end{align} Plugging \eqref{eq:I_MMSE_relationship_c}, \eqref{eq:I_MMSE_relationship_d}, and \eqref{eq:I_MMSE_relationship_e} back into \eqref{eq:I_MMSE_relationship_b} and retaining only the dominant terms yields \begin{align} \int_{0}^{\delta} \left| \mathcal{I}_{n}'(\gamma) - \frac{1}{2} \log\left(1 \!+\! \mathcal{M}_{n}(\gamma) \right) \right| \mathrm{d} \gamma \le C_B \cdot (1\! + \! \delta)^\frac{3}{2} \,n^{- \frac{1}{24}} . \notag \end{align} This completes the proof of Theorem~\ref{thm:I_MMSE_relationship}. \section{Proof of Theorem~\ref{thm:MMSE_fixed_point}}\label{proof:thm:MMSE_fixed_point} \subsection{MMSE Fixed-Point Relationship}\label{sec:MMSE_fixed_point} This section shows how the MMSE can be bounded in terms of a fixed-point equation defined by the single-letter MMSE function of the signal distribution.
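Before turning to the proof, it may help to see the fixed-point equation $M = \mathsf{mmse}_X\left(\delta/(1+M)\right)$ in action. The following sketch is an illustration only, not part of the argument: it assumes a unit-variance Gaussian signal prior, for which $\mathsf{mmse}_X(s) = 1/(1+s)$ and the fixed point has the closed form $M = (\sqrt{\delta^2 + 4} - \delta)/2$; the function names are ours.

```python
import math

def mmse_gauss(s: float) -> float:
    # Single-letter MMSE for a unit-variance Gaussian prior: mmse_X(s) = 1/(1+s).
    return 1.0 / (1.0 + s)

def solve_fixed_point(delta: float, iters: int = 200) -> float:
    # Fixed-point iteration for M = mmse_X(delta / (1 + M)).
    # The map M -> (1+M)/(1+M+delta) is a contraction for delta >= 0,
    # so the iteration converges from any nonnegative starting point.
    m = 1.0  # start from the prior variance
    for _ in range(iters):
        m = mmse_gauss(delta / (1.0 + m))
    return m

# For this prior the fixed point solves M^2 + delta*M - 1 = 0,
# i.e., M = (sqrt(delta^2 + 4) - delta) / 2.
delta = 1.0
closed_form = (math.sqrt(delta**2 + 4.0) - delta) / 2.0
print(abs(solve_fixed_point(delta) - closed_form) < 1e-9)  # prints True
```

For a general prior the iteration would call the actual $\mathsf{mmse}_X$ of $P_X$; the Gaussian case is chosen here only because its fixed point is available in closed form for checking.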
At a high level, our approach focuses on the MMSE of an augmented measurement model, which contains an extra measurement and extra signal entry, and shows that this augmented MMSE can be related to $M_n$ in two different ways. For a fixed signal length $n$ and measurement number $m$, the augmented measurement model consists of the measurements $(Y^m, A^m)$ plus an additional measurement given by \begin{align} Z_{m+1} = Y_{m+1} + \sqrt{G_{m+1}}\, X_{n+1} , \notag \end{align} where, for each $m$, $G_m \sim \frac{1}{n} \chi^2_m$ has a scaled chi-square distribution and is independent of everything else. The observed data is given by the tuple $(Y^m, A^m, \mathcal{D}_{m+1})$ where \begin{align} \mathcal{D}_{m+1} = (Z_{m+1}, A_{m+1}, G_{m+1}). \notag \end{align} The augmented MMSE $\widetilde{M}_{m,n}$ is defined to be the average MMSE of the first $n$ signal entries given this data: \begin{align} \widetilde{M}_{m,n} & \triangleq \frac{1}{n} \mathsf{mmse}(X^n \mid Y^{m}, A^{m}, \mathcal{D}_{m+1}). \notag \end{align} The augmented measurement $Z_{m+1}$ is a noisy version of the measurement $Y_{m+1}$. Therefore, as far as estimation of the signal $X^n$ is concerned, the augmented measurements are more informative than $(Y^m, A^m)$, but less informative than $(Y^{m+1}, A^{m+1})$. An immediate consequence of the data processing inequality for MMSE \cite[Proposition 4]{rioul:2011} is that the augmented MMSE is sandwiched between consecutive terms of the MMSE sequence: \begin{align} M_{m+1,n} \le \widetilde{M}_{m,n} \le M_{m,n}. \label{eq:tildeM_sandwich} \end{align} The following result follows immediately from \eqref{eq:tildeM_sandwich} and the smoothness of the posterior variance given in Lemma~\ref{lem:post_var_smoothness}. The proof is given in Appendix~\ref{proof:lem:M_to_M_aug}.
\begin{lemma}\label{lem:M_to_M_aug} Under Assumptions 1 and 2, the augmented MMSE $\widetilde{M}_{m,n}$ and the MMSE $M_{m,n}$ satisfy \begin{align} \left| \widetilde{M}_{m,n} - M_{m,n} \right| \le C_B \cdot \left| I''_{m,n} \right|^\frac{1}{2}. \label{eq:M_to_M_aug} \end{align} \end{lemma} The next step in the proof is to show that $\widetilde{M}_{m,n}$ can also be expressed in terms of the single-letter MMSE function $\mathsf{mmse}_X(s)$. The key property of the augmented measurement model that allows us to make this connection is given by the following result. The proof is given in Appendix~\ref{proof:lem:MMSE_aug_alt}. \begin{lemma}\label{lem:MMSE_aug_alt} Under Assumptions 1 and 2, the augmented MMSE can be expressed equivalently in terms of the last signal entry: \begin{align} \widetilde{M}_{m,n} = \mathsf{mmse}(X_{n+1} \mid Y^m, A^m, \mathcal{D}_{m+1}). \label{eq:M_aug_alt} \end{align} \end{lemma} To see why the characterization in \eqref{eq:M_aug_alt} is useful, note that the first $m$ measurements $(Y^m, A^m)$ are independent of $X_{n+1}$. Thus, as far as estimation of $X_{n+1}$ is concerned, the relevant information provided by these measurements is summarized by the conditional distribution of $Y_{m+1}$ given $(Y^m,A^{m+1})$. This observation allows us to leverage results from Section~\ref{sec:Gaussiannness}, which focused on the non-Gaussianness of this distribution. The proof of the following result is given in Section~\ref{proof:lem:MMSE_aug_bound}.
\begin{lemma}\label{lem:MMSE_aug_bound} Under Assumptions 1 and 2, the augmented MMSE and the MMSE satisfy \begin{align} \left| \widetilde{M}_{m,n} - \mathsf{mmse}_X\left( \frac{m/n}{1 + M_{m,n}} \right) \right| & \le C_B \cdot \left( \sqrt{ \Delta_{m,n} } + \frac{ \sqrt{m}}{n} \right), \notag \end{align} where $\Delta_{m,n}$ is the non-Gaussianness of the new measurements. \end{lemma} \subsection{Final Steps in Proof of Theorem~\ref{thm:MMSE_fixed_point}} Fix any $n \in \mathbb{N}$ and $\delta\in \mathbb{R}_+$ and let $m = \lceil \delta n \rceil$. Then, we can write \begin{align} \MoveEqLeft \int_{0}^{\delta} \left| \mathcal{M}_{n}(\gamma) - \mathsf{mmse}_X\left( \frac{\gamma }{1 + \mathcal{M}_n(\gamma) } \right) \right| \mathrm{d} \gamma \notag\\ & \le \sum_{k=0}^{m-1} \int_{ \frac{k}{n}}^{\frac{k+1}{n}} \left| \mathcal{M}_{n}(\gamma) - \mathsf{mmse}_X\left( \frac{\gamma }{1 + \mathcal{M}_n(\gamma) } \right) \right| \mathrm{d} \gamma \notag\\ & \overset{(a)}{=} \sum_{k=0}^{m-1} \int_{ \frac{k}{n}}^{\frac{k+1}{n}} \left| M_{k,n} - \mathsf{mmse}_X\left( \frac{\gamma }{1 + M_{k,n}} \right) \right| \mathrm{d} \gamma \notag\\ &\overset{(b)}{\le} \frac{1}{n} \sum_{k=0}^{m-1} \left( \left| M_{k,n} - \mathsf{mmse}_X\left( \frac{k/n }{1 + M_{k,n}} \right) \right| + \frac{4 B}{n} \right) \notag\\ & \overset{(c)}{\le } \frac{C_B}{n} \sum_{k=0}^{m-1} \left( \left| I''_{k,n}\right|^\frac{1}{2} + \left| \Delta_{k,n} \right|^\frac{1}{2} + \frac{ \sqrt{k}}{n} + \frac{1}{n} \right), \label{eq:MMSE_fixed_point_b} \end{align} where: (a) follows from the definition of $\mathcal{M}_n(\delta)$; (b) follows from the triangle inequality and Lemma~\ref{lem:mmseX_bound}; and (c) follows from Lemmas~\ref{lem:M_to_M_aug} and \ref{lem:MMSE_aug_bound}.
To further bound the right-hand side of \eqref{eq:MMSE_fixed_point_b}, observe that, by the same steps that led to Inequality~\eqref{eq:I_MMSE_relationship_c}, \begin{align} \frac{1}{n} \sum_{k=0}^{m-1} \left| I''_{k,n}\right|^\frac{1}{2} & \le C_B \cdot (1 + \delta)^\frac{1}{2} n^{-\frac{1}{2}}. \label{eq:MMSE_fixed_point_c} \end{align} Furthermore, \begin{align} \frac{1}{n} \sum_{k=0}^{m-1} \left| \Delta_{k,n} \right|^\frac{1}{2} & \overset{(a)}{ \le} \sqrt{ \frac{m}{n}} \sqrt{ \frac{1}{n} \sum_{k=0}^{m-1} \Delta_{k,n} } \notag \\ & \overset{(b)}{\le} C_B \cdot (1 + \delta)^\frac{1}{2} \sqrt{ (1\! + \! \delta)^\frac{3}{2} \,n^{- \frac{1}{24}} } \notag \\ & = C_B \cdot (1 + \delta)^\frac{7}{4} \, n^{-\frac{1}{48}} , \label{eq:MMSE_fixed_point_d} \end{align} where (a) follows from the Cauchy-Schwarz inequality, and (b) follows from the proof of Theorem~\ref{thm:I_MMSE_relationship}. Finally, \begin{align} \frac{1}{n} \sum_{k=0}^{m-1} \left(\frac{ \sqrt{k}}{n} + \frac{1}{n} \right) & \le \frac{ m( 1 + \sqrt{m-1})}{n^2} \le (1 + \delta)^\frac{3}{2} n^{-\frac{1}{2}}. \label{eq:MMSE_fixed_point_e} \end{align} Plugging \eqref{eq:MMSE_fixed_point_c}, \eqref{eq:MMSE_fixed_point_d}, and \eqref{eq:MMSE_fixed_point_e} back into \eqref{eq:MMSE_fixed_point_b} and keeping only the dominant terms leads to \begin{multline} \int_{0}^{\delta} \left| \mathcal{M}_{n}(\gamma) - \mathsf{mmse}_X\left( \frac{\gamma }{1 + \mathcal{M}_n(\gamma) } \right) \right| \mathrm{d} \gamma\\ \le C_B \cdot (1 + \delta)^\frac{7}{4} \, n^{-\frac{1}{48}}. \notag \end{multline} This completes the proof of Theorem~\ref{thm:MMSE_fixed_point}. \section{Proof of Theorem~\ref{thm:limits}} The proof of Theorem~\ref{thm:limits} is established by combining implications of the single-crossing property, the constraints on the MI and MMSE given in Theorems~\ref{thm:I_MMSE_relationship}, \ref{thm:MMSE_fixed_point}, and \ref{thm:I_m_boundary}, and standard results from functional analysis.
\subsection{The Single-Crossing Property}\label{sec:single_crossing} The fixed-point curve is given by the graph of the function \begin{align} \delta_\mathrm{FP}(z) = (1 + z) \mathsf{mmse}_X^{-1}(z), \notag \end{align} where $\mathsf{mmse}_X^{-1}(z)$ is the functional inverse of $\mathsf{mmse}_X(s)$. The function $\delta_\mathrm{FP}(z)$ is continuously differentiable over its domain because $\mathsf{mmse}_X(s)$ is smooth on $(0,\infty)$ \cite[Proposition~7]{guo:2011}. The function $\delta_\mathrm{RS}(z)$ is defined to be the functional inverse of the replica-MMSE function \begin{align} \delta_\mathrm{RS}(z) &= \cM_{\mathrm{RS}}^{-1}(z). \notag \end{align} The function $\delta_\mathrm{RS}$ is continuous and non-increasing because $\cM_{\mathrm{RS}}$ is strictly decreasing. Note that jump discontinuities in $\cM_{\mathrm{RS}}(\delta)$ correspond to flat sections in $\delta_\mathrm{RS}(z)$. Using these definitions, we can now provide a formal definition of the single-crossing property. \begin{definition}[Single-Crossing Property] A signal distribution $P_X$ has the single-crossing property if $\delta_\mathrm{RS} - \delta_\mathrm{FP}$ has at most one zero-crossing. In other words, there exists $z_* \in \mathbb{R}_+$ such that $\delta_\mathrm{RS} - \delta_\mathrm{FP}$ is either nonpositive or nonnegative on $[0,z_*]$, and likewise either nonpositive or nonnegative on $[z_*, \infty)$. \end{definition} \begin{lemma}\label{lem:delta_rs_alt} If the signal distribution $P_X$ has the single-crossing property, there exists a point $(z_*,\delta_*) \in \mathrm{FP}$ such that \begin{align} \delta_\mathrm{RS}(z) & = \begin{cases} \max( \delta_\mathrm{FP}(z), \delta_*), & \text{if $z \in [0,z_*]$} \notag\\ \min( \delta_\mathrm{FP}(z), \delta_*), & \text{if $z\in [z_*,\infty)$}.
\notag \end{cases} \end{align} \end{lemma} \begin{proof} If $\delta_\mathrm{RS}(z) = \delta_\mathrm{FP}(z)$ for all $z \in \mathbb{R}_+$ then this representation holds for every point in $\mathrm{FP}$ because $\delta_\mathrm{RS}$ is non-increasing. Alternatively, if there exists $(u_*, \delta_*) \in \mathbb{R}^2_+$ such that $\delta_* = \delta_\mathrm{RS}(u_*) \ne \delta_\mathrm{FP}(u_*) $, then it must be the case that the global minimum of $Q_*(z) \triangleq R(\delta_*, z)$ is attained at more than one point. More precisely, there exist $z_1 < u_* < z_2$ such that \begin{align} & Q_*( z_1) = Q_*(z_2) = \min_{z} Q_*(z) \label{eq:delta_rs_alt_b}\\ & Q_*( u ) > \min_{z} Q_*(z) \quad \text{for some $u \in (z_1, z_2)$}. \label{eq:delta_rs_alt_c} \end{align} To see why the second constraint follows from the assumption $\delta_\mathrm{RS}(u_*) \ne \delta_\mathrm{FP}(u_*)$, note that if $Q_*(z)$ were constant over the interval $[z_1, z_2]$, that would mean that $Q'_*(z) = R_z(\delta_* , z) = 0 $ for all $z \in [z_1, z_2]$. This is equivalent to saying that every point on the line from $(\delta_*, z_1)$ to $(\delta_*, z_2)$ is on the fixed-point curve, which is a contradiction. Now, since $\delta_\mathrm{RS}(z)$ is non-increasing and equal to $\delta_*$ at both $z_1$ and $z_2$, we know that $\delta_\mathrm{RS}(z) = \delta_*$ for all $z \in [z_1, z_2]$. Furthermore, since $z_1$ and $z_2$ are minimizers of $R(\delta_*, z)$, we also know that $\delta_\mathrm{FP}(z_1) = \delta_\mathrm{FP}(z_2) = \delta_*$. Next, we will show that the function $\delta_\mathrm{FP}(z) - \delta_*$ must have at least one negative-to-positive zero-crossing on $(z_1, z_2)$. Recall that the function $Q_*(z)$ is continuous, has global minima at $z_1$ and $z_2$, and is not constant over $[z_1,z_2]$. Therefore, it must attain a local maximum on the open interval $(z_1,z_2)$.
Since it is continuously differentiable, this means that there exist $u_1, u_2 \in (z_1, z_2)$ with $u_1 < u_2$ such that $Q'_*(u_1) > 0$ and $Q'_*(u_2) < 0$. The sign changes in $Q'_*(z)$ can be related to the sign changes in $\delta_\mathrm{FP}(z) - \delta_*$ by noting that \begin{align} \sgn( Q'_*(z) ) & \overset{(a)}{=} \sgn\left( z- \mathsf{mmse}_X\left( \frac{\delta_*}{1 + z} \right) \right) \notag\\ & \overset{(b)}{=} - \sgn\left( \mathsf{mmse}^{-1}_X(z)- \frac{\delta_*}{1 + z} \right) \notag\\ & = - \sgn\left( \delta_\mathrm{FP}(z) - \delta_*\right), \notag \end{align} where (a) follows from \eqref{eq:R_z} and the fact that $\delta_*$ can be taken to be strictly positive and (b) follows from the fact that $\mathsf{mmse}_X(s)$ is strictly decreasing. As a consequence, we see that $\delta_\mathrm{FP}(u_1) < \delta_*$ and $\delta_\mathrm{FP}(u_2) > \delta_*$, and thus $\delta_\mathrm{FP}(z) - \delta_*$ has at least one negative-to-positive zero-crossing on the interval $(z_1, z_2)$. At this point, we have shown that every tuple $(\delta_*, z_1, z_2)$ satisfying \eqref{eq:delta_rs_alt_b} and \eqref{eq:delta_rs_alt_c} leads to at least one negative-to-positive zero-crossing of $\delta_\mathrm{FP} - \delta_\mathrm{RS}$. Therefore, if the signal distribution has the single-crossing property, there can be at most one such tuple. This implies that $\delta_\mathrm{FP}(z) = \delta_\mathrm{RS}(z)$ for all $z \in [0,z_1] \cup [z_2, \infty)$. Furthermore, by the continuity of $\delta_\mathrm{FP}$, there exists a point $z_* \in (u_1, z_2)$ such that $\delta_\mathrm{FP}(z_*) = \delta_*$ and \begin{align} z \le z_* &\implies \delta_\mathrm{RS}(z) \ge \delta_\mathrm{FP}(z) \notag\\ z \ge z_* &\implies \delta_\mathrm{RS}(z) \le \delta_\mathrm{FP}(z). \notag \end{align} Combining these observations leads to the stated result. \end{proof} Next, for each $g \in \mathcal{V}$, we use $\delta_g(z) = g^{-1}(z)$ to denote the functional inverse.
The function $\delta_g$ is continuous and non-increasing because $g$ is strictly decreasing. \begin{lemma}\label{lem:delta_g_bounds} If the signal distribution $P_X$ has the single-crossing property then, for every $g \in \mathcal{V}$, the function $\delta_g$ is either an upper bound or a lower bound on $\delta_\mathrm{RS}$. \end{lemma} \begin{proof} Let $(\delta_*,z_*)$ be the point described in Lemma~\ref{lem:delta_rs_alt}. Since $\delta_g$ is non-increasing, we have $\delta_g(z) \ge \delta_g(z_*)$ for all $z \in [0,z_*]$ and $\delta_g(z) \le \delta_g(z_*)$ for all $z \in [z_*, \infty)$. Combining these inequalities with the fact that $\delta_g$ is lower bounded by the lower envelope of the fixed-point curve leads to \begin{align} \delta_g(z) & \ge \begin{dcases} \min_{u \in [0,z]} \max( \delta_\mathrm{FP}(u) , \delta_g(z_*)), &\text{if $z \in [0, z_*]$}\\ \min_{u \in [z_*, z]} \min( \delta_\mathrm{FP}(u) , \delta_g(z_*)), &\text{if $z \in [z_*,\infty)$}. \end{dcases} \notag \end{align} Therefore, if $\delta_g(z_*) \ge \delta_*$, we see that \begin{align} \delta_g(z) & \ge \begin{dcases} \min_{u \in [0,z]} \max( \delta_\mathrm{FP}(u) , \delta_*), &\text{if $z \in [0, z_*]$}\\ \min_{u \in [z_*, z]} \min( \delta_\mathrm{FP}(u) , \delta_*), &\text{if $z \in [z_*,\infty)$}. \end{dcases} \notag\\ &\overset{(a)}{ =} \begin{dcases} \min_{u \in [0,z]} \delta_\mathrm{RS}(u) , &\text{if $z \in [0, z_*]$}\\ \min_{u \in [z_*, z]} \delta_\mathrm{RS}(u), &\text{if $z \in [z_*,\infty)$}. \end{dcases} \notag\\ & \overset{(b)}{=} \delta_\mathrm{RS}(z) , \notag \end{align} where (a) follows from Lemma~\ref{lem:delta_rs_alt} and (b) follows from the fact that $\delta_\mathrm{RS}$ is non-increasing. Alternatively, if $\delta_g(z_*) \le \delta_*$ then a similar argument can be used to show that $\delta_g(z) \le \delta_\mathrm{RS}(z)$ for all $z \in \mathbb{R}_+$.
\end{proof} \begin{lemma}\label{lem:G_unique} If the signal distribution $P_X$ has the single-crossing property, then $\mathcal{G}$ is equal to the equivalence class of functions in $\mathcal{V}$ that are equal to $\cM_{\mathrm{RS}}$ almost everywhere. \end{lemma} \begin{proof} Recall that $\mathcal{G}$ is the set of all functions $g \in \mathcal{V}$ that satisfy the boundary condition \begin{align} \lim_{\delta \to \infty} \left| \int_0^\delta \frac{1}{2} \log(1 + g(\gamma) ) \mathrm{d} \gamma - \cI_{\mathrm{RS}}(\delta) \right| = 0. \label{eq:G_unique_b} \end{align} Furthermore, for each $g \in \mathcal{V}$ and $\delta \in \mathbb{R}_+$, we can write \begin{align} \MoveEqLeft \left| \int_0^\delta \tfrac{1}{2} \log(1 + g(\gamma) ) \mathrm{d} \gamma - \cI_{\mathrm{RS}}(\delta) \right| \notag\\ & \overset{(a)}{=} \left| \int_0^\delta \! \tfrac{1}{2} \log(1\! +\! \cM_{\mathrm{RS}}(\gamma) ) \mathrm{d} \gamma - \int_0^\delta \! \tfrac{1}{2} \log(1 \!+\! g(\gamma) ) \mathrm{d} \gamma \right| \notag\\ & \overset{(b)}{=} \int_0^\delta \left|\tfrac{1}{2} \log(1\! +\! \cM_{\mathrm{RS}}(\gamma) ) -\tfrac{1}{2} \log(1\! +\! g(\gamma) )\right| \mathrm{d} \gamma, \label{eq:G_unique_c} \end{align} where (a) follows from \eqref{eq:derivative_replica} and (b) follows from the monotonicity of $g$ and $\cM_{\mathrm{RS}}$ and Lemma~\ref{lem:delta_g_bounds}. Combining \eqref{eq:G_unique_b} and \eqref{eq:G_unique_c}, we see that, for all $g \in \mathcal{G}$, \begin{align} \int_0^\infty \left|\frac{1}{2} \log(1 + \cM_{\mathrm{RS}}(\gamma) ) -\frac{1}{2} \log(1 + g(\gamma) )\right| \mathrm{d} \gamma = 0, \notag \end{align} and thus $\cM_{\mathrm{RS}}$ and $g$ are equal almost everywhere. \end{proof} \subsection{Convergence of Subsequences} For each $n\in \mathbb{N}$, the function $\mathcal{M}_n$ is a non-increasing function from $\mathbb{R}_{+}$ to $\mathbb{R}_{+}$. Convergence of the sequence $\mathcal{M}_n$ can be treated in a few different ways. 
In our original approach~\cite{reeves:2016a}, we focused on the L\'{e}vy metric~\cite[Ch.~2]{Gnedenko-1968}. Here, we present a more direct argument based on the Helly Selection Theorem~\cite[Thm.~12]{Hanche-em10}. First, we let $L^1 ([0,S])$ represent the standard Banach space of Lebesgue integrable functions from $[0,S]$ to $\mathbb{R}$ with norm \[ \int_0^S |f(\delta)| \mathrm{d} \delta. \] In this space, two functions $f,g$ are called \emph{equivalent} if they are equal almost everywhere (i.e., $\int_0^S|f(\delta)-g(\delta)|\mathrm{d} \delta = 0$). Next, we recall that monotone functions are continuous almost everywhere (e.g., except for a countable set of jump discontinuities) \cite{Rudin-1976}. Thus, $f,g$ are equivalent if and only if they are equal at all points of continuity. The following lemmas outline our approach to convergence. \begin{lemma} \label{lem:subseq_limit} Under Assumptions 1 and 2, for any $S>0$ and any subsequence of $(\mathcal{M}_{n},\mathcal{I}_{n})$, there is a further subsequence (whose index is denoted by $n'$) and some $g\in \mathcal{G}$ such that \begin{align} \lim_{n'} & \int_0^S \big| \mathcal{M}_{n'}(\delta) - g(\delta) \big| \mathrm{d} \delta =0 \notag \\ \lim_{n'} & \int_0^S \left| \mathcal{I}_{n'}'(\delta) - \frac{1}{2}\log\left(1+g(\delta)\right)\right| \mathrm{d} \delta = 0. \notag \end{align} \end{lemma} \begin{proof} For any $S>0$ and each $n\in \mathbb{N}$, the restriction of $\mathcal{M}_n (\delta)$ to $\delta\in [0,S]$ is non-increasing and uniformly bounded by $\mathcal{M}_n(0) =\var(X)$. Since $\mathcal{M}_n (\delta)$ is nonnegative and non-increasing, its total variation on $[0,S]$ equals $\mathcal{M}_n (0) - \mathcal{M}_n(S) \leq \var(X)$~\cite[Section~6.3]{Royden-2010}. Based on this, the Helly Selection Theorem~\cite[Thm.~12]{Hanche-em10} shows that any subsequence of $\mathcal{M}_n$ contains a further subsequence that converges in $L^1 ([0,S])$.
Let $\mathcal{M}_{n'}$ denote this further subsequence and $\mathcal{M}_*$ denote its limit so that \[ \lim_{n'} \int_0^S \left| \mathcal{M}_{n'} (\delta) - \mathcal{M}_* (\delta) \right| \mathrm{d} \delta = 0. \] To simplify notation, we define the operator $T\colon L^1 ([0,S]) \to L^1 ([0,S])$ via $(Tf)(\delta) = \mathsf{mmse}_X \left(\delta/(1+f(\delta))\right)$. To analyze $\mathcal{M}_* (\delta)$, we observe that, for all $n$, one has \begin{align} \int_0^S \!\!\! & \; \big| \mathcal{M}_{*} (\delta) - T\mathcal{M}_{*} (\delta) \big| \mathrm{d} \delta \leq \notag \int_0^S \big|\mathcal{M}_{*}(\delta)- \mathcal{M}_{n}(\delta) \big| \mathrm{d} \delta \notag\\ &\quad + \int_0^S \big|\mathcal{M}_{n}(\delta) -T\mathcal{M}_{n}(\delta) + T\mathcal{M}_{n}(\delta)-T\mathcal{M}_{*}(\delta) \big| \mathrm{d} \delta \notag\\ &\leq (1+L_T) \int_0^S\big|\mathcal{M}_{*}(\delta)- \mathcal{M}_{n}(\delta) \big| \mathrm{d} \delta \notag\\ & \quad + \int_0^S\big|\mathcal{M}_{n}(\delta) -T\mathcal{M}_{n}(\delta) \big|\mathrm{d} \delta, \notag \end{align} where $L_T$ is the Lipschitz constant of $T$. Under Assumption 2, one can use Lemma~\ref{lem:mmseX_bound} to show that $L_T \leq 4 B S$. Since $\int_0^S \big|\mathcal{M}_{*} (\delta)- \mathcal{M}_{n'} (\delta) \big| \mathrm{d} \delta \to 0$ by construction and $\int_0^S \big| \mathcal{M}_{n'}(\delta) -T\mathcal{M}_{n'} (\delta) \big| \mathrm{d} \delta \to 0$ by Theorem~\ref{thm:MMSE_fixed_point}, taking the limit along this subsequence shows that $\mathcal{M}_{*}$ equals $T\mathcal{M}_{*}$ almost everywhere on $[0,S]$. As $S$ was arbitrary, we see that $\mathcal{M}_*$ satisfies the first condition of Definition~\ref{lem:replica_mmse_set}. To establish the second condition of Definition~\ref{lem:replica_mmse_set}, we focus on the sequence $\mathcal{I}_{n}$. Recall that each $\mathcal{I}_{n}$ is concave and differentiable, with derivative $\mathcal{I}_{n}'$.
Also, the set $\{\mathcal{I}_{n}\}$ is uniformly bounded on $[0,S]$ and uniformly Lipschitz by \eqref{eq:IX_Gbound}, \eqref{eq:Ip_bound}, and \eqref{eq:Im_bounds}. By the Arzel\`{a}-Ascoli theorem~\cite[Section~10.1]{Royden-2010}, this implies that any subsequence of $\mathcal{I}_{n}$ contains a further subsequence that converges uniformly on $[0,S]$. Moreover, the limiting function is concave and the further subsequence of derivatives also converges to the derivative of the limit function at each point where it is differentiable~\cite[Corollary~1.3.8]{Niculescu-2009}. Thus, from any subsequence of $(\mathcal{M}_{n},\mathcal{I}_{n})$, we can choose a further subsequence (whose index is denoted by $n'$) such that $\int_0^S \big| \mathcal{M}_{n'} (\delta) - \mathcal{M}_* (\delta) \big| \mathrm{d} \delta \to 0$ and $\mathcal{I}_{n'}$ converges uniformly on $[0,S]$ to a concave limit function $\mathcal{I}_* $. Moreover, the sequence of derivatives $\mathcal{I}_{n'}'$ also converges to $\mathcal{I}_* '$ at each point where $\mathcal{I}_*$ is differentiable. Since $\mathcal{I}_*$ is concave, it is differentiable almost everywhere and we have \[ \lim_{n'}\mathcal{I}_{n'}'(\delta)=\mathcal{I}_* '(\delta) \] almost everywhere on $[0,S]$. Since $\left| \mathcal{I}_{n'} ' (\delta) - \mathcal{I}_{*}' (\delta) \right|$ is bounded and converges to zero almost everywhere on $[0,S]$, we can apply the dominated convergence theorem to see also that $\int_0^S \big|\mathcal{I}_{n'}' (\delta) - \mathcal{I}_{*}' (\delta) \big| \mathrm{d} \delta \to 0$. Next, we can apply Theorem~\ref{thm:I_MMSE_relationship} to see that \[ \lim_{n'}\int_{0}^{S}\left|\mathcal{I}'_{n'}(\delta)-\frac{1}{2}\log\left(1+\mathcal{M}_{n'}(\delta)\right)\right|\mathrm{d}\delta=0.
\] Since $\frac{1}{2}\log\left(1+z\right)$ is Lipschitz in $z$, one finds that \[ \int_0^S \left| \frac{1}{2} \log \left(1 + \mathcal{M}_{n'} (\delta) \right) - \frac{1}{2} \log \left(1+ \mathcal{M}_* (\delta) \right) \right| \mathrm{d} \delta \to 0 \] follows from the fact that $\int_0^S \big| \mathcal{M}_{n'} (\delta) - \mathcal{M}_* (\delta) \big| \mathrm{d} \delta \to 0$. Along with the triangle inequality, this shows that $\mathcal{I}_{n'}'(\delta)$ converges to $\frac{1}{2} \log (1+\mathcal{M}_* (\delta))$ almost everywhere on $[0,S]$. Since $\left| \mathcal{I}_{n'}'(\delta) - \frac{1}{2} \log (1+\mathcal{M}_* (\delta)) \right|$ is bounded and converges to zero almost everywhere on $[0,S]$, we can apply the dominated convergence theorem to see that \begin{equation} \label{eq:Istar_int_Mstar} \mathcal{I}_* (S) =\lim_{n'} \int_0^S \mathcal{I}_{n'}' (\delta) \mathrm{d} \delta = \int_{0}^{S} \frac{1}{2} \log (1+\mathcal{M}_* (\delta)) \mathrm{d} \delta. \end{equation} Next, we observe that Theorem~\ref{thm:I_m_boundary} implies \[ \lim_{n'} \; \left|\cI_{\mathrm{RS}} \left(S\right)-\mathcal{I}_{n'}(S)\right|\leq\frac{C}{\sqrt{S}}, \] for all $S \ge 4$. With \eqref{eq:Istar_int_Mstar}, this implies that \begin{align} &\left| \cI_{\mathrm{RS}}(S) - \int_{0}^{S} \frac{1}{2}\log\big( 1+\mathcal{M}_*(\delta) \big) \mathrm{d} \delta \right| \notag\\ & \leq \left| \cI_{\mathrm{RS}}(S) - \mathcal{I}_{n'} (S) \right| + \left| \mathcal{I}_{n'} (S) - \int_{0}^{S} \frac{1}{2}\log\big( 1+\mathcal{M}_*(\delta) \big) \mathrm{d} \delta \right| \notag\\ & \leq \frac{C}{\sqrt{S}} + \epsilon_{n'}, \notag \end{align} where $\lim_{n'} \epsilon_{n'} =0$. Taking the limit $n' \to \infty$ followed by the limit $S\to \infty$, we see \begin{equation} \lim_{S\to \infty} \left| \cI_{\mathrm{RS}}(S) - \int_{0}^{S} \frac{1}{2}\log\big( 1+\mathcal{M}_*(\delta) \big) \mathrm{d} \delta \right| = 0 \notag \end{equation} and, thus, that $\mathcal{M}_* \in \mathcal{G}$. 
Notice that we focus first on finite $S \in \mathbb{R}_+$ and then take the limit $S\to \infty$. This is valid because the functions $\mathcal{I}_n(\delta)$ and $\mathcal{M}_n (\delta)$ are defined for all $\delta \in \mathbb{R}_+$ but restricted to $[0,S]$ for the convergence proof. \end{proof} Now, we can complete the proof of Theorem~\ref{thm:limits}. The key idea is to combine Lemma~\ref{lem:G_unique} with Lemma~\ref{lem:subseq_limit}. From these two results, it follows that, for any $S>0$, every subsequence of $\mathcal{M}_n (\delta)$ has a further subsequence that converges to $\cM_{\mathrm{RS}}(\delta)$. This holds because the further subsequence must converge to some function in $\mathcal{G}$ (by Lemma~\ref{lem:subseq_limit}) but there is only one function up to almost everywhere equivalence (by Lemma~\ref{lem:G_unique}). The final step is to realize that this is sufficient to prove that, for all $S>0$, we have \begin{equation} \lim_{n\to\infty} \int_0^S \left| \cM_{\mathrm{RS}}(\delta) - \mathcal{M}_n (\delta) \right| \mathrm{d} \delta = 0. \notag \end{equation} To see this, suppose that $\mathcal{M}_n (\delta)$ does not converge to $\cM_{\mathrm{RS}}(\delta)$ in $L^1 ([0,S])$. In this case, there is an $\epsilon >0$ and an infinite subsequence $n(i)$ such that \begin{equation} \int_0^S \left| \cM_{\mathrm{RS}}(\delta) - \mathcal{M}_{n(i)} (\delta) \right| \mathrm{d} \delta > \epsilon \notag \end{equation} for all $i\in \mathbb{N}$. But, applying Lemma~\ref{lem:subseq_limit} shows that $n(i)$ has a further subsequence (denoted by $n'$) such that \begin{equation} \lim_{n'} \int_0^S \left| \cM_{\mathrm{RS}}(\delta) - \mathcal{M}_{n'} (\delta) \right| \mathrm{d} \delta = 0. \notag \end{equation} From this contradiction, one must conclude that $\mathcal{M}_n (\delta)$ converges to $\cM_{\mathrm{RS}}(\delta)$ in $L^1 ([0,S])$ for any $S>0$.
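The single-crossing property is easy to check for priors whose fixed-point curve is monotone. The following sketch is an illustration only, again for the unit-variance Gaussian prior, where $\mathsf{mmse}_X^{-1}(z) = 1/z - 1$ (the function names are ours): it verifies numerically that $\delta_\mathrm{FP}(z) = (1+z)\,\mathsf{mmse}_X^{-1}(z)$ is strictly decreasing on $(0,1]$, so every $\delta$ in its range meets the curve exactly once and the single-crossing property holds trivially.

```python
def mmse_inv_gauss(z: float) -> float:
    # Functional inverse of mmse_X(s) = 1/(1+s) for a unit-variance Gaussian prior.
    return 1.0 / z - 1.0

def delta_fp(z: float) -> float:
    # Fixed-point curve: delta_FP(z) = (1 + z) * mmse_X^{-1}(z) = 1/z - z here.
    return (1.0 + z) * mmse_inv_gauss(z)

# delta_FP is strictly decreasing on a grid in (0, 1], so each delta in its
# range crosses the curve exactly once (no phase transition for this prior).
grid = [k / 1000.0 for k in range(1, 1001)]
values = [delta_fp(z) for z in grid]
print(all(a > b for a, b in zip(values, values[1:])))  # prints True
```

When $\delta_\mathrm{FP}$ is non-monotone, several fixed points can coexist at the same $\delta$, which is exactly the situation handled by Lemma~\ref{lem:delta_rs_alt}.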
\iffalse \begin{lemma} \label{lem:g_unique} If the signal distribution $P_X$ has the single-crossing property, then the set $\mathcal{G}$ equals the equivalence class of functions in $\mathcal{V}$ that are equal to $\cM_{\mathrm{RS}}$ almost everywhere. \end{lemma} \begin{proof} Using the single-crossing property, we first apply Lemma~?? to see that any $g\in \mV$ must satisfy either $g(\delta) \geq \cM_{\mathrm{RS}}(\delta)$ or $g(\delta)\leq \cM_{\mathrm{RS}}(\delta)$. If $g(\delta)$ is not (almost everywhere) equal to $\cM_{\mathrm{RS}}(\delta)$ on $[0,S]$, then \begin{equation} \int_0^S \left| g(\delta) - \cM_{\mathrm{RS}}(\delta) \right| > \epsilon. \notag \end{equation} Since $\frac{1}{2}\log(1+x)$ is Lipschitz on $[0,S]$, this also implies that \begin{equation} \int_0^S \left| \frac{1}{2} \log \left(1+g(\delta)\right) - \frac{1}{2} \log \left(1+\cM_{\mathrm{RS}}(\delta)\right) \right| > \epsilon. \notag \end{equation} int |f-g| > 0 upper/lower bound f-g either >=0 or <=0 thus ... If $g\in \mathcal{V}$ is not equal to $\cM_{\mathrm{RS}}(\delta)$ almost everywhere, then $g$ is either upper bounds or lower bounds $\cM_{\mathrm{RS}}(\delta)$ by Lemma~? \ednote{Galen will add reference}. In either case, there is an $\epsilon>0$ such that \[ \lim_{S\to \infty} \left| \cI_{\mathrm{RS}}(S) - \int_{0}^{S} \frac{1}{2}\log\big( 1+g(\delta) \big) \mathrm{d} \delta \right| > \epsilon \] for all $S > S_0$. \begin{equation} \label{eq:int_MR_vs_g} \lim_{S\to \infty} \left| \cI_{\mathrm{RS}}(S) - \int_{0}^{S} \frac{1}{2}\log\big( 1+\mathcal{M}_*(\delta) \big) \mathrm{d} \delta \right| = 0. \end{equation} there is an $S_0 < \infty$ and $\epsilon>0$ such that \[ \left| \cI_{\mathrm{RS}}(S) - \int_{0}^{S} \frac{1}{2}\log\big( 1+g(\delta) \big) \mathrm{d} \delta \right| > \epsilon \] for all $S > S_0$. 
Next, we will show that \begin{equation} \label{eq:int_Mstar_asymptotic} \lim_{S\to \infty} \left| \cI_{\mathrm{RS}}(S) - \int_{0}^{S} \frac{1}{2}\log\big( 1+\mathcal{M}_*(\delta) \big) \mathrm{d} \delta \right| = 0. \end{equation} Combined with the fact that $g\in \mathcal{V}$, this implies that $\mathcal{M}_* \in \mathcal{G}$. \end{proof} \begin{lemma} \label{lem:limit_contrapos} Let $f_n \in L^1 ([0,S])$ be a sequence of non-negative and non-increasing functions that are uniformly bounded by $C$. If every subsequence of $f_n$ contains a further subsequence that converges to $f\in L^1 ([0,S])$, then \[ \lim_n \int_0^S \big| f_n (\delta) - f(\delta) \big| \mathrm{d} \delta = 0. \] \end{lemma} \begin{proof} Since $f_n (\delta)$ is non-negative and non-increasing, its total variation on $[0,T]$ equals $f_n (0) - f_n(S) \leq C$. Based on this, the Helly Selection Theorem~\cite[Thm.~12]{Hanche-em10} shows that all subsequences of $f_n$ contain a further subsequence that converges in $L^1 ([0,S])$. Suppose, for the sake of contradiction, that $f_n$ does not converge to $f$. In this case, there is an $\epsilon>0$ and a subsequence $n(i)$ such that \[ \int_0^S \big| f_{n(i)} (\delta) - f(\delta) \big| \mathrm{d} \delta > \epsilon \] for all $i$. But, by hypothesis, $f_{n(i)}$ contains a further subsequence that converges to $f$. This contradicts the previous statement and, thus, we must conclude that $f_n$ converges to $f$. \end{proof} \begin{definition} Let $\mathcal{V}\subseteq \mathcal{D}_{0}$ be the subset such that $f\in \mathcal{V}$ satisfies the fixed-point equation $f(\delta)=\mathsf{mmse}_X \left(\delta / (1+f(\delta))\right)$ for all $\delta\geq0$. Notice that, for all $f\in\mathcal{V}$, we have $f(\delta)\in\mathrm{FP}(\delta)$, where \[ \mathrm{FP}(\delta) \triangleq \left\{ z\in \mathbb{R} \, \middle| \, (\delta,z) \in \mathrm{FP} \right\}. 
\] \end{definition} \fi \iffalse \begin{lemma} \label{lem:single_crossing_unique} If $P_{X}$ has the single-crossing property, then there is a $\delta^* \in \mathbb{R}$ such that \[ \cM_{\mathrm{RS}} (\delta)=\begin{cases} \max \mathrm{FP} (\delta) & \mbox{if }\delta<\delta^*\\ \min \mathrm{FP} (\delta) & \mbox{if }\delta>\delta^*. \end{cases} \] \end{lemma} \begin{proof} Since $\cM_{\mathrm{RS}}(\delta) \in \mathrm{FP} (\delta)$, this statement is trivial for all $\delta$ where $|\mathrm{FP}(\delta)|=1$. If $|\mathrm{FP}(\delta)|=1$ for all $\delta \in \mathbb{R}_+$, then the stated result holds with $\delta^* = 0$. Otherwise, there must be some $\delta$ where $|\mathrm{FP}(\delta)|>1$. To continue, we note that the fixed-point curve $\mathrm{FP}$ is also given by the graph of the function \[ \delta(z)= (1+z) \mathsf{mmse}_X^{-1} (z). \] Using this representation, the fixed points associated with $\delta=\delta_0$ can be seen as the level crossings of $\delta(z)$ at height $\delta_0$. From this point of view, the replica-MMSE is defined by a non-increasing function $\delta_{\mathrm{RS}}(z)$ that is constant when the replica-MMSE $\cM_{\mathrm{RS}}(\delta)$ jumps vertically. If the replica solution $\delta_{\mathrm{RS}}(z)$ is constant on $[z_1,z_2]$ (i.e., it decreases locally down to $z_1$, is constant from $z_1$ to $z_2$, and decreases locally from $z_2$), then one can show that \[ \int_{z_1}^{z_2} \delta(z) \mathrm{d} z = 0. \] By assumption, $\delta(z)$ is not constant on any interval. Together these properties show that $\delta(z)$ must intersect $\delta_{\mathrm{RS}}(z)$ at some $z^*\in (z_1,z_2)$. Combined with the single-crossing property, this implies implies that $\cM_{\mathrm{RS}}(\delta)$ crosses the fixed-point curve exactly once and we define the crossing point to be $(\delta^*,z^*)$. Moreover, we see that $|\mathrm{FP}(\delta^*)| = 3$. 
For $\delta<\delta^*$, the replica-MMSE must take the value of the largest fixed point $\cM_{\mathrm{RS}}(\delta) = \max \mathrm{FP}(\delta)$. For $\delta>\delta^*$, the replica-MMSE must take the value of the smallest fixed point $\cM_{\mathrm{RS}}(\delta) = \min \mathrm{FP}(\delta)$. \iffalse If there are $z_1 < z_2 < z_3 < z_4$ such that $\delta(z)$ is decreasing on $[z_1,z_2]$, increasing on $[z_2,z_3]$, and decreasing on $[z_3,z_4]$, then $\delta_{\mathrm{RS}}(z)$ must ``jump'' across the increasing section because $\delta_{\mathrm{RS}}(z)$ must be decreasing. It is apparent that $\delta_{\mathrm{RS}}(z)$ can only jump from points where $\delta(z)$ is decreasing to other points where $\delta(z)$ is decreasing. It may, however, pass through points where $\delta(z)$ is increasing. For any $\delta_0 \leq I_X (0)$, the first $\delta(z)$ level-crossing will always be downward. The next level crossing will always be upward, and so on. We will not consider level crossings at heights where $\delta(z)$ is tangent to $\delta=\delta_0$. If the single-crossing property holds, then there cannot be a point In the case of tangent, we will consider the point of contact as both an upward and downward level crossing. \fi \end{proof} \begin{lemma} If $f\in \mathcal{V}$ satisfies $\lim_{\delta \uparrow \delta^*} f(\delta) < \max \mathrm{FP}(\delta^*)$, then \[ \int_0^\infty \left( I_X (\delta) -\frac{1}{2}\log (1+f(\delta)) \right) \mathrm{d} \delta > 0. \] Similarly, if $f\in \mathcal{V}$ satisfies $\lim_{\delta \downarrow \delta^*} f(\delta) > \min \mathrm{FP}(\delta^*)$, then \[ \int_0^\infty \left( I_X (\delta) -\frac{1}{2}\log (1+f(\delta)) \right) \mathrm{d} \delta < 0. \] \end{lemma} \begin{proof} hi \end{proof} Next, we note that the fixed-point curve $\mathrm{FP}$ is also given by the graph of the function \[ \delta(z)= (1+z) \mathsf{mmse}_X^{-1} (z). 
\] Using this representation, the fixed points associated with $\delta=\delta^*$ can be seen as the level crossings of $\delta(z)$. For any $\delta^* \leq I_X (0)$, the first $\delta(z)$ level-crossing will always be downward. The next level crossing will always be upward, and so on. In the case of tangency, we will consider the point of contact as both an upward and downward level crossing. If the replica-MMSE $\cM_{\mathrm{RS}} (\delta)$ has the single-crossing property, then either (i) $\cM_{\mathrm{RS}}(\delta)$ does not cross the fixed-point curve and there is a unique non-decreasing function in $\mathcal{V}$ or (ii) $\cM_{\mathrm{RS}}(\delta)$ crosses the fixed-point curve once. In the first, $\cM_{\mathrm{RS}} (\delta)$ crosses can be written as a function of $z$ \begin{lemma} \label{lem:single_crossing} \end{lemma} \begin{lemma} \label{lem:replica_mmse_set2} If $P_{X}$ has the single-jump property, then all $f\in\mathcal{V}$ satisfy $f(\delta)\in\left\{ \underline{\mathrm{FP}}(\delta),\overline{\mbox{\ensuremath{\mathrm{FP}}}}(\delta)\right\} $ and the elements of $\mathcal{V}$ satisfy \[ V_{\rho}(\delta)=\begin{cases} \overline{\mbox{\ensuremath{\mathrm{FP}}}}(\delta) & \mbox{if }\delta<\rho\\ \underline{\mathrm{FP}}(\delta) & \mbox{if }\delta>\rho. \end{cases} \] for some $\rho\in\mathcal{Z}$, where $\mathcal{Z}$ is the set of $\delta$ values at which the fixed-point curve is not uniquely defined. In particular, the set is given by \[ \mathcal{V}=\cup_{\rho\in\mathcal{Z}}V_{\rho}. \] \end{lemma} \begin{lemma} For a signal distribution $P_{X}$ with the single-jump property, the replica prediction of the MMSE phase transition (if there is one) is $\rho^{*}=\lim_{\delta\to\infty}\rho(\delta)$, where $\rho(\delta)$ is the unique $\rho$-root of the equation \[ I_{X}(\delta)-\frac{1}{2}\int_{0}^{\delta}\log\left(1+V_{\rho}(\lambda)\right)\mathrm{d}\lambda=0.
\] \end{lemma} \fi \section{Conclusion} In this paper, we present a rigorous derivation of the fundamental limits of compressed sensing for i.i.d.\ signal distributions and i.i.d.\ Gaussian measurement matrices. We show that the limiting MI and MMSE are equal to the values predicted by the replica method from statistical physics. This resolves a well-known open problem. \appendices \section{Useful Results} \subsection{Basic Inequalities} We begin by reviewing a number of basic inequalities. For numbers $x_1, \ldots, x_n$ and $p\ge 1$, Jensen's inequality combined with the convexity of $|\cdot|^p$ yields \begin{align} \left| \sum_{i =1}^n x_i \right|^p & \le n^{p-1} \sum_{i =1}^n \left| x_i \right|^p, \qquad p \ge 1. \label{eq:LpBound_1} \end{align} In the special case $n =2$ and $p \in \{2,4\}$, we obtain \begin{align} (a+b)^2 & \le 2 (a^2 + b^2) \label{eq:ab2}\\ (a+b)^4 & \le 8 (a^4 + b^4) \label{eq:ab4}. \end{align} For random variables $X_1, \ldots, X_n$ and $p \ge 1$, a consequence of Minkowski's inequality \cite[Theorem 2.16]{boucheron:2013} is that \begin{align} \ex{ \left| \sum_{i =1}^n X_i \right|^p} & \le \left( \sum_{i =1}^n \left( \ex{ \left| X_i \right|^p } \right)^\frac{1}{p} \right)^p, \qquad p \ge 1. \label{eq:Minkowski} \end{align} Also, for random variables $X$ and $Y$, an immediate consequence of Jensen's inequality is that the expectation of the absolute difference $|X-Y|$ can be upper bounded in terms of higher moments, i.e., \begin{align} \ex{|X-Y|} \le \left | \ex{|X - Y|^p}\right|^{\frac{1}{p}}, \qquad p \ge 1. \notag \end{align} Sometimes, we need to bound the difference in terms of weaker measures of deviation between $X$ and $Y$. The following lemma provides two such bounds that also depend on the moments of $X$ and $Y$.
\begin{lemma}\label{lem:L1_to_Log} For nonnegative random variables $X$ and $Y$, the expectation of the absolute difference $|X-Y|$ obeys the following upper bounds: \begin{align} \ex{ \left | X - Y \right|} &\le \sqrt{ \frac{1}{2} (\ex{X^2} + \ex{Y^2}) \ex{ \left | \log(X/Y) \right|}} \label{eq:abs_dev_bnd_log}\\ \ex{ \left | X - Y \right|} &\le \sqrt{ 2 (\ex{X} + \ex{Y}) \ex{ \left | \sqrt{X} - \sqrt{Y} \right|}}. \label{eq:abs_dev_bnd_sqrt} \end{align} \end{lemma} \begin{proof} We begin with \eqref{eq:abs_dev_bnd_log}. For any numbers $0 < x < y$, the difference $y-x$ can be upper bounded as follows: \begin{align} y-x &= \int_x^y \sqrt{u} \frac{1}{\sqrt{u}} du \notag\\ & \le \sqrt{ \int_x^y u\, du} \sqrt{ \int_x^y \frac{1}{u} du} \notag\\ & = \sqrt{ \frac{1}{2} (y^2 - x^2) } \sqrt{ \log(y/x) } \notag\\ & \le \sqrt{ \frac{1}{2} (y^2 + x^2) } \sqrt{ \log(y/x) }, \notag \end{align} where the first inequality is due to the Cauchy-Schwarz inequality. Thus, the absolute difference between $X$ and $Y$ obeys \begin{align} |X-Y| & \le \sqrt{ \frac{1}{2} (X^2 + Y^2)} \sqrt{ \left | \log(X/Y) \right|}. \notag \end{align} Taking the expectation of both sides and using the Cauchy-Schwarz inequality leads to \eqref{eq:abs_dev_bnd_log}. To prove \eqref{eq:abs_dev_bnd_sqrt}, observe that the difference between $X$ and $Y$ can be decomposed as \begin{align} X - Y = (\sqrt{X} + \sqrt{Y}) (\sqrt{X} - \sqrt{Y}). \notag \end{align} Thus, by the Cauchy-Schwarz inequality, \begin{align} \ex{ |X - Y|} &\le \sqrt{ \ex{ (\sqrt{X} + \sqrt{Y})^2}} \sqrt{ \ex{ (\sqrt{X} - \sqrt{Y})^2}} \notag\\ & \le \sqrt{ 2( \ex{X} + \ex{Y})} \sqrt{ \ex{ (\sqrt{X} - \sqrt{Y})^2}}, \notag \end{align} where the last step is due to \eqref{eq:ab2}. \end{proof} \subsection{Variance Decompositions} This section reviews some useful decompositions and bounds on the variance.
As a starting point, observe that the variance of a random variable $X$ can be expressed in terms of an independent copy $X'$ according to \begin{align} \var(X) = \frac{1}{2} \ex{ (X - X')^2}. \notag \end{align} This representation can be extended to the conditional variance by letting $X_y$ and $X'_y$ denote independent draws from the conditional distribution $P_{X|Y= y}$, so that \begin{align} \var(X \! \mid \! Y = y) = \frac{1}{2} \ex{ (X_y - X'_y)^2}. \notag \end{align} For a random draw of $Y$, it then follows that the random conditional variance of $X$ given $Y$ can be expressed as \begin{align} \var(X\! \mid \! Y) = \frac{1}{2} \ex{ (X_Y - X'_Y)^2 \! \mid \! Y }, \label{eq:cond_var_decomp} \end{align} where $X_Y$ and $X'_Y$ are conditionally independent draws from the random conditional distribution $P_{X|Y}(\cdot | Y)$. Using this representation, the moments of the conditional variance can be bounded straightforwardly. For all $p \ge 1$, \begin{align} \ex{ \left|\var(X | Y) \right|^p} & = \frac{1}{2^p}\ex{ \left| \ex{ (X_Y - X'_Y)^2 \mid Y} \right|^p} \notag\\ & \overset{(a)}{\le} \frac{1}{2^p}\ex{ \left| X_Y - X'_Y \right|^{2p}} \notag\\ & \overset{(b)}{\le} 2^{ p -1} \left( \ex{ \left| X_Y\right|^{2 p}} + \ex{ \left| X'_Y \right|^{2 p}} \right) \notag\\ & \overset{(c)}{=} 2^{ p} \ex{\left|X\right|^{2p}}, \notag \end{align} where (a) follows from Jensen's inequality and the convexity of $|\cdot|^p$, (b) follows from \eqref{eq:LpBound_1}, and (c) follows from the fact that $X_Y$ and $X'_Y$ both have the same distribution as $X$. The law of total variance gives \begin{align} \var(X) = \ex{ \var(X \! \mid \! Y)} + \var( \ex{ X \! \mid \! Y}). \label{eq:law_tot_var} \end{align} As an immediate consequence, we obtain the data processing inequality for MMSE (see e.g.\ \cite[Proposition 4]{rioul:2011}), which states that further degradation of the observation cannot decrease the MMSE. In particular, if $X \to Y \to Z$ form a Markov chain, then \begin{align} \mathsf{mmse}(X \!
\mid \! Y) \le \mathsf{mmse}(X \! \mid \! Z). \notag \end{align} \subsection{Bounds using KL Divergence} \label{sec:cov_to_MI} This section provides a number of results that allow us to bound differences in expectations in terms of Kullback--Leibler divergence. One of the consequences of Lemma~\ref{lem:integral_bound} (given below) is that random variables $X \sim P_X$ and $Y \sim P_Y$ with positive and finite second moments satisfy \begin{align} \frac{ \left| \ex{X} - \ex{Y}\right|}{ \sqrt{2 \ex{X^2} +2\ex{Y^2}}} \le \sqrt{ \DKL{P_X}{P_Y}}. \notag \end{align} We begin by reviewing some basic definitions (see e.g., \cite[Section 3.3]{pollard:2002}). Let $P$ and $Q$ be probability measures with densities $p$ and $q$ with respect to a dominating measure $\lambda$. The Hellinger distance $d_H( P, Q)$ is defined as the $\mathcal{L}_2$ distance between the square roots of the densities $\sqrt{p}$ and $\sqrt{q}$, and the squared Hellinger distance is given by \begin{align} d^2_H( P, Q) & = \int \left( \sqrt{p} - \sqrt{q} \right )^2 \mathrm{d} \lambda. \notag \end{align} The Kullback--Leibler divergence (also known as relative entropy) is defined as \begin{align} D_\mathrm{KL}( P\, \| \, Q) & = \int p \log\left(\frac{ p}{q }\right) \mathrm{d} \lambda . \notag \end{align} The squared Hellinger distance is upper bounded by the KL divergence \cite[pg. 62]{pollard:2002}, \begin{align} d^2_H(P, Q) \le D_\mathrm{KL}( P\, \| \, Q) \label{eq:H_to_KL}. \end{align} \begin{lemma}\label{lem:integral_bound} Let $f$ be a function that is measurable with respect to $P$ and $Q$. Then \begin{align} \MoveEqLeft \left| \int f\left( \mathrm{d} P -\mathrm{d} Q \right) \right| \notag\\ & \le \sqrt{ 2 \int f^2 \left( \mathrm{d} P + \mathrm{d} Q \right) } \min\left\{ \sqrt{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right)}, 1 \right\}. \notag \end{align} \end{lemma} \begin{proof} Let $p$ and $q$ be the densities of $P$ and $Q$ with respect to a dominating measure $\lambda$.
Then, we can write \begin{align} \MoveEqLeft \left| \int f ( \mathrm{d} P -\mathrm{d} Q) \right| \notag\\ & = \left| \int f ( p - q ) \mathrm{d} \lambda \right| \notag \\ & \overset{(a)}{=} \left| \int f (\sqrt{p} + \sqrt{q}) ( \sqrt{p} -\sqrt{q}) \mathrm{d} \lambda \right| \notag \\ & \overset{(b)}{\le} \sqrt{ \int f^2 (\sqrt{p} + \sqrt{q})^2 \mathrm{d} \lambda \, d^2_H(P, Q) } \notag \\ & \overset{(c)}{\le} \sqrt{ \int f^2 (\sqrt{p} + \sqrt{q})^2 \mathrm{d} \lambda \, \DKL{P}{Q} } \notag\\ & \overset{(d)}{\le} \sqrt{2 \int f^2 (p + q) \mathrm{d} \lambda \, \DKL{P}{Q} }, \label{eq:integral_bound_b} \end{align} where (a) is justified by the non-negativity of the densities, (b) follows from the Cauchy-Schwarz inequality, (c) follows from \eqref{eq:H_to_KL}, and (d) follows from \eqref{eq:ab2}. Alternatively, we also have the upper bound \begin{align} \left| \int f \left( \mathrm{d} P -\mathrm{d} Q \right) \right| \notag & \overset{(a)}{\le} \left| \int f \mathrm{d} P \right| + \left| \int f \mathrm{d} Q \right| \notag\\ & \overset{(b)}{\le} \sqrt{ \int f^2 \mathrm{d} P} + \sqrt{ \int f^2 \mathrm{d} Q} \notag \\ & \overset{(c)}{\le} \sqrt{2 \int f^2 \left( \mathrm{d} P +\mathrm{d} Q \right)} , \label{eq:integral_bound_c} \end{align} where (a) follows from the triangle inequality, (b) follows from Jensen's inequality, and (c) follows from \eqref{eq:ab2}. Taking the minimum of \eqref{eq:integral_bound_b} and \eqref{eq:integral_bound_c} leads to the stated result. \end{proof} \begin{lemma}\label{lem:var_bnd} For any distribution $P_{X, Y, Z}$ and $p \ge 1$, \begin{align} \MoveEqLeft \ex{ \left| \var(X \! \mid \! Y) - \var(X \! \mid \! Z) \right|^p} \notag \\ & \le 2^{2p + \frac{1}{2} } \sqrt{ \ex{ |X|^{4p} }\ex{ \DKL{ P_{X|Y}}{ P_{X|Z} }}}.
\notag \end{align} \end{lemma} \begin{proof} Let $P$ and $Q$ be the random probability measures on $\mathbb{R}^2$ defined by \begin{align} P & =P_{X \mid Y} \times P_{X \mid Y} \notag\\ Q & =P_{X \mid Z} \times P_{X \mid Z}, \notag \end{align} and let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by $f(x_1, x_2) = \frac{1}{2} (x_1 - x_2)^2$. Then, by the variance decomposition \eqref{eq:cond_var_decomp}, we can write \begin{align} \var(X \! \mid \! Y) & = \int f \mathrm{d} P , \quad \var(X \! \mid \! Z) = \int f \mathrm{d} Q. \notag \end{align} Furthermore, by the upper bound $f^2(x_1, x_2) \le 2 (x_1^4 + x_2^4)$, the expectation of $f^2$ satisfies \begin{align} \int f^2 \left( \mathrm{d} P + \mathrm{d} Q \right) \le 4 \ex{ X^4 \!\mid\! Y} + 4 \ex{ X^4 \! \mid \! Z} .\label{eq:var_bnd_a} \end{align} Therefore, by Lemma~\ref{lem:integral_bound}, the difference between the conditional variances satisfies \begin{align} \MoveEqLeft \left| \var(X \! \mid \! Y) - \var(X \! \mid \! Z) \right| \notag \\ & \le \sqrt{ 2 \int f^2 \left( \mathrm{d} P + \mathrm{d} Q \right) } \sqrt{ \min\left\{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right), 1 \right\}} \notag\\ & \le \sqrt{ 8 \ex{ X^4 \! \mid \! Y} + 8 \ex{ X^4 \! \mid \! Z} } \sqrt{ \min\left\{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right), 1 \right\}}, \label{eq:var_bnd_b} \end{align} where the second inequality follows from \eqref{eq:var_bnd_a}. The next step is to bound the expected $p$-th power of the right-hand side of \eqref{eq:var_bnd_b}. Starting with the Cauchy-Schwarz inequality, we have \begin{align} \MoveEqLeft \ex{ \left| \var(X \mid Y) - \var(X \mid Z) \right|^p} \notag \\ & \le \sqrt{ \ex{ \left| 8 \ex{ X^4 \! \mid \! Y} + 8 \ex{ X^4 \! \mid \! Z} \right|^p}} \notag\\ & \quad \times \sqrt{ \ex{ \left| \min\left\{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right) , 1 \right\} \right|^p } }.
\label{eq:var_bnd_c} \end{align} For the first term on the right-hand side of \eqref{eq:var_bnd_c}, observe that, by \eqref{eq:LpBound_1} and Jensen's inequality, \begin{align} \sqrt{ \ex{ \left| 8 \ex{ X^4 \! \mid \! Y} + 8 \ex{ X^4 \! \mid \! Z} \right|^p} } & \le \sqrt{ 2^{p} \, 8^p \ex{ X^{4p} } } \notag\\ & = 4^p \sqrt{ \ex{X^{4p} }} . \label{eq:var_bnd_d} \end{align} Meanwhile, the expectation in the second term on the right-hand side of \eqref{eq:var_bnd_c} satisfies \begin{align} \ex{ \left| \min\left\{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right), 1 \right\}\right|^{p}} & \le \ex{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right)} \notag \\ & = 2 \ex{D_\mathrm{KL}\left( P_{X|Y} \, \middle \| \, P_{X|Z} \right)} , \label{eq:var_bnd_e} \end{align} where the second step follows from the definition of $P$ and $Q$. Plugging \eqref{eq:var_bnd_d} and \eqref{eq:var_bnd_e} back into \eqref{eq:var_bnd_c} leads to the stated result. \end{proof} \begin{lemma}\label{lem:cov_bnd} For any distribution $P_{X,Y,Z}$ and $p \ge 1$, \begin{align} \MoveEqLeft \ex{ \left| \cov(X, Y \! \mid \! Z ) \right|^p } \notag \\ &\le 2^{p} \sqrt{ \ex{ \left| \ex{X^4 \! \mid \! Z} \ex{Y^4 \! \mid \! Z} \right|^\frac{p}{2}} I(X; Y\! \mid \! Z) }. \notag \end{align} \end{lemma} \begin{proof} Let $P$ and $Q$ be the random probability measures on $\mathbb{R}^2$ defined by \begin{align} P & =P_{X, Y \mid Z} \notag\\ Q & =P_{X \mid Z} \times P_{Y \mid Z}, \notag \end{align} and let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by $f(x,y) = x y$. Then, the conditional covariance between $X$ and $Y$ can be expressed as \begin{align} \cov(X, Y \! \mid \! Z ) & = \int f \left( \mathrm{d} P - \mathrm{d} Q \right). \notag \end{align} Furthermore, \begin{align} \int f^2 \left( \mathrm{d} P + \mathrm{d} Q \right) & = \ex{ |X Y|^2 \mid Z} + \ex{X^2 \! \mid \! Z} \ex{ Y^2 \! \mid \! Z} \notag\\ & \le 2 \sqrt{ \ex{X^4 \! \mid \! Z} \ex{ Y^4 \! \mid \!
Z} }, \notag \end{align} where the second step follows from the Cauchy-Schwarz inequality and Jensen's inequality. Therefore, by Lemma~\ref{lem:integral_bound}, the magnitude of the covariance satisfies \begin{align} \MoveEqLeft \left| \cov(X, Y \! \mid \! Z ) \right| \notag\\ & \le \sqrt{ 2 \int f^2 \left( \mathrm{d} P + \mathrm{d} Q \right) } \sqrt{ \min\left\{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right), 1 \right\}} \notag\\ & \le 2 \left|\ex{X^4 \! \mid \! Z} \ex{ Y^4 \!\mid \!Z} \right|^\frac{1}{4} \left| \min\left\{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right),1 \right\}\right|^{\frac{1}{2}}. \label{eq:cov_bnd_b} \end{align} The next step is to bound the expected $p$-th power of the right-hand side of \eqref{eq:cov_bnd_b}. Starting with the Cauchy-Schwarz inequality, we have \begin{align} \ex{ \left| \cov(X, Y \mid Z ) \right|^p } \notag &\le 2^p \sqrt{ \ex{ \left| \ex{X^4 \! \mid \! Z} \ex{Y^4 \! \mid \! Z} \right|^\frac{p}{2}}} \notag\\ & \quad \times \sqrt{ \ex{ \left| \min\left\{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right) , 1 \right\} \right|^p } }. \label{eq:cov_bnd_c} \end{align} Note that the expectation in the second term on the right-hand side of \eqref{eq:cov_bnd_c} satisfies \begin{align} \ex{ \left| \min\left\{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right), 1 \right\}\right|^{p} } & \le \ex{ D_\mathrm{KL}\left( P \, \middle \| \, Q \right)} \notag \\ & = I(X; Y \! \mid \! Z) , \label{eq:cov_bnd_e} \end{align} where the second step follows from the definition of $P$ and $Q$. Plugging \eqref{eq:cov_bnd_e} back into \eqref{eq:cov_bnd_c} leads to the stated result. \end{proof} \begin{lemma}\label{lem:mmse_diff} For any distributions $P_{X,Y}$ and $P_{X,Z}$, \begin{align} \MoveEqLeft \left| \mathsf{mmse}(X \! \mid \! Y) - \mathsf{mmse}(X \! \mid \! Z) \right| \notag\\ & \le 2^{\frac{5}{2}} \sqrt{ \ex{ |X|^{4}} \, D_\mathrm{KL}(P_{X,Y} \, \| \, P_{X,Z}) }. 
\notag \end{align} \end{lemma} \begin{proof} Let $P$ and $Q$ be the distributions given by \begin{align} P(x_1,x_2,y) &= P_{X\mid Y}(x_1 \mid y) P_{X\mid Y}(x_2 \mid y) P_{Y}(y) \notag\\ Q(x_1,x_2,y) &= P_{X\mid Z}(x_1 \mid y) P_{X\mid Z}(x_2 \mid y) P_{Z}(y) \notag \end{align} Then, \begin{align} \mathsf{mmse}(X \! \mid \! Y) = \frac{1}{2} \int (x_1 - x_2)^2 \mathrm{d} P(x_1,x_2,y) \notag\\ \mathsf{mmse}(X \! \mid \! Z) = \frac{1}{2} \int (x_1 - x_2)^2 \mathrm{d} Q(x_1,x_2,y) \notag \end{align} and so, by Lemma~\ref{lem:integral_bound}, \begin{align} \MoveEqLeft \left| \mathsf{mmse}(X \! \mid \! Y) - \mathsf{mmse}(X \! \mid \! Z) \right| \notag\\ & \le \sqrt{ \frac{1}{2} \! \int (x_1 \!-\! x_2)^4 \left(\mathrm{d} P(x_1,x_2,y) + \mathrm{d} Q(x_1, x_2, y) \right) } \notag \\ & \quad \times \sqrt{ D_\mathrm{KL}(P \, \| \, Q )} . \label{eq:mmse_diff_b} \end{align} For the first term on the right-hand side of \eqref{eq:mmse_diff_b}, observe that \begin{align} \MoveEqLeft \int (x_1 \!-\! x_2)^4 \left(\mathrm{d} P(x_1,x_2,y) + \mathrm{d} Q(x_1, x_2, y) \right) \notag\\ & \overset{(a)}{\le} 8 \int (x^4_1 + x^4_2) \left(\mathrm{d} P(x_1,x_2,y) + \mathrm{d} Q(x_1, x_2, y) \right) \notag \\ & \overset{(b)}{=} 32 \, \ex{ |X|^4}, \label{eq:mmse_diff_c} \end{align} where (a) follows from \eqref{eq:ab4} and (b) follows from the fact that the marginal distributions of $X_1$ and $X_2$ are identical under $P$ and $Q$. For the second term on the right-hand side of \eqref{eq:mmse_diff_b}, observe that $P$ and $Q$ can be expressed as \begin{align} P(x_1,x_2,y) &= \frac{ P_{X , Y}(x_1,y) P_{X, Y}(x_2, y) }{ P_{Y}(y) } \notag\\ Q(x_1,x_2,y) &= \frac{ P_{X , Z}(x_1,y) P_{X, Z}(x_2, y) }{ P_{Z}(y) }. 
\notag \end{align} Letting $(X_1, X_2, Y) \sim P$, we see that the Kullback--Leibler divergence satisfies \begin{align} D_\mathrm{KL}( P \| Q) & = \ex{ \log\left( \frac{ P_{X , Y}(X_1,Y) P_{X, Y}(X_2, Y) P_{Z}(Y)}{ P_{X , Z}(X_1,Y) P_{X, Z}(X_2, Y) P_{Y}(Y) } \right)} \notag\\ & =2 D_\mathrm{KL}\left( P_{X,Y} \| P_{X,Z} \right) - D_\mathrm{KL}( P_Y \| P_Z) \notag \\ & \le 2 D_\mathrm{KL}\left( P_{X,Y} \| P_{X,Z} \right) . \label{eq:mmse_diff_d} \end{align} Plugging \eqref{eq:mmse_diff_c} and \eqref{eq:mmse_diff_d} back into \eqref{eq:mmse_diff_b} completes the proof of Lemma~\ref{lem:mmse_diff}. \end{proof} \section{Proofs of Results in Section~\ref{sec:MI_MMSE_bounds}} \subsection{Proof of Lemma~\ref{lem:mmseX_bound}} \label{proof:lem:mmseX_bound} Let $Y = \sqrt{s} X+ W$ where $X \sim P_X$ and $W \sim \mathcal{N}(0,1)$ are independent. Letting $X_Y$ and $X'_Y$ denote conditionally independent draws from $P_{X|Y}$, the conditional variance can be expressed as $\var(X \! \mid \! Y) = \frac{1}{2} \ex{ (X_Y - X'_Y)^2 \! \mid \! Y}$. Therefore, \begin{align} \left | \frac{\mathrm{d}}{\mathrm{d} s} \mathsf{mmse}_X(s) \right| & \overset{(a)}{=} \ex{ \left(\var(X \! \mid \! Y) \right)^2} \notag\\ &= \frac{1}{4} \ex{ \left( \ex{ (X_Y -X'_Y)^2 \! \mid \! Y } \right)^2} \notag\\ &\overset{(b)}{\le} \frac{1}{4} \ex{ \left( X_Y -X'_Y \right)^4} \notag\\ & \overset{(c)}{\le} 4\ex{ X^4}, \notag \end{align} where (a) follows from \eqref{eq:mmseXp}, (b) follows from Jensen's inequality, and (c) follows from \eqref{eq:ab4} and the fact that $X_Y$ and $X'_Y$ have the same distribution as $X$. This completes the proof of \eqref{eq:mmseX_smooth_a}. Next, since $\ex{W \! \mid \! Y} = Y - \sqrt{s} \ex{ X\! \mid \! Y}$, the conditional variance can also be expressed as \begin{align} \var(X \! \mid \! Y) = \frac{1}{s} \var(W \! \mid \! Y).
\notag \end{align} Using the same argument as above leads to \begin{align} \left | \frac{\mathrm{d}}{\mathrm{d} s} \mathsf{mmse}_X(s) \right| & \le \frac{ 4\ex{ W^4}}{s^2}= \frac{12}{s^2} . \notag \end{align} This completes the proof of \eqref{eq:mmseX_smooth_b}. \subsection{Proof of Lemma~\ref{lem:Ip_and_Ipp}} \label{proof:lem:Ip_and_Ipp} The first order MI difference can be decomposed as \begin{align} I'_{m,n} & = h(Y_{m+1} \! \mid \! Y^m, A^{m+1}) - h(Y_{m+1} \! \mid \! Y^m, A^{m+1}, X^n), \notag \end{align} where the differential entropies are guaranteed to exist because of the additive Gaussian noise. The second term is given by the entropy of the noise, \begin{align} h(Y_{m+1} \! \mid \! Y^m, A^{m+1}, X^n) = h(W_{m+1}) = \frac{1}{2} \log(2 \pi e), \notag \end{align} and thus does not depend on $m$. Using this decomposition, we can now write \begin{align} I''_{m,n} & = h(Y_{m+2} \! \mid \! Y^{m+1}, A^{m+2}) - h(Y_{m+1} \! \mid \! Y^m, A^{m+1}) \notag\\ & = h(Y_{m+2} \! \mid \! Y^{m+1}, A^{m+2}) - h(Y_{m+2} \! \mid \! Y^m, A^{m+2}) \notag\\ & = - I(Y_{m+1} ; Y_{m+2} \! \mid \! Y^m, A^{m+2}), \notag \end{align} where the second step follows from the fact that, conditioned on $(Y^m, A^m)$, the new measurement pairs $(Y_{m+1},A_{m+1})$ and $(Y_{m+2}, A_{m+2})$ are identically distributed. \subsection{Proof of Lemma~\ref{lem:Im_bounds}}\label{proof:lem:Im_bounds} We first consider the upper bound in \eqref{eq:Im_bounds}. Starting with the chain rule for mutual information, we have \begin{align} I(X^n ; Y^m \mid A^m) &= \sum_{i=1}^n I(X_i ; Y^m \mid A^m, X^{i-1} ).
\label{eq:Im_UB_b} \end{align} Next, observe that each summand satisfies \begin{align} \MoveEqLeft I(X_i ; Y^m \mid A^m, X^{i-1} ) \notag \\ & \overset{(a)}{\le} I( X_{i} ; X_{i+1}^n , Y^m \mid A^m, X^{i-1} ) \notag\\ & \overset{(b)}{=} I( X_{i} ; Y^m \mid A^m, X^{i-1} , X_{i+1}^n ), \label{eq:Im_UB_c} \end{align} where (a) follows from the data processing inequality and (b) follows from expanding the mutual information using the chain rule and noting that $I( X_{i} ; X_{i+1}^n \mid A^m, X^{i-1})$ is equal to zero because the signal entries are independent. Conditioned on data $(A^m, X^{i-1}, X_{i+1}^n)$, the mutual information provided by $Y^m$ is equivalent to the mutual information provided by the measurement vector \begin{align} Y^m - \ex{ Y^m \mid A^m, X^{i-1}, X_{i+1}^n} & = A^m(i) X_i + W^m, \notag \end{align} where $A^m(i)$ is the $i$-th column of $A^m$. Moreover, by the rotational invariance of the Gaussian distribution of the noise, the linear projection of this vector in the direction of $A^m(i)$ contains all of the information about $X_i$. This projection can be expressed as \begin{align} \frac{ \langle A^m(i),A^m(i) X_i + W^m \rangle }{ \| A^m(i)\|} = \|A^m(i) \| X_i + W, \notag \end{align} where $W = \langle A^m(i), W^m \rangle / \| A^m(i)\|$ is Gaussian $\mathcal{N}(0,1)$ and independent of $X_i$ and $A^m(i)$, by the Gaussian distribution of $W^m$. Therefore, the mutual information obeys \begin{align} \MoveEqLeft I( X_{i} ; Y^m \! \mid \! A^m, X^{i-1} , X_{i+1}^n ) \notag\\ & = I\left(X_i ; \|A^m(i) \| X_i + W \; \middle | \; \| A^m(i) \| \right ) \notag \\ & = \ex{ I_X\left( \|A^m(i) \|^2 \right ) } \notag\\ & = \ex{ I_X\left( \tfrac{1}{n} \chi^2_m \right ) } , \label{eq:Im_UB_d} \end{align} where the last step follows from the fact that the entries of $A^m(i)$ are i.i.d.\ Gaussian $\mathcal{N}(0, 1/n)$. Combining \eqref{eq:Im_UB_b}, \eqref{eq:Im_UB_c}, and \eqref{eq:Im_UB_d} gives the upper bound in \eqref{eq:Im_bounds}. 
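The scalar-channel reduction used in \eqref{eq:Im_UB_d} is easy to sanity-check numerically. The following Python sketch (illustrative only; the variable names and sample sizes are our own) verifies that the projected noise $\langle A^m(i), W^m \rangle / \|A^m(i)\|$ is standard normal and uncorrelated with $\|A^m(i)\|$, and that $\|A^m(i)\|^2$ has mean $m/n$, consistent with the $\frac{1}{n} \chi^2_m$ law:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 8, 4, 200_000

# Columns a ~ N(0, I_m / n) and noise w ~ N(0, I_m), as in the proof.
a = rng.normal(0.0, 1.0 / np.sqrt(n), size=(trials, m))
w = rng.normal(size=(trials, m))

norm_a = np.linalg.norm(a, axis=1)
# Projected noise W = <a, w> / ||a||: should be N(0, 1), independent of ||a||.
w_proj = np.einsum('ij,ij->i', a, w) / norm_a

assert abs(w_proj.mean()) < 0.02
assert abs(w_proj.var() - 1.0) < 0.02
assert abs(np.corrcoef(norm_a, w_proj)[0, 1]) < 0.02
# ||a||^2 ~ chi^2_m / n, so its mean should be m / n.
assert abs((norm_a ** 2).mean() - m / n) < 0.02
```

The rotational invariance of the Gaussian noise is what makes the projected noise independent of the column norm; the same mechanism is used again in the QR-based argument below.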
Along similar lines, the lower bound in \eqref{eq:Mm_bounds} follows from \begin{align} M_{m,n} & \triangleq \frac{1}{n} \mathsf{mmse}( X^{n} \! \mid \! Y^m, A^m) \notag\\ & \overset{(a)}{=} \mathsf{mmse}( X_{n} \! \mid \! Y^m, A^m) \notag \\ & \overset{(b)}{\ge } \mathsf{mmse}(X_n \! \mid \! Y^m, A^m, X^{n-1} ) \notag\\ & \overset{(c)}{= } \mathsf{mmse}(X_n \! \mid \! A^m(n) X_n + W^m , A^m(n) ) \notag\\ & \overset{(d)}{= } \mathsf{mmse}\left(X_n \; \middle | \; \|A^m(n) \| X_n + W, \| A^m(n) \| \right ) \notag\\ & \overset{(e)}{=} \ex{ \mathsf{mmse}_X\left( \tfrac{1}{n} \chi^2_{m} \right)}, \notag \end{align} where (a) follows from the fact that the distributions of the columns of $A^m$ and entries of $X^n$ are permutation invariant, (b) follows from the data processing inequality for MMSE, (c) follows from the independence of the signal entries, (d) follows from the Gaussian distribution of the noise $W^m$, and (e) follows from the distribution of $A^m(n)$. We now turn our attention to the lower bound in \eqref{eq:Im_bounds}. Let the QR decomposition of $A^m$ be given by \begin{align} A^m = Q R , \notag \end{align} where $Q$ is an $m \times m$ orthogonal matrix and $R$ is an $m \times n$ upper triangular matrix. Under the assumed Gaussian distribution on $A^m$, the nonzero entries of $R$ are independent random variables with \cite[Theorem~2.3.18]{gupta:1999}: \begin{align} R_{i,i}^2 \sim \tfrac{1}{n} \chi^2_{m-i+1}, \qquad R_{i,j} \sim \mathcal{N}\left(0, \tfrac{1}{n} \right) \text{ for } i < j. \label{eq:R_dist} \end{align} Let the rotated measurements and noise be defined by \begin{align} \widetilde{Y}^m & = Q^T Y^ m, \qquad \widetilde{W}^m = Q^T W^m \notag \end{align} and observe that \begin{align} \widetilde{Y}^m & =R X^n + \widetilde{W}^m.
\notag \end{align} By the rotational invariance of the Gaussian distribution of the noise $W^m$, the rotated noise $\widetilde{W}^m$ is Gaussian $\mathcal{N}(0,I_{m \times m} )$ and independent of everything else. Therefore, only the first $d\triangleq \min(m,n)$ measurements provide any information about the signal. Using this notation, the mutual information can be expressed equivalently as \begin{align} I(X^n ; Y^m \mid A^m) & = I(X^n ; \widetilde{Y}^d \mid R ) \notag\\ & = \sum_{k=1}^d I(X^n ; \widetilde{Y}_k \mid \widetilde{Y}_{k+1}^d , R), \label{eq:Im_LB_b} \end{align} where the second step follows from the chain rule for mutual information. To proceed, note that the measurements $\widetilde{Y}_{k}^d$ are independent of the first part of the signal $X^{k-1}$, because $R$ is upper triangular. Therefore, for all $1 \le k \le d$, \begin{align} I(X^n ; \widetilde{Y}_k \mid \widetilde{Y}_{k+1}^d , R) \notag & = I(X_{k}^n ; \widetilde{Y}_k \mid \widetilde{Y}_{k+1}^d , R) \notag \\ & \overset{(a)}{\ge} I(X_{k} ; \widetilde{Y}_k \mid \widetilde{Y}_{k+1}^d, X_{k+1}^n , R) \notag\\ & \overset{(b)}{=} I(X_{k} ; R_{k,k} X_k + \widetilde{W}_k \mid R) \notag\\ & \overset{(c)}{=} \ex{ I_X( \tfrac{1}{n} \chi^2_{m-k+1})},\label{eq:Im_LB_c} \end{align} where (a) follows from the data processing inequality, (b) follows from the fact that $R$ is upper triangular and the independence of the signal entries, and (c) follows from \eqref{eq:R_dist}. Combining \eqref{eq:Im_LB_b} and \eqref{eq:Im_LB_c} gives the lower bound in \eqref{eq:Im_bounds}. For the upper bound in \eqref{eq:Mm_bounds}, suppose that $m \ge n$ and let $\widetilde{Y}^n$ be the rotated measurements defined above. Then we have \begin{align} M_{m,n}& \triangleq \frac{1}{n} \mathsf{mmse}( X^{n} \mid Y^m, A^m) \notag \\ & \overset{(a)}{=} \mathsf{mmse}( X_{n} \! \mid \! Y^m, A^m) \notag\\ & \overset{(b)}{=} \mathsf{mmse}(X_n \! \mid \! \widetilde{Y}^m , R) \notag\\ & \overset{(c)}{\le} \mathsf{mmse}(X_n \! \mid \!
\widetilde{Y}_n , {R}_{n,n} ) \notag\\ & \overset{(d)}{=} \mathsf{mmse}(X_n \! \mid \! R_{n,n} X_n + \widetilde{W}_n , {R}_{n,n} ) \notag\\ & \overset{(e)}{=} \ex{ \mathsf{mmse}_X\left( \tfrac{1}{n} \chi^2_{m-n+1} \right)} , \notag \end{align} where (a) follows from the fact that the distributions of the columns of $A^m$ and entries of $X^n$ are permutation invariant, (b) follows from the fact that multiplication by $Q$ is a one-to-one transformation, (c) follows from the data processing inequality for MMSE, (d) follows from the fact that $R$ is upper triangular with $m \ge n$, and (e) follows from \eqref{eq:R_dist}. \subsection{Proof of Lemma~\ref{lem:Im_bounds_gap} }\label{proof:lem:Im_bounds_gap} Let $U = \frac{1}{n} \chi^2_{m-n+1}$ and $V= \frac{1}{n} \chi^2_{n+1}$ be independent scaled chi-square random variables and let $Z = U+V$. Using this notation, the lower bound in \eqref{eq:Im_bounds} satisfies \begin{align} \frac{1}{n} \sum_{k=1}^{n} \ex{ I_X\left( \tfrac{1}{n} \chi^2_{m-k+1} \right)} & \ge \ex{ I_X(U)}, \label{eq:Im_bounds_gap_b} \end{align} where we have used the fact that the mutual information function is non-decreasing. Moreover, by \eqref{eq:IX_smooth}, we have \begin{align} \ex{ I_X(Z)} - \ex{ I_X(U)} & \le \frac{1}{2} \ex{ \left( Z/U -1 \right)_+} \notag \\ & = \frac{1}{2} \ex{V/U } \notag\\ & = \frac{1}{2} \frac{n + 1}{m - n - 1 }. \label{eq:Im_bounds_gap_c} \end{align} Next, observe that $Z$ has a scaled chi-squared distribution $Z \sim \frac{1}{n} \chi^2_{m+2}$, whose inverse moments are given by \begin{align} \ex{ Z^{-1}} = \frac{n}{m}, \qquad \var( Z^{-1}) = \frac{2 n^2 }{m^2 (m-2)} . 
\notag \end{align} Therefore, by \eqref{eq:IX_smooth}, we have \begin{align} I_X(\tfrac{m}{n} ) - \ex{ I_X(Z)} & \le \frac{1}{2} \ex{ \left( \frac{m/n}{Z} -1 \right)_+} \notag \\ & \le \frac{m}{2 n} \ex{ \left| \frac{1}{Z} -\frac{n}{m} \right|} \notag\\ & \le \frac{m}{2 n } \sqrt{ \var(Z^{-1})} \notag\\ & = \frac{1}{2 } \sqrt{ \frac{2}{ m-2}}, \label{eq:Im_bounds_gap_d} \end{align} where the second step uses the bound $(x)_+ \le |x|$ and the third step follows from Jensen's inequality. Combining \eqref{eq:Im_bounds_gap_b}, \eqref{eq:Im_bounds_gap_c} and \eqref{eq:Im_bounds_gap_d} completes the proof of Inequality \eqref{eq:Im_bounds_gap}. We use a similar approach for the MMSE. Note that \begin{align} \ex{ \mathsf{mmse}_X(U)} & = \ex{ \mathsf{mmse}_X\left( \tfrac{1}{n} \chi^2_{m - n + 1}\right) } \notag \\ \ex{\mathsf{mmse}_X(Z)} & \le \ex{ \mathsf{mmse}_X\left( \tfrac{1}{n} \chi^2_{m}\right) } , \label{eq:Mm_bounds_gap_b} \end{align} where the second inequality follows from the monotonicity of the MMSE function. By Lemma~\ref{lem:mmseX_bound}, the MMSE obeys \begin{align} \ex{ \mathsf{mmse}_X(U)} - \ex{ \mathsf{mmse}_X(Z)} & \le 12 \, \ex{ \left| \frac{ 1}{U} -\frac{1}{Z} \right|} \notag\\ & = 12 \left( \ex{U^{-1} } - \ex{ Z^{-1} } \right) \notag\\ & = 12\, \frac{n(n + 1)}{m(m - n - 1) } \le 12\, \frac{n + 1}{m - n - 1 }. \label{eq:Mm_bounds_gap_c} \end{align} Moreover, \begin{align} \left| \ex{ \mathsf{mmse}_X(Z)} - \mathsf{mmse}_X(\tfrac{m}{n} )\right| & \le 12 \, \ex{ \left| \frac{ 1}{Z} -\frac{n}{m} \right|} \notag\\ & \le 12 \sqrt{ \var(Z^{-1})} \notag \\ & = \frac{12\, n}{m } \sqrt{ \frac{2}{ m-2}}, \label{eq:Mm_bounds_gap_d} \end{align} where the second step follows from Jensen's inequality. Combining \eqref{eq:Mm_bounds_gap_b}, \eqref{eq:Mm_bounds_gap_c} and \eqref{eq:Mm_bounds_gap_d} completes the proof of Inequality \eqref{eq:Mm_bounds_gap}.
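The inverse moments of $Z \sim \frac{1}{n} \chi^2_{m+2}$ used above follow from the identities $\ex{1/\chi^2_k} = 1/(k-2)$ and $\ex{1/(\chi^2_k)^2} = 1/((k-2)(k-4))$ for $k > 4$. As a quick numerical sanity check (not part of the proof; the values of $m$, $n$, and the sample size are arbitrary), the resulting formulas can be confirmed by Monte Carlo in Python:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, trials = 40, 10, 500_000

# Z ~ chi^2_{m+2} / n, as in the proof above.
z = rng.chisquare(m + 2, size=trials) / n
inv_z = 1.0 / z

# E[1/Z] = n / m and Var(1/Z) = 2 n^2 / (m^2 (m - 2)).
assert abs(inv_z.mean() - n / m) < 1e-3
assert abs(inv_z.var() - 2 * n**2 / (m**2 * (m - 2))) < 1e-4
```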
\subsection{Proof of Lemma~\ref{lem:IR_boundary}}\label{proof:lem:IR_boundary} The upper bound in \eqref{eq:IRS_bounds} follows from noting that \begin{align} \cI_{\mathrm{RS}}(\delta) & \triangleq \min_{z \ge 0} R( \delta, z) \le R( \delta, 0) = I_X(\delta) . \notag \end{align} For the lower bound, observe that the replica-MI function can also be expressed in terms of the replica-MMSE function as \begin{align} \cI_{\mathrm{RS}}(\delta) & = R( \delta, \cM_{\mathrm{RS}}(\delta)). \notag \end{align} Since the term $ [ \log\left( 1+ z\right) - \frac{ z}{ 1 + z}]$ in the definition of $R(\delta, z)$ is non-negative, we have the lower bound \begin{align} \cI_{\mathrm{RS}}(\delta) & \ge I_X\left( \frac{ \delta}{ 1 + \cM_{\mathrm{RS}}(\delta)} \right) . \label{eq:IR_LB_b} \end{align} Next, we recall that the replica-MMSE function satisfies the fixed-point equation \begin{align} \cM_{\mathrm{RS}}(\delta) = \mathsf{mmse}_X\left( \frac{\delta}{1+\cM_{\mathrm{RS}}(\delta) } \right) . \label{eq:IR_LB_c} \end{align} Also, for any signal distribution $P_X$, the MMSE function satisfies the upper bound $\mathsf{mmse}_X(s) \le 1/s$ \cite[Proposition 4]{guo:2011}. Therefore, \begin{align} \cM_{\mathrm{RS}}(\delta) \le \frac{ 1 + \cM_{\mathrm{RS}}(\delta)}{\delta} . \notag \end{align} For $\delta > 1$, rearranging the terms leads to the upper bound \begin{align} \cM_{\mathrm{RS}}(\delta) \le \frac{ 1}{ \delta - 1} . \label{eq:IR_LB_d} \end{align} Combining \eqref{eq:IR_LB_b} and \eqref{eq:IR_LB_d} with the fact that the mutual information is non-decreasing yields \begin{align} \cI_{\mathrm{RS}}(\delta) & \ge I_X\left( \frac{ \delta}{ 1 + \frac{1}{\delta - 1} } \right) = I_X(\delta - 1). \notag \end{align} Lastly, we consider the bounds in \eqref{eq:MRS_bounds}.
Combining \eqref{eq:IR_LB_c} and \eqref{eq:IR_LB_d} with the fact that the MMSE is non-increasing yields \begin{align} \cM_{\mathrm{RS}}(\delta) & \le \mathsf{mmse}_X\left( \frac{ \delta}{ 1 + \frac{1}{\delta - 1} } \right) = \mathsf{mmse}_X(\delta - 1) . \notag \end{align} Alternatively, starting with \eqref{eq:IR_LB_c} and using the non-negativity of $\cM_{\mathrm{RS}}$ leads to the lower bound. This completes the proof of Lemma~\ref{lem:IR_boundary}. \subsection{Proof of Lemma~\ref{lem:Im_var}}\label{proof:lem:Im_var} To lighten notation, we will write $I(A)$ in place of $I_{m,n}(A^m)$. By the Gaussian Poincar\'e inequality \cite[Theorem 3.20]{boucheron:2013}, the variance satisfies \begin{align} \var( I(A)) & \le \ex{ \left\| \nabla_{A} I(A) \right\|^2_F}, \label{eq:var_matrix_b} \end{align} where $\nabla_{A}$ is the gradient operator with respect to $A$. Furthermore, by the multivariate I-MMSE relationship \cite[Theorem 1]{palomar:2006}, the gradient of the mutual information with respect to $A$ is given by \begin{align} \nabla_{A} I(A) = A K(A), \label{eq:var_matrix_c} \end{align} where $K(A) = \ex{ \cov(X^n | Y^m, A) \mid A}$ is the expected posterior covariance matrix as a function of the matrix $A$. Next, we recall some basic facts about matrices that will allow us to bound the magnitude of the gradient. For symmetric matrices $U$ and $V$ the notation $U \preceq V$ means that $V - U$ is positive semidefinite. If $U$ and $V$ are positive definite with $U \preceq V$ then $U^2 \preceq V^2$, by \cite[Theorem~7.7.3]{horn:2012}, and thus \begin{align} \|S U\|^2_F = \gtr( S U^2 S^T) \le \gtr( S V^2 S^T) \notag \end{align} for every matrix $S$. Since the conditional expectation minimizes the squared error, the expected covariance matrix can be upper bounded in terms of the MSE matrix associated with the optimal linear estimator: \begin{align} K(A) \preceq \left(I_{n \times n} + \var(X) A^T A\right)^{-1} \var(X).
\notag \end{align} Combining this inequality with the arguments given above and the fact that $\left(I_{n \times n} + \var(X) A^T A\right)^{-1} \preceq I_{n \times n}$ leads to \begin{align} \left\| A K(A) \right\|^2_F& \le \left( \var(X)\right)^2 \gtr\left( A A^T \right). \label{eq:var_matrix_d} \end{align} Alternatively, for $m > n$, the matrix $A^T A$ is full rank with probability one and thus $K(A) \preceq \left(A^T A\right)^{-1}$ with probability one. This leads to the upper bound \begin{align} \left\| A K(A) \right\|^2_F \le \gtr\left( \left( A^T\! A\right)^{-1} \right). \label{eq:var_matrix_e} \end{align} To conclude, note that under the assumed Gaussian distribution on $A$, \begin{align} \ex{ \gtr\left( A A^T \right)} & = m \notag\\ \ex{ \gtr\left( \left(A^T A\right)^{-1} \right)} & = \frac{n^2}{(m-n-1)_+}, \notag \end{align} where the second expression follows from \cite[Theorem~3.3.16]{gupta:1999}. Combining these expectations with \eqref{eq:var_matrix_b} and \eqref{eq:var_matrix_c} and the upper bounds \eqref{eq:var_matrix_d} and \eqref{eq:var_matrix_e} completes the proof. \subsection{Proof of Lemma~\ref{lem:IMI_var}}\label{proof:lem:IMI_var} To simplify notation, the mutual information density is denoted by $Z = \imath(X^n ;Y^m \! \mid \! A^m)$. Note that $Z$ can be expressed as a function of the random tuple $(X^n, W^m, A^m)$ according to \begin{align} Z &= \log\left( \frac{f_{Y^m|X^n,A^m}( A^m X^n +W^m \! \mid \! X^n,A^m )}{f_{Y^m|A^m}(A^m X^n +W^m \! \mid \! A^m )} \right). \label{eq:IMI_function} \end{align} Starting with the law of total variance \eqref{eq:law_tot_var}, we see that \begin{align} \var(Z ) & = \ex{ \var( Z \! \mid \! A^m)} + \var( \ex{ Z \! \mid \! A^m}). \label{eq:var_IMI_decomp} \end{align} The second term on the right-hand side of \eqref{eq:var_IMI_decomp} is the variance with respect to the matrix, which is bounded by Lemma~\ref{lem:Im_var}.
The first term on the right-hand side of \eqref{eq:var_IMI_decomp} is the variance with respect to the signal and the noise. Since the entries of $X^n$ and $W^m$ are independent, the variance can be bounded using the Efron-Stein inequality \cite[Theorem 3.1]{boucheron:2013}, which yields \begin{align} \ex{ \var( Z \! \mid \! A^m )} & \le \sum_{i=1}^n \ex{ \left( Z - \ensuremath{\mathbb{E}}_{X_i}\left[ Z \right] \right)^2} \notag \\ & \quad + \sum_{ i = 1}^m \ex{\left( Z - \ensuremath{\mathbb{E}}_{W_i}\left[ Z \right] \right)^2 } . \label{eq:var_IMI_decomp_b} \end{align} At this point, Lemma~\ref{lem:IMI_var} follows from combining \eqref{eq:var_IMI_decomp}, Lemma~\ref{lem:Im_var}, and \eqref{eq:var_IMI_decomp_b} with the following inequalities \begin{align} \ex{ \left( Z - \ensuremath{\mathbb{E}}_{X_i}\left[ Z \right] \right)^2} & \le 12 \left( 1 + \left(1 + \sqrt{ \tfrac{m}{n}} \right) B^{\frac{1}{4}} \right)^4 \label{eq:var_signal}\\ \ex{\left( Z - \ensuremath{\mathbb{E}}_{W_i}\left[ Z \right] \right)^2} & \le \sqrt{B} \label{eq:var_noise}. \end{align} Inequalities \eqref{eq:var_signal} and \eqref{eq:var_noise} are proved in the following subsections. \subsubsection{Proof of Inequality \eqref{eq:var_signal}} Observe that the expectation $\ex{ \left( Z - \ensuremath{\mathbb{E}}_{X_i}\left[ Z \right]\right)^2 }$ is identical for all $i \in [n]$ because the distributions of the entries in $X^n$ and the columns of $A^m$ are permutation invariant. Throughout this proof, we focus on the variance with respect to the last signal entry $X_n$. Starting with the chain rule for mutual information density, we obtain the decomposition \begin{align} Z & =\imath(X_n ;Y^m \! \mid \! A^m ) + \imath(X^{n-1} ; Y^m \! \mid \! X_n , A^m ). \label{eq:var_sig_b} \end{align} The second term is independent of $X_n$, and thus does not contribute to the conditional variance.
To characterize the first term, we introduce a transformation of the data that isolates the effect of the $n$th signal entry. Let $Q$ be drawn uniformly from the set of $m \times m$ orthogonal matrices whose last row is the unit vector in the direction of the last column of $A^{m}$, and let the \textit{rotated} data be defined according to \begin{align} \widetilde{Y}^m = Q Y^m, \qquad \widetilde{A}^m = Q A^m , \qquad \widetilde{W}^m = Q W^m. \notag \end{align} By construction, the last column of $\widetilde{A}^m$ is zero everywhere except for the last entry with \begin{align} \widetilde{A}_{i,n} = \begin{cases} 0, & \text{if $1 \le i \le m-1$}\\ \| A^{m}(n) \|, & \text{if $i = m$}, \end{cases} \notag \end{align} where $A^m(n)$ denotes the $n$-th column of $A^m$. Furthermore, by the rotational invariance of the Gaussian distribution of the noise, the rotated noise $\widetilde{W}^m$ has the same distribution as $W^m$ and is independent of everything else. Expressing the mutual information density in terms of the rotated data leads to the further decomposition \begin{align} \imath(X_n ;Y^m \! \mid \! A^m ) & \overset{(a)}{=} \imath(X_n ;\widetilde{Y}^m \! \mid \! \widetilde{A}^m ) \notag\\ & \overset{(b)}{=} \imath(X_n ;\widetilde{Y}_m \! \mid \! \widetilde{Y}^{m-1}, \widetilde{A}^m ) \notag\\ & \quad + \imath(X_n ;\widetilde{Y}^{m-1} \! \mid \! \widetilde{A}^m ) \notag \\ & \overset{(c)}{=} \imath(X_n ;\widetilde{Y}_m \! \mid \! \widetilde{Y}^{m-1}, \widetilde{A}^m ), \label{eq:var_sig_c} \end{align} where (a) follows from the fact that multiplication by $Q$ is a one-to-one transformation, (b) follows from the chain rule for mutual information density, and (c) follows from the fact that the first $m-1$ entries of $\widetilde{Y}^m$ are independent of $X_n$. To proceed, we introduce the notation $U = (\widetilde{Y}^{m-1} ,\widetilde{A}^{m-1})$.
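A rotation $Q$ with the properties required above can be realized concretely with a Householder reflection, which is orthogonal and symmetric (hence equal to its inverse), so its last row is automatically the unit vector in the direction of the chosen column. The sketch below, with an arbitrary illustrative vector standing in for the last column of $A^m$, checks the claimed structure.

```python
import math

def householder_to_last_axis(a):
    # Orthogonal, symmetric reflection H with H a = (0, ..., 0, ||a||).
    # Since H = H^{-1}, the last row of H is a / ||a||, as required of Q.
    # Assumes a is not already a nonnegative multiple of the last basis vector.
    norm = math.sqrt(sum(x * x for x in a))
    v = list(a)
    v[-1] -= norm                      # v = a - ||a|| e_m
    vnorm2 = sum(x * x for x in v)
    def apply(x):                      # x -> H x = x - 2 v (v . x) / |v|^2
        c = 2.0 * sum(vi * xi for vi, xi in zip(v, x)) / vnorm2
        return [xi - c * vi for xi, vi in zip(x, v)]
    return apply, norm

a = [3.0, 0.0, 4.0]                    # stand-in for the last column of A^m
H, norm = householder_to_last_axis(a)
rotated = H(a)                         # should be (0, 0, ||a||) = (0, 0, 5)
print(rotated, norm)
```

Applying `H` to any other vector preserves its Euclidean norm, consistent with orthogonality of the rotation.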
Since the noise is independent of the measurements, the conditional density of $\widetilde{Y}_{m}$ given $(X^n, U, \widetilde{A}_{m})$ obeys \begin{align} \MoveEqLeft f_{\widetilde{Y}_m \mid X^n, U, \widetilde{A}_m} (\widetilde{y}_m \mid x^n, u, \widetilde{a}_m) \notag\\ & = f_{\widetilde{W}_m} (\widetilde{y}_m - \langle \widetilde{a}_{m} , x^n \rangle) \notag\\ & = \frac{1}{\sqrt{ 2 \pi}} \exp\left( - \frac{1}{2} \left(\widetilde{y}_m - \langle \widetilde{a}_{m} , x^n \rangle \right)^2 \right) . \notag \end{align} Starting with \eqref{eq:var_sig_c}, the mutual information density can now be expressed as \begin{align} \imath(X_n ;Y^m \mid A^m ) & = \imath\left(X_n ; \widetilde{Y}_m \mid U, \widetilde{A}_{m} \right) \notag\\ & = \log \left( \frac{ f_{\widetilde{Y}_m \mid X^n, U, \widetilde{A}_m} (\widetilde{Y}_m \mid X^n, U, \widetilde{A}_m) }{ f_{\widetilde{Y}_m \mid U, \widetilde{A}_m} (\widetilde{Y}_m \mid U, \widetilde{A}_m)}\right) \notag\\ & = g(U, \widetilde{Y}_{m}, \widetilde{A}_{m}) - \frac{1}{2} \widetilde{W}_m^2, \label{eq:var_sig_d} \end{align} where \begin{align} g(U, \widetilde{Y}_m, \widetilde{A}_{m}) &= \log\left( \frac{ ( 2 \pi)^{-\frac{1}{2}} }{ f_{\widetilde{Y}_m \mid U, \widetilde{A}_m} (\widetilde{Y}_m \mid U, \widetilde{A}_m)} \right). \notag \end{align} Furthermore, by \eqref{eq:var_sig_b} and \eqref{eq:var_sig_d}, the difference between $Z$ and its conditional expectation with respect to $X_n$ is given by \begin{align} Z - \ensuremath{\mathbb{E}}_{X_n}\left[Z \right] & = g(U, \widetilde{Y}_m, \widetilde{A}_{m}) - \ensuremath{\mathbb{E}}_{X_n} \left[ g(U, \widetilde{Y}_m, \widetilde{A}_{m}) \right].
\notag \end{align} Squaring both sides and taking the expectation yields \begin{align} \MoveEqLeft \ex{ \left( Z - \ensuremath{\mathbb{E}}_{X_n}\left[Z \right] \right)^2} \notag \\ & = \ex{\left( g(U, \widetilde{Y}_m, \widetilde{A}_{m}) - \ensuremath{\mathbb{E}}_{X_n} \left[ g(U, \widetilde{Y}_m, \widetilde{A}_{m}) \right]\right)^2} \notag \\ & \overset{(b)}{\le} \ex{ g^2(U, \widetilde{Y}_m, \widetilde{A}_{m}) }, \label{eq:var_sig_e} \end{align} where (b) follows from the law of total variance \eqref{eq:law_tot_var}. Next, we bound the function $g(u, \widetilde{y}_m, \widetilde{a}_m)$. Let $X^n_u$ be drawn according to the conditional distribution $P_{X^n \mid U = u}$. Then, the conditional density of $\widetilde{Y}_{m}$ given $(U, \widetilde{A}_{m})$ is given by \begin{align} \MoveEqLeft f_{\widetilde{Y}_m \mid U, \widetilde{A}_m} (\widetilde{y}_m \mid u, \widetilde{a}_m) \notag\\ &= \ex{ f_{\widetilde{Y}_m \mid X^n, U, \widetilde{A}_m} (\widetilde{y}_m \mid X_u^n, u, \widetilde{a}_m) } \notag\\ & = \ex{ \frac{1}{\sqrt{ 2 \pi}} \exp\left( - \frac{1}{2} \left(\widetilde{y}_m - \langle \widetilde{a}_{m} , X_u^n \rangle \right)^2 \right)} . \notag \end{align} Since the conditional density obeys the upper bound \begin{align} f_{\widetilde{Y}_m \mid U, \widetilde{A}_m} (\widetilde{y}_m \mid u, \widetilde{a}_m) \le ( 2 \pi)^{-\frac{1}{2}}, \notag \end{align} we see that $g(u, \widetilde{y}_m, \widetilde{a}_m)$ is nonnegative. Alternatively, by Jensen's inequality, \begin{align} \MoveEqLeft f_{\widetilde{Y}_m \mid U, \widetilde{A}_m} (\widetilde{y}_m \mid u, \widetilde{a}_m) \notag \\ & \ge \frac{1}{\sqrt{ 2 \pi}} \exp\left( - \frac{1}{2} \ex{ \left(\widetilde{y}_m - \langle \widetilde{a}_{m} , X_u^n \rangle \right)^2} \right), \notag \end{align} and thus \begin{align} g(u, \widetilde{y}_m, \widetilde{a}_m) \le \frac{1}{2} \ex{ \left(\widetilde{y}_m - \langle \widetilde{a}_{m} , X_u^n \rangle \right)^2}.
\notag \end{align} Using these facts leads to the following inequality: \begin{align} g^2(u, \widetilde{y}_m , \widetilde{a}_m) & \le \frac{1}{4} \left( \ensuremath{\mathbb{E}}_{X^n_u}\left[ \left (\widetilde{y}_m - \langle \widetilde{a}_m, X_u^n \rangle \right )^2 \right] \right)^2 \notag\\ & \overset{(a)}{\le} \frac{1}{4} \ensuremath{\mathbb{E}}_{X^n_u}\left[ \left (\widetilde{y}_m - \langle \widetilde{a}_m, X_u^n \rangle \right )^4 \right] \notag\\ & \overset{(b)}{\le} 2 \left(\widetilde{y}_m \right)^4 + 2\ensuremath{\mathbb{E}}_{X^n_u}\left[ \left ( \langle \widetilde{a}_m, X_u^n \rangle \right)^4 \right] , \notag \end{align} where (a) follows from Jensen's inequality and (b) follows from \eqref{eq:ab4}. Taking the expectation with respect to the random tuple $(U, \widetilde{Y}_m , \widetilde{A}_m)$, we can now write \begin{align} \ex{ g^2(U, \widetilde{Y}_m , \widetilde{A}_m) } & \le 2 \ex{ \left( \widetilde{Y}_m \right)^4} + 2 \ex{ \left ( \langle \widetilde{A}_m, X_U^n \rangle \right)^4 } \notag\\ &\overset{(a)}{=} 2 \ex{ \left( \widetilde{Y}_m \right)^4} + 2 \ex{ \left ( \langle \widetilde{A}_m, X^n \rangle \right)^4 }, \label{eq:var_sig_f} \end{align} where (a) follows from the fact that $X^n_U$ has the same distribution as $X^n$ and is independent of $\widetilde{A}_m$. To upper bound the first term, observe that \begin{align} \ex{ \left( \widetilde{Y}_m\right)^4 } & = \ex{ \left(\widetilde{W}_m + \sum_{i=1}^n \widetilde{A}_{m,i} X_i \right)^4} \notag \\ & \overset{(a)}{\le} \left( \left( \ex{ \widetilde{W}_m^4} \right)^{\frac{1}{4}} + \left( \ex{\left( \langle \widetilde{A}_m, X^n \rangle \right)^4 }\right)^{\frac{1}{4}} \right)^4 \notag \\ & = \left( 3^{\frac{1}{4}} + \left( \ex{\left( \langle \widetilde{A}_m, X^n \rangle \right)^4 }\right)^{\frac{1}{4}} \right)^4, \label{eq:var_sig_g} \end{align} where (a) follows from Minkowski's inequality \eqref{eq:Minkowski}.
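Minkowski's inequality for fourth moments, $(\ex{|A+B|^4})^{1/4} \le (\ex{|A|^4})^{1/4} + (\ex{|B|^4})^{1/4}$, can be checked exactly on small discrete distributions; the two-point laws below are arbitrary illustrative choices.

```python
from itertools import product

# Two independent two-point random variables, each value with probability 1/2
X_vals = [-1.0, 2.0]
Y_vals = [0.0, 3.0]

def l4_norm(values):
    # (E|V|^4)^{1/4} for a uniform distribution over `values`
    return (sum(abs(v) ** 4 for v in values) / len(values)) ** 0.25

# Exact enumeration of the sum's L4 norm over all four equally likely pairs
lhs = (sum(abs(x + y) ** 4 for x, y in product(X_vals, Y_vals)) / 4) ** 0.25
rhs = l4_norm(X_vals) + l4_norm(Y_vals)
print(lhs, rhs)   # lhs <= rhs, as Minkowski's inequality requires
```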
To bound the fourth moment of $\langle \widetilde{A}_m, X^n \rangle$, we use the fact that the entries of the rotated measurement vector $\widetilde{A}_m$ are independent with \begin{align} \widetilde{A}_{m,i} \sim \begin{cases} \mathcal{N}(0,\frac{1}{n} ), & \text{if $1 \le i \le n-1$}\\ \frac{1}{\sqrt{n}} \chi_m , & \text{if $i = n$}, \end{cases} \notag \end{align} where $\chi_m$ denotes a chi random variable with $m$ degrees of freedom. Thus, we have \begin{align} \MoveEqLeft \left( \ex{\left( \langle \widetilde{A}_m,X^n \rangle \right)^4 }\right)^{\frac{1}{4}} \notag \\ & \overset{(a)}{\le} \left( \ex{ \left( \sum_{i=1}^{n-1} \widetilde{A}_{m,i} X_i \right)^4 }\right)^{\frac{1}{4}} + \left(\ex{ \left( \widetilde{A}_{m,n} X_n \right)^4 }\right)^{\frac{1}{4}} \notag \\ & \overset{(b)}{=} \left( \frac{3}{n^2} \ex{ \left\| X^{n-1} \right \|^4} \right)^{\frac{1}{4}} + \left( \frac{m(m+2)}{n^2} \ex{X_n^4} \right)^{\frac{1}{4}} \notag \\ & \overset{(c)}{\le} \left( \frac{3(n-1)^2 }{n^2} B \right)^{\frac{1}{4}} + \left( \frac{m(m+2)}{n^2} B \right)^{\frac{1}{4}} \notag\\ & \le \left( 1+ \sqrt{ \frac{m}{n}} \right) \left( 3 B \right)^{\frac{1}{4}}, \label{eq:var_sig_h} \end{align} where (a) follows from Minkowski's inequality \eqref{eq:Minkowski}, (b) follows from the distribution on $\widetilde{A}^m$ and (c) follows from Assumption 2 and inequality \eqref{eq:LpBound_1} applied to $\|X^{n-1}\|^4 = \left( \sum_{i=1}^{n-1} X_i^2 \right)^2$. Finally, combining \eqref{eq:var_sig_e}, \eqref{eq:var_sig_f}, \eqref{eq:var_sig_g}, and \eqref{eq:var_sig_h} leads to \begin{align} \ex{ \left( Z - \ensuremath{\mathbb{E}}_{X_n}\left[Z \right] \right)^2} & \le 2 \left( 3^{\frac{1}{4}} + \left( 1+ \sqrt{ \frac{m}{n}} \right) (3B)^{\frac{1}{4}} \right)^4 \notag\\ & \quad + 2\left( 1+ \sqrt{ \frac{m}{n}} \right)^4 \left( 3 B \right) \notag\\ & \le 12 \left( 1 + \left( 1+ \sqrt{ \frac{m}{n}} \right) B^{\frac{1}{4}} \right)^4.
\notag \end{align} This completes the proof of Inequality \eqref{eq:var_signal}. \subsubsection{Proof of Inequality \eqref{eq:var_noise}} Observe that the expectation $\ex{\left( Z - \ensuremath{\mathbb{E}}_{W_k}\left[ Z \right] \right)^2 }$ is identical for all $k \in [m]$ because the distributions of the entries in $W^m$ and the rows of $A^m$ are permutation invariant. Throughout this proof, we focus on the variance with respect to the last noise entry $W_m$. Recall from \eqref{eq:IMI_function} that $Z$ can be expressed as a function of $(X^n, W^m, A^m)$. By the Gaussian Poincar\'e inequality \cite[Theorem 3.20]{boucheron:2013}, the variance with respect to $W_m$ obeys \begin{align} \ex{\left( Z - \ensuremath{\mathbb{E}}_{W_m}\left[ Z \right] \right)^2 } & \le \ex{ \left( \frac{\partial}{\partial W_m} Z \right)^2}, \label{eq:var_noise_b} \end{align} where $\frac{\partial}{\partial W_m} Z$ denotes the partial derivative of the right-hand side of \eqref{eq:IMI_function} evaluated at the point $(X^n, W^m, A^m)$. To compute the partial derivative, observe that by the chain rule for mutual information density, \begin{align} Z & =\imath(X^n ;Y^{m-1} \! \mid \! A^m) + \imath(X^n ; Y_m \! \mid \! Y^{m-1} , A^m). \notag \end{align} In this decomposition, the first term on the right-hand side is independent of $W_m$. The second term can be decomposed as \begin{align} \MoveEqLeft \imath(X^n ; Y_m \! \mid \! Y^{m-1} , A^m) \notag\\ & = \log\left( \frac{f_{Y_m \mid X^n, Y^{m-1}, A^m}(Y_m \mid X^n, Y^{m-1}, A^m)}{f_{Y_m \mid Y^{m-1}, A^m}(Y_m \mid Y^{m-1}, A^m)} \right) \notag\\ & = \log\left(\frac{ f_{W_m}(Y_m - \langle A_m, X^n \rangle)}{f_{Y_m \mid Y^{m-1}, A^m}(Y_m \mid Y^{m-1}, A^m) } \right) \notag \\ & = \log\left(\frac{ f_{W_m}(Y_m)}{f_{Y_m \mid Y^{m-1}, A^m}(Y_m \mid Y^{m-1}, A^m) } \right) \notag\\ & \quad + Y_m \langle A_m, X^n \rangle - \frac{1}{2} \left(\langle A_m, X^n \rangle \right)^2.
\notag \end{align} Note that the first term on the right-hand side is the negative of the log likelihood ratio. The partial derivative of this term with respect to $Y_m$ can be expressed in terms of the conditional expectation \cite{esposito:1968}: \begin{align} \MoveEqLeft \frac{\partial}{\partial Y_m} \log\left(\frac{ f_{W_m}(Y_m)}{f_{Y_m \mid Y^{m-1}, A^m}(Y_m \mid Y^{m-1}, A^m) } \right) \notag\\ & = - \ex{ \langle A_m, X^n \rangle \mid Y^{m}, A^m}. \notag \end{align} Recalling that $Y_m = \langle A_m , X^n \rangle + W_m$, the partial derivative with respect to $W_m$ can now be computed directly as \begin{align} \frac{\partial}{\partial W_m} Z & = \frac{\partial}{\partial Y_m} \imath(X^n ; Y_m \mid Y^{m-1} , A^m) \notag\\ & = \langle A_m, X^n \rangle - \ex{ \langle A_m, X^n \rangle \mid Y^{m}, A^m} \notag\\ &= \langle A_m , X^n - \ex{X^n \mid Y^m, A^m} \rangle. \notag \end{align} Thus, the expected squared magnitude obeys \begin{align} \ex{ \left( \frac{\partial}{\partial W_m} Z \right)^2} &= \ex{ \left( \langle A_m , X^n - \ex{X^n \mid Y^m, A^m} \rangle \right)^2} \notag\\ & = \ex{ \var( \langle A_m, X^n \rangle \mid Y^m, A^m)} \notag\\ & \overset{(a)}{\le} \ex{ \var( \langle A_m, X^n \rangle \mid A_m)} \notag\\ & = \ex{ A_m^T \cov(X^n) A_m} \notag\\ & \overset{(b)}{=} \ex{X^2} \notag\\ & \overset{(c)}{\le} \sqrt{B}, \label{eq:var_noise_c} \end{align} where (a) follows from the law of total variance \eqref{eq:law_tot_var}, (b) follows from the fact that $\ex{A_m A_m^T} = \frac{1}{n} I_{n \times n}$, and (c) follows from Jensen's inequality and Assumption 2. Combining \eqref{eq:var_noise_b} and \eqref{eq:var_noise_c} completes the proof of Inequality \eqref{eq:var_noise}.
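The conditional-expectation identity for the derivative of the log likelihood ratio can be sanity checked in the scalar Gaussian special case $Y = X + W$ with $X \sim \mathcal{N}(0, s)$ and $W \sim \mathcal{N}(0,1)$, where both sides are available in closed form; the prior variance $s$ and the evaluation point $y$ below are arbitrary.

```python
import math

s = 2.0          # illustrative prior variance of X
y = 1.3          # illustrative evaluation point

def log_ratio(y):
    # log( f_W(y) / f_Y(y) ) with W ~ N(0,1) and Y = X + W ~ N(0, 1+s)
    log_fw = -0.5 * y * y - 0.5 * math.log(2 * math.pi)
    log_fy = -0.5 * y * y / (1 + s) - 0.5 * math.log(2 * math.pi * (1 + s))
    return log_fw - log_fy

h = 1e-6
deriv = (log_ratio(y + h) - log_ratio(y - h)) / (2 * h)   # numerical derivative
cond_mean = s / (1 + s) * y                               # E[X | Y = y] (Gaussian prior)
print(deriv, -cond_mean)                                  # the two values agree
```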
\section{Proofs of Results in Section~\ref{proof:thm:I_MMSE_relationship} } \subsection{Proof of Lemma~\ref{lem:moment_bounds}} \label{proof:lem:moment_bounds} The squared error obeys the upper bound \begin{align} \left|\mathcal{E}_{m,n} \right|^2 & = \left( \frac{1}{n} \sum_{i=1}^n \left( X_i - \ex{ X_i \! \mid \! Y^m, A^m}\right)^2 \right)^2 \notag\\ & \overset{(a)}{\le} \frac{1}{n} \sum_{i=1}^n \left( X_i - \ex{ X_i \! \mid \! Y^m, A^m}\right)^4 \notag\\ & \overset{(b)}{\le} \frac{8}{n} \sum_{i=1}^n \left[\left| X_i \right|^4 + \left| \ex{ X_i \! \mid \! Y^m, A^m}\right|^4 \right] \notag\\ &\overset{(c)}{\le} \frac{8}{n} \sum_{i=1}^n \left[\left| X_i \right|^4 + \ex{ \left| X_i \right|^4 \! \mid \! Y^m, A^m} \right], \notag \end{align} where (a) follows from Jensen's inequality \eqref{eq:LpBound_1}, (b) follows from \eqref{eq:ab4}, and (c) follows from Jensen's inequality. Taking the expectation of both sides leads to \begin{align} \ex{ \left|\mathcal{E}_{m,n} \right|^2 } & \le \frac{16}{n} \sum_{i=1}^n \ex{ \left|X_i\right|^4} \le 16 B, \notag \end{align} where the second inequality follows from Assumption 2. Next, observe that the conditional distribution of $Y_{m+1}$ given $X^n$ is zero-mean Gaussian with variance $1 + \frac{1}{n}\|X^n\|^2$. Thus, \begin{align} \ex{ Y^4_{m+1} } &= \ex{ \ex{ Y^4_{m+1} \! \mid \! X^n } } \notag \\ & = 3\ex{ \left(1 + \tfrac{1}{n} \|X^n\|^2\right)^2 } \notag\\ & \overset{(a)}{\le} 6 \left(1 +\tfrac{1}{n^2}\ex{ \|X^n\|^4} \right) \notag\\ & \overset{(b)}{\le} 6 (1 + B ), \notag \end{align} where (a) follows from \eqref{eq:ab2} and (b) follows from \eqref{eq:LpBound_1} and Assumption 2. Along similar lines, \begin{align} \MoveEqLeft \ex{\left| Y_{m+1} - \ex{ Y_{m+1} \! \mid \! Y^m, A^{m+1}}\right|^4 } \notag \\ &\overset{(a)}{ \le} 8 \ex{\left| Y_{m+1} \right|^4 } + 8 \ex{ \left| \ex{ Y_{m+1} \! \mid \!
Y^m, A^{m+1}}\right|^4 } \notag\\ & \overset{(b)}{\le} 16 \ex{\left| Y_{m+1} \right|^4 } \notag \\ & \le 96 (1+B), \notag \end{align} where (a) follows from \eqref{eq:ab4} and (b) follows from Jensen's inequality. Finally, we note that the proof of \eqref{eq:IMID_moment_2} can be found in the proof of Lemma~\ref{lem:PMID_var}. \subsection{Proof of Lemma~\ref{lem:post_dist_to_id}}\label{proof:lem:post_dist_to_id} The starting point for these identities is to observe that the differences between the measurements $Y_i$ and $Y_j$ and their conditional expectations given the data $(Y^m, A^m ,A_i, A_j)$ can be expressed as \begin{align} \begin{bmatrix} Y_i - \widehat{Y}_i \\ Y_j - \widehat{Y}_j \end{bmatrix} & = \begin{bmatrix} A^T_i \\ A^T_j \end{bmatrix} \left( X^n - \widehat{X}^n\right) +\begin{bmatrix} W_i \\ W_j \end{bmatrix} \label{eq:dist_YiYj} , \end{align} where $\widehat{X}^n = \ex{ X^n \mid Y^m, A^m}$ is the signal estimate after the first $m$ measurements. \subsubsection{Proof of Identity~\eqref{eq:id_var}} Starting with \eqref{eq:dist_YiYj}, we see that the conditional variance of $Y_i$ can be expressed in terms of the posterior covariance matrix of the signal: \begin{align} \var(Y_i \! \mid \! Y^m, A^m, A_i) & = A_i^T \cov(X^n \! \mid \! Y^m, A^m) A_i + 1. \notag \end{align} Taking the expectation of both sides with respect to $A_i$ yields \begin{align} \MoveEqLeft \ensuremath{\mathbb{E}}_{A_i} \left[ \var(Y_i \! \mid \! Y^m, A^m, A_i) \right] \notag\\ & = \ensuremath{\mathbb{E}}_{A_i} \left[ A_i^T \cov(X^n \! \mid \! Y^m, A^m) A_i \right] + 1 \notag\\ & = \ensuremath{\mathbb{E}}_{A_i} \left[ \gtr\left( A_i A_i^T \cov(X^n \! \mid \! Y^m, A^m) \right) \right] + 1 \notag\\ & =\gtr\left( \ensuremath{\mathbb{E}}_{A_i} \left[ A_i A_i^T \right] \cov(X^n \! \mid \! Y^m, A^m) \right) + 1 \notag\\ & =\frac{1}{n} \gtr\left(\cov(X^n \mid Y^m, A^m) \right) + 1, \notag \end{align} where we have used the fact that $\ex{ A_i A_i^T} = \frac{1}{n} I_{n \times n}$.
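The averaging step $\ensuremath{\mathbb{E}}_{A_i}[A_i^T K A_i] = \frac{1}{n}\gtr(K)$ can be checked by Monte Carlo for an arbitrary symmetric matrix $K$; the matrix, dimension, and sample size below are illustrative only.

```python
import random

random.seed(0)
n = 3
K = [[2.0, 0.5, 0.0],
     [0.5, 1.0, 0.3],
     [0.0, 0.3, 4.0]]            # an arbitrary symmetric matrix
trace_K = sum(K[i][i] for i in range(n))

trials = 200000
acc = 0.0
for _ in range(trials):
    # a ~ N(0, I/n), matching the distribution of a measurement vector A_i
    a = [random.gauss(0.0, (1.0 / n) ** 0.5) for _ in range(n)]
    Ka = [sum(K[i][j] * a[j] for j in range(n)) for i in range(n)]
    acc += sum(a[i] * Ka[i] for i in range(n))      # quadratic form a^T K a

estimate = acc / trials
print(estimate, trace_K / n)      # both close to 7/3
```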
\subsubsection{Proof of Identity~\eqref{eq:id_cov}} Along the same lines as the conditional variance, we see that the conditional covariance of $Y_i$ and $Y_j$ is given by \begin{align} \cov(Y_i , Y_j \mid Y^m, A^m, A_i, A_j) & = A_i^T \cov(X^n \mid Y^m, A^m) A_j . \notag \end{align} Letting $K = \cov(X^n \mid Y^m, A^m)$, the expectation of the squared covariance with respect to $A_i$ and $A_j$ can be computed as follows: \begin{align} \MoveEqLeft \ensuremath{\mathbb{E}}_{A_i, A_j} \left[ \left( \cov(Y_i , Y_j \mid Y^m, A^m, A_i, A_j) \right)^2 \right] \notag\\ & = \ensuremath{\mathbb{E}}_{A_i, A_j} \left[ \left( A_i^T K A_j \right) \left( A_j^T K A_i\right) \right] \notag\\ & = \frac{1}{n} \ensuremath{\mathbb{E}}_{A_i} \left[ A_i^T K^2 A_i \right] \notag\\ & = \frac{1}{n} \ensuremath{\mathbb{E}}_{A_i} \left[ \gtr\left( A_i A_i^T K^2 \right) \right] \notag\\ & = \frac{1}{n}\gtr\left( \ensuremath{\mathbb{E}}_{A_i} \left[ A_i A_i^T \right] K^2 \right) \notag\\ & = \frac{1}{n^2}\gtr\left( K^2 \right), \notag \end{align} where we have used the fact that $A_i$ and $A_j$ are independent with $\ex{ A_i A_i^T} = \frac{1}{n} I_{n \times n}$. Noting that $\gtr(K^2) = \| K\|_F^2$ completes the proof of Identity~\eqref{eq:id_cov}. \subsubsection{Proof of Identity~\eqref{eq:id_post_var}} For this identity, observe that the measurement vectors $(A_i,A_j)$ and the noise terms $(W_i, W_j)$ in \eqref{eq:dist_YiYj} are Gaussian and independent of the signal error. Therefore, the conditional distribution of $(Y_i - \widehat{Y}_i, Y_j - \widehat{Y}_j)$ given $(X^n, Y^m, A^m)$ is i.i.d.\ Gaussian with mean zero and covariance \begin{align} \cov\left( \begin{bmatrix} Y_i - \widehat{Y}_i \\ Y_j - \widehat{Y}_j \end{bmatrix} \; \middle | \; X^n, Y^m, A^m\right) = \begin{bmatrix} 1 + \cE_m& 0 \\ 0 & 1 + \cE_m\end{bmatrix} .
\notag \end{align} Using the fact that the expected absolute value of a standard Gaussian variable is equal to $\sqrt{2/\pi}$, we see that the conditional absolute moments are given by \begin{gather} \ex{ \big| Y_{i} - \widehat{Y}_i \big | \; \middle | \; X^n, Y^m ,A^m } = \sqrt{ \frac{2}{\pi} } \sqrt{1 + \cE_m} \notag\\ \ex{ \big | Y_{i} - \widehat{Y}_i \big | \big | Y_{j} - \widehat{Y}_j \big | \; \middle | \; X^n, Y^m, A^m } = \frac{2}{\pi} (1 + \cE_m). \notag \end{gather} Taking the expectation of both sides with respect to the posterior distribution of $X^n$ given $(Y^m, A^m)$ leads to \begin{gather} \ex{ \big| Y_{i} - \widehat{Y}_i \big | \; \middle | \; Y^m ,A^m } = \sqrt{ \frac{2}{\pi} } \ex{ \sqrt{1 + \cE_m} \mid Y^m, A^m } \notag\\ \ex{ \big | Y_{i} - \widehat{Y}_i \big | \big | Y_{j} - \widehat{Y}_j \big | \; \middle | \; Y^m, A^m } = \frac{2}{\pi} (1 + V_m). \notag \end{gather} Finally, we see that the conditional covariance is given by \begin{align} \MoveEqLeft \cov\left( \big| Y_{i} - \widehat{Y}_i \big| , \big | Y_{j} - \widehat{Y}_j \big | \; \middle| \; Y^m, A^{m} \right) \notag \\ & = \frac{2}{\pi} \ex{ \left( \sqrt{ 1 + \cE_m}\right)^2 \; \middle | \; Y^m, A^m} \notag\\ & \quad - \frac{2}{\pi} \left( \ex{ \sqrt{1 + \cE_m} \; \middle | \; Y^m, A^m} \right)^2 \notag\\ & = \frac{2}{\pi} \var\left( \sqrt{ 1 + \cE_m} \mid Y^m, A^m \right). \notag \end{align} This completes the proof of Identity~\eqref{eq:id_post_var}. \subsection{Proof of Lemma~\ref{lem:SE_bound_1}}\label{proof:lem:SE_bound_1} To simplify notation, we drop the explicit dependence on the problem dimensions and write $\cE$ and $V$ instead of $\cE_{m,n}$ and $V_{m,n}$. Also, we use $U = (Y^m, A^m)$ to denote the first $m$ measurements. Using this notation, the posterior distribution is given by $P_{X^n \mid U}$ and the posterior variance is $V = \ex{ \cE \! \mid \! U}$.
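As an aside, the covariance identity \eqref{eq:id_post_var} derived above can be verified exactly when the conditional variance has a two-point distribution, using only the closed form $\ex{|N(0,v)|} = \sqrt{2v/\pi}$; the two-point law below is an arbitrary illustrative choice.

```python
import math

# E takes the values 0 and 3, each with probability 1/2 (illustrative choice);
# conditionally on E, the two centered measurements are i.i.d. N(0, 1 + E).
E_vals = [0.0, 3.0]
p = 1.0 / len(E_vals)

def abs_moment(v):
    # E|N(0, v)| = sqrt(2 v / pi)
    return math.sqrt(2.0 * v / math.pi)

# cov(|Z_i|, |Z_j|) via conditional moments given E
m1 = sum(p * abs_moment(1.0 + e) for e in E_vals)          # E|Z_i|
m11 = sum(p * abs_moment(1.0 + e) ** 2 for e in E_vals)    # E[|Z_i||Z_j|]
cov = m11 - m1 ** 2

# Right-hand side of the identity: (2/pi) var( sqrt(1 + E) )
r1 = sum(p * math.sqrt(1.0 + e) for e in E_vals)
r2 = sum(p * (1.0 + e) for e in E_vals)
rhs = (2.0 / math.pi) * (r2 - r1 ** 2)
print(cov, rhs)   # the two sides agree exactly
```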
\subsubsection{Proof of Inequality \eqref{eq:weak_dec_1}} To begin, let $\cE'$ be a conditionally independent copy of $\cE$ that is drawn according to the posterior distribution $P_{\cE \mid U}$. Starting with the fact that the posterior variance can be expressed as $V= \ex{ \cE' \mid U}$, the absolute deviation between $\cE$ and $V$ can be upper bounded using the following series of inequalities: \begin{align} \ex{ \left| \cE - V \right| } & = \ensuremath{\mathbb{E}}_{U} \Big[ \ensuremath{\mathbb{E}} \big[ \left| \cE -\ex{ \cE' \! \mid \! U} \right| \, \big | \, U \big] \Big] \notag\\ & \overset{(a)}{\le} \ensuremath{\mathbb{E}}_{U} \Big[ \ensuremath{\mathbb{E}} \big[ \left| \cE - \cE' \right| \, \big | \, U \big] \Big] \notag\\ & = \ex{ \left| \cE - \cE' \right| } \notag\\ & =\ensuremath{\mathbb{E}}\bigg[ \bigg | \left( \sqrt{1 + \cE} + \sqrt{1 + \cE'} \right) \notag\\ & \qquad \times \left( \sqrt{1 + \cE} -\sqrt{1 + \cE'} \right)\bigg | \bigg] \notag\\ & \overset{(b)}{\le} \sqrt{ \ex{ \left| \sqrt{ 1 + \cE} + \sqrt{ 1 + \cE' } \right|^2} } \notag \\ & \quad \times \sqrt{ \ex{ \left|\sqrt{1 + \cE} -\sqrt{1+ \cE'} \right|^2 }} ,\label{eq:SE_bound_1_b} \end{align} where (a) follows from Jensen's inequality and the fact that $\cE$ and $\cE'$ are conditionally independent given $U$ and (b) follows from the Cauchy-Schwarz inequality. For the first term on the right-hand side of \eqref{eq:SE_bound_1_b}, observe that \begin{align} \ex{ \left| \sqrt{ 1 + \cE} + \sqrt{ 1 + \cE' } \right|^2} & \overset{(a)}{\le} 2 \ex{ ( 1 + \cE)} + 2 \ex{ (1 + \cE' ) } \notag\\ & \overset{(b)}{=} 4( 1 + M_{m,n}) \notag\\ & \le C_B ,\label{eq:SE_bound_1_c} \end{align} where (a) follows from \eqref{eq:ab2} and (b) follows from the fact that $\cE$ and $\cE'$ are identically distributed.
For the second term on the right-hand side of \eqref{eq:SE_bound_1_b}, observe that \begin{align} \MoveEqLeft \ex{ \left|\sqrt{1 + \cE} -\sqrt{1+ \cE'} \right|^2 } \notag\\ & \overset{(a)}{=} 2 \ex{ \var( \sqrt{1 + \cE} \! \mid \! U)} \notag\\ & \overset{(b)}{=} \pi \ex{ \cov\left( \left| Z_{m+1} \right| , \left|Z_{m+2}\right| \, \big | \, U \right)} , \label{eq:SE_bound_1_d} \end{align} with $Z_{i} = Y_{i} - \ex{ Y_{i} \mid Y^m, A^{m}, A_{i}} $. Here, (a) follows from the conditional variance decomposition \eqref{eq:cond_var_decomp} and (b) follows from Identity~\eqref{eq:id_post_var}. To bound the expected covariance, we apply Lemma~\ref{lem:cov_bnd} with $p=1$ to obtain \begin{align} \MoveEqLeft \ex{ \cov\left( \left| Z_{m+1} \right| , \left|Z_{m+2}\right| \mid U \right)} \notag\\ & \le 2 \sqrt{\ex{ \sqrt{ \ex{ Z_{m+1}^4 \mid U} \ex{ Z_{m+2}^4 \mid U }}} } \notag\\ & \quad \times \sqrt{ I\left(|Z_{m+1} |; |Z_{m+2}| \, \middle | \, U \right) } . \label{eq:SE_bound_1_e} \end{align} For the first term on the right-hand side, observe that \begin{align} \ex{ \sqrt{ \ex{ Z_{m+1}^4 \! \mid \! U} \ex{ Z_{m+2}^4 \! \mid \! U}}} & \overset{(a)}{\le} \sqrt{\ex{ Z_{m+1}^4} \ex{ Z_{m+2}^4}} \notag \\ & \overset{(b)}{\le} C_B, \label{eq:SE_bound_1_f} \end{align} where (a) follows from the Cauchy-Schwarz inequality and (b) follows from \eqref{eq:Ybar_moment_4}. Combining \eqref{eq:SE_bound_1_b}, \eqref{eq:SE_bound_1_c}, \eqref{eq:SE_bound_1_d}, \eqref{eq:SE_bound_1_e}, and \eqref{eq:SE_bound_1_f} yields \begin{align} \ex{ \left| \cE - V \right|} \le C_B \cdot \left| I\left(|Z_{m+1} |; |Z_{m+2}| \, \middle | \, U \right) \right|^{\frac{1}{4}}. \notag \end{align} Thus, in order to complete the proof, we need to show that the mutual information term can be upper bounded in terms of the second order MI difference sequence. 
To this end, observe that \begin{align} \MoveEqLeft I\left(|Z_{m+1} |; |Z_{m+2}| \, \big | \, U \right) \notag\\ & \overset{(a)}{\le} I\left(Z_{m+1} ; Z_{m+2} \, \middle | \, U \right) \notag\\ & \overset{(b)}{ \le} I\left(Z_{m+1}, A_{m+1} ; Z_{m+2}, A_{m+2} \, \middle | \, U \right) \notag\\ & \overset{(c)}{=} I\left(Y_{m+1}, A_{m+1} ; Y_{m+2}, A_{m+2} \, \middle | \, U \right) \notag \\ & \overset{(d)}{ =} I\left(Y_{m+1} ; Y_{m+2} \, \middle | \, U, A_{m+1}, A_{m+2} \right) \notag\\ & \overset{(e)}{=} - I''_m, \notag \end{align} where (a) and (b) both follow from the data processing inequality for mutual information, (c) follows from the fact that, given $(U, A_{m+i})$, there is a one-to-one mapping between $Z_{m+i}$ and $Y_{m+i}$, (d) follows from the fact that the measurements are generated independently of everything else (Assumption~1), and (e) follows from \eqref{eq:Ipp_alt}. This completes the proof of Inequality \eqref{eq:weak_dec_1}. \subsubsection{Proof of Inequality \eqref{eq:weak_dec_2}} The main idea behind this proof is to combine Identity~\eqref{eq:id_cov} with the covariance bound in Lemma~\ref{lem:cov_bnd}. The only subtlety is that the expectation over $(A_{m+1}, A_{m+2})$ involves the squared Frobenius norm, whereas the expectation over $U$ involves the square root of this quantity.
To begin, observe that for each realization $U =u$, we have \begin{align} \MoveEqLeft \frac{1}{n^2} \left \| \cov(X^n \mid U = u ) \right\|^2_F \notag\\ & \overset{(a)}{=} \ex{ \left| \cov\left(Y_{m+1}, Y_{m+2} \mid U= u , A_{m+1}^{m+2} \right) \right|^2 } \notag \\ & \overset{(b)}{\le} 4 \sqrt{ \ex{ Y_{m+1}^4 \mid U = u }\ex{ Y_{m+2}^4 \mid U =u } } \notag\\ & \quad \times \sqrt{ I(Y_{m+1}; Y_{m+2} \mid U = u, A_{m+1}^{m+2})}, \notag \end{align} where (a) follows from \eqref{eq:id_cov} and (b) follows from Lemma~\ref{lem:cov_bnd} with $p =2$ and the fact that \begin{multline} \ex{ \ex{ Y_{m+1}^4 \mid U = u, A_{m+1}^{m+2} }\ex{ Y_{m+2}^4 \mid U =u , A_{m+1}^{m+2}} } \notag\\ = \ex{ Y_{m+1}^4 \mid U = u } \ex{ Y_{m+2}^4 \mid U =u }. \notag \end{multline} Taking the expectation of the square root of both sides with respect to $U$ leads to \begin{align} \MoveEqLeft \frac{1}{n} \ex{ \left \| \cov(X^n \mid U ) \right\|_F} \notag\\ &\overset{(a)}{\le} 4 \left| \ex{ Y_{m+1}^4 } \ex{ Y_{m+2}^4} I(Y_{m+1}; Y_{m+2} \mid U , A_{m+1}^{m+2})\right|^{\frac{1}{4}} \notag \\ &\overset{(b)}{\le} C_B \cdot \left| I(Y_{m+1}; Y_{m+2} \mid U , A_{m+1}^{m+2})\right|^{\frac{1}{4}} \notag \\ &\overset{(c)}{=} C_B\cdot \left| I''_m \right|^{\frac{1}{4}}, \notag \end{align} where (a) follows from the Cauchy-Schwarz inequality and Jensen's inequality, (b) follows from \eqref{eq:Y_moment_4}, and (c) follows from \eqref{eq:Ipp_alt}. This completes the Proof of Inequality \eqref{eq:weak_dec_2}.
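The Cauchy--Schwarz step (a) above uses the elementary bound $\ex{\sqrt{AB}} \le \sqrt{\ex{A}\,\ex{B}}$ for nonnegative random variables $A, B$. A minimal numerical illustration (the discrete example below is arbitrary):

```python
# Illustrative check of the Cauchy-Schwarz step:
# E[sqrt(A * B)] <= sqrt(E[A] * E[B]) on an arbitrary discrete example.
import math

probs = [0.2, 0.5, 0.3]
A = [1.0, 4.0, 0.5]
B = [2.0, 1.0, 9.0]

lhs = sum(p * math.sqrt(a * b) for p, a, b in zip(probs, A, B))
rhs = math.sqrt(sum(p * a for p, a in zip(probs, A)) *
                sum(p * b for p, b in zip(probs, B)))
assert lhs <= rhs + 1e-12
```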
\subsection{Proof of Lemma~\ref{lem:Delta_alt}} \label{proof:lem:Delta_alt} The mutual information difference density can be decomposed as \begin{align} \cJ_{m,n} & = \imath(X^n; \bar{Y}_{m+1} \mid Y^m, A^{m+1}) \notag \\ & = - \log\left( f_{\bar{Y}_{m+1} \mid Y^m, A^{m+1}}\left(\bar{Y}_{m+1}\right) \right) \notag\\ & \quad - \frac{1}{2}\log (2 \pi) - \frac{1}{2} W_{m+1}^2, \notag \end{align} where $f_{\bar{Y}_{m+1} \mid Y^m, A^{m+1}}(y)$ denotes the conditional density function of the centered measurement evaluated with the random data $(Y^m, A^{m+1})$. Therefore, for every $\sigma^2 > 0$, the Kullback--Leibler divergence between $P_{\bar{Y}_{m+1} \mid Y^m, A^{m+1}}$ and the Gaussian distribution $\mathcal{N}(0, \sigma^2)$ can be expressed as \begin{align} \MoveEqLeft D_\mathrm{KL}\left( P_{\bar{Y}_{m+1} \mid Y^m, A^{m+1} } \, \middle \|\, \mathcal{N}(0, \sigma^2 ) \right) \notag \\ & = \int \left( \frac{1}{2 \sigma^2}y^2 + \frac{1}{2} \log\left ( 2\pi \sigma^2 \right ) \right) f_{\bar{Y}_{m+1} \mid Y^m, A^{m+1}}(y) \mathrm{d} y \notag \\ & \quad + \int \log\left( f_{\bar{Y}_{m+1} \mid Y^m, A^{m+1}}(y) \right) f_{\bar{Y}_{m+1} \mid Y^m, A^{m+1}}(y) \mathrm{d} y \notag \\ & = \frac{1}{2 \sigma^2} \ex{ \bar{Y}_{m+1}^2 \! \mid \! Y^m, A^{m+1} } + \frac{1}{2} \log(2 \pi \sigma^2) \notag\\ & \quad - \ex{ \cJ_{m,n} \! \mid \! Y^m, A^{m+1}} - \frac{1}{2} \log (2 \pi) - \frac{1}{2}. \notag \end{align} Taking the expectation with respect to $A_{m+1}$ and rearranging terms leads to \begin{align} \MoveEqLeft \ensuremath{\mathbb{E}}_{A_{m+1}} \left[ D_\mathrm{KL}\left( P_{\overline{Y}_{m+1} \mid Y^m, A^{m+1} } \, \middle \|\, \mathcal{N}(0, \sigma^2 ) \right) \right] \notag\\ & = \ \frac{1}{2} \log( \sigma^2) - J_{m,n} + \frac{1}{2 } \left( \frac{1 + V_{m,n}}{\sigma^2} - 1 \right), \label{eq:Delta_alt_c} \end{align} where we have used the fact that the conditional variance is given by \eqref{eq:Ybar_var_cond}. 
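The displayed decomposition of the Kullback--Leibler divergence can be sanity-checked numerically in the special case where the conditional law is itself Gaussian, say $\mathcal{N}(0, s^2)$, for which the divergence from $\mathcal{N}(0,\sigma^2)$ has the closed form $\tfrac{1}{2}\log(\sigma^2/s^2) + \tfrac{1}{2}(s^2/\sigma^2 - 1)$. The variances below are arbitrary illustrative values.

```python
# Numerical check of the KL decomposition
#   D(P || N(0, sigma^2)) = E_P[Y^2]/(2 sigma^2) + (1/2) log(2 pi sigma^2)
#                           + E_P[log p(Y)]
# in the Gaussian case P = N(0, s^2). Values of s^2 and sigma^2 are arbitrary.
import numpy as np

s2, sigma2 = 0.7, 1.9
y = np.linspace(-30.0, 30.0, 200001)

def integrate(vals):
    # composite trapezoid rule on the fixed grid y
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(y)))

p = np.exp(-y ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

decomposed = (integrate(y ** 2 * p) / (2 * sigma2)
              + 0.5 * np.log(2 * np.pi * sigma2)
              + integrate(p * np.log(p)))

closed_form = 0.5 * np.log(sigma2 / s2) + 0.5 * (s2 / sigma2 - 1)
assert abs(decomposed - closed_form) < 1e-5
```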
At this point, Identity~\eqref{eq:DeltaP_alt} follows immediately by letting $\sigma^2 =1+ V_{m,n}$. For Identity~\eqref{eq:Delta_alt}, let $\sigma^2 = 1+M_{m,n}$ and note that the expectation of the last term in \eqref{eq:Delta_alt_c} is equal to zero. \subsection{Proof of Lemma~\ref{lem:DeltaP_bound} }\label{proof:lem:DeltaP_bound} The error vector $\bar{X}^n = X^n - \ex{X^n \! \mid \! Y^m, A^m}$ has mean zero by construction. Therefore, by \eqref{eq:Ybar_alt} and Lemma~\ref{lem:cclt}, the posterior non-Gaussianness satisfies \begin{align} \Delta^P_{m,n} & \le \frac{1}{2} \ex{ \left| \cE_{m,n} - V_{m,n} \right| \, \big | \, Y^m, A^m } \notag\\ & + C \cdot \left| \tfrac{1}{n} \| \cov(X^n \! \mid \! Y^m, A^m)\|_F \left(1 + \widetilde{V}_{m,n}^2 \right) \right|^\frac{2}{5}, \notag \end{align} where $\widetilde{V}_{m,n} = \sqrt{ \ex{ \cE_{m,n}^2 \! \mid \! Y^m, A^m}}$. Taking the expectation of both sides and using the Cauchy-Schwarz inequality and Jensen's inequality leads to \begin{multline} \ex{ \Delta^P_{m,n}} \le \frac{1}{2} \ex{ \left| \cE_{m,n} - V_{m,n} \right|} \notag \\ + C \cdot \left| \tfrac{1}{n} \ex{ \left\| \cov(X^n \! \mid \! Y^m, A^m)\right\|_F} \big (1 \!+\! \sqrt{ \ex{ \cE_{m,n}^2}} \big) \right|^\frac{2}{5}. \end{multline} Furthermore, combining this inequality with Lemma~\ref{lem:SE_bound_1} and \eqref{eq:SE_moment_2} gives \begin{align} \ex{ \Delta^P_{m,n}} \le C_B \cdot \left[ \left| I''_{m,n} \right|^\frac{1}{4} + \left| I''_{m,n} \right|^\frac{1}{10}\right]. \notag \end{align} Finally, since $| I''_{m,n}|$ can be bounded uniformly by a constant that depends only on $B$, we see that the dominant term on the right-hand side is the one with the smaller exponent. This completes the proof. \subsection{Proof of Lemma~\ref{lem:Vdev_to_logVdev}}\label{proof:lem:Vdev_to_logVdev} To simplify notation we drop the explicit dependence on the problem parameters and write $M$ and $V$ instead of $M_{m,n}$ and $V_{m,n}$. 
The first inequality in \eqref{eq:Vdev_to_logVdev} follows immediately from the fact that the mapping $x \mapsto \log(1 + x)$ is one-Lipschitz on $\mathbb{R}_+$. Next, letting $V'$ be an independent copy of $V$, the absolute deviation of the posterior variance can be upper bounded as follows: \begin{align} \ex{ \left| V - M\right|} & = \ex{ \left| V - \ex{ V'} \right|} \notag\\ & \overset{(a)}{\le} \ex{ \left| V - V' \right|} \notag\\ & \overset{(b)}{\le} \sqrt{ \ex{ \left(1 + V \right)^2} \ex{\left| \log\left( \frac{ 1 + V}{ 1 + V'} \right)\right| } }, \label{eq:PSE_dev_c_a} \end{align} where (a) follows from Jensen's inequality and (b) follows from applying Lemma~\ref{lem:L1_to_Log} with $X = 1 + V$ and $Y = 1 + V'$. The first expectation on the right-hand side of \eqref{eq:PSE_dev_c_a} obeys: \begin{align} \ex{ \left(1 + V \right)^2 } \overset{(a)}{\le} 2(1 + \ex{ V^2}) \overset{(b)}{\le} C_B, \label{eq:PSE_dev_c_b} \end{align} where (a) follows from \eqref{eq:ab2} and (b) follows from Jensen's inequality and \eqref{eq:SE_moment_2}. For the second expectation on the right-hand side of \eqref{eq:PSE_dev_c_a}, observe that by the triangle inequality, we have, for every $t \in \mathbb{R}$, \begin{align} \left| \log\left( \frac{ 1 + V}{ 1 + V'} \right)\right| & \le \left| \log\left( 1 + V \right) - t \right| + \left| \log\left( 1 + V' \right) - t \right| . \notag \end{align} Taking the expectation of both sides and minimizing over $t$ yields \begin{align} \ex{\left| \log\left( \frac{ 1 + V}{ 1 + V'} \right)\right| } & \le \min_{ t \in \mathbb{R}} 2 \ex{ \left| \log\left( 1 + V \right) - t \right| } . \label{eq:PSE_dev_c_c} \end{align} Plugging \eqref{eq:PSE_dev_c_b} and \eqref{eq:PSE_dev_c_c} back into \eqref{eq:PSE_dev_c_a} completes the proof of Lemma~\ref{lem:Vdev_to_logVdev}. \subsection{Proof of Lemma~\ref{lem:post_var_smoothness}}\label{proof:lem:post_var_smoothness} Let $U = (Y^m, A^m)$ and $U_k= (Y_{m+1}^{m+k}, A_{m+1}^{m+k})$.
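The argument that follows relies on expressing the posterior variance through the expected variance of a future measurement. Under the paper's measurement model $Y_i = \langle A_i, X^n\rangle + W_i$ with i.i.d.\ $\mathcal{N}(0,1/n)$ entries of $A_i$, this reduces to the second-moment identity $\ensuremath{\mathbb{E}}_A[A^\top \Sigma A] = \operatorname{tr}(\Sigma)/n$. The following check is illustrative only; it uses Rademacher entries scaled by $1/\sqrt{n}$, which share the required second moments, so that the expectation can be enumerated exactly.

```python
# Illustrative check of E[A^T Sigma A] = tr(Sigma)/n for A with i.i.d.
# zero-mean entries of variance 1/n. Rademacher/sqrt(n) entries are used
# here (same second moments as Gaussian), allowing exact enumeration.
import itertools
import numpy as np

n = 4
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))
Sigma = M @ M.T                      # arbitrary PSD "posterior covariance"

vectors = [np.array(s) / np.sqrt(n)
           for s in itertools.product([-1.0, 1.0], repeat=n)]
avg_quad = sum(a @ Sigma @ a for a in vectors) / len(vectors)
assert abs(avg_quad - np.trace(Sigma) / n) < 1e-10
```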
We use the fact that the posterior variance can be expressed in terms of the expected variance of a future measurement. Specifically, by \eqref{eq:id_var}, it follows that for any integer $i > m + k$, we have \begin{align} V_{m} & = \ensuremath{\mathbb{E}}_{A_{i}}\left[ \var\left( Y_{i} \! \mid \! U, A_{i}\right) \right] - 1 \notag\\ V_{m+k} & = \ensuremath{\mathbb{E}}_{A_{i}}\left[ \var\left( Y_{i} \! \mid \! U, U_k, A_{i}\right) \right] - 1. \notag \end{align} Accordingly, the expectation of the absolute difference can be upper bounded as \begin{align} \MoveEqLeft \ex{ \left| V_{m+k} - V_m \right|} \notag \\ & = \ex{ \left | \ensuremath{\mathbb{E}}_{A_{i}} \left[ \var(Y_{i} \! \mid \! U, U_k, A_{i}) - \var(Y_{i} \! \mid \! U, A_{i}) \right] \right|} \notag \\ & \overset{(a)}{\le} \ex{ \left | \var(Y_{i} \! \mid \! U, U_k, A_{i}) - \var(Y_{i} \! \mid \! U, A_{i}) \right|} \notag\\ & \overset{(b)}{\le} C \cdot \sqrt{\ex{Y^4_{i}} \ex{ D_\text{KL}(P_{Y_{i} \mid U, U_k, A_i} \, \middle \| \, P_{Y_{i} \mid U, A_i}) }} \notag\\ & \overset{(c)}{\le} C_B \cdot \sqrt{ \ex{ D_\text{KL}(P_{Y_{i} \mid U, U_k, A_i} \, \middle \| \, P_{Y_{i} \mid U, A_i}) }} , \label{eq:post_var_smoothness_b} \end{align} where (a) follows from Jensen's inequality, (b) follows from Lemma~\ref{lem:var_bnd} with $p =1$, and (c) follows from \eqref{eq:Y_moment_4}. Next, the expected Kullback--Leibler divergence can be expressed in terms of a conditional mutual information, \begin{align} \MoveEqLeft \ex{ D_\text{KL}(P_{Y_{i} \mid U, U_k, A_i} \, \middle \| \, P_{Y_{i} \mid U, A_i}) } \notag \\ & = I(Y_{i} ; U_k \mid U, A_{i}) \notag\\ &\overset{(a)}{=} I(Y_{i} ; Y_{m+1}^{m+k} \! \mid \! Y^m , A^{m+k}, A_i) \notag\\ & \overset{(b)}{=}h(Y_{i} \! \mid \! Y^m , A^{m}, A_i) - h(W_i) \notag \\ & \quad - h(Y_{i} \! \mid \! Y^{m+k} , A^{m+k}, A_i) + h(W_i) \notag\\ & \overset{(c)}{=}h(Y_{m+1} \mid Y^m , A^{m+1}) - h(W_{m+1}) \notag \\ & \quad - h(Y_{m+k + 1} \! \mid \! 
Y^{m+k} , A^{m+k+1}) + h(W_{m+k+1}) \notag \\ & \overset{(d)}{=}I(X^n ; Y_{m+1} \! \mid \! Y^m , A^{m+1}) \notag\\ & \quad - I(X^n; Y_{m+k+1} \! \mid \! Y^{m+k} , A^{m+k+1}) \notag\\ & \overset{(e)}{=} I'_{m,n} - I'_{m+k,n} \label{eq:post_var_smoothness_c} \end{align} where (a) follows from the definitions of $U$ and $U_k$ and the fact that the measurements are independent of everything else, (b) follows from expanding the mutual information in terms of the differential entropy of $Y_i$, (c) follows from the fact that future measurements are identically distributed given the past, (d) follows from the fact that $h(W_{m} ) = h(Y_m \! \mid \! X^n, A^m )$, and (e) follows from \eqref{eq:Ip_alt}. Combining \eqref{eq:post_var_smoothness_b} and \eqref{eq:post_var_smoothness_c}, we see that the following inequality holds for all integers $m$ and $k$, \begin{align} \ex{ \left| V_{m,n} - V_{k,n} \right| } \le C_B \cdot \left| I'_{m,n} - I'_{k,n} \right|^\frac{1}{2}. \notag \end{align} Moreover, we can now bound the deviation over $\ell$ measurements using \begin{align} \frac{1}{\ell} \sum_{k=m}^{m+\ell-1} \ex{ \left | V_m - V_k \right| } & \le C_B \cdot \frac{1}{\ell} \sum_{k=m}^{m + \ell -1} \left| I'_m - I'_{k} \right|^\frac{1}{2} \notag \\ &\le C_B \cdot \left| I'_{m} - I'_{m + \ell-1} \right|^\frac{1}{2}, \notag \end{align} where the second inequality follows from the fact that $I'_{m,n}$ is non-increasing in $m$ (see Section~\ref{sec:multivariate_MI_MMSE}). This completes the proof of Lemma~\ref{lem:post_var_smoothness}. \subsection{Proof of Lemma~\ref{lem:PMID_var}}\label{proof:lem:PMID_var} Starting with the triangle inequality, the sum of the posterior MI difference satisfies, for all $t \in \mathbb{R}$, \begin{align} \left| \sum_{i =m}^{m+ \ell-1} J_i - t \right| \le \left| \sum_{i =m}^{m+\ell-1} J_{i} - \cJ_i \right| + \left| \sum_{i =m}^{m+\ell-1} \cJ_{i} - t \right|.
\notag \end{align} Taking the expectation of both sides and minimizing over $t$ leads to \begin{multline} \inf_{t \in \mathbb{R}} \ex{ \left| \sum_{i =m}^{m+ \ell-1} J_i - t \right|} \le \inf_{t \in \mathbb{R}} \ex{ \left| \sum_{i =m}^{m+\ell-1} \cJ_{i} - t \right|} \\+ \ex{ \left| \sum_{i =m}^{m+\ell-1} \cJ_{i} - J_i \right|} \label{eq:PMID_var_b} . \end{multline} For the first term in \eqref{eq:PMID_var_b}, observe that \begin{align} \inf_{t \in \mathbb{R}} \ex{ \left| \sum_{i =m}^{m+\ell-1} \cJ_i - t \right|} & \overset{(a)}{\le} \inf_{t \in \mathbb{R}} \sqrt{ \ex{ \left| \sum_{i =m}^{m+\ell-1} \cJ_{i} - t \right|^2} } \notag\\ & = \sqrt{ \var\left( \sum_{i=m}^{m+\ell-1} \cJ_{i} \right) } , \label{eq:PMID_var_c} \end{align} where (a) follows from Jensen's inequality. Furthermore, the variance obeys the upper bound \begin{align} \var\left( \sum_{i =m}^{m+\ell-1} \cJ_{i} \right) & = \var\left( \sum_{i = 0}^{m + \ell -1} \cJ_i - \sum_{i = 0}^{m-1} \cJ_{i} \right) \notag\\ & \overset{(a)}{\le} 2 \var\left( \sum_{i = 0}^{m + \ell -1} \cJ_i \right) + 2 \var\left( \sum_{i = 0}^{m-1} \cJ_{i} \right) \notag \\ & \overset{(b)}{=} 2 \var\left( \imath\left( X^n ; Y^{m+\ell} \mid A^{m+\ell} \right) \right) \notag \\ & \quad + 2 \var\left( \imath\left( X^n ; Y^m \mid A^m \right) \right) \notag\\ & \overset{(c)}{\le} C_B \cdot \left( 1 + \tfrac{m+\ell}{n} \right)^2 n + C_B \cdot \left( 1 + \tfrac{m}{n} \right)^2 n \notag\\ & \le C'_B \cdot \left( 1 + \tfrac{m+\ell}{n} \right)^2 n \notag \end{align} where (a) follows from \eqref{eq:ab2}, (b) follows from the definition of $\cJ_m$, and (c) follows from Lemma~\ref{lem:IMI_var}. Plugging this bound back into \eqref{eq:PMID_var_c} gives \begin{align} \inf_{t \in \mathbb{R}} \ex{ \left| \sum_{i =m}^{m+\ell-1} \cJ_i - t \right|} & \le C_B \cdot \left( 1 + \frac{m+\ell}{n} \right) \sqrt{ n } \label{eq:PMID_var_d}. \end{align} Next, we consider the second term in \eqref{eq:PMID_var_b}. 
Note that $\cJ_m$ can be expressed explicitly as follows: \begin{align} \cJ_m & = \log\left( \frac{f_{Y_{m+1}|X^n , Y^m,A^{m+1}}( Y_{m+1} \mid X^n, Y^m,A^{m+1} )}{f_{Y_{m+1}|Y^m, A^{m+1}}(Y_{m+1} \mid Y^m, A^{m+1} )} \right) \notag\\ & = - \log\left( f_{Y_{m+1}|Y^m, A^{m+1}}(Y_{m+1} \mid Y^m, A^{m+1} ) \right) \notag\\ & \quad - \frac{1}{2} W^2_{m+1} - \frac{1}{2} \log( 2\pi) . \notag \end{align} To proceed, we define the random variables \begin{align} \mathcal{H}_m & \triangleq - \log\left( f_{Y_{m+1}|Y^m, A^{m+1}}(Y_{m+1} \mid Y^m, A^{m+1} ) \right) \notag\\ \widehat{\mathcal{H}}_m & \triangleq \ex{ \mathcal{H}_m \mid Y^m, A^m} , \notag \end{align} and observe that \begin{align} \cJ_m & = \mathcal{H}_m - \frac{1}{2} W^2_{m+1} - \frac{1}{2} \log( 2\pi) \notag\\ J_m & = \widehat{\mathcal{H}}_m - \frac{1}{2} - \frac{1}{2} \log( 2\pi) . \notag \end{align} Using this notation, we can now write \begin{align} \MoveEqLeft \ex{ \left| \sum_{i =m}^{m+\ell-1} \cJ_{i} - J_i \right|} \notag \\ & = \ex{ \left| \sum_{i =m}^{m+\ell-1} \mathcal{H}_i - \widehat{\mathcal{H}}_i + \frac{1}{2}(W_{i}^2 - 1) \right|} \notag\\ & \overset{(a)}{\le} \ex{ \left| \sum_{i =m}^{m+\ell-1} \mathcal{H}_i - \widehat{\mathcal{H}}_i \right|} + \ex{ \left| \sum_{i =m}^{m+\ell-1}\frac{1}{2}(W_{i}^2 - 1) \right|} \notag \\ & \overset{(b)}{\le} \sqrt{ \ex{ \left( \sum_{i =m}^{m+\ell-1} \mathcal{H}_i - \widehat{\mathcal{H}}_i \right)^2}} \notag \\ & \quad + \sqrt{ \ex{ \left( \sum_{i =m}^{m+\ell-1}\frac{1}{2}(W_{i}^2 - 1) \right)^2}}, \label{eq:PMID_var_e} \end{align} where (a) follows from the triangle inequality and (b) follows from Jensen's inequality. 
For the first term on the right-hand side, observe that the square of the sum can be expanded as follows: \begin{align} \MoveEqLeft \ex{ \left( \sum_{i =m}^{m+\ell-1} \mathcal{H}_i - \widehat{\mathcal{H}}_i \right)^2} \notag \\ & =\sum_{i =m}^{m+\ell-1} \ex{ \left( \mathcal{H}_i - \widehat{\mathcal{H}}_i \right)^2} \notag \\ & \quad + 2 \sum_{i =m}^{m+\ell-1} \sum_{j = i+1}^{m+ \ell-1} \ex{ \left( \mathcal{H}_i - \widehat{\mathcal{H}}_i \right)\left( \mathcal{H}_j - \widehat{\mathcal{H}}_j \right)}. \label{eq:PMID_var_f} \end{align} To deal with the first term on the right-hand side of \eqref{eq:PMID_var_f}, observe that \begin{align} \ex{ \left( \mathcal{H}_i - \widehat{\mathcal{H}}_i \right)^2} & \overset{(a)}{ =}\ex{ \var( \mathcal{H}_i \mid Y^i, A^i )} \notag \\ &\overset{(b)}{ \le} \ex{ \var( \mathcal{H}_i ) } \notag\\ & \le \ex{ \left( \mathcal{H}_i - \frac{1}{2} \log(2 \pi) \right)^2 }, \notag \end{align} where (a) follows from the definition of $\widehat{\mathcal{H}}_i$ and (b) follows from the law of total variance \eqref{eq:law_tot_var}. To bound the remaining term, let $U = (Y^m, A^m)$ and let $\widetilde{X}^n_{u}$ be drawn according to the posterior distribution of $X^n$ given $U = u$.
Then, the density of $Y_{m+1}$ given $(U, A_{m+1})$ can be bounded as follows: \begin{align} \frac{1}{\sqrt{2 \pi}} &\ge f_{Y_{m+1}|U, A_{m+1}}(y_{m+1} \mid u, a_{m+1}) \notag\\ & = \ensuremath{\mathbb{E}}_{\tilde{X}_u^n} \left[ \frac{1}{\sqrt{2 \pi}}\exp\left( - \frac{1}{2} (y_{m+1} - \langle a_{m+1}, \widetilde{X}_u^n \rangle)^2 \right) \right] \notag\\ & \overset{(a)}{\ge} \frac{1}{\sqrt{2 \pi}} \exp\left( - \frac{1}{2} \ensuremath{\mathbb{E}}_{\tilde{X}_u^n} \left[ (y_{m+1} - \langle a_{m+1}, \widetilde{X}_u^n \rangle)^2 \right] \right) \notag \\ & \overset{(b)}{\ge} \frac{1}{\sqrt{2 \pi}} \exp\left( - y_{m+1}^2 - \ensuremath{\mathbb{E}}_{\tilde{X}_u^n} \left[ ( \langle a_{m+1}, \widetilde{X}_u^n \rangle)^2 \right] \right) , \notag \end{align} where (a) follows from Jensen's inequality and the convexity of the exponential and (b) follows from \eqref{eq:ab2}. Using these bounds, we obtain \begin{align} \MoveEqLeft \ex{ \left( \mathcal{H}_i - \frac{1}{2} \log(2 \pi) \right)^2} \notag\\ & \le \ex{ \left( Y_{m+1}^2 + \ensuremath{\mathbb{E}}_{\tilde{X}_U^n} \left[ \left( \langle A_{m+1}, \widetilde{X}_U^n \rangle \right)^2\right]\right)^2} \notag\\ & \overset{(a)}{\le} 2 \ex{ Y_{m+1}^4} + 2 \ex{ \left( \ensuremath{\mathbb{E}}_{\tilde{X}_U^n} \left[ \left( \langle A_{m+1}, \widetilde{X}_U^n \rangle \right)^2\right]\right)^2} \notag\\ & \overset{(b)}{\le} 2 \ex{ Y_{m+1}^4} + 2 \ex{ \left( \langle A_{m+1}, X^n \rangle \right)^4} \notag\\ & \le 4\ex{ Y_{m+1}^4} \notag\\ & \overset{(c)}{\le} C_B, \notag \end{align} where (a) follows from \eqref{eq:ab2}, (b) follows from Jensen's inequality and the fact that $\widetilde{X}^n_U$ has the same distribution as $X^n$ and is independent of $A_{m+1}$, and (c) follows from \eqref{eq:Y_moment_4}.
To deal with the second term on the right-hand side of \eqref{eq:PMID_var_f}, note that $\mathcal{H}_i$ and $\widehat{\mathcal{H}}_i$ are determined by $(Y^{i+1}, A^{i+1})$, and thus, for all $j > i$, \begin{align} \MoveEqLeft \ex{ \left( \mathcal{H}_i - \widehat{\mathcal{H}}_i \right)\left( \mathcal{H}_j - \widehat{\mathcal{H}}_j \right) \mid Y^{i+1}, A^{i+1} } \notag\\ & = \left( \mathcal{H}_i - \widehat{\mathcal{H}}_i \right) \ex{ \left( \mathcal{H}_j - \widehat{\mathcal{H}}_j \right) \mid Y^{i+1}, A^{i+1} } = 0 . \notag \end{align} Consequently, the cross terms in the expansion of \eqref{eq:PMID_var_f} are equal to zero, and the first term on the right-hand side of \eqref{eq:PMID_var_e} obeys the upper bound \begin{align} \sqrt{ \ex{ \left( \sum_{i =m}^{m+\ell-1} \mathcal{H}_i - \widehat{\mathcal{H}}_i \right)^2} } &\le C_B \cdot \sqrt{\ell} \label{eq:PMID_var_g}. \end{align} As for the second term on the right-hand side of \eqref{eq:PMID_var_e}, note that $\sum_{i =m}^{m+\ell-1}W_{i}^2$ is chi-squared with $\ell$ degrees of freedom, and thus \begin{align} \sqrt{ \ex{ \left( \sum_{i =m}^{m+\ell-1}\frac{1}{2}(W_{i}^2 - 1) \right)^2}} & = \sqrt{ \frac{\ell}{2} } \label{eq:PMID_var_h}. \end{align} Plugging \eqref{eq:PMID_var_g} and \eqref{eq:PMID_var_h} back into \eqref{eq:PMID_var_e} leads to \begin{align} \ex{ \left| \sum_{i =m}^{m+\ell-1} \cJ_{i} - J_i \right|} & \le C_B \cdot \sqrt{\ell} . \notag \end{align} Finally, combining this inequality with \eqref{eq:PMID_var_b} and \eqref{eq:PMID_var_d} gives \begin{align} \inf_{t \in \mathbb{R}} \ex{ \left| \sum_{i =m}^{m+ \ell-1} J_i - t \right|} & \le C_B \cdot \left( \left( 1 + \frac{m+\ell}{n} \right) \sqrt{n} + \sqrt{\ell} \right) \notag\\ & \le C'_B \cdot \left( \left( 1 + \frac{m}{n} \right) \sqrt{n} + \frac{\ell}{\sqrt{n}} \right), \notag \end{align} where the last step follows from keeping only the dominant terms. This completes the proof of Lemma~\ref{lem:PMID_var}.
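The chi-squared computation in \eqref{eq:PMID_var_h} reduces to $\var(W^2) = \ex{W^4} - 1 = 2$ for $W \sim \mathcal{N}(0,1)$. As an illustrative check (not part of the proof), the following script recovers this moment by Gauss--Hermite quadrature, which is exact for polynomials at this order, and confirms the $\ell/2$ value.

```python
# Check of the chi-squared variance identity: for i.i.d. W_i ~ N(0,1),
# Var((1/2) sum_{i=1}^l (W_i^2 - 1)) = l * Var(W^2) / 4 = l / 2,
# since Var(W^2) = E[W^4] - (E[W^2])^2 = 3 - 1 = 2.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(20)

def normal_moment(k):
    # E[W^k] for W ~ N(0,1) via Gauss-Hermite quadrature (exact at this order)
    return float(np.sum(weights * (np.sqrt(2.0) * nodes) ** k) / np.sqrt(np.pi))

var_W2 = normal_moment(4) - normal_moment(2) ** 2
ell = 7
variance_of_sum = ell * var_W2 / 4.0   # independence: variances add
assert abs(var_W2 - 2.0) < 1e-10
assert abs(variance_of_sum - ell / 2.0) < 1e-10
```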
\subsection{Proof of Lemma~\ref{lem:post_var_dev_bound}} \label{proof:lem:post_var_dev_bound} Fix any $(m,n, \ell) \in \mathbb{N}^3$. We begin with the following decomposition, which follows from the triangle inequality: \begin{align} \MoveEqLeft \inf_{t \in \mathbb{R} } \ex{ \left| \frac{1}{2} \log(1 + V_m) - t \right| } \notag\\ &\le \ex{ \left |\frac{1}{2} \log(1 + V_m) - \frac{1}{\ell} \sum_{k=m}^{m+\ell-1}\frac{1}{2} \log\left(1 +V_k \right) \right| } \notag \\ & \quad + \ex{ \left| \frac{1}{\ell} \sum_{k=m}^{m+\ell-1} \frac{1}{2} \log\left(1 +V_k \right) - \frac{1}{\ell} \sum_{k=m}^{m+\ell-1} J_k \right| } \notag\\ & \quad + \inf_{t \in \mathbb{R}} \ex{ \left| \frac{1}{\ell} \sum_{k=m}^{m+ \ell-1} J_k - t \right| }. \label{eq:PSE_dev} \end{align} The first term on the right-hand side of \eqref{eq:PSE_dev} can be bounded in terms of the smoothness of the mutual information given in Lemma~\ref{lem:post_var_smoothness}. We use the following chain of inequalities: \begin{align} \MoveEqLeft \ex{ \left | \log(1 + V_m) - \frac{1}{\ell} \sum_{k=m}^{m +\ell-1} \log\left(1 +V_k \right) \right| } \notag \\ & \overset{(a)}{\le} \frac{1}{\ell} \sum_{k=m}^{m+ \ell-1}\ex{ \left | \log(1 + V_m) - \log\left(1 +V_k \right) \right| } \notag\\ & \overset{(b)}{\le} \frac{1}{\ell} \sum_{k=m}^{m+\ell-1} \ex{ \left | V_m - V_k \right| } \notag\\ &\overset{(c)}{\le} C_B \cdot \left| I'_{m} - I'_{m + \ell -1} \right|^\frac{1}{2} \label{eq:PSE_dev_c} \end{align} where (a) follows from Jensen's inequality, (b) follows from the fact that the mapping $x \mapsto \log(1 +x)$ is one-Lipschitz on $\mathbb{R}_+$, and (c) follows from Lemma~\ref{lem:post_var_smoothness}. The second term on the right-hand side of \eqref{eq:PSE_dev} is bounded by the relationship between the posterior variance and posterior mutual information difference: \begin{align} \MoveEqLeft \ex{ \left| \frac{1}{\ell} \sum_{k=m}^{m+\ell-1} \frac{1}{2} \log\left(1 +V_k \right) - \frac{1}{\ell} \sum_{k=m}^{m+\ell-1} J_k \right| } \notag\\ &
\overset{(a)}{=} \frac{1}{\ell} \sum_{k=m}^{m+ \ell -1} \ex{\Delta_k^P } \notag\\ & \overset{(b)}{\le} C_B \cdot \frac{1}{\ell} \sum_{k=m}^{m+\ell-1} \left| I''_k \right|^\frac{1}{10} \notag\\ & \overset{(c)}{\le} C_B \cdot \left|\frac{1}{\ell} \sum_{k=m}^{m+\ell-1}\left| I''_k \right| \right|^\frac{1}{10} \notag\\ & = C_B \cdot \left|\frac{1}{\ell} ( I'_{m+\ell} - I'_{m} ) \right|^\frac{1}{10} \notag\\ & \overset{(d)}{\le} C'_B \cdot \ell^{-\frac{1}{10} } \label{eq:PSE_dev_d} , \end{align} where (a) follows from Identity~\eqref{eq:DeltaP_alt}, (b) follows from Lemma~\ref{lem:DeltaP_bound}, (c) follows from Jensen's inequality and the non-positivity of $I''_m$, and (d) follows from the fact that $I'_m$ is bounded by a constant that depends only on $B$ (see Section~\ref{sec:multivariate_MI_MMSE}). Finally, the third term on the right-hand side of \eqref{eq:PSE_dev} is bounded by Lemma~\ref{lem:PMID_var}. Plugging \eqref{eq:PSE_dev_c} and \eqref{eq:PSE_dev_d} back into \eqref{eq:PSE_dev} leads to \begin{multline} \inf_{t \in \mathbb{R} } \ex{ \left| \frac{1}{2} \log(1 + V_m) - t \right| } \le C_B \cdot \left| I'_{m} - I'_{m + \ell -1} \right|^\frac{1}{2} \\ + C_B \cdot \left[\ell^{-\frac{1}{10}} + \left(1 + \frac{m }{n} \right) \frac{ \sqrt{n}}{\ell} + \frac{1}{ \sqrt{n}} \right]. \notag \end{multline} Combining this inequality with Lemma~\ref{lem:Vdev_to_logVdev} completes the proof of Lemma~\ref{lem:post_var_dev_bound}. \subsection{Proof of Lemma~\ref{lem:Delta_bound}}\label{proof:lem:Delta_bound} Fix any $(m,n, \ell) \in \mathbb{N}^3$.
Combining Identity~\eqref{eq:Delta_decomp} with Lemmas~\ref{lem:DeltaP_bound} and \ref{lem:post_var_dev_bound} yields \begin{align} \Delta_{m,n} & = \ex{ \Delta^P_{m,n}} + \frac{1}{2} \ex{\log\left( \frac{ 1 + M_{m,n}}{ 1 + V_{m,n}} \right)} \notag\\ & \begin{multlined}[b] \le C_B \cdot \Big[ \left| I''_{m,n} \right|^\frac{1}{10} + \left | I'_{m,n} - I'_{m+ \ell - 1, n} \right|^\frac{1}{4} \\ + \ell^{-\frac{1}{20}}+ \left(1 + \tfrac{m}{n} \right)^\frac{1}{2} n^\frac{1}{4} \ell^{-\frac{1}{2}} + n^{-\frac{1}{4}} \Big]. \label{eq:Delta_bound} \end{multlined} \end{align} For the specific choice of $\ell = \lceil n^\frac{5}{6} \rceil$, we have \begin{align} \ell^{-\frac{1}{20}}+ \left(1 + \tfrac{m}{n} \right)^\frac{1}{2} n^\frac{1}{4} \ell^{-\frac{1}{2}} & \le n^{-\frac{1}{24}} + \left(1 + \tfrac{m}{n} \right)^\frac{1}{2} n^{-\frac{1}{6} } \notag \\ & \le 2\left(1 + \tfrac{m}{n} \right)^\frac{1}{2} n^{-\frac{1}{24} }. \notag \end{align} Plugging this inequality back into \eqref{eq:Delta_bound} completes the proof of Lemma~\ref{lem:Delta_bound}. \section{Proofs of Results in Section~\ref{proof:thm:MMSE_fixed_point}} \subsection{Proof of Lemma~\ref{lem:M_to_M_aug}}\label{proof:lem:M_to_M_aug} Recall that $M_m$ is the expectation of the posterior variance $V_m$. Therefore, the difference between $M_{m+1}$ and $M_m$ can be bounded as follows: \begin{align} \left| M_{m+1} - M_m \right| & = \left| \ex{ V_{m+1} - V_m} \right| \notag\\ & \overset{(a)}{\le} \ex{ \left | V_{m+1} - V_{m} \right|} \notag\\ & \overset{(b)}{\le} C_B \cdot \sqrt{\left| I_m'' \right|}, \label{eq:MMSE_smooth} \end{align} where (a) follows from Jensen's inequality and (b) follows from Lemma~\ref{lem:post_var_smoothness}. Combining \eqref{eq:MMSE_smooth} with the sandwiching relation \eqref{eq:tildeM_sandwich} leads to \eqref{eq:M_to_M_aug}.
\subsection{Proof of Lemma~\ref{lem:MMSE_aug_alt}}\label{proof:lem:MMSE_aug_alt} Let $Q$ be a random matrix distributed uniformly on the set of $(m+1) \times (m+1)$ orthogonal matrices and define the rotated augmented measurements: \begin{align} \widetilde{Y}^{m+1} &= Q \begin{bmatrix} Y^{m} \\ Z_{m+1} \end{bmatrix} , \qquad \widetilde{A}^{m+1} = Q \begin{bmatrix} A^{m} & \bm{0}_{m \times 1} \\ A_{m+1} & \sqrt{G_{m+1}} \end{bmatrix}. \notag \end{align} Since multiplication by $Q$ is a one-to-one transformation, the augmented MMSE can be expressed equivalently in terms of the rotated measurements: \begin{align} \widetilde{M}_m & \triangleq \frac{1}{n} \mathsf{mmse}(X^n \mid Y^{m}, A^m, \mathcal{D}_{m+1} ) \notag\\ & = \frac{1}{n} \mathsf{mmse}(X^n \mid \widetilde{Y}^{m+1}, \widetilde{A}^{m+1}). \label{eq:tildeM_alt_a} \end{align} \begin{lemma}\label{lem:tildeA} The entries of the $(m+1) \times (n+1)$ random matrix $\widetilde{A}^{m+1}$ are i.i.d.\ Gaussian $\mathcal{N}(0,1/n)$. \end{lemma} \begin{proof} The first $n$ columns are i.i.d.\ Gaussian $\mathcal{N}(0, \frac{1}{n} I_{m+1})$ and independent of $Q$ because of the rotational invariance of the i.i.d.\ Gaussian distribution on $A^{m+1}$. The last column of $ \widetilde{A}^{m+1}$ is equal to the product of $\sqrt{G_{m+1}}$ and the last column of $Q$. Since $G_{m+1}$ is proportional to a chi random variable with $m+1$ degrees of freedom and the last column of $Q$ is distributed uniformly on the Euclidean sphere of radius one, the last column is also Gaussian $\mathcal{N}(0, \frac{1}{n} I_{m+1})$; see e.g.\ \cite[Theorem~2.3.18]{gupta:1999}. \end{proof} The key takeaway from Lemma~\ref{lem:tildeA} is that the distribution on the columns of $\widetilde{A}^{m+1}$ is permutation invariant.
Since the distribution on the entries of $X^{n+1}$ is also permutation invariant, this means that the MMSEs of the signal entries are identical, i.e., \begin{align} \mathsf{mmse}(X_{i} \mid \widetilde{Y}^{m+1}, \widetilde{A}^{m+1}) = \mathsf{mmse}(X_{j} \mid \widetilde{Y}^{m+1}, \widetilde{A}^{m+1}), \notag \end{align} for all $i,j \in [n+1]$. Combining this fact with \eqref{eq:tildeM_alt_a}, we see that the augmented MMSE can be expressed equivalently as \begin{align} \widetilde{M}_m & = \mathsf{mmse}(X_{n+1} \mid \widetilde{Y}^{m+1}, \widetilde{A}^{m+1}) \notag\\ & = \mathsf{mmse}(X_{n+1} \mid Y^{m}, A^m, \mathcal{D}_{m+1}) , \notag \end{align} where the last step follows, again, from the fact that multiplication by $Q$ is a one-to-one transformation of the data. This completes the proof of Lemma~\ref{lem:MMSE_aug_alt}. \subsection{Proof of Lemma~\ref{lem:MMSE_aug_bound}}\label{proof:lem:MMSE_aug_bound} This proof is broken into two steps. First, we show that the augmented MMSE satisfies the inequality, \begin{align} \left| \widetilde{M}_m - \ex{ \mathsf{mmse}_X\left( \frac{G_{m+1}}{ 1+ M_m} \right) } \right| & \le C_B \cdot \sqrt{\Delta_m}. \label{eq:M_aug_bound_2} \end{align} Then, we use the smoothness of the single-letter MMSE function $\mathsf{mmse}_X(s)$ to show that \begin{align} \MoveEqLeft \left| \ex{ \mathsf{mmse}_X\left( \frac{G_{m+1}}{ 1+ M_m} \right) } -\mathsf{mmse}_X\left( \frac{m/n}{ 1+ M_m} \right) \right| \notag\\ & \le C_B \frac{1 + \sqrt{m}}{n}. \label{eq:M_aug_bound_3} \end{align} The proofs of Inequalities \eqref{eq:M_aug_bound_2} and \eqref{eq:M_aug_bound_3} are given in the following subsections. \subsubsection{Proof of Inequality~\eqref{eq:M_aug_bound_2}} The centered augmented measurement $\bar{Z}_{m+1}$ is defined by \begin{align} \bar{Z}_{m+1} = Z_{m+1} - \ex{ Z_{m+1} \mid Y^m, A^{m+1} }.
\notag \end{align} Since $G_{m+1}$ and $X_{n+1}$ are independent of the first $m$ measurements, $\bar{Z}_{m+1}$ can also be expressed as \begin{align} \bar{Z}_{m+1} = \sqrt{G_{m+1}} X_{n+1} + \bar{Y}_{m+1}, \notag \end{align} where $\bar{Y}_{m+1} = Y_{m+1} - \ex{Y_{m+1} \mid Y^m, A^{m+1}}$ is the centered measurement introduced in Section~\ref{sec:Gaussiannness}. Starting with \eqref{eq:M_aug_alt}, we see that the augmented MMSE can be expressed as \begin{align} \widetilde{M}_m & = \mathsf{mmse}( X_{n+1} \! \mid \! Y^m\!, A^m\!, \bar{Z}_{m+1} , A_{m+1}, G_{m+1}), \label{eq:mmse_aug_b} \end{align} where we have used the fact that there is a one-to-one mapping between $\bar{Z}_{m+1}$ and $Z_{m+1}$. The next step of the proof is to address the extent to which the MMSE in \eqref{eq:mmse_aug_b} would differ if the `noise' term $\bar{Y}_{m+1}$ were replaced by an independent Gaussian random variable with the same mean and variance. To make this comparison precise, recall that $\ex{ \bar{Y}_{m+1}} = 0$ and $\var(\bar{Y}_{m+1}) = 1+ M_m$, and let $Z^*_{m+1}$ be defined by \begin{align} Z^*_{m+1} = \sqrt{G_{m+1}} X_{n+1} + Y_{m+1}^*, \notag \end{align} where $Y_{m+1}^* \sim \mathcal{N}(0,1+M_m)$ is independent of everything else. Note that the MMSE of $X_{n+1}$ with $\bar{Z}_{m+1}$ replaced by $Z^*_{m+1}$ can be characterized explicitly in terms of the single-letter MMSE function: \begin{align} \MoveEqLeft \mathsf{mmse}( X_{n+1} \mid Y^m, A^m, Z^*_{m+1} ,A_{m+1}, G_{m+1}) \notag\\ & \overset{(a)}{=} \mathsf{mmse}( X_{n+1} \mid Z^*_{m+1} , G_{m+1}) \notag\\ & = \ex{ \mathsf{mmse}_X\left( \frac{G_{m+1}}{ 1+ M_m} \right) } \label{eq:mmse_aug_c} \end{align} where (a) follows from the fact that $(Y^m, A^{m+1})$ is independent of $(X_{n+1}, Z^*_{m+1}, G_{m+1})$. The next step is to bound the difference between \eqref{eq:mmse_aug_b} and \eqref{eq:mmse_aug_c}.
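For intuition about \eqref{eq:mmse_aug_c}: if the prior on $X_{n+1}$ were standard Gaussian (an assumption made here only for illustration; the paper's $\mathsf{mmse}_X$ is defined for a general prior), then rescaling the noise variance $1+M_m$ to one gives the effective channel $Z = \sqrt{s}\,X + \mathcal{N}(0,1)$ with $s = G_{m+1}/(1+M_m)$, whose MMSE has the closed form $1/(1+s)$ by jointly Gaussian covariance algebra:

```python
# Single-letter MMSE for a standard Gaussian prior (illustrative assumption):
# X ~ N(0,1), Z = sqrt(s) X + N(0,1); the optimal estimator is linear and
# mmse(s) = Var(X) - Cov(X,Z)^2 / Var(Z) = 1/(1+s).
def gaussian_mmse(s):
    var_x = 1.0
    cov_xz = s ** 0.5 * var_x           # Cov(X, Z)
    var_z = s * var_x + 1.0             # Var(Z)
    return var_x - cov_xz ** 2 / var_z  # residual variance of the linear estimate

for s in [0.0, 0.5, 3.0]:
    assert abs(gaussian_mmse(s) - 1.0 / (1.0 + s)) < 1e-12
```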
To proceed, we introduce the notation \begin{align} \mathcal{F} & = (Y^m, A^m, \bar{Z}_{m+1} , A_{m+1}, G_{m+1} ) \notag\\ \mathcal{F}^* & = (Y^m, A^m, Z^*_{m+1} , A_{m+1}, G_{m+1}). \notag \end{align} Then, using Lemma~\ref{lem:mmse_diff} yields \begin{align} \MoveEqLeft \left| \mathsf{mmse}(X_{n+1} \mid \mathcal{F}) - \mathsf{mmse}(X_{n+1} \mid \mathcal{F}^*) \right| \notag \\ & \le 2^\frac{5}{2} \sqrt{ \ex{ X^4_{n+1}} D_\mathrm{KL} \left( P_{\mathcal{F} , X_{n+1}} \, \middle \|\, P_{\mathcal{F}^*, X_{n+1}} \right)}. \notag \end{align} By Assumption 2, the fourth moment of $X_{n+1}$ is upper bounded by $B$. The last step is to show that the Kullback--Leibler divergence is equal to the non-Gaussianness $\Delta_m$. To this end, observe that \begin{align} \MoveEqLeft D_\mathrm{KL} \left( P_{\mathcal{F} , X_{n+1}} \, \middle \|\, P_{\mathcal{F}^*, X_{n+1}} \right) \notag\\ & \overset{(a)}{=} \ensuremath{\mathbb{E}}_{X_{n+1}} \left[ D_\mathrm{KL} \left( P_{\mathcal{F} \mid X_{n+1}} \, \middle \|\, P_{\mathcal{F}^*\mid X_{n+1}} \right) \right] \notag\\ & \overset{(b)}{=} D_\mathrm{KL} \left( P_{\bar{Y}_{m+1} , Y^m, A^{m+1} } \, \middle \|\, P_{ Y^*_{m+1}, Y^m, A^{m+1}} \right) \notag\\ & \overset{(c)}{=} \ensuremath{\mathbb{E}}_{Y^m, A^{m+1}} \left[ D_\mathrm{KL} \left( P_{\bar{Y}_{m+1} \mid Y^m, A^{m+1} } \, \middle \|\, P_{ Y^*_{m+1}} \right) \right] \notag\\ & = \Delta_{m} , \notag \end{align} where (a) follows from the chain rule for Kullback--Leibler divergence, (b) follows from the fact that both $\bar{Y}_{m+1}$ and $Y^*_{m+1}$ are independent of $(X_{n+1}, G_{m+1})$, and (c) follows from the chain rule for Kullback--Leibler divergence and the fact that $Y^*_{m+1}$ is independent of $(Y^m, A^{m+1})$. This completes the proof of Inequality~\eqref{eq:M_aug_bound_2}.
\subsubsection{Proof of Inequality~\eqref{eq:M_aug_bound_3}} Observe that \begin{align} \MoveEqLeft \left| \ex{ \mathsf{mmse}_X\left( \frac{G_{m+1}}{ 1+ M_m} \right) } - \mathsf{mmse}_X\left( \frac{ m/n }{ 1+ M_m} \right) \right| \notag\\ &\overset{(a)}{\le} \ex{ \left| \mathsf{mmse}_X\left( \frac{G_{m+1}}{ 1+ M_m} \right) - \mathsf{mmse}_X\left( \frac{ m/n }{ 1+ M_m} \right) \right| } \notag\\ & \overset{(b)}{\le} 4 B \ex{ \left |G_{m+1} - \frac{m}{n} \right |} \notag \\ & \overset{(c)}{\le} 4 B \left( \ex{ \left |G_{m+1} - \ex{ G_{m+1}} \right|} + \frac{1}{n} \right) \notag\\ & \overset{(d)}{\le} 4 B \left( \sqrt{ \var(G_{m+1}) }+ \frac{1}{n} \right) \notag\\ & = 4 B \Big( \frac{ \sqrt{2(m+1)}}{n} + \frac{1}{n} \Big) , \notag \end{align} where (a) follows from Jensen's inequality, (b) follows from Lemma~\ref{lem:mmseX_bound} and Assumption 2, (c) follows from the triangle inequality and the fact that $\ex{G_{m+1}} = \frac{m+1}{n}$, and (d) follows from Jensen's inequality. \section*{Acknowledgment} G.\ Reeves is grateful to D. Donoho for many helpful conversations as well as the inspiration behind the sandwiching argument that led to the fixed-point relationships in Section~\ref{sec:MMSE_fixed_point}. \bibliographystyle{ieeetr}
\section{Introduction} The ternary uranium compound UPd$_2$Si$_2$ (the ThCr$_2$Si$_2$-type body centered tetragonal structure) displays a variety of antiferromagnetic (AF) modulations of uranium 5f moments in the ordered states \cite{rf:Palstra86,rf:Shemirani93,rf:Collins93,rf:Honma98,rf:Plackowski11,rf:Wermeille}. In UPd$_2$Si$_2$, an incommensurate (IC) AF order composed of a longitudinal sine-wave with a propagation vector of $q_1= (0,0,0.73)$ develops below $T_{\rm Nh} = 132-138\ {\rm K}$, and it is replaced by a commensurate (C) AF order with $q_2 = (0,0,1)$ below $T_{\rm Nl} = 108\ {\rm K}$ via a first-order phase transition. Furthermore, applying a magnetic field along the tetragonal $c$ axis suppresses both the AF phases, and then generates another anti-phase AF structure with a modulation of $q_3 = (0,0,2/3)$ above 6 T. These features of the AF modulations are considered to be ascribed to a frustration of different inter-site interactions between the 5f moments with nearly localized characteristics \cite{rf:Honma98}. In fact, these features are suggested to be fairly well reproduced by a theoretical calculation based on the axial-next-nearest-neighbor Ising (ANNNI) model \cite{rf:Honma98}. In such a situation, it is expected that the competition among the magnetic interactions is sensitively affected by crystal strains that give rise to variations of the distances between uranium ions. To investigate the relationship between the AF modulations and the crystal strains, we have performed elastic neutron scattering experiments under uniaxial stress for UPd$_2$Si$_2$. \section{Experimental Details} A single crystal of UPd$_2$Si$_2$ was grown by a Czochralski pulling method using a tri-arc furnace, and a plate-shaped sample with a base of the (010) plane (dimensions: 13.1 mm$^2$ $\times$ 1 mm) was cut out from the ingot by means of spark erosion.
The sample was then mounted between pistons (Be-Cu alloy) in a constant-load uniaxial-stress apparatus \cite{rf:Kawarazaki02}, and cooled down to 1.5 K in a pumped $^4$He cryostat. The uniaxial stress was applied along the [010] direction (equivalent to the [100] direction in the tetragonal symmetry) up to 0.8 GPa. The elastic neutron scattering experiments were performed on the triple-axis spectrometer ISSP-GPTAS located at the JRR-3 reactor of JAEA, Tokai. We chose the neutron momentum of $k=2.67~$\AA$^{-1}$, and used a combination of 40'-40'-40'-80' collimators and two pyrolytic graphite filters. The scans were performed in the $(h0l)$ scattering plane. Applying $\sigma$ along the [010] direction up to 0.8 GPa is expected to enhance the $a$- and $c$-axes lattice parameters by a ratio of $\sim 10^{-4}$ \cite{rf:Yokoyama05}. In the present investigation, however, such variations cannot be detected due to the limited instrumental resolution ($\sim 10^{-2}-10^{-3}$). \begin{figure}[tbp] \begin{center} \includegraphics[keepaspectratio,width=0.75\textwidth]{nakadafig1.eps} \end{center} \vspace{-10pt} \caption{ Uniaxial-stress variations of the AF Bragg-peak profiles obtained from the $(1,0,\zeta)$ line scans (a) around $\zeta=0.264$ at 115 K, and (b) around $\zeta=0$ at 5 K for UPd$_2$Si$_2$. The volume-averaged AF moments for the C-AF and IC-AF orders are shown in (c). } \end{figure} \section{Results and Discussion} Figure 1(a) and (b) show $\sigma$ variations of IC- and C-AF Bragg-peak profiles for the momentum transfer $Q=(1,0,\sim 0.264) \ (\equiv Q_1)$ at 115 K and $Q=(1,0,0) \ (\equiv Q_2)$ at 5 K, respectively. At $\sigma=0$, clear Bragg peaks originating from the IC-AF order with the propagation vector $q_1=(0,0,0.736(2))$ are observed at 115 K. As temperature is lowered, the IC-AF Bragg peaks disappear and then the C-AF Bragg peaks develop.
By applying $\sigma$, it is found that both the intensity and the IC component $\delta$ of the peak position $(1,0,\delta)$ for the IC-AF Bragg peak at $Q_1$ are significantly reduced, although the C-AF Bragg peaks at $Q_2$ are roughly insensitive to $\sigma$. We also observed that the widths of the IC-AF Bragg peak for $\sigma=0.8\ {\rm GPa}$ are somewhat larger than the instrumental resolution estimated from the widths of nuclear Bragg peaks, while those for $\sigma \le 0.5\ {\rm GPa}$ are resolution-limited. This broadening occurs over the entire temperature range in which the IC-AF phase appears, and is nearly independent of temperature. In addition, such a broadening occurs only in the IC-AF Bragg peaks whose positions markedly vary upon applying $\sigma$, and is not seen in the other nuclear and C-AF Bragg reflections presently measured. We thus consider that the broadening may be caused by a distribution of the IC wave vectors in the sample generated by inhomogeneity of $\sigma$ in the high-$\sigma$ region (roughly $\pm 0.2\ {\rm GPa}$ at $0.8\ {\rm GPa}$), not by short-ranged metastable AF clusters presumably formed on the verge of the first-order phase boundary at $T_{\rm Nl}$. \begin{figure}[tbp] \begin{center} \includegraphics[keepaspectratio,width=0.75\textwidth]{nakadafig2.eps} \end{center} \vspace{-10pt} \caption{ Temperature variations of the AF Bragg-peak intensities for the (a) IC- and (b) C-AF orders, and (d) the $c$ axis component of the ordering vector in the IC-AF phase of UPd$_2$Si$_2$. In (c), the AF transition temperatures $T_{\rm Nh}$ and $T_{\rm Nl}$ under $\sigma$ for IC- and C-AF orders are plotted. } \end{figure} In Fig.\ 1(c), $\sigma$ variations of the volume-averaged AF moments $\mu_{\rm 0}$ for the IC-AF order (115 K) and the C-AF order (5 K) are plotted. The $\mu_{\rm 0}$ values are estimated from the Bragg-peak intensities at $Q_1$ and $Q_2$ normalized by the intensities of the nuclear (101) Bragg reflections as a reference.
The $|Q|$ dependence of the magnetic scattering amplitude is assumed to be proportional to the magnetic form factor of the U$^{4+}$ ion \cite{rf:Freeman76}. The $\mu_0$ values for the C-AF order under $\sigma$ are unchanged from $2.2(1)\ \mu_{\rm B}/{\rm U}$ at ambient pressure, but those for the IC-AF order are linearly reduced from $2.0(1)\ \mu_{\rm B}/{\rm U}$ ($\sigma=0$) to $1.5(2)\ \mu_{\rm B}/{\rm U}$ ($0.8\ {\rm GPa}$). Displayed in Fig.\ 2(a) and (b) are the IC- and C-AF Bragg-peak intensities $I_1$ and $I_2$ estimated at $Q_1$ and $Q_2$, respectively, plotted as a function of temperature. For $\sigma=0$, $I_1$ starts increasing at 132 K as temperature is decreased, and shows a tendency to saturate below $\sim 115\ {\rm K}$. The phase transition from IC- to C-AF orders is clearly observed at $\sim 109\ {\rm K}$ in both $I_1$ and $I_2$, where $I_1$ discontinuously drops to zero and $I_2$ increases sharply. Similar variations of $I_1$ and $I_2$ are also observed under $\sigma$. However, the discontinuous changes of $I_1$ and $I_2$ ascribed to the IC- to C-AF transition occur over a wider temperature range upon applying $\sigma$, probably due to the inhomogeneity of strain in the sample. We here define the transition temperature $T_{\rm Nh}$ from the onset of $I_1$, and $T_{\rm Nl}$ as the midpoint of the step-like development in $I_2$. Although $T_{\rm Nh}$ remains nearly unaffected by $\sigma$, $T_{\rm Nl}$ linearly increases with increasing $\sigma$ at a rate of $\partial T_{\rm Nl}/\partial \sigma =4.3\ {\rm K/GPa}$ [Fig.\ 2(c)]. Simple extrapolations of the $\sigma$-variations of $T_{\rm Nh}$ and $T_{\rm Nl}$ indicate that they meet at $5.4\ {\rm GPa}$. In Fig.\ 2(d), we show the temperature dependence of the $c$-axis component $q_z(T)$ in $q_1$ obtained from the Bragg reflections at $Q_1$.
The overall features of $q_z(T)$ do not change upon the application of $\sigma$: $q_z$ linearly increases with decreasing temperature, and shows a maximum at $T^*$ ($\sim 110\ {\rm K}$ at $\sigma=0$), followed by a reduction with further decreasing temperature. However, both $T^*$ and the magnitude of $q_z$ above $T^*$ are found to be slightly enhanced by the compression. At $\sigma=0.8\ {\rm GPa}$, $q_z$ reaches 0.747(3) at 115 K, and the determination of $T^*$ becomes difficult because the cusp in the $q_z(T)$ curve is obscured. We have found that applying $\sigma$ along the [010] direction stabilizes the C-AF phase through tuning the competition between the C- and IC-AF orders. This tendency is clearly indicated by the increases of $T_{\rm Nl}$ and $q_z$ under $\sigma$. Interestingly, the presently obtained $\sigma-T$ phase diagram closely resembles the pressure $p$ versus temperature phase diagram proposed by the resistivity measurements \cite{rf:Quirion98,rf:Hidaka11}. In general, it is expected that compressions using $p$ and $\sigma\,||\,[010]$ yield different types of strain. In particular, the $c$-axis lattice parameter should be reduced by applying $p$, while it may be slightly enhanced by $\sigma$. This suggests that a strain other than the variation of the $c$-axis lattice parameter, such as the $a$-axis lattice parameter or the tetragonal $c/a$ ratio, plays a crucial role in the frustration among the different inter-site interactions of uranium 5f moments. We also wish to stress that $\sigma$ along the [010] direction induces the tetragonal symmetry-breaking strain of the $x^2-y^2$ type. The relationship between the symmetry-breaking strains and the AF orders is not clear at the present stage. We thus need to investigate the effects of $\sigma$ along the other directions on the AF orders.
On the other hand, both the $\mu_0$ at 5 K and $T_{\rm Nh}$ values are not influenced by $\sigma$, suggesting that the characteristics of the uranium 5f electrons, such as the valence and the AF condensation energy, are basically unchanged under $\sigma$. \section{Summary} Our elastic neutron scattering experiments under $\sigma\ (||\, [010])$ for UPd$_2$Si$_2$ revealed that the compression with $\sigma$ favors the evolution of the C-AF order rather than the IC-AF order. The transition temperature $T_{\rm Nl}$ from the IC- to C-AF states and the $c$-axis IC modulation $q_z$ of the IC-AF order increase upon applying $\sigma$, while the staggered moment of the C-AF order and the IC-AF transition temperature $T_{\rm Nh}$ are insensitive to $\sigma$. We suggest from the resemblance between the $\sigma-T$ and $p-T$ phase diagrams that there are couplings between the AF orders and crystal strains commonly induced by the different types of compression, and that they govern the frustration of the AF states. To clarify the details of such couplings, we plan to perform neutron scattering experiments by applying $\sigma$ along the other directions. \section*{Acknowledgment} We thank T. Asami, R. Sugiura and Y. Kawamura for technical support on the neutron scattering measurements. This work was partly supported by a Grant-in-Aid for Scientific Research on Innovative Areas ``Heavy Electrons'' (No.23102703) from the Ministry of Education, Culture, Sports, Science and Technology of Japan.
\section{Introduction} The spin 1/2 XXZ spin chain \begin{equation} H={1\over 2}\Sigma_{j=0}^{N-1} (\sigma_j^x\sigma_{j+1}^x+ \sigma_j^y\sigma_{j+1}^y+\Delta \sigma_j^z\sigma_{j+1}^z) \label{ham} \end{equation} was first exactly studied by Bethe \cite{bet} in 1931 for $\Delta=\pm 1,$ extensively investigated for $\Delta=0$ in 1960 by Lieb, Schultz and Mattis \cite{lsm} and studied for general values of $\Delta$ in 1966 by Yang and Yang \cite{yya}-\cite{yyc}. Since these initial studies the thermodynamics have been extensively investigated \cite{ts} and by now are fairly well understood. The spin-spin correlations, however, are much more difficult to study and even the simplest of the equal time correlations \begin{equation} S^z(n;T,\Delta)={\rm Tr}\sigma_0^z \sigma_n^z e^{-H/kT}/{\rm Tr}e^{-H/kT} \label{sz} \end{equation} is only partially understood after decades of research \cite{lsm,nm}-\cite{kss}. In this note we extend these investigations of $S^z(n;T,\Delta)$ for $T>0$ by means of an exact computer diagonalization of chains of $N=16$ and $18$ spins for $-1\leq \Delta \leq 0.$ Our results are presented below in tables 1-7 and Figs. 1 and 2. In the remainder of this note we discuss the significance and the interpretation of these results. 
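As a concrete illustration of the method (a minimal sketch only, for an $N=8$ chain rather than the $N=16$ and $18$ chains used for the results below), the Hamiltonian (\ref{ham}) with periodic boundary conditions can be diagonalized in full and the thermal average (\ref{sz}) evaluated from the spectrum, in units with $k_B=1$:

```python
import numpy as np

# Pauli matrices
SX = np.array([[0.0, 1.0], [1.0, 0.0]])
SY = np.array([[0.0, -1.0j], [1.0j, 0.0]])
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])
ID = np.eye(2)

def site_op(op, j, n_sites):
    """Embed the single-site operator `op` at site j of an n_sites-spin chain."""
    out = np.array([[1.0]])
    for k in range(n_sites):
        out = np.kron(out, op if k == j else ID)
    return out

def xxz_hamiltonian(n_sites, delta):
    """The XXZ Hamiltonian with periodic boundary conditions."""
    dim = 2 ** n_sites
    H = np.zeros((dim, dim), dtype=complex)
    for j in range(n_sites):
        k = (j + 1) % n_sites
        H += 0.5 * (site_op(SX, j, n_sites) @ site_op(SX, k, n_sites)
                    + site_op(SY, j, n_sites) @ site_op(SY, k, n_sites)
                    + delta * site_op(SZ, j, n_sites) @ site_op(SZ, k, n_sites))
    return H

def sz_correlation(n, T, delta, n_sites=8):
    """Thermal average <sigma^z_0 sigma^z_n> at temperature T (k_B = 1)."""
    energies, states = np.linalg.eigh(xxz_hamiltonian(n_sites, delta))
    weights = np.exp(-(energies - energies.min()) / T)   # shifted for stability
    corr = site_op(SZ, 0, n_sites) @ site_op(SZ, n, n_sites)
    diag = np.real(np.einsum('ij,jk,ki->i', states.conj().T, corr, states))
    return float(np.sum(weights * diag) / np.sum(weights))
```

Even at this small size the qualitative features discussed below are visible: for $\Delta<0$ the nearest-neighbor correlation is negative at low temperature but positive at high temperature.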
\section{Results and Discussion} The correlation $S^z(n;T,0)$ for the case $\Delta=0$ was exactly computed long ago \cite{lsm} to be \begin{equation} S^z(n;T,0)=\cases{-[{1\over \pi}\int_0^{\pi}d\phi \sin(n\phi)\tanh({1\over kT}\sin\phi)]^2&if $n$ is odd\cr \delta_{n,0}&if $n$ is even.} \label{zerocor} \end{equation} This correlation is manifestly never positive for $n\neq 0.$ When $T=0$ it simplifies to \begin{equation} S^z(n;0,0)=\cases{-4\pi^{-2}n^{-2}& if $n$ is odd\cr \delta_{n,0}&if $n$ is even.} \end{equation} In the scaling limit where \begin{equation} T\rightarrow 0,~~n\rightarrow \infty,~~{\rm with}~~ Tn=r~~{\rm fixed} \label{scaling} \end{equation} we have \cite{bm} \begin{equation} {\rm lim} T^{-2}S^z(n;T,0)=\cases{-\sinh^{-2}(\pi r/2)&if $n$ is odd\cr 0&if $n$ is even.} \label{corr} \end{equation} In the general case $\Delta\neq 0$ the nearest neighbor correlation at $T=0,$ $S^z(1;0,\Delta),$ is obtained from the derivative of the ground state energy \cite{yyb} with respect to $\Delta.$ This correlation is negative for $-1<\Delta$ and is plotted in Fig. 3 of ref.~\cite{jkm}. For large $n$ the behavior of $S^z(n;0,\Delta)$ at $T=0$ has been extensively investigated and for $|\Delta|<1$ we have \cite{lp,fog} for $n\rightarrow \infty$ \begin{equation} S^z(n;0,\Delta)\sim -{1\over \pi^2 \theta n^2}+(-1)^n{C(\Delta)\over n^{1\over \theta}} \label{corra} \end{equation} where from ref.\cite{jkm} \begin{equation} \theta={1\over 2}+{1\over \pi}\arcsin \Delta. \end{equation} We note that $0\leq\theta\leq 1$ and that $\theta$ vanishes at the ferromagnetic point $\Delta=-1.$ At $\Delta=0$ we have $\theta=1/2,~C(0)=2\pi^{-2}$ and (\ref{corra}) reduces to the exact result (\ref{corr}). For other values of $\Delta$ only the limiting value as $\Delta\rightarrow 1$ is known \cite{aff}.
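The closed form (\ref{zerocor}) can be evaluated directly by numerical quadrature. The sketch below (in units with $k_B=1$) checks that the correlation vanishes for even $n\neq 0$ and approaches $-4\pi^{-2}n^{-2}$ for odd $n$ as $T\to 0$:

```python
import numpy as np

def sz_delta0(n, T, n_phi=40_001):
    """Numerically evaluate S^z(n; T, 0) from the free-fermion integral (k_B = 1)."""
    phi = np.linspace(0.0, np.pi, n_phi)
    f = np.sin(n * phi) * np.tanh(np.sin(phi) / T)
    # Trapezoid rule; the integrand vanishes at both endpoints.
    integral = np.sum(f) * (phi[1] - phi[0])
    return -(integral / np.pi) ** 2
```

For even $n$ the integrand is antisymmetric about $\phi=\pi/2$, so the integral vanishes, consistent with the $\delta_{n,0}$ branch of (\ref{zerocor}).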
When $T>0$ the correlations decay exponentially for large $n$ instead of the algebraic decay (\ref{corra}) of $T=0.$ For $0<\Delta<1$ it is known \cite{klu,kbi} that for small fixed positive $T$ the large $n$ behavior of $S^z(n;T,\Delta)$ is \begin{equation} S^z(n;T,\Delta)\sim A_z(\Delta,T)(-1)^ne^{-nkT\pi(1-\theta)\theta^{-1}(1-\Delta^2)^{-1/2}}. \label{corrb} \end{equation} In order to smoothly connect to the $T=0$ result (\ref{corra}) we need $A_z(\Delta,T)=A(\Delta)T^{1/\theta}$ but this has not yet been demonstrated. We note that for positive values of $\Delta$ the exact nearest neighbor correlation at $T=0$ is negative and the leading term in the asymptotic behaviors (\ref{corra}) and (\ref{corrb}) oscillates as $(-1)^n.$ Both of these facts are consistent with antiferromagnetism. For negative values of $\Delta,$ however, the situation is somewhat different. The nearest neighbor correlation at $T=0$ is negative and, indeed, since $\theta<1/2,$ we see from (\ref{corra}) that the asymptotic values of $S^z(n;0,\Delta)$ are also negative and there are no oscillations. This behavior cannot be called antiferromagnetic because there are no oscillations but neither can it be called ferromagnetic because the correlations are negative instead of positive. In order to further investigate the regime $-1<\Delta<0$ we have computed the correlation function $S^z(n;T,\Delta)$ by means of exact diagonalization for systems of $N=16$ and $N=18$ spins. Our results for $N=18$ with $\Delta=-.1,-.3,-.9$ and $-1.0$ are given in tables 1-4 where we give $S^z(n;T,\Delta)$ for $1\leq n \leq 8$ and ${1 \over 2}S^z(n;T,\Delta)$ for $n=9.$ The factor of $1/2$ for $n=9$ is used because for $n=N/2$ there are two paths of equal length joining $0$ and $n$ in the finite system whereas for the same $n$ in the infinite size system there will be only one path of finite length. 
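For orientation, the exponent $\theta$ and the decay rate appearing in the exponent of (\ref{corrb}) are elementary functions of $\Delta$; the small helper below makes the arithmetic explicit (units with $k_B=1$; the rate is quoted in the text only for $0<\Delta<1$):

```python
import numpy as np

def theta(delta):
    """theta = 1/2 + arcsin(Delta)/pi, the exponent entering the T = 0 asymptotics."""
    return 0.5 + np.arcsin(delta) / np.pi

def inverse_corr_length(delta, T):
    """Decay rate k T pi (1 - theta) / (theta sqrt(1 - Delta^2)) of the staggered term."""
    th = theta(delta)
    return np.pi * T * (1.0 - th) / (th * np.sqrt(1.0 - delta ** 2))
```

At $\Delta=0$ this gives $\theta=1/2$ and a decay rate $\pi T$, while $\theta\to 0$ as $\Delta\to -1$ and $\theta\to 1$ as $\Delta\to 1$.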
To estimate the precision with which the $N=18$ system gives the $N=\infty$ correlations we give in table 5 the correlation for $N=16$ and $\Delta=-0.9.$ We see here that for $T\geq .5$ the $N=18$ correlations are virtually identical with the $N=16$ correlations. Even for $T=.1$ and $T=.2$ the $N=18$ data should be qualitatively close to the $N =\infty$ values. The tables 1-5 reveal for $-1\leq \Delta \leq 0$ the striking property that $S^z(n;T,\Delta),$ which is always negative at $T=0,$ becomes positive for fixed $n$ at sufficiently large $T.$ We study this further in table 6 where we list the values $T_0(n;\Delta)$ where $S^z(n;T_0(n;\Delta),\Delta)=0.$ This table indicates that \begin{equation}{\rm lim}_{n\rightarrow \infty}T_0(n;\Delta)>0. \end{equation} We denote this limiting temperature by $T_0(\Delta)$ and note that this implies that in the expansion of $S^z(n;T,\Delta)$ obtained from the quantum transfer matrix formalism \cite{klu} \begin{equation} S^z(n;T,\Delta)=\sum_{j=1}C_j(T;\Delta)e^{-n\gamma_j(T)}~~ {\rm with}~\gamma_j<\gamma_{j+1} \end{equation} we have $C_1(T_0(\Delta);\Delta)=0.$ If, for large $n,$ we retain only the first two terms in the expansion, ignore the $T$ dependence of $\gamma_j(T)$ and $C_2(T;\Delta)$ and write $C_1(T;\Delta)=(T-T_0(\Delta))C_1(\Delta)$ we see that the large $n$ behavior of $T_0(n;\Delta)$ may be estimated as \begin{equation} T_0(n;\Delta)=T_0(\Delta)+Ae^{-n\gamma}, \label{fit} \end{equation} where $\gamma=\gamma_2-\gamma_1$ and $A=-C_2/C_1.$ In Fig.\ 1 we plot the data of table 6 versus a least squares fit using (\ref{fit}) and find that the fit is exceedingly good even for small $n.$ The values of the fitting parameters are given in table 7 for $-.9\leq \Delta \leq -.1$ and $T_0(\Delta)$ is plotted in Fig. 2.
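The fit (\ref{fit}) can be reproduced directly from the $\Delta=-0.9$ row of table 6. The least-squares sketch below (using scipy; an illustration, since the exact fitting procedure is not specified in the text) recovers parameters close to the $T_0=0.031$, $\gamma=0.180$, $A=0.125$ quoted in table 7:

```python
import numpy as np
from scipy.optimize import curve_fit

# T_0(n; Delta = -0.9) for n = 1..9, transcribed from table 6 (N = 18 data).
n = np.arange(1, 10)
t0 = np.array([0.137, 0.118, 0.104, 0.092, 0.082, 0.073, 0.065, 0.059, 0.057])

def fit_form(n, T0, A, gamma):
    """The fitting form T_0(n) = T_0 + A exp(-n * gamma)."""
    return T0 + A * np.exp(-gamma * n)

popt, _ = curve_fit(fit_form, n, t0, p0=(0.05, 0.15, 0.2))
T0_fit, A_fit, gamma_fit = popt
```

The same three-parameter fit applied column by column reproduces table 7.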
The existence of this $T_0(\Delta)>0$ for $-1<\Delta<0$ is quite different from the case $0<\Delta<1$ where for all temperatures the sign of $S^z(n;T,\Delta)$ is $(-1)^n.$ To interpret the property of changing sign we note that, when the Hamiltonian (\ref{ham}) is written in terms of the basis where $\sigma^z_j$ is diagonal $(\sigma_j^z=\pm 1),$ the term $\sigma^x_j\sigma^x_{j+1}+ \sigma^y_{j}\sigma^y_{j+1}$ is a kinetic energy term which translates a down spin one step whereas the term $\sigma^z_j\sigma^z_{j+1}$ is a potential energy term which is diagonal in the basis of eigenstates of $\sigma^z_j.$ In classical statistical mechanics the static expectation values of position dependent operators are independent of the kinetic energy and depend only on the potential energy. If we further expect that at high temperatures the system should behave in a classical fashion we infer that at high temperatures for $\Delta<0$ the correlation $S^z(n;T,\Delta)$ should be ferromagnetically aligned, i.e.\ $S^z(n;T,\Delta)>0.$ This is indeed what is seen in tables 1-5. However, at low temperatures the quantum effects of the kinetic term cannot be ignored. When $\Delta=0$ there is no potential energy so all the behavior in $S^z(n;T,0)$ can only come from the kinetic terms and hence the behavior given by (\ref{zerocor}) in which $S^z(n;T,0)$ is never positive for $n\neq 0$ must be purely quantum mechanical. Consequently it seems appropriate to refer to the change of sign of the correlation $S^z(n;T,\Delta)$ as a quantum-to-classical crossover. The low temperature behavior of the correlation function is determined by conformal field theory. In particular we consider the scaling limit (\ref{scaling}) and define the scaling function \begin{equation} f(r,\Delta)=\lim T^{-2}S^z(n;T,\Delta).
\end{equation} The prescription of conformal field theory is that this scaling function is obtained from the large $n$ behavior of the $T=0$ correlation given by the first term of (\ref{corra}) by the replacement [page 513 of ref.\cite{kbi}] \begin{equation} n\rightarrow (\kappa T/2)^{-1}\sinh (\kappa r/2) \label{replace} \end{equation} where the decay constant $\kappa$ can be obtained by use of the methods of ref. \cite{klu}. This replacement is obtained by combining the conformal field theory results on finite size corrections \cite{car} with the field theory relation of finite strip size to nonzero temperature \cite{affb}. This prescription clearly leads to a correlation which is always negative and does not show the sign-changing phenomenon seen in tables 1-5. However, this result is only a limiting result as $T\rightarrow 0.$ The results of this paper indicate that there is further physics in the high temperature behavior of the XXZ chain where $T>T_0(\Delta)$ which is not contained in this conformal field theory result. \acknowledgments We are pleased to acknowledge useful discussions with A. Kl{\"u}mper, V. Korepin, S. Sachdev and J. Suzuki. This work is supported in part by the National Science Foundation under grant DMR 97-03543.
\begin{table}[b] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} $ T $ &$n=1$ & $n=2$ & $n=3$ & $n=4$ & $n=5$& $n=6$ & $n=7$& $n=8 $ & $n=9$ \\ \hline 0.1 & $-$3.81e-01 & $-$1.87e-02 & $-$3.74e-02 & $-$4.51e-03 & $-$1.19e-02 & $-$2.01e-03 & $-$5.81e-03 & $-$1.28e-03 & $-$2.21e-03 \\ 0.2 & $-$3.69e-01 & $-$1.75e-02 & $-$2.82e-02 & $-$3.13e-03 & $-$5.72e-03 & $-$8.43e-04 & $-$1.47e-03 & $-$2.99e-04 & $-$3.51e-04 \\ 0.5 & $-$2.78e-01 & $-$1.10e-02 & $-$5.88e-03 & $-$4.65e-04 & $-$2.27e-04 & $-$2.24e-05 & $-$9.31e-06 & $-$1.11e-06 & $-$3.86e-07 \\ 1.0 & $-$1.29e-01 & $-$3.25e-03 & $-$3.25e-04 & $-$1.28e-05 & $-$1.18e-06 & $-$5.28e-08 & $-$4.43e-09 & $-$2.15e-10 & $-$1.67e-11 \\ 2.0 & $-$3.25e-02 & $-$3.49e-04 & $-$3.92e-06 & $-$9.71e-09 & 3.34e-10 & 1.01e-11 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 \\ 3.0 & $-$1.02e-02 & $-$2.76e-05 & 4.84e-07 & 9.49e-09 & 8.85e-11 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 \\ 4.0 & $-$2.91e-03 & 2.53e-05 & 4.85e-07 & 4.45e-09 & 3.05e-11 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 \\ 5.0 & 6.54e-05 & 3.26e-05 & 3.44e-07 & 2.44e-09 & 1.54e-11 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 \\ 10.0 & 2.50e-03 & 1.66e-05 & 7.47e-08 & 3.09e-10 & 1.28e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 \\ 20.0 & 1.87e-03 & 5.20e-06 & 1.21e-08 & 2.77e-11 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 \\ \hline \end{tabular} \caption{The correlation $(1-{1\over 2}\delta_{n,N/2})S^z(n;T,\Delta)$ for $\Delta=-.1$ for the XXZ spin chain with $N=18$ sites.} \label{one} \end{table} \begin{table}[b] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} $ T $ &$n=1$ & $n=2$ & $n=3$ & $n=4$ & $n=5$& $n=6$ & $n=7$& $n=8 $ & $n=9$ \\ \hline 0.1 & $-$3.35e-01 & $-$5.19e-02 & $-$3.23e-02 & $-$1.08e-02 & $-$9.19e-03 & $-$4.14e-03 & $-$3.93e-03 & $-$2.32e-03 & $-$1.39e-03 \\ 0.2 & $-$3.17e-01 & $-$4.71e-02 & $-$2.25e-02 & $-$6.41e-03 & $-$3.64e-03 & $-$1.28e-03 & $-$7.22e-04 & $-$3.19e-04 & $-$1.38e-04 \\ 0.5 & $-$1.97e-01 & $-$2.13e-02 & 
$-$3.09e-03 & $-$3.32e-04 & $-$3.77e-05 & $-$2.78e-06 & $-$1.70e-07 & 1.06e-08 & 3.20e-09 \\ 1.0 & $-$5.27e-02 & $-$8.92e-04 & 2.04e-04 & 3.00e-05 & 2.38e-06 & 1.33e-07 & 6.04e-09 & 3.33e-10 & 2.95e-11 \\ 2.0 & 1.40e-02 & 2.18e-03 & 1.49e-04 & 7.98e-06 & 4.09e-07 & 2.16e-08 & 1.16e-09 & 6.30e-11 & 3.37e-12 \\ 3.0 & 2.20e-02 & 1.47e-03 & 6.52e-05 & 2.60e-06 & 1.04e-07 & 4.19e-09 & 1.70e-10 & 6.87e-12 & $<$1.0e-12 \\ 4.0 & 2.16e-02 & 9.76e-04 & 3.29e-05 & 1.05e-06 & 3.34e-08 & 1.07e-09 & 3.45e-11 & 1.11e-12 & $<$1.0e-12 \\ 5.0 & 1.98e-02 & 6.82e-04 & 1.87e-05 & 4.92e-07 & 1.30e-08 & 3.47e-10 & 9.22e-12 & $<$1.0e-12 & $<$1.0e-12 \\ 10.0 & 1.25e-02 & 1.99e-04 & 2.83e-06 & 4.00e-08 & 5.65e-10 & 8.00e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 \\ 20.0 & 6.87e-03 & 5.30e-05 & 3.87e-07 & 2.82e-09 & 2.06e-11 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 & $<$1.0e-12 \\ \hline \end{tabular} \caption{The correlation $(1-{1\over 2}\delta_{n,N/2})S^z(n;T,\Delta)$ for $\Delta=-.3$ for the XXZ spin chain with $N=18$ sites.} \label{two} \end{table} \begin{table}[b] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} $ T $ &$n=1$ & $n=2$ & $n=3$ & $n=4$ & $n=5$& $n=6$ & $n=7$& $n=8 $ & $n=9$ \\ \hline 0.1 & $-$5.46e-02 & $-$2.14e-02 & $-$3.41e-03 & 5.10e-03 & 8.06e-03 & 8.19e-03 & 7.27e-03 & 6.35e-03 & 2.99e-03 \\ 0.2 & 8.19e-02 & 8.50e-02 & 7.12e-02 & 5.22e-02 & 3.51e-02 & 2.26e-02 & 1.47e-02 & 1.05e-02 & 4.61e-03 \\ 0.5 & 2.02e-01 & 1.36e-01 & 7.38e-02 & 3.62e-02 & 1.72e-02 & 8.18e-03 & 4.03e-03 & 2.23e-03 & 8.66e-04 \\ 1.0 & 2.05e-01 & 8.74e-02 & 3.03e-02 & 9.96e-03 & 3.26e-03 & 1.07e-03 & 3.56e-04 & 1.28e-04 & 3.81e-05 \\ 2.0 & 1.55e-01 & 3.57e-02 & 7.11e-03 & 1.39e-03 & 2.72e-04 & 5.33e-05 & 1.05e-05 & 2.13e-06 & 4.01e-07 \\ 3.0 & 1.19e-01 & 1.83e-02 & 2.55e-03 & 3.51e-04 & 4.83e-05 & 6.66e-06 & 9.19e-07 & 1.29e-07 & 1.75e-08 \\ 4.0 & 9.51e-02 & 1.10e-02 & 1.17e-03 & 1.24e-04 & 1.31e-05 & 1.39e-06 & 1.48e-07 & 1.58e-08 & 1.66e-09 \\ 5.0 & 7.90e-02 & 7.29e-03 & 6.28e-04 & 5.40e-05 & 4.64e-06 & 
3.99e-07 & 3.43e-08 & 2.97e-09 & 2.53e-10 \\ 10.0 & 4.23e-02 & 1.94e-03 & 8.53e-05 & 3.76e-06 & 1.66e-07 & 7.30e-09 & 3.22e-10 & 1.42e-11 & $<$1.0e-12 \\ 20.0 & 2.19e-02 & 4.96e-04 & 1.11e-05 & 2.46e-07 & 5.48e-09 & 1.22e-10 & 2.72e-12 & $<$1.0e-12 & $<$1.0e-12 \\ \hline \end{tabular} \caption{The correlation $(1-{1\over 2}\delta_{n,N/2})S^z(n;T,\Delta)$ for $\Delta=-.9$ for the XXZ spin chain with $N=18$ sites.} \label{three} \end{table} \begin{table}[b] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} $ T $ &$n=1$ & $n=2$ & $n=3$ & $n=4$ & $n=5$& $n=6$ & $n=7$& $n=8 $ & $n=9$ \\ \hline 0.1 & 3.30e-01 & 3.19e-01 & 3.02e-01 & 2.83e-01 & 2.64e-01 & 2.47e-01 & 2.33e-01 & 2.25e-01 & 1.11e-01 \\ 0.2 & 3.21e-01 & 2.87e-01 & 2.41e-01 & 1.95e-01 & 1.55e-01 & 1.25e-01 & 1.05e-01 & 9.28e-02 & 4.45e-02 \\ 0.5 & 2.95e-01 & 2.04e-01 & 1.21e-01 & 6.74e-02 & 3.70e-02 & 2.06e-02 & 1.19e-02 & 7.78e-03 & 3.28e-03 \\ 1.0 & 2.50e-01 & 1.14e-01 & 4.41e-02 & 1.63e-02 & 6.04e-03 & 2.24e-03 & 8.46e-04 & 3.50e-04 & 1.14e-04 \\ 2.0 & 1.79e-01 & 4.50e-02 & 9.99e-03 & 2.19e-03 & 4.79e-04 & 1.05e-04 & 2.31e-05 & 5.30e-06 & 1.11e-06 \\ 3.0 & 1.35e-01 & 2.29e-02 & 3.55e-03 & 5.46e-04 & 8.40e-05 & 1.29e-05 & 1.99e-06 & 3.14e-07 & 4.72e-08 \\ 4.0 & 1.07e-01 & 1.37e-02 & 1.62e-03 & 1.92e-04 & 2.27e-05 & 2.68e-06 & 3.17e-07 & 3.80e-08 & 4.43e-09 \\ 5.0 & 8.88e-02 & 9.06e-03 & 8.70e-04 & 8.33e-05 & 7.99e-06 & 7.65e-07 & 7.33e-08 & 7.09e-09 & 6.73e-10 \\ 10.0 & 4.73e-02 & 2.40e-03 & 1.18e-04 & 5.77e-06 & 2.83e-07 & 1.39e-08 & 6.81e-10 & 3.35e-11 & 1.64e-12 \\ 20.0 & 2.44e-02 & 6.13e-04 & 1.52e-05 & 3.77e-07 & 9.33e-09 & 2.31e-10 & 5.73e-12 & $<$1.0e-12 & $<$1.0e-12 \\ \hline \end{tabular} \caption{The correlation $(1-{1\over 2}\delta_{n,N/2})S^z(n;T,\Delta)$ for $\Delta=-1.0$ for the XXZ spin chain with $N=18$ sites.} \label{four} \end{table} \begin{table}[b] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} $ T $ &$n=1$ & $n=2$ & $n=3$ & $n=4$ & $n=5$& $n=6$ & $n=7$& $n=8$ \\ \hline 0.1 & $-$5.62e-02 & $-$2.25e-02 & 
$-$4.06e-03 & 4.90e-03 & 8.37e-03 & 9.12e-03 & 8.91e-03 & 4.37e-03 \\ 0.2 & 7.92e-02 & 8.32e-02 & 7.02e-02 & 5.20e-02 & 3.56e-02 & 2.40e-02 & 1.74e-02 & 7.65e-03 \\ 0.5 & 2.02e-01 & 1.36e-01 & 7.38e-02 & 3.63e-02 & 1.73e-02 & 8.49e-03 & 4.70e-03 & 1.82e-03 \\ 1.0 & 2.05e-01 & 8.74e-02 & 3.03e-02 & 9.96e-03 & 3.26e-03 & 1.08e-03 & 3.90e-04 & 1.16e-04 \\ 2.0 & 1.55e-01 & 3.57e-02 & 7.11e-03 & 1.39e-03 & 2.72e-04 & 5.34e-05 & 1.08e-05 & 2.05e-06 \\ 3.0 & 1.19e-01 & 1.83e-02 & 2.55e-03 & 3.51e-04 & 4.83e-05 & 6.66e-06 & 9.36e-07 & 1.27e-07 \\ 4.0 & 9.51e-02 & 1.10e-02 & 1.17e-03 & 1.24e-04 & 1.31e-05 & 1.39e-06 & 1.49e-07 & 1.56e-08 \\ 5.0 & 7.90e-02 & 7.29e-03 & 6.28e-04 & 5.40e-05 & 4.64e-06 & 3.99e-07 & 3.45e-08 & 2.95e-09 \\ 10.0 & 4.23e-02 & 1.94e-03 & 8.53e-05 & 3.76e-06 & 1.66e-07 & 7.30e-09 & 3.23e-10 & 1.42e-11 \\ 20.0 & 2.19e-02 & 4.96e-04 & 1.11e-05 & 2.46e-07 & 5.48e-09 & 1.22e-10 & 2.72e-12 & $<$1.0e-12 \\ \hline \end{tabular} \caption{The correlation $(1-{1\over 2}\delta_{n,N/2})S^z(n;T,\Delta)$ for $\Delta=-.9$ for the XXZ spin chain with $N=16$ sites.} \label{five} \end{table} \begin{table}[b] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} $\Delta$ &$n=1$ & $n=2$ & $n=3$ & $n=4$ & $n=5$& $n=6$ & $n=7$& $n=8$ & $n=9$ \\ \hline $-$0.1 & 4.966 & 3.323 & 2.561 & 2.073 & 1.870 & 1.706 & 1.669 & 1.592 & ~ \\ $-$0.2 & 2.432 & 1.643 & 1.275 & 1.037 & 0.923 & 0.840 & 0.811 & 0.774 & 0.767 \\ $-$0.3 & 1.561 & 1.071 & 0.839 & 0.687 & 0.602 & 0.545 & 0.517 & 0.493 & 0.483 \\ $-$0.4 & 1.103 & 0.771 & 0.612 & 0.505 & 0.437 & 0.392 & 0.365 & 0.346 & 0.335 \\ $-$0.5 & 0.807 & 0.578 & 0.464 & 0.388 & 0.334 & 0.297 & 0.272 & 0.253 & 0.243 \\ $-$0.6 & 0.589 & 0.434 & 0.355 & 0.300 & 0.259 & 0.229 & 0.206 & 0.189 & 0.180 \\ $-$0.7 & 0.413 & 0.318 & 0.264 & 0.227 & 0.198 & 0.175 & 0.156 & 0.140 & 0.132 \\ $-$0.8 & 0.265 & 0.215 & 0.184 & 0.161 & 0.142 & 0.126 & 0.112 & 0.099 & 0.094 \\ $-$0.9 & 0.137 & 0.118 & 0.104 & 0.092 & 0.082 & 0.073 & 0.065 & 0.059 & 0.057 \\ \hline 
\end{tabular} \caption{The values of $T_0(n;\Delta)$ at which the correlation function $S^z(n;T_0(n;\Delta),\Delta)$ vanishes for $N=18$} \label{six} \end{table} \begin{table}[b] \begin{tabular}{|c|c|c|c|} $\Delta$ & $T_{0}$ & $\gamma$ & $A$\\ \hline $-$0.1 & 1.550 & 0.585 & 5.734 \\ $-$0.2 & 0.745 & 0.547 & 2.690 \\ $-$0.3 & 0.462 & 0.491 & 1.630 \\ $-$0.4 & 0.312 & 0.433 & 1.093 \\ $-$0.5 & 0.216 & 0.374 & 0.764 \\ $-$0.6 & 0.148 & 0.317 & 0.539 \\ $-$0.7 & 0.095 & 0.259 & 0.372 \\ $-$0.8 & 0.054 & 0.205 & 0.241 \\ $-$0.9 & 0.031 & 0.180 & 0.125 \\ \hline \end{tabular} \caption{The fitting parameters $T_0,\gamma$ and $A$ of (2.10) for $\Delta=-.1,\cdots , -.9$} \label{seven} \end{table} {\bf Figure Captions} Figure 1. A plot of the exact zeroes $T_0(n;\Delta)$ of the $N=18$ system compared with the fitting form (2.9). The values $\Delta=-.1,\cdots ,-.9$ are given with $\Delta=-.1$ being the highest. Figure 2. The temperature $T_0(\Delta)$ plotted as a function of $\Delta.$ \newpage \centerline{\epsfxsize=6in\epsfbox{Fig1.ps}} \centerline{Fig. 1} \newpage \centerline{\epsfxsize=6in\epsfbox{T0.ps}} \centerline{Fig. 2}
\section{\Large Supplementary Material} \section{Relation between bare and coarse grained entropy} In the following, we derive the equality \begin{equation} \label{eq:entropyAverage} \langle e^{-\Delta s_m}\rangle = e^{-\Delta s_m^\textrm{cc}} \end{equation} for trajectories with a single tunneling event in a NIS single electron box. Let $n$ be the net number of electrons tunneled from $S$ to $N$. An electron may either tunnel from the S island to the N island, or tunnel from N-lead to the S-lead, adding or substracting one to the parameter $n$. Let the tunneling events of the former type be denoted with $'+'$, and of the latter with $'-'$. We consider a box at an electromagnetic environment at temperature $T_E$. Upon a tunneling event, the environment absorbs or emits a photon with energy $E_{E}$: the energy of the electron in the superconducting lead $E_S$ and the energy in the normal lead $E_N$ then satisfy \begin{equation} E_S - E_N = \pm(Q + E_{E}), \end{equation} where $Q$ is the dissipated heat upon the tunneling process $0\rightarrow 1$, determined solely by the external control parameter $n_g$. The dimensionless medium entropy change of such an event is \begin{equation} \Delta s_{m}^\pm = \mp \beta_S E_S \pm \beta_N E_N \pm \beta_{E} E_{E}, \end{equation} where $\beta_i = 1 / k_B T_i$ denotes the inverse temperature. 
According to the theory for sequential tunneling in an electromagnetic environment \cite{Ingold}, the tunneling rates for transitions $\Gamma_{0 \rightarrow 1} \equiv \Gamma_+$ and $\Gamma_{1 \rightarrow 0} \equiv \Gamma_-$ are \begin{equation} \label{eq:Trate} \begin{split} \Gamma_\pm(Q) =& \int dE_S \int dE_{E} \gamma_\pm(E_S, Q, E_{E}); \\ \gamma_\pm(E_S, Q, E_{E}) =& \frac{1}{e^2R_T} N_S(E_S) f_S(\pm E_S) \times \\ & P(\pm E_{E}) f_N(\mp E_N), \end{split}\end{equation} where $R_T$ is the tunneling resistance, $N_S(E) = \textrm{Re} \left(|E| / \sqrt{(E^2 - \Delta^2)}\right)$ is the normalized BCS superconductor density of states with a superconductor energy gap $\Delta$, $P(E_{E})$ is the probability for the environment to absorb the energy $E_{E}$, and $f_{N/S}(E_{N/S}) = (1 + \exp(\beta_{N/S} E_{N/S}))^{-1}$ is the Fermi function of the N/S lead, giving the probability for an electron to occupy the energy level $E_{N/S}$. The conditional probability for the energy parameters to be exactly $E_S$ and $E_{E}$ is $P(E_S, E_{E}~|~Q, n\rightarrow n \pm 1) = \gamma_\pm(E_S, Q, E_{E}) / \Gamma_\pm(Q).$ The left-hand side of Eq. (\ref{eq:entropyAverage}) becomes \begin{equation} \langle e^{-\Delta s_{m}^\pm}\rangle = \int dE_S \int dE_{E} e^{-\Delta s_{m}^\pm} \frac{\gamma_\pm(E_S, Q, E_{E})}{\Gamma_\pm(Q)}. \end{equation} Since the environment function satisfies detailed balance, $P(E_{E}) / P(-E_{E}) = e^{\beta_{E} E_{E}}$, and the Fermi function satisfies $e^{\beta_{N/S} E_{N/S}} f_{N/S}(E_{N/S}) = f_{N/S}(-E_{N/S})$, one obtains $e^{-\Delta s_{m}^\pm} \gamma_\pm(E_S, Q, E_{E}) = \gamma_\mp(E_S, Q, E_{E})$, and with Eq.
(\ref{eq:Trate}) the average is then \begin{equation} \langle e^{-\Delta s_{m}^\pm}\rangle = \frac{\Gamma_\mp(Q)}{\Gamma_\pm(Q)} = e^{-\Delta s_m^{\pm,\textrm{cc}}}.\end{equation} \section{Extraction of tunneling rates from state probabilities} The tunneling rates $\Gamma_{1 \rightarrow 0}$ and $\Gamma_{0 \rightarrow 1}$ are obtained from the master equation for a two-state system. At any given time instant $t$, the system has a probability $P_1$ to occupy the charge state $n = 1$. As the charge state must be either $n = 0$ or $n = 1$, the occupation probability for $n = 0$ is $P_0 = 1 - P_1$. The master equation is then \begin{equation} \dot P_1 = -\Gamma_{1 \rightarrow 0}(n_g(t)) P_1 + \Gamma_{0 \rightarrow 1}(n_g(t))(1 - P_1). \end{equation} In order to solve the tunneling rates as functions of $n_g$, the occupation probability is calculated for both forward $n_g^\rightarrow(t)$ and reverse drives $n_g^\leftarrow(t)$. These satisfy $n_g^\leftarrow(t) = n_g^\rightarrow(\tau - t)$, and by a change of variable $t' = \tau - t$, $n_g^\leftarrow(t) = n_g^\rightarrow(t')$, two equations are obtained: \begin{equation} \begin{split} \dot P_1^\rightarrow &= -\Gamma_{1 \rightarrow 0}(n_g^\rightarrow(t)) P_1^\rightarrow + \Gamma_{0 \rightarrow 1}(n_g^\rightarrow(t))(1 - P_1^\rightarrow); \\ -\dot P_1^\leftarrow &= -\Gamma_{1 \rightarrow 0}(n_g^\rightarrow(t')) P_1^\leftarrow + \Gamma_{0 \rightarrow 1}(n_g^\rightarrow(t'))(1 - P_1^\leftarrow). \end{split} \end{equation} The rates are then solved as \begin{equation} \label{eq:RateEquations} \begin{split} \Gamma_{1 \rightarrow 0}(n_g^\rightarrow(t)) &= \frac{\dot p_\leftarrow (1 - p_\rightarrow) + \dot p_\rightarrow(1 - p_\leftarrow)}{p_\leftarrow - p_\rightarrow};\\ \Gamma_{0 \rightarrow 1}(n_g^\rightarrow(t)) &= \frac{\dot p_\rightarrow p_\leftarrow + \dot p_\leftarrow p_\rightarrow}{p_\leftarrow - p_\rightarrow}. \\ \end{split}\end{equation} $p_\rightarrow$ and $p_\leftarrow$ are obtained from the measurements for a time interval $t \ldots
t + \Delta t$ by averaging $n$ over the ensemble of process repetitions. The obtained distributions are inserted in Eq. (\ref{eq:RateEquations}) to obtain the rates. As shown in Fig. 2 (b), Eq. (\ref{eq:Trate}) describes the extracted tunneling rates well. The direct effect of the environment is negligible, and we take the limit of weak environment, $P(E_{E}) = \delta(E_{E})$. By fitting Eq. (\ref{eq:Trate}) to the measured rates, we obtain $R_T \simeq 1.7~$M$\Omega$, $\Delta \simeq 224~\mu$eV, and $E_C \simeq 162~\mu$eV. Here, $T_N$ is assumed to be the temperature of the cryostat, while $T_S$ is obtained for each measurement separately as listed in Table 1. \section{Fabrication methods} The sample was fabricated by the standard shadow evaporation technique \cite{77-Dolan}. The superconducting structures are aluminium with a thickness of $\simeq 25$ nm. The tunnel barriers are formed by exposing the aluminium to oxygen, oxidizing its surface into an insulating aluminium oxide layer. The normal metal is copper with a thickness of $\simeq 30$ nm.
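The inversion formulas of Eq. (\ref{eq:RateEquations}) can be sanity-checked in simulation: prescribe a pair of rates, integrate the master equation under the forward and the reverse drive, and recover the rates from the two occupation curves. A minimal sketch in Python (the linear drive and the toy rate functions are illustrative choices, not the experimental ones):

```python
import numpy as np

# Toy rate models as functions of the control parameter n_g (illustrative only).
G10 = lambda ng: 0.2 + ng        # Gamma_{1 -> 0}(n_g)
G01 = lambda ng: 1.2 - ng        # Gamma_{0 -> 1}(n_g)

tau, N = 1.0, 20000
t = np.linspace(0.0, tau, N)
dt = t[1] - t[0]
ng_fwd = t / tau                 # forward drive: n_g ramps from 0 to 1

def evolve(ng, p0):
    # Euler integration of  dP1/dt = -G10(n_g) P1 + G01(n_g) (1 - P1).
    p = np.empty_like(ng)
    p[0] = p0
    for i in range(len(ng) - 1):
        p[i + 1] = p[i] + dt * (-G10(ng[i]) * p[i] + G01(ng[i]) * (1.0 - p[i]))
    return p

p_fwd = evolve(ng_fwd, 0.0)              # forward protocol, start in n = 0
p_rev = evolve(ng_fwd[::-1], 1.0)[::-1]  # reverse drive, re-parametrized t' = tau - t

dp_fwd = np.gradient(p_fwd, dt)
dp_rev = np.gradient(p_rev, dt)          # \dot p_leftarrow in the t' variable

den = p_rev - p_fwd
G10_rec = (dp_rev * (1.0 - p_fwd) + dp_fwd * (1.0 - p_rev)) / den
G01_rec = (dp_fwd * p_rev + dp_rev * p_fwd) / den
```

Away from the endpoints of the drive, `G10_rec` and `G01_rec` reproduce the prescribed rates to the accuracy of the time discretization.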
\section{Introduction} \label{sec:Intro} Consider the following initial-boundary value problems for the linearized Korteweg-de Vries (LKdV) equation: \newline\noindent{\bf Problem 1} \begin{subequations} \label{eqn:introIBVP.1} \begin{align} \label{eqn:introIBVP.1:PDE} q_t(x,t) + q_{xxx}(x,t) &= 0 & (x,t) &\in (0,1)\times(0,T), \\ q(x,0) &= f(x) & x &\in [0,1], \\ q(0,t) = q(1,t) &= 0 & t &\in [0,T], \\ q_x(1,t) &= q_x(0,t)/2 & t &\in [0,T]. \end{align} \end{subequations} \noindent{\bf Problem 2} \begin{subequations} \label{eqn:introIBVP.2} \begin{align} \label{eqn:introIBVP.2:PDE} q_t(x,t) + q_{xxx}(x,t) &= 0 & (x,t) &\in (0,1)\times(0,T), \\ q(x,0) &= f(x) & x &\in [0,1], \\ q(0,t) = q(1,t) = q_x(1,t) &= 0 & t &\in [0,T]. \end{align} \end{subequations} It is shown in~\cite{FP2001a,Pel2005a,Smi2012a,Smi2013a} that these problems are well-posed and that their solutions can be expressed in the form \BE \label{eqn:introIBVP.solution.2} q(x,t) = \frac{1}{2\pi}\left\{\int_{\Gamma^+} + \int_{\Gamma_0}\right\} e^{i\lambda x + i\lambda^3t} \frac{\zeta^+(\lambda)}{\Delta(\lambda)}\d\lambda + \frac{1}{2\pi}\int_{\Gamma^-} e^{i\lambda(x-1) + i\lambda^3t} \frac{\zeta^-(\lambda)}{\Delta(\lambda)}\d\lambda, \EE where $\Gamma_0$ is the circular contour of radius $\frac{1}{2}$ centred at $0$, $\Gamma^\pm$ are the boundaries of the domains $\{\lambda\in\C^\pm:\Im(\lambda^3)>0$ and $|\lambda|>1\}$ as shown on figure~\ref{fig:3o-cont}, $\alpha$ is the root of unity $e^{2\pi i/3}$, $\hat{f}(\lambda)$ is the Fourier transform \BE \int_0^1 e^{-i\lambda x}f(x)\d x, \qquad \lambda\in\C \EE and $\zeta^\pm(\lambda)$, $\Delta(\lambda)$ are defined as follows for all $\lambda\in\C$: \newline\noindent{\bf Problem 1} \begin{subequations} \label{eqn:introIBVP.DeltaZeta.1} \begin{align} \Delta(\lambda) &= e^{i\lambda} + \alpha e^{i\alpha\lambda} + \alpha^2 e^{i\alpha^2\lambda} + 2(e^{-i\lambda} + \alpha e^{-i\alpha\lambda} + \alpha^2 e^{-i\alpha^2\lambda}),\\ \notag \zeta^+(\lambda) &= 
\hat{f}(\lambda)(e^{i\lambda}+2\alpha e^{-i\alpha\lambda}+2\alpha^2 e^{-i\alpha^2\lambda}) + \hat{f}(\alpha\lambda)(\alpha e^{i\alpha\lambda}-2\alpha e^{-i\lambda}) \\ &\hspace{40ex} + \hat{f}(\alpha^2\lambda)(\alpha^2 e^{i\alpha^2\lambda}-2\alpha^2e^{-i\lambda}), \\ \zeta^-(\lambda) &= -\hat{f}(\lambda)(2 + \alpha^2 e^{-i\alpha\lambda} + \alpha e^{-i\alpha^2\lambda}) - \alpha\hat{f}(\alpha\lambda)(2-e^{-i\alpha^2\lambda}) - \alpha^2\hat{f}(\alpha^2\lambda)(2-e^{-i\alpha\lambda}). \end{align} \end{subequations} \noindent{\bf Problem 2} \begin{subequations} \label{eqn:introIBVP.DeltaZeta.2} \begin{align} \Delta(\lambda) &= e^{-i\lambda} + \alpha e^{-i\alpha\lambda} + \alpha^2 e^{-i\alpha^2\lambda}, \\ \zeta^+(\lambda) &= \hat{f}(\lambda)(\alpha e^{-i\alpha\lambda}+\alpha^2 e^{-i\alpha^2\lambda}) - (\alpha\hat{f}(\alpha\lambda) + \alpha^2\hat{f}(\alpha^2\lambda))e^{-i\lambda}, \\ \zeta^-(\lambda) &= -\hat{f}(\lambda) - \alpha\hat{f}(\alpha\lambda) - \alpha^2\hat{f}(\alpha^2\lambda). \end{align} \end{subequations} \begin{figure} \begin{center} \includegraphics{LKdV-contours-02} \caption{Contours for the linearized KdV equation.} \label{fig:3o-cont} \end{center} \end{figure} For evolution PDEs defined in the finite interval, $x\in[0,1]$, one may expect that the solution can be expressed in terms of an infinite series. However, it is shown in~\cite{Pel2005a,Smi2012a} that for generic boundary conditions this is \emph{impossible}. The solution \emph{can} be expressed in the form of an infinite series only for a particular class of boundary value problems; this class is characterised explicitly in~\cite{Smi2012a}. 
In particular, problem 2 does \emph{not} belong to this class, in contrast to problem 1 for which there exists the following alternative representation: \BE \label{eqn:introIBVP.solution.1} q(x,t) = \frac{1}{2\pi} \sum_{\substack{\sigma\in\overline{\C^+}:\\\Delta(\sigma)=0}} \int_{\Gamma_\sigma} e^{i\lambda x + i\lambda^3t} \frac{\zeta^+(\lambda)}{\Delta(\lambda)}\d\lambda + \frac{1}{2\pi} \sum_{\substack{\sigma\in\C^-:\\\Delta(\sigma)=0}} \int_{\Gamma_\sigma} e^{i\lambda(x-1) + i\lambda^3t} \frac{\zeta^-(\lambda)}{\Delta(\lambda)}\d\lambda, \EE where $\Gamma_\sigma$ is a circular contour centred at $\sigma$ with radius $\frac{1}{2}$; the asymptotic formula for $\sigma$ is given in~\cite{Smi2013a}. By using the residue theorem, it is possible to express the right hand side of equation~\eqref{eqn:introIBVP.solution.1} in terms of an infinite series over $\sigma$. We note that even for problems for which there does exist a series representation (like problem~1), the integral representation~\eqref{eqn:introIBVP.solution.2} has certain advantages. In particular, it provides an efficient numerical evaluation of the solution~\cite{FF2008a}. Generic initial-boundary value problems for which there does \emph{not} exist an infinite series representation will be referred to as problems of type~II, in contrast to those problems whose solutions possess both an integral and a series representation, which will be referred to as problems of type~I: \begin{quote} existence of a series representation: type~I \\ existence of only an integral representation: type~II. \end{quote} \subsubsection*{Transform pair} Simple initial-boundary value problems for linear evolution PDEs can be solved via an appropriate transform pair. For example, the Dirichlet and Neumann problems of the heat equation on the finite interval can be solved with the transform pair associated with the Fourier-sine and the Fourier-cosine series, respectively.
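The classical prototype can be sketched numerically: for the Dirichlet problem of the heat equation, $q_t=q_{xx}$, $q(0,t)=q(1,t)=0$, the Fourier-sine pair diagonalises the problem, since one expands the initial datum, multiplies each coefficient by $e^{-(n\pi)^2t}$, and inverts. A short sketch (the grid, the truncation and the datum are arbitrary choices):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]
w = np.full_like(x, h)
w[[0, -1]] = h / 2.0                  # trapezoid quadrature weights on [0, 1]

f = x * (1.0 - x)                     # initial datum with f(0) = f(1) = 0
n = np.arange(1, 200)
S = np.sin(np.pi * np.outer(n, x))    # S[k, j] = sin(n_k * pi * x_j)

# Forward transform: F_n = 2 * int_0^1 sin(n pi x) f(x) dx.
F = 2.0 * S @ (w * f)

def q(t):
    # Inverse transform after evolving each mode by exp(-(n pi)^2 t).
    return (F * np.exp(-((n * np.pi) ** 2) * t)) @ S
```

At $t=0$ the pair reproduces the datum, and for $t>0$ each mode decays: the sine transform diagonalises $-\d^2/\d x^2$ with eigenvalues $(n\pi)^2$.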
Similarly, the series that can be constructed using the residue calculations of the right hand side of equation~\eqref{eqn:introIBVP.solution.1} can be obtained directly via a classical transform pair, which in turn can be constructed via standard spectral analysis. It turns out that the unified method provides an algorithmic way for constructing a transform pair tailored for a given initial-boundary value problem. For example, the integral representation~\eqref{eqn:introIBVP.solution.2} gives rise to the following transform pair tailored for solving problems~1 and~2: \begin{subequations} \label{eqn:introTrans.1.1} \begin{align} \label{eqn:introTrans.1.1a} f(x) &\mapsto F(\lambda): & F_\lambda(f) &= \begin{cases} \int_0^1 \phi^+(x,\lambda)f(x)\d x & \mbox{if } \lambda\in\Gamma^+\cup\Gamma_0, \\ \int_0^1 \phi^-(x,\lambda)f(x)\d x & \mbox{if } \lambda\in\Gamma^-, \end{cases} \\ \label{eqn:introTrans.1.1b} F(\lambda) &\mapsto f(x): & f_x(F) &= \left\{ \int_{\Gamma_0} + \int_{\Gamma^+} + \int_{\Gamma^-} \right\} e^{i\lambda x} F(\lambda) \d\lambda, \qquad x\in[0,1], \end{align} where for problems~1 and~2 respectively, $\phi^\pm$ are given by \begin{align} \notag \phi^+(x,\lambda) &= \frac{1}{2\pi\Delta(\lambda)} \left[ e^{-i\lambda x}(e^{i\lambda}+2\alpha e^{-i\alpha\lambda}+2\alpha^2 e^{-i\alpha^2\lambda}) + e^{-i\alpha\lambda x}(\alpha e^{i\alpha\lambda}-2\alpha e^{-i\lambda}) \right. \\ &\hspace{40ex} \left. + e^{-i\alpha^2\lambda x}(\alpha^2 e^{i\alpha^2\lambda}-2\alpha^2e^{-i\lambda}) \right], \label{eqn:introTrans.1.1c} \\ \notag \phi^-(x,\lambda) &= \frac{-e^{-i\lambda}}{2\pi\Delta(\lambda)} \left[ e^{-i\lambda x}(2 + \alpha^2 e^{-i\alpha\lambda} + \alpha e^{-i\alpha^2\lambda}) + \alpha e^{-i\alpha\lambda x}(2-e^{-i\alpha^2\lambda}) \right. \\ &\hspace{40ex} \left. 
+ \alpha^2 e^{-i\alpha^2\lambda x}(2-e^{-i\alpha\lambda}) \right] \label{eqn:introTrans.1.1d} \end{align} and \begin{align} \label{eqn:introTrans.1.1e} \phi^+(x,\lambda) &= \frac{1}{2\pi\Delta(\lambda)} \left[ e^{-i\lambda x}(\alpha e^{-i\alpha\lambda}+\alpha^2 e^{-i\alpha^2\lambda}) - (\alpha e^{-i\alpha\lambda x} + \alpha^2 e^{-i\alpha^2\lambda x})e^{-i\lambda} \right], \\ \label{eqn:introTrans.1.1f} \phi^-(x,\lambda) &= \frac{-e^{-i\lambda}}{2\pi\Delta(\lambda)} \left[ e^{-i\lambda x} + \alpha e^{-i\alpha\lambda x} + \alpha^2 e^{-i\alpha^2\lambda x} \right]. \end{align} \end{subequations} The alternative representation~\eqref{eqn:introIBVP.solution.1} gives rise to the following alternative transform pair tailored for solving problem~1: \begin{align} \label{eqn:introTrans.1.2} F(\lambda) &\mapsto f(x): & f^\Sigma_x(F) = \sum_{\substack{\sigma\in\C:\\\Delta(\sigma)=0}}\int_{\Gamma_\sigma} e^{i\lambda x}F(\lambda)\d\lambda, \end{align} where $F_\lambda(f)$ is defined by equations~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1c} and~\eqref{eqn:introTrans.1.1d} and $\Gamma_\sigma$ is defined below~\eqref{eqn:introIBVP.solution.1}. The validity of these transform pairs is established in section~\ref{sec:Transforms.valid}. The solution of problems~1 and~2 is then given by \BE \label{eqn:introIBVP.solution.transform.1} q(x,t) = f_x\left(e^{i\lambda^3t}F_\lambda(f)\right). \EE \subsubsection*{Spectral representation} The basis for the classical transform pairs used to solve initial-boundary value problems for linear evolution PDEs is the expansion of the initial datum in terms of appropriate eigenfunctions of the spatial differential operator. The transform pair diagonalises the associated differential operator in the sense of the classical spectral theorem. 
The main goal of this paper is to show that the unified method yields an integral representation, like~\eqref{eqn:introIBVP.solution.2}, which in turn gives rise to a transform pair like~\eqref{eqn:introTrans.1.1}, and furthermore the elucidation of the spectral meaning of such new transform pairs leads to new results in spectral theory. In connection with this, we recall that Gel'fand and coauthors have introduced the concept of generalised eigenfunctions~\cite{GV1964a} and have used these eigenfunctions to construct the spectral representations of self-adjoint differential operators~\cite{GS1967a}. This concept is inadequate for analysing the IBVPs studied here because our problems are in general non-self-adjoint. Although the given formal differential operator is self-adjoint, the boundary conditions are in general not self-adjoint. In what follows, we introduce the notion of~\emph{augmented eigenfunctions}. Actually, in order to analyse type~I and type~II IBVPs, we introduce two types of augmented eigenfunctions. Type~I augmented eigenfunctions are a slight generalisation of the eigenfunctions introduced by Gel'fand and Vilenkin and are also related to the notion of pseudospectra~\cite{TE2005a}. However, it appears that type~II eigenfunctions comprise a new class of spectral functionals. \begin{defn} \label{defn:AugEig} Let $C$ be a linear topological space with subspace $\Phi$ and let $L:\Phi\to C$ be a linear operator. Let $\gamma$ be an oriented contour in $\C$ and let $E=\{E_\lambda:\lambda\in\gamma\}$ be a family of functionals $E_\lambda\in C'$. Suppose there exist corresponding \emph{remainder} functionals $R_\lambda\in\Phi'$ and \emph{eigenvalues} $z:\gamma\to\C$ such that \BE \label{eqn:defnAugEig.AugEig} E_\lambda(L\phi) = z(\lambda) E_\lambda(\phi) + R_\lambda(\phi), \qquad\hsforall\phi\in\Phi, \hsforall \lambda\in\gamma. 
\EE If \BE \label{eqn:defnAugEig.Control1} \int_\gamma e^{i\lambda x} R_\lambda(\phi)\d\lambda = 0, \qquad \hsforall \phi\in\Phi, \hsforall x\in[0,1], \EE then we say $E$ is a family of \emph{type~\textup{I} augmented eigenfunctions} of $L$ up to integration along $\gamma$. If \BE \label{eqn:defnAugEig.Control2} \int_\gamma \frac{e^{i\lambda x}}{z(\lambda)} R_\lambda(\phi)\d\lambda = 0, \qquad \hsforall \phi \in\Phi, \hsforall x\in(0,1), \EE then we say $E$ is a family of \emph{type~\textup{II} augmented eigenfunctions} of $L$ up to integration along $\gamma$. \end{defn} We note that the class of families of augmented eigenfunctions of a given operator is closed under union. Recall that in the theory of pseudospectra it is required that the norm of the functional $R_\lambda(\phi)$ is finite, whereas in our definition it is required that the integral of $\exp(i\lambda x)R_\lambda(\phi)$ along the contour $\gamma$ vanishes. Recall that the inverse transform of the relevant transform pair is defined in terms of a contour integral, thus the above definition is sufficient for our needs. It will be shown in Section~\ref{sec:Spectral} that $\{F_\lambda:\lambda\in\Gamma_\sigma\hsexists\sigma\in\C:\Delta(\sigma)=0\}$ is a family of type~I augmented eigenfunctions of the differential operator representing the spatial part of problem~1 with eigenvalue $\lambda^3$. Similarly $\{F_\lambda:\lambda\in\Gamma_0\}$ is a family of type~I augmented eigenfunctions of the spatial operator in problem~\eqref{eqn:introIBVP.2}. However, $\{F_\lambda:\lambda\in\Gamma^+\cup\Gamma^-\}$ is a family of type~II augmented eigenfunctions. \subsubsection*{Diagonalisation of the operator} Our definition of augmented eigenfunctions, in contrast to the generalized eigenfunctions of Gel'fand and Vilenkin~\cite[Section 1.4.5]{GV1964a}, allows the occurrence of remainder functionals. However, the contribution of these remainder functionals is eliminated by integrating over $\gamma$. 
Hence, integrating equation~\eqref{eqn:defnAugEig.AugEig} over $\gamma$ gives rise to a non-self-adjoint analogue of the spectral representation of an operator. \begin{defn} We say that $E=\{E_\lambda:\lambda\in\gamma\}$ is a \emph{complete} family of functionals $E_\lambda\in C'$ if \BE \label{eqn:defnGEInt.completecriterion} \phi\in\Phi \mbox{ and } E_\lambda\phi = 0 \hsforall \lambda\in\gamma \quad \Rightarrow \quad \phi=0. \EE \end{defn} We now define a spectral representation of the non-self-adjoint differential operators we study in this paper. \begin{defn} \label{defn:Spect.Rep.I} Suppose that $E=\{E_\lambda:\lambda\in\gamma\}$ is a system of type~\textup{I} augmented eigenfunctions of $L$ up to integration over $\gamma$, and that \BE \label{eqn:Spect.Rep.defnI.conv} \int_\gamma e^{i\lambda x} E_\lambda L\phi \d\lambda \mbox{\textnormal{ converges }} \hsforall \phi\in\Phi, \hsforall x\in(0,1). \EE Furthermore, assume that $E$ is a complete system. Then $E$ provides a \emph{spectral representation} of $L$ in the sense that \BE \label{eqn:Spect.Rep.I} \int_\gamma e^{i\lambda x} E_\lambda L\phi \d\lambda = \int_\gamma e^{i\lambda x} z(\lambda) E_\lambda \phi\d\lambda \qquad \hsforall \phi\in\Phi, \hsforall x\in(0,1). \EE \end{defn} \begin{defn} \label{defn:Spect.Rep.I.II} Suppose that $E^{(\mathrm{I})}=\{E_\lambda:\lambda\in\gamma^{(\mathrm{I})}\}$ is a system of type~\textup{I} augmented eigenfunctions of $L$ up to integration over $\gamma^{(\mathrm{I})}$ and that \BE \label{eqn:Spect.Rep.defnI.II.conv1} \int_{\gamma^{(\mathrm{I})}} e^{i\lambda x} E_\lambda L\phi \d\lambda \mbox{\textnormal{ converges }} \hsforall \phi\in\Phi, \hsforall x\in(0,1). 
\EE Suppose also that $E^{(\mathrm{II})}=\{E_\lambda:\lambda\in\gamma^{(\mathrm{II})}\}$ is a system of type~\textup{II} augmented eigenfunctions of $L$ up to integration over $\gamma^{(\mathrm{II})}$ and that \BE \label{eqn:Spect.Rep.defnI.II.conv2} \int_{\gamma^{(\mathrm{II})}} e^{i\lambda x} E_\lambda \phi \d\lambda \mbox{\textnormal{ converges }} \hsforall \phi\in\Phi, \hsforall x\in(0,1). \EE Furthermore, assume that $E=E^{(\mathrm{I})}\cup E^{(\mathrm{II})}$ is a complete system. Then $E$ provides a \emph{spectral representation} of $L$ in the sense that \begin{subequations} \label{eqn:Spect.Rep.II} \begin{align} \label{eqn:Spect.Rep.II.1} \int_{\gamma^{(\mathrm{I})}} e^{i\lambda x} E_\lambda L\phi \d\lambda &= \int_{\gamma^{(\mathrm{I})}} z(\lambda) e^{i\lambda x} E_\lambda \phi \d\lambda & \hsforall \phi &\in\Phi, \hsforall x\in(0,1), \\ \label{eqn:Spect.Rep.II.2} \int_{\gamma^{(\mathrm{II})}} \frac{1}{z(\lambda)} e^{i\lambda x} E_\lambda L\phi \d\lambda &= \int_{\gamma^{(\mathrm{II})}} e^{i\lambda x} E_\lambda \phi \d\lambda & \hsforall \phi &\in\Phi, \hsforall x\in(0,1). \end{align} \end{subequations} \end{defn} According to Definition~\ref{defn:Spect.Rep.I}, the operator $L$ is diagonalised (in the traditional sense) by the complete transform pair \BE \left(E_\lambda,\int_\gamma e^{i\lambda x} \cdot \d\lambda \right). \EE Hence, augmented eigenfunctions of type~\textup{I} provide a natural extension of the generalised eigenfunctions of Gel'fand~\&~Vilenkin. This form of spectral representation is sufficient to describe the transform pair associated with problem~1. However, the spectral interpretation of the transform pair used to solve problem~2 gives rise to augmented eigenfunctions of type~II, which are clearly quite different from the generalised eigenfunctions of Gel'fand~\&~Vilenkin. 
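To make Definition~\ref{defn:AugEig} concrete in a simple setting, consider the LKdV spatial operator $L\phi=(-i)^3\phi'''=i\phi'''$ together with the plain Fourier functional $E_\lambda(\phi)=\int_0^1e^{-i\lambda x}\phi(x)\d x$ (an illustration only, not the $F_\lambda$ of the text). Three integrations by parts give $E_\lambda(L\phi)=\lambda^3E_\lambda(\phi)+R_\lambda(\phi)$ with the remainder consisting purely of boundary terms, $R_\lambda(\phi)=i\left[e^{-i\lambda x}\left(\phi''(x)+i\lambda\phi'(x)-\lambda^2\phi(x)\right)\right]_0^1$. The following sketch checks this identity numerically; the test function is an arbitrary choice satisfying the boundary conditions of problems~1 and~2:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
h = x[1] - x[0]
w = np.full_like(x, h)
w[[0, -1]] = h / 2.0                 # trapezoid quadrature weights

# phi(x) = x^2 (1-x)^2:  phi = phi' = 0 at both ends, phi'' = 2 at both ends.
phi = x**2 * (1.0 - x) ** 2
phi3 = -12.0 + 24.0 * x              # exact third derivative of phi

def E(lam, g):
    # E_lambda(g) = int_0^1 exp(-i lam x) g(x) dx  (trapezoid rule)
    return np.sum(w * np.exp(-1j * lam * x) * g)

def R(lam):
    # Boundary-term remainder; only phi'' = 2 survives at the endpoints.
    return 1j * (np.exp(-1j * lam) * 2.0 - 2.0)

# residual of  E_lam(L phi) - lam^3 E_lam(phi) - R_lam(phi)  at sample points
residuals = [abs(E(lam, 1j * phi3) - lam**3 * E(lam, phi) - R(lam))
             for lam in (3.0, 7.5)]
```

The residuals vanish to quadrature accuracy while $R_\lambda(\phi)$ itself does not: $\phi$ is not a classical eigenfunction, yet $E_\lambda$ satisfies~\eqref{eqn:defnAugEig.AugEig} with eigenvalue $z(\lambda)=\lambda^3$.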
Definition~\ref{defn:Spect.Rep.I.II} describes how an operator may be written as the sum of two parts, one of which is diagonalised in the traditional sense, whereas the other possesses a diagonalised inverse. \begin{thm} \label{thm:Diag:LKdV:1} The transform pairs $(F_\lambda,f_x)$ defined in~\eqref{eqn:introTrans.1.1a}--\eqref{eqn:introTrans.1.1d} and~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1b},~\eqref{eqn:introTrans.1.1e} and~\eqref{eqn:introTrans.1.1f} provide spectral representations of the spatial differential operators associated with problems~1 and~2 respectively in the sense of Definition~\ref{defn:Spect.Rep.I.II}. \end{thm} \begin{thm} \label{thm:Diag:LKdV:2} The transform pair $(F_\lambda,f^\Sigma_x)$ defined in~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1c},~\eqref{eqn:introTrans.1.1d} and~\eqref{eqn:introTrans.1.2} provides a spectral representation of the spatial differential operator associated with problem~1 in the sense of Definition~\ref{defn:Spect.Rep.I}. \end{thm} \begin{rmk} \label{rmk:inhomogeneous.BC} Both problems~1 and~2 involve homogeneous boundary conditions. It is straightforward to extend the above analysis for problems with inhomogeneous boundary conditions, see Section~\ref{ssec:Transform.Method:LKdV}. \end{rmk} \section{Validity of transform pairs} \label{sec:Transforms.valid} In section~\ref{ssec:Transforms.valid:LKdV} we will derive the validity of the transform pairs defined by equations~\eqref{eqn:introTrans.1.1}. In section~\ref{ssec:Transforms.valid:LKdV:General} we derive an analogous transform pair for a general IBVP. \subsection{Linearized KdV} \label{ssec:Transforms.valid:LKdV} \begin{prop} \label{prop:Transforms.valid:LKdV:2} Let $F_\lambda(f)$ and $f_x(F)$ be given by equations~\eqref{eqn:introTrans.1.1a}--\eqref{eqn:introTrans.1.1d}. 
For all $f\in C^\infty[0,1]$ such that $f(0)=f(1)=0$ and $f'(0)=2f'(1)$ and for all $x\in(0,1)$, we have \BE \label{eqn:Transforms.valid:LKdV:prop2:prob1} f_x(F_\lambda(f)) = f(x). \EE Let $F_\lambda(f)$ and $f_x(F)$ be given by equations~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1b},~\eqref{eqn:introTrans.1.1e} and~\eqref{eqn:introTrans.1.1f}. For all $f\in C^\infty[0,1]$ such that $f(0)=f(1)=f'(1)=0$ and for all $x\in(0,1)$, \BE \label{eqn:Transforms.valid:LKdV:prop2:prob2} f_x(F_\lambda(f)) = f(x). \EE \end{prop} \begin{proof} The definition of the transform pair~\eqref{eqn:introTrans.1.1a}--\eqref{eqn:introTrans.1.1d} implies \BE \label{eqn:Transforms.valid:LKdV:prop2:proof.1} f_x(F_\lambda(f)) = \frac{1}{2\pi}\left\{\int_{\Gamma^+} + \int_{\Gamma_0}\right\} e^{i\lambda x} \frac{\zeta^+(\lambda)}{\Delta(\lambda)}\d\lambda + \frac{1}{2\pi}\int_{\Gamma^-} e^{i\lambda(x-1)} \frac{\zeta^-(\lambda)}{\Delta(\lambda)}\d\lambda, \EE where $\zeta^\pm$ and $\Delta$ are given by equations~\eqref{eqn:introIBVP.DeltaZeta.1} and the contours $\Gamma^+$, $\Gamma^-$ and $\Gamma_0$ are shown in figure~\ref{fig:3o-cont}. \begin{figure} \begin{center} \includegraphics{LKdV-contours-03} \caption{Contour deformation for the linearized KdV equation.} \label{fig:3o-contdef} \end{center} \end{figure} The fastest-growing exponentials in the sectors exterior to $\Gamma^\pm$ are indicated on figure~\ref{fig:3o-contdef}a. Each of these exponentials occurs in $\Delta$ and integration by parts shows that the fastest-growing terms in $\zeta^\pm$ are the exponentials shown on figure~\ref{fig:3o-contdef}a multiplied by $\lambda^{-2}$. Hence the ratio $\zeta^+(\lambda)/\Delta(\lambda)$ decays for large $\lambda$ within the sector $\pi/3\leq\arg\lambda\leq2\pi/3$ and the ratio $\zeta^-(\lambda)/\Delta(\lambda)$ decays for large $\lambda$ within the sectors $-\pi\leq\arg\lambda\leq-2\pi/3$, $-\pi/3\leq\arg\lambda\leq0$. 
The relevant integrands are meromorphic functions with poles only at the zeros of $\Delta$. The distribution theory of zeros of exponential polynomials~\cite{Lan1931a} implies that the only poles occur within the sets bounded by $\Gamma^\pm$. The above observations and Jordan's lemma allow us to deform the relevant contours to the contour $\gamma$ shown on figure~\ref{fig:3o-contdef}b; the red arrows on figure~\ref{fig:3o-contdef}a indicate the deformation direction. Hence equation~\eqref{eqn:Transforms.valid:LKdV:prop2:proof.1} simplifies to \BE \label{eqn:Transforms.valid:LKdV:prop2:proof.2} f_x(F_\lambda(f)) = \frac{1}{2\pi}\int_\gamma \frac{e^{i\lambda x}}{\Delta(\lambda)}\left( \zeta^+(\lambda) - e^{-i\lambda}\zeta^-(\lambda) \right) \d\lambda. \EE Equations~\eqref{eqn:introIBVP.DeltaZeta.1} imply \BE \label{eqn:Transforms.valid:LKdV:prop2:proof.3} \left( \zeta^+(\lambda) - e^{-i\lambda}\zeta^-(\lambda) \right) = \hat{f}(\lambda)\Delta(\lambda), \EE where $\hat{f}$ is the Fourier transform of a piecewise smooth function supported on $[0,1]$. Hence the integrand on the right hand side of equation~\eqref{eqn:Transforms.valid:LKdV:prop2:proof.2} is an entire function, so we can deform the contour onto the real axis. The usual Fourier inversion theorem completes the proof. The proof for the transform pair~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1b},~\eqref{eqn:introTrans.1.1e} and~\eqref{eqn:introTrans.1.1f} is similar. \end{proof} Although \BE f_0(F_\lambda(f)) \neq f(0), \qquad f_1(F_\lambda(f)) \neq f(1), \EE the values at the endpoints can be recovered by taking appropriate limits in the interior of the interval. 
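The identity~\eqref{eqn:Transforms.valid:LKdV:prop2:proof.3} is purely algebraic in the three values $\hat f(\lambda)$, $\hat f(\alpha\lambda)$, $\hat f(\alpha^2\lambda)$, so it can be checked numerically directly from the definitions~\eqref{eqn:introIBVP.DeltaZeta.1}. A sketch for problem~1 (the datum $f$ is an arbitrary smooth choice):

```python
import numpy as np

alpha = np.exp(2j * np.pi / 3)        # the root of unity used throughout

x = np.linspace(0.0, 1.0, 4001)
h = x[1] - x[0]
w = np.full_like(x, h)
w[[0, -1]] = h / 2.0                  # trapezoid quadrature weights

f = np.sin(np.pi * x) * np.exp(x)     # arbitrary smooth datum

def fhat(lam):
    # Fourier transform  \hat f(lam) = int_0^1 exp(-i lam x) f(x) dx
    return np.sum(w * np.exp(-1j * lam * x) * f)

def E(m, lam):
    return np.exp(1j * m * lam)

def Delta(lam):
    return (E(1, lam) + alpha * E(alpha, lam) + alpha**2 * E(alpha**2, lam)
            + 2.0 * (E(-1, lam) + alpha * E(-alpha, lam) + alpha**2 * E(-alpha**2, lam)))

def zeta_plus(lam):
    return (fhat(lam) * (E(1, lam) + 2 * alpha * E(-alpha, lam) + 2 * alpha**2 * E(-alpha**2, lam))
            + fhat(alpha * lam) * (alpha * E(alpha, lam) - 2 * alpha * E(-1, lam))
            + fhat(alpha**2 * lam) * (alpha**2 * E(alpha**2, lam) - 2 * alpha**2 * E(-1, lam)))

def zeta_minus(lam):
    return (-fhat(lam) * (2 + alpha**2 * E(-alpha, lam) + alpha * E(-alpha**2, lam))
            - alpha * fhat(alpha * lam) * (2 - E(-alpha**2, lam))
            - alpha**2 * fhat(alpha**2 * lam) * (2 - E(-alpha, lam)))

# residual of  zeta^+ - e^{-i lam} zeta^-  -  \hat f(lam) Delta(lam)
residual = lambda lam: zeta_plus(lam) - np.exp(-1j * lam) * zeta_minus(lam) - fhat(lam) * Delta(lam)
```

The residual vanishes to machine precision for real and complex $\lambda$ alike; the cancellation uses only $1+\alpha+\alpha^2=0$, which is why the integrand of~\eqref{eqn:Transforms.valid:LKdV:prop2:proof.2} is entire.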
\subsection{General} \label{ssec:Transforms.valid:LKdV:General} \subsubsection*{Spatial differential operator} Let $C=C^\infty[0,1]$ and $B_j:C\to\mathbb{C}$ be the following linearly independent boundary forms \BE B_j\phi = \sum_{k=0}^{n-1} \left( \M{b}{j}{k}\phi^{(k)}(0) + \M{\beta}{j}{k}\phi^{(k)}(1) \right), \quad j\in\{1,2,\ldots,n\}. \EE Let $\Phi=\{\phi\in C:B_j\phi=0\hsforall j\in\{1,2,\ldots,n\}\}$ and $\{B_j^\star:j\in\{1,2,\ldots,n\}\}$ be a set of adjoint boundary forms with adjoint boundary coefficients $\Msup{b}{j}{k}{\star}$, $\Msup{\beta}{j}{k}{\star}$. Let $S:\Phi\to C$ be the differential operator defined by \BE S\phi(x)=(-i)^n\frac{\d^n\phi}{\d x^n}(x). \EE Then $S$ is formally self-adjoint but, in general, does not admit a self-adjoint extension because, in general, $B_j\neq B_j^\star$. Indeed, adopting the notation \BE [\phi\psi](x) = (-i)^n\sum_{j=0}^{n-1}(-1)^j(\phi^{(n-1-j)}(x)\overline{\psi}^{(j)}(x)), \EE of~\cite[Section~11.1]{CL1955a} and using integration by parts, we find \BE \label{eqn:S.not.s-a} ((-i\d/\d x)^n\phi,\psi) = [\phi\psi](1) - [\phi\psi](0) + (\phi,(-i\d/\d x)^n\psi), \qquad \hsforall \phi,\psi\in C^\infty[0,1]. \EE If $\phi\in\Phi$, then $\psi$ must satisfy the adjoint boundary conditions in order for $[\phi\psi](1) - [\phi\psi](0) = 0$ to be valid. \subsubsection*{Initial-boundary value problem} Associated with $S$ and constant $a\in\C$, we define the following homogeneous initial-boundary value problem: \begin{subequations} \label{eqn:IBVP} \begin{align} \label{eqn:IBVP.PDE} (\partial_t + aS)q(x,t) &= 0 & \hsforall (x,t) &\in (0,1)\times(0,T), \\ \label{eqn:IBVP.IC} q(x,0) &= f(x) & \hsforall x &\in [0,1], \\ \label{eqn:IBVP.BC} q(\cdot,t) &\in \Phi & \hsforall t &\in [0,T], \end{align} \end{subequations} where $f\in\Phi$ is arbitrary. Only certain values of $a$ are permissible. Clearly $a=0$ is nonsensical and a reparametrisation ensures there is no loss of generality in assuming $|a|=1$. 
The problem is guaranteed to be ill-posed (for the same reason as the reverse-time heat equation is ill-posed) without the following further restrictions on $a$: if $n$ is odd then $a=\pm i$ and if $n$ is even then $\Re(a)\geq0$~\cite{FP2001a,Smi2012a}. A full characterisation of well-posedness for all problems~\eqref{eqn:IBVP} is given in~\cite{Pel2004a,Smi2012a,Smi2013a}; for even-order problems, well-posedness depends upon the boundary conditions only, but for odd-order it is often the case that a problem is well-posed for $a=i$ and ill-posed for $a=-i$ or vice versa. Both problems~\eqref{eqn:introIBVP.1} and~\eqref{eqn:introIBVP.2} are well-posed. \begin{defn} \label{defn:Types.of.Problem} We classify the IBVP~\eqref{eqn:IBVP} into three classes using the definitions of~\cite{Smi2013a}: \begin{description} \item[\textnormal{type~I:}]{if the problem for $(S,a)$ is well-posed and the problem for $(S,-a)$ is well-conditioned.} \item[\textnormal{type~II:}]{if the problem for $(S,a)$ is well-posed but the problem for $(S,-a)$ is ill-conditioned.} \item[\textnormal{ill-posed:}]{otherwise.} \end{description} We will refer to the operators $S$ associated with cases~\textnormal{I} and~\textnormal{II} as operators of \emph{type~I} and \emph{type~II} respectively. \end{defn} The spectral theory of type~I operators is well understood in terms of an infinite series representation. Here, we provide an alternative spectral representation of the type~I operators and also provide a suitable spectral representation of the type~II operators. \subsubsection*{Transform pair} Let $\alpha = e^{2\pi i/n}$. We define the entries of the matrices $M^\pm(\lambda)$ entrywise by \begin{subequations} \label{eqn:M.defn} \begin{align} \label{eqn:M+.defn} \Msup{M}{k}{j}{+}(\lambda) &= \sum_{r=0}^{n-1} (-i\alpha^{k-1}\lambda)^r \Msup{b}{j}{r}{\star}, \\ \label{eqn:M-.defn} \Msup{M}{k}{j}{-}(\lambda) &= \sum_{r=0}^{n-1} (-i\alpha^{k-1}\lambda)^r \Msup{\beta}{j}{r}{\star}. 
\end{align} \end{subequations} Then the matrix $M(\lambda)$, defined by \BE \M{M}{k}{j}(\lambda) = \Msup{M}{k}{j}{+}(\lambda) + \Msup{M}{k}{j}{-}(\lambda)e^{-i\alpha^{k-1}\lambda}, \EE is a realization of Birkhoff's \emph{adjoint characteristic matrix}. We define $\Delta(\lambda) = \det M(\lambda)$. From the theory of exponential polynomials~\cite{Lan1931a}, we know that the only zeros of $\Delta$ are of finite order and are isolated with positive infimal separation $5\epsilon$, say. We define $\Msups{X}{}{}{l}{j}$ as the $(n-1)\times(n-1)$ submatrix of $M$ with $(1,1)$ entry the $(l+1,j+1)$ entry of $M$. The transform pair is given by \begin{subequations} \label{eqn:defn.forward.transform} \begin{align} f(x) &\mapsto F(\lambda): & F_\lambda(f) &= \begin{cases} F_\lambda^+(f) & \mbox{if } \lambda\in\Gamma_0^+\cup\Gamma_a^+, \\ F_\lambda^-(f) & \mbox{if } \lambda\in\Gamma_0^-\cup\Gamma_a^-, \end{cases} \\ \label{eqn:defn.inverse.transform.2} F(\lambda) &\mapsto f(x): & f_x(F) &= \int_{\Gamma} e^{i\lambda x} F(\lambda) \d\lambda, \qquad x\in[0,1], \end{align} \end{subequations} where, for $\lambda\in\C$ such that $\Delta(\lambda)\neq0$, \begin{subequations} \label{eqn:defn.Fpm.rho} \begin{align} F^+_\lambda(f) &= \frac{1}{2\pi\Delta(\lambda)} \sum_{l=1}^n\sum_{j=1}^n\det \Msups{X}{}{}{l}{j}(\lambda) \Msup{M}{1}{j}{+}(\lambda) \int_0^1 e^{-i\alpha^{l-1}\lambda x} f(x)\d x, \\ F^-_\lambda(f) &= \frac{-e^{-i\lambda}}{2\pi\Delta(\lambda)} \sum_{l=1}^n\sum_{j=1}^n\det \Msups{X}{}{}{l}{j}(\lambda) \Msup{M}{1}{j}{-}(\lambda) \int_0^1 e^{-i\alpha^{l-1}\lambda x} f(x)\d x, \end{align} \end{subequations} and the various contours are defined by \begin{subequations} \begin{align} \Gamma &= \Gamma_0 \cup \Gamma_a, \\ \Gamma_0 &= \Gamma^+_0 \cup \Gamma^-_0, \\ \Gamma^+_0 &= \bigcup_{\substack{\sigma\in\overline{\C^+}:\\\Delta(\sigma)=0}}C(\sigma,\epsilon), \\ \Gamma^-_0 &= \bigcup_{\substack{\sigma\in\C^-:\\\Delta(\sigma)=0}}C(\sigma,\epsilon), \\ \Gamma_a &= \Gamma^+_a \cup
\Gamma^-_a, \\ \notag \Gamma^\pm_a\; &\mbox{is the boundary of the domain} \\ &\hspace{5ex}\left\{\lambda\in\C^\pm:\Re(a\lambda^n)>0\right\} \setminus \bigcup_{\substack{\sigma\in\C:\\\Delta(\sigma)=0}}D(\sigma,2\epsilon). \end{align} \end{subequations} \begin{figure} \begin{center} \includegraphics{General-contours-02} \caption{Definition of the contour $\Gamma$.} \label{fig:general-contdef} \end{center} \end{figure} Figure~\ref{fig:general-contdef} shows the position of the contours for some hypothetical $\Delta$ with zeros at the black dots. The contour $\Gamma_0^+$ is shown in blue and the contour $\Gamma_0^-$ is shown in black. The contours $\Gamma_a^+$ and $\Gamma_a^-$ are shown in red and green respectively. This case corresponds to $a=-i$. The figure indicates the possibility that there may be infinitely many zeros lying in the interior of the sectors bounded by $\Gamma_a$. For such a zero, $\Gamma_a$ has a circular component enclosing this zero with radius $2\epsilon$. The validity of the transform pairs is expressed in the following proposition: \begin{prop} \label{prop:Transforms.valid:General:2} Let $S$ be a type~\textnormal{I} or type~\textnormal{II} operator. Then for all $f\in\Phi$ and for all $x\in(0,1)$, \BE \label{eqn:Transforms.valid:General:prop2} f_x(F_\lambda(f)) = \left\{\int_{\Gamma^+_0}+\int_{\Gamma^+_a}\right\}e^{i\lambda x} F^+_\lambda(f)\d\lambda + \left\{\int_{\Gamma^-_0}+\int_{\Gamma^-_a}\right\}e^{i\lambda x} F^-_\lambda(f)\d\lambda = f(x). \EE \end{prop} \begin{proof} A simple calculation yields \BE \label{eqn:F+-F-.is.phihat} \hsforall f\in C,\hsforall S, \qquad F^+_\lambda(f)-F^-_\lambda(f) = \frac{1}{2\pi}\hat{f}(\lambda). \EE As shown in~\cite{Smi2012a}, the well-posedness of the initial-boundary value problem implies $F^\pm_\lambda(f)=O(\lambda^{-1})$ as $\lambda\to\infty$ within the sectors exterior to $\Gamma^\pm_a$. 
The only singularities of $F^\pm_\lambda(f)$ are isolated poles; hence, by Jordan's Lemma, \begin{multline} \label{eqn:Transforms.valid:General:2:proof.1} \left\{\int_{\Gamma^+_0}+\int_{\Gamma^+_a}\right\}e^{i\lambda x} F^+_\lambda(f)\d\lambda + \left\{\int_{\Gamma^-_0}+\int_{\Gamma^-_a}\right\}e^{i\lambda x} F^-_\lambda(f)\d\lambda \\ = \sum_{\substack{\sigma\in\C:\\\Im(\sigma)>\epsilon,\\\Delta(\sigma)=0}}\left\{\int_{C(\sigma,\epsilon)}-\int_{C(\sigma,2\epsilon)}\right\}e^{i\lambda x} F^+_\lambda(f)\d\lambda + \sum_{\substack{\sigma\in\C:\\\Im(\sigma)<\epsilon,\\\Delta(\sigma)=0}}\left\{\int_{C(\sigma,\epsilon)}-\int_{C(\sigma,2\epsilon)}\right\}e^{i\lambda x} F^-_\lambda(f)\d\lambda \\ + \int_\gamma e^{i\lambda x}\left( F^+_\lambda(f) - F^-_\lambda(f) \right) \d\lambda, \end{multline} where $\gamma$ is a contour running along the real line in the increasing direction but perturbed along circular arcs in such a way that it is always at least $\epsilon$ away from each zero of $\Delta$. The series on the right hand side of equation~\eqref{eqn:Transforms.valid:General:2:proof.1} yield a zero contribution. As $f\in\Phi$, its Fourier transform $\hat{f}$ is an entire function hence, by statement~\eqref{eqn:F+-F-.is.phihat}, the integrand in the final term on the right hand side of equation~\eqref{eqn:Transforms.valid:General:2:proof.1} is an entire function and we may deform $\gamma$ onto the real line. The validity of the usual Fourier transform completes the proof. \end{proof} \section{True integral transform method for IBVP} \label{sec:Transform.Method} In section~\ref{ssec:Transform.Method:LKdV} we will prove equation~\eqref{eqn:introIBVP.solution.transform.1} for the transform pairs~\eqref{eqn:introTrans.1.1}. In section~\ref{ssec:Transform.Method:LKdV:General}, we establish equivalent results for general type~I and type~II initial-boundary value problems.
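Before specialising to particular problems, the assembly of the characteristic matrix $M(\lambda)$ and determinant $\Delta(\lambda)$ from equations~\eqref{eqn:M.defn} can be illustrated numerically. The following Python sketch is not part of the method itself; the coefficient arrays `bstar` and `betastar` are a hypothetical second-order, Dirichlet-like choice, picked only so that $\Delta$ reduces to a hand-checkable expression.

```python
# Sketch: assembling the adjoint characteristic matrix M(lambda) and
# Delta(lambda) = det M(lambda) following equations (M.defn).
# bstar[j, r] and betastar[j, r] are HYPOTHETICAL boundary coefficients.
import numpy as np

def char_matrix(lam, bstar, betastar):
    """M_{kj}(lam) = M^+_{kj}(lam) + M^-_{kj}(lam) exp(-i alpha^{k-1} lam),
    with M^{+/-}_{kj}(lam) = sum_r (-i alpha^{k-1} lam)^r coeff[j, r]."""
    n = bstar.shape[0]
    alpha = np.exp(2j * np.pi / n)
    M = np.zeros((n, n), dtype=complex)
    for k in range(n):                    # 0-based k plays the role of k-1
        z = -1j * alpha**k * lam
        powers = z ** np.arange(n)        # (z^0, ..., z^{n-1})
        for j in range(n):
            Mplus = np.dot(powers, bstar[j])
            Mminus = np.dot(powers, betastar[j])
            M[k, j] = Mplus + Mminus * np.exp(-1j * alpha**k * lam)
    return M

def Delta(lam, bstar, betastar):
    return np.linalg.det(char_matrix(lam, bstar, betastar))

# Hypothetical n = 2 coefficients: one condition at x = 0, one at x = 1.
bstar = np.array([[1.0, 0.0], [0.0, 0.0]])
betastar = np.array([[0.0, 0.0], [1.0, 0.0]])

lam = 1.3 + 0.4j
# For this choice, M = [[1, e^{-i lam}], [1, e^{i lam}]], so
# Delta(lam) = e^{i lam} - e^{-i lam} = 2i sin(lam), with zeros at k*pi.
print(abs(Delta(lam, bstar, betastar) - 2j * np.sin(lam)))
```

For this toy choice the zeros of $\Delta$ are real and simple; for the operators studied here $\Delta$ is a genuine exponential polynomial whose zeros must be located asymptotically.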
\subsection{Linearized KdV} \label{ssec:Transform.Method:LKdV} \begin{prop} \label{prop:Transform.Method:LKdV:2} The solution of problem~1 is given by equation~\eqref{eqn:introIBVP.solution.transform.1}, with $F_\lambda(f)$ and $f_x(F)$ defined by equations~\eqref{eqn:introTrans.1.1a}--\eqref{eqn:introTrans.1.1d}. The solution of problem~2 is given by equation~\eqref{eqn:introIBVP.solution.transform.1}, with $F_\lambda(f)$ and $f_x(F)$ defined by equations~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1b},~\eqref{eqn:introTrans.1.1e} and~\eqref{eqn:introTrans.1.1f}. \end{prop} \begin{proof} We present the proof for problem~2. The proof for problem~1 is very similar. Suppose $q\in C^\infty([0,1]\times[0,T])$ is a solution of problem~\eqref{eqn:introIBVP.2}. Applying the forward transform to $q$ yields \BE F_\lambda(q(\cdot,t)) = \begin{cases} \int_0^1 \phi^+(x,\lambda)q(x,t)\d x & \mbox{if } \lambda\in\overline{\C^+}, \\ \int_0^1 \phi^-(x,\lambda)q(x,t)\d x & \mbox{if } \lambda\in\C^-. \end{cases} \EE The PDE and integration by parts imply the following: \begin{align} \notag \frac{\d}{\d t} F_\lambda(q(\cdot,t)) &= -\int_0^1 \phi^\pm(x,\lambda)q_{xxx}(x,t)\d x \\ \notag &\hspace{-4em} = - \partial_{x}^2q(1,t) \phi^\pm(1,\lambda) + \partial_{x}^2q(0,t) \phi^\pm(0,\lambda) + \partial_{x}q(1,t) \partial_{x}\phi^\pm(1,\lambda) - \partial_{x}q(0,t) \partial_{x}\phi^\pm(0,\lambda) \\ &\hspace{3em}- q(1,t) \partial_{xx}\phi^\pm(1,\lambda) + q(0,t) \partial_{xx}\phi^\pm(0,\lambda) + i\lambda^3 F_\lambda(q(\cdot,t)). \end{align} Rearranging, multiplying by $e^{-i\lambda^3t}$ and integrating, we find \BE F_\lambda(q(\cdot,t)) = e^{i\lambda^3t}F_\lambda(f) + e^{i\lambda^3t} \sum_{j=0}^2 (-1)^j\left[\partial_{x}^{2-j}\phi^\pm(0,\lambda) Q_j(0,\lambda) - \partial_{x}^{2-j}\phi^\pm(1,\lambda) Q_j(1,\lambda)\right], \EE where \BE Q_j(x,\lambda) = \int_0^t e^{-i\lambda^3s} \partial_x^j q(x,s) \d s.
\EE Evaluating $\partial_{x}^{j}\phi^\pm(0,\lambda)$ and $\partial_{x}^{j}\phi^\pm(1,\lambda)$, we obtain \begin{multline} F_\lambda(q(\cdot,t)) = e^{i\lambda^3t}F_\lambda(f) + \frac{e^{i\lambda^3t}}{2\pi} \left[ Q_1(1,\lambda)i\lambda(\alpha-\alpha^2)\frac{e^{i\alpha\lambda}-e^{i\alpha^2\lambda}}{\Delta(\lambda)} \right. \\ \left. + Q_0(0,\lambda)\lambda^2\frac{2e^{-i\lambda}-\alpha e^{-i\alpha\lambda}-\alpha^2 e^{-i\alpha^2\lambda}}{\Delta(\lambda)} \right. \\ \left. + Q_0(1,\lambda)\lambda^2\frac{(1-\alpha^2)e^{i\alpha\lambda}+(1-\alpha)e^{i\alpha^2\lambda}}{\Delta(\lambda)} + Q_2(0,\lambda) + Q_1(0,\lambda) i\lambda \right], \end{multline} for all $\lambda\in\overline{\C^+}$ and \begin{multline} F_\lambda(q(\cdot,t)) = e^{i\lambda^3t}F_\lambda(f) + \frac{e^{i\lambda^3t}}{2\pi} \left[ Q_1(1,\lambda)i\lambda\frac{e^{-i\lambda}+\alpha^2e^{-i\alpha\lambda}+\alpha e^{-i\alpha^2\lambda}}{\Delta(\lambda)} \right. \\ \left. + Q_0(0,\lambda)\lambda^2\frac{3}{\Delta(\lambda)} - Q_0(1,\lambda)\lambda^2\frac{e^{-i\lambda}+e^{-i\alpha\lambda}+e^{-i\alpha^2\lambda}}{\Delta(\lambda)} + Q_2(1,\lambda)e^{-i\lambda} \right], \end{multline} for all $\lambda\in\C^-$. Hence, the validity of the transform pair (Proposition~\ref{prop:Transforms.valid:LKdV:2}) implies \begin{multline} \label{eqn:Transform.Method:LKdV.2:q.big} q(x,t) = \left\{\int_{\Gamma_0}+\int_{\Gamma^+}+\int_{\Gamma^-}\right\} e^{i\lambda x+i\lambda^3t} F_\lambda(f)\d\lambda \\ +\frac{1}{2\pi}\left\{\int_{\Gamma_0}+\int_{\Gamma^+}\right\} e^{i\lambda x+i\lambda^3t} \left[ Q_1(1,\lambda)i\lambda(\alpha-\alpha^2)\frac{e^{i\alpha\lambda}-e^{i\alpha^2\lambda}}{\Delta(\lambda)} \right. \\ \left. 
+ Q_0(0,\lambda)\lambda^2\frac{2e^{-i\lambda}-\alpha e^{-i\alpha\lambda}-\alpha^2 e^{-i\alpha^2\lambda}}{\Delta(\lambda)} + Q_0(1,\lambda)\lambda^2\frac{(1-\alpha^2)e^{i\alpha\lambda}+(1-\alpha)e^{i\alpha^2\lambda}}{\Delta(\lambda)} \right] \d\lambda \\ +\frac{1}{2\pi}\int_{\Gamma^-} e^{i\lambda x+i\lambda^3t} \left[ Q_1(1,\lambda)i\lambda\frac{e^{-i\lambda}+\alpha^2e^{-i\alpha\lambda}+\alpha e^{-i\alpha^2\lambda}}{\Delta(\lambda)} \right. \\ \left. + Q_0(0,\lambda)\lambda^2\frac{3}{\Delta(\lambda)} - Q_0(1,\lambda)\lambda^2\frac{e^{-i\lambda}+e^{-i\alpha\lambda}+e^{-i\alpha^2\lambda}}{\Delta(\lambda)} \right] \d\lambda \\ +\frac{1}{2\pi}\left\{\int_{\Gamma_0}+\int_{\Gamma^+}\right\} e^{i\lambda x+i\lambda^3t} \left[ Q_2(0,\lambda) + Q_1(0,\lambda) i\lambda \right] \d\lambda \\ +\frac{1}{2\pi}\int_{\Gamma^-} e^{i\lambda(x-1)+i\lambda^3t} Q_2(1,\lambda) \d\lambda. \end{multline} Integration by parts yields \BE Q_j(x,\lambda) = O(\lambda^{-3}), \EE as $\lambda\to\infty$ within the regions enclosed by $\Gamma^\pm$. Hence, by Jordan's lemma, the final two lines of equation~\eqref{eqn:Transform.Method:LKdV.2:q.big} vanish. The boundary conditions imply \BE Q_0(0,\lambda) = Q_0(1,\lambda) = Q_1(1,\lambda) = 0, \EE so the second, third, fourth and fifth lines of equation~\eqref{eqn:Transform.Method:LKdV.2:q.big} vanish. Hence \BE q(x,t) = \left\{\int_{\Gamma_0}+\int_{\Gamma^+}+\int_{\Gamma^-}\right\} e^{i\lambda x+i\lambda^3t} F_\lambda(f)\d\lambda.
\EE \end{proof} The above proof also demonstrates how the transform pair may be used to solve a problem with inhomogeneous boundary conditions: consider the problem, \begin{subequations} \label{eqn:introIBVP.2inhomo} \begin{align} \label{eqn:introIBVP.2inhomo:PDE} q_t(x,t) + q_{xxx}(x,t) &= 0 & (x,t) &\in (0,1)\times(0,T), \\ q(x,0) &= \phi(x) & x &\in [0,1], \\ q(0,t) &= h_1(t) & t &\in [0,T], \\ q(1,t) &= h_2(t) & t &\in [0,T], \\ q_x(1,t) &= h_3(t) & t &\in [0,T], \end{align} \end{subequations} for some given boundary data $h_j\in C^\infty[0,T]$. Then $Q_0(0,\lambda)$, $Q_0(1,\lambda)$ and $Q_1(1,\lambda)$ are nonzero, but they are known quantities, namely $t$-transforms of the boundary data. Substituting these values into equation~\eqref{eqn:Transform.Method:LKdV.2:q.big} yields an explicit expression for the solution. \subsection{General} \label{ssec:Transform.Method:LKdV:General} \begin{prop} \label{prop:Transform.Method:General:2} The solution of a type~\textnormal{I} or type~\textnormal{II} initial-boundary value problem is given by \BE \label{eqn:Transform.Method:General:prop2:1} q(x,t) = f_x \left( e^{-a\lambda^nt} F_\lambda(f) \right). \EE \end{prop} \begin{lem} \label{lem:GEInt1} Let $f\in\Phi$. Then there exist polynomials $P^\pm_f$ of degree at most $n-1$ such that \begin{subequations} \begin{align} F^+_\lambda(Sf) &= \lambda^n F^+_\lambda(f) + P^+_f(\lambda), \\ F^-_\lambda(Sf) &= \lambda^n F^-_\lambda(f) + P^-_f(\lambda) e^{-i\lambda}. \end{align} \end{subequations} \end{lem} \begin{proof} Let $(\phi,\psi)$ be the usual inner product $\int_0^1\phi(x)\overline{\psi}(x)\d x$.
For any $\lambda\in\Gamma$, we can represent $F^\pm_\lambda$ as the inner product $F^\pm_\lambda(f)=(f,\phi^\pm_\lambda)$, for the function $\phi^\pm_\lambda(x)$, smooth in $x$ and meromorphic in $\lambda$, defined by \begin{subequations} \label{eqn:defn.fpm.rho} \begin{align} \overline{\phi^+_\lambda}(x) &= \frac{1}{2\pi\Delta(\lambda)} \sum_{l=1}^n\sum_{j=1}^n\det \Msups{X}{}{}{l}{j}(\lambda) \Msup{M}{1}{j}{+}(\lambda) e^{-i\alpha^{l-1}\lambda x}, \\ \overline{\phi^-_\lambda}(x) &= \frac{-e^{-i\lambda}}{2\pi\Delta(\lambda)} \sum_{l=1}^n\sum_{j=1}^n\det \Msups{X}{}{}{l}{j}(\lambda) \Msup{M}{1}{j}{-}(\lambda) e^{-i\alpha^{l-1}\lambda x}. \end{align} \end{subequations} As $\phi^\pm_\lambda$, $Sf\in C^\infty[0,1]$ and $\alpha^{(l-1)n}=1$, equation~\eqref{eqn:S.not.s-a} yields \BE F^\pm_\lambda(Sf) = \lambda^n F^\pm_\lambda(f) + [f \phi^\pm_\lambda](1) - [f \phi^\pm_\lambda](0). \EE If $B$, $B^\star:C^\infty[0,1]\to\C^n$ are the real vector boundary forms \BE B=(B_1,B_2,\ldots,B_n), \qquad B^\star=(B_1^\star,B_2^\star,\ldots,B_n^\star), \EE then the boundary form formula~\cite[Theorem~11.2.1]{CL1955a} guarantees the existence of complementary vector boundary forms $B_c$, $B_c^\star$ such that \BE \label{eqn:propGEInt.3} [f \phi^\pm_\lambda](1) - [f \phi^\pm_\lambda](0) = Bf \cdot B_c^\star \phi^\pm_\lambda + B_cf \cdot B^\star \phi^\pm_\lambda, \EE where $\cdot$ is the sesquilinear dot product. We consider the right hand side of equation~\eqref{eqn:propGEInt.3} as a function of $\lambda$. As $Bf=0$, this expression is a linear combination of the functions $\lambda\mapsto B_k^\star\overline{\phi^\pm_\lambda}$, with coefficients given by the complementary boundary forms.
The definitions of $B_k^\star$ and $\phi_\lambda^+$ imply \begin{align} \notag B_k^\star\overline{\phi^+_\lambda} &= \frac{1}{2\pi\Delta(\lambda)} \sum_{l=1}^n\sum_{j=1}^n\det \Msups{X}{}{}{l}{j}(\lambda) \Msup{M}{1}{j}{+}(\lambda) B_k^\star(e^{-i\alpha^{l-1}\lambda \cdot}) \\ \notag &= \frac{1}{2\pi\Delta(\lambda)} \sum_{l=1}^n\sum_{j=1}^n\det \Msups{X}{}{}{l}{j}(\lambda) \Msup{M}{1}{j}{+}(\lambda) \M{M}{l}{k}(\lambda). \end{align} But \BE \sum_{l=1}^n\det\Msups{X}{}{}{l}{j}(\lambda)\M{M}{l}{k}(\lambda) = \Delta(\lambda)\M{\delta}{j}{k}, \EE so \begin{subequations} \begin{align} B_k^\star \overline{\phi^+_\lambda} &= \frac{1}{2\pi}\Msup{M}{1}{k}{+}(\lambda). \\ \intertext{Similarly,} B_k^\star \overline{\phi^-_\lambda} &= \frac{-e^{-i\lambda}}{2\pi}\Msup{M}{1}{k}{-}(\lambda). \end{align} \end{subequations} Finally, by equations~\eqref{eqn:M.defn}, $\Msup{M}{1}{k}{\pm}$ are polynomials of degree at most $n-1$. \end{proof} \begin{proof}[of Proposition~\ref{prop:Transform.Method:General:2}] Let $q$ be the solution of the problem. Then, since $q$ satisfies the partial differential equation~\eqref{eqn:IBVP.PDE}, \BE \frac{\d}{\d t} F^+_\lambda(q(\cdot,t)) = -aF^+_\lambda(S(q(\cdot,t))) = -a\lambda^n F^+_\lambda(q(\cdot,t)) - a P^+_{q(\cdot,t)}(\lambda), \EE where, by Lemma~\ref{lem:GEInt1}, $P^+_{q(\cdot,t)}$ is a polynomial of degree at most $n-1$. Hence \BE \frac{\d}{\d t} \left( e^{a\lambda^nt}F^+_\lambda(q(\cdot,t)) \right) = -ae^{a\lambda^nt}P^+_{q(\cdot,t)}(\lambda). \EE Integrating with respect to $t$ and applying the initial condition~\eqref{eqn:IBVP.IC}, we find \BE \label{eqn:Transform.Method:General:1proof.1} F^+_\lambda(q(\cdot,t)) = e^{-a\lambda^nt}F^+_\lambda(f) -a e^{-a\lambda^nt} \int_0^t e^{a\lambda^ns} P^+_{q(\cdot,s)}(\lambda)\d s.
\EE Similarly, \BE \label{eqn:Transform.Method:General:1proof.2} F^-_\lambda(q(\cdot,t)) = e^{-a\lambda^nt}F^-_\lambda(f) -a e^{-i\lambda-a\lambda^nt} \int_0^t e^{a\lambda^ns} P^-_{q(\cdot,s)}(\lambda)\d s, \EE where $P^-_{q(\cdot,t)}$ is another polynomial of degree at most $n-1$. The validity of the type~II transform pair, Proposition~\ref{prop:Transforms.valid:General:2}, implies \begin{multline} \label{eqn:Transform.Method:General:2proof.3} q(x,t) = \int_{\Gamma^+} e^{i\lambda x-a\lambda^nt}F^+_\lambda(f) \d\lambda + \int_{\Gamma^-} e^{i\lambda(x-1)-a\lambda^nt}F^-_\lambda(f) \d\lambda \\ -a \int_{\Gamma^+_0} e^{i\lambda x-a\lambda^nt} \left(\int_0^t e^{a\lambda^ns} P^+_{q(\cdot,s)}(\lambda) \d s\right) \d\lambda \\ -a \int_{\Gamma^-_0} e^{i\lambda(x-1)-a\lambda^nt} \left(\int_0^t e^{a\lambda^ns} P^-_{q(\cdot,s)}(\lambda) \d s\right) \d\lambda \\ -a \int_{\Gamma^+_a} e^{i\lambda x-a\lambda^nt} \left(\int_0^t e^{a\lambda^ns} P^+_{q(\cdot,s)}(\lambda) \d s\right) \d\lambda \\ -a \int_{\Gamma^-_a} e^{i\lambda(x-1)-a\lambda^nt} \left(\int_0^t e^{a\lambda^ns} P^-_{q(\cdot,s)}(\lambda) \d s\right) \d\lambda. \end{multline} As $P^\pm_{q(\cdot,s)}$ are polynomials, the integrands \BES e^{i\lambda x-a\lambda^nt} \left(\int_0^t e^{a\lambda^ns} P^+_{q(\cdot,s)}(\lambda) \d s\right) \mbox{ and } e^{i\lambda(x-1)-a\lambda^nt} \left(\int_0^t e^{a\lambda^ns} P^-_{q(\cdot,s)}(\lambda) \d s\right) \EES are both entire functions of $\lambda$. Hence the third and fourth terms of equation~\eqref{eqn:Transform.Method:General:2proof.3} vanish. 
Integration by parts yields \begin{align*} e^{i\lambda x-a\lambda^nt} \left(\int_0^t e^{a\lambda^ns} P^+_{q(\cdot,s)}(\lambda) \d s\right) &= O(\lambda^{-1}) \mbox{ as } \lambda\to\infty\; \begin{matrix}\mbox{ within the region }\\\mbox{ enclosed by } \Gamma^+_a,\end{matrix} \\ e^{i\lambda(x-1)-a\lambda^nt} \left(\int_0^t e^{a\lambda^ns} P^-_{q(\cdot,s)}(\lambda) \d s\right) &= O(\lambda^{-1}) \mbox{ as } \lambda\to\infty\; \begin{matrix}\mbox{ within the region }\\\mbox{ enclosed by } \Gamma^-_a.\end{matrix} \end{align*} Hence, by Jordan's Lemma, the final two terms of equation~\eqref{eqn:Transform.Method:General:2proof.3} vanish. \end{proof} \begin{rmk} The same method may be used to solve initial-boundary value problems with inhomogeneous boundary conditions. The primary difference is that statement~\eqref{eqn:F+-F-.is.phihat} must be replaced with~\cite[Lemma~4.1]{Smi2012a}. \end{rmk} \section{Analysis of the transform pair} \label{sec:Spectral} In this section we analyse the spectral properties of the transform pairs using the notion of augmented eigenfunctions. \subsection{Linearized KdV} \label{ssec:Spectral:LKdV} \subsubsection*{Augmented Eigenfunctions} Let $S^{(\mathrm{I})}$ and $S^{(\mathrm{II})}$ be the differential operators representing the spatial parts of the IBVPs~1 and~2, respectively. Each operator is a restriction of the same formal differential operator, $(-i\d/\d x)^3$, to the domain of initial data compatible with the boundary conditions of the problem: \begin{align} \label{eqn:introS1} \mathcal{D}(S^{(\mathrm{I})}) &= \{f\in C^\infty[0,1]: f(0)=f(1)=0,\; f'(0)=2f'(1)\}, \\ \label{eqn:introS2} \mathcal{D}(S^{(\mathrm{II})}) &= \{f\in C^\infty[0,1]: f(0)=f(1)=f'(1)=0\}.
\end{align} A simple calculation reveals that $\{F_\lambda:\lambda\in\Gamma_0\}$ (where $F_\lambda$ is defined by equations~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1c} and~\eqref{eqn:introTrans.1.1d}) is a family of type~I augmented eigenfunctions of $S^{(\mathrm{I})}$. Indeed, integration by parts yields \BE \label{eqn:introS1.Rphi} F_\lambda(S^{(\mathrm{I})}f) = \begin{cases} \lambda^3 F_\lambda(f) + \left( - \displaystyle\frac{i}{2\pi}f''(0) + \displaystyle\frac{\lambda}{2\pi}f'(0) \right) & \lambda\in\overline{\C^+}, \\ \lambda^3 F_\lambda(f) + \left( - \displaystyle\frac{i}{2\pi}f''(1) + \displaystyle\frac{\lambda}{2\pi}f'(1) \right) & \lambda\in\C^-. \end{cases} \EE For any $f$, the remainder functional is an entire function of $\lambda$ and $\Gamma_0$ is a closed, circular contour hence~\eqref{eqn:defnAugEig.Control1} holds. In the same way $\{F_\lambda:\lambda\in\Gamma_0\}$ (where $F_\lambda$ is defined by equations~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1e} and~\eqref{eqn:introTrans.1.1f}) is a family of type~I augmented eigenfunctions of $S^{(\mathrm{II})}$. Indeed \BE \label{eqn:introS2.Rphi} F_\lambda(S^{(\mathrm{II})}f) = \begin{cases} \lambda^3 F_\lambda(f) + \left( - \displaystyle\frac{i}{2\pi}f''(0) - \displaystyle\frac{\lambda}{2\pi}f'(0) \right) & \lambda\in\overline{\C^+}, \\ \lambda^3 F_\lambda(f) + \left( - \displaystyle\frac{i}{2\pi}f''(1) \right) & \lambda\in\C^-, \end{cases} \EE so the remainder functional is again entire. Furthermore, the ratio of the remainder functionals to the eigenvalue is a rational function with no pole in the regions enclosed by $\Gamma^\pm$ and decaying as $\lambda\to\infty$. Jordan's lemma implies~\eqref{eqn:defnAugEig.Control2} hence $\{F_\lambda:\lambda\in\Gamma^+\cup\Gamma^-\}$ is a family of type~II augmented eigenfunctions of the corresponding $S^{(\mathrm{I})}$ or $S^{(\mathrm{II})}$.
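The type~I condition exploited above rests on a simple fact of complex analysis: an entire remainder, here a polynomial in $\lambda$, integrates to zero over any closed contour such as $\Gamma_0$. A minimal numerical sketch (not drawn from the paper; the degree-one remainder $R(\lambda)$ below is hypothetical) illustrates this:

```python
# Sketch: the integral of e^{i lambda x} times an entire (polynomial)
# remainder over a closed contour vanishes by Cauchy's theorem.
import numpy as np

def contour_integral(g, centre, radius, npts=2000):
    """Trapezoidal approximation of the integral of g over C(centre, radius)."""
    t = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    lam = centre + radius * np.exp(1j * t)
    dlam = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / npts)
    return np.sum(g(lam) * dlam)

x = 0.3                                        # any x in (0, 1)
R = lambda lam: (0.7 - 0.2j) + 1.1j * lam      # hypothetical entire remainder
I = contour_integral(lambda lam: np.exp(1j * lam * x) * R(lam), 0.0, 2.0)
print(abs(I))  # ~ 0 up to quadrature error
```

The same computation with a contour that is not closed, such as the infinite contours $\Gamma^\pm$, would not vanish, which is why those families are only of type~II.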
\subsubsection*{Spectral representation of $S^{(\mathrm{II})}$} We have shown above that $\{F_\lambda:\lambda\in\Gamma_0\}$ is a family of type~I augmented eigenfunctions and $\{F_\lambda:\lambda\in\Gamma^+\cup\Gamma^-\}$ is a family of type~II augmented eigenfunctions of $S^{(\mathrm{II})}$, each with eigenvalue $\lambda^3$. It remains to show that the integrals \BE \int_{\Gamma_0}e^{i\lambda x}F_\lambda(Sf)\d\lambda, \qquad \int_{\Gamma^+\cup\Gamma^-}e^{i\lambda x}F_\lambda(f)\d\lambda \EE converge. A simple calculation reveals that $F_\lambda(\psi)$ has a removable singularity at $\lambda=0$, for any $\psi\in C$. Hence the first integral not only converges but evaluates to $0$. Thus, the second integral represents $f_x(F_\lambda(f))=f$ and converges by Proposition~\ref{prop:Transforms.valid:LKdV:2}. This completes the proof of Theorem~\ref{thm:Diag:LKdV:1} for problem~2. \subsubsection*{Spectral representation of $S^{(\mathrm{I})}$} By the above argument, it is clear that the transform pair $(F_\lambda,f_x)$ defined by equations~\eqref{eqn:introIBVP.solution.2} provides a spectral representation of $S^{(\mathrm{I})}$ in the sense of Definition~\ref{defn:Spect.Rep.I.II}, verifying Theorem~\ref{thm:Diag:LKdV:1} for problem~1. It is clear that $\{F_\lambda:\lambda\in\Gamma^\pm\}$ is not a family of type~I augmented eigenfunctions, so the representation~\eqref{eqn:introIBVP.solution.2} does \emph{not} provide a spectral representation of $S^{(\mathrm{I})}$ in the sense of Definition~\ref{defn:Spect.Rep.I}. However, equation~\eqref{eqn:introIBVP.solution.1} does provide a representation in the sense of Definition~\ref{defn:Spect.Rep.I}. Indeed, equation~\eqref{eqn:introIBVP.solution.1} implies that it is possible to deform the contours $\Gamma^\pm$ onto \BES \bigcup_{\substack{\sigma\in\C:\\\Delta(\sigma)=0}}\Gamma_\sigma. \EES It is possible to make this deformation without any reference to the initial-boundary value problem. 
By an argument similar to that in the proof of Proposition~2.1, we are able to `close' (whereas in the earlier proof we `opened') the contours $\Gamma^\pm$ onto simple circular contours each enclosing a single zero of $\Delta$. Thus, an equivalent inverse transform is given by~\eqref{eqn:introTrans.1.2}. It is clear that, for each $\sigma$ a zero of $\Delta$, $\{F_\lambda:\lambda\in\Gamma_\sigma\}$ is a family of type~I augmented eigenfunctions of $S^{(\mathrm{I})}$ up to integration over $\Gamma_\sigma$. It remains to show that the series \BE \label{eqn:LKdV1.Sphi.Series} \sum_{\substack{\sigma\in\C:\\\Delta(\sigma)=0}} \int_{\Gamma_\sigma} e^{i\lambda x} F_\lambda(Sf) \d\lambda \EE converges. The validity of the transform pair $(F_\lambda,f^\Sigma_x)$ defined by equations~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1c},~\eqref{eqn:introTrans.1.1d} and~\eqref{eqn:introTrans.1.2} is insufficient to justify this convergence since, in general, $Sf$ may not satisfy the boundary conditions, so $Sf$ may not be a valid initial datum of the problem. Thus, we prove convergence directly. The augmented eigenfunctions $F_\lambda$ are meromorphic functions of $\lambda$, represented in their definition~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1c},~\eqref{eqn:introTrans.1.1d} as the ratio of two entire functions, with singularities only at the zeros of the exponential polynomial $\Delta$. The theory of exponential polynomials~\cite{Lan1931a} implies that the only zeros of $\Delta$ are of finite order, so each integral in the series converges and is equal to the residue of the pole at $\sigma$. 
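The residue evaluation behind each term of the series can also be sketched numerically. The fragment below is illustrative only: it uses a toy determinant $\Delta(\lambda)=2i\sin\lambda$ (a stand-in with simple zeros at $k\pi$, not the $\Delta$ of problem~1) and checks that the integral over a small circle about a simple zero $\sigma$ equals $2\pi i\,g(\sigma)/\Delta'(\sigma)$.

```python
# Sketch (toy Delta): for a simple zero sigma of Delta, the contour integral
# of g(lambda)/Delta(lambda) around C(sigma, r) equals 2*pi*i * g(sigma)/Delta'(sigma).
import numpy as np

Delta = lambda lam: np.exp(1j * lam) - np.exp(-1j * lam)      # = 2i sin(lambda)
dDelta = lambda lam: 1j * (np.exp(1j * lam) + np.exp(-1j * lam))

def contour_integral(g, centre, radius, npts=4000):
    t = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    lam = centre + radius * np.exp(1j * t)
    dlam = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / npts)
    return np.sum(g(lam) * dlam)

sigma, x = np.pi, 0.4                  # simple zero of the toy Delta; x in (0, 1)
g = lambda lam: np.exp(1j * lam * x)
I = contour_integral(lambda lam: g(lam) / Delta(lam), sigma, 0.5)
residue = g(sigma) / dDelta(sigma)
print(abs(I - 2j * np.pi * residue))  # ~ 0
```

For zeros of higher (finite) order the residue involves derivatives of $g$ and $\Delta$, but, as noted above, at most finitely many zeros of $\Delta$ fail to be simple.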
Furthermore, an asymptotic calculation reveals that these zeros are at $0$, $\alpha^j\lambda_k$, $\alpha^j\mu_k$, for each $j\in\{0,1,2\}$ and $k\in\N$, where \begin{align} \label{eqn:LKdV.1.lambdak} \lambda_k &= \left(2k-\frac{1}{3}\right)\pi + i\log2 + O\left( e^{-\sqrt{3}k\pi} \right), \\ \label{eqn:LKdV.1.muk} \mu_k &= -\left(2k-\frac{1}{3}\right)\pi + i\log2 + O\left( e^{-\sqrt{3}k\pi} \right). \end{align} Evaluating the first derivative of $\Delta$ at these zeros, we find \begin{align} \Delta'(\lambda_k) &= (-1)^{k+1}\sqrt{2}e^{ i\frac{\sqrt{3}}{2}\log2} e^{\sqrt{3}\pi(k-1/6)} + O(1), \\ \Delta'(\mu_k) &= (-1)^{k} \sqrt{2}e^{-i\frac{\sqrt{3}}{2}\log2} e^{\sqrt{3}\pi(k-1/6)} + O(1). \end{align} Hence, at most finitely many zeros of $\Delta$ are of order greater than $1$. A straightforward calculation reveals that $0$ is a removable singularity. Hence, via a residue calculation and integration by parts, we find that we can represent the tail of the series~\eqref{eqn:LKdV1.Sphi.Series} in the form \begin{subequations} \begin{multline} i\sum_{k=N}^\infty \left\{ \frac{1}{\lambda_k\Delta'(\lambda_k)} \left[ e^{i\lambda_kx}\left((Sf)(1)Y_1(\lambda_k)-(Sf)(0)Y_0(\lambda_k)\right) \right.\right. \\ + \alpha^2 e^{i\alpha\lambda_kx}\left((Sf)(1)Y_1(\alpha\lambda_k)-(Sf)(0)Y_0(\alpha\lambda_k)\right) \\ \hspace{18ex} \left. - \alpha e^{i\alpha^2\lambda_k(x-1)}\left((Sf)(1)Z_1(\alpha^2\lambda_k)-(Sf)(0)Z_0(\alpha^2\lambda_k)\right) \right] \\ + \frac{1}{\mu_k\Delta'(\mu_k)} \left[ e^{i\mu_kx}\left((Sf)(1)Y_1(\mu_k)-(Sf)(0)Y_0(\mu_k)\right) \right. \hspace{23ex} \\ - \alpha^2 e^{i\alpha\mu_k(x-1)}\left((Sf)(1)Z_1(\alpha\mu_k)-(Sf)(0)Z_0(\alpha\mu_k)\right) \\ \left.\left.
+ \alpha e^{i\alpha^2\mu_kx}\left((Sf)(1)Y_1(\alpha^2\mu_k)-(Sf)(0)Y_0(\alpha^2\mu_k)\right) \right] + O(k^{-2}) \right\}, \end{multline} where \begin{align} Y_1(\lambda) &= 3 + 2(\alpha^2-1)e^{i\alpha\lambda} + 2(\alpha-1)e^{i\alpha^2\lambda}, \\ Y_0(\lambda) &= e^{i\lambda}+e^{i\alpha\lambda}+e^{i\alpha^2\lambda}-4e^{-i\lambda}+2e^{-i\alpha\lambda}+2e^{-i\alpha^2\lambda}, \\ Z_1(\lambda) &= \alpha e^{i\alpha\lambda} + 2e^{-i\alpha\lambda} + \alpha^2 e^{i\alpha^2\lambda} + 2e^{-i\alpha^2\lambda}, \\ Z_0(\lambda) &= 6 + (\alpha^2-1)e^{-i\alpha\lambda} + (\alpha-1)e^{-i\alpha^2\lambda}. \end{align} \end{subequations} As $Y_j$, $Z_j\in O(\exp(\sqrt{3}\pi k))$, the Riemann-Lebesgue lemma guarantees conditional convergence for all $x\in(0,1)$. This completes the proof of Theorem~\ref{thm:Diag:LKdV:2}. \begin{rmk} \label{rmk:diag.type2only.LKdV} We observed above that $0$ is a removable singularity of $F_\lambda$ defined by~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1c} and~\eqref{eqn:introTrans.1.1d}. The same holds for $F_\lambda$ defined by~\eqref{eqn:introTrans.1.1a},~\eqref{eqn:introTrans.1.1e} and~\eqref{eqn:introTrans.1.1f}. Hence, for both problems~1 and~2, \BE \int_{\Gamma_0}e^{i\lambda x}F_\lambda(f)\d\lambda = 0 \EE and we could redefine the inverse transform~\eqref{eqn:introTrans.1.1b} as \begin{align} F(\lambda) &\mapsto f(x): & f_x(F) &= \left\{ \int_{\Gamma^+} + \int_{\Gamma^-} \right\} e^{i\lambda x} F(\lambda) \d\lambda, \qquad x\in[0,1]. \end{align} This permits spectral representations of both $S^{(\mathrm{I})}$ and $S^{(\mathrm{II})}$ via augmented eigenfunctions of type~II \emph{only}, that is spectral representations in the sense of Definition~\ref{defn:Spect.Rep.I.II} but with $E^{(\mathrm{I})}=\emptyset$.
\end{rmk} \subsection{General} \label{ssec:Spectral:General} We will show that the transform pair $(F_\lambda,f_x)$ defined by equations~\eqref{eqn:defn.forward.transform} represents a spectral decomposition into type~I and type~II augmented eigenfunctions. \begin{thm} \label{thm:Diag.2} Let $S$ be the spatial differential operator associated with a type~\textnormal{II} IBVP. Then the transform pair $(F_\lambda,f_x)$ provides a spectral representation of $S$ in the sense of Definition~\ref{defn:Spect.Rep.I.II}. \end{thm} The principal tools for constructing families of augmented eigenfunctions are Lemma~\ref{lem:GEInt1} and the following lemma: \begin{lem} \label{lem:GEInt2} Let $F^\pm_\lambda$ be the functionals defined in equations~\eqref{eqn:defn.Fpm.rho}. \begin{enumerate} \item[(i)]{Let $\gamma$ be any simple closed contour. Then $\{F^\pm_\lambda:\lambda\in\gamma\}$ are families of type~\textup{I} augmented eigenfunctions of $S$ up to integration along $\gamma$ with eigenvalues $\lambda^n$.} \item[(ii)]{Let $\gamma$ be any simple closed contour which neither passes through nor encloses $0$. Then $\{F^\pm_\lambda:\lambda\in\gamma\}$ are families of type~\textup{II} augmented eigenfunctions of $S$ up to integration along $\gamma$ with eigenvalues $\lambda^n$.} \item[(iii)]{Let $0\leq\theta<\theta'\leq\pi$ and define $\gamma^+$ to be the boundary of the open set \BE \{\lambda\in\C:|\lambda|>\epsilon, \, \theta<\arg\lambda<\theta'\}; \EE similarly, $\gamma^-$ is the boundary of the open set \BE \{\lambda\in\C:|\lambda|>\epsilon, \, -\theta'<\arg\lambda<-\theta\}. \EE Both $\gamma^+$ and $\gamma^-$ have positive orientation.
Then $\{F^\pm_\lambda:\lambda\in\gamma^\pm\}$ are families of type~\textup{II} augmented eigenfunctions of $S$ up to integration along $\gamma^\pm$ with eigenvalues $\lambda^n$.} \end{enumerate} \end{lem} \begin{proof} ~ \begin{enumerate} \item[(i) \& (ii)]{By Lemma~\ref{lem:GEInt1}, the remainder functionals are analytic in $\lambda$ within the region bounded by $\gamma$. Cauchy's theorem yields the result.} \item[(iii)]{The set enclosed by $\gamma^+$ is contained within the upper half-plane. By Lemma~\ref{lem:GEInt1}, \BE \int_{\gamma^+} e^{i\lambda x}\lambda^{-n}(F_\lambda^+(Sf) - \lambda^nF_\lambda^+(f))\d\lambda = \int_{\gamma^+} e^{i\lambda x}\lambda^{-n}P^+_f(\lambda)\d\lambda, \EE and the integrand is the product of $e^{i\lambda x}$ with a function analytic on the enclosed set and decaying as $\lambda\to\infty$. Hence, by Jordan's Lemma, the integral of the remainder functionals vanishes for all $x>0$. For $\gamma^-$, the proof is similar.} \end{enumerate} \end{proof} \begin{rmk} If we restrict to the case $0<\theta<\theta'<\pi$ then the functionals $F^\pm_\lambda$ form families of type~I augmented eigenfunctions up to integration along the resulting contours, but this is insufficient for our purposes. Indeed, an infinite component of $\Gamma_a$ lies on the real axis, but \BE \int_{-\infty}^\infty e^{i\lambda x}P^+_f(\lambda)\d\lambda \EE \emph{diverges} and can only be interpreted as a sum of delta functions and their derivatives. \end{rmk} Let $(S,a)$ be such that the associated initial-boundary value problem is well-posed. Then there exists a complete system of augmented eigenfunctions associated with $S$, some of which are type~\textup{I} whereas the rest are type~\textup{II}.
Indeed: \begin{prop} \label{prop:GEIntComplete3} The system \BE \mathcal{F}_0 = \{F^+_\lambda:\lambda\in\Gamma^+_0\}\cup\{F^-_\lambda:\lambda\in\Gamma^-_0\} \EE is a family of type~\textup{I} augmented eigenfunctions of $S$ up to integration over $\Gamma_0$, with eigenvalues $\lambda^n$. The system \BE \mathcal{F}_a = \{F^+_\lambda:\lambda\in\Gamma^+_a\}\cup\{F^-_\lambda:\lambda\in\Gamma^-_a\} \EE is a family of type~\textup{II} augmented eigenfunctions of $S$ up to integration over $\Gamma_a$, with eigenvalues $\lambda^n$. Furthermore, if an initial-boundary value problem associated with $S$ is well-posed, then $\mathcal{F}=\mathcal{F}_0\cup \mathcal{F}_a$ is a complete system. \end{prop} \begin{proof} Considering $f\in\Phi$ as the initial datum of the homogeneous initial-boundary value problem and applying Proposition~\ref{prop:Transform.Method:General:2}, we evaluate the solution of problem~\eqref{eqn:IBVP} at $t=0$, \BE f(x) = q(x,0) = \int_{\Gamma_0^+} e^{i\lambda x} F^+_\lambda(f) \d\lambda + \int_{\Gamma_0^-} e^{i\lambda x} F^-_\lambda(f) \d\lambda. \EE Thus, if $F^\pm_\lambda(f)=0$ for all $\lambda\in\Gamma_0$ then $f=0$. By Lemma~\ref{lem:GEInt2}~(i), $\mathcal{F}_0$ is a system of type~I augmented eigenfunctions up to integration along $\Gamma^+_0\cup\Gamma^-_0$. Applying Lemma~\ref{lem:GEInt1} to $\mathcal{F}_a$, we obtain \BE F^\pm_\lambda(Sf) = \lambda^n F^\pm_\lambda(f) + R^\pm_\lambda(f), \EE with \BE R^+_\lambda(f) = P^+_f(\lambda), \qquad R^-_\lambda(f) = P^-_f(\lambda)e^{-i\lambda}. \EE By Lemma~\ref{lem:GEInt2}~(ii), we can deform the contours $\Gamma^\pm_a$ onto the union of several contours of the form of the $\gamma^\pm$ appearing in Lemma~\ref{lem:GEInt2}~(iii). The latter result completes the proof. \end{proof} \begin{proof}[of Theorem~\ref{thm:Diag.2}] Proposition~\ref{prop:GEIntComplete3} establishes completeness of the augmented eigenfunctions and equations~\eqref{eqn:Spect.Rep.II}, under the assumption that the integrals converge. 
The integral along $\Gamma_0$ reduces to the series of residues \BE \int_{\Gamma_0} e^{i\lambda x} F^\pm_\lambda(Sf) \d\lambda = 2\pi i\sum_{\substack{\sigma\in\C:\\\Delta(\sigma)=0}}e^{i\sigma x}\res_{\lambda=\sigma}F^\pm_\lambda(Sf), \EE whose convergence is guaranteed by the well-posedness of the initial-boundary value problem~\cite{Smi2013a}. Indeed, a necessary condition for well-posedness is the convergence of this series for $Sf\in\Phi$. But then the definition of $F^\pm_\lambda$ implies \BES \res_{\lambda=\sigma}F^\pm_\lambda(f) = O(|\sigma|^{-j-1}), \mbox{ where } j=\max\{k:\hsforall f\in\Phi,\; f^{(k)}(0)=f^{(k)}(1)=0\}, \EES so $\res_{\lambda=\sigma} F_\lambda(Sf) = O(|\sigma|^{-1})$ and the Riemann-Lebesgue lemma gives convergence. This verifies statement~\eqref{eqn:Spect.Rep.defnI.II.conv1}. Proposition~\ref{prop:Transforms.valid:General:2} ensures convergence of the right hand side of equation~\eqref{eqn:Spect.Rep.II.2}. Hence statement~\eqref{eqn:Spect.Rep.defnI.II.conv2} holds. \end{proof} \begin{rmk} Suppose $S$ is a type~I operator. By the definition of a type~I operator (more precisely, by the properties of an associated type~I IBVP, see~\cite{Smi2013a}), $F^\pm_\lambda(\phi)=O(\lambda^{-1})$ as $\lambda\to\infty$ within the sectors interior to $\Gamma^\pm_a$. Hence, by Jordan's Lemma, \BE \int_{\Gamma^+_a}e^{i\lambda x} F^+_\lambda(\phi)\d\lambda + \int_{\Gamma^-_a}e^{i\lambda x} F^-_\lambda(\phi)\d\lambda = 0. \EE Thus it is possible to define an alternative inverse transform \begin{align} \label{eqn:defn.inverse.transform.typeI} F(\lambda) &\mapsto f(x): & f^\Sigma_x(F) &= \int_{\Gamma_0} e^{i\lambda x} F(\lambda)\d\lambda, \end{align} equivalent to $f_x$.
The new transform pair $(F_\lambda,f^\Sigma_x)$ defined by equations~\eqref{eqn:defn.forward.transform} and~\eqref{eqn:defn.inverse.transform.typeI} may be used to solve an IBVP associated with $S$; hence \BE \mathcal{F}_0 = \{F^+_\lambda:\lambda\in\Gamma^+_0\}\cup\{F^-_\lambda:\lambda\in\Gamma^-_0\} \EE is a complete system of functionals on $\Phi$. Moreover, $\mathcal{F}_0$ is a family of type~I augmented eigenfunctions \emph{only}. Hence, $\mathcal{F}_0$ provides a spectral representation of $S$ in the sense of Definition~\ref{defn:Spect.Rep.I}. Via a residue calculation at each zero of $\Delta$, one obtains a classical spectral representation of $S$ as a series of (generalised) eigenfunctions. We emphasise that this spectral representation without type~II augmented eigenfunctions is only possible for a type~I operator. \end{rmk} \begin{rmk} By definition, the point $3\epsilon/2$ is always exterior to the set enclosed by $\Gamma$. Therefore introducing a pole at $3\epsilon/2$ does not affect the convergence of the contour integral along $\Gamma$. This means that the system $\mathcal{F}'=\{(\lambda-3\epsilon/2)^{-n}F_\lambda:\lambda\in\Gamma\}$ is a family of type~\textup{I} augmented eigenfunctions, thus no type~\textup{II} augmented eigenfunctions are required; equation~\eqref{eqn:Spect.Rep.I} holds for $\mathcal{F}'$ and the integrals converge. However, we cannot show that $\mathcal{F}'$ is complete, so we do not have a spectral representation of $S$ through the system $\mathcal{F}'$. \end{rmk} \begin{rmk} \label{rmk:Splitting.of.rep} There may be infinitely many circular components of $\Gamma_a$, each corresponding to a zero of $\Delta$ which lies in the interior of a sector enclosed by the main component of $\Gamma_a$.
It is clear that in equations~\eqref{eqn:Transforms.valid:General:prop2} and~\eqref{eqn:Transform.Method:General:prop2:1}, representing the validity of the transform pair and the solution of the initial-boundary value problem, the contributions of the integrals around these circular contours are cancelled by the contributions of the integrals around certain components of $\Gamma_0$, as shown in Figure~\ref{fig:general-contdef}. Hence, we could redefine the contours $\Gamma_a$ and $\Gamma_0$ to exclude these circular components without affecting the validity of Propositions~\ref{prop:Transforms.valid:General:2} and~\ref{prop:Transform.Method:General:2}. Our choice of $\Gamma_a$ is intended to reinforce the notion that $S$ is split into two parts by the augmented eigenfunctions. In $\Gamma_0$, we have chosen a contour which encloses each zero of the characteristic determinant individually, since each of these zeros is a classical eigenvalue, so $\mathcal{F}_0$ corresponds to the set of all generalised eigenfunctions. Hence $\mathcal{F}_a$ corresponds only to the additional spectral objects necessary to form a complete system. \end{rmk} \begin{rmk} \label{rmk:Gamma.a.ef.at.inf} As $\Gamma_a$ encloses no zeros of $\Delta$, we could choose an $R>0$ and redefine $\Gamma^\pm_{a\Mspacer R}$ as the boundary of \BE \left\{\lambda\in\C^\pm:|\lambda|>R,\;\Re(a\lambda^n)>0\right\} \setminus \bigcup_{\substack{\sigma\in\C:\\\Delta(\sigma)=0}}D(\sigma,2\epsilon), \EE deforming $\Gamma_a$ over a finite region. By considering the limit $R\to\infty$, we claim that $\mathcal{F}_a$ can be seen to represent \emph{spectral objects with eigenvalue at infinity}. \end{rmk} \begin{rmk} \label{rmk:diag.type2only.General} By Lemma~\ref{lem:GEInt2}(ii), for all $\sigma\neq0$ such that $\Delta(\sigma)=0$, it holds that $\{F^\pm_\lambda:\lambda\in C(\sigma,\epsilon)\}$ are families of type~II augmented eigenfunctions.
Hence, the only component of $\Gamma_0$ that may not be a family of type~II augmented eigenfunctions is $C(0,\epsilon)$. If \begin{subequations} \begin{align} \gamma_a^+ &= \Gamma_a^+ \cup \displaystyle\bigcup_{\substack{\sigma\in\overline{\C^+}:\\\sigma\neq0,\\\Delta(\sigma)=0}}C(\sigma,\epsilon), \\ \gamma_a^- &= \Gamma_a^- \cup \displaystyle\bigcup_{\substack{\sigma\in\C^-:\\\Delta(\sigma)=0}}C(\sigma,\epsilon), \\ \gamma_0 &= C(0,\epsilon), \end{align} \end{subequations} then \BE \mathcal{F}'_a = \{F^+_\lambda:\lambda\in\gamma_a^+\}\cup\{F^-_\lambda:\lambda\in\gamma_a^-\} \EE is a family of type~II augmented eigenfunctions and \BE \mathcal{F}'_0 = \{F^+_\lambda:\lambda\in\gamma_0\} \EE is a family of type~I augmented eigenfunctions of $S$. For $S$ type~I or type~II, $\mathcal{F}'_a\cup\mathcal{F}'_0$ provides a spectral representation of $S$ in the sense of Definition~\ref{defn:Spect.Rep.I.II}, with minimal type~I augmented eigenfunctions. (Note that it is possible to cancel certain circular components of $\gamma_a^\pm$.) Assume that $0$ is a removable singularity of $F^+_\lambda$. Then $\mathcal{F}'_a$ provides a spectral representation of $S$ in the sense of Definition~\ref{defn:Spect.Rep.I.II} with $E^{(\mathrm{I})}=\emptyset$. We have already identified the operators $S^{(\mathrm{I})}$ and $S^{(\mathrm{II})}$ for which this representation is possible (see Remark~\ref{rmk:diag.type2only.LKdV}). \end{rmk} \begin{rmk} \label{rmk:Ill-posed} The validity of Lemmata~\ref{lem:GEInt1} and~\ref{lem:GEInt2} does not depend upon the class to which $S$ belongs. Hence, even if all IBVPs associated with $S$ are ill-posed, it is still possible to construct families of augmented eigenfunctions of $S$. However, without the well-posedness of an associated initial-boundary value problem, an alternative method is required in order to analyse the completeness of these families.
Without completeness results, it is impossible to discuss the diagonalisation by augmented eigenfunctions. \end{rmk} \section{Conclusion} In the classical separation of variables, one makes a particular assumption on the form of the solution. For evolution PDEs in one dimension, this is usually expressed as \begin{quote} ``Assume the solution takes the form $q(x,t)=\tau(t)\xi(x)$ for all $(x,t)\in[0,1]\times[0,T]$ for some $\xi\in C^\infty[0,1]$ and $\tau\in C^\infty[0,T]$.'' \end{quote} However, when applying the boundary conditions, one superimposes infinitely many such solutions. So it would be more accurate to use the assumption \begin{quote} ``Assume the solution takes the form $q(x,t)=\sum_{m\in\N}\tau_m(t)\xi_m(x)$ for some sequences of functions $\xi_m\in C^\infty[0,1]$ which are eigenfunctions of the spatial differential operator, and $\tau_m\in C^\infty[0,T]$; assume that the series converges uniformly for $(x,t)\in[0,1]\times[0,T]$.'' \end{quote} For this `separation of variables' scheme to yield a result, we require completeness of the eigenfunctions $(\xi_m)_{m\in\N}$ in the space of admissible initial data. The concept of generalised eigenfunctions, as presented by Gel'fand and coauthors~\cite{GS1967a,GV1964a}, allows one to weaken the above assumption in two ways: first, it allows the index set to be uncountable, hence the series is replaced by an integral. Second, certain additional spectral functions, which are not genuine eigenfunctions, are admitted to be part of the series. An integral expansion in generalised eigenfunctions is insufficient to describe the solutions of IBVPs obtained via the unified transform method for type~II problems. In order to describe these IBVPs, we have introduced type~II augmented eigenfunctions.
Using these new eigenfunctions, the assumption is weakened further: \begin{quote} ``Assume the solution takes the form $q(x,t)=\int_{m\in\Gamma}\tau_m(t)\xi_m(x)\d m$ for some functions $\xi_m\in C^\infty[0,1]$, which are type~I and~II augmented eigenfunctions of the spatial differential operator, and $\tau_m\in C^\infty[0,T]$; assume that the integral converges uniformly for $(x,t)\in[0,1]\times[0,T]$.'' \end{quote} It appears that it is \emph{not} possible to weaken the above assumption any further. Indeed, it has been established in~\cite{FP2001a} that the unified method provides the solution of \emph{all} well-posed problems. The main contribution of this paper is to replace the above assumption with the following theorem: \begin{quote} ``Suppose $q(x,t)$ is the $C^\infty$ solution of a well-posed two-point linear constant-coefficient initial-boundary value problem. Then $q(x,t)=\int_{m\in\Gamma}\tau_m(t)\xi_m(x)\d m$, where $\xi_m\in C^\infty[0,1]$ are type~I and~II augmented eigenfunctions of the spatial differential operator and $\tau_m\in C^\infty[0,T]$ are some coefficient functions. The integral converges uniformly for $(x,t)\in[0,1]\times[0,T]$.'' \end{quote} In summary, both type~I and type~II IBVPs admit integral representations like~\eqref{eqn:introIBVP.solution.2}, which give rise to transform pairs associated with a combination of type~I and type~II augmented eigenfunctions. For type~I IBVPs, it is possible (by appropriate contour deformations) to obtain alternative integral representations like~\eqref{eqn:introIBVP.solution.1}, which give rise to transform pairs associated with only type~I augmented eigenfunctions. Furthermore, in this case, a residue calculation yields a classical series representation, which can be associated with Gel'fand's generalised eigenfunctions. 
\bigskip The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7-REGPOT-2009-1) under grant agreement n$^\circ$ 245749. \bibliographystyle{amsplain} {\footnotesize
\section{Introduction} Negative magneto-conductivity/resistivity is a spectacular and thought-provoking theoretical property of microwave photoexcited, far-from-equilibrium, two-dimensional electronic systems. This property has been utilized to understand the experimental observation of the microwave-radiation-induced zero-resistance states in the GaAs/AlGaAs system.\cite{grid1, grid2} Yet, the negative conductivity/resistivity state remains an enigmatic and open topic for investigation, although, over the past decade, photo-excited transport has been the subject of a broad and intense experimental\cite{ grid1, grid2, grid3, grid4, grid201, grid202, grid5, grid6, grid7, grid8, grid9, grid11, grid209, grid203, grid12, grid13, grid15, grid16, grid17, grid19, grid20, grid16b, grid204, grid21z, grid21b, grid104, grid21, grid21y, grid22, grid60, grid61, grid22z, grid22a, grid22c, grid108, grid109} and theoretical\cite{grid23, grid24, grid25, grid101, grid27, grid28, grid29, grid30, grid111, grid31, grid32, grid33, grid34, grid35, grid37, grid206, grid39, grid40, grid42, grid43, grid44, grid45, grid46, grid47, grid49, grid112, grid62, grid63, grid107, grid103} study in the 2D electron system (2DES). In experiment, the microwave-induced zero-resistance states arise from "1/4-cycle-shifted" microwave radiation-induced magnetoresistance oscillations in the high mobility GaAs/AlGaAs system,\cite{grid1, grid4, grid22a} as these oscillations become larger in amplitude with the reduction of the temperature, $T$, at a fixed microwave intensity. At sufficiently low $T$ under optimal microwave intensity, the amplitude of the microwave-induced magnetoresistance oscillations becomes large enough that the deepest oscillatory minima approach zero-resistance.
Further reduction in $T$ then leads to the saturation of the resistance at zero, yielding the zero-resistance states that empirically look similar to the zero-resistance states observed under quantized Hall effect conditions.\cite{grid1, grid2, grid5} Similar to the situation in the quantized Hall effect, these radiation-induced zero-resistance states exhibit activated transport.\cite{grid1, grid2, grid5, grid8} A difference with respect to the quantized Hall situation, however, is that the Hall resistance, $R_{xy}$, does not exhibit plateaus or quantization in this instance where the zero-resistance state is obtained by photo-excitation.\cite{grid1, grid2, grid5} Some theories have utilized a two-step approach to explain the microwave-radiation-induced zero-resistance states. In the first step, theory identifies a mechanism that helps to realize oscillations in the diagonal magneto-photo-conductivity/resistivity, and provides for the possibility that the minima of the oscillatory diagonal conductivity/resistivity can even take on negative values.\cite{grid23, grid25, grid27, grid111, grid46, grid33} The next step in the two-step approach invokes the theory of Andreev et al.,\cite{grid24} who suggest that the zero-current state at negative resistivity (and conductivity) is unstable, and that this favors the appearance of current domains with a non-vanishing current density,\cite{grid24, grid43} followed by the experimentally observed zero-resistance states. There exist alternate approaches which directly realize zero-resistance states without a detour through negative conductivity/resistivity states. Such theories include the radiation-driven electron-orbit model,\cite{grid34} the radiation-induced-contact-carrier-accumulation/depletion model,\cite{grid112} and the synchronization model.\cite{grid103} Thus far, however, experiment has been unable to clarify the underlying mechanism(s), so far as the zero-resistance states are concerned.
The negative magneto-conductivity/resistivity state suggested theoretically in this problem\cite{grid23, grid25, grid27, grid111, grid46, grid33} has been a puzzle for experiment since it had not been encountered before in magneto-transport. Naively, one believes that negative magneto-resistivity/conductivity should lead to observable negative magneto-resistance/conductance, based on expectations for the zero-magnetic-field situation. At the same time, one feels that the existence of the magnetic field is an important additional feature, and this raises several questions: Could the existence of the magnetic field be sufficiently significant to overcome nominal expectations, based on the zero-magnetic-field analogy, for an instability in a negative magneto-conductivity/resistivity state? If an instability does occur for the negative magneto-conductivity/resistivity state, what is the reason for the instability? Could negative conductivity/resistivity lead to observable negative conductance/resistance at least in some short time-scale transient situation where current domains have not yet formed? Indeed, one might ask: what are the magneto-transport characteristics of a bare negative conductivity/resistivity state? Remarkably, it turns out that an answer has not yet been formulated for this last question. To address this last question, we examine here the transport characteristics of the photo-excited 2DES at negative diagonal conductivity/resistivity through a numerical solution of the associated boundary value problem. The results suggest, rather surprisingly, that negative conductivity/resistivity in the 2DES under photo-excitation should generally yield a positive diagonal resistance, i.e., $R_{xx} > 0$, except at singular points where $R_{xx}=0$ when the diagonal conductivity $\sigma_{xx}=0$. The simulations also identify an associated, unexpected sign reversal in the Hall voltage under these conditions. 
These features suggest that nominal expectations, based on the zero-magnetic-field analogy, for a negative conductivity/resistivity state in a non-zero magnetic field, need not necessarily follow, and that experimental observations of zero-resistance and a linear Hall effect in the photo-excited GaAs/AlGaAs system could be signatures of vanishing conductivity/resistivity. \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.5 in \epsfbox {Hallsummary_fig1eps.eps} \end{center} \caption{(Color online) (a) The dark- and photo-excited-diagonal- ($R_{xx}$) and the photo-excited-Hall- resistance ($R_{xy}$) are exhibited vs. the magnetic field $B$ for a GaAs/AlGaAs heterostructure at $T=0.5$K. $R_{xx}$ exhibits a non-vanishing resistance with Shubnikov-de Haas oscillations in the dark (blue trace) at $|B| \ge 0.1$Tesla. Under photo-excitation at $f=50$GHz (red traces), $R_{xx}$ exhibits large magnetoresistance oscillations with vanishing resistance in the vicinity of $\pm (4/5) B_{f}$, where $B_{f} = 2 \pi f m^{*}/e$. Note the absence of a coincidental plateau in $R_{xy}$. (b) Theory predicts negative diagonal resistivity, i.e., $\rho_{xx} < 0$, under intense photoexcitation at the oscillatory minima, observable here in the vicinity of $B \approx 0.19$ Tesla and $B \approx 0.105$ Tesla. (c) Theory asserts that negative resistivity states are unstable to current domain formation and zero-resistance. Consequently, the $B$-span of negative resistivity in panel (b) corresponds to the domain of zero-resistance states (ZRS), per theory. \label{fig: epsart}} \end{figure} \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=3.25 in \epsfbox {Hallsummary_fig2eps.eps} \end{center} \caption{ (Color online) This dual purpose figure illustrates an idealized measurement configuration and the simulation mesh. 
A Hall bar (blue outline) is connected via its current contacts (thick black rectangles at the ends) to a constant current source, which may be modelled as a battery with a resistor in series. For convenience, the negative pole of the battery has been grounded to set the potential of this terminal to zero. A pair of "voltmeters" are used to measure the diagonal $(V_{xx})$ and Hall $(V_{xy})$ voltages. For the numerical simulations reported in this work, the Hall bar is represented by a mesh of points $(i,j)$, where the potential is evaluated by a relaxation method. Here, $0 \le i \le 100$ and $0 \le j \le 20$. The long (short) axis of the Hall bar corresponds to the x (y)-direction. \label{afig: epsart}} \end{figure} \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=1.2 in \epsfbox {Hallsummary_fig3eps.eps} \end{center} \caption{ (Color online) This figure summarizes the potential profiles within a Hall bar device that is 100 units long and 20 units wide at three values of the Hall angle, $\theta_{H}$, where $\theta_{H} = \tan^{-1}(\sigma_{xy}/\sigma_{xx})$. (a) This panel shows the potential profile at $\theta_{H} = 0^{0}$, which corresponds to the $B=0$ situation. Current contacts are indicated by black rectangles at the left- and right- ends, midway between the top and the bottom edges. The left end of the Hall bar is at $V = 1$ and the right end is at $V=0$. Potential (V) is indicated in normalized arbitrary units. Panel (b) shows that the potential decreases linearly along the indicated yellow line at $y=10$ from the left end to the right end of the device. Panel (c) shows the absence of a potential difference between the top- and bottom- edges along the indicated yellow line at $x = 50$. That is, there is no Hall effect at $B=0$. Panel (d) shows the potential profile at $\theta_{H} = 60^{0}$, which corresponds to $\sigma_{xx} = 0.577 \sigma_{xy}$. Note that the equipotential contours develop a tilt with respect to the same in panel (a).
Panel (e) shows the potential drop from the left to the right edge along the line at $y=10$. Panel (f) shows a decrease in the potential from the bottom to the top edge. This potential difference is the Hall voltage at $\theta_{H} = 60^{0}$. Panel (g) shows the potential profile at $\theta_{H} = 88.5^{0}$, which corresponds to $\sigma_{xx} = 0.026 \sigma_{xy}$. Note that in the interior of the device, the equipotential contours are nearly parallel to the long axis of the Hall bar, in sharp contrast to (a). Panel (h) shows the potential variation from the left to the right end of the device along the line at $y=10$. The reduced potential variation here between the $V_{xx}$ voltage probes (red and black triangles) is indicative of a reduced diagonal resistance. Panel (i) shows a large variation in the potential along the line at $x=50$ between the bottom and top edges. \label{afig2: epsart}} \end{figure} \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2 in \epsfbox {Hallsummary_fig4eps.eps} \end{center} \caption{(Color online) This figure compares the potential profile within the Hall bar device for positive ($\sigma_{xx}= +0.026 \sigma_{xy}$) and negative ($\sigma_{xx}= -0.026 \sigma_{xy}$) conductivity. Panel (a) shows the potential profile at $\theta_{H} = 88.5^{0}$, which corresponds to $\sigma_{xx}= +0.026 \sigma_{xy}$. Note that in the interior of the device, the equipotential contours are nearly parallel to the long axis of the Hall bar. Panel (b) shows the potential variation from the left to the right end of the device along the line at $y=10$. The small potential drop here between the $V_{xx}$ voltage probes (red and black triangles) is indicative of a reduced diagonal resistance in this low $\sigma_{xx}$ condition. Panel (c) suggests the development of a large Hall voltage between the bottom and top edges. Here the voltage decreases towards the top edge.
Panel (d) shows the potential profile at $\theta_{H} = 91.5^{0}$, which corresponds to $\sigma_{xx}= -0.026 \sigma_{xy}$. The key feature here is the reflection of the potential profile with respect to panel (a) about the line at $y=10$ when the $\sigma_{xx}$ shifts from a positive ($\sigma_{xx}= +0.026 \sigma_{xy}$) to a negative ($\sigma_{xx}= -0.026 \sigma_{xy}$) value. Panel (e) shows that in the negative $\sigma_{xx}$ condition, the potential still decreases from left to right, implying a positive diagonal voltage $V_{xx}$ and diagonal resistance $R_{xx}$. Panel (f) shows that for $\sigma_{xx}= -0.026 \sigma_{xy}$, the potential \textit{increases} from the bottom edge to the top edge, unlike in panel (c). Thus, the Hall voltage undergoes sign reversal in going from the $\sigma_{xx}= +0.026 \sigma_{xy}$ situation to the $\sigma_{xx}= -0.026 \sigma_{xy}$ condition; compare panels (c) and (f).\label{fig3:epsart}} \end{figure} \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.25 in \epsfbox {Hallsummary_fig5eps.eps} \end{center} \caption{ (Color online) This figure illustrates expectations, based on the results illustrated in Figs. 3 and 4, for the behavior of the diagonal ($R_{xx}$) and Hall ($R_{xy}$) resistances in a 2D system driven periodically to negative conductivity/resistivity by photo-excitation. (a) The diagonal resistance $R_{xx}$ exhibits microwave-induced magnetoresistance oscillations that grow in amplitude with increasing $B$. In the regime of negative conductivity at the oscillatory minima, the $R_{xx}$ exhibits positive values. (b) Over the same span of $B$, the Hall resistance $R_{xy}$ shows sign reversal. \label{fig:epsart}} \end{figure} \section{Results} \subsection{Experiment} Figure 1(a) exhibits measurements of $R_{xx}$ and $R_{xy}$ over the magnetic field span $-0.15 \le B \le 0.15$ Tesla at $T=0.5$K.
The blue curve, which exhibits Shubnikov-de Haas oscillations at $|B| \ge 0.1$ Tesla, represents the $R_{xx}$ in the absence of photo-excitation (w/o radiation). Microwave photo-excitation of this GaAs/AlGaAs specimen at $50$GHz, see red traces in Fig. 1, produces radiation-induced magnetoresistance oscillations in $R_{xx}$, and these oscillations grow in amplitude with increasing $|B|$. At the deepest minimum, near $|B| = (4/5)B_{f}$, where $B_{f} = 2 \pi f m^{*}/e$,\cite{grid1} the $R_{xx}$ saturates at zero-resistance. Note also the close approach to zero-resistance near $|B| = (4/9) B_{f}$. Although $R_{xx}$ exhibits zero-resistance, the Hall resistance $R_{xy}$ exhibits a linear variation over the $B$-span of the zero-resistance states, see Fig. 1(a).\cite{grid1,grid2} \subsection{Negative magneto-resistivity and zero-resistance} Both the displacement theory for the radiation-induced magnetoresistivity oscillations,\cite{grid23} and the inelastic model,\cite{grid33} suggest that the magnetoresistivity can take on negative values over the $B$-spans where experiment indicates zero-resistance states. For illustrative purposes, such theoretical expectations for negative resistivity are sketched in Fig. 1(b), which presents the simulated $\rho_{xx}$ at $f=100$ GHz. This curve was obtained on the basis of extrapolating, without placing a lower bound, the results of fits,\cite{grid202} which have suggested that the radiation-induced oscillatory magnetoresistivity, $\rho_{xx}^{osc}$, where $\rho_{xx}^{osc}= R_{xx}^{osc}(W/L)$, with $W/L$ the device width-to-length ratio, follows $\rho_{xx}^{osc} = A'' \exp(-\lambda/B) \sin(2 \pi F/B - \pi)$. Here, $F = 2 \pi f m^{*}/e$, with $f=100$GHz, the microwave frequency, $m^{*} = 0.065m_{e}$, the effective mass, $e$, the electron charge, and $\rho_{xx} = \rho_{xx}^{dark} + \rho_{xx}^{osc}$, with $\rho_{xx}^{dark}$ the dark resistivity which reflects typical material characteristics for the high mobility GaAs/AlGaAs 2DES.
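For concreteness, the characteristic field $B_{f}$ and the oscillatory fit form used to construct Fig. 1(b) can be evaluated numerically. The sketch below is an illustration only: the amplitude and the damping constant are placeholder values, not the fitted parameters of the cited analysis.

```python
import numpy as np

M_E = 9.109e-31    # electron mass (kg)
E_CH = 1.602e-19   # elementary charge (C)

def b_f(f, m_star=0.065 * M_E):
    """Characteristic field B_f = 2*pi*f*m*/e for microwave frequency f."""
    return 2.0 * np.pi * f * m_star / E_CH

def rho_osc(B, f, A=1.0, lam=0.05):
    """Oscillatory photo-resistivity A*exp(-lam/B)*sin(2*pi*F/B - pi),
    with F = B_f; the amplitude A and damping lam are illustrative only."""
    return A * np.exp(-lam / B) * np.sin(2.0 * np.pi * b_f(f) / B - np.pi)

Bf = b_f(100e9)              # roughly 0.23 T at f = 100 GHz
B = np.linspace(0.05, 0.3, 4000)
rho = rho_osc(B, 100e9)
B_min = B[np.argmin(rho)]    # deepest minimum, close to (4/5) B_f
```

With these assumed parameters the two deepest minima fall near $(4/5)B_{f} \approx 0.186$ Tesla and $(4/9)B_{f} \approx 0.103$ Tesla, the spans over which Fig. 1(b) dips into negative resistivity.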
This figure shows that the deepest $\rho_{xx}$ minima at $B \approx 0.19$ Tesla and $B \approx 0.105$ Tesla exhibit negative resistivity, similar to theoretical predictions.\cite{grid23, grid25, grid27, grid111, grid46, grid33} Andreev et al.\cite{grid24} have reasoned that the only time-independent state of a system with negative resistivity/conductivity is characterized by a current which almost everywhere has a magnitude $j_{0}$ fixed by the condition that nonlinear dissipative resistivity equals zero. This prediction implies that the $\rho_{xx}$ curve of Fig. 1(b) is transformed into the magnetoresistance, $R_{xx}$, trace shown in Fig. 1(c), where the striking feature is the zero-resistance over the $B$-domains that exhibited negative resistivity in Fig. 1(b). The curve of Fig. 1(c) follows from Fig. 1(b) upon multiplying the ordinate by the $L/W$ ratio, i.e., $R_{xx} = \rho_{xx} (L/W)$, and placing a lower bound of zero on the resulting $R_{xx}$. \subsection{Device configuration} As mentioned, a question of interest is: what are the transport characteristics of a bare negative magneto-conductivity/resistivity state? To address this issue, we reexamine the experimental measurement configuration in Fig. 2. Transport measurements are often carried out in the Hall bar geometry which includes finite-size current contacts at the ends of the device. Here, a constant current is injected via the ends of the device, and "voltmeters" measure the diagonal ($V_{xx}$) and Hall ($V_{xy}$) voltages between probe points as a function of a transverse magnetic field, as indicated in Fig. 2. Operationally, the resistances relate to the measured voltages by $R_{xx} = V_{xx}/I$ and $R_{xy}=V_{xy}/I$. \subsection{Simulations} Hall effect devices can be numerically simulated on a grid/mesh,\cite{grid99,grid110,grid102} see Fig. 2, by solving the boundary value problem corresponding to enforcing the local requirement $\nabla \cdot \overrightarrow{j} = 0$, where $\overrightarrow{j}$ is the 2D current density with components $j_{x}$ and $j_{y}$, $\overrightarrow{j} = \rttensor{\sigma} \overrightarrow{E}$, and $\rttensor{\sigma}$ is the conductivity tensor.\cite{grid99,grid110} Enforcing $\nabla \cdot \overrightarrow{j} = 0$ within the homogeneous device is equivalent to solving the Laplace equation $\nabla^{2} V = 0$, which may be carried out in finite-difference form using a relaxation method, subject to the boundary conditions that current injected via current contacts is confined to flow within the conductor. That is, current perpendicular to edges must vanish everywhere along the boundary except at the current contacts. We have carried out simulations using a $101 \times 21$ point grid with current contacts at the ends that were 6 points wide. For the sake of simplicity, the negative current contact is set to ground potential, i.e., $V=0$, while the positive current contact is set to $V=1$. In the actual Hall bar device used in experiment, the potential at the positive current contact will vary with the magnetic field, but one can always normalize this value to 1 to compare with these simulations. Figure 3 summarizes the potential profile within the Hall device at three values of the Hall angle, $\theta_{H} = \tan^{-1} (\sigma_{xy}/\sigma_{xx})$. Fig. 3(a) shows a color plot of the potential profile with equipotential contours within the device at $\theta_{H} = 0^{0}$, which corresponds to the $B=0$ situation. This panel, in conjunction with Fig. 3(b), shows that the potential drops uniformly within the device from the left- to the right- ends of the Hall bar. Fig. 3(c) shows the absence of a potential difference between the top- and bottom- edges along the indicated yellow line at $x = 50$. This feature indicates that there is no Hall effect in this device at $B=0$, as expected.
Figure 3(d) shows the potential profile at $\theta_{H} = 60^{0}$, which corresponds to the situation where $\sigma_{xx} = 0.577 \sigma_{xy}$. Note that, here, the equipotential contours develop a tilt with respect to the same in Fig. 3(a). Fig. 3(e) shows a mostly uniform potential drop from the left to the right edge along the line at $y=10$, while Fig. 3(f) shows a decrease in the potential from the bottom to the top edge. This potential difference represents the Hall voltage under these conditions. Figure 3(g) shows the potential profile at $\theta_{H} = 88.5^{0}$, which corresponds to the situation where $\sigma_{xx} = 0.026 \sigma_{xy}$. Note that in the interior of the device, the equipotential contours are nearly parallel to the long axis of the Hall bar, in sharp contrast to Fig. 3(a). Fig. 3(h) shows the potential variation from the left to the right end of the device. The reduced change in potential between the $V_{xx}$ voltage probes (red and black inverted triangles), in comparison to Fig. 3(b) and Fig. 3(e), is indicative of a reduced diagonal voltage and resistance. Fig. 3(i) shows a large potential difference between the bottom and top edges, indicative of a large Hall voltage. The results presented in Fig. 3 display the normal expected behavior for a 2D Hall effect device with increasing Hall angle. Such simulations can also be utilized to examine the influence of microwave excitation since microwaves modify the diagonal conductivity, $\sigma_{xx}$, or resistivity, $\rho_{xx}$,\cite{grid23, grid33} and this sets $\theta_{H}$ via $\theta_{H} = \tan^{-1} (\sigma_{xy}/\sigma_{xx})$. In the next figure, we examine the results of such simulations when the diagonal conductivity, $\sigma_{xx}$, reverses sign and takes on negative values, as per theory, under microwave excitation.
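The relaxation computation used to generate such potential profiles can be sketched in a minimal form. The sketch below is a simplified stand-in for the actual simulation: it assumes full-width Dirichlet current contacts (rather than the 6-point contacts described above) and implements the insulating-edge condition $j_{y}=0$, which for a conductivity tensor with $\tan\theta_{H}=\sigma_{xy}/\sigma_{xx}$ reads $\partial V/\partial y = \tan(\theta_{H})\,\partial V/\partial x$, using centered differences; this naive scheme is stable only for moderate Hall angles.

```python
import numpy as np

def solve_hall_bar(theta_deg, nx=101, ny=21, n_iter=20000):
    """Jacobi relaxation of nabla^2 V = 0 on a Hall-bar mesh; the solution
    depends on the single parameter tan(theta_H) = sigma_xy / sigma_xx."""
    t = np.tan(np.radians(theta_deg))
    V = np.tile(np.linspace(1.0, 0.0, nx), (ny, 1))  # linear initial guess
    for _ in range(n_iter):
        # interior update of the Laplace equation
        V[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1]
                                + V[1:-1, 2:] + V[1:-1, :-2])
        # insulating edges: j_y = 0  =>  dV/dy = tan(theta_H) dV/dx
        V[0, 1:-1] = V[1, 1:-1] - 0.5 * t * (V[0, 2:] - V[0, :-2])
        V[-1, 1:-1] = V[-2, 1:-1] + 0.5 * t * (V[-1, 2:] - V[-1, :-2])
        # full-width current contacts held at fixed potential
        V[:, 0], V[:, -1] = 1.0, 0.0
    return V

# sigma_xx -> -sigma_xx flips the sign of tan(theta_H); modest angles
# (+30 vs -30 degrees) are used here to keep the naive scheme stable
Vp = solve_hall_bar(+30.0)
Vm = solve_hall_bar(-30.0)
hall_p = Vp[0, 50] - Vp[-1, 50]   # Hall voltage for tan(theta_H) > 0
hall_m = Vm[0, 50] - Vm[-1, 50]   # reverses sign for tan(theta_H) < 0
```

In both runs the potential still drops from the source end to the drain end, so the diagonal voltage $V_{xx}$ remains positive, while the Hall voltage reverses sign with $\tan\theta_{H}$.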
Thus, Figure 4 compares the potential profile within the Hall bar device for positive ($\sigma_{xx}= +0.026 \sigma_{xy}$) and negative ($\sigma_{xx}= -0.026 \sigma_{xy}$) values of the conductivity. Fig. 4(a) shows the potential profile at $\sigma_{xx}= +0.026 \sigma_{xy}$. This figure is identical to the figure exhibited in Fig. 3(g). The essential features are that the equipotential contours are nearly parallel to the long axis of the Hall bar, see Fig. 4(b), signifying a reduced diagonal resistance. Concurrently, Fig. 4(c) suggests the development of a large Hall voltage between the bottom and top edges. Here the Hall voltage decreases from the bottom- to the top- edge. Fig. 4(d) shows the potential profile at $\sigma_{xx}= -0.026 \sigma_{xy}$, i.e., the negative conductivity case. The important feature here is the reflection of the potential profile with respect to Fig. 4(a) about the line at $y=10$ when the $\sigma_{xx}$ shifts from a positive ($\sigma_{xx}= +0.026 \sigma_{xy}$) to a negative ($\sigma_{xx}= -0.026 \sigma_{xy}$) value. Fig. 4(e) shows, remarkably, that in the negative $\sigma_{xx}$ condition, the potential still decreases from left to right, implying $V_{xx}>0$ and $R_{xx}>0$ even in this $\sigma_{xx} \le 0$ condition. Fig. 4(f) shows that for $\sigma_{xx}= -0.026 \sigma_{xy}$, the potential \textit{increases} from the bottom edge to the top edge, in sharp contrast to Fig. 4(c). Thus, these simulations show clearly that the Hall voltage undergoes sign reversal when $\sigma_{xx} \le 0$, although the diagonal voltage (and resistance) exhibits positive values.
\section{Discussion} Existing theory indicates that photo-excitation of the high mobility 2D electron system can drive the $\rho_{xx}$ and $\sigma_{xx}$ to negative values at the minima of the radiation-induced oscillatory magneto-resistivity.\cite{grid23, grid25, grid27, grid111, grid46, grid33} Andreev et al.,\cite{grid24} have argued that "$\sigma_{xx} < 0$ by itself suffices to explain the zero-dc-resistance state" because "negative linear response conductance implies that the zero-current state is intrinsically unstable." Since our simulations (Fig. 4) show clearly that negative magneto-conductivity/resistivity leads to positive, not negative, conductance/resistance, it appears that one cannot argue for an instability in the zero-current state based on presumed "negative linear response conductance." For illustrative purposes, using the understanding obtained from the simulation results shown in Fig. 4, we sketch in Fig. 5 the straightforward expectations for the behavior of the diagonal ($R_{xx}$) and Hall ($R_{xy}$) resistances in a 2D system driven periodically to negative diagonal conductivity by photo-excitation. Fig. 5(a) shows that the microwave-induced magnetoresistance oscillations in $R_{xx}$ grow in amplitude with increasing $B$. When the oscillations in the magneto-resistivity/conductivity are so large that the oscillatory minima would be expected to cross into the regime of $\sigma_{xx} <0$, the $R_{xx}$ exhibits positive values. Here, vanishing $R_{xx}$ occurs only at singular values of the magnetic field where $\sigma_{xx} = 0$. Fig. 5(b) shows that the Hall resistance $R_{xy}$ shows sign reversal over the same span of $B$ where $\sigma_{xx} <0$. It appears that if there were an instability, it should be related to the sign-reversal in the Hall effect.
Yet, note that sign reversal in the Hall effect is not a manifestly unphysical effect, since it is possible to realize Hall effect sign reversal in experiment even with a fixed external bias on the sample, as in the simulations, simply by reversing the direction of the magnetic field or by changing the sign of the charge carriers. The unusual characteristic indicated by these simulations is Hall effect sign reversal without changing the direction of the magnetic field or the sign of the charge carriers. This feature can be explained, however, by noting that the numerical solution of the boundary value problem depends on a single parameter, the Hall angle $\theta_{H}$, where $\tan (\theta_{H}) = \sigma_{xy}/\sigma_{xx}$. Since this single parameter depends on the ratio of the off-diagonal and diagonal conductivities, a sign change in $\sigma_{xx}$ produces the same physical effect as a sign reversal in $\sigma_{xy}$, so far as the solution to the boundary value problem is concerned. That is, one might change the sign of $\sigma_{xy}$, or one might change the sign of $\sigma_{xx}$; the end physical result is the same: a sign reversal in the Hall effect. One might also ask: why do the simulations indicate a positive diagonal resistance for the negative diagonal conductivity/resistivity scenario? The experimental setup shown in Fig. 2 offers an answer to this question: in the experimental setup, the Hall bar is connected to an external battery which enforces the direction of the potential drop from one end of the specimen to the other. This directionality in the potential drop is also reflected in the boundary value problem. As a consequence, the red potential probe in Figs. 2, 3 or 4 would prefer to show a higher potential than the black potential probe so long as $\sigma_{xx}$ is not identically zero, and this leads to the positive resistance even for negative diagonal conductivity/resistivity in the numerical simulations.
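The tensor algebra behind this single-parameter argument can be illustrated with a minimal numerical sketch (ours, not part of the reported simulations; the function names are illustrative). Inverting the uniform magneto-conductivity tensor $\hat\sigma$ with components $\sigma_{yy}=\sigma_{xx}$ and $\sigma_{yx}=-\sigma_{xy}$ shows that a sign change of $\sigma_{xx}$ flips $\rho_{xx}$ and $\tan\theta_H$ while leaving $\rho_{xy}$ untouched, and acts on $\tan\theta_H$ exactly as a sign change of $\sigma_{xy}$ would:

```python
import math

def invert_sigma(sxx, sxy):
    """Invert the 2x2 conductivity tensor [[sxx, sxy], [-sxy, sxx]]
    to obtain the resistivity components (rho_xx, rho_xy)."""
    det = sxx**2 + sxy**2
    return sxx / det, sxy / det

def tan_hall_angle(sxx, sxy):
    """tan(theta_H) = sigma_xy / sigma_xx, the single parameter
    controlling the boundary value problem discussed in the text."""
    return sxy / sxx
```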
We remark that the experimental results of Fig. 1(a) are quite unlike the expectations exhibited in Fig. 5. Experiment shows an ordinary Hall effect without anomalies over the zero-resistance region about $(4/5)B_{f}$, not a sign reversal in the Hall effect, and experiment shows zero-resistance, not the positive resistance expected for a system driven to negative conductivity. In conclusion, the results presented here suggest that a bare negative magneto-conductivity/resistivity state in the 2DES under photo-excitation should yield a positive diagonal resistance with a concomitant sign reversal in the Hall effect. These results could also prove useful for understanding plateau formation in the Hall effect as, for example, in the quantum Hall situation, if new physics comes into play in precluding sign reversal in the Hall effect when the diagonal magneto-conductivity/resistivity is forced into the regime of negative values. \section{Methods} \subsection{Samples} The GaAs/AlGaAs material utilized in our experiments exhibits an electron mobility $\mu \approx 10^{7}~cm^{2}/Vs$ and an electron density in the range $2.4 \times 10^{11} \leq n \leq 3 \times 10^{11}~cm^{-2}$. Utilized devices include cleaved specimens with alloyed indium contacts and Hall bars fabricated by optical lithography with alloyed Au-Ge/Ni contacts. Standard low frequency lock-in techniques yield the electrical measurements of $R_{xx}$ and $R_{xy}$.\cite{grid1,grid3,grid4,grid201, grid202, grid5,grid11, grid209, grid203, grid16, grid16b, grid21, grid21y, grid60, grid61, grid108, grid109} \subsection{Microwave transport measurements} Typically, a Hall bar specimen was mounted at the end of a long straight section of a rectangular microwave waveguide.
The waveguide with sample was inserted into the bore of a superconducting solenoid, immersed in pumped liquid helium, and irradiated with microwaves at a source power $0.1 \le P \le 10$~mW, as in the usual microwave-irradiated transport experiment.\cite{grid1} The applied external magnetic field was oriented along the solenoid and waveguide axis.
\section{Introduction} \label{Introduction} $~~\,~$ Different aspects of higher spin field theory in various dimensions currently attract considerable attention. ${}$First of all, higher spin fields and their possible interactions bring about numerous challenges for theoreticians. More importantly, massive higher spin states are known to be present in the spectra of the string and superstring theories. It is therefore quite natural to expect that, in a field theory limit, the superstring theory should reduce to a consistent interacting supersymmetric theory of higher spin fields. In four space-time dimensions, Lagrangian formulations for massive fields of arbitrary spin were constructed thirty years ago \cite{SH}. A few years later, the massive construction of \cite{SH} was used to derive Lagrangian formulations for gauge massless fields of arbitrary spin \cite{Fronsdal}. Since then, hundreds of papers have been published in which the results of \cite{SH,Fronsdal} were generalized, (BRST) reformulated, extended, quantized, and so forth. Here it is hardly possible to comment upon these heroic follow-up activities. We point out only several reviews \cite{REW} and some recent papers \cite{DEV}. One of the interesting directions in higher spin field theory is the construction of manifestly supersymmetric extensions of the models given in \cite{SH,Fronsdal}. In the massless case, the problem has actually been solved in \cite{KSP,KS} (see \cite{BK} for a review and \cite{KSG} for generalizations). ${}$For each superspin $Y>3/2$, these publications provide two dually equivalent off-shell realizations in 4D, ${\cal N}=1$ superspace. At the component level, each of the two superspin-$Y$ actions \cite{KSP,KS} reduces to a sum of the spin-$Y$ and spin-$(Y+1/2)$ actions \cite{Fronsdal} upon imposing a Wess-Zumino-type gauge and eliminating the auxiliary fields.
On the mass shell, the only independent gauge-invariant field strengths in these models are exactly the higher spin on-shell field strengths first identified in ``Superspace'' \cite{GGRS}. As concerns the massive case, off-shell higher spin supermultiplets have never been constructed in complete generality. In 4D, ${\cal N}=1$ Poincar\'e supersymmetry, a massive multiplet of superspin $Y$ describes four propagating fields with the same mass but different spins $s$ $=$ $(Y-1/2, Y, Y, Y+1/2)$, see, e.g., \cite{BK,GGRS} for reviews. ${}$The first attempts\footnote{Some preliminary results were also obtained in \cite{BL}.} to attack the problem of constructing free off-shell massive higher spin supermultiplets were undertaken in recent works \cite{BGLP1,BGLP2,GSS} that were concerned with deriving off-shell realizations for the massive {\it gravitino multiplet} ($Y$ = 1) and the massive {\it graviton multiplet} ($Y$ = 3/2). This led to two $Y$ = 3/2 formulations constructed in \cite{BGLP1} and one $Y$ = 1 formulation derived in \cite{BGLP2}. The results of \cite{BGLP1} were soon generalized \cite{GSS} to produce a third $Y$ = 3/2 formulation. In the present letter, we continue the research started in \cite{BGLP1,BGLP2} and derive two new off-shell realizations for the massive gravitino multiplet, and three new off-shell realizations for the massive graviton multiplet. Altogether, there now occur four massive $Y$ = 1 models (in conjunction with the massive $Y$ = 1 model constructed by Ogievetsky and Sokatchev years ago \cite{OS2}) and six massive $Y$ = 3/2 models. We further demonstrate that these realizations are related to each other by duality transformations similar to those which relate massive tensor and vector multiplets, see \cite{K} and references therein. It is interesting to compare the massive and massless results in the case of the $Y$ = 3/2 multiplet. 
In the massless case, there are three building blocks to construct {\it {minimal}} \footnote{To our knowledge, no investigations have occurred regarding the possible existence of \newline $~~~~~~$ a massive {\it {non-minimal}} $Y$ = 3/2 theory.} linearized supergravity \cite{GKP}. They correspond to (i) old minimal supergravity (see \cite{BK,GGRS} for reviews); (ii) new minimal supergravity (see \cite{BK,GGRS} for reviews); (iii) the novel formulation derived in \cite{BGLP1}. These off-shell $(3/2,2)$ supermultiplets, which comprise all the supergravity multiplets with $12+12$ degrees of freedom, will be called type I, type II and type III supergravity multiplets\footnote{In the case of type III supergravity, a nonlinear formulation is still unknown.} in what follows, in order to avoid the use of unwieldy terms like ``new new minimal'' or ``very new'' supergravity. As is demonstrated below, each of the massless type I--III formulations admits a massive extension, and the latter turns out to possess a nontrivial dual. As a result, we have now demonstrated that there occur {\it {at least}} six distinct off-shell massive $Y$ = 3/2 minimal realizations. This paper is organized as follows. In section 2 we derive two new (dually equivalent) formulations for the massive gravitino multiplet. They turn out to be massive extensions of the two standard off-shell formulations for the massless spin $(1,3/2)$ supermultiplet discovered respectively in \cite{FV,deWvH,GS} and \cite{OS}. In section 3 we derive three new formulations for the massive graviton multiplet. Duality transformations are also worked out that relate all the massive $Y$ = 3/2 models. A brief summary of the results obtained is given in section 4. The paper is concluded by a technical appendix. Our superspace conventions mostly follow \cite{BK} except the following two from \cite{GGRS}: (i) the symmetrization of $n$ indices does not involve a factor of $(1/n!)
$; (ii) given a four-vector $v_a$, we define $v_{\un a} \equiv v_{\alpha {\dot{\alpha}}} =(\sigma^a)_{\alpha {\dot{\alpha}} } v_a$. \section{Massive Gravitino Multiplets} \label{Massive Gravitino Multiplets} $~~\,~$ We start by recalling the off-shell formulation for massless (matter) gravitino multiplet introduced first in \cite{FV,deWvH} at the component level and then formulated in \cite{GS} in terms of superfields (see also \cite{GG}). The action derived in \cite{GS} is \begin{eqnarray} S_{(1,\frac 32)}[\Psi , V] &=& \hat{S}[\Psi ] + \int d^8z\,\Big\{ \Psi^\alpha W_\alpha + {\Bar \Psi}_{\dot{\alpha}} {\Bar W}^{\dot{\alpha}} \Big\} - {1\over 4} \int d^6z \,W^\alpha W_\alpha~, \label{tino}\\ && \qquad \qquad W_\alpha = -{1 \over 4} \Bar D^2D^\alpha V~, \nonumber \end{eqnarray} where \begin{eqnarray} \hat{S}[\Psi ] &=& \int d^8z\, \Big\{ D^\alpha\Bar \Psi^{\dot\alpha}\Bar D_{\dot\alpha}\Psi_\alpha -\frac 14\Bar D^{\dot\alpha}\Psi^\alpha\Bar D_{\dot\alpha}\Psi_\alpha -\frac 14 D_\alpha \Bar \Psi_{\dot\alpha} D^\alpha \Bar \Psi^{\dot\alpha} \Big\}~. \label{S-hat} \end{eqnarray} This massless $Y$ = 1 model is actually of some interest in the context of higher spin field theory. As mentioned in the introduction, there exist two dually equivalent gauge superfield formulations (called longitudinal and transverse) \cite{KS} for each massless {\it integer} $Y$ $\geq 1$, see \cite{BK} for a review. The longitudinal series\footnote{The transverse series terminates at a non-minimal gauge formulation for the massless \newline $~~~~~~$ gravitino multiplet realized in terms of an unconstrained real scalar $V$ and Majorana \newline $~~~~~~$ $\gamma$-traceless spin-vector ${\bf \Psi}_a= (\Psi_{a \beta}, \Bar \Psi_{a}{}^{\dot{\beta}})$, with $\gamma^a {\bf \Psi}_a=0$.} terminates, at $Y =1$, exactly at the action (\ref{tino}). 
To describe a massive gravitino multiplet, we introduce an action $S=S_{(1,\frac 32)}[\Psi , V] + S_m[\Psi ,V]$, where $S_m[\Psi ,V]$ stands for the mass term \begin{eqnarray} S_m[\Psi,V]=m \int d^8z\,\Big\{ \Psi^2 +\Bar \Psi^2 +\alpha m V^2 + V\Big(\beta D^\alpha \Psi_\alpha +\beta^* \Bar D_{\dot\alpha} \Bar \Psi^{\dot\alpha}\Big) \Big\}~, \label{S-m} \end{eqnarray} where $\alpha$ and $\beta$ are respectively real and complex parameters. These parameters should be fixed by the requirement that the equations of motion be equivalent to the constraints \begin{equation} i \pa_{\un a} \Bar \Psi^{\dot{\alpha}} + m \Psi_\alpha = 0~, \qquad D^\alpha \Psi_{\alpha}=0~, \qquad \Bar D^2 \Psi_{\alpha}=0~~,~~ \label{mass-shell} \end{equation} required to describe an irreducible on-shell multiplet with $Y$ = $1$, see \cite{BK,BGLP2}. In the space of spinor superfields obeying the Klein-Gordon equation, $(\Box -m^2) \Psi_\alpha =0$, the second and third constraints in (\ref{mass-shell}) are known to select the $Y$ = 1 subspace \cite{BK} (see also \cite{Sok}). Without imposing additional constraints, such as the first one in (\ref{mass-shell}), the superfields $\Psi_\alpha $ and $\Bar \Psi_{\dot{\alpha}}$ describe two massive $Y$ = 1 representations. Generally, an irreducible representation emerges if these superfields are also subject to a reality condition of the form \begin{equation} \pa_{\un a} \Bar \Psi^{\dot{\alpha}} + m \,e^{i \varphi}\, \Psi_\alpha = 0~, \qquad |e^{i \varphi} | =1~, \end{equation} where $\varphi$ is a constant real parameter. As is obvious, the latter constraint implies the Klein-Gordon equation. Applying a phase transformation to $\Psi_\alpha$ allows us to make the choice $e^{i \varphi} =-i$ corresponding to the Dirac equation.
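In conventions where the conjugate of this reality condition reads $\pa^{\beta \dot{\alpha}}\, \Psi_\beta + m \,e^{-i \varphi}\, \Bar \Psi^{\dot{\alpha}} = 0$ and $\pa_{\un a}\, \pa^{\beta \dot{\alpha}} = \delta_\alpha{}^\beta \, \Box$ (the precise signs and factors are convention dependent), the implication can be checked in one line:
\begin{equation}
0 = \pa_{\un a}\Big( \pa^{\beta \dot{\alpha}}\, \Psi_\beta + m \,e^{-i \varphi}\, \Bar \Psi^{\dot{\alpha}} \Big) = \Box \,\Psi_\alpha + m \,e^{-i \varphi}\, \pa_{\un a} \Bar \Psi^{\dot{\alpha}} = (\Box - m^2)\, \Psi_\alpha~,
\end{equation}
where the last step uses the reality condition once more; the phases $e^{\pm i \varphi}$ cancel, so the mass shell is independent of $\varphi$.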
The equations of motion corresponding to $S=S_{(1,\frac 32)}[\Psi , V] + S_m[\Psi ,V]$ are: \begin{eqnarray} -\Bar D_{\dot\alpha}D_\alpha\Bar \Psi^{\dot\alpha} +\frac 12\Bar D^2 \Psi_\alpha +2m\Psi_\alpha +W_\alpha -\beta m D_\alpha V&=&0~, \label{y1newVeom} \\ \frac 12 D^\alpha W_\alpha +\Big( \frac 14D^\alpha\Bar D^2 \Psi_\alpha +\beta mD^\alpha \Psi_\alpha +c.c.\Big) +2\alpha m^2V &=&0~. \label{y1newHeom} \end{eqnarray} Multiplying (\ref{y1newVeom}) and (\ref{y1newHeom}) by $\Bar D^2$ yields: \begin{eqnarray} \Bar D^2\Psi_\alpha= -2\beta W_\alpha~, \qquad \Bar D^2D^\alpha \Psi_\alpha =-2{ \alpha \over \beta }\, m\Bar D^2V~. \end{eqnarray} Next, substituting these relations into the contraction of $D^\alpha$ on (\ref{y1newVeom}) leads to: \begin{eqnarray} mD^\alpha \Psi_\alpha = \frac 12 (\beta+\beta^*-1)D^\alpha W_\alpha +\frac {\beta}{2}\Big(1+{\alpha\over |\beta|^2}\Big)mD^2V~. \end{eqnarray} Substituting these three results into (\ref{y1newHeom}) gives \begin{eqnarray} \frac 12 (1-\beta-\beta^*)^2D^\alpha W_\alpha +\frac 12m\Big(1+{\alpha\over |\beta|^2}\Big)[\beta^2D^2 +(\beta^*)^2\Bar D^2]V +2\alpha m^2 V =0~. \end{eqnarray} This equation implies that $V$ is auxiliary, $V=0$, if \begin{equation} \beta+\beta^*=1~, \qquad \alpha=-|\beta|^2~. \label{conditions} \end{equation} Then, the mass-shell conditions (\ref{mass-shell}) also follow.\footnote{One can consider a more general action in which the term $m ( \Psi^2 + \Bar \Psi^2) $ in (\ref{S-m}) is replaced \newline $~~~~\,~$ by $ ( \mu \, \Psi^2 +\mu^*\,\Bar \Psi^2) $, with $\mu$ a complex mass parameter, $|\mu|=m$.
Then, the first equation in \newline $~~~~\,~$ (\ref{conditions}) turns into $\beta/\mu + (\beta/\mu)^* = 1/m$.} The final action takes the form: \begin{eqnarray} S[ \Psi,V] &=& \hat{S}[\Psi ] + \int d^8z\,\Big\{ \Psi \, W + {\Bar \Psi} \, {\Bar W} \Big\} - {1\over 4} \int d^6z \,W^2 \label{ss1final} \\ &~& ~~+~m \int d^8z\,\Big\{ \Psi^2 +\Bar \Psi^2 -|\beta|^2 m V^2 + V\Big(\beta D \, \Psi +\beta^* \Bar D \, \Bar \Psi \Big) \Big\}~, \nonumber \end{eqnarray} where $\beta+\beta^*=1$. A superfield redefinition of the form $\Psi_\alpha\rightarrow \Psi_\alpha+ \delta \, \Bar D^2\Psi_\alpha$ can be used to change some coefficients in the action. The Lagrangian constructed turns out to possess a dual formulation. For simplicity, we choose $\beta =1/2$ in (\ref{ss1final}). Let us consider, following \cite{K}, the ``first-order'' action \begin{eqnarray} S_{Aux} &=& \hat{S}[\Psi ] + \int d^8z\,\Big\{ m(\Psi^2 + {\Bar \Psi}^2) +\Psi \, W + {\Bar \Psi}\, {\Bar W} -{m^2\over 4} V^2 + {m\over 2} V( D\, \Psi + \Bar D\, \Bar \Psi ) \Big\} \nonumber \\ &~& ~+~ { 1 \over 2} \left\{ m \int d^6z \, \eta^\alpha \Big(W_\alpha + {1 \over 4} \Bar D^2D^\alpha V\Big) - {1 \over 4} \int d^6z \,W^2 ~+~ c.c. \right\}~. \label{aux1} \end{eqnarray} Here $W_\alpha$ and $\eta_\alpha$ are unconstrained chiral spinor superfields, and there is no relationship between $V$ and $W_\alpha$. Varying $S_{Aux} $ with respect to $\eta_\alpha$ brings us back to (\ref{ss1final}). On the other hand, if we vary $S_{Aux} $ with respect to $V$ and $W_\alpha$ and eliminate these superfields, we then arrive at the following action: \begin{eqnarray} \tilde{S} = \hat{S}[\Psi ] &+& \int d^8z\,\Big\{ m(\Psi^2 + {\Bar \Psi}^2) +{1\over 4} \Big( D(\Psi+\eta) + {\Bar D} ({\Bar \Psi} + \Bar \eta ) \Big)^2 \Big\} \nonumber \\ &+& {1\over 8} \left\{ \int d^6z \, \Big( 2m \eta - \Bar D^2 \Psi \Big)^2 ~+~ c.c. \right\}~.
\label{14} \end{eqnarray} Implementing here the shift \begin{equation} \Psi_\alpha ~\to~ \Psi_\alpha - \eta_\alpha~~,~~ \end{equation} brings the action to the form \begin{eqnarray} \tilde{S} = \hat{S}[\Psi ] &+&{1\over 4} \int d^8z\, \Big( D\,\Psi + {\Bar D} \,{\Bar \Psi} \Big)^2 - {1\over 2} \int d^8z\,\Big\{ \Psi^\alpha \Bar D^2 \Psi_\alpha + \Bar \Psi_{\dot{\alpha}} D^2 \Bar \Psi^{\dot{\alpha}} \Big\} \nonumber \\ &+& m \int d^8z\,\Big\{\Psi^2 + {\Bar \Psi}^2\Big\} + {m^2\over 2} \left\{ \int d^6z \, \eta^2 ~+~ c.c. \right\}~. \end{eqnarray} As is seen, the chiral spinor superfield $\eta_\alpha$ has completely decoupled! Therefore, the dynamical system obtained is equivalent to the following theory \begin{eqnarray} S[\Psi] = \hat{S}[\Psi ] &+&{1\over 4} \int d^8z\, \Big( D\,\Psi + {\Bar D} \,{\Bar \Psi} \Big)^2 -{1\over 2} \int d^8z\,\Big\{ \Psi^\alpha \Bar D^2 \Psi_\alpha + \Bar \Psi_{\dot{\alpha}} D^2 \Bar \Psi^{\dot{\alpha}} \Big\} \nonumber \\ &+& m \int d^8z\,\Big\{\Psi^2 + {\Bar \Psi}^2\Big\}~~,~~ \label{mgravitino2} \end{eqnarray} formulated solely in terms of the unconstrained spinor $\Psi_\alpha$ and its conjugate. Applying the phase transformation $\Psi_\alpha \to i\, \Psi_\alpha$, it is seen that the action obtained is actually equivalent to \begin{eqnarray} S[\Psi] = \hat{S}[\Psi ] &-&{1\over 4} \int d^8z\, \Big( D\,\Psi - {\Bar D} \,{\Bar \Psi} \Big)^2 + m \int d^8z\,\Big\{\Psi^2 + {\Bar \Psi}^2\Big\}~. \label{mgravitino3} \end{eqnarray} It is interesting to compare (\ref{mgravitino3}) with the action for the massive $Y$ = 1 multiplet obtained by Ogievetsky and Sokatchev \cite{OS2}. Their model is also formulated solely in terms of a spinor superfield.
The corresponding action\footnote{Setting $m=0$ in (\ref{OS-action-2}) gives the model for massless gravitino multiplet discovered in \cite{OS}.} is \begin{eqnarray} S_{OS} [\Psi] = \hat{S}[\Psi ] + {1\over 4} \int d^8z\, \Big( D\Psi + {\Bar D} {\Bar \Psi} \Big)^2 + i \, m \int d^8z\, \Big( \Psi^2 - {\Bar \Psi}^2 \Big)~, \label{OS-action-2} \end{eqnarray} see Appendix A for its derivation.\footnote{It was argued in \cite{BGLP2} that there are no Lagrangian formulations for massive superspin-1 \newline $~~~\,~~$ multiplet solely in terms of an unconstrained spinor superfield and its conjugate. The \newline $~~~\,~~$ ``proof'' given in \cite{BGLP2} is incorrect, as shown by the two counter-examples (\ref{mgravitino3}) and (\ref{OS-action-2}).} The actions (\ref{mgravitino3}) and (\ref{OS-action-2}) look similar, although it does not seem possible to transform one to the other off the mass shell. In fact, the model (\ref{14}), which is equivalent to (\ref{mgravitino3}), can be treated as a massive extension of the Ogievetsky-Sokatchev model for massless gravitino multiplet \cite{OS}. Indeed, implementing in (\ref{14}) the shift \begin{equation} \Psi_\alpha ~\to \Psi_\alpha + {i \over 2m} \Bar D^2 \Psi_\alpha ~, \qquad \eta_\alpha ~\to \eta_\alpha - {i \over 2m} \Bar D^2 \Psi_\alpha~, \end{equation} which leaves $\hat{S}[\Psi ] $ invariant, we end up with \begin{eqnarray} S[\Psi, \eta ] = S_{(1,\frac 32)}[\Psi , G] &+& m \int d^8z\,\Big\{ \Psi^2 + {\Bar \Psi}^2 +2(1 + i) \Psi \eta + 2(1 - i)\Bar \Psi \Bar \eta \Big\} \nonumber \\ &+& {m^2 \over 2} \left\{ \int d^6z \, \eta^2 ~+~ c.c. \right\}~, \end{eqnarray} where \begin{eqnarray} S_{(1,\frac 32)}[\Psi , G] = \hat{S}[\Psi ] &+& \int d^8z\, \Big( G + {1\over 2} ( D\, \Psi + {\Bar D}\, {\Bar \Psi} ) \Big)^2 ~, \\ G &=&{1\over 2} ( D^\alpha \eta _\alpha + \Bar D_{\dot{\alpha}} \Bar \eta^{\dot{\alpha}} )~. 
\nonumber \end{eqnarray} Here $G$ is the linear superfield, $D^2 G= \Bar D^2 G =0$, associated with the chiral spinor $\eta_\alpha$ and its conjugate. The action $S_{(1,\frac 32)}[\Psi , G] $ corresponds to the Ogievetsky-Sokatchev formulation for massless gravitino multiplet \cite{OS} as presented in \cite{BK}. Before concluding this section, it is worth recalling one more possibility to describe the massless gravitino multiplet \cite{BK,GS} \begin{eqnarray} S_{(1,\frac 32)}[\Psi , \Phi] &=& \hat{S}[\Psi ] -{1\over 2} \int d^8z\,\Big\{ \Bar \Phi \Phi + ( \Phi +\Bar \Phi) ( D\, \Psi + \Bar D \, \Bar \Psi ) \Big\}~, \label{g-fixed} \end{eqnarray} with $\Phi$ a chiral scalar, $\Bar D _{\dot{\alpha}} \Phi =0 $. The actions (\ref{tino}) and (\ref{g-fixed}) can be shown to correspond to different partial gauge fixings in the mother theory \begin{eqnarray} S_{(1,\frac 32)}[\Psi , V, \Phi] = \hat{S}[\Psi ] &+& \int d^8z\,\Big\{ \Psi \, W + {\Bar \Psi} \, {\Bar W} \Big\} - {1\over 4} \int d^6z \,W^2 \nonumber \\ &-&{1\over 2} \int d^8z\,\Big\{ \Bar \Phi \Phi + (\Phi +\Bar \Phi) ( D \, \Psi + \Bar D \, \Bar \Psi ) \Big\}~~,~~ \end{eqnarray} possessing a huge gauge freedom, see \cite{BK,GS} for more details. The massive extension of (\ref{g-fixed}) was derived in \cite{BGLP2} and the corresponding action is \begin{eqnarray} S[\Psi , \Phi] = S_{(1,\frac 32)}[\Psi , \Phi] +m\int d^8z\,(\Psi^2 + {\Bar \Psi}^2) -{m\over 4} \Big\{ \int d^6z \, \Phi^2 + c.c. \Big\}~. \end{eqnarray} Unlike its massless limit, this theory does not seem to admit a nice dual formulation. 
\section{Massive Graviton Multiplets} \label{Massive Graviton Multiplets} $~~\,~$ The massive $Y$ = 3/2 multiplet (or massive graviton multiplet) can be realized in terms of a real (axial) vector superfield $H_a$ obeying the equations \cite{BK,BGLP1,Sok} \begin{eqnarray} \label{32irrepsp} (\Box-m^2)H_a=0~, \quad D^\alpha H_{\un a}=0~, \quad \Bar D^{\dot\alpha} H_{\un a}=0 \quad \longrightarrow \quad \pa^{\un a}H_{\un a}=0~. \end{eqnarray} We are interested in classifying those supersymmetric theories which generate these equations as the equations of motion. In what follows, we will use a set of superprojectors \cite{SG} for the real vector superfield $H_{\un a}$: \begin{eqnarray} (\Pi^T_{1})H_{\un a}&:=&\frac 1{32} \Box^{-2}\pa_{\dot\alpha}{}^\beta \{\Bar D^2,D^2\}\pa_{(\alpha}{}^{\dot\beta}H_{\beta)\dot\beta}~, \\ (\Pi^T_{1/2})H_{\un a}&:=& \frac 1{8\cdot3!}\Box^{-2}\pa_{\dot\alpha}{}^\beta D_{(\alpha}\Bar D^2D^\gamma (\pa_{\beta)}{}^{\dot\beta}H_{\gamma\dot\beta} +\pa_{|\gamma|}{}^{\dot\beta}H_{\beta)\dot\beta})~, \\ \label{trans} (\Pi^T_{3/2})H_{\un a}&:=& -\frac 1{8\cdot3!}\Box^{-2}\pa_{\dot\alpha}{}^\beta D^\gamma\Bar D^2D_{(\gamma} \pa_{\alpha}{}^{\dot\beta}H_{\beta)\dot\beta}~, \\ (\Pi^L_{0})H_{\un a}&:=& -\frac 1{32}\pa_{\un a} \Box^{-2}\{\Bar D^2,D^2\} \pa^{\un c}H_{\un c}~, \\ \label{long} (\Pi^L_{1/2})H_{\un a}&:=& \frac 1{16}\pa_{\un a}\Box^{-2}D^\beta\Bar D^2 D_\beta\pa^{\un c}H_{\un c}~. \end{eqnarray} In terms of the superprojectors introduced, we have \cite{GKP} \begin{eqnarray} D^\gamma {\Bar D}^2 D_\gamma H_{\un a} &=& -8\Box ( \Pi^L_{1/2} + \Pi^T_{1/2} +\Pi^T_{3/2}) H_{\un a} ~, \\ \pa_{\un a}\, \pa^{\un b} H_{\un b} &=& -2 \Box ( \Pi^L_{0} + \Pi^L_{1/2} ) H_{\un a}~, \label{id2}\\ \left[D_\alpha , {\Bar D}_{\dot{\alpha}} \right] \left[D_\beta , {\Bar D}_{\dot{\beta}} \right] H^{\un b} &=& \Box (8 \Pi^L_{0} - 24 \Pi^T_{1/2} ) H_{\un a}~. 
\label{id3} \end{eqnarray} \subsection{Massive Extensions of Type I Supergravity} \label{Massive Extensions of Type I Supergravity} $~~\,~$ Consider the off-shell massive supergravity multiplet derived in \cite{GSS} \begin{eqnarray} S^{({\rm IA})} [H, P] = S^{({\rm I})} [H, \Sigma] - {1\over 2} m^2\int d^8z \, \Big\{H^{\un a} H_{\un a} -\frac 92 P^2 \Big\}~, \label{IA} \end{eqnarray} where the massless part of the action takes the form \begin{eqnarray} S^{({\rm I})} [H, \Sigma] &=& \int d^8z \, \Big\{ H^{\un a}\Box( - \frac 13 \Pi^L_{0}+ \frac 12 \Pi^T_{3/2})H_{\un a} -i(\Sigma -\Bar \Sigma ) \pa^{\un a} H_{\un a} - 3 \Bar \Sigma \Sigma \Big\}~, \\ && \qquad \Sigma = -{1\over 4} \Bar D^2 P~, \qquad \Bar P = P~~,~~ \nonumber \end{eqnarray} and this corresponds to a linearized form of type I (old minimal) supergravity that has only appeared in the research literature \cite{VariantSG}. It has not been discussed in textbooks such as \cite{BK,GGRS}. The distinctive feature {\it {unique}} to this theory is that its set of auxiliary fields contains one axial vector, one scalar and one three-form ($S$, $C_{{\un a} \, {\un b} \, {\un c}} $, $A_{\un a}$). Interestingly enough and to our knowledge, there has {\it {never}} been constructed a massive theory that contains the standard auxiliary fields of minimal supergravity ($S$, $P$, $A_{\un a}$). This fact may be of some yet-to-be understood significance. The theory with action $S^{({\rm IA})} [H, P]$ turns out to possess a dual formulation. Let us introduce the ``first-order'' action \begin{eqnarray} S_{Aux} &=& \int d^8z \, \Big\{ H^{\un a}\Box( - \frac 13 \Pi^L_{0} + \frac 12 \Pi^T_{3/2})H_{\un a} -\frac 12 m^2 H^{\un a}H_{\un a} -U \pa^{\un a} H_{\un a} \nonumber \\ && ~~~~~~~~~~~ -\frac 32 U^2 + \frac 94 m^2 P^2 +3m V\Big( U + \frac14 \Bar D^2 P + \frac 14 D^2 P \Big) \Big\}~, \end{eqnarray} where $U$ and $V$ are real unconstrained superfields. Varying $V$ brings us back to (\ref{IA}). 
On the other hand, we can eliminate $U$ and $P$ using their equations of motion. With the aid of (\ref{id2}), this gives \begin{eqnarray} S^{({\rm IB})} [H, P] &=& \int d^8z \, \Big\{ H^{\un a}\Box( \frac 13 \Pi^L_{1/2} + \frac 12 \Pi^T_{3/2})H_{\un a} -\frac 12 m^2 H^{\un a}H_{\un a} \nonumber \\ &&~~~~~~~~~~- \frac{1}{16} V \{ \Bar D^2 , D^2 \} V - m V \pa^{\un a} H_{\un a} +\frac 32 m^2 V^2 \Big\}~. \label{IB} \end{eqnarray} This is one of the two formulations for the massive $Y$ = 3/2 multiplet constructed in \cite{BGLP1}. \subsection{Massive Extensions of Type II Supergravity} \label{Massive Extensions of Type II Supergravity} $~~\,~$ Let us now turn to type II (or new minimal) supergravity. Its linearized action is \begin{eqnarray} S^{({\rm II})} [H, {\cal U}] &=&\int d^8z\, \Big\{H^{\un a}\Box(-\Pi^T_{1/2}+\frac 12\Pi^T_{3/2})H_{\un a} +\frac 12{\cal U} [D_\alpha,\Bar D_{\dot\alpha}]H^{\un a} +\frac 32{\cal U}^2\Big\}~,~~ \label{II}\\ && \quad {\cal U}=D^\alpha \chi_\alpha+\Bar D_{\dot\alpha}\Bar \chi^{\dot\alpha}~, \qquad \Bar D_{\dot{\alpha}} \chi_\alpha = 0~, \nonumber \end{eqnarray} with $\chi_\alpha$ an unconstrained chiral spinor. It possesses a unique massive extension \begin{eqnarray} S^{({\rm IIA})} [H, \chi] &=& S^{({\rm II})} [H, {\cal U}] - {1\over 2} m^2\int d^8z\, H^{\un a}H_{\un a} +3m^2 \left\{ \int d^6z \, \chi^2 +c.c. \right\} \label{IIA} \end{eqnarray} which is derived in Appendix B. The theory (\ref{IIA}) admits a dual formulation. Let us consider the following ``first-order'' action \begin{eqnarray} S_{Aux} =\, \int d^8z\, \Big\{H^{\un a}\Box(-\Pi^T_{1/2}+\frac 12\Pi^T_{3/2})H_{\un a} -\frac 12 m^2 H^{\un a}H_{\un a} +\frac 12{\cal U} [D_\alpha,\Bar D_{\dot\alpha}]H^{\un a} +\frac 32{\cal U}^2 \nonumber\\ ~~~~~~~~~~~- 6m V \Big( {\cal U} - D^\alpha \chi_\alpha - \Bar D_{\dot\alpha}\Bar \chi^{\dot\alpha} \Big) \Big\} +3m^2 \Big\{ \int d^6z \, \chi^\alpha \chi_\alpha +c.c. 
\Big\}~,~~~ \end{eqnarray} in which ${\cal U}$ and $V$ are real unconstrained superfields. Varying $V$ gives the original action (\ref{IIA}). On the other hand, we can eliminate the independent real scalar ${\cal U}$ and chiral spinor $\chi_\alpha$ variables using their equations of motion. With the aid of (\ref{id3}) this gives \begin{eqnarray} S^{({\rm IIB})} [H, V] &=&\int d^8z\, \Big\{H^{\un a}\Box(- \frac 13 \Pi^L_{0}+\frac 12\Pi^T_{3/2})H_{\un a} -\frac 12 m^2 H^{\un a}H_{\un a} \nonumber \\ && \quad +mV [D_\alpha,\Bar D_{\dot\alpha}]H^{\un a} -6m^2 V^2\Big\} -6 \int d^6z \, W^\alpha W_\alpha~, \label{IIB} \end{eqnarray} where $W_\alpha$ is the vector multiplet field strength defined in (\ref{tino}). The obtained action (\ref{IIB}) constitutes a new formulation for massive supergravity multiplet. \subsection{Massive Extensions of Type III Supergravity} \label{Massive Extensions of Type III Supergravity} $~~\,~$ Let us now turn to linearized type III supergravity \cite{BGLP1} \begin{eqnarray} S^{({\rm III})} [H, {\cal U}] &=&\int d^8z\, \Big\{H^{\un a}\Box(\frac 13 \Pi^L_{1/2}+\frac 12\Pi^T_{3/2})H_{\un a} + {\cal U} \pa_{\un a} H^{\un a} +\frac 32{\cal U}^2\Big\}~, \label{III}\\ && \quad {\cal U}=D^\alpha \chi_\alpha+\Bar D_{\dot\alpha}\Bar \chi^{\dot\alpha}~, \qquad \Bar D_{\dot{\alpha}} \chi_\alpha = 0~, \nonumber \end{eqnarray} with $\chi_\alpha$ an unconstrained chiral spinor. It possesses a unique massive extension \begin{eqnarray} S^{({\rm IIIA})} [H, \chi] &=& S^{({\rm III})} [H, {\cal U}] - {1\over 2} m^2\int d^8z\, H^{\un a}H_{\un a} -9m^2 \left\{ \int d^6z \, \chi^2 +c.c. \right\}~,~~~ \label{IIIA} \end{eqnarray} and its derivation is very similar to that of (\ref{IIA}) given in Appendix B. Similarly to the type II case considered earlier, the theory (\ref{IIIA}) admits a dual formulation. 
Let us introduce the ``first-order'' action \begin{eqnarray} S_{Aux} &=& \int d^8z\, \Big\{H^{\un a}\Box(\frac 13 \Pi^L_{1/2}+\frac 12\Pi^T_{3/2})H_{\un a} -\frac 12 m^2 H^{\un a}H_{\un a} + {\cal U} \pa_{\un a} H^{\un a} +\frac 32{\cal U}^2 \nonumber \\ && +3m V \Big( {\cal U} - D^\alpha \chi_\alpha - \Bar D_{\dot\alpha}\Bar \chi^{\dot\alpha} \Big) \Big\} -9m^2 \Big\{ \int d^6z \, \chi^\alpha \chi_\alpha +c.c. \Big\}~, \end{eqnarray} in which ${\cal U}$ and $V$ are real unconstrained superfields. Varying $V$ gives the original action (\ref{IIIA}). On the other hand, we can eliminate the independent real scalar ${\cal U}$ and chiral spinor $\chi_\alpha$ variables using their equations of motion. With the aid of (\ref{id2}) this gives \begin{eqnarray} S^{({\rm IIIB})} [H, V] &=&\int d^8z\, \Big\{H^{\un a}\Box(- \frac 13 \Pi^L_{0}+\frac 12\Pi^T_{3/2})H_{\un a} -\frac 12 m^2 H^{\un a}H_{\un a} \nonumber \\ && \quad ~~~~~~~~-mV \pa_{\un a} H^{\un a} - \frac 32 m^2 V^2\Big\} +{1\over 2} \int d^6z \, W^\alpha W_\alpha~, \label{IIIB} \end{eqnarray} with the vector multiplet field strength $W_\alpha$ defined in eq. (\ref{tino}). This is one of the two formulations for the massive $Y$ = 3/2 multiplet constructed in \cite{BGLP1}. The other formulation is given by the action (\ref{IB}). \section{Summary} \label{Summary} $~~\,~$ We have formulated new free superfield dynamical theories for massive multiplets of superspin $Y$ = 1 and $Y$ = 3/2. We have shown that these new theories are dually equivalent to the theories with corresponding superspin given previously in the literature \cite{BGLP1,BGLP2,GSS}. Although the theories with a fixed and specific value of $Y$ are on-shell equivalent, they differ from one another by distinctive sets of auxiliary superfields (see discussion of this point in \cite{BGLP1}). The existence of their varied and distinctive off-shell structures together with their on-shell equivalence comes somewhat as a surprise. 
This surprise suggests that there is much remaining work to be done in order to understand and classify the distinct off-shell representations for all multiplets with higher values of $Y$ in both the massless and massive cases. Our results raise many questions. For example, for a fixed value of $Y$, what massless off-shell representations possess massive extensions? How does the number of such duality related formulations depend on the value of $Y$? Are there even more off-shell possibilities for the massless theories uncovered in the works of \cite{KSP,KS}? Another obvious question relates to the results demonstrated in the second work of \cite{KSG}. In this past work, it was shown that there is a natural way to combine 4D, $\cal N$ = 1 massless higher spin supermultiplets into 4D, $\cal N$ = 2 massless higher spin supermultiplets. Therefore, we are led to expect that it should be possible to combine 4D, $\cal N$ = 1 massive higher spin supermultiplets into 4D, $\cal N$ = 2 massive higher spin supermultiplets. As we presently only possess {\it {four}} $Y$ = 1 and {\it {six}} $Y$ = 3/2 4D, $\cal N$ = 1 supermultiplets, the extension to 4D, $\cal N$ = 2 supersymmetry promises to be an interesting study for the future. All of these questions bring to the fore the need for a comprehensive understanding of the role of duality for arbitrary $Y$ supersymmetric representations, of both the massless and massive varieties. In turn this raises the even more daunting specter of understanding the role of duality within the context of superstring/M-theory. To our knowledge, the first time the question of the possibility of dually related superstrings was raised was in 1985 \cite{GNi}, and there the question concerned on-shell dually related theories. So for both on-shell and off-shell theories we lack a complete understanding of duality. The most successful descriptions of superstrings are of the type pioneered by Berkovits (see \cite{BL} and references therein).
As presently formulated, there is no sign of duality in that formalism. So does the superstring uniquely pick out representations among the many dual varieties suggested by our work? \vskip.5cm \noindent {\bf Acknowledgments:}\\ The work of ILB was supported in part by the RFBR grant, project No 03-02-16193, joint RFBR-DFG grant, project No 02-02-04002, the DFG grant, project No 436 RUS 113/669, the grant for LRSS, project No 125.2003.2 and INTAS grant, project INTAS-03-51-6346. The work of SJG and JP is supported in part by National Science Foundation Grant PHY-0099544. The work of SMK is supported in part by the Australian Research Council. \begin{appendix} \section{Derivation of (\ref{OS-action-2})} $~~\,~$ Let us start with the action \begin{eqnarray} S [\Psi] = \hat{S}[\Psi ] + {1\over 4} \int d^8z\, \Big( D\, \Psi + {\Bar D} \,{\Bar \Psi} \Big)^2 + \int d^8z\, \Big( \mu \Psi^2 + \mu^* \, {\Bar \Psi}^2 \Big)~, \label{OS-action} \end{eqnarray} where the functional $\hat{S}[\Psi]$ is defined in (\ref{S-hat}), and $\mu$ is a complex mass parameter to be specified later. The action (\ref{OS-action}) with $\mu=0$ describes the Ogievetsky-Sokatchev model for the massless gravitino multiplet \cite{OS}. We are going to analyze whether this action with $\mu \neq 0$ can be used to consistently describe the massive gravitino multiplet dynamics. The equation of motion for $\Psi^\alpha$ is \begin{eqnarray} -\Bar D_{\dot\alpha}D_\alpha\Bar \Psi^{\dot\alpha} +\frac 12\Bar D^2 \Psi_\alpha -\frac 12 D_\alpha ( D\, \Psi + {\Bar D}\, {\Bar \Psi} ) +2\mu\,\Psi_\alpha &=&0~. 
\label{OS-em1} \end{eqnarray} It implies \begin{eqnarray} -\frac 14\Bar D^2 D_\alpha ( D\, \Psi + {\Bar D} \, {\Bar \Psi} ) +\mu \,\Bar D^2 \Psi_\alpha =0~, \label{OS-em2} \end{eqnarray} and therefore \begin{eqnarray} 0&=& -\frac 14 D^\alpha\Bar D^2 D_\alpha ( D\, \Psi + {\Bar D} \, {\Bar \Psi} ) +\mu \,D^\alpha \Bar D^2 \Psi_\alpha \label{OS-em3} \\ &=& -\frac 14 D^\alpha\Bar D^2 D_\alpha ( D\, \Psi + {\Bar D} \, {\Bar \Psi} ) +\mu \,\Bar D^2 ( D\, \Psi + {\Bar D}\, {\Bar \Psi} ) +4i \mu \,\pa^{\un a} \Bar D_{\dot{\alpha}} \Psi_\alpha ~. \nonumber \end{eqnarray} Since the first term on the right is real and linear, we further obtain \begin{eqnarray} \mu \,D^\alpha \Bar D^2 \Psi_\alpha &=& \mu^* \, \Bar D_{\dot{\alpha}} D^2 \Bar \Psi^{\dot{\alpha}}~, \label{OS-em4} \\ D^2 \Bar D^2 ( D\, \Psi + {\Bar D}\, {\Bar \Psi} ) &+& 4i \mu \,\pa^{\un a} D^2 \Bar D_{\dot{\alpha}} \Psi_\alpha =0~. \label{OS-em5} \end{eqnarray} Since the operator $\Bar D^2 D^\alpha $ annihilates chiral superfields, applying it to (\ref{OS-em1}) and making use of (\ref{OS-em5}), we then obtain \begin{equation} \Bar D^2 ( D\, \Psi + {\Bar D}\, {\Bar \Psi} ) =D^2 ( D\, \Psi + {\Bar D} \, {\Bar \Psi} ) =0~. \label{OS-em6} \end{equation} Next, contracting $D^\alpha$ on (\ref{OS-em1}) and making use of (\ref{OS-em6}) gives \begin{equation} i \pa^{\un a} ( \Bar D_{\dot{\alpha}} \Psi_\alpha + D_\alpha \Bar \Psi_{\dot{\alpha}}) +\mu \, D\,\Psi =0~. \label{OS-em7} \end{equation} We also note that, due to (\ref{OS-em6}), the equation (\ref{OS-em4}) is now equivalent to $ \pa^{\un a} ( \mu \, \Bar D_{\dot{\alpha}} \Psi_\alpha - \mu^* \, D_\alpha \Bar \Psi_{\dot{\alpha}}) =0$. Therefore, with the choice $ \mu = i\, m$, where $m$ is real, we end up with \begin{equation} D \,\Psi = \Bar D \,\Bar \Psi =0~. \label{OS-em8} \end{equation} Then, eq. (\ref{OS-em2}) becomes \begin{equation} \Bar D^2 \Psi_\alpha = 0~. 
\label{OS-em9} \end{equation} Finally, the equation of motion (\ref{OS-em1}) reduces to \begin{equation} \pa_{\un a} \Bar \Psi^{\dot{\alpha}} + m \, \Psi_\alpha = 0~. \label{OS-em10} \end{equation} Eqs. (\ref{OS-em8}) -- (\ref{OS-em10}) define an irreducible $Y$ = 1 massive representation. They are equivalent to the equations of motion in the Ogievetsky-Sokatchev model (\ref{OS-action-2}). \section{Derivation of (\ref{IIA})} $~~\,~$ Let us consider an action $S = S^{({\rm II})} [H, {\cal U}] + S_m [H, \chi]$, where $S^{({\rm II})} [H, {\cal U}] $ is the type II supergravity action, eq. (\ref{II}), and $S_m [H, \chi]$ stands for the mass term \begin{eqnarray} S_m[H, \chi]= -\frac 12m^2\int d^8z\, H^{\un a}H_{\un a} +\frac 12\gamma m^2\int d^6z \, \chi^\alpha\chi_\alpha +\frac 12\gamma^* m^2\int d^6\bar z \, \bar \chi_{\dot\alpha}\bar\chi^{\dot\alpha}~, \end{eqnarray} with $\gamma$ a complex parameter. The latter should be determined from the requirement that the equations of motion \begin{eqnarray} \label{sweet} \Box\Big[ \Pi_{3/2}^T -2\Pi_{1/2}^T \Big]H_{\un a} -m^2H_{\un a} +\frac 12[D_\alpha,\Bar D_{\dot\alpha}]{\cal U} &=&0~, \\ \label{tasty} \frac 18\Bar D^2D_\alpha[D_\beta,\Bar D_{\dot\beta}] H^{\un b} +\frac 34\Bar D^2D_\alpha{\cal U} +m^2\gamma\chi_\alpha &=&0~,~~ \end{eqnarray} be equivalent to (\ref{32irrepsp}). Since ${\cal U}$ is linear, (\ref{sweet}) implies that $H_{\un a}$ is linear, $D^2H_{\un a}=0$. It is then possible to show that $\Bar D^{\dot\alpha}H_{\un a}\propto \chi_\alpha$ on-shell. 
To prove this proportionality, first contract $\Bar D^{\dot\alpha}$ on (\ref{sweet}) and use the following identities: \begin{eqnarray} \label{goodvibes} \Bar D^2 D_\beta [D_\alpha, \Bar D_{\dot\alpha}]H^{\un a} &=&2i\Bar D^2 D^\alpha\pa_{(\alpha}{}^{\dot\alpha}H_{\beta)\dot\alpha}~, \\ \Box\Bar D^{\dot\alpha}\Pi_{1/2}^T H_{\un a}&=& -\frac i8\Bar D^2D^\delta\pa_{(\alpha}{}^{\dot\beta} H_{\delta)\dot\beta} =-\frac 1{16}\Bar D^2 D_\beta [D_\alpha, \Bar D_{\dot\alpha}]H^{\un a}~,~ \end{eqnarray} to arrive at: \begin{eqnarray} +\frac 18\Bar D^2 D_\beta [D_\alpha, \Bar D_{\dot\alpha}]H^{\un a} +\frac 34\Bar D^2D_\alpha{\cal U} - m^2 \Bar D^{\dot\alpha}H_{\un a} =0~. \end{eqnarray} Substituting the first two terms with (\ref{tasty}) leads to: \begin{eqnarray} \label{thebomb} \gamma\chi_\alpha + \Bar D^{\dot\alpha}H_{\un a}=0~, \end{eqnarray} and eliminating ${\cal U}$ in (\ref{tasty}) by substituting (\ref{thebomb}) back in yields: \begin{eqnarray} +\frac 18\Bar D^2D_\alpha[D_\beta,\Bar D_{\dot\beta}]H^{\un b} -\frac 34\frac 1\gamma\Bar D^2D_\alpha[D^\beta\Bar D^{\dot\beta} -\frac \gamma{\gamma^*}\Bar D^{\dot\beta}D^\beta]H_{\un b} +m^2\gamma\chi_\alpha =0~~.~~ \end{eqnarray} This means that $\chi_\alpha$ will vanish if $\gamma$ is real and $\gamma=6$. Equation (\ref{thebomb}) implies that $H_{\un a}$ is irreducible when $\chi_\alpha$ vanishes. This means that $\Pi^T_{3/2}H_{\un a}=H_{\un a}$ and the Klein-Gordon equation is obtained from (\ref{sweet}). We therefore obtain (\ref{IIA}). \end{appendix}
\section{Introduction} \label{sec:intro} The $125$~GeV particle discovered by the ATLAS and CMS experiments~\citep{ Aad:2012tfa,Chatrchyan:2012xdj,Aad:2014aba,Khachatryan:2014jba,Aad:2015zhl} has been established as a spin-0 Higgs boson rather than a spin-2 particle~\citep{ Chatrchyan:2012jja,Aad:2013xqa}. The measurements of its couplings to fermions and gauge bosons are being updated constantly and the results confirm consistency with the expected Standard Model (SM) values~\citep{Khachatryan:2014jba, Khachatryan:2014kca,Aad:2015mxa,Aad:2015gba}. However, to establish that a scalar doublet $\Phi$ does indeed break the electroweak (EW) symmetry spontaneously when it acquires a nonzero vacuum expectation value, $v$, requires a direct measurement of the Higgs boson self coupling, $\lambda$. The minimal SM, merely on the basis of the economy of fields and interactions, assumes the existence of only one physical scalar, $h$, with $J^{PC} = 0^{++}$. Although Ref.~\citep{Khachatryan:2014kca} has ruled out a pure pseudoscalar hypothesis at a $99.98$\% confidence limit (C.L.), the new particle can still have a small CP-odd admixture to its couplings. Theoretically, the Higgs boson self coupling appears when, as a result of electroweak symmetry breaking in the SM, the scalar potential $V(\Phi)$ gives rise to the Higgs boson self interactions as follows: \begin{gather} V(\Phi) = \mu^2 \Phi^\dag \Phi + \lambda (\Phi^\dag \Phi)^2 \to \frac{1}{2}m^2_h h^2 + \lambda v h^3 + \frac{\lambda}{4} h^4, \label{Vphi} \end{gather} where $\lambda \!=\! \lambda_{\rm SM} \!=\! m^2_h/(2v^2) \!\approx\! 0.13$ and $\Phi$ is an $SU(2)_L$ scalar doublet. For a direct and independent measurement of $\lambda$ we need to access double Higgs boson production experimentally. However, this path is extremely challenging and requires a very high integrated luminosity to collect a substantial di-Higgs event rate, and an excellent detector with powerful background rejection capabilities. 
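As a quick numerical cross-check (added here for illustration only), the quoted value $\lambda_{\rm SM} \approx 0.13$ follows from $m_h = 125$~GeV together with the standard electroweak vacuum expectation value $v \approx 246$~GeV; the latter number is assumed here, since it is not stated in the text.

```python
# Sketch: verify lambda_SM = m_h^2 / (2 v^2) ~ 0.13.
# v = 246 GeV is the standard electroweak value, assumed here.
m_h = 125.0  # Higgs boson mass [GeV]
v = 246.0    # vacuum expectation value [GeV]

lambda_sm = m_h**2 / (2.0 * v**2)
print(f"lambda_SM = {lambda_sm:.3f}")  # -> lambda_SM = 0.129
```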
On the theoretical side we also need to take into account all vertices involved in the process that are sensitive to the presence of new physics beyond the SM (BSM). There are various proposals to build new, powerful high energy $e^+e^-\!\!$, $e^-p$ and $pp$ colliders in the future. We have based our study on a {\em Future Circular Hadron-Electron Collider} ({FCC-he}) which employs the $50$~TeV proton beam of a proposed $100$~km circular $pp$ collider ({FCC-pp}), and electrons from an Energy Recovery Linac (ERL) being developed for the {\em Large Hadron Electron Collider} ({LHeC})~\citep{AbelleiraFernandez:2012cc,Bruening:2013bga}. The design of the ERL is such that the $e^-p$ and $pp$ colliders operate simultaneously, thus optimising the cost and the physics synergies between $e^-p$ and $pp$ processes. Such facilities would be potent Higgs research centres, see e.g. Ref.~\citep{AbelleiraFernandez:2012ty}. The LHeC and the FCC-he configurations are advantageous with respect to the Large Hadron Collider (LHC) (or the FCC-pp in general) in that \begin{inparaenum}[(1)] \item the initial states are asymmetric, and hence backward and forward scattering can be disentangled, \item they provide a clean environment with suppressed backgrounds from strong interaction processes, free from issues such as pile-up and multiple interactions, \item such machines are known for high precision measurements of the dynamical properties of the proton, allowing simultaneous tests of EW and QCD effects. \end{inparaenum} A detailed report on the physics and detector design concepts can be found in Ref.~\citep{AbelleiraFernandez:2012cc}. The choice of an ERL energy of $E_e = 60$ to $120$~GeV, with an available proton energy $E_p = 50\,(7)$~TeV, would provide a centre of mass (c.m.s.) energy of $\sqrt{s} \approx 3.5(1.3)$ to $5.0(1.8)$~TeV at the FCC-he (LHeC) using the FCC-pp (LHC) protons. The FCC-he would have sufficient c.m.s.
energy to probe the Higgs boson self coupling via double Higgs boson production. The inclusive Higgs production cross section at the FCC-he is expected to be about five times larger than at the proposed $100$~km circular $e^+e^-$ collider (FCC-ee). This article is organised as follows: We discuss the process to produce the di-Higgs events in an $e^-p$ collider and the most general Lagrangian with all relevant new physics couplings in \cref{sec:formalism}. In \cref{sec:analysis} all the simulation tools and the kinematic cuts that are required to study the sensitivity of the involved couplings are given. Here we also discuss the details of the analyses that have gone into the study. In \cref{eftval} there is a discussion on the validity of the effective theory considered here. Finally, we conclude and draw inferences from the analysis in \cref{sec:conc}. \section{Formalism} \label{sec:formalism} In an $e^-p$ collider environment a double Higgs event can be produced through: \begin{inparaenum}[(1)] \item the charged current process, $p\,e^- \to hhj\nu_e$, and \item the neutral current process, $p\,e^- \to hhj\,e^-$, \end{inparaenum} if there are no new physics processes involved. The SM background will cloud each of these processes greatly, and it will be a formidable task to separate signal from backgrounds. Here we study the charged current process because its signal strength is superior to that of the neutral current process. Hence we show in \cref{fig:figW} the Higgs boson pair production, at leading order, due to the resonant and non-resonant contributions in charged current deep inelastic scattering (CC DIS) at an $e^-p$ collider.
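The c.m.s. energies quoted above can be cross-checked from the beam energies via the massless-beam approximation $\sqrt{s} \approx 2\sqrt{E_e E_p}$. The short sketch below (Python, purely for illustration) reproduces the approximate values $3.5(1.3)$ to $5.0(1.8)$~TeV.

```python
import math

def cms_energy(E_e, E_p):
    """Approximate e-p centre-of-mass energy, sqrt(s) ~ 2*sqrt(E_e*E_p),
    valid when the beam energies (in GeV) are much larger than the masses."""
    return 2.0 * math.sqrt(E_e * E_p)

# ERL electrons of 60-120 GeV on FCC-pp (50 TeV) or LHC (7 TeV) protons.
for E_e in (60.0, 120.0):
    for name, E_p in (("FCC-he", 50000.0), ("LHeC", 7000.0)):
        print(f"{name}, E_e = {E_e:.0f} GeV: sqrt(s) ~ {cms_energy(E_e, E_p) / 1000:.1f} TeV")
```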
As seen in \cref{fig:figW}, the \begin{figure}[!htbp] \centering \subfloat[]{\includegraphics[trim=0 0 0 10,clip,width=0.15\textwidth]{fig3.pdf}\label{fig:figWc}} \subfloat[]{\includegraphics[trim=0 0 0 10,clip,width=0.15\textwidth]{fig2.pdf}\label{fig:figWb}} \subfloat[]{\includegraphics[trim=0 0 0 10,clip,width=0.15\textwidth]{fig1.pdf}\label{fig:figWa}} \caption{\small Leading order diagrams contributing to the process $p\,e^- \to h h j \nu_e$ with $q \equiv u,c,\bar d,\bar s$ and $q^\prime \equiv d,s,\bar u, \bar c$ respectively.} \label{fig:figW} \end{figure} di-Higgs production involves the $hhh, hWW$ and $hhWW$ couplings. Note that the $hWW$ coupling will be extensively probed at the LHC, where its value conforms to the SM prediction~\citep{Khachatryan:2014jba,Khachatryan:2014kca,Aad:2015gba}. Through the vector boson fusion Higgs production mode at the LHC, BSM analyses to determine the CP and spin quantum numbers of the Higgs boson have been performed in Refs.~\citep{Plehn:2001nj, Hagiwara:2009wt, Djouadi:2013yb, Englert:2012xt}. The authors of Ref.~\citep{Biswal:2012mp} have shown the sensitivity to new physics contributions in $hWW$ couplings at $e^-p$ colliders through a study of the azimuthal angle correlation for single Higgs boson production in $p\,e^- \to hj\nu_e$ with an excellent signal-to-background ratio based on the $h\to b\bar{b}$ decay channel. Since we do not have any direct measurement of the Higgs boson self coupling ($hhh$) or the quartic ($hhWW$) coupling, there can be several possible sources of new physics in the scalar sector. This article studies, for the proposed FCC-he, the sensitivity of the Higgs boson self coupling around its SM value, including BSM contributions, by considering all possible Lorentz structures. In order to make the study complete we also retain the possibilities for $hWW$ couplings that appear in the di-Higgs production modes.
Following Refs.~\citep{Biswal:2012mp,Alloul:2013naa} the most general Lagrangian which can account for all bosonic couplings relevant for the phenomenology of the Higgs boson sector at the FCC-he are the three-point and four-point interactions involving at least one Higgs boson field. It can be written as: \begin{align} {\cal L}^{(3)}_{_{hhh}} = \dfrac{m^2_h}{2v} & (1 - \ensuremath{g^{(1)}_{_{hhh}}}) h^3 + \dfrac{1}{2 v} \ensuremath{g^{(2)}_{_{hhh}}} h \ensuremath{\partial}_\mu h \ensuremath{\partial}^\mu h, \label{lagh} \\ {\cal L}^{(3)}_{_{hWW}} = - g \bigg[ & \dfrac{\ensuremath{g^{(1)}_{_{hWW}}}}{2m_W} W^{\mu\nu} W^\dag_{\mu\nu} h + \dfrac{\ensuremath{g^{(2)}_{_{hWW}}}}{m_W} ( W^\nu \ensuremath{\partial}^\mu W^\dag_{\mu\nu} h + {\rm h.c} ) \notag\\ &\quad+ \dfrac{\ensuremath{\tilde g_{_{hWW}}}}{2m_W} W^{\mu\nu} \widetilde W^\dag_{\mu\nu} h \bigg], \label{lag3} \\ {\cal L}^{(4)}_{_{hhWW}} = - g^2 \bigg[ & \dfrac{\ensuremath{g^{(1)}_{_{hhWW}}}}{4m^2_W} W^{\mu\nu} W^\dag_{\mu\nu} h^2 + \dfrac{\ensuremath{g^{(2)}_{_{hhWW}}}}{2m^2_W} ( W^\nu \ensuremath{\partial}^\mu W^\dag_{\mu\nu} h^2 + {\rm h.c} ) \notag\\ &\quad+ \dfrac{\ensuremath{\tilde g_{_{hhWW}}}}{4m^2_W} W^{\mu\nu} \widetilde W^\dag_{\mu\nu} h^2 \bigg]. \label{lag4} \end{align} Here $g^{(i)}_{(\cdots)}, i = 1,2$, and $\tilde g_{(\cdots)}$ are real coefficients corresponding to the CP-even and CP-odd couplings respectively (of the $hhh$, $hWW$ and $hhWW$ anomalous vertices), $W_{\mu\nu} = \ensuremath{\partial}_\mu W_\nu - \ensuremath{\partial}_\nu W_\mu$ and $\widetilde W_{\mu\nu} = \frac{1}{2} \ensuremath{\epsilon}_{\mu\nu\rho\sigma} W^{\rho\sigma}$. In \cref{lagh} $\ensuremath{g^{(1)}_{_{hhh}}}$ is parametrised with a multiplicative constant with respect to $\lambda_{\rm SM}$ as in \cref{Vphi}. Thus the Higgs self coupling $\lambda$ appears as $\ensuremath{g^{(1)}_{_{hhh}}} \lambda_{\rm SM}$ in the expression for $V(\Phi)$. 
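To make the normalisation explicit (a one-line check added here for clarity), note that ${\cal L}_{\rm SM} \supset -\lambda_{\rm SM}\, v\, h^3$, so that adding the first term of \cref{lagh} simply rescales the cubic self interaction:
\begin{gather}
-\lambda_{\rm SM}\, v\, h^3 + \frac{m^2_h}{2v}\,(1 - \ensuremath{g^{(1)}_{_{hhh}}})\, h^3 = -\ensuremath{g^{(1)}_{_{hhh}}}\, \lambda_{\rm SM}\, v\, h^3, \qquad \lambda_{\rm SM} = \frac{m^2_h}{2v^2},
\end{gather}
which reproduces the statement that $\lambda$ enters as $\ensuremath{g^{(1)}_{_{hhh}}} \lambda_{\rm SM}$; the derivative operator proportional to $\ensuremath{g^{(2)}_{_{hhh}}}$ is unaffected by this rescaling.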
Clearly, in the SM $\ensuremath{g^{(1)}_{_{hhh}}} = 1$ and all other anomalous couplings vanish in \cref{lagh,lag3,lag4}. The Lorentz structures of \cref{lagh,lag3,lag4} can be derived from the $SU(2)_L \otimes U(1)_Y$ gauge invariant dimension-6 operators given in Ref.~\citep{Alloul:2013naa}. The complete Lagrangian we work with is as follows: \begin{align} {\cal L} =& {\cal L}_{\rm SM} + {\cal L}^{(3)}_{_{hhh}} + {\cal L}^{(3)}_{_{hWW}} + {\cal L}^{(4)}_{_{hhWW}}. \label{lag} \end{align} The most general effective vertices take the form: \begin{align} & \Gamma_{hhh} = - 6 \lambda v \bigg[ \ensuremath{g^{(1)}_{_{hhh}}} + \dfrac{\ensuremath{g^{(2)}_{_{hhh}}}}{3 m^2_h} (p_1 \cdot p_2 + p_2 \cdot p_3 + p_3 \cdot p_1) \bigg], \\ & \Gamma_{hW^-W^+} = g m_W \bigg[\bigg\{1 + \dfrac{\ensuremath{g^{(1)}_{_{hWW}}}}{m^2_W} p_2 \cdot p_3 + \dfrac{\ensuremath{g^{(2)}_{_{hWW}}}}{m^2_W} (p^2_2 + p^2_3) \bigg\} \eta^{\mu_2 \mu_3} \notag\\ & \qquad\,\qquad\quad\quad - \dfrac{\ensuremath{g^{(1)}_{_{hWW}}}}{m^2_W} p_2^{\mu_3} p_3^{\mu_2} - \dfrac{\ensuremath{g^{(2)}_{_{hWW}}}}{m^2_W} (p_2^{\mu_2} p_2^{\mu_3} + p_3^{\mu_2} p_3^{\mu_3}) \notag\\ & \qquad\,\qquad\quad\quad - {\rm i} \dfrac{\ensuremath{\tilde g_{_{hWW}}}}{m^2_W} \ensuremath{\epsilon}_{\mu_2\mu_3\mu\nu} p_2^\mu p_3^\nu \bigg], \end{align} \begin{align} & \Gamma_{hhW^-W^+} = g^2 \bigg[\bigg\{ \dfrac{1}{2} + \dfrac{\ensuremath{g^{(1)}_{_{hhWW}}}}{m^2_W} p_3 \cdot p_4 + \dfrac{\ensuremath{g^{(2)}_{_{hhWW}}}}{m^2_W} (p^2_3 + p^2_4) \bigg\} \eta^{\mu_3 \mu_4} \notag\\ & \qquad\ \,\qquad\quad\quad - \dfrac{\ensuremath{g^{(1)}_{_{hhWW}}}}{m^2_W} p_3^{\mu_4} p_4^{\mu_3} - \dfrac{\ensuremath{g^{(2)}_{_{hhWW}}}}{m^2_W} (p_3^{\mu_3} p_3^{\mu_4} + p_4^{\mu_3} p_4^{\mu_4}) \notag\\ & \qquad\ \,\qquad\quad\quad - {\rm i} \dfrac{\ensuremath{\tilde g_{_{hhWW}}}}{m^2_W} \ensuremath{\epsilon}_{\mu_3\mu_4\mu\nu} p_3^\mu p_4^\nu \bigg]. 
\end{align} The momenta and indices considered above are of the same order as they appear in the index of the respective vertex $\Gamma$. For example, in the vertex $\Gamma_{hW^-W^+}$ the momenta of $h, W^-$ and $W^+$ are $p_1,p_2$ and $p_3$ respectively. Similarly, $\mu_2$ and $\mu_3$ are the indices of $W^-$ and $W^+$. Using the above effective field theory (EFT) approach a study has been performed as an example for di-Higgs production in vector boson fusion at the LHC in Ref.~\citep{Alloul:2013naa}. \section{Simulation Tools and Analysis} \label{sec:analysis} We begin our probe of the sensitivity of these couplings by building a model file for the Lagrangian in \cref{lag} using \texttt{FeynRules}~\citep{Alloul:2013naa}, and then simulate the charged current double Higgs boson production channel $p\,e^- \to hhj\nu_e$ (see \cref{fig:figW}), with $h$ further decaying into a $b \bar b$ pair,\footnote{In a $pp$ collider like the LHC, the main challenge of this search is to distinguish the signal of four bottom quarks in the final state (which hadronise into jets ($b$-jets)) from the QCD multijet backgrounds. Such challenges are discussed in the ATLAS and CMS studies~\cite{Aad:2015uka, Aad:2015xja, Khachatryan:2015yea}. } in the FCC-he set up with $\sqrt{s} \approx 3.5$~TeV. Our analysis starts with optimising the SM di-Higgs signal events with respect to all possible backgrounds from multi-jet events, $ZZ$+jets, $hb\bar b$ + jets, $hZ$ + jets and $t\bar t$+jets in charged and neutral current deep-inelastic scattering (DIS) and in photo-production\footnote{ We cross checked the modelling of photo-production cross sections from \texttt{MadGraph5} by switching on the ``Improved Weizs\"acker-Williams approximation formula'' described in Ref.~\citep{Budnev:1974de} to give the probability of photons from the incoming electron, versus the expectation of the \texttt{Pythia} Monte Carlo generator.
}, taking into account appropriate $b$-tagged jets and a high performance multipurpose $4\pi$ detector. In \cref{tab:xsec} we give an estimation of the cross sections for signal and backgrounds, considering all possible modes with basic cuts. We then investigate the limits on each coupling taking BSM events as the signal. For the generation of events we use the Monte Carlo event generator \texttt{MadGraph5}~\citep{Alwall:2011uj} and the \texttt{CTEQ6L1}~\citep{Pumplin:2002vw} parton distribution functions. Further fragmentation and hadronisation are done with a {\em customised} \texttt{Pythia-PGS}\footnote{ In \texttt{Pythia-PGS} we modified several parameters so as to use it for $e^-p$ collisions and to obtain the required number of events in each simulation. The coordinate system is set as for the HERA experiments, i.e. the proton direction defines the forward direction. The modifications have been successfully validated using neutral current DIS events, and QCD ISR has been switched off. For $e^-p$ collisions multiple interactions and pile-up are expected to be very small and are switched off in our studies. }~\citep{Sjostrand:2006za}. The detector level simulation is performed with reasonably chosen parameters using \texttt{Delphes}\footnote{ For \texttt{Delphes} we used the ATLAS set-up with modifications in the $|\eta|$ ranges for forward and $b$-tagged jets up to 7 and 5 respectively, with a $70$\% tagging efficiency for $b$-jets as mentioned in the text. The resolution parameters for energy deposits in the calorimeters are based on the ATLAS Run-1 performance. }~\citep{deFavereau:2013fsa}, and jets were clustered using \texttt{FastJet}~\citep{Cacciari:2011ma} with the anti-$k_T$ algorithm~\citep{Cacciari:2008gp} using the distance parameter $R = 0.4$. The factorisation and renormalisation scales for the signal simulation are fixed to the Higgs boson mass, $m_h = 125$~GeV.
The background simulations are done with the default \texttt{MadGraph5} dynamic scales. The $e^-$ polarisation is assumed to be $-80$\%. \begin{table}[!htbp] \centering \resizebox{\linewidth}{!}{ {\tabulinesep=5pt \begin{tabu}{|lccc|}\hline\hline Process & {\scshape cc} (fb) & {\scshape nc} (fb) & {\scshape photo} (fb) \\ \hline\hline Signal: & $2.40 \times 10^{-1}$ & $3.95 \times 10^{-2}$ & $3.30 \times 10^{-6}$ \\ \hline\hline $b\bar b b\bar b j$: & $8.20 \times 10^{-1}$ & $3.60 \times 10^{+3}$ & $2.85 \times 10^{+3}$ \\ \hline $b\bar bjjj$: & $6.50 \times 10^{+3}$ & $2.50 \times 10^{+4}$ & $1.94 \times 10^{+6}$ \\ \hline $ZZj$ ($Z\to b\bar b$): & $7.40 \times 10^{-1}$ & $1.65 \times 10^{-2}$ & $1.73 \times 10^{-2}$ \\ \hline $t\bar tj$ (hadronic): & $3.30 \times 10^{-1}$ & $1.40 \times 10^{+2}$ & $3.27 \times 10^{+2}$ \\ \hline $t\bar tj$ (semi-leptonic): & $1.22 \times 10^{-1}$ & $4.90 \times 10^{+1}$ & $1.05 \times 10^{+2}$ \\ \hline $hb\bar bj$ $(h \to b\bar b)$:& $5.20 \times 10^{-1}$ & $1.40 \times 10^{ 0}$ & $2.20 \times 10^{-2}$ \\ \hline $hZj$ $(Z,h\to b\bar b)$: & $6.80 \times 10^{-1}$ & $9.83 \times 10^{-3}$ & $6.70 \times 10^{-3}$ \\ \hline\hline \end{tabu}} } \caption{\small Cross sections of signal and backgrounds in charged current ({\scshape cc}), neutral current ({\scshape nc}) and photo-production ({\scshape photo}) modes for $E_e = 60$ GeV and $E_p = 50$ TeV, where $j$ is light quarks and gluons. For this estimation we use basic cuts $|\eta| \le 10$ for light-jets, leptons and $b$-tagged jets, $p_T \ge 10$ GeV, $\Delta R_{\rm min} = 0.4$ for all particles. 
And electron polarisation is taken to be $-0.8$.} \label{tab:xsec} \end{table} \begin{figure*}[ht] \noindent\begin{minipage}{\linewidth} \centering \resizebox{\linewidth}{!}{ {\tabulinesep=5pt \begin{tabu}{|c|c|c|c|c|c|c|c|c|c|}\hline\hline Cuts / Samples & Signal & $4b$+jets & $2b$+jets & Top & $ZZ$ & $b\bar{b} H$ & $ZH$ & Total Bkg & Significance \\ \hline\hline Initial & $2.00\times 10^3$ &$3.21\times 10^7$ &$2.32\times 10^9$ &$7.42\times 10^6$ &$7.70\times 10^3$ &$1.94\times 10^4$ &$6.97\times 10^3$ &$2.36\times 10^9$ &0.04\\ \hline At least $4b+1j$ &$3.11\times 10^2$ &$7.08\times 10^4$ &$2.56\times 10^4$ &$9.87\times 10^3$ &$7.00\times 10^2$ &$6.32\times 10^2$ &$7.23\times 10^2$ &$1.08\times 10^5$ &0.94\\ \hline Lepton rejection $p_T^\ell > 10$~GeV &$3.11\times 10^2$ &$5.95\times 10^4$ &$9.94\times 10^3$ &$6.44\times 10^3$ &$6.92\times 10^2$ &$2.26\times 10^2$ &$7.16\times 10^2$ &$7.75\times 10^4$ &1.12\\ \hline Forward jet $\eta_J > 4.0$ & 233 & 13007.30 & 2151.15 & 307.67 & 381.04 & 46.82 & 503.22 & 16397.19 & 1.82 \\ \hline $\slashed E_{T} > 40$~GeV &155 & 963.20 & 129.38 & 85.81 & 342.18 & 19.11 & 388.25 & 1927.93 & 3.48 \\ \hline $\Delta\phi_{\slashed E_T j} > 0.4$ & 133 & 439.79 & 61.80 & 63.99 & 287.10 & 14.53 & 337.14 & 1204.35 & 3.76 \\\hline $m_{bb}^1 \in [95,125]$, $m_{bb}^2 \in [90,125]$ & 54.5 & 28.69 & 5.89 & 6.68 & 5.14 & 1.42 & 17.41 & 65.23 & 6.04 \\ \hline $m_{4b} >290$~GeV & 49.2 & 10.98 & 1.74 & 2.90 & 1.39 & 1.21 & 11.01 & 29.23 &7.51 \\ \hline\hline \end{tabu}} } \captionof{table}{\small A summary table of event selections to optimise the signal with respect to the backgrounds in terms of the weights at 10~\ensuremath{\rm ab^{-1}}. In the first column the selection criteria are given as described in the text. The second column contains the weights of the signal process $p\,e^- \to hhj\nu_e$, where both the Higgs bosons decay to $b\bar b$ pair. 
In the next columns the sum of weights of all individual prominent backgrounds in charged current, neutral current and photo-production are given with each selection, whereas in the penultimate column all backgrounds' weights are added. The significance is calculated at each stage of the optimised selection criteria using the formula ${\cal S} = \sqrt{2 [ ( S + B ) \log ( 1 + S/B ) - S ]}$, where $S$ and $B$ are the expected signal and background yields at a luminosity of 10~\ensuremath{\rm ab^{-1}} respectively. This optimisation has been performed for $E_e = 60$~GeV and $E_p = 50$~TeV.} \label{tab:cut_flow} \bigskip \includegraphics[trim=0 0 0 50,clip,width=0.33\textwidth,height=0.3\textwidth] {model_hhh_cutfinal_fjdphiMET_fit.pdf} \includegraphics[trim=0 0 0 50,clip,width=0.33\textwidth,height=0.3\textwidth] {model_hww_cutfinal_fjdphiMET_fit.pdf} \includegraphics[trim=0 0 0 50,clip,width=0.33\textwidth,height=0.3\textwidth] {model_hhww_cutfinal_fjdphiMET_fit.pdf} \captionof{figure}{\small Azimuthal angle distributions, at \texttt{Delphes} detector-level, between missing transverse energy, $\slashed E_T$, and the forward jet, J, in the SM (including backgrounds) and with the anomalous $hhh$, $hWW$ and $hhWW$ couplings. The error bars are statistical.} \label{fig:dist} \end{minipage} \end{figure*} \subsection{Cut-based optimisation} \label{sec:cuts} We base our simulation on the following kinematic selections in order to optimise the significance of the SM signal over all the backgrounds: \begin{inparaenum}[(1)] \item At least four $b$-tagged jets and one additional light jet are selected in an event with transverse momenta, $p_T$, greater than $20$~GeV. \item For $non$-$b$-tagged jets, the absolute value of the rapidity, $|\eta|$, is taken to be less than $7$, whereas for $b$-tagged jets it is less than $5$. 
\item The four $b$-tagged jets must be well separated and the distance between any two jets, defined as $\Delta R = \sqrt{(\Delta \phi)^2+(\Delta\eta)^2}$, $\phi$ being the azimuthal angle, is taken to be greater than $0.7$. \item Charged leptons with $p_T > 10$~GeV are rejected. \item For the largest $p_T$ forward jet J (the $non$-$b$-tagged jet after selecting at least four $b$-jets) $\eta_{J} > 4.0$ is required. \item The missing transverse energy, $\slashed E_{T}$, is taken to be greater than $40$~GeV. \item The azimuthal angle between $\slashed E_T$ and the $b$-tagged jets are: $\Delta \Phi_{\slashed E_{T},\ leading\,jet} > 0.4$ and $\Delta\Phi_{\slashed E_{T},\ sub-leading\,jet} > 0.4$. \item The four $b$-tagged jets are grouped into two pairs such that the distances of each pair to the true Higgs mass are minimised. The leading mass contains the leading $p_T$-ordered $b$-jet. The first pair is required to be within $95$-$125$~GeV and the second pair within $90$-$125$~GeV\footnote{ Among the four $b$-tagged jets, choices of pairing have been performed via appropriate selection of mass window, keeping in mind to reconstruct the Higgs boson mass, $m_h$, in the signal as well as the $Z$-boson mass, $m_Z$, in the backgrounds. We choose the pair in which the quadratic sum ($m_1 -m_c$) and ($m_2-m_c$) is smallest, and in each mass $m_i$, mass $m_1$ has the largest $p_T$ $b$-jet, $m_c = (m_h - m_0)$~GeV, and normally $m_0\approx 20$-$40$~GeV (which is not important, since the false pairing will have a much higher quadratic sum). }. \item The invariant mass of all four $b$-tagged jets has to be greater than $290$~GeV. \end{inparaenum} In the selections (described above) the $b$-tagging efficiency is assumed to be $70$\%, with fake rates from $c$-initiated jets and light jets to the $b$-jets of $10$\% and $1$\% respectively. Corresponding weights\footnote{Here weights mean the number of expected events at a particular luminosity. 
The number of events for the photo-production of $4b$+jets is derived using the efficiencies of the Monte Carlo samples due to the low statistics. The other backgrounds are obtained directly from the event selection.} at a particular luminosity of 10~\ensuremath{\rm ab^{-1}} for the signal and all backgrounds, together with the significance, have been tabulated in \cref{tab:cut_flow}. The significance at each stage of the cuts is calculated using the Poisson formula ${\cal S} = \sqrt{2 [ ( S + B ) \log ( 1 + S/B ) - S ]}$, where $S$ and $B$ are the expected signal and background yields at a particular luminosity, respectively. From \cref{tab:cut_flow} we recognise that the selection on the forward jet plays a very significant role in distinguishing the signal from the background at an FCC-he type machine. Selecting events with $\eta_J > 4.0$ costs 25\% of the signal events, while around 80\% of the total background is removed. The subsequent cut on missing energy ($\slashed{E}_T > 40$~GeV) is also powerful: it removes 88\% of the remaining total background, whereas only 34\% of the signal events surviving the forward jet selection are lost. Furthermore, the mass window cut on the invariant masses of the two $b$-tagged jet pairs, applied after the $\Delta\phi_{\slashed E_Tj} > 0.4$ selection, reduces the total background to 5\% while 40\% of the signal events remain. The requirement on the invariant mass of the four $b$-tagged jets is similarly efficient: it reduces the backgrounds by a further 44\% while retaining 90\% of the signal with respect to the preceding two $b$-tagged jet mass window selection, which yields a 20\% enhancement in the significance. Overall, this cut-based optimisation raises the significance from 0.04 for the initial events to 7.51.
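The significance numbers quoted in \cref{tab:cut_flow} can be reproduced directly from the tabulated yields; the short sketch below (Python, purely for illustration) implements the Poisson formula used above.

```python
import math

def significance(S, B):
    """Poissonian significance sqrt(2[(S+B)ln(1+S/B) - S]) for expected
    signal yield S and background yield B at a given luminosity."""
    return math.sqrt(2.0 * ((S + B) * math.log(1.0 + S / B) - S))

# Final selection of the cut flow: S = 49.2, B = 29.23.
print(round(significance(49.2, 29.23), 2))     # -> 7.51
# Before any cuts: S = 2.00e3, B = 2.36e9.
print(round(significance(2.00e3, 2.36e9), 2))  # -> 0.04
```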
It is also important to mention that photo-production of the $4b$ final state is one of the main backgrounds, and that backgrounds with a similar final state topology, such as $Zh$ with $Z, h \to b\bar b$, are equally important. Hence an efficient choice of selection criteria is crucial to reduce these backgrounds. \subsection{Kinematic distributions and observable} \label{kinobs} For our analysis we take {\em ad hoc} values of positive and negative couplings in such a manner that the production cross section does not deviate much from the SM value, and in particular modifications in the shapes of the normalised azimuthal angle distribution between the missing transverse energy and the leading (forward) jet are studied, in addition to other kinematic distributions. Taking into account all the above criteria we study BSM modifications in various differential distributions at the \texttt{Delphes} detector-level. This leads to the following observations: \begin{inparaenum}[(1)] \item The $p_T$ distributions show the usual tail behaviour, i.e. for the chosen values of the anomalous couplings more events populate the high-$p_T$ region than in the SM. \item In the case of the $\eta$ distributions: \begin{inparaenum}[(a)] \item For the forward jet, particularly for the couplings of the $hWW$ and $hhWW$ vertices, the mean $\eta$ is more central in the detector. The behaviour is similar if we increase the c.m.s. energy of the collider by increasing $E_e$ to higher ($>60$~GeV) values. For the $hhh$ couplings the $\eta$ distribution remains the same as for the SM. \item In the case of $b$-tagged jets, for all values of the anomalous couplings, the distribution is populated around the SM value of $\eta$. \end{inparaenum} \item For the specific observable of the azimuthal angle difference between the missing transverse energy and the forward jet ($\Delta\phi_{\slashed E_T J}$) the shapes are clearly distinguishable from the SM.
\end{inparaenum} This behaviour is shown in \cref{fig:dist}, where the values of the couplings are {\em ad hoc}. However, these values are taken only for the purpose of illustration, and in the limit of the couplings going to their SM values the shapes will coincide with the SM distributions. The specific characteristics of the curves also depend on the details of the selection requirements, but the qualitative differences could be seen at every selection step. The shape of the curves is due to the fact that all new physics couplings have a momentum dependent structure (apart from \ensuremath{g^{(1)}_{_{hhh}}}) and positive or negative interference with SM events. We note that $\Delta\phi_{\slashed E_T J}$ is a novel observable and commands more focused and deeper analyses. In this regard one should follow the analysis (as performed in Ref.~\citep{Dutta:2013mva}) based on an asymmetry with two equal bins in $\Delta\phi_{\slashed E_T J} \lessgtr \pi/2$, defined as \begin{gather} {\cal A}_{\Delta\phi_{\slashed E_T J}} = \dfrac{|A_{\Delta \phi > \pi/2}| - |A_{\Delta \phi < \pi/2}|} {|A_{\Delta \phi > \pi/2}| + |A_{\Delta \phi <\pi/2}|}, \label{asym_def} \end{gather} \begin{table}[!htbp] \centering {\tabulinesep=3pt \begin{tabu}{|l|l|c|c|}\hline\hline \multicolumn{2}{|c|}{Samples} & ${\cal A}_{\Delta\phi_{\slashed E_T J}}$ & $\sigma {(\rm fb)}$ \\ \hline\hline \multicolumn{2}{|c|}{SM+Bkg} & $0.277 \pm 0.088$ & {} \\ \hline \multirow{2}*{$\ensuremath{g^{(1)}_{_{hhh}}}$} & = \ \ 1.5 & $0.279 \pm 0.052$ & $ 0.18$ \\ & = \ \ 2.0 & $0.350 \pm 0.053$ & $ 0.21$ \\ \hline \multirow{2}*{$\ensuremath{g^{(2)}_{_{hhh}}}$} & = - 0.5 & $0.381 \pm 0.050$ & $ 0.19$ \\ & = \ \ 0.5 & $0.274 \pm 0.024$ & $0.74 $ \\ \hline \multirow{2}*{$\ensuremath{g^{(1)}_{_{hWW}}}$} & = - 0.5 & $0.506 \pm 0.022$ & $ 0.88$ \\ & = \ \ 0.5 & $0.493 \pm 0.020$ & $0.94 $ \\ \hline \multirow{2}*{$\ensuremath{g^{(2)}_{_{hWW}}}$} & = - 0.02 & $0.257 \pm 0.025$ & $ 0.67$ \\ & = \ \ 0.02 & $0.399 \pm 0.040$ & $ 
0.33$ \\ \hline \multirow{2}*{$\ensuremath{\tilde g_{_{hWW}}}$} & = - 1.0 & $0.219 \pm 0.016$ & $ 1.53$ \\ & = \ \ 1.0 & $0.228 \pm 0.016$ & $1.53 $ \\ \hline \multirow{2}*{$\ensuremath{g^{(1)}_{_{hhWW}}}$} & = - 0.05 & $0.450 \pm 0.033$ & $ 0.52$ \\ & = \ \ 0.05 & $0.254 \pm 0.029$ & $ 0.68$ \\ \hline \multirow{2}*{$\ensuremath{g^{(2)}_{_{hhWW}}}$} & = - 0.03 & $0.462 \pm 0.022$ & $1.22 $ \\ & = \ \ 0.03 & $0.333 \pm 0.018$ & $ 1.46$ \\ \hline \multirow{2}*{$\ensuremath{\tilde g_{_{hhWW}}}$} & = - 0.1 & $0.351 \pm 0.020$ & $ 1.60$ \\ & = \ \ 0.1 & $0.345 \pm 0.020$ & $1.61 $ \\ \hline\hline \end{tabu}} \caption{\small Estimation of the asymmetry, defined in \cref{asym_def}, and the associated statistical error for the kinematic distributions in \cref{fig:dist} at an integrated luminosity of 10~\ensuremath{\rm ab^{-1}}. The cross section ($\sigma$) for the corresponding coupling choice is given in the last column, with the same parameters as in \cref{tab:xsec}.} \label{tab:asym_dphi} \end{table} in which $A$ is the yield obtained for the given \ensuremath{\rm ab^{-1}} of data after all of the selections, including both the signal and backgrounds. \cref{tab:asym_dphi} shows the estimation of the asymmetry for a set of representative values of the couplings, shown in \cref{fig:dist}, along with the associated statistical uncertainty. Though the new physics couplings are representative, we can infer the sensitivities of these couplings with respect to the SM+Bkg estimation of the asymmetry from \cref{tab:asym_dphi}, where $\ensuremath{g^{(1)}_{_{hWW}}}$ shows large deviations for both positive and negative choices of its values. Similarly the sensitivities of $\ensuremath{g^{(2)}_{_{hhWW}}}$ and $\ensuremath{\tilde g_{_{hhWW}}}$ can also be noted; however, $\ensuremath{g^{(1)}_{_{hhWW}}}$ is more sensitive for the negative choice of its value.
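The two-bin asymmetry of \cref{asym_def} and a binomial estimate of its statistical error can be sketched as follows; the yields and the $\sqrt{(1-{\cal A}^2)/N}$ error formula are illustrative assumptions, not the exact procedure behind \cref{tab:asym_dphi}.

```python
import math

def dphi_asymmetry(n_above, n_below):
    """Asymmetry A = (N> - N<)/(N> + N<) between the yields with
    dphi > pi/2 and dphi < pi/2, with the binomial statistical error
    sigma_A = sqrt((1 - A^2)/N_total)."""
    total = n_above + n_below
    a = (n_above - n_below) / total
    err = math.sqrt((1.0 - a * a) / total)
    return a, err

# Hypothetical yields after all selections for the full data set.
a, err = dphi_asymmetry(n_above=1150.0, n_below=650.0)
```

Note that a larger total yield shrinks the statistical error, consistent with the smaller uncertainties quoted for the coupling choices with larger cross sections.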
Studying the sensitivity of the non-standard couplings through this asymmetry observable corresponds to dividing the whole kinematic distribution into two halves, i.e.\ two bins of large width. Moreover, this kind of study can be extended with finer bin widths and a more efficient $\chi^2$ analysis (as performed, for example, in Ref.~\citep{Dutta:2013mva}). However, such detailed analyses are beyond the scope of this article. \subsection{Exclusion limits through fiducial cross section as a function of luminosity} \label{exc} Furthermore, we probe the exclusion limits on these couplings as a function of the integrated luminosity, with the log-likelihood method described in Ref.~\citep{Cowan:2010js}, using the fiducial inclusive cross section directly as an observable. In \cref{fig:lumi} we present exclusion plots at $95$\% C.L. for the anomalous $hhh$, $hWW$ and $hhWW$ couplings, where the shaded areas are the allowed regions. The exclusion limits are based on the SM `di-Higgs signal + backgrounds' hypothesis, considering the BSM contributions as the signal at the given luminosity. Each limit is obtained by scanning one coupling while fixing the other couplings to their SM values, where a $5$\% systematic uncertainty is taken into account on both the signal and background yields. \begin{figure}[!htbp] \includegraphics[trim=0 40 0 20,clip,width=0.48\textwidth,height=0.48\textwidth]{lumi.pdf} \caption{\small The exclusion limits on the anomalous $hhh$ (top panel), $hWW$ (middle panel) and $hhWW$ (lower panel) couplings at $95$\% C.L. as a function of integrated luminosity (shaded areas).
Note that the allowed values of $\ensuremath{g^{(2)}_{_{hhh}}}$ and $\ensuremath{g^{(2)}_{_{hWW}}}$ are multiplied by $5$ and $10$ respectively to highlight their exclusion region, since the values are of the order $10^{-1}$.} \label{fig:lumi} \end{figure} From \cref{fig:lumi} our observations are as follows: \begin{inparaenum}[(1)] \item If the integrated luminosity exceeds $0.5\ \ensuremath{\rm ab^{-1}}$, $\ensuremath{g^{(1)}_{_{hhh}}}$ is restricted to be positive. $\ensuremath{g^{(1)}_{_{hhh}}}$ is allowed to be within $0.7$-$2.5$ when the integrated luminosity reaches $15~\ensuremath{\rm ab^{-1}}$, since for values of $1 < \ensuremath{g^{(1)}_{_{hhh}}} \leq 2.1$ the cross section is smaller than that of SM di-Higgs production. \item The $\ensuremath{g^{(2)}_{_{hhh}}}$ value is restricted to around $10^{-1}$. We only exclude the positive part of this coupling because its negative part has cancellations with the SM di-Higgs cross section. \item The sensitivity for the $hWW$ couplings, namely $\ensuremath{g^{(1)}_{_{hWW}}}$ and $\ensuremath{\tilde g_{_{hWW}}}$, can be better probed at much lower energies and luminosities at the LHeC using single Higgs boson production, as shown in Ref.~\citep{Biswal:2012mp}. However, we have shown the sensitivity of $\ensuremath{g^{(2)}_{_{hWW}}}$, which is not considered in Ref.~\citep{Biswal:2012mp}, and it is of the order $10^{-2}$ in the allowed region. \item One important aspect of di-Higgs production at this type of collider is that one can also measure the sensitivity of the $hhWW$ couplings. In our analysis, since the CP-even (odd) coupling $\ensuremath{g^{(1)}_{_{hhWW}}}$ ($\ensuremath{\tilde g_{_{hhWW}}}$) has a similar Lorentz structure, the sensitivities in the exclusion plots are of almost the same order of magnitude. However, the structure of $\ensuremath{g^{(2)}_{_{hhWW}}}$ allows a comparatively narrower region of values.
\end{inparaenum} The couplings belonging to both the $hWW$ and $hhWW$ vertices are strongly constrained because they yield a high production cross section already at very low values of the couplings. By increasing the luminosity from $0.1$ to $1~\ensuremath{\rm ab^{-1}}$, the constraints on the couplings tighten and the limits are reduced by a factor of two. A further increase of the luminosity will not change the results. All limits are derived by varying only one coupling at a time, as mentioned earlier. The exclusion limits on the couplings in this analysis are based on the constraints from an excess above the SM expectation, while potential deficits from interference contributions are not yet sensitive enough to be used for limit setting. \subsection{Prospects at higher $E_e$ and sensitivity of the Higgs self coupling} \label{sensh} Finally we discuss what happens once the electron energy $E_e$ is increased to higher values, where we focus our analysis on a determination of the SM Higgs self coupling, assuming no further BSM contributions. Without going into detail we can note that with increasing $E_e$ (from $60$~GeV to $120$~GeV) the SM signal and dominant background production cross sections are enhanced by factors of $2.2$ and $2.1$ respectively. As a result, the cut efficiency for the selection of four $b$-tagged jets and one forward jet is improved, but for the other cuts described previously (invariant mass, $\slashed E_T$, $\eta_{J}$ and $\Delta \phi_{\slashed E_T j}$) it remains very similar. This leads to an enhancement of the selected signal and dominant background events by factors of $2.5$ and $2.6$ respectively. Hence we would obtain the same statistical precision with only $40$\% of the luminosity of an $E_e = 60$~GeV beam when increasing the electron energy to $120$~GeV.
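The quoted $40$\% follows from a back-of-the-envelope $S/\sqrt{B}$ scaling: if the selected signal and background grow by factors $2.5$ and $2.6$, the luminosity fraction $f$ giving equal precision solves $2.5\sqrt{f}/\sqrt{2.6}=1$, i.e.\ $f = 2.6/2.5^2$. A minimal sketch of this check (the $S/\sqrt{B}$ scaling is an approximation to the full log-likelihood fit):

```python
def equal_precision_lumi_fraction(sig_gain, bkg_gain):
    """Luminosity fraction f at the higher beam energy giving the same
    S/sqrt(B) as the baseline: sig_gain*S*f / sqrt(bkg_gain*B*f)
    = S/sqrt(B)  =>  f = bkg_gain / sig_gain**2."""
    return bkg_gain / sig_gain ** 2

f = equal_precision_lumi_fraction(sig_gain=2.5, bkg_gain=2.6)  # = 0.416
```

The result $f \approx 0.42$ is consistent with the $40$\% figure quoted in the text.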
At an ultimate integrated luminosity of $10~\ensuremath{\rm ab^{-1}}$, increasing $E_e$ from $60$ to $120$~GeV would increase the significance of the observed SM di-Higgs events from $7.2$ to $10.6$, obtained from a log-likelihood fit. This includes the $5$\% signal and background systematics mentioned earlier. For the SM Higgs boson self coupling, where the scaling factor is expected to be $\ensuremath{g^{(1)}_{_{hhh}}} = 1$, we perform a signal injection test, which gives the locally measured uncertainties on $\ensuremath{g^{(1)}_{_{hhh}}}$. From this test the $1\sigma$ error band around the expected SM strength of this coupling is $\ensuremath{g^{(1)}_{_{hhh}}} = 1.00^{+0.24(0.14)}_{-0.17(0.12)}$ for $E_e = 60 (120)$~GeV. \section{Validity of EFT} \label{eftval} In the EFT-based approach of our analyses, the usual SM Lagrangian is supplemented by higher-dimensional operators that parametrise the possible effects of new states assumed to appear at energies larger than the effective scale, identified here with $m_W$ (or equivalently $v$), restricting ourselves to operators of dimension less than or equal to six. We have estimated the sensitivity of the involved coupling coefficients appearing in the effective Lagrangians in \cref{lagh,lag3,lag4} with the EW scale for the derivative terms. A detailed discussion with general couplings and mass scales for a higher-dimensional EFT Lagrangian can be found in Ref.~\cite{Giudice:2007fh}. For processes at high energy, it is well known that an EFT approach provides an accurate description of the underlying new physics as long as the energies are below the new physics scale, $\Lambda$, and thus the limits on the couplings obtained in the above analyses will degrade for scale choices higher than the EW scale (since, for fixed values of the couplings, the interference and pure BSM terms give smaller contributions to the cross section for higher values of the scale).
Also, for ${\cal O}(1)$ values of the anomalous couplings $g^{(i)}_{(\cdots)}$, $\tilde g_{(\cdots)}$ (apart from \ensuremath{g^{(1)}_{_{hhh}}}) and TeV-scale momenta, one reaches the regime where the operators in \cref{lagh,lag3,lag4} may not be dominant, and operators with four and more derivatives may be equally important. In other words, the EFT behind these Lagrangian expansions breaks down. It is then important to know how much the projected sensitivity depends on events that violate this EFT bound. Using EW precision tests, Ref.~\cite{Farina:2016rws} shows how the reach of an EFT deteriorates when only data below a cutoff on the mass variable are employed, in the case of Drell-Yan processes at the LHC. \begin{figure}[!ht] \includegraphics[trim=0 40 0 20,clip,width=0.48\textwidth,height=0.48\textwidth]{lumiscale.pdf} \caption{\small Percentage of deterioration of the exclusion limits on the anomalous tensorial couplings (shown in \cref{fig:lumi}) with respect to the upper di-Higgs invariant mass cut $m_{4b}^\prime \equiv m^{\rm cut}_{4b}$ [in GeV] for fixed luminosities of 1~\ensuremath{\rm ab^{-1}} ({\it blue}) and 10~\ensuremath{\rm ab^{-1}} ({\it red}). The numbers on the vertical axis above (below) 0 give the degradation of the upper (lower) limits.} \label{fig:eft_exc} \end{figure} A similar exercise can be performed in our case to estimate the deterioration of the limits on the anomalous tensorial couplings $g^{(i)}_{(\cdots)}$ and $\tilde g_{(\cdots)}$ (the coupling coefficients which correspond to momentum-dependent Lorentz structures) as a function of the cut-off scale. In this approach we put an upper cut on the di-Higgs invariant mass ($m_{4b}^\prime$)\footnote{Note that in previous subsections we used the notation $m_{4b}$ for the lowest cut on the di-Higgs invariant mass.
Here we use $m_{4b}^\prime$ to avoid confusion, since for all analyses apart from the EFT validity we selected the events $m_{4b} > 290$~GeV to suppress backgrounds and increase the overall significance. Here, to investigate the sensitivity of the BSM tensorial couplings, we choose the events below the $m_{4b}^\prime$ cuts, keeping $m_{4b} > 290$~GeV so that a one-to-one comparison can be performed. } such that EFT-violating events ($> m_{4b}^\prime$) are cut away, and then we estimate by how much the projected sensitivity of $\ensuremath{g^{(2)}_{_{hhh}}}$, $g_{_{hWW}}^{(1,2)}$, $g_{_{hhWW}}^{(1,2)}$ and $\ensuremath{\tilde g_{_{hWW}}}$, $\ensuremath{\tilde g_{_{hhWW}}}$ degrades with respect to their previous limits. In \cref{fig:eft_exc} we present the percentage of deterioration of the exclusion limits of these anomalous effective couplings by selecting events below $m_{4b}^\prime \in [0.35, 1]$~TeV for fixed luminosities of 1~\ensuremath{\rm ab^{-1}} and 10~\ensuremath{\rm ab^{-1}} at 95\% C.L. It is apparent from \cref{fig:eft_exc} that the deterioration of the limits on these anomalous couplings is large for low values of the $m_{4b}^\prime$ cut, because the effective cross section decreases with decreasing $m_{4b}^\prime$ (which is equivalent to increasing the scale $\Lambda$ of the tensorial couplings). Comparing with the exclusion limits obtained in \cref{fig:lumi}, we observe that at $m_{4b}^\prime = 350$~GeV the percentage of deterioration of $\ensuremath{g^{(2)}_{_{hhh}}}$ is more than 100\%, while the other $hWW$ and $hhWW$ couplings deteriorate by $\sim 60$--$80$\% on both the upper and lower sides at 1~\ensuremath{\rm ab^{-1}} and 10~\ensuremath{\rm ab^{-1}}. Above 350~GeV, a sudden decrease in the degradation percentage can be noticed for $m_{4b}^\prime = 400$--$450$~GeV for all couplings. Furthermore, around 500~GeV the degradation for $\ensuremath{g^{(2)}_{_{hhh}}}$ remains 18\%, while for the other couplings it is around 10\%.
Beyond a 650~GeV cut, the limits on all the couplings converge to the original values obtained in our previous analyses, as shown in \cref{fig:lumi}. \section{Summary and conclusions} \label{sec:conc} We conclude that the FCC-he, with an ERL energy of $E_e \ge 60$~GeV and a proton energy $E_p = 50$~TeV, would provide significant di-Higgs event rates, and through this channel one can probe the Higgs boson self coupling accurately, provided that integrated luminosities of more than $1~\ensuremath{\rm ab^{-1}}$ can be achieved. Along with the Higgs self coupling one can search for any BSM signal through the measurement of the anomalous $hhWW$ contributions. One interesting feature of this type of machine is the presence of a forward jet in the signal events, where an appropriate selection, as shown in our study, efficiently reduces the backgrounds by around 80\% with a loss of only 25\% of the signal events. Our work also shows that $\Delta\phi_{\slashed E_Tj}$ is a very good observable for any new physics contributions in the given channel. Estimation of an asymmetry observable in the $\Delta\phi_{\slashed E_Tj}$ kinematic distribution gives a preliminary idea of the sensitivities to any new non-standard couplings. The limits on each coupling are set by measuring the observed event rate. But the asymmetry in $\Delta\phi_{\slashed E_Tj}$ can provide additional discrimination of the new physics, in particular by cancelling many potential systematics, which helps to distinguish the signatures of each model. An exclusion limit as a function of luminosity for these couplings is studied, and a signal injection test shows the uncertainty of the Higgs self coupling around its expected SM strength. With all these analyses we infer that the order of the sensitivities of all non-standard couplings considered in our study is, within most of the luminosity ranges, consistent between the adopted methodologies of the asymmetry observable and of exclusion limits through fiducial cross sections at 95\% C.L.
However, at luminosities $\sim$10-15~\ensuremath{\rm ab^{-1}} or higher, the method based on fiducial cross sections constrains the non-standard couplings more tightly. In addition to the fiducial cross sections, the drastic change of the $\Delta\phi_{\slashed E_Tj}$ shape for $\ensuremath{g^{(1)}_{_{hhh}}}$ around 2.0 with respect to the SM in \cref{fig:dist} (left) suggests that using further observables like $\Delta\phi_{\slashed E_Tj}$ may significantly improve the sensitivity to BSM couplings in general. It is to be noted that the EFT description of the non-standard momentum-dependent structures breaks down in the TeV energy regime for couplings of ${\cal O}(1)$, where additional derivative terms become relevant. Hence we also show the deterioration of the limits on the anomalous tensorial couplings for different upper cuts on the di-Higgs invariant mass, at fixed luminosities of 1~\ensuremath{\rm ab^{-1}} and 10~\ensuremath{\rm ab^{-1}} corresponding to 95\% C.L. limits, with respect to the limits obtained using the fiducial inclusive cross section as an observable. This method is used as an alternative approach to estimate the sensitivity of the scale-dependent couplings in the EFT and provides a probe of the regions where the validity of the EFT breaks down. Our studies show the unique capability and potential of the FCC-he collider for precision measurements not only of the Higgs boson self coupling but also of the other involved couplings with tensorial structure, through di-Higgs boson production. \section*{Acknowledgements} We acknowledge fruitful discussions within the LHeC Higgs group, especially with Masahiro Kuze and Masahiro Tanaka. RI acknowledges the DST-SERB grant SRlS2/HEP-13/2012 for partial financial support. \section*{References}
\section{Introduction} \label{sec:intro} We consider a system with $n$ identical servers, where customers arrive according to a Poisson process with rate $n\lambda$, and service times are i.i.d.\ exponentially distributed with rate $1$. Each server maintains an individual buffer of infinite length. When a customer arrives, he will either enter service immediately if an idle server is available, or be routed to the server with the smallest number of customers in its buffer; ties are broken arbitrarily. Once a customer is routed to a server, he cannot switch to a different server. This model is known as the Join the Shortest Queue (JSQ) model. To describe the system, let $Q_i(t)$ be the number of servers with $i$ or more customers at time $t \geq 0$, and let $Q(t) = (Q_{i}(t) )_{i=1}^{\infty}$. Then $\{Q(t)\}_{t \geq 0}$ is a continuous time Markov chain (CTMC), and it is positive recurrent provided $\lambda < 1$ \cite{Bram2011a}. Let $Q_i$ be the random variables having the stationary distributions of $\{Q_i(t)\}_{t}$. In this paper we work in the Halfin-Whitt regime \cite{HalfWhit1981}, which assumes that \begin{align} \lambda = 1 - \beta/\sqrt{n}, \label{eq:hw} \end{align} for some fixed $\beta > 0$. The first paper to study the JSQ model in this regime is \cite{GamaEsch2015}, which shows that the scaled process \begin{align} \Big\{ \Big( \frac{Q_1(t) - n}{\sqrt{n}}, \frac{Q_2(t)}{\sqrt{n}}, \frac{Q_3(t)}{\sqrt{n}}, \ldots\Big) \Big\}_{t \geq 0} \label{eq:diffscale} \end{align} converges to a diffusion limit as $n \to \infty$. The diffusion limit of \eqref{eq:diffscale} is essentially two dimensional, because $Q_i(t)/\sqrt{n}$ becomes negligible for $i \geq 3$. The results of \cite{GamaEsch2015} are restricted to the transient behavior of the JSQ model and steady-state convergence is not considered, i.e.\ convergence to the diffusion limit is proved only for finite time intervals. In the present paper, we study the steady-state properties of the JSQ system.
Specifically, we prove the existence of an explicitly known constant $C(\beta) > 0$ depending only on $\beta$ such that \begin{align} &n - \mathbb{E} Q_1 = n (1 - \lambda), \notag \\ &\mathbb{E} Q_2 \leq C(\beta) \sqrt{n} , \notag \\ &\mathbb{E} Q_i \leq C(\beta), \quad i \geq 3, \quad n \geq 1.\label{eq:introresults} \end{align} In other words, the expected number of idle servers is known, the expected number of non-empty buffers is of order $\sqrt{n}$, and the expected number of buffers with two or more waiting customers is bounded by a constant independent of $n$. A consequence of \eqref{eq:introresults} is tightness of the sequence of diffusion-scaled stationary distributions. In addition to \eqref{eq:introresults}, we also prove that the two-dimensional diffusion limit of the JSQ model is exponentially ergodic. Stability of this diffusion limit remained an open question until the present paper. Combining the process-level convergence of \cite{GamaEsch2015}, tightness of the prelimit stationary distributions in \eqref{eq:introresults}, and stability of the diffusion limit, we are able to justify convergence of the stationary distributions via a standard limit-interchange argument. To prove our results, we use the generator expansion framework, which is a manifestation of Stein's method \cite{Stei1986} in queueing theory and was recently introduced to the stochastic systems literature in \cite{Gurv2014, BravDai2017}; see \cite{BravDaiFeng2016} for an accessible introduction. The idea is to perform Taylor expansion on the generator of a CTMC, and by looking at the second-order terms, to identify a diffusion model approximating the CTMC. One then proves bounds on the steady-state approximation error of the diffusion, which commonly results in convergence rates to the diffusion approximation \cite{Gurv2014, BravDai2017, BravDaiFeng2016, GurvHuan2016, FengShi2017}. 
In this paper, we use only the first-order terms of the generator expansion, which correspond to the generator of a related fluid model. We then carry out the machinery of Stein's method to prove convergence rates to the fluid model equilibrium. The bounds in \eqref{eq:introresults} are then simply an alternative interpretation of these convergence rates. For other examples of Stein's method for fluid, or mean-field models, see \cite{Ying2016,Ying2017, Gast2017,GastHoud2017}. Specifically, \cite{Ying2016} was the first to make the connection between Stein's method and convergence rates to the mean-field equilibrium. Our approach can also be tied to the drift-based fluid limit (DFL) Lyapunov functions used in \cite{Stol2015}, which appeared a few years before \cite{Ying2016}. As we will explain in more detail in Section~\ref{sec:ingred}, the DFL approach and Stein's method for mean-field approximations are essentially one and the same. This paper contributes another example of the successful application of the generator expansion method to the queueing literature. Although the general framework has already been laid out in previous work, examples of applying the framework to non-trivial systems are the only way to display the power of the framework and promote its adoption in the research community. Furthermore, tractable examples help showcase and expand the versatility of the framework and the type of results it can prove. The present paper contributes from this angle in two ways. First, the JSQ model is an example where the dimension of the CTMC is greater than that of the diffusion approximation. To justify the approximation, one needs a way to show that the additional dimensions of the CTMC are asymptotically negligible; this is known as state space collapse (SSC).
Our way of dealing with SSC in Section~\ref{sec:mainproof} differs from the typical solution of bounding the magnitude of the SSC terms \cite{BravDai2017, EryiSrik2012, MaguSrik2016,BurlMaguSrik2016} (only \cite{BravDai2017} of the aforementioned papers uses the generator expansion framework, but the rest still deal with steady-state SSC in a conceptually similar way). Second, this paper presents the first working example of the generator expansion framework being used to prove exponential ergodicity of the diffusion approximation. The insight used is simple, but can be easily generalized to prove exponential ergodicity for other models. \subsection{Literature Review and Contributions} \label{sec:lit} Early work on the JSQ model appeared in the late 50's and early 60's \cite{Haig1958,King1961}, followed by a number of papers in the 70's--90's \cite{FlatMcKe1977a, FoscSalz1978, Half1985, WangZhan1989, HsuWangZhan1995}. This body of literature first studied the JSQ model with two servers, and later considered heavy-traffic asymptotics in the setting where the number of servers $n$ is fixed, and $\lambda \to 1$; see \cite{GamaEsch2015} for an itemized description of the aforementioned works. A more recent paper \cite{EryiSrik2012} considers the steady-state behavior of the JSQ model, but again in the setting where $n$ is fixed, and $\lambda \to 1$. The asymptotic regime where $n \to \infty$ has been untouched until very recently. In \cite{Stol2015a}, the author studies a variant of the JSQ model where the routing policy is to join an idle server if one is available, and otherwise join any buffer uniformly; this is known as the Join the Idle Queue (JIQ) policy. In that paper, the arrival rate is $n \lambda$ where $\lambda < 1$ is fixed, and $n \to \infty$. The author shows that in this underloaded asymptotic regime, JIQ is asymptotically optimal on the fluid scale, and therefore asymptotically equivalent to the JSQ policy. 
We have already described \cite{GamaEsch2015}, which is the first paper to study a non-underloaded regime. In \cite{MukhBorsLeeuWhit2016}, the authors work in the Halfin-Whitt regime and show that JIQ is asymptotically optimal, and therefore asymptotically equivalent to JSQ, on the diffusion scale. Most recently, \cite{GuptWalt2017} studies the JSQ model in the non-degenerate slowdown (NDS) regime introduced in \cite{Atar2012}. In this regime, $\lambda = 1 - \beta/n$ for some fixed $\beta > 0$, i.e.\ NDS is even more heavily loaded than the Halfin-Whitt regime. The authors of \cite{GuptWalt2017} establish a diffusion limit for the total customer count process. In the asymptotic regime where $n \to \infty$, all previous considerations of the diffusion-scaled model \cite{GamaEsch2015,MukhBorsLeeuWhit2016,GuptWalt2017} have been in the transient setting. In particular, convergence to the diffusion limit is only proved over finite time intervals. In contrast, the present paper deals with steady-state distributions. Since the seminal work of \cite{GamaZeev2006}, justifying convergence of steady-state distributions has become the standard in heavy-traffic approximations, and is recognized as being a non-trivial step beyond convergence over finite-time intervals \cite{BudhLee2009, ZhanZwar2008, Kats2010, YaoYe2012, Tezc2008, GamaStol2012, Gurv2014a, Stol2015, GurvHuan2016, BravDai2017,BravDaiFeng2016,Gurv2014}. The methodology used in this paper can be discussed in terms of \cite{Ying2016,Ying2017, Gast2017, Stol2015}. The main technical driver of our results are bounds on the derivatives of the solution to a certain first order partial differential equation (PDE) related to the fluid model of the JSQ system. In the language of \cite{Stol2015}, we need to bound the derivatives of the DFL Lyapunov function. 
These derivative bounds are a standard requirement to apply Stein's method, and \cite{Ying2016,Ying2017, Gast2017} provide sufficient conditions to bound these derivatives for a large class of PDEs. The bounds in \cite{Ying2016,Ying2017, Gast2017} require continuity of the vector field defining the fluid model, but the JSQ fluid model does not satisfy this continuity due to a reflecting condition at the boundary. To circumvent this, we leverage knowledge of how the fluid model behaves to give us an explicit expression for the PDE solution, and we bound its derivatives directly using this expression. Using the behavior of the fluid model is similar to what was done in \cite{Stol2015}. However, bounding the derivatives in this way requires detailed understanding of the fluid model, and as such this is a case-specific approach that varies significantly from one model to another. Furthermore, unlike \cite{Stol2015} where the dimension of the CTMC equals the dimension of the diffusion approximation, our CTMC is infinite-dimensional whereas the diffusion process is two-dimensional. These additional dimensions in the CTMC create additional technical difficulties which we handle in Section~\ref{sec:mainproof}. Regarding our proof of exponential ergodicity, the idea of using a fluid model Lyapunov function to establish exponential ergodicity of the diffusion model was initially suggested in Lemma 3.1 of \cite{Gurv2014}. However, the discussion in \cite{Gurv2014} is at a conceptual level, and it is only after the working example of the present paper that we have a simple and general implementation of the idea. Indeed, our Lyapunov function in \eqref{eq:lyapeexp} of Section~\ref{sec:proofergod} violates the condition in Lemma 3.1 of \cite{Gurv2014}. \subsection{Notation} We use $\Rightarrow$ to denote weak convergence, or convergence in distribution. We use $1(A)$ to denote the indicator of a set $A$.
We use $D = D([0,\infty),\mathbb{R})$ to denote the space of right continuous functions with left limits mapping $[0,\infty)$ to $\mathbb{R}$. For any integer $k \geq 2$, we let $D^{k} = D([0,\infty),\mathbb{R}^{k})$ be the product space $D \times \ldots \times D$. The rest of the paper is structured as follows. We state our main results in Section~\ref{sec:main}, and prove them in Section~\ref{sec:three}. Section~\ref{sec:derbounds} is devoted to understanding the JSQ fluid model, and using this to prove the derivative bounds that drive the proof of our main results. \section{Model and Main Results} \label{sec:main} Consider the CTMC $\{Q(t)\}_{t \geq 0}$ introduced in Section~\ref{sec:intro}. The state space of the CTMC is \begin{align*} S = \big\{ q \in \{ 0,1,2, \ldots , n \}^{\infty} \ |\ q_{i} \geq q_{i+1} \text{ for } i \geq 1 \text{ and } \sum_{i=1}^{\infty}q_i < \infty \big\}. \end{align*} The requirement that $\sum_{i=1}^{\infty}q_i < \infty$ if $q \in S$ means that we only consider states with a finite number of customers. Recall that $Q_i$ are random variables having the stationary distributions of $\{Q_i(t)\}_{t}$, and let $Q = (Q_{i})$ be the corresponding vector. The generator of the CTMC $G_Q$ acts on functions $f : S \to \mathbb{R}$, and satisfies \begin{align*} G_Q f(q) =&\ n \lambda 1(q_1 < n) \big( f(q + e^{(1)}) - f(q) \big) \\ &+ \sum_{i=2}^{\infty} n \lambda 1(q_1 = \ldots = q_{i-1} = n, q_i < n) \big( f(q+e^{(i)}) - f(q)\big)\\ &+ \sum_{i=1}^{\infty} (q_i - q_{i+1}) \big(f(q-e^{(i)}) - f(q)\big), \end{align*} where $e^{(i)}$ is the infinite dimensional vector whose $i$th element equals one, and whose remaining elements equal zero. The generator of the CTMC encodes the stationary behavior of the chain. Exploiting the relationship between the generator and the stationary distribution can be done via the following lemma, which is proved in Section~\ref{app:gzlemma}.
\begin{lemma} \label{lem:gz} For any function $f: S \to \mathbb{R}$ such that $\mathbb{E} \abs{f(Q)} < \infty$, \begin{align} \mathbb{E} G_Q f(Q) = 0. \label{eq:bar} \end{align} \end{lemma} By choosing different test functions $f(q)$, we can use \eqref{eq:bar} to obtain stationary performance measures of our CTMC. For example, we are able to prove the following using rather simple test functions. \begin{lemma} \label{lem:q1} \begin{align*} &\mathbb{E} Q_1 = n\lambda, \\ &\mathbb{E} Q_i = n \lambda \mathbb{P}( Q_1 = \ldots = Q_{i-1} = n), \quad i > 1. \end{align*} \end{lemma} \begin{proof} Fix $M > 0$ and let $f(q) = \min \big( M, \sum_{i=1}^{\infty} q_i\big)$. Then \begin{align*} G_{Q} f(q) = n\lambda 1\big(\sum_{i=1}^{\infty} q_i < M\big) - q_1 1\big( \sum_{i=1}^{\infty} q_i \leq M\big). \end{align*} Using \eqref{eq:bar}, \begin{align*} n\lambda\mathbb{P}\big(T < M\big) = \mathbb{E} \Big(Q_1 1\big( T \leq M\big)\Big), \end{align*} where $T = \sum_{i=1}^{\infty} Q_i$ is the total customer count. Although the infinite series in the definition of $T$ may seem worrying at first, stability of the JSQ model in fact implies that $T < \infty$ almost surely. To see why this is true, observe that an alternative way to describe the JSQ model is via the CTMC $\{(S_1(t), \ldots, S_n(t))\}_{t \geq 0}$, where $S_i(t)$ is the number of customers assigned to server $i$ at time $t$; we can view $Q(t)$ as a deterministic function of $(S_1(t), \ldots, S_n(t))$. This new CTMC is also positive recurrent, but now the total number of customers in the system at time $t$ is the finite sum $\sum_{i=1}^{n} S_i(t)$. Therefore, $T < \infty$ almost surely, and we can take $M \to \infty$ and apply the monotone convergence theorem to conclude that \begin{align*} \mathbb{E} Q_1 = n\lambda. \end{align*} Repeating the argument above with $f(q) = \min \big( M, \sum_{j=i}^{\infty} q_j\big)$ gives us \begin{align*} n\lambda \mathbb{P}(Q_{1}= \ldots = Q_{i-1}=n) = \mathbb{E} Q_i.
\end{align*} \end{proof} Although Lemma~\ref{lem:q1} does characterize quantities like $\mathbb{E} Q_{i}$, its results are of little use to us unless we can control $\mathbb{P}( Q_1 = \ldots = Q_{i-1} = n)$. One may continue to experiment by applying $G_Q$ to various test functions in the hope of getting more insightful results from \eqref{eq:bar}. In general, the more complicated the Markov chain, the less likely this strategy is to be productive. In this paper we take a more systematic approach to selecting test functions. To state our main results, define \begin{align*} X_1(t) = \frac{Q_1(t) - n}{n}, \quad X_i(t) = \frac{Q_i(t)}{n}, \quad i \geq 2, \end{align*} and let $X(t) = (X_i(t))_{i=1}^{\infty}$ be the fluid-scaled CTMC. Also let $X_i$ be the random variables having the stationary distributions of $\{X_i(t)\}_{t}$ and set $X = (X_i)_{i=1}^{\infty}$. Our first result is about bounding the expected value of $X_2$. The main ingredients needed for the proof are presented in Section~\ref{sec:ingred}, and are followed by a proof in Section~\ref{sec:mainproof}. \begin{theorem} \label{thm:main} For all $n \geq 1$, $\beta > 0$, and $\kappa >\beta$, \begin{align} \mathbb{E} \Big((X_2 - \kappa/\sqrt{n}) 1(X_2 \geq \kappa/\sqrt{n})\Big) \leq \frac{1}{\beta\sqrt{n}} \Big(12+\frac{6\kappa}{\kappa- \beta}\Big)\mathbb{P}(X_2 \geq \kappa/\sqrt{n} - 1/n), \label{eq:main1} \end{align} which implies that \begin{align} \mathbb{E} \sqrt{n}X_2 \leq 2 \kappa + \frac{1}{\beta } \Big(12+\frac{6\kappa}{\kappa- \beta}\Big). \label{eq:tight} \end{align} \end{theorem} To parse the bound in \eqref{eq:tight} into a friendlier form, let us choose $\kappa = \beta + \varepsilon$ to get \begin{align*} \mathbb{E} \sqrt{n}X_2 \leq 2(\beta + \varepsilon)+ \frac{1}{\beta } \Big(12+\frac{6(\beta + \varepsilon)}{\varepsilon}\Big), \quad \varepsilon > 0, \end{align*} and to see that the right-hand side above can be bounded by a constant independent of $n$.
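The identity $\mathbb{E} Q_1 = n\lambda$ of Lemma~\ref{lem:q1} and the tightness of $\sqrt{n}X_2$ asserted by Theorem~\ref{thm:main} can be checked empirically. The following Python sketch (ours, not part of the paper's argument; the helper \texttt{simulate\_jsq} and all parameter choices are illustrative, and we assume the Halfin--Whitt scaling $\lambda = 1 - \beta/\sqrt{n}$) runs a Gillespie simulation of the JSQ system via the finite-dimensional description $(S_1(t),\ldots,S_n(t))$ and reports time-averaged estimates of $\mathbb{E}Q_1$ and $\mathbb{E}Q_2$.

```python
import random

def simulate_jsq(n, beta, horizon, seed=0):
    """Gillespie simulation of JSQ with n servers, per-server arrival
    rate lam = 1 - beta/sqrt(n) (total rate n*lam), unit service rates.
    Returns time-averaged estimates of E[Q_1] and E[Q_2], where Q_i
    counts servers holding at least i customers."""
    rng = random.Random(seed)
    lam = 1.0 - beta / n ** 0.5
    S = [0] * n                          # queue length at each server
    t, int_q1, int_q2 = 0.0, 0.0, 0.0    # time integrals of Q_1, Q_2
    while t < horizon:
        q1 = sum(1 for s in S if s >= 1)
        q2 = sum(1 for s in S if s >= 2)
        total_rate = n * lam + q1        # arrivals + busy servers
        dt = min(rng.expovariate(total_rate), horizon - t)
        int_q1 += q1 * dt
        int_q2 += q2 * dt
        t += dt
        if t >= horizon:
            break
        if rng.random() < n * lam / total_rate:
            S[S.index(min(S))] += 1      # arrival joins a shortest queue
        else:
            busy = [j for j, s in enumerate(S) if s >= 1]
            S[rng.choice(busy)] -= 1     # departure from a busy server
    return int_q1 / t, int_q2 / t
```

For $n = 50$ and $\beta = 1$, Lemma~\ref{lem:q1} predicts $\mathbb{E} Q_1 = n\lambda \approx 42.9$, and the estimate of $\mathbb{E} Q_2$ stays of order $\sqrt{n}$, consistent with \eqref{eq:tight}.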
A consequence of Theorem~\ref{thm:main} is the following result, which tells us that for $i \geq 3$, we can bound $\mathbb{E} Q_i$ by a constant that is independent of $n$. \begin{theorem} \label{thm:q3} For all $i \geq 3$, $\beta > 0$, $\kappa > \beta$, $n \geq 1$ such that $\max(\beta/\sqrt{n},1/n) < 1$, and $\tilde \kappa \in ( \max(\beta/\sqrt{n},1/n), 1) $, \begin{align} \mathbb{E} Q_i \leq \frac{1}{\beta ( 1 - \tilde \kappa)} \Big(12+\frac{6\tilde \kappa}{\tilde \kappa- \beta/\sqrt{n}}\Big)\frac{1}{\tilde \kappa - 1/n} \bigg(2 \kappa + \frac{1}{\beta } \Big(12+\frac{6\kappa}{\kappa- \beta}\Big)\bigg). \label{eq:main2} \end{align} \end{theorem} \begin{remark} The bound in \eqref{eq:main2} is intended to be used with $\kappa = \beta + \varepsilon$ and $\tilde \kappa$ being some constant, say $\tilde \kappa = 1/2$. Then for $n$ large enough so that $1/2 > \max(\beta/\sqrt{n},1/n)$, the bound implies that \begin{align*} \mathbb{E} Q_i \leq \frac{2}{\beta } \Big(12+\frac{3}{0.5- \beta/\sqrt{n}}\Big)\frac{1}{0.5 - 1/n} \bigg(2 (\beta + \varepsilon) + \frac{1}{\beta } \Big(12+\frac{6(\beta + \varepsilon)}{\varepsilon}\Big)\bigg), \quad \varepsilon > 0. \end{align*} Since only finitely many values of $n$ violate $1/2 > \max(\beta/\sqrt{n},1/n)$, the above bound means that $\mathbb{E} Q_i$ can be bounded by a constant independent of $n$. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:q3}] Since $\mathbb{E} Q_i \leq \mathbb{E} Q_3$ for $i \geq 3$, it suffices to prove \eqref{eq:main2} for $i = 3$.
Fix $\tilde \kappa \in ( \max(\beta/\sqrt{n},1/n), 1) $ and invoke Theorem~\ref{thm:main} with $\sqrt{n} \tilde \kappa$ in place of $\kappa$ there to see that \begin{align} \mathbb{E} \Big((X_2 - \tilde \kappa) 1(X_2 \geq \tilde \kappa)\Big) \leq&\ \frac{1}{\beta\sqrt{n}} \Big(12+\frac{6\tilde \kappa}{\tilde \kappa- \beta/\sqrt{n}}\Big)\mathbb{P}(X_2 \geq \tilde \kappa- 1/n) \notag \\ =&\ \frac{1}{\beta n} \Big(12+\frac{6\tilde \kappa}{\tilde \kappa- \beta/\sqrt{n}}\Big)\sqrt{n}\mathbb{E}\Big(\frac{X_2}{X_2} 1(X_2 \geq \tilde \kappa- 1/n)\Big)\notag \\ \leq&\ \frac{1}{\beta n} \Big(12+\frac{6\tilde \kappa}{\tilde \kappa- \beta/\sqrt{n}}\Big)\frac{1}{\tilde \kappa- 1/n} \mathbb{E} \sqrt{n} X_2 \notag \\ \leq&\ \frac{1}{\beta n} \Big(12+\frac{6\tilde \kappa}{\tilde \kappa- \beta/\sqrt{n}}\Big)\frac{1}{\tilde \kappa- 1/n} \bigg(2 \kappa + \frac{1}{\beta } \Big(12+\frac{6\kappa}{\kappa- \beta}\Big)\bigg), \label{eq:tailbound} \end{align} where in the last inequality we used \eqref{eq:tight}. Therefore, \begin{align*} \frac{1}{\beta n} \Big(12+\frac{6\tilde \kappa}{\tilde \kappa- \beta/\sqrt{n}}\Big)\frac{1}{\tilde \kappa - 1/n} \bigg(2 \kappa + \frac{1}{\beta } \Big(12+\frac{6\kappa}{\kappa- \beta}\Big)\bigg) \geq&\ \mathbb{E} \Big((X_2 - \tilde \kappa) 1(X_2 \geq \tilde \kappa)\Big) \\ \geq&\ (1 - \tilde \kappa)\mathbb{P}(X_2 = 1)\\ =&\ (1 - \tilde \kappa)\mathbb{P}(Q_2 = n)\\ \geq&\ (1-\tilde \kappa) \frac{1}{n} \mathbb{E} Q_3, \end{align*} where in the second inequality we used the fact that $\tilde \kappa < 1$, and in the last inequality we used Lemma~\ref{lem:q1}. \end{proof} \begin{remark} The bound in \eqref{eq:main2} will be sufficient for our purposes, but it is unlikely to be tight. 
The argument in \eqref{eq:tailbound} can be modified by observing that for any integer $m > 0$, \begin{align*} \mathbb{P}(X_2 \geq \tilde \kappa - 1/n) =&\ \frac{n^m}{n^m} \mathbb{E}\Big(\frac{X_2^{2m}}{X_2^{2m}} 1(X_2 \geq \tilde \kappa - 1/n)\Big) \leq \frac{1}{n^m (\tilde \kappa - 1/n)^{2m}} \mathbb{E} (\sqrt{n} X_2)^{2m}. \end{align*} Provided we have a bound on $\mathbb{E} (\sqrt{n} X_2)^{2m}$ that is independent of $n$, it follows that $\mathbb{E} Q_3 \leq C(\beta)/n^{m-1/2}$. Although we have not done so, we believe the arguments used in Theorem~\ref{thm:main} can be extended to provide the necessary bounds on $\mathbb{E} (\sqrt{n} X_2)^{2m}$. \end{remark} \subsection{The Diffusion Limit: Exponential Ergodicity} Let us now consider the diffusion limit of the JSQ model. The following result is copied from \cite{MukhBorsLeeuWhit2016} (but it was first proved in \cite{GamaEsch2015}). \begin{theorem}[Theorem 1 of \cite{MukhBorsLeeuWhit2016}]\label{thm:transient} Suppose $Y(0) = (Y_1(0), Y_2(0)) \in \mathbb{R}^2$ is a random vector such that $\sqrt{n}X_i(0) \Rightarrow Y_i(0)$ for $i = 1,2$ as $n \to \infty$ and $\sqrt{n}X_i(0) \Rightarrow 0$ for $i \geq 3$ as $n \to \infty$. Then the process $\{\sqrt{n}(X_1(t), X_2(t))\}_{t\geq 0}$ converges uniformly over bounded intervals to $\{(Y_1(t),Y_2(t))\}_{t \geq 0} \in D^2$, which is the unique solution of the stochastic integral equation \begin{align} &Y_1(t) = Y_1(0) + \sqrt{2} W(t) - \beta t + \int_{0}^{t} (-Y_1(s) + Y_2(s)) ds - U(t), \notag \\ &Y_2(t) = Y_2(0) + U(t) - \int_{0}^{t} Y_2(s) ds, \label{eq:diffusion} \end{align} where $\{W(t)\}_{t\geq 0}$ is standard Brownian motion and $\{U(t)\}_{t\geq 0}$ is the unique non-decreasing, non-negative process in $D$ satisfying $\int_{0}^{\infty} 1(Y_1(t)<0) d U(t) = 0$. \end{theorem} Theorem~\ref{thm:transient} proves that $\{\sqrt{n}(X_1(t), X_2(t))\}_{t\geq 0}$ converges to a diffusion limit. 
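To get a feel for the limit process in \eqref{eq:diffusion}, one can simulate it with a simple Euler scheme in which the Skorokhod term $U$ is approximated by projection: whenever a step pushes $Y_1$ above zero, the overshoot is removed from $Y_1$ and credited to $Y_2$. The sketch below is ours, not part of the paper; the projection step is a standard heuristic for reflected SDEs, not an exact construction of $U$.

```python
import random

def simulate_diffusion(beta, y0, horizon, dt=1e-3, seed=0):
    """Euler scheme for the reflected diffusion: the drifts are
    (-beta - Y1 + Y2) and -Y2, the Brownian term sqrt(2) dW enters Y1
    only, and the regulator U keeps Y1 <= 0 while feeding Y2."""
    rng = random.Random(seed)
    y1, y2 = y0
    path = [(0.0, y1, y2)]
    steps = round(horizon / dt)
    for k in range(1, steps + 1):
        dw = rng.gauss(0.0, dt ** 0.5)
        y1 += (-beta - y1 + y2) * dt + (2.0 ** 0.5) * dw
        y2 += -y2 * dt
        if y1 > 0.0:          # increment of the regulator U
            y2 += y1
            y1 = 0.0
        path.append((k * dt, y1, y2))
    return path
```

By construction every simulated state satisfies $Y_1(t) \leq 0$ and $Y_2(t) \geq 0$, matching the state space $\Omega$ of the limit process.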
However, this convergence holds only over finite time intervals, and convergence of the corresponding steady-state distributions was not addressed. In fact, it has not been shown that the process in \eqref{eq:diffusion} is even positive recurrent. We show that not only is this process positive recurrent, but it is also exponentially ergodic. We first need to introduce the generator of the diffusion process $\{(Y_1(t),Y_2(t))\}_{t \geq 0}$. Let \begin{align} \Omega = (-\infty,0] \times [0,\infty). \label{eq:omega} \end{align} Going forward, we adopt the convention that for any function $f:\Omega \to \mathbb{R}$, partial derivatives are understood to be one-sided derivatives for those values $x \in \partial \Omega$ where the derivative is not defined. For example, the two-sided partial derivative with respect to $x_1$ is not defined on the set $\{x_1=0,\ x_2\geq 0\}$. In particular, for any integer $k > 0$ we let $C^{k}(\Omega)$ be the set of $k$-times continuously differentiable functions $f: \Omega \to \mathbb{R}$ obeying the notion of one-sided differentiability just described. We use $f_i(x)$ to denote $\frac{\partial f(x)}{\partial x_i}$. For any function $f(x) \in C^2(\Omega)$ such that $f_1(0,x_2) = f_2(0,x_2)$, define $G_Y$ as \begin{align*} G_Y f(x) = (-x_1 + x_2- \beta ) f_1(x) - x_2 f_2(x) + f_{11}(x), \quad x \in \Omega. \end{align*} It follows from It\^{o}'s lemma that \begin{align} f(Y_1(t),Y_2(t)) - f(Y_1(0),Y_2(0)) - \int_{0}^{t} G_Y f(Y_1(s),Y_2(s)) ds \label{eq:mart} \end{align} is a martingale, and $G_Y$ is the generator of this diffusion process. The condition $f_1(0,x_2) = f_2(0,x_2)$ is essential due to the reflecting term $U(t)$ in \eqref{eq:diffusion}; otherwise \eqref{eq:mart} is not a martingale. The following theorem proves the existence of a function satisfying the Foster--Lyapunov condition \cite{MeynTwee1993b} that is needed for exponential ergodicity.
The proof of the theorem is given in Section~\ref{sec:proofergod}, where one can also obtain more insight about the form of the Lyapunov function. \begin{theorem} \label{thm:ergodic} For any $\alpha, \kappa_1,\kappa_2 > 0$ with $\beta < \kappa_1 < \kappa_2$, there exists a function $V^{(\kappa_1,\kappa_2)} : \Omega \to [1,\infty)$ with $V^{(\kappa_1,\kappa_2)}(x) \to \infty$ as $\abs{x} \to \infty$ such that \begin{align} G_Y V^{(\kappa_1,\kappa_2)}(x) \leq&\ -\alpha c V^{(\kappa_1,\kappa_2)}(x) + \alpha d, \label{eq:expgen} \end{align} where \begin{align*} c =&\ 1- \frac{12}{ (\kappa_2-\kappa_1)^2}\log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big) - \frac{1}{\beta(\kappa_1-\beta)}\Big(1 + \frac{\kappa_1}{\kappa_1- \beta} \frac{4(\kappa_1- \beta)}{\kappa_2-\kappa_1} \Big)\\ &- \alpha \Big(\frac{4 }{\kappa_2-\kappa_1}\log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big)+ 1/\beta \Big)^2,\\ d=&\ \Big(\frac{\kappa_2-\beta}{\kappa_1-\beta} \frac{\kappa_2}{\kappa_1}\Big)^{\alpha} e^{\alpha\frac{\kappa_2-\kappa_1}{\beta}}. \end{align*} \end{theorem} We can choose $\kappa_1 = \beta + \varepsilon$ and $\kappa_2 = \beta + 2\varepsilon$ in Theorem~\ref{thm:ergodic} so that \begin{align} &c = 1 - 12/\varepsilon^2 - \frac{1 + 4(1 + \beta/\varepsilon)}{\beta \varepsilon} - \alpha(4/\varepsilon + 1/\beta)^2,\quad d= 4^{\alpha} e^{\alpha( \varepsilon/\beta)}. \label{eq:cd} \end{align} We can make $c > 0$ by choosing $\varepsilon$ sufficiently large and then $\alpha$ sufficiently small. A consequence of Theorem~\ref{thm:ergodic} is exponential ergodicity of the diffusion in \eqref{eq:diffusion}. \begin{corollary} The diffusion process $\{(Y_1(t),Y_2(t))\}_{t \geq 0}$ defined in \eqref{eq:diffusion} is positive recurrent.
Furthermore, if $Y = (Y_1,Y_2)$ is the vector having its stationary distribution, then for any $\kappa_1,\kappa_2 > 0$ with $\beta < \kappa_1<\kappa_2$ such that the constant $c$ in \eqref{eq:cd} is positive, there exist constants $b < 1$ and $B < \infty$ such that \begin{align*} \sup_{\abs{f}\leq V^{(\kappa_1,\kappa_2)}} \abs{ \mathbb{E}_{x} f(Y(t)) - \mathbb{E} f(Y)} \leq B V^{(\kappa_1,\kappa_2)}(x) b^{t}. \end{align*} \end{corollary} \begin{proof} The proof is an immediate consequence of Theorem 6.1 of \cite{MeynTwee1993b}. \end{proof} \subsection{Convergence of Stationary Distributions} In this section we leverage the results of Theorems~\ref{thm:main}--\ref{thm:transient} together with positive recurrence of the diffusion limit to verify steady-state convergence. \begin{theorem} \label{thm:interchange} Let $Y = (Y_1,Y_2)$ have the stationary distribution of the diffusion process defined in \eqref{eq:diffusion}. Then \begin{align} \sqrt{n}(X_1,X_2) \Rightarrow Y \text{ as $n \to \infty$}. \end{align} \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:interchange}] Lemma~\ref{lem:q1} and Theorem~\ref{thm:main} imply that the sequence $\{\sqrt{n}(X_1,X_2)\}_{n}$ is tight. It follows by Prohorov's Theorem \cite{Bill1999} that the sequence is also relatively compact. We will now show that any subsequence of $\{\sqrt{n}(X_1,X_2)\}_{n}$ has a further subsequence that converges weakly to $Y$. Fix $n > 0$ and initialize the process $\{X(t)\}_{t \geq 0}$ by letting $\sqrt{n}X(0)$ have the same distribution as $\sqrt{n}X$. Prohorov's Theorem implies that for any subsequence \begin{align*} \{\sqrt{n'}X(0)\}_{n'} \subset \{\sqrt{n}X(0)\}_{n}, \end{align*} there exists a further subsequence \begin{align*} \{\sqrt{n''}X(0)\}_{n''} \subset \{\sqrt{n'}X(0)\}_{n'} \end{align*} that converges weakly to some random vector $Y^{(0)} = (Y_1^{(0)}, Y_2^{(0)}, \ldots)$. Theorem~\ref{thm:q3} implies that $Y_i^{(0)} = 0$ for $i \geq 3$.
Now for any $t \geq 0$, let $(Y_1(t),Y_2(t))$ solve the integral equation in \eqref{eq:diffusion} with initial condition $(Y_1(0),Y_2(0)) = (Y_1^{(0)},Y_2^{(0)})$. Then \begin{align*} \sqrt{n''}(X_1(0),X_2(0)) \stackrel{d}{=} \sqrt{n''}(X_1(t),X_2(t)) \Rightarrow (Y_1(t),Y_2(t)), \text{ as $n'' \to \infty$}, \end{align*} where the weak convergence follows from Theorem~\ref{thm:transient}. We conclude that \begin{align*} \lim_{n'' \to \infty} \sqrt{n''}(X_1(0),X_2(0)) =&\ \lim_{t \to \infty} \lim_{n'' \to \infty} \sqrt{n''}(X_1(0),X_2(0)) \\ \stackrel{d}{=}&\ \lim_{t \to \infty} (Y_1(t),Y_2(t)) \stackrel{d}{=} (Y_1,Y_2). \end{align*} \end{proof} \section{Proving the Main Results} \label{sec:three} In this section we prove our main results. Section~\ref{sec:ingred} outlines the main components needed for the proofs, and then Sections~\ref{sec:mainproof} and \ref{sec:proofergod} follow up with proofs of Theorems~\ref{thm:main} and \ref{thm:ergodic}, respectively. \subsection{Proof Ingredients: Generator Expansion} \label{sec:ingred} For any differentiable function $f: \Omega \to \mathbb{R}$, let \begin{align*} L f(x) = (- x_1 + x_2 - \beta/\sqrt{n}) f_1(x) - x_2 f_2(x), \quad x \in \Omega, \end{align*} where $f_i(x) = \frac{\partial f(x)}{\partial x_i}$. For any function $h:\Omega \to \mathbb{R}$, consider the partial differential equation (PDE) \begin{align} L f(x) =&\ -h(x), \quad x \in \Omega, \notag\\ f_1(0,x_2) =&\ f_2(0,x_2), \quad x_2 \geq 0. \label{eq:pde} \end{align} The PDE above is related to the fluid model corresponding to $\{(X_1(t),X_2(t))\}_{t\geq 0}$. This connection will be expanded upon in Section~\ref{sec:derbounds}. Assume for now that a solution to \eqref{eq:pde} exists and denote it by $f^{(h)}(x)$. Let $G_X$ be the generator of the CTMC $\{ X(t) \}_{t\geq 0}$.
For a function $f:\mathbb{R}^2 \to \mathbb{R}$, define the lifted version $A f: S \to \mathbb{R}$ by \begin{align*} (A f)(q) = f(x_1,x_2) = f(x), \quad q \in S, \end{align*} where $x_1 = (q_1-n)/n$, and $x_2 = q_2/n$. We know that $\mathbb{E} \abs{f(X_1,X_2)} < \infty $ for any $f:\mathbb{R}^2 \to \mathbb{R}$ because $(X_1,X_2)$ can only take finitely many values. A variant of Lemma~\ref{lem:gz} then tells us that \begin{align*} \mathbb{E} G_X Af^{(h)}(X) =0. \end{align*} We can therefore take expected values in \eqref{eq:pde} to conclude that \begin{align} \mathbb{E} h(X) = \mathbb{E} \big( G_X Af^{(h)} (X) - L f^{(h)} (X)\big). \label{eq:stein} \end{align} \begin{lemma} \label{lem:gentaylor} For any $q_i \in \mathbb{Z}_+$, let $x_1 = (q_1-n)/n$ and $x_i = q_i/n$ for $i \geq 2$. Suppose $f(x_1,x_2)$ is defined on $\Omega$, and has absolutely continuous first-order partial derivatives. Then for all $q \in S$, \begin{align*} G_X A f(q) - L f(x) =&\ (f_2(x)-f_1(x) ) \lambda 1(q_1 = n) - f_2(x) \lambda 1(q_1 = q_2 = n) \notag \\ &+ q_3 \int_{x_2-1/n}^{x_2} f_2(x_1,u) du \notag \\ &+ n\lambda 1(q_1 < n) \int_{x_1}^{x_1+1/n} (x_1 + 1/n - u) f_{11}(u,x_2) du \notag \\ &+ n\lambda 1(q_1 = n, q_2 < n) \int_{x_2}^{x_2+1/n} (x_2 + 1/n - u) f_{22}(x_1,u) du \notag \\ &+ (q_1 - q_2) \int_{x_1-1/n}^{x_1} (u - (x_1 - 1/n)) f_{11}(u,x_2) du \notag \\ &+ q_2 \int_{x_2-1/n}^{x_2} (u - (x_2 - 1/n)) f_{22}(x_1,u) du. \end{align*} \end{lemma} Lemma~\ref{lem:gentaylor} tells us that to bound the error in \eqref{eq:stein}, we need to know more about the solution to \eqref{eq:pde} and its derivatives. Section~\ref{sec:derbounds} is devoted to proving the following lemma. \begin{lemma} \label{cor:derivs} Fix $\kappa > \beta$ and consider the PDE \eqref{eq:pde} with $h(x) = \big((x_2 - \kappa/\sqrt{n}) \vee 0\big)$.
There exists a solution $f^{(h)}:\Omega \to \mathbb{R}$ with absolutely continuous first-order partial derivatives, such that the second-order weak derivatives satisfy \begin{align} & f_{11}^{(h)}(x), f_{12}^{(h)}(x), f_{22}^{(h)}(x) \geq 0,& \quad x \in \Omega, \label{eq:dpos}\\ &f_{11}^{(h)}(x) = f_{22}^{(h)}(x) = 0, &\quad x_2 \in [0,\kappa/\sqrt{n}], \label{eq:dzero}\\ &f_{11}^{(h)}(x) \leq \frac{\sqrt{n}}{\beta}\Big( \frac{\kappa}{\kappa-\beta} + 1\Big), \quad f_{22}^{(h)}(x) \leq \frac{\sqrt{n}}{\beta}\Big( 5 +\frac{2\kappa}{\kappa- \beta } \Big), &\quad x_2 \geq \kappa/\sqrt{n}. \label{eq:d22} \end{align} \end{lemma} \subsection{Proof of Theorem~\ref{thm:main}} \label{sec:mainproof} \begin{proof}[Proof of Theorem~\ref{thm:main}] Let $f(x) = f^{(h)}(x)$ from Lemma~\ref{cor:derivs}. By Lemma~\ref{lem:gentaylor}, \begin{align} G_X A f(q) - L f(x) =&\ (f_2(x)-f_1(x) ) \lambda 1(q_1 = n) \label{eq:diff1} \\ & - f_2(x) \lambda 1(q_1 = q_2 = n) + q_3 \int_{x_2-1/n}^{x_2} f_2(x_1,u) du \label{eq:diff2}\\ &+ n\lambda 1(q_1 < n) \int_{x_1}^{x_1+1/n} (x_1 + 1/n - u) f_{11}(u,x_2) du \label{eq:diff3}\\ &+ n\lambda 1(q_1 = n, q_2 < n) \int_{x_2}^{x_2+1/n} (x_2 + 1/n - u) f_{22}(x_1,u) du \label{eq:diff4}\\ &+ (q_1 - q_2) \int_{x_1-1/n}^{x_1} (u - (x_1 - 1/n)) f_{11}(u,x_2) du \label{eq:diff5}\\ &+ q_2 \int_{x_2-1/n}^{x_2} (u - (x_2 - 1/n)) f_{22}(x_1,u) du. \label{eq:diff6} \end{align} We know \eqref{eq:diff1} equals zero because of the derivative condition in \eqref{eq:pde}. Assume for now that lines \eqref{eq:diff3}-\eqref{eq:diff6} are all non-negative, and their sum is upper bounded by $\frac{1}{\beta\sqrt{n}} \Big( 12 +\frac{6\kappa}{\kappa- \beta } \Big)1(x_2 \geq \kappa/\sqrt{n} - 1/n)$. 
Then \eqref{eq:stein} implies \begin{align*} 0 \leq&\ \mathbb{E} \big((X_2 - \kappa/\sqrt{n}) \vee 0\big) = \mathbb{E} (G_X A f (X) - L f (X)) \\ \leq&\ \frac{1}{\beta\sqrt{n}} \Big( 12 +\frac{6\kappa}{\kappa- \beta } \Big)\mathbb{P}(X_2 \geq \kappa/\sqrt{n} - 1/n)\\ &- f_{2}(0,1)\lambda\mathbb{P}( Q_1=Q_2=n) + \mathbb{E}\bigg[ Q_3 \int_{X_2-1/n}^{X_2} f_2(X_1,u) du \bigg]. \end{align*} The term containing $Q_3$ above is present because our CTMC is infinite dimensional, but the PDE in \eqref{eq:pde} is two-dimensional. To deal with this error term, we invoke Lemma~\ref{lem:q1}, \begin{align*} & - f_{2}(0,1)\lambda\mathbb{P}( Q_1=Q_2=n) + \mathbb{E}\bigg[ Q_3 \int_{X_2-1/n}^{X_2} f_2(X_1,u) du \bigg] \\ =&\ - f_{2}(0,1)\frac{1}{n} \mathbb{E} Q_3 + \mathbb{E}\bigg[ Q_3 \int_{X_2-1/n}^{X_2} f_2(X_1,u) du \bigg] \\ =&\ \mathbb{E}\bigg[ Q_3 \int_{X_2-1/n}^{X_2} (f_2(X_1,u) - f_{2}(0,1)) du \bigg] \\ \leq&\ 0, \end{align*} where in the last inequality we used $f_{21}(x), f_{22}(x) \geq 0$ from \eqref{eq:dpos}. To conclude the proof, it remains to verify the bound on \eqref{eq:diff3}-\eqref{eq:diff6}. By \eqref{eq:dpos} we know that \eqref{eq:diff3}-\eqref{eq:diff6} all equal zero when $x_2 < \kappa/\sqrt{n} - 1/n$. Now suppose $x_2 \geq \kappa/\sqrt{n} - 1/n$. From \eqref{eq:d22} and the fact that $q_1\geq q_2$ if $q \in S$, we can see that each of \eqref{eq:diff3} and \eqref{eq:diff5} is non-negative and bounded by $\frac{1}{\beta\sqrt{n}} \Big( \frac{\kappa}{\kappa-\beta} + 1\Big)$. Similarly, \eqref{eq:d22} tells us that each of \eqref{eq:diff4} and \eqref{eq:diff6} is non-negative and bounded by $\frac{1}{\beta\sqrt{n}} \Big(5+\frac{2\kappa}{\kappa-\beta}\Big)$. \end{proof} \subsection{Proving Theorem~\ref{thm:ergodic}} \label{sec:proofergod} Given $h(x)$ and $\alpha > 0$, let $f^{(h)}(x)$ solve the PDE \eqref{eq:pde} and set \begin{align} g(x) = e^{\alpha f^{(h)}(x/\sqrt{n})}. 
\label{eq:lyapeexp} \end{align} Observe that $g_1(0,x_2)=g_2(0,x_2)$, which means that the diffusion generator applied to $g(x)$ is \begin{align} G_Y g(x) =&\ (-x_1 + x_2 - \beta) g_1(x) - x_2 g_2(x) + g_{11}(x) \notag \\ =&\ (-x_1/\sqrt{n} + x_2/\sqrt{n} - \beta/\sqrt{n}) f_1^{(h)}(x/\sqrt{n})\alpha g(x) - \frac{x_2}{\sqrt{n}} f_2^{(h)}(x/\sqrt{n})\alpha g(x) \notag \\ &+ g_{11}(x)\notag \\ =&\ -h(x/\sqrt{n}) \alpha g(x) + \frac{1}{n}\big(\alpha f_{11}^{(h)}(x/\sqrt{n}) + \alpha^2 (f_{1}^{(h)}(x/\sqrt{n}))^2 \big)g(x), \label{eq:diffusiongen} \end{align} where the last equality comes from \eqref{eq:pde}. The $h(x)$ we will use is a smoothed version of the indicator function. Namely, for any $\ell < u$, let \begin{align} \phi^{(\ell,u)}(x) = \begin{cases} 0, \quad &x \leq \ell,\\ (x-\ell)^2\Big( \frac{-(x-\ell)}{ ((u+\ell)/2-\ell )^2(u-\ell)} + \frac{2}{ ((u+\ell)/2-\ell)(u-\ell)}\Big), \quad &x \in [\ell,(u+\ell)/2],\\ 1 - (x-u)^2\Big( \frac{(x-u)}{ ((u+\ell)/2-u )^2(u-\ell)} - \frac{2}{ ((u+\ell)/2-u )(u-\ell)}\Big), \quad &x \in [(u+\ell)/2,u],\\ 1, \quad &x \geq u. \end{cases} \label{eq:phi} \end{align} It is straightforward to check that $\phi^{(\ell,u)}(x)$ has an absolutely continuous first derivative, and that \begin{align} \abs{(\phi^{(\ell,u)})'(x)} \leq \frac{4}{u-\ell}, \quad \text{ and } \quad \abs{(\phi^{(\ell,u)})''(x)} \leq \frac{12}{(u-\ell)^2} . \label{eq:phider} \end{align} \begin{lemma} \label{lem:expergodderivs} Fix $\kappa_1< \kappa_2$ such that $\kappa_1 > \beta$.
There exist functions $f^{(1)}(x)$ and $f^{(2)}(x)$ satisfying \begin{align} L f^{(1)}(x) =&\ -\phi^{(\kappa_1 ,\kappa_2)} (-x_1), \quad x \in \Omega, \notag\\ f_1^{(1)}(0,x_2) =&\ f_2^{(1)}(0,x_2), \quad x_2 \geq 0, \label{eq:pde1} \end{align} and \begin{align} L f^{(2)}(x) =&\ -\phi^{(\kappa_1 ,\kappa_2)} (x_2), \quad x \in \Omega, \notag\\ f_1^{(2)}(0,x_2) =&\ f_2^{(2)}(0,x_2), \quad x_2 \geq 0, \label{eq:pde2} \end{align} such that both $f^{(1)}(x)$ and $f^{(2)}(x)$ belong to $C^2(\Omega)$, \begin{align} &f^{(1)}(x) \leq \log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big), \quad &x \in [-\kappa_2/\sqrt{n},0]\times [0,\kappa_2/\sqrt{n}],\label{eq:expfsum1}\\ & f^{(2)}(x) \leq \log(\kappa_2/\kappa_1) + \frac{\kappa_2-\kappa_1}{\beta}, \quad &x \in [-\kappa_2/\sqrt{n},0]\times [0,\kappa_2/\sqrt{n}], \label{eq:expfsum2} \end{align} and for all $x \in \Omega$, \begin{align} &\big| f_{1}^{(1)}(x) \big| \leq \frac{4\sqrt{n}}{\kappa_2-\kappa_1}\log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big), \quad \big|f_{11}^{(1)}(x)\big| \leq \frac{12n}{ (\kappa_2-\kappa_1)^2}\log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big), \label{eq:expf1} \\ &\big| f_{1}^{(2)}(x) \big| \leq \frac{\sqrt{n}}{\beta}, \quad \big|f_{11}^{(2)}(x)\big| \leq \frac{n}{\beta(\kappa_1-\beta)}\Big(1 + \frac{\kappa_1}{\kappa_1- \beta} \frac{4(\kappa_1- \beta)}{\kappa_2-\kappa_1} \Big). \label{eq:expf2} \end{align} \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:ergodic}] Fix $\kappa_1 < \kappa_2$ with $\kappa_1 > \beta$ and $\alpha > 0$, let $f^{(1)}(x)$ and $f^{(2)}(x)$ be as in Lemma~\ref{lem:expergodderivs}, and let $V^{(\kappa_1,\kappa_2)}(x) = e^{\alpha (f^{(1)}(x/\sqrt{n}) + f^{(2)}(x/\sqrt{n}))}$.
It follows from \eqref{eq:diffusiongen} and \eqref{eq:expf1}-\eqref{eq:expf2} that \begin{align*} &\frac{ G_Y V^{(\kappa_1,\kappa_2)}(x)}{\alpha V^{(\kappa_1,\kappa_2)}(x)} \\ =&\ -\phi^{(\kappa_1 ,\kappa_2)} (-x_1/\sqrt{n})-\phi^{(\kappa_1 ,\kappa_2)} (x_2/\sqrt{n}) + \frac{1}{n} (f_{11}^{(1)}(x/\sqrt{n})+f_{11}^{(2)}(x/\sqrt{n})) \\ &+ \frac{\alpha}{n} \big(f_{1}^{(1)}(x/\sqrt{n}) + f_{1}^{(2)}(x/\sqrt{n})\big)^2 \\ \leq&\ -\phi^{(\kappa_1 ,\kappa_2)} (-x_1/\sqrt{n})-\phi^{(\kappa_1 ,\kappa_2)} (x_2/\sqrt{n})+ \frac{12}{ (\kappa_2-\kappa_1)^2}\log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big) \\ &+ \frac{1}{\beta(\kappa_1-\beta)}\Big(1 + \frac{\kappa_1}{\kappa_1- \beta} \frac{4(\kappa_1- \beta)}{\kappa_2-\kappa_1} \Big) + \alpha \Big(\frac{4 }{\kappa_2-\kappa_1}\log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big)+ 1/\beta \Big)^2. \end{align*} If $x_1 \leq -\kappa_2$ or $x_2 \geq \kappa_2$, then $-\phi^{(\kappa_1 ,\kappa_2)} (-x_1/\sqrt{n})-\phi^{(\kappa_1 ,\kappa_2)} (x_2/\sqrt{n}) \leq -1$, and \eqref{eq:expgen} is satisfied. If $x \in [-\kappa_2/\sqrt{n},0]\times [0,\kappa_2/\sqrt{n}]$, then \begin{align*} \frac{ G_Y V^{(\kappa_1,\kappa_2)}(x)}{\alpha V^{(\kappa_1,\kappa_2)}(x)} \leq&\ 1 -1 + \frac{12}{ (\kappa_2-\kappa_1)^2}\log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big) + \frac{1}{\beta(\kappa_1-\beta)}\Big(1 + \frac{\kappa_1}{\kappa_1- \beta} \frac{4(\kappa_1- \beta)}{\kappa_2-\kappa_1} \Big) \\ &+\alpha \Big(\frac{4 }{\kappa_2-\kappa_1}\log\Big( \frac{\kappa_2 -\beta}{\kappa_1-\beta} \Big)+ 1/\beta \Big)^2, \end{align*} that is, $G_Y V^{(\kappa_1,\kappa_2)}(x) \leq \alpha(1-c) V^{(\kappa_1,\kappa_2)}(x) = -\alpha c V^{(\kappa_1,\kappa_2)}(x) + \alpha V^{(\kappa_1,\kappa_2)}(x)$. The bounds in \eqref{eq:expfsum1}-\eqref{eq:expfsum2} imply that $V^{(\kappa_1,\kappa_2)}(x) \leq d$ on this set, which verifies \eqref{eq:expgen} and concludes the proof. \end{proof} \begin{remark} In the proof of Theorem~\ref{thm:ergodic} we compare the generator of the diffusion process $G_Y$ to $L$, which can be thought of as the generator of the associated fluid model.
One may wonder why we do not use a similar argument to compare $L$ to $G_X$, and prove that the CTMC is also exponentially ergodic. The answer is that the CTMC is infinite-dimensional, while the operator $L$ acts on functions of only two variables. As a result, comparing $G_X$ to $L$ leads to excess error terms that $L$ does not account for, e.g.\ the $q_3$ term in \eqref{eq:diff2}. Although we were able to get around this issue in the proof of Theorem~\ref{thm:main} by taking expected values, the same trick will not work now because \eqref{eq:expgen} has to hold for every state. To prove exponential ergodicity, one needs to replace the operator $L$ and the PDE \eqref{eq:pde} by infinite-dimensional counterparts corresponding to the infinite-dimensional fluid model of $\{(X_1(t),X_2(t),X_3(t),\ldots) \}_{t\geq 0}$. This is left as an open problem for the interested reader, as Theorem~\ref{thm:ergodic} is sufficient for the purposes of illustrating the proof technique. \end{remark} \section{Derivative Bounds} \label{sec:derbounds} The focus of this section is to prove Lemma~\ref{cor:derivs}. The following informal discussion provides a roadmap of the procedure. Given $x \in \Omega$, consider the system of integral equations \begin{align} &v_1(t) = x_1 - \frac{\beta}{\sqrt{n}}t - \int_{0}^{t} (v_1(s) - v_2(s)) ds - U_1(t), \notag \\ &v_2(t) = x_2 - \int_{0}^{t} v_2(s) ds + U_1(t), \notag \\ &\int_{0}^{\infty} v_1(s) dU_1(s) = 0, \quad U_1(t) \geq 0, \quad t \geq 0, \label{eq:dynamic} \end{align} and let $v^{x}(t)$ denote the solution; existence and uniqueness were proved in \cite[Lemma 1]{GamaEsch2015}. The dynamical system above is the fluid model of $\{(X_1(t), X_2(t)) \}_{ t \geq 0}$. The key idea is that \begin{align} f^{(h)}(x) = \int_{0}^{\infty} h(v^{x}(t)) dt \label{eq:value} \end{align} solves the PDE \eqref{eq:pde}.
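The fluid model \eqref{eq:dynamic} and the candidate solution \eqref{eq:value} can be approximated numerically. The sketch below is ours and purely illustrative: it integrates the fluid path with forward Euler, handles the regulator $U_1$ by projecting $v_1$ back to zero (crediting the overshoot to $v_2$), accumulates $\int_0^{\infty} h(v^{x}(t))\,dt$ for $h(x) = (x_2 - \kappa/\sqrt{n}) \vee 0$, and truncates the horizon since $v_2^{x}(t) \to 0$ makes the integrand vanish for large $t$.

```python
def fluid_value(x, beta, kappa, n, dt=1e-4, horizon=30.0):
    """Forward-Euler sketch of the fluid model started at x, returning
    an approximation of f^(h)(x) = int_0^inf h(v^x(t)) dt with
    h(x) = max(x2 - kappa/sqrt(n), 0)."""
    v1, v2 = x
    b = beta / n ** 0.5
    value = 0.0
    for _ in range(round(horizon / dt)):
        value += max(v2 - kappa / n ** 0.5, 0.0) * dt
        v1 += (-b - v1 + v2) * dt
        v2 += -v2 * dt
        if v1 > 0.0:          # regulator U_1 keeps v1 <= 0
            v2 += v1
            v1 = 0.0
    return value
```

For instance, starting from $x = (-1, 0)$ the path $v_2$ stays at zero and the value is zero, while starting from $x = (0, 1)$ with $\kappa/\sqrt{n} < 1$ the integral is strictly positive.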
Our plan is to a) better understand the behavior of the fluid model and b) use this understanding to obtain a closed-form representation of the integral in \eqref{eq:value} to bound its derivatives. Section~\ref{sec:tau} takes care of a), while b) and the proof of Lemma~\ref{cor:derivs} can be found in Section~\ref{sec:dertech}. The function in \eqref{eq:value} is precisely what \cite{Stol2015} calls the drift-based fluid limit Lyapunov function. Derivative bounds play the same role in that paper as they do in the present one. We wish to point out that in the arguments that follow, we never actually have to prove existence and uniqueness of $v^{x}(t)$, or that \eqref{eq:value} does indeed solve the PDE, or that the fluid model behaves the way we will describe it in Section~\ref{sec:tau}. All of this is merely guiding intuition for our final product, Lemma~\ref{lem:derivs} in Section~\ref{sec:dertech}, where we describe a well-defined function that happens to solve the PDE. \subsection{Understanding the Fluid Model} \label{sec:tau} Let us (heuristically) examine the fluid model in \eqref{eq:dynamic}. We refer to $U_1(t)$ as the regulator, because it prevents $v^{x}_1(t)$ from becoming positive. In the absence of this regulator, i.e.\ $U_1(t) \equiv 0$, the system would be the linear dynamical system \begin{align*} \dot v = F(v), \quad \text{ where } \quad F(v) = (-v_1+v_2 - \beta/\sqrt{n},-v_2). \end{align*} However, due to the presence of the regulator, for values in the set $\{v_1 = 0,\ v_2 \geq \beta/\sqrt{n}\}$ it is as if the vector field becomes \begin{align*} F(v) = (0,-\beta/\sqrt{n}). \end{align*} The dynamics of the fluid model are further illustrated in Figure~\ref{fig:dynamics}.
\begin{figure} \hspace{-2.5cm} \begin{tikzpicture} \node (ref) at (0,0){}; \draw[thick] (2,1) -- (8,1) (7,0)--(7,6); \node at (6,0.6) { $-\beta/\sqrt{n}$}; \node at (7.6,2) { $\beta/\sqrt{n}$}; \draw[mark options={fill=red}] plot[mark=*] coordinates {(6,1)}; \draw[blue,thick,->] (3.2,1) -- (3.6,1); \draw[blue,thick,->] (4,1) -- (4.5,1); \draw[blue,thick,->] (3.2,1.5) -- (3.7,1.49); \draw[blue,thick,->] (4.1,1.485) -- (4.6,1.47); \draw[blue,thick,->] (5,1.46) -- (5.5,1.4); \draw[blue,thick,->] (3.7,2) -- (4.2,1.95); \draw[blue,thick,->] (4.5,1.9) -- (5.2,1.75); \draw[blue,thick,->] (5.7,1.7) -- (6.2,1.4); \draw[blue,thick,->] (4.5,2.5) -- (5,2.4); \draw[blue,thick,->] (5.5,2.3) -- (6,2); \draw[blue,thick,->] (6.7,1.7) -- (6.6,1.4); \draw[blue,thick,->] (6.6,1.2) -- (6.4,1.1); \draw[dashed] (7,2) .. controls (6,2.7) and (5,3.2) .. (2,3.6); \end{tikzpicture} \begin{tikzpicture} \node (ref) at (0,0){}; \draw[thick] (2,1) -- (8,1) (7,0)--(7,6); \node at (6,0.6) {$-\beta/\sqrt{n}$}; \node at (7.6,2) { $\beta/\sqrt{n}$}; \draw[mark options={fill=red}] plot[mark=*] coordinates {(6,1)}; \draw[dashed] (7,2) .. controls (6,2.7) and (5,3.2) .. (2,3.6); \draw[blue,thick,->] (6,4.5) -- (6.4,4.35); \draw[blue,thick,->] (6.6,4.2) -- (7,4); \draw[red,line width=0.5mm,->] (7,4)-- (7,2); \draw[blue,thick,->] (7,2) -- (6.6,1.4); \end{tikzpicture} \caption{Dynamics of the fluid model.
Any trajectory starting below the dashed curve will not hit the vertical axis, and anything starting above the curve will hit the axis and travel down until reaching the point $(0,\beta/\sqrt{n})$. } \label{fig:dynamics} \end{figure} The key to characterizing $v^{x}(t)$ is the quantity \begin{align*} \inf \{t \geq 0 : v^{x}_1(t) = 0 \} \end{align*} which is the first hitting time of the vertical axis given initial condition $x$. The following lemma characterizes a curve $\Gamma^{(\kappa)} \subset \Omega $, such that for any point $x \in \Gamma^{(\kappa)}$, the fluid path $v^{x}(t)$ first hits the vertical axis at the point $(0,\kappa/\sqrt{n})$. It is proved in Section~\ref{app:gamma}. \begin{lemma} \label{lem:gamma} Fix $\kappa \geq \beta$ and $x_1 \leq 0$. The nonlinear system \begin{align} &-\beta/\sqrt{n} + (x_1 + \beta/\sqrt{n}) e^{-\eta} + \eta \nu e^{-\eta} = 0, \notag \\ & \nu e^{-\eta} = \kappa/\sqrt{n},\notag \\ &\nu \geq \kappa/\sqrt{n}, \quad \eta \geq 0. \label{eq:nonlin} \end{align} has exactly one solution $(\nu^*(x_1), \eta^*(x_1))$. Furthermore, for every $x_1 \leq 0$, let us define the curve \begin{align*} \gamma^{(\kappa)}(x_1) = \Big\{ \big(-\beta/\sqrt{n} + (x_1 + \beta/\sqrt{n}) e^{-t} + t \nu^*(x_1) e^{-t}, \nu^*(x_1) e^{-t} \big) \ \Big| \ t \in [0,\eta^*(x_1)] \Big\} \end{align*} and let \begin{align*} \Gamma^{(\kappa)} = \{ x \in \Omega \ | \ x_2 = \nu^{*}(x_1) \}. \end{align*} Then $\gamma^{(\kappa)}(x_1) \subset \Gamma^{(\kappa)}$ for every $x_1 \leq 0$. \end{lemma} Given $\kappa \geq \beta$ and $x\in \Omega$, let $\Gamma^{(\kappa)}$ and $\nu^{*}(x_1)$ be as in Lemma~\ref{lem:gamma}. Let us adopt the convention of writing \begin{align} x > \Gamma^{(\kappa)} \text{ if } x_2 > \nu^{*}(x_1), \label{eq:xgeqgamma} \end{align} and define $x \geq \Gamma^{(\kappa)}$, $x < \Gamma^{(\kappa)}$, and $x \leq \Gamma^{(\kappa)}$ similarly.
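Although the proof of Lemma~\ref{lem:gamma} is deferred to Section~\ref{app:gamma}, the system \eqref{eq:nonlin} is easy to solve numerically: substituting $\nu = (\kappa/\sqrt{n})e^{\eta}$ from the second equation into the first leaves a scalar equation in $\eta$, which is increasing for $\kappa \geq \beta$. The sketch below (with purely illustrative parameter values) finds the root by bisection:

```python
import math

def solve_nonlin(x1, beta, kappa, n):
    """Numerical sketch for the system (eq:nonlin).

    With nu = (kappa/sqrt(n)) * exp(eta), the first equation becomes
        g(eta) = -beta/sqrt(n) + (x1 + beta/sqrt(n)) e^{-eta}
                 + eta * kappa/sqrt(n) = 0,
    with g(0) = x1 <= 0 and g increasing (for kappa >= beta), so the
    unique root can be found by bisection.
    """
    b, k = beta / math.sqrt(n), kappa / math.sqrt(n)

    def g(eta):
        return -b + (x1 + b) * math.exp(-eta) + eta * k

    lo, hi = 0.0, 1.0
    while g(hi) < 0.0:          # expand until the root is bracketed
        hi *= 2.0
    for _ in range(200):        # bisection
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    eta = 0.5 * (lo + hi)
    nu = k * math.exp(eta)      # automatically satisfies nu >= kappa/sqrt(n)
    return nu, eta
```

The returned pair satisfies both equations of \eqref{eq:nonlin} to bisection precision, together with the constraints $\eta \geq 0$ and $\nu \geq \kappa/\sqrt{n}$.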
Observe that the sets \begin{align*} \{x \in \Omega \ |\ x > \Gamma^{(\kappa)}\}, \quad \{x \in \Omega \ |\ x < \Gamma^{(\kappa)}\}, \quad \text{ and } \quad \{x \in \Omega \ |\ x \in \Gamma^{(\kappa)}\} \end{align*} are disjoint, and that their union equals $\Omega$. Furthermore, \begin{align*} \{x \in \Omega \ |\ x \geq \Gamma^{(\kappa)}\} \cap \{x \in \Omega \ |\ x \leq \Gamma^{(\kappa)}\} = \{x \in \Omega \ |\ x \in \Gamma^{(\kappa)}\}. \end{align*} The next lemma characterizes the first hitting time of the vertical axis given initial condition $x$, and shows that this hitting time is differentiable in $x$. It is proved in Section~\ref{app:tau}. \begin{lemma} \label{lem:tau} Fix $\kappa \geq \beta$ and $x \in (-\infty,0]\times [\kappa/\sqrt{n},\infty)$. Provided it exists, define $\tau(x)$ to be the smallest solution to \begin{align*} \beta/\sqrt{n} - (x_1 + \beta/\sqrt{n}) e^{-\eta} - \eta x_2 e^{-\eta} = 0, \quad \eta \geq 0, \end{align*} and define $\tau(x) = \infty$ if no solution exists. Let $\Gamma^{(\kappa)}$ be as in Lemma~\ref{lem:gamma}. \begin{enumerate} \item If $x > \Gamma^{(\kappa)}$, then $\tau(x) < \infty$ and \begin{align} x_2 e^{-\tau(x)} > \kappa/\sqrt{n}, \label{eq:gammaprop} \end{align} and if $x \in \Gamma^{(\kappa)}$, then $\tau(x) < \infty$ and $x_2 e^{-\tau(x)} = \kappa/\sqrt{n}$. \item If $\kappa > \beta$, then the function $\tau(x)$ is differentiable at all points $x \geq \Gamma^{(\kappa)}$ with \begin{align} \tau_1(x) = - \frac{ e^{-\tau(x)}}{x_2 e^{-\tau(x)} - \beta/\sqrt{n}} \leq 0, \quad \tau_2(x) = \tau_1(x) \tau(x) \leq 0, \quad x \geq \Gamma^{(\kappa)}, \label{eq:tauder} \end{align} where $\tau_1(x)$ is understood to be the left derivative when $x_1 = 0$. \item For any $\kappa_1,\kappa_2$ with $\beta < \kappa_1 < \kappa_2$, \begin{align} x \geq \Gamma^{(\kappa_2)} \text{ implies } x > \Gamma^{(\kappa_1)}, \label{eq:gammaabove} \end{align} i.e. the curve $\Gamma^{(\kappa_2)}$ lies strictly above $\Gamma^{(\kappa_1)}$. 
\end{enumerate} \end{lemma} Armed with Lemmas~\ref{lem:gamma} and \ref{lem:tau}, we are now in a position to describe the solution to \eqref{eq:pde} and prove Lemma~\ref{cor:derivs}. \subsection{Proving Lemma~\ref{cor:derivs}} \label{sec:dertech} Fix $\kappa > \beta$ and partition the set $\Omega$ into three subdomains \begin{align*} \{x_2 \in [0,\kappa/\sqrt{n}]\}, \quad \{x \leq \Gamma^{(\kappa)},\ x_2 \geq \kappa/\sqrt{n} \}, \quad \text{ and } \quad \{x \geq \Gamma^{(\kappa)} \}, \end{align*} where $\Gamma^{(\kappa)}$ is as in Lemma~\ref{lem:gamma}. From \eqref{eq:gammaprop} we know that $x \geq \Gamma^{(\kappa)}$ implies $x_2 \geq \kappa/\sqrt{n}$, and therefore any point in $\Omega$ must indeed lie in one of the three subdomains. \begin{lemma} \label{lem:derivs} Fix $\kappa > \beta$ and let $ \Gamma^{(\kappa)}$ be as in Lemma~\ref{lem:gamma}, and $\tau(x)$ be as in Lemma~\ref{lem:tau}. For $x \in \Omega$, let \begin{align} f(x) = \begin{cases} &0, \hfill x_2 \in [0,\kappa/\sqrt{n}],\\ &x_2- \frac{\kappa}{ \sqrt{n}} - \frac{\kappa}{\sqrt{n}} \log (\sqrt{n}x_2/\kappa), \hfill x \leq \Gamma^{(\kappa)} \text{ and } x_2 \geq \kappa/\sqrt{n}, \\ &x_2(1 - e^{-\tau(x)}) - \frac{\kappa}{\sqrt{n}} \tau(x) + \frac{1}{2} \frac{\sqrt{n}}{\beta} (x_2 e^{-\tau(x)} - \kappa/\sqrt{n})^2, \quad x \geq \Gamma^{(\kappa)}. \end{cases} \label{eq:candidate} \end{align} This function is well-defined and has absolutely continuous first-order partial derivatives. 
When $x \leq \Gamma^{(\kappa)}$ and $x_2 \geq \kappa/\sqrt{n}$, \begin{align} f_{1}(x) = 0, \quad f_2(x) = 1 - \frac{\kappa}{x_2 \sqrt{n}}, \quad f_{22}(x) = \frac{1}{x_2^2} \frac{\kappa}{\sqrt{n}}, \label{eq:d0} \end{align} and when $x \geq \Gamma^{(\kappa)}$, \begin{align} f_{1}(x) =&\ \frac{\sqrt{n}}{\beta} e^{-\tau(x)} \big( x_2 e^{-\tau(x)} -\kappa/\sqrt{n} \big),\label{eq:d1} \\ f_{2}(x) =&\ 1 - \frac{1}{x_2} \frac{\kappa}{\sqrt{n}} + \frac{\sqrt{n}}{\beta}\big( x_2 e^{-\tau(x)} - \kappa/\sqrt{n}\big) \Big( \frac{x_2 e^{-\tau(x)} -\beta/\sqrt{n}}{x_2} + \tau(x) e^{-\tau(x)} \Big) \label{eq:d2}. \end{align} \end{lemma} A full proof of the lemma is postponed to Section~\ref{app:dertech}, but the following is an outline containing the main bits of intuition. We write down \eqref{eq:candidate} motivated by \eqref{eq:value} and the behavior of the fluid model. At this point, we can think of \eqref{eq:candidate} as a standalone mathematical object that we simply guess to be a solution to \eqref{eq:pde}. Deriving the forms of the derivatives in \eqref{eq:d0}-\eqref{eq:d2} is purely an algebraic exercise that relies on the characterization of $\tau(x)$ from Lemma~\ref{lem:tau}. \begin{proof}[Proof of Lemma~\ref{cor:derivs}] Let $f(x)$ be as in Lemma~\ref{lem:derivs}. It is straightforward to verify that our candidate $f(x)$ solves the PDE \eqref{eq:pde} with $h(x) = \big((x_2 - \kappa/\sqrt{n}) \vee 0\big)$ there; for the reader wishing to verify this, recall that $\tau(x) = 0$ when $x_1 = 0$. We now prove that \eqref{eq:dpos}--\eqref{eq:d22} hold when $x \geq \Gamma^{(\kappa)}$. We omit the proof when $x \in \{x_2 \in [0,\kappa/\sqrt{n}]\}$ and $x \in \{x \leq \Gamma^{(\kappa)},\ x_2 \geq \kappa/\sqrt{n} \}$ because those cases are simple.
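As a side check (outside the formal argument), the characterization of $\tau(x)$ and the derivative formulas \eqref{eq:tauder} and \eqref{eq:d1} can be verified numerically. In the sketch below, B and K stand for $\beta/\sqrt{n}$ and $\kappa/\sqrt{n}$ with purely illustrative values $0.1$ and $0.2$; the bisection bracket uses the fact that $g(0) = -x_1 \geq 0$ and that $g$ decreases up to its stationary point.

```python
import math

B, K = 0.1, 0.2   # beta/sqrt(n), kappa/sqrt(n); illustrative, kappa > beta

def tau(x1, x2):
    """Smallest eta >= 0 solving  B - (x1+B) e^{-eta} - eta x2 e^{-eta} = 0.
    g(0) = -x1 >= 0 and g decreases until eta_min = 1 - (x1+B)/x2, so the
    smallest root lies in [0, eta_min] whenever a root exists."""
    def g(e):
        return B - (x1 + B) * math.exp(-e) - e * x2 * math.exp(-e)
    lo, hi = 0.0, 1.0 - (x1 + B) / x2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def f(x1, x2):
    """Third case of (eq:candidate), valid on {x >= Gamma}."""
    t = tau(x1, x2)
    return (x2 * (1 - math.exp(-t)) - K * t
            + 0.5 / B * (x2 * math.exp(-t) - K) ** 2)

def tau1(x1, x2):
    """Claimed derivative tau_1 from (eq:tauder)."""
    t = tau(x1, x2)
    return -math.exp(-t) / (x2 * math.exp(-t) - B)

def f1(x1, x2):
    """Claimed derivative f_1 from (eq:d1)."""
    t = tau(x1, x2)
    return math.exp(-t) / B * (x2 * math.exp(-t) - K)
```

Central finite differences of tau and f then reproduce $\tau_1$, $\tau_2 = \tau_1\tau$, and $f_1$ at test points with $x \geq \Gamma^{(\kappa)}$.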
Differentiating \eqref{eq:d1} and using \eqref{eq:tauder} one arrives at \begin{align*} f_{11}(x) =&\ \frac{\sqrt{n}}{\beta} e^{-2\tau(x)} \frac{x_2e^{-\tau(x)} + \big( x_2e^{-\tau(x)} -\kappa/\sqrt{n} \big)}{ \big( x_2e^{-\tau(x)} -\beta/\sqrt{n} \big)}, \end{align*} from which we conclude that \begin{align*} 0 \leq f_{11}(x) \leq \frac{\sqrt{n}}{\beta}\Big( \frac{x_2e^{-\tau(x)} }{ x_2e^{-\tau(x)} -\beta/\sqrt{n} } + 1 \Big) =&\ \frac{\sqrt{n}}{\beta}\Big( \frac{1}{ 1 -\frac{\beta/\sqrt{n}}{x_2e^{-\tau(x)}} } + 1 \Big) \\ \leq&\ \frac{\sqrt{n}}{\beta}\Big( \frac{1}{ 1 -\frac{\beta}{\kappa} } + 1 \Big), \end{align*} where all three inequalities above follow from the fact that $x_2 e^{-\tau(x)} \geq \kappa/\sqrt{n} > \beta/\sqrt{n}$; c.f.\ \eqref{eq:gammaprop} in Lemma~\ref{lem:tau}. Taking the derivative in \eqref{eq:d1} with respect to $x_2$, we see that \begin{align*} &f_{12}(x) \\ =&\ \frac{\sqrt{n}}{\beta} \big( -\tau_2(x) e^{-\tau(x)}\big) \big( x_2 e^{-\tau(x)} -\kappa/\sqrt{n} \big) + \frac{\sqrt{n}}{\beta} e^{-\tau(x)} \big( e^{-\tau(x)} - \tau_2(x) x_2 e^{-\tau(x)} \big). \end{align*} The quantity above is non-negative because $x_2 e^{-\tau(x)} \geq \kappa/\sqrt{n}$ and $-\tau_2(x) \geq 0$; the latter follows from \eqref{eq:tauder}. Lastly, we can differentiate \eqref{eq:d2} and use $\tau_2(x) = \tau_1(x) \tau(x)$ from \eqref{eq:tauder} to see that \begin{align*} f_{22}(x) =&\ \frac{1}{x_2^2} \frac{\kappa}{\sqrt{n}} + \frac{\sqrt{n}}{\beta}\big( e^{-\tau(x)} - \tau_1(x) \tau(x) x_2 e^{-\tau(x)} \big) \Big( \frac{x_2 e^{-\tau(x)} -\beta/\sqrt{n}}{x_2} + \tau(x) e^{-\tau(x)} \Big)\\ &+ \frac{\sqrt{n}}{\beta}\big( x_2 e^{-\tau(x)} - \kappa/\sqrt{n}\big) \Big( \frac{\beta/\sqrt{n}}{x_2^2} - \tau_1(x) \tau^2(x) e^{-\tau(x)} \Big). \end{align*} Again, $f_{22}(x) \geq 0$ because $x_2e^{-\tau(x)} \geq \kappa/\sqrt{n}$ and $-\tau_1(x) \geq 0$. Let us now bound $f_{22}(x)$. 
The first term on the right hand side above is bounded by $\sqrt{n}/\kappa$, because $x \geq \Gamma^{(\kappa)}$ implies $x_2 \geq \kappa/\sqrt{n}$. For the second term, note that \begin{align*} &\frac{x_2 e^{-\tau(x)} -\beta/\sqrt{n}}{x_2} + \tau(x) e^{-\tau(x)} \leq 1 + e^{-1} \leq 2, \end{align*} and using the form of $\tau_1(x)$ from \eqref{eq:tauder}, \begin{align*} e^{-\tau(x)} - \tau_1(x) \tau(x) x_2 e^{-\tau(x)} =&\ e^{-\tau(x)} +\frac{e^{-\tau(x)}}{x_2 e^{-\tau(x)}-\beta/\sqrt{n}} \tau x_2 e^{-\tau(x)} \\ =&\ e^{-\tau(x)} + \frac{\tau e^{-\tau(x)}}{1-\frac{\beta/\sqrt{n}}{x_2e^{-\tau(x)}} } \leq 1 + \frac{e^{-1}}{1-\frac{\beta }{\kappa} } \leq 1 + \frac{\kappa}{\kappa-\beta}. \end{align*} For the third term, observe that \begin{align*} &\big( x_2 e^{-\tau(x)} - \kappa/\sqrt{n}\big) \Big( \frac{\beta/\sqrt{n}}{x_2^2} - \tau_1(x) \tau^2(x) e^{-\tau(x)} \Big)\\ \leq&\ \big( x_2 e^{-\tau(x)} - \kappa/\sqrt{n}\big) \Big(\frac{1}{x_2} \frac{\beta}{\kappa} - \tau_1(x) \tau^2(x) e^{-\tau(x)} \Big)\\ =&\ \frac{ x_2 e^{-\tau(x)} - \kappa/\sqrt{n}}{x_2} \frac{\beta}{\kappa} + \frac{x_2 e^{-\tau(x)} - \kappa/\sqrt{n}}{x_2 e^{-\tau(x)} - \beta/\sqrt{n}} \tau^2(x) e^{-2\tau(x)} \\ \leq&\ \frac{\beta}{\kappa} + 1 \leq 2, \end{align*} where we use $x_2 \geq \kappa/\sqrt{n}$ in the first inequality, the form of $\tau_1(x)$ in \eqref{eq:tauder} in the first equation, and the fact that $\beta < \kappa$ in the last two inequalities. Combining the bounds on all three terms, we conclude that \begin{align*} f_{22}(x) \leq \frac{\sqrt{n}}{\kappa} + \frac{\sqrt{n}}{\beta} 2 \Big( 1 + \frac{\kappa}{\kappa-\beta}\Big) + 2\frac{\sqrt{n}}{\beta} \leq \frac{\sqrt{n}}{\beta} \Big( 5 + \frac{2\kappa}{\kappa-\beta}\Big), \end{align*} where in the last inequality we used the fact that $\sqrt{n}/\kappa < \sqrt{n}/\beta$. \end{proof} \section*{Acknowledgements} This work was inspired by a talk given by David Gamarnik at Northwestern University's Kellogg School of Business in October 2017.
\section{Introduction } \label{S:Introduction} Consider a conformal field theory in $d$ dimensions perturbed by a relevant (scalar) operator $ {\cal O} $ of dimension $\Delta<d$. We are interested in evaluating the correlators of $ {\cal O} $ in the presence of the perturbation. The partition function is \begin{equation} Z= \langle \exp( -\alpha\int d^d x {\cal O} (x) )\rangle= \left\langle \sum_{n=0}^\infty \frac 1{n!}\left(-\alpha\int d^d x {\cal O} (x) \right)^n\right\rangle\label{eq:Z} \end{equation} and the formal evaluation of correlators with the infinite sum in the equation above is what is known as conformal perturbation theory. To begin with such a program, one can compute the one point function of $ {\cal O} (x)$ as follows \begin{equation} \vev{ {\cal O} (x)} {= \left\langle{-\alpha \int d^d y {\cal O} (y) {\cal O} (x)}+\dots\right\rangle} = -\alpha \int d^dy\frac{1}{|x-y|^{2\Delta}}{+\dots}\label{eq:onep} \end{equation} The right hand side is infinite regardless of $\Delta$. The divergence comes either from the small distance UV regime, or from the long distance IR regime. This is because we have to perform an integral of a scaling function. The problem seems ill defined until one resums the full perturbation expansion. This is a very important conceptual point in the AdS/CFT correspondence \cite{Maldacena:1997re} where standard `experiments' insert time dependent or time independent sources for various fields on the boundary of $AdS$ \cite{Gubser:1998bc,Witten:1998qj} and these in turn can be associated with sources for an operator such as $ {\cal O} (x)$. Some of these results have been argued to be universal in \cite{Buchel:2012gw,Buchel:2013lla,Buchel:2013gba,Das:2014jna}, independent of the AdS origin of such a calculation. We want to understand this type of result in a more controlled setting, where we can use the philosophy of conformal perturbation theory to get finite answers {\em ab initio} without a resummation.
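To make the divergence concrete: after doing the angular integral, \eqref{eq:onep} is proportional to $\int dy\, y^{d-1-2\Delta}$, which diverges at large $y$ (IR) when $2\Delta < d$ and at small $y$ (UV) when $2\Delta > d$. A short numerical illustration (sample values $d=4$ with $\Delta = 1$ and $\Delta = 3$, chosen only for concreteness):

```python
import math

def radial_integral(d, Delta, eps, R):
    """Integral of r^(d-1-2*Delta) dr from eps to R, in closed form."""
    p = d - 2 * Delta
    if p == 0:
        return math.log(R / eps)
    return (R ** p - eps ** p) / p

# d = 4: for Delta = 1 (2*Delta < d) the integral blows up as R -> infinity,
# while for Delta = 3 (2*Delta > d) it blows up as eps -> 0.
ir = [radial_integral(4, 1.0, 1.0, R) for R in (10.0, 100.0, 1000.0)]
uv = [radial_integral(4, 3.0, eps, 1.0) for eps in (0.1, 0.01, 0.001)]
```

Either cutoff removal makes the answer grow without bound, which is the statement that \eqref{eq:onep} is divergent for any $\Delta$.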
A natural way to solve the problem above is to introduce a meaningful infrared regulator, so that the only divergences that survive arise from the UV of the theory and can then be handled via the usual procedure of renormalization. Such a natural regulator is provided by the conformal field theory on the cylinder $S^{d-1}\times \BR$, which also happens to be the conformal boundary of global $AdS$ spacetime, rather than just the Poincar\'e patch. The cylinder is also conformally equivalent to flat space and provides both the radial quantization and the operator state correspondence. In this sense, we are not modifying the $AdS$ space in a meaningful way. However, a constant source for $ {\cal O} (x)$ in such a geometry is different from a constant source on the Poincar\'e patch. In the rest of the paper we discuss the details of such a computation for two universal quantities. These are the one point function of $ {\cal O} (x)$, and the energy stored in a configuration where we quench from $\alpha\neq 0$ to $\alpha=0$. We also explain how to deal with general time dependent sources on the conformal field theory side for more general AdS motivated experiments. Because we work with arbitrary $d, \Delta$, our results can naturally be cast as a real space dimensional regularization formalism. We find that the $AdS$ answer, which is generally finite for the one point function, matches this notion of dimensional regularization. The only singularities that arise are those that one associates with logarithmic divergences. We are also able to match this result to the CFT calculation exactly, where the calculation is more involved. We also argue how to calculate the energy of the configuration and that having solved for the one point function naturally produces the result for this other computation.
\section{One point functions on the sphere} What we want to do is set up the equivalent calculation to \eqref{eq:Z} and \eqref{eq:onep}, but where we substitute the space $\BR\times S^{d-1}$ in the integral. That is, we want to compute \begin{equation} \vev{ {\cal O} (\tau, \theta)} \simeq \left\langle{-\alpha \int d^{d-1} \Omega' d\tau' {\cal O} (\tau', \theta') {\cal O} (\tau, \theta)}+\dots\right\rangle = -\alpha C_\Delta \label{eq:onepsp} \end{equation} for $\tau$ a time coordinate on $\BR$ and $\theta$ an angular position on the sphere. Because the operator $ {\cal O} $ is not marginal, $\alpha$ has units and we need to choose a specific radius for the sphere. We will choose this radius to be one. Our job is to compute the number $C_\Delta$. Because the sphere times time as a space is both spherically invariant and time independent, properties that are also shared by the perturbation, we find that the result of equation \eqref{eq:onepsp} should be independent of both $\theta$ and $\tau$. As such, we can choose to perform the angular integral by setting the point $\theta$ at the north pole of the sphere, so that we only need to do an integral over the polar angle in $\theta'$. We want to do this calculation both in the AdS spacetime and in conformal field theory. We will first do the AdS calculation and then we will do the conformal field theory calculation. \subsubsection{The AdS calculation} As described in the introduction, we need to compute the answer in global $AdS$ spacetime. We first describe the global $AdS$ geometry as follows \begin{equation} ds^2=-(1+r^2) dt^2 +\frac {dr^2}{(1+r^2)} + r^2 d\Omega_{d-1}^2 \end{equation} We need to find solutions for a perturbatively small scalar field $\phi$ with mass $m$ and time independent boundary conditions at infinity. Such a perturbation is a solution to the free equations of motion of the field $\phi$ in global $AdS$.
Such boundary conditions allow separation of variables in time, angular coordinates and $r$. A solution which is time independent and independent of the angles can be found. We only need to solve the radial equation of motion. Using $|g| \propto r^{2(d-1)}$ we find that we need to solve \begin{equation} \frac 1{r^{d-1}} \frac{\partial}{\partial r} \left( r^{d-1} (1+r^2)\frac{\partial \phi(r)}{\partial r} \right)-m^2 \phi(r)=0 \end{equation} The nonsingular solution at the origin is provided by \begin{equation} \phi(r) = A\ \!_2 F_1\left(\frac d 4-\frac 14\sqrt{d^2+4m^2}, \frac d 4 +\frac 14 \sqrt{d^2 +4m^2};\frac d 2; - r^2\right) \end{equation} where $A$ indicates the amplitude of the solution. We now switch to a coordinate $y=1/r$ to study the asymptotic form of the field by expanding near $y\simeq 0$. In this coordinate system we have that \begin{equation} ds ^2 = - \frac {dt^2 }{y^2}- dt^2 + \frac{dy^2}{y^2(1+y^2)}+ \frac{ d \Omega^2 }{y^2}\simeq -\frac {dt^2}{y^2} + \frac {dy^2} {y^2} + \frac{d\Omega^2 }{y^2} \end{equation} So zooming into any small region of the sphere on the boundary $y=0$ we have an asymptotic form of the metric that matches the usual Poincar\'e slicing of AdS. In such a coordinate system the asymptotic growth or decay of $\phi(y)$ in the $y$ coordinate is polynomial, proportional to $y^{\Delta_{\pm}}$ and can be matched to the usual dictionary for a flat slicing, where $\Delta_{\pm} = \frac d2 \pm \frac 12\sqrt{d^2 + 4m^2}$. We have made the match $ \Delta_+= \Delta, $ the operator dimension for irrelevant perturbations. For relevant perturbations we get a choice. Reading the coefficients of this expansion has the same interpretation as in flat space: one is a source and the other one is the response.
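As a quick consistency check, one can verify numerically that the quoted hypergeometric profile solves the radial equation. The sketch below uses sample values $d=4$, $m^2=-3$ (so $\Delta_\pm = 3,1$), a truncated Gauss series (valid for $r<1$), and nested central differences for the flux term; these choices are ours and purely illustrative.

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    """Truncated Gauss hypergeometric series; adequate for |z| < 1."""
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        s += t
    return s

def phi(r, d=4, m2=-3.0):
    """Candidate regular solution 2F1(a, b; d/2; -r^2) from the text."""
    root = math.sqrt(d * d + 4 * m2)
    return hyp2f1(d / 4 - root / 4, d / 4 + root / 4, d / 2, -r * r)

def ode_residual(r, d=4, m2=-3.0, h=1e-4):
    """Residual of  r^{1-d} d/dr [ r^{d-1} (1+r^2) phi' ] - m^2 phi,
    evaluated by nested central differences."""
    def flux(s):  # s^{d-1} (1+s^2) phi'(s)
        dphi = (phi(s + h, d, m2) - phi(s - h, d, m2)) / (2 * h)
        return s ** (d - 1) * (1 + s * s) * dphi
    return ((flux(r + h) - flux(r - h)) / (2 * h) / r ** (d - 1)
            - m2 * phi(r, d, m2))
```

The residual vanishes to finite-difference accuracy at interior radii, and $\phi(0)=1$ exhibits the nonsingular behavior at the origin.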
Writing this as \begin{equation} \phi(y) \simeq A (f_+ y^{\Delta_+}+f_- y^{\Delta_-}) \end{equation} we find that $f_+ = \Gamma(d/2) \Gamma(d/2-\Delta_+)/ \Gamma( 1/2 (\Delta_-))^2$, and $f_-$ is the same expression with $\Delta_+$ replaced by $\Delta_-$. We now use \begin{equation} \Delta = \Delta_+ \end{equation} in what follows to distinguish between vev and source, although we will find the answer is symmetric in this choice. The relation between source and vacuum expectation value is then \begin{equation} f_+= \frac{ \Gamma(\frac d2 -\Delta) \Gamma(\frac 12 \Delta)^2}{\Gamma(\Delta-\frac d2) \Gamma(\frac d2 -\frac {\Delta}2)^2} f_-\label{eq:AdSf} \end{equation} We have artificially chosen $\Delta=\Delta_+$ over $\Delta _-$ to indicate the vacuum expectation value versus the source as one would do for irrelevant perturbations, but since the expressions for $f_+$ and $f_-$ are symmetric in the exchange of $\Delta_+$ and $\Delta_-$, we can eliminate the distinction in equation \eqref{eq:AdSf}. Notice that this relation seems to be completely independent of the normalization of the field $\phi$. We will explain how to get the correct normalization later. \subsubsection{The conformal field theory computation} The basic question for the conformal field theory computation is how one computes the two point function on the cylinder. Since the cylinder results from a Weyl rescaling of the plane, the two point functions are related to each other in a standard way. The Weyl rescaling is as follows \begin{equation} ds^2 =d\vec x^2= r^2 \left( \frac {dr^2}{r^2}+d \Omega_{d-1}^2\right) \to d\tau^2 + d\Omega_{d-1}^2 \end{equation} which uses a Weyl factor of $r^2$ (the rescaling of units is by a factor of $r= \exp(\tau)$). As a primary field of conformal dimension $\Delta$, $ {\cal O} (x)$ will need to be rescaled by $ {\cal O} (\theta, r)\simeq r^\Delta {\cal O} (x)$ to translate to the new rescaled metric.
For the two point functions this means that \begin{equation} \langle {\cal O} (\tau_1, \theta_1) {\cal O} (\tau_2,\theta_2) \rangle_{cyl} = \frac{\exp (\Delta \tau_1)\exp( \Delta \tau_2)}{|x_1-x_2|^{2\Delta}}= \frac 1{\left(\exp[ (\tau_1-\tau_2)] + \exp[(\tau_2-\tau_1) ] - 2 \cos(\theta_{rel})\right)^\Delta} \end{equation} where $\theta_{rel}$ is the angle computed between the unit vectors $\hat x_1, \hat x_2$ in standard Cartesian coordinates. If we choose $\hat x_1$ to be fixed, and at the north pole, the angle $\theta_{rel}$ is the polar angle of the insertion of $ {\cal O} $ over which we will integrate. Since the answer only depends on the difference of the times, $\tau_2-\tau_1$, the end result is time translation invariant. Notice that we have used throughout conformal primaries that are unit normalized in the Zamolodchikov metric. Now we need to integrate over the angles and the relative time $\tau$. Our expression for $C_\Delta$ reduces to the following definite double integral \begin{eqnarray} C_\Delta&=&\int_{-\infty} ^\infty d\tau \int_0^\pi d \theta \sin^{d-2}\theta Vol(S_{d-2}) \frac{ 1}{2^\Delta(\cosh \tau - \cos\theta)^\Delta}\\ &=& 2^{1-\Delta} Vol(S_{d-2}) \int_1^\infty du \frac 1{\sqrt{u^2-1}} \int_{-1}^1 dv (1-v^2)^{\frac{d-3}{2}} [u-v]^{-\Delta}\label{eq:int} \end{eqnarray} where we have changed variables to $u =\cosh \tau$ and $v= \cos \theta$. For the integral to converge absolutely, we need that $0<2 \Delta<d$, but once we find an analytic formula for arbitrary $0<2 \Delta<d$ we can analytically continue it for all values of $\Delta,d$. The volume of spheres can be computed in arbitrary dimensions as is done in dimensional regularization, so we also get an analytic answer for the variable $d$ itself. Any answer we get can therefore be interpreted as one would in a real space dimensional regularization formalism, where we keep the operator dimension fixed but arbitrary, but where we allow the dimension of space to vary.
The final answer we get is \begin{equation} C_\Delta= \pi^{\frac{(d+1)}2}{ 2^{1-\Delta}} \left[\frac{\Gamma(\frac d2 - \Delta) \Gamma(\frac \Delta 2)}{ \Gamma (\frac d2- \frac\Delta 2)^2\Gamma(\frac 12 +\frac \Delta 2)} \right]\label{eq:CFT2pt} \end{equation} \subsubsection{Divergences} On comparing the answers for the AdS and CFT calculation, equations \eqref{eq:AdSf} and \eqref{eq:CFT2pt} seem to be completely different. But here we need to be careful about normalizations of the operator $ {\cal O} $ in the conformal field theory and the corresponding fields in the gravity formulation. We should compare the Green's function of the field $\phi$ in gravity and take it to the boundary to match the two point function one expects in the CFT dual. The correct normalization factor that does so can be found in equation (A.10) in \cite{Berenstein:1998ij}. Naively, it seems that we just need to multiply the result from equation \eqref{eq:CFT2pt} by $\frac{\Gamma(\Delta)}{2 \pi^{\frac d2} \Gamma (\Delta -\frac d2 +1)} $ and then we might expect \begin{equation} \frac {f_+}{f_-} \simeq \frac{\Gamma(\Delta)}{2 \pi^{\frac d2} \Gamma (\Delta -\frac d2 +1)} C_\Delta. \end{equation} However, if we compare the ratio of the left hand side to the right hand side we get that the ratio of the two is given by \begin{equation} \frac {f_+}{f_-} \left( \frac{\Gamma(\Delta)}{2 \pi^{\frac d2} \Gamma (\Delta -\frac d2 +1)} C_\Delta\right)^{-1}= 2\Delta-d = \Delta_+-\Delta_- \end{equation} Happily, this extra factor is exactly what is predicted from the work \cite{Marolf:2004fy} (in particular, eq. 4.24). See also \cite{Freedman:1998tz,Klebanov:1999tb}. This is because one needs to add a counter-term to the action of the scalar field when one uses a geometric regulator in order to have a well defined boundary condition in gravity. 
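Both \eqref{eq:CFT2pt} and the relative factor $2\Delta-d$ can be spot-checked numerically. The sketch below evaluates the double integral \eqref{eq:int} (in its original $\tau,\theta$ form) with a midpoint rule, whose grid never touches the integrable singularity at $\tau=\theta=0$, and compares the ratio of \eqref{eq:AdSf} to the normalized $C_\Delta$ with $2\Delta-d$. The sample values are our own choices inside the convergence window $0<2\Delta<d$.

```python
import math

g = math.gamma

def C_closed(d, D):
    """Closed form (eq:CFT2pt)."""
    return (math.pi ** ((d + 1) / 2) * 2 ** (1 - D) * g(d / 2 - D) * g(D / 2)
            / (g(d / 2 - D / 2) ** 2 * g(0.5 + D / 2)))

def C_numeric(d, D, T=25.0, n_tau=1000, n_th=1200):
    """Midpoint-rule evaluation of the tau/theta integral defining C_Delta."""
    vol = 2 * math.pi ** ((d - 1) / 2) / g((d - 1) / 2)   # Vol(S^{d-2})
    h_t, h_th = T / n_tau, math.pi / n_th
    cths = [math.cos((j + 0.5) * h_th) for j in range(n_th)]
    sths = [math.sin((j + 0.5) * h_th) ** (d - 2) for j in range(n_th)]
    total = 0.0
    for i in range(n_tau):
        ch = math.cosh((i + 0.5) * h_t)
        total += sum(s / (2 * (ch - c)) ** D for c, s in zip(cths, sths))
    return 2 * vol * total * h_t * h_th   # factor 2: tau runs over all of R

def mismatch(d, D):
    """Ratio of (eq:AdSf) to the normalized C_Delta; should equal 2D - d."""
    f_ratio = g(d / 2 - D) * g(D / 2) ** 2 / (g(D - d / 2) * g((d - D) / 2) ** 2)
    norm = g(D) / (2 * math.pi ** (d / 2) * g(D - d / 2 + 1))
    return f_ratio / (norm * C_closed(d, D))
```

For instance, at $d=3$, $\Delta=1$ the closed form collapses to $\pi^3$, the quadrature reproduces it to within the grid error, and the mismatch equals $2\Delta-d=-1$ to machine precision.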
We see then that the gravity answer and the field theory answer match each other exactly, for arbitrary $d,\Delta$ once the known normalization issues are dealt with carefully. Now we want to interpret the end result $C_\Delta$ itself. The expression we found has singularities at specific values of $\Delta$. These arise from poles in the $\Gamma$ function, which occur when $(d/2 - \Delta)$ is a negative integer. However, these poles are cancelled when $(d - \Delta)/2$ is a negative integer, because we then have a double pole in the denominator. For both of these conditions to be true simultaneously, we need both $d$ and $\Delta$ to be even, and furthermore $\Delta\geq d$. The origin of such poles is from the UV structure of the integral \eqref{eq:onep}. The singular integral (evaluated at $x=0$) is of the form \begin{equation} A_{sing} = \int_0^\epsilon {d^d y} \, y^{-2\Delta} \propto \int_0^\epsilon dy \, y^{d-1-2\Delta} \simeq \int_{1/\epsilon}^\infty dp \, p^{d-1} (p^2+m^2)^{-g}\label{eq:dimreg} \end{equation} where $g = d-\Delta$ and in the last step we introduced a momentum-like variable $p=1/y$ and a mass $m$ infrared regulator to render it into a familiar form for dimensional regularization integrals that would arise from Feynman diagrams. Singularities on the right hand side arise in dimensional regularization in the UV whenever there are logarithmic subdivergences. This can be seen by factorizing $p^2+m^2= p^2(1+ m^2/p^2)$ and expanding in power series in $m^2$. Only when $d-1-2g-2k = -1$ for some non-negative integer $k$ do we expect a logarithmic singularity. In our case, with $-g = \Delta-d$, the condition for such a logarithmic singularity is that $- g= \Delta-d =-\frac d2+k$, which is exactly the same condition as we found for there to exist poles in the numerator of equation \eqref{eq:CFT2pt}. The first such singularity arises when $\Delta = d/2$.
Beyond that, the integral in equation \eqref{eq:int} is not convergent, but is rendered finite in the dimensional regularization spirit. Notice that this was never really an issue in the gravitational computation, since the final answer depended only on the asymptotic expansion of hypergeometric functions and we never had to do an integral. The presence of singularities in gravity has to do with the fact that when $\Delta_+-\Delta_-$ is twice an integer, then the two linearly independent solutions to the hypergeometric equation near $y=0$ have power series expansions where the exponents of $y$ match between the two. Such singularities are resolved by taking a limit which produces an additional logarithm between the two solutions. We should take this match to mean that the AdS gravity computation already {\em knows} about dimensional regularization. Another interesting value for $\Delta$ is when we take $\Delta\to d$. The denominator will have a double pole that will always render the number $C_{\Delta=d}=0$. This is exactly as expected for a marginal operator in a conformal field theory: it should move us to a near conformal point where all one point functions of non-trivial local operators vanish. \section{The energy of a quench} After concluding that the AdS and CFT calculation really did give the same answer for a constant perturbation we want to understand the energy stored in such a solution. This needs to be done carefully, because as we have seen divergences can appear. Under such circumstances, we should compare the new state to the vacuum state in the absence of a perturbation and ask if we get a finite answer for the energy. That is, we need to take the state and quench the dynamics to the unperturbed theory. In that setup one can compute the energy unambiguously.
We would also like to have a better understanding of the origin of the divergences in field theory, to understand how one can regulate the UV to create various states we might be interested in. For this task we will now do a Hamiltonian analysis. Although in principle one could use a three point function including the stress tensor and integrate, performing a Hamiltonian analysis will both be simpler and more illuminating as to the physics of these situations. Also, it is more easily adaptable to a real time situation. \subsubsection{A Hamiltonian approach} The perturbation we have discussed in the action takes the Euclidean action $S\to S+\alpha \int {\cal O} $. When thinking in terms of the Hamiltonian on a sphere, we need to take \begin{equation} H\to H+ \alpha \int d\Omega' {\cal O} (\theta') \end{equation} and we think of it as a new time independent Hamiltonian. When we think of using $\alpha$ as a perturbation expansion parameter, we need to know the action of $\int d\Omega' {\cal O} (\theta')$ on the ground state of the Hamiltonian, that is, on the state $ {\cal O} (\theta')\ket 0$. This is actually encoded in the two point function we computed. Consider the time ordered two point function with $\tau_1>\tau_2$ \begin{eqnarray} \langle {\cal O} (\tau_1, \theta_1) {\cal O} (\tau_2,\theta_2) \rangle_{cyl}&=&\frac 1{\left(\exp[ (\tau_1-\tau_2)] + \exp[(\tau_2-\tau_1) ] - 2 \cos(\theta_{rel})\right)^\Delta}\\ &=& \sum_s \langle 0| {\cal O} ( \theta_1) \exp({- H\tau_1}) \ket s\bra s \exp(H\tau_2) {\cal O} (\theta_2)|0 \rangle \label{eq:Ham2pt}\\ &=& \sum_s \exp(- E_s (\tau_1-\tau_2)) \bra 0 {\cal O} ( \theta_2) \ket s \bra s {\cal O} (\theta_1) \ket 0 \end{eqnarray} where $s$ is a complete basis that diagonalizes the Hamiltonian $H$ and we have written the operators $ {\cal O} (\tau)\simeq \exp(H \tau) {\cal O} (0) \exp(-H\tau)$ as corresponds to the Schr\"odinger picture.
The states $\ket s$ that can contribute are those that are related to $ {\cal O} $ by the operator-state correspondence: the primary state of $ {\cal O} $ and its descendants. When we integrate over the sphere, only the descendants that are spherically invariant can survive. For a primary $ {\cal O} (0)$, these are the descendants given by $(\partial_\mu\partial^\mu)^k {\cal O} (0)$. The normalized states corresponding to these descendants will have energy (dimension) $\Delta+2k$, and are unique for each $k$. We will label them by $\Delta+2k$. We are interested in computing the amplitudes \begin{equation} A_{\Delta+2k}= \bra{\Delta +2 k } \int d\Omega' {\cal O} (\theta')\ket 0\label{eq:ampdef} \end{equation} These amplitudes can be read from equation \eqref{eq:Ham2pt} by integration over $\theta_1, \theta_2$. Indeed, we find that \begin{eqnarray} \int {d^{d-1} \Omega} \langle {\cal O} (\tau_1, \theta_1) {\cal O} (\tau_2,\theta_2) \rangle_{cyl}& &=2^{-\Delta} Vol(S_{d-2}) \int_{-1}^1 dv (1-v^2)^{\frac{d-3}{2}} [\cosh(\tau)-v]^{-\Delta}\\ &=&{\pi^{\frac d 2}} 2^{1-\Delta} \cosh[\tau]^{-\Delta} \ _2\tilde F_1[\frac \Delta 2, \frac{1+\Delta}2;\frac d2 ; \cosh^{-2}(\tau)]\label{eq:angint}\\ &=& M \sum |A_{\Delta+2k}|^2 \exp[(-\Delta -2k)\tau] \end{eqnarray} where $\tau=\tau_1-\tau_2$ and $_2\tilde F_1$ is the regularized hypergeometric function. From this expression further integration over $\Omega_1$ is trivial: it gives the volume of the sphere $Vol(S^{d-1})$. We want to expand this in powers of $\exp(-\tau)$. To do this we use the expression $\cosh(\tau) = \exp(\tau)( 1+\exp(-2\tau))/2$, and therefore \begin{equation} \cosh^{-a} (\tau) = \exp(- a\tau) 2^{a} [1+\exp(-2\tau)]^{-a} = \sum_{n=0}^\infty 2^{a}\exp(-a \tau -2 n\tau) (-1)^n\frac{\Gamma[a+n]}{n! \Gamma[a]} \end{equation} Inserting this expression into the power series of the hypergeometric function appearing in \eqref{eq:angint} gives us our desired expansion.
Apart from common factors to all the amplitudes $A_{\Delta+2k}$ (which are trivially computed for $k=0$) we are in the end only interested in the $k$ dependence of the amplitude itself. After a bit of algebra one finds that \begin{equation} |A_{\Delta+2k}|^2 \propto \frac{\Gamma[k+\Delta] \Gamma[\Delta -\frac d2 +k+1]}{\Gamma[1+\Delta-\frac d2]^2 \Gamma[k+\frac d2] k!}\label{eq:kdep} \end{equation} and to normalize we have that \begin{equation} |A_{\Delta}|^2= [Vol( S^{d-1})]^2 \end{equation} For these amplitudes to make sense quantum mechanically, their squares have to be positive numbers. This implies that none of the $\Gamma$ functions in the numerator can be negative. The condition for this is that the arguments of the $\Gamma$ functions in the numerator must be positive, and therefore $\Delta\geq \frac d2 -1$, which is the usual unitary condition for scalar primary fields. Also, at saturation $\Delta= d/2-1$ we have a free field and then the higher amplitudes vanish $A_{k>0}/A_0=0$. This is reflected in the fact that $\partial_\mu\partial^\mu\phi=0$ is the free field equation of motion. We are interested in comparing our results to the AdS setup. On the CFT side this usually corresponds to a large $N$ field theory. If the primary fields we are considering are single trace operators, they give rise to an approximate Fock space of states of multitraces, whose anomalous dimension is the sum of the individual traces plus corrections of order $1/N^2$ from non-planar diagrams. In the large $N$ limit we can ignore these corrections, so we want to imagine that the operator insertion of $ {\cal O} $ is a linear combination of raising and lowering operators $\int d\Omega {\cal O} (\theta)\simeq \sum A_{\Delta+2k} a^\dagger_{2k+\Delta}+A_{\Delta+2k} a_{2k+\Delta}$ with $[a,a^\dagger]=1$.
In such a situation we can write the perturbed Hamiltonian in terms of the free field representation of the Fock space in the following form \begin{equation} H+\delta H= \sum E_s a_s^\dagger a_s + \alpha(\sum A_{\Delta+2k} a^\dagger_{2k+\Delta}+A_{\Delta+2k} a_{2k+\Delta}) +O(1/N^2) a^\dagger a^\dagger a a+\dots \label{eq:hamp} \end{equation} Indeed, when we work in perturbation theory, whether this Fock space exists or not is immaterial, as the expectation value of the energy for a first order perturbation will only depend on the amplitudes we have computed already. It is for states that do not differ infinitesimally from the ground state that we need to be careful about this, and it is there that the Fock space representation becomes very useful. When we computed abstractly using conformal perturbation theory, we were considering the vacuum state of the Hamiltonian in equation \eqref{eq:hamp} to first order in $\alpha$. We write this as \begin{equation} \ket 0 _\alpha = \ket 0 + \alpha \ket 1 \end{equation} and we want to compute the value of the energy of the unperturbed Hamiltonian for this new state. This is what quenching the system to the unperturbed theory does for us. We find that \begin{equation} \bra {0_\alpha} H\ket 0 _\alpha = \alpha^2 \bra 1 H \ket 1 \end{equation} Now, we can use the expression \eqref{eq:hamp} to compute the state $\ket 0_\alpha$. Indeed, we find that we can do much better than infinitesimal values of $\alpha$. What we can do is realize that if we ignore the subleading pieces in $N$ then the ground state for $H+\delta H$ is a coherent state for the independent harmonic oscillators $a^\dagger_{2k+\Delta}$.
Such a coherent state is of the form \begin{equation} \ket 0_\alpha = {\cal N} \exp( \sum \beta_{2k+\Delta} a^\dagger_{2k+\Delta}) \ket 0\label{eq:cohs} \end{equation} For such a state we have that \begin{equation} \langle H+\delta H \rangle = \sum (2k+\Delta) |\beta_{2k+\Delta}|^2 + \alpha \beta_{2k+\Delta} A_{2k+\Delta}+\alpha \beta^*_{2k+\Delta}A_{2k+\Delta}\label{eq:corrham} \end{equation} and the energy is minimized by \begin{equation} \beta_{2k+\Delta} = - \alpha \frac{A_{2k+\Delta}}{2k+\Delta} \end{equation} Once we have this information, we can compute the energy of the state in the unperturbed setup and the expectation value of $ {\cal O} $ (which we integrate over the sphere). We find that \begin{equation} \vev H = \sum (2k+ \Delta) |\beta_{2k+\Delta}|^2= \alpha^2\sum \frac{|A_{2k+\Delta}|^2}{2k+\Delta}\label{eq:com} \end{equation} \begin{equation} \vev {\cal O} \simeq \sum 2 A_{2k+\Delta} \beta_{2k+\Delta} \simeq -2 \alpha \sum \frac{|A_{2k+\Delta}|^2 }{2k+\Delta}\label{eq:com2} \end{equation} so that in general \begin{equation} \vev H \simeq - \frac{\alpha \vev {\cal O} }{2} \end{equation} That is, the integrated one point function of the operator $ {\cal O} $ over the sphere and the strength of the perturbation are enough to tell us the value of the energy of the state. For both of these to be well defined, we need that the sum appearing in \eqref{eq:com} is actually finite. Notice that this matches the Ward identity for gravity \cite{Buchel:2012gw} integrated adiabatically (for a more general treatment in holographic setups see \cite{Bianchi:2001kw}). \subsubsection{Amplitude Asymptotics, divergences and general quenches } Our purpose now is to understand in more detail the sum appearing in \eqref{eq:com} and \eqref{eq:com2}. We are interested in the convergence and asymptotic values for the terms in the series, that is, we want to understand the large $k$ limit.
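The relation $\vev H \simeq -\alpha\vev{\cal O}/2$ follows directly from the minimizing $\beta$; a short Python sketch with a toy, truncated set of modes (the amplitudes and $\alpha$ are arbitrary illustrative values, not the CFT ones) makes this explicit:

```python
# Toy check that <H> = -alpha * <O> / 2 at the minimizing coherent-state parameters.
# The amplitudes A and the coupling alpha are arbitrary toy values.
alpha = 0.37
delta, d = 1.2, 3
modes = [(2 * k + delta, (k + 1.0) ** (delta - d / 2)) for k in range(50)]  # (omega, A)

beta = {omega: -alpha * A / omega for omega, A in modes}   # minimizer of eq. (corrham)
H = sum(omega * beta[omega] ** 2 for omega, A in modes)    # eq. (com), truncated
O = sum(2 * A * beta[omega] for omega, A in modes)         # eq. (com2), truncated
print(H, -alpha * O / 2)  # the two agree
```

The agreement is exact mode by mode: each term of $\vev H$ is $\alpha^2 A^2/\omega$ and each term of $-\alpha\vev{\cal O}/2$ is the same quantity.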
This can be read from equation \eqref{eq:kdep} by using Stirling's approximation $\log \Gamma[t+1]\simeq t \log t - t$ in the large $t$ limit. After using this approximation on all terms that depend on $k$, we find that \begin{eqnarray} \log( A_{2k+\Delta}^2) &\simeq & (k+\Delta -1) \log(k+\Delta -1) + (k+\Delta -d/2) \log(k+\Delta -d/2)\\ &&- (k+d/2-1) \log(k+d/2-1)- k\log(k)+O(1) \\&\simeq&( \Delta-1 +\Delta -\frac d2 -(\frac d 2-1)) \log k = (2\Delta-d)\log k \label{eq:stir} \end{eqnarray} so that the sum is bounded by a power law in $k$ \begin{equation} \sum \frac{|A_{2k+\Delta}|^2}{2k+\Delta}\simeq \sum \frac{1}{k^{ d +1-2\Delta}} \end{equation} Again, we see that convergence of the sum requires $2\Delta -d <0$. This is the condition to have a finite vacuum expectation value of both the energy and the operator $ {\cal O} $. If we consider instead the $L^2$ norm of the state, the norm is finite so long as $d+2-2\Delta>1$, that is, so long as $\Delta<(d+1)/2$. The divergence in the window $d/2\leq \Delta <(d+1)/2$ is associated with the unboundedness of the Hamiltonian, not with the infinite norm of the state. In general we can use higher order approximations to find subleading terms in the expression \eqref{eq:stir}. Such approximations show that $A_{2k+\Delta}$ has an expansion with leading term as above and power corrections in $1/k$. Only a finite number of such corrections lead to divergent sums, so the problem of evaluating $\langle {\cal O}\rangle $ can be dealt with using a finite number of subtractions of UV divergences. In this sense, we can renormalize the answer with a finite number of counterterms. A particularly useful regulator to make the sum finite is to modify $A_{2k+\Delta} \to A_{2k+\Delta} \exp (- \epsilon (2k+\Delta))$. This is like inserting the operator $ {\cal O} $ at time $t=0$ in the Euclidean cylinder and evolving it in Euclidean time for a time $\epsilon$.
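The asymptotic power $k^{2\Delta-d}$ can be verified directly against the exact $k$ dependence of equation \eqref{eq:kdep}. The sketch below (with arbitrary test values of $d$ and $\Delta$) estimates the log-log slope of $|A_{2k+\Delta}|^2$ at large $k$ using log-gamma functions:

```python
import math

def log_amp2(delta, d, k):
    """log of the k-dependent factor of |A_{Delta+2k}|^2 from eq. (kdep)."""
    return (math.lgamma(k + delta) + math.lgamma(delta - d / 2 + k + 1)
            - math.lgamma(k + d / 2) - math.lgamma(k + 1))

d, delta = 3, 1.2  # arbitrary test values
k1, k2 = 10 ** 5, 10 ** 6
slope = (log_amp2(delta, d, k2) - log_amp2(delta, d, k1)) / (math.log(k2) - math.log(k1))
print(slope, 2 * delta - d)  # the finite-difference slope approaches 2*delta - d
```

The constant term drops out of the finite difference, and the $1/k$ corrections are negligible at these values of $k$, so the slope reproduces $2\Delta-d$ to high accuracy.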
Because the growth of the coefficients is polynomial in $k$, any such exponential will render the sum finite. We can trade the divergences in the sums for powers of $1/\epsilon$ and then take the limit $\epsilon\to 0$ of the regulated answer. This is beyond the scope of the present paper. Notice that we can also analyze more general quenches by studying equation \eqref{eq:corrham}. All we have to do is make $\alpha$ time dependent. The general problem can then be analyzed in terms of linearly driven harmonic oscillators, one for each $a^\dagger, a$ pair. Since the driving force is linear in raising and lowering operators, the final state will always be a coherent state as in equation \eqref{eq:cohs} for some $\beta$ which is the linear response to the source. The differential equation, derived from the Schr\"odinger equation applied to a time dependent coherent state, is the following \begin{equation} i \dot \beta_{2k+\Delta} (t)= (2k+\Delta) \beta_{2k+\Delta} + \alpha(t) A_{2k+\Delta}\label{eq:deq} \end{equation} The solution is given by \begin{equation} \beta_{2k+\Delta}(t) = \beta_{2k+\Delta}(0)\exp (-i \omega t) -i A_{2k+\Delta}\int^\infty_0 dt' \alpha(t') \theta(t-t') \exp(-i \omega(t-t')) \end{equation} with $\omega= 2k+\Delta$ the frequency of the oscillator. Consider the case that $\alpha$ only acts over a finite amount of time between $0$ and $\tau$, and that we start in the vacuum. After the time $\tau$ the motion for $\beta$ will be trivial, and the amplitude will be given by \begin{equation} \beta_{2k+\Delta}(\tau) = -i A_{2k+\Delta}\exp(-i (2k+\Delta ) \tau) \int^\tau_0 dt' \alpha(t') \exp(i \omega t') \end{equation} and all of these numbers can be obtained from the Fourier transform of $\alpha(t)$. Notice that these responses are always correct in the infinitesimal $\alpha$ regime, as can be derived using time dependent perturbation theory.
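The linear-response solution of equation \eqref{eq:deq} can be checked numerically. The sketch below (all parameter values are arbitrary toy choices) integrates the ODE with a fourth-order Runge-Kutta scheme for a constant drive on $[0,\tau]$ and compares with the exact solution of the equation in that case, $\beta(\tau)=-\alpha A\,(1-e^{-i\omega\tau})/\omega$:

```python
import cmath

# RK4 integration of i*dbeta/dt = omega*beta + alpha*A for constant alpha,
# compared against the exact solution beta(t) = -alpha*A*(1 - exp(-i*omega*t))/omega.
# All parameter values are arbitrary toy choices.
omega, A, alpha, tau, nsteps = 5.3, 1.7, 0.25, 2.0, 20000

def rhs(beta):
    return -1j * (omega * beta + alpha * A)   # dbeta/dt from eq. (deq)

h = tau / nsteps
beta = 0.0 + 0.0j                             # start in the vacuum
for _ in range(nsteps):
    k1 = rhs(beta)
    k2 = rhs(beta + 0.5 * h * k1)
    k3 = rhs(beta + 0.5 * h * k2)
    k4 = rhs(beta + h * k3)
    beta += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

exact = -alpha * A * (1 - cmath.exp(-1j * omega * tau)) / omega
print(abs(beta - exact))  # small RK4 discretization error
```

Only $|\beta|$ enters the energy and one point functions, so the overall phase conventions drop out of the physical quantities.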
What is interesting is that in the large $N$ limit they are also valid for $\alpha(t)$ that is not infinitesimal, so long as the $O(1/N)$ corrections can still be neglected. One can also compute the energy of such processes. In particular, so long as $\Delta <d/2$, any such experiment with bounded $\alpha(t)$ will give a finite answer. The simplest such experiment is to take $\alpha$ constant during a small interval $\tau=\delta t\ll 1$. For modes with small $\omega$, that is, those such that $\omega \delta t <1$, we then have that \begin{equation} \beta_{2k+\Delta}(\tau)\simeq A_{2k+\Delta} \alpha \delta t \end{equation} For those modes such that $\omega \delta t>1$, we get that \begin{equation} |\beta_{2k+\Delta} (\tau)| \simeq \alpha\frac{A_{2k+\Delta}}{\omega} \end{equation} When we compute the energy of such a configuration, we need to divide the sum between high frequency and low frequency modes. The energy goes as \begin{equation} E\simeq \sum \omega |\beta_{2k+\Delta}|^2 \simeq \int_0^{1/(2\delta t) } dk \omega |A_{2k+\Delta} \alpha \delta t|^2 +\int_{1/(2 \delta t)}^{\infty } dk \frac{|\alpha A_{2k+\Delta}|^2}{\omega} \end{equation} Now we use the fact that $|A_{2k+\Delta}|^2 \simeq k^{2\Delta -d}$ and that $\omega\propto k$ to find that \begin{equation} E \simeq |\alpha|^2 (\delta t)^{d-2\Delta} \end{equation} which shows an interesting power law for the energy deposited into the system. One can similarly argue that the one point function of $ {\cal O} (\tau)$ scales as $\alpha (\delta t)^{d-2\Delta}$: for the slow modes, the sum is proportional to $\sum A_{2k+\Delta}^2\alpha\delta t$, while for the fast modes one can argue that they have random phases and do not contribute to $ {\cal O} (\tau)$.
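This power law can be checked against the exact mode sums. The sketch below (arbitrary toy values with $\Delta<d/2$, using the asymptotic form $|A_{2k+\Delta}|^2 \simeq k^{2\Delta-d}$ for $k\geq 1$) evaluates $E=\sum\omega|\beta|^2$ for an abrupt quench of duration $\delta t$ using the exact driven-oscillator response, and fits the slope in $\log\delta t$:

```python
import math

def quench_energy(dt, delta, d, alpha=1.0, kmax=200000):
    """E = sum_k omega * |beta_k|^2 for a constant drive of duration dt.
    Uses the exact linear response |beta|^2 = 2*alpha^2*A^2*(1-cos(omega*dt))/omega^2
    and the asymptotic |A|^2 = k**(2*delta - d) for k >= 1 (a toy model)."""
    E = 0.0
    for k in range(1, kmax):
        omega = 2 * k + delta
        A2 = k ** (2 * delta - d)
        E += 2 * alpha ** 2 * A2 * (1 - math.cos(omega * dt)) / omega
    return E

delta, d = 1.0, 3          # arbitrary, with delta < d/2 so the energy is finite
e1, e2 = quench_energy(0.02, delta, d), quench_energy(0.01, delta, d)
slope = math.log(e1 / e2) / math.log(2.0)
print(slope, d - 2 * delta)  # slope is close to d - 2*delta
```

Halving $\delta t$ roughly halves the deposited energy here, reproducing the exponent $d-2\Delta=1$ for these toy values.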
If we want to study the case $\Delta \geq d/2$, divergences arise, so we need to choose an $\alpha(t)$ that is smooth enough that the high energy modes are not excited in the process because they are adiabatic; if we scale that into a $\delta t$ window, the adiabatic modes are going to be those such that $\omega \delta t > 10$, let us say. Then for these modes we take $\beta\simeq 0$, and the estimate is also as above. For $\Delta= d/2$, in an abrupt quench one obtains a logarithmic singularity rather than a power law, coming from the UV modes. This matches the results in \cite{Das:2014jna} and gives a reason for their universality as arising from the universality of 2-point functions in conformal perturbation theory. Essentially, the singularities arise because the amplitudes to generate descendants are larger than the amplitudes to generate primaries, so the details of the cutoff matter. Here is another simple way to understand the scaling for the one point function of the operator $ {\cal O} (\tau)$. The idea is that we need to do an integral similar to $\int d^d x {\cal O} (\tau)\alpha(x) {\cal O} (x)$, but which takes into account causality of the perturbation relative to the response. If we only turn on the perturbation for a small amount of time $\delta t$, the volume of the backwards lightcone from the insertion of an operator at $\tau=\delta t $ is of order $\delta t^d$, and this finite volume serves as an infrared regulator, while the two point function that is being integrated is of order $\delta t ^{-2 \Delta}$. When we combine these two pieces of information we get a result proportional to $\delta t ^{d-2\Delta}$, which again is finite for $\Delta < d/2$ and otherwise has a singularity in the corresponding integral.
Similarly, the energy density would be an integral of the three point function $T {\cal O} {\cal O} \simeq \delta t^{-2 \Delta- d} $ times the volume of the past lightcone squared, which is again proportional to $\delta t ^{2d}$, giving an answer with the scaling we have already found. The additional corrections would involve an extra insertion of $ {\cal O} $ and the volume of the past lightcone, so they scale as $\delta t^{d-\Delta}$, multiplied by the amplitude of the perturbation. This lets us recover the scalings of the energy \cite{Das:2014jna} in full generality. \subsubsection{A note on renormalization} So far we have described our experiment as applying a time dependent profile for $\alpha(t)$ such that $\alpha(t)=0$ for $t>\tau$. Under such an experiment, we can control the outcome of the operations we have described and we obtain the scaling relations that we want. If on the other hand we want to measure the operator $ {\cal O} (t,\theta)$ for some $t<\tau$, we need to be more careful. This is where we need a better prescription for subtracting divergences: as we have seen, in the presence of a constant value for $\alpha$ we already have divergences for $\Delta\geq d/2$. Under the usual rules of effective field theory, all UV divergences correcting $\langle {\cal O} (t,\theta) \rangle $ should be polynomial in the (perturbed) coupling constants and their derivatives. Moreover, if a covariant regulator exists, it suggests that the number of such derivatives should be even. Since we are working to linear order in $\alpha(t)$, these can only depend linearly on $\alpha(t)$ and its time derivatives. Another object that can show up regularly is the curvature of the background metric in which we are doing conformal field theory. That is, we can have expressions of the form $\partial_t^k \alpha(t) R^s$ and $(\partial_t^k \alpha \partial_t^\ell \alpha ) R^s$ appearing as counterterms in the effective action.
These are needed if we want to compute the energy during the quench. Although in principle we can also get covariant derivatives acting on the curvature tensor $R$, these vanish on the cylinder. The counterterms are particularly important in the case of logarithmic divergences, as these control the renormalization group. Furthermore, the logarithmic divergences are usually the only divergences that are immediately visible in dimensional regularization. It is also the case that there are logarithmic enhancements of the maximum value of $ {\cal O} (t,\theta)$ during the quench \cite{Das:2014jna} and these will be captured by such logarithmic divergences. We need to identify when such logarithmic divergences can be present. In particular, we want to do a subtraction of the adiabatic modes (which do contribute divergences) to the one point function of $ {\cal O} (\theta, t)$ at times $t<\tau$. To undertake such a procedure, we want to solve equation \eqref{eq:deq} recursively for the adiabatic modes (those with high $k$). We do this by taking \begin{equation} \beta_{2k+\Delta}(t) = -\alpha(t)\frac{A_{2k+\Delta}}{2k+\Delta} + \beta_{2k+\Delta}^1(t) +\beta_{2k+\Delta}^2(t) +\dots \end{equation} where we determine the $\beta_{2k+\Delta}^n(t)$ recursively for high $k$ by substituting $\beta_{2k+\Delta}(t)$ as above in the differential equation. The solution we have written is correct to zeroth order, and we then write the next term as follows \begin{equation} -i\dot \alpha(t)\frac{A_{2k+\Delta}}{2k+\Delta} = (2k+\Delta) \beta_{2k+\Delta}^1(t) \end{equation} and in general \begin{equation} i \dot \beta_{2k+\Delta}^{n-1}(t)= (2k+\Delta) \beta_{2k+\Delta}^n(t) \label{eq:rec} \end{equation} This will generate a series in $\frac1{(2k+\Delta)^n}\partial_t^n \alpha(t)$, which is also proportional to $A_{2k+\Delta}$.
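Iterating the recursion \eqref{eq:rec} gives $\beta^n(t) = -(A/\omega)\,(i/\omega)^n\,\partial_t^n\alpha(t)$ with $\omega=2k+\Delta$, so for a polynomial $\alpha(t)$ the adiabatic series terminates and solves the differential equation exactly. The sketch below (arbitrary toy values, with $\alpha(t)=t^2$) verifies that the residual of equation \eqref{eq:deq} vanishes for the terminating series:

```python
# For alpha(t) = t**2 the adiabatic series terminates after three terms:
# beta(t) = -(A/omega) * sum_n (i/omega)**n * d^n alpha / dt^n.
# Parameter values are arbitrary toy choices.
omega, A = 5.0, 1.3

def alpha_derivs(t):
    return [t ** 2, 2 * t, 2.0, 0.0]          # alpha and its derivatives

def beta(t):
    return -(A / omega) * sum((1j / omega) ** n * d
                              for n, d in enumerate(alpha_derivs(t)))

def beta_dot(t):
    derivs = alpha_derivs(t)[1:] + [0.0]      # each term's derivative shifts up by one
    return -(A / omega) * sum((1j / omega) ** n * d for n, d in enumerate(derivs))

t = 0.7
residual = 1j * beta_dot(t) - omega * beta(t) - alpha_derivs(t)[0] * A
print(abs(residual))  # zero up to rounding
```

Note the alternating factors of $i$: the odd-derivative terms are imaginary, which is why only even derivatives survive in the combination $\beta+\beta^*$ discussed below.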
We then substitute this solution into the expectation value of $ {\cal O} (t, \theta)$, where we get an expression of the form \begin{equation} \vev{ {\cal O} (t)} \simeq \sum_{k, \ell} |A_{2k+\Delta}|^2 \frac{c_\ell}{(2k+\Delta)^{\ell+1}}\partial_t^\ell \alpha(t) \simeq \int dk\sum_{\ell} k^{2\Delta-d} \frac{c_\ell}{(2k+\Delta)^{\ell+1}}\partial_t^\ell \alpha(t) \end{equation} The right hand side has a logarithmic divergence when $2\Delta-d - \ell=0$. Notice that this divergence arises from the combination $\beta+\beta^*$, so the terms with odd derivatives vanish because of the factors of $i$ in equation \eqref{eq:rec}. Thus, such logarithmic divergences will only be present when $\ell $ is even. This matches the behavior we expect when we have a covariant regulator. This translates to $\Delta= d/2 +k$, where $k$ is an integer. Notice that this is the same condition that we need to obtain a pole in the numerator of the Gamma function in equation \eqref{eq:int}. We see that such logarithmic divergences are exactly captured by dimensional regularization. As a logarithmic divergence, it needs to be of the form $\log( \Lambda_{UV}/\Lambda_{IR})= \log(\Lambda_{UV}/\mu)+\log(\mu/\Lambda_{IR})$. In our case, the IR limit is formally set by the radius of the sphere, while the UV is determined by how we choose to work precisely with the cutoff. The counterterm is the infinite term $ \log(\Lambda_{UV}/\mu)$, but the finite term depends on the intermediate scale $\mu$, which is usually also taken to be a finite UV scale. This lets us consider the Lorentzian limit by taking a small region of the sphere and working with $\delta t$ as our infrared cutoff: only the adiabatic modes should be treated in the way we described above. Then the logarithmic term scales as $\log(\mu\delta t)\partial_t^{2\Delta-d} \alpha(t) $. These logarithmic terms are exactly as written in \cite{Das:2014jna}.
Notice that after the quench, we have that $\alpha(t)=0$ and all of its derivatives are zero, so no counterterms are needed at that time. We only need the pulse $\alpha(t)$ to be smooth enough so that the state we produce has finite energy. \section{Conclusion} In this paper we have shown how to do conformal perturbation theory on the cylinder rather than in flat space. The main reason to do so was to use a physical infrared regulator in order to understand the process of renormalization of UV divergences in a more controlled setting. We showed moreover that the results that are found using AdS calculations actually match a notion of dimensional regularization where the dimension of the perturbation operator stays fixed. In this sense the AdS geometry knows about dimensional regularization as a regulator. This is an interesting observation that merits closer attention. In particular, it suggests that one can try a real space dimensional regularization approach to study perturbations of conformal field theory. We then showed that one could also treat in detail a time dependent quench, and not only were we able to find the energy after a quench, but we were also able to understand scalings that have been observed before for fast quenches. Our calculations show in what sense they are universal: they depend only on the two point function of the perturbation. The singularities that arise can be understood in detail in the Hamiltonian formulation we have pursued, and they arise from amplitudes to excite descendants increasing with energy, or just not decaying fast enough. In this way they are sensitive to the UV cutoff associated with a pulse quench: the Fourier transform of the pulse shape needs to decay sufficiently fast at infinity to compensate for the increasing amplitudes to produce descendants.
We were also able to explain some logarithmic enhancements for the vacuum expectation values of operators during the process of the quench that can be understood in terms of renormalizing the theory to first order in the perturbation. Understanding how to do this to higher orders in the perturbation is interesting and should depend on the OPE coefficients of a specific theory. \acknowledgments D.B. would like to thank D. Marolf, R. Myers, J. Polchinski and M. Srednicki for discussions. D.B. is supported in part by the DOE under grant DE-FG02-91ER40618.
\section*{Introduction} Fragment-based quantum mechanical methods \cite{Otto1975,Gao1997,GEBF,EEMB,SMF,KEM,Ryde2010,MTA,ELG,DC,XO-ONIOM,MFCC-D} are becoming increasingly popular\cite{gordon2012fragmentation}, and have been used to describe a very diverse set of molecular properties for large systems. Although these methods have been applied to refine the energetics of some enzymatic reactions\cite{ishida2006all,nemukhin2012}, they are usually not efficient enough to allow for the many hundreds of single point calculations needed to map out a reaction path for a system containing thousands of atoms, although geometry optimizations can be performed for systems consisting of several hundred atoms \cite{XO-ONIOM,MTA,ELG,FMOopt,FMOMP2_PCM,fedorov2011geometry}. In fact, typical applications of fragment-based methods to biochemical systems, for example, to protein-ligand binding \cite{Sawada}, are based on performing a few single point calculations for structures obtained at a lower level of theory (such as with force fields). Although many force fields are well tuned to treat typical proteins, for ligands they can be problematic. In this work we extend the effective fragment molecular orbital (EFMO) method\cite{steinmann2010effective,steinmann2012effective} into the frozen domain (FD) formalism \cite{fedorov2011geometry}, originally developed for the fragment molecular orbital (FMO) method \cite {FMO1,FMOrev1,FMObook,FMOrev2}. For FMO, there is also the partial energy gradient method \cite{PEG}. EFMO is based on dividing a large molecular system into fragments, performing ab initio calculations of fragments and their pairs, and combining their energies into the energy of the whole system (see more below). In the FD approach we employ here, one defines an active region associated with the active site, and the cost of a geometry optimization is then essentially given by the cost associated with the active region.
However, unlike the quantum-mechanical / molecular mechanical (QM/MM) method \cite{MD-rev4} with non-polarizable force fields, the polarization of the whole system is accounted for in FMO and EFMO methods: in the former via the explicit polarizing potential and in the latter via fragment polarizabilities. Another important difference between EFMO and QM/MM is that the former does not involve force fields, and the need to elaborately determine parameters for ligands does not exist in EFMO. Also, in EFMO all fragments are treated with quantum mechanics, and the problem of the active site size \cite{Ryde} does not arise. The paper is organized as follows: First, we derive the EFMO energy and gradient expressions for the frozen domain approach, when some part of the system is frozen during the geometry optimization. Secondly, we predict the reaction barrier of the conversion of chorismate to prephenate (Figure~\ref{fig:chorismate}) in chorismate mutase. The reaction has been studied previously using conventional QM/MM techniques \cite{lyne1995insights,davidson1996mechanism,hall2000aspects,marti2001transition,worthington2001md,lee2002reaction,lee2003exploring,ranaghan2003insights,ranaghan2004transition,friesner2005ab,crespo2005multiple,claeyssens2011analysis}. The EFMO method is similar in spirit to QM/MM in using a cheap model for the less important part of the system, and the mapping is accomplished with a reasonable amount of computational resources (four days per reaction path using 80 CPU cores). Finally, we summarize our results and discuss future directions.
\section*{Background and Theory} The EFMO energy of a system of $N$ fragments (monomers) is \begin{equation} E^\mathrm{EFMO} = \sum_I^N E_I^0 + \sum_{IJ}^{R_{I,J}\leq R_\mathrm{resdim}} \left( \Delta E_{IJ}^0 - E_{IJ}^\mathrm{POL} \right) + \sum_{IJ}^{R_{I,J} > R_\mathrm{resdim}} E_{IJ}^\mathrm{ES} + E_\mathrm{tot}^\mathrm{POL} \label{eqn:efmoenergy} \end{equation} where $E_I^0$ is the gas phase energy of monomer $I$. The second sum in equation~\ref{eqn:efmoenergy} is the pairwise correction to the monomer energy and only applies for pairs of fragments (dimers) separated by an interfragment distance $R_{I,J}$ (defined previously \cite{steinmann2010effective}) less than a threshold $R_\mathrm{resdim}$. The correction for dimer $IJ$ is \begin{equation} \Delta E_{IJ}^0 = E_{IJ}^0 - E^0_I - E^0_J. \end{equation} $E_{IJ}^\mathrm{POL}$ and $E_\mathrm{tot}^\mathrm{POL}$ are the classical pair polarization energy of dimer $IJ$ and the classical total polarization energy, respectively. Both energies are evaluated using the induced dipole model\cite{day1996effective,gordon2001effective} based on distributed polarizabilities\cite{minikis2001accurate}. The final sum over $E_{IJ}^\mathrm{ES}$ is the classical electrostatic interaction energy and applies only to dimers separated by a distance greater than $R_\mathrm{resdim}$. These energies are evaluated using atom-centered multipole moments through quadrupoles\cite{stone1981distributed}. The multipole moments and distributed polarizabilities are computed on the fly for each fragment\cite{steinmann2010effective,steinmann2012effective}. In cases where only part of a molecular system is to be optimized by minimizing the energy, equation~\ref{eqn:efmoenergy} can be rewritten, resulting in a method conceptually overlapping with QM/MM in using a cheap model for the less important part of the system. 
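The bookkeeping in equation~\ref{eqn:efmoenergy} can be sketched in a few lines of Python. This is a purely illustrative assembly of precomputed fragment energies (the dictionaries and numerical values are hypothetical toy inputs, not part of the EFMO implementation in GAMESS):

```python
def efmo_energy(E_mono, dE_close, E_pol_pair, E_es_far, E_pol_tot):
    """Assemble the EFMO total energy (eq. 1) from per-fragment pieces.

    E_mono:     {I: E_I^0} gas-phase monomer energies
    dE_close:   {(I, J): Delta E_IJ^0} for dimers with R_IJ <= R_resdim
    E_pol_pair: {(I, J): E_IJ^POL} for the same close dimers
    E_es_far:   {(I, J): E_IJ^ES} for dimers with R_IJ > R_resdim
    E_pol_tot:  classical total polarization energy
    """
    energy = sum(E_mono.values())
    energy += sum(dE_close[p] - E_pol_pair[p] for p in dE_close)
    energy += sum(E_es_far.values())
    return energy + E_pol_tot

# Hypothetical three-fragment example with toy hartree-scale numbers:
E = efmo_energy(
    E_mono={1: -76.0, 2: -76.1, 3: -76.2},
    dE_close={(1, 2): -0.010},
    E_pol_pair={(1, 2): -0.002},
    E_es_far={(1, 3): -0.001, (2, 3): -0.003},
    E_pol_tot=-0.004,
)
print(E)
```

The frozen domain expressions below reuse exactly these pieces, restricted to the relevant regions.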
Consider a system $S$ (Figure~\ref{fig:active}) where we wish to optimize the positions of atoms in region $A$, while keeping the atoms in region $b$ and $F$ frozen (the difference between $b$ and $F$ will be discussed below). With this definition, we rewrite the EFMO energy as \begin{equation} \label{eqn:energyregion} E^\mathrm{EFMO} = E^0_F + E^0_b + E^0_A + E^0_{F/b} + E^0_{F/A} +E^0_{A/b} + E_\mathrm{tot}^\mathrm{POL}, \end{equation} where $E^0_A$ is the internal energy of region $A$. Region $A$ is made of fragments containing atoms whose positions are optimized, and $A$ can also have some frozen atoms \begin{equation} E^0_A = \sum_{I\in A}^N E_I^0 + \sum_{{I,J\in A}}^{R_{I,J}\leq R_\mathrm{resdim}} \left( \Delta E_{IJ}^0 - E_{IJ}^\mathrm{POL} \right) + \sum_{{I,J\in A}}^{R_{I,J} > R_\mathrm{resdim}} E_{IJ}^\mathrm{ES}. \end{equation} Similarly, $E^0_b$ is the internal energy of $b$ \begin{equation} \label{eqn:regionbenergy} E^0_b = \sum_{I\in b}^N E_I^0 + \sum_{{I,J\in b}}^{R_{I,J}\leq R_\mathrm{resdim}} \left( \Delta E_{IJ}^0 - E_{IJ}^\mathrm{POL} \right) + \sum_{{I,J\in b}}^{R_{I,J} > R_\mathrm{resdim}} E_{IJ}^\mathrm{ES}, \end{equation} Region $A$ is surrounded by a buffer $b$, because fragment pairs computed with QM containing one fragment outside of $A$ (i.e., in $b$) can still contribute to the total energy gradient (see below). On the other hand, fragment pairs with one fragment in $F$ can also contribute to the total gradient, but they are computed using a simple classical expression rather than with QM. Note that the relation between the notation used in FMO/FD and that we use here is as follows: $A,F$ and $S$ are the same. The buffer region $B$ includes $A$, but $b$ does not, i.e., $A$ and $b$ share no atoms. Formally, $A$ and $b$ are always treated at the same level of theory by assigning fragments to the same layer. {\color{black}In the EFMO method, covalent bonds between fragments are not cut.
Instead, electrons from a bond connecting two fragments are assigned entirely to one of the fragments. The electrons of the fragments are kept in place by using frozen orbitals across the bond. \cite{fedorov2008covalent,fedorov2009analytic,steinmann2012effective} Fragments connected by a covalent bond} share atoms (Figure~\ref{fig:bondregion}) through the bonding region, so it is possible that one side changes the wave function of the bonding region\cite{steinmann2012effective}. It is therefore necessary to re-evaluate the internal \emph{ab initio} energy of region $b$ for each new geometry step. The internal geometries of fragments in region $F$ are completely frozen so the internal energy is constant and is therefore neglected \begin{equation} \label{eqn:eintfrozen} E^0_F = 0. \end{equation} However, it is still necessary to compute the multipole moments and polarizability tensors (and therefore the wave function) of the fragments in $F$ once at the beginning of a geometry optimization to evaluate $E_\mathrm{tot}^\mathrm{POL}$ in equation~\ref{eqn:energyregion} as well as some inter-region interaction energies defined as \begin{align} \label{eqn:crossba} E^0_{b/A} &= \sum_{\substack{I\in b\\ J\in A}}^{R_{I,J}\leq R_\mathrm{resdim}} \left(\Delta E_{IJ}^0 - E_{IJ}^\mathrm{POL} \right) + \sum_{\substack{I\in b\\ J\in A}}^{R_{I,J} > R_\mathrm{resdim}} E_{IJ}^\mathrm{ES}, \\ \label{eqn:crossfa} E^0_{F/A} &= \sum_{\substack{I\in A\\ J\in F}} E_{IJ}^\mathrm{ES}, \\ \label{eqn:crossfb} E^0_{F/b} &= 0. \end{align} Equation~\ref{eqn:crossfa} assumes that $b$ is chosen so that fragments in $A$ and $F$ are sufficiently separated (i.e., $R_{I,J} > R_\mathrm{resdim}$) so the interaction is evaluated classically. If all atoms in region $b$ are frozen, then $E^0_{F/b}$ is constant and can be neglected. However, this assumes that the positions of all atoms on both sides of the bonds connecting fragments are frozen.
The final expression for the EFMO frozen domain (EFMO/FD) energy is \begin{equation} \label{eqn:efmofinal} E^{\mathrm{EFMO}} = E^0_b + E^0_A + E^0_{b/A} + \sum_{\substack{I\in A\\ J\in F}}^{R_{I,J} > R_\mathrm{resdim}} E_{IJ}^\mathrm{ES} + E_\mathrm{tot}^\mathrm{POL}. \end{equation} Finally, we note that due to the frozen geometry of $b$ we can further gain a speedup by not evaluating dimers in $b$ (cross terms between $A$ and $b$ are handled explicitly according to equation~\ref{eqn:crossba}) since they do not contribute to the energy or gradient of $A$. This corresponds to the frozen domain with dimers (EFMO/FDD), and equation~\ref{eqn:regionbenergy} becomes \begin{equation} \label{eqn:approxregionb} E^0_b = \sum_{I\in b}^N E_I^0. \end{equation} The gradient of each region is \begin{align} \frac{\partial E^{\mathrm{EFMO}}}{\partial x_A} &= \frac{\partial E^0_A}{\partial x_A} + \frac{\partial E^0_{A/b}}{\partial x_A} + \frac{\partial E^0_{A/F}}{\partial x_A} + \frac{\partial E_\mathrm{tot}^\mathrm{POL}}{\partial x_A}, \\ \label{eqn:gradregionb}\frac{\partial E^{\mathrm{EFMO}}}{\partial x_b} &= 0, \\ \frac{\partial E^{\mathrm{EFMO}}}{\partial x_F} &= 0, \end{align} and the details of their evaluation have been discussed previously\cite{steinmann2010effective,steinmann2012effective}. Equation~\ref{eqn:gradregionb} does not apply to non-frozen atoms shared with region $A$. The frozen domain formulation of EFMO was implemented in GAMESS\cite{schmidt1993general} and parallelized using the generalized distributed data interface\cite{FMO3,fedorov2004new}. \section*{Computational Details} \subsection*{Preparation of the Enzyme Model} We followed the strategy by Claeyssens \emph{et~al.}\cite{claeyssens2011analysis} The structure of chorismate mutase (PDB: 2CHT) solved by Chook \emph{et~al}.\cite{chook1993crystal} was used as a starting point.
Chains A, B and C were extracted using PyMOL\cite{schrodinger2010pymol} and subsequently protonated with PDB2PQR\cite{dolinsky2004pdb2pqr,dolinsky2007pdb2pqr} and PROPKA\cite{li2005very} at $\mathrm{pH} = 7$. {\color{black}The protonation state of all residues can be found in Table~S1.} The inhibitor between chain A and C was replaced with chorismate in the reactant state (\textbf{1}, Figure~\ref{fig:chorismate}) modeled in Avogadro\cite{avogadro,avogadro2012paper}. The entire complex (chorismate mutase and chorismate) was solvated in water (TIP3P\cite{jorgensen1983comparison}) using GROMACS. \cite{van2005gromacs,hess2008gromacs} {\color{black} To neutralize the system, 11 Na$^{+}$ counter ions were added.} The protein and counter ions were treated with the CHARMM27\cite{kerell2000development,brooks2009charmm} force field in GROMACS. Force-field parameters for chorismate were generated using the SwissParam\cite{zoete2011swissparam} tool. To equilibrate the complex, a $100$ ps NVT run at $T=300\unit{K}$ was followed by a $100\unit{ps}$ NPT run at $P=1\unit{bar}$ and $T=300\unit{K}$. The production run was an isothermal-isobaric trajectory run for $10$ ns. A single conformation was randomly selected from the last half of the simulation and energy minimized in GROMACS to a force threshold of $F_\mathrm{max}=300 \unit{kJ}\unit{mol^{-1}}\unit{nm^{-1}}$. During equilibration and the final molecular dynamics (MD) simulation, the $\chem{C}3$ and $\chem{C}4$ atoms of chorismate (see Figure~\ref{fig:chorismate}) were harmonically constrained to a distance of 3.3 {\AA} to keep it in the reactant state. Finally, a sphere of 16 {\AA} around the C1 atom of chorismate was extracted in PyMOL and hydrogens were added to correct the valency where the backbone was cut. The final model contains a total of 2398 atoms.
\subsection*{Mapping the Reaction Path} To map out the reaction path, we define the reaction coordinate similarly to Claeyssens \emph{et~al.}\cite{claeyssens2011analysis} as the difference in bond length between the breaking O2-C1 bond and the forming C4-C3 bond in chorismate (see also Figure~\ref{fig:chorismate}), i.e. \begin{equation} \label{eqn:reactioncoordinate} R = R_\mathrm{21} -R_\mathrm{43}. \end{equation} The conversion of chorismate ($R=-2.0$~{\AA}, $R_{21} = 1.4$~{\AA}, $R_{43} = 3.4$~{\AA}) to prephenate ($R=1.9$ {\AA}, $R_{21} = 3.3$~{\AA}, $R_{43} =1.4$~{\AA}) in the enzyme was mapped by constraining the two bond lengths in equation~\ref{eqn:reactioncoordinate} with a harmonic force constant of 500 kcal mol$^{-1}$ \AA$^{-2}$ in steps of $0.1$~{\AA}. For each step, all atoms in the active region ($A$) were minimized to a threshold on the gradient of $5.0\cdot10^{-4}$ Hartree Bohr$^{-1}$ (OPTTOL=5.0e-4 in \$STATPT). For the enzyme calculations we used EFMO-RHF and FMO2-RHF with the frozen domain approximation presented above. We used two different sizes for the active region: small (EFMO:$\mathbf{S}$, Figure~\ref{fig:modelfull}) and large (EFMO:$\mathbf{L}$, Figure~\ref{fig:modelfull_l}). The active region (colored red in Figure~\ref{fig:active}) is defined as all fragments within a distance $R_\mathrm{active}$ of any atom in chorismate (EFMO:$\mathbf{S}:R_\mathrm{active}=2.0$ {\AA}, EFMO:$\mathbf{L}:R_\mathrm{active}=3.0$ {\AA}). In EFMO:$\mathbf{S}$ the active region consists of chorismate, 4 residues and 5 water molecules, while the active region in EFMO:$\mathbf{L}$ consists of chorismate, 11 residues and 4 water molecules. The buffer region (blue in Figure~\ref{fig:active}) is defined as all fragments within $2.5$ {\AA} of the active region for both EFMO:$\mathbf{S}$ and EFMO:$\mathbf{L}$. The rest of the system is frozen.
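As an illustration of the reaction coordinate defined in equation~\ref{eqn:reactioncoordinate}, the following minimal sketch (plain Python, not part of the production workflow; coordinates and function names are ours) evaluates $R$ from Cartesian positions of the four atoms:

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points (in angstrom)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def reaction_coordinate(o2, c1, c4, c3):
    """R = R_21 - R_43: negative in the reactant (chorismate),
    positive in the product (prephenate)."""
    return distance(o2, c1) - distance(c4, c3)
```

For the reactant values quoted above ($R_{21}=1.4$~{\AA}, $R_{43}=3.4$~{\AA}) this gives $R=-2.0$~{\AA}.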
To prepare the input files we used FragIt\cite{steinmann2012fragit}, which automatically divides the system into fragments; in this work we used the fragment size of one amino acid residue or water molecule per fragment. In order to refine the energetics, for each minimized step on the reaction path we performed two-layer ONIOM\cite{svensson1996oniom,dapprich1999new} calculations \begin{equation} E^\mathrm{high}_\mathrm{real} \approx E^\mathrm{low}_\mathrm{real} + E^\mathrm{high}_\mathrm{model} - E^\mathrm{low}_\mathrm{model}, \end{equation} where $E^\mathrm{low}_\mathrm{real} = E^\mathrm{EFMO}$ according to equation~\ref{eqn:energyregion}. This can be considered a special case of the more general multicenter ONIOM based on FMO \cite{ONIOM-FMO}, using EFMO instead of FMO. The high level model system is chorismate in the gas phase, calculated using B3LYP\cite{becke1993new,stephens1994ab,hertwig1997parameterization} (DFTTYP=B3LYP in \$CONTRL) or MP2 (MPLEVL=2 in \$CONTRL) with either 6-31G(d) or the cc-pVDZ{\color{black}, cc-pVTZ and cc-pVQZ basis sets} by Dunning \cite{dunning1989gaussian}. We also carried out multilayer EFMO and FMO\cite{fedorov2005multilayer} single-point calculations where region $F$ is described by RHF/6-31G(d) and $b$ and $A$ (for EFMO) or $B$ ($B=A \cup b$ for FMO \cite{fedorov2011geometry}) are calculated using MP2/6-31G(d). The FDD approximation in equation~\ref{eqn:approxregionb} is enabled by specifying MODFD=3 in \$FMO, similarly to the frozen domain approach in FMO\cite{fedorov2011geometry}. All calculations had spherical contaminants removed from the basis set (ISPHER=1 in \$CONTRL). \subsection*{Obtaining the Activation Enthalpy} The activation enthalpy is obtained in two different ways by calculating averages of $M$ adiabatic reaction pathways. The starting points of the $M$ pathways were randomly extracted from the MD simulation, followed by the reaction path mapping procedure described above for each pathway individually.
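Before turning to the averages, note that the two-layer ONIOM correction introduced above amounts to simple arithmetic on four energies; a minimal sketch (illustrative Python; the function name is ours, not a GAMESS keyword):

```python
def oniom2(e_low_real, e_high_model, e_low_model):
    """Two-layer ONIOM estimate of E_high_real.

    In the present setup e_low_real would be the EFMO energy of the
    whole (real) system and the model system is gas-phase chorismate
    at the low (RHF) and high (MP2 or B3LYP) levels."""
    return e_low_real + e_high_model - e_low_model
```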
One way to obtain the activation enthalpy averages the barriers from each individual adiabatic reaction path \cite{claeyssens2006high} \begin{equation} \label{eqn:enthalpy1} \Delta H_1^\ddagger = \frac{1}{M} \sum_{i=1}^M \left(E_{\mathrm{TS},i} - E_{\mathrm{R},i}\right) -1.6\,\mathrm{kcal\,mol}^{-1}. \end{equation} Here $M$ is the number of reaction paths ($M = 7$, Figure~\ref{fig:average}), $E_{\mathrm{TS},i}$ is the highest energy on the adiabatic reaction path, while $E_{\mathrm{R},i}$ is the lowest energy with a negative reaction coordinate. The term of 1.6 kcal mol$^{-1}$ corrects for the change in zero point energy and thermal contributions\cite{claeyssens2006high}. The other way of estimating the activation enthalpy is \cite{ranaghan2004transition}: \begin{equation} \label{eqn:enthalpy2} \Delta H_2^\ddagger = \langle E_{\mathrm{TS}}\rangle - \langle E_\mathrm{R}\rangle - 1.6\,\mathrm{kcal\,mol}^{-1}. \end{equation} Here $\langle E_{\mathrm{TS}}\rangle$ and $\langle E_\mathrm{R}\rangle$ are, respectively, the highest energy and the lowest energy with a negative reaction coordinate on the averaged adiabatic path (bold line in Figure~\ref{fig:average}). The brackets here denote averaging over the 7 reaction paths, and the difference between Eqs.~\ref{eqn:enthalpy1} and \ref{eqn:enthalpy2} arises from the noncommutativity of the average and the min/max operation over coordinates: in Eq.~\ref{eqn:enthalpy1} we find a minimum and a maximum for each curve separately and average the results, whereas in Eq.~\ref{eqn:enthalpy2} we first average and then find the extrema. As discussed below, the two reaction enthalpies agree within 0.2 kcal/mol, which indicates that the TS occurs at roughly the same value of the reaction coordinate for most paths.
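The difference between the two estimators in Eqs.~\ref{eqn:enthalpy1} and \ref{eqn:enthalpy2} can be made explicit with a small numerical sketch (illustrative Python, not the analysis code used here; each path is a list of $(R, E)$ pairs on a common grid of reaction coordinates):

```python
import statistics

CORRECTION = 1.6  # kcal/mol: zero-point and thermal correction

def dh1(paths):
    """Average of per-path barriers (Eq. enthalpy1):
    max/min are located on each path first, then averaged."""
    barriers = [max(e for _, e in p) - min(e for r, e in p if r < 0)
                for p in paths]
    return statistics.mean(barriers) - CORRECTION

def dh2(paths):
    """Barrier of the averaged path (Eq. enthalpy2):
    paths are averaged first, then the extrema are located."""
    rs = [r for r, _ in paths[0]]
    mean_e = [statistics.mean([p[i][1] for p in paths])
              for i in range(len(rs))]
    e_ts = max(mean_e)
    e_r = min(e for r, e in zip(rs, mean_e) if r < 0)
    return e_ts - e_r - CORRECTION
```

Because max and min do not commute with the average, $\Delta H_1^\ddagger \geq \Delta H_2^\ddagger$ whenever the TS position varies between paths; the 0.2 kcal/mol agreement found here therefore indicates that the TS occurs at nearly the same reaction coordinate on most paths.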
\section*{Results and Discussion} \subsection*{Effects of Methodology, Region Sizes and Approximations} \label{sect1} Reaction barriers obtained in the enzyme using harmonic constraints are plotted in Figure~\ref{fig:rhfbarrier} and listed in Table~\ref{tbl:rhfbarriers} for different settings of region sizes and approximations. All calculated reaction barriers are within 0.5 kcal mol$^{-1}$ of each other when going from the reactant ($R_R$) to the proposed transition state ($R_{TS}$), where the reaction barriers for the TSs are around 46 kcal mol$^{-1}$. The same is true when going to the product $R_P$. Only the large model (EFMO:$\mathbf{L}$) shows a difference in energy near the product ($R_P$), with a lowering of the relative energy by 4 kcal mol$^{-1}$ compared to the other settings. The reaction coordinates are also similar for the small systems ($R_P=1.41$ \AA, except for $R_\mathrm{resdim}=2.0$ which gives $R_P=1.56$ \AA) with some minor kinks on the energy surface from optimization of the structures without constraints at $R_P$. The EFMO:\textbf{L} model has a different reaction coordinate for the product ($R_P=1.57$ \AA) and also a shifted reaction coordinate for the transition state ($R_{TS}=-0.12$ \AA), which we attribute to a better description of more separated pairs in the active region and, more importantly, to the fact that the energy surface is very flat around the TS. Interestingly, using FMO2 shows no significant change in either reaction barriers or reaction coordinates for the reactant, transition state or product, which differ from EFMO:\textbf{S} by 0.02 \AA, 0.03 \AA\, and 0.01 \AA, respectively. Timings are discussed below. Previous work by Ranaghan \emph{et~al.}\cite{ranaghan2003insights,ranaghan2004transition} obtained an RHF barrier of $36.6$ kcal mol$^{-1}$, which is 10 kcal/mol lower than what we obtained. Also, they observed that the transition state occurred earlier, at $R_{TS} = -0.3$ \AA.
The difference in reaction barrier from our findings is attributed to a poorer enzyme structure; other snapshots do yield similar or better reaction barriers (see below). Furthermore, the same study by Ranaghan et al. found that the reaction is indeed exothermic with a reaction energy of around $-30$ kcal mol$^{-1}$ at the RHF/6-31G(d) level of theory. We expect this difference from our results to arise from the fact that the study by Ranaghan et al. used a fully flexible model for both the substrate and the enzyme, where the entire protein is free to adjust, contrary to our model, where we have chosen active fragments and atoms in a uniform sphere around a central fragment. This is perhaps not the best solution if one includes too few fragments (which lowers the computational cost), because fragments in the buffer region are unable to move and can cause steric clashes. The lowering of the energy for EFMO:\textbf{L} suggests this. \subsection*{Refined Reaction Energetics} \label{sect2} For the smallest EFMO:\textbf{S} system ONIOM results are presented in Figure~\ref{fig:oniombarrier} and in Table~\ref{tbl:oniombarriers} for various levels of theory. By calculating the MP2/cc-pVDZ:EFMO-RHF/6-31G(d) energy using ONIOM we obtain a 19.8 kcal mol$^{-1}$ potential energy barrier. Furthermore, the reaction energy is lowered from $-1.3$ kcal mol$^{-1}$ to $-5.5$ kcal mol$^{-1}$. {\color{black}Increasing the basis set size through cc-pVTZ and cc-pVQZ changes the barrier to 21.8 kcal mol$^{-1}$ and 21.7 kcal mol$^{-1}$, respectively, and the reaction energy to $-1.1$ kcal mol$^{-1}$ and 0.8 kcal mol$^{-1}$.} Using the smaller 6-31G(d) basis set with MP2, the reaction barrier is 22.2 kcal mol$^{-1}$ and the reaction energy is $-3.2$ kcal mol$^{-1}$. The B3LYP results are improvements for the TS only, reducing the barrier to $23.8$ kcal mol$^{-1}$ for B3LYP/cc-pVDZ:EFMO-RHF/6-31G(d). The same is not true for the product, where the energy is increased by about $3$ kcal mol$^{-1}$.
For the other systems treated using EFMO-RHF/6-31G(d) discussed in the previous section, ONIOM corrected results {\color{black}at the MP2 or B3LYP level of theory using a cc-pVDZ basis set} are listed in Tables~S2 to S5 and show differences from the above of less than $1$ kcal mol$^{-1}$; again, the reaction coordinates change slightly depending on the system. {\color{black}The effect of including correlation by means of MP2 with systematically larger basis sets is that the potential energy barrier for the reaction rises; the same is true for the overall reaction energy.} The results presented here for MP2 are in line with what has been observed previously by Ranaghan \emph{et~al.}\cite{ranaghan2004transition} and Claeyssens \emph{et~al.}\cite{claeyssens2011analysis}. Overall, the reaction barrier is reduced to roughly half of the RHF barrier and the observed coordinates for the reaction shift slightly. We do note that this study and the study by Ranaghan \emph{et~al.} use ONIOM-style energy corrections for the correlation and not geometry optimizations done at a correlated level. Overall, we observe that the predicted reaction coordinate for the approximate transition state in the conversion of chorismate to prephenate occurs around $0.2$ {\AA} later than in those studies. The results for the multilayer single points along the energy surface are presented in Table~\ref{tbl:efmomp2barrier}. The barrier calculated at the EFMO-RHF:MP2/6-31G(d) level of theory is predicted to be $27.6$ kcal mol$^{-1}$, which is {\color{black}5.4} kcal mol$^{-1}$ higher than the ONIOM barrier, and the reaction coordinates are shifted for both the reactant and the TS from $R_{R}=-1.95$ {\AA} ~to $R_{R}=-1.64$ {\AA} ~and $R_{TS}=-0.36$ {\AA} ~to $R_{TS}=-0.11$ {\AA}. Similar results are obtained at the FMO2-RHF:MP2/6-31G(d) level of theory.
{\color{black}The difference from the ONIOM corrected values in Table~\ref{tbl:efmomp2barrier} is likely due to the inclusion of dispersion effects between chorismate and the enzyme, which are apparently weaker at the transition state than in the reactant state.} \subsection*{Ensemble Averaging} In Figure~\ref{fig:average} and Figure~\ref{fig:average_cct} we show 7 adiabatic reaction paths mapped with EFMO-RHF/6-31G(d) starting from 7 MD snapshots; the energetics were refined with ONIOM at the MP2/cc-pVDZ {\color{black}and MP2/cc-pVTZ levels}. In EFMO, we used a small active region (EFMO:\textbf{S}), $R_\mathrm{resdim}=1.5$, and no dimer calculations in region $b$ (S15FD3 in Figure~\ref{fig:rhfbarrier}). Out of the 7 trajectories, one is described in detail in the previous subsection. For MP2/cc-pVDZ:EFMO-RHF/6-31G(d) the reaction enthalpies are $\Delta H_1^\ddagger = 18.3 \pm 3.5$ kcal mol$^{-1}$ and $\Delta H_2^\ddagger = 18.2$ kcal mol$^{-1}$ [cf. Equations (\ref{eqn:enthalpy1}) and (\ref{eqn:enthalpy2})], the latter having an uncertainty of the mean of $6.9$ kcal mol$^{-1}$. {\color{black}For MP2/cc-pVTZ:EFMO-RHF/6-31G(d) the reaction enthalpies are $\Delta H_1^\ddagger = 19.3 \pm 3.7$ kcal mol$^{-1}$ and $\Delta H_2^\ddagger = 18.8$ kcal mol$^{-1}$ with an uncertainty of the mean of $7.1$ kcal mol$^{-1}$.} These barriers are ca.\ $5.5$ {\color{black}($6.5$)} kcal mol$^{-1}$ higher than the experimental value of $12.7 \pm 0.4$ kcal mol$^{-1}$ for MP2/cc-pVDZ {\color{black}(MP2/cc-pVTZ)}. For comparison, the activation enthalpy obtained by Claeyssens \emph{et~al.} \cite{claeyssens2006high,claeyssens2011analysis} ($9.7 \pm 1.8$ kcal mol$^{-1}$) is underestimated by $3.0$ kcal mol$^{-1}$.
There are several differences between our study and that of Claeyssens \emph{et~al.} that could lead to an overestimation of the barrier height: biasing the MD towards the TS rather than the reactant, a larger enzyme model (7218 vs 2398 atoms), and more conformational freedom when computing the potential energy profile. With regard to the latter point, while Figure~\ref{fig:rhfbarrier} shows that increasing the active region has a relatively small effect on the barrier, this may not be the case for all snapshots. We did identify one trajectory that failed to produce a meaningful reaction path, which is presented in Figure S1. Here, the energy of the barrier becomes unrealistically high because of very little flexibility in the active site and unfortunate placement of Phe57 (located in the buffer region, Figure S2), which hinders the conformational change needed for the successful conversion to prephenate, yielding an overall reaction energy of around $+11$ kcal mol$^{-1}$. As noted above, the EFMO:\textbf{L} setting is a possible solution to this, as more of the protein is available to move, but as seen from Table~\ref{tbl:rhfbarriers} the computational cost doubles. \subsection*{Timings} Using the computationally most efficient method tested here (EFMO:\textbf{S}), $R_\mathrm{resdim}=1.5$, and skipping dimers in the buffer region $b$, an adiabatic reaction path, which requires a total of 467 gradient evaluations, can be computed in four days using 80 CPU cores (20 nodes with 4 cores each) at the RHF/6-31G(d) level of theory. As shown in Table~\ref{tbl:rhfbarriers}, the same calculation using FMO2 takes roughly $T^\mathrm{full}_\mathrm{rel}=7.5$ times longer. Increasing $R_\mathrm{resdim}$ from 1.5 to 2 has a relatively minor effect on the CPU time (a factor of 1.2), while performing the dimer calculations in the buffer region nearly doubles (1.7) the CPU time.
Similarly, increasing the size of the active region from $2.0$ \AA ~to $3.0$ \AA ~around chorismate also nearly doubles (1.8) the CPU time. This is mostly due to the fact that more dimer calculations must be performed, but the optimizations also require more steps (513 gradient evaluations) to converge due to the larger number of degrees of freedom that must be optimized. Looking at a single minimization for a specific reaction coordinate $R=-1.79$ {\AA}, the most efficient method takes 4.5 hours. Here, the relative timings $T_\mathrm{rel}$ are all larger than for the full run ($T^\mathrm{full}_\mathrm{rel}$) due to a slight increase in the number of geometry steps (around 25) taken for all but FMO2, which is identical to the reference (22 steps). Thus, the FMO2 minimization is overall 6.7 times as expensive as the EFMO one. \section*{Summary and Outlook} In this paper we have shown that the effective fragment molecular orbital (EFMO) method \cite{steinmann2010effective,steinmann2012effective} can be used to efficiently map out enzymatic reaction paths provided that the geometry of a large part of the enzyme and solvent is frozen. In EFMO one defines an active region associated with the active site, and the cost of a geometry optimization is then essentially the cost of running quantum-mechanical calculations on the active domain. This is similar to the cost of QM/MM, if the QM region is the same; the difference is that in EFMO we freeze the coordinates of the rest of the system, whereas in QM/MM they are usually fully relaxed. On the other hand, EFMO does not require parameters and can be better considered an approximation to a full QM calculation rather than a QM/MM approach. In this work we used the mapping technique based on running a classical MD simulation, selecting some trajectories, freezing the coordinates of the outside region, and doing constrained geometry optimizations along a chosen reaction coordinate.
An alternative to this approach is to run a full MD simulation of a chemical reaction using EFMO. This has already been done for many chemical reactions using FMO-MD \cite{fmomd,sato2008does,fmomd-rev} and can be done in the future with EFMO. A potential energy profile for the chorismate to prephenate reaction in chorismate mutase has been computed in 4 days using 80 CPU cores for an RHF/6-31G(d) description of a truncated model of the enzyme containing 2398 atoms. For comparison, a corresponding FMO2 calculation takes about 7.5 times longer. The cost of EFMO calculations is mainly determined by the sizes of the buffer and active regions. Compared to QM/MM with a QM region of the same size, EFMO, as a nearly linear-scaling method, becomes faster than full QM if the system size is sufficiently large; especially for correlated methods like MP2, this cross-over should happen at relatively small system sizes. Our computed conformationally-averaged activation enthalpy is in reasonable agreement with the experimental value, although overestimated by 5.5 kcal/mol. The energetics of this reaction depends on the level of calculation. We have shown that using a level of theory better than RHF, for instance MP2 or DFT, considerably improves the energetics; using such an appropriate level also to determine the reaction path, following the formalism in this work, should provide a general and reliable approach in the future. EFMO, as one of the fragment-based methods \cite{gordon2012fragmentation}, can be expected to be useful in various biochemical studies, such as in enzymatic catalysis and protein-ligand binding. It should be noted that in addition to its parameter-free ab initio based nature, EFMO and FMO also offer chemical insight into the processes by providing subsystem information, such as the properties of individual fragments (e.g., the polarization energy) as well as the pair interaction energies between fragments \cite{piedagas,piedapcm}.
This can be of considerable use to fragment-based drug discovery \cite{fbdd,gamess-drug}. \section*{Acknowledgements} CS and JHJ thank the Danish Center for Scientific Computing at the University of Copenhagen for providing computational resources. DGF thanks the Next Generation Super Computing Project, Nanoscience Program (MEXT, Japan) and Strategic Programs for Innovative Research (SPIRE, Japan).
\section{Introduction} \label{sec:intro} Non-Gaussianity in primordial fluctuations is one of the most powerful tools to distinguish different models of inflation (see, {\em e.g.}, Ref.~\cite{CQG-focus-NG} and references therein). To quantify the amount of non-Gaussianity and clarify non-Gaussian observational signatures, it is important to develop methods to deal with nonlinear cosmological perturbations. Second-order perturbation theory~\cite{Maldacena:2002vr,Acquaviva:2002ud,Malik:2003mv} is frequently used for this purpose. Spatial gradient expansion is employed as well in the literature~\cite{Lifshitz:1963ps, Starobinsky:1986fxa,Salopek:1990jq,Deruelle:1994iz,Nambu:1994hu, Sasaki:1995aw,Sasaki:1998ug, Kodama:1997qw, Wands:2000dp, Lyth:2004gb, Lyth:2005du, Rigopoulos:2003ak,Rigopoulos:2004gr,Lyth:2005fi,Takamizu:2010je,Takamizu:2010xy,Takamizu:2008ra,Tanaka:2007gh,Tanaka:2006zp,Naruko:2012fe,Sugiyama:2012tj,Narukosasaki,Gao}. The former can be used to describe the generation and evolution of primordial perturbations inside the horizon, while the latter can deal with the classical {\em nonlinear} evolution after horizon exit. Therefore, it is important to develop both methods and to use them complementarily. In this paper, we are interested in the classical evolution of nonlinear cosmological perturbations on superhorizon scales. This has been addressed extensively in the context of the separate universe approach~\cite{Wands:2000dp} or, equivalently, the $\delta N$ formalism~\cite{Starobinsky:1986fxa,Sasaki:1995aw, Sasaki:1998ug, Lyth:2005fi}. The $\delta N$ formalism is the zeroth-order truncation of gradient expansion. However, higher-order contributions in gradient expansion can be important for extracting more detailed information about non-Gaussianity from primordial fluctuations. 
Indeed, it has been argued that, in the presence of a large effect from the decaying mode due to slow-roll violation, the second-order corrections play a crucial role in the superhorizon evolution of the curvature perturbation already at linear order~\cite{Seto:1999jc, Leach:2001zf,Jain:2007au}, as well as at nonlinear order~\cite{Takamizu:2010xy,Takamizu:2010je}. In order to study the time evolution of the curvature perturbation on superhorizon scales in the context of non-Gaussianity, it is necessary to develop a nonlinear theory of cosmological perturbations valid up to second order in spatial gradient expansion. The gradient expansion technique has been applied up to second order to a universe dominated by a canonical scalar field and by a perfect fluid with a constant equation of state parameter, $P/\rho=$const~\cite{Tanaka:2006zp,Tanaka:2007gh}. The formulae have been extended to handle a universe filled with a non-canonical scalar field described by a generic Lagrangian of the form $W(-\partial^{\mu}\phi\partial_{\mu}\phi/2,\phi)$, as well as a universe dominated by a perfect fluid with a general equation of state, $P=P(\rho)$~\cite{Takamizu:2008ra,Takamizu:2010xy}. Those systems are characterized by a single scalar degree of freedom, and hence one expects that a single master variable governs the evolution of scalar perturbations even at nonlinear order. By virtue of gradient expansion, one can indeed derive a simple evolution equation for an appropriately defined master variable ${\frak R}_u^{\rm NL}$: \begin{eqnarray} {{{\frak R}}_u^{\rm NL}}''+2 {z'\over z} {{{\frak R}}_u^{\rm NL}}' +{c_s^2\over 4} \,{}^{(2)}R[\, {{\frak R}}_u^{\rm NL}\,]={\cal O}(\epsilon^4)\,,\label{ieq1} \end{eqnarray} where the prime represents differentiation with respect to the conformal time, $\epsilon$ is the small expansion parameter, and the other quantities will be defined in the rest of the paper.
This equation is to be compared with its linear counterpart: \begin{eqnarray} {{\cal R}^{\rm Lin}_u}''+2{z'\over z} {{\cal R}^{\rm Lin}_u}' -c_s^2\,\Delta {\cal R}^{\rm Lin}_u=0,\label{ieq2} \end{eqnarray} from which one notices the correspondence between the linear and nonlinear evolution equations. Gradient expansion can be applied also to a multi-component system, yielding the formalism ``beyond $\delta N$'' developed in a recent paper~\cite{Naruko:2012fe}. The purpose of this paper is to extend the gradient expansion formalism further to include a more general class of scalar-field theories obeying a second-order equation of motion. The scalar-field Lagrangian we consider is of the form $W(-\partial^{\mu}\phi\partial_{\mu}\phi/2,\phi)-G(-\partial^{\mu}\phi\partial_{\mu}\phi/2,\phi)\Box\phi$. Inflation driven by this scalar field is more general than k-inflation and is called G-inflation. It is known that k-inflation \cite{ArmendarizPicon:1999rj, Garriga:1999vw} with the Lagrangian $W(-\partial^{\mu}\phi\partial_{\mu}\phi/2,\phi)$ is equivalently described by a perfect fluid cosmology. However, in the presence of $G\Box\phi$, the scalar field is no longer equivalent to a perfect fluid and behaves as an {\em imperfect} fluid~\cite{KGB,KGB2}. The authors of Ref.~\cite{Narukosasaki} investigated superhorizon conservation of the curvature perturbation from G-inflation at zeroth order in gradient expansion, and Gao worked out the zeroth-order analysis in the context of the most general single-field inflation model~\cite{Gao}. In this paper, we present a general analytic solution for the metric and the scalar field for G-inflation at second order in gradient expansion. By doing so we extend the previous result for a perfect fluid \cite{Takamizu:2008ra} and show that the nonlinear evolution equation of the form~(\ref{ieq1}) is deduced straightforwardly from the corresponding linear result even in the case of G-inflation. This paper is organized as follows.
In the next section, we define the non-canonical scalar-field theory that we consider in this paper. In Sec.~\ref{sec:formulation}, we develop a theory of nonlinear cosmological perturbations on superhorizon scales and derive the field equations employing gradient expansion. We then integrate the relevant field equations to obtain a general solution for the metric and the scalar field in Sec.~\ref{sec:solution}. The issue of defining appropriately the nonlinear curvature perturbation is addressed and the evolution equation for that variable is derived in Sec.~\ref{sec:nlcurvpert}. Section~\ref{sec:summary} is devoted to a summary of this paper and discussion. \section{G-inflation} \label{sec:scalarfield} In this paper we study a generic inflation model driven by a single scalar field. We go beyond k-inflation for which the Lagrangian for the scalar field is given by an arbitrary function of $\phi$ and $X:=-g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi/2$, ${\cal L}=W(X, \phi)$, and consider the scalar field described by \begin{equation} I = \int d^4x\sqrt{-g}\bigl[ W(X,\phi)-G(X,\phi)\Box\phi\bigr], \label{scalar-lag} \end{equation} where $G$ is also an arbitrary function of $\phi$ and $X$. We assume that $\phi$ is minimally coupled to gravity. Although the above action depends upon the second derivative of $\phi$ through $\Box\phi =g^{\mu\nu}{\nabla}_\mu{\nabla}_\nu\phi$, the resulting field equations for $\phi$ and $g_{\mu\nu}$ remain second order. In this sense the above action gives rise to a more general single-field inflation model than k-inflation, {\em i.e.}, G-inflation~\cite{Kobayashi:2010cm,Kobayashi:2011nu,Kobayashi:2011pc}. The same scalar-field Lagrangian is used in the context of dark energy and called kinetic gravity braiding~\cite{KGB}. 
Interesting cosmological applications of the Lagrangian~(\ref{scalar-lag}) can also be found, {\em e.g.}, in~\cite{Mizuno:2010ag, Kimura:2010di, Kamada:2010qe, GBounce, Kimura:2011td, Qiu:2011cy, Cai:2012va, Ohashi:2012wf}. In fact, the most general inflation model with second-order field equations was proposed in~\cite{Kobayashi:2011nu} based on Horndeski's scalar-tensor theory~\cite{Horndeski, GenG}. However, in this paper we focus on the action~(\ref{scalar-lag}) which belongs to a subclass of the most general single-field inflation model, because it involves sufficiently new and interesting ingredients while avoiding unwanted complexity. Throughout the paper we use Planck units, $M_{\rm pl}=1$, and assume that the vector $-g^{\mu\nu}{\nabla}_{\nu}\phi$ is timelike and future-directed. (The assumption is reasonable because we are not interested in a too inhomogeneous universe.) The equation of motion for $\phi$ is given by \begin{equation} \nabla_{\mu}\Bigl[(W_X-G_\phi-G_X \Box\phi){\nabla}^\mu \phi-G_X {\nabla}^\mu X\Bigr]+W_\phi-G_\phi\Box\phi= 0, \end{equation} where the subscripts $X$ and $\phi$ stand for differentiation with respect to $X$ and $\phi$, respectively. More explicitly, we have \begin{eqnarray} &&W_X\Box\phi-W_{XX}({\nabla}_\mu{\nabla}_\nu \phi)({\nabla}^\mu\phi{\nabla}^\nu \phi)- 2W_{\phi X}X+W_\phi-2(G_\phi-G_{\phi X}X)\Box\phi\nonumber\\ &&+G_X\left[({\nabla}_\mu{\nabla}_\nu \phi)({\nabla}^\mu{\nabla}^\nu \phi)-(\Box\phi)^2+R_{\mu\nu}{\nabla}^\mu\phi{\nabla}^\nu\phi\right]+2G_{\phi X}({\nabla}_\mu{\nabla}_\nu \phi)({\nabla}^\mu\phi{\nabla}^\nu \phi)+2G_{\phi\phi}X\notag\\ &&-G_{XX}({\nabla}^\mu{\nabla}^\lambda\phi-g^{\mu\lambda}\Box\phi)({\nabla}_\mu{\nabla}^\nu\phi){\nabla}_\nu\phi{\nabla}_\lambda\phi=0. 
\label{eqn:EOM-phi} \end{eqnarray} The energy-momentum tensor of the scalar field is given by \begin{equation} T_{\mu\nu} = W_X{\nabla}_\mu\phi{\nabla}_\nu\phi+Wg_{\mu\nu}-({\nabla}_\mu G {\nabla}_\nu \phi +{\nabla}_\nu G {\nabla}_\mu \phi) +g_{\mu\nu} {\nabla}_\lambda G{\nabla}^\lambda \phi- G_X\Box\phi{\nabla}_\mu\phi{\nabla}_\nu\phi. \label{eqn:Tmunu-phi} \end{equation} It is well known that k-inflation allows for an equivalent description in terms of a perfect fluid, {\em i.e.}, the energy-momentum tensor reduces to that of a perfect fluid with a four-velocity $u_\mu=\partial_\mu\phi/\sqrt{2X}$. However, as emphasized in~\cite{KGB}, for $G_X\neq 0$ the energy-momentum tensor cannot be expressed in a perfect-fluid form in general. This {\em imperfect} nature characterizes the crucial difference between G- and k-inflation. \section{Nonlinear cosmological perturbations} \label{sec:formulation} In this section we shall develop a theory of nonlinear cosmological perturbations on superhorizon scales, following Ref.~\cite{Takamizu:2008ra}. For this purpose we employ the Arnowitt-Deser-Misner (ADM) formalism and perform a gradient expansion in the uniform expansion slicing and the time-slice-orthogonal threading. \subsection{The ADM decomposition} Employing the ($3+1$)-decomposition of the metric, we write \begin{equation} ds^2 = g_{\mu\nu}dx^{\mu}dx^{\nu} = - \alpha^2 dt^2 + \hat{\gamma}_{ij}(dx^i+\beta^idt)(dx^j+\beta^jdt), \end{equation} where $\alpha$ is the lapse function and $\beta^i$ is the shift vector. Here, latin indices run over $1, 2, 3$. We introduce the unit vector $n^{\mu}$ normal to the constant $t$ hypersurfaces, \begin{equation} n_{\mu}dx^{\mu} = -\alpha dt, \quad n^{\mu}\partial_{\mu} = \frac{1}{\alpha}(\partial_t-\beta^i\partial_i). 
\end{equation} The extrinsic curvature $K_{ij}$ of constant $t$ hypersurfaces is given by \begin{equation} K_{ij} ={\nabla}_i n_j= \frac{1}{2\alpha}\left(\partial_t\hat{\gamma}_{ij}-\hat{D}_i\beta_j- \hat{D}_j\beta_i\right), \label{eqn:def-K} \end{equation} where $\hat{D}_i$ is the covariant derivative associated with the spatial metric $\hat{\gamma}_{ij}$. The spatial metric and the extrinsic curvature can further be expressed in a convenient form as \begin{align} \hat{\gamma}_{i j} &= a^2(t) e^{2\psi(t, \mathbf{x})} \gamma_{i j} , \\ K_{i j} &= a^2(t) e^{2\psi} \left( \frac{1}{3} K \gamma_{i j} + A_{i j} \right), \label{deco-Kij} \end{align} where $a(t)$ is the scale factor of a fiducial homogeneous Friedmann-Lema\^itre-Robertson-Walker (FLRW) spacetime, the determinant of $\gamma_{ij}$ is normalised to unity, $\ma{det}\,\gamma_{ij}=1$, and $A_{i j}$ is the trace-free part of $K_{ij}$, $\gamma^{ij}A_{ij}=0$. The trace $K:= \hat\gamma^{ij}K_{ij}$ is explicitly written as \begin{gather} K = \frac{1}{\alpha} \Bigl[ 3( H + {\partial}_t \psi) - \hat{D}_i \beta^i \Bigr] \,, \label{def-K} \end{gather} where $H=H(t)$ is the Hubble parameter defined by $H := d \ln a (t)/d t$. In deriving Eq.~(\ref{def-K}), $\partial_t(\ma{det}\,\gamma_{ij})=\gamma^{ij}\partial_t\gamma_{ij}=0$ was used. Hereafter, in order to simplify the equations, we choose the spatial coordinates appropriately to set \begin{align} \beta^i=0. \label{time-orthogonal-cond} \end{align} We call this choice of spatial coordinates the {\it time-slice-orthogonal threading}.
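As a quick consistency check of Eq.~(\ref{def-K}), note that on the fiducial FLRW background, where $\alpha=1$, $\psi=0$ and $\beta^i=0$, it reduces to \begin{equation} K = 3H, \end{equation} so that $K/3$ plays the role of the local generalization of the Hubble expansion rate.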
With $\beta_i=0$, all the independent components of the energy-momentum tensor (\ref{eqn:Tmunu-phi}) are expressed as \begin{eqnarray} E & := & T_{\mu\nu}n^{\mu}n^{\nu} = (W_X-G_X\Box\phi)({\partial}_\perp\phi)^2-W- {\partial}_\perp G{\partial}_\perp\phi-\hat{\gamma}^{ij}{\partial}_i G{\partial}_j \phi, \label{def:EEE}\\ -J_i & := & T_{\mu i}n^{\mu} = \left[(W_X-G_X\Box\phi){\partial}_i\phi-{\partial}_i G\right]{\partial}_\perp\phi-{\partial}_\perp G{\partial}_i \phi, \label{def:J}\\ S_{ij} & := & T_{ij}, \end{eqnarray} where $\partial_{\perp}:= n^{\mu}\partial_{\mu}$. Let us now move on to the $(3+1)$-decomposition of the Einstein equations. In the ADM language, the Einstein equations are separated into four constraints (the Hamiltonian constraint and three momentum constraints) and six dynamical equations for the spatial metric. The constraints are \begin{align} & {1\over a^2} R[e^{2\psi}\gamma] + {2\over 3}K^2 - A_{ij}A^{ij} = 2E, \label{eqn:Hamiltonian-const}\\ & {2\over 3}\partial_i K -e^{-3\psi} D_j\left(e^{3\psi} A^j_{\ i}\right) = J_i, \label{eqn:Momentum-const} \end{align} where $R[e^{2\psi}\gamma]$ is the Ricci scalar constructed from the metric $e^{2\psi}\gamma_{ij}$ and $D_i$ is the covariant derivative with respect to $\gamma_{ij}$. The spatial indices here are raised or lowered by $\gamma^{ij}$ and $\gamma_{ij}$, respectively. As for the dynamical equations, the following first-order equations for the spatial metric ($\psi$, $\gamma_{ij}$) are deduced from the definition of the extrinsic curvature~(\ref{eqn:def-K}): \begin{eqnarray} \partial_{\perp}\psi& = & -\frac{H}{\alpha}+{K \over 3}, \label{eqn:dpsi}\\ \partial_{\perp}{\gamma}_{ij} & = & 2{A}_{ij}.
\label{eqn:dgamma} \end{eqnarray} The dynamical equations for the extrinsic curvature ($K$, ${A}_{ij}$) are given by \begin{eqnarray} \partial_{\perp}K & = & -\frac{K^2}{3} - {A}^{ij}{A}_{ij} + \frac{\hat{D}^2\alpha}{\alpha} - \frac{1}{2}\left(E+3P\right), \label{eqn:dK}\\ \partial_{\perp}{A}_{ij} & = & -K{A}_{ij}+2{A}_i^{\ k}{A}_{kj} + \frac{1}{\alpha}\left[\hat{D}_i \hat{D}_j \alpha\right]^{{\rm TF}} -\frac{1}{a^2e^{2\psi}} \left[R_{ij}[e^{2\psi}\gamma]- S_{ij}\right]^{{\rm TF}}, \label{eqn:dA} \end{eqnarray} where \begin{align} P:= {1\over 3}a^{-2} e^{-2\psi} \gamma^{ij} S_{ij}, \label{def:P-Sii} \end{align} $\hat{D}^2:=\hat{\gamma}^{ij}\hat{D}_i\hat{D}_j$, and $R_{ij}[e^{2\psi}\gamma]$ is the Ricci tensor constructed from the metric $e^{2\psi}\gamma_{ij}$. The trace-free projection operator $[\ldots]^{{\rm TF}}$ is defined for an arbitrary tensor $Q_{ij}$ as \begin{align} \left[Q_{ij}\right]^{{\rm TF}}:= Q_{ij}-{1\over 3} \gamma_{ij}\gamma^{kl}Q_{kl}. \end{align} For the purpose of solving the Einstein equations, the most convenient choice of the temporal coordinate is such that the expansion $K$ is uniform and takes the form: \begin{equation} K(t, \mathbf{x}) = 3H(t). \label{eqn:uniform-Hubble} \end{equation} Hereafter we call this gauge choice with (\ref{time-orthogonal-cond}) {\em the uniform expansion gauge}. Adopting this gauge choice, Eq.~(\ref{eqn:dpsi}) reduces simply to \begin{equation} \partial_t \psi=H(\alpha-1)=:H\, \delta \alpha(t, \mathbf{x}). \label{eqn:uniform-H} \end{equation} From this we see that, in the uniform expansion gauge, {\it the time evolution of the curvature perturbation $\psi$ is caused only by the inhomogeneous part of the lapse function $\delta \alpha(t,x^i)$}, which is related to the non-adiabatic perturbation.
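The reduction of Eq.~(\ref{eqn:dpsi}) to Eq.~(\ref{eqn:uniform-H}) is a one-line computation, but it can also be checked mechanically. The following sympy sketch (our own notation, not part of the derivation) verifies both Eq.~(\ref{eqn:dpsi}) as a consequence of Eq.~(\ref{def-K}) and its uniform-expansion-gauge form:

```python
import sympy as sp

t = sp.symbols('t')
alpha = sp.Function('alpha')(t)
psi = sp.Function('psi')(t)
H = sp.Function('H')(t)

# Eq. (def-K) with beta^i = 0:  K = 3(H + d_t psi)/alpha
K = 3*(H + psi.diff(t))/alpha

# Eq. (eqn:dpsi): d_perp psi = -H/alpha + K/3, with d_perp = (1/alpha) d_t here
print(sp.simplify(psi.diff(t)/alpha - (-H/alpha + K/3)))   # -> 0

# imposing the uniform expansion gauge K = 3H reproduces Eq. (eqn:uniform-H)
psit = sp.solve(sp.Eq(K, 3*H), psi.diff(t))[0]
print(sp.simplify(psit - H*(alpha - 1)))                   # -> 0
```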
\subsection{Gradient expansion: basic assumptions and the order of various terms} In the gradient expansion approach, we introduce a flat FLRW universe characterized by ($a(t)$, $\phi_0(t)$) as a background and suppose that the characteristic length scale $L=a/k$, where $k$ is a wavenumber of a perturbation, is longer than the Hubble length scale $1/H$ of the background, $HL\gg 1$. We use $\epsilon:= 1/(HL)=k/(aH)$ as a small parameter to keep track of the order of various terms and expand the equations in terms of $\epsilon$, so that a spatial derivative acting on a perturbation raises the order by $\epsilon$. The background flat FLRW universe characterized by ($a(t)$, $\phi_0(t)$) satisfies the Einstein equations, \begin{equation} H^2(t) = \frac{1}{3}\rho_0, \quad \dot{H}(t)=-{1\over 2}(\rho_0+P_0), \label{eqn:background-H} \end{equation} and the scalar-field equation of motion, \begin{equation} \dot{\cal J}_0+3H{\cal J}_0 =\left(W_\phi-2XG_{\phi\phi}-2G_{\phi X}X\ddot{\phi}\right)_0. \label{eqn:background-phi-J} \end{equation} Here, an overdot $(\dot{\ })$ denotes differentiation with respect to $t$, and a subscript $0$ indicates the corresponding background quantity, {\em i.e.}, $W_0:= W(X_0,\phi_0)$, $(W_{X})_0:= W_X(X_0,\phi_0)$, etc., where $X_0:=\dot{\phi}^2_0/2$. The background energy density and pressure, $\rho_0$ and $P_0$, are given by \begin{eqnarray} &&\rho_0=\left[-W+2X(W_{X}+3HG_{X} \dot{\phi}-G_{\phi})\right]_0,\\ &&P_0=\left[W-2X(G_{X}\ddot{\phi}+G_{\phi})\right]_0, \end{eqnarray} while ${\cal J}_0$ is defined as \begin{align} &{\cal J}_0=\left[W_{X}\dot{\phi}-2G_{\phi}\dot{\phi}+6H G_{X} X\right]_0. \end{align} If $W$ and $G$ do not depend on $\phi$, the right hand side of Eq.~(\ref{eqn:background-phi-J}) vanishes and hence ${\cal J}_0$ is conserved. In this case, ${\cal J}_0$ is the Noether current associated with the shift symmetry $\phi\to\phi+c$. 
Note that the above quantities may be written in a different way as \begin{align} &\rho_0={\cal J}_0\dot{\phi}_0-W_0+2(X G_{\phi})_0,\\ &\rho_0+P_0={\cal J}_0\dot{\phi}_0-2(G_{X}X\ddot{\phi})_0. \label{eqn:rho0+P0} \end{align} Note also that the scalar-field equation of motion~(\ref{eqn:background-phi-J}) can be written as \begin{align} {\cal{G}}\ddot{\phi}_0+3\Theta {\cal J}_0 +{\cal E}_\phi=0, \label{eqn:background-gphi} \end{align} where \begin{align} &{\cal{G}}(t) := {\cal E}_X-3\Theta (G_{X}\dot{\phi})_0,\\ &\Theta(t) := H -(G_X X \dot{\phi})_0,\\ &{\cal E}_\phi(t) := \left[2 X W_{X\phi} -W_\phi+ 6 H G_{\phi X}X \dot{\phi}- 2X G_{\phi\phi}\right]_0, \label{def:cale-phi}\\ &{\cal E}_X (t) := \left[W_X+2 XW_{XX}+9H G_X \dot{\phi}+6 H G_{XX}X \dot{\phi} -2 G_\phi-2 X G_{\phi X}\right]_0.\label{def:cale-X} \end{align} These functions will also be used later. Since the background FLRW universe must be recovered at zeroth order in gradient expansion, the spatial metric must take the locally homogeneous and isotropic form in the limit $\epsilon\to 0$. This leads to the following assumption: \begin{equation} \partial_t{\gamma}_{ij} = {\cal O}(\epsilon^2). \label{eqn:assumption-gamma} \end{equation} This assumption is justified as follows~\cite{Lyth:2004gb,Tanaka:2006zp,Tanaka:2007gh,Takamizu:2008ra,Takamizu:2010xy,Takamizu:2010je,Naruko:2012fe,Sugiyama:2012tj}. If $\partial_t\gamma_{ij}$ were ${\cal O}(\epsilon)$, the leading term would correspond to homogeneous and anisotropic perturbations, which are known to decay quickly. We may therefore reasonably assume $\partial_t\gamma_{ij}={\cal O}(\epsilon^2)$ rather than $\partial_t\gamma_{ij}={\cal O}(\epsilon)$. However, $\psi$ and ${\gamma}_{ij}$ (without any derivatives acting on them) are of order ${\cal O}(1)$. Using the assumption (\ref{eqn:assumption-gamma}) made above and the basic equations derived in the previous subsection, one can now deduce the order of various terms in gradient expansion.
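The identities~(\ref{eqn:rho0+P0}) and the equivalence between Eqs.~(\ref{eqn:background-phi-J}) and~(\ref{eqn:background-gphi}) (which holds once $\dot H=-(\rho_0+P_0)/2$ is imposed) are tedious to check by hand. The following sympy sketch verifies them for generic polynomial test functions $W(X,\phi)$ and $G(X,\phi)$; this is a spot check under that polynomial ansatz, not a general proof, and all variable names are ours:

```python
import sympy as sp

t = sp.symbols('t')
phi = sp.Function('phi')(t)
H = sp.Function('H')(t)
Xs, ps = sp.symbols('X_s phi_s')             # placeholders for X and phi
c = sp.symbols('c1:9')                       # arbitrary constant coefficients

# generic polynomial test functions W(X, phi) and G(X, phi)
Ws = c[0]*Xs + c[1]*Xs**2 + c[2]*ps*Xs + c[3]*ps**2
Gs = c[4]*Xs + c[5]*Xs**2 + c[6]*ps*Xs + c[7]*ps**2

pd, pdd = phi.diff(t), phi.diff(t, 2)
X = pd**2/2
sub = {Xs: X, ps: phi}
d = lambda f, v: f.diff(v).subs(sub)         # partial derivative, evaluated on-shell

W, W_X, W_phi = Ws.subs(sub), d(Ws, Xs), d(Ws, ps)
W_XX, W_phiX = d(Ws.diff(Xs), Xs), d(Ws.diff(ps), Xs)
G_X, G_phi, G_XX = d(Gs, Xs), d(Gs, ps), d(Gs.diff(Xs), Xs)
G_phiX, G_phiphi = d(Gs.diff(ps), Xs), d(Gs.diff(ps), ps)

J0 = W_X*pd - 2*G_phi*pd + 6*H*G_X*X
rho0 = -W + 2*X*(W_X + 3*H*G_X*pd - G_phi)
P0 = W - 2*X*(G_X*pdd + G_phi)

# identities (eqn:rho0+P0) and the line above it
assert sp.simplify(rho0 - (J0*pd - W + 2*X*G_phi)) == 0
assert sp.simplify(rho0 + P0 - (J0*pd - 2*G_X*X*pdd)) == 0

# Eq. (eqn:background-phi-J) versus Eq. (eqn:background-gphi)
eom1 = J0.diff(t) + 3*H*J0 - (W_phi - 2*X*G_phiphi - 2*X*G_phiX*pdd)
Theta = H - G_X*X*pd
calE_phi = 2*X*W_phiX - W_phi + 6*H*G_phiX*X*pd - 2*X*G_phiphi
calE_X = (W_X + 2*X*W_XX + 9*H*G_X*pd
          + 6*H*G_XX*X*pd                    # note the factor X in the G_XX term
          - 2*G_phi - 2*X*G_phiX)
calG = calE_X - 3*Theta*G_X*pd
eom2 = calG*pdd + 3*Theta*J0 + calE_phi

# the two forms agree once \dot H = -(rho_0 + P_0)/2 is used
diff12 = (eom1 - eom2).subs(sp.Derivative(H, t), -(rho0 + P0)/2)
print(sp.simplify(diff12))                   # -> 0
```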
First, from Eq.~(\ref{eqn:dgamma}) we see that \begin{equation} {A}_{ij}={\cal O}(\epsilon^2). \label{eqn:A-e2} \end{equation} Substituting Eq.~(\ref{eqn:A-e2}) into Eq.~(\ref{eqn:Momentum-const}) under the gauge condition~(\ref{eqn:uniform-Hubble}), we obtain $J_i={\cal O}(\epsilon^3)$. Then, this condition combined with the definition~(\ref{def:J}) implies that $\partial_i\delta\phi={\cal O}(\epsilon^3)$, where $\delta\phi(t,\mathbf{x}) : = \phi(t,\mathbf{x})-\phi_0(t)$. The same equations also imply that ${\partial}_i G={\cal O}(\epsilon^3)$. By absorbing a homogeneous part of $\delta \phi$ into $\phi_0$ (and redefining $a(t)$ accordingly), we have \begin{equation} \delta \phi = {\cal O}(\epsilon^2). \end{equation} It is clear from the condition~(\ref{eqn:A-e2}) and the Hamiltonian constraint~(\ref{eqn:Hamiltonian-const}) that \begin{align} \delta E : = E(t,x^i)-\rho_0(t)={\cal O}(\epsilon^2). \end{align} Since the definition~(\ref{def:EEE}) tells that $E-\rho_0={\rm max}\{{\cal O}(\delta\phi),\, {\cal O}({\partial}_\perp \phi-{\partial}_t\phi_0)\}$, we see $\partial_t(\delta\phi)-\dot{\phi}_0 \delta\alpha ={\cal O}(\epsilon^2)$, leading to \begin{equation} \delta \alpha={\cal O}(\epsilon^2). \end{equation} Then, it follows immediately from Eq.~(\ref{eqn:uniform-H}) that \begin{equation} {\partial}_t \psi = {\cal O}(\epsilon^2). \end{equation} Similarly, for the spatial energy-momentum component we find \begin{equation} \delta P : = P(t,\mathbf{x})-P_0(t) = {\cal O}(\epsilon^2). 
\end{equation} In summary, we have evaluated the order of various quantities as follows: \begin{eqnarray} & & \psi = {\cal O}(1), \quad {\gamma}_{ij} = {\cal O}(1), \nonumber\\ & & \delta \alpha = {\cal O}(\epsilon^2), \quad \delta \phi={\cal O}(\epsilon^2), \quad \delta E={\cal O}(\epsilon^2), \quad \delta P = {\cal O}(\epsilon^2), \quad {A}_{ij} = {\cal O}(\epsilon^2),\nonumber\\ & & \partial_t{\gamma}_{ij}={\cal O}(\epsilon^2), \quad \partial_t\psi={\cal O}(\epsilon^2), \quad \beta^i = {\cal O}(\epsilon^3), \quad {\partial}_i G = {\cal O}(\epsilon^3), \quad [S_{ij}]^{{\rm TF}} = {\cal O}(\epsilon^6), \label{eqn:order-of-magnitude} \end{eqnarray} where the assumptions made have also been included. \subsection{Field equations up to ${\cal O}(\epsilon^2)$ in gradient expansion} Keeping the order of various terms~(\ref{eqn:order-of-magnitude}) in mind, let us derive the governing equations in the uniform expansion gauge. The Hamiltonian and momentum constraints are \begin{eqnarray} R[e^{2\psi}\gamma]& = & 2\delta E + {\cal O}(\epsilon^4), \label{eqn:Hamiltonian}\\ e^{-3\psi}{D}_j\left(e^{3\psi}{A}_{i}^{\ j}\right) & = & -J_i + {\cal O}(\epsilon^5). \label{eqn:Momentum} \end{eqnarray} The evolution equations for the spatial metric are given by \begin{eqnarray} {\partial_t\psi} = H\delta \alpha + {\cal O}(\epsilon^4), \qquad \partial_t{\gamma}_{ij} = 2{A}_{ij}+{\cal O}(\epsilon^4), \label{eqn:evol-gamma} \end{eqnarray} while the evolution equations for the extrinsic curvature are \begin{eqnarray} \partial_t{A}_{ij}& = & -3H{A}_{ij} -\frac{1}{a^2e^{2\psi}}\left[R_{ij}[e^{2\psi} \gamma]\right]^{{\rm TF}} +{\cal O}(\epsilon^4), \label{eqn:-evol-K1}\\ {3\over \alpha} {\partial}_t H & = &-3H^2-{1\over 2}(E+3P) + {\cal O}(\epsilon^4). \label{eqn:evol-K2} \end{eqnarray} Note that with the help of the background equations Eq.~(\ref{eqn:evol-K2}) can be recast into \begin{align} \delta P+{ \delta E\over 3}+(\rho_0+P_0)\delta \alpha={\cal O}(\epsilon^4). 
\label{eqn:evol-K3} \end{align} The components of the energy-momentum tensor are expanded as \begin{align} E&=2XW_X-W+6HG_X{\partial}_\perp\phi-2XG_\phi+{\cal O}(\epsilon^6), \label{eqn:E-grad} \\ P&=W-{\partial}_\perp G{\partial}_\perp \phi +{\cal O}(\epsilon^6), \label{eqn:P-grad}\\ -J_i&={\cal J}_0{\partial}_i (\delta\phi) -(G_X \dot{\phi})_0{\partial}_i (\delta X)+ {\cal O}(\epsilon^5), \end{align} where \begin{align} &X=({\partial}_\perp \phi)^2/2+{\cal O}(\epsilon^6),\\ &\delta X := X-X_0=\dot{\phi}_0 {\partial}_t(\delta \phi) -2 X_0 \delta \alpha +{\cal O}(\epsilon^4). \label{eqn:delta-X} \end{align} Finally, noting that $\Box\phi=-{\partial}_\perp^2\phi-3H{\partial}_\perp \phi+{\cal O}(\epsilon^4)$, the scalar-field equation of motion~(\ref{eqn:EOM-phi}) reduces to \begin{eqnarray} W_X({\partial}_\perp^2\phi+3H{\partial}_\perp\phi)+2XW_{XX}{\partial}_\perp^2\phi+2W_{\phi X} X-W_\phi-2(G_\phi-G_{\phi X}X)({\partial}_\perp^2\phi+3H{\partial}_\perp \phi) \notag\\ +6G_X[{\partial}_\perp (HX)+3H^2 X]-4XG_{\phi X} {\partial}_\perp^2\phi-2G_{\phi\phi} X +6HG_{XX} X{\partial}_\perp X={\cal O}(\epsilon^4). \end{eqnarray} This equation can also be written in a slightly simpler form as \begin{eqnarray} {\partial}_\perp {\cal J}+3H{\cal J} =W_\phi-2 XG_{\phi\phi}-2 X G_{\phi X}{\partial}_\perp^2 \phi +{\cal O}(\epsilon^4), \label{eqn:EOM-phi-grad} \end{eqnarray} where \begin{align} {\cal J}=W_{X}{\partial}_\perp{\phi}-2G_{\phi}{\partial}_\perp\phi+6H G_{X} X. \end{align} It can be seen that Eq.~(\ref{eqn:EOM-phi-grad}) takes {\em exactly the same form} as the background scalar-field equation of motion~(\ref{eqn:background-phi-J}) under the identification ${\partial}_t\leftrightarrow\partial_\perp $. Now Eq.~(\ref{eqn:E-grad}) can be written using ${\cal J}$ as \begin{align} E={\cal J} {\partial}_\perp \phi-W+2 X G_\phi. 
\label{eqn:E-J-grad} \end{align} From Eqs.~(\ref{eqn:EOM-phi-grad}) and (\ref{eqn:E-J-grad}) we find \begin{align} {\partial}_\perp E=-3 H(E+P) +{\cal O}(\epsilon^4). \label{eqn:conservation} \end{align} This equation is nothing but the conservation law, $n_\mu{\nabla}_\nu T^{\mu\nu}=0$. Combining Eq.~(\ref{eqn:evol-K2}) with Eq.~(\ref{eqn:conservation}), we obtain \begin{equation} \partial_t\left[a^2(\delta E)\right] = {\cal O}(\epsilon^4). \label{eqn:eq-for-delta} \end{equation} We can expand Eq.~(\ref{eqn:E-grad}) in terms of $\delta \phi$ and $\delta X$, and thus $\delta E$ can be expressed as \begin{align} \delta E={\cal E}_\phi(t) \delta\phi+{\cal E}_X(t)\delta X+{\cal O}(\epsilon^4). \label{eqn:exp-E} \end{align} This equation relates $\delta\phi$ and $\delta X$ to a solution of the simple equation~(\ref{eqn:eq-for-delta}). With the help of Eq.~(\ref{eqn:delta-X}), Eq.~(\ref{eqn:exp-E}) can be regarded as an equation relating $\delta\phi$ and $\delta \alpha$. Similarly, one can express $\delta P$ as \begin{align} \delta P={1\over a^3}{\partial}_t\left\{a^3 \left[{\cal J}_0 (\delta \phi)-(G_X\dot{\phi})_0 (\delta X) \right]\right\}-(\rho_0+P_0)(\delta \alpha)+{\cal O}(\epsilon^4). \end{align} Using Eq.~(\ref{eqn:evol-K3}), one has \begin{align} {\partial}_t\left\{a^3\left[{\cal J}_0(\delta \phi)-(G_X\dot{\phi})_0 (\delta X)\right]\right\}= -{a^3\over 3} \delta E+{\cal O}(\epsilon^4), \label{eqn:exp-P} \end{align} which can easily be integrated once to give another {\em independent} equation relating $\delta\phi$ and $\delta\alpha$. In the next section, we will give a general solution to the above set of equations. \section{General solution} \label{sec:solution} Having thus derived all the relevant equations up to second order in gradient expansion, let us now present a general solution.
First, since $\psi={\cal O}(1)$ and $\partial_t\psi={\cal O}(\epsilon^2)$, we find \begin{equation} \psi = {}^{(0)}C^\psi(\mathbf{x}) + {\cal O}(\epsilon^2), \label{eqn:psi-leading} \end{equation} where ${}^{(0)}C^\psi(\mathbf{x})$ is an integration constant which is an arbitrary function of the spatial coordinates $\mathbf{x}$. Here and hereafter, the superscript $(n)$ indicates that the quantity is of order $\epsilon^n$. Similarly, it follows from ${\gamma}_{ij}={\cal O}(1)$ and $\partial_t{\gamma}_{ij}={\cal O}(\epsilon^2)$ that \begin{equation} {\gamma}_{ij} = {}{}^{(0)}C^\gamma_{ij}(\mathbf{x}) + {\cal O}(\epsilon^2), \label{eqn:gammatilde-leading} \end{equation} where ${}^{(0)}C^\gamma_{ij}(\mathbf{x})$ is a $3\times 3$ matrix with a unit determinant whose components depend only on the spatial coordinates. The evolution equations~(\ref{eqn:evol-gamma}) can then be integrated to determine the ${\cal O}(\epsilon^2)$ terms in $\psi$ and $\gamma_{ij}$ as \begin{align} &\psi={}^{(0)}C^\psi(\mathbf{x})+\int_{t_*}^t H(t') \delta \alpha(t', \mathbf{x}) dt' + {\cal O}(\epsilon^4), \label{eqn:psi-alpha}\\ &\gamma_{ij}={}^{(0)}C^\gamma_{ij}(\mathbf{x})+2 \int_{t_*}^t A_{ij}(t', \mathbf{x})dt' + {\cal O}(\epsilon^4), \label{eqn:gamma-A} \end{align} where $t_*$ is some initial time and integration constants of ${\cal O}(\epsilon^2)$ have been absorbed to ${}^{(0)}C^\psi(\mathbf{x})$ and ${}^{(0)}C_{ij}^\gamma(\mathbf{x})$. 
Now Eq.~(\ref{eqn:-evol-K1}) can be integrated to give \begin{eqnarray} A_{ij}={1\over a^3(t)} \left[{}^{(2)}F_{ij}(\mathbf{x}) \int_{t_*}^t a(t') dt' +{}^{(2)}C^A_{ij}(\mathbf{x})\right]+{\cal O}(\epsilon^4), \label{eqn: solution-Aij} \end{eqnarray} where \begin{eqnarray} {}^{(2)}F_{ij}(\mathbf{x}) &:=& -\frac{1}{e^{2\psi}}\left[R_{ij}[e^{2\psi} \gamma]\right]^{{\rm TF}} \nonumber\\ & = &-\frac{1}{e^{2{}^{(0)}C^\psi}} \left[ \left({}^{(2)}{R}_{ij}-\frac{1}{3}{}^{(2)}{R}{}^{(0)} C^\gamma_{ij}\right) + \left(\partial_i{}^{(0)}C^\psi \partial_j{}^{(0)}C^\psi - {}^{(0)}{D}_i {}^{(0)}{D}_j{}^{(0)}C^\psi \right)\right.\nonumber\\ & & \qquad \left. - \frac{1}{3}{}^{(0)}C^{\gamma \,kl} \left(\partial_k {}^{(0)}C^\psi \partial_l{}^{(0)}C^\psi - {}^{(0)}{D}_k {}^{(0)}{D}_l{}^{(0)}C^\psi \right) {}^{(0)}C^\gamma_{ij} \right]. \label{eqn:def-F2ij} \end{eqnarray} Here, $^{(0)}C^{\gamma\, kl}$ is the inverse matrix of ${}^{(0)}C^\gamma_{ij}$, ${}^{(2)}{R}_{ij}(\mathbf{x}):=R_{ij}[{}^{(0)}C^\gamma]$ and ${}^{(2)}{R}(\mathbf{x}):=R[{}^{(0)}C^\gamma]$ are the Ricci tensor and the Ricci scalar constructed from the zeroth-order spatial metric $^{(0)}C_{ij}^\gamma(\mathbf{x})$, and ${}^{(0)}{D}$ is the covariant derivative associated with ${}^{(0)}C^\gamma_{ij}$. Note that $^{(0)}C^{\gamma\, ij}\,{}^{(2)}F_{ij}=0$ by definition. The integration constant, ${}^{(2)}C^A_{ij}(\mathbf{x})$, is a symmetric matrix whose components depend only on the spatial coordinates and which satisfies the traceless condition $^{(0)}C^{\gamma\, ij}\,{}^{(2)}C^A_{ij}=0$. Substituting the above result to Eq.~(\ref{eqn:gamma-A}), we arrive at \begin{equation} {\gamma}_{ij} = {}^{(0)}C^\gamma_{ij}(\mathbf{x}) + 2\left[ {}^{(2)}F_{ij}(\mathbf{x})\int^t_{t_*}\frac{dt'}{a^3(t')}\int^{t'}_{t_*}a(t'')dt'' + {}^{(2)}C^A_{ij}(\mathbf{x})\int^t_{t_*}\frac{dt'}{a^3(t')}\right] + {\cal O}(\epsilon^4). 
\label{eqn:gammatilde-sol} \end{equation} Next, it is straightforward to integrate Eq.~(\ref{eqn:eq-for-delta}) to obtain \begin{equation} \delta E = \frac{1}{a^2(t)}{}^{(2)}{\cal K}(\mathbf{x}) + {\cal O}(\epsilon^4), \label{eqn: solution-delta} \end{equation} where ${}^{(2)}{\cal K}(\mathbf{x})$ is an arbitrary function of the spatial coordinates. With this solution for $\delta E$, Eqs.~(\ref{eqn:exp-E}) and~(\ref{eqn:exp-P}) reduce to \begin{align} &{\cal E}_\phi(t) \delta\phi+{\cal E}_X(t)\delta X= \frac{1}{a^2(t)}{}^{(2)}{\cal K}(\mathbf{x}) +{\cal O}(\epsilon^4), \label{eqn:exp-E2}\\ &{\partial}_t\left\{a^3\left[{\cal J}_0(\delta \phi)-(G_X\dot{\phi})_0 (\delta X)\right]\right\}= -{a(t) \over 3} {}^{(2)}{\cal K}(\mathbf{x})+{\cal O}(\epsilon^4), \label{eqn:exp-P1} \end{align} and the latter equation can further be integrated to give \begin{align} {\cal J}_0(\delta \phi)-(G_X\dot{\phi})_0 (\delta X)= {{}^{(2)}C^\chi(\mathbf{x}) \over a^3(t)}-{{}^{(2)}{\cal K}(\mathbf{x}) \over 3 a^3(t)} \int_{t_*}^t a(t') dt'+{\cal O}(\epsilon^4), \label{eqn:exp-P2} \end{align} where we have introduced another integration constant $^{(2)}C^\chi (\mathbf{x})$. With the help of Eq.~(\ref{eqn:delta-X}), one can solve the system of equations~(\ref{eqn:exp-E2}) and~(\ref{eqn:exp-P2}), leading to \begin{align} \delta \phi={1\over \cal A}\left\{\left[(G_X\dot{\phi})_0 -{{\cal E}_X(t) \over 3 a(t)}\int_{t_*}^t a(t') dt' \right] \frac{{}^{(2)}{\cal K}}{a^2(t)}+{{\cal E}_X(t)\over a^3(t)} {}^{(2)}C^\chi\right\}+{\cal O}(\epsilon^4), \end{align} and \begin{align} \delta\alpha={\partial}_t\left({\delta \phi\over \dot{\phi}_0}\right) -{1\over 2 X_0 {\cal G} (t)} \left\{ \left[ 1-{\Theta(t)\over a(t)}\int_{t_*}^t a(t') dt' \right] \frac{{}^{(2)}{\cal K}}{a^2(t)} +{3\Theta(t)\over a^3(t)} {}^{(2)}C^\chi\right\}+{\cal O}(\epsilon^4), \label{solalpha} \end{align} where ${{\cal A}}:=({\cal E}_\phi G_X \dot{\phi}+{\cal E}_X {\cal J})|_{0}$.
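The time dependence of the decaying modes found above can be checked directly. Treating ${}^{(2)}F_{ij}$ and ${}^{(2)}C^A_{ij}$ at a fixed spatial point as constants $F$ and $C$, the following sympy sketch (our notation) verifies that the solution~(\ref{eqn: solution-Aij}) solves the ${\cal O}(\epsilon^2)$ evolution equation~(\ref{eqn:-evol-K1}), whose source term is $F/a^2$ by the definition~(\ref{eqn:def-F2ij}):

```python
import sympy as sp

t, tp, tstar, F, C = sp.symbols('t t_p t_star F C')
a = sp.Function('a', positive=True)(t)
H = a.diff(t)/a

# Eq. (eqn: solution-Aij) at a fixed spatial point:
# A = [F * int_{t_star}^t a(t') dt' + C] / a^3
I = sp.Integral(a.subs(t, tp), (tp, tstar, t))
A = (F*I + C)/a**3

# it must solve dA/dt = -3 H A + F/a^2  (Eq. (eqn:-evol-K1) at this order)
print(sp.simplify(A.diff(t) + 3*H*A - F/a**2))   # -> 0
```

The same structure verifies the double-integral terms in Eq.~(\ref{eqn:gammatilde-sol}).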
In deriving the above solution we have used the background scalar-field equation of motion~(\ref{eqn:background-gphi}). Finally, substituting Eq.~(\ref{solalpha}) to Eq.~(\ref{eqn:psi-alpha}), we obtain \begin{align} \psi&= {}^{(0)}C^\psi +\int_{t_*}^t dt' H(t') {\partial}_{t'} \left( {\delta \phi\over \dot{\phi}_0}\right) -\int_{t_*}^t dt' {H(t')\over 2 X_0 {\cal G} (t')} \left\{\left[1-{\Theta(t')\over a(t')} \int_{t_*}^{t'} a(t'') dt'' \right]\frac{{}^{(2)}{\cal K}}{a^2(t')} +{3\Theta(t')\over a^3(t')} {}^{(2)}C^\chi\right\}+{\cal O}(\epsilon^4) \notag\\&= {}^{(0)}C^\psi(\mathbf{x})+ {H \delta \phi\over \dot{\phi}_0}+ \int_{t_*}^t dt' {(\rho_0+P_0) \delta\phi \over 2 \dot{\phi}_0} \notag\\ &\qquad -\int_{t_*}^t dt'{H (t')\over 2 X_0 {\cal G} a^2\, (t') } \left\{ \left[1-{\Theta(t')\over a(t')}\int_{t_*}^{t'} a(t'') dt'' \right] {}^{(2)}{\cal K}+{3\Theta(t')\over a(t')} {}^{(2)}C^\chi\right\} +{\cal O}(\epsilon^4), \end{align} where we performed integration by parts and used the background equation. So far we have introduced five integration constants, ${}^{(0)}C^\psi(\mathbf{x})$, ${}^{(0)}C^\gamma_{ij}(\mathbf{x})$, ${}^{(2)}C^A_{ij}(\mathbf{x})$, ${}^{(2)}{\cal K}(\mathbf{x})$, and ${}^{(2)}C^\chi(\mathbf{x})$, upon solving the field equations up to ${\cal O}(\epsilon^2)$. Here, it should be pointed out that they are not independent. 
Indeed, Eqs.~(\ref{eqn:Hamiltonian}) and~(\ref{eqn:Momentum}) impose the following constraints among the integration constants: \begin{eqnarray} {}^{(2)}{\cal K}(\mathbf{x}) & =& \frac{{}^{(2)}\hat R(\mathbf{x})}{2} + {\cal O}(\epsilon^4), \nonumber\\ e^{-3 {}^{(0)} C^\psi} {}^{(0)}C^{\gamma\, j k} {}^{(0)}D_j \left[e^{3 {}^{(0)}C^\psi} \,{}^{(2)}C^A_{ki}(\mathbf{x})\right] & =& {\partial}_i\, {}^{(2)}C^\chi(\mathbf{x}) + {\cal O}(\epsilon^5), \notag\\ e^{-3 {}^{(0)} C^\psi} {}^{(0)}C^{\gamma\, j k} {}^{(0)}D_j\left[ e^{3 {}^{(0)}C^\psi} \,{}^{(2)}F_{ki}(\mathbf{x})\right] & =& -{1\over 6} {\partial}_i\, {}^{(2)}\hat R(\mathbf{x}) + {\cal O}(\epsilon^5), \label{eqn: constraint-integration-const} \end{eqnarray} where $^{(2)}\hat R(\mathbf{x}):=R[e^{2 {}^{(0)}C^\psi}\, {}^{(0)}C^\gamma]$ is the Ricci scalar constructed from the metric $e^{2{}^{(0)}C^\psi} \,{}^{(0)}C^\gamma_{ij}$. Here, $^{(2)}\hat R(\mathbf{x})$ should not be confused with $^{(2)}R(\mathbf{x})$. The latter is the Ricci scalar constructed from $^{(0)}C_{ij}^\gamma$ and not from $e^{2{}^{(0)}C^\psi} \,{}^{(0)}C^\gamma_{ij}$. Explicitly, \begin{align} {}^{(2)}\hat R(\mathbf{x})= \left[{}^{(2)}R(\mathbf{x}) -2 \left(2 {}^{(0)}D^2 {}^{(0)}C^\psi +{}^{(0)}C^{\gamma\, ij} \partial_i {}^{(0)}C^\psi \partial_j {}^{(0)}C^\psi \right)\right]e^{-2 {}^{(0)}C^\psi}. \label{def:K22} \end{align} Note that the third equation is not an independent condition: it is automatically satisfied, as can be verified by using the definition~(\ref{eqn:def-F2ij}).
In summary, we have integrated the field equations up to second order in gradient expansion and obtained the following solution for generic single-field inflation: \begin{eqnarray} \delta E & = & \frac{{}^{(2)} \hat R(\mathbf{x})}{2a^2} + {\cal O}(\epsilon^4), \nonumber\\ \delta \phi &=&{1\over {\cal A} a^2}\left[\left((G_X\dot{\phi})_0 -{{\cal E}_X \over 3 a}\int_{t_*}^t a(t') dt' \right){{}^{(2)}\hat R(\mathbf{x})\over 2} +{{\cal E}_X\over a} {}^{(2)}C^\chi(\mathbf{x})\right]+{\cal O}(\epsilon^4), \nonumber\\ \delta \alpha &= & {\partial}_t\left({\delta \phi\over \dot{\phi}_0}\right) -{1\over 2 X_0 {\cal G} a^2} \left[ \left(1-{\Theta\over a}\int_{t_*}^t a(t') dt' \right) {{}^{(2)}\hat R(\mathbf{x})\over 2} +{3\Theta\over a} {}^{(2)}C^\chi(\mathbf{x})\right]+{\cal O}(\epsilon^4), \nonumber\\ \psi &= & {}^{(0)}C^\psi(\mathbf{x}) + {H \delta \phi\over \dot{\phi}_0}+ \int_{t_*}^t dt' {(\rho_0+P_0) \delta\phi \over 2 \dot{\phi}_0}\notag\\ &&-\int_{t_*}^t dt'{H \over 2 X_0 {\cal G} a^2 } \left[ \left(1-{\Theta \over a}\int_{t_*}^{t'} a(t'') dt'' \right) {{}^{(2)}\hat R(\mathbf{x})\over 2}+{3\Theta\over a} {}^{(2)}C^\chi(\mathbf{x})\right] +{\cal O}(\epsilon^4), \nonumber\\ {A}_{ij} &= & \frac{1}{a^3} \left[{}^{(2)}F_{ij}(\mathbf{x})\int_{t_*}^t a(t')dt' + {}^{(2)}C^A_{ij}(\mathbf{x})\right] + {\cal O}(\epsilon^4), \notag\\ {\gamma}_{ij} & = & {}^{(0)}C^\gamma_{ij}(\mathbf{x})+2\left[ {}^{(2)}F_{ij}(\mathbf{x}) \int^t_{t_*}\frac{dt'}{a^3(t')}\int^{t'}_{t_*}a(t'')dt'' + {}^{(2)}C^A_{ij}(\mathbf{x}) \int^t_{t_*}\frac{dt'}{a^3(t')}\right] + {\cal O}(\epsilon^4).
\label{eqn:general solution} \end{eqnarray} The $\mathbf{x}$-dependent integration constants, ${}^{(0)}C^\psi$, ${}^{(0)}C^\gamma_{ij}$, ${}^{(2)}C^\chi$ and ${}^{(2)}C^A_{ij}$, satisfy the following conditions: \begin{eqnarray} &&{}^{(0)}C^\gamma_{ij}= {}^{(0)}C^\gamma_{ji}, \quad \det({}^{(0)}C^\gamma_{ij}) = 1,\quad {}^{(2)}C^A_{ij} = {}^{(2)}C^A_{ji}, \quad {}^{(0)}C^{\gamma\,ij} \, {}^{(2)}C^A_{ij} = 0, \nonumber\\&& e^{-3 {}^{(0)} C^\psi} {}^{(0)}C^{\gamma\, j k} {}^{(0)}D_j\left(e^{3 {}^{(0)}C^\psi} \,{}^{(2)}C^A_{ki}\right) = {\partial}_i\, {}^{(2)}C^\chi. \end{eqnarray} Before closing this section, we remark that the gauge condition~(\ref{time-orthogonal-cond}) remains unchanged under a purely spatial coordinate transformation \begin{equation} x^i \to \bar x^i = f^i(\mathbf{x}). \end{equation} This means that the zeroth-order spatial metric ${}^{(0)}C^\gamma_{ij}$ contains three residual gauge degrees of freedom. Therefore, the number of degrees of freedom associated with each integration constant is summarized as follows: \begin{eqnarray} {}^{(0)}C^\psi & \cdots & 1 \mbox{ scalar growing mode} = 1 \mbox{ component}, \nonumber\\ {}^{(0)}C^\gamma_{ij} & \cdots & 2 \mbox{ tensor growing modes} = 5 \mbox{ components} - 3 \mbox{ gauge}, \nonumber\\ {}^{(2)}C^\chi & \cdots & 1 \mbox{ scalar decaying mode} = 1 \mbox{ component}, \nonumber\\ {}^{(2)}C^A_{ij} & \cdots & 2 \mbox{ tensor decaying modes} = 5 \mbox{ components} - 3 \mbox{ constraints}. \end{eqnarray} \section{Nonlinear curvature perturbation} \label{sec:nlcurvpert} In this section, we will define a new variable which is a nonlinear generalization of the curvature perturbation up to ${\cal O}(\epsilon^2)$ in gradient expansion. We will show that this variable satisfies a nonlinear second-order differential equation, and, as in Ref.~\cite{Takamizu:2010xy}, the equation can be deduced as a generalization of the corresponding linear perturbation equation.
To do so, one should notice the following fact about the definition of the {\em curvature perturbation}: in linear theory the curvature perturbation is named so because it is directly related to the three-dimensional Ricci scalar; $\psi$ may be called so at fully nonlinear order in perturbations and at leading order in gradient expansion; and, as pointed out in Ref.~\cite{Takamizu:2010xy}, $\psi$ is no longer appropriate to be called so at second order in gradient expansion. To define the curvature perturbation appropriately at ${\cal O}(\epsilon^2)$, one needs to take into account the contribution from $\gamma_{ij}$. Let us denote this contribution as $\chi$. We carefully define the curvature perturbation as a sum of $\psi$ and $\chi$ so that the new variable reproduces the correct result in the linear limit. In what follows we remove the subscript $0$ from the background quantities since there will be no danger of confusion. \subsection{Assumptions and definitions} \label{subsec:assumption} As mentioned in the previous section, we still have residual spatial gauge degrees of freedom, which we are going to fix appropriately. To do so, we assume that the contribution from gravitational waves to $\gamma_{ij}$ is negligible and consider the contribution from scalar-type perturbations only. We may then choose the spatial coordinates so that $\gamma_{ij}$ coincides with the flat metric at sufficiently late times during inflation, \begin{eqnarray} {\gamma}_{ij}\to\delta_{ij}\quad (t\to\infty). \label{eq: gamma-ij t-infty} \end{eqnarray} In reality, the limit $t\to\infty$ may be reasonably interpreted as $t\to t_{\rm late}$, where $t_{\rm late}$ is some time close to the end of inflation. Up to ${\cal O}(\epsilon^2)$, this condition completely removes the three residual gauge degrees of freedom.
We wish to define appropriately a nonlinear curvature perturbation on uniform $\phi$ hypersurfaces ($\delta \phi(t,\mathbf{x})=0$) and derive a nonlinear evolution equation for the perturbation. The nonlinear result on uniform $\phi$ hypersurfaces is to be compared with the linear result for G-inflation~\cite{Kobayashi:2010cm}. However, in the previous section the general solution was derived in the uniform expansion gauge. For our purpose we will therefore go from the uniform expansion gauge to the uniform $\phi$ gauge\footnote{The gauge in which $\phi$ is uniform is sometimes called the unitary gauge. The unitary gauge does not coincide with the comoving gauge in G-inflation, as emphasized in~\cite{Kobayashi:2010cm}.}. It is clear that at leading order in gradient expansion the uniform expansion gauge coincides with the uniform $\phi$ gauge. In this case, it would be appropriate simply to define $\psi$ to be the nonlinear curvature perturbation. At second order in gradient expansion, however, this is not the correct way of defining the nonlinear curvature perturbation. We must extract the appropriate scalar part $\chi$ from $\gamma_{ij}$, which will yield an extra contribution to the total curvature perturbation, giving the correct definition of the nonlinear curvature perturbation at ${\cal O}(\epsilon^2)$. Let us use the subscripts $K$ and $u$ to indicate the quantity in the uniform $K$ and $\phi$ gauges, respectively, so that in what follows the subscript $K$ is attached to the solution derived in the previous section. First, we derive the relation between $\psi_K$ and $\psi_u$ up to ${\cal O}(\epsilon^2)$. In general, one must consider a nonlinear transformation between different time slices. A detailed description of this issue can be found in Ref.~\cite{Naruko:2012fe}.
However, thanks to the fact that $\delta \phi_K={\cal O}(\epsilon^2)$, one can go from the uniform $K$ gauge to the uniform $\phi$ gauge by a transformation analogous to the familiar linear gauge transformation. Thus, $\psi_u$ is obtained as \begin{eqnarray} {\psi}_u={\psi}_K-\frac{H}{\dot{\phi}}\delta \phi_K+{\cal O}(\epsilon^3). \label{def: comoving nonlinear curvature perturb} \end{eqnarray} One might think that the shift vector $\beta^i_u$ appears in this new variable as a result of the gauge transformation, but $\beta^i$ can always be gauged away by using a spatial coordinate transformation. The general solution for ${\psi}_u$ valid up to ${\cal O}(\epsilon^2)$ is thus given by the linear combination of the solution for $\psi_K$ and $\delta \phi_K$ displayed in Eq.~(\ref{eqn:general solution}). Note here that the spatial metric $\gamma_{ij}$ remains the same at ${\cal O}(\epsilon^2)$ accuracy under the change from the uniform $K$ gauge to the uniform $\phi$ gauge: \begin{eqnarray} \gamma_{ij\, K}=\gamma_{ij\,u}+{\cal O}(\epsilon^4)\,. \end{eqnarray} We now turn to the issue of appropriately defining a nonlinear curvature perturbation to ${\cal O}(\epsilon^2)$ accuracy. Let us denote the linear curvature perturbation in the uniform $\phi$ gauge by ${\cal R}^{\rm Lin}_u$. In the linear limit, $\psi$ reduces to the longitudinal component $H^{\rm Lin}_L$ of scalar perturbations, while $\chi$ reduces to the traceless component $H^{\rm Lin}_T$: \begin{align} \psi\to H^{\rm Lin}_L, \quad \chi\to H^{\rm Lin}_T. \end{align} The linear curvature perturbation is given by ${\cal R}^{\rm Lin}=(H^{\rm Lin}_L+H^{\rm Lin}_T/3)Y$. Here, we have followed Ref.~\cite{Kodama:1985bj} and the perturbations are expanded in scalar harmonics $Y$ satisfying $\left(\partial_i\partial^i+k^2\right)Y_{\mathbf{k}}=0$, with the summation over $\mathbf{k}$ suppressed for simplicity.
The spatial metric in the linear limit is expressed as \begin{eqnarray} \hat\gamma_{ij}^{{\rm Lin}}=a^2\left(\delta_{ij} +2 H^{\rm Lin}_L Y \delta_{ij}+2 H^{\rm Lin}_T Y_{ij}\right)\,, \end{eqnarray} where $Y_{ij}=k^{-2} \left[\partial_i \partial_j -(1/ 3) \delta_{ij} \partial_l\partial^l \right] Y_{\mathbf{k}}$. Since $\psi$ corresponds to $H^{\rm Lin}_L$, one can read off from the above expression that ${\gamma}_{ij} = \delta_{ij} + 2H^{\rm Lin}_T Y_{ij}$ in the linear limit. Thus, our task is to extract from $\gamma_{ij}$ the scalar component $\chi$ that reduces to $H^{\rm Lin}_T$ in the linear limit. It was shown in Ref.~\cite{Takamizu:2010xy} that by using the inverse Laplacian operator on the flat background, $\Delta^{-1}$, one can naturally define $\chi$ as \begin{eqnarray} \chi := -\frac{3}{4}\Delta^{-1}\left[\partial^i e^{-3\psi} \partial^j e^{3\psi}({\gamma}_{ij}-\delta_{ij}) \right]. \label{def: nonlinear HT0} \end{eqnarray} In terms of $\chi$ defined above, the nonlinear curvature perturbation is defined, to ${\cal O}(\epsilon^2)$, as \begin{eqnarray} {{\frak R}}^{\rm NL}:= {\psi}\,+\,{\chi\over 3}\,. \label{def0: nonlinear variable zeta} \end{eqnarray} As is clear from Eq.~(\ref{def: nonlinear HT0}), extracting $\chi$ generally requires a spatially nonlocal operation. However, as we will see in the next subsection, in the uniform $\phi$ gauge supplemented with the asymptotic condition on the spatial coordinates~(\ref{eq: gamma-ij t-infty}), it is possible to obtain the explicit expression for the nonlinear version of $\chi$ from our solution~(\ref{eqn:general solution}) without any nonlocal operation. \subsection{Solution} \label{subsec:explicit} We start by presenting an explicit expression for $\psi_u$.
It follows from Eqs.~(\ref{eqn:general solution}) and~(\ref{def: comoving nonlinear curvature perturb}) that \begin{eqnarray} {\psi}_u={}^{(0)}C^\psi(\mathbf{x})+{}^{(2)}C^\psi(\mathbf{x}) +{f}_R(t)\,{}^{(2)}\hat R(\mathbf{x})+ {f}_\chi (t)\, {}^{(2)}C^\chi(\mathbf{x}) +{\cal O}(\epsilon^4). \label{sol: general sol R} \end{eqnarray} Note here that although the integration constant ${}^{(2)}C^\psi(\mathbf{x})$ was absorbed into the redefinition of ${}^{(0)}C^\psi(\mathbf{x})$ in the previous section, we do not do so in this section for later convenience. The time-dependent functions $f_R(t)$ and $f_\chi(t)$ are defined as \begin{eqnarray} {f}_R(t)&:=& \int_{t_*}^t {dt' \over 2 a^2} \left\{{(\rho+P)\over 2 \dot{\phi} {\cal A} } \left[ G_X\dot{\phi}-{{\cal E}_X\over 3 a} \int_{t_*}^{t'} a(t'') dt'' \right]-{H\over 2 X {\cal G} } \left( 1-{\Theta \over a}\int_{t_*}^{t'} a(t'') dt'' \right)\right\}, \label{eq: f-K t} \\ {f}_\chi(t) &:=&\int_{t_*}^t {dt' \over a^3} \left[{(\rho+P) {\cal E}_X \over 2 \dot{\phi} {\cal A} } -{3 H \Theta \over 2 X{\cal G} }\right]. \label{eq: f-C t} \end{eqnarray} Since $\gamma_{ij\,u}$ coincides with $\gamma_{ij\, K}$ up to ${\cal O}(\epsilon^2)$, it is straightforward to see that \begin{eqnarray} {\gamma}_{ij\,u}={}^{(0)}C^\gamma_{ij}(\mathbf{x})+{}^{(2)}C^\gamma_{ij}(\mathbf{x})+ 2g_F(t){}^{(2)}F_{ij}(\mathbf{x}) +2 g_A(t) {}^{(2)}C^A_{ij}(\mathbf{x})+{\cal O}(\epsilon^4), \label{eq: til-gamma-ij} \end{eqnarray} where \begin{eqnarray} g_F(t):=\int_{t_*}^{t} {dt'\over a^3(t')}\int_{t_*}^{t'} a(t'') dt''\,, \quad g_A(t):=\int_{t_*}^t {dt'\over a^3(t')}\,. \label{def: integral-A-B} \end{eqnarray} The integration constants ${}^{(0)}C^\gamma_{ij}$ and ${}^{(2)}C^\gamma_{ij}$ are determined from the condition~(\ref{eq: gamma-ij t-infty}) as \begin{eqnarray} && {}^{(0)}C^\gamma_{ij} = \delta_{ij}\,,\nonumber\\ && {}^{(2)}C^\gamma_{ij} = -2 g_F(\infty){}^{(2)}F_{ij}-2g_A(\infty) {}^{(2)}C^A_{ij}.
\label{def: f2-ij} \end{eqnarray} We now have ${}^{(0)}C_{ij}^\gamma =\delta_{ij}$, and hence ${}^{(2)}R_{ij}(\mathbf{x})=R_{ij}[{}^{(0)}C^\gamma]=0$. This simplifies the explicit expression for ${}^{(2)}\hat R(\mathbf{x})$ and ${}^{(2)}F_{ij}(\mathbf{x})$; they are given solely in terms of ${}^{(0)}C^\psi$ and the usual derivative operator $\partial_i$. Substituting Eq.~(\ref{eq: til-gamma-ij}) into the definition~(\ref{def: nonlinear HT0}), we obtain \begin{eqnarray} {\chi_u\over 3}= {{}^{(2)}\hat R(\mathbf{x})\over 12}\left[g_F(t)-g_F(\infty)\right] -{{}^{(2)}C^\chi(\mathbf{x})\over 2}\left[g_A(t)-g_A(\infty)\right] +{\cal O}(\epsilon^4). \label{eq: HT} \end{eqnarray} It is easy to verify that the linear limit of $\chi_u$ reduces consistently to $H_T^{\rm Lin}Y$. We then finally arrive at the following explicit solution for the appropriately defined nonlinear curvature perturbation in the uniform $\phi$ gauge: \begin{eqnarray} {{\frak R}}_u^{\rm NL} = {}^{(0)}C^\psi(\mathbf{x})+{}^{(2)}C^\psi(\mathbf{x}) +{}^{(2)}\hat R(\mathbf{x})\left[ f_R(t)+ \frac{g_F(t)}{12}-\frac{g_F(\infty)}{12} \right] +{}^{(2)}C^\chi(\mathbf{x})\left[ f_\chi(t) -\frac{g_A(t)}{2}+\frac{g_A(\infty)}{2}\right]. \label{def: nonlinear variable zeta} \end{eqnarray} Let us comment on the dependence of ${\frak R}_u^{\rm NL}$ on the initial fiducial time $t_*$. One may take $t_*$ as the time when our nonlinear superhorizon solution is matched to the perturbative solution whose initial condition is fixed deep inside the horizon. Then, ${\frak R}_u^{\rm NL}$ should not depend on the choice of $t_*$, though apparent dependences are found in the lower bounds of the integrals ${f}_R(t)$, ${f}_\chi(t)$, $g_F(t)$, and $g_A(t)$. Actually, in the same way as discussed in Ref.~\cite{Takamizu:2010xy}, one can check that ${\frak R}_u^{\rm NL}$ is invariant under the infinitesimal shift $t_*\to t_*+\delta t_*$.
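As a concrete illustration of the nested time integrals $g_F(t)$ and $g_A(t)$ of Eq.~(\ref{def: integral-A-B}), the sketch below evaluates them by cumulative trapezoidal integration for a hypothetical de Sitter background $a(t)=e^{Ht}$ with $t_*=0$ (this choice of background is ours, made only so that closed forms are available for comparison).

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t, starting from zero."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

# Hypothetical de Sitter background a(t) = exp(H t), with t_* = 0.
H = 1.0
t = np.linspace(0.0, 2.0, 20001)
a = np.exp(H * t)

inner = cumtrapz(a, t)             # int_{t_*}^{t'} a(t'') dt''
g_A = cumtrapz(1.0 / a**3, t)
g_F = cumtrapz(inner / a**3, t)

# Closed forms for this particular background.
g_A_exact = (1.0 - np.exp(-3.0 * H * t)) / (3.0 * H)
g_F_exact = (1.0 - np.exp(-2.0 * H * t)) / (2.0 * H**2) \
    - (1.0 - np.exp(-3.0 * H * t)) / (3.0 * H**2)
err_A = np.max(np.abs(g_A - g_A_exact))
err_F = np.max(np.abs(g_F - g_F_exact))
```

Both $g_A(t)$ and $g_F(t)$ saturate at late times, which is what makes the subtractions $g_{A,F}(t)-g_{A,F}(\infty)$ in the solution above well defined.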
\subsection{Second-order differential equation} \label{subsec:nleq} Having obtained explicitly the solution ${\frak R}_u^{\rm NL}$ in Eq.~(\ref{def: nonlinear variable zeta}), now we are going to deduce the second-order differential equation that ${\frak R}_u^{\rm NL}$ obeys at ${\cal O}(\epsilon^2)$ accuracy. For this purpose, we rewrite $f_R(t)$ and $f_\chi(t)$ in terms of \begin{eqnarray} z:={a\dot{\phi}\sqrt{{\cal G}} \over \Theta}\,. \label{def: variable-z} \end{eqnarray} This is a generalization of the familiar ``$z$'' in the Mukhanov-Sasaki equation~\cite{Mukhanov:1990me}, and indeed reduces to $a\sqrt{(\rho+P)}/H c_s$ in the case of k-inflation. With some manipulation, it is found that $f_R(t)$ and $f_\chi(t)$ can be rewritten as \begin{eqnarray} {f}_R(\eta)&=&{1\over 2}\int_{\eta_*}^{\eta} {a(\eta') d\eta'\over z^2 \Theta\,(\eta')}+{1\over 2}\int_{\eta_*}^{\eta} {d\eta'\over z^2(\eta')}\int_{\eta_*}^{\eta'} a(\eta'') d\eta'' -{1\over 12} \int_{\eta_*}^{\eta} {d\eta'\over a^2(\eta')} \int_{\eta_*}^{\eta'} a^2 (\eta'') d\eta'',\nonumber\\ {f}_\chi(\eta)&=&{1\over 2}\int_{\eta_*}^\eta {d\eta'\over a^2(\eta')} -3 \int_{\eta_*}^{\eta}{d\eta'\over z^2(\eta')}\,, \end{eqnarray} where the conformal time defined by $d\eta=dt/a(t)$ was used instead of $t$, and $\eta_*$ corresponds to the fiducial initial time. Further, it is convenient to express them in the form \begin{eqnarray} {f}_R(\eta)= F(\eta_*)-F(\eta)-{1\over 12} g_F(\eta) \,,\qquad {f}_\chi(\eta)={1\over 2} g_A(\eta) +D(\eta_*)-D(\eta), \end{eqnarray} where we defined \begin{eqnarray} D(\eta)=3 \int_{\eta}^{0}{ d\eta' \over z^2(\eta')}\,, \quad F(\eta)={1 \over 2 }\int_{\eta}^{0} {d\eta'\over z^2(\eta')} \int_{\eta_*}^{\eta'} a^2 (\eta'') d\eta''- {1 \over 2 } \int_{\eta}^{0} {a(\eta')d\eta'\over z^2\Theta (\eta')}\,. \label{def: integral-D-F} \end{eqnarray} The functions $D(\eta)$ and $F(\eta)$ are defined so that $D, F\to 0$ as $\eta\to 0$.
It is important to notice that $D(\eta)$ is the decaying mode in the long-wavelength limit, {\em i.e.}, at leading order in gradient expansion, in the linear theory, satisfying \begin{eqnarray} D''+2\frac{z'}{z}D'=0\,, \end{eqnarray} while $F(\eta)$ is the ${\cal O}(k^2)$ correction to the growing (constant) mode satisfying \begin{eqnarray} F''+2\frac{z'}{z}F'+c_s^2=0\,, \label{eq:DFmeaning} \end{eqnarray} where we assume that the growing mode solution is of the form $1+k^2F(\eta)+{\cal O}(k^4)$. In the above equations the prime stands for differentiation with respect to the conformal time and $c_s^2$ is the sound speed squared of the scalar fluctuations defined as \begin{align} c^2_s:= {{\cal F}(t)\over {\cal G}(t)},\quad {\cal F}(t):={1\over X_0} (-{\partial}_t \Theta+\Theta G_X X \dot{\phi})_0. \end{align} Using $D$ and $F$, Eq.~(\ref{def: nonlinear variable zeta}) can be written as \begin{eqnarray} {{\frak R}}_u^{\rm NL}(\eta) = {}^{(0)}C^\psi(\mathbf{x}) +{}^{(2)}C^{{\frak R}}(\mathbf{x}) -{}^{(2)}\hat R(\mathbf{x})F(\eta) -{}^{(2)}C^{\chi}(\mathbf{x})D(\eta) +{\cal O}(\epsilon^4), \label{eq: sol-tild-zeta} \end{eqnarray} where time-independent terms of ${\cal O}(\epsilon^2)$ are collectively absorbed into ${}^{(2)}C^{{\frak R}}(\mathbf{x})$. It turns out that the solution can be expressed simply in terms of the two time-dependent functions corresponding to the decaying mode and the ${\cal O}(k^2)$ correction to the growing mode in the linear theory.
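For completeness, note that the first of these equations follows directly from the definition~(\ref{def: integral-D-F}): one has $D'(\eta)=-3/z^2(\eta)$, so the combination $z^2 D'$ is constant in $\eta$ and \begin{eqnarray} 0=\left(z^2 D'\right)'=z^2\left(D''+2{z'\over z}D'\right)\,, \end{eqnarray} which is the decaying-mode equation above. The analogous check for $F$ uses in addition the background equations of motion.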
This shows that, within ${\cal O}(\epsilon^2)$ accuracy in gradient expansion, the curvature perturbation ${{\frak R}}_u^{\rm NL}$ obeys the following nonlinear second-order differential equation: \begin{eqnarray} {{{\frak R}}_u^{\rm NL}}''+2 {z'\over z} {{{\frak R}}_u^{\rm NL}}' +{c_s^2\over 4} \,{}^{(2)}R[\, {{\frak R}}_u^{\rm NL}\,]={\cal O}(\epsilon^4)\,, \label{eq: basic eq for NL} \end{eqnarray} where ${}^{(2)}R[\,{{\frak R}}_u^{\rm NL}\,]$ is the Ricci scalar of the metric $\delta_{ij}\exp\left(2{\frak R}_u^{\rm NL}\right)$. Equation~(\ref{eq: basic eq for NL}) is our main result. It is easy to see that in the linear limit Eq.~(\ref{eq: basic eq for NL}) reproduces the previous result for the curvature perturbation in the unitary gauge~\cite{Kobayashi:2010cm}, \begin{eqnarray} {{\cal R}^{\rm Lin}_u}''+2{z'\over z} {{\cal R}^{\rm Lin}_u}' -c_s^2\,\Delta {\cal R}^{\rm Lin}_u=0\,, \label{eq: linear eq R} \end{eqnarray} where $\Delta$ denotes the Laplacian operator on the flat background. Equation~(\ref{eq: basic eq for NL}) can be regarded as the master equation for the nonlinear superhorizon curvature perturbation at second order in gradient expansion. It must, however, be used with caution, since it is derived under the assumption that the decaying mode is negligible at leading order in gradient expansion. Moreover, if the right-hand side of (\ref{eq: basic eq for NL}) is set to exactly zero, this master equation becomes a closed equation, and it may be a useful approximation to a full nonlinear solution on Hubble horizon scales or even on scales somewhat smaller than the Hubble radius. \section{Summary and discussion} \label{sec:summary} In this paper, we have developed a theory of nonlinear cosmological perturbations on superhorizon scales for G-inflation, for which the inflaton Lagrangian is given by $W(X,\phi)-G(X,\phi)\Box\phi$.
In the case of $G_X=0$, {\em i.e.}, k-inflation, the energy-momentum tensor for the scalar field is equivalent to that of a perfect fluid. In the case of G-inflation, however, it can no longer be recast into a perfect fluid form, and hence its imperfect nature shows up when the inhomogeneity of the Universe is considered. We have solved the field equations using spatial gradient expansion in terms of a small parameter $\epsilon:=k/(aH)$, where $k$ is a wavenumber, and obtained a general solution for the metric and the scalar field up to ${\cal O}(\epsilon^2)$. We have introduced an appropriately defined variable for the nonlinear curvature perturbation in the uniform $\phi$ gauge, ${{\frak R}}_u^{\rm NL}$. Upon linearization, this variable reduces to the previously defined linear curvature perturbation ${\cal R}_u^{\rm Lin}$ on uniform $\phi$ hypersurfaces. Then, it has been shown that ${{\frak R}}_u^{\rm NL}$ satisfies a nonlinear second-order differential equation~(\ref{eq: basic eq for NL}), which is a natural extension of the linear perturbation equation for ${\cal R}_u^{\rm Lin}$. We believe that our result can further be extended to include generalized G-inflation, {\em i.e.}, the most general single-field inflation model~\cite{Kobayashi:2011nu}, though the computation required would be much more complicated. The nonlinear evolution of perturbations, and hence the amount of non-Gaussianity, are affected by the ${\cal O}(\epsilon^2)$ corrections if, for example, there is a stage during which the slow-roll conditions are violated. Calculating the three-point correlation function of curvature perturbations including the ${\cal O}(\epsilon^2)$ corrections will be addressed in a future publication. Finally, let us comment on our method in comparison with the in-in formalism developed in the literature. Our formalism is valid for the classical evolution on superhorizon scales, while the in-in formalism can also capture the quantum evolution on subhorizon scales.
Comparing the two approaches therefore allows one to pick out the quantum contribution to the non-Gaussianity directly. Moreover, our formalism handles the curvature perturbation itself, rather than its correlation functions as in the in-in approach, so its time evolution is understood more transparently. \acknowledgments This work was supported by the JSPS Grant-in-Aid for Young Scientists (B) No.~23740170 and for JSPS Postdoctoral Fellowships No.~24-2236.
\section{Supplemental Material} Part~A contains computational details for Approach~A [Eqs.~\eqref{effective_eqs_of_motion}--\eqref{G_EJ}] and Part~B for Approach~B [Eqs.~\eqref{EE_ka_f_th_reg} and~\eqref{L_th}]. \bigskip \begin{center} \bfseries\small Part~A: Heat Waves in Random Media \end{center} \smallskip The energy density operator given by \eqref{Hamiltonian_inhomog_CFT} and the corresponding heat current operator are \begin{equation} \mathcal{E}(x)=v(x)[T_+(x)+T_-(x)], \quad \cJ(x) = v(x)^2 [T_{+}(x) - T_{-}(x)]. \end{equation} Using the Heisenberg equation and \eqref{T_rx_T_rpxp}, one verifies that \begin{equation} \partial_{t} \mathcal{E}(x,t) + \partial_{x} \cJ(x,t) = 0, \quad \partial_t \cJ(x,t) + v(x) \partial_x \bigl[ v(x)\mathcal{E}(x,t) + S(x) \bigr] = 0, \end{equation} where $S(x) = -(c/12 \pi) \bigl[ v(x)v''(x) - v'(x)^2/2 \bigr]$ is an anomaly originating from the Schwinger term in \eqref{T_rx_T_rpxp}. These equations of motion guarantee conservation of energy but not of the total current $\int dx\, \cJ(x)$ or the momentum $\int dx\, [T_+(x)-T_-(x)]$. We define $\tilde{E}(x,t) = \langle \mathcal{E}(x,t) \rangle$ and $\tilde{J}(x,t) = \langle \cJ(x,t) \rangle$ with $\langle \cdot \rangle$ the expectation value in some arbitrary state in the thermodynamic limit. The expectations satisfy the same equations of motion as the operators. They have the following static solutions: $\tilde{E}_{\textnormal{stat}}(x) = [C_1 - S(x)]/v(x)$ and $\tilde{J}_{\textnormal{stat}}(x) = C_2$ with real constants $C_{1}$ and $C_{2}$, which describe an equilibrium state if $C_2=0$ and a nonequilibrium steady state if $C_2 \neq 0$. It follows that $E(x,t) = \tilde{E}(x,t) - \tilde{E}_{\textnormal{stat}}(x)$ and $J(x,t) = \tilde{J}(x,t) - \tilde{J}_{\textnormal{stat}}(x)$ satisfy \eqref{effective_eqs_of_motion}. 
To solve \eqref{effective_eqs_of_motion} with our initial conditions, we observe that we can write \begin{equation} E(x,t) = [{u_{+}(x,t)} + {u_{-}(x,t)}]/{v(x)}, \quad J(x,t) = {u_{+}(x,t)} - {u_{-}(x,t)}, \end{equation} with $u_{\pm}(x,t)$ satisfying $\partial_t u_{\pm}(x,t) \pm v(x) \partial_x u_{\pm}(x,t) = 0$. Our initial conditions $E(x,0) = e_0(x)$ and $J(x,0) = 0$ translate into $u_{\pm}(x,0) = v(x) e_0(x)/2$. Using standard methods for partial differential equations, we find the following exact solution for the initial value problem for $u_{\pm}$:% \begin{equation} u_{\pm}(x,t) = \int \!dy\, \frac{\th(\pm(x-y))}{2} \int \frac{d\o}{2\pi} \mathrm{e}^{\mathrm{i}\o \int_{y}^{x} \!d\tilde{x}\, v(\tilde{x})^{-1} \mp \mathrm{i}\o t} e_0(y) \end{equation} with the Heaviside function $\theta(x)$. Inserting \eqref{v_xi}, using \eqref{main_property} to compute the impurity average, and using a standard Gaussian integral, we obtain \begin{equation} \mathbb{E}[u_{\pm}(x,t)] = \int dy\, \frac{v}{2} G_{\pm}(x-y,t) e_0(y) \end{equation} with $G_{\pm}(x,t)$ in \eqref{G_r_xt}. In a similar manner we compute $\mathbb{E}[u_{\pm}(x,t)/v(x)]$. From this, the results in \eqref{EE_EJ_xi} and \eqref{G_EJ} follow. Energy conservation in \eqref{EE_E_xi} can be shown as follows. We can write $G^{\mathcal{E}}_\pm(x,t) = [\th(\pm x)/2] (2\pi)^{-1/2} \partial_{x} [\chi_{\pm}] \mathrm{e}^{-\chi_{\pm}^2/2}$ with $\chi_{\pm} = (x\mp vt)/\sqrt{\Lambda(x)}$. Using this, one finds $\int \!dx\, e(x,t) = \int \!dx\, e_0(x)$ by a change of variables to $\chi_{\pm}$ and computing a Gaussian integral. \bigskip \begin{center} \bfseries\small Part~B: Linear-Response Theory \end{center} \smallskip Let $\ka_{\textnormal{th},\xi}(\o)$ be the thermal conductivity at fixed impurity configuration indicated by the subscript $\xi$. 
We define it as the response function related to the total heat current obtained by perturbing the equilibrium state at temperature $\b^{-1}$ with a unit pulse perturbation $V = -(\d\b/\b) \int dx\, W(x) \mathcal{E}(x)$ at time zero, where $W(x)$ is a smooth function equal to $1/2$ ($-1/2$) to the far left (right), cf.\ [\ref{GLM_SM}, \ref{LLMM2_SM}]. Using standard linear-response theory [\ref{Kubo_SM}], one derives the Green-Kubo formula \begin{equation} \label{GK_1} \ka_{\textnormal{th},\xi}(\o) = \b \!\int_{0}^{\b} \!d\t \!\int_{0}^{\infty} \!dt\, \mathrm{e}^{\mathrm{i}\o t} \!\int\!dx \int\! dx'\, \partial_{x'} \bigl[ -W(x') \bigr] \bigl\langle \cJ(x,t) \cJ(x',i\t) \bigr\rangle_{\b}^{c}, \end{equation} where $\langle \cJ(x,t) \cJ(x',\mathrm{i}\t) \rangle_{\b}^{c}$ is the connected current-current correlation function in thermal equilibrium with respect to $H$ in \eqref{Hamiltonian_inhomog_CFT}. Since translational invariance is broken, we cannot change variables to do the $x'$-integral. Using CFT results developed in [\ref{GLM_SM}], we derive an explicit formula for the correlation function in \eqref{GK_1}. Computing the time integrals exactly using the residue theorem, we obtain $D_{\textnormal{th},\xi} = \pi vc/3\b = D_{\textnormal{th}}$ independent of $\xi$ and \begin{equation} \label{ka_f_th_reg} \Re \ka_{\textnormal{th},\xi}^{\textnormal{reg}}(\o) = \frac{\pi c}{6\b} \biggl[ 1 + \left( \frac{\o\b}{2\pi} \right)^2 \biggr] \!\int \!dx\int\! dx'\, \partial_{x'} \bigl[ -W(x') \bigr] \biggl( 1 - \frac{v}{v(x)} \biggr) \cos \biggl( \o \!\int_{x'}^{x} \frac{d\tilde{x}}{v(\tilde{x})} \biggr). \end{equation} For standard CFT, \eqref{ka_f_th_reg} is zero. To compute $\ka_{\textnormal{th}}(\o) = \mathbb{E}[\ka_{\textnormal{th},\xi}(\o)]$, we write the cosine as sum of exponentials, insert \eqref{v_xi}, and use \eqref{main_property}. 
After averaging, translation invariance is recovered, which allows us to do the $x'$-integral and obtain the result independent of $W(x)$ given in \eqref{EE_ka_f_th_reg}. Lastly, that $L_{\mathrm{th}}$ is given by \eqref{L_th} independent of impurity details can be shown as follows. We change the integration variable in \eqref{EE_ka_f_th_reg} to $\zeta = \o x/v$ and note that the function in the exponential becomes $-(1/2) \Gamma_0 a_0 (\o/v)^2 F(v|\zeta|/\o a_0)$, which equals $-(1/2) \Gamma_0 \o |\zeta|/v$ up to subleading terms not contributing to the integral as $\o\to0$. This implies the result in \eqref{L_th}. \bigskip \noindent {\bf References} {\small% \begin{enumerate}[leftmargin=2.0em, itemsep=0.0em, label={[\arabic*]}, ref={\arabic*}, itemindent=0.0em] \setcounter{enumi}{11} \item \label{GLM_SM} K.~Gaw\k{e}dzki, E.~Langmann, and P.~Moosavi, ``Finite-time universality in nonequilibrium CFT,'' J.\ Stat.\ Phys.\ {\bf 172}, 353 (2018). \setcounter{enumi}{25} \item \label{LLMM2_SM} E.~Langmann, J.~L.~Lebowitz, V.~Mastropietro, and P.~Moosavi, ``Time evolution of the Luttinger model with nonuniform temperature profile,'' Phys.\ Rev.\ B {\bf 95}, 235142 (2017). \setcounter{enumi}{29} \item \label{Kubo_SM} R.~Kubo, ``Statistical-mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems,'' J.\ Phys.\ Soc.\ Jpn.\ {\bf 12}, 570 (1957). \end{enumerate} }
\section{Introduction} \label{intro} \noindent Let ${\boldsymbol\gamma}$ be a smooth closed curve parametrized by arc length, embedded in the Euclidean plane endowed with its canonical inner product, denoted by a dot. A one-parameter smooth family of plane closed curves $\left({\bf{\boldsymbol\gamma}}(\cdot,t)\right)_t$ with initial condition ${\bf{\boldsymbol\gamma}}(\cdot,0)={\bf{\boldsymbol\gamma}}$ is said to evolve by the curve shortening flow (CSF for short) if \begin{equation}\label{csf1} \frac{\partial{\bf{\boldsymbol\gamma}}}{\partial t}=\kappa {\bf n} \end{equation} \noindent where $\kappa$ is the signed curvature and ${\bf n}$ the inward pointing unit normal. By the works of Gage, Hamilton and Grayson, any embedded closed curve evolves to a convex curve (or remains convex if so) and shrinks to a point in finite time\footnote[1]{The reader can find a dynamic illustration of this result on the internet page http://a.carapetis.com/csf/}. \noindent In this note, we are interested in {\it self-similar} solutions, that is, solutions whose shapes change homothetically during the evolution. This condition is equivalent to saying, after a suitable parametrization, that \begin{equation}\label{csf2} \kappa=\varepsilon{\bf{\boldsymbol\gamma}}\cdot{\bf n} \end{equation} \noindent with $\varepsilon=\pm 1$. If $\varepsilon=-1$ (resp. $+1$), the self-similar family is called {\it contracting} (resp. {\it expanding}).
For instance, for any positive constant $C$ the concentric circles $\left(s\mapsto \sqrt{C-2t}\,(\cos s,\sin s)\right)_t$ form a self-similar contracting solution of the CSF shrinking to a point in finite time and, as a matter of fact, there is no example other than this one:\newline {\it by the curve shortening flow, the only closed embedded contracting self-similar solutions are circles\footnote[2]{The nonembedded closed curves were studied and classified by Abresch and Langer\cite{al}}.}\newline \noindent To the author's knowledge, the shortest proof of this was given by Chou-Zhu \cite{cz} by evaluating a clever integral. The proof given here is purely geometric and based on an ingenious trick used by Gage in \cite{ga}. \section{A geometric proof} \label{sec:1} \noindent Let ${\bf{\boldsymbol\gamma}}$ be a closed, simple embedded plane curve, parametrized by arclength $s$, with signed curvature $\kappa$. By reversing the orientation if necessary, we can assume that the curve is counter-clockwise oriented. The length of ${\bf{\boldsymbol\gamma}}$ is denoted by $L$, the compact domain enclosed by ${\bf{\boldsymbol\gamma}}$ will be denoted by $\Omega$ with area $A$ and the associated moving Frenet frame by $({\bf t},{\bf n})$. Let ${\boldsymbol\gamma_t}={\boldsymbol\gamma}(t,\cdot)$ be the one-parameter smooth family solving the CSF, with the initial condition ${\bf{\boldsymbol\gamma}}_0={\bf{\boldsymbol\gamma}}$.\newline \noindent Multiplying $(1)$ by ${\bf n}$, we obtain \begin{equation}\label{csf2} \frac{\partial{\bf{\boldsymbol\gamma}}}{\partial t}\cdot{\bf n}=\kappa \end{equation} \noindent Equations $(\ref{csf1})$ and $(\ref{csf2})$ are equivalent: from $(\ref{csf2})$, one can look at a reparametrization $(t,s)\mapsto \varphi(t,s)$ such that $\widetilde{\bf{\boldsymbol\gamma}}(t,s)={\bf{\boldsymbol\gamma}}(t,\varphi(t,s))$ satisfies $(1)$. A simple calculation leads to an ode on $\varphi$ whose existence is therefore guaranteed \cite{cz}.
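Although nothing below depends on it, the length-decreasing character of the flow is easy to visualize on a polygonal toy model: moving every vertex toward the midpoint of its neighbours can only shorten the polygon, by the triangle inequality applied to the averaged edges. The sketch below is such a crude analogue (our construction, not a faithful discretization of $(\ref{csf1})$, which would require arclength weights).

```python
import numpy as np

# Crude polygonal analogue of the curve shortening flow: each vertex is
# pulled toward the midpoint of its two neighbours.  This is a qualitative
# toy model only, not a faithful discretization of gamma_t = kappa n.
n = 200
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
r = 1.0 + 0.3 * np.cos(7.0 * t)                      # a wavy starting curve
gamma = np.column_stack([r * np.cos(t), r * np.sin(t)])

def length(g):
    return float(np.sum(np.linalg.norm(np.roll(g, -1, axis=0) - g, axis=1)))

lengths = [length(gamma)]
lam = 0.1
for _ in range(300):
    mid = 0.5 * (np.roll(gamma, -1, axis=0) + np.roll(gamma, 1, axis=0))
    gamma = (1.0 - lam) * gamma + lam * mid
    lengths.append(length(gamma))
```

The new edges are convex combinations of the old ones, so the total length is strictly decreasing at every step unless all edges are parallel, which cannot happen for a closed polygon.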
From now on, we will deal with equation $(\ref{csf2})$.\newline \noindent If a solution $\bf{\boldsymbol\gamma}$ of $(\ref{csf2})$ is self-similar, then there exists a non-vanishing smooth function $t\mapsto \lambda(t)$ such that ${\boldsymbol\gamma}_t(s)=\lambda(t)\,{\boldsymbol\gamma}(s)$. By $(2)$, this leads to $\lambda'(t)\,{\boldsymbol\gamma}(s)\cdot{\bf n}({\boldsymbol\gamma}_t(s))=\kappa({\boldsymbol\gamma}_t(s))$, that is $\lambda'(t)\lambda(t)\,{\boldsymbol\gamma}(s)\cdot{\bf n}(s)=\kappa(s)$. The function $s\mapsto{\boldsymbol\gamma}(s)\cdot{\bf n}(s)$ must be non zero at some point (otherwise $\kappa$ would vanish everywhere and ${\boldsymbol\gamma}$ would be a line) so the function $\lambda'\lambda$ is constant, equal to a real number $\varepsilon$ which cannot be zero. By considering the new curve $s\mapsto \sqrt{\vert \varepsilon\vert}{\boldsymbol\gamma}\left(s/\sqrt{\vert \varepsilon\vert}\right)$ which is still parametrized by arc length, we can assume that $\varepsilon=\pm 1$. In the sequel we will assume that ${\boldsymbol\gamma}$ is contracting, that is, $\varepsilon=-1$, which yields the fundamental relation: \begin{equation}\label{fund-rela} \kappa+{\boldsymbol\gamma}\cdot {\bf n}=0 \end{equation} \noindent An immediate consequence is the value of $A$: indeed, by the divergence theorem and the turning tangent theorem, $$A=\frac{1}{2}\int_{{\boldsymbol\gamma}} \left(x\,dy-y\,dx\right)=-\frac{1}{2}\int_0^L {\boldsymbol\gamma}(s)\cdot{\bf n}(s)\,ds=\frac{1}{2}\int_0^L \kappa(s)\,ds=\pi$$
constant $C$. As the rotation index is $+1$, $C$ is positive and so is $\kappa$.\newline \subsection{Polar tangential coordinates} \label{sec:2} \noindent As equation $(\ref{fund-rela})$ is invariant under Euclidean motions, we can assume that the origin $O$ of the Euclidean frame lies within $\Omega$ with axis $[Ox)$ meeting ${\boldsymbol\gamma}$ orthogonally. We introduce the angle function $\theta$ formed by $-{\bf n}$ with the $x$-axis as shown in the figure below: \begin{figure}[h] \includegraphics*[scale=0.8,bb=120 350 550 750]{figure1.eps} \caption{Polar tangential coordinates} \label{fig:1} \end{figure} \noindent Then $-{\bf n}(s)=(\cos\theta(s),\sin\theta(s))$. Differentiating this equality, we obtain $\kappa(s)\,{\bf t}(s)=\theta'(s)\,{\bf t}(s)$, that is $\theta(s)=\int_0^s \kappa(u)\,du$ since $\theta(0)=0$. As $\theta'=\kappa\geqslant C>0$, $\theta$ is a strictly increasing function on $\mathbb{R}$ onto $\mathbb{R}$. So $\theta$ can be choosen as a new parameter and we set $\overline{{\bf{\boldsymbol\gamma}}}(\theta)=(\overline{x}(\theta),\overline{y}(\theta))={\bf{\boldsymbol\gamma}}(s)$, $\overline{\bf t}(\theta)=(-\sin\theta,\cos\theta)$, $\overline{\bf n}(\theta)=(-\cos\theta,-\sin\theta)$ and we consider the function $p$ defined by $p(\theta)=-\overline{{\bf{\boldsymbol\gamma}}}(\theta)\cdot \overline{{\bf n}}(\theta)$. As $\theta(s+L)=\theta(s)+2\pi$, we note that $\overline{\bf t}$, $\overline{\bf n}$ and $p$ are $2\pi$-periodic functions. The curve $\overline{{\bf{\boldsymbol\gamma}}}$ is regular but not necessarily parametrized by arc length because $\overline{{\bf{\boldsymbol\gamma}}}'(\theta)=\frac{1}{k(s)}{\bf{\boldsymbol\gamma}}'(s)$ and we note $\overline{\kappa}$ its curvature. By definition, we have \begin{equation}\label{p=} \overline{x}(\theta)\cos\theta+\overline{y}(\theta)\sin\theta=p(\theta) \end{equation} \noindent which, by differentiation w.r.t. 
$\theta$, gives \begin{equation}\label{p'=} -\overline{x}(\theta)\sin\theta+\overline{y}(\theta)\cos\theta=p'(\theta) \end{equation} \noindent Thus, \begin{equation}\label{xy} \left\{ \begin{array}{ll} \overline{x}(\theta) &= p(\theta)\cos\theta-p'(\theta)\sin\theta\\ \overline{y}(\theta) &= p(\theta)\sin\theta+p'(\theta)\cos\theta\\ \end{array} \right. \end{equation} \noindent Differentiating once more, we obtain \begin{equation}\label{x'y'} \left\{ \begin{array}{ll} \overline{x}'(\theta) &= -\left[p(\theta)+p''(\theta)\right]\sin\theta\\ \overline{y}'(\theta) &= \hspace{0.32cm}\left[p(\theta)+p''(\theta)\right]\cos\theta\\ \end{array} \right. \end{equation} \noindent Since $\gamma$ is counter-clockwise oriented, we have $p+p''>0$.\newline \noindent Coordinates $(\theta,p(\theta))_{0\leqslant \theta\leqslant 2\pi}$ are called {\it polar tangential coordinates} and $p$ is the {\it Minkowski support function}. By (\ref{x'y'}), we remark that the tangent vectors at $\overline{{\bf{\boldsymbol\gamma}}}(\theta)$ and $\overline{{\bf{\boldsymbol\gamma}}}(\theta+\pi)$ are parallel. 
We will introduce the {\it width function} $w$ defined by $$w(\theta)=p(\theta)+p(\theta+\pi)$$ \noindent which is the distance between the parallel tangent lines at $\overline{{\bf{\boldsymbol\gamma}}}(\theta)$ and $\overline{{\bf{\boldsymbol\gamma}}}(\theta+\pi)$ and we denote by $\ell(\theta)$ the segment joining $\overline{{\bf{\boldsymbol\gamma}}}(\theta)$ and $\overline{{\bf{\boldsymbol\gamma}}}(\theta+\pi)$.\newline \noindent With these coordinates, the perimeter has a nice expression: \begin{equation} L=\int_0^{2\pi}({\overline{x}'^2+\overline{y}'^2})^{1/2}d\theta=\int_0^{2\pi} \left(p+p''\right)d\theta=\int_0^{2\pi} p\,d\theta\tag{\mbox{\rm Cauchy formula}} \end{equation} \noindent The curvature $\overline{\kappa}$ of $\overline{{\bf{\boldsymbol\gamma}}}$ is \begin{equation*} {\overline\kappa}=\frac{\overline{x}'\overline{y}''-\overline{x}''\overline{y}'}{\left(\overline{x}'^2+\overline{y}'^2\right)^{3/2}}=\frac{1}{p+p''} \end{equation*} \noindent and equation $(\ref{fund-rela})$ reads ${\overline\kappa}=p$. So, finally, \begin{equation}\label{k=p} {\overline\kappa}=p=\frac{1}{p+p''} \end{equation} \subsection{Bonnesen inequality} \label{sec:3} \noindent If $B$ is the unit ball of the Euclidean plane, it is a classical fact that the area of $\Omega-tB$ (figure \ref{fig:2}) is $A_\Omega(t)=A-Lt+\pi t^2$ \cite{bz}. 
\vspace{3.0cm} \begin{figure}[h] \centerline{\includegraphics[scale=0.7,bb=-300 0 550 140]{figure2.eps}} \caption{The domain $\Omega-tB$ with positive $t$} \label{fig:2} \end{figure} \noindent The roots $t_1,t_2$ (with $t_1\leqslant t_2$) of $A_\Omega(t)$ are real by the isoperimetric inequality and they have a geometric meaning: indeed, if $R$ is the circumradius of $\Omega$, that is the radius of the circumscribed circle, and if $r$ is the inradius of $\Omega$, that is the radius of the inscribed circle, Bonnesen \cite{bz,zc} proved in the 1920's a series of inequalities, one of them being the following one: $$t_1\leqslant r\leqslant R\leqslant t_2$$ \noindent Moreover, and this is a key point in the proof, any equality holds if and only if $\overline{{\bf{\boldsymbol\gamma}}}$ is a circle. We also note that $A_\Omega(t)<0$ for any $t\in(t_1,t_2)$. \subsection{End of proof} \label{sec:4} \noindent {\bf Special case: $\overline{{\bf{\boldsymbol\gamma}}}$ is symmetric w.r.t. the origin $O$}, that is $\overline{{\bf{\boldsymbol\gamma}}}(\theta+\pi)=-\overline{{\bf{\boldsymbol\gamma}}}(\theta)$ for all $\theta\in[0,2\pi]$, which also means that $p(\theta+\pi)=p(\theta)$ for all $\theta\in[0,2\pi]$. So the width function $w$ is twice the support function $p$. As $2r\leqslant w\leqslant 2R$, we deduce that for all $\theta$, $r\leqslant p(\theta)\leqslant R$. If $\overline{{\bf{\boldsymbol\gamma}}}$ is not a circle, then one would derive from Bonnesen inequality that $t_1<r\leqslant p(\theta)\leqslant R<t_2$. So $A_\Omega(p(\theta))<0$ for all $\theta$, that is $\pi p^2(\theta)<Lp(\theta)-\pi$. Multiplying this inequality by $1/p=p+p''$ ($>0$) and integrating on $[0,2\pi]$, we would obtain $\pi L<\pi L$ by Cauchy formula ! By this way, we proved that any symmetric smooth closed curve satisfying $(\ref{fund-rela})$ is a circle. As the area is $\pi$, the length is $2\pi$ of course. 
\vspace{0.4cm} \noindent {\bf General case:} using a genuine trick introduced by Gage \cite{ga}, we assert that\newline {\it for any oval enclosing a domain of area $A$, there is a segment $\ell(\theta_0)$ dividing $\Omega$ into two subdomains of equal area $A/2$.}\newline \noindent Proof: let $\sigma(\theta)$ be the area of the subdomain of $\Omega$, bounded by $\overline{{\bf{\boldsymbol\gamma}}}([\theta,\theta+\pi])$ and the segment $\ell(\theta)$. We observe that $\sigma(\theta)+\sigma(\theta+\pi)=A$. We can assume without lost of generality that $\sigma(0)\leqslant A/2$. Then we must have $\sigma(\pi)\geqslant A/2$, and by continuity of $\sigma$ and the intermediate value theorem, there exists $\theta_0$ such that $\sigma(\theta_0)=A/2$ and the segment $\ell(\theta_0)$ proves the claim.\hfill$\Box$ \vspace{0.4cm} \noindent Let $\omega_0$ be the center of $\ell(\theta_0)$. If $\overline{{\bf{\boldsymbol\gamma}}}_1$ and $\overline{{\bf{\boldsymbol\gamma}}}_2$ are the two arcs of $\overline{{\bf{\boldsymbol\gamma}}}$ separated by $\ell(\theta_0)$, we denote by $\overline{{\bf{\boldsymbol\gamma}}}_i^s$ ($i=1,2$) the closed curve formed by $\overline{{\bf{\boldsymbol\gamma}}}_i$ and its reflection trough $\omega_0$. Each $\overline{{\bf{\boldsymbol\gamma}}}_i^s$ is a symmetric closed curve and as $\ell(\theta_0)$ joins points of the curve where the tangent vectors are parallel, each one is strictly convex and smooth (figure \ref{fig:3}). \vspace{-6.0cm} \begin{figure}[h] \centerline{\includegraphics*[scale=0.8,bb=120 400 550 900]{figure3.eps}} \caption{Symmetrization of the curve} \label{fig:3} \end{figure} \noindent Moreover, each $\overline{{\bf{\boldsymbol\gamma}}}_i^s$ satisfies equation $(\ref{fund-rela})$ and encloses a domain of area $2\times A/2=A$. 
So we can apply the special case to both of these curves, which gives $\mbox{\rm length}(\overline{{\bf{\boldsymbol\gamma}}}_i^s)=2\pi$ for $i=1,2$, that is $\mbox{\rm length}(\overline{{\bf{\boldsymbol\gamma}}}_i)=\pi$, which in turn implies that $L=2\pi$. So $\overline{{\bf{\boldsymbol\gamma}}}$ (that is ${\bf{\boldsymbol\gamma}}$) is a circle. This proves the theorem.\hfill$\Box$
\section{Introduction}\label{intro} We investigate the isotropic XY quantum spin chain with a periodically time-dependent transverse external field acting only on one site, namely the $\kappa$-th, and free boundary conditions. The Hamiltonian reads \begin{equation}\label{eq:HXX-Vt} H_N(t)=-g\sum_{j=1}^{N-1}\left(\sigma^x_j\sigma^x_{j+1}+\sigma^y_{j}\sigma^y_{j+1}\right)-hV(\omega t)\sigma^z_\kappa\,,\quad 1<\kappa<N\,. \end{equation} Here $\sigma^x,\sigma^y,\sigma^z$ denote the Pauli matrices, and $g,h,\omega>0$ are parameters ruling respectively the spin-spin coupling, the magnitude of the external field and its frequency. We assume that $V(\om t)$ is a real periodic analytic function with frequency $\om$: \begin{equation}\label{eq:V} V(\om t)=\sum_{\substack{k\in\ZZZ}} e^{\ii k \om t} V_k\,, \qquad |V_k|\le C_0 e^{-\s |k|}\,. \end{equation} For any $t\in\mathds{R}$ and $N\in\mathds{N}$, $H_N(t)$ is a self-adjoint operator on $\mathcal{H}_N:={\mathbb{C}^2}^{\otimes N}$, and the thermodynamic limit $N\to\infty$ is taken as customary in the Fock space $\mathcal{F}:=\bigoplus_N \mathcal{H}_N$. It is well known that this system is equivalent to a chain of quasi-free fermions and therefore the $N$-particle state is fully described by a one-particle wave function. At fixed $t$ the forcing $V(\omega t)$ is just a number which we can incorporate into $h$, and the spectrum is given by the standard analysis of the rank-one perturbation of the Laplacian on $\mathds{Z}$ (see \cite{ABGM1}). Precisely, as $N\to\infty$ we have a band $[-g,g]$ and an isolated eigenvalue given by \begin{equation}\label{eq:spectHn} g\operatorname{sign}(h)\sqrt{1+\frac{h^2}{g^2}}\,. \end{equation} The study of the dynamics however is not as simple, because when $t$ varies the eigenvalue moves, and it can touch the band, creating resonances.
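Since $g\operatorname{sign}(h)\sqrt{1+h^2/g^2}=\operatorname{sign}(h)\sqrt{g^2+h^2}$, the static case is easy to test numerically: one diagonalises a long tridiagonal one-particle Hamiltonian with a single diagonal impurity. The sketch below is only an illustration (chain length, parameter values and the matrix convention, hopping $-g/2$ and impurity $+h$, are our own choices, matching the normalisation $\operatorname{spec}(\Delta)=\{-\cos q\}$ used later).

```python
import numpy as np

def impurity_spectrum(N=401, g=1.0, h=0.5):
    # one-particle Hamiltonian g*Delta + h*delta_kappa, with Delta normalised
    # so that spec(Delta) = {-cos q : q in [-pi, pi]} (off-diagonal entries -1/2)
    H = np.zeros((N, N))
    idx = np.arange(N - 1)
    H[idx, idx + 1] = H[idx + 1, idx] = -g / 2.0
    H[N // 2, N // 2] = h                      # static impurity at the middle site
    return np.linalg.eigvalsh(H)

ev = impurity_spectrum()
# isolated eigenvalue outside the band [-g, g]; predicted modulus sqrt(g^2 + h^2)
outlier, predicted = ev[-1], np.sqrt(1.0 + 0.5 ** 2)
```

With these values the largest eigenvalue agrees with $\sqrt{1.25}$ to roughly ten digits, the finite-size error being exponentially small in $N$; for $h<0$ the isolated eigenvalue appears symmetrically below the band, consistently with the factor $\operatorname{sign}(h)$ in \eqref{eq:spectHn}.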
More precisely, the dynamics in the time interval $[t_0,t]$ is governed by the following Floquet-Schr\"odinger equation on $\mathds{Z}$ (for details on its derivation and its relation with the many-body system we refer to \cite{ABGM1, ABGM2, GG, CG}) \begin{equation}\label{THE EQ} \ii\del_t\psi(x,t)=gh\Delta\psi(x,t)+hH_F(t,t_0)\psi(x,t)\,,\quad \psi(x,t_0)=\delta(x)\,,\quad x\in\mathds{Z}\,. \end{equation} Here $\delta(x)$ is the Kronecker delta centred in the origin, $\Delta$ is the Laplacian on $\mathds{Z}$ with spectrum given by $\{-\cos q, q\in[-\pi,\pi]\}$, and \begin{equation} H_F(t,t_0)\psi(x,t):= V(\om t)\psi(x,t)+\ii g\int_{t_0}^{t}\,\text{\rm d} t' J_1(g(t-t'))e^{-\ii\Delta (t-t')}V(\omega t')\psi(x, t')\,, \end{equation} where $$ J_{k}(t):=\frac{1}{2\pi}\int_{-\pi}^{\pi}dx\, e^{\ii xk+\ii t\cos x }\,,\quad k\in\mathds{Z}\,. $$ The Floquet operator $H_F$ acts as a memory term, accounting for the retarded effect of the rest of the chain on the site $\kappa$. This equation takes a more compact form in the Duhamel representation in momentum space. We denote by $\xi\in[-1,1]$ the points of the spectrum of $-\Delta$. Moreover, with a slight abuse of notation, throughout the paper we will systematically omit the customary $\hat \cdot$ to indicate either Fourier transforms (when transforming in space) or Fourier coefficients (when transforming in time). Let $\psi(\xi,t)$, $\xi\in[-1,1]$, denote the Fourier transform of $\psi(x,t)$, $x\in\mathds{Z}$. The equation corresponding to (\ref{THE EQ}) for $\psi(\xi,t)$, in its Duhamel form, reads \begin{equation}\label{duat0} (\uno+\ii hW_{t_0})\psi(\x,t)=1\,, \end{equation} where $\{W_{t_0}\}_{t_0\in\mathds{R}}$ is a family of Volterra operators for any $t>t_0$ and $\xi\in[-1,1]$, defined via \begin{equation}\label{eq:Wt0} W_{t_0}f(\x,t):=\int_{t_0}^t\,\text{\rm d} t' J_0(g(t-t'))e^{\ii g\x(t-t')}V(\omega t')f(\x, t')\,.
\end{equation} Let $L^{2}_\xi C^\omega_t([-1,1]\times[t_0,t])$ denote the space of functions which are square integrable in $\xi\in[-1,1]$ and real analytic\footnote{Mind here the superscript $\omega$ denoting analyticity as customary, not to be confused with the frequency.} in the time interval $[t_0,t]$. Each $W_{t_0}$ is a linear map from $L^{2}_\xi C^\omega_t([-1,1]\times[t_0,t])$ into itself. For any finite $t_0$, $W_{t_0}$ is a compact integral operator, which ensures the existence of a unique solution for $t-t_0<\infty$ (see for instance \cite{EN}). We denote this one-parameter family of functions by $\psi_{t_0}(\xi,t)$. As $t_0\to-\infty$ the limit of $W_{t_0}$ is an unbounded operator, denoted by $W_{\infty}$, defined through \begin{equation}\label{eq:Winf} W_{\infty}f(\x,t):=\int_{-\infty}^t\,\text{\rm d} t' J_0(g(t-t'))e^{\ii g\x(t-t')}V(\omega t')f(\x, t')\,. \end{equation} One can therefore use $W_{\infty}$ to define an asymptotic version of equation (\ref{duat0}) as $t_0\to-\infty$, \begin{equation}\label{duat} (\uno+\ii hW_{\infty})\psi(\x,t)=1\,, \end{equation} whose solutions are denoted by $\psi_\infty(\xi,\omega t)$: indeed it is easy to check that $W_{\infty}$ maps periodic functions of frequency $\omega$ into periodic functions of frequency $\omega$, so it is natural to look for solutions of (\ref{duat}) in this class of functions. Our main result partially confirms this idea. Precisely, we need the following assumption on the frequency: \begin{equation}\label{eq:dioph-alpha} \bar{\upepsilon}:=\inf_{k\in\mathds{N}}\left|\frac{2g}{\om}- k\right|>0\,. \end{equation} Then we have the following result. \begin{theorem}\label{main-as} Let $\omega>0$ satisfying (\ref{eq:dioph-alpha}). There is $\gamma_0=\gamma_0(\om,g,V)$ small enough such that if $h<\gamma_0\om$, then there exists a periodic solution of (\ref{duat}) with frequency $\om$, $\psi_\infty(x,\omega t)\in L_x^2C_t^{\omega}(\mathds{Z}\times\mathds{R})$.
In particular $\gamma_0$ is explicitly computable; see \eqref{gamma0}. Moreover \begin{equation}\nonumber \psi_{t_0}(x,t)=\psi_{\infty}(x,\omega t)+\OO{\sqrt{t-t_0}}\,. \end{equation} \end{theorem} The main relevance of this result lies in its validity for low frequencies. To the best of our knowledge, a similar control of the convergence to the synchronised periodic state for a periodically forced small quantum system coupled with free fermion reservoirs has been achieved only in \cite{fr}, for a different class of models. In general it is known that the low-frequency regime makes the dynamics harder to study. In \cite{CG} we proved the existence of periodic solutions of (\ref{duat}) with frequency $\omega$ if $V_0=\OO{h}$ and $h$ is small, or if $V_0=0$, $\omega>2g$ (high frequencies) and $h/\omega$ is small. The meaning of both these conditions is clear: if $V_0$ is large and $h$ is small, then the eigenvalue does not touch the band; if $\omega>2g$ then the forcing cannot move energy levels within the band. In particular the high-frequency assumption appears in other related works in mathematical and theoretical physics \cite{woj1, BDP, NJP, woj2, woj3, alberto}. In \cite[Proposition 3.1]{CG} we also proved that if a periodic solution $\psi_\infty(\xi,t)$ of (\ref{duat}) with frequency $\omega$ exists, then $\psi_{t_0}$ must approach $\psi_\infty(\xi,t)$ as $t_0\to-\infty$, namely the following result. \begin{prop}\label{prop:asintotica} Let $\psi_\infty(\x,\omega t)$ be a periodic solution of (\ref{duat}) with frequency $\omega$ satisfying (\ref{eq:dioph-alpha}). For any $t\in\RRR$, $\xi\in[-1,1]$ one has \begin{equation}\label{eq:delta} \psi_{t_0}(\x,t)=\psi_\infty(\x,\omega t)+\OO{\sqrt{t-t_0}}\,. \end{equation} \end{prop} Therefore the control on the long-time behaviour of the solution of (\ref{duat0}) amounts to establishing the existence of a periodic solution of (\ref{duat}) for $\omega<2g$, a condition defining the low-frequency regime.
This is a genuine PDE question, which is indeed the main focus of this paper. More specifically, we are facing an unbounded time-dependent perturbation of the continuous spectrum of the Laplacian on $\mathds{Z}$. Problems involving periodic forcing are typically dealt with via a KAM approach, namely one tries to reduce the perturbation to a constant operator by means of a sequence of bounded maps. This is for instance the approach adopted in \cite{NJP, woj3} in the context of interacting many-body systems, in which a generalisation of the classical Magnus expansion is exploited via normal form methods. Indeed some salient features of periodically driven systems, as for instance pre-thermalisation or slow heating, from the mathematical point of view are essentially consequences of the KAM reduction. A similar approach has been used in \cite{alberto} for the Klein-Gordon equation with a quasi-periodic forcing. All the aforementioned results are valid if the frequency is large enough, as usual in Magnus expansion approaches. We cope here with two main sources of difficulty. First, we deal with a perturbation of operators with continuous spectrum. Secondly, the operator in (\ref{duat}) is a perturbation of the identity, which makes the homological equation trivial at each KAM step. Thus we have to use a different approach. As in \cite{CG}, we explicitly construct a solution of (\ref{duat}) by resumming the Neumann series. The main difficulty is represented by the occurrence of small denominators, which actually vanish at some points within the spectrum of the Laplacian and also accumulate in the coefficients of the series. We cure these divergences by a suitable renormalisation of the Neumann series, and one major advance of this work is that this is done regardless of the size of the frequency $\omega$.
The interaction of a small system (the impurity) with an environment (the rest of the chain) while it is irradiated by monochromatic light is a question of primary interest in non-equilibrium statistical physics. Although more complicated systems have been considered \cite{matt, bru, fr}, quantum spin chains are particularly appealing as they present a rich phenomenology along with a limited amount of technical difficulties. Indeed the lack of ergodicity of such systems was already an object of study in the 1970s \cite{Leb, rob}. The choice of the isotropic XY chain itself greatly simplifies the computations, as one gets exact formulas for the functions $J_k$. The dynamics of an impurity was first analysed in \cite{ABGM2} with different forms of time-dependent external fields. In particular, in the case $V(\omega t)=\cos\omega t$ the authors computed the magnetisation of the perturbed spin at first order in $h$, observing a divergence at $\omega= 2g$, i.e. for a value violating \eqref{eq:dioph-alpha}. \ The rest of the paper is organised as follows. In Section \ref{eq} the main objects needed for the proof are introduced, while in Section \ref{renorm} we prove the existence of a periodic solution of (\ref{duat}) with frequency $\omega$. In Section \ref{importa} and Section \ref{stimazze} we prove a few auxiliary results used in Section \ref{renorm}. Finally, in the Appendix we sketch the proof of Proposition \ref{prop:asintotica}. \subsection*{Acknowledgements} The authors thank for the nice hospitality the School of Mathematics of Georgia Institute of Technology, where part of this work was done. L. C. was partially supported by NSF grant DMS-1500943 and by Emory University. The authors are also especially grateful to an anonymous referee who pointed out a crucial mistake in the first version of the manuscript.
\section{Set-up}\label{eq} It is convenient to define \begin{equation}\label{fag} \varphi:=\omega t\,,\qquad\alpha:=\frac g\omega\,,\qquad\gamma:=\frac h\omega\,, \end{equation} so that we can rewrite (\ref{eq:Winf}) as $$ W'_\infty\psi(\x,\varphi):=\int_{-\infty}^\varphi \,\text{\rm d} \varphi' J_0\left(\alpha(\varphi-\varphi')\right)e^{\ii \alpha \x (\varphi-\varphi')}V(\varphi')\psi(\x,\varphi')\,, $$ and hence a periodic solution of (\ref{duat}) with frequency $\omega$ should satisfy \begin{equation}\label{dua} (\uno+\ii\gamma W'_{\infty})\psi(\x,\varphi)=1\,. \end{equation} Such a solution will be explicitly constructed. \smallskip Note that the $\inf$ in \eqref{eq:dioph-alpha} is indeed a $\min$, and it is attained either at $k=\lfloor2\alpha\rfloor$, i.e. the integer part of $2\al$, or at $k=\lceil2\alpha\rceil:=\lfloor 2\al\rfloor+1$. Moreover $\bar{\upepsilon}<1$. Recall the formula \begin{equation}\label{eq:jk-tau} j(\tau):=\int_0^\infty dt J_0(t)e^{\ii\tau t}=\frac{\chi(|\tau|\leq1)+\ii\operatorname{sign}(\tau)\chi(|\tau|>1)}{\sqrt{|1-\tau^2|}}\,. \end{equation} The proof of \eqref{eq:jk-tau} can be found for instance in \cite[Lemma A.3]{CG}. Unfortunately in \cite[(A.11)]{CG} the $\operatorname{sign}(\tau)$ in the imaginary part is mistakenly omitted, whereas it is clear from the proof that it should appear; see also \cite[(A.12)]{CG}. Set \begin{equation}\label{jk} j_k(\x) :=\frac{1}{\al}j(\x + \frac{k}{\al}) \end{equation} and let us define $\xi_0:=1$, $\xi_0^*:=-1$ and for $k\in\mathds{Z}\setminus\{0\}$ \begin{equation} \xi_k:=\operatorname{sign} k-\frac k\alpha\,,\quad\xi^*_k:=\operatorname{sign} k+\frac k\alpha\,. \end{equation} \begin{lemma}\label{lemma:jk-crux} For all $k\neq0$ one has \begin{equation}\label{eq:jk-crux} j_k(\xi)=\frac{\chi(\operatorname{sign} (k)(\xi-\xi_k)\leq0)+\ii\operatorname{sign} (k)\chi(\operatorname{sign} (k)(\xi-\xi_k)>0)}{\alpha\sqrt{|(\xi-\xi_k)(\xi+\xi^*_k)|}}\,.
\end{equation} \end{lemma} \begin{proof} Using (\ref{eq:jk-tau}) and \eqref{jk} we have \begin{equation}\nonumber j_k(\xi)=\frac{\chi(|\alpha\xi+k|\leq\alpha)+\ii\operatorname{sign}(\alpha\xi+k)\chi(|\alpha\xi+k|>\alpha)}{\alpha\sqrt{|(\xi-\xi_k)(\xi+\xi^*_k)|}}\,. \end{equation} Let us write \[ \chi(|\alpha\xi+k|\leq\alpha)=\chi(\xi\leq1-\frac k\alpha)\chi(\xi\geq-1-\frac k\alpha)\nonumber \] and \[ \begin{aligned} \operatorname{sign}(\alpha\xi+k)\chi(|\alpha\xi+k|>\alpha)&=\chi(\alpha\xi+k>\alpha)-\chi(\alpha\xi+k<-\alpha)\nonumber\\ &=\chi(\xi>1-\frac k\alpha)-\chi(\xi<-1-\frac k\alpha)\,.\nonumber \end{aligned} \] Now we note that since $\xi\in[-1,1]$, if $k\geq1$ then $\chi(\xi<-1-\frac k\alpha)=0$ and if $k\leq-1$ then $\chi(\xi>1-\frac k\alpha)=0$. This implies \[ \chi(\xi\leq1-\frac k\alpha)\chi(\xi\geq-1-\frac k\alpha)=\chi(\operatorname{sign} (k)(\xi-\xi_k)\leq0)\,,\nonumber \] and \[ \chi(\xi>1-\frac k\alpha)-\chi(\xi<-1-\frac k\alpha)=\operatorname{sign} (k)\chi(\operatorname{sign} (k)(\xi-\xi_k)>0)\, \] so that the assertion is proven. \end{proof} Note that by Lemma \ref{lemma:jk-crux}, $j_k(\xi)$ is either real or purely imaginary. In particular $j_0(\x)$ is always real, while $j_k(\x)$ is purely imaginary for $|k|>2\alpha$. We conveniently localise the functions $j_k$ about their singularities. Let $r>0$ and set \begin{equation}\label{eq:jloc} \begin{cases} Lj_0(\xi)&:=j_0(\xi)(\chi(\x<-1+r)+\chi(\x>1-r))\,,\\ Lj_k(\x) &:=j_k(\x)\chi(|\x-\x_k|<r)\,,\qquad k\in\ZZZ\setminus\{0\}\,,\\ Rj_{k}(\x)&:=j_k(\x) - Lj_k(\x)\,,\qquad k\in\ZZZ\,. \end{cases} \end{equation} The following properties are proved by straightforward computations. \begin{lemma}\label{lemma:easy} \hspace{1cm} \begin{itemize} \item [i)] $\xi_k=-\xi_{-k}$ and $\xi^*_k=-\xi^*_{-k}$; \item [ii)] One has $$ \min_{\substack{|k|,|k'|\le \lfloor2\al\rfloor \\ k\ne k'}} |\x_k - \x_{k'}|=\frac{\bar{\upepsilon}}{\alpha}\,; $$ \item [iii)] $\xi_k>0$ if and only if $k<-\alpha$ or $0<k<\alpha$;
\item[iv)] One has \begin{equation}\nonumber \xi_k>\x_{k'} \Longleftrightarrow \begin{cases} k'>k>0\\ k<k'<0\\ k'>0, k<0, k'-k>2\alpha\\ k'<0, k>0, k'-k>-2\alpha\\ \end{cases} \end{equation} \item[v)] If $|k|>2\alpha$ then $|\xi_k|>1$; \item[vi)] For $k\geq1$ and $r\in(0,(4\alpha)^{-1})$ one has \begin{eqnarray} Lj_k(\xi)&=&\frac{\chi(\xi_k-r<\xi\leq\xi_k)+\ii\chi(\xi_k<\xi<\xi_k+r)}{\alpha\sqrt{|(\xi-\xi_k)(\xi+\xi^*_k)|}}\label{eq:Lj+}\\ Rj_k(\xi)&=&\frac{\chi(\xi\leq\xi_k-r)+\ii\chi(\xi\geq\xi_k+r)}{\alpha\sqrt{|(\xi-\xi_k)(\xi+\xi^*_k)|}}\label{eq:Rj+}\, \end{eqnarray} and for $k\leq-1$ \begin{eqnarray} Lj_k(\xi)&=&\frac{\chi(\xi_k<\xi <\xi_k+r)-\ii\chi(\xi_k-r<\xi\leq\xi_k)}{\alpha\sqrt{|(\xi-\xi_k)(\xi+\xi^*_k)|}}\label{eq:Lj-}\\ Rj_k(\xi)&=&\frac{\chi(\xi\geq\xi_k+r)-\ii\chi(\xi\leq\xi_k-r)}{\alpha\sqrt{|(\xi-\xi_k)(\xi+\xi^*_k)|}}\label{eq:Rj-}\,. \end{eqnarray} \item[vii)] There exist $c_1,c_2>0$ such that for all $k\in\mathds{Z}$ \begin{equation}\label{eq:chisarebbeZ} \inf_{\substack{\xi\in[-1,1]\\ Lj_k(\xi)\ne0}}|Lj_k(\xi)|\geq \frac{c_1}{\alpha\sqrt r}\,,\quad \sup_{\xi\in[-1,1]}|Rj_k(\xi)|\leq \frac{c_2}{\alpha\sqrt r}\,. \end{equation} \item[viii)] For $|k|>2\alpha$ and $\upepsilon<\bar\upepsilon$ one has \begin{equation}\label{jbello} |j_k(\x)|\le \frac{c_0}{\sqrt{\alpha{\upepsilon}}}\,. \end{equation} \end{itemize} \end{lemma} Fix $\upepsilon<\bar{\upepsilon}$ (say $\upepsilon=\bar{\upepsilon}/2$) and take $r\in(0,r^*)$, with $r^*<\frac{\upepsilon}{4\alpha}$, so that in particular property (vi) in Lemma \ref{lemma:easy} above is satisfied and moreover one has \begin{equation}\label{separati} Lj_k(\x) Lj_{k'}(\x) = 0 \qquad \mbox{for }\quad k\ne k' \end{equation} by property (ii) of Lemma \ref{lemma:easy}. \ Combining a (formal) expansion as a power series in $\gamma$ with a Fourier series in $\varphi$ (i.e. the so-called Lindstedt series) we can now obtain a formal series representation for the solution of (\ref{dua}) which is the starting point of our analysis.
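Both the closed form \eqref{eq:jk-tau} and the case analysis of Lemma \ref{lemma:jk-crux} lend themselves to a numerical sanity check (purely illustrative; the sample values of $\al$, $k$, $\x$, $\tau$ and $\varepsilon$ below are ours). The first part exploits the Laplace transform $\int_0^\infty J_0(t)e^{-st}\,dt=(1+s^2)^{-1/2}$, $\operatorname{Re}s>0$, evaluated at $s=\varepsilon-\ii\tau$, which reproduces $j(\tau)$ as $\varepsilon\to0^+$; the second compares $j_k(\x)=\frac1\al j(\x+\frac k\al)$ with the right-hand side of \eqref{eq:jk-crux}.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0 as besselj0

def j(tau):
    # closed form of eq:jk-tau
    num = 1.0 if abs(tau) <= 1 else 1j * np.sign(tau)
    return num / np.sqrt(abs(1.0 - tau * tau))

# check eq:jk-tau against the Laplace transform of J_0 at s = eps - i*tau
eps = 0.1
for tau in (0.5, -0.8, 1.7, -3.2):
    re = quad(lambda t: besselj0(t) * np.exp(-eps * t) * np.cos(tau * t), 0, 300, limit=2000)[0]
    im = quad(lambda t: besselj0(t) * np.exp(-eps * t) * np.sin(tau * t), 0, 300, limit=2000)[0]
    exact = 1.0 / np.sqrt(1.0 + (eps - 1j * tau) ** 2)   # tends to j(tau) as eps -> 0+
    assert abs((re + 1j * im) - exact) < 1e-7

def jk_lemma(k, xi, alpha):
    # right-hand side of eq:jk-crux
    s, xik, xiks = np.sign(k), np.sign(k) - k / alpha, np.sign(k) + k / alpha
    num = (s * (xi - xik) <= 0) + 1j * s * (s * (xi - xik) > 0)
    return num / (alpha * np.sqrt(abs((xi - xik) * (xi + xiks))))

alpha = 2.7
for k in (-4, -1, 1, 3):
    for xi in np.linspace(-0.95, 0.95, 7):
        direct = j(xi + k / alpha) / alpha
        assert abs(direct - jk_lemma(k, xi, alpha)) < 1e-10 * (1 + abs(direct))
```

In particular one sees numerically that $j_k$ is real on one side of $\x_k$ and purely imaginary on the other, as stated after the lemma.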
Precisely, we start by writing \begin{equation}\label{taylor} \psi(\x,\varphi) = \sum_{n\ge0}\gamma^n \psi_{n}(\x,\varphi)\,, \end{equation} so that inserting \eqref{taylor} into \eqref{dua} we see that the coefficients $\psi_n$ must satisfy \begin{equation} \psi_0=1\,,\quad \psi_n=-\ii W'_\infty[\psi_{n-1}]\,. \end{equation} We now expand \[ \psi_n(\x,\varphi)=\sum_{k\in\ZZZ}\psi_{n,k}(\x) e^{\ii k\varphi}. \] Using that \begin{equation}\label{eq:Winf-conv} (W'_\infty \psi_n)_k(\x)=j_k(\x)\sum_{\mu\in\ZZZ}V_{k-\mu}\psi_{n,\mu}(\x)\,, \end{equation} by a direct computation we obtain \begin{equation}\label{eq:yk} \left\{ \begin{aligned} \psi_1(\x,\varphi)&=\sum_{k_1\in \ZZZ}j_{k_1}(\x)V_{k_1}e^{\ii k_1\varphi}\,,\\ \psi_2(\x,\varphi)&=\sum_{k_1,k_2\in\ZZZ} j_{k_1+k_2}(\x)V_{k_2} j_{k_1}(\x)V_{k_1} e^{\ii(k_1+k_2)\varphi}\\ &\vdots\\ \psi_n(\x,\varphi)&=\sum_{k_1,\dots,k_n\in \ZZZ}\Big(\prod_{i=1}^n j_{\mu_i}(\x) V_{k_i}\Big)e^{\ii\mu_n\varphi}\,, \end{aligned} \right. \end{equation} where we denoted \begin{equation}\label{conserva} \mu_p=\mu(k_1,\ldots,k_p):=\sum_{j=1}^p k_j\,. \end{equation} Therefore we arrive at the formal series \begin{equation}\label{formale} \begin{aligned} \tilde \psi(\x,\varphi;\gamma):&=\sum_{\mu\in\ZZZ}e^{\ii\mu\varphi}\psi_\mu(\x;\gamma) =\sum_{\mu\in\ZZZ}e^{\ii\mu\varphi} \sum_{N\geq0}(-\ii\gamma)^N \psi_{N,\mu}(\x)\\ &=\sum_{\mu\in\ZZZ}e^{\ii\mu\varphi}\sum_{N\geq0} \sum_{\substack{k_1,\ldots,k_N\in \ZZZ\\ \mu_N=\mu}} (-\ii\gamma)^{N} \Big(\prod_{p=1}^N j_{\mu_p}(\x) V_{k_p}\Big)\,, \end{aligned} \end{equation} which solves \eqref{dua} to all orders in $\gamma$. Note that for each $N\in\mathds{N}$ the coefficient of $\gamma^N$ is a sum of singular terms. This makes it difficult (if not impossible) to show the convergence of (\ref{formale}), and we will instead prove the convergence of a resummed series which solves the equation.
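As a low-order sanity check of \eqref{eq:yk} (with arbitrary sample values for $\al$ and $\x$, and a hypothetical trigonometric polynomial $V$; the $(-\ii)^n$ prefactors are omitted on both sides, as they cancel in the comparison), one can verify that unrolling the recursion through the convolution rule \eqref{eq:Winf-conv} reproduces the explicit double sum for $\psi_2$.

```python
import numpy as np

alpha, xi = 2.7, 0.37                      # sample values, away from singularities
def j(tau):
    # closed form of eq:jk-tau
    return (1.0 if abs(tau) <= 1 else 1j * np.sign(tau)) / np.sqrt(abs(1 - tau * tau))
def jk(k):
    return j(xi + k / alpha) / alpha

V = {k: 1.0 / (1 + k * k) for k in range(-2, 3)}   # sample trigonometric polynomial

# psi_1 and psi_2 Fourier coefficients via the convolution rule (eq:Winf-conv) ...
psi1 = {k: jk(k) * V[k] for k in V}
psi2_conv = {k: jk(k) * sum(V.get(k - m, 0) * psi1.get(m, 0) for m in psi1)
             for k in range(-4, 5)}
# ... against the explicit double sum of eq:yk
psi2_direct = {k: 0 for k in range(-4, 5)}
for k1 in V:
    for k2 in V:
        psi2_direct[k1 + k2] += jk(k1 + k2) * V[k2] * jk(k1) * V[k1]

assert all(abs(psi2_conv[k] - psi2_direct[k]) < 1e-12 for k in psi2_conv)
```

The agreement is exact up to rounding, since for a trigonometric polynomial both sides are the same finite sum.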
\section{Proof of the Theorem}\label{renorm} To explain our construction of the series giving a solution of \eqref{duat}, it is useful to introduce a slightly modified version of the graphical formalism of \cite{CG}, inspired by the one developed in the context of KAM theory (for a review see for instance \cite{G10}). Since our problem is linear, we shall deal with linear trees, or \emph{reeds}. Precisely, an oriented tree is a finite graph with no cycle, such that all the lines are oriented toward a single point (the \emph{root}) which has only one incident line (called \emph{root line}). All the points in a tree except the root are called \emph{nodes}. Note that in a tree the orientation induces a natural total ordering ($\preceq$) on the set of nodes $N(\rho)$ and on the lines. If a node $v$ is attached to a line $\ell$ we say that $\ell$ exits $v$ if $v\preceq\ell$, otherwise we say that $\ell$ enters $v$. Moreover, since a line $\ell$ may be identified by the node $v$ which it exits, we have a natural total ordering also on the set of lines $L(\rho)$. We call \emph{end-node} a node with no line entering it, and \emph{internal node} any other node. We say that a node has \emph{degree} $d$ if it has exactly $d$ incident lines. Of course an end-node has degree one. We call \emph{reed} a labelled rooted tree in which each internal node has degree two. Given a reed $\rho$ we associate labels with each node and line as follows. We associate with each node $v$ a \emph{mode label} $k_v\in \ZZZ$ and with each line $\ell$ a {\it momentum} $\mu_\ell \in \ZZZ$ with the constraint \begin{equation}\label{conservareed} \mu_\ell = \sum_{v\prec \ell} k_v\,. \end{equation} Note that \eqref{conservareed} above is a reformulation of \eqref{conserva}. We call \emph{order} of a reed $\rho$ the number $\# N(\rho)$ of its nodes and \emph{total momentum} of a reed the momentum associated with the root line. $\Theta_{N,\mu}$ denotes the set of reeds of order $N$ and total momentum $\mu$.
We say that a line $\ell$ is \emph{regular} if $|\mu_\ell|\geq \lceil2\al\rceil$, otherwise we say it is \emph{singular}. With every singular line $\ell$ we attach a further {\it operator label} $\calO_\ell\in\{L,R\}$; if $\ell$ is singular we say that it is \emph{localised} if $\calO_\ell=L$, otherwise we say that it is {\emph{regularised}}. We then associate with each node $v$ a \emph{node factor} \begin{equation}\label{nodefactor} \calF_v = V_{k_v} \end{equation} and with each line $\ell$ a \emph{propagator} \begin{equation}\label{propagator} \calG_\ell(\x) = \left\{ \begin{aligned} &j_{\mu_\ell}(\x)\,,\qquad \ell\mbox{ is regular}\\ &\calO_\ell j_{\mu_\ell}(\x)\,,\qquad \ell \mbox{ is singular}, \end{aligned} \right. \end{equation} so that we can associate with each reed $\rho$ a value as \begin{equation}\label{val} \operatorname{Val}(\rho) = \Big(\prod_{v\in N(\rho)} \calF_v\Big) \Big(\prod_{\ell\in L(\rho)} \calG_\ell(\x)\Big). \end{equation} In particular one has formally \begin{equation}\label{ovvio} \psi_{N,\mu} = \sum_{\rho\in \Theta_{N,\mu}} \operatorname{Val}(\rho) \,. \end{equation} \begin{rmk}\label{stoqua} If in a reed $\rho$ with $\operatorname{Val}(\rho)\ne0$ there is a localised line $\ell$, i.e. if $\calO_\ell=L$, then all the lines with momentum $\mu\ne \mu_\ell$ are either regular or regularised. Indeed if $\ell$ is localised, then by \eqref{eq:jloc} we have that $\x$ is $r$-close to $\x_{\mu_\ell}$ and hence it cannot be $r$-close to $\x_\mu$ for $\mu\ne \mu_\ell$; see also \eqref{separati}. \end{rmk} Given a reed $\rho$ we say that a connected subset $\mathtt{s}$ of nodes and lines in $\rho$ is a {\it closed-subgraph} if $\ell\in L(\mathtt{s})$ implies that $v,w\in N(\mathtt{s})$ where $v,w$ are the nodes $\ell$ exits and enters respectively. We say that a closed-subgraph $\mathtt{s}$ has degree $d:=|N(\mathtt{s})|$. 
We say that a line $\ell$ {\it exits} a closed-subgraph ${\mathtt{s}}$ if it exits a node in $N(\mathtt{s})$ and enters either the root (so that $\ell$ is the root line) or a node in $N(\rho)\setminus N(\mathtt{s})$. Similarly we say that a line {\it enters} $\mathtt{s}$ if it enters a node in $N(\mathtt{s})$ and exits a node in $N(\rho)\setminus N(\mathtt{s})$. We say that a closed-subgraph $\mathtt{s}$ is a {\it resonance} if it has an exiting line $\ell_{\mathtt{s}}$ and an entering line $\ell_{\mathtt{s}}'$ such that both $\ell_{\mathtt{s}}$ and $\ell_{\mathtt{s}}'$ are localised (so that in particular by Remark \ref{stoqua} the exiting and entering lines of a resonance must carry the same momentum), while all lines $\ell\in L(\mathtt{s})$ have momentum $\mu_{\ell}\ne\mu_{\ell_{\mathtt{s}}} $. Note that by \eqref{conservareed} one has \begin{equation}\label{sec} \sum_{v\in N(\mathtt{s})} k_v=0\,. \end{equation} We denote by $\TT_{d,\mu}$ the set of resonances with degree $d$ and entering and exiting lines with momentum $\mu$. Note that if $d=1$ then $\TT_{1,\mu}$ consists of a single node $v$ with mode $k_v=0$. Let us set \begin{equation} \MM_{d,\mu}(\x) := \sum_{\mathtt{s}\in\TT_{d,\mu}}\operatorname{Val}(\mathtt{s})\,, \end{equation} where we define the value of a resonance $\mathtt{s}$ as in \eqref{val} but with the products restricted to nodes and lines in $\mathtt{s}$, namely $$ \operatorname{Val}(\mathtt{s}) := \Big(\prod_{v\in N(\mathtt{s})} \calF_v\Big) \Big(\prod_{\ell\in L(\mathtt{s})} Rj_{\mu_\ell}(\x)\Big)\,. $$ Next we proceed with the proof, which we divide into several steps. \begin{proof}[Step 1: resummation.] The idea behind resummation can be roughly described as follows. The divergence of the sum in \eqref{ovvio} is due to the presence of localised lines (and their possible accumulation).
If a reed $\rho_0\in \Theta_{N,\mu}$ has a localised line $\ell$, say exiting a node $v$, then we can consider another reed $\rho_1\in\Theta_{N+1,\mu}$ obtained from $\rho_0$ by inserting an extra node $v_1$ with $k_{v_1}=0$ and an extra localised line $\ell'$ between $\ell$ and $v$, i.e. $\rho_1$ has an extra resonance of degree one. Of course, while $\rho_0$ is a contribution to $\psi_N(\varphi)$, $\rho_1$ is a contribution to $\psi_{N+1}(\varphi)$, so when (formally) considering the whole sum, the value of $\rho_1$ will have an extra factor $(-\ii\gamma)$. In other words, in the formal sum \eqref{formale} there will be a term of the form \[ \begin{aligned} \operatorname{Val}(\rho_0)+(-\ii\gamma) \operatorname{Val}(\rho_1) &= (\mbox{common factor} )\big(Lj_{\mu_\ell}(\x) + Lj_{\mu_\ell}(\x) (-\ii\gamma) V_0 Lj_{\mu_\ell}(\x)\big) \\ &= (\mbox{common factor} ) Lj_{\mu_\ell}(\x)\big(1 + (-\ii\gamma) V_0 Lj_{\mu_\ell}(\x)\big)\,. \end{aligned} \] Of course we can indeed insert any chain of resonances of degree one, say of length $p$, so as to obtain a reed $\rho_p\in\Theta_{N+p,\mu}$, and when summing their values together we formally have \[ \begin{aligned} \sum_{p\ge0} (-\ii\gamma)^p\operatorname{Val}(\rho_p) &=(\mbox{common factor} ) Lj_{\mu_\ell}(\x)\big( 1+ (-\ii\gamma) V_0 Lj_{\mu_\ell}(\x) \\ &\qquad\qquad \qquad\qquad+ (-\ii\gamma )V_0 Lj_{\mu_\ell}(\x) (-\ii\gamma) V_0 Lj_{\mu_\ell}(\x) +\ldots \big)\\ &=(\mbox{common factor} ) Lj_{\mu_\ell}(\x)\sum_{p\ge0 } ((-\ii\gamma) V_0 Lj_{\mu_\ell}(\x))^p\\ &=(\mbox{common factor} ) \frac{Lj_{\mu_\ell}(\x)}{1+ \ii\gamma V_0 Lj_{\mu_\ell}(\x)}\,. \end{aligned} \] In other words we formally replace the sum over $N$ of the sum of reeds in $\Theta_{N,\mu}$ with the sum of reeds where no resonance of degree one appears, but with the localised propagators replaced with \[ \frac{Lj_{\mu}(\x)}{1+ \ii\gamma V_0 Lj_{\mu}(\x)}\,. \] Clearly in principle we can perform this formal substitution considering resonances of any degree.
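The chain resummation just performed is nothing but the geometric series $\sum_{p\ge0}xq^p=x/(1-q)$ with $x=Lj_{\mu}(\x)$ and $q=-\ii\gamma V_0\,Lj_{\mu}(\x)$, summable whenever $|\gamma V_0\,Lj_{\mu}(\x)|<1$; a one-line numerical check with hypothetical sample values:

```python
# sample values (hypothetical): a localised propagator and a small coupling
Lj, gamma, V0 = 0.8 - 0.3j, 0.2, 1.1
q = -1j * gamma * V0 * Lj                 # weight of one inserted degree-one resonance
partial = Lj * sum(q ** p for p in range(200))
resummed = Lj / (1 + 1j * gamma * V0 * Lj)
assert abs(partial - resummed) < 1e-12
```

Of course the actual difficulty below is not the summation of one such chain but controlling it uniformly in $\x$, where $Lj_\mu(\x)$ is unbounded.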
Here it is enough to consider resummations only of resonances of degree one and two. The advantage of such a formal procedure is that the localised propagators do not appear anymore. However, since the procedure is only formal, one has to prove not only that the new formally defined object is indeed well defined, but also that it solves \eqref{dua}. Having this in mind, let $\Theta^{\RR}_{N,\mu}$ be the set of reeds in which no resonance of degree one or two appears, and define \begin{eqnarray} \MM_\mu(\x)=\MM_\mu(\x,\gamma) &:=&(-\ii\gamma) \MM_{1,\mu}(\x) + (-\ii\gamma)^2 \MM_{2,\mu}(\x)\nonumber\\ &=&-\ii\gamma V_0 -\gamma^2\sum_{k\in\ZZZ\setminus\{0\}} V_k Rj_{k+\mu}(\x) V_{-k} \,.\label{emme} \end{eqnarray} In Section \ref{importa} we prove the following result. \begin{prop}\label{prop:main} For all $\mu\in\mathds{Z}\cap[-2\alpha,2\alpha]$ and for \begin{equation}\label{enorme} \gamma\in\left\{ \begin{aligned} &(0,+\io)\quad\qquad V_0\ge 0 \\ &(0, c \sqrt{\frac{\upepsilon}{\al}}\frac{|V_0|}{\|V\|_{L^2}^2}) \quad\qquad V_0< 0 \end{aligned} \right. \end{equation} where $c$ is a suitable absolute constant, one has \begin{equation} \inf_{\xi\in[-1,1]} |1-\mathcal M_\mu(\xi)Lj_\mu(\xi)|\geq\frac12\,. \end{equation} \end{prop} Proposition \ref{prop:main} allows us to set \begin{equation}\label{staqua} Lj^\RR_\mu(\x):=\frac{Lj_\mu(\x)}{1-\MM_\mu(\x,\gamma) Lj_\mu(\x)}\,. \end{equation} For any $\rho\in\Theta^{\RR}_{N,\mu}$ let us define the renormalised value of $\rho$ as \begin{equation}\label{renval} \operatorname{Val}^{\RR}(\rho):=\left(\prod_{v\in N(\rho)} \calF_{v}\right)\left(\prod_{\ell\in L(\rho)}\calG^{\RR}_{\ell}\right)\,, \end{equation} where \begin{equation}\label{renprop} \calG^{\RR}_{\ell_i}=\left\{ \begin{aligned} &Lj_{\mu_{\ell_i}}^{\RR}(\x),\qquad \qquad |\mu_{\ell_i}|\le \lfloor2\al\rfloor,\quad \calO_{\ell_i}=L\,,\\ &Rj_{\mu_{\ell_i}}(\x),\qquad \qquad |\mu_{\ell_i}|\le \lfloor2\al\rfloor,\quad \calO_{\ell_i}=R\,,\\ &j_{\mu_{\ell_i}}(\x),\qquad\qquad |\mu_{\ell_i}|\geq \lceil2\al\rceil\,.
\end{aligned} \right. \end{equation} In particular if $\lfloor2\al\rfloor=0$ we have to renormalise only $j_0$, which was the case in our previous paper \cite{CG}. Then we define \begin{equation}\label{coeffrin} \psi_\mu^\RR(\x;\gamma):=\sum_{N\ge1}(-\ii\gamma)^N \sum_{\rho\in\Theta^\RR_{N,\mu}}\operatorname{Val}^\RR(\rho)\,, \end{equation} so that \begin{equation}\label{asy1} \psi^\RR(\varphi;\x,\gamma):= \sum_{\mu\in\ZZZ}e^{\ii\mu\varphi}\psi_\mu^\RR(\x;\gamma)\,, \end{equation} is the renormalised series which we shall prove to be a regular solution of (\ref{dua}). \end{proof} \begin{proof}[Step 2: radius of convergence.] First of all we prove that the function \eqref{asy1} is well defined. We start by noting that the node factors are easily bounded by \eqref{eq:V}. The propagators defined in (\ref{renprop}) are bounded as follows. If $|\mu_\ell|\ge \lceil2\al\rceil$, formula (\ref{jbello}) yields $$ |j_{\mu_\ell}(\x)|\leq\frac{c_0}{\sqrt{2\lceil2\al\rceil\upepsilon}}\,, $$ while for $|\mu_\ell|\le\lfloor2\alpha\rfloor$, by (\ref{eq:chisarebbeZ}) we have $$ |{Rj}_{\mu_\ell}(\x)|\le \frac{c_2}{\alpha\sqrt r}\,. $$ Regarding the resummed propagators the bound is more delicate. We start by denoting \begin{equation}\label{minimo} \overline{V}:= \left\{ \begin{aligned} &0\qquad \mbox{ if } V_k=0,\ \forall \,k\ge1\\ &\max_{k\in\ZZZ\setminus\{0\} }|V_k|^2 \quad \mbox{otherwise}\,, \end{aligned} \right. \end{equation} and \begin{equation}\label{vbarl} \underline{V}_{\leq 2\alpha}:=\left\{ \begin{aligned} &0\qquad \mbox{ if } V_k=0,\ \forall \,k=1,\ldots,\lfloor2\al\rfloor \\ &\min_{\substack{|k|\leq\lfloor2\alpha\rfloor \\ V_k\ne0}}|V_k| \quad \mbox{otherwise}\,, \end{aligned} \right. \qquad \overline{V}_{> 2\alpha}:= \left\{ \begin{aligned} &0\qquad \mbox{ if } V_k=0,\ \forall \,k\ge \lceil2\al\rceil \\ &\max_{\substack{|k|>\lceil2\alpha\rceil \\ V_k\ne0}}|V_k| \quad \mbox{otherwise}\,. \end{aligned} \right. \end{equation} In Section \ref{stimazze} we prove the following result.
\begin{prop}\label{lemma:jR} There is a constant $c>0$ such that \begin{equation}\label{stastimazza} |Lj^\RR_\mu(\x)|\leq T(V,\upepsilon,\alpha;\gamma)=T(\gamma):= \begin{cases} \frac{c}{\gamma|V_0|}&\mbox{if}\quad V_0\neq0\mbox{ and } \gamma \leq c\sqrt{\frac{\upepsilon}{{\al}}} \frac{|V_0|}{\|V\|^2_{L^2}}\,;\\ c\sqrt{\frac{\alpha}{\upepsilon}}\gamma^{-2} \underline{V}^{-2}_{\le 2\al}&\mbox{if}\quad V_0=0,\ \underline{V}_{\le 2\al}\ne0\,;\\ c{{\sqrt{\alpha}}{}}\gamma^{-2} \ol{V}^{-2}_{> 2\al}&\mbox{if}\quad V_0=0,\ \underline{V}_{\le 2\al}=0\,. \end{cases} \end{equation} \end{prop} Let us set now \begin{equation}\label{eq:B} B=B(r,\alpha,\upepsilon):=\max\left(\frac{1}{\alpha\sqrt r},\frac{1}{\sqrt{\alpha\upepsilon}}\right) =\frac{1}{\al\sqrt{r}} \,,\qquad C_1:=\max(c_0,c_2/2)\,. \end{equation} Note that if in a reed $\rho\in\Theta^\RR_{N,\mu}$ there are $l$ localised lines, we have \begin{equation}\label{mestavoascorda1} \begin{aligned} |\operatorname{Val}^\RR(\rho)|&=\Big(\prod_{v\in N(\rho)} |\calF_v|\Big) \Big(\prod_{\ell\in L(\rho)}| \calG_\ell|\Big)\\ &\le \Big( C_0^Ne^{-\s\sum_{v\in N(\rho)}|k_v|}\Big) \Big(\prod_{\ell\in L(\rho)}| \calG_\ell|\Big)\\ &\le (C_0C_1)^NB^{N}T(\gamma)^{l }e^{-\s|\mu|}\,, \end{aligned} \end{equation} where $C_0$ and $\s$ are the constants in \eqref{eq:V}. By construction, in a renormalised reed there must be at least two lines between two localised lines, since we resummed the resonances of degree one and two. This implies that a reed with $N$ nodes can have at most $l=\lceil N/3\rceil$ localised lines. Then by (\ref{coeffrin}) we obtain \begin{equation}\label{stimatotale1} |\psi_\mu^\RR(\x;\gamma)|\le C \sum_{N\ge 1}\gamma^{N} B^{N}T(\gamma)^{\frac N3}e^{-\s|\mu|/2}\,, \end{equation} so that the series above converges for \begin{equation}\label{eq:cond2gamma} \gamma^3 T(\gamma)B^3<1\,.
\end{equation} This entails \begin{equation}\label{eq:conv-gamma} \gamma< \begin{cases} \min\left(\sqrt[3]{|V_0|B^{-3}},c\sqrt{\frac{\upepsilon}{{\al}}} \frac{|V_0|}{\|V\|^2_{L^2}}\right)&V_0\neq0\,;\\ c^{-1} B^{-3}\sqrt{\frac{\upepsilon}{\al}}\ \underline{V}_{\le2\al}^2 &V_0=0\,,\underline{V}_{\le2\al}>0;\\ c^{-1} B^{-3}{\frac{1}{\sqrt{\al}}} \ol{V}_{>2\al}^2 &V_0=0\,,\underline{V}_{\le2\al}=0.\\ \end{cases} \end{equation} Therefore, under such a smallness condition on $\gamma$, the function $\psi^\RR(\varphi;\x,\gamma)$ (recall (\ref{asy1})) is analytic w.r.t. $\varphi\in\TTT$, uniformly in $\x\in[-1,1]$, for $\gamma$ small enough. \end{proof} Choosing $\upepsilon=\ol{\upepsilon}/2$ and $r:=\frac{\upepsilon}{8\alpha}$ we have by (\ref{eq:B}) $$ B(r,\alpha,\upepsilon)^{-3}=(\al\sqrt{r}\,)^{3}=\left(\frac{\sqrt{\al\ol{\upepsilon}}}{4}\right)^{3}\,, $$ so that condition \eqref{eq:conv-gamma} implies that the series converges for $\gamma\le \gamma_0:=c_1\gamma_1$, where \begin{equation}\label{gamma0} \gamma_1:=\begin{cases} \min\left(\sqrt[3]{|V_0|(\frac{\sqrt{\al\bar{\upepsilon}}}{4})^{3}},\sqrt{\frac{\bar \upepsilon}{{2\al}}} \frac{|V_0|}{\|V\|^2_{L^2}}\right)&V_0\neq0\,;\\ (\frac{\sqrt{\al\bar{\upepsilon}}}{4})^{3}\sqrt{\frac{\bar\upepsilon}{2\al}}\ \underline{V}_{\le2\al}^2 &V_0=0\,,\underline{V}_{\le2\al}>0;\\ (\frac{\sqrt{\al\bar{\upepsilon}}}{4})^{3}{\frac{1}{\sqrt{\al}}} \ol{V}_{>2\al}^2 &V_0=0\,,\underline{V}_{\le2\al}=0.\\ \end{cases} \end{equation} and $c_1:=\min\{c,c^{-1}\}$. \begin{proof}[Step 3: $\psi^\RR(\varphi;\x,\gamma)$ solves \eqref{dua}.] Now we want to prove that $$ (\uno+i\gamma W'_{\infty})\psi^{\RR}(\varphi;\x,\gamma)=1\,. $$ This is essentially a standard computation.
Using (\ref{coeffrin}) and (\ref{asy1}), the last equation can be rewritten as \begin{equation}\label{eq:W'-step5} i\gamma W'_{\infty}\psi^{\RR}(\varphi;\x,\gamma)=1-\psi^{\RR}(\varphi;\x,\gamma)=-\sum_{\mu\in\mathds{Z}}e^{\ii \mu \varphi}\sum_{N\geq1}(-\ii\gamma)^N \sum_{\rho\in\Theta^\RR_{N,\mu}}\operatorname{Val}^\RR(\rho)\,. \end{equation} Moreover thanks to (\ref{eq:Winf-conv}) we can compute \begin{eqnarray} i\gamma W'_{\infty}\psi^{\RR}(\varphi;\x,\gamma)&=&\ii\gamma\sum_{\mu\in\mathds{Z}}\psi^\RR_\mu(\x;\gamma)(W'_\infty e^{\ii\mu\varphi})\nonumber\\ &=&\ii\gamma\sum_{\mu\in\mathds{Z}} e^{\ii\mu\varphi}j_{\mu}(\x)\sum_{k\in\mathds{Z}} V_{\mu-k}\psi_k^{\RR}(\x;\gamma)\nonumber\\ &=&\ii\gamma\sum_{\mu\in\mathds{Z}} e^{\ii\mu\varphi}\sum_{N\ge0}(-\ii\gamma)^{N}j_{\mu}(\x)\sum_{\mu_1+\mu_2=\mu}V_{\mu_1} \sum_{\rho\in\Theta^\RR_{N,\mu_2}}\!\!\!\! \operatorname{Val}^\RR(\rho)\,.\nonumber \end{eqnarray} Thus we can write (\ref{eq:W'-step5}) in terms of Fourier coefficients as \begin{equation}\label{perico} \sum_{N\ge1}(-\ii\gamma)^{N}j_{\mu}(\x)\sum_{\mu_1+\mu_2=\mu}V_{\mu_1} \sum_{\rho\in\Theta^\RR_{N-1,\mu_2}}\!\!\!\! \operatorname{Val}^\RR(\rho)=\sum_{N\geq1}(-\ii\gamma)^N \sum_{\rho\in\Theta^\RR_{N,\mu}}\operatorname{Val}^\RR(\rho)\,. \end{equation} Note that the root line $\ell$ of a reed has to be renormalised only if it carries momentum label $|\mu_\ell|\le\lfloor{2\al}\rfloor$ and operator $\calO_\ell=L$, thus for $|\mu_\ell|\ge\lceil{2\al}\rceil$, or $|\mu_\ell|\le\lfloor2\al\rfloor$ and $\calO_\ell=R$, we see immediately that \eqref{perico} holds. Concerning the case $\mu_\ell=\mu$ with $|\mu|\le\lfloor2\al\rfloor$ and $\calO_\ell=L$, we first note that $$ j_{\mu}(\x) \sum_{\mu_1+\mu_2=\mu}V_{\mu_1} \sum_{\rho\in\Theta^\RR_{N-1,\mu_2}}\!\!\!\!
\operatorname{Val}^\RR(\rho)= \sum_{\rho\in\ol{\Theta}^\RR_{N,\mu}} \operatorname{Val}^\RR(\rho) $$ where $\ol{\Theta}^\RR_{N,\mu}$ is the set of reeds such that the root line may exit a resonance of degree $\le2$, so that equation \eqref{perico} reads \begin{equation}\label{sforzo} \psi^\RR_\mu(\x;\gamma)= \sum_{N\ge1}(-\ii\gamma)^{N} \sum_{\rho\in\ol{\Theta}^\RR_{N,\mu}} \operatorname{Val}^\RR(\rho)\,. \end{equation} Let us split \begin{equation}\label{split} \ol{\Theta}^\RR_{N,\mu}=\widetilde{\Theta}^\RR_{N,\mu}\cup \hat{\Theta}^\RR_{N,\mu}\,, \end{equation} where $\hat{\Theta}^\RR_{N,\mu}$ are the reeds such that the root line indeed exits a resonance of degree $\le2$, while $\widetilde{\Theta}^\RR_{N,\mu}$ is the set of all other renormalised reeds. Therefore we have \begin{equation}\label{pezzouno} \sum_{N\ge1}(-\ii\gamma)^N\sum_{\rho\in\widetilde{\Theta}^\RR_{N,\mu}} \operatorname{Val}^\RR(\rho) = Lj_\mu^\RR(\x) \sum_{\mu_1+\mu_2=\mu} (\ii\gamma V_{\mu_1}) \psi_{\mu_2}^\RR(\x;\gamma) \end{equation} and \begin{equation}\label{pezzodue} \sum_{N\ge1}(-\ii\gamma)^N\sum_{\rho\in\hat{\Theta}^\RR_{N,\mu}} \operatorname{Val}^\RR(\rho) = Lj_\mu^\RR(\x)\MM_{\mu}(\x,\gamma)Lj_\mu^\RR(\x) \sum_{\mu_1+\mu_2=\mu} (\ii\gamma V_{\mu_1}) \psi_{\mu_2}^\RR(\x;\gamma)\,, \end{equation} so that summing together \eqref{pezzouno} and \eqref{pezzodue} we obtain $\psi^\RR_\mu(\x;\gamma)$. \end{proof} This concludes the proof of the Theorem. \section{Proof of Proposition \ref{prop:main}}\label{importa} In this section we prove Proposition \ref{prop:main}. We will consider explicitly the case $\mu\in\mathds{N}$, since negative $\mu$ are dealt with in a similar way.
Set for brevity \begin{eqnarray} D_k(\xi)&:=&\alpha\sqrt{|(\xi-\xi_k)(\xi+\xi^*_k)|}\,,\label{eq_D}\\ A_{\mu,k}(\xi)&:=&-D^{-1}_{\mu+k}(\xi)+D^{-1}_{\mu-k}(\xi)\,,\label{eq:A}\\ G_{\mu,k}(\xi)&:=&(Rj_{\mu+k}(\xi)+Rj_{\mu-k}(\xi))Lj_\mu(\xi)\,, \label{eq_G} \end{eqnarray} and note that we can write \begin{equation}\label{eq:MeG} \mathcal M_\mu(\xi) Lj_\mu(\xi)=-\ii\gamma V_0 Lj_\mu(\xi) -\gamma^2\sum_{k\geq1}|V_k|^2G_{\mu,k}(\xi)\,. \end{equation} The next two lemmas establish useful properties of the functions $G_{\mu,k}(\xi)$ and $A_{\mu,k}(\xi)$. \begin{lemma}\label{lemma:Aposi} Let $k\in\mathds{N}$ and let $r$ be sufficiently small. One has \begin{equation}\label{eq:Aposi} \inf_{\xi\in[\xi_\mu-r,\xi_\mu+r]} A_{\mu,k}(\xi)>0\,. \end{equation} \end{lemma} \begin{proof} By explicit calculation \begin{equation}\label{eq:Aximu} A_{\mu,k}(\xi_\mu)= \begin{cases} \frac{2k}{\sqrt{4\alpha^2-k^2}(\sqrt{k(2\alpha-k)}+\sqrt{k(k+2\alpha)})}>0&k<2\alpha\,,\\ \frac{4\alpha}{\sqrt{k^2-4\alpha^2}(\sqrt{k(k-2\alpha)}+\sqrt{k(k+2\alpha)})}>0&k>2\alpha\, \end{cases} \end{equation} so we can conclude by continuity. \end{proof} \begin{lemma}\label{lemma:G} If $\xi \in(\xi_\mu,\xi_\mu+r)$ one has \begin{eqnarray} \Re(G_{\mu,k}(\x))&=&\begin{cases} -(D_\mu(\x) D_{\mu+k}(\x))^{-1}&1\leq k\leq \mu\\ -(D_\mu(\x) D_{\mu+k}(\x))^{-1}&\mu+1\leq k\leq \lfloor2\alpha\rfloor\\ \frac{A_{\mu,k}(\xi)}{D_\mu(\xi)}&k\geq \lceil2\alpha\rceil\,. \end{cases}\label{eq:ReGr}\\ \Im(G_{\mu,k}(\x))&=&\begin{cases} (D_\mu (\x)D_{\mu-k}(\x))^{-1}&1\leq k\leq \mu\\ (D_\mu (\x)D_{\mu-k}(\x))^{-1}&\mu+1\leq k\leq \lfloor2\alpha\rfloor\\ 0&k\geq \lceil2\alpha\rceil\,.\label{eq:ImGr} \end{cases} \end{eqnarray} If $\xi \in(\xi_\mu-r,\xi_\mu]$ one has \begin{eqnarray} \Re(G_{\mu,k}(\x))&=&\begin{cases} (D_\mu (\x)D_{\mu-k}(\x))^{-1}&1\leq k\leq \mu\\ (D_\mu (\x)D_{\mu-k}(\x))^{-1}&\mu+1\leq k\leq \lfloor2\alpha\rfloor\\ 0&k\geq \lceil2\alpha\rceil\,.
\end{cases}\label{eq:ReGl}\\ \Im(G_{\mu,k}(\x))&=&\begin{cases} (D_\mu (\x)D_{\mu+k}(\x))^{-1}&1\leq k\leq \mu\\ (D_\mu (\x)D_{\mu+k}(\x))^{-1}&\mu+1\leq k\leq \lfloor2\alpha\rfloor\\ -\frac{A_{\mu,k}(\xi)}{D_\mu(\xi)}&k\geq \lceil2\alpha\rceil\,.\end{cases}\label{eq:ImGl} \end{eqnarray} \end{lemma} \begin{proof} Since we are considering the case $\mu\geq1$, $Lj_\mu(\x)$ is given by (\ref{eq:Lj+}). Our analysis of $G_{\mu,k}(\xi)$ splits into several cases. \ \textit{i)} $1\leq k\leq \mu$. In this case $\mu-k\geq0$ and $\xi_{\mu+k}<\xi_\mu<\xi_{\mu-k}$. By (\ref{eq:Rj+}) we write \begin{eqnarray} Rj_{\mu+k}(\xi)+Rj_{\mu-k}(\xi)&=&D_{\mu+k}^{-1}(\x)\chi(\xi<\xi_{\mu+k}-r)+ D^{-1}_{\mu-k}(\x)\chi(\xi<\xi_{\mu-k}-r)\nonumber\\ &+&\ii D_{\mu+k}^{-1}(\x)\chi(\xi>\xi_{\mu+k}+r)+\ii D_{\mu-k}^{-1}(\x)\chi(\xi>\xi_{\mu-k}+r)\,. \end{eqnarray} A direct computation gives \begin{eqnarray} D_\mu(\x) G_{k,\mu}(\x)&=&D^{-1}_{\mu-k}(\x)\chi(\xi_\mu-r<\xi\leq\xi_\mu)- D^{-1}_{\mu+k}(\x)\chi(\xi_\mu<\xi<\xi_\mu+r)\nonumber\\ &+&\ii(D^{-1}_{\mu+k}(\x)\chi(\xi_\mu-r<\xi\leq\xi_\mu)+D^{-1}_{\mu-k}(\x)\chi(\xi_\mu<\xi<\xi_\mu+r))\,. \end{eqnarray} \ \textit{ii)} $\mu+1\leq k\leq \lfloor2\alpha\rfloor$. Now $\mu-k<0$ and $\max(\xi_{\mu+k},\xi_{\mu-k})<\xi_\mu$. Therefore by (\ref{eq:Rj+}), (\ref{eq:Rj-}) \begin{eqnarray} Rj_{\mu+k}(\xi)+Rj_{\mu-k}(\xi)&=&D_{\mu+k}^{-1}(\x)\chi(\xi<\xi_{\mu+k}-r)+ D^{-1}_{\mu-k}(\x)\chi(\xi>\xi_{\mu-k}+r)\nonumber\\ &+&\ii D_{\mu+k}^{-1}(\x)\chi(\xi>\xi_{\mu+k}+r)-\ii D_{\mu-k}^{-1}(\x)\chi(\xi<\xi_{\mu-k}-r)\,. \end{eqnarray} Moreover \begin{eqnarray} D_\mu(\x) G_{k,\mu}(\x)&=&D^{-1}_{\mu-k}(\x)\chi(\xi_\mu-r<\xi\leq\xi_\mu)- D^{-1}_{\mu+k}(\x)\chi(\xi_\mu<\xi<\xi_\mu+r)\nonumber\\ &+&\ii(D^{-1}_{\mu+k}(\x)\chi(\xi_\mu-r<\xi\leq\xi_\mu)+D^{-1}_{\mu-k}(\x)\chi(\xi_\mu<\xi<\xi_\mu+r))\,.
\end{eqnarray} \ \textit{iii)} $k\geq \lceil2\alpha\rceil$. We have $\mu-k<0$, $\xi_{\mu+k}<\xi_\mu<\xi_{\mu-k}$ and again \begin{eqnarray} Rj_{\mu+k}(\xi)+Rj_{\mu-k}(\xi)&=&D_{\mu+k}^{-1}(\x)\chi(\xi<\xi_{\mu+k}-r)+ D^{-1}_{\mu-k}(\x)\chi(\xi>\xi_{\mu-k}+r)\nonumber\\ &+&\ii D_{\mu+k}^{-1}(\x)\chi(\xi>\xi_{\mu+k}+r)-\ii D_{\mu-k}^{-1}(\x)\chi(\xi<\xi_{\mu-k}-r)\,. \end{eqnarray} Therefore \begin{equation} G_{k,\mu}(\x)=-\ii A_{\mu,k}(\xi)Lj_\mu(\xi)=\frac{A_{\mu,k}(\xi)}{D_\mu(\xi)}(\chi(\xi_\mu<\xi<\xi_\mu+r)- \ii\chi(\xi_\mu-r<\xi\leq\xi_\mu))\,. \end{equation} Note that for $\xi \notin(\xi_\mu-r,\xi_\mu+r)$ all $G_{\mu,k}$ are identically zero, and by direct inspection we deduce (\ref{eq:ReGr}), (\ref{eq:ImGr}), (\ref{eq:ReGl}) and (\ref{eq:ImGl}). \end{proof} Let us introduce the notation \begin{equation}\label{capponinuovi} \begin{aligned} K_1(\xi)&:=\sum_{k=1}^{\lfloor2\alpha\rfloor} |V_k|^2(D_{\mu-k}(\x))^{-1}\\ K_2(\xi)&:=\sum_{k>2\al} |V_k|^2A_{\mu,k}(\x)\,,\qquad \tilde{K}_2(\x):=\sum_{k\ge1} |V_k|^2A_{\mu,k}(\x)\,. \end{aligned} \end{equation} Note that $K_1(\x)$ can vanish if and only if $V_k=0$ for all $|k|\le \lfloor 2\al\rfloor$ (that is $\underline V_{\leq 2\alpha}=0$). Similarly $K_2(\x)$ can vanish if and only if $V_k=0$ for all $|k|> \lfloor 2\al\rfloor$ (i.e. $\overline V_{>2\alpha}=0$), while $\tilde{K}_2(\x)$ can vanish if and only if the forcing is constant in time, i.e. $V(\om t)\equiv V_0$. Recall the notations introduced in \eqref{minimo}--\eqref{vbarl}. We need the following bounds on $K_1(\x)$, $K_2(\x)$ and $\tilde{K}_2(\x)$. \begin{lemma}\label{lemma:crux} There is $c>0$ such that for all $\mu\in\mathds{Z}$ and all $\x\in(\xi_\mu-r,\xi_\mu+r)$ if $K_1(\x)\ne0$ then one has \begin{equation}\label{stimeK1} c \sqrt{\frac{\upepsilon}{\al}}\ \underline{ V}_{\leq 2\alpha}^2 \le K_1(\x) \le c\sqrt{\frac{\al}{\upepsilon}}\|V\|^2_{L^2}\,.
\end{equation} \end{lemma} \begin{proof} By the Lipschitz-continuity of $D^{-1}_{\mu-k}(\xi)$ in $(\xi_\mu-r,\xi_\mu+r)$ there is $c_1\geq0$ such that for all $\xi \in(\xi_\mu-r,\xi_\mu+r)$ we have \begin{equation}\nonumber |D^{-1}_{\mu-k}(\xi)-D^{-1}_{\mu-k}(\xi_\mu)|\leq c_1\frac{r}{\sqrt\alpha}\,. \end{equation} Furthermore since $r\in(0,\frac{\upepsilon}{2\alpha})$ and $$ D_{\mu-k}(\xi_\mu)={\sqrt{|2\alpha-k|k}}\,, $$ we have \begin{eqnarray}\label{eq:inf1} \sum_{k=1}^{\lfloor2\alpha\rfloor} \frac{|V_k|^2}{ D_{\mu-k}(\x)}&\leq& C_1\left(\sup_{k\in\mathds{N}}|V_k|^2\sum_{k=1}^{\lfloor2\alpha\rfloor}\frac{1}{\sqrt{k|k-2\alpha|}}+\frac{r\|V\|^2_{L^2}}{\sqrt \alpha}\right)\nonumber\\ &\leq& {C_2}\|V\|_{L^2}^2(\sqrt{\frac{\al}{\upepsilon}} + \frac{\upepsilon}{\al\sqrt{\al}})\nonumber\\ &\le& C_3 \sqrt{\frac{\al}{\upepsilon}}\|V\|_{L^2}^2 \nonumber\,. \end{eqnarray} for some constants $C_1,C_2,C_3>0$. Similarly \begin{eqnarray} \sum_{k=1}^{\lfloor2\alpha\rfloor} \frac{|V_k|^2}{ D_{\mu-k}(\x)}&\geq& C_1\left(\underline{ V}_{\leq 2\alpha}^2\sum_{k=1}^{\lfloor2\alpha\rfloor}\frac{1}{\sqrt{k|k-2\alpha|}}-\frac{r\|V\|^2_{L^2}}{\sqrt \alpha}\right)\nonumber\\ &\geq& C_2\sqrt{\frac{\upepsilon}{\al}}\ \underline{ V}_{\leq 2\alpha}^2 \nonumber\,, \end{eqnarray} so the assertion follows. \end{proof} \begin{lemma}\label{lemma:bound-A} There is $c>0$ such that for all $\mu\in\mathds{Z}$ and all $\x\in(\x_\mu -r , \x_\mu+r)$ if $K_2(\x)\ne0$ then one has \begin{equation}\label{stimeK2} \frac{c}{\sqrt\alpha} \overline{V}^2_{>2\alpha} \le K_2(\x) \le c\sqrt{\frac{\alpha}{\upepsilon}} \|V\|^2_{L^2}. 
\end{equation} \end{lemma} \begin{proof} We use again the Lipschitz-continuity of $D^{-1}_{\mu\pm k}(\xi)$ in $(\x_\mu-r,\x_\mu+r)$ to obtain that there is $c\geq0$ such that for all $\xi \in(\xi_\mu-r,\xi_\mu+r)$ one has \begin{equation}\nonumber |A_{\mu,k}(\xi)-A_{\mu,k}(\xi_\mu)|\leq c\frac{r}{\sqrt\alpha}\,, \end{equation} and hence for all $\xi \in(\xi_\mu-r,\xi_\mu+r)$ \begin{equation}\label{combino} |\sum_{k>2\alpha} |V_k|^2 A_{\mu,k}(\xi)-\sum_{k>2\alpha} |V_k|^2 A_{\mu,k}(\xi_\mu)|\leq c\|V\|^2_{L^2}\frac{r}{\sqrt\alpha}\,. \end{equation} Now we have by \eqref{eq:Aximu} \begin{equation}\nonumber \sum_{k>2\alpha} |V_k|^2 A_{\mu,k}(\xi_\mu)\leq \|V\|^2_{L^2}\sum_{k>2\alpha} \frac{4\alpha}{\sqrt{k^2-4\alpha^2}(\sqrt{k(k-2\alpha)}+\sqrt{k(k+2\alpha)})}\leq c_1\sqrt{\frac{{\al}}{\upepsilon}}\|V\|^2_{L^2}\,, \end{equation} and similarly, if $\hat{k}$ denotes the Fourier mode at which the max in \eqref{vbarl} is attained, we have \begin{equation}\nonumber \sum_{k>2\alpha} |V_k|^2 A_{\mu,k}(\xi_\mu)\geq\overline{V}^2_{>2\alpha} \frac{4\alpha}{\sqrt{\hat{k}^2-4\alpha^2}(\sqrt{\hat{k}(\hat{k}-2\alpha)}+\sqrt{\hat{k}(\hat{k}+2\alpha)})}\geq \frac{c_2}{\sqrt{\al}} \overline{V}^2_{>2\alpha}\,, \end{equation} so the assertion follows combining the latter two with \eqref{combino}. \end{proof} \begin{rmk} Note that if $V_{\lceil 2\al\rceil}\ne0$ there is a further $1/\sqrt{\upepsilon}$ in the lower bound in \eqref{stimeK2}. \end{rmk} \begin{lemma}\label{monte} There is $c>0$ such that for all $\mu\in\mathds{Z}$ and all $\x\in(\x_\mu -r , \x_\mu+r)$ if $\tilde{K}_2(\x)\ne0$ then one has \begin{equation}\label{stimeK2tilde} \frac{c}{\sqrt\alpha} \overline{V}^2 \le \tilde{K}_2(\x) \le c \sqrt{\frac{\alpha}{\upepsilon} } \|V\|^2_{L^2}. \end{equation} \end{lemma} \begin{proof} The lower bound follows exactly as the lower bound in Lemma \ref{lemma:bound-A}.
As for the upper bound we simply add to the upper bound for $K_2(\x)$ the quantity \begin{equation}\label{pezzetto} \begin{aligned} \sum_{k=1}^{\lfloor 2\al\rfloor} |V_k|^2 A_{\mu,k}(\x) &\le c_1 \|V\|_{L^2}^2\left( \frac{r}{\sqrt{\al}} + \sum_{k=1}^{\lfloor 2\al\rfloor}\frac{2k}{\sqrt{4\alpha^2-k^2}(\sqrt{k(2\alpha-k)}+\sqrt{k(k+2\alpha)})} \right) \\ &\le c_2 \|V\|_{L^2}^2 \frac{8\al^2}{\sqrt{\upepsilon\lfloor 4\al\rfloor } ( \sqrt{\upepsilon\lfloor 2\al\rfloor } + 2\lfloor\al\rfloor)} \le c_3 \sqrt{\frac{\al}{\upepsilon}}\|V\|_{L^2}^2 \end{aligned} \end{equation} so the assertion follows. \end{proof} Now we are in a position to prove Proposition \ref{prop:main}. \begin{proof}[Proof of Proposition \ref{prop:main}] The case $\mu=0$ is easier and can be studied separately. Indeed by a direct computation we see that \begin{eqnarray} D_0(\xi)G_{0,k}(\xi)&=&D^{-1}_{-k}(\x)\chi(\xi>1-r)+D^{-1}_{k}(\x)\chi(\xi<-1+r)\nonumber\\ &+&\ii\left(-D^{-1}_{-k}(\x)\chi(\xi<-1+r)+D^{-1}_{k}(\x)\chi(\xi>1-r)\right)\,.\nonumber \end{eqnarray} In particular $\Re( G_{0,k}(\xi))>0$ for all $k\in\mathds{Z}$, so that by (\ref{eq:MeG}) we have \begin{equation}\nonumber |1-\mathcal M_0(\xi) Lj_0(\xi)|\geq 1-\Re\left(\mathcal M_0(\xi) Lj_0(\xi)\right)= 1+\gamma^2 \sum_{k\geq1}|V_k|^2\Re (G_{0,k}(\xi))>1\,. \end{equation} Now we study the case $\mu\geq1$. If $\xi \in(\xi_\mu-r,\xi_\mu]$ by Lemma \ref{lemma:G} we see that \begin{equation} |1-\Re(\mathcal M_\mu(\xi)Lj_\mu(\xi))|=1+\gamma^2\sum_{k\geq1} |V_k|^2 \Re(G_{\mu,k})>1\,, \end{equation} which entails \begin{equation}\label{eq:infM1} \inf_{\xi \in(\xi_\mu-r,\xi_\mu]}|1-\mathcal M_\mu(\xi)Lj_\mu(\xi)|>1\,. \end{equation} For all $\xi \in(\xi_\mu,\xi_\mu+r)$ we claim that \begin{equation}\label{eq:claim1/2} |1-\mathcal M_\mu(\xi)Lj_\mu(\xi)|\geq\frac12\,, \end{equation} for $\gamma$ small enough. To prove it, we use again Lemma \ref{lemma:G}.
We have \begin{eqnarray} |1-\mathcal M_\mu(\xi)Lj_\mu(\xi)|^2&=&|1+\gamma^2\sum_{k\geq1}|V_k|^2\Re(G_{\mu,k}(\x))+ i(-\gamma \frac{V_0}{ D_\mu(\x)} + \gamma^2\sum_{k\geq1}|V_k|^2\Im(G_{\mu,k}(\x)))|^2\nonumber\\ &=&\left|1+\gamma^2\frac{\tilde{K}_2(\xi)}{D_\m(\x)}-\gamma^2\frac{K_1(\xi)}{D_\m(\x)}+\ii (\gamma \frac{V_0}{D_\mu(\x)}+\gamma^2\frac{K_1(\xi)}{D_\m(\x)})\right|^2\nonumber\\ &=&\left(1-\gamma^2\frac{K_1(\xi)}{D_\m(\x)}+\gamma^2\frac{\tilde{K}_2(\xi)}{D_\m(\x)}\right)^2+ \left(\gamma \frac{V_0}{D_\mu(\x)}+ \gamma^2\frac{K_1(\x)}{D_\m(\x)}\right)^2\,. \label{eq:marco} \end{eqnarray} Now if $$ \left|1-\frac{\gamma^2}{D_\m(\x)}(K_1(\xi)-\tilde{K}_2(\xi)) \right|\geq\frac12 $$ we have \begin{equation}\label{eq:claim1/2-1} \mbox{r.h.s of (\ref{eq:marco})}\geq \frac14+\left(\gamma \frac{V_0}{D_\mu(\x)}+ \gamma^2\frac{K_1(\xi)}{D_\m(\x)}\right)^2\geq\frac14\,, \end{equation} while if $$ \left|1-\frac{\gamma^2}{D_\m(\x)}(K_1(\xi)-\tilde{K}_2(\xi))\right|\leq\frac12 $$ then $$ \frac{\gamma^2}{D_\m(\x)}K_1(\xi)\geq \frac12+\frac{\gamma^2}{D_\m(\x)}\tilde{K}_2(\xi) $$ and moreover using Lemmata \ref{lemma:crux} and \ref{monte} we have also \begin{equation}\label{numeretto} \sqrt{\frac{\upepsilon}{\al}}\frac{1}{4\|V\|_{L^2}^2} \le \frac{\gamma^2}{D_\m(\x)}\le \frac{3\sqrt{\al}}{2( - \underline{V}^2_{\le 2\al} \sqrt{\upepsilon} + \ol{V}^2)}.
\end{equation} But then \begin{equation}\label{eq:claim1/2-2} \mbox{r.h.s of (\ref{eq:marco})}\geq (\frac{\gamma}{D_\mu(\x)} {V_0}+ \frac{\gamma^2}{D_\mu(\x)} K_1(\x))^2 \geq(\frac12+\frac{\gamma}{D_\mu(\x)} {V_0}+\frac{\gamma^2}{D_\mu(\x)}\tilde{K}_2(\xi))^2 \ge \frac{1}{4}\,, \end{equation} which is obvious if $V_0\ge0$, while if $V_0<0$ we need to impose \begin{equation}\label{condi} \gamma < c \sqrt{\frac{\upepsilon}{\al}}\frac{|V_0|}{\|V\|_{L^2}^2}\,, \end{equation} where $c$ is the constant appearing in Lemma \ref{monte}, in order to obtain \[ \frac12+\frac{\gamma}{D_\mu(\x)} {V_0}+\frac{\gamma^2}{D_\mu(\x)}\tilde{K}_2(\xi) < - \frac{1}{2}\,. \] Thus the assertion follows. \end{proof} \section{Proof of Proposition \ref{lemma:jR}}\label{stimazze} Here we prove Proposition \ref{lemma:jR}. \begin{proof}[Proof of Proposition \ref{lemma:jR}] First of all we note that by (\ref{eq:jloc}) and (\ref{staqua}) we have \begin{equation}\label{eq:prima!} \sup_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|j_\mu^{\mathcal R}(\xi)|=\sup_{\xi\in(\xi_\mu-r,\xi_\mu+r)} \left|\frac{j_\mu(\xi)}{1-\mathcal M_\mu(\x) j_\mu(\xi)}\right|=\left(\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)} |D_\mu(\xi)-\mathcal M_\mu(\xi)|\right)^{-1}\,. \end{equation} Note that $$ \mathcal M_\mu(\xi)=-\ii\gamma V_0-\gamma^2D_{\mu}(\xi)\sum_{k\geq1}|V_k|^2 G_{k,\mu}(\xi)\,.
$$ So thanks to Lemma \ref{lemma:G} and using the notation in \eqref{capponinuovi} we can write \begin{eqnarray} \mathcal M_\mu(\xi)&=&-\ii\gamma V_0+\chi(\xi_\mu<\xi<\xi_\mu+r)\left(\gamma^2(K_1(\xi)-\tilde{K}_2(\xi))- \ii\gamma^2K_1(\xi)\right)\nonumber\\ &-&\chi(\xi_\mu-r<\xi<\xi_\mu)\left(\ii\gamma^2(K_1(\xi)-\tilde{K}_2(\xi))+\gamma^2K_1(\xi)\right)\,.\label{eq:dec-M} \end{eqnarray} Therefore \begin{eqnarray} |D_\mu(\xi)-\mathcal M_\mu(\xi)|&=&\chi(\xi_\mu<\xi<\xi_\mu+r)|D_\mu(\xi)-\gamma^2(K_1(\xi)-\tilde{K}_2(\xi))- \ii(\gamma V_0+\gamma^2K_1(\xi))|\nonumber\\ &+&\chi(\xi_\mu-r<\xi<\xi_\mu)|D_\mu(\xi)+\gamma^2K_1(\xi)-\ii(\gamma V_0+\gamma^2(K_1(\xi)-\tilde{K}_2(\xi)))|\nonumber\,, \end{eqnarray} whence \begin{equation}\label{eq:combining} \inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|D_\mu(\xi)-\mathcal M_\mu(\xi)|\geq\min\left(\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}d(\xi), \inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}s(\xi)\right)\,, \end{equation} with \begin{eqnarray} d(\xi)&:=&|D_\mu(\xi)-\gamma^2(K_1(\xi)-\tilde{K}_2(\xi))-\ii(\gamma V_0+\gamma^2K_1(\xi))|\label{eq:destra}\\ s(\xi)&:=&|D_\mu(\xi)+\gamma^2K_1(\xi)-\ii(\gamma V_0+\gamma^2(K_1(\xi)-\tilde{K}_2(\xi)))|\label{eq:sinistra}\,. \end{eqnarray} To estimate $d(\xi)$ and $s(\xi)$ we treat separately the cases $V_0=0$ and $V_0\neq0$. Moreover for the first case we consider two sub-cases, namely either $\underline V_{\leq 2\alpha}\neq0$ or $\underline V_{\leq 2\alpha}=0$. \ \noindent {\bf case I.1:} \underline{$V_0=0$, $\underline V_{\leq 2\alpha}\neq0$}. 
By Lemma \ref{lemma:crux} there is a constant $c>0$ such that \begin{eqnarray} \inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}d(\xi)&\geq&\gamma^2\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|K_1(\xi)|\geq c \gamma^2\sqrt{\frac{\upepsilon}{\alpha}} \ \underline V^2_{\leq 2\alpha} \label{eq:bound1.1-d}\\ \inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}s(\xi)&\geq&\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|D_\mu(\xi)+\gamma^2K_1(\xi)|\nonumber\\ &\geq&\gamma^2\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|K_1(\xi)|\geq c \gamma^2\sqrt{\frac{\upepsilon}{\alpha}} \ \underline V^2_{\leq 2\alpha} \label{eq:bound1.1-s}\,. \end{eqnarray} \ \noindent {\bf case I.2:} \underline{$V_0=0$, $\underline V_{\leq 2\alpha}=0$}. In this case $K_1(\x)=0$. On the other hand by Lemma \ref{lemma:bound-A} there is a constant $c>0$ such that \begin{eqnarray} \inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}d(\xi)&\geq&\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|D_\mu(\xi)+\gamma^2K_2(\xi)|\nonumber\\ &\geq&\gamma^2\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|K_2(\xi)|\geq \gamma^2 \frac{c}{\sqrt\alpha} \overline{V}^2_{>2\alpha} \label{eq:bound1.2-d}\\ \inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}s(\xi)&\geq&\gamma^2\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|K_2(\xi)|\geq \gamma^2 \frac{c}{\sqrt\alpha} \overline{V}^2_{>2\alpha} \label{eq:bound1.2-s}\,. \end{eqnarray} Combining (\ref{eq:prima!}), (\ref{eq:combining}), (\ref{eq:bound1.1-d}), (\ref{eq:bound1.1-s}), (\ref{eq:bound1.2-d}), (\ref{eq:bound1.2-s}) gives the second and third lines of (\ref{stastimazza}). \ \noindent {\bf case II:} \underline{$V_0\neq0$}. We have \begin{eqnarray} \inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}d(\xi)&\geq&\gamma\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|V_0+\gamma K_1(\xi)|\geq \gamma\frac{| V_0|}{2}\label{eq:bound2.1-d}\\ \inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}s(\xi)&\geq&\gamma\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|V_0+\gamma K_1(\xi)-\gamma K_2(\xi)|\nonumber\\ &\geq& \gamma\inf_{\xi\in(\xi_\mu-r,\xi_\mu+r)}|V_0-\gamma K_2(\xi)|\geq \frac{\gamma |V_0|}{2}\label{eq:bound2.1-s}\,.
\end{eqnarray} The last inequality is always satisfied if $\overline V_{>2\alpha}=0$, while otherwise we need to require \begin{equation}\label{eq:gammaprovided1} \gamma\leq \frac{1}{c}\sqrt{\frac{\upepsilon}{\al}}\frac{|V_0|}{\|V\|^2_{L^2}}\,, \end{equation} by Lemma \ref{lemma:bound-A}. Combining (\ref{eq:prima!}), (\ref{eq:combining}), (\ref{eq:bound2.1-d}), (\ref{eq:bound2.1-s}), and (\ref{eq:gammaprovided1}), the result follows. \end{proof} \medskip
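As a quick numerical sanity check of Lemma \ref{lemma:Aposi}, the positivity of $A_{\mu,k}(\xi_\mu)$ and the rationalised closed forms can be verified directly from the definition \eqref{eq:A}. In the sketch below the value of $\alpha$ is illustrative, and $D_{\mu+k}(\xi_\mu)=\sqrt{(2\alpha+k)k}$ is assumed by symmetry with the expression $D_{\mu-k}(\xi_\mu)=\sqrt{|2\alpha-k|k}$ used in the proof of Lemma \ref{lemma:crux}:

```python
import math

# Check: A_{mu,k}(xi_mu) = 1/sqrt(k|2a-k|) - 1/sqrt(k(2a+k)) > 0,
# together with the rationalised closed forms of the two cases k < 2a, k > 2a.
# The value of alpha is illustrative; D_{mu+k}(xi_mu) is assumed by symmetry.
def A_direct(a, k):
    return 1.0 / math.sqrt(k * abs(2 * a - k)) - 1.0 / math.sqrt(k * (2 * a + k))

def A_closed(a, k):
    s = math.sqrt(k * abs(2 * a - k)) + math.sqrt(k * (2 * a + k))
    if k < 2 * a:
        return 2 * k / (math.sqrt(4 * a * a - k * k) * s)
    return 4 * a / (math.sqrt(k * k - 4 * a * a) * s)

alpha = 3.3
ok = all(
    A_direct(alpha, k) > 0
    and math.isclose(A_direct(alpha, k), A_closed(alpha, k), rel_tol=1e-12)
    for k in range(1, 30)
)
print("positivity and closed forms agree:", ok)
```

Since $k|2\alpha-k| < k(2\alpha+k)$ for every $k\ge1$, the positivity is manifest already at the level of the definition; the rationalised forms only make the size of $A_{\mu,k}$ explicit.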
\section{Introduction} The announced discovery of a boson in the mass range 125-126 GeV, by both the ATLAS~\cite{Aad:2012tfa} and CMS~\cite{Chatrchyan:2012ufa} collaborations at the Large Hadron Collider (LHC) experiment, has naturally generated a lot of enthusiasm among particle physicists. As of now, the properties of the particle whose signature has been avowedly noticed are consistent with those of the Standard Model (SM) higgs boson. However, the present data also leave some scope for it being a scalar with a certain degree of non-standard behaviour. The analysis of such possibilities, both model-independently and on the basis of specific theoretical scenarios, has consumed rather substantial efforts in the recent months. One scenario of particular interest in this context is one with a warped extra spacelike dimension. First proposed by Randall and Sundrum (RS), it has a non-factorizable geometry with an exponential warp factor \cite{Randall:1999ee}. Furthermore, the extra dimension is endowed with an $S_1 /Z_2$ orbifold symmetry, with two 3-branes residing at the orbifold fixed points, the SM fields being confined to one of the branes (called the `visible brane', at $y = r_c \pi$, where $r_c$ is the radius of the compact dimension and $y$ is the co-ordinate along that dimension). When the warp-factor in the exponent has a value of about 35, mass parameters of the order of the Planck scale in the `bulk' get scaled down to the TeV scale on the visible branch, thus providing a spectacular explanation of the hierarchy between these two scales. A bonus in the low-energy phenomenology of this model is the occurrence of TeV-scale Kaluza-Klein (KK) excitations of the spin-2 graviton on the visible brane, with coupling to the SM fields suppressed by the TeV scale \cite{Davoudiasl:1999jd, Davoudiasl:2000wi, Chang:1999yn}. 
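The size of the warp exponent quoted above can be reproduced with a two-line estimate (a quick numerical sketch; the reduced Planck mass and the TeV scale below are standard reference values, not inputs of this paper):

```python
import math

# Illustrative inputs: reduced Planck mass and the electroweak/TeV scale (GeV).
M_PL = 2.4e18
TEV = 1.0e3

# Warp factor needed to redshift Planck-scale masses down to the TeV scale:
# e^{k r_c pi} = M_PL / TEV  =>  k r_c pi = ln(M_PL / TEV).
exponent = math.log(M_PL / TEV)
kr_c = exponent / math.pi

print(f"required warp exponent k*r_c*pi ~ {exponent:.1f}")  # ~ 35
print(f"corresponding k*r_c            ~ {kr_c:.1f}")       # order 12
```

The exponent of roughly 35 (closer to 37 if the full, rather than the reduced, Planck mass is used) corresponds to $kr_c$ of order 12, which is the value quoted throughout.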
The mass limit on the lowest excitation of the graviton in this scenario has already gone beyond 2 TeV (with certain assumptions on the model parameters) \cite{ATLAS-CONF-2013-017, Chatrchyan:2013qha}. However, another interesting and testable feature of this theory results from the mechanism introduced to stabilize the radius of the compact dimension, where the radius is envisioned as arising from the vacuum expectation value (vev) of a modular field. This field can be naturally given a vev by hypothesizing a brane-dependent potential for it, resulting in a physical field of geometrical origin, popularly called the radion field, with mass and vev around the electroweak scale, which couples to the trace of the energy-momentum tensor \cite{Goldberger:1999uk, Goldberger:1999un}. Consistency with general covariance demands the addition of terms giving rise to mixing between the SM higgs and the radion \cite{Giudice:2000av, Csaki:2000zn, Csaki:1999mp, Dominici:2002jv, Csaki:2007ns}. Consequently, speculations have been made on whether the 125-126 GeV state, instead of being a pure SM higgs, could instead be the radion, or a mixture of the two. A number of studies have already taken place in this direction, based on both the `pure radion' and `radion-higgs mixing' hypotheses~\cite{Mahanta:2000zp, Gunion:2003px, Toharia:2004pv, Toharia:2008tm, Frank:2012nb, Rizzo:2002bt, Cheung:2005pg, Han:2001xs, Chaichian:2001rq, Battaglia:2003gb, Bhattacherjee:2011yh, Goncalves:2010dw, Barger:2011hu, Cheung:2011nv, Davoudiasl:2012xd, Low:2012rj, Soa:2012ra, Chacko:2012vm, Barger:2011qn, Grzadkowski:2012ng, deSandes:2011zs, Kubota:2012in, Ohno:2013sc, Cho:2013mva}. In the present work, we perform a global analysis of the available data, assuming that both of the physical states arising from radion-higgs mixing contribute to the event rates in various channels. Using both the 2011 and 2012 data, we obtain the best fit points in terms of the parameters of the model. 
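The best-fit extraction just described is a $\chi^2$ minimisation over signal strengths. The toy sketch below illustrates the mechanics for the simplest possible case of a single universal rescaling parameter; the per-channel measurements $\hat\mu_i\pm\sigma_i$ are placeholder numbers, not the actual 2011--2012 data, and the real analysis scans the full radion--higgs parameter space in the same way:

```python
import math

# Hypothetical per-channel signal strengths (mu_hat, sigma); values illustrative.
channels = {
    "gamma gamma": (1.6, 0.4),
    "ZZ*":         (0.9, 0.4),
    "WW*":         (1.1, 0.5),
}

def chi2(mu):
    # Gaussian chi-square for a universal predicted signal strength mu.
    return sum((mu - mu_hat) ** 2 / sig ** 2 for mu_hat, sig in channels.values())

# For a model linear in mu the minimum is the inverse-variance weighted mean.
den = sum(1.0 / s ** 2 for _, s in channels.values())
mu_best = sum(m / s ** 2 for m, s in channels.values()) / den

# 95% C.L. interval for one parameter: delta chi2 = 3.84 around the minimum.
half_width = math.sqrt(3.84 / den)
print(f"best fit mu = {mu_best:.2f}, "
      f"95% C.L. ~ [{mu_best - half_width:.2f}, {mu_best + half_width:.2f}]")
```

In the actual fit the prediction in each channel depends non-trivially on the radion--higgs mixing parameters, so the minimisation is carried out numerically and the 95\% C.L. contours are traced out by the corresponding $\Delta\chi^2$ condition in two dimensions.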
Furthermore, we obtain the 95\% confidence level contours in the parameter space, which indicate the extent to which new physics can be accommodated in the light of the available results. Side by side, we identify the regions which are disallowed by data in one or more channels, as obtained from the published 95\% C.L. exclusion limits on the signal strength, defined as $\mu = \sigma/\sigma_{SM}$, where $\sigma$ is the predicted cross-section in the relevant channel for a specific combination of the model parameters, and $\sigma_{SM}$ is the corresponding prediction for the SM higgs boson. The region that is left after such exclusion can be treated as one where the presence of a radion-like (higgs-like) scalar is compatible with the data as of now. A comparison of this region with the 95\% C.L. contours around the best fit values of the parameters indicates the viability (or otherwise) of this particular new physics scenario. Our work improves upon other recent studies based on LHC data \cite{Kubota:2012in, Grzadkowski:2012ng, deSandes:2011zs, Cho:2013mva} in a number of ways. This is the first global analysis, following a $\chi^2$-minimisation procedure, of radion-higgs mixing, using the latest available data from 7 and 8 TeV LHC runs to obtain best fit parameters and significance contours. We include the possibility of an additional scalar mass eigenstate coexisting with the 125 GeV state, with both of them contributing to the final states looked for, subject to event selection criteria pertaining to the 125 GeV higgs. While it is unlikely that the contribution from the additional scalar will be confused with the signal of a 125 GeV scalar in the $\gamma\gamma$ and $ZZ^{(*)}$ final states (as the reconstructed invariant mass will point to two distinct resonances), it cannot {\it a priori} be ruled out for the $WW^{(*)}$ channel. 
The presence of two neutrinos in the di-lepton final state makes it impossible to reconstruct the mass of the parent particle, and one would therefore expect some enhancement of the signal strength due to the extra contribution from the second state, which must be estimated by simulating the effect of the selection cuts used by the corresponding experimental analyses. This makes the best-fit regions different from what one finds with the assumption that the entire contribution in every channel comes from one scalar resonance only. Secondly, we also use the strategy of simulating the full cut-based analysis in restricting the allowed regions from the available upper limit on $\sigma/\sigma_{SM}$ for an additional scalar with a different mass, demanding not only that (a) the extra contribution at 125 GeV be smaller than the current upper limit, but also that (b) the combined contribution, using cuts corresponding to the SM higgs search at the mass of the extra resonance, be smaller than the upper limit at that mass. Again, this makes a difference mainly in the $WW^{(*)}$ channel. The contribution here (as also in the case of global fits) is the sum of those from two distinct mass eigenstates, so that the acceptance of the cuts does not factor out when taking the ratio to the expected SM cross-section. Thirdly, we have taken into account the interference between processes mediated by radion-higgs mixed mass eigenstates whenever they are close to each other. And finally, we have explicitly included processes where a relatively heavy, radion(higgs)-dominated state decays into two higgs(radion)-dominated scalars at 125 GeV, each of which can go to the decay channels searched for. In a way, this leads to an additional production mechanism of the 125 GeV state, which we have felt should be included in a full analysis.
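The importance of keeping the interference term when the two mass eigenstates are close can be illustrated with a toy model of two Breit-Wigner amplitudes; all masses, widths and couplings below are arbitrary illustrative numbers, not parameters of the actual scan:

```python
# Toy illustration: coherent vs incoherent addition of two scalar resonances.
def bw(s, m, width, g):
    # Breit-Wigner amplitude with coupling g (all values illustrative).
    return g / complex(s - m * m, m * width)

def coherent(s, res):
    return abs(sum(bw(s, *r) for r in res)) ** 2

def incoherent(s, res):
    return sum(abs(bw(s, *r)) ** 2 for r in res)

near = [(125.0, 0.5, 1.0), (126.0, 0.5, 1.0)]   # overlapping states
far  = [(125.0, 0.5, 1.0), (500.0, 0.5, 1.0)]   # well-separated states

s = 125.5 ** 2
rel_near = abs(coherent(s, near) - incoherent(s, near)) / incoherent(s, near)
rel_far  = abs(coherent(s, far)  - incoherent(s, far))  / incoherent(s, far)
print(f"relative interference, nearby states:  {rel_near:.2f}")
print(f"relative interference, distant states: {rel_far:.4f}")
```

When the mass splitting is comparable to the widths, the cross term is of the same order as the individual rates and cannot be dropped, whereas for well-separated states it is negligible, which is why the interference matters precisely in the near-degenerate region of the radion-higgs parameter space.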
The strategy of our analysis is described in section 3, while section 4 contains the numerical results. We summarise and conclude in section 5. \section{The model and its parameters} \subsection{The minimal Randall-Sundrum model and the radion} In the minimal version of Randall-Sundrum (RS) model, one has an extra warped spacelike compact dimension $y = r_c \phi$, where $r_c$ is the radius of compactification. An $S_1/Z_2$ orbifolding is applied with a pair of 3-branes at the orbifold fixed points (at $\phi = 0$ and $\phi = \pi$). Gravity, propagating in the bulk, peaks at the first of these branes, usually called the Planck (hidden) brane (at $\phi = 0$), while the SM fields are confined to the visible brane (at $\phi = \pi$).\footnote{While various modifications, including for example, gauge fileds in the bulk have been considered \cite{Goldberger:1999wh, Davoudiasl:1999tf, Pomarol:1999ad, Agashe:2006hk, Agashe:2004ay, Agashe:2004cp, Agashe:2003zs}, we have, however, confined ourselves to the minimal RS scenario.} The action for the above configuration is given by \cite{Randall:1999ee} \begin {eqnarray} S & = & S_{gravity} + S_{v} + S_{h} \nonumber \\ S_{gravity} & = & \int d^4x \int _{-\pi} ^{\pi} d\phi \sqrt{-G} \{-\Lambda + 2M_5^3R\} \nonumber \\ S_{v} & = & \int d^4 x \sqrt{-g_{v}} \{{\cal{L}}_{v} - V_{v}\} \nonumber \\ S_{h} & = & \int d^4 x \sqrt{-g_{h}} \{{\cal{L}}_{h} - V_{h}\} \end {eqnarray} \noindent where the subscripts $v$ and $h$ refer to the visible and hidden branes respectively, $G$ is the determinant of the five dimensional metric $G_{MN}$ and the metrics on the visible and hidden branes are given by \begin{equation} g^{v}_{\mu\nu}(x^\mu) \equiv G_{\mu\nu}(x^\mu,\phi = \pi) , g^{h}_{\mu\nu}(x^\mu) \equiv G_{\mu\nu}(x^\mu,\phi = 0) \end{equation} \noindent the greek indices being representation of (1+3) dimensional coordinates on the visible (hidden) brane. $M_{5}$ is the 5-dimensional Planck mass and $\Lambda$ is the bulk cosmological constant. 
$V_{v}$ and $V_{h}$ are the brane tensions of the visible and hidden branes respectively. The bulk metric obtained after solving Einstein's equations is then \begin{equation} ds^2 = e^{-2k|y|} \eta_{\mu\nu} dx^\mu dx^\nu - dy^2 \end{equation} \noindent where $k = \sqrt{\frac{-\Lambda}{24 M_{5}^3}}$ and \begin{equation} V_{h} = -V_{v} = 24M_5^3k \end{equation} $M_5$ is related to the 4-dimensional Planck mass $M_{Pl}$ by \begin{equation} M^2_{Pl} = \frac{M^3_{5}}{k}[1 - e^{-2kr_c\pi}] \end{equation} The 5-dimensional theory thus involves only mass parameters whose values are around the Planck scale. For the choice $kr_c \simeq 12$, which requires barely an order of disparity between the scales $k$ and $1/r_c$, the mass parameters on the visible brane are suppressed with respect to the Planck scale by the exponential factor $e^{kr_c\pi} \simeq 10^{16}$, thus offering a rather appealing explanation of the hierarchy between the Planck and TeV scales. The Kaluza-Klein (KK) decomposition of the graviton on the visible brane leads to a discrete tower of states, with one massless graviton and a series of TeV-scale spin-2 particles. The massless graviton couples to all matter fields with strength $\sim 1/{M_{Pl}}$, while the corresponding couplings for the massive modes (in the TeV range) receive an exponential enhancement, thus opening up the possibility of observing signals of the massive gravitons in TeV-scale experiments~\cite{Davoudiasl:1999jd, Davoudiasl:2000wi, Chang:1999yn}. Current experimental limits from the LHC rule out masses of the lowest graviton excitation below $1.15~(2.47)$ TeV for $k/M_{Pl} \le 0.01~(0.1)$ \cite{ATLAS-CONF-2013-017}. The radius of compactification $r_c$ was put in by hand in the original model; however, it can be given a dynamical origin by linking it to the vev of a $\phi$-independent modulus field $T(x)$, so that $r_c = \langle T \rangle$.
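As a quick numerical illustration of the statements above, the short Python sketch below evaluates the warp factor and the 4-dimensional Planck mass for $kr_c = 12$; the Planck-unit inputs are illustrative choices of ours, not values fixed by the model.

```python
import math

def warp_factor(krc):
    """Warp suppression factor e^{k r_c pi} between the Planck and TeV branes."""
    return math.exp(krc * math.pi)

def planck_mass_4d(M5, k, krc):
    """M_Pl from M_Pl^2 = (M5^3/k) [1 - e^{-2 k r_c pi}], in the units of M5."""
    return math.sqrt(M5**3 / k * (1.0 - math.exp(-2.0 * krc * math.pi)))

krc = 12.0
# The hierarchy factor comes out at ~2.4e16, i.e. of order 10^16
print(f"e^(k r_c pi) = {warp_factor(krc):.3e}")
# Illustrative choice k = M5 = 1 (Planck units): the exponential correction is
# negligible, so M_Pl is of the same order as M5
print(f"M_Pl / M5    = {planck_mass_4d(1.0, 1.0, krc):.6f}")
```

The first number makes explicit that a mild hierarchy $kr_c \simeq 12$ suffices to generate the sixteen orders of magnitude between the Planck and TeV scales.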
We can define a new field \begin{equation} \varphi(x) = \Lambda_\varphi e^{-k(T(x) - r_c)\pi} \end{equation} with its vev given by $\Lambda_\varphi = \sqrt{{\frac{24M_{5}^3}{k}}}\,e^{-k\pi r_c}$. \noindent A vev for the modulus field can be dynamically generated if it has a potential. To generate a potential for $\varphi(x)$, a bulk scalar field is introduced, along with interaction terms on the hidden and visible branes. The terms on the branes cause the scalar field to develop a $\phi$-dependent vev. Inserting this solution into the bulk scalar action and integrating over $\phi$ yields an effective potential for $\varphi(x)$ of the form \begin{equation} V_{\varphi} (r_c) = k\epsilon v_h ^2 + 4ke^{-4kr_c\pi}(v_v - v_he^{-\epsilon kr_c\pi})^2(1+\epsilon/4) - k\epsilon v_he^{-(4+\epsilon)kr_c\pi}(2v_v - v_he^{-\epsilon kr_c\pi}) \end{equation} where $\epsilon \simeq m^2/4k^2$. In terms of $\varphi(x)$, the dominant piece of this potential reads \begin{equation} V(\varphi) = \frac{k^3}{144M_{5}^6} \varphi^4\left(v_v - v_h\left(\frac{\varphi}{\Lambda_\varphi e^{k\pi r_c}}\right)^\epsilon\right)^2, \end{equation} \noindent where $v_v$ and $v_h$ parametrise the interaction terms on the visible and hidden branes respectively, and by assumption $\epsilon \ll 1$. This new massive field $\varphi$ is the radion, whose mass is obtained from $\frac{\partial^{2}V(\varphi)}{\partial\varphi^{2}}$. Furthermore, one obtains the minimum of $V(\varphi)$ at $kr_c \approx 12$ for $\ln\left(\frac{v_v}{v_h}\right) \sim 1$. The radion mass, $m_\varphi$, and the vev $\Lambda_\varphi$ constitute the set of free parameters of the theory in the radion sector, which now has the distinction of `naturally' generating a TeV-scale vev on the visible brane. They have implications for particle phenomenology within the reach of the LHC. In particular, the radion mass may turn out to be a little below a TeV, thus making the detection of the radion somewhat easier than that of the KK mode of the graviton \cite{Goldberger:1999uk, Goldberger:1999un}.
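The location of this minimum is easy to check: to leading order in $\epsilon$, minimising the squared term in $V(\varphi)$ gives $k\pi r_c \simeq (1/\epsilon)\ln(v_h/v_v)$. The sketch below (with illustrative values of $\epsilon$ and $v_h/v_v$ chosen by us, not taken from the text) recovers $kr_c \approx 12$.

```python
import math

def krc_at_minimum(eps, vh_over_vv):
    """Leading-order Goldberger-Wise minimum of V(phi):
    the squared term vanishes when (phi/f)^eps = v_v/v_h,
    i.e. k * pi * r_c = (1/eps) * ln(v_h/v_v)."""
    return math.log(vh_over_vv) / (eps * math.pi)

# Illustrative inputs (our choice): eps = m^2/(4k^2) = 1/40, ln(v_h/v_v) = 0.94
print(f"k r_c at the minimum = {krc_at_minimum(1.0 / 40.0, math.exp(0.94)):.2f}")
```

This makes concrete the statement that an $\mathcal{O}(1)$ logarithm of brane parameters, divided by a small $\epsilon$, naturally lands at $kr_c \approx 12$.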
Integrating over the orbifold coordinates, it can be shown that the radion field couples to the trace of the energy-momentum tensor $(T^{\mu}_{\nu})$. The canonically normalised effective action is \begin{equation} S_\varphi = \int d^4 x \sqrt{-g}[\frac{2M_5^3}{k}(1 - \frac{\varphi^2}{\Lambda_\varphi^2}e^{-2k\pi r_c})R + \frac{1}{2}\partial_\mu\varphi\partial^\mu\varphi - V(\varphi) + (1 - \frac{\varphi}{\Lambda_\varphi})T_\mu^\mu] \end{equation} It should be noted that, while the radion has couplings that are very similar to those of the SM higgs, it has additional interactions with massless gauge-boson (photon, gluon) pairs via the trace anomaly terms. \subsection{Radion-Higgs mixing} In addition to the above action, general covariance also allows a higgs-radion mixing term \cite{Giudice:2000av}, parametrised by the dimensionless quantity $\xi$. Such a term couples the higgs field to the Ricci scalar of the induced metric ($g_{ind}$) on the visible brane \begin{equation} S = -\xi\int d^4x \sqrt{-g_{ind}}R(g_{ind})H^{\dagger}H \end{equation} where $H = [(v + h)/\sqrt{2},0]$ with $v = 246$ GeV. For phenomenological purposes, we are interested in the terms in $T^\mu_\mu$ that are bilinear in the SM fields. Retaining such terms only, one has \begin{equation} T^{\mu}_{\mu} = T^{(1)\mu}_{\mu} + T^{(2)\mu}_{\mu} \end{equation} with \begin{eqnarray} T^{(1)\mu}_{\mu} & = & 6\xi v\Box h \nonumber \\ T^{(2)\mu}_{\mu} & = & (6\xi - 1)\partial_\mu h\partial^\mu h + 6\xi h \Box h + 2 m_h^2 h^2 + m_{ij}\bar{\psi}_i\psi_j - M_v^2V_{A\mu}V^{\mu}_{A} \end{eqnarray} $T^{(1)\mu}_\mu$ induces a kinetic mixing between $\varphi$ and $h$.
After shifting $\varphi$ with respect to its vacuum expectation value $\Lambda_\varphi$, we obtain \begin{equation} {\cal{L}} = -\frac{1}{2} \varphi(\Box + m_\varphi ^2)\varphi - \frac{1}{2} h(\Box + m_h ^2)h - 6 \xi \frac{v}{\Lambda_\varphi}\varphi\Box h \end{equation} We confine our study to a region of the parameter space where the radion vev $\Lambda_\varphi$ is well above the vev of the SM higgs. Besides, it is phenomenologically safe not to consider $\xi$ with magnitude much above unity, since a large value may destabilise the geometry itself through back-reaction. Thus one can make the further approximation $6\xi \frac{v}{\Lambda_\varphi} \ll 1$. In this approximation, the kinetic energy terms acquire a canonical form under the basis transformation from $(\varphi, h)$ to $(\varphi^{'},h^{'})$, such that \begin{eqnarray} \varphi & = & (\sin\theta-\sin\rho \cos\theta)h^{'}+(\cos\theta+\sin\rho \sin\theta)\varphi^{'}\nonumber\\ h & = & \cos\rho \cos\theta h^{'}-\cos\rho \sin\theta \varphi^{'} \end{eqnarray} \noindent where \begin{equation} \tan\rho = 6\xi\frac{v}{\Lambda_\varphi}, ~~~~ \tan2\theta = \frac{2\sin\rho m_\varphi^2}{\cos^2\rho(m_{\varphi}^{2} - m_{h}^{2})} \label{eqn:theta} \end{equation} \noindent and one ends up with the physical masses \begin{equation} \label{eqn:mass} m_{\varphi^{'},h^{'}}^2 = \frac{1}{2}\left[(1 + \sin^{2}\rho) m_\varphi^{2} + \cos^{2}\rho m_h^{2} \pm \sqrt{\cos^4\rho (m_\varphi^2 - m_h^2)^2 + 4\sin^2\rho m_\varphi^4}\right] \end{equation} The interactions of $\varphi^{'}$ and $h^{'}$ with fermions ($f$) and massive gauge bosons ($V$) are given by \begin{equation} {\cal{L}}_{1} = \frac{-1}{v}(m_{ij}\bar{\psi}_i\psi_{j} - M_{v}^2 V_{A\mu}V_{A}^{\mu})(A_{h}h^{'} + \frac{v}{\Lambda_{\varphi}} A_{\varphi} \varphi^{'}) \end{equation} As has been mentioned above, the coupling of $\varphi$ to a pair of gluons also includes the trace anomaly term.
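The diagonalisation encoded in (Eqn.~\ref{eqn:theta}) and (Eqn.~\ref{eqn:mass}) is straightforward to implement numerically; the Python sketch below (the parameter values in the example are illustrative choices of ours) returns the two mixing angles and the physical masses, and reduces to the unmixed masses in the limit $\xi \to 0$.

```python
import math

def mix(m_phi, m_h, xi, Lam, v=246.0):
    """Mixing angles (rho, theta) and physical masses (m_phi', m_h'),
    in the small 6*xi*v/Lam approximation used in the text."""
    rho = math.atan(6.0 * xi * v / Lam)
    s, c = math.sin(rho), math.cos(rho)
    # tan(2 theta) = 2 s m_phi^2 / (c^2 (m_phi^2 - m_h^2)); atan2 picks the
    # branch giving theta -> 0 (pi/2) for m_phi > m_h (m_phi < m_h) as xi -> 0
    theta = 0.5 * math.atan2(2.0 * s * m_phi**2, c**2 * (m_phi**2 - m_h**2))
    disc = math.sqrt(c**4 * (m_phi**2 - m_h**2)**2 + 4.0 * s**2 * m_phi**4)
    m2_sum = (1.0 + s**2) * m_phi**2 + c**2 * m_h**2
    m_heavy = math.sqrt(0.5 * (m2_sum + disc))   # m_phi'
    m_light = math.sqrt(0.5 * (m2_sum - disc))   # m_h'
    return rho, theta, m_heavy, m_light

# Illustrative point: m_phi = 300 GeV, m_h = 125 GeV, xi = 0.1, Lam = 3 TeV
print(mix(300.0, 125.0, 0.1, 3000.0))
```

One can check directly that the lighter eigenstate is always $m_{h'}$, in line with the convention adopted below.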
Taking it into account, the gluon-gluon couplings for both of the mass eigenstates are given by \begin{equation} {\cal{L}}_{2} = \frac{-1}{v}\frac{\alpha_s}{16\pi}G_{\mu\nu}G^{\mu\nu}(B_{h}h^{'} + \frac{v}{\Lambda_{\varphi}} B_{\varphi} \varphi^{'}) \end{equation} while the corresponding Lagrangian for the photon is \begin{equation} {\cal{L}}_{3} = \frac{-1}{v}\frac{\alpha_{EM}}{8\pi}F_{\mu\nu}F^{\mu\nu}(C_{h}h^{'} + \frac{v}{\Lambda_{\varphi}} C_{\varphi} \varphi^{'}) \end{equation} \noindent where \begin{eqnarray} \nonumber a_{h}^{1} & = & \frac{v}{\Lambda_{\varphi}}(\sin \theta - \sin \rho \cos \theta),\\ \nonumber a_{h}^{2} & = & \cos \rho \cos \theta,\\ \nonumber a_{\varphi}^{1} & = & \cos \theta + \sin \rho \sin \theta,\\ \nonumber a_{\varphi}^{2} & = & \frac{\Lambda_{\varphi}}{v}(\cos \rho \sin \theta),\\ \nonumber A_{h} & = & a_{h}^{1} + a_{h}^{2}, \\ \nonumber A_{\varphi} & = & a_{\varphi}^{1} - a_{\varphi}^{2}, \end{eqnarray} \begin{eqnarray} \nonumber B_{h} & = & A_{h} F_{1/2}(\tau_{t}) - 2b_{3}a_{h}^{1}, \\ \nonumber B_{\varphi} & = & A_{\varphi} F_{1/2}(\tau_{t}) - 2b_{3}a_{\varphi}^{1}, \\ \nonumber C_{h} & = & A_{h} (\frac{4}{3}F_{1/2}(\tau_{t}) + F_{1}(\tau_{W})) - (b_{2} + b_{Y})a_{h}^{1},\\ \nonumber C_{\varphi} & = & A_{\varphi} (\frac{4}{3}F_{1/2}(\tau_{t}) + F_{1}(\tau_{W})) - (b_{2} + b_{Y})a_{\varphi}^{1}, \\ \nonumber \tau_t & = & \frac{4m^{2}_t}{q^2}, \\ \nonumber \tau_W & = & \frac{4m^{2}_W}{q^2}, \\ b_3 & = & 7,~ b_2 = 19/6, ~ b_Y = -41/6, \end{eqnarray} \noindent with $q^2 = m^2_{h^{'}}$ or $m^2_{\varphi^{'}}$ for the decays $h^{'} \rightarrow gg,\gamma\gamma$ or $\varphi^{'} \rightarrow gg,\gamma\gamma$ respectively. Here $b_{3}$ is the SM $\beta$-function coefficient of $SU(3)$, while $b_{2}$ and $b_{Y}$ are those of $SU(2)$ and $U(1)_{Y}$ respectively. $F_{1}(\tau_W)$ and $F_{1/2}(\tau_t)$ are the form factors for the W and top loops respectively.
These functions have the form \begin{eqnarray} \nonumber F_{1/2}(\tau) & = & -2\tau[1 + (1 - \tau)f(\tau)], \\ \nonumber F_{1}(\tau) & = & 2 + 3\tau + 3\tau(2 - \tau)f(\tau), \\ \nonumber f(\tau) & = & [\sin^{-1}(\frac{1}{\sqrt{\tau}})]^2, \quad \textrm{if}~\tau \geq 1 \\ \nonumber & = & \frac{1}{4}[\ln(\frac{\eta_+}{\eta_{-}}) - \imath\pi]^2, \quad \textrm{if}~\tau<1 \\ \eta_{\pm} & = & 1 \pm \sqrt{ 1 - \tau}. \end{eqnarray} The coupling of $\varphi$ to $h$ depends on the Goldberger-Wise stabilisation potential $V(\varphi)$. On assuming the self-couplings of $\varphi$ in $V(\varphi)$ to be small, we have \begin{equation} \Gamma(\varphi^{'}\rightarrow h^{'} h^{'}) = \frac{m_{\varphi'}^{3}}{32\pi\Lambda_{\varphi}^2}[1 - 6\xi + 2\frac{m_{h'}^2}{m_{\varphi'}^2}(1 + 6\xi)]^2\sqrt{1 - 4\frac{m_{h'}^2}{m_{\varphi^{'}}^2}} \end{equation} Obviously, all interactions of either physical state are now functions of $m_{\varphi^{'}}, m_{h^{'}}, \Lambda_{\varphi}$ and $\xi$. In our subsequent calculations, we use these as the basic parameters, obtaining in each case the quantities $m_\varphi, m_h$ by inverting (Eqn.~\ref{eqn:mass}). Requiring the discriminant in (Eqn.~\ref{eqn:mass}) to remain positive implies a restriction on the parameter $\xi$ as a function of the remaining three parameters. This constitutes a ``theoretically allowed'' region in $\xi$ for given ($m_{h^{'}}$, $m_{\varphi^{'}}$, $\Lambda_\varphi$). Within this region, we have two solutions, corresponding to $m_\varphi > m_h$ and $m_\varphi < m_h$ in (Eqn.~\ref{eqn:mass}). In the first case we have $m_{\varphi^{'}} \rightarrow m_\varphi$ and $m_{h^{'}} \rightarrow m_{h}$ in the limit $\xi \rightarrow 0$. Exactly the opposite happens in the other case, with $m_{\varphi^{'}} \rightarrow m_{h}$ and $m_{h^{'}} \rightarrow m_\varphi$ as $\xi$ approaches zero. A further constraint on $\xi$ follows when one requires $m_\varphi > m_h$.
This is because one has in that case \begin{equation} m_\varphi^2 - m_h^2 = \frac{\sqrt{D} - \sin^{2}\rho(m_{\varphi^{'}}^2 + m_{h^{'}}^2)}{1 - \sin^{4}\rho} \end{equation} where \begin{equation} D = (m_{\varphi^{'}}^2 + m_{h^{'}}^2)^2 - 4(1 + \sin^{2}\rho)m_{\varphi^{'}}^2 m_{h^{'}}^2 \end{equation} One thus ends up with the condition $ \sqrt{D} > \sin^{2}{\rho} (m_{\varphi^{'}}^2 + m_{h^{'}}^2) $, yielding an additional constraint on $\xi$. In the other case described above, one has \begin{equation} m_\varphi^2 - m_h^2 = -\frac{\sqrt{D} + \sin^{2}\rho(m_{\varphi^{'}}^2 + m_{h^{'}}^2)}{1 - \sin^{4}\rho} \end{equation} which trivially ensures $m_\varphi < m_h$. We now define the convention for our analysis. (Eqn.~\ref{eqn:mass}) implies that the lighter state will always be $h'$. Thus, when $m_\varphi < m_h$, $h'$ becomes the radion-dominated state, i.e.\ $m_{h'}\rightarrow m_{\varphi}$ when $\xi \rightarrow 0$. On the other hand, when $m_\varphi > m_h$, we have $m_{h'}\rightarrow m_{h}$ when $\xi \rightarrow 0$. Let us label $\varphi^{'}$ ($h^{'}$) as the mixed radion state $(R)$ if, on setting $\xi= 0$, one recovers $m_{\varphi^{'}} = m_\varphi~(m_{h^{'}} = m_\varphi)$. The other state is named the mixed higgs state ($H$). Basically, the two interchangeable limits of the states $h^{'}$ and $\varphi^{'}$ for $\xi = 0$ in the two cases arise from the fact that the angle $\theta$ in (Eqn.~\ref{eqn:theta}) is 0 or $\pi/2$, depending on whether $m_\varphi > m_h$ or $m_\varphi < m_h$. Both of the above mass inequalities are thus implicit in (Eqn.~\ref{eqn:mass}). \section{Strategy for analysis} We propose to scan over the parameter space in terms of the masses of the observable physical eigenstates, $m_H$ and $m_R$, for all allowed values of the mixing parameter $\xi$ for a given $\Lambda_\varphi$. Since one scalar has been discovered at the LHC, two possibilities arise --- viz.\ we identify the resonance near 125 GeV with either $H$ or $R$.
To cover both of these, we present two scenarios based on the conventions defined in the previous section. In the first case, we fix the mass of the mixed higgs state ($m_H = 125$~GeV) and scan over the mass of the mixed radion state ($m_R$) from 110 to 600 GeV. Exactly the opposite is done in the other case. We describe our analysis using the first case, with the understanding that identical arguments apply when $m_R$ is held fixed at 125 GeV. To improve the efficiency of our scan, we restrict it to two parameters, viz.\ $(m_R,\xi)$, and take snapshot values of $\Lambda_\varphi$ at 1.5, 3, 5 and 10 TeV. While it is possible to constrain $\Lambda_\varphi$ further using either heuristic arguments or searches for KK excitations of the RS graviton \cite{Tang:2012pv}, we refrain from doing so, in order to examine whether the current higgs search data can provide a complementary method for constraining the parameters of the RS model. Thus we start our study with the lowest value of the radion vev at 1.5 TeV. Taken together with the mass limits on the first excitation of the RS graviton, this might imply values of the bulk cosmological constant well into the trans-Planckian region, where quantum gravity effects may in principle invalidate the classical RS solution. However, it may also be possible to reconcile a low radion vev with rather large graviton masses in some extended scenarios, such as one including a Gauss-Bonnet term in the 5-dimensional action \cite{Kim:1999dq, Kim:2000pz, Rizzo:2004rq, Choudhury:2013yg, Maitra:2013cta}. We simulate the kinematics of the signal (higgs production and decay) using Pythia 8.160 \cite{Sjostrand:2007gs} and reweight the rates according to the modified couplings. In the region where the second resonance lies between 122 and 127~GeV, we use MadGraph 5 \cite{Alwall:2011uj} to calculate the full cross section for $pp \rightarrow X \rightarrow WW^{(*)}/ZZ^{(*)}/\gamma\gamma$, in order to include the interference between the two states.
The SM rates are taken from \cite{Dittmaier:2011ti, Denner:2011mq}. \subsection{The overall scheme} \label{sec:scheme} In this study, we ask two questions: first, what fraction of the radion-higgs mixing parameter space survives the observed exclusion limits on signal strengths in the various search channels for the SM higgs; and second, can a radion-higgs scenario explain the current data with a better fit than the SM? Having framed these questions, we compare the theoretical predictions with observed data in various channels, namely, $\gamma\gamma$, $ZZ^{(*)} \rightarrow 4\ell$, $WW^{(*)} \rightarrow 2\ell + MET$, $b \bar b$ and $\tau \bar{\tau}$. Each channel receives contributions from both of the states $H$ and $R$. Since the production channels for both $H$ and $R$ are the same as those for the SM higgs (denoted henceforth as $h_{SM}$), albeit with modified couplings to SM particles, the production cross section of a given scalar can be written in terms of the SM higgs production cross section multiplied by a function of the modified couplings. We denote this function by $p_{mode}^{R,H}$; e.g.\ in the gluon-fusion mode, \begin{equation} p_{gg}^R(m) = \left. \frac{\sigma(gg \rightarrow R)}{\sigma(gg \rightarrow h_{SM})} \right|_{m_R=m_h=m} = \frac{B(R \rightarrow gg)}{B(h_{SM} \rightarrow gg)} \end{equation} In general, we expect the acceptance of the cuts to depend on (a) the production mode, and (b) the mass of the resonance. Let us denote the acceptance of the cuts applied by the experimental analysis in a given channel, for a candidate mass $m$, as $a(m)_{prod-channel}$. Thus the predicted signal strength at a particular mass, $\mu(m) =\sigma/\sigma_{SM}(m_{h_{SM}} = m)$, in any given decay channel $c$ is given by \begin{multline} \mu(m;c) = \displaystyle\sum\limits_{j = gg, VBF, VH} \left \{ p_j^H \frac{a(m;H)_{j}}{a(m;h_{SM})_{j}} \frac{ \mathrm{B}(H \rightarrow c )}{\mathrm{B}(h_{SM} \rightarrow c) } \right. \\ \left.
+ p_j^R \frac{a(m;R)_{j}}{a(m;h_{SM})_{j}} \frac{ \mathrm{B}(R \rightarrow c )}{\mathrm{B}(h_{SM} \rightarrow c) }\right\} \end{multline} In this analysis, we assume that the state discovered at the LHC is the higgs-like $H$ ($m_H = m_{h_{SM}} = 125$~GeV) in the first case, and the radion-like state $R$ ($m_R = m_{h_{SM}} = 125$~GeV) in the second. Therefore, we expect the acceptances to cancel for one of the terms but not for the other, in which the second physical state has a different mass. For the rest of this section, we derive the formulae assuming the first case, with the understanding that the expressions for the second case can be obtained merely by interchanging $m_R$ and $m_H$. For channels where the resonance is fully reconstructible, viz.\ $\gamma \gamma$, $b \bar b$ and $ZZ^{(*)}$, the analyses use the reconstructed mass to identify the resonance, and therefore contributions from the second state are negligible if the resonance is narrow. Furthermore, by restricting the number of jets in the final state, it is possible to restrict the contribution to the dominant production mode. Since the Lorentz structure of the couplings of $R$ or $H$ is the same as for the SM higgs $h_{SM}$, the acceptances also factor out. Therefore, for $\mathrm{h}+0~\mathrm{jets}$, in the $\gamma \gamma$ and $ZZ^{(*)}$ channels, $\mu =\sigma/\sigma_{SM}$ takes the simplified form \begin{eqnarray} \mu(c) = p^H_{gg}\frac{ \mathrm{B}(H \rightarrow c)}{\mathrm{B}(h_{SM} \rightarrow c)} = \frac{ \mathrm{B}(H \rightarrow c) \mathrm{B}(H \rightarrow gg)}{\mathrm{B}(h_{SM} \rightarrow c)\mathrm{B}(h_{SM} \rightarrow gg)} \label{eqn:simpleratio} \end{eqnarray} However, in the $WW^{(*)}$ channel, the final state is not fully reconstructible, and therefore we need to consider contributions from both the scalar physical states.
Even on restricting to zero- and one-jet final states (which are largely due to $gg$ fusion), we still have \begin{eqnarray} \label{eqn:WWstrength} \mu(m;WW) & = & p_{gg}^H \frac{a(m;H)}{a(m;h_{SM})} \frac{ \mathrm{B}(H \rightarrow WW )}{\mathrm{B}(h_{SM} \rightarrow WW) } + p_{gg}^R \frac{a(m;R)}{a(m;h_{SM})} \frac{ \mathrm{B}(R \rightarrow WW)}{\mathrm{B}(h_{SM} \rightarrow WW)} \end{eqnarray} The branching fraction for $R \rightarrow WW^{(*)}$ reaches its maximal value once its mass passes the threshold $m_{R} = 2 m_W$. At this point, the largest contribution to the dilepton final state can come from the decay of $R$ rather than $H$. Therefore, even with the mass of $H$ fixed at 125 GeV, the presence of another state that can contribute to the signature results in much stronger bounds on the radion-higgs mixed scenario. To estimate this effect, we have implemented the kinematical cuts on the leptons, jets and missing energy as described in the respective ATLAS~\cite{ATLAS-CONF-2013-030} and CMS~\cite{CMS-PAS-HIG-13-003} analyses. We have verified that our simulation of these analyses reproduces the expected number of signal events for a SM higgs within the errors quoted by the respective analyses. In the $\mathrm{h}+2~\mathrm{jets}$ channel, the requirement of two well-separated jets means that the dominant contribution comes from VBF instead of $gg$ fusion. However, the gluon-fusion contribution is still a significant fraction, and therefore a correct estimate would require simulating the kinematics of $gg \rightarrow R(H)+2~\mathrm{jets}$ to high accuracy, as well as full detector simulation. A possible way out is to use the $gg$-fusion-subtracted numbers as reported by ATLAS. However, to extract this contribution, the ATLAS analysis uses the estimate of gluon-fusion production for the SM higgs as a background, which requires, by definition, assuming the SM. We have therefore neglected the VBF mode in our study.
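The two-state combination in (Eqn.~\ref{eqn:WWstrength}) is a simple weighted sum; as a minimal sketch (all numerical inputs below are hypothetical, chosen only for illustration):

```python
def mu_ww(p_H, acc_H, br_H, p_R, acc_R, br_R):
    """mu(m; WW) as in Eqn. (WWstrength): each state contributes its production
    factor p, times the acceptance ratio a(m;X)/a(m;h_SM), times the
    branching-ratio ratio B(X->WW)/B(h_SM->WW)."""
    return p_H * acc_H * br_H + p_R * acc_R * br_R

# Hypothetical inputs; switching off the R term recovers the single-state ratio
print(mu_ww(0.9, 1.0, 1.1, 0.0, 0.5, 2.0))
```

The structure makes clear why the bounds strengthen above the $WW$ threshold: once $\mathrm{B}(R \rightarrow WW)$ saturates, the second term can dominate the sum even for modest $p_{gg}^R$.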
Another important effect arises when the masses of the two scalar eigenstates are close to each other. In such cases, interference effects cannot be neglected. We have therefore calculated the full interference effects for $122 < m_R < 127$~GeV. As we shall see in the next section, this has important effects both on the exclusions as well as on the global best-fit regions. In addition, there is the possibility that the branching ratio for the decay $\varphi^{'} \rightarrow h^{'} h^{'}$ can be substantial in certain regions of the parameter space, resulting in an enhancement even in fully reconstructible channels. Such signals are relatively suppressed in the $WW^{(*)}$ channel because of various vetoes on additional leptons and jets. However, they contribute to the $ZZ^{(*)}$ and $\gamma\gamma$ channels, where the analysis is by and large inclusive. For the sake of completeness, we have included such processes whenever the resultant enhancement is more than 5\% of the direct production rate, i.e.\ $\sigma(pp\longrightarrow \varphi^{'}) \times B(\varphi^{'} \longrightarrow h^{'}h^{'}) \ge 0.05\, \sigma(pp\longrightarrow h^{'})$. We end this subsection by reiterating the parameters used in our scan. They are $\Lambda_{\varphi}$, $\xi$ and the mass of either the mixed radion state ($m_R$) or the mixed higgs state ($m_H$), with the other fixed at 125 GeV. We use four representative values of $\Lambda_{\varphi}$, namely 1.5 TeV, 3 TeV, 5 TeV and 10 TeV. $\xi$ is varied over the entire theoretically allowed region according to the criteria discussed earlier. \subsection{Allowed regions of the parameter space} First, we recall that the experiments have provided 95$\%$ upper limits on the signal strength in each channel, which can be used to rule out regions of our parameter space incompatible with the observed data.
For the $\gamma\gamma$ and $ZZ^{(*)}$ channel-based exclusions, we make use of the simplified formula given in (Eqn.~\ref{eqn:simpleratio}) for the entire range of $m_{R}$. The case of $WW^{(*)}$ is more complicated in the region where $m_R$ lies in the range 110 - 160~GeV, since the contributions from the two eigenstates are of comparable magnitude. Therefore, we add the contributions from both states (Eqn.~\ref{eqn:WWstrength}). For example, for calculating the cross section at, say, 150~GeV, we consider the contribution from $m_R=150$~GeV as well as the contribution from $m_H=125$~GeV to the cuts designed for the 150 GeV analysis. As $m_R$ approaches 160 GeV, the contribution from the 125 GeV state becomes progressively smaller until, beyond 160 GeV, the signal is dominated entirely by the state at $m_R$. After this point, we continue with the simple ratio treatment, viz. \begin{eqnarray} \mu(m_R;WW) = \frac{ \mathrm{B}(R \rightarrow WW) \mathrm{B}(R \rightarrow gg)}{\mathrm{B}(h_{SM} \rightarrow WW)\mathrm{B}(h_{SM} \rightarrow gg)} \label{eqn:simpleratio2} \end{eqnarray} A second source of upper limits comes from demanding that the total signal strength at 125 GeV does not exceed the upper limit at that mass. The cuts based on transverse mass, e.g.\ the ATLAS requirement $0.75 m_H < m_T < m_H $, remove part of the contribution from the state at $m_R$, so that the signal strength at 125 GeV becomes \begin{eqnarray} \label{eqn:WWstrength2} \mu(WW) = p_{gg}^H \frac{ \mathrm{B}(H \rightarrow WW )}{\mathrm{B}(h_{SM} \rightarrow WW) } + p_{gg}^R \frac{a(125;R)}{a(125;h_{SM})} \frac{ \mathrm{B}(R \rightarrow WW)}{\mathrm{B}(h_{SM} \rightarrow WW)} \end{eqnarray} In the ATLAS analysis, the kinematical cuts for the higgs search up to a mass of 200 GeV are identical except for the transverse mass cut. In the CMS analysis, the cuts vary continuously with mass. We refer the reader to the relevant papers \cite{Aad:2012uub, ATLAS-CONF-2013-030, CMS-PAS-HIG-13-003} for details of the cuts used.
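Schematically, the two sources of upper limits just described act as a logical OR on each parameter point; a short sketch (all strengths and limits below are hypothetical, for illustration only) is:

```python
def is_excluded(mu_at_mR, limit_at_mR, mu_at_125, limit_at_125):
    """A point (m_R, xi) is excluded if the predicted signal strength exceeds
    the observed 95% CL upper limit either at m_R or at 125 GeV."""
    return mu_at_mR > limit_at_mR or mu_at_125 > limit_at_125

# Hypothetical numbers: the point lies within both limits, so it survives
print(is_excluded(0.4, 0.6, 1.1, 1.3))
```

Violating either limit suffices for exclusion, which is why the combined exclusion regions shown later are larger than those obtained from either criterion alone.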
\subsection{Best fit contours} \label{sec:chisq} \begin{table}[tp] \begin{center} \begin{tabular}{lccc} \hline \textbf{Channel} & \textbf{ATLAS} & \textbf{CMS} & \textbf{Tevatron}\\ \hline $WW^*$ & $1.0 \pm 0.3 $ & $0.68 \pm 0.20$ & \\ $ZZ^*$ & $1.5 \pm 0.4 $ & $0.92 \pm 0.28$ & \\ $\gamma \gamma$ & $1.6 \pm 0.3$ & $0.77 \pm 0.27$ & \\ $\tau \tau$ & $0.8 \pm 0.7$ & $1.10 \pm 0.41$ & \\ $b \bar b$ (Tevatron) & & & $1.97 \pm 0.71$ \\ \hline \end{tabular} \end{center} \caption{Best-fit values of signal strength used for global fits \cite{ATLAS-CONF-2013-014, CMS-PAS-HIG-13-005, Aaltonen:2012qt}. \label{tab:bestfit}} \end{table} To answer the second question posed at the beginning of Sec.\ \ref{sec:scheme}, we wish to obtain the best-fit values of $\xi$ and the varying scalar mass ($m_R$ or $m_H$) for each value of $\Lambda_\varphi$. We primarily use data in the $\gamma\gamma, ZZ^{(*)}$ and $WW^{(*)}$ channels, which are the most robust. We also use the $\tau \bar \tau$ data; however, we find that the error bars for these are so large that their role in deciding the favoured region of the parameter space is somewhat inconsequential. For the $b\bar{b}$ final state, we use data in the associated production channels $WH, ZH$ \cite{Aaltonen:2012qt}. We do not use the data from the LHC in this channel, as its error bars are larger even than those of the $\tau \bar \tau$ channel and therefore do not restrict any of the parameter space. To find the best fit, our task is to scan the parameter space and find the values of $m_{\varphi^{'}}$ and $\xi$, for any $\Lambda_\varphi$, which minimise \begin{equation} \chi^2 = \sum_i \frac{(\mu_i - \hat{\mu}_i)^2}{\bar{\sigma}_i^2} \end{equation} \noindent where $\mu_i = \sigma/ \sigma_{SM}$ is the signal strength at 125 GeV as calculated in the {\it i}th channel, $\hat{\mu}_i$ denotes the experimental best-fit value for that channel, and $\bar{\sigma}_i$ is the corresponding standard deviation.
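The $\chi^2$ defined above can be evaluated directly from the entries of Table~\ref{tab:bestfit}; the sketch below evaluates it at the SM point $\mu_i = 1$ as a sanity check (the scan over $\xi$ and $m_R$, and the computation of the predicted $\mu_i$ themselves, are not reproduced here).

```python
# Best-fit signal strengths (mu_hat, sigma) from Table 1 (ATLAS, CMS, Tevatron)
DATA = {
    "WW_ATLAS": (1.0, 0.3),     "WW_CMS": (0.68, 0.20),
    "ZZ_ATLAS": (1.5, 0.4),     "ZZ_CMS": (0.92, 0.28),
    "gaga_ATLAS": (1.6, 0.3),   "gaga_CMS": (0.77, 0.27),
    "tautau_ATLAS": (0.8, 0.7), "tautau_CMS": (1.10, 0.41),
    "bb_Tevatron": (1.97, 0.71),
}

def chi2(mu_pred):
    """chi^2 = sum_i (mu_i - mu_hat_i)^2 / sigma_i^2 over all channels."""
    return sum((mu_pred[ch] - mu_hat)**2 / sigma**2
               for ch, (mu_hat, sigma) in DATA.items())

# Sanity check: the SM prediction, mu_i = 1 in every channel
sm = {ch: 1.0 for ch in DATA}
print(f"chi2(SM) = {chi2(sm):.2f}")
```

Predictions for $\mu_i$ at each scan point would be supplied by the signal-strength formulae of the previous subsection; the SM value of $\chi^2$ then serves as the reference against which the fit improvement is judged.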
Changing $\xi$ and $m_R$ affects the signal strength of $H$ even though $m_H$ is held fixed at 125 GeV. Again, we use the simple ratio-based formulae for $\gamma \gamma$, $ZZ^{(*)}$, $b \bar b$ and $\tau \bar \tau$ (using associated production instead of gluon fusion for $b \bar b$). For $WW^{(*)}$, the formula (Eqn.~\ref{eqn:WWstrength2}) is used. The data points used for performing the global fit are summarised in Table~\ref{tab:bestfit}. The 68\% and 95\% contours are determined using \begin{equation} \chi^2 = \chi^2_{min} + \Delta \chi^2 \end{equation} where the $\Delta \chi^2$ values corresponding to these confidence levels for seven degrees of freedom (8.15 and 14.1 respectively) are used. Since the best-fit values reported by the experiments are based on a combination of the 7 and 8 TeV runs, we combine our signal strengths at 7 and 8 TeV weighted by the luminosity. Since the upper limits are based on the signal strength mainly due to the second resonance, whereas the best fit requires the correct signal strength at 125 GeV, there may be regions with a small chi-squared that are already ruled out by constraints on the signal from the second resonance. We therefore also perform the best fit in the region left over after the exclusion limits are applied. However, to avoid overconstraining the parameter space, we do not include the exclusions arising from the upper limit on the signal strength at 125 GeV, as given by (Eqn.~\ref{eqn:WWstrength2}), while performing the chi-squared minimisation. \begin{figure}[tp] \begin{center} \includegraphics[scale=0.5]{plots/WW-sigma-combined} \includegraphics[scale=0.5]{plots/WW-combined} \includegraphics[scale=0.5]{plots/WW-from-125} \includegraphics[scale=0.5]{plots/WW-full} \caption{\label{fig:WWchange} The effect on the excluded parameter space (shown in red) of various contributions. The top-left panel shows the excluded region using ratios of branching fractions of the $m_R$ state alone.
The top-right panel shows the exclusion when contributions from both states are taken into account. The bottom-left panel shows the exclusion from applying the limit on the signal strength at 125 GeV. Finally, the bottom-right panel shows the total excluded parameter space. This illustration uses $\Lambda_\varphi = 3$~TeV and 95\% CL limits from the ATLAS collaboration.} \end{center} \end{figure} \section{Results and discussions} The most recent CMS and ATLAS search results exclude the Standard Model higgs in the mass range $128$ to $600$ GeV at $95\%$ CL~\cite{ATLAS-CONF-2013-014, CMS-PAS-HIG-13-005}. In this section we present the regions of the RS parameter space that allow the presence of an extra scalar consistent with the observed upper limits. We illustrate the effect of taking signal contributions from both states in Fig.~\ref{fig:WWchange}. The top-left panel shows the excluded region when the upper limits are placed on the signal strength of the extra $R$ state alone, using only the multiplicative correction of Eqn.~\ref{eqn:simpleratio}. This was the approach used, e.g., in~\cite{Kubota:2012in}. However, the presence of two states means there are two sources of limits --- firstly, we require the total signal strength at 125 GeV to be less than the observed upper limit at 125 GeV (bottom-left panel), and secondly, we also require that the combined signal strength be smaller than the observed limit at the mass of the radion-like resonance $m_R$ (top-right panel). Finally, we show the effects of both of these taken together, giving the full exclusion (bottom-right panel). \begin{figure}[tp] \includegraphics[scale=0.6]{plots/MT-045.pdf} \includegraphics[scale=0.6]{plots/MT-065.pdf} \caption{Comparison of the $m_{T}$ distribution, with the contributions from both scalars taken into account, for a parameter point that is ruled out by the ATLAS limits and one that is not.
The parameters for this illustration are $\xi = 0.045$ (left; disallowed) and $\xi=0.065$ (right; allowed), with $m_H=125$~GeV, $m_R=164$~GeV and $\Lambda_{\varphi} = 3$~TeV. The label ``SM'' refers to the total SM background as extracted from ~\cite{Aad:2012uub, ATLAS-CONF-2013-030}. \label{fig:mtdist}} \end{figure} A caveat in the above result is that the likelihood function used by the experiments to place limits makes use not just of the total number of events but also of the shape of certain distributions, like the lepton invariant mass $m_{\ell \ell}$ or the transverse mass $m_T$.\footnote{The transverse mass variable is defined as $m_{T} = \sqrt{(E_{T}^{\ell \ell} + E_{T}^{miss})^2 - |\vec{p}_T^{\,\ell \ell} + \vec{E}_{T}^{miss}|^{2}}$, where $E_{T}^{\ell\ell}$ is the transverse energy of the leptonic system, $p_{T}^{\ell\ell}$ is the total transverse momentum of the leptonic system and $E_{T}^{miss}$ is the missing energy.} The presence of a shoulder in, e.g., the $m_T$ distribution can be indicative of a second state, and could possibly lead to stronger exclusions in the region where $m_R > m_H$. For a fixed $\xi$, the branching fraction for $R \rightarrow WW^*$ reaches its maximum value at about $m_R = 160$ GeV. For masses greater than this threshold, the change in the total signal strength is governed mainly by the change in the production cross section. However, since the production cross section decreases with increasing $m_R$, the distortion in the $m_T$ distribution from the extra state also becomes smaller with increasing $m_R$, and is maximal around 160 GeV. We present the $m_T$ distribution, showing the extra contribution from $R$ for $m_R = 164$~GeV, in Fig.~\ref{fig:mtdist} for two nearby values of $\xi$, viz.\ 0.045 and 0.065. Our calculation of the $m_T$ distribution is superimposed on the estimated background reported by ATLAS \cite{ATLAS-CONF-2013-030}. There are, in principle, regions of parameter space where the contribution at 125 GeV from $R$ even exceeds that from $H$.
However, we find that the current upper limits on signal strength in the $WW$ channel are so strong that this always results in a very large total signal strength at $m_R$ and is consequently ruled out. This is illustrated in Fig.~\ref{fig:mtdist}, where the point with $\xi=0.045$ shows a significant contribution from $R$ but we find it is already disallowed by the 95\% upper limits on signal strength at 164 GeV. This observation justifies our assumption that the distortion in the $m_T$ distribution is not too large even for $m_R \mathrel{\mathpalette\@versim>} 160$~GeV. We therefore present our results with the assumption that the upper limits on total signal strength give a reasonably good approximation of the true exclusion limits, even though in principle they correspond to a limit on the overall normalisation of the distribution only. \subsection{Exclusion of the Parameter Space} \begin{figure}[tp] \begin{center} \includegraphics[scale=0.5]{plots/Excl-All-1500} \includegraphics[scale=0.5]{plots/Excl-All2-1500} \includegraphics[scale=0.5]{plots/Excl-All-3000} \includegraphics[scale=0.5]{plots/Excl-All2-3000} \includegraphics[scale=0.5]{plots/Excl-All-5000} \includegraphics[scale=0.5]{plots/Excl-All2-5000} \caption{\label{fig:exclmH} Excluded parameter space for the case with $m_H = 125$~GeV (shown in red) using 95\% CL limits from ATLAS and CMS. This illustration uses $\Lambda_\varphi = 1.5$~TeV (top), 3~TeV (middle) and 5~TeV (bottom).} \end{center} \end{figure} We show the regions of parameter space ruled out by current ATLAS and CMS data in Fig.~\ref{fig:exclmH}. As expected, the allowed parameter space for low $\Lambda_\varphi$ is more restricted than for higher values. We find that, barring a small sliver close to $\xi = 0$, almost the entire parameter space is ruled out for $\Lambda_\varphi = 1.5$~TeV. For $\Lambda_\varphi = 3,~5$~TeV, the exclusion is less severe. However, the region with nearly degenerate $R$ and $H$ states is ruled out.
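The transverse-mass variable defined in the footnote above can be checked with a short numerical sketch. This is purely illustrative: the function name and the kinematic values below are ours, not part of the analysis code.

```python
import math

def transverse_mass(et_ll, px_ll, py_ll, met_x, met_y):
    """m_T = sqrt((E_T^ll + E_T^miss)^2 - |p_T^ll + E_T^miss|^2),
    where E_T^miss is the magnitude of the missing-momentum vector."""
    met = math.hypot(met_x, met_y)
    # Vector sum of the dilepton transverse momentum and missing momentum:
    sum_x = px_ll + met_x
    sum_y = py_ll + met_y
    arg = (et_ll + met) ** 2 - (sum_x ** 2 + sum_y ** 2)
    return math.sqrt(max(arg, 0.0))  # guard against tiny negative round-off

# Illustrative kinematics (GeV): dilepton system along +x, MET along -x.
mt = transverse_mass(et_ll=60.0, px_ll=60.0, py_ll=0.0, met_x=-40.0, met_y=0.0)
```

For a fully leptonic $WW^{(*)}$ final state this variable has a kinematic endpoint near the resonance mass, which is why a second scalar shows up as a shoulder in the distribution.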
At large $m_R$, the most stringent limits come from the $ZZ$ channel. We therefore find regions where a significant branching fraction of $R \rightarrow t \bar t$ reduces the constraints for $m_R > 350$~GeV. However, the limits are still restrictive for negative $\xi$ values, as production via gluon fusion is enhanced in this region. We also find that the CMS constraints are much stronger than those from ATLAS. This is expected in $WW^{(*)}$, since CMS has provided limits based on the full 7 and 8 TeV dataset whereas ATLAS has provided only partial results; we cite here the corresponding conference notes that have been used for determining the limits \cite{CMS-PAS-HIG-13-003, ATLAS-CONF-2013-030}. Both experiments give limits in the $ZZ$ channel based on the full dataset \cite{CMS-PAS-HIG-13-002, ATLAS-CONF-2013-013}. The $\gamma \gamma$ limits are available only in the range 110--150 GeV \cite{CMS-PAS-HIG-13-001, ATLAS-CONF-2012-168}, presumably because the SM higgs decay rate into the diphoton channel becomes negligibly small beyond this range. However, since there can be enhancements to this rate in the radion-higgs mixed scenario, it may be useful to have the limits in the full range. Taking into account the interference of both states when their masses lie between 122 and 127 GeV pushes the predicted signal strength beyond the observed upper limits, thus ruling out the degenerate region entirely. The $b\bar{b}$ limits, whether from ATLAS, CMS or the Tevatron, are found not to affect the extent of the region of exclusion. Whenever the limits are based on combined datasets, we combine our calculated signal strengths at 7 and 8 TeV with the luminosities serving as weights. For $\Lambda_\varphi = 10$~TeV, we do not find any significant exclusions.
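The luminosity-weighted combination of the 7 and 8 TeV signal strengths mentioned above can be sketched as follows. The signal strengths and luminosities here are placeholder numbers for illustration, not our computed values.

```python
def combine_signal_strength(mu7, lumi7, mu8, lumi8):
    """Luminosity-weighted average of the signal strengths computed
    at 7 and 8 TeV, used when a limit is based on the combined dataset."""
    return (mu7 * lumi7 + mu8 * lumi8) / (lumi7 + lumi8)

# Placeholder values: mu at 7 TeV with ~5 fb^-1 and at 8 TeV with ~20 fb^-1.
mu_combined = combine_signal_strength(0.9, 5.0, 1.1, 20.0)
```

With the larger 8 TeV luminosity dominating the weight, the combined value sits much closer to the 8 TeV signal strength.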
\begin{figure}[tp] \begin{center} \includegraphics[scale=0.5]{plots/Excl-All-1500r} \includegraphics[scale=0.5]{plots/Excl-All2-1500r} \includegraphics[scale=0.5]{plots/Excl-All-3000r} \includegraphics[scale=0.5]{plots/Excl-All2-3000r} \caption{\label{fig:exclmR} Excluded parameter space (shown in red) for the case with $m_R = 125$~GeV using 95\% CL limits from ATLAS and CMS. This illustration uses $\Lambda_\varphi = 1.5$~TeV (top) and 3~TeV (bottom). Almost the entire parameter space is excluded for $\Lambda_\varphi = 5$~TeV and higher.} \end{center} \end{figure} A natural question to follow this analysis is what happens if the boson found at 125 GeV is the $m_R$ state and not the $m_H$ one. The exclusions resulting from reversing our analysis in accord with this change are shown in Fig.~\ref{fig:exclmR}. We find here that larger values of $\Lambda_\varphi$ have larger exclusions, with almost the entire parameter space being excluded for $\Lambda_\varphi > 5$~TeV. This is in accordance with \cite{Barger:2011hu}, which shows that a pure radion at 125 GeV is already ruled out. As $\Lambda_\varphi$ increases, $H$ becomes more and more like the SM higgs (and equivalently $R$ becomes a pure radion). As the limits on the SM higgs already rule it out in most of the mass range, we find that nearly the entire parameter space is ruled out too. In performing the reverse analysis, we have not considered the interference of both states; therefore the small allowed region near 125 GeV should be taken with a pinch of salt. Since the result should not change from the earlier case, as $m_R \simeq m_H$ in this region, we may assume that it would be ruled out if a full calculation with interference were made. \subsection{Regions of best-fit with the data} Using the chi-squared analysis outlined in Sec.~\ref{sec:chisq}, we perform a global fit using the values of signal strength shown in Table~\ref{tab:bestfit}.
We also perform the same exercise after removing the regions excluded by the upper limits. Of course, while doing so, we do not apply the upper limit on signal strength at 125 GeV, so the only exclusions considered are those resulting from limits on the signal from $m_R$. For illustration, we show the results at $\Lambda_\varphi = 3$~TeV in Fig.~\ref{fig:BFplots}. The first panel shows the regions that agree with the data within 68\% and 95.4\% CL. The second panel shows the reduction in the best-fit region when the exclusions reported in Fig.~\ref{fig:exclmH} are imposed as well. The bottom panel shows the best-fit region after exclusions for the reverse case, where $m_R=125$~GeV and $m_H$ is varied. \begin{figure}[tp] \begin{center} \includegraphics[scale=0.5]{plots/BFplot-3000} \includegraphics[scale=0.5]{plots/BFplot2-3000} \includegraphics[scale=0.5]{plots/BFplot2-3000r} \caption{\label{fig:BFplots} Regions that agree with current data within 68\% (green) and 95.4\% (yellow) for $\Lambda_\varphi = 3$~TeV. The top-left plot shows the case where no exclusions have been taken into account. The top-right plot shows the change after taking exclusions into account. The bottom plot is for the case where we hold $m_R=125$~GeV instead of $m_H$.} \end{center} \end{figure} The chi-squared value for the SM is 10.93 for nine degrees of freedom. We find that in the first case, with $m_H=125$~GeV, there is always a small region of parameter space that fits with a similar $\chi^2/dof$ as the SM. For $\Lambda_\varphi = 1.5$~TeV, the minimum chi-squared value found is 9.06 without exclusions and 11.57 with exclusions, the latter at the point $m_R = 600$~GeV and $\xi=0.15$. For 3 TeV, the numbers are (9.03, 9.08) respectively, with the best-fit point at $m_R = 407$~GeV and $\xi=0.15$, and for 5 TeV they are (9.03, 9.04), with the best-fit point at $m_R = 383$~GeV and $\xi=-0.25$.
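A minimal sketch of the global chi-squared behind these numbers: a sum of squared pulls over the measured channels, assuming uncorrelated Gaussian uncertainties on the observed signal strengths. The three channels and values below are placeholders, not the entries of Table~\ref{tab:bestfit}.

```python
def chi_squared(mu_pred, mu_obs, sigma):
    """Global chi-squared over the measured channels, assuming the
    observed signal strengths carry uncorrelated Gaussian errors."""
    return sum((p - o) ** 2 / s ** 2
               for p, o, s in zip(mu_pred, mu_obs, sigma))

# Placeholder signal strengths for three channels (predicted, observed, error):
chi2 = chi_squared([1.0, 1.1, 0.9], [1.1, 1.0, 1.0], [0.2, 0.3, 0.4])
```

Minimising this quantity over $(\xi, m_R)$ at fixed $\Lambda_\varphi$ yields the best-fit points quoted above; the SM corresponds to all predicted signal strengths equal to one.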
Thus, the exclusions affect the fit less and less as we increase $\Lambda_\varphi$, which is expected as the excluded parameter space also shrinks. In particular, as the exclusions on negative $\xi$ are relaxed, these values seem to give a slightly better fit. However, as seen from the change in $\chi^2$ with and without exclusions, the distribution is rather flat for large $m_R$. Also, as the best-fit value for $m_R$ is at the edge of our scan for $\Lambda_\varphi = 1.5$~TeV, it is possible that the fit would be further improved by increasing $m_R$. For larger values of $\Lambda_\varphi$, however, increasing $m_R$ seems to increase the $\chi^2/dof$ slightly. The chi-squared for the reverse case is decidedly worse than in the normal case. We find that the minimum values of chi-squared after exclusions are 35.6, 18.22 and 52.0 for $\Lambda_\varphi = 1.5$, 3 and 5~TeV respectively. Therefore, we can say that this scenario is strongly disfavoured compared to the SM. \section{Conclusions} We have examined the possibility that the currently observed scalar is one of the two states of a mixed radion-higgs scenario. To perform this analysis, we have considered the contributions from both states in the $WW^{(*)}$ channel, differently affected by cuts, to calculate the signal strength. We also take into account the effects of interference when the two states are nearly degenerate. We find that if the 125 GeV state is radion-dominated, only a very small region of the parameter space with a small $\Lambda_\varphi$ is consistent with current upper limits. Even in these regions, the goodness of fit with the data is decidedly worse than in the SM. Therefore, we may conclude that the idea that the discovered boson at 125 GeV is dominantly radion-like is largely disfavoured. The second possibility, namely that the LHC has found a 125 GeV higgs-dominated scalar while a radion-dominated state remains to contribute to the observed signals (especially the $WW^{(*)}$ signal), cannot be ruled out with current data.
We find that the scenario with small (but non-zero) mixing and an accompanying radion-dominated state with high mass results in a good fit for almost all values of $\Lambda_\varphi$. However, if we include exclusions on the presence of the second, radion-dominated boson that would necessarily accompany the higgs-dominated state, the goodness of fit is reduced for TeV-range values of $\Lambda_\varphi$. We find that for $\Lambda_\varphi$ up to 5 TeV, the SM still provides a better fit. As a special case, we find that situations where the two mass eigenstates are degenerate enough to warrant the inclusion of interference terms are ruled out. Finally, $\Lambda_\varphi=10$~TeV is mostly indistinguishable from the SM, as the modifications to the signal strengths are too small to be significant. \section{Acknowledgements} We would like to thank Soumitra SenGupta for helpful discussions. UM would like to thank Taushif Ahmed, Shankha Banerjee, Atri Bhattacharya, Nabarun Chakraborty, Ujjal Kumar Dey, Anushree Ghosh, Sourav Mitra, Manoj Mandal, Tanumoy Mandal and Saurabh Niyogi for discussions and assistance. We would also like to thank the Indian Association for the Cultivation of Science, Kolkata for hospitality while this study was in progress. ND is supported partly by the London Centre for Terauniverse Studies (LCTS), using funding from the European Research Council via the Advanced Investigator Grant 267352. UM and BM are partially supported by funding available from the Department of Atomic Energy, Government of India for the Regional Centre for Accelerator-based Particle Physics, Harish-Chandra Research Institute. Computational work for this study was partially carried out at the cluster computing facility in the Harish-Chandra Research Institute (http:/$\!$/cluster.hri.res.in).